Traditional automation relies on hardcoded coordinates or complex accessibility hooks. ATLAS takes a different approach: it sees the screen the way a human does. Using each platform's native OS vision capabilities, ATLAS captures the screen, finds exact words, and clicks them, entirely on-device and with near-instant latency.
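The capture-find-click loop above can be sketched in a few lines. The function names and the OCR result shape (text plus bounding box) are assumptions for illustration, not ATLAS's actual API:

```python
# Minimal sketch of the see-find-click pipeline. OCR results are
# assumed to be (text, bounding-box) pairs; in practice they would
# come from the platform's native vision API.

def find_word(ocr_results, target):
    """Return the bounding box of the first OCR result matching target."""
    for text, box in ocr_results:
        if text.strip().lower() == target.lower():
            return box
    return None

def click_point(box):
    """Center of a (left, top, width, height) bounding box."""
    left, top, width, height = box
    return (left + width / 2, top + height / 2)

# Example: pretend OCR found two words on screen.
results = [("File", (10, 5, 40, 20)), ("Save", (60, 5, 44, 20))]
box = find_word(results, "save")
print(click_point(box))  # (82.0, 15.0), the center of the "Save" box
```

Once the click point is known, the actual click is dispatched through the platform's input APIs.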
- → Fuzzy Text Matching: Tolerates small OCR mistakes by scoring candidates with string-distance algorithms instead of requiring exact matches.
- → Jitter Targeting: Automatically retries clicks with slight
coordinate offsets if the first attempt fails.
- → Visual Verification Loop: Optionally verifies that expected text
appears (or disappears) after clicking to confirm success.
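Fuzzy matching can be illustrated with Python's standard-library `difflib`; ATLAS's actual distance algorithm and threshold are not specified, so treat both as placeholders:

```python
# One way to tolerate OCR slips: pick the on-screen candidate most
# similar to the target word, rejecting anything below a similarity
# threshold. The 0.7 cutoff is an illustrative choice.
from difflib import SequenceMatcher

def best_match(candidates, target, threshold=0.7):
    """Return the candidate most similar to target, or None if all
    fall below the similarity threshold."""
    best, best_score = None, threshold
    for text in candidates:
        score = SequenceMatcher(None, text.lower(), target.lower()).ratio()
        if score >= best_score:
            best, best_score = text, score
    return best

# "Flle" scores 0.75 against "File"; the others score far lower.
print(best_match(["Flle", "Edlt", "Vievv"], "File"))  # Flle
```

This lets a click on "File" succeed even when OCR reads the menu label as "Flle".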
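Jitter targeting and the verification loop combine naturally into one retry loop. This sketch injects `click` and `read_screen` callbacks so it runs without any OS access; the offsets, retry order, and names are illustrative, not ATLAS's actual values:

```python
# Jitter targeting with a verification check: click, nudge by small
# offsets, and stop as soon as the expected text appears on screen.
JITTER_OFFSETS = [(0, 0), (3, 0), (-3, 0), (0, 3), (0, -3)]

def click_with_retry(point, expected_text, click, read_screen):
    """Click at point, retrying with small coordinate offsets until
    expected_text shows up on screen; return whether it did."""
    x, y = point
    for dx, dy in JITTER_OFFSETS:
        click(x + dx, y + dy)
        if expected_text in read_screen():
            return True  # verified: the UI reacted as expected
    return False

# Simulated target: the button only registers clicks at (103, 50),
# i.e. 3 px right of where OCR placed it.
state = {"clicked": None}
def fake_click(x, y): state["clicked"] = (x, y)
def fake_read(): return "Saved" if state["clicked"] == (103, 50) else ""

print(click_with_retry((100, 50), "Saved", fake_click, fake_read))  # True
```

The same loop handles disappearance checks by negating the test (e.g. waiting for a dialog's text to vanish).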
The Result
Unmatched reliability in UI automation. ATLAS can interact with any application without needing DOM trees, accessibility APIs, or slow cloud-based AI vision models. It's fast, private, and resilient to UI changes.