Modern artificial intelligence (AI) technologies have called into question the effectiveness of conventional tools for protecting against bots on the Internet. According to The Conversation, captcha tests, designed to distinguish humans from machines, no longer do the job: today's bots can solve these puzzles faster and more accurately than humans can.
Captcha, which appeared in the early 2000s, was invented by scientists at Carnegie Mellon University. It was originally developed to protect sites from automated programs: bots that created fake accounts, bought up tickets, or distributed spam. The principle was simple: a person had to complete a task that is easy for people but difficult for machines.
The first version of the captcha asked users to type in distorted letters and numbers. Later, in 2007, ReCaptcha appeared, which added words from scanned texts to the tasks. In 2014, Google released ReCaptcha v2, which is still the most popular version. It asks users either to check the "I'm not a robot" box or to select matching images, such as bicycles or traffic lights.
However, AI systems have learned to bypass captcha. Computer vision and language processing technologies allow machines to easily "read" distorted text and recognize objects in images. AI tools such as Google Vision and OpenAI CLIP solve these puzzles in fractions of a second, whereas a human takes far longer. This is already a problem in real life: bots are used to buy up tickets to sports matches or mass-reserve seats, depriving ordinary users of the chance to buy them. In the UK, for example, automated programs mass-reserve driving test slots and then resell them at a large markup.
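To see how little effort such recognition takes today, here is a minimal sketch using OpenAI's CLIP model through the Hugging Face transformers library; the image file name and the label set are illustrative assumptions, not details from the article:

```python
# Minimal sketch: zero-shot image recognition with OpenAI's CLIP
# via the Hugging Face transformers library. The image path and
# label list below are illustrative assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("captcha_tile.png")  # hypothetical captcha image tile
labels = ["a bicycle", "a traffic light", "a crosswalk", "a bus"]

# Score the image against each text label and pick the best match.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)
print(labels[probs.argmax().item()])
```

A model like this needs no captcha-specific training at all: it matches the image against arbitrary text labels, which is exactly the kind of task image captchas were built on.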
Developers, however, are trying to adapt to the new challenges. In 2018, Google introduced ReCaptcha v3, which no longer requires users to solve puzzles. Instead, the system analyzes behavior on the site (cursor movement, typing speed, and other signals characteristic of humans) and assigns each visitor a score reflecting how likely they are to be human.
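On the server side, a site checks this verdict through Google's documented siteverify endpoint, which for v3 returns a score between 0.0 (likely a bot) and 1.0 (likely a human). A minimal Python sketch, where the secret key and the 0.5 threshold are placeholder assumptions:

```python
# Minimal sketch of server-side ReCaptcha v3 verification.
# RECAPTCHA_SECRET and the 0.5 threshold are placeholders; the
# endpoint and response fields come from Google's documentation.
import requests

RECAPTCHA_SECRET = "your-secret-key"  # placeholder

def is_probably_human(token: str, min_score: float = 0.5) -> bool:
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": token},
        timeout=5,
    )
    result = resp.json()
    # v3 responses include "success" and a behavioral "score".
    return result.get("success", False) and result.get("score", 0.0) >= min_score
```

The site itself decides what score counts as "human enough", which is precisely why the whole approach rests on collecting and judging behavioral data.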
However, such methods turned out to be imperfect. First, they raise questions about data privacy, since they depend on collecting information about users. Some sites have gone even further, verifying users with biometrics such as fingerprints, voice commands, or facial recognition, which only sharpens those concerns.
Second, even these systems can already be fooled by advanced AI, and with the advent of AI agents (programs that will perform tasks on behalf of users) the situation may become still more complicated. In the future, sites will need to distinguish between "good" bots that work for the benefit of users and "bad" bots that break the rules. One possible solution is the introduction of digital certificates for authentication, but such schemes are still under development.
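The details of those certificate schemes are not yet settled, but the core idea resembles ordinary public-key signatures: a "good" bot signs its requests with a key that some trusted issuer has vouched for, and the site verifies the signature. A purely hypothetical sketch using Ed25519 keys from the Python cryptography library; every name here is an illustrative assumption, not part of any published standard:

```python
# Purely hypothetical sketch of certificate-style bot authentication.
# All names are illustrative; no published standard is implemented here.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The "good" bot holds a private key whose public half a trusted
# issuer has certified and shared with the site in advance.
bot_key = Ed25519PrivateKey.generate()
certified_public_key = bot_key.public_key()

# The bot signs each request so the site can attribute it.
request_body = b"GET /tickets?event=123"
signature = bot_key.sign(request_body)

# The site accepts the request only if the signature checks out
# against the certified key.
try:
    certified_public_key.verify(signature, request_body)
    print("request accepted: signed by a certified bot")
except InvalidSignature:
    print("request rejected: unknown or forged sender")
```

The hard part, of course, is not the signature math but deciding who issues and revokes such certificates, and that is exactly what remains unresolved.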
The struggle between bots and security systems thus continues. Captcha, once a reliable tool, is losing its effectiveness, and developers will have to find new methods of protection that are both convenient for users and resistant to attackers.