Coders Conquer Security: Share & Learn Series - Insufficient Anti-Automation
Imagine going to the door of an old speakeasy or underground club. The little hole in the door slides open and a burly bouncer asks for the password. The potential visitor doesn't know the password and makes a guess. It's wrong, so the bouncer doesn't let them inside.
That's what normally would happen. Now imagine the visitor who guessed the wrong password immediately tries again, gets it wrong, and is again denied access. Then imagine the potential visitor opens up the dictionary and starts reading off words, starting with something like aardvark and proceeding to try every single possible word.
Most likely, the bouncer wouldn't allow that kind of activity to take place, but websites and applications with insufficient anti-automation do just that. They allow users to keep trying passwords, even using automation techniques, until they finally stumble across the correct one.
In this episode, we will learn:
- How attackers exploit insufficient anti-automation
- Why applications with insufficient anti-automation are dangerous
- Techniques that can fix this vulnerability
How do Attackers Exploit Insufficient Anti-Automation?
Employing automation or dictionary-style attacks, as our imaginary speakeasy visitor did, is nothing new in cybersecurity. In fact, these brute-force-style attacks were some of the first hacker techniques ever deployed, and as computers grew faster, they became more and more efficient. A fast computer can run through an entire dictionary of words in just a few minutes, depending on the speed of the connection between the attacking computer and the targeted system.
Those kinds of automated attacks are why anti-automation software and techniques were created. They give applications the ability to determine whether the actions a user is taking fall outside the norms of typical human behavior.
If an application has insufficient anti-automation checks in place, attackers can simply keep guessing at passwords until they find a match. Or, they might use automation software for other purposes, such as spamming comments into website forums.
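To make the risk concrete, the sketch below shows roughly how little code a dictionary attack needs when a login endpoint imposes no limits. The URL, field names, wordlist path, and success check are hypothetical placeholders, shown only to illustrate the exploit described above.

```python
# Illustrative only: a dictionary attack against a login form that enforces
# no attempt limits. All endpoint and field names below are hypothetical.
import requests

LOGIN_URL = "https://example.com/login"   # hypothetical login endpoint
USERNAME = "admin"                        # hypothetical target account

with open("wordlist.txt") as wordlist:    # a plain dictionary file, one word per line
    for guess in wordlist:
        guess = guess.strip()
        resp = requests.post(
            LOGIN_URL,
            data={"username": USERNAME, "password": guess},
        )
        # With no lockout, rate limit, or CAPTCHA, nothing stops this loop
        # from working through the entire dictionary.
        if "Welcome" in resp.text:        # hypothetical success indicator
            print(f"Password found: {guess}")
            break
```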
Why is Insufficient Anti-Automation Dangerous?
Allowing malicious users to employ automation to try to circumvent security can be dangerous. The reason automated attacks have persisted from the early days of computing until now is that they can be highly effective. If you give an automation program an unlimited amount of time to submit passwords, with no consequences for an incorrect guess, it will eventually find the right one.
When used against something like a forum, waves of obviously scripted comments can frustrate legitimate users, or even act as a kind of denial-of-service attack by squandering system resources. Automated posting might also be used as a tool for phishing or other attacks, exposing the lures to as many people as possible.
Fixing Insufficient Anti-Automation Problems
To fix the problem of insufficient anti-automation, applications must be able to determine whether an action is being performed by a human or by a piece of automation software. One of the most popular and widely used techniques is the Completely Automated Public Turing test to tell Computers and Humans Apart, or CAPTCHA.
A CAPTCHA is basically a Turing test, an idea first proposed by computer scientist Alan Turing in 1950 as a way to tell human and computer behavior apart. Modern CAPTCHAs present problems humans can easily solve but which computers struggle with, or simply can't figure out. A popular one splits a photo into a grid and asks users to identify every square containing a specific item, such as a flower or a face. A typical automation script can't interpret what is being asked of it, and even if it could, that kind of image recognition is beyond most programs not specifically built for it.
Other examples of CAPTCHAs include showing distorted text, asking a simple logic question, or even playing the challenge out loud as audio. Implementing a CAPTCHA challenge at critical points in an application, such as when prompting for a password, can stop automation programs in their tracks.
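As one possible implementation, the sketch below verifies a CAPTCHA token on the server before the password is even checked. It assumes Google's reCAPTCHA v2 siteverify endpoint; the secret key, form field names, and login handler are placeholders for whatever your application actually uses.

```python
# Minimal sketch: verify a CAPTCHA token server-side before processing a login.
# Assumes Google reCAPTCHA v2; the secret key and field names are placeholders.
import requests

RECAPTCHA_SECRET = "your-secret-key"  # placeholder; keep real secrets out of source code

def captcha_passed(captcha_token: str, client_ip: str) -> bool:
    """Return True only if the CAPTCHA provider confirms the token is valid."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={
            "secret": RECAPTCHA_SECRET,
            "response": captcha_token,
            "remoteip": client_ip,
        },
        timeout=5,
    )
    return resp.json().get("success", False)

def handle_login(form: dict, client_ip: str):
    # Reject the request before any password comparison if the CAPTCHA fails.
    if not captcha_passed(form.get("g-recaptcha-response", ""), client_ip):
        return "CAPTCHA failed - please try again."
    # ... continue with normal credential checking here ...
```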
It's also possible to stop automation programs by simply limiting the number of incorrect guesses allowed from the same source. If too many wrong guesses come in, the account can be temporarily locked out, delaying the automation program past the point of usefulness, or it can even be left locked until a human administrator unlocks it. Measures like these should prevent insufficient anti-automation vulnerabilities within an application.
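A minimal sketch of that idea is shown below: it counts recent failed attempts per source and enforces a temporary lockout window. The thresholds are illustrative, and a real deployment would typically back this with a shared store such as Redis and track per-account as well as per-source failures, but the in-memory version shows the principle.

```python
# Minimal sketch: temporary lockout after too many failed attempts from one source.
# In-memory only; production code would use a shared store and per-account tracking.
import time
from collections import defaultdict

MAX_ATTEMPTS = 5           # failures allowed before lockout (illustrative)
LOCKOUT_SECONDS = 15 * 60  # how long the source stays locked out (illustrative)

_failures = defaultdict(list)  # source (e.g. client IP) -> timestamps of recent failures

def is_locked_out(source: str) -> bool:
    now = time.time()
    # Keep only failures that fall inside the lockout window.
    _failures[source] = [t for t in _failures[source] if now - t < LOCKOUT_SECONDS]
    return len(_failures[source]) >= MAX_ATTEMPTS

def record_failure(source: str) -> None:
    _failures[source].append(time.time())

def attempt_login(source: str, check_credentials) -> bool:
    if is_locked_out(source):
        return False  # refuse without even checking the password
    if check_credentials():
        _failures.pop(source, None)  # reset the counter on success
        return True
    record_failure(source)
    return False
```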
More Information about Insufficient Anti-Automation
For further reading, you can take a look at what OWASP says about insufficient anti-automation. You can also put your newfound defensive knowledge to the test with the free demo of the Secure Code Warrior platform, which trains cybersecurity teams to become the ultimate cyber warriors. To learn more about defeating this vulnerability, and a rogues' gallery of other threats, visit the Secure Code Warrior blog.
Ready to find and fix insufficient anti-automation right now? Test your skills in our game arena: [Start Here]
Jaap Karan Singh is a Secure Coding Evangelist, Chief Singh and co-founder of Secure Code Warrior.
Secure Code Warrior is here for your organization to help you secure code across the entire software development lifecycle and create a culture in which cybersecurity is top of mind. Whether you’re an AppSec Manager, Developer, CISO, or anyone involved in security, we can help your organization reduce risks associated with insecure code.