When the World Wide Web was launched, the Internet was still in a nascent stage. The general level of trust was high, and application development was just about implementing the application logic. Security was one of the last checks done by the QA team, and the average developer hardly had to be concerned with it.
The Internet has come a long way since then. When you put a web application on the Internet, it gets scanned for vulnerabilities even before the site reaches Alpha. If you are a developer who ignores security, chances are you will not keep your job.
Solutions have emerged to tackle these security issues. Some have stayed on and some have faded away.
With web applications, anything that a user does in a browser can be automated. This means one can write a program to access information from a web site. Incidentally, this is how most search engines work: they have programs that scan web sites and index the information in them for searches.
This ability can also be misused to
- Crack passwords using the brute-force method. Programs can try hundreds of passwords per second to guess a password, and once the right one is found, unauthorized access is gained.
- Make web sites less responsive. Using a program, one can flood a site with requests. This makes the site sluggish, and genuine users are put off by the slow response.
- Copy information. It might take you weeks of research to put the information on your web site, but it takes just seconds for someone to copy it.
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) was proposed as a solution to prevent such misuse. In this method, whenever a part of the web site's information is to be protected from programs, an image like the one below is presented to the user.
The user is expected to enter the number in a separate input box, and access to the information is provided only if it matches. Computers (in spite of the much-hyped talk about AI) find it tough to do a simple task like identifying the numbers in the above image, and this therefore puts a big roadblock in front of automated programs.
CAPTCHA did its work, and many web sites adopted it.
While the initial idea was good, a section of users subscribed to the belief that the more difficult the CAPTCHA, the more effective it is. Theories floated around of some CAPTCHAs being broken, and this led to a rush to make CAPTCHAs more complicated.
The images were distorted and the color variations made so bizarre that it became difficult even for humans to read the CAPTCHA. In addition, in the name of innovation, puzzles like
- How much is 5+6?
- Click on all pictures with a Road sign
were put in, complicating the entire process. I remember a time when US-style road signs were displayed in the image and people here were simply not able to complete the CAPTCHA.
What started out as a promising solution was on its way to becoming a nuisance for genuine web users.
Early in the implementation of MTNLCloud, we decided to re-look at the CAPTCHA process. In any web application, there are two parts:
- Client part, which runs in the user's browser.
- Server part, which runs on the server.
Application security is pretty much decided by the server-side security in place. So instead of relying on CAPTCHA, we decided to implement a rate limiter at the server level for the application. The rate limiter will
- Block requests that exceed a certain number per minute.
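Such a rate limiter can be sketched roughly as follows. This is a minimal in-memory illustration, not the actual MTNLCloud implementation; the client key, the class name and the limit of 3 requests per window are assumptions chosen for the example.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: at most max_requests per window per client.

    Illustrative sketch only; a production limiter would typically sit in
    the web server or a shared store, not in application memory.
    """

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client id -> recent request timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        # Forget requests that have fallen outside the current window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # block: too many requests in this window
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=60)
print([limiter.allow("1.2.3.4", now=t) for t in (0, 1, 2, 3)])
# → [True, True, True, False]
```

The fourth request from the same client within the window is rejected, while other clients remain unaffected; once the window slides past the old timestamps, the client is allowed through again.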
This solves all the problems that CAPTCHA solves. If an automated program sends a lot of requests, it effectively gets no information, or at best patchy and wrong information. Combined with client-side validation of input, this clinched the issue.
We do still have a simple CAPTCHA, consisting only of numbers, in a single color and with no complicated image manipulation, to thwart script kiddies.
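The server side of such a numeric-only CAPTCHA can be sketched as below. The image-rendering step is omitted, and all names here (`SECRET`, `new_captcha`, `verify`) are illustrative assumptions, not the actual MTNLCloud code; in a real deployment the code would usually be kept in the server session rather than carried as a token.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side key for this sketch.
SECRET = b"replace-with-a-server-side-secret"

def new_captcha(digits=5):
    """Generate a random numeric code plus a token the form can carry back."""
    code = "".join(secrets.choice("0123456789") for _ in range(digits))
    # The code is what gets rendered as a plain, single-color image;
    # the HMAC token just keeps this sketch self-contained and stateless.
    token = hmac.new(SECRET, code.encode(), hashlib.sha256).hexdigest()
    return code, token

def verify(user_entry, token):
    """Check the user's entry against the token in constant time."""
    expected = hmac.new(SECRET, user_entry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```

Because the code is plain digits with no distortion, genuine users get it right on the first try, while the rate limiter above, rather than image complexity, does the heavy lifting against automated guessing.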
So far it has held up fine, and we have not seen a single wrong CAPTCHA entry. Keep checking this section for further updates.