Daniel Lemire's blog


How artificial intelligences are already at war with us

In the most recent Communications of the ACM (February 2007), Joshua Goodman and his coauthors tell us, in Spam and the Ongoing Battle for the Inbox [1], that it is very difficult to build reliable CAPTCHAs, or (reverse) Turing tests, to differentiate machines from human beings. Against the most robust tests, machines had a success rate of 5%; against weaker ones, as much as 67%. A 95% failure rate may sound high, but it only means that the machine needs about 20 tries on average to succeed once. So you slow down the machine by a factor of 20 (in the best of cases), and since machines are thousands of times faster than human beings, you have achieved very little. The authors do not report human error rates, but I know that I fail Blogger’s tests routinely, and I am not an idiot (though you may think otherwise if you wish), not blind, and so on.
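The "factor of 20" is just the mean of a geometric distribution: a bot that succeeds with probability p needs 1/p attempts on average. Here is a back-of-the-envelope sketch in Python (my choice of language, not from the article) that checks the arithmetic against a simulation:

```python
import random

def expected_attempts(p: float) -> float:
    """Mean number of tries before one success (geometric distribution)."""
    return 1.0 / p

def simulate_attempts(p: float, trials: int = 100_000) -> float:
    """Empirical average number of tries, to sanity-check the formula."""
    total = 0
    for _ in range(trials):
        tries = 1
        while random.random() >= p:  # each attempt succeeds with probability p
            tries += 1
        total += tries
    return total / trials

# Success rates reported for the strongest and weakest CAPTCHAs.
for p in (0.05, 0.67):
    print(f"p = {p:.2f}: expected {expected_attempts(p):.1f} tries, "
          f"simulated {simulate_attempts(p):.1f}")
```

At p = 0.05 the bot needs about 20 tries per success; at p = 0.67, fewer than 2. Either way, retries are essentially free for a machine.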

This is not just a theoretical concern. I have used visual CAPTCHAs on my blog before, and they failed me: I still got spammed. The solution I now use is a very simple CAPTCHA, but one that is unique to my blog. Since I am not a very popular blogger, I hope that spammers will not bother breaking it. If, by some strange turn of events, I ever became a popular blogger, my solution would be to routinely craft new CAPTCHAs.
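For illustration only (the post does not describe the actual challenge), a site-specific CAPTCHA can be as little as one fixed question; here is a hypothetical sketch in Python, with the question and accepted answers invented for the example:

```python
# Hypothetical site-specific text CAPTCHA. The question and answers below
# are made-up examples, not the blog's real challenge.
SITE_QUESTION = "What is the first name of this blog's author?"
ACCEPTED_ANSWERS = {"daniel"}

def passes_captcha(answer: str) -> bool:
    """Accept a comment only if the site-specific answer matches."""
    return answer.strip().lower() in ACCEPTED_ANSWERS

if __name__ == "__main__":
    print(SITE_QUESTION)
    print(passes_captcha("Daniel"))   # True
    print(passes_captcha("Britney"))  # False
```

The point is economics, not cryptographic strength: breaking a one-off challenge requires per-site human effort, which only pays off against popular targets.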

This means that there are AI bots out there at war with legitimate bloggers.

To those who doubt AI can be used for evil purposes, well, there you go. There are people out there purposely designing AIs for evil (spamming is certainly unethical). We are not talking about the military. We are not talking about mad scientists. We are talking about the worst kind of evil masterminds: greedy, unethical capitalists.

[1] They cite Using Machine Learning to Break Visual Human Interaction Proofs by Chellapilla and Simard.