Chapter 6: Artificial Intelligence

I have a confession to make. When I chose the title of this book, Robots Will Steal Your Job, I was not completely honest with you. Robots will eventually steal your job, but before that something else is going to jump in. In fact, it already has, in a much more pervasive way than any physical machine could ever do. I am, of course, talking about computer programs in general. Automated Planning and Scheduling, Machine Learning, Natural Language Processing, Machine Perception, Computer Vision, Speech Recognition, Affective Computing, Computational Creativity, these are all fields of Artificial Intelligence that do not have to contend with the cumbersome issues that Robotics has to face. It is much easier to enhance an algorithm than it is to build a better robot. A more accurate title for this book would have been “Machine Intelligence and Computer Algorithms Are Already Stealing Your Job, and They Will Do So Ever More in the Future” – but that was not exactly a catchy title.

The public perceives intelligent machines to be human-like robots that perform our daily duties. Thank you, Hollywood. In reality, most “intelligent” agents do not require a physical body, and they operate mostly at the level of computation. Data crunching and aggregation are what they do best. Ironically, it is harder to automate a housemaid than it is to replace a radiologist1. A radiologist is a medical doctor who specialises in analysing images generated by various medical scanning technologies. It is a popular area of focus for newly minted doctors, as it offers relatively high pay, regular work hours, no weekend shifts and no emergencies. The downside is that it is a very repetitive job. Even though it takes at least thirteen years of study and training beyond high school, it is quite easy to automate this job2. Think about it. The focus of the job is to analyse and evaluate visual images, whose parameters are well defined, since they usually come directly from computerised scanning devices. It is a closed system, with a number of well-known variables that have mostly already been defined, and the process is very repetitive. What this equates to is a database of information (thirteen years of study and training) connected to a visual recognition system (the radiologist’s brain); a process that already exists today and finds many applications.

Visual pattern recognition software is already highly sophisticated. One such example is Google Images. You can upload an image to the search engine, and Google then uses computer vision techniques to match your image against other images in the Google Images index and additional image collections. From those matches, it tries to generate an accurate “best guess” text description of your image, and to find other images with the same content as the one you uploaded.



Figure 1.1: Front page of Google Images. Notice the camera icon on the right of the search bar: click it to upload your image.



Figure 1.2: I upload my image, named “guess-what-this.is.jpg”



Figure 1.3: The software correctly recognises it as the robot ASIMO by Honda, and offers similar images in return. Notice that the proposed images show ASIMO in different positions and from different angles, not the same image in different sizes. This algorithm recognises millions of different patterns, as it is a general-purpose application. Task-specific pattern recognition software is less complex to develop, although it must be much more accurate, as the stakes are higher.
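The matching step behind such a reverse image search can be illustrated with a toy “average hash”: visually similar images produce similar bit-string fingerprints, and the index returns the closest ones. This is only a minimal sketch of the general idea, not Google’s actual method, and the images and filenames here are invented for illustration.

```python
# Toy "average hash": a crude stand-in for the perceptual fingerprints
# that reverse image search engines compare. Illustrative only.

def average_hash(pixels):
    """Map a grayscale image (list of rows of 0-255 ints) to a bit string:
    1 where a pixel is brighter than the image's mean, 0 otherwise."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits; smaller means more similar images."""
    return sum(x != y for x, y in zip(a, b))

# Tiny 4x4 "images": the query, a brightened copy, and an inverted pattern.
query     = [[10, 200, 10, 200]] * 4
similar   = [[20, 210, 20, 210]] * 4   # same pattern, shifted brightness
different = [[200, 10, 200, 10]] * 4

index = {"similar.jpg": average_hash(similar),
         "different.jpg": average_hash(different)}

q = average_hash(query)
best = min(index, key=lambda name: hamming(q, index[name]))
print(best)  # similar.jpg
```

Note that the brightened copy hashes to exactly the same fingerprint as the query, which is the whole point: the fingerprint captures the pattern, not the raw pixel values.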

Similarly, many governments have access to software that can help identify terrorists in airports based on visual analysis of security photographs3. CCTV cameras in London and many other cities have advanced systems that track people’s faces and can help the police identify potential criminals4.

Radiology is already subject to offshoring to India and other places where the average pay for the same task is one tenth as high5. How long do you think it will be before we “offshore” the work to workers that need no pay at all, only a bit of electricity to run?

In contrast, the duties of a housemaid, a job that requires no education and no particular skills, make up a highly complicated set of tasks for a robot. Such a robot would need sophisticated motor skills and coordination in a 3D environment. It has to recognise thousands of different objects, move freely around the house, climb stairs, apply pressure with extreme care, and make millions of decisions per second; all while consuming very little energy and costing less than a $15-per-hour housemaid. The most sophisticated robot that comes close is Honda’s ASIMO, which costs millions and still cannot perform as well as a regular housemaid.

Cheap, reliable, human-like robots will eventually be available. But for now, it’s AI-time baby!

1.1 Smarter, Better, Faster, Stronger

You might think that computers are stupid because they cannot make sense of things the way we do. This is true. You can take a toddler, show them a picture, and they will tell you right away whether it is a picture of a person, a book or a cat. Computers do not work like that. It is very hard for computer programs to recognise patterns the way humans do. We can look at pictures, take them in at a glance and recognise known patterns easily. We are good at this. We evolved this unique ability because it gave us a survival advantage over other species. Computer programs, on the other hand, did not evolve the way our brains did, and thus they work in very different ways. They can do complex mathematical calculations and solve millions of differential equations in one second, whereas many of us struggle to do even the most basic math. Image interpretation, effortless and instantaneous for people, remains a significant challenge for Artificial Intelligence6. Computers crunch data, while we make sense of it all. This has been true for quite some time, but is it still the case today?

Recent developments in the field of Artificial Intelligence, specifically in Machine Learning applications, have begun to change this. Over the last 20 years, we have devised and perfected various mathematical algorithms that can learn from experience, just as we do. The principle behind them is quite simple: train a computer program to learn, without explicitly programming it. How does that work? There are various methods to achieve this: supervised and unsupervised learning, reinforcement learning, transduction, and several variations and combinations of them. Each of these methods then applies specific algorithms, some of which you may have heard of (e.g. neural networks), and most of which probably sound very obscure (e.g. support vector machines, linear regression, naive Bayes). You do not need to learn the specifics, but the main idea is this: just as we learn through experience, so do these programs. They have evolved.
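To make “learning from experience” concrete, here is a minimal supervised-learning sketch. A toy nearest-centroid classifier stands in for the fancier algorithms named above: it is never told the rule for telling cats from dogs, it infers one from labelled examples. The features and numbers are invented for illustration.

```python
# Minimal supervised learning: the program is not given explicit rules;
# it infers them from labelled examples (its "experience").

def train(examples):
    """examples: list of (features, label). Learn one average point
    (centroid) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(model, features):
    """Assign the label whose learned centroid is closest."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda lbl: dist(model[lbl]))

# Features: (height_cm, weight_kg). This is the training "experience".
training = [((30, 4), "cat"), ((35, 5), "cat"),
            ((60, 25), "dog"), ((70, 30), "dog")]
model = train(training)

print(predict(model, (33, 4.5)))  # cat
print(predict(model, (65, 28)))   # dog
```

Feed it more labelled examples and the centroids shift accordingly: the program improves with experience, which is the defining trait of all the methods listed above.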

We might not be so different from them after all.

1.2 It’s All About the Algorithms

Learning algorithms are improving in accuracy and performance every day. Just five or six years ago they were very sloppy and their results were unreliable. Today, however, things are changing rapidly. Not long ago, Google search results were the same for everyone, no matter where you came from. Today, it is likely that no two Google searches ever give exactly the same results. Instead, what you get is a personalised version, containing the pages that are most likely to interest you, based on a variety of criteria. Say you search for a pizzeria. They can look at your IP address, or geolocate you using GPS technology, and return the top results in your area. If you have a registered Google account, they can look at the history of all your previous searches: where you clicked, when you clicked, how many times, which domains you visited the most (or the least). They know if you are male or female, young or old, and based on that they can narrow down the search to an even more personalised level. If you have a Gmail account, they will know many things about your habits, the places you visit, the places you wish to visit, and the people you usually talk to; they can cross-reference that data as well. Of course, when I say “they”, I do not mean any particular person. There is nobody personally looking at your profile, your data, your search history, or your habits; that would violate privacy laws, and it would also be practically impossible to perform these operations under human supervision. I mean the programs. Everything I have described happens billions of times a day, in a matter of milliseconds or less, for each occurrence. Every day these programs learn something new about us.
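The idea of the same query producing different rankings for different users can be sketched in a few lines. This is a toy model with made-up signals and weights, not Google’s actual ranking system: each result’s score mixes a generic relevance value with per-user signals such as location and click history.

```python
# Toy personalised ranking: one query, two users, two different orderings.
# Signals, domains and weights are invented for illustration.

def score(result, user):
    s = result["relevance"]                 # generic, the same for everyone
    if result["city"] == user["city"]:
        s += 2.0                            # geolocation boost
    s += 0.5 * user["clicks"].get(result["domain"], 0)  # history boost
    return s

results = [
    {"domain": "pizza-roma.example", "city": "Rome",     "relevance": 1.0},
    {"domain": "pizza-nyc.example",  "city": "New York", "relevance": 1.2},
]

alice = {"city": "Rome",     "clicks": {"pizza-roma.example": 3}}
bob   = {"city": "New York", "clicks": {}}

for user in (alice, bob):
    ranked = sorted(results, key=lambda r: score(r, user), reverse=True)
    print([r["domain"] for r in ranked])
```

Alice, who is in Rome and has clicked the Roman pizzeria before, sees it ranked first; Bob, in New York with no history, sees the opposite order, even though the underlying index and query are identical.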

Another major difference is that computers can learn faster, and they have virtually no limit on how much they can learn (thanks to the exponential increase in computational power and in memory storage, respectively). Think about it: it takes a few years for a child to learn to speak, read, write and recognise things, and even more time to learn a sophisticated technical skill. It can take 20+ years of study and experience to become a proficient medical doctor. If one day that doctor dies, stops working, goes on permanent vacation or retires, it will take another 20 years for the next person to take their place. Granted, the entire profession might advance, but the learning curve to get up to speed with current standards does not change much. Computers have no such limitations. Learning might require a lot of time at the beginning, but once any progress is made, it is propagated throughout the whole network. The next computer does not need to re-learn everything from scratch; it can simply connect to the existing network and benefit from the collective knowledge gained by the contributions of other computers.

Of course, the algorithm used is important. If you have a bad algorithm, you will end up with nothing interesting. But what has really made the difference in the last 10 years is the sheer volume of data at our disposal. We are buried in data of all kinds, so much that we do not have enough minds to analyse it and make sense of it all. Over the last few years there has been a wave of public data coming from all sources: governments, NGOs, public libraries, as well as private websites that collect real-time data from people. We contribute to this immense database of collective knowledge simply by living our lives. Every tweet we send, search we run, picture we upload, friend we add on a social network, place we visit, phone call we make: they all feed the massive distributed super-computer composed of the billions of computers around the globe connected to each other through the Internet.

That being said, you might be wondering how far we have come with AI Systems. Have they reached human-level intelligence? If not, will they ever? What technology exists already?

For now you can rest easy. AI systems have not come anywhere near human levels of general-purpose intelligence. However, they are evolving rapidly, and some expect them to reach and even surpass humans by 2030.7 Others disagree, and only time will tell who is right.

What we know for certain is that today we already have machines that surpass humans at many specific tasks. This leads us next into exploring the evidence of automation.

Notes

1The example is taken from The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, Martin Ford, 2009. CreateSpace. pp.64-67.

2“In reality, there is another factor that might slow the adoption of full automation in Radiology: that is malpractice liability. Because the result of a mistake or oversight in reading a medical scan would likely be dire for the patient, the maker of a completely automated system would assume huge potential liability in the event of errors. This liability, of course, also exists for radiologists, but it is distributed across thousands of doctors. However, it is certainly possible that legislation and/or court decisions will largely remove this barrier in the future. For example, in February 2008, the U.S. Supreme Court ruled in an 8-1 decision that, in certain cases, medical device manufacturers are protected from product liability cases as long as the FDA has approved the device. In general, we can expect that non-technological factors such as product liability or the power of organised labor will slow automation in certain fields, but the overall trend will remain relentless” from: The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, Martin Ford, 2009. CreateSpace. p.67.

3Can AI Fight Terrorism?, Juval Aviv, 2009. Forbes.
http://www.forbes.com/2009/06/18/ai-terrorism-interfor-opinions-contributors-artificial-intelligence-09-juval-aviv.html

4Smart CCTV System Would Use Algorithm to Zero in on Crime-Like Behavior, Clay Dillow, 2011. Popular Science.
http://www.popsci.com/technology/article/2011-08/new-cctv-system-would-use-behavior-recognition-zero-crimes

5The offshoring of radiology: myths and realities, Martin Stack, Myles Gartland, Timothy Keane, 2007. SAM Advanced Management Journal.
http://www.accessmylibrary.com/coms2/summary_028630757731_ITM

6Comparing machines and humans on a visual categorization test, François Fleuret, Ting Li, Charles Dubout, Emma K. Wampler, Steven Yantis, and Donald Geman, 2011. Proceedings of the National Academy of Sciences.
http://www.pnas.org/content/early/2011/10/11/1109168108.full.pdf

7The Singularity Is Near: When Humans Transcend Biology, Kurzweil, 2005. Penguin Books.

