One day, some say, computers will be able to build other computers without human intervention. They will build machines better than themselves and, in a snowball effect, become more intelligent than humans. This day, to some, is known as the Singularity. If you have ever watched The Matrix, this should sound familiar.

However, this vision of the future is wrong. Computers do not think. Computers are not capable of intelligence comparable to a human’s. Computers add numbers, and they do it very, very fast. In that sense, computers have been more intelligent than humans ever since Alan Turing helped build his codebreaking machines in the early 1940s.

Humans, by contrast, reach decisions by evaluating sensory stimuli against a norm. My father spent much of his working life analyzing insurance forms related to car accidents. His experience showed that designing computer programs that correctly understand the description of an accident is amazingly hard. Humans only write down what differs from the norm (e.g. “I was driving at night” implies that the headlights were on) and can function without knowing every single detail of a scene. Missing pieces are assumed to be whatever we consider normal.

For a computer to understand what a “normal” state is would require a database of everything that counts as normal. Such a database would not only be enormous; it would also be impossible to keep current. Humans can revise their interpretation of the norm in a matter of seconds, whereas computers would need to be reprogrammed.
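To make this concrete, here is a deliberately naive sketch (in Python; every name and rule in it is invented for illustration, not taken from any real system) of what “assuming the normal” looks like once it has to be spelled out in code:

```python
# Toy illustration (hypothetical): completing an accident report
# from a hand-coded table of "normal" defaults.

# Every assumption a human makes implicitly must be written down here.
DEFAULTS = {
    "headlights": "off",
    "road": "dry",
    "visibility": "good",
    "speed": "within limit",
}

# Context-dependent overrides: "night" implies headlights on, etc.
# The programmer must anticipate every such rule in advance.
OVERRIDES = {
    "night": {"headlights": "on", "visibility": "reduced"},
    "rain": {"road": "wet", "visibility": "reduced"},
}

def fill_in(report: dict, context: list[str]) -> dict:
    """Complete a partial report with whatever counts as 'normal'."""
    completed = dict(DEFAULTS)
    for condition in context:
        # Conditions nobody anticipated are silently ignored.
        completed.update(OVERRIDES.get(condition, {}))
    completed.update(report)  # explicit statements win over defaults
    return completed

# "I was driving at night": a human infers the headlights were on.
print(fill_in({"speed": "50 km/h"}, context=["night"]))
```

The table can never be complete: any situation the programmer did not anticipate silently falls back on defaults that may be absurd, while a human would simply re-evaluate the norm on the spot.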

Creating networks of computers could increase the speed at which computers gain experience. Google’s self-driving cars share information from their sensors with a central database, so that anything one car encounters is known to all the others [1]. However, this wealth of experience can only be interpreted within a framework designed by the coders, based on what they considered the norm. Humans can react to a sudden change in what is normal (e.g. an earthquake that destroys much of the infrastructure), whereas a Google car would stop, for safety reasons [2]. And driving a vehicle is a trivial task compared to running a company or building a relationship.

More importantly, a computer program will have a hard time functioning with assumptions it knows to be false. Risk assessment, for instance, is one of the human brain’s weakest areas. Humans have no problem preparing for events that will almost certainly never happen while ignoring real risks. This is why our governments invest heavily in terrorism prevention and comparatively little in emergency services. A computer would need to be told which risks to overestimate or underestimate before it could mimic human behavior.
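As a toy illustration (all names and numbers below are invented for the example, not actual statistics), mimicking that bias in code means someone has to hand-tune it:

```python
# Hypothetical sketch: base rates could come from data, but the bias
# factors that make the output "human" must be supplied by a programmer.
ANNUAL_RISK = {"terrorist attack": 1e-7, "road accident": 1e-4}   # invented numbers
HUMAN_BIAS = {"terrorist attack": 10_000.0, "road accident": 0.5} # hand-tuned

def perceived_risk(event: str) -> float:
    """Actual probability distorted by an explicitly coded human bias."""
    return ANNUAL_RISK[event] * HUMAN_BIAS.get(event, 1.0)

# With these made-up weights, terrorism now "feels" riskier than driving,
# matching the human misjudgment -- but only because we told it to.
assert perceived_risk("terrorist attack") > perceived_risk("road accident")
```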

Not only are humans unable to understand risk, they reject those who do. Robert McNamara, for instance, is famous for coldly assessing all the actions he took or might have taken as president of Ford and as US Secretary of Defense. He was called “an IBM machine with legs” [3] and is still widely viewed as abnormal.

The Singularity, as a concept, is a waste of time. It is useful to its proponents, such as Google, because it supports their agendas. After all, if computers are set to rule us, why spend time trying to regulate tech companies? Surely you do not want to anger your future master, do you?

The Singularity does not help us understand technological change or its implications for society. We should focus on the actual changes that non-thinking computers are bringing upon us. Self-driving cars and trucks, for instance, will make millions of drivers redundant. We must prepare now for that day, which is much closer to us than the Singularity will ever be [4].

Notes

1. Chris Urmson gave a great TED Talk on Google’s self-driving cars in March 2015.

2. Self-driving cars are not a bad thing. My point is that the software running them cannot be compared to human intelligence.

3. By Republican Sen. Barry Goldwater, apparently.

4. Gary Marcus makes the case for this argument much better than I can in this one-hour conversation with Russ Roberts.