Hans-Johann Glock

Computers are not curious

People and intelligent machines learn in similar ways, says philosopher Hans-Johann Glock. Unlike humans, however, computers do not pursue their own goals – not yet, at least.

by Roger Nickl; translated by Caitlin Stephens

Hans-Johann Glock, what is intelligence?

Hans-Johann Glock: Intelligence is the ability to solve novel problems in a flexible way. In a way that is not pre-programmed, not prescribed. Intelligent animals, for example, can learn that certain fruits, while they may look tasty, are hard to digest. This enables them to adapt their reactions to new situations. To solve problems intelligently, you need more than physical strength or stamina: you need to be able to look at situations in a variety of ways.

Nowadays we share the world not only with intelligent animals, but also with machines that are capable of learning. How intelligent are they?

Glock: This is a very interesting development. Machines that are capable of learning in a technical sense are mainly neuromorphic computers that imitate the structure and workings of the human brain, in particular through artificial neural networks. With deep-learning processes, they can optimize their problem-solving performance by evaluating data in stages. They get better at it by “practicing” and gaining experience – similar to the human learning process.

So humans and machines aren’t so different after all?

Glock: The main differences are quantitative, not qualitative. AI is trained before being put into use. That means it is guided and receives access to certain data and not to others. It’s not much different for humans. Parents expose their children to certain situations and avoid others, for example, though they may not always be aware of this.

So how do we differ from smart computers?

Glock: There are two considerable differences: Humans learn not just through trial and error, but at a certain point develop their own capacity for insight and foresight. That is the highest form of learning. The second point is even more important: For humans, learning has an emotional and affective component. We are curious and look for information ourselves.

We want to find things out – a will to knowledge if you like?

Glock: Exactly, and for us humans that is more important than Nietzsche’s will to power. Computer-assisted AI systems, in contrast to people, are not curious. Even if they were, as things currently stand they would not be able to source their own information. In comparison we humans are autonomous: We look for the information that we need or that interests us.

We also need to use our senses, through which we perceive the world. What role do they play?

Glock: They play a central role. AI technology is eerily clever and enables us to do clever things. But are the systems themselves clever? It’s only possible to speak of true intelligence when, beyond information processing and learning, two further factors come into play: Goals and perception. If intelligence is the ability to solve problems, including one’s own, then the system must be able to pursue its own goals by orienting itself and moving about in the world.

Then it would need a body?

Glock: Yes, it would need to be an autonomous mobile system. True intelligence needs a body: That is the idea of embodied cognition, which is currently experiencing a boom in cognitive science and AI research.

Will it be possible in the future to develop AI systems that can think independently?

Glock: I don’t see any philosophical reason why not. All cognitive abilities require a material vehicle, and whether that vehicle is silicon-based or carbon-based, whether it’s a product of biological or of cultural and technological evolution, basically makes no difference. I therefore assume that it would in principle be possible to develop AI systems in the future that have perception and can set themselves goals. The question is, do we want to go there?

You said that current AI allows us to do clever things. What sort of things were you thinking of?

Glock: AI allows us to work out complex calculations and statistics. Artificial neural networks can also recognize patterns much better than classic digital computers. And they are capable of learning, for example by playing against themselves. In 2016, the program AlphaGo beat the world’s best go player Lee Sedol in four out of five games. That is certainly impressively clever. But you have to remember: Even AlphaGo cannot on its own initiative apply its information processing and learning ability in other areas, not even in a game with fixed rules like bridge. Its intelligence doesn’t transfer to other contexts. The holy grail of AI research, a general artificial intelligence, remains a distant dream.

Even if that’s true, isn’t it a blow to our self-confidence that computers can beat us so easily in such games?

Glock: I think the aspect of injured pride is a very important point in the current AI debate. On the one hand, we are interested in AI for very practical reasons – technological, economic, political. And on the other hand, artificial intelligence and robots fascinate us because they serve as points of orientation for humans. Thinking about the nature of artificial systems also always involves thinking about human nature. In earlier times, humans looked down at the animals and up at the gods to find comparisons with themselves. Today, computers and robots have taken the place of divine beings.

In what way?

Glock: Because we ask ourselves whether they are cleverer than us. The comparison is very reminiscent of the religious question of to what extent the divine is superior to us. We do not ask whether animals are cleverer than us. With computers we are no longer so sure. We think: They are only automatons, machines, they come below animals in the pecking order. But then: These machines can achieve things that put us all in the shade – like the above example of playing go, or with more practical things like diagnosing certain eye conditions.

How can AI help us find out more about ourselves?

Glock: There are many things that we once assumed were the prerogative of human beings, but which it now seems artificial systems can do better than us. Rationality, thinking, intelligence, language, emotions, consciousness – the AI phenomenon gives us the possibility and the opportunity to reflect on how exactly these things work and whether they really set us apart from animals and machines. Is our position in the world determined by such capacities?

What other benefits could we gain from AI?

Glock: AI helps us to identify the causes and developments of climate change: Without such systems we would not know as much as we know today. Self-driving vehicles are also interesting, because they would probably lead to a decrease in the number of accidents. I see a lot of potential there. If AI systems are well thought through, they may even help us to extend our own intelligence. Then the question of who is more intelligent would not be so important. But for that we need to gain a better understanding of how adaptive networks work. In an ideal situation, we will understand artificial systems so well in the future that we will be able to collaborate with them as easily as we do with reliable human colleagues. The indignity of admitting that AI is cleverer than us in certain areas would certainly be a bitter pill though.

What role will AI play in the future?

Glock: I’m not naive about the risks. AI also has the potential for systematic misuse. The main problem is that this technology is in the hands of individuals and institutions that are not democratically governed. Here I mean both IT titans and repressive states. I am hopeful that we will have a proper discussion about these issues before the (artificial) horse has bolted. At the moment, the opportunity still exists for us to reflect on what kind of technology we actually want to create and how we can work with it constructively.