Will machines ever become conscious?

"Machines have long been conscious"

Can machines develop consciousness? The question has inspired countless science fiction writers; the answer divides science. For the computer scientist Joanna Bryson, however, it is clear: "Artificial intelligence has long since developed consciousness."

"The only question is: what kind of consciousness?" Said Bryson in an interview with science.ORF.at last August at the Alpbach Forum. When it comes to self-perception, computers are more aware of themselves than people, says the computer scientist at the University of Bath: “A computer has access to every bit of its memory, i.e. all of its memories, and can call them up at any time. There is no unconscious. That also makes computers more controllable than humans. "

Fundamental difference from humans

Artificial intelligence (AI), like natural intelligence, can consciously perceive experiences, make decisions and act, drawing on its memories in those decisions. According to Bryson, however, there is one difference from human consciousness: the specifically human experiences that we build into our consciousness.

"Machines can learn the meaning of words, but they cannot feel that meaning. Loving, feeling excluded, winning, losing: these things mean something to us because we are social beings. We share feelings with monkeys and other animals, but not with computers."

No feelings without a body

"A lot of people think you have to give artificial intelligence personal rights because it's conscious. But they're just machines," explains Bryson. These machines do learn from their experience, however, and are becoming more and more intelligent. So what if they learned to develop feelings? Bryson does not think this is possible: "There will always be a difference in experience here." She is convinced that feelings require a biological body.

If we weren't just simulating neural networks but replicating an entire body with nerves and hormones, then it would no longer be artificial intelligence: "Then we would be building a clone. If we were to clone humans, we would have to treat the clones like humans and grant them rights. Artificial intelligences remain machines. One has to make a clear distinction."

The expression of feelings, however, can be programmed: "You could program a computer to seek out other computers when none are nearby, and to give that a high priority. One would get the impression that it feels lonely, but in reality it doesn't."
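Bryson's hypothetical is easy to make concrete. Below is a minimal sketch (not from the article; all names are illustrative) of a program whose "loneliness" is nothing but a high-priority rule:

    # Hypothetical sketch: "loneliness" as a programmed priority, not a feeling.
    class Agent:
        def __init__(self):
            # Seeking company simply outranks every other behavior,
            # as in Bryson's example; nothing here is experienced.
            self.priorities = {"seek_other_computers": 3, "run_jobs": 2, "idle": 1}
            self.peers_nearby = 0  # number of other computers detected

        def choose_action(self):
            if self.peers_nearby == 0:
                # No peers around: the highest-priority behavior always wins.
                return max(self.priorities, key=self.priorities.get)
            return "run_jobs"

    agent = Agent()
    print(agent.choose_action())  # prints "seek_other_computers"

An observer might read that output as a longing for company; inside the program it is only a dictionary lookup.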

The fact that AI can act as if it had feelings unsettles people, says Bryson: "We are afraid of not treating computers correctly. But a computer doesn't suffer; only the programmer can suffer."

Regulation and control necessary

According to the scientist, the ethics of artificial intelligence must therefore focus on people. "We absolutely need regulation and control of AI," says the computer scientist, for example in the area of autonomous weapons and the use of big data. And on a transnational level, because:

"A national government that comes to power through hacking or through the targeted use of big data in an election campaign has little interest in regulating AI. We have a similar problem in the UK right now. We know that the data analysis company Cambridge Analytica provided its services to the Leave campaign free of charge." With the help of that data, election advertising for "Brexit" could presumably be targeted with precision.

Never blame the machines

"Another dangerous thing about artificial intelligence is that you can let it do things that you would never do yourself, such as killing people. This is particularly relevant in warfare."

The robot laws that Isaac Asimov formulated in 1942, and which still play a role in the debate about AI ethics today, state among other things that robots must not injure human beings. Bryson considers these laws obsolete: "There are already robots that kill people. They are used in war zones. Such a robot law would fail in reality." Nevertheless, she says, we need to consider which ethics should apply to artificial intelligence.

The top principle of such an ethics, for the computer scientist: a robot can never be blamed for its actions. There are always people who are responsible for what the robots do. "When those in charge say that the AI makes the decisions, it's just an attempt to evade responsibility." Only programmers and their clients can act ethically wrong, not the robots.

Katharina Gruber, ORF regional studio Vienna, from Alpbach

More on the topic:

Ö1 broadcast note: A segment in Digital.Leben on August 31, 2017, was also devoted to Joanna Bryson's research.