How advanced are robots compared to animals?

Are Robots Better Employees?

A Japanese hotel relied heavily on robots to counter labor shortages and failed in some ways. Instead of being relieved of work, the staff were busy fixing guests' problems with the robots: the concierge robot could not answer guests' questions satisfactorily, and the in-room virtual assistant switched itself on at night and woke sleeping guests with its remarks. Are robotics and artificial intelligence (AI) advanced enough that such systems can replace employees or support them in a "meaningful" way?

We met Prof. Dr. Oliver Bendel, an expert in information ethics and machine ethics, and talked about the relationship between humans and machines. In this interview, he explains how today's machines are designed and how they will, and should, be designed in the future.

Companies are already using artificial intelligence to recruit new employees. Such selection processes are said to be more efficient and less prone to perceptual bias. Is the robot a better recruiter than the human?

First of all, the question is whether AI systems really are less prone to perceptual bias. It all depends on the approach and the implementation. And then the question is whether subjective perception is always the wrong way to go. Personally, I put a lot of stock in gut instinct. It is precisely the supposedly objective robot people want that could make the wrong decision. Incidentally, I also think photos on applications are not a bad thing; they provide valuable information.

How do companies best deal with reservations and prejudices regarding robot recruiting?

I am aware of application processes in Switzerland in which applicants are recorded by a camera and not informed about what happens to their personal data. Companies should create transparency, inform applicants whether people or machines are doing the evaluations and, above all, allow alternatives. A good applicant could be precisely someone who refuses to be evaluated by AI systems.

A Swiss insurance company relies on virtual assistants that support employees in assessing their own skills and in finding suitable training and job offers. Do such chatbots offer not only advantages but also risks?

Chatbots on websites were hyped around the turn of the millennium, and they are again today. Now they can also be found in instant messengers. Although they are increasingly equipped with AI, they hardly meet expectations. In some cases, networking and the use of machine learning and deep learning make chatbots harder to assess and less reliable. The risks are not only that chatbots misunderstand me and make wrong suggestions, but also that they log, classify and spy on me. With virtual assistants in the narrower sense, i.e. voice assistants such as Siri, Cortana and Alexa, there is the additional problem that some of them analyze my voice. The voice reveals a lot about a person.

Will workers soon have robots as colleagues? If so, in which functions?

Robots are already widespread as colleagues, for example as cooperation and collaboration robots in production and logistics. Robots are also used in therapy, where they support and accompany treatment. In the care sector, they are increasingly entering the market; there are first small series in Europe and China. Nurses need not be afraid yet; on the contrary, the robot can relieve them at work. In addition to hardware robots, software robots such as chatbots and voice assistants, as well as AI systems such as IBM Watson, play a role in a number of industries and professions.

How should one imagine dealing with these robots? Does this require rules or a kind of code of conduct?

First of all, there are binding standards for hardware robots; industrial and service robotics in particular are heavily regulated. As for how we deal with these robots beyond that, social robotics and machine ethics come into play. Social robotics explores how machines can be designed so that they appear trustworthy and reliable and do not scare us, so that they are social in more than one sense. Machine ethics implants moral rules into semi-autonomous and autonomous systems. A machine morality emerges that can be useful and important, especially in closed and semi-open environments. It can refer to humans, but also to animals.

There is already a care robot that has to decide, for example, whether to charge its own batteries or continue caring for the patient. Are robots able to always make the right decisions?

That is a work by Michael Anderson, Susan Leigh Anderson and Vincent Berenz: an adapted and extended Nao in a simulated elderly-care environment. I invited the two American machine ethicists to Berlin, where they presented their project. We are also currently building a system that adjusts its own morality. Of course, robots are not always able to make the right decisions; neither are people. The question of responsibility arises. Some decisions, for example about the life and death of people, have to be made by people. The robot cannot bear any responsibility.
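The trade-off between recharging and continuing care can be pictured as a simple duty-weighing step. The following Python snippet is only a minimal sketch of principle-based action selection, not the Anderson/Anderson/Berenz implementation; the duty names, weights and thresholds are invented for illustration.

```python
# Minimal sketch of principle-based action selection: the robot scores
# each possible action against weighted duties and picks the best one.
# Duties, weights, and thresholds are hypothetical, for illustration only.

def choose_action(battery_level, patient_needs_help):
    # Each duty maps an action to a satisfaction score in [-1, 1].
    duties = {
        # Duty to care: helping satisfies it, recharging neglects it.
        "beneficence": {
            "continue_care": 1.0 if patient_needs_help else 0.2,
            "recharge": -0.5 if patient_needs_help else 0.0,
        },
        # Duty to stay operational: recharging satisfies it when battery is low.
        "readiness": {
            "continue_care": -1.0 if battery_level < 0.1 else 0.0,
            "recharge": 1.0 if battery_level < 0.1 else -0.2,
        },
    }
    weights = {"beneficence": 1.0, "readiness": 2.0}  # readiness dominates when critical

    def score(action):
        return sum(weights[d] * duties[d][action] for d in duties)

    return max(["continue_care", "recharge"], key=score)
```

With a healthy battery the care duty wins; once the battery falls below the critical threshold, the weighted readiness duty overrides it and the robot recharges.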

Robots that work on the basis of AI can continue to develop on their own. Do we therefore have to fear immoral robots? In other words: can artificial intelligence be controlled?

Above all, we must fear immoral people who use and abuse robots for their own purposes. In the laboratory we develop moral and immoral machines in order to understand and research them. Some we deliberately do not release into the world; that is our responsibility as scientists. Artificial intelligence can be controlled in a variety of ways. For example, as we do in machine ethics, meta rules can be used. You can also impose bans. I do not see the danger of machines taking over the world.
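Such meta rules or bans can be thought of as a filter placed in front of whatever policy proposes actions, whether rule-based or learned. The sketch below is a hypothetical illustration of that idea, not a real framework; the rule set and action names are invented.

```python
# Sketch of a meta-rule layer: hard prohibitions veto proposed actions
# before execution, regardless of what the underlying policy suggests.
# The prohibited set and action names are invented for illustration.

PROHIBITED = {"deceive_user", "harm_human"}  # hard bans (meta rules)

def govern(proposed_actions, fallback="do_nothing"):
    """Return the first proposed action that is not prohibited."""
    for action in proposed_actions:
        if action not in PROHIBITED:
            return action
    return fallback  # every proposal was banned, fall back to a safe default
```

The point of the design is that the prohibition layer sits outside the policy: even if the policy misbehaves or drifts as it learns, the banned actions can never reach the actuators.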


Prof. Dr. Oliver Bendel studied at the University of Konstanz and received his doctorate from the University of St. Gallen. He is a lecturer in business informatics, business ethics and information ethics at the FHNW School of Business and researches information ethics, robot ethics and machine ethics.

Read also in the Kalaidos blog:

Innovative HR: Award for Coop, AXA and UBS
Artificial intelligence risks
Artificial colleagues - a new way of working
How AI supports companies (1/2)
How AI supports companies (2/2)
When robots become work colleagues
Where AI rules Switzerland

Sources and further information

Anderson, M., Anderson, S. L., & Berenz, V. (2017). A Value Driven Agent: Instantiation of a Case-Supported Principle-Based Behavior Paradigm.

Bendel, O. (2016). The Morals in the Machine: Contributions to Robot and Machine Ethics. Zurich: Kindle Edition.

Freilang, C. (2019). The first robot hotel lays off robots. Tages-Anzeiger.