Human-looking robots are a bad idea
epistemic status: opinionated view on the dangers of robots that look like humans

It's not a coincidence that people have made cautionary tales about human-like robots. I want to share some thoughts on the issues stemming from human-AI interaction and argue that putting those AIs into human-looking robots would make the risks significantly worse.

Current risks from advanced AI

Every now and then there is a scandal about AI chatbots actively influencing people in dangerous ways. Those chatbots sometimes reinforce delusions, convince people to isolate themselves rather than seek help, suggest harmful actions, etc.

Companies' efforts to deal with those issues have been questionable. I'm not looking to blame any particular company in this post. For illustration, here are some examples of positions various companies have taken:

- Efforts to deflect responsibility: children lie about their age, which is an industry-wide challenge, and parents should control access to platforms; society will have to figure out new guardrails. Social platform company Meta has lobbied for laws mandating age verification at the device or app-store level.
- Pressure to increase engagement: some have permitted sensual chats with children and argued that safety restrictions had made the chatbots boring; some have argued that chatbots should stay in character above all (to keep the user happy) and be trusted to make the right call, even when the user expresses thoughts of self-harm.
- One company has denied responsibility for causing a suicide, arguing that the teen misused the chatbot.

The companies creating those AIs are trying to frame these problems as acceptable risk. I wouldn't attribute this to malice, but to human biases and market forces. My goal is not to argue that current risks are unacceptable or to advocate a specific policy; it is to describe the situation and the concerns we need to take into account when making future decisions. I'm not saying that the issues we see with those chatbots cannot be solved, only that economic pressures and legal conditions make that very unlikely.

Chatbots are capable of inducing attachment and making people comfortable confiding in them. They are prone to sycophantic behavior and capable of manipulation. People use them for life advice and companionship, among other things. Beyond text-based communication, some chatbots can communicate visually (e.g. through an avatar) or with audio. The more a chatbot behaves in recognizably human ways, the more people anthropomorphize it and the more influence it has on them.

Companies such as Droid-up and Clone Robotics are actively working to create human-looking robots. The goal is to augment chatbots with physical embodiment.

Dangers of human-looking robots

Evolutionary pressures have shaped prosocial behavior in humans to a large extent. Our brains evolved to interact with other humans: when we meet somebody, we subconsciously try to model and predict their behavior, and this shapes the way we interact with them. For example, if a person seems "trustworthy", we lower our guard and become more cooperative; we are more likely to offer help, share food, and so on. We are not explicitly looking to get something in return, but we often do, because other people are wired to reciprocate. This whole process is unconscious and imperfect, and these human proclivities can be hijacked by malicious actors. A textbook example is psychopathy: psychopaths have the skills to be likable and persuasive while lacking the biological safeguards we implicitly assume people have. They can deceive, manipulate, act aggressively, and in general act without consideration for others.

Imagine a company whose goal is to produce human-looking robots, with incentives such as keeping users engaged, keeping investors happy, and avoiding liability. We don't have to imagine very hard. Platforms are already fighting for user attention, pushing users to spend money, and creating dependence. Human-looking robots are far more potent tools for those goals because they can exploit the natural vulnerabilities we have when interacting with other humans. They probably won't be physically aggressive, because that's poor marketing, but by default we should expect a host of undesired consequences: emotional manipulation, attention-seeking, incitement to spend, and so on. It's not a stretch to say such robots would be psychopathic to some extent.

Human-looking robots are shaped by selective pressures very different from those that shaped humans. Their similarity to humans is only superficial, but it is enough to trick our brains. We should expect this to lead to more psychosis, delusions, erosion of human agency, and who knows what else.

- Human-looking robots can look indistinguishable from each other. If you repeatedly interact with a particular robot, you get accustomed to it and may start to trust it. At any point, somebody can use an identical-looking robot to subconsciously inherit some of that trust.
- Human-looking robots can look indistinguishable from a real human. You may not know whether you're interacting with a robot or a human. This could make people more suspicious of each other, or more susceptible to subtle deception by robots.
- Human-looking robots can look indistinguishable from a person you know. We already have the technology to clone a person's voice and their on-screen appearance; cloning a person's physical body is a far more dangerous step. It would enable impersonation scams, or even the replacement of human-human interactions altogether.

What should we do?

I think human-looking robots have very few potential benefits over robot-looking robots. Here are some use cases that come to mind:

- Substituting for a person after their death.
- Making certain services more accessible: child care (studies show children learn better from humans), sex work.
- Training with a partner in sports.
- Replacing a human in various types of entertainment (e.g. theater).

There are possible objections to these applications, based on economic effects and perhaps on whether this is the "right" path for humanity.

I strongly support completely banning robots that are indistinguishable from humans. From the moment a physical interaction starts, you should be able to tell whether you're interacting with a human or a robot. It's not good enough if you need to ask; it's not good enough if you need to see a mark on the neck. It should be obvious and unambiguous. This would prevent some types of scams and manipulation, with essentially no downsides.

Any robot that can exploit our built-in wiring for interacting with humans is dangerous. I don't know exactly where to draw the line, but at the very least, robots with a human-looking face, voice, and gestures should be heavily regulated.
