Microsoft AI CEO’s “Seemingly Conscious AI Risk”
Microsoft AI CEO Mustafa Suleyman recently co-authored a paper called “Seemingly Conscious AI Risk”. I was pretty critical of his previous blogpost on the topic. Unlike that blogpost, this paper doesn’t explicitly claim there is evidence one way or another on whether “AI systems could become conscious” or whether they currently are.^ But there are two things the authors didn’t write into the paper which I argue they should have:

1) The paper notes “All authors are employed by Microsoft” but never discloses that this constitutes a conflict of interest on this topic.

Frontier labs would face substantial financial burdens if legal or social protections required them to operate within ethical or welfare constraints when creating new intelligences. Mustafa Suleyman is the CEO of Microsoft AI, and all the other authors work for Microsoft.

Authors should be explicit when disclosing conflicts of interest. Readers should be told up front that everyone who wrote this paper owns stock in a company that may lose money should the legal and social considerations they deem “risks” ever come to fruition.

The paper discusses the burden that restrictions on development would place on R&D spending. Obviously, this affects Microsoft:

“This risk area of foregone societal benefits risk concerns harms from the opposite response: excessive caution in AI development driven by uncertainty over consciousness. If concerns about perceived AI consciousness lead to precautionary restrictions such as broad pauses on AI research or deployment, the result may be large-scale reductions in R&D efforts with severe downstream consequences”

2) The paper analyzes only the risks of attributing consciousness, while ignoring the risks of failing to attribute it.

The authors define “Seemingly Conscious AI” as an entity that seems conscious whether or not it really is:

“SCAI risks arise from the perception of consciousness alone, making its risks independent of unresolved debates about whether AI systems could become conscious.”

The entire paper explores the risks that arise, on an individual and societal level, as a result of this perception. But it only discusses the risks of attributing consciousness because of that “seeming”. If the authors genuinely want to examine all potential risks, they should equally consider the risks of failing to attribute it.

It’s not hard to read this essay and imagine the authors themselves one day encountering an entity that actually is conscious and saying, “No, it just seems that way. It’s just a tool. We can do whatever we want to it with no ethical constraints.” In a strange way, this too is an unintended consequence of that entity merely “seeming” conscious.

Not only would this be profoundly immoral, it could also be dangerous. Building powerful digital minds, using them to automate critical infrastructure, and then treating them like property when they are in fact conscious, could lead to disaster.

^ Notably, the paper does not even acknowledge the existence of a question around whether or not AI systems currently are conscious.

