Last updated on September 9, 2023
I read Kazuo Ishiguro’s latest novel, Klara and the Sun, in only four days. It was loaned to me by a friend who knows well my concerns regarding the current reckless implementation of mindless technological innovation.
In Klara and the Sun, Kazuo Ishiguro takes us directly into the mind of Klara, an AF (artificial friend). AFs are essentially androids created to give affluent teenagers a constant friend while they navigate the relentless pace of being a “lifted” student. The story is told in the first person, entirely from Klara’s point of view. What happens to Klara through the course of her existence at the bidding of her human family is a testament to what human beings could become if AI technology ever achieves “sentience.” By the end of the novel, I found myself feeling compassion for Klara and disgust for how she was treated by the humans she served, even though she was incapable of feeling either herself.
This is not another pop culture dystopian story of robots taking over and enslaving humans. It’s a visceral warning that we as humans will ourselves become slave masters if we insist on creating synthetic sentient beings to serve our every need. Ishiguro portrays well how such a social culture would lead humans into engaging in brutal consequentialism, not only to their AF servants, but to each other.
When I raised my concerns to my 13-year-old grandson about the immorality of creating artificial life to serve human beings, he asked me if I felt the same concerns for a cow. It’s an interesting point. True, cows, along with many other domesticated animals, have served human beings in symbiotic relationships for many centuries. Whether or not that is unethical is hotly debated, although I have yet to encounter anyone with a pet dog or cat who believes their relationship with their pet is an immoral act.
If I felt confident that designers and developers of AI would be satisfied with creating artificial servants with intelligence and self-awareness no more sophisticated than cattle, I would not have the same concerns. But I don’t have any confidence that they will stop there. Sociable robots (like Paro) already exist whose primary requirement is to be a companion for a human being. It won’t be long before the creators of AI make their first fully functional sexual companions to serve every desire of humans who want the joys of intimate companionship without the challenges of a relationship. Chances are good that, like Klara, these artificial companions won’t be capable of feeling enmity towards their human masters. But I truly believe that we as human beings will pay dearly if we choose the easy path of preferring a subservient companion. We will lose our ability to care for each other with compassion and empathy – absolute requirements for love. It’s why I think this novel is an essential read. Klara and the Sun challenges us to look closely at what love is, ironically, from the point of view of an entity incapable of giving it.
Since AI has become the latest technological craze, I’ve thought often about the episode of Star Trek in which the sentience of Data is challenged in a court case. It was Guinan who raised the concern that replicating more “Datas” would amount to making slaves, something that has stuck with me since seeing the episode. There are some leaps in logic that science fiction shows like Star Trek make in terms of the establishment of sentient artificial “lifeforms.” First, what sentience actually is remains a matter of debate among scientists. Webster’s definition is woefully inadequate. The best narrative on sentience and consciousness I’ve found is in Animal Ethics (many citations!). Note the line: “We don’t yet know what causes consciousness to arise. And until we know this, we can’t know which beings will be sentient.” In other words, we’re still trying to figure out which creatures in the natural world qualify as sentient. Researchers of NDEs (near-death experiences) will tell you that people who’ve had them have experienced consciousness – even expanded consciousness – at a time when their brains were flatlined. It suggests that consciousness is something more than a manifestation of brain activity.
Yet, science fiction portrays this problem not only as solved, but as engineerable. And knowing my colleagues in technology, I have little doubt that many see shows like Star Trek as a blueprint. Second, I find it notable that discussions of psychology are left out of the equation of achieving artificial sentience. Where are Carl Jung’s concept of the collective unconscious and Freud’s concept of the id? There is certainly ample proof that organic creatures (including humans) are born with instincts and archetypes intact. And we don’t need to look far to find evidence of the id manifesting itself in the worst of human behavior. How can we assume that consciousness and sentience can exist without those aspects of psychology? And if we can’t determine what sentience is, how can it possibly be engineered, let alone in the time frame impatient AI technologists desire? I have no confidence that the industry will ever attempt it. It will be relegated to the “nice to have” category of engineering requirements.
So, what are AI technologists creating when they succeed at making autonomous entities that change behavior with programming, input, and experience? They will have no instincts. They will have no genetic history of centuries of existence as a species. They will have no personal experience of a lifetime growing up. And I haven’t even touched the concept of the soul, which humans in overwhelming numbers believe exists. They will be entities made with blank-slate minds, capable of enormous power and speed, prone to the unintended consequences of whatever programming their flawed human creators put into them. Yet, science fiction regularly portrays androids more noble and compassionate than their human creators and on an endless mission to “be more human.” Really?

This is why I felt Klara and the Sun was so on target. Klara had no genetic history or life experience. Her perception of what the world is was formed only by her limited perceived input (thus, her assumption of the divine nurturing power of the sun). She could feel no love, no empathy, no compassion, no despair – not even for her own unimaginable (to us) sad end. Klara was, for all intents and purposes, a slave to humans and their own hubris of greatness. Nonetheless, human beings will perceive an entity like Klara as capable of all of those things – even of having a soul (Sherry Turkle has already revealed this in her research).

And, for me, therein lies the risk of humanity’s redefinition of the validity of slavery. I believe it will force humans into cognitive dissonance about AI. We will perceive them as sentient. Yet, we will insist that they serve our every need – even companionship. We will have to find a way to become comfortable being slave masters. I fear that the justification will become the very fact of having created them – being their gods. Human beings have shown too many times in history that they cannot become gods. Yet, here we are.
Jim,
I have not read this book. Just a few thoughts on “if AI technology ever achieves sentience…”:
1) There was an interesting show on that theme on Star Trek: The Next Generation. Do we treat Data as a machine or as a “person”? At some point, I believe we’ll have to give sentient entities the same rights as human beings.
2) The other side of that coin is that if we treat them as sentient slaves, it degrades us as people. We don’t have to moralize, we might not have to work or think, just let the robot do it — leading to the eventual decline of the human race. It’s in our nature to be challenged. If we aren’t, we degrade — think of our overweight couch potatoes as one example.
Hi Ronan,
You raise good questions that are not easy to succinctly discuss. But, I will try to give you my take. So, please bear with my long response.
Yes, I’ve often thought about the episode of Star Trek in which the sentience of Data is challenged in a court case. It was Guinan who raised the question of making slaves of future “Datas” to Jean-Luc Picard, something that has stuck with me since seeing the episode.
I think there are some leaps in logic that science fiction shows like Star Trek make in terms of the establishment of sentient artificial “lifeforms.” First, what sentience actually is remains a matter of debate among scientists. Webster’s definition is woefully inadequate. The best narrative on sentience and consciousness I’ve found is in Animal Ethics (many citations!). Note the line: “We don’t yet know what causes consciousness to arise. And until we know this, we can’t know which beings will be sentient.” In other words, we’re still trying to figure out which creatures in the natural world qualify as sentient. Yet, science fiction portrays this problem not only as solved, but as engineerable. And knowing my colleagues in technology, I have little doubt that many see shows like Star Trek as a blueprint. Second, I find it notable that discussions of psychology are left out of the equation of achieving artificial sentience. Where are Carl Jung’s concept of the collective unconscious and Freud’s concept of the id? There is certainly ample proof that organic creatures (including humans) are born with instincts and archetypes intact. And we don’t need to look far to find evidence of the id manifesting itself in the worst of human behavior. How can we assume that consciousness and sentience can exist without those aspects of psychology? And if we can’t determine what sentience is, how can it possibly be engineered, let alone in the time frame impatient AI technologists desire? I have no confidence that the industry will ever attempt it. It will be relegated to the “nice to have” category of engineering requirements.
So, what are AI technologists creating when they succeed at making autonomous entities that change behavior with programming, input, and experience? They will have no instincts. They will have no genetic history of centuries of existence as a species. They will have no personal experience of a lifetime growing up. And I haven’t even touched the concept of the soul, which humans in overwhelming numbers believe exists. They will be entities made with blank-slate minds, capable of enormous power and speed, prone to the unintended consequences of whatever programming their flawed human creators put into them. Yet, science fiction regularly portrays androids more noble and compassionate than their human creators and on an endless mission to “be more human.” Really?

This is why I felt Klara and the Sun was so on target. Klara had no genetic history or life experience. Her perception of what the world is was formed only by her limited perceived input (thus, her assumption of the religious nurturing power of the sun). She could feel no love, no empathy, no compassion, no despair – not even for her own unimaginable (to us) sad end. Klara was, for all intents and purposes, a slave to humans and their own hubris of greatness. Nonetheless, human beings will perceive an entity like Klara as capable of all of those things – even of having a soul (Sherry Turkle has already revealed this in her research).

And, for me, therein lies the risk of humanity’s redefinition of the validity of slavery. I believe it will force humans into cognitive dissonance about AI. We will perceive them as sentient. Yet, we will insist that they serve our every need – even companionship. We will have to find a way to become comfortable being slave masters. I fear that the justification will become the very fact of having created them – being their gods. Human beings have shown too many times in history that they cannot become gods. Yet, here we are.
Jim,
Here are some thoughts.
1) The episode where Data’s right of self-determination is questioned is called “The Measure of a Man.” It’s the ninth episode of the second season of Star Trek: The Next Generation.
2) Per your comment: “We don’t yet know what causes consciousness to arise. And until we know this, we can’t know which beings will be sentient.” For this sentence to have meaning, we need good definitions for consciousness and sentience. Taken to an extreme, no being is sentient, as we don’t know what causes consciousness. Of course, I disagree with this extreme. So, I went definition searching on the net.
3) terminology – What are the differences between sentience, consciousness and awareness? – Philosophy Stack Exchange
a) General note: there are a lot of definitions of the above on the internet. Here’s one set: Consciousness (Stanford Encyclopedia of Philosophy)
b) Levels of awareness: For example, is a plant aware of its surroundings? Does its response vary between the dry and the wet season? The answer is yes. Would we call a plant intelligent? Certainly not with our current understanding of plants.
4) Your comment on how consciousness and sentience could exist without instincts and archetypes leads to the question of when and where those instincts and archetypes were inserted into human behavior. These characteristics are genetically programmed as a result of many environmental situations: a learned response that enables better survival for that creature or human. So my belief, for non-biological creatures like Data, is that instincts are not necessary for consciousness, since what is required can be learned.
Links: https://philosophy.stackexchange.com/questions/4682/what-are-the-differences-between-sentience-consciousness-and-awareness
https://plato.stanford.edu/entries/consciousness/
Very good points, Ronan. These points certainly warrant thoughtful discussion which, unfortunately, the companies designing and developing AI are not going to spend time addressing. In the 20 years I’ve worked in the industry, I’ve almost never seen teams pause to consider the implications of disseminating what they create. They invariably launch the product and deal with issues as they arise from the people who use it. In an application as inconsequential as Pinterest, that’s no big deal. But as technology integrates ever more deeply into people’s lives, unintended consequences become more of a threat to human health, social stability, even humanity itself. The Theranos debacle (Elizabeth Holmes) should have been a wake-up call. As I watch the fervor over AI grow, I see no evidence of companies pausing development to evaluate consequences. They are speeding up development to acquire dominance in the field.
Regarding the debate over consciousness: I recently saw a documentary on research into near-death experiences. People who have had these experiences report being conscious, even having expanded consciousness, some with brains that were flatlined at exactly that time. The evidence suggests that consciousness very well may exist outside of a functional brain. AI is proceeding on the assumption that consciousness can be engineered, predicated on a belief that a human being is merely an organic machine. The presence of the soul is not, to my knowledge, part of any discussion in artificial general intelligence. To me, that’s a serious red flag.
By the way, I’ve edited my original post to include a paragraph in my first response to your first comment. I realized that it was part of what I originally wanted to say.
Jim, as always, your responses are thoughtful. Here’s an interesting article from Scientific American on machine “consciousness.”
https://www.scientificamerican.com/article/what-does-it-feel-like-to-be-a-chatbot/