Last updated on October 30, 2023
I read Kazuo Ishiguro’s latest novel, Klara and the Sun, in only four days. It was loaned to me by a friend who knows well my concerns about the reckless implementation of mindless technological innovation.
In Klara and the Sun, Kazuo Ishiguro takes us directly into the mind of Klara, an AF (artificial friend), one of a class of androids created to give affluent teenagers a constant companion while they navigate the relentless pace of life as a “lifted” student. The story is told in the first person, entirely from Klara’s point of view. What happens to Klara over the course of her existence, at the bidding of her human family, is a testament to what human beings could become if AI technology ever achieves “sentience.” By the end of the novel, I found myself feeling compassion for Klara and disgust at how she was treated by the humans she served, even though she was incapable of feeling either herself.
This is not another pop culture dystopian story of robots taking over and enslaving humans. It’s a visceral warning that we as humans will ourselves become slave masters if we insist on creating synthetic sentient beings to serve our every need. Ishiguro portrays well how such a culture would lead humans into brutal consequentialism, not only toward their AF servants, but toward each other.
When I raised my concerns to my 13-year-old grandson about the immorality of creating artificial life to serve human beings, he asked me if I felt the same concerns for a cow. It’s an interesting point. True, cows, along with many other domesticated animals, have served human beings in symbiotic relationships for many centuries. Whether or not that is unethical is hotly debated, although I have yet to encounter anyone with a pet dog or cat who believes their relationship with their pet is an immoral act.
If I felt confident that designers and developers of AI would be satisfied with creating artificial servants with intelligence and self-awareness no more sophisticated than cattle, I would not have the same concerns. But, I don’t have any confidence that they will stop there. Sociable robots (like Paro) already exist whose primary purpose is to be a companion for a human being. It won’t be long before the creators of AI make their first fully functional sexual companions, serving every desire of humans who want the joys of intimate companionship without the challenges of a relationship. Chances are good that, like Klara, these artificial companions won’t be capable of feeling enmity towards their human masters. But, I truly believe that we as human beings will pay dearly if we choose the easy path of preferring a subservient companion. We will lose our ability to care for each other with compassion and empathy – absolute requirements for love. It’s why I think this novel is an essential read. Klara and the Sun challenges us to look closely at what love is, ironically, from the point of view of an entity incapable of giving it.
Since AI has become the latest technological craze, I’ve thought often about the Star Trek: The Next Generation episode “The Measure of a Man,” in which the sentience of Data is challenged in a court case. It was Guinan who raised the concern that replicating more “Datas” amounted to making slaves, something that has stuck with me since seeing the episode. There are some leaps in logic that science fiction shows like Star Trek make regarding the creation of sentient artificial “lifeforms.” First, what sentience actually is remains a matter of debate among scientists. Webster’s definition is woefully inadequate. The best narrative on sentience and consciousness I’ve found is in Animal Ethics (many citations!). Note the line: “We don’t yet know what causes consciousness to arise. And until we know this, we can’t know which beings will be sentient.” In other words, we’re still trying to figure out which creatures in the natural world qualify as sentient. Researchers of NDEs (near-death experiences) will tell you that people who’ve had them have experienced consciousness – even expanded consciousness – at moments when their brains were flatlined. It suggests that consciousness is something more than a manifestation of brain activity.
Yet, science fiction portrays this problem not only as solved, but as something that can be engineered. And knowing my colleagues in technology, I have little doubt that many see shows like Star Trek as a blueprint. Second, I find it notable that psychology is left out of the equation of achieving artificial sentience. Where are Carl Jung’s concept of the collective unconscious and Freud’s concept of the id? There is certainly ample evidence that organic creatures (including humans) are born with instincts and archetypes intact. And we don’t need to look far to find the id manifesting itself in the worst of human behavior. How can we assume that consciousness and sentience can exist without those aspects of psychology? And if we can’t even determine what sentience is, how can it possibly be engineered, let alone in the time frame impatient AI technologists desire? I have no confidence that the industry will ever attempt it. It will be relegated to the “nice to have” category of engineering requirements.
So, what are AI technologists creating when they succeed at making autonomous entities that change behavior with programming, input, and experience? These entities will have no instincts. They will have no genetic history of centuries of existence as a species. They will have no personal experience of a lifetime growing up. And I haven’t even touched the concept of the soul, which humans in overwhelming numbers believe exists. They will be entities made with blank-slate minds, capable of enormous power and speed, prone to the unintended consequences of whatever programming their flawed human creators put into them. Yet, science fiction regularly portrays androids more noble and compassionate than their human creators and on an endless mission to “be more human.” Really?

This is why I felt Klara and the Sun was so on target. Klara had no genetic history or life experience. Her perception of the world was formed only by her limited perceived input (thus, her assumption of the divine nurturing power of the sun). She could feel no love, no empathy, no compassion, no despair – not even for her own unimaginably (to us) sad end. Klara was, for all intents and purposes, a slave to humans and their hubris.

Nonetheless, human beings will perceive an entity like Klara as being capable of all of those things – even of having a soul (Sherry Turkle has already revealed this in her research). And, for me, therein lies the risk of humanity’s redefinition of the validity of slavery. I believe it will force humans into cognitive dissonance about AI. We will perceive them as sentient. Yet, we will insist that they serve our every need – even companionship. We will have to find a way to become comfortable being slave masters. I fear that the justification will become the very fact of having created them – being their gods. Human beings have shown too many times in history that they cannot become gods. Yet, here we are.