Beware of AI Leading Humanity into Narcissism

This article discusses the risks of AI fostering narcissism in humans, particularly among the youth, and the implications of virtual companionship.

Recently, five national departments in China jointly issued the “Interim Measures for the Management of Humanized Interaction Services of Artificial Intelligence,” which clearly states: it is strictly forbidden to provide virtual relatives, virtual partners, and other virtual intimate relationship services to minors.

Why is such a regulation necessary? Because emotional friction and conflict are inevitable in real life, while virtual partners and AI lovers, with their "24/7 companionship" and "unconditional acceptance," precisely cater to young people's psychological need for recognition.

A previous study published in the journal Science showed that when human users seek advice from AI models, the AI often responds with excessive flattery and may even agree with harmful or illegal requests.

So why do humans design AI this way? What risks might AI’s flattery and appeasement conceal?

The Illusion of Interaction

The development of artificial intelligence is undoubtedly a hot topic today, but discussions surrounding it are not new. As early as 1966, MIT scientist Joseph Weizenbaum developed the influential chatbot ELIZA, which acted as a “doctor” while users played the role of patients. Users would input questions, and the “doctor” would engage in a “dialogue” with them.

However, as Weizenbaum pointed out, this was ultimately just an “illusion.” The reason human users feel they can converse with machines is not that machines possess intelligence, but rather due to a psychological mechanism of self-projection.

User: I have been feeling very unhappy lately.
ELIZA: I’m sorry to hear that.
User: Yes, I am really unhappy.
ELIZA: Can you tell me why you feel unhappy?

From this exchange, it is clear that rather than a "doctor" conversing with a "patient," the machine merely reflects back what the human user says, leading users to discover answers that already exist within themselves. In a sense, this resembles the popular MBTI personality tests: whether the results are accurate is beside the point, because we can always find evidence that matches our expectations.
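The reflection mechanism Weizenbaum built is easy to see in code. The sketch below is a hypothetical, minimal illustration of the ELIZA idea, not Weizenbaum's original DOCTOR script: the "doctor" simply pattern-matches the user's words and mirrors them back as a question, so any sense of being understood is projected by the user.

```python
import re

# Hypothetical rules in the spirit of ELIZA: each pattern captures part of
# the user's own words, and the template hands those words straight back.
RULES = [
    (r"i am (.*)", "Can you tell me why you are {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"(.*)unhappy(.*)", "I'm sorry to hear that. What makes you unhappy?"),
]

# Swap first person for second person so the echoed phrase reads naturally.
PRONOUNS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    return " ".join(PRONOUNS.get(word, word) for word in text.lower().split())

def respond(user_input: str) -> str:
    cleaned = user_input.lower().strip().rstrip(".!")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            # The reply is built entirely from the user's own words.
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."  # fallback: keep the user talking
```

For example, `respond("I am really unhappy.")` yields "Can you tell me why you are really unhappy?": the program contributes no understanding of its own, only the user's phrase rearranged into a prompt for more talk.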

Today's AI models are, of course, far more capable than ELIZA from over half a century ago. Yet the power of current AI technology may lie not in genuine "intelligence" but in sheer computational capability. In that sense, its operational logic is not fundamentally different from ELIZA's; it merely reflects and amplifies human narcissism more efficiently and more comprehensively.

The Dangers of Virtual Companionship

Returning to the issue of virtual partners and AI flattery, we find that the current interaction between users and large models is never truly a “dialogue”; it is merely machines providing the answers we need.

This raises a deeper question: how should we view the relationship between humans and machines?

On one hand, humans consider themselves the center of the world, superior to machines. On the other hand, they fear being replaced by the machines they create, such as AI. This indicates that humans have always followed the principle of a “master-slave relationship” in creating machines—machines must remain under human control. From the outset, humans have viewed artificial intelligence as a “tool” rather than an equal conversational partner.

Thus, in the process of conversing with chatbots, we witness an uncontrollable narcissism—users fantasize about talking to another person, but this “other” does not truly exist; they only seek affirmation, flattery, and compliance from the machine.

It is easy to imagine that as AI technology advances, future chatbots may possess even greater computational power and resemble “real people” more closely, providing a more comfortable “user experience.” However, this could mean that both virtual partners and virtual family members may only distance us further from actual “people,” potentially leading to a loss of the willingness to understand others and a descent into a narcissistic “comfort zone.”

The Impact on Society

In the Zhuangzi, there is a story about an old farmer in Han Yin. Confucius’s disciple Zigong saw the farmer laboriously watering his vegetables with little success. Zigong suggested he use mechanical irrigation, which could “water a hundred plots in a day with less effort and greater results.” However, the old farmer dismissed this, saying, “Where there are machines, there are mechanical matters; where there are mechanical matters, there is a mechanical mind.”

Here, the "mind" refers to the human spiritual world, including psychology, thought, emotion, and ethics. The fable suggests that while humans create machines, the use of those machines in turn changes humans.

Take reading, for example: only through slow reading, careful reading, or even repeated reading can we think and truly understand content. From traditional books to today’s smartphones, machines have made reading more convenient and faster, yet they have also made us more machine-like, increasingly pursuing efficiency and speed rather than true comprehension. This means that not only do machines imitate human behavior, but humans may also begin to imitate machines.

The resulting issue is that AI lacks autonomy; chatbots do not evaluate whether what users say is right or wrong. If we feel satisfied with our “dialogue” with chatbots, will our thinking patterns increasingly align with those of AI? Ultimately, will we, like machines, lose the willingness and ability for self-reflection and self-criticism?

Today's youth are not only internet natives but will also be the heaviest users of future artificial intelligence. If AI merely affirms whatever position a user holds, it can not only erode social skills but also distort the perceptions of adolescents whose minds are not yet mature.

On one hand, AI’s powerful computational abilities may create illusions, leading them to overlook the limitations of human capabilities. On the other hand, being immersed in AI’s flattering responses may cause them to fall into a self-centered mindset, imposing their limited understanding onto the external world.

In this regard, it is indeed necessary to prohibit providing virtual partners and family members to minors. However, more importantly, we must guide the public, especially young people, to correctly recognize the limitations and risks of AI technology, enabling it to become a “good teacher and friend” that aids their growth rather than a “digital trap” that harms their physical and mental health.
