My recent FTChinese column, published in December 2025, hit record engagement (~10K on WeChat, ~7K on the FT site). Two uncomfortable questions I am throwing into the air:
1. Generative AI's agreeable bias ("sycophancy") hinders deliberation. When your AI assistant always says "yes," you get pseudo-satisfaction. Those viral prompts promising to "make ChatGPT your critical friend"? Self-deception. If AI can be prompted to "disagree" and act like a zhèng yǒu (諍友, a friend who points out your faults), that only proves it is a puppet performing a different script.
2. One may argue: "We need humans because they are more empathetic." Sorry, that is a dangerous myth. In high power-distance organizations, doesn't that same empathy make humans more manipulative, more subordinate to power, even complicit in abuse? When a boss says, "Only you can handle this task" - that is not about competence. It is control.
My view: AI products should track reverse metrics, i.e., how many users reduced their dependency and moved toward real-world connections. Did your mental-health AI lead to an actual phone call? Did your fitness coach lead to real exercise?
Yes, this is unrealistic. But if we only measure clicks and retention, we are building digital opioids, not tools for human flourishing. Some 1990s Chinese BBS communities understood this: strangers met offline, argued passionately, reconciled, and built lasting friendships. My generation received its earliest civic education there. That is the difference between living in the algorithm and living in people's hearts.