My 7th contribution to Financial Times Chinese (FT中文网). I argue that two types of AI virtual agents fail at communication in obvious ways:
1. One-to-many AI news readers: They fail to capture nuanced trust-building. Audiences’ para-social relationships are built through long-term human responsiveness, through sharing joy and sorrow, not through a human skin that chases a high-fidelity face and word-for-word accuracy. That’s not accuracy. That’s hypocrisy.
2. One-to-one virtual assistants: They act like submissive subordinates, “yes-men.” This risks a “reversed media equation”: people rehearse brutal commands on the machine, then treat those in lower power positions in the workplace the same way. That does nothing to maintain professionalism: values are built by nurturing, not conquering, the next generation.
My argument: Further investment should explore the lower end of the “Uncanny Valley”: developing AI agents that are LESS anthropomorphic. There is one exception: C-3PO in Star Wars.
No, we don’t need pretty faces; we need a droid with valued special skills in today’s world of polarization and disparity. We need someone who knows six million forms of communication across the universe to bridge our disputes.