
When capability hurts: Strategic adoption and disclosure of artificial intelligence in creator competition

Ziming Wang, Jingyan Li, Wenyi Zhang*, Xiaowei Guo

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Generative artificial intelligence (AI) reshapes digital content platforms by lowering the barriers creators face in producing content, while widespread algorithm aversion among consumers creates uncertainty about its impact on stakeholders. Whereas prior literature focuses on monopolistic decision-making, our framework shows how competition fundamentally reshapes disclosure incentives by linking consumer heterogeneity to price interaction among creators. To examine this tension, we develop a game-theoretic framework with two competing creators who differ in creativity and heterogeneous consumers who differ in both their perceived content quality and their ability to recognize AI adoption. In this setting, creators may adopt AI to enhance content quality and must decide whether to disclose its use. The analysis shows that equilibrium outcomes are jointly determined by AI capability, algorithm aversion, and the opacity penalty. Although AI adoption consistently reduces rival profits, greater AI capability can intensify price competition to the point that adoption becomes unprofitable for the weaker creator; to avoid this outcome, the weaker creator declines to use highly capable AI. When AI capability is relatively low, a moderate increase in the proportion of sophisticated consumers makes the weaker creator more likely to disclose AI adoption, whereas AI usage falls when sophisticated consumers account for a significant share of the market. These findings suggest that more capable AI does not necessarily benefit either creators or consumers, as its consequences hinge on consumer heterogeneity and competitive dynamics. Accordingly, AI policymakers should weigh transparency requirements against the risk of discouraging adoption, since mandatory disclosure may backfire in markets with strong algorithm aversion. © 2026 Elsevier Ltd.
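The paper's actual model is not reproduced on this page. Purely as an illustrative sketch of the kind of game the abstract describes, the toy code below enumerates pure-strategy equilibria of a stylized adopt/disclose game between two creators. All parameter names (capability, aversion, opacity_pen, sophisticated) and the payoff form (market share proportional to average perceived quality, with no explicit price competition) are our assumptions, not the authors' model.

```python
from itertools import product

def perceived_quality(q, adopt, disclose, capability, aversion,
                      opacity_pen, sophisticated):
    """Average perceived quality across naive and sophisticated consumers
    (illustrative assumption, not the paper's specification)."""
    if not adopt:
        return q
    q_ai = q + capability  # AI adoption raises raw content quality
    # Sophisticated consumers always recognize AI use and apply the aversion
    # discount; undisclosed use additionally incurs an opacity penalty.
    soph = q_ai * (1 - aversion) - (0 if disclose else opacity_pen)
    # Naive consumers discount only when AI use is disclosed to them.
    naive = q_ai * (1 - aversion) if disclose else q_ai
    return sophisticated * soph + (1 - sophisticated) * naive

def profits(strats, q=(1.0, 0.6), **params):
    """Toy payoffs: each creator's market share is proportional to its
    average perceived quality (q[0] > q[1] marks the stronger creator)."""
    v = [perceived_quality(q[i], *strats[i], **params) for i in (0, 1)]
    total = v[0] + v[1]
    return tuple(vi / total for vi in v)

def nash_equilibria(**params):
    """Enumerate pure-strategy Nash equilibria over (adopt, disclose)
    choices; disclosure is only meaningful if AI is adopted."""
    strategies = [(a, d) for a in (False, True)
                  for d in (False, True) if a or not d]
    eqs = []
    for s1, s2 in product(strategies, repeat=2):
        p1, p2 = profits((s1, s2), **params)
        if all(profits((alt, s2), **params)[0] <= p1 for alt in strategies) \
           and all(profits((s1, alt), **params)[1] <= p2 for alt in strategies):
            eqs.append(((s1, s2), (round(p1, 3), round(p2, 3))))
    return eqs

print(nash_equilibria(capability=0.5, aversion=0.3,
                      opacity_pen=0.2, sophisticated=0.4))
```

Because the toy payoff omits price interaction, it cannot reproduce the paper's headline result that adoption can backfire through intensified price competition; it only illustrates how the capability, aversion, opacity, and sophistication parameters jointly shape adopt/disclose incentives.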
Original language: English
Article number: 131406
Number of pages: 19
Journal: Expert Systems with Applications
Volume: 311
Online published: 29 Jan 2026
Publication status: Online published - 29 Jan 2026

Funding

Financial support from the National Natural Science Foundation of China (72501063, 723B2023) is gratefully acknowledged.

Research Keywords

  • Digital content platform
  • Pricing strategy
  • Artificial intelligence
  • Algorithm aversion
  • Game theory
