Project Details
Description
We examined how public communicators—such as journalists, professional fact-checking organizations, and government and public health institutions—communicated with citizens about public health information, from fact-checking reports to debunking messages. After this preliminary exploration, we conducted two-wave population-based online survey experiments in Hong Kong, the US, and the Netherlands in February 2022 and June 2022 (total N = 2,769). Participants were local citizens of the three focal societies aged 18 to 65 years, recruited by Qualtrics, a survey vendor that monitored the project's data collection. Results from these studies addressed all seven research objectives.

Objective 1: To examine the extent to which people's false beliefs can be debunked by corrective messages. To address this objective, we drew on the two-wave survey experiments in Hong Kong, the US, and the Netherlands described above. We conceptualized debunking messages as a practice of misinformation intervention, treating effectiveness as a multidimensional construct encompassing attitudinal and behavioral components. Specifically, we examined (1) improvement in belief accuracy, which is crucial for informed decision-making and compliance with public health policies; (2) individuals' positive appraisal of the debunker; (3) their increased engagement with the messages, such as sharing and posting news, which amplifies the societal impact of debunkers; and (4) people's attitudinal extremity toward the government's current COVID-19 policy, defined as holding extreme positions on crucial policy issues and shielding oneself from opposing views. Together, these dimensions gauge the contribution of debunking as a misinformation intervention toward fostering an informed and deliberative public.
The results reveal how corrective messages can effectively combat misinformation in the social media context.

Objective 2: To compare the effectiveness of corrective messages across three issue domains. Because we assessed the effectiveness of corrective messages across three different issue domains, the results hold greater generalizability than most current studies, which focus on a single event. While the project centers on COVID-19, the focal domains—politics, health, and business—all pertain to the context of the pandemic. This design allowed comparisons across distinct issues while minimizing variation arising from unique event characteristics. The research investigated the debunking of misinformation about politics (e.g., allegations that the government's public health department excessively collected citizens' health data through vaccination), health technology (e.g., concerns about iPhones surreptitiously gathering sensitive personal information), and business (e.g., claims that Bill Gates and George Soros illicitly collected global health data during their COVID-19 aid programs). Results indicated that, compared with the business issue, debunking messages related to health technology received higher source appraisal (in the Hong Kong and US samples). Furthermore, debunking statements about government vaccination programs were more effective in reducing people's false beliefs than debunking messages about business (in the Netherlands sample).

Objective 3: To examine the effects of three message-related factors. To achieve this objective, the project builds on the literature on political and social-psychological communication.
It examines factors influencing the effectiveness of debunking messages at two levels: (1) the debunking message's features—"who" (source-level factors), "says what" (message-level factors), and "with whom" (recipient-level factors on social media)—which comprise several communication elements of citizens' information ecosystem; and (2) the audience's individual-level characteristics, the conditions that may facilitate or inhibit debunking's effectiveness. Contrary to most of the hypotheses, the intermediary source, message framing, and social information had only limited effects on debunking's effectiveness.

Objective 4: To examine how people process and act upon corrective messages when certain political attitudes and emotional states are evoked. Objective 4 extended Objective 3 by addressing the conditional effects of source-level and message-level factors. The most important finding was that political attitudes emerged as prominent factors that could impede the effectiveness of the debunking intervention. We focused on political cynicism and conspiracy beliefs, two political predispositions identified in prior studies as capable of altering media effects related to news and public health. Our experiments revealed that causal elaborations within debunking messages (i.e., identifying logical fallacies or providing alternative explanations) were more likely to backfire (i.e., increasing rather than decreasing false beliefs) than simple denials (i.e., stating that a claim is wrong without explanation), particularly among political cynics. Individuals with conspiracy beliefs who were exposed to debunking messages from peers developed more extreme views on government COVID-19 prevention policies.

Objective 5: To examine the attitudinal and behavioral consequences of belief accuracy.
As mentioned, the present project comprehensively examines several attitudinal and behavioral consequences of misinformation debunking, of which belief accuracy is one component. Results revealed the conditional effects of the debunking messages' features.

Objective 6: To help design effective corrective messages that debunk misinformation in social media contexts, with empirical evidence from Hong Kong, the U.S., and the Netherlands. To address this objective, our study stands as one of the pioneering endeavors to conduct a cross-national comparative investigation spanning North America (the U.S.), Europe (the Netherlands), and Asia (Hong Kong). This coverage encompasses Western and non-Western contexts and diverse political systems. The results indicate that debunking interventions function differently in different societies. For instance, in the U.S., compared with the other two societies, individuals with conspiratorial beliefs rated debunkers more positively when the debunking messages were shared by peers. However, such messages also intensified the extremity of their policy attitudes, thereby reducing the space for constructive deliberation with those holding differing viewpoints. Our study also found that debunking messages recommended by platform algorithms unexpectedly triggered a backfire effect among cynics: readers of these debunking posts tended to believe that large corporations or government entities collected excessive personal data and engaged in misconduct without users' consent.

Objective 7: To contribute to the literature on the socio-psychological aspects of political communication countering misinformation and to offer conceptual generalizations on creating an informed public in the era of information disorder. Overall, our study aimed to understand why misinformation interventions by public health institutions on social media might backfire and struggle to reach a wider audience.
Debunking can indeed backfire—reinforcing false beliefs and intensifying policy attitude extremity—particularly among political cynics and conspiracy believers. To achieve the goals of misinformation intervention, we recommend that authorities and power elites focus on cultivating public trust and dispelling conspiracy beliefs among the public. Establishing an informed and transparent communication ecosystem is pivotal for all stakeholders when addressing public health crises.
| Project number | 12602420 |
|---|---|
| Grant type | GRF |
| Status | Finished |
| Effective start/end date | 1/09/20 → 31/08/22 |
Research output
- 4 RGC 21 - Publication in refereed journal
- Social Media Misinformation Wars: How Message Features, Political Cynicism, and Conspiracy Beliefs Shape Government-Led Public Health Debunking Effectiveness. Zhang, X., Peng, T.-Q. & Zhu, Q., Mar 2026, In: Journalism & Mass Communication Quarterly, 103(1), p. 161-193. Peer-reviewed. 1 citation (Scopus).
- How Do Individual and Societal Factors Shape News Authentication? Comparing Misinformation Resilience Across Hong Kong, the Netherlands, and the United States. Zhu, Q., Peng, T.-Q. & Zhang, X., Apr 2026, In: International Journal of Press/Politics, 31(2), p. 497-519. Peer-reviewed, open access. 3 citations (Scopus).
- Health Journalists’ Social Media Sourcing During the Early Outbreak of the Public Health Emergency. Zhang, X. & Zhu, R., 2024, In: Journalism Practice, 18(7), p. 1660-1680. Peer-reviewed. 2 citations (Scopus).