The Structure and Evolution of Online Rating Biases in the Sharing Economy
- Angela LU (Principal Investigator / Project Coordinator), Department of Information Systems
- Bruno Abrahao (Co-Investigator)
Description

A wave of sharing economy companies is profoundly changing the market landscape, disrupting traditional businesses along with the social fabric of exchange. This new business model relies on technology-mediated markets that feature peer-to-peer (P2P) exchange of access to goods and services, ranging from car rentals and hospitality to crowdsourced labor markets. A critical challenge to the growth of the sharing economy, however, is how to generate trust from online to offline transactions. Breaches of trust can lead to user dissatisfaction and loss of reputation, and even to disastrous consequences, as incidents at Uber and Didi illustrate. Generating trust is hence a crucial task for sharing economy platforms, as well as for other online markets that involve significant interpersonal risks.

Users in the sharing economy rely on reputational systems such as ratings to infer quality, reduce information uncertainty, and make exchange decisions. Ratings, however, are biased by various behavioral tendencies, such as the preference for homophily and power dependence. We systematically examine these biases and their relationships to social distance among heterogeneous user populations. The social distance bias makes ratings an unreliable signal of quality, and reducing this bias is the first step toward improving online trust through clear reputation signaling.

Second, the coevolution of reputational systems and trust implies long-term behavioral trends that are critical to investigate for business prediction and growth. Ratings evolve with user experience, through which users revise past expectations, as exchange networks are constantly updated by novel interactions. Understanding the evolutionary direction of rating bias is the second goal of our investigation, which has dynamic implications for user behavior and bias correction in the long run.

Our project examines the structure and evolution of rating biases by analyzing massive amounts of platform data in research design and analysis. Using a combination of big data, machine learning, and field experiments on leading sharing economy platforms, we systematically identify the structure and trends of biases, while attempting to correct these tendencies through system design. The project not only intends to make significant theoretical contributions by studying user behavior in an emerging technology market, but also provides strategic solutions through data science to counteract a major challenge to online trust.
Effective start/end date: 1/01/20 → …