Trust is as significant a factor for successful online interactions as it is in offline communities. Trust is an important factor in predicting the behaviour of an entity and a criterion for entity selection. Most trust studies have focused on trust establishment without identifying and considering the main components of a trust definition and the principles of trust. This paper explores trust in the offline and online worlds to extract important components of a trust definition and principles of trust. The resulting trust definition and principles form a basis that should be followed to establish trust online.
Authors: Aljazzaf, Z. M., Perry, M., & Capretz, M. A.
This article does not have an abstract.
Authors: Anderson, M., & Jiang, J.
Abuse of information entrusted to organizations can result in a variety of privacy violations and trust concerns for consumers. In the event of violations, a social media brand or organization renders an apology – a form of social account – to alleviate users' concerns and maintain user membership and engagement with the platform. To explore the link between an apology offered by a social media brand or organization and users' trust dynamics in the brand's services, we study how organizational integrity can contribute to reducing individuals' privacy concerns while increasing or repairing their trust. Drawing on the organizational behavioral integrity literature, our proposed research model suggests that the persuasiveness of an apology following a data breach affects users' trust or spillover trust through their perceptions of the degree of alignment between the words in the apology and the actions of the violating entity. Based on a survey of Facebook users, our findings show that the persuasiveness of an apology has a significant impact on users' perceptions of the alignment between the social media brand's (i.e. Facebook's) words and subsequent actions. These perceptions impact social media brand trust (i.e. users' trust in Facebook and allied services such as Instagram). We also find that, after a data breach incident, while the integrity of the social media organization partially mediates the relationship between a persuasive apology and users' trust, it fully mediates the relationship between the persuasive apology and the privacy concerns expressed by the users. However, users' privacy concerns do not contribute much to the repair of trust needed to maintain their membership.
Authors: Ayaburi, E. W., & Treku, D. N.
This paper examines how teens understand privacy in highly public networked environments like Facebook and Twitter. We describe teens' practices, their privacy strategies, and the structural conditions in which they are embedded, highlighting the ways in which privacy, as it plays out in everyday life, is related more to agency and the ability to control a social situation than to particular properties of information. Finally, we discuss the implications of teens' practices and strategies, revealing the importance of social norms as a regulatory force.
Authors: Boyd, D., & Marwick, A. E.
Survey experiments with nearly 7,000 Americans suggest that increasing the visibility of publishers is an ineffective, and perhaps even counterproductive, way to address misinformation on social media. Our findings underscore the importance of social media platforms and civil society organizations evaluating interventions experimentally rather than implementing them based on intuitive appeal.
Authors: Dias, N., Pennycook, G., & Rand, D. G.
The role of cultural factors in influencing the maintenance and/or enhancement of trust between two individuals acting as a dyad has received little research attention. We discuss how a selected culture-specific factor (face) plays an influencing, if not imperative, role in trust and trust building in a Chinese context. We suggest that the preservation and/or enhancement of face acts as an indispensable factor in maintaining and/or building trust in a Chinese context. We discuss the implications of our concept and offer suggestions for further research.
Authors: King, P. C., & Wei, Z.
The Internet has evolved into a ubiquitous and indispensable digital environment in which people communicate, seek information, and make decisions. Despite offering various benefits, online environments are also replete with smart, highly adaptive choice architectures designed primarily to maximize commercial interests, capture and sustain users' attention, monetize user data, and predict and influence future behavior. This online landscape holds multiple negative consequences for society, such as a decline in human autonomy, rising incivility in online conversation, the facilitation of political extremism, and the spread of disinformation. Benevolent choice architects working with regulators may curb the worst excesses of manipulative choice architectures, yet the strategic advantages, resources, and data remain with commercial players. One way to address some of this imbalance is with interventions that empower Internet users to gain some control over their digital environments, in part by boosting their information literacy and their cognitive resistance to manipulation. Our goal is to present a conceptual map of interventions that are based on insights from psychological science. We begin by systematically outlining how online and offline environments differ despite being increasingly inextricable. We then identify four major types of challenges that users encounter in online environments: persuasive and manipulative choice architectures, AI-assisted information architectures, false and misleading information, and distracting environments. Next, we turn to how psychological science can inform interventions to counteract these challenges of the digital world. After distinguishing among three types of behavioral and cognitive interventions (nudges, technocognition, and boosts), we focus on boosts, of which we identify two main groups: (a) those aimed at enhancing people's agency in their digital environments (e.g., self-nudging, deliberate ignorance) and (b) those aimed at boosting competencies of reasoning and resilience to manipulation (e.g., simple decision aids, inoculation). These cognitive tools are designed to foster the civility of online discourse and protect reason and human autonomy against manipulative choice architectures, attention-grabbing techniques, and the spread of false information.
Authors: Kozyreva, A., Lewandowsky, S., & Hertwig, R.
Disinformation campaigns such as those perpetrated by far-right groups in the United States seek to erode democratic social institutions. Looking to understand these phenomena, previous models of disinformation have emphasized identity-confirmation and misleading presentation of facts to explain why such disinformation is shared. A risk of these accounts, which conjure images of echo chambers and filter bubbles, is portraying people who accept disinformation as relatively passive recipients or conduits. Here we conduct a case study of tactics of disinformation to show how platform design and decentralized communication contribute to advancing the spread of disinformation even when that disinformation is continuously and actively challenged where it appears. Contrary to a view of disinformation flowing within homogeneous echo chambers, in our case study we observe substantial skepticism against disinformation narratives as they form. To examine how disinformation spreads amidst skepticism in this case, we employ a document-driven multi-site trace ethnography to analyze a contested rumor that crossed anonymous message boards, the conservative media ecosystem, and other platforms. We identify two important factors that filtered out skepticism and contested explanations, which facilitated the transformation of this rumor into a disinformation campaign: (1) the aggregation of information into evidence collages—image files that aggregate positive evidence—and (2) platform filtering—the decontextualization of information as these claims crossed platforms. Our findings provide an elucidation of “trading up the chain” dynamics explored by previous researchers and a counterpoint to the relatively mechanistic accounts of passive disinformation propagation that dominate the quantitative literature. We conclude with a discussion of how these factors relate to the communication power available to disparate groups at different times, as well as practical implications for inferring intent from social media traces and practical implications for the design of social media platforms.
Authors: Krafft, P. M., & Donovan, J.
This article does not have an abstract.
Authors: Limaye, R. J., Sauer, M., Ali, J., Bernstein, J., Wahl, B., Barnhill, A., & Labrique, A.
The purpose of this paper is to examine gender and age differences regarding various aspects of privacy, trust, and activity in one of the most popular Facebook activities: photo sharing.
Authors: Malik, A., Hiekkanen, K., & Nieminen, M.
Researchers have remarked and recoiled at the literature confusion regarding the meanings of trust and distrust. The problem involves both the proliferation of narrow intra-disciplinary research definitions of trust and the multiple meanings the word trust possesses in everyday use. To enable trust researchers to more easily compare empirical results, we define a cohesive set of conceptual and measurable constructs that captures the essence of trust and distrust definitions across several disciplines. This chapter defines disposition to trust (and -distrust) constructs from psychology and economics, institution-based trust (and -distrust) constructs from sociology, and trusting/distrusting beliefs, trusting/distrusting intentions, and trust/distrust-related behavior constructs from social psychology and other disciplines. Distrust concepts are defined as separate and opposite from trust concepts. We conclude by discussing the importance of viewing trust and distrust as separate, simultaneously operating concepts.
Authors: McKnight, D. H., & Chervany, N. L.
In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomena, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from that of tweets that spread news, because rumors tend to be questioned more than news by the Twitter community. This result shows that it is possible to detect rumors by using aggregate analysis of tweets.
Authors: Mendoza, M., Poblete, B., & Castillo, C.
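Mendoza et al.'s finding, that rumors attract proportionally more questioning tweets than confirmed news, suggests a simple aggregate signal. The sketch below is a minimal, hypothetical illustration of that idea; the question heuristic, threshold, and example data are assumptions, not the authors' method.

```python
import re

# Hypothetical heuristic for "questioning" tweets; not the authors' method.
QUESTION_MARKERS = re.compile(
    r"\?|\b(is this true|really|fake|hoax|unconfirmed)\b", re.IGNORECASE)

def question_ratio(tweets):
    """Fraction of tweets about one story that question or doubt it."""
    if not tweets:
        return 0.0
    return sum(bool(QUESTION_MARKERS.search(t)) for t in tweets) / len(tweets)

def flag_likely_rumors(stories, threshold=0.3):
    """Return ids of stories whose question ratio exceeds the threshold."""
    return [sid for sid, tweets in stories.items()
            if question_ratio(tweets) > threshold]

# Toy example: the confirmed story draws affirmations, the rumor draws doubt.
stories = {
    "tsunami-warning": ["Tsunami alert issued for the coast",
                        "Stay safe everyone, alert was confirmed"],
    "looters-rumor": ["Is this true? Looters hit the mall?",
                      "this sounds fake", "Looters at the mall!"],
}
print(flag_likely_rumors(stories))  # ['looters-rumor']
```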
Online Collaborative Software (OCS), most notably the platform Slack, has become embedded in the infrastructure of modern newsrooms, affording a flexible and innovative way for newsrooms to coordinate their workflows and communicate across geographical distance. More recently, several media outlets have experimented with how the platform can be opened up to their audiences in an effort to build community and provide transparency in their newsmaking. This article utilizes a case study of one newsroom's experimentation with Slack to explore the normative assumptions behind newsmakers' attempts to build consumer trust and the extent to which technological interventions like Slack can help achieve aims of greater trust. Analysis points towards several emergent trends, namely the limitations of pursuing trust through enhanced transparency, the ways in which relational trust can assist newsmakers attempting to grow their outlets, and how the use of external technological platforms structures relationships within virtual spaces of newsmaking.
Authors: Moran, R. E.
Consumers are turning to Facebook Groups to buy and sell with strangers in their local communities. This trend is counter-intuitive given Facebook's lack of conventional e-commerce features, such as sophisticated search engines and reputation systems. We interviewed 18 members of two Mom-to-Mom Facebook sale groups. Despite a lack of commerce tools, members perceived sale groups as an easy-to-use way to quickly and conveniently buy and sell. Most important to members was that the groups felt safe and trustworthy. Drawing on these insights, we contribute a novel framing, community commerce, which explains the trust mechanisms that enable transactions between strangers in some groups. Community commerce fosters trust through (a) exclusive membership to a closed group, (b) regulation and sanctioning of behavior at the admin, member, and group level, and (c) a shared group identity or perceived similarity (though, surprisingly, not through social bonding). We discuss how community commerce affords unique and sometimes superior trust assurances and propose design implications for platforms hoping to foster trust between members who buy, sell, or share amongst themselves.
Authors: Moser, C., Resnick, P., & Schoenebeck, S.
This study extends the nudge principle with media effects and credibility evaluation perspectives to examine whether the effectiveness of fact-check alerts in deterring news sharing on social media is moderated by news source, and whether this moderation is conditional upon users' skepticism of mainstream media. Results from a 2 (nudge: fact-check alert vs. no alert) × 2 (news source: legacy mainstream vs. unfamiliar non-mainstream) experiment (N = 929) controlling for individual issue involvement, online news involvement, and news sharing experience revealed significant main and interaction effects from both factors. News sharing likelihood was overall lower for non-mainstream news than mainstream news, but showed a greater decrease for mainstream news when nudged. No conditional moderation from media skepticism was found; instead, users' skepticism of mainstream media amplified the nudge effect only for news from the legacy mainstream source and not for news from the unfamiliar non-mainstream source. Theoretical and practical implications regarding the use of fact-checking and mainstream news sources in social media are discussed.
Authors: Nekmat, E.
This study extends brand relationship theory to the context of the microblogging platform Twitter. The authors investigate the impact of Twitter trust on users' intentions to continue using the platform and to "follow" brands that are hosted on Twitter (the trust transfer phenomenon). They also explore the role of perceived self-Twitter personality match in strengthening trust towards the Twitter brand. A cross-cultural American–Ukrainian sample allows us to identify potential culture-based differences in brand personality and brand trust concepts. The results show that the positive effect of trust in Twitter on its users' patronage intentions is robust across two cultures with diverse history and ideology. An important novel finding is the influence of trust in Twitter on patronage intentions towards the businesses hosted on Twitter. However, this relationship reaches statistical significance only in the Ukrainian sample, signaling potential differences in the trust transfer processes in different cultures. The study confirms the role of similarity in personality traits between Twitter users and the Twitter brand in engendering trust in Twitter. The salience of different personality traits in the "personality match – Twitter trust" link for different cultures suggests important implications for global marketers.
Authors: Pentina, I., Zhang, L., & Basmanova, O.
Whereas the bulk of research on social media has taken a granular approach, targeting specific behaviors on one site or, to a lesser extent, multiple sites, the current study aimed to holistically examine the social media landscape, exploring questions about who is drawn to popular social media sites, why they prefer each site, and the social consequences of site preference. Survey data were collected from 663 college students regarding their use of and preference for Facebook, Instagram, or Twitter. Results highlight the popularity of Instagram for college students, and women in particular. Personal characteristics such as gender and age, affordances of specific sites, and privacy concerns predicted social media preference. Expanding upon the privacy paradox, we found that participants who preferred Twitter were more likely to have a public (vs. private) profile, reported higher levels of self-disclosure, and indicated more bridging social capital. Participants who preferred Facebook reported lower levels of self-disclosure, but higher levels of bonding social capital, compared to those who preferred Instagram. These findings suggest that associations between privacy settings, disclosure, and social capital vary as a function of both user motivations and the affordances of specific social media sites.
Authors: Shane-Simpson, C., Manago, A., Gaggi, N., & Gillespie-Lynch, K.
Drawing on findings from qualitative interviews and photo elicitation, this article explores young people's experiences of breaches of trust with social media platforms and how comfort is re-established despite continual violations. It provides rich qualitative accounts of users' habitual relations with social media platforms. In particular, we seek to trace the process by which online affordances create conditions in which "sharing" is regarded as not only routine and benign but pleasurable; it is instead the withholding of data that is abnormalized. This process has significant implications for the ethics of data collection by problematizing a focus on "consent" to data collection by social media platforms. Active engagement with social media, we argue, is premised on a tentative, temporary, shaky trust that is repeatedly ruptured and repaired. We seek to understand the process by which violations of privacy and trust in social media platforms are remediated by their users and rendered ordinary again through everyday habits. We argue that the processes by which users become comfortable with social media platforms, through these routines, call for an urgent reimagining of data privacy beyond the limited terms of consent.
Authors: Southerton, C., & Taylor, E.
Trust plays an important role in helping online users collect reliable information, and has attracted increasing attention in recent years. We learn from the social sciences that, as the conceptual counterpart of trust, distrust could be as important as trust. However, little work exists on studying distrust in social media. What is the relationship between trust and distrust? Can we directly apply methodologies from the social sciences to study distrust in social media? In this paper, we design two computational tasks by leveraging data mining and machine learning techniques to enable the computational understanding of distrust with social media data. The first task is to predict distrust from only trust, and the second task is to predict trust with distrust. We conduct experiments on real-world social media data. The empirical results of the first task provide concrete evidence to answer the question, "is distrust the negation of trust?", while the results of the second task help us figure out how valuable distrust is in trust prediction.
Authors: Tang, J., Hu, X., & Liu, H.
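The second task Tang et al. describe, predicting trust when distrust information is available, can be framed as binary classification over user pairs. The sketch below is a hypothetical illustration using synthetic features and labels; the feature set and the generative assumptions are mine, not the paper's. Comparing this model against one trained without the distrust feature mirrors the paper's question of how valuable distrust is.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Assumed per-pair features: interaction count, rating similarity, and
# whether either user distrusts someone the other trusts (all synthetic).
interactions = rng.poisson(3, n)
similarity = rng.uniform(0, 1, n)
distrust_conflict = rng.integers(0, 2, n)

# Synthetic ground truth: trust is likelier with more interaction and
# similarity, and less likely under a distrust conflict.
logits = 0.4 * interactions + 2.0 * similarity - 1.5 * distrust_conflict - 2.0
trust = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([interactions, similarity, distrust_conflict])
X_tr, X_te, y_tr, y_te = train_test_split(X, trust, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```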
Social media poses a threat to public health by facilitating the spread of misinformation. At the same time, however, social media offers a promising avenue to stem the distribution of false claims, as evidenced by real-time corrections, crowdsourced fact-checking, and algorithmic tagging. Despite the growing attempts to correct misinformation on social media, there is still considerable ambiguity regarding the ability to effectively ameliorate the negative impact of false messages. To address this gap, the current study uses a meta-analysis to evaluate the relative impact of social media interventions designed to correct health-related misinformation (k = 24; N = 6,086). Additionally, the meta-analysis introduces theory-driven moderators that help delineate the effectiveness of social media interventions. The mean effect size of attempts to correct misinformation on social media was positive and significant (d = 0.40, 95% CI [0.25, 0.55], p = .0005), though publication bias could not be excluded. Interventions were more effective in cases where participants were involved with the health topic, as well as when misinformation was distributed by news organizations (vs. peers) and debunked by experts (vs. non-experts). The findings of this meta-analysis can be used not only to depict the current state of the literature but also to prescribe specific recommendations to better address the proliferation of health misinformation on social media.
Authors: Walter, N., Brooks, J. J., Saucier, C. J., & Suresh, S.
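For readers unfamiliar with how a pooled estimate such as the reported d = 0.40, 95% CI [0.25, 0.55] is obtained, the sketch below shows standard inverse-variance (fixed-effect) pooling of study-level Cohen's d values. The four study entries are invented for illustration only; they are not the 24 studies in Walter et al.'s meta-analysis, which would typically also consider a random-effects model.

```python
import math

# Invented (d, variance of d) pairs; placeholders, not the actual studies.
studies = [
    (0.55, 0.02), (0.31, 0.04), (0.48, 0.03), (0.20, 0.05),
]

# Inverse-variance weighting: more precise studies count for more.
weights = [1 / v for _, v in studies]
d_bar = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled estimate and a 95% confidence interval.
se = math.sqrt(1 / sum(weights))
lo, hi = d_bar - 1.96 * se, d_bar + 1.96 * se
print(f"pooled d = {d_bar:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```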
Contemporary commentators describe the current period as "an era of fake news" in which misinformation, generated intentionally or unintentionally, spreads rapidly. Although affecting all areas of life, it poses particular problems in the health arena, where it can delay or prevent effective care, in some cases threatening the lives of individuals. While examples of the rapid spread of misinformation date back to the earliest days of scientific medicine, the internet, by allowing instantaneous communication and powerful amplification, has brought about a quantum change. In democracies where ideas compete in the marketplace for attention, accurate scientific information, which may be difficult to comprehend and even dull, is easily crowded out by sensationalized news. In order to uncover the current evidence and better understand the mechanism of misinformation spread, we report a systematic review of the nature and potential drivers of health-related misinformation. We searched PubMed, Cochrane, Web of Science, Scopus and Google databases to identify relevant methodological and empirical articles published between 2012 and 2018. A total of 57 articles were included for full-text analysis. Overall, we observe an increasing trend in published articles on health-related misinformation and the role of social media in its propagation. The most extensively studied topics involving misinformation relate to vaccination, Ebola and Zika virus, although others, such as nutrition, cancer, fluoridation of water and smoking, also featured. Studies adopted theoretical frameworks from psychology and network science, while co-citation analysis revealed potential for greater collaboration across fields. Most studies employed content analysis, social network analysis or experiments, drawing on disparate disciplinary paradigms. Future research should examine the susceptibility of different sociodemographic groups to misinformation and understand the role of belief systems in the intention to spread misinformation. Further interdisciplinary research is also warranted to identify effective and tailored interventions to counter the spread of health-related misinformation online.
Authors: Wang, Y., McKee, M., Torbica, A., & Stuckler, D.
The widespread dissemination of misinformation in social media has recently received a lot of attention in academia. While the problem of misinformation in social media has been intensively studied, there are seemingly different definitions of the same problem and inconsistent results across studies. In this survey, we aim to consolidate the observations and investigate how an optimal method can be selected given specific conditions and contexts. To this end, we first introduce a definition of misinformation in social media and examine the difference between misinformation detection and classic supervised learning. Second, we describe the diffusion of misinformation and how spreaders propagate it in social networks. Third, we explain the characteristics of individual misinformation detection methods and provide commentary on their advantages and pitfalls. By reflecting on the applicability of different methods, we hope to enable the extensive research in this area to be conveniently reused in real-world applications and to open up potential directions for future studies.
Authors: Wu, L., Morstatter, F., Carley, K. M., & Liu, H.
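As a concrete instance of the content-based detection methods such surveys catalog, the sketch below trains a TF-IDF plus logistic-regression classifier on a toy set of labeled posts. This is a minimal sketch, not a method from Wu et al.; the tiny training set is a placeholder, and real systems also exploit network and user features, which is part of what separates misinformation detection from classic supervised learning as the survey discusses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled posts (1 = misinformation, 0 = reliable); placeholders only.
posts = [
    "Miracle cure doctors don't want you to know about",
    "Health ministry confirms vaccination schedule for fall",
    "Shocking! This one trick erases all debt overnight",
    "City council approves new budget after public hearing",
]
labels = [1, 0, 1, 0]

# Content-based pipeline: word/bigram TF-IDF features + linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(posts, labels)

# Score a new post; lexical overlap with the first example drives the call.
print(model.predict(["Secret cure the government is hiding from you"]))
```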
Twitter is a crucial platform for accessing breaking news and timely information. However, due to questionable provenance, uncontrollable broadcasting, and unstructured language in tweets, Twitter is hardly a trustworthy source of breaking news. In this paper, we propose a novel topic-focused trust model to assess the trustworthiness of users and tweets on Twitter. Unlike traditional graph-based trust ranking approaches in the literature, our method is scalable and can consider heterogeneous contextual properties to rate topic-focused tweets and users. We demonstrate the effectiveness of our topic-focused trustworthiness estimation method with extensive experiments using real Twitter data from Latin America.
Authors: Zhao, L., Hua, T., Lu, C.-T., & Chen, I.-R.
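The central idea in Zhao et al., rating trust per topic rather than globally by combining heterogeneous contextual properties of tweets, can be illustrated with a toy scorer. The signals, weights, and data below are assumptions made for illustration, not the model proposed in the paper.

```python
from collections import defaultdict

def tweet_score(has_source_link, retweets, author_verified):
    """Combine a few assumed contextual signals into a [0, 1] credibility score."""
    score = 0.2
    score += 0.3 * has_source_link
    score += 0.2 * author_verified
    score += min(retweets, 100) / 100 * 0.3  # cap virality's influence
    return score

def topic_trust(tweets):
    """Average tweet credibility per (user, topic) pair."""
    totals, counts = defaultdict(float), defaultdict(int)
    for user, topic, link, rts, verified in tweets:
        totals[(user, topic)] += tweet_score(link, rts, verified)
        counts[(user, topic)] += 1
    return {k: totals[k] / counts[k] for k in totals}

# Toy data: the same user can earn different trust on different topics.
tweets = [
    ("alice", "earthquake", True, 40, True),
    ("alice", "politics", False, 5, True),
    ("bob", "earthquake", False, 2, False),
]
print(topic_trust(tweets))
```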