
Two Truths and A Lie

‘Information pollution’, where facts and figures become a source of division in society, has a huge impact on behavior, social cohesion, and public trust. Information that is false or misleading but spread without intent to harm is misinformation; information spread deliberately, whether to harm or to benefit particular interest groups, is disinformation. Both types are often served to audiences alongside, and with the same weight as, the truth.

To successfully identify and flag mis- and disinformation, audiences must critically evaluate any data or knowledge they come across. However, given the sheer abundance of information available, especially online, it is much easier for people to fall for false information rather than sift through and objectively analyze it themselves. The ongoing COVID-19 pandemic, for example, provides an important case study on how these two types of information overlap or work concurrently, whether it is in how the disease is experienced, or the socially divisive way in which it has been tackled in some locations.

An experiment to evaluate behaviors in identifying online misinformation

Approach

UNDP Accelerator Lab Kenya has had an active interest in the issue of information pollution, starting out by first tackling it through the lens of the pandemic response in 2020 and thereafter zooming out to explore governance, peace and social cohesion as a whole. In 2021, the Lab partnered with Busara Center for Behavioral Economics on a behavioral science experiment to crowdsource harmful content online in collaboration with the Healthy Internet Project (HIP) incubated at TED. The HIP plug-in tool is an open-source web browser extension that allows users to flag content online anonymously, with the goal of curbing the spread of lies, abuse and fear mongering, and uplifting useful ideas on the internet.

We had three key learning questions for this experiment:

  1. What were the end-user experiences and behaviors of using the HIP tool to identify and report harmful content online?
  2. What was the value proposition of the HIP tool as an innovative approach for tackling online mis- and disinformation?
  3. What was the value proposition of the volunteer-driven crowdsourcing approach to tackle mis- and disinformation and how did that fit into existing ecosystems?

We conducted a live experimental demonstration of the HIP plug-in to understand potential users’ motivations, experiences, and practices in using the platform to flag misinformation.

Starting with a quantitative live experiment, we observed the natural behaviors (such as user experience, motivations, accuracy, and demographic trends) of 128 users on the platform, followed by a qualitative exercise with 44 of these users. Respondents were drawn from five counties (Kajiado, Kiambu, Machakos, Murang’a, and Nairobi) and represented diverse age groups, ethnic groups, and levels of education. The qualitative exercise, through in-depth interviews and a focus group discussion, sought to surface context-specific insights into user motivations. Participants were then classified as active, moderate, or low users based on how frequently they used the platform. The majority of our study participants (109 of the 128) were considered low users, while only three were considered active users.
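As an illustration, the active/moderate/low classification described above amounts to a simple threshold rule. The thresholds below are invented for this sketch; the study's actual cut-offs are not stated in this article:

```python
def classify_user(flag_count: int) -> str:
    """Bucket a participant by how often they flagged content.

    Thresholds are illustrative assumptions, not the study's actual cut-offs.
    """
    if flag_count >= 10:   # assumed minimum for an "active" user
        return "active"
    if flag_count >= 3:    # assumed minimum for a "moderate" user
        return "moderate"
    return "low"

# Hypothetical flag counts for four participants
print([classify_user(n) for n in (0, 1, 4, 12)])  # ['low', 'low', 'moderate', 'active']
```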

Key findings

The majority of participants deemed HIP an appropriate tool for stopping the spread of misinformation. However, connectivity challenges and infrequent encounters with harmful content were cited as reasons for low usage of the platform. Participants also mentioned the lack of a feedback mechanism for their flagged content, not having a computer on which to access the HIP tool, and rarely using the internet at all.

All respondents agreed that information should be verified before sharing, but lacked the education and awareness to do so effectively themselves. Most relied on personal judgment or intuition to decide whether the information they came across was harmful, or considered whether it could harm them or others in society. Interestingly, despite the tool’s intent being to stop the spread of misinformation, 75% of participants used it to flag worthwhile content instead. This was due to concerns that flagging negative content: 1) was more subjective; 2) might lead to harmful repercussions for those flagged; and 3) was personally risky, especially with regard to political content.

Naturally, anonymity became a concern, as users feared that they would be identified through platform use, thus increasing skepticism and aversion to using HIP despite assurances that all the flagged content would be anonymous.

As for user accuracy, we engaged the support of PesaCheck, Africa’s largest indigenous fact-checking organization, to validate a sample of the claims associated with flagging activity from the study. Content was often flagged as misinformation out of negative sentiment, such as dislike of a topic, rather than because it was actually misinforming. It was also difficult to determine what constituted misinformation among the flagged content, because users rarely specified exactly what was misinforming about the websites they were on.

Finally, only 40% of the 128 study participants flagged more than one item using the HIP plug-in, limiting the diversity of the data. Given these limitations, the results cannot be generalized. Even so, we can conclude that volunteer-driven identification of misinformation is limited so long as users’ perceptions of safety and accuracy remain negative.

Recommendations

In order to improve the use and functionality of platforms like HIP, the following recommendations may be considered:

  1. Ensure anonymity – Provide clearer assurances of anonymity to address the risks users perceive in reporting misinformation.
  2. Clearly define misinformation – Include a detailed description of misinformation to increase the accuracy of user reports.
  3. Remove the “worthwhile” flag – This would solidify the purpose of the plug-in. At the same time, offer more flagging options with a simplified definition for each, such as “cruelty, violence or intimidation” instead of “abuse or harassment”.
  4. Add a required “misinformation identification” field – This would enable easier fact-checking, since users would specify the content they regard as misinformation, for example identifying specific phrases or sentences rather than linking to a full article.
  5. Develop a phone version – This is necessary to improve the tool’s responsiveness, incentivize active users, enable social media flagging, and support translation into other languages. It is particularly pertinent in a country like Kenya, where the majority of the population access the internet via mobile devices.
  6. Provide a simple system to demonstrate how feedback is actioned – This would not only increase usage of the tool but also show users that their behavior makes a difference. It could be done by connecting fact-checkers to review the database of flagged content and feeding their findings back to users.
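Taken together, recommendations 1, 2, 4, and 6 describe changes to the shape of a flag report. A minimal sketch of such a report in Python, with invented field names and category labels that do not come from the HIP codebase, might look like this:

```python
from dataclasses import dataclass

# Simplified flag categories (recommendation 3); labels are illustrative
CATEGORIES = {"misinformation", "cruelty, violence or intimidation", "fear mongering"}

@dataclass
class FlagReport:
    url: str
    category: str
    flagged_excerpt: str            # required specific text (recommendation 4)
    status: str = "pending review"  # updated as fact-checkers act on it (recommendation 6)
    # Note: no user identifier is stored at all, preserving anonymity (recommendation 1)

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        if not self.flagged_excerpt.strip():
            raise ValueError("a specific excerpt is required for fact-checking")

report = FlagReport(
    url="https://example.com/article",
    category="misinformation",
    flagged_excerpt="the specific sentence a user considers misinforming",
)
print(report.status)  # pending review
```

Requiring a non-empty excerpt at construction time mirrors recommendation 4: fact-checkers receive the exact claim to verify rather than a bare link to a full article.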

Read the final experiment report here for more details on the experiment’s background and findings. Feel free to reach out to Busara Center [contact@busaracenter.org] or UNDP Accelerator Lab Kenya [acceleratorlab.ke@undp.org]. We look forward to connecting further with key players and interested parties on this topic.

This blog was originally posted here.

