Factforward: reflections on a future of disinformation

Stories about artificial intelligence, such as deepfakes, Google voice generators, and the impact of conversational chatbots like ChatGPT, currently occupy disinformation researchers. Seeing is no longer believing, and neither can people always trust their ears. AI tools are powerful instruments for producing deliberately misleading information: people can now easily produce and send out 10,000 messages a day instead of ten. The use of AI is a reminder that technology will continue to push human knowledge and research in new directions. In research publications and news articles, critics address the challenging and threatening effects AI use has on the distribution of rightful information (Giansiracusa & Marcus 2023; Helmus 2022). The uncertainty these systems bring makes futurology and ethics hot fields. Scholars cannot help but wonder about the probable, plural, righteous, and preferred futures of technology.

While futures can be frightening, scholars should be careful not to reproduce deterministic narratives about the challenges of technologies, since futures simultaneously hold potential. The production, distribution, and consumption of deliberately misleading information also invite imagining what comes next and alternative ways of relaying information. What potentials, then, arise from technology and the information disorder? How do people imagine changing times? With factforward, DDMAC embraces this scholarly dialogue on the future.

To learn about the workings of information distribution, it is crucial to engage in conversations with researchers, experts, and journalists. In our interviews, Bruce Mutsvairo and I were interested in understanding what the future of disinformation and misinformation looks like in Africa, and we pursued this conversation together with experts from the region. This is the backdrop against which the theoretical concept of factforward took shape.

Factforward is about hope and imaginaries. The concept refers to the belief in upcoming change and in a world wherein facts, rather than disinformation, are central to information flows. The ‘fact’ in factforward can be slippery and hard to pin down. With fact, however, we do not refer to the subjective or temporal experience of ‘truth’: whether people believe today what they will believe tomorrow. Rather, factforward is about the aspirations and activities that unfold from the state people are in. With the concept, we wish to explore how people experience the information disorder in everyday life and how that experience results in the forwarding and shaping of different customs and societies.

Our interviewees brought up practices that could curtail future misinforming narratives: the growth of fact-checking agencies, media literacy training, and Afrocentric AI monitoring systems (Mutsvairo et al. 2023, 7). Débora Lanzeni et al. (2023) write in An Anthropology of Futures and Technologies that “the political is entwined with science and technology and ‘telling stories of the future is always a social, material, and political practice’” (Waltorp et al. 2023, 3). Different outlooks on what a factual society should look like thus eventually bring me back to discussions on subjective understandings of the world. While people might envision a utopian world without information disorder, their imaginations and mundane uses of technology are conditioned by current political and social affairs. Imaginaries always occur in the context of the present. Scholars therefore have to engage critically with the present in order to understand future longings. Factforward serves as a conceptual framework for empirical findings that relate to these issues. Our article about the information disorder in Africa is a first step in that direction.

In conclusion, with factforward DDMAC opens up ways to theorize discussions about the contested futures in which AI and other technologies shape the information disorder. Will future AI not only increase the spread of false information, but also provide solutions to curb it?

Written by Luca Bruls

Giansiracusa, Noah, and Gary Marcus. 2023. “Big Tech Hasn’t Fixed AI’s Misinformation Problem—Yet.” Time, February 13, 2023. https://time.com/6255162/big-tech-ai-misinformation-trust/?utm_source=twitter&utm_medium=social&utm_campaign=editorial&utm_term=ideas_technology&linkId=201409604

Helmus, Todd. 2022. “Artificial Intelligence, Deepfakes, and Disinformation: A Primer.” Santa Monica: RAND Corporation. https://www.rand.org/pubs/perspectives/PEA1043-1.html

Lanzeni, Débora, Karen Waltorp, Sarah Pink, and Rachel Smith, eds. 2023. An Anthropology of Futures and Technologies. London and New York: Routledge.

Mutsvairo, Bruce, Luca Bruls, Mirjam de Bruijn, and Kristin Skare Orgeret. 2023. “‘Factforward’: Foretelling the Future of Africa’s Information Disorder.” Center for Information, Technology, & Public Life (CITAP), University of North Carolina at Chapel Hill. https://citap.pubpub.org/pub/lopohgbv
