Disinformation-related Threats that AI Poses

An interview with AI ethicist Dr. Benjamin Lange about the disinformation-related threats that AI poses, but also about the possible solutions it offers for mitigating those risks. Will we be swamped tomorrow by a flood of AI-generated disinformation?

Source: EUvsDisinfo, August 11, 2023

EUvsDisinfo: You are an ethicist working on AI and disinformation, among other things. What does that mean in practice? What does an ethicist working on these topics do?

Dr. Benjamin Lange: I have two hats as an ethicist. First, I have an academic hat. My research as a philosopher focuses on disinformation and AI. For example, I examine the various ethical risks of harm that fighting disinformation can pose. Of course, everyone agrees that disinformation, especially at the larger state-actor level, needs to be contained and mitigated, but from an ethical point of view there are better and worse ways to go about it.

In another research strand, I examine how we can best conceptualise what disinformation is, especially with an eye to operationalising our detection mechanisms. Here the philosophical tool of conceptual analysis can be quite useful and contribute to our understanding.

My second hat is more applied. In my ethics consulting role, I have helped organisations involved in debunking, containing, or fighting disinformation to do this in a way that is based on sound ethical principles and that can effectively help analysts work through grey-area cases where a lot of critical judgment is required. This might involve developing decision-making or deliberation frameworks and toolkits to work through tricky cases, assisting in the conceptualisation of disinformation and how it is embedded in detection chains, or developing a process that better enables us to assess the proportionality of the measures employed to fight disinformation.

EUvsDisinfo: AI will have all sorts of effects on our societies in the years to come. We are already experiencing some of these impacts – for example, AI text or image generators being used in creative work and elsewhere – but many of them, especially more fundamental ones, we can only guess at. Here at EUvsDisinfo, we are mostly interested in the interplay of AI and information manipulation. Could you give our readers an elevator pitch on the latest thinking on the issue? What should we be worried about? What should we be looking out for?

Dr. Benjamin Lange: I think that, as you mention, what’s been on everyone’s mind this year is the impact of generative AI – be it in the form of text-to-text or text-to-image generators. This is still an emerging field with many moving pieces, but a main worry here is certainly the catalysing effect that generative AI can have on the mass-spreading of mis- and disinformation, be it in image or text-based form.

So, this is certainly an area to focus on. In particular, I would encourage everyone to invest in better understanding the workings of some of those models, how, for example, they generate text or images, and how one can learn to spot generative AI outputs. There are clues that help us do that, such as syntactic patterns in text (i.e. the word orders that occur within sentences, and the sentence structures themselves) or certain elements of images that AI is not yet very good at creating (e.g. hands, logical proportions, relative proportions between elements, detailed backgrounds). This also means that proven fact-checking techniques, such as verifying source materials, become all the more important too.
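
As a toy illustration of what a syntactic clue might look like in practice, the sketch below measures how much sentence lengths vary within a text (sometimes called "burstiness"). Human writing often mixes long and short sentences more freely than generated text does. Everything here, including the metric and the sample text, is an illustrative assumption rather than a validated detector:

```python
import re
import statistics

def sentence_burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    Very uniform sentence lengths *can* be one weak hint of
    machine-generated text; on its own it proves nothing.
    """
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

sample = (
    "The summit ended without agreement. Delegates left the hall quietly. "
    "Analysts expected this outcome. Markets barely reacted to the news."
)
print(f"burstiness: {sentence_burstiness(sample):.2f}")  # low = very uniform
```

No single stylistic signal like this is reliable on its own; serious detection efforts combine many such features with trained classifiers and, as noted above, with traditional source verification.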

There is also good news: as our technical capabilities increase, so do our AI-related capabilities to detect disinformation, for example in deepfake detection – though there is some alarming evidence that current models struggle with AI-generated misinformation. Whether, on balance, we will soon be flooded with disinformation produced by generative AI remains to be seen at this stage.

EUvsDisinfo: In your view, what are some of the immediate ways to mitigate the threats that AI poses to our information environment? Also, do you see any moon-shot ideas that would possibly require more time and money, but could pay off greatly in the longer term?

Dr. Benjamin Lange: This is part of a larger debate on responsible AI governance, but we need a holistic response devised jointly by all relevant stakeholders, including policy, industry, research, and broader society. Insofar as we are worried about the threats that AI poses to our information environment in particular, we need to ensure that appropriate guardrails are in place for the use of these technologies, through both self-regulation and hard law. This might include requiring technical guardrails, for example digital watermarks, to be built into the Application Programming Interfaces (APIs) used in AI-driven software, in order to mitigate the risk of abuse and information manipulation.
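
To make the watermarking idea concrete, here is a deliberately simplified Python sketch that appends an invisible, key-dependent tag to generated text using zero-width characters. This is not how production systems watermark AI output (real schemes, such as statistical watermarks applied during token sampling, are far more robust to editing); the key and the 16-bit tag length are purely illustrative assumptions:

```python
import hashlib
import hmac

ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def watermark(text: str, key: bytes) -> str:
    """Append an invisible tag derived from an HMAC of the text."""
    digest = hmac.new(key, text.encode("utf-8"), hashlib.sha256).digest()
    bits = format(digest[0], "08b") + format(digest[1], "08b")  # 16 bits
    tag = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + tag

def verify(marked: str, key: bytes) -> bool:
    """Check that the trailing invisible tag matches the visible text."""
    visible = marked.rstrip(ZW0 + ZW1)
    return marked == watermark(visible, key)

key = b"hypothetical-provider-key"  # illustrative secret, not a real scheme
out = watermark("Generated caption about a summit meeting.", key)
print(verify(out, key))                               # True: text unchanged
print(verify(out.replace("summit", "secret"), key))   # False: text was altered
```

A fragile tag like this mainly demonstrates the provenance idea: anyone holding the key can check whether a piece of text left the API unchanged, but trivial edits destroy the mark, which is exactly why research focuses on more tamper-resistant schemes.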

Additionally, and probably on the moon-shot side, we could move away from purely reactive threat mitigation, which remains necessary, and significantly ramp up our education efforts for the general public on AI and its relation to our information consumption and creation habits, beginning in schools. Building resilience and critical acumen at a large scale, so that people can navigate our information environment in the age of AI, seems much more fruitful to me than relying entirely on reactive measures.

EUvsDisinfo: What are some of the possible positive use cases of AI in the work against information manipulation?

Dr. Benjamin Lange: I think the main positive use cases concern the detection of easy cases of information manipulation, though, as I mentioned, recent research indicates that generative AI seems to make things more difficult for these automated detection efforts. Examples include the detection of fake or bot accounts that spread certain stories at scale, or the automated detection of videos, texts, and images that have previously been flagged as containing mis- or disinformation and have only been slightly altered. For instance, text that a human analyst has previously flagged as disinformation can in some cases still be detected successfully after slight syntactic changes. Similar techniques exist for deepfakes and image-based disinformation, which makes up a large share of information manipulation cases – think of doctored infographics, photos, or satellite images. Variations and modifications of these can, to some degree, also be detected automatically.
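
As an illustration of how slightly altered re-uploads of already-flagged text can be caught automatically, the sketch below compares texts by the overlap of their character n-grams (Jaccard similarity), one simple technique among many; the sample sentences and the threshold are invented for demonstration:

```python
def char_ngrams(text: str, n: int = 5) -> set[str]:
    """Lower-cased character n-grams; small edits only disturb a few of them."""
    t = " ".join(text.lower().split())  # normalise case and whitespace
    return {t[i:i + n] for i in range(max(len(t) - n + 1, 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap of two n-gram sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# A claim previously flagged by a human analyst, and a lightly edited variant.
flagged = "Officials admit the vaccine secretly contains microchips."
variant = "Officials admitted the vaccine secretly contains microchips."

score = jaccard(char_ngrams(flagged), char_ngrams(variant))
THRESHOLD = 0.7  # illustrative cut-off; a real system would tune this on data
print(f"similarity: {score:.2f}")
print("route to analyst for review" if score >= THRESHOLD else "no match")
```

Production systems typically rely on more robust fingerprints (for example MinHash for text or perceptual hashes for images) so that a single flagged item can be matched efficiently against millions of candidates.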

Also see: The effects of AI on disinformation: "Interview with Dr Benjamin Lange, ethicist in Artificial Intelligence (AI)" (2023-08-11)