Forecasting potential misuses of language models for disinformation campaigns—and how to reduce risk


As generative language models improve, they open up new possibilities in fields as diverse as healthcare, law, education, and science. But, as with any new technology, it is worth considering how they can be misused. Against the backdrop of recurring online influence operations (covert or deceptive efforts to influence the opinions of a target audience), the paper asks:

How might language models change influence operations, and what steps can be taken to mitigate this threat?

Our work brought together different backgrounds and expertise: researchers with grounding in the tactics, techniques, and procedures of online disinformation campaigns, as well as machine learning experts in the field of generative artificial intelligence. This allowed us to base our analysis on trends in both domains.

We believe it is critical to analyze the threat of AI-enabled influence operations and outline steps that can be taken before language models are used for influence operations at scale. We hope our research will inform policymakers who are new to the AI or disinformation fields, and spur in-depth research into potential mitigation strategies for AI developers, policymakers, and disinformation researchers.

