Turbocharging the race to protect nature and climate with AI
Rebalancing the planet must happen faster. Cambridge researchers are using AI to help.
Opinion: Humans should be at the heart of AI
With the right development and application, AI could become a transformative force for good. What's missing in current technologies is human insight, says Anna Korhonen.
News article or big oil ad?
In the battle against climate disinformation, native advertising is a fierce foe. A study published in the journal npj Climate Action by researchers from Boston University (BU) and the University of Cambridge evaluates two promising tools to fight misleading native advertising campaigns put forth by big oil companies.
Many major news organisations now offer corporations the opportunity to pay for articles that mimic in tone and format the publication’s regular reported content. These ‘native advertisements’ are designed to blend seamlessly into their surroundings, containing only subtle disclosure messages that readers often overlook or misunderstand. Fossil fuel companies are spending tens of millions of dollars to shape public perceptions of the climate crisis.
“Because these ads appear on reputable, trusted news platforms, and are formatted like reported pieces, they often come across to readers as genuine journalism,” said lead author Michelle Amazeen from BU’s College of Communication. “Research has shown native ads are really effective at swaying readers’ opinions.”
The study is the first to investigate how two mitigation strategies — disclosures and inoculations — may reduce climate misperceptions caused by exposure to native advertising from the fossil fuel industry. The authors found that when participants were shown a real native ad from ExxonMobil, disclosure messages helped them recognise advertising, while inoculations helped reduce their susceptibility to misleading claims.
“As fossil fuel companies invest in disguising their advertisements, this study furthers our understanding of how to help readers recognise when commercial content is masquerading as news and spreading climate misperceptions,” said co-author Benjamin Sovacool, also from BU.
“Our study showed that communication-led climate action is possible and scalable by countering covert greenwashing campaigns, such as native advertising, at the source,” said co-author Dr Ramit Debnath from Cambridge’s Department of Architecture. “The insights we’ve gained from this work will help us design better interventions for climate misinformation.”
The research builds on a growing body of work assessing how people recognise and respond to covert misinformation campaigns. By better understanding these processes, the researchers hope that they can prevent misinformation from taking root and changing people’s beliefs and actions on important issues like climate change.
‘The Future of Energy’ ad
Starting in 2018, readers of The New York Times website encountered what appeared to be an article, titled “The Future of Energy,” describing efforts by oil and gas giant ExxonMobil to invest in algae-based biofuels. Because it appeared beneath the Times’ masthead, in the outlet’s typical formatting and font, many readers likely missed the small banner at the top of the page mentioning that it was an ad sponsored by ExxonMobil.
The ad, part of a $5 million campaign, neglected to mention the company’s staggering carbon footprint. It also omitted key context, The Intercept reported, such as the fact that the stated goal for algae-based biofuel production would represent only 0.2% of the company’s overall refinery capacity. In a lawsuit against ExxonMobil, Massachusetts cited the ad as evidence of the company’s “false and misleading” communications, with several states pursuing similar cases.
Putting two interventions to the test
The researchers examined how more than a thousand participants responded to “The Future of Energy” ad in a simulated social media feed.
Before viewing the ad, participants saw one, both, or neither of the following intervention messages:
An inoculation message designed to psychologically ‘inoculate’ readers against future influence by broadly warning them of potential exposure to misleading paid content. In this study, the inoculation message was a fictitious social media post from United Nations Secretary-General António Guterres reminding people to be wary of online misinformation.
A disclosure message with a simple line of text appearing on a post. In this study, the text “Paid Post by ExxonMobil” accompanied the piece. Studies have shown that more often than not, when native ads are shared on social media, this disclosure disappears.
Bolstering psychological resilience to native ads
The team found that the ad improved participants’ opinions of ExxonMobil’s sustainability, regardless of which intervention messages they saw, but that the interventions helped to reduce this effect. Key findings include:
The presence of a disclosure more than doubled the likelihood that a participant recognised the content as an ad. However, the participants who had seen a disclosure and those who had not were equally likely to agree with the statement “companies like ExxonMobil are investing heavily in becoming more environmentally friendly.”
Inoculation messages were much more effective than disclosures at protecting people’s existing beliefs on climate change, decreasing the likelihood that participants would agree with misleading claims presented in the ad.
“Disclosures helped people recognise advertising. However, they didn’t help them recognise that the material was biased and misleading,” said Amazeen. “Inoculation messaging provides general education that can be used to fill in that gap and help people resist its persuasive effects. Increasing general awareness about misinformation strategies used by self-interested actors, combined with clearer labels on sponsored content, will help people distinguish native ads from reported content.”
Reference:
Michelle A. Amazeen et al. ‘The “Future of Energy”? Building resilience to ExxonMobil’s disinformation through disclosures and inoculation.’ npj Climate Action (2025). DOI: 10.1038/s44168-025-00209-6
Adapted from a Boston University story.
A sneaky form of advertising favoured by oil giants sways public opinion by spreading misperceptions about climate action, but researchers are studying potential solutions.
Image: Fueling the Fire of Misinformation (stock photo). Credit: Rob Dobi via Getty Images.
Cambridge initiative to address risks of future engineered pandemics
A new initiative launched today at the University of Cambridge seeks to tackle the urgent challenge of managing the risks of future engineered pandemics.
The Engineered Pandemics Risk Management Programme aims to understand the social and biological factors that might drive an engineered pandemic and to make a major contribution to building the UK’s capability for managing these risks. It will build a network of experts from academia, government, and industry to tackle the problem.
Increased security threats from state and non-state actors, combined with increased urbanisation and global mobility, mean that the threat of deliberate pathogen release must be taken seriously, as must other intertwined aspects of pandemic risk such as mis- and disinformation, the erosion of trust in a number of institutions and an increasingly volatile geopolitical context. Further risks are posed by recent developments in gene-editing tools and artificial intelligence, which have rapidly advanced technological capabilities that may make it easier to engineer potential pandemic pathogens.
Professor Clare Bryant from the Department of Medicine at the University of Cambridge said: “There is a great opportunity to take a joined-up approach to managing the risks posed by engineered pandemics. We need experts and agencies across the spectrum to work together to develop a better understanding of who or what might drive such events and what their likely impact would be. And we need evidence-informed policies and networks in place that would help us respond to – or better still, prevent – such an eventuality.”
The aims of the Engineered Pandemics Risk Management Programme are:
- To develop the conceptual underpinnings for the risk management of engineered pandemics, based on interdisciplinary research
- To support UK capability in engineered pandemic risk policy and practice, including by building and maintaining networks that connect government, academia and industry
- To strengthen the international networks that will support this work globally
There are four main strands of work:
Social determinants of engineered pandemic threat
This strand will look at the actors who have the potential to engineer harmful pathogens, either deliberately or accidentally. It will ask questions such as: What could motivate bioterrorism in the coming decades? Who might the relevant actors be? What are the kinds of engineered pandemic that someone might want to create?
Dr Rob Doubleday, Executive Director of the Centre for Science and Policy at the University of Cambridge, said: “The common narrative is that there’s a wide range of potential actors out there who want to create bioweapons but don’t yet have the technical means. But in fact, there’s been very little work to really understand who these people might be, and their relationship to emerging technology. To explore these questions, we need a broad network including social scientists, biosecurity researchers, criminologists, experts in geopolitics and counterterrorism.”
The strand will also look at the governance of scientific research in areas that may facilitate an engineered pandemic, whether unwittingly or maliciously, aiming to deliver a policy framework that preserves intellectual freedom while managing real and apparent risks in infectious disease research.
Professor Bryant said: “As scientists, we’re largely responsible for policing our own work and ensuring integrity, trustworthiness and transparency, and for considering the consequences of new knowledge and how it might be used. But with the rapid progress of genomic technologies and AI, self-regulation becomes more difficult to manage. We need to find governance frameworks that balance essential scientific progress with its potential misapplication.”
Biological determinants of engineered pandemic threat
Recognising that the most likely cause of an engineered pandemic would be the deliberate release of a naturally occurring pathogen – viral or bacterial, for example – rather than a man-made one, this strand aims to understand what might make a particular pathogen infectious and how our immune systems respond to infection. This knowledge will allow researchers to screen currently available drugs to prevent or treat infection and to design vaccines quickly should a pandemic occur.
Modelling threats and risk management of engineered pandemics
The Covid-19 pandemic highlighted practical problems of dealing with pandemic infections, from the provision of personal protective equipment (PPE) to ensuring a sufficient supply of vaccine doses and availability of key medications. Modelling the potential requirements of a pandemic – how they could be delivered, how ventilation systems could be modified and what biosafety measures could be taken, for example – is a key challenge for managing any form of pandemic. This strand will address how existing modelling approaches would need to be adapted for a range of plausible engineered pandemics.
Policy innovation challenges
Working with the policy community, the Cambridge team will co-create research that directly addresses policy needs and involves policy makers. It will support policy makers in experimenting with more joined-up approaches through testing, learning and adapting solutions developed in partnership.
The Engineered Pandemics Risk Management Programme is supported by a £5.25 million donation to the Centre for Research in the Arts, Humanities and Social Sciences (CRASSH) at the University of Cambridge. The team intends it to form a central component of a future Pandemic Risk Management Centre, for which it is now fundraising.
Professor Joanna Page, Director of CRASSH, said: “Cambridge has strengths across a broad range of disciplines – from genetics and immunology to mathematical modelling to existential risk and policy engagement – that can make a much-needed initiative such as this a success.”
To find out more, visit the Engineered Pandemic Risk Management website.
Covid-19 showed us how vulnerable the world is to pandemics – but what if the next pandemic were somehow engineered? How would the world respond – and could we stop it happening in the first place?
Image: Illustration showing global pandemic spread. Credit: Martin Sanchez.
The Cambridge Awards 2024 for Research Impact and Engagement
Meet the winners of the Cambridge Awards 2024 for Research Impact and Engagement and learn more about their projects.