This Is How PR Pros Are Combatting Misinformation In An AI-Driven World
(MENAFN - PRovoke)
In today's digital landscape, the rapid advancement of AI has revolutionized how we access and consume information. While AI tools have the potential to enhance communication and streamline processes, they also play a significant role in the proliferation of misinformation.
In fact, according to a report by Enkrypt AI, the Chinese AI model DeepSeek-R1 is 11 times more likely to generate harmful content than its established competitors, raising significant concerns about bias, toxicity, and security vulnerabilities.
As algorithms curate content tailored to individual preferences, misleading narratives can spread faster and more widely than ever before. This phenomenon raises critical questions about the integrity of information and the responsibilities of both creators and consumers in an era where the line between fact and fiction is increasingly blurred.
It also means that, as communicators, we have an industry-wide responsibility to be more cautious even as AI usage picks up.
Generative AI adoption among communicators has approached an all-time high, with a recent Muck Rack report finding that three out of four professionals use the technology, nearly three times the figure from March 2023. Among Gen Z communicators, 73% use AI in daily tasks, and more than half of PR pros say they use a paid version of an AI service.
So what are PR professionals in the industry doing to safeguard themselves from misinformation in an AI-driven world? PRovoke Media spoke to industry leaders and pulled out three key things they are doing that you can adopt in your work.
1. Rigorous Verification Processes
The most important thing agencies are doing is ensuring that their verification processes are intact.
“We take a proactive and thoughtful approach to ensuring the accuracy and integrity of our communications. This starts with sourcing information exclusively from credible, authoritative outlets and cross-referencing facts across multiple reputable platforms,” said Patricia Malay, general manager at Bud Communications. “We also use advanced fact-checking tools and collaborate with industry experts to validate claims before sharing them,” she said.
Malay also emphasized that transparency is key. “We always cite our sources clearly and provide context to avoid any misinterpretation. Every piece of content goes through a rigorous internal review by subject matter experts, and we actively monitor feedback to quickly address and correct any inaccuracies. This meticulous process helps us maintain trust and credibility, ensuring our messaging is both reliable and impactful.”
Shouvik Prasanna Mukherjee, EVP of global creative innovation and chief creative officer, APAC, at Golin, added to her point, saying that, as custodian of the brands it represents, Golin adopts both proactive and reactive approaches to navigating any potential threat to its clients' reputation and business. “We use a PPT (people, process, technology) approach to leverage the power of information and point out misinformation,” he said.
“In dealing with secondary research or third-party data, we have stringent processes to qualify the credibility of the sources. Experienced analysts from our creative intelligence unit and sector experts from our practice groups work together to analyze and vet such data before considering it for any kind of usage,” he added.
Mukherjee also said that Golin uses a tool called Cyabra to identify and combat misinformation. “Cyabra helps us uncover fake profiles and harmful narratives against our clients, as well as identify disinformation bot networks and social media profiles with mala fide intentions,” he said, adding that while implementing the right processes and deploying the right tools are critical, it is engaging the right people for the job that makes all the difference.
“We believe in people-driven communications where AI helps augment the impact our teams create. Our emotional intelligence, the innate ability to think critically, comprehend and adapt to new situations are enhanced by the speed and scalability of AI tools in processing large, complex data sets and identifying patterns,” he said.
He added that his agency's capacity for creativity and empathy, and its ability to express complex emotions and tell nuanced stories, are only elevated by quicker and better packaging of content using AI tools.
“It's not a question of balance, rather of enabling our people with technology to deliver greater impact,” he said.
2. The Strategic Use Of AI Tools
While AI tools can sometimes generate harmful or misleading information, professionals such as Malay believe in using them strategically and sparingly.
“We see AI as a valuable tool for enhancing efficiency, but we use it strategically and sparingly, only for repetitive, low-stakes tasks like data analysis, scheduling, and initial drafting,” she explained.
“The core of our work, including strategy, creativity, and decision-making, remains firmly human-driven. Every piece of AI-generated content is carefully reviewed, refined, and validated by our team to ensure it aligns with our brand values and messaging goals,” she said.
Malay added that the agency also needs to stay ahead of the curve by keeping up with the latest AI research and developments, which allows it to adapt its practices and mitigate emerging risks. “By combining AI's efficiency with human expertise, we ensure our content is not only accurate but also innovative, ethical, and meaningful,” she said.
Adam Harper, founder and managing partner of Ashbury, agreed, likening the use of AI to the automotive industry. “There's a parallel with the auto industry here. Automation enabled the mass production of decent cars, but top brands like Rolls-Royce still make bespoke, incredibly high-quality cars by hand with the help of the latest technology. There will be companies who want inexpensive, fully automated communications services and will accept the imperfections that come with this approach. But many clients will continue to value the bespoke, top-quality model, and that's going to mean a human being in charge, using smart AI tools to deliver better outcomes.”
He added, however, that with mass-market AI models one sometimes gets “rubbish out because rubbish goes in”.
“So it's critical that any AI tools that communications professionals use to support messaging and content development are secure and only draw on authoritative data, especially public content from the organization itself,” he said.
3. Ongoing Team Training
Finally, experts note that what matters most is ensuring communication teams are trained on how to use AI safely and how to mitigate the risks of misinformation. According to Muck Rack's State of AI in PR 2025 study, despite the technology's widespread usage, only 38% of PR pros report having company guidelines for AI use, up from 21% last year.
However, more than half (55%) have nothing in place. This likely means many PR pros are working with AI without guidance on best practices or on how and when to disclose its use, according to Muck Rack.
“At W Communications, a lot of our client and company information is first-hand or even private information that we hold close to our hearts. Hence, a lot of our content is done without an AI tool. We have an internal understanding and SOP that AI tools can help in initial research but that it is not entirely copied to pass off as final drafts,” said Robin Chang, APAC general manager at W Communications.
He added that regular reinforcement and case studies highlighting instances of AI-generated content inaccuracies are essential.
“Internally, we adhere to a rigorous best-work policy that prioritises integrity and factual accuracy. This approach emphasises thorough information gathering, critical analysis, and precise articulation to ensure that all content meets the highest standards of credibility and reliability,” he said.
Meanwhile, Malay emphasised that Bud Communications is “deeply committed” to equipping its team with the skills and knowledge they need to navigate the challenges of AI-generated misinformation.
“This includes regular, forward-thinking training and knowledge sharing that focus on identifying AI-generated content, understanding its limitations, and recognising patterns of misinformation,” she said, adding that the agency places a strong emphasis on critical thinking, encouraging its team to question sources, verify claims, and consider the broader context of information rather than accept what is presented at face value.
“When we can, we run practical, scenario-based exercises so they get hands-on experience in responding to real-world challenges. We also actively seek out learnings from AI ethicists and researchers to stay informed about emerging trends and threats. This comprehensive approach ensures our team is not only prepared but also continually curious and willing to adapt to fast-moving times,” she said.