Friday 18 April 2025 11:19 GMT

PRovoke EMEA: Tracking The Dark Arts


(MENAFN- PRovoke) At our PRovoke EMEA Summit in London last week, we worked with Edelman on a panel discussion about the proliferation of misinformation and disinformation and how companies can protect their brand integrity amid increasing reputational risks. Moderated by Edelman's senior director of crisis & risk Liza Ravenscroft, the conversation focused on crisis management, corporate resilience, and trust building.

Participants:

  • Liza Ravenscroft, senior director of crisis & risk, Edelman
  • Jack Stubbs, chief intelligence officer, Graphika
  • Oliver Hayes, EMEA head of counter disinformation, Edelman

The resulting conversation has been edited for brevity and clarity.

  • Misinformation (shared without intent to mislead), disinformation (deliberate lies), and malinformation (truth used maliciously out of context) all erode trust, but require different responses.

  • Brands are particularly vulnerable when narratives align with existing audience biases.

  • Brands need a disinformation-specific playbook and a plan for recovery.

  • Sophisticated network analysis helps brands understand who is driving the conversation, who is actually impacted, and who can help bridge divides.

  • Brands need to invest in trust-building before a crisis, map likely vulnerabilities, and prepare digital assets in advance.

    Liza Ravenscroft: I'm going to start with Ollie. When we talk about disinformation, misinformation, malinformation - what's the difference?


    Oliver Hayes:
    There are three terms we can think about along two key dimensions: the intent of the person who's sharing the information and the level of accuracy involved. Disinformation is inaccurate information shared deliberately - it's knowingly false and intended to deceive. Misinformation is also inaccurate, but it's spread by someone who doesn't realize it's wrong. Malinformation, on the other hand, might begin as something true, but it's taken out of context or distorted in a way that causes harm.

    All three mislead in different ways, and that's what makes them dangerous. According to this year's Edelman Trust Barometer, 70% of global respondents fear being misled by political, business, or media leaders - but perhaps even more troubling is that 25% say they're willing to spread disinformation to advance a cause they care about. That tells us this isn't just a fringe problem - it's embedded in how people think about persuasion and activism.

    LR: Let's look at an example. (Shows a fabricated video depicting an ICE raid at a Coca-Cola plant.)


    Jack Stubbs:
    This Coca-Cola example is upsettingly familiar. We see things like this all the time - fake content that would fall apart under basic scrutiny, but still spreads. Take a moment to look closely: the uniforms don't match real ICE attire, the truck is being loaded incorrectly, and the factory setting is off. Yet it went viral. Why? Because people don't stop to think. In today's attention-deficit environment, people respond based on gut feeling and emotional resonance, not logic. It's a perfect storm for disinformation.

    OH: The World Economic Forum ranks disinformation among the most severe short-term risks. That's because the drivers are everywhere - geopolitical tensions, social divisions, fast-evolving technologies, and weakened regulatory environments. Companies can get dragged into conflicts even when they try to stay neutral. The Coca-Cola example we discussed came from a left-leaning narrative, but people often assume disinformation is a right-wing tactic. In truth, it cuts across ideologies.

    AI has only intensified this. With open-source models and fewer guardrails, it's easier than ever to create and distribute convincing false content at scale - from deepfakes to bot-driven narratives. And efforts to regulate the tech are being rolled back. Even the term 'disinformation' itself is becoming politicized, making enforcement more difficult.

    LR: How bad is the landscape for companies right now?

    OH: Disinformation crises are fundamentally different from traditional reputation issues. If a classic crisis is like an accidental fire, disinformation is like arson. Someone has planned it, executed it, and keeps fueling it. That requires a different level of preparedness and response. These attacks can feel personal - clients have told us about seeing deepfake videos of themselves saying awful things. It creates an emotional charge that can cloud strategic thinking.

    What can you do? First, build trust while skies are clear - trusted brands are harder to take down. Second, monitor the online landscape continuously, not reactively. Third, prepare playbooks for different scenarios, including stakeholder-specific messaging. Fourth, avoid amplifying false narratives, but don't ignore them if they're catching fire. And lastly, have a post-crisis strategy to recover brand health. Think of it like a cancer patient in remission - the treatment worked, but rebuilding strength takes time.

    What's changed is the reach and speed of narrative distribution. In the past, a false rumor about a brand would circulate slowly - maybe through gossip or newsprint. Now, one viral TikTok or Instagram Reel can reach millions in minutes. Everyone is a potential broadcaster. And since we're narrative-driven by nature, we're drawn to stories that confirm what we already believe. This makes it incredibly difficult for brands to react in time unless they've already mapped their risk zones.

    LR: How do we make sense of this overwhelming noise?

    JS: Understanding online narratives means getting past the noise. At Graphika, we use network analysis - imagine a visual map of how people interact on social platforms. Every dot represents an account, and clusters show shared interests or ideologies. In a real case involving a US beverage brand, we found that most of the boycott noise came from two groups: critics who never liked the brand, and fringe conspiracy theorists. Core customers weren't driving the backlash. That changes the response strategy dramatically.

    We also look for 'bridge' communities - influencers and commentators who span multiple audiences. These are the people who can help counter misinformation credibly, because they're seen as trustworthy by both sides.
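The bridge-finding idea Stubbs describes can be approximated with very simple tooling. The sketch below is illustrative only - the account names, interaction edges, and cluster labels are all invented, and this is not Graphika's actual methodology. It flags accounts whose interaction partners span more than one audience cluster as candidate "bridge" voices:

```python
# A minimal sketch of the network-analysis idea (not Graphika's actual
# tooling): map who interacts with whom, label rough audience clusters,
# and flag accounts whose partners span more than one cluster.
from collections import defaultdict

# Hypothetical interaction edges (e.g. replies or reshares between accounts).
edges = [
    ("critic1", "critic2"), ("critic2", "critic3"),
    ("fan1", "fan2"), ("fan2", "fan3"),
    ("pundit", "critic1"), ("pundit", "fan1"),  # talks to both audiences
]

# Hypothetical cluster labels, as a community-detection step might assign.
cluster = {
    "critic1": "critics", "critic2": "critics", "critic3": "critics",
    "fan1": "customers", "fan2": "customers", "fan3": "customers",
    "pundit": "commentary",
}

# Collect, for each account, the clusters its interaction partners sit in.
partner_clusters = defaultdict(set)
for a, b in edges:
    partner_clusters[a].add(cluster[b])
    partner_clusters[b].add(cluster[a])

# "Bridge" candidates: accounts reaching two or more clusters besides their own.
bridges = sorted(n for n, cs in partner_clusters.items()
                 if len(cs - {cluster[n]}) > 1)
print(bridges)  # ['pundit']
```

In practice the cluster labels would come from a community-detection step over a much larger interaction graph, but the core question - who is trusted across audience boundaries - stays the same.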


    JS: Even if you're a small brand without big resources, you can do a lot. Monitor conversations. Track which voices are growing louder. Think about what your loyal customers are saying - and who might be starting to turn against you. In most crises we've seen, the warning signs were there. With just a little awareness, you can spot trouble brewing before it hits the mainstream.
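Stubbs' point about spotting trouble before it hits the mainstream can be put into practice with light tooling. As a minimal sketch (the narratives and daily counts below are invented), a small brand could flag any tracked narrative whose latest mention volume jumps well above its recent baseline:

```python
# A minimal early-warning sketch, assuming you can pull daily mention counts
# per narrative (e.g. from a social listening export). It flags narratives
# whose latest volume is far above their earlier baseline.
from statistics import mean

# Hypothetical daily mention counts over the past week, per narrative.
daily_mentions = {
    "boycott call":       [3, 4, 2, 5, 4, 18, 41],     # accelerating
    "old product rumour": [10, 9, 11, 10, 12, 11, 10], # steady background
}

def is_breaking_out(counts, factor=3.0):
    """True if the latest count exceeds `factor` x the earlier baseline mean.

    The last two days are excluded from the baseline so a breakout in
    progress doesn't inflate its own reference level.
    """
    baseline = mean(counts[:-2])
    return counts[-1] > factor * baseline

alerts = [name for name, c in daily_mentions.items() if is_breaking_out(c)]
print(alerts)  # ['boycott call']
```

A real setup would also weight who is posting (reach, past hostility), but even a crude volume threshold like this surfaces the "warning signs were there" cases Stubbs mentions.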

    OH: This problem is only going to get worse. Brands need to treat it as an ongoing strategic risk and plan accordingly.


