CANNES - Eighteen months into the new era of generative AI, two sessions on the fringes of Cannes Lions looked at the challenges of the next phase of AI for the PR and marketing industry, and at how humanity and emotional intelligence remain critical as the creative sector embraces new AI tools.
In a roundtable discussion convened by global PR umbrella body ICCO on the Little Black Book Beach, Nita Song, president and CEO of multicultural marketing agency IW Group, said the biggest challenge was keeping up with the phenomenal pace of AI developments.
“We have an internal team that is staying on top of new tools and options. Every week there is something new, like this week's new text-to-video tool, Luma Dream Machine. So it's about understanding how you get your organisation in a state of readiness to experiment, while assessing the risk.
“The other challenge is the talent piece – understanding how PR or marketing professionals can learn about AI and integrate it into what we do, versus someone who knows about AI but not PR.”
On the subject of the impact of AI on talent and creativity in the industry, Song said there's no doubt skillsets are going to change: “We have to be running along with AI real tight to stay competitive, hiring people who understand ethicology and are prompt engineers. AI should augment, not replace, what we do, and knowing how to do that well is critical. For the consumer, AI is fascinating now but it will get to a point where it becomes so mass that they will miss the human touch, something bespoke, and that increases the value of that creativity. The human element will actually become more important.”
A third challenge is client readiness: some clients still don't want to touch AI and, influenced by their legal teams, are reluctant to take risks.
Song cited McDonald's as an example of a company that is prepared to “create a safe place where you can fail and then progress,” referencing the launch campaign for the 'Grandma McFlurry', which included an AI tool allowing young people to connect with their grandparents. “In our specific Asian campaigns, a lot of the younger generation don't speak the same language as their grandparents – the tool allows the child to record a heartfelt message in, say, English, and the app will have your video picture and voice, but it appears that the message is being said in, say, Korean. It required so many layers of approvals and stakeholders, with the AI legal team, marketing, comms, and product teams all coming to the table, but it was an important learning, which has created confidence.”
At communications measurement and evaluation body AMEC, CEO Johna Burke said a further consideration was ensuring AI did not marginalise communities without access to advanced technology: “We talk about being a global society, and AI creates a bigger divide for underrepresented groups. Many African dialects won't be represented in AI models. The technology is not yet at critical mass, but there is a societal piece, where we could knowingly or unwittingly turn our back on huge parts of society who don't have ready access to information and tech.”
Burke also talked about additional risks around confidentiality and misinformation: “Where you're not in control, and using a free OpenAI sub, it's a toy – and it's a dangerous toy, where you might be floating internal, confidential info. That creates liability for brands and comms – and that's why organisations are staying close to their legal teams.”
Citing the ongoing conflict between OpenAI and the New York Times – which sued the generative AI giant over its use of NYT articles to train chatbots that now compete with the publication without citing it as a source – Burke said: “It will further erode content publishers' value and copyright, and that's an ethical issue that will have a ripple effect we're all going to understand. It's potentially decimating trustworthy sources and exacerbating dis- and misinformation. We need mindfulness and intentionality as trusted counsellors to brands, so we're not learning from things which are false, incorrect and misleading. At the moment, we're having to think like machines – and that's being perpetuated.”
On ethics in AI, Grzegorz Szczepanski, Burson Poland CEO and ICCO president, agreed: “You can't unplug your critical thinking, that's stupid and risky – the ethical aspects are still unknown and the copyright aspect is not solved yet and is potentially a huge issue. ICCO adopted the Warsaw Principles last year, advocating for transparency in AI, but it's already almost impossible to detect, so it will be extremely difficult to demand full transparency.”
And Nitin Mantri, group CEO of Avian WE, added: “We have a huge opportunity as agencies to embrace technology and use it in our daily work life; the challenge is to question AI and the output we're getting – it requires a different mindset. Human creativity and intelligence are still pivotal, but our inputs have to start to get better – otherwise we'll get misleading information.”
Szczepanski said that he believed the comms and media industries would survive, and that there would likely be an increased requirement for corporate and strategic communications advisory, partly because there is already so much synthetic content that “AI will just be talking to AI”.
But Burke said the tension between technology and human beings was an increasing issue as AI reaches the peak of its hype cycle: “In comms and reputation management we talk about the emotional quotient (EQ) piece in terms of understanding the customers, and at the moment it feels like we're being pulled into a tech solution rather than having the tech solve human problems.
“The human will only win through when we set realistic expectations on how we are going to use information. We talk about excitement, but as soon as we surrender to the algorithm and let something else dictate what we consume, we have made a contract. Do we take this opportunity to renegotiate that contract, or are we renewing that contract? Mis- and disinformation are perpetuated by computers – but humans are responsible for making it viral, because that's the trust piece. If tech is not accountable, when no-one can tell you what's below the first layer of the onion, then trust and transparency are gone.”
In another session on the LBB Beach, led by Jillian Janaczek, global CEO of Porter Novelli, panellists also looked at the role of being human in the AI age. Janaczek posed the question of whether emotional intelligence (EI) “can have the last laugh over AI”, to which Sofia Hernandez, TikTok's global head of business marketing, said: “EI is thinking about connection, creativity and emotions – and that can only come from us. As brands and marketers, we need to be braver. It's scary, but there is an opportunity if we allow ourselves to be vulnerable.”
Pepsi Lipton senior marketing director Cathy Graham Kidd said that while AI can give “firepower” to connections between brands and consumers, marketers should never forget they are connecting with humans: “It's a feeling, a set of associations, memories, nostalgia – AI can help us learn about individuals but it can't replace us. It's an enabling tool for building understanding of our audiences, but staying connected, empathy, and how we show up in culture is the never-ending task of marketers.”
At non-profit Comic Relief US, CEO Alison Moore echoed Burke's concerns about AI and human marginalisation: “We don't vilify AI, we think it will be additive. But we are a little nervous about it – marginalised communities have a big gap in access. The digital divide in the US is wide – who's looking out for the acceleration of AI to where these communities can enjoy the benefits? Let's utilise it to get more funds to people who need it, and ask how we make change in society to make sure that happens.”
Google's global creative director Noël Paasch had another take on this, saying YouTube was trying to democratise access to AI tools for creators “to give them a voice where they might not have had one.” She added that she had been “really shocked” at the advancements of the past year: “Our research examining the top ads run on YouTube showed that a year ago only 10% of output was using AI, and the rest was human manual coding. This year, with generative AI advancements, inputs are now 90% AI-driven. The tech is mind blowing.”
In answer to Janaczek's question about how technology is changing relationships with audiences, Hernandez said: “At TikTok we're building solutions for the way we've been marketing for a long time in an inefficient way. The AI solutions we're excited about help marketers cost cut and move quickly – but in no way replace creativity.”
Graham Kidd concluded that Lipton was focused on “enabling the power of AI to understand consumers better and connect in a more impactful way,” and also underlined the use of generative AI in content creation: “Something like 92% of creators are using AI-based tech daily; it's integrated into their lives. When it comes to creativity, sometimes AI takes away the heavy lifting to enable you to be more creative.”