
AMEC Summit: Put Human Intelligence Front And Centre Of Evaluation
(MENAFN - PRovoke)
The global communication evaluation community meets at the AMEC Summit in Vienna this week to consider “Reputation, Reliability and Results”, at a time when measurement is at a crisis point.
The case for measuring communication impact has never been stronger, and the tools never better. There's some great work going on – but evaluation remains partial, variable and in too many cases unconvincing, particularly in the light of the information revolution we face as artificial intelligence develops.
Evaluation guru Richard Bagnall has reported that only 37% of in-house PR teams say they actively measure their efforts, and in a brilliant blog on internal communications, Sharon O'Dea calls out “shallow numbers”, saying that the problem is that “these kinds of claims don't land with the people who hold the purse strings”.
And that seems to be the view of the AMEC community as well. Keynote addresses at the summit will seek to deal with problems such as “our industry often speaks idealistically about being audience-centric, but for some that can be too abstract to implement” and “communications measurement professionals need to do a better job of linking communications to hard business outcomes”.
At the heart of the problem is leadership. Too many organisations seem reluctant to prioritise evaluation and do the hard work of understanding the insight it offers for the business. And communication leaders also still seem reluctant to talk about this issue; the European Communicators Directors Conference in Belgium (also this week) is covering some fascinating topics, but the single session on evaluation is the penultimate one, like some embarrassing relative shovelled to the end of the event.
The emerging technical solutions seem compelling – I get at least one approach a month from companies offering to solve my analysis problem. But some seem to provide an avalanche of data, and a deficit of insight.
For example, the AMEC session this week on “Audience Compass: Navigating Cross-Channel Influence”, with Microsoft's senior director of communications strategy and insights Stephanie Cohen Glass and We VP of insight and analytics Vincent Jacobi, will argue that “data is siloed in online dashboards, agency Excel files, or image barometers”.
Ten out of 10 for honesty, but do the AI-driven solutions on offer actually meet this challenge? One exhibiting company argues: “advanced software infrastructure empowers PR teams”. Another: “leveraging AI-driven automation... streamline media monitoring, enhance data enrichment, and generate more insightful reports”.
These approaches raise both leadership and credibility issues. The leadership test is about the ability to identify the right metrics, regularly track them, report them and act on them. That takes discipline and practice. Credibility is about whether “enriched data” really generates insight without scepticism and interrogation.
The credibility test is what I would term the “Queen Elizabeth paradox”. We put a lot of effort into the evaluation of 'London Bridge', the operation around the funeral of the late Queen. We relied on a variety of AI tools to track sentiment, volume and source. It was striking that the machine tools we used assessed media sentiment as overwhelmingly negative, although we knew that around the world the coverage reflected outpourings of sympathy, affection, and respect for the United Kingdom. But the artificial intelligence simply reported all this as negative coverage, because it was associated with 'death'.
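To make the paradox concrete, here is a minimal sketch of how a lexicon-based sentiment scorer falls into this trap. The word lists and headline are invented for illustration; they are not the actual tools or data used for 'London Bridge':

```python
# Minimal sketch of naive lexicon-based sentiment scoring.
# Word lists and headline are invented for illustration only.

NEGATIVE_WORDS = {"death", "dies", "mourns", "mourning", "funeral", "grief", "loss"}
POSITIVE_WORDS = {"tribute", "respect", "affection", "beloved", "celebration"}

def lexicon_score(text: str) -> int:
    """Naive scoring: +1 per positive term, -1 per negative term."""
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

headline = "World mourns the death of a beloved Queen"
print(lexicon_score(headline))
# -1: 'mourns' and 'death' outweigh 'beloved', so a sympathetic headline
# is filed as negative. That is the misreading a human analyst catches.
```

A human reader sees sympathy; a keyword count sees 'death'.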
As a result, we developed a stronger, human-led evaluation model. The tools are improving, and some are very impressive, but I can still point to a recent campaign where three companies assessing the same issue produced significantly different evaluations of the media coverage.
In a different century, at Westminster Council we used to hand-analyse press cuttings on a -5 to +5 basis. It provided a net score which was reported to leadership and was credible. Similarly, at the Cabinet Office during the Covid pandemic, a brilliant evaluation team led by Matthew Walmsley produced a weekly report on “five things we've learnt, and five things we could do about it”. Actionable insight that influenced policy and delivery, and data delivered with huge and impressive human input.
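The arithmetic behind that Westminster net score was deliberately simple, which was part of its credibility. A rough sketch, with cuttings and scores invented for illustration:

```python
# Sketch of a hand-coded cuttings report: a human analyst scores each
# item from -5 (very negative) to +5 (very positive) and leadership
# sees one net figure. Items and scores are invented for illustration.

cuttings = [
    ("Council praised for park clean-up", +4),
    ("Residents question parking charges", -2),
    ("New library opening welcomed", +3),
    ("Complaints over bin collections", -3),
]

net_score = sum(score for _, score in cuttings)
average = net_score / len(cuttings)
print(f"Net score: {net_score:+d} across {len(cuttings)} cuttings (average {average:+.1f})")
# Net score: +2 across 4 cuttings (average +0.5)
```

The point is not the code but the discipline: a human judgement on every item, rolled up into one number leadership can track.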
Reliance on software and AI-driven automation is inevitable, but without human intelligence in using and assessing machine output, the result is less credible reporting.
Yuval Noah Harari, in his brilliant book 'Nexus: A Brief History of Information Networks from the Stone Age to AI', makes the point that artificial intelligence could put much of the communications business out of work: “Why bother searching and processing when I can just ask the Oracle”. And recent academic studies have concluded that use of AI leads to “knowledge workers perceiving decreased effort for cognitive activities associated with critical thinking”. If we fail to think and add value, the AI will take on the task.
This is an argument for talented communicators, evaluators and analysts to put in more effort to scrutinise, assess and recommend based on the data. We need more human leadership and less reliance on the machines, alongside the terrific evaluation models available, including AMEC's own and the GCS Evaluation Framework 2.0.
This matters because, as strategic advisor Julio Romo wrote recently in his Reputation Matters newsletter: “Reputation is now more than a performance metric; it is an asset class”. And Harari argues that the AI revolution is also an information revolution where wealth and power are measured by the size of your “data bank of information”.
AMEC Vienna 2025 will have great people, insights and sessions, from how LLMs are shaping your brand, to the revolution in broadcast monitoring and the PR of tomorrow, as well as the legendary Professor Jim Macnamara on public communication. The AMEC mission is right, summed up in Gates Foundation deputy director of measurement and insights David Cantor's session this week, which will explore how “great reputation campaigns don't just tell a story – they're built on data”.
Hopefully in Vienna we'll see this community of evaluation professionals step up to the leadership challenge and place human intelligence front and centre of the information revolution.
Alex Aiken has been a communications advisor to governments around the world, and is a former director of the UK Government Communication Service.
