13/09/2024

Automated AI reporting for greater efficiency in editorial offices

Authors: Marie Oelgemöller & Daria Kolesova

Many of our customers manage huge amounts of online content. Ensuring that the content is relevant and up to date can be a challenge. In the public sector in particular, the political situation and laws often change. These changes need to be communicated to the public quickly and in an accessible way. How can editorial offices ensure that thousands of items of online content are always relevant, up to date, understandable and appropriate for their target groups?

AI-based reporting can quickly and reliably examine large amounts of content for quality, tone and subject accuracy. For a client in the Defense & Public sector, we regularly analyse all online content against individual criteria and success metrics, using generative AI and natural language processing algorithms combined with usage data from the client’s web analytics tool. This allows us to identify anomalies and dependencies. Based on a detailed evaluation, our customers receive concrete suggestions for improvement. AI reporting significantly improves the quality of online content.

Strategy as a starting point

Initially, we focus on the content strategy. What are our customers’ communication goals? Which target groups do they want to reach? This information is used to define the scope and focus of the analyses: for example, which content and topics the analysis should centre on. Language models categorise newly published content into different topics, so our customers always have an overview of how topics are distributed.
Once the objectives are clear, IBM iX defines criteria and metrics which, supported by large language models, are used to assess the text quality and subject accuracy of the content.

AI in practice: metrics are the key to success

Defining target metrics is the cornerstone of AI reporting. Web analytics tools provide metrics such as page views, time on page, bounce rate and scroll depth, which indicate how successful content is.
Automated crawls quantify each written article: How many words does it have? How many images and videos are embedded? How often have readers commented on it?
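Such a crawl can be sketched with the Python standard library alone; the class below is a minimal illustration of counting words, images and videos in an article’s HTML, not the tooling actually used in the project:

```python
from html.parser import HTMLParser

class ArticleMetrics(HTMLParser):
    """Collects simple quantitative metrics from an article's HTML."""

    def __init__(self):
        super().__init__()
        self.images = 0
        self.videos = 0
        self._text_parts = []

    def handle_starttag(self, tag, attrs):
        # Count embedded media elements as they are encountered.
        if tag == "img":
            self.images += 1
        elif tag == "video":
            self.videos += 1

    def handle_data(self, data):
        # Accumulate visible text so we can count words afterwards.
        self._text_parts.append(data)

    @property
    def word_count(self):
        return len(" ".join(self._text_parts).split())

parser = ArticleMetrics()
parser.feed("<article><h1>Title</h1><p>Short example text.</p>"
            "<img src='a.jpg'></article>")
print(parser.word_count, parser.images, parser.videos)  # → 4 1 0
```

A real crawl would additionally fetch the pages and, for comment counts, query the comment system’s API, both of which are omitted here.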

At the same time, language models test content for qualitative criteria such as comprehensibility and successful appeal to target groups. The combination of quantitative and qualitative analysis gives a complete picture.
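How the two perspectives might be combined can be sketched as a weighted blend; the 0–1 normalisation, metric names and 50/50 weighting below are illustrative assumptions, not the scoring model used in the project:

```python
def composite_score(quantitative: dict, qualitative: dict,
                    qual_weight: float = 0.5) -> float:
    """Blend quantitative and qualitative sub-scores into one value.

    Both dicts map metric names to values already normalised to 0-1;
    qual_weight sets how much the language-model assessment counts.
    """
    quant = sum(quantitative.values()) / len(quantitative)
    qual = sum(qualitative.values()) / len(qualitative)
    return (1 - qual_weight) * quant + qual_weight * qual

score = composite_score(
    {"scroll_depth": 0.8, "time_on_page": 0.6},   # from web analytics
    {"comprehensibility": 0.9, "tonality": 0.7},  # from the language model
)
print(round(score, 2))  # → 0.75
```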

Automated evaluation of text quality

Good online content is easy to understand – no matter how complex the topic is. The automated analysis is based on the Hamburg comprehensibility model and checks its four main features: simplicity, structure and order, brevity and conciseness, and motivational aspects.
Numerous metrics are defined for each feature, evaluating aspects such as logical structure, meaningful titles and relevant content. In addition, the AI examines content for format variety, storytelling and interactive elements that make the content more appealing. AI reporting also assesses tonality and checks that texts are up to date and use correct punctuation and spelling. The AI tool “Kixa” from IBM iX (article in German) provides assistance in creating editorial content.
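Aggregating the per-feature results might look like the sketch below; the 0–10 scale and the equal weighting of the four Hamburg-model features are assumptions made for illustration:

```python
# The four main features of the Hamburg comprehensibility model.
HAMBURG_FEATURES = (
    "simplicity",
    "structure_and_order",
    "brevity_and_conciseness",
    "motivational_aspects",
)

def overall_quality(feature_scores: dict) -> float:
    """Average per-feature scores (assumed scale 0-10) into one value."""
    missing = [f for f in HAMBURG_FEATURES if f not in feature_scores]
    if missing:
        raise ValueError(f"missing feature scores: {missing}")
    return sum(feature_scores[f] for f in HAMBURG_FEATURES) / len(HAMBURG_FEATURES)

print(overall_quality({
    "simplicity": 8,
    "structure_and_order": 7,
    "brevity_and_conciseness": 9,
    "motivational_aspects": 6,
}))  # → 7.5
```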

Getting prompt results

Prompts are instructions that tell the language model what actions it should perform. A system prompt tailored to the needs of our customers is created for the automated evaluation. This system prompt serves as the initial instruction for the language models and is used in the background for all other prompts. It contains all the relevant information from the content strategy and editorial guidelines.

All additional prompts focus on the evaluation of content and are specifically geared to the evaluation criteria. Once the evaluation is complete, the AI assigns a numerical rating that reflects the extent to which the specified criteria are met. 
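The mechanics of pairing a system prompt with per-criterion prompts and extracting a numeric rating can be sketched as follows; the prompt wording, the 1–10 scale and the helper names are illustrative assumptions, and the actual model call is omitted:

```python
import re

# Illustrative system prompt; the real one contains the client's content
# strategy and editorial guidelines.
SYSTEM_PROMPT = (
    "You are a content-quality reviewer. Assess articles against the "
    "editorial guidelines. Reply with a single integer from 1 (criterion "
    "not met at all) to 10 (criterion fully met)."
)

def build_messages(criterion: str, article_text: str) -> list:
    """Assemble the chat messages sent to the language model for one criterion."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Criterion: {criterion}\n\nArticle:\n{article_text}"},
    ]

def parse_rating(model_reply: str) -> int:
    """Extract the numeric rating from the model's reply, clamped to 1-10."""
    match = re.search(r"\d+", model_reply)
    if match is None:
        raise ValueError(f"no rating found in reply: {model_reply!r}")
    return max(1, min(10, int(match.group())))

print(parse_rating("Rating: 8"))  # → 8
```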

The evaluation takes place through the IBM Consulting Advantage platform, where the Mixtral 8x7B models are accessed through the IBM watsonx.ai platform and GPT-4 models through Microsoft Azure in an EMEA deployment. No data is cached in either case.

AI reporting: automated and monitored by experts

Generative AI boosts efficiency, but expert knowledge remains indispensable. This is why IBM iX ensures that selected content is spot-checked by our content experts. For example, the most visited items are reviewed. The analysis focuses on editorial quality and assessing whether the target group was properly addressed.

Dashboard: all results at a glance

A customised dashboard presents all developments, updated monthly. The metrics are visualised in various graphs, and thanks to the clear structure, changes compared with the previous month are immediately recognisable. In addition, all data for the published content is recorded in a table, and developments, findings and recommendations are summarised in short texts that editorial teams can implement directly. As well as better content, AI reporting therefore also makes editorial offices more efficient.

Outlook

AI reporting offers enormous benefits for organisations that make large amounts of content available online: they get scalable qualitative analysis and evaluation at the touch of a button. IBM iX has successfully implemented this solution for a client in the defence sector and for a social security and pension provider. The result is not only a more effective approach to target groups but also a lasting improvement in communication performance. Even in rapidly changing circumstances, editorial offices can ensure that target groups get the content they need. Ultimately, AI reporting enables organisations to position themselves better in the digital sphere and achieve long-term success.

Contact

Are you interested in a solution like this? Let’s talk!

Please reach out to me if you have any questions!

Quirin Johannes Koch
Director Digital Transformation IBM iX
