LLM4News
Personalisation is one of the most promising opportunities offered by Generative AI. From tailoring recommendations to adapting news coverage to individual interests, large language models (LLMs) are opening new horizons for digital media. Yet, their limitations in reliability and neutrality cannot be ignored.
A recent BBC study highlighted how current AI assistants still struggle with accuracy and bias in their answers. Against this backdrop, our new project LLM4News seeks to explore how Retrieval-Augmented Generation (RAG) can offer a path towards more trustworthy, personalised news in the Flemish media landscape.

Quick facts
/ Generative AI enables personalised news experiences, but reliability remains a challenge.
/ A BBC study (2025) found AI assistants often provide incomplete or biased answers.
/ LLM4News investigates how RAG improves factual grounding and neutrality.
/ The project focuses on applications within the Flemish media ecosystem.
Exploring RAG for trustworthy personalised news
The LLM4News project is built on a central question: how can we personalise news content using AI without compromising on reliability and neutrality? While generative models are powerful in creating tailored experiences, their tendency to produce errors or biased phrasing makes them risky in the sensitive context of journalism.
The BBC’s recent research reinforces these concerns. Their study demonstrated that even leading AI assistants regularly provide incomplete, misleading, or skewed information. This creates a significant challenge for media organisations seeking to maintain public trust while experimenting with AI-driven personalisation.
Our project investigates Retrieval-Augmented Generation (RAG) as a potential solution. Unlike standard LLM responses that rely solely on pre-trained knowledge, RAG combines generative capabilities with retrieval from trusted, up-to-date sources. This means an LLM can draw on curated, verified information before producing personalised outputs, thereby reducing hallucinations and bias.
Additionally, the Flemish media context faces a structural hurdle: LLMs are predominantly trained on English data, and the available Dutch-language LLMs are trained mostly on data from the Netherlands. This is a critical barrier for Flemish content creation.
LLM4News aims to build prototypes of RAG-powered systems tailored for Flemish journalism.
The next steps
LLM4News represents a step forward in aligning AI innovation with the values of journalism. The project stands on the recognition that news personalisation should not come at the cost of truth or neutrality. By combining RAG techniques with the Flemish media context, the initiative aims to identify a sustainable path for AI-driven news delivery.
We will collaborate with media partners to evaluate RAG models on real-world content, building a practical evidence base for their potential. We seek not only to test the technical aspects of RAG, but also to understand how it can fit within editorial workflows, how it impacts audience trust, and how it could shape the future of personalised news delivery.
Contributors
Researchers
/
Pieter Verbeke, AI Researcher
Last updated on: 9/19/2025