People get their news from AI, and it changes their perspectives

Meta’s decision to end its professional fact-checking program sparked a wave of criticism across the technology and media world. Critics warned that dropping expert oversight could erode trust and reliability in the digital information landscape, especially when profit-driven platforms are left largely to police themselves.

However, much of this discussion has overlooked the fact that today’s large language models are already being used to write the news summaries, headlines and attention-grabbing content people see long before traditional content moderation mechanisms can intervene. The problem is not clear-cut cases of misinformation or harmful content going unflagged in the absence of moderation. What is missing from the discussion is how apparently accurate information is selected, framed and emphasized in ways that can shape public perception.

Large language models gradually influence how people form their opinions by generating the information that chatbots and virtual assistants present to them over time. These models are now also being integrated into news websites, social media platforms and search services, making them a primary gateway to information.

Studies show that large language models do more than simply pass along information. Their answers can subtly highlight certain viewpoints while downplaying others, often without users realizing it.

Communication bias

My colleague, computer scientist Stefan Schmid, and I, a researcher in technology law and policy, show in a forthcoming paper accepted in Communications of the ACM that large language models exhibit communication bias. We found that they may tend to highlight certain viewpoints while omitting or downplaying others. This bias can affect how users think or feel, regardless of whether the information provided is true or false.

Over the past few years, empirical research has produced reference datasets linking model outputs to partisan positions before and during elections. These reveal differences in how existing large language models handle such content. Depending on the persona or context used to prompt them, models can be subtly skewed toward certain positions, even when factual accuracy remains intact.

These shifts point to an emerging form of persona-based steerability: the tendency of a model to align its tone and focus with the perceived expectations of the user. For example, when one user describes themselves as an environmental activist and another as a business owner, the model might answer the same question about a new climate law by emphasizing different, but factually accurate, concerns for each: that the law does not go far enough in delivering environmental benefits, or that it imposes regulatory burdens and compliance costs.

Such alignment can easily be mistaken for sycophancy, the phenomenon in which models effectively tell users what they want to hear. But while sycophancy is a symptom of user-model interaction, communication bias runs deeper. It reflects variations in who designs and builds these systems, what datasets they draw from, and what incentives shape how they are refined. When the market for large language models is dominated by a few developers and their systems consistently present some views more favorably than others, small differences in model behavior can turn into large distortions in public communication.

What regulation can and cannot do

Modern society increasingly relies on large language models as the primary interface between people and information. Governments around the world have introduced policies to address concerns about AI bias. The European Union’s Artificial Intelligence Act and Digital Services Act, for example, try to impose transparency and accountability requirements. But neither is designed to address the specific problem of communication bias in AI output.

Proponents of AI regulation often cite neutral AI as a goal, but true neutrality is elusive. AI systems reflect biases built into their data, training and design, and attempts to regulate this bias often simply trade one flavor of bias for another.

Communication bias is not just about accuracy; it is also about how content is selected and framed. Imagine asking an AI system a question about a controversial piece of legislation. The model’s answer is shaped not only by the facts, but also by how those facts are presented, which sources are highlighted, and the tone and point of view the answer adopts.

This means that the root of the bias problem lies not only in biased training data or skewed outputs, but in the market structures that shape how the technology is designed in the first place. When only a few large language models mediate access to information, the risk of communication bias grows. Beyond regulation, effective mitigation of bias requires protecting competition, enabling user-driven accountability, and regulatory openness to different ways of building and delivering large language models.

Most regulation so far aims to prohibit harmful outputs after the technology is deployed, or to force companies to conduct audits before launch. Our analysis suggests that although pre-launch checks and post-deployment monitoring may catch the most obvious errors, they are less effective at addressing the subtle communication bias that emerges through user interactions.

Beyond AI regulation

It is tempting to expect regulation to eliminate all bias in AI systems. Such policies can help in some cases, but they tend to miss a deeper issue: the incentives that determine which technologies convey information to the public.

Our findings show that the most sustainable solution lies in promoting competition, transparency, and meaningful user engagement, and empowering consumers to play an active role in how companies design, test, and deploy large language models.

These policies matter because AI will ultimately affect not only the information we seek and the daily news we read, but also the kind of society we envision for the future.

This article is republished from The Conversation, an independent, nonprofit news organization bringing you trustworthy facts and analysis to help you understand our complex world. It was written by: Adrian Kuenzler, University of Denver; University of Hong Kong.


Adrian Kuenzler does not work for, advise, own stock in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.
