Information Integrity in the AI Era: (Central) European Perspective
Aspen Institute CE and Aspen Digital co-hosted an event in Prague on September 26, 2023 to discuss artificial intelligence and its impact on news and media. The event featured a panel of experts, including journalists, academics, and regulators specializing in AI and technology.
The event was composed of two discussions:
- AI and Information Integrity
- Independent Media and Regulations
A recording of the full event is available online.
Aspen CE Executive Director Milan Vašina and Aspen Digital Executive Director Vivian Schiller opened the conference by referencing the launch of ChatGPT – the first generative AI tool released to the general public – on November 30, 2022, which set off “both simultaneously a gold rush and an arms race” in AI technology.
Generative AI and Risk
For many, the launch of ChatGPT in November 2022 was their first experience with AI. Or so they thought. In fact, AI has been part of our daily lives for many years. AI is an umbrella term encompassing many types of technologies and algorithmic functions that broadly mimic certain human capabilities, such as understanding, sensing, reasoning, data processing, and decision making.
“AI is overtaking human communication capabilities…because AI evolved into language and generative AI. Until recently, the main job of AI was to analyze…it now analyzes and generates,” Michal Pěchouček said, describing the recent innovation.
Both panels touched on the topic of personalized content. One side of the debate highlighted the potential dangers of AI-enabled personalization. Michal Pěchouček pointed out that personalization could lead to disconnection: it can drive users into separate ideological camps based on the information delivered to them. There are two risks here: information that is factual but filtered to fit a specific ideology, and outright misinformation.
Mis- and disinformation pose a unique challenge because they are often protected as free speech. “If you want pluralism, if you want a variety of opinions in society, you cannot use regulation so stringently that the problem of disinformation will go away,” Johann Laux said. Because of AI-generated misinformation and ideological skew, the overall quality of the information ecosystem suffers. It will become harder for the public to verify information because they do not have equal access to facts; each camp receives only part of the whole. This also impacts researchers and journalists, who will have less access to the data needed to understand information trends and risks.
Another danger highlighted is the rapidly escalating AI arms race. On the one hand, AI can be a useful tool to identify AI-generated content and manipulation. However, Pěchouček stated, AI can just as easily generate work-arounds; in other words, the technology can be weaponized against itself. AI is progressing so quickly that nobody can predict where it will be in the future, the panelists said. This leaves opportunities for innovation, and parallel opportunities for abuse.
“The greatest AI danger is the same as the greatest AI opportunity,” Michal Pěchouček concluded.
While personalization presents risks, the panel also discussed its potential upside: tailoring news to a user’s interests or geography. Everyone likes to receive their news differently, and AI can be used to cater to those preferences. It presents the opportunity to optimize good reporting. For instance, a news story could be automatically converted into audio, video, long-form writing, short articles, or other formats, which can increase engagement with reliable news sources.
The Upside of AI in Journalism and Newsrooms
While AI carries various dangers, the panelists also discussed the benefits of embracing the technology as a tool. A key discussion centered on AI as an efficient and cost-effective utility, so long as it’s used in an ethical, transparent manner.
“It’s our job to use technology to make us even more efficient, to maybe even save funds on one side, to then have more room for better reporting and better coverage,” Tanit Koch stated.
Panelists praised AI technology as a means to aid and expedite research. They also discussed AI’s archival function, which can make recorded history – written, visual, or auditory – more accessible via keyword searches. AI is also a useful rough-draft editor, capable of identifying and fixing grammatical errors and typos without human intervention. The panelists also discussed recommendation algorithms that can suppress clickbait content and promote factual, relevant news. While this is a common function on most media platforms, used ethically it can maintain the credibility of a media site and keep the information space healthy. These practical functions lighten journalists’ workloads and help media sites function more efficiently.
“We have a very, very clever algorithm that can identify clickbait in a Czech title,” Peter Jančárik said of AI development and use at Seznam.cz.
Panelists also discussed challenges with large language models (LLMs), which can deliver highly fallible results – a particular risk when used to generate news articles or other journalistic outputs. However, generative AI can be useful for creating first drafts. It is also extraordinarily useful for translation. “Suddenly, the world is your marketplace. And suddenly, you don’t have to be able to speak languages like English or German to be able to understand English or German journalism,” Charlie Beckett said.
“Users are less interested in whether the text was AI generated or not. They are more concerned with trust, whether they can trust the information,” Michal Pěchouček revealed.
AI and Social Media Regulation
AI delivers great power to those who use it. As such, the panel suggested that regulation is necessary to ensure this power is used responsibly, so that all users can enjoy the technology’s various benefits while being protected from its dangers.
The European Commission has advanced several regulations, two of which were discussed at length: the Digital Services Act (DSA) and the AI Act. The DSA’s basic principle is that what is illegal offline is illegal online. The AI Act addresses and monitors specific high-risk uses of AI. The aim of this regulation is to enshrine original values: “more freedom, more information, and more democracy,” Daniel Braun stated. Having regulations in place not only protects users, it also protects the quality of the information being disseminated. Crucially, these acts regulate how the technology is used rather than the content on news and social media platforms.

New rules under consideration include labeling AI-generated content. This protects the idea of AI as a utility while making sure the public understands what is created by humans and what is generated by machines. All panelists agreed there should be a standard for using AI against AI abuse – machines should be capable of recognizing machine-generated content and manipulation. The right regulations should promote “trustworthy information and empower those in the information environment…and reduce the prevalence in the space for negative externalities,” Daniel Braun concluded.
The speed of innovation in AI presents a unique set of challenges to regulators. Though the European Commission is trying to stay ahead of technological advancements, “it’s incredibly complicated to regulate a moving target,” Johann Laux pointed out. He further commented that though there is sound legislation in place, there still needs to be a strategy to implement these laws in an effective and meaningful way. Cooperation between countries and governments is crucial so that methods do not vary significantly from region to region. But that level of cooperation may be challenging. While governments in free and democratic countries may choose to enforce regulations responsibly, autocratically leaning governments may take advantage of AI to further suppress speech and journalism. Boehler noted that regulations “could possibly be abused by these state actors” where there is a history of speech regulation and abuse, and that journalists and lawmakers alike should be asking these tough questions.
In summary, the panelists agreed that journalists should:
- Embrace and utilize the technology rather than fear it
- Continue to fact-check, verify, and protect sources
- Be transparent about how and when AI is being used
- Cooperate to identify collaborative regulations and solutions for AI
- Hold governments, politicians, fellow journalists, and media organizations accountable
- Push for transparency and for more explainable AI from scientists and experts
- Ask tough questions
- Promote trustworthy information – and trust in the organizations stewarding it – above all else
“As a person dealing with the media, you have an incredible power in checking what the media are doing. So actually, this makes the industry faster, more savvy about what we do, and more careful and more precise in our work,” Tanit Koch stated.