

Date

27 JAN 2025

Entrance fee

Free

Time

5:30 PM - 7:00 PM

Address

Cantersteen 16, 1000 Brussels

Speakers

Laurence Dierickx

Register now!

AI Happy Hour | Journalism caught in the AI hype?

Offline

Join us for an evening of networking and explore the ethical implications and challenges of AI in journalism at our AI Happy Hour! This in-person event will take place on Monday 27 January at 17:30 at the FARI Test and Experience Center (Cantersteen 16, 1000 Brussels).

Laurence Dierickx – lecturer at ULB and postdoctoral researcher at the University of Bergen – will discuss the role of AI in journalism, emphasizing its limitations, its ethical implications, and the need for human oversight and AI literacy to navigate its challenges and prevent misinformation.

Description of the session:
Since the launch of ChatGPT in November 2022, the role of AI in journalism has sparked new discussions, highlighting ethical concerns about its use, accuracy, and impact on journalistic standards. While AI tools like large language models (LLMs) offer potential, they cannot replace human judgment and may contribute to misinformation due to their limitations. Trust in AI-generated content is a growing issue, as many journalists use AI tools despite doubts about their reliability. Rather than reshaping journalism, AI serves as a “remix” of existing editorial processes, augmenting workflows while raising critical questions about the role of journalists in combating disinformation. Effective AI integration requires understanding its limitations, promoting AI literacy, and maintaining human oversight to navigate the challenges and complexities of AI-driven societies.

Is this for you? Administrators, industry professionals and AI enthusiasts are invited to join this session.

A balanced mix of learning and relaxation: attendees will have the opportunity to engage in insightful discussions with Laurence, followed by a casual happy hour with snacks and beverages!

This event is free to attend, but registration is compulsory.


Laurence Dierickx

Laurence Dierickx is a researcher and teacher specialising in information and communication technologies. She holds a master’s degree and a doctorate in information and communication sciences and is currently a postdoctoral researcher at the University of Bergen (Norway), as part of the NORDIS project (Nordic Observatory for Digital Media and Information Disorder). She also teaches digital investigation techniques, data and AI-driven journalism at the Université libre de Bruxelles (Belgium), where she is a member of the ReSIC research centre and the LaPIJ laboratory.  

Her research interests include the development of responsible AI systems that consider the needs, practices and values of their end users, in the context of fact-checking. Her more recent work, carried out at the Digital Democracy Centre at the University of Southern Denmark, has focused on the uses of generative AI and risk reduction strategies. Laurence Dierickx’s initial professional career alternated between journalism and technology, and she has developed expertise in technological developments in journalism. She has published a number of scientific articles dealing with the practices, ethics and responsible development of technologies, underlining the importance of a nuanced and critical approach.

She is also co-author of the chapter “From Bytes to Bylines: A History of AI in Journalism Practices” in the book Histories of Digital Journalism, released in November 2024.

– https://www.taylorfrancis.com/chapters/edit/10.4324/9781003492436-12/bytes-bylines-laurence-dierickx-carl-gustav-lind%C3%A9n  

– https://orcid.org/0000-0001-5926-8720 

– https://ohmybox.info/about/ 

 


Complete description

Journalists use AI-based tools every day, from automated interview transcription to machine translation and advanced search engines, often without questioning the technology behind them. However, since the launch of ChatGPT in November 2022, discussions about the role of AI in journalism have taken a new direction, fuelled by hype and driven by marketing strategies that exaggerate the technology’s capabilities. This shift has sparked ethical reflection on the responsible use of AI, human agency and journalistic standards. While AI technology is not new to the news media sector, it has never been so widely discussed or scrutinised. 

Despite their potential, large language models (LLMs) such as ChatGPT cannot replace the human judgement required to produce credible, nuanced stories. Instead, they are more likely to be used for secondary tasks due to their many drawbacks, including semantic noise, lack of nuanced understanding, and the generation of inaccuracies that can contribute to misinformation. From a public perspective, the use of LLMs and other generative AI systems raises critical questions about factuality in the age of AI. Because AI can both inform and mislead, it challenges traditional notions of truth and can lead to paradoxical outcomes, including the increasingly complex issue of trust and the growing rejection of AI-generated content as legitimate news. Trust is also central to the relationship between LLMs and journalists. In this area, research is beginning to show that trust is not always a prerequisite for use, as many professionals use the technology despite not fully trusting it or its results. 

Technology has been part of the journalistic apparatus for years, often reflecting an ambiguous relationship with professionals, ranging from dystopian to utopian visions. At this stage in the history of journalism, it would be misleading to think of AI as reshaping journalism. Rather, it should be seen as a ‘remix’ of the editorial process. AI provides tools that can augment existing workflows and routines, while raising important questions about the role of journalists in AI-driven societies, where quality journalism serves as a shield against the growing flow of disinformation. 

Effective integration of AI requires a clear understanding of its capabilities and limitations, with a focus on risk mitigation strategies that include AI literacy, human oversight and responsible practices. In all cases, AI is not a silver bullet, and the hype surrounding it should not obscure the deep challenges that the news media sector has faced for two decades, including the continued deterioration of working conditions. In this respect, AI does not offer solutions but rather creates new challenges. Furthermore, AI literacy should also be seen as a powerful tool for exploring the complexities of AI-driven societies, which are not immune to bias. After all, tools are just tools made by humans to serve human purposes.

 


About the AI Happy Hours

The AI Happy Hours are workshop sessions held on the last Monday of every month, starting at 17:30 and finishing at 19:00. These sessions provide an appetizer of what AI, data and robotics have to offer, how they can benefit citizens, administrators and companies in day-to-day life, and how to safeguard the sustainability, compliance and responsible use of these technologies.

This event is organized by FARI – AI for the Common Good Institute.
