Observations on the implications of generative AI for media analysis and its methodology

Published: 13.06.2024 / Blog / Publication

This blog post explores the impact of generative AI (GenAI) on media analysis and research methodologies. It highlights the Media Intelligence Laboratory (MILa) at Arcada, which focuses on developing analytical tools for media education.

The Media Intelligence Laboratory (MILa) was launched at Arcada with the purpose of researching and developing analytical tools specifically for media education.

Traditional tools for media analysis in quantitative research include content analysis (e.g., the frequency of words or themes in material), surveys on media usage, and experiments on media effects on users. In qualitative research, methods have included text or image analysis, ethnography (e.g., observing media use in natural environments), interviews (e.g., with users or journalists), and data collection through various group discussions.
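The most basic of these quantitative techniques, counting the frequency of words or themes in a body of material, can be illustrated with a few lines of code. The sketch below is a minimal, hypothetical example of term counting, not a tool used in the MILa project:

```python
from collections import Counter
import re

def term_frequencies(text, terms):
    """Count how often each search term appears in a text (case-insensitive)."""
    tokens = re.findall(r"\b\w+\b", text.lower())
    counts = Counter(tokens)
    return {term: counts[term.lower()] for term in terms}

sample = (
    "Media education shapes media literacy. "
    "Research on media effects informs education policy."
)
print(term_frequencies(sample, ["media", "education", "research"]))
# → {'media': 3, 'education': 2, 'research': 1}
```

Dedicated packages such as AntConc perform the same kind of counting at corpus scale, with concordancing and collocation analysis on top.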

Triangulation methods have also been used to combine relevant quantitative and qualitative approaches.

These methods can be implemented using various software tools, which require different levels and types of expertise depending on the software and the task. Proficiency in using software is a key part of the skill set for experts and researchers.

A vast range of software is available for research purposes (e.g., NVivo, MAXQDA, SPSS, ATLAS.ti, Qualtrics, SurveyMonkey, Dedoose, AntConc) for collecting, handling, and analysing data. The development and use of these software tools, and their impact on research themes, scope, and methods, or on theory formation and testing, have been surprisingly under-researched. Some methodology books have provided a broader overview of the use of software in research (e.g., Ignatow & Mihalcea 2017).

Software is widely used in scientific research (Pan et al. 2015), and its importance continues to grow (Gomez-Diaz & Recio 2019; van Es et al. 2021). The proliferation and dominance of information technology and software in current research is also described by the term "computational turn" (Berry 2011).

In media research, research software is significant not only as a research tool but also in its impact on the epistemological and methodological foundations of research (Rieder et al. 2022).

The general adoption of generative AI (GenAI) and large language models (LLMs) since the launch of ChatGPT in November 2022 has brought the software question into discussions about the content and methods of scientific research. This also applies to social sciences and media research.

Since then, the role and status of GenAI in scientific research and in higher education have been presented in hundreds of scientific articles. The mood of the papers seems to be one of anticipation: in both research and higher education, it appears essential to accept that, when used correctly, GenAI can significantly support researchers, teachers, and students.

One of the central research questions of the MILa project has become to what extent generative AI applications are displacing other applications in social sciences and media research, and how the use of GenAI reflects on media education as a whole. Additionally, the project aims to answer the question of what the increasing use of GenAI means for research questions, methods, data, and theory formation.

The research use of GenAI in social sciences and media research, which is mostly under two years old, has been a subject of some academic studies (e.g., Grossmann et al. 2023; Bail 2024). These studies have led to the conclusion that the launch of GenAI represents a significant change for research and higher education.

When considering the research use of GenAI, the MILa project must particularly examine whether the benefits of GenAI for research can be harnessed through the current chat or similar natural language-based interfaces, and to what extent application programming interfaces (APIs), which require programming skills, are still needed for research functions.

When this question was posed to ChatGPT-4o, the following summary was obtained:

"API interfaces and natural language-based chat services are likely to complement each other in the future. With technological development, natural language interfaces may become more common, but API interfaces will remain an important part of performing more complex and technically demanding tasks. In social science research, chat services can offer new opportunities, but their full use still requires time and development, especially concerning reliability and ethical issues."

When asked to what extent chat-based large language models can replace traditional research software, the following response was received:

"Natural language-based chat services can significantly improve social science research processes, particularly in data collection and preliminary analysis. They offer flexibility and speed, but traditional software remains important for more complex analyses and ensuring research quality. In the future, the role of chat services may grow, but their ability to fully replace traditional tools depends on technological development and the adaptation of research practices to new tools."

Thus, generative AI is not set to entirely displace traditional research software in the near future. This means that teaching media analysis should continue to include traditional research software. At the same time, it implies that the role of generative AI as a tool for media analysis should be emphasised, and that the development and usability of generative AI features should be monitored closely.

Society and media in transition

Based on already published research on the use of GenAI, it is changing both society and media, as well as social sciences and media research. Consequently, it offers new opportunities and imposes demands on media education (Nylund & Honkanen 2024).

According to Gil de Zúñiga et al. (2023), AI is changing the dynamics of investigative journalism, news production, and distribution, including targeting audiences based on their news preferences. GenAI also affects communication, particularly persuasive communication, where research focuses on the interplay of technology and persuasion and on how GenAI can influence actors in the formation of general political opinions.

According to Pavlik (2023), ChatGPT can quickly produce expert text on various topics, which can revolutionise news production and journalistic content. The review concludes that while ChatGPT and similar tools offer significant opportunities for journalism and media education, their use also involves challenges and limitations. The research recommends integrating GenAI into media education carefully and developing new curricula that consider these technologies.

Bdoor & Habes (2024) concluded that GenAI has revolutionised journalism by enabling the handling of large amounts of data and enhancing news functions such as editing and content personalisation. GenAI improves efficiency and productivity but also brings challenges, such as potential dependence on technology companies' funding. The research emphasises AI's ability to produce high-quality content that competes with human-produced content. However, according to the research, GenAI cannot replace the unique touch and creativity of humans.

Biases and prejudices in GenAI

In post-ChatGPT research, in addition to numerous opportunities, GenAI has been associated with several challenges and uncertainties. Identified challenges include, among others, the safety of online assessments, plagiarism risks, erroneous or fabricated information, biases in training data, ethical issues broadly, environmental impacts, and datafication (Nylund & Honkanen 2024).

Although there are other challenges, one key issue with GenAI appears to be the reliability of its content. Numerous studies have referred to the production of morally and/or politically biased content by GenAI. Regarding political bias, evidence of a "left-leaning" tendency has been found in ChatGPT (Rozado 2023). Additionally, significant and systematic political bias towards Democrats in the United States, Lula in Brazil, and the Labour Party in the United Kingdom was observed in ChatGPT's outputs (Motoki et al. 2023).

From the perspective of media research and education, it is interesting that GenAI has been found to prioritise facts over sensational journalism. Breazu & Katson (2024) examined narratives generated by generative AI models (ChatGPT-4) on sensitive topics such as politics, racism, immigration, public health, gender, and violence. Their research compared content produced by ChatGPT-4 with articles from leading British newspapers, The Daily Mail and The Guardian, on immigration, specifically focusing on Eastern European Roma in the context of the 2016 EU referendum.

The study found that ChatGPT-4 demonstrates objectivity in its reporting and emphasises racial awareness in its content. It prioritises facts over sensationalism, distinguishing it from The Daily Mail's articles. ChatGPT-4 refrains from producing content if it perceives the headlines as sensational or offensive, indicating the model's strong ability to identify and avoid harmful narratives. ChatGPT-4 is also capable of creating a more balanced and less discriminatory portrayal of marginalised groups compared to traditional media.

The research concludes that ChatGPT-4 has the potential to promote more responsible and equitable media practices. However, it is not entirely free from biases that may arise from its training data and algorithmic choices. From a media analysis and education perspective, generative AI presents significant opportunities but requires guidance for research and educational use. This necessity for clear guidelines is evident at Arcada, where challenges related to the use of ChatGPT in thesis work highlight issues identified in higher education research, such as academic dishonesty.

GenAI in media research and media education

How should Generative AI be used in media research and education? It is clear that there needs to be concrete guidelines, known to all, defining what can and cannot be done with Generative AI. The question of guidelines is broad, and research suggests that all stakeholders should be involved, including teachers, researchers, and students, who are key users of Generative AI. The research also highlights the role of technology companies that provide Generative AI applications in regulation and development.

The MILa project's task is more limited. Since the aim is to investigate and develop analytical tools for use in media research and education, it is necessary to consider the role of Generative AI compared to more traditional and manual software and applications. The starting point should be that each study uses tools best suited to the research design: addressing the research question, presenting the theoretical background and previous research, and collecting and analysing data. Generative AI is suitable for most of these tasks, but there is still much work to be done in data collection and analysis for many studies, where more traditional methods must be used.

Using both traditional research software and Generative AI in research means we must always assess the role of the machine in answering research questions. A justified question is whether we choose research questions that we know we can more easily answer with software or AI, or questions that are relevant and innovative for the theme, which remains the researcher's responsibility.


Bail, C. (2024). Can generative artificial intelligence improve social science? Proceedings of the National Academy of Sciences, 121(24), e2314021121. https://doi.org/10.1073/pnas.2314021121

Bdoor, S. and Habes, M. (2024). Use ChatGPT in Media Content Production: Digital Newsrooms Perspective. https://doi.org/10.1007/978-3-031-52280-2_34

Berry, D. M. (2012). Introduction: Understanding the Digital Humanities. In Understanding Digital Humanities, ed. by David M. Berry, 1–20. New York: Palgrave Macmillan.

Breazu, P. and Katson, N. (2024). ChatGPT-4 as a journalist: Whose perspectives is it reproducing? Discourse & Society, 0(0). https://doi.org/10.1177/095792652412514

van Es, K., Schäfer, M. T., and Wieringa, M. (2021). Tool criticism and the computational turn: A ‘methodological moment’ in media and communication studies. M&K Medien & Kommunikationswissenschaft, 69(1), 46-64. https://doi.org/10.5771/1615-634X-2021

Gil de Zúñiga, H., Goyanes, M. and Durotoye, T. (2023). A Scholarly Definition of Artificial Intelligence (AI): Advancing AI as a Conceptual Framework in Communication Research. Political Communication. https://doi.org/10.1080/10584609.2023.2290497

Gomez-Diaz, T., and Recio, T. (2019). On the evaluation of research software: The CDUR procedure. F1000Research, 8, 1353. https://doi.org/10.12688/f1000research

Grossmann, I., Feinberg, M., Parker, D., Christakis, N., Tetlock, P. and Cunningham, W. (2023). AI and the transformation of social science research. Science, 380, 1108-1109. https://doi.org/10.1126/science.adi1778

Ignatow, G., and Mihalcea, R. (2017). Text Mining: A Guidebook for the Social Sciences. SAGE Publications

Motoki, F., Pinho N. V. and Rangel, V. (2023). More human than human: measuring ChatGPT political bias. Public Choice, 198. https://doi.org/10.1007/s11127-023-01097-2

Nylund, M. and Honkanen, P. (2024). Media, Media Education, GenAI and Radical Uncertainty. Conference paper EMMA, 5-7.6.2024, Netherlands.

Pan, X., Yan, E., Wang, Q., and Hua, W. (2015). Assessing the impact of software on science: A bootstrapped learning of software entities in full-text papers. Journal of Informetrics, 9(4), 860-871. https://doi.org/10.1016/j.joi.2015.07.0

Pavlik, J. (2023). Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education. Journalism & Mass Communication Educator, 78(1). https://doi.org/10.1177/10776958221149577

Rieder, B., Peeters, S., and Borra, E. (2024). From tool to tool-making: Reflections on authorship in social media research software. Convergence: The International Journal of Research into New Media Technologies, 30(1), 216-235. https://doi.org/10.1177/135485652211270

Rozado, D. (2023). The Political Biases of ChatGPT. Social Sciences, 12, 148. https://doi.org/10.3390/socsci12030148

Photo: ChatGPT generated 
