In the Age of AI: Who and what counts in open innovation

Published: 24.03.2026 / Artificial intelligence / Blog

Some of the world's most important innovations no longer come from single companies working alone – they emerge from crowds. But who decides whose contributions count? And what happens when AI enters that equation? Two researchers explore the question from different angles.

Introduction

Open innovation is the practice of building on knowledge from outside your own organisation. Rather than relying solely on internal research and development, firms invite external contributors – developers, users, independent researchers, even competitors – into parts of their innovation process. The results can be found everywhere: in the software running on your phone, in the medical devices developed through patient communities, in the urban services shaped by citizen feedback.

But openness is rarely as simple as it sounds. Someone always decides who gets invited, whose contributions count, and who is accountable when something goes wrong. Those decisions are often invisible – built into platform architecture rather than written into any policy.

AI is now changing the terms of that invisibility. As artificial intelligence enters the workflows of media production, public debate, and innovation evaluation, the question of accountability becomes both more pressing and harder to locate. Ahmed Hashish's research into online discussions about AI in media reveals that people are already grappling with this: debating who should design AI systems, who should oversee them, and who bears responsibility for the content they produce.

What follows is a dialogue between two researchers who approached that question from different directions – and found they were circling the same problem.

The hidden grammar of open innovation

Tomas:

When Matti Skoog and I studied open innovation platforms (Träskman & Skoog, 2022), one finding stayed with me long after the paper was published. Firms such as Nokia can choose how open to be. Openness is not simply “open”: it is not a commitment built into the architecture of a platform. It is a resource that organisations strategically manage.

The crowd is invited in, but the firm decides what their participation means, what gets counted, and crucially, who gets held accountable for what. That asymmetry is, I think, the hidden grammar of open innovation.

To understand how that grammar operates, it helps to look at what Kornberger identifies as three design parameters of distributed information systems (Kornberger, 2017). The first is interfaces, which structure interaction between heterogeneous actors. The second is architectures of participation, which define how contributions can be made. The third is evaluative infrastructure, which determines how contributions are compared and valued.

This evaluative layer is important. It determines what counts as valuable and therefore who counts at all (Kornberger, 2017).

Ahmed:

What I found interesting in your description of the hidden grammar of open innovation is that it focuses on the design of participation and evaluation within innovation systems. My research approaches the same phenomenon from a different direction. Instead of examining how organisations structure participation, I examine how people outside those structures interpret and make sense of technologies that emerge from them.

In the Reddit discussions about AI that I analysed, people constantly interpret AI in relation to roles, responsibilities, and legitimacy. Participants debate who should design AI systems, who should supervise them, and who should be accountable when AI-generated media causes problems.

These discussions occur outside the formal structures of open innovation platforms. Yet they still influence how technologies are understood and evaluated. In my thesis I show that individuals interpret AI through technological frames that shape how they understand what AI is, why it is used, and how it should be applied in media contexts (Hashish, 2026).

These interpretations do not directly redesign the infrastructure you describe, but they shape the social environment in which new technologies are accepted, resisted, or questioned. In that sense, open innovation is influenced not only by platform architecture but also by the broader interpretive environment in which these technologies are discussed.

The trust cycle and its breaking points

Tomas:

What made open innovation genuinely open, when it worked, was what intermediaries and community managers called a trust cycle. Transparency and accountability reinforced each other across the firm, the intermediaries, and the crowd.

Importantly, this was not the firm’s own argument. It was the intermediaries’ vision. These actors sit between the firm and the crowd and design the participatory infrastructure of innovation platforms. Their ideal was that anyone could observe how the organisation, its people, and its technologies are continuously performed, what the consequences are, and for whom.

But that cycle only works if the platform is designed to sustain it. When the firm controls the dial of openness, the cycle can break quietly without anyone noticing. The crowd continues contributing, but the feedback loop disappears.

Ahmed:

The discussions about AI in media reveal something similar, although from a different perspective. In the dataset I analysed, people rarely speak about AI in a single unified way. Instead, they interpret AI through several recurring frames.

In my thematic analysis, four major themes emerged. AI is often described as a creativity accelerator that helps people produce content more efficiently. At the same time, it is also discussed as technically limited, ethically risky, and still dependent on human intervention (Hashish, 2026).

Each of these frames implies different expectations about trust and responsibility. When AI is framed mainly as a creativity tool, people tend to treat it as an assistive technology that supports human work. Trust is based on collaboration between humans and AI.

However, when AI is framed in terms of ethical risks such as misinformation or manipulation, the discussion shifts toward institutional responsibility. Participants begin asking who should regulate AI systems and who should be accountable for their outcomes.

From the perspective of technological frames theory (Orlikowski & Gash, 1994), these differences illustrate how the same technology can be interpreted through multiple lenses. Because these frames coexist, expectations about accountability and trust also diverge.

This helps explain why public debates around AI in media often appear fragmented. People are not necessarily disagreeing about the technology itself. They are interpreting it through different assumptions about what the technology is and what responsibilities it implies.

The endless loop: accountability without a finish line

Tomas:

Something that Skoog and I found troubling was the temporality of accountability in open innovation. Innovation rarely follows a linear path. It unfolds over years, sometimes decades, through unexpected recombinations (Schumpeter, 1947).

These processes also involve what Revellino and Mouritsen describe as relational drift. Innovations move across networks of actors and gradually transform as they are adopted in new contexts (Revellino & Mouritsen, 2015).

A contribution to an innovation platform may sit dormant for years. Later it might be rediscovered and developed into something its original contributor never imagined.

Yet the digital trace remains. It connects that original contribution to a chain of consequences that were never intended. As Messner argues, this raises important questions about the limits of accountability (Messner, 2009).

Ahmed:

This temporal dimension becomes visible in discussions about the role of human oversight in AI-supported media production. Many participants in the discussions I analysed emphasise that humans must remain involved in AI-driven workflows.

At first glance this argument appears to focus on creativity or quality. However, when participants explain their reasoning, the concern often shifts toward responsibility. People want to know who is accountable when AI-generated content produces misleading or harmful outcomes.

Human intervention therefore functions as a way of maintaining a visible chain of responsibility. When a journalist edits an AI-generated draft or verifies AI-produced information, responsibility remains attached to a recognisable human role.

However, the discussions rarely propose concrete structures for how this responsibility should be organised. Participants emphasise the importance of human oversight, but the practical mechanisms for implementing it remain unclear.

In that sense the conversations reflect an ongoing process of sensemaking around AI in media. People recognise that the introduction of AI complicates traditional accountability structures. Yet the institutional arrangements capable of addressing this complexity are still evolving.

Taken together, this suggests that understanding open innovation in AI mediated media environments requires attention not only to platform design but also to the ways technologies are interpreted and negotiated in public discourse.

In the age of AI, who counts in open innovation?

The answer, this dialogue suggests, is that accounting is never neutral. To count is to make a choice – about what matters, who belongs, and whose contributions leave a trace. Platform architecture embeds those choices in digital infrastructure. Public discourse contests them in language. In the age of AI, both processes are accelerating, and the distance between them is growing harder to cross.

This raises questions that are as much philosophical as empirical. If accountability requires a visible chain of responsibility, what happens when that chain is distributed across algorithms, platforms, and publics that never directly encounter one another? If legitimacy depends on shared interpretive frames, what happens when those frames multiply faster than institutions can respond?

Who counts, in the end, is not a question that innovation systems answer – it is a question they continuously defer. That deferral is worth sitting with – as scholars (like the two of us here), as media workers and politicians… and in the quieter moments when we simply wonder whose voice is shaping the world we share.

References

Hashish, A. (2026). AI as a driver of open media innovation: A qualitative study of perceptions and perspectives (Master’s thesis, Arcada University of Applied Sciences). https://urn.fi/URN:NBN:fi:amk-202603023582

Kornberger, M. (2017). The visible hand and the crowd: Analyzing organization design in distributed innovation systems. Strategic Organization, 15(2), 174–193.
https://doi.org/10.1177/1476127016648499

Messner, M. (2009). The limits of accountability. Accounting, Organizations and Society, 34(6–7), 918–938. https://doi.org/10.1016/j.aos.2009.07.003

Orlikowski, W. J., & Gash, D. C. (1994). Technological frames: Making sense of information technology in organizations. ACM Transactions on Information Systems, 12(2), 174–207. https://doi.org/10.1145/196734.196745

Revellino, S., & Mouritsen, J. (2015). Accounting as an engine: The performativity of calculative practices and the dynamics of innovation. Management Accounting Research, 28, 31–49. https://doi.org/10.1016/j.mar.2015.04.005

Schumpeter, J. A. (1947). Capitalism, socialism and democracy (2nd ed.). Harper & Brothers.

Träskman, T. I., & Skoog, M. (2022). Performing openness: How the interplay between knowledge sharing and digital infrastructure creates multiple accountabilities. Journal of Strategy and Management, 15(2), 194–219. https://doi.org/10.1108/JSMA-12-2020-0359

"Come on kids, let’s play”, says the robot – new Arcada project focuses on child-robot interaction

Already ten years ago Arcada set off its research in AI (artificial intelligence) and pioneered the field with a master's programme in Big Data Analytics. In recent years, Arcada further sharpened its focus in AI research by specifically focusing on ethical issues related to new technologies and on the interaction between humans and robots. So far, dentistry, elderly care, rehabilitation, and preventive health care have benefited from this research – and now the focus is set on child-robot interaction.

Category: Artificial intelligence

To the top of the page