TSElosophers meeting 15.12.2022. Erkki Lassila, Kari Lukka, Otto Rosendahl.

(1) Gendron, Y., Andrew, J., & Cooper, C. (2022). The perils of artificial intelligence in academic publishing. Critical Perspectives on Accounting, 87, 102411.

(2) Korica, M. (2022). A Hopeful Manifesto for a More Humane Academia. Organization Studies, 43(9), 1523-1526.

Summary

Both of these articles raise a fundamental concern about the contemporary state and future of academia and scholarly work. Gendron et al. (2022) take the colonization of academic publishing by artificial intelligence (AI) as their example. They highlight the possible implications that the inclusion of AI technologies might have for the role of human actors in the publishing process, i.e. editors, reviewers, and authors. Korica (2022), on the other hand, draws on her own experience in academia to highlight concerns about the current state of scholarly work and the working environment, along with some suggestions on how things might be improved. Both articles thus bring out the necessity of pondering what academia is, and should be, about, and how every member of academia has agency in this matter.

Our discussion

We agreed that the fundamental concern behind both of these articles is important and worth discussion and debate. The article by Gendron et al. (2022) draws heavily on critiques of the data-driven evolution of our lives, such as Zuboff (2019) and Han (2017), highlighting the issue of the “surveillance society”, which is becoming harder and harder for anybody to escape. The article pointed out well how placing too much emphasis on, and trust in, the abilities of AI and algorithm-based software – despite their promise to make processes more productive – is moving us towards surface-oriented, mechanical and performance-focused academic publishing. We agree with the authors that this type of development is a real threat to good scholarship in academic publishing. In relation to the central editorial task of reviewer selection, for instance, we agree with Gendron et al. (2022) that AI would be a problematic route to fixing the alleged ‘problem’ of human bias, since it would likely only introduce an “elite bias” into these processes.

However, we felt that Gendron et al. (2022) did not elaborate sufficiently on individuals’ ability to see and understand how exactly the evolution towards further digitalization happens in our lives, and where this type of technology-oriented development might be leading us. It is difficult, or even impossible, for most of us to detect and comprehend the cumulative effects of each microscopic addition of the digital into our everyday lives. It is hard for us to see the connection between some new minuscule software application, sold to us as a ‘help’ with or ‘improvement’ of some insignificant daily task, and the accelerating ‘digi-colonization’ of various aspects of human life in our society.

While we agreed with the basic idea and many parts of Korica’s (2022) paper, we specifically disagreed with the seventh suggestion made in the article. This suggestion seemed to echo instrumentalism – in the sense that publishing research is staged as the final aim, over and above conducting high-quality research – and was therefore quite thoroughly at odds with the idea of good scholarship. The paper also seemed to cover almost every worry of academic life, which made its scope overly broad.