Turun kauppakorkeakoulun tieteenfilosofinen kerho


Should we bring the mammoth back?

TSElosophers meeting 7.10.2021. Andrea Mariani, Elina Järvinen, Erkki Lassila, Kari Lukka, Milla Unkila, Morgan Shaw, Otto Rosendahl, Toni Ahlqvist.

Thiele, L. P. (2020). Nature 4.0: Assisted evolution, de-extinction, and ecological restoration technologies. Global Environmental Politics, 20(3), 9-27.

Summary

The Earth is 4.5 billion years old, and life on it around 3.7 billion years. Mammals get to celebrate their 200-million-year birthday, whereas hominids have been around for a mere 200,000 years. We Homo sapiens managed to outcompete our rivals around 30,000 years ago, tamed the dog to help us 20,000 years ago, and figured out farming 12,000 years ago (Dasgupta Review 2021).

Everything preceding the use of tools and farming Thiele dubs Nature 1.0: the state of nature untouched by humans. Nature 2.0 emerged as humans started to tinker with their environment – to plow fields, set up irrigation, domesticate cattle, build cities, roads and energy infrastructure, extract minerals and dig for oil. While still a shorter period than the previous one, Nature 2.0 has existed notably longer than Nature 3.0, which began mere decades ago: “It is chiefly characterized by the capacity for the accelerated, nonincremental, and precisely controlled modification or creation of life-forms and their environments. The primary Nature 3.0 technologies are nanotechnology, geoengineering, and biotechnology.”

In the article, Thiele discusses the implications of Nature 4.0, the potential next step in this trajectory. While Nature 4.0 utilizes the technologies designed during Nature 3.0, the distinction lies in the motivation underpinning their use. Nature 3.0 is all about pleasing humans, whereas Nature 4.0 is about attempting to turn the tide of the biodiversity loss of our own making. We now have the technology to modify the habitats we once destroyed in order to recreate bounded ecosystems, to tinker with genes to bring back the passenger pigeon we hunted to extinction, and to artificially engineer species that tolerate the changes we made to the biosphere in Nature 2.0 and 3.0.

In sum, we might be able to bring back the woolly mammoth and many other species. But the focal question Thiele asks is, should we, given that the potential risks of Nature 4.0 may be huge and the unpredictable consequences irreversible?

Our discussion

In discussing the article, the familiar TSElosopher camps of the big brush and the detailed brush re-emerged. For some of us, the accuracy of the examples given in the article mattered less than the overall message the article carried, whereas for others, the lack of accuracy in detail made the overarching message less convincing. We all agreed that the article indeed gives food for thought.

Have we humans really come so far in our destruction of the biosphere that the only means to conserve and restore its livability (first to other species and then to ourselves) is to start artificially modifying species, habitats and natural processes? How should we evaluate the risks of releasing artificial DNA into natural processes? Given the fallibility of technologies, which seldom work exactly as envisioned in the design phase, what kinds of Pandora’s boxes will we be opening when creating species or habitats that natural evolution did not account for?

If this is not yet the case, what could be done? We discussed Dasgupta’s proposition to re-envision nature as an asset, to be included in the accounting of the types of capital we possess. However, despite some support for this solution among TSElosophers, there were two criticisms. First and foremost, fixing the problem of valuing money over everything else by endowing nature, too, with price tags is a bit like fighting fire with oil – in other words, trying to solve the serious problems caused by Modernism by inserting more Modernism. Instead of assessing nature in monetary terms, we should do more to make people take its intrinsic value seriously – not all that counts can, or should, be (ac)counted. The second criticism was born out of the first: fighting the problems with the same mechanisms that caused them can only succeed in postponing the inevitable. We could all agree that the more responsible avenue is to start working already today towards a paradigm shift in our fundamental values and in the anthropocentrically selfish and myopic lifestyle we adhere to – although ‘the Modernists’ in us would couple this approach with letting the economy treat nature as an asset.

While the description of both the past deeds of humans and the possibilities we now have at our disposal evoked sentiments of doom and gloom, not thinking about the choices we are currently making is not an option. We perceive human nature as such that the curiosity driving natural scientists to uncover all that is humanly possible is seldom balanced by the patience to think through the implications of using all the technologies we could potentially wield. We discussed that it falls to us social scientists to stay updated about the developing technologies and to take an active role in thinking through which of the things we could do we actually should – or should not – be doing.

As we humans are not exogenous to nature but a part of it, it can be argued that all the tools and technologies we have designed, and all the actions we have taken, are due to the evolutionary processes that made us what we are. As humans we are predestined to be representatives of our species and to act as the type of animal we are – to seek shelter, sustenance and comfort with all the means at our disposal, just as any other animal does. Hence, isn’t it just a fluke of evolution that made us capable of changing our environment more than beavers or ants can?

As the type of animal we are, we are capable of both destruction and creation beyond the possibilities of other species. The very interesting question therefore is which of these sides of humanity, the destructive or the creative, prevails when we are faced with the scale of the changes we have wrought on the biosphere that also maintains our lives. Is our collective survival instinct strong enough to turn the tide of destruction? Because ultimately, though we are a tougher breed to kill than even rats or cockroaches, the kind of biodiversity that existed when humans evolved is still necessary for our survival.

We need the kind of air to breathe that Amazonia produces for us, and the kind of water to drink that gets filtered by untarnished soil, and dependence on technology to produce these comes with unimaginable uncertainties. The attempt to apply Nature 3.0 technologies to support assisted evolution and de-extinction leads to ethical and practical questions of considerable importance, in both positive and negative terms. And yet, in the end, we can ask ourselves whether the so-called ‘unnatural’ human-made artifacts are actually a very natural and very normal part of evolution on this planet.

Hues of normativity in positive economics

TSElosophers meeting 28.5.2021. Erkki Lassila, Joonas Uotinen, Kari Lukka, Milla Unkila, Otto Rosendahl.

Reiss, J. (2017). Fact-value entanglement in positive economics. Journal of Economic Methodology, 24(2), 134-149.

Summary

The article by Reiss (2017) outlined historical developments of thinking in positive economics based on David Hume’s ”fork”, i.e. the separability of facts from values. Hume’s fork maintains that factual statements can be known without referring to non-epistemic values such as beauty, good, right, bad and wrong.

Hume’s fork is frequently applied in generating the distinction between normative and positive economics, argued forcefully for instance by Milton Friedman. In this mode of thought, the former are arguably about values and the latter about scientific facts.

The article reiterates, through examples from various aspects of conducting research, the argument, already made elsewhere, that economics is hardly able to provide a purely positive theoretical body, and that intricate statements presented as such are only seemingly so. A central theme of the article is that, whenever generalizations are made beyond immediate observations such as “this leaf here is green”, we may not be able to avoid the inclusion of non-epistemic values.

Our discussion

After a many-sided and also critical discussion, TSElosophers came to the conclusion that Reiss manages to do what he wishes to achieve: supporting the blurring of the distinction between positive and normative economics. However, neither Reiss nor we are saying that science would become impossible to distinguish from opinion, as Reiss elaborates a cognitivist metatheoretical stance on ethics that emphasizes the human capability for reasonable argumentation about normative statements as well.

Thus, the blurring of the separability thesis enables a more active role for economists, who may now discuss the normative hues that unavoidably shade scientific inquiry – stemming, for example, from the underdetermination of epistemic values and from expectations about the use of theories. Acknowledging this would allow economists to excel in this regard and to leverage their role in society with greater awareness and transparency about the values affecting their theoretical work.

Most importantly, the blurring of the separability thesis need not become a crisis in economics. Even if positive and normative statements cannot be sharply distinguished, some statements are still more based on facts than others; and academia places considerably more weight on the epistemic values in knowledge-production than happens outside of it.

TSElosophers sympathized with the use of the separability thesis as a rhetorical device, although it does not fully capture the complexities of science-making. Hence, it seems to function partly as an unrealistically straightforward way to distinguish between more and less epistemic argumentation. Employing the separability thesis may be helpful when economists’ theorizations are challenged in societal discourse; they still need a way to signal the convergence of their theorizations with the epistemic body of economics.

It was pointed out, however, that careless usage of such rhetorical devices may corrupt the credibility of science in the long run. They are always political because they exclude some approaches rather than others from the discussion without a watertight basis.

We concluded that a critical mindset and keeping one’s conflicting non-epistemic interests on a tight rein should be among the key strengths of all academics – regardless of whether one supports or rejects the fact-value dichotomy.

Science and values with dawning virtue ethics

TSElosophers meeting 22.4.2021. Joonas Uotinen, Kari Lukka, Maija-Riitta Ollila, Milla Wirén, Morgan Shaw, Otto Rosendahl.

Hicks, Daniel J. (2014). A new direction for science and values. Synthese, 191(14), 3271-3295.

Summary

Which values influence science and in which ways? Which values may legitimately affect science, and which values have an illegitimate effect?

Daniel Hicks presupposes that values are an integral part of scholarly research. “Many philosophers of science now agree that even ethical and political values may play a substantial role in all aspects of scientific inquiry.” (p. 3271) Discussions on isolationism and transactionism are not very relevant anymore. (Isolationism believes that ethical and political values may not legitimately influence the standards for acceptance and rejection of hypotheses. Transactionism, the negation of isolationism, states that some ethical and political values may legitimately make a difference to the standards of acceptance and rejection.) Daniel Hicks thinks that there are both legitimate and illegitimate values affecting science, and we should be able to distinguish them from each other.

Values can affect science in different phases of the research process: the pre-epistemic phase, the epistemic phase, and the post-epistemic phase. Hicks states that the distinction among pre-, post-, and epistemic phases is useful for some analytic purposes but cannot be directly applied “to the concrete complexities of the real world.” (p. 3289) In this paper, Hicks uses an Aristotelian framework to capture these complexities of real life, in the footsteps of Alasdair MacIntyre and Philippa Foot.

Hicks compares two cases to make his point: feminist values in archaeology and commercial values in pharmaceutical research. In the Feminist Case, self-identified feminist scientists criticized androcentric presuppositions and research agendas. This project brought about new contributions, which changed archaeological practices and the understanding of the cultural past. The Pharma Case describes the impact of commercial values on science. One example deals with the results of a clinical trial of an antidepressant. The trial did not show the effectiveness of the drug. However, the results were presented in a way that suggested the drug was effective. In the preliminary sketch presented by Hicks, the impact of values is legitimate in the Feminist Case and illegitimate in the Pharma Case. In the more detailed analysis that follows, Hicks deals with three major approaches to legitimacy vs. illegitimacy. In his conclusion, he outlines his own approach, which he claims emphasizes ethics alongside epistemology.

Hicks presents useful theoretical tools for analyzing values – e.g. direct, indirect, and cultural impacts of values – and finds inadequacies in using these tools. He implicitly suggests an Aristotelian virtue-based ethical framework to supplement these theories with an ethical perspective. Hicks reaches beyond the discussion on science and values: researchers should emphasize the virtues of good scholarship and grow as scientists to full maturity.

Our discussion

Some TSElosophers were more sympathetic to the dawning virtue ethics in science than others, but all agreed that the approach is surely thought-provoking – yet far from unproblematic. In particular, Hicks’ position on the Feminist and Pharma Cases seems to be predetermined by his own values.

In general, Hicks claims to avoid “pernicious relativism” (p. 3291) but fails to provide argumentation for this claim (and openly admits as much). Relativism is apparent in Hicks’ own distinction between constitutive (e.g., seeking truth in science) and contextual (e.g., profit-making in the Pharma Case) values, which seems to allow the corruption of epistemic values. From the point of view of science, epistemic values are constitutive, while from Hicks’ point of view, the epistemic values were contextual for the agents in the Pharma Case. In contrast, TSElosophers insisted that epistemic values are, and should be, at the core of research in the epistemic phase.

The three different phases of the research process inspired a vivid conversation. At which stage do values impact the research process? In the Feminist Case, values influence the pre-epistemic phase. During the pre-epistemic phase, “research programs are chosen, hypotheses are formulated, and experiments are designed and conducted.” Initially, there is no evidence to back up the new paradigm, research program or theory, but it is produced in the course of the research process. In the Pharma Case, a severe problem arises when unwanted values influence the epistemic phase. In the epistemic phase, “hypotheses are evaluated in terms of their relationship to empirical evidence, among other things, and accepted or rejected.” (p. 3273)

For TSElosophers, the pre-epistemic phase turned out to be about paradigmatic or attitude-related matters and about engaging in everyday research practice. For instance, the currently ruling publish-or-perish mentality encourages an instrumental interest in science. Researchers need to pay heed to, e.g., the paradigmatic or methodological tastes and values of journal editors and the potential referees of their papers. Researchers frame their research and articles in such a way that they might appeal to the publishers. If the effect extends to the epistemic phase, so much the worse.

What about the values of the epistemic phase? Is it possible that different epistemic values conflict? Or can there be important cases of epistemological underdetermination? For example, scientists choose methodologies under considerable epistemological underdetermination in long-term prediction models of complex phenomena, such as climate change. Epistemological uncertainties leave elbow room for other decision-making procedures. In those cases, should one, for example, exaggerate the effects of climate change if the evidence is ambiguous? The suggested solution was to communicate the unknown, or the degree of uncertainty, more effectively, e.g. with a scenario analysis. In sound science, we report the range of uncertainty. Another beacon of hope is the self-correcting process of science.

Finally, TSElosophers scrutinized the issues related to the third phase of the scientific process, the post-epistemic phase, “during which accepted hypotheses are utilized in other research (whether to produce more knowledge or new technology or both); this phase also includes the impacts of the accepted hypotheses on the broader society.” (p. 3273) For the sake of argument, let us assume that there might someday be research that could fuel racism. For example, we might have studies that corroborate the hypothesis of races based on biological differences, for instance regarding IQ. Should we refrain from publishing such research, or even from conducting it, foreshadowing the likely ensuing, problematic public discourse or cultural processes? Another tricky example is the Manhattan Case, the project that resulted in creating the nuclear bomb. It would not have been possible without Einstein’s theory of relativity. Should we stop doing any research that might lead to disastrous applied science and technology?

TSElosophers concluded that ethics are embedded in the scientific process and tend to be included in all three phases. That said, in the epistemic phase, precisely epistemic values should be kept as dominant as possible – this is the very lifeline of scholarly work, without which it ceases, sooner or later, to be meaningful. However, both the results and the ethical justifiability of the research process need to be taken into account. An example of an unethical process is provided by the utterly dehumanizing studies on human subjects by Josef Mengele.

TSElosophers wrapped up by realizing that science is both a logical process and a historical one. The cultural context has an impact on the concepts we use and the values we employ. However, epistemic values are the inalienable core of science: prioritizing the truth, in the sense of the purpose of well-grounded scholarship, is the right procedure in the ethics of science. It is also pragmatically the most prolific policy: in the long run, trustworthiness pays off. Truthfulness is the basis of reliability – and it can often be communicated most effectively by direct reference to the epistemic phase and epistemic values, even though Hicks is correct in that advanced scholarship does well to analyze the entire research process with a broader ethical framework.

How do forests think?

TSElosophers meeting 12.3.2021. Erkki Lassila, Kari Lukka, Maija-Riitta Ollila, Morgan Shaw, Otto Rosendahl.

Kohn, Eduardo (2013) How forests think: Toward an anthropology beyond the human. University of California Press, Berkeley.

Summary

In How Forests Think, Eduardo Kohn explores the question of how to create an analytical framework for anthropology that can include both humans and nonhumans. Kohn’s investigation is based in his long-term fieldwork in Ávila, a village of Quichua-speaking Runa people in Ecuador’s Upper Amazon. Kohn brings his readings of the semiotic theories of pragmatist philosopher Charles Peirce, the application of Peirce’s work to biology by Terrence Deacon, and a number of other theoretical reference points into conversation with his observations from Ávila. A central contention of the book is that “seeing, representing, and perhaps knowing, even thinking, are not exclusively human affairs” (Kohn 2013, 16). The TSElosophers read the book’s Introduction and first chapter, “The Open Whole”, which examine what it would mean for anthropology to take this claim seriously. This section also opens a number of questions about what doing so might tell us about how to live as humans in a world inhabited by many other kinds of living beings.   

Our Discussion

Kohn problematizes a conventional view of anthropology, arguing for the development of an “anthropology beyond the human” based on the assumption that there is more continuity between anthropos and other forms of life than has been recognized in the past. Kohn argues that by focusing exclusively on the processes of meaning-making that are unique to human language, the human sciences have so far overlooked the many ways in which all life is produced through the creation and use of signs (i.e. semiosis). “Provincializing” linguistic representation based on the type of signs that are used by humans alone would treat language as a very special human case of what Kohn holds to be a vastly more widespread phenomenon. Thus, this book invites us to entertain the possibility that other living beings, through their own ways of making sense of and representing their surroundings and relations, also think. This suggests we need to pay attention to the specific ways in which even, for instance, forests think alongside us, but not exactly like us.

Recurring questions surrounded Kohn’s elaboration of an extensive and idiosyncratic theoretical framework, which often felt cumbersome. Was all of it really necessary, and if so, where did this perceived need come from? Is it primarily relevant to a readership that is not familiar with ongoing paradigmatic debates within anthropology? The extended treatment of Peirce’s semiotics and conception of realism was helpful to those of us who were completely unfamiliar with his work, but Kohn’s strategy of interweaving it with his ethnographic material was not always successful, leaving many key points ambiguous. However, to be fair, since we read only a portion of the text, it is possible that the ideas opened up in these early sections are dealt with more fully in the remainder of the book.

Kohn periodically critiques other approaches he situates within the Posthumanities, particularly the work of Bruno Latour and Jane Bennett. He charges, in particular, that Latour mistakenly brings the human and nonhuman together using an “analytic of mixture” that elides meaningful differences between language and things (Kohn 2013, 56). Unfortunately, this line of argumentation is not further developed, remaining too vague to consider more thoroughly. What we can be confident in is that Kohn aligns himself with some of the aims, but not the means, of these other thinkers, while making the case that his own perspective is a viable alternative.

Our discussion pulled on the numerous loose threads left dangling in a tantalizing way from Kohn’s text. How does the concept of consciousness fit into his framework? Is self-awareness just a rare and exceptional aspect of becoming a self? Is Kohn completely rejecting the agency of inanimate matter even as he tries to more firmly ground it for living things? Should thinking be stretched beyond cognition in this way, or would another verb have avoided unnecessary confusion? Those TSElosophers hungry for answers will look to what insights the next chapters of How Forests Think hold. 

Overall, the TSElosophers found this book to be difficult but intriguing reading. Those of us who were enthusiastic about it focused on its potential to inform efforts to rethink human environmental ethics in the Anthropocene. However, we also questioned whether the particular approach Kohn takes creates the most fertile ground for new ideas in this area. In particular, the question was raised whether the book’s worry is only a worry about an allegedly too narrow window of analysis in anthropology – it is already well known in many other fields of science, not least biology, that all living creatures communicate with their environment, yet not with human language. A strength of this work, however, is the way it got us talking about the possibility of seeing our human relations to the wider world in a new and surprising way.

A glance at various performativities of performativity

TSElosophers meeting 1.2.2021. Elina Järvinen, Erkki Lassila, Kari Lukka, Maija-Riitta Ollila, Morgan Shaw, Otto Rosendahl, Toni Ahlqvist.

Gond, J. P., Cabantous, L., Harding, N., & Learmonth, M. (2016). What do we mean by performativity in organizational and management theory? The uses and abuses of performativity. International Journal of Management Reviews, 18(4), 440-463.

Summary

The concept of performativity has been interpreted in many ways since John Austin introduced the idea of the ‘performative utterance’ at the beginning of the 1960s. This paper by Gond, Cabantous, Harding and Learmonth takes up this rather complex concept and tries to summarize the ways in which different versions of it have been utilized by scholars in the field of organization and management theory (OMT). There are indeed several different ways in which the original version has been interpreted. Gond et al. introduce five conceptualizations of performativity in their paper:

  1. doing things with words (Austin);
  2. searching for efficiency (Lyotard);
  3. constituting the self (Butler, Derrida);
  4. bringing theory into being (Callon, MacKenzie);
  5. sociomateriality mattering (Barad).

These five conceptualizations are linked to four so-called ‘turns’ in OMT, which according to Gond et al. can be identified as drivers of the upsurge of performativity studies in OMT. These four turns are the ‘linguistic turn’, the ‘practice turn’, the ‘process turn’ and the ‘material turn’. Each of these ‘turns’ can then be linked to certain interpretations of performativity. For example, the ‘linguistic turn’ may be linked to a non-representational view of discourse, whereas the ‘practice turn’ relates more to an interest in the actual doing or acting of organizational actors.

After providing these conceptualizations and the possible reasons for their success, Gond et al. make a distinction between two dominant ways of using the theoretical concepts of performativity in OMT. The first is described as a ‘one-way process’, where performativity concepts are more or less just ‘borrowed’ from one specific source domain and then used to generate new knowledge in the organizational context. The other is said to be a more sophisticated ‘two-way’ exchange process, used for example for theory-building.

The article presents several examples of OMT studies that have approached performativity from a certain angle, which in itself can be quite illuminating for anyone who previously had only a narrow view of performativity. By presenting these different uses of theories and example studies, Gond et al. try to provoke a ‘performative turn’ in OMT, which might ultimately “unleash the power” of the concept itself “to generate new and stronger organizational theories”. However, even if this aim is clear, the article has some deficiencies which might hinder its own performative aims.

Our discussion

Perhaps not so surprisingly, the concept of performativity was familiar to all TSElosophers who joined the discussion. However, the perspective on performativity varied between TSElosophers depending on their background and their earlier readings related to these different concepts of performativity. All participants were familiar with Austinian performativity as doing things with words and the famous example of “I pronounce you husband and wife”. While Austin was a common read for all, it was different with the other foundational concepts and authors of performativity. Some were more familiar with the studies of scholars such as Latour, Callon and MacKenzie, while others knew the texts of Barad, Butler and Derrida, or even Lyotard, better. Many had read Barad before (see the TSElosophers blog post from last May). Therefore, everyone’s perspective on performativity was perhaps a bit narrower than the overall collection presented in this paper. This breadth was one of the appreciated features of the paper on which TSElosophers agreed.

Participants also viewed positively the presentation of how OMT scholars use performativity in their own domain. This section echoed some similarities with Lukka and Vinnari’s (2014) idea of distinguishing between domain theory and method theory as two different roles that theories can play in a piece of research, even if it presented the idea in a somewhat different way. Gond et al. criticize the way OMT scholars have mostly borrowed performativity concepts from other domains without any attempt to add anything to the concepts themselves. Therefore, the paper seemed provocative not only in urging OMT scholars to make more use of these different performativity theories as such, but also in suggesting that OMT researchers should aim to contribute to the theories taken from other domains and thus try to generate stronger organizational theories.

Lyotard’s performativity as a search for efficiency was considered by TSElosophers to be the odd one out, ill-fitting in a collection of authors otherwise critical in their approach to performativity. One participant pointed out that the reason for this could be that the authors had selected their literature simply by picking up words related to performativity. There was a suggestion among the TSElosophers that the most fruitful way to view Lyotard’s perspective might be to understand performativity as a form of self-optimization or self-improvement, striving towards efficiency through measurement and optimization of the input/output ratio. The demand for efficiency creates circumstances for turning people into “their own tyrannical boss”, as critical management scholars Cederström and Spicer (2017) put it in their mirth-producing column in the Guardian.

Our discussion also took issue with the paper’s dismissive stance towards the notion of critical performativity by Alvesson, Spicer and others. Gond et al. seemed to understand their critical performativity as supporting precisely the agenda that Lyotard had put his critical finger on: making organizations show improved performance. In contrast, TSElosophers felt critical performativity is more broadly about using the performative features of human action for something that is socially desirable – thereby often being critical of narrowing goals to mere efficiency improvements, which would effectively tend to uphold the societal status quo.

In conclusion, TSElosophers thought Gond et al. (2016) was, despite some misunderstandings, a good read on performativity, because it provided a broad overview of the various conceptualizations of performativity.

References

Cederström, Carl and Spicer, André (2017). We dedicated a year to self-improvement: here’s what it taught us. Retrieved from https://www.theguardian.com/commentisfree/2017/jan/02/self-improvement-optimization.

Lukka, K. & Vinnari, E. (2014) Domain theory and method theory in management accounting research. Accounting, Auditing and Accountability Journal. 27:8, 1308-38. DOI: 10.1108/AAAJ-03-2013-1265

Being smart about the role of theory (in top journals)

TSElosophers meeting 3.12.2020. Erkki Lassila, Kari Lukka, Maija-Riitta Ollila, Morgan Shaw, Otto Rosendahl.

1) Straub, D. W. (2009). Editor’s Comments: Why top journals accept your paper. MIS Quarterly, iii-x.
2) Avison, D., & Malaurent, J. (2014). Is theory king? Questioning the theory fetish in information systems. Journal of Information Technology, 29(4), 327-336.

Summary

We read two information systems science articles with contrasting positions on the role of theory in top journal publications. Straub (2009) is concerned that only a small minority of researchers are capable of publishing many articles in the top journals and gives researchers general advice on how to publish in them. Avison and Malaurent (2014) describe the counterproductive implications of everybody striving to meet the narrow criteria set by top journal publishing. They criticize Straub’s catchphrase “theory is king” and consider that accepting “theory light” articles in the top journals would benefit information systems science.

Our discussion

Straub’s (2009) article presents his view on the four requirements of, and the six enhancements to, publishing in top journals. His requirements emphasize newness, nontriviality, thematic popularity and the role of theory. Enhancements include a familiar structure, fine-tuning and a constructive relation to the “major movers and shakers”. Although all of these are, at face value, good pieces of advice in themselves, the article fails to consider how the instrumentalist nature of the given advice might produce problems within the academe regarding good scholarship.

Straub’s list reveals how top journals disincentivize the creativity needed to pursue scholarly discoveries. Even though Straub calls for finding non-competed “blue ocean” spaces through theorization, the newness rule conjoins with the conservativeness rules to water down most article contributions to incremental gap-spotting. Also, the catchphrase “theory is king” encourages feigning theoretical development with unnecessary theoretical complexity. The formally presented ‘theory contributions’ (these days strikingly often in the form of precisely three of them!) are often actually forced and artificial in terms of their true content. The discoveries of an entire discipline might be prevented if other journals – in their quest to increase credibility and ranking – mimic narrow top journal requirements.

Avison and Malaurent (2014) trace the issues of the “theory is king” approach and suggest a mitigation of top journal requirements. Their paramount argument implies that theory-driven pursuits tend to perceive as visible and real only what the theory preconditions. Consequently, they suggest that articles without a strong focus on theoretical contribution might also be appropriate for publication in top journals if, for instance, the journal editors perceive the empirical content of the piece as exciting and novel in itself, or expect that theoretical contributions can ensue from the discussion inspired by the article.

Avison and Malaurent (2014) adopt a radical position, since they treat theory as a uniform argument pointing in a clear direction, although it is much more constructively seen as a multifaceted structure that offers a wide range of possibilities. They fail to consider how “theory is king” has likely emerged as a contrast to studies that are very empiricist and descriptive, have no real direction and do not develop any meaningful argument. Hence, in TSElosophers’ view, their catchphrase “theory light” bears the risk of going too far in accepting mere empirical reports as scientific studies.

In sum, TSElosophers raised concerns regarding the catchphrases of both articles. Both catchphrases serve a certain sub-segment of researcher interests within the discipline and, hence, can easily politicize the discussion. Instead, we suggest a ‘theory smart’ alternative that every article needs to fulfill. Theory smart follows Straub’s guidelines to the extent that they encourage developing an interesting argument that skillfully relates to what has been argued before in the specific domain. Theory smart recognizes Avison and Malaurent’s concerns to the extent that the central driver of the study, if one needs to be assigned, should be nothing other than the well-motivated matter of concern explicated in the study’s research question.

Precious, precarious democracy

TSElosophers meeting 23.10.2020. Erkki Lassila, Kai Kimppa, Kari Lukka, Maija-Riitta Ollila, Milla Wirén, Otto Rosendahl.

Diefenbach, T. (2019). Why Michels’ ‘iron law of oligarchy’ is not an iron law – and how democratic organisations can stay ‘oligarchy-free’. Organization Studies, 40(4), 545-562.

Summary

Diefenbach’s article aims to refute Michels’ Iron Law of Oligarchy, which states that the essence of organization “gives birth to the elected over the electors, of the mandatories over the mandators, of the delegates over the delegators.” The article divides Michels’ prosaic writings into six arguments:

1. Organisation is based on division of labour, leading to specialization
2. Specialisation creates specialists and leadership must be provided by specialists
3. It leads to a distinction between superiors and subordinates
4-6. Professional leaders cannot be influenced or controlled by the subordinates, strict compliance becomes a necessity for subordinates and leaders form a cartel or ‘closed caste’, making their ruling permanent

These points show a compelling slide from democracy into oligarchy. Moreover, the Iron Law cannot be empirically disproven, since any extant democratic organization might later turn into an oligarchy. Therefore, Diefenbach sets out to counter each of the above points on theoretical and methodological grounds.

The article approaches an important concern, but suffers from structural shortcomings. It is motivated to oppose the performativity of the Iron Law, which is sometimes simplistically applied e.g. to provide ironclad justification for the oppressors or a solid rationalization for the passivity of cynics and spectators. Ironically, the article itself adopts some simplistic stances due to its mechanistic approach and short length.

Our discussion

In contrast to the technical-functionalist approach of the article, TSElosophers’ discussion set aside the point-by-point structure and concentrated on what was more or less overlooked in the article: power considerations, the scale and type of organizations, and perspectives from social psychology.

We felt that Diefenbach’s definition of legitimacy would have needed to include power considerations. Generally, we suggested that the underlying driver of processes such as the emergence of oligarchies is the seeking of powerful positions and, once gained, the keeping of such positions intact. Diefenbach emphasized the acceptance of internal and external stakeholders, but remained mute on the relative power of stakeholders. Although oligarchy draws its support from the ruling elites and the related beneficiaries (plutocracy, class ideology, nepotism, etc.), democratic legitimacy comes from supreme power being subjugated to the tiniest of powers, especially the power of individual persons. Oligarchies can hardly demonstrate that their supreme power is subjected to a network of powers that includes the poor, the sick and the nonconformists.

Diefenbach soon abandons the starting point of discussing all organizations in favour of pitting the (varying) legitimacy of democratic organizations against the (varying) illegitimacy of oligarchic organizations. The TSElosophers’ discussion moved beyond this distinction to consider other important organizational qualities, such as scale and type. We agreed that the scale of organizations positively correlates with the prevalence of oligarchy; it requires less insight and institutional work to keep smaller organizations democratic.

Also, the legitimacy concerns of political, business, educational, scientific and other types of organizations differ. For example, many business organizations are ruled by the few over the many with few qualms about their legitimacy. To the extent that a business organization is perceived to serve customers who are symmetrically informed and provided with competing choices, it gains legitimacy, as its survival depends on paying attention to the viewpoints of a plurality of stakeholders. In sharp contrast, we feel that scientific and educational organizations, including the University of Turku, too often centralize and standardize, although effectiveness could be substantially improved with more grassroots democratic administration and teaching practices.

We further contextualized the topic with psychological perspectives. One in our group found evolutionary psychological hypotheses useful for considering the gap between the personal traits of good leaders and those of people adept at climbing the career ladder. Another referred to Fromm’s book ‘Escape from Freedom’, which posits a substantial minority of humans as afflicted by behavioral sado-masochism: with tendencies to desire strongman leaders and to act as one if placed in a superior position. Still another emphasized prospect-theoretical uncertainty aversion: superiors might fear vengeance if their power position weakens, and subordinates might continue to tolerate the ruler, if only because that is the devil they already know.

Overall, the article diluted the Iron Law into an Iron Threat of Oligarchy. Not having read Michels’ original text on the Iron Law, we remain unsure whether any refutations were actually made or whether Diefenbach merely framed the same issue with more positive overtones. The novel framing emphasizes the constant need to take care of democracy. As such, Diefenbach’s article is best read as a list of threats against democracy and of the key mechanisms for internally nurturing democracy in organizations.

Revelations on human kindness

TSElosophers meeting 23.9.2020. Toni Ahlqvist, Mohamed Farhoud, Elina Järvinen, Kai Kimppa, Erkki Lassila, Kari Lukka, Maija-Riitta Ollila, Ekaterina Panina, Otto Rosendahl, Morgan Shaw, Milla Wirén.

Rutger Bregman: Humankind – A hopeful history

Summary

Humankind is a world-explaining opus aimed at wide audiences in the style of Jared Diamond or Yuval Noah Harari. While Bregman draws from research, the book is not academic but unashamedly popular, with the mission of making one point. In addition to making his point, Bregman also discusses its implications and concludes with an easily digestible list of suggestions for all readers to consume at a glance.

The key point of the book is that while we (as humanity) have learned to view ourselves through the “veneer theory”, which proposes that underneath a thin veneer of civilization we are all selfish savages, the very contrary is true. Fundamentally, Homo sapiens is a kind creature that has accomplished all its collective achievements through the collaboration-enabling power of that kindness.

To prove his point, Bregman showcases some of the most notable examples used to argue for the underlying savagery of the human, takes them apart, and shows how completely different outcomes would be at least equally possible. In discussing the implications of his key point, he draws from the power of performativity – claiming that should we consider each other as trustworthy and kind, we would be able to create a society where trustworthiness and kindness reign.

Our discussion

To begin with the main point of the book, the inherent nature of the human, the TSElosophers represented three standpoints, none of them, however, subscribing to the veneer-theory view. First, part of us represented the choir to which Bregman preached: yes, humans are good and, when in doubt, should first and foremost be treated as such – even when in some contexts positivity and kindness are viewed as naivete. Secondly, some of us pointed out that good and bad are constructed and highly context-specific: none of us is ever either-or, but depending on the combination of setting, actions and underlying traits, either better or worse outcomes follow. Thirdly, there was also the view that humans are good but sinful, meaning that regardless of our aims to strive for goodness, we are fundamentally imperfect.

In terms of the technicalities of the book, the TSElosophers agreed that Bregman developed his argument by sampling certain cases, not by building a logically or statistically iron-clad theoretical argument. Some of us liked and accepted the bigger picture that emerged as a result of this eclectic and case-bound effort, whereas some of us had difficulties in swallowing a) the eclecticism resulting in superficiality instead of depth, b) the lack of solid theorizing, sometimes visible in circular logic, or c) the seemingly thin understanding of some of the building blocks (like the writings of Hobbes, Rousseau or Dawkins), arguing that while the effort is laudable, can such a bigger picture be trusted when the connected dots are not rightly positioned (understood)?

One of the themes at the forefront of our discussions was the micro-macro problem, also discussed as the problem of “us vs. them”, or the problem of aggregation. As we are all in agreement that there are notable societal-level problems afoot, to what extent is it possible to try to fix them by attempting to change the individual? While certain problems can be solved with kindness extended to the “us” near me, can the scope of “us” be extended to encompass such a number of both human and non-human actors and entities as to actually nudge things towards a better constellation?

As a spin-off of the micro-macro theme, the TSElosophers’ previous discussion around the concept of psychopolitics (in the book of that name by Han) was brought to the fore: in rolling the responsibility for kindness onto the shoulders of the individual, are we ultimately just contributing to the trend of internalizing social governance mechanisms? Can kindness become the type of “superficial goodness” that individuals internalize and those in power harness to continue suppressing the individual into a mere source of revenue upholding the capitalist power structures? (See also the TSElosophers’ discussions on Zuboff.)

However, this line of thought was clearly not the one on Bregman’s mind: we detected nuances of anarcho-syndicalism in his writing. To us it seemed that Bregman tackled any macro-level problems by proposing less structure, less organizing, more grassroots democracy and power to the little people. We TSElosophers were somewhat doubtful whether macro-level problems can be solved merely by erasing structures, and chatted about the ‘iron law of oligarchy’: as there are more people (and non-people) on the planet than can be accommodated in any setting of a compassion-based “us”, some structures are (unfortunately) needed; and as long as there are structures, there are hierarchies; as long as there are hierarchies, there are those with more power than others; and as long as some have power, they do not want to give it up.

Nevertheless, the book sparked several individually valuable insights. First of all, the power of performativity coupled with our power as teachers and researchers: in our teaching, do we continue to channel old theories built on the assumption of humans as inherently lazy and self-advantage seeking? If we continue to do so, are we just passing along ‘truths’ or actually contributing to upholding a world where such individuals reign? The taking apart of the famous Stanford prison experiment, and of Milgram’s findings, raised thoughts about how important it is for us researchers to strive for ethical research, to ensure the validity of whatever we offer as building blocks for the next knowledge-creation efforts – and to be self-reflective about our own basic assumptions that bleed into the findings we thus offer.

Additionally, a note of concern about the role of psychology was raised: it seems that many of the theories currently governing our societal operations across several fields are grounded in findings from psychology without questioning the validity of those findings. Maybe it is time both to question the role of psychology and to be more critical about its findings, especially when they are aggregated into the principles governing society-level structures, such as economics.

To conclude, we saw the value of these types of popular books as they can help seed beneficial discussions also among such people who do not spend their time perusing the (sometimes obscure, but) profound and nigh flawlessly argued academic texts. Some of us also felt that the importance of this book emerged from the very personal level feelings we had after reading the book: to some of us, the book read as a beacon of hope, regardless of its shortcomings as a watertight bundle of theoretical and logical argumentation. Such feelings of hope are welcome, also to us researchers.

Meeting the universe halfway

TSElosophers meeting 15.5.2020. Ekaterina Panina, Erkki Lassila, Kari Lukka, Milla Wirén, Morgan Shaw, Otto Rosendahl, Toni Ahlqvist

Barad, K. (1996). Meeting the universe halfway: Realism and social constructivism without contradiction. In Feminism, science, and the philosophy of science (pp. 161-194).

Summary

Inspired by the philosophy-physics of Niels Bohr, Karen Barad introduces a new notion of realism, which she calls agential realism. She positions herself in relation to scientific realist and feminist-constructivist approaches, and argues for the inseparability of ontological and epistemological issues. Barad’s insightful reading of Bohr’s understanding of quantum physics serves as a prelude to the introduction of the onto-epistemological framework of agential realism. By considering such broad philosophical issues as the role of natural and cultural factors in scientific knowledge production, the conditions for objectivity and the efficacy of science, Barad proposes a framework that is widely applicable across disciplines.
The framework of agential realism consists of four clearly drawn-out points.

  1. Agential realism grounds and situates knowledge claims in local experiences: objectivity is literally embodied.
  2. Agential realism privileges neither the material nor the cultural. The apparatus of bodily production is material-cultural, and so is agential reality.
  3. Agential realism entails the interrogation of boundaries and critical reflexivity.
  4. Agential realism underlines the necessity of an ethics of knowing.

The first point involves one of the central themes of feminist philosophy – the idea of embodiment. This idea refers to a constitutive relationship of the lived body to thought, contrary to Cartesian mind-body duality. Hence, in agential realism objective knowledge is situated knowledge.

The second point emphasizes the absence of opposition between materiality and social construction. Barad introduces the concept of intra-actions, which are contextually decided and enacted in-phenomenon. This concept describes reality as being in-between, and the inseparability of nature-culture, physical-conceptual and material-discursive.

According to Barad’s philosophy, a phenomenon is an instance of a wholeness, which includes both an object and agencies of its observation. However, there is no agential reality without constructed boundaries. The definition of theoretical concepts happens within a given context, which is specified by constructed boundaries, necessary for developing meanings. In addition, the described human conceptual schema becomes itself a part of a phenomenon.

Finally, agential realism emphasizes that constructed knowledges have real material consequences, which introduces the topics of accountability, responsibility, and ethics of knowing.

Our discussion

Overall, TSElosophers really liked the ideas presented in this paper and could buy almost all of the arguments for reconciling the realist and social constructivist approaches. Participants particularly took notice of the pragmatic approach to science, the view that knowledge systems should be a reliable guide to action. Agential realism seems to underline the efficacy of science and to establish the direction towards which knowledge is created, taking into account the constructive nature and ethical issues of scientific activity.

TSElosophers appreciated Barad’s development of an argument at the broadest level, applicable to all scientific fields. Many in our group noticed similarities with other approaches, such as Actor-Network Theory (Latour), Social Systems Theory (Luhmann), and process philosophy (e.g. Rescher, Chia). We discussed that this notion of the inseparability of nature and culture could be a more ‘natural’ and widely accepted principle in social, business and organizational studies than in the natural sciences. However, TSElosophers considered the discussion on ontological issues still relevant today, despite the article dating back over 20 years.

The majority of the discussion centered around the first point of Barad’s framework: agential realism grounds and situates knowledge claims in local experiences. Does the notion that knowledge is embodied and local impede theorizing? Some of the participants felt that agential realism has similar limitations in theorizing as ANT, while others emphasized that in agential realism theorizing happens in the detailed description of physical apparatus, as a description of the agentially positioned constructed cut between the object and the agencies of observation. One of the interesting points of agential realism is its emphasis on the agency of the material, as well as the interlinked agencies of the object and the observer. In addition, the locality of knowledge does not necessarily mean its spatial position, but relates to the constructed boundaries, so theorizing also happens in the boundary-making. Framing and focus matter, and they also have real-life implications. According to the third point of the framework, the theorizing also happens in-phenomenon, and hence detailed descriptions and framing become a part of theory building.

Another issue that prompted interesting discussion was Barad’s notion of objectivity as reproducibility and unambiguous communicability (in contrast to Newtonian objectivity, which indicates observer independence). TSElosophers wondered whether even this conception of objectivity might be unsuitable for social science. When the objects of research are humans with their own subjective meaning-structures, the question of reproducibility becomes a difficult one. Developing Heraclitus’s thought that “no man ever steps in the same river twice”, one can wonder whether, even with unambiguous communication, the reproduction of social phenomena is impossible and hence objectivity impossible to reach. The subjectivity of the researcher and of the concepts seems underexplored in this paper, perhaps because some of the starting points of the paper are in physics. As a counterargument, TSElosophers emphasized that the significance of differences in the object of study should be questioned and included in the description of the boundaries. What consequences do differences in the reproduced object of study actually make? What matters? Objectivity here relates to the description one is making, to drawing the boundaries. Objectivity in terms of unambiguous communication and critical reflexivity is more important here than perfect reproducibility.

Finally, TSElosophers returned to the fourth point of Barad’s framework – the implications of knowledge. We discussed the ethics of knowing and importance of considering material consequences of knowledge production.

Further reading

Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.

Kakkuri-Knuuttila, M. L., Lukka, K., & Kuorikoski, J. (2008). Straddling between paradigms: A naturalistic philosophical case study on interpretive research in management accounting. Accounting, Organizations and Society, 33(2-3), 267-291.

Nonsense in management studies

TSElosophers meeting 25.2.2020. Ekaterina Panina, Kai Kimppa, Kari Lukka, Milla Wirén, Mohamed Farhoud, Morgan Shaw, Otto Rosendahl

Tourish, D. (2019). The triumph of nonsense in management studies. Academy of Management Learning & Education.

Summary

Tourish approaches his own scientific discipline with an admirable dose of self-reflexivity. He particularly draws attention to the existence of a notable amount of nonsense in management studies publications. TSElosophers interpreted the article as a criticism of the academic publication system in general. We could even consider whether management studies, at least in the kind of critical analysis that Tourish undertakes here, may be ahead of some other disciplines. To what extent does self-reflexive communication improve the intellectual integrity and societal responsibility of a particular discipline, and thereby indicate that it is in fact in a healthy condition?

This article lists several issues in the publishing process. Often, Tourish argues, the style of writing is too complicated and contains pointless and artificial ‘theorising’. Authors needlessly complicate their language to create an impression of sophistication and theory development at an advanced level. This largely results from the theory contribution requirements of top journals, which are then mimicked by other less-influential journals in the forlorn hope of improving their reputation and ranking.

Furthermore, the rules of the game oblige reviewers to offer suggestions for improvement, which authors then feel obliged to follow whether or not they truly make sense for their piece. The result is often dysfunctional, as the publishing process drags on for years and successive drafts become increasingly nonsensical, with too many ideas packed into the same article. All this has become ‘the normal’, i.e., it has been naturalised in academe.

TSElosophers agreed that the measure of success for academics is moving away from communicating ideas with clarity and towards merely accumulating more and more publications. We discussed whether academics these days can escape the publication game as long as they do not yet have tenure. Firstly, those not playing the publication game are at risk of losing their research positions to those who are. Secondly, supervisors often feel obliged to help the careers of their students by teaching them the publication game, which encourages publication efficiency but risks the underlying assumptions of this research approach being accepted uncritically. Finally, it seems implausible that researchers who win tenured positions through success in the publication game will flexibly change their focus towards fixing problematic aspects of a system that has so far rewarded them.

Although we considered Tourish’s article relevant and well-written, we also noticed some shortcomings. For example, the article concentrates on symptoms without providing a compelling overall diagnosis. We would argue that the central issue is that instrumentalism in publishing has become too widespread and self-reinforcing. A major underlying explanatory factor for this might be the expectation to publish too many studies, which many of us do our best to respond to in a constant rush of cranking out research manuscripts one after another.

Additionally, the article seems to conflate grand theories with bad theories, despite correctly identifying grand theories as a major foundation upon which some construct nonsensical abstractions. Tourish contends that the “endless elaboration of distinctions” (Mills 1959) within grand theories takes practitioners too far from a logical route between theorizing and making observations. However, the examples of grand theories he presents are either cherry-picked or misrepresented. He refers to Mills’ (1959) efforts at translating the work of Talcott Parsons, which demonstrated that 555 pages of Parsons’ academese could be rewritten in 150 pages of simpler prose. Ironically, these concerns could be countered by referring to a student of Parsons, Niklas Luhmann, who created a (densely written) grand theory that takes observation as its central concept and as a starting point for making distinctions.

TSElosophers also disagreed with the representation of Lacan’s grand theory as innately nonsensical. First, the article defers to Chomsky as an authority figure to create a bias against Lacan. While Chomsky does have a point in criticizing language made complex in order to appear more academic than it is, both Tourish and Chomsky seem to find problematic text that is quite understandable to anyone who has actually read Lacan. Tourish’s article seems to argue that any reference to Lacan in a management article is nonsensical, even though the texts chosen are in fact relevant (although one can of course dispute whether the application of Lacanian theory is justified in this case). Interestingly, Tourish considers the non-Lacanian parts of the article brilliant and fascinating, but does not elaborate on why insights taken from reading Lacan could not have helped to construct these other parts. Unfortunately, this is not the only instance in which Tourish does not seem to understand that using difficult but rigorous concepts is sometimes necessary in order to avoid misunderstandings brought about by the unrelated connotations concepts may carry in everyday language.

Tourish concludes with an appealing message. He reminds us of our continued agency within the publication system. For instance, we are not at the mercy of any particular journal. We do not need to accept review processes that confuse our key ideas and arguments, as there are other journals, book publishers and niche strategies for surviving, or even flourishing, in academe. At times, it might be better to publish in newer and less acknowledged venues in order to change the field for the better – who knows, these venues might be the ones making a difference in the future. Tourish ends by encouraging us to write with “a little more humour, curiosity and passion”, something with which we were all happy to agree.


Power tensions dressed up as organizational paradoxes

TSElosophers meeting 28.1.2020, Kari Lukka, Milla Wirén, Mohamed Farhoud, Otto Rosendahl

Berti & Simpson: THE DARK SIDE OF ORGANIZATIONAL PARADOXES: THE DYNAMICS OF DISEMPOWERMENT, Forthcoming in Academy of Management Review

Summary

The literature on organizational paradoxes pivots on themes such as ‘change – stability’, ‘exploration – exploitation’ or ‘competition – collaboration’ and predominantly views the simultaneous existence of these contradictions as a source of beneficial organizational versatility. Berti and Simpson want to join the discussion by highlighting the ‘dark side’ of paradoxes, building on the view that the extant paradox literature falsely assumes similar agency on both sides of a paradox. Their claimed key contribution is that power disparity needs to be included in the discussion of organizational paradoxes, especially when, or if, endowing paradoxes with beneficial qualities.

Berti and Simpson present several genuine-sounding and relevant themes where power disparity in organizations indeed places employees between a rock and a hard place. They also go further and propose means of mitigating the ensuing problems. These discussions are well written, with clarity and insight, and merit ample attention.

However, there is one notable problem with the paper. We TSElosophers were not convinced that the paper is actually about paradoxes at all. Paradoxes mean, by definition, simultaneously existing polar opposites that cannot logically coexist. What the authors focus on instead are tensions, which can (at least in theory, if not in organizational practice) be solved, remedied, or mitigated. Some circular reasoning occurs: at least some, if not all, of the ‘paradoxes’ the paper discusses might actually stem from power differences, rather than power differences only entering the picture later on, when actors try to live with or deal with the paradox. Hence, resolving a paradox, or developing a remedy for it, must mean somehow changing the power difference in question, which would in turn mean that no paradox would then exist. The problem the paper actually addresses is the power disparity that creates tensions, not the tensions-as-paradoxes themselves. We learn little about the “dark side of paradoxes”, but a lot about the impact of power differences on the organizational actor.

In our discussion we pondered whether this apparent mismatch between the literature into which the authors have positioned their discussion and the discussion itself could be due to the twists and turns of the review process. Yes, paradoxes may have more scholarly appeal than tensions, but TSElosophers were left wondering whether the authors could originally have been quite so blind to the issues of consistency that our discussion spotted.

Dystopia or reality?

TSElosophers meeting 18 November 2019. Toni Ahlqvist, Elina Järvinen, Kari Lukka, Otto Rosendahl, Morgan Shaw, Ekaterina Panina, Milla Wirén

Psychopolitics: Neoliberalism and New Technologies of Power (2017), Byung-Chul Han

Summary

The overarching theme of “psychopolitics” in Han’s book pivots around the changes in the nature of power geared towards upholding the neoliberal regime of capitalism: the new (primarily digital) technologies have transformed the traditionally disciplinary power of ‘should’ into an internalized and thus invisible soft power of ‘can’. This builds on a few trajectories, each of which Han touches on in its own fragment: people are made to believe that they are ‘projects’ in need of constant improvement; the access to behavioral data granted by omnipresent digital technologies enables manipulating people psychologically in ways that benefit neoliberal capitalism; and this manipulation plays on emotions, exploits the embedded faith in the need for self-improvement, and draws on the urge to be ‘Liked’ when sharing one’s life in the gamified realm of digital social life. The label “psychopolitics” builds on the Foucauldian notion of biopolitics, with Han arguing that, compared to Foucault’s times, contemporary technologies do not stop at controlling the physical aspects of human life, but are insidious to the point of penetrating the realm of the psyche as well.

The book has more breadth than its relatively few pages would at first glance suggest. As a result, Han seems content to throw out numerous ideas and trace the contours of some connections, without digging deeply into any of the resulting openings or bothering to waterproof the underlying building blocks he uses in making his propositions. The book’s resulting shape divided the TSElosophers, leading to one of the most polarized discussions in the history of our little group. Some of us appreciated the ideas and connections on offer without missing more solid underpinnings, whereas others doubted whether any substantial ideas or connections could be put together from such flimsy building blocks.

From the viewpoint of the book’s proponents, the emerging picture of our current society is realistic: digitalization is a mighty tool for neoliberal powers that have reaped the benefits of capitalism in its diverse, ever-evolving forms throughout the ages, especially as it becomes a means of disguising manipulative and exploitative power in the invisibility cloak of ‘freedom’. This controlling mechanism of ‘freedom’ differs from genuine freedom because it is built on an embedded, but externally imposed, imperative of making people believe in and want something they are then given license to ‘freely’ pursue. This reading of Han understands capitalism as a systemic feature of most modern societies, within which most of us – rich and poor alike – simply are and act, most often without paying much attention to this fact at all. The way out, as suggested by Han, is to draw on the power of what he calls “idiotism”, namely the ability not to conform to environmental expectations even at the risk of looking like the ‘god’s fool’ or the ‘king’s jester’ – stupid from the viewpoint of the flock – which would allow one an agency that can function ‘outside the box’. These thoughts resonated with those of us who liked the book, especially as they crystallized some of the notions they have themselves recently been working with.

Opponents of Han’s approach among the group criticized the thinness of Psychopolitics as a work of scholarship, challenged its seeming negation of individual and collective agency, and questioned whether Han’s suggested “idiotism” abnegates social responsibility and possibilities for cooperation in ways that just echo individualist tendencies of neoliberalism rather than confront them.

Despite briefly raising points that explicitly reference Marx, Hegel, Kant, Foucault and Deleuze, Han’s treatment of these thinkers’ ideas often seems cursory, and in some instances suggests questionable readings of important points. At the same time, unacknowledged traces of Critical Theory haunt some aspects of the book’s discussion of freedom, ultimately leaving its conceptual role rather ambiguous. What Han ends up assembling, therefore, came across to some of the TSElosophers as a precarious stack of often underdeveloped and ill-fitting pieces. While sometimes interesting in themselves, for them they fail to cohere into a solid foundation for taking prior philosophical work in a new direction.

However, Han is far more successful in depicting a frightening dystopia in which the forces of capitalism oversee an omnipresent yet imperceptible psychological influence operation that harnesses populations to its (unfortunately largely unarticulated) ends. Under the regime Han describes, digital confession and zealous work on the self as a project lure all of capitalism’s congregants to ‘freely’ align themselves with its subliminally implanted agenda. The extensive catalog of superlatives Han employs (“utter”, “total”, “complete”) conjures this effort not as a development still in progress but as an unassailable finished edifice and thus a perfect exercise of power. Yet questions linger: who is actively writing the software behind this apparatus, and what are they aiming to accomplish with it? While Han’s ‘collective psychogram’ may be an emergent and impersonal phenomenon, the building and maintenance of the systems of surveillance, inducement, and monetization that operationalize it cannot – in the view of the book’s opponents – be as disembodied and devoid of strategic purpose as he would have them appear. Capitalism is, in the end, the work of capitalists, no matter how quiet, frictionless, and automatic the systems they create to carry out this effort may become.

This raises a question that united both the proponents and the opponents of the book: is ‘psychopolitics’ a ‘politics’ at all? Can there be a politics that seems to assume the total negation of most forms of individual and collective agency? Is Han to be taken literally in his assertion that the conditions of psychological influence he describes make any form of opposition, whether understood as class struggle or political resistance, completely impossible? Some in this group hold that this trap is less inescapable than Han makes out.

Han himself suggests one opportunity for escape: the embrace of “idiotism”. To be an idiot has historically been to be both holy and afflicted, enduring a sanctified suffering at the margins of society. An idiot is someone from whom almost any form of behavior is tolerated, and from whom next to nothing is expected or required. They are therefore ‘sub-optimal’, even superfluous, to the orderly workings of economic and political systems. Han seems to suggest that, when induced to ‘Like’, an ‘idiot’ can, like Melville’s Bartleby the Scrivener, evade the issue by simply stating “I would prefer not to,” becoming a puzzle that the wielders of Big Data will be entirely uninterested in solving.

But where does that leave us, especially as an ‘us’ that is more than a collection of self-optimizing ‘I’s? Is becoming irritants to the silent and effortless processes of capitalism, eventually banned from its hyper-efficient workings, but left free to make our cryptic pronouncements at the margins where we seek to preserve or rehabilitate our souls, really the best we can hope for? This is a question we hope the TSElosophers will return to in future discussions.

Misunderstandings about misunderstandings

TSElosophers meeting 18 October 2019. Kari Lukka, Milla Wirén, Otto Rosendahl

Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qualitative Inquiry, 12(2), 219-245.

Summary

Examination of the potential misunderstandings that still surround case study research is an excellent theme. We very much agree with Flyvbjerg that science needs meaningful, good-quality case-based research. The five misunderstandings he identifies are largely to the point:

1. General, theoretical (context-independent) knowledge is more valuable than concrete, practical (context-dependent) knowledge.

2. One cannot generalize on the basis of an individual case; therefore, the case study cannot contribute to scientific development.

3. The case study is most useful for generating hypotheses; that is, in the first stage of a total research process, while other methods are more suitable for hypotheses testing and theory building.

4. The case study contains a bias toward verification, that is, a tendency to confirm the researcher’s preconceived notions.

5. It is often difficult to summarize and develop general propositions and theories on the basis of specific case studies.

However, misunderstandings underlying Flyvbjerg’s own analysis of these misunderstandings make his account inconsistent and even misleading. His response to the first point is to emphasize the human learning process, which requires context-dependent practice, but in terms of the importance of theoretical (context-independent) knowledge, he seems to rely on a seriously outdated, narrow view of theory (Lukka & Suomala, 2014). This hides the potential of combining context-based empirical analysis of case studies with focused and motivated theorising.

Regarding the second point, Flyvbjerg correctly applauds the richness of narratives in creating understanding of phenomena. However, in his response he downplays the need for, and role of, generalising in the context of research, including case study research. In essence, he seems somewhat blind to the possibility of, and need for, drawing insights from a context-specific case to a higher level of abstraction, which then makes it possible to create generalized theorems. We discussed this point through Flyvbjerg’s example of how London can only become familiar by strolling its streets to gain an in-depth understanding of the city, and pointed out that while insight into the nature of its diverse alleys does indeed require visiting them, having a map of London is still valuable for gaining another type of understanding of the city as a whole.

The third misunderstanding is well analysed, as Flyvbjerg points out the diversity of case types that can help to unveil specific types of phenomena and scientific propositions. He suggests various theoretical sampling methods as a starting point for testing and building theories, although here again he neglects the deep theoretical insight that is needed for designing these non-random sampling methods.

Flyvbjerg’s answer to the fourth point was, in TSElosophers’ view, blatantly wrong: he states that case scholars are less prone to verification bias, and even goes so far as to claim the contrary. His response does not acknowledge that all studies, if not properly conducted, can be biased, and that this applies to case study research too. The risk is especially pronounced in case study research, where the researcher is often in close and long-lasting real-life contact with people in the field. Flyvbjerg should have started from accepting these premises and then openly examined ways to avoid the risk. In our view, these include being conscious of the risk and then employing the principle of ‘critical independence’ (McSweeney, 2004). And here, again, the underlying problem is Flyvbjerg’s serious omission of the role of theory and theorising, which prevents him from realising the possibilities of fruitfully combining rich case-based materials with theorising.

In responding to the final point, Flyvbjerg tries to defend fully descriptive, narrative case study reports, continuing the discussion from the first two points. However, their value as scholarly research needs to be questioned. Related to this point, he also unnecessarily limits his attention to the balance between keeping the case study account open and rich versus summarising the findings, favouring the former. The much more relevant matter to consider would have been the challenge of using rich case-based materials in elaborating a theoretically well-motivated research question and producing a meaningful theoretical argument as the conclusion of such analysis.

In sum, Flyvbjerg (2006) does not appreciate the potential and value of theorising in case study research. This makes him oscillate between overstating and understating his defence of case studies: he concludes that cases are a great tool in a world without context-independent knowledge, yet delimits their scholarly value with the argument that case studies cannot be summarized but only narrated. The outcome is therefore a series of further misunderstandings about theorizing. This is a pity, since his topic is of great relevance. His article has been cited over 14,000 times (Google Scholar, 23 October 2019), which creates a risk that many case study researchers have followed his misguidance along with his guidance.

References:

Lukka, K., Suomala, P., 2014. Relevant interventionist research: balancing three intellectual virtues. Accounting and Business Research, 44, 204–220. https://doi.org/10.1080/00014788.2013.872554

McSweeney, B. (2004). Critical independence. In Humphrey, C. & Lee, B. (Eds.), The real life guide to accounting research: A behind the scenes view of using qualitative research, 207-226.

Imagining realities beyond progress

TSElosophers meeting 7.6.2019. Joonas Uotinen, Kari Lukka, Milla Wirén, Otto Rosendahl

Tsing, A. (2012). The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins. Chapter 1: Arts of Noticing (pp. 17-26).

Summary

The book tells an atypical story about the commercial mushroom trade. Typically, economically oriented stories focus on modernity’s requirement of progress. Modernity perceives everything as resources for economic growth and humans as “different from the rest of the world, because we look forward” (p. 21). Consequently, those unable to compete economically and secure their future tend to be categorized as less valuable. For example, the boom of the Oregon lumber trade in the early 20th century was followed by the closure and relocation of lumber mills in the late 20th century. The progress that reached Oregon’s forests eventually moved on to more efficient sites.

However, the author emphasizes that this was not the end of commercial activity in Oregon’s forests. The forests re-grew and became home to commercial mushroom picking, connected to the strong East Asian demand for matsutake mushrooms. Commercial mushroom picking demonstrates supply chain practices of collaborative survival. It is a trade practiced mainly by “drop-outs”, but such tales of collaborative survival are not exceptional. For example, the author posits that similar supply chains developed in Greece after the recent financial crisis. The intensifying environmental crisis is also likely to lead to more supply chains that rely on collaborative survival.

The reader is invited to feed their imagination by describing the hidden realities beyond progress. The stories of “drop-outs” are generally told only in relation to progress; for example, the layoffs at Oregon lumber mills were widely publicized. However, there is a stark difference in supply chain structure between stable communities of healthy wage workers and open-ended gatherings of vulnerable foragers, including war veterans, refugees and undocumented immigrants.

The phenomenon of collaborative survival is approached through an assemblage approach. Assemblage refers to an open-ended gathering that circumvents the “sometimes fixed and bounded connotations of ecological ‘community’” (p. 22). The author emphasizes that her idea of assemblage is polyphonic, referring to pre-harmonic music built on the intertwinement of independent melody lines: each melody has stand-alone beauty. (In contrast, in post-polyphonic music the sounding whole has inherent hierarchies: there is a primary melody line, supported by harmonic elements.) Hence, polyphonic assemblages revolve around circular and seasonal rhythms created by multiple actors representing multiple species. Although polyphonic assemblages do not assume the linearity of time or the teleology of progress, improvement happens when emergent qualities transform gatherings into happenings with long-term impact.

Our discussion

Inspired by the article, TSElosophers discussed overcoming some of the excesses of modernism. Firstly, we want to clarify that the modern mindset is hardly the worst option, as modern capitalism has supplanted totalitarian and unfair societal orders. However, the issue is complex, since modern capitalism can also become combined with totalitarianism. Secondly, modernity’s requirement of progress promotes action based on external goals – e.g. career, money, appearances – rather than the intrinsic good in the activity, such as serving a greater ethical purpose. However, we see possibilities for learning intrinsically purposeful collaboration even in modern society. For example, people mastering horseback riding discover that merely giving the correct signals for direction and speed is not enough. Instead, one needs to create a connection with the other: to become aware of the horse’s mood and needs.

Finally, modernity’s demand for growth remains unsustainable. Increasing consumption neutralizes the effect of efficiency-increasing technological innovations. Moreover, ignorance of other ways of living causes trouble for tribal cultures. For example, Ecuador’s government recently decided to drill for oil in the ancient lands of the Waorani tribe (a decision later foiled by a court ruling). Instead of a myopic focus on economic growth, we could emphasize ethical, spiritual, social, environmental or other forms of growth. Tribal cultures that are governed relatively fairly and sustainably, as well as supply chains built on collaborative survival, can inspire us in the pursuit of more holistic forms of growth.

ANT 101

TSElosophers meeting 16.4.2019 Ekaterina Panina, Elina Järvinen, Kari Lukka, Morgan Shaw, Otto Rosendahl

Reading material:

1. Bruno Latour (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press. Pages 141-156: ”On the difficulty of being an ANT: An interlude in the form of a dialog.”
2. Turo-Kimmo Lehtonen (2000). Kuinka monta meitä on [How many of us are there]. Tiede & Edistys, No. 4. See http://elektra.helsinki.fi/se/t/0356-3677/25/4/kuinkamo.pdf

Our discussion:

We noted that the reading materials offer valuable insights both to ANT experts and to those who have only recently discovered it. Our first text is a Socratic dialogue between a professor and a PhD student (Latour 2005). The student asks many questions about how to apply ANT, only to receive rather elusive responses, such as that ANT is only useful when it is not applied to anything. It is a strange but consistent text that may provoke strong feelings, as it demonstrates several notable clashes between ANT and other approaches to research. ANT is unique in many ways. In our second text, Lehtonen (2000) presents ANT’s key concepts and analyzes ANT’s insights regarding collectives based on three of Latour’s texts: “Les Microbes: guerre et paix”; “Aramis ou l’amour des techniques”; and “Paris ville invisible”.

The TSElosophers’ discussion concentrated mostly on ANT’s key concepts, because ANT seems extremely difficult to pin down. This is not only due to Latour’s cryptic writing style, but also because his ANT seems to be something of a ‘moving target’: Latour’s own views on ANT appear to have developed over time. Latour has even criticized the name he gave it: ‘actor’ is too strong an expression, ‘network’ is often misunderstood, and ANT is actually an ontology (e.g. Latour 2005, 9). Actor-network theory defines actors as things that have an effect. A network in ANT is not a safely stable network formed by ostensively defined actors, but a dynamically appearing and disappearing network of performatively defined actors having effects. One member of our group with experience of Deleuze and Guattari noted a similarity between the idea of network in ANT and rhizomes, both differing from more traditional ideas of networks. Renaming ANT as an ontology also fits with its anti-essentialist, a-theoretical approach.

The anti-essentialist approach provides a strong contrast to traditional sociology. The anti-essentialism of ANT denies that any properties of objects result from their essence; rather, in ANT, all properties are viewed as relational outcomes of the actor-network. Stability can emerge in a network, and Latour calls such stabilized configurations ‘black boxes’, but these black boxes remain contingent on the ‘network’ of actors. A black box, e.g. an election result, can become a new actor that affects the creation of other actors, such as a new government. If, however, election fraud were uncovered, the election result could become questionable, suddenly destabilizing the network so that it would no longer be a black box. This would make, for instance, the forming of a new government difficult. Latour’s interest lies in detailed descriptions of how black boxes are formed and maintained, whereas traditional sociology usually just analyzes these black boxes, e.g. theorizing on democracy. Latour emphasizes that in his approach the researcher does not have to pretend – indeed, is not allowed to pretend – to be wiser than the studied actors; he encourages users of ANT to be content with building empirical descriptions and possibly giving them back to the studied actors for reflection.

We were puzzled by ANT’s seemingly strong commitment to an a-theoretical approach and its possible implications. If Latour were taken completely at his word, ANT could not be applied to anything, because it does not provide much theory and it is incompatible with using other theoretical framings. We considered whether it actually is possible (even if one so wished) to employ a categorically a-theoretical approach in research, and if so, how. We felt that writing a fully and only descriptive research text on actors creating new actors remains a very problematic endeavor: for instance, how would one find proper mentoring support, and how would one present the findings so that practitioners and funders would find them persuasive and interesting, leading to publications? We were left wondering why exactly a theory that the researcher would like to employ could not be used in ANT-based research, understood like any other actor in ANT (an entity having effects) and treated as theories typically are: as having a focusing and a limiting effect simultaneously.

We concluded that ANT was a worthy actor in our meeting. There were many suggestions for how to continue looking into ANT in particular and the anti-essentialist tradition in general. One of us suggested taking a good look at Harman’s (2009) “Prince of Networks”, which provides a most insightful analysis of the ontology of ANT. Others aim to look into anti-essentialism (e.g. Fuchs 2001), develop their understanding of the differences between ANT and Deleuze & Guattari, or read Latour’s “Science in Action” (1987) and “Reassembling the Social” (2005) in their entirety. Regarding the a-theoretical nature of ANT, the piece by Modell, Vinnari and Lukka (2017), published in Accounting, Organizations and Society, could be useful reading.
