The Philosophy of Science Club of the Turku School of Economics


Two disagreeing theories on the role of ‘society as it is not’

TSElosophers meeting on 14.11.2023. Participants: Albrecht Becker, Behnam Pourahmadi, Erkki Lassila, François-René Lherm, Kari Lukka, Mia Salo, Milla Unkila, Minna-Liina Ojala, Otto Rosendahl

Luhmann, N. (2002). I see something you don’t see. In Luhmann & Rasch (2002), Theories of distinction: Redescribing the descriptions of modernity, pp. 187–193.


We read a conference speech by Niklas Luhmann that criticized the continuing relevance of the Frankfurt School, that is, late 20th-century critical theory. We were unable to date the speech, but the original German text was published in 1990.

Luhmann focuses his critique on the Frankfurt School’s ontological standpoint, which many other European philosophical and sociological traditions follow, too. The ontological metaphysics of the Frankfurt School, Luhmann argues, that is, its guiding distinction of existence/non-existence, enforces a bivalent logic that differs from the guiding distinction of system/environment and the paradox-embracing logic of Luhmann’s systems theory. In his view, the Frankfurt School includes in its definition of rationality an aim for consensus of knowledge through intersubjectivity. This contrasts with Luhmann’s position, which renounces the subject/object distinction in favor of the cybernetic take on observation, in which any observation can always be challenged by observing the observation.

Our discussion

The text was described as dense, and some parts required several readings to become at least reasonably understandable. The conference speech was most likely delivered to a German audience already familiar with both theories, which might explain its very limited contextualization. However, as one of us suggested, other texts by Luhmann may well be equally difficult to read, if not more so. Some had given up before finishing the text, but many endured, and the discussion was very lively. Luhmann’s writing strategy was also appreciated: he presented his own position quite briefly, concentrating on his arguments about the insufficiency of the Frankfurt School.

While some of us had doubts about certain aspects of Luhmann’s argumentation, we did not reject his criticisms of the Frankfurt School. We noted, however, that it is beyond our scope to discuss to what extent contemporary critical theory can respond to these criticisms. We discussed especially Luhmann’s criticism of any social theory that emphasizes ‘society as it isn’t’. The Frankfurt School had a knack for creating future utopias and dystopias and criticizing society from those vantage points. Luhmann claims that this strategy masks the Frankfurt School’s inability to ‘sufficiently’ describe our complex ‘society as it is’. Luhmann’s social theory, in contrast, embraces the paradox between ‘society as it is’ and ‘society as it isn’t’.

Perhaps not least because Luhmann’s speech could of course not comprehensively elaborate his theory, many concerns were raised about whether it could offer something worthwhile. Many of us noted similarities to Latour’s work, as ANT proposes taking a similar ‘bird’s eye’ view to Luhmann’s suggestion of second-order observation: instead of inserting oneself into the discussions between observers, one should focus on mapping the terrain as represented by the diverse observations and interactions. Concerns included, for example, the rejection of the guiding distinction of whole/parts, expressed in a reductionism that results in a focus on cognition and a seeming neglect of materiality; his non-humanism and the somewhat nihilistic attitude that follows from the sought-after distancing of the researcher from first-level observers; doubts about how self-reflection on the blind spot could improve his theory; the worry that reflection might lead to infinite regress, although the text also mentions the emergence of stability through system eigenvalues; and that his position remains marginal both in systems theories and in the philosophy of science.

As a final note, we might ask whether Luhmann’s theory is outright conservative or just differently radical. While Luhmann’s lack of an explicitly critical agenda, compared to the Frankfurt School, felt “disastrous” to several of us, gaining knowledge about ‘society as it is’, and about how to relate it to ‘society as it isn’t’ by increasing the complexity of the theoretical framework, could support constructive responses to societal problems.

Breaking incommensurability boundaries?

TSElosophers meeting 29.9.2023. Participants: Erkki Lassila, Kari Lukka, Mia Salo, Minna-Liina Ojala, Otto Rosendahl.

Gendron, Y., Paugam, L., & Stolowy, H. (2023). Breaking incommensurability boundaries? On the production and publication of interparadigmatic research. Qualitative Research in Accounting & Management.


The overarching aim of this article by Gendron et al. (2023) is to challenge Kuhn’s (1970) incommensurability thesis (see also Burrell and Morgan, 1979), which assumes that meaningful research work across different paradigms is neither possible nor feasible from the philosophy of science perspective. The authors question this view and suggest that inter-paradigmatic research is not only possible, because the assumed boundaries between paradigms are actually permeable, but also desirable and beneficial, as it stimulates dialogue between researchers with different philosophical assumptions and methodological approaches.

To justify their views, the authors analyze four inter-paradigmatic publications (Greenwood et al., 2002; Stolowy et al., 2019; Paugam et al., 2021; Stolowy et al., 2022). The authors were coauthors on three of these four papers and are therefore able to reflect on the processes that led to getting them published. The main comparison, however, is between Greenwood et al. (2002) and Paugam et al. (2021). The authors emphasize the importance of “epistemic mediation”, the ability to reach “conforming” epistemological and other compromises during the research process, without which the required mutual agreement about the justifiability, or sustainability, of the research might not materialize, not only between the coauthors themselves but also with the parties involved in the review process.

Our discussion

As a starting point, the group appreciated the idea of promoting inter-paradigmatic research, or at least heterogeneity in research methodologies. In addition, all group members seemed to agree that the paper was easy to read and understand, shedding some light on the practical aspects of co-writing and publishing an inter-paradigmatic research paper. It touched on some of the ontological and methodological difficulties related to causality and complexity arising from such an inter-paradigmatic endeavor. Overall, TSElosophers regarded the authors as seeking to advance the fashionable ‘phenomenon-based research’.

However, this particular piece of research, which seemed to rest on pragmatist premises, representing ‘naturalism’ in the analysis of knowledge production, could have been expected to offer a deeper examination of, and more interesting insights into, the tensions likely present in inter-paradigmatic research. TSElosophers therefore felt that, in the end, the paper did not quite live up to readers’ reasonable expectations. Instead of seriously examining the epistemological tensions and inconsistencies that might ensue from an inter-paradigmatic mixture, the paper focused on discussing how interpretive and positivist methodologies can be combined in a single study by favoring one over the other, while the other is used more to complement and reinforce the primary view. Even this was done in a perhaps too one-sided manner, as the authors seemed to focus mainly on their own work, in which the interpretive approach was dominant. It was a pity that the authors largely skipped, perhaps were bound to skip since they did not write that paper, a more profound analysis of Greenwood et al. (2002), which would have represented the opposite approach.

Some group members also wondered how the inter-paradigmatic research presented in this study differed from the mixed-methods approach, which simply combines qualitative and quantitative empirical work. In the end, because the article focused so much on successful publication processes instead of the ontological and epistemological tensions in inter-paradigmatic research, it could even be seen to represent a certain form of instrumentalism: how researchers happen to conduct their research overshadows the potential paradigmatic inconsistencies of that research, and the process was seemingly consecrated simply by the research output eventually getting published. TSElosophers were left wondering whether it is favorable for scholarship to extend ‘naturalism’ in the analysis of knowledge production that far so easily.

Relevant academia in a post-truth world?

TSElosophers meeting 5.5.2023. Albrecht Becker, Kari Lukka, Mia Salo, Otto Rosendahl, Veli Virmajoki

Aaltola, E. (2022, April 8). The limits of science – what can we study? https://blogit.utu.fi/utu/2022/04/08/tieteen-rajat-mita-saamme-tutkia/ (translated by Kari Lukka)

Meyer, R. E., & Quattrone, P. (2021). Living in a post-truth world? Research, doubt and Organization Studies. Organization Studies, 42(9), 1373–1383.

Tweedie, J. (2022). Against mystifying complexity: On asking simple, burning questions. Organization Studies, 43(11), 1853-1856.


These three texts of very different types share a common theme: the challenge and legitimacy crisis social sciences face in the light of the growing force of ‘post-truth’ and the contribution of the science-internal critique of the postulate of value-free science to creating an ‘anything-goes’ public discourse.

  • Aaltola in her blog post argues that this criticism of the idea of value-freedom of science has led to the argument that, given that all knowledge is tainted by value, all knowledge claims are equal to scientific ones, and in the end, it has led to a situation where right-wing actors try to censor research on topics they consider as not in line with their own values. The burning issue, thus, is how we can restore the role and integrity of free science.
  • Meyer and Quattrone, in their first editorial as new editors of Organization Studies, start from the same concern as Aaltola, also emphasising how researchers themselves have unintendedly become “accomplices” in nurturing the concept of ‘post-truth’. The challenge, according to them, is how to restore acknowledgement “of the value of our work” in a situation where truth “is a constant struggle to interrogate [the] ephemeral nature of knowledge”, but where the public discourse is more and more structured around binaries, such as true/false, us/them, etc.
  • Tweedie takes up Meyer and Quattrone’s idea of academics’ complicity and notes the irony of them striving for impact when their major impact is undermining their own legitimacy. He locates the major source of this complicity in the “complexity arms race” where academics value complexity per se over simplicity, thus reinforcing the ivory tower of incomprehensibility. Instead, he pleads for “elegant simplicity” in research and suggests that research questions should be stated “in the simplest terms we can” and that they should concern the “’burning questions’ of our times”, such as climate change etc.

Our discussion

We first noted that the texts are of very different types: one is a blog post, thus formulated in a somewhat more everyday style (Aaltola); the second a programmatic editorial (Meyer & Quattrone); and the third an essay (Tweedie). While this may account for the texts presenting their arguments in a rather too straightforward way, it at the same time made them specifically thought-provoking. Probably unsurprisingly, TSElosophers shared the general concerns raised in these texts regarding the current tendencies to delegitimise the value of research and science.

One strand of our discussion concerned the ‘complicity’ of researchers, stated in different ways in the three texts. Many of us agreed and saw not least the recent discussions on sensitive, or, more pointedly, ‘politically correct’, use of language as an important driver of the opportunity for allegations of value-bias and partisanship in the social sciences. At least one of us argued, however, that ‘science scepticism’ is much older than these recent discussions, and even than the critique of value-free science, and that during the COVID-19 pandemic it was rather the traditional idea of self-correction through falsification that fed post-truth and the discourse on the equality of scientific and non-scientific knowledge claims. It seemed clear to us, however, that there is a paradox: on the one hand, the authors claim that their critique of the assumption of value-free research has had a public impact, and, on the other hand, they complain that social science research is not adequately heard in public.

We further discussed what distinguishes scientific knowledge from other types of knowledge so as to make a convincing claim for its legitimacy, or even superiority, in certain situations. One suggestion from TSElosophers, taking up Aaltola’s argument, was that scientific knowledge is in a specific way methods-based and systematic. Others, however, countered that these aspects are necessary conditions but not sufficient, since it is not obvious which methods should be designated as legitimate. Even astrology, for example, can be perceived as rigorous and methods-based.

The solutions suggested in the texts could not completely convince us either. Meyer and Quattrone, for example, go a long way to analyse the issues that come with a social science that accepts that there is no ultimate truth in the era of post-truth. However, their proposed programme for Organization Studies reads like a very standard programme for a social science journal, and it remains unclear how it addresses their analysis. Tweedie’s suggestion to distinguish ‘crude’ from ‘elegant simplicity’ and ‘mystifying’ from ‘enlightening complexity’, and more profoundly simplicity from complexity, seems plausible at first glance but may turn out to be less clear-cut than suggested.

In conclusion, the three texts triggered intensive discussion among the TSElosophers on themes that are of vital importance for all researchers, especially in humanities and social studies. Indicative of the great interest in the themes at stake was that our discussion showed no real saturation, but we only needed to end it due to time limitations.

Woodward on “causation with a human face”: Inspirations and research voids

TSElosophers meeting 28.3.2023. Albrecht Becker, Erkki Lassila, Kari Lukka, Mia Salo, Milla Unkila, Otto Rosendahl, Veli Virmajoki

Woodward, J. (2002). What is a mechanism? A counterfactual account. Philosophy of Science, 69(3), 366–377; and Woodward, J. (2021). Causation with a human face: Normative theory and descriptive psychology. Oxford University Press, pp. 1–14 (Introduction).


In Woodward’s approach, the philosophy of causation should clarify notions that are confused, unclear, and ambiguous and suggest how these limitations might be addressed. In particular, Woodward defends the interventionist account of causal explanation, where causality holds between two variables if an intervention (ideal experimental manipulation) on one of the variables would change the other variable. Woodward uses the interventionist account to discuss issues such as mechanistic explanation and modularity, especially in the context of systems.
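To make the interventionist criterion concrete, here is a minimal toy sketch of our own (not Woodward’s formalism; the structural model, variable names, and function are invented purely for illustration). In a model where Y is generated from X while Z is causally idle, an ideal intervention on X changes Y, whereas an intervention on Z leaves Y untouched:

```python
# Toy structural model illustrating interventionist causation:
# X -> Y (Y is computed from X), while Z has no arrow into Y.

def simulate(do_x=None, do_z=None):
    """Return Y, optionally under an intervention do(X=x) or do(Z=z)."""
    x = 1 if do_x is None else do_x   # default exogenous value, overridable
    z = 5 if do_z is None else do_z   # Z exists but does not feed into Y
    y = 2 * x + 3                     # Y is generated from X only
    return y

baseline   = simulate()             # no intervention
after_do_x = simulate(do_x=4)       # intervening on X changes Y: X causes Y
after_do_z = simulate(do_z=99)      # intervening on Z leaves Y fixed: Z does not

print(baseline, after_do_x, after_do_z)  # prints: 5 11 5
```

The point of the sketch is only that the causal claim is cashed out counterfactually, by comparing what Y would be under different manipulations, rather than by inspecting correlations in observed data.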

Our discussion

TSElosophers generally found these texts interesting, yet rather demanding reads. We found it useful to get an idea of the “interventionist take on causality”, emblematic of Woodward, and how he wished to avoid drowning in the metaphysical debates on what the notions of causality should stand for and instead focus on how to decipher causality in practice.

Regarding the book chapter, based on its Introduction alone, we did not get a quite clear idea of how precisely the “descriptive accounts” of causality, the beliefs people have about causal relationships, bear on what is, for Woodward, the more essential thing, the “normative account”. The latter refers to those causal claims that can be sustained by the interventionist approach, leaning on counterfactual analysis. The biggest concern among a few TSElosophers, however, was whether, and if so how, the potential performativity of any utterances by humans, be they researchers or laypersons, might play a role in Woodward’s system of thinking. It seemed as if he had not taken that into consideration at all. Given that for Woodward, in line with his “minimal realism”, the central test of causal claims is how the world works, omitting performativity seems like a very notable issue, since it could bring an endogenous challenge into the analysis and thereby significantly complicate carrying it out. At the very least, the problem of performativity might complicate the use of Woodward’s theoretical frame in the social sciences.

TSElosophers found the article on mechanisms in causality somewhat easier to grasp, yet perhaps a bit less inspiring. It did not add to the credibility of the text that there seems to be a typo in equation (2) on p. 367. At least one of us had the pre-understanding that explaining through mechanisms primarily means ‘fleshing out’ the contents of an explanation, as compared to making causal claims on mere naked correlations: it opens up more precisely how, for example, correlations can be seen as part of a meaningful explanation. TSElosophers found Woodward’s take notably stricter and narrower than this general idea, in particular since his definition requires the independence of the elements (“modules”) of mechanisms. This requirement seemed to us not well suited to the humanities and social sciences. We were left wondering whether Woodward’s take is actually too binary, or black and white. Maybe it even leads to an overly idealistic picture of mechanistic explanation in the social sciences, thus distancing the practice in the field from philosophical analysis, something Woodward accuses many other accounts of causation of doing.

Reading these texts gave TSElosophers a good lesson in the kind of reasoning and write-up found among representatives of the current frontiers of the philosophy of causality. It struck us that there might be notable room for integrating the recent advances in the philosophy of causality with a more genuine take on how the humanities and social sciences are situated and how they work. This could provide a more apt philosophy of causality for social scientists and humanists!

An Inspiring Disorder of the Second-order

TSElosophers meeting 8.2.2023. Albrecht Becker, Erkki Lassila, Joonas Uotinen, Kari Lukka, Mia Salo, Milla Unkila, Otto Rosendahl, Veli Virmajoki

Von Foerster, H. & Poerksen, B. (2002). Understanding systems: Conversations on epistemology and ethics. New York: Kluwer Academic/Plenum Publishers. Pages 11-63.


We read the first chapter of a book that consists of a dialogue between “physicist and philosopher” Heinz von Foerster and journalist Bernhard Poerksen (back cover). Von Foerster, a leading figure in systems-theoretical circles in the mid-20th century, rejects all thought-stream labels in this book except being Viennese, although he is widely recognized as a radical constructivist. The book was written in 2002, after his active career.

The first chapter, Images of Reality, describes how human neural systems can only observe their environment through unspecific perturbations. Thus, it rejects all truth claims based on a correspondence between human knowledge and being. It provocatively casts doubt on all causal explanatory principles presented by scientific realism, including gravity and evolution. The chapter emphasizes the ethicality of second-order observations, such as describing descriptions and explaining explanations.

Our discussion

The book divided our sympathies: some liked it, some were annoyed or almost angry, and some remained ambivalent. Many agreed that academia needs more inspiring dialogues and attempts to defend bold positions in conversation. Many were also inspired by some of Von Foerster’s ideas and ideals at large. The minority, who were sympathetic to the text, read it as an antithesis rather than a synthesis, as reactive rather than refined. The majority was frustrated, not least because Von Foerster does not develop his ideas but only keeps ‘dropping’ them, seems to lack sufficient consistency, and dodges so many of the most relevant questions that the dialogue format becomes pointless.

We presumed that Von Foerster’s background as a magician contributed to his tendency to shock with anti-thetical statements against realism, which made him appear a more extremist thinker than his constructivism warrants. He posited that the system constructs its world and all knowledge about it, but he also argued against solipsism and anti-realism. He seems to accept the existence of the system’s environment, but not the possibility of creating knowledge that corresponds with that environment. In sum, his systems-theoretical underpinnings support metaphysical realism but reject epistemological realism.

We considered Von Foerster’s presentation insufficient for any sociological reading due to its methodological individualism. He wanted, for example, to replace the concept of truth with trust. This might work for local interactions, though many doubted even that, but it remains unclear how this change could be applied to globally spanning communication. Would trust in scientists eventually dilute into something like trust in politicians or journalists? We concluded that the book presents a sample of ideas from Von Foerster’s active career rather than an integration with subsequent theoretical developments in radically constructivist systems theories, such as social systems theory.

TSElosophers also discussed ethics. Von Foerster regarded all references to the external as excuses by which people free themselves from responsibility for their decisions. He encouraged people not to trivialize themselves but to retain the unpredictability of non-trivial machines. However, we feel the argument is more balanced when one also recognizes that interaction benefits from predictability and the common use of explanatory principles. Certainly, most people should not (and could not) make their world as complex as Von Foerster made his.

Contemporary concerns of scholarly work

TSElosophers meeting 15.12.2022. Erkki Lassila, Kari Lukka, Otto Rosendahl.

(1) Gendron, Y., Andrew, J., & Cooper, C. (2022). The perils of artificial intelligence in academic publishing. Critical Perspectives on Accounting, 87, 102411.

(2) Korica, M. (2022). A Hopeful Manifesto for a More Humane Academia. Organization Studies, 43(9), 1523-1526.


Both of these articles raise a fundamental concern about the contemporary state and future of academia and scholarly work. Gendron et al. (2022) use the colonization of academic publishing by artificial intelligence (AI) as their example. They highlight the possible implications the inclusion of AI technologies might have for the role of human actors in the publishing process, i.e. editors, reviewers, and authors. Korica (2022), on the other hand, uses her own experience from academia to highlight concerns about the current state of scholarly work and the working environment, along with some suggestions on how things might be improved. Both articles thus bring out the necessity to ponder what academia is, and should be, about, and how every member of academia has agency in this matter.

Our discussion

We agreed that the fundamental concern behind both of these articles is important and worth discussion and debate. The article by Gendron et al. (2022) revolves very much around critiques of the data-driven evolution of our lives, such as Zuboff (2019) and Han (2017), highlighting the issue of the “surveillance society”, which is today harder and harder for anybody to escape. The article pointed out well how too much emphasis on, and trust in, the abilities of AI and algorithm-based software, despite their promise to add to the productivity of processes, is moving us towards surface-oriented, mechanical and performance-focused academic publishing. We agree with the authors that this type of development is a real threat to good scholarship in academic publishing. In relation to the central editorial task of reviewer selection, for instance, we agree with Gendron et al. (2022) that AI would be a problematic route to fixing the alleged ‘problem’ of human bias, since it would likely only bring an “elite bias” into these processes.

However, we felt Gendron et al. (2022) did not elaborate sufficiently on individuals’ ability to see and understand how exactly the evolution towards further digitalization happens in our lives, and where this type of technology-oriented development might be leading us. It is difficult, or even impossible, for most of us to detect and comprehend the fundamental accumulating effects of each microscopic addition of the digital into our everyday lives: to see the connection between some new minuscule software application, sold to us as a ‘help’ with or ‘improvement’ of some insignificant daily task, and the accelerating ‘digi-colonization’ of various aspects of human life in our society.

While we agreed with the basic idea and many parts of Korica’s (2022) paper, we specifically did not agree with the seventh suggestion made in the article. This suggestion seemed to echo instrumentalism, in the sense that publishing research is staged as the final aim over and above conducting high-quality research, and was therefore quite thoroughly against the idea of good scholarship. The paper also seemed to cover almost every worry of our working lives, which made its scope overly broad.

Futures of Values

TSElosophers meeting on 21.10.2022. Elina Järvinen, Erkki Lassila, Joonas Uotinen, Kari Lukka, Mia Salo, Milla Unkila, Morgan Shaw, Otto Rosendahl, Siddhant Ritwick, Veli Virmajoki

Danaher, J. (2021). Axiological futurism: The systematic study of the future of values. Futures, 132.


Danaher argues that value change in the future needs to be systematically studied. Danaher points out that there have been changes in values throughout history and these changes will most likely continue in the future. Understanding the possible changes in values in the future is “both desirable in and of itself, and complementary to other futurological inquiries”. Danaher names the inquiry into the future of values axiological futurism. Danaher sketches a set of possible methods that can be used in axiological futurism and a model for value change where “one of the main determinants of our movement through future axiological possibility space is [–] the form of intelligence that is prioritised and mobilised in society.”

Our discussion

Danaher’s argument for the need for axiological futurism is simple, convincing, and deep. When we discuss what the future should be like, we tend to use our own values to frame our views on the matter, or even project our values into the future. However, if values change, the desirability of a future is determined by the values and needs of future generations. One challenge for normative futures studies is thus that moral truths and values in the future may differ from those of today, and therefore any normative projection made today may sound unacceptable in the future. Relatedly, TSElosophers found interesting the idea of contemplating and deducing what these potentially different future values (enacted by people in their everyday lives) should be in order to achieve, for instance, a more ecologically sustainable future than the one implied by current values.

There were some concerns about Danaher’s strategy. Introducing a novel field of inquiry with a daunting task, such as mapping the axiological possibility space, is difficult in one paper: one needs to balance the abstract frame with some concrete suggestions on how to proceed. We were not quite convinced that the methods Danaher suggests are described in enough detail to give a sense of how the daunting task can be tackled. Moreover, one of us did not buy that forms of intelligence could very much determine movement in axiological space, believing instead in the central role of material aspects; others, however, seemed to accept Danaher’s main argument that it may well be a mix of both. TSElosophers were anyhow supportive of Danaher’s main project as described in the paper; the concern was primarily how to execute it.

There were additional worries that notions such as “the moral paradigm” may be misleading and give the false impression that there is a shared location in axiological space where we all stand and move together. The Western connotations of the project also worried us, for example when the forms of intelligence were defined in terms of a dichotomy between the individual and the collective. However, for a few of us more normatively oriented scholars, interested in evoking value transformation towards a more sustainable future, the main takeaway of the paper was its interesting structuring of the possibility space of values. We felt that Danaher’s discussion of its constitution opened welcome avenues of reflection and action aimed at a potentially greener world.

Overall, the paper made a convincing case for the need for axiological futurism, but it also made us realize how the vertiginous complexity of axiological considerations casts a shadow over the project.

For an interested reader, see also https://blogit.utu.fi/futuresofscience/2021/09/20/future-of-values-some-reflection/

Meaning of effectiveness in work

TSElosophers meeting 23.9.2022. Participants: Eeva Nummi, Erkki Lassila, Mia Salo, Milla Unkila, Otto Rosendahl, Veli Virmajoki

Morin, E. M. (1995). Organizational effectiveness and the meaning of work. In T. C. Pauchant and associates (Eds.), In search of meaning: Managing for the health of our organizations, our communities, and the natural world (pp. 29–64). San Francisco: Jossey-Bass.


In her 1995 paper, Morin suggests an existentialist perspective on organizational effectiveness. She criticizes the priority senior managers place on the economic perspective in evaluating organizational performance. Morin argues that this economic prioritization distorts the meaning of effectiveness and affects the meaning of human work and human existence. She proposes new ways to discover the meaning of work, building on the existential psychotherapy of, for example, Viktor E. Frankl and Irvin D. Yalom. She also offers empirical evidence on the narrow approach to organizational effectiveness among senior managers and suggests “means that could be used to achieve more humane management practices based on the lessons of existential psychotherapy”.

Our discussion

TSElosophers agreed that Morin’s criticism of prevailing notions of organizational effectiveness remains valid. It was also suggested that organizational goals seem to be divided: while senior management values financial effectiveness, in everyday organizational life the employees’ actions are increasingly guided by a broader set of values. An example of this value incongruence can be found in the crisis-ridden work situation of nurses in Finland.

By highlighting the broad range of existential meanings given to work, Morin opens up an avenue towards reflecting on the role we individually and collectively give to work. Morin links the narrow definition of organizational effectiveness to the disappearance of the meaning of work: organizations sometimes pursue things, such as sustainability efforts, that have no meaning for individuals. Hence, individuals might need to distance themselves and find meaning elsewhere than in their identity as employees. In our time, the loss of meaning in work manifests itself in many ways, for example as burnout or quiet quitting.

We reflected on the fact that the existential perspective is contradictory in itself: how can we measure something for which the measurement itself creates a problem? The fundamental problem is that most of the things we need to take into our calculations are qualitatively different. To bypass this problem, we try to position all we want to measure (and thus value) onto one and the same standard of desirability (see March 1982; Thompson 1967), namely the financial one. Instead of health being a value in itself, the value of health is calculated in terms of how much a healthy or an unhealthy individual costs society. The discussion of ecosystem services does the same on the environmental side: we cannot appreciate nature in itself, but need a mechanism for articulating its value in money.

Overall, we found the paper relevant and interesting, although it seemed to address too many issues. We agreed that the humanistic approach of this paper successfully described many problems, but did less to solve them. For example, if we were to guide and control organizations based on a broader definition of effectiveness, perhaps one with less emphasis on money, how would we define the variables and methods of calculation that would fit this purpose, and where would it lead organizations (scenarios)? Having read the article, we do not know. The article pointed us in a good direction for meaningful discussions about organizational effectiveness, but unfortunately it lost its own focus in the end.

“Perspective relativism” – thought-provoking arguments and confusions

TSElosophers meeting 8.4.2022. Participants: Erkki Lassila, Kari Lukka, Otto Rosendahl and Mia Salo

Antti Hautamäki (2019) Näkökulmarelativismi [Perspective relativism]. SoPhi.


In this book, available only in Finnish, Antti Hautamäki introduces the notion of “perspective relativism” (our translation of the Finnish term “näkökulmarelativismi”). It is presented as a middle-of-the-road approach to the philosophy of science, positioned by Hautamäki between the extreme forms of realism and relativism. The key ideas of his perspective relativism include:

• There is no perspective-independent way to look at the world.

• It is helpful to distinguish between the subject, the object, and the aspect – of which the last captures the distinctive feature of perspective relativism.

• Perspectives are subjective, but they can be objectified.

• The same objects can be looked at from different perspectives.

• There is no absolute, privileged, or universal perspective.

• Perspectives can be further developed, revised, and swapped.

• Perspectives can be compared through various criteria.

In the book, Hautamäki argues for the validity of perspective relativism in numerous ways, using several examples. He goes through the typical themes of this type of treatise, such as the relationship of perspective relativism to rationality, truth claims, the justification of knowledge claims, ontology, and the philosophy of science at large. In all of these analyses, Hautamäki seeks to distance his position, on the one hand, from (scientific) realism and, on the other, from (extreme forms of) relativism. Central to his argumentation, allowing him to keep that distance from extreme forms of relativism, is his idea of “core rationality”: to be taken seriously, any argumentation has to fulfill certain minimum conditions, such as the principle of deduction (the logic of implication) and the principle of consistency (for instance, we cannot both accept and deny the same thing looked at from a certain perspective).

Our discussion

TSElosophers generally supported the notion that Hautamäki propagates in his book. We not only found it intuitively appealing and helpful, but many of us also found it in certain ways familiar. One of the TSElosophers found Hautamäki’s position similar to the idea of combining moderate realism with moderate social constructionism, which this member had adopted some 15 years ago as the platform for his scholarly work. We also found the book topical, especially from the viewpoint of the famous ‘science wars’ between realists and social constructionists. Hautamäki’s notion sits well with the general idea of Niiniluoto and Saarinen (1986) that in the heated debates between various ’isms’ we tend to overlook how many similarities there are across the various approaches.

While we liked the general idea, we struggled with some of the distinctions through which Hautamäki tried to make room for his notion. In particular, we felt Hautamäki was fighting a losing battle in his numerous attempts to distance his position from realism, which he at times calls by that name and at other times ‘scientific realism’. The problem is that it is, in fact, rather hard to draw a demarcation line between most of the ideas of scientific realism and Hautamäki’s perspective relativism. For instance, when we take into consideration Popper’s three-level ontology, the notion of the theory-ladenness of observations, Kuhn’s paradigms and, overall, formulations like that of Niiniluoto (1999) for scientific realism (which he calls “critical scientific realism”), it is nearly impossible to see any genuine differences any longer.

To be blunt, Hautamäki’s perspective relativism can be argued to be fundamentally similar to scientific realism, only peppered with certain accentuations nodding towards relativism. It is a pity Hautamäki is so confusing regarding these distinctions, up to the point that he can be claimed to fabricate a strawman of (scientific) realism to develop his claims of uniqueness. It would have been far easier for the reader to digest had he chosen naïve realism (e.g. logical positivism) as his ‘enemy’ at the realism end: all of his distinctions would work against that position. However, perhaps he did not opt for that strategy, since the schools of thought linked to naïve realism are these days viewed as about as dead as they come. Hence, making distinctions from them would not have been very effective.


We found Hautamäki’s notion of perspective relativism valid content-wise, yet far less innovative than the author claims it to be. It is, after all, a notion under the umbrella of scientific realism, only stressing the constructionist (or relativist) aspects of that stream of thought. The book is worth reading especially if one wishes to go through one’s own philosophical position comprehensively and self-critically. Bold, even wild claims are often helpful as ‘trial balloons’ in such exercises.

Sartre, Weick, and existential sensemaking

TSElosophers meeting 24.2.2022. Participants: Erkki Lassila, Kari Lukka, Eeva Nummi, Siddhant Ritwick, Otto Rosendahl, Mia Salo

Yue, A. and Mills, A. (2008) Making sense out of bad faith: Sartre, Weick, and existential sensemaking in organizational analysis. Tamara, 7(7.1), pp. 66-80.


Yue and Mills propose a novel approach called ‘existential sensemaking’ to identity construction and organizational analysis by combining Weick’s sensemaking epistemology and Sartre’s phenomenological ontology. They suggest that in situations where the ordinary and ongoing sensemaking process fails, we ‘are presented with an opportunity for existential sensemaking’. This means that we are no longer dealing only with how we make sense of our world (epistemology) but also with what the nature of our reality might be (ontology). Consequently, existential sensemaking shifts the focus from the social to the subjective, specifically, to the individual and their decision-making process in a particular situation. According to Yue and Mills, Sartre’s existential phenomenology, with its emphasis on the human free will to choose, on responsibility, and on the individual actor, offers ontological and ethical grounds for existential sensemaking. To illustrate their point, the authors analyze the case of a mountaineering expedition in the Andes, arguably because it captures an extreme, life-or-death situation.

Our discussion

The article prompted a lively discussion. In particular, we appreciated how the writers exploited Sartre’s existential phenomenology in their analysis. Putting the focus on the individual, their freedom, responsibility, and decisions based on ‘good faith’ has key relevance in many respects in practice and may have become – as Yue and Mills argue – too overlooked in social studies, which often focus on the role of structures. For instance, we discussed what the meaning of scientific research is today and whose concern it is whether we routinely conform to the publish-or-perish mentality in academia. Many of us also pointed to the importance of better understanding the subjective perspective and inner dialogue alongside the social view and intersubjective dialogue. These became distinct through the extreme decision-making moment depicted in the case study of the paper (Simon Yates cutting the climbing rope that connected him to his fellow mountaineer Joe Simpson, thus sending Simpson to an almost certain death).

While the topic of the paper, existential sensemaking, caught our interest, we agreed that we had expected more from the article, especially with respect to conceptual clarity and theoretical contribution. What surprised us most was that no key concept was defined, not even existential sensemaking at the core of it. This led us to discuss what the authors actually mean by different notions, for instance, the existential, essentialist or non-essentialist individual, ethical behavior, and bad faith in relation to sensemaking. An especially intriguing debate emerged from our different approaches to human behavior in an extreme situation, how this relates to our understanding of essentialism and, consequently, to Sartre’s ontological concepts of ‘being in itself’ and ‘being for itself’. To our disappointment, the paper’s contribution to the organizational literature remained vague.

Finally, we discussed the connection between existential sensemaking and the identity construction process, arguably a central theme in the article. Whereas existential sensemaking seemed fundamentally to refer to the use of free will in decision-making, we could not follow how it was connected to identity construction – not least because identity construction should necessarily be viewed as a process, not merely a passing event. Instead of identity construction, we found the paper illustrating an identity break when something radical happens, thus extending beyond retrospective and ordinary sensemaking and, in this case, calling for existential sensemaking. In accordance with Yue and Mills (footnote 15), we arrived at stressing that an extreme or crisis context is perhaps actually not ‘required for the presentation of existential sensemaking’ – rather, it might be quite ordinary!

To conclude, we would have hoped this compelling article had received at least one more revision before publication, especially with regard to definitions and contributions. However, as a conversation trigger, it provided an excellent base. We welcome future research on existential sensemaking!

Rules for ethnographic research: To have and have not.

TSElosophers meeting 20.12.2021. Participants: Andrea Mariani, Eeva Nummi, Kari Lukka, Mia Salo, Milla Unkila, Otto Rosendahl.

Van Maanen, J. (2011). Ethnography as work: Some rules of engagement. Journal of Management Studies, 48(1), 218-234.


The article presents a counterpoint to a point: Van Maanen responds to an article by Tony Watson. Van Maanen sets out by emphasizing that his own musings about ethnography mostly correspond with Watson’s. However, Van Maanen adopts a more protective position about ethnography’s uniqueness than Watson: he outlines ethnography as an inherently rather marginal method and feels the need to defend ethnographers’ capability to extract evidence by observing the observations of natives.

Van Maanen concentrates on ethnography as a combination of “fieldwork, headwork, and textwork” (p. 218). Fieldwork is especially difficult in ethnography and requires considerably more commitment than in mainstream science. Thus, he doubts that ethnography could turn into a mainstream approach without diluting itself into mediocre scholarship and results. Headwork and textwork refer to the insightful processes of theorizing about and communicating ethnographic research.

Our discussion

We greatly appreciated Van Maanen’s three-part distinction that adjusts the typical idea about ethnography as fieldwork towards theorizing and skillful communication about the research. It is extremely important that an ethnographic researcher (just like any other empirical researcher) keeps in mind all these three ‘works’ in a reasonably balanced manner. Van Maanen successfully employs this distinction to stress how (ethnographic) research is full of making choices by the researcher.

However, despite his laudable approach, Van Maanen still downplays the role of theorizing. His take on ethnography is eventually rather open-ended and empirically tuned. The examples given by Van Maanen present ethnographies as rhetorically appealing theoretical collages that intricately describe local realities. Still, it remains unclear how to select theories and, most importantly, how to validate the outcome of theorizing in ethnographic research. Even if an eclectic and intuitive approach works for the niche Van Maanen has developed for himself, it might be too shaky a foundation for the broader ethnographic field, its researchers, and for generating influence outside ethnographic circles.

It seems that Van Maanen’s idea of ethnography as a marginal method requires creative, social and rhetorical geniuses who operate under very few rules. He perceives Watson’s pursuits towards the mainstream as endangering his idea of ethnography. However, it might be that Van Maanen’s representation of ethnography reflects his own niche take on it more than the views of the entire field.

Unfortunately, we had no chance to read the original text by Watson, only Van Maanen’s reflections on it. Watson might have interesting thoughts on how to approach the potential involved in adjusting ethnography in parallel with the mainstream. If so, we hope he adopts Van Maanen’s important distinction between fieldwork, headwork and textwork and uses it to relate ethnography further to the mainstream, while carefully ensuring that the key characteristics of ethnographic pursuits are not lost in this move.

Appealing argumentation for five types of theory

TSElosophers meeting 12.11.2021. Participants: Erkki Lassila, Kari Lukka, Mia Salo, Milla Unkila, Morgan Shaw, Otto Rosendahl.

Sandberg, J., & Alvesson, M. (2021). Meanings of theory: Clarifying theory through typification. Journal of Management Studies, 58(2), 487-516.


Sandberg and Alvesson (2021) present a novel approach to defining and classifying theories. They argue that definitions of theory in management and organization studies (MOS) tend to be narrow and/or built on a single social paradigm. In particular, they see a problem with requiring explanative theory in all research, as this leads researchers to present artificial pseudo-contributions and, effectively, makes the entire idea of contribution a fetish. Instead, they classify explanative theory as only one theory type, which needs to be complemented by other types of theory in order to advance the knowledge of the discipline.

The authors adopt a wide constructivist lens and perceive theory as a human pursuit with various aspects. Through this lens they identify altogether seven criteria for theoretical knowledge. The primary criteria distinguishing the various theory types are the purpose of the theory and how the targeted phenomenon is assumed to exist. Based on the seven criteria, they develop a typology of five theory types: explanative theory, comprehending theory, ordering theory, enacting theory and provoking theory.

Sandberg and Alvesson suggest that their approach to defining theory has the potential to overcome many ontological and epistemological differences and thereby provides a more neutral way of communicating about the role of theory in the scientific pursuit. They make an extensive effort to hedge their contribution so as not to step on anyone’s onto-epistemological toes: their approach might still yield more theory types and, besides, no research is forced to select only one theory type, since the theory types somewhat overlap.

Our discussion

On the positive side, the article is splendidly written. Its rhetoric is thoroughly appealing, which increases its potential to fulfill its own intended purpose of “pointing at a range of different theory types and levelling the playing field within the MOS community” (p. 491). The latter part of this purpose implies that the role of theory in the community should shift from “political-practical controlling device” (p. 509) towards enabling “researchers to advance knowledge development” (p. 490-491).

However, TSElosophers also found three significant shortcomings in the article. Firstly, we did not find much argumentation as to how the seven criteria behind the typology were chosen. It seemed as if the deep experience and professionalism of the authors were trusted to the extent that they could present their list of seven criteria without extensive analytical elaboration. Some of us felt the suggested set of criteria is too complex and formulaic; for instance, the two-item formulation of Friedman (1953) arguably gets to the point better and is more helpful for researchers.

Secondly, the article seems to present a strawman of what explanative theory means. Especially problematic is the claim that Whetten (1989) defined explanative theory narrowly, since it misreads the scope of Whetten’s (1989, 490, emphasis added) short article, where the intent is merely “to propose several simple concepts for discussing the theory-development process.” Explanation can well be defined much more broadly; it is not just limited to ‘positivist’ notions of explanation typical of e.g. quantitatively oriented research! For example, Wittgenstein characterizes scientific explanation as profound understanding.

Finally, it was suggested in our discussions that the article provides less actionable advice about theorizing than e.g. MacInnis’ (2011) “A framework for conceptual contributions in marketing”. Therefore, Sandberg and Alvesson’s contribution might be reduced to raising awareness without urging for widespread changes.

Despite our criticisms, we consider that this article admirably follows the adage that it is ‘better to be approximately correct than exactly false’. As long as the reader keeps in mind that some of the appeal of its narrative is achieved at the cost of accuracy, we may endorse reading this article.

Should we bring the mammoth back?

TSElosophers meeting 7.10.2021. Andrea Mariani, Elina Järvinen, Erkki Lassila, Kari Lukka, Milla Unkila, Morgan Shaw, Otto Rosendahl, Toni Ahlqvist.

Thiele, L. P. (2020). Nature 4.0: Assisted evolution, de-extinction, and ecological restoration technologies. Global Environmental Politics, 20(3), 9-27.


The Earth is 4.5 billion years old, and life on it around 3.7 billion years. Mammals are approaching their 200-million-year birthday, whereas hominids have been around a mere 200,000 years. We Homo sapiens managed to conquer the competition around 30,000 years ago, tamed the dog to help us 20,000 years ago, and figured out farming 12,000 years ago (Dasgupta Review 2021).

Everything preceding the use of tools and farming Thiele dubs Nature 1.0. It is the state of nature without the touch of humans. Nature 2.0 emerged as humans started to tinker with their environment: to plow fields, set up irrigation, domesticate cattle – to build cities, roads and energy infrastructure, to extract minerals, to dig for oil. While still a shorter period than the previous one, Nature 2.0 has existed notably longer than Nature 3.0, which started mere decades ago: “It is chiefly characterized by the capacity for the accelerated, nonincremental, and precisely controlled modification or creation of life-forms and their environments. The primary Nature 3.0 technologies are nanotechnology, geoengineering, and biotechnology.”

In the article, Thiele discusses the implications of Nature 4.0, the potential next step in this trajectory. While Nature 4.0 utilizes the technologies designed during Nature 3.0, the distinction emerges from the motivation underpinning their use. Nature 3.0 is all about pleasing humans, whereas Nature 4.0 is about attempting to turn the tide of the biodiversity loss of our own making. We now have the technology to modify the habitats we once destroyed to recreate bounded ecosystems, to tinker with genes to bring back the passenger pigeon we hunted to extinction, to artificially engineer species that tolerate the changes we made to the biosphere in Nature 2.0 and 3.0.

In sum, we might be able to bring back the woolly mammoth and many other species. But the focal question Thiele asks is, should we, given that the potential risks of Nature 4.0 may be huge and the unpredictable consequences irreversible?

Our discussion

In discussing the article, the TSElosopher camps of using either the big or the detailed brush re-emerged. For some of us, the accuracy of the examples given in the article was of lesser value than the overall message the article carried, whereas for others, the lack of accuracy in detail made the overarching message less convincing. We all agreed that the article indeed gives food for thought.

Have we humans really come so far in our destruction of the biosphere that the only means to conserve and restore its livability (first to other species and then to ourselves) is to start artificially modifying species, habitats and natural processes? How should we evaluate the risks of releasing artificial DNA to natural processes? With the fallibility of technologies that seldom work exactly as envisioned in the designing phase, what kinds of Pandora’s boxes will we be unleashing when creating species or habitats that natural evolution did not account for?

If this is not yet the case, what could be done? We discussed Dasgupta’s proposition to re-envision nature as an asset, to be included in the accounting of the types of capital we possess. However, despite some support for this solution among TSElosophers, there were two criticisms. First and foremost, fixing the problem of valuing money over everything else by endowing nature, too, with price tags is a bit like fighting fire with oil – in other words, trying to solve the serious problems caused by Modernism by inserting more Modernism. Instead of assessing nature in monetary terms, we should do more to make people take its intrinsic value seriously – not all that counts can, or should, be (ac)counted. The second criticism was born out of the first: fighting the problems with the same mechanisms that caused them can only succeed in postponing the inevitable. We could all agree that the more responsible avenue is to work, already today, towards a paradigm shift in the fundamental values and the anthropocentrically selfish and myopic lifestyle we adhere to – although ‘the Modernists’ in us would couple this approach with letting the economy treat nature as an asset.

While the description of both the past deeds of humans and the possibilities we now have at our disposal evoked sentiments of doom and gloom, not thinking about the choices we are currently making is not an option. We perceive human nature to be such that the curiosity driving natural scientists to uncover all that is humanly possible is seldom balanced with the patience to think through the implications of using all the technologies we could potentially wield. We discussed that it falls to us social scientists to stay updated about the developing technologies and to take an active role in thinking through which of the things we could do we actually should – or should not – be doing.

As we humans are not exogenous to nature but a part of it, it can be argued that all the tools and technologies we have designed, and all the actions we have taken, are due to the evolutionary processes that made us what we are. As humans we are predestined to be the representatives of our species and to act as the type of animal we are – to seek shelter, sustenance and comfort with all the means at our disposal, just as any other animal does. Hence, isn’t it just a fluke of evolution that made us capable of changing our environment more than beavers or ants can?

As the type of animal we are, we are capable of both destruction and creation beyond the possibilities of other species. The very interesting question, therefore, is which of these sides of humanity, the destructive or the creative, prevails when we are faced with the scale of the changes we have wrought on the biosphere that also maintains our lives. Is our collective survival instinct strong enough to turn the tide of destruction? Because ultimately, though we are a tougher breed to kill than even rats or cockroaches, the kind of biodiversity that existed when humans evolved is still necessary for our survival.

We need the kind of air to breathe that Amazonia produces for us, and the kind of water to drink that gets filtered by untarnished soil, and depending on technology to produce these comes with unimaginable uncertainties. The attempt to apply Nature 3.0 technologies to support assisted evolution and de-extinction leads to ethical and practical questions of considerable importance, in both positive and negative terms. And yet, in the end, we can ask ourselves whether the so-called ‘unnatural’ human-made artifacts are actually a very natural and very normal part of evolution on this planet.

Hues of normativity in positive economics

TSElosophers meeting 28.5.2021. Erkki Lassila, Joonas Uotinen, Kari Lukka, Milla Unkila, Otto Rosendahl.

Reiss, J. (2017). Fact-value entanglement in positive economics. Journal of Economic Methodology, 24(2), 134-149.


The article by Reiss (2017) outlines the historical development of thinking in positive economics based on David Hume’s “fork”, i.e. the separability of facts from values. Hume’s fork maintains that factual statements can be known without referring to non-epistemic values such as beauty, good, right, bad and wrong.

Hume’s fork is frequently applied in drawing the distinction between normative and positive economics, argued for forcefully by, for instance, Milton Friedman. In this mode of thought, the former is arguably about values and the latter about scientific facts.

The article reiterates, through examples from various aspects of conducting research, the argument, already made elsewhere, that economics is hardly able to provide a purely positive theoretical body, and that intricate statements presented as such are only seemingly so. A central theme of the article is that whenever generalizations are made beyond immediate observations such as “this leaf here is green”, we may not be able to avoid the inclusion of non-epistemic values.

Our discussion

After a many-sided and also critical discussion, TSElosophers came to the conclusion that Reiss manages to do what he wishes to achieve: support the blurring of the distinction between positive and normative economics. However, neither Reiss nor we are saying that science would become impossible to distinguish from opinion, as Reiss elaborates a cognitivist metatheoretical stance to ethics that emphasizes the human capability for reasonable argumentation about normative statements as well.

Thus, the blurring of the separability thesis enables a more active role for economists, who may now discuss the normative hues that unavoidably shade scientific inquiry – coming, for example, from the underdetermination of epistemic values and from expectations about the use of theories. Acknowledging this would allow economists to leverage their role in society with greater awareness and transparency about the values impacting their theoretical work.

Most importantly, the blurring of the separability thesis need not become a crisis in economics. Even if positive and normative statements cannot be sharply distinguished, some statements are still more based on facts than others; and academia places considerably more weight on the epistemic values in knowledge-production than happens outside of it.

TSElosophers sympathized with the use of the separability thesis as a rhetorical device, although it does not fully capture the complexities of science-making. Hence, it seems to function partly as an unrealistically straightforward way to distinguish between more and less epistemic argumentation. Employing the separability thesis may still be helpful for economists when their theorizations are challenged in societal discourse; they need a way to signal the convergence of their theorizations with the epistemic body of economics.

It was pointed out, however, that careless usage of such rhetorical devices may corrupt the credibility of science in the long run. They are always political, because they exclude some approaches, rather than others, from the discussion without a watertight basis.

We concluded that a critical mindset and keeping one’s conflicting non-epistemic interests on a tight rein should be among the key strengths of all academics – regardless of whether one supports or rejects the fact-value dichotomy.

Science and values with dawning virtue ethics

TSElosophers meeting 22.4.2021. Joonas Uotinen, Kari Lukka, Maija-Riitta Ollila, Milla Wirén, Morgan Shaw, Otto Rosendahl.

Hicks, Daniel J. (2014). A new direction for science and values. Synthese, 191(14), 3271-3295.


Which values influence science and in which ways? Which values may legitimately affect science, and which values have an illegitimate effect?

Daniel Hicks presupposes that values are an integral part of scholarly research. “Many philosophers of science now agree that even ethical and political values may play a substantial role in all aspects of scientific inquiry.” (p. 3271) Discussions on isolationism and transactionism are not very relevant anymore. (Isolationism believes that ethical and political values may not legitimately influence the standards for acceptance and rejection of hypotheses. Transactionism, the negation of isolationism, states that some ethical and political values may legitimately make a difference to the standards of acceptance and rejection.) Daniel Hicks thinks that there are both legitimate and illegitimate values affecting science, and we should be able to distinguish them from each other.

Values can affect science in different phases of the research process: the pre-epistemic phase, the epistemic phase, and the post-epistemic phase. Hicks states that the distinction among pre-, post-, and epistemic phases is useful for some analytic purposes but cannot be directly applied “to the concrete complexities of the real world.” (p. 3289) In this paper, Hicks uses an Aristotelian framework to capture these complexities of real life, in the footsteps of Alasdair MacIntyre and Philippa Foot.

Hicks compares two cases, feminist values in archaeology and commercial values in pharmaceutical research, to make his point. In the Feminist Case, self-identified feminist scientists criticized androcentric presuppositions and research agendas. This project brought about new contributions, which changed archaeological practices and the understanding of the cultural past. The Pharma Case describes the impact of commercial values on science. One example deals with the results of a clinical trial of an antidepressant. The trial did not show the effectiveness of the drug. However, the results were presented in a way that suggested the drug was effective. In the preliminary sketch presented by Hicks, the impact of values is legitimate in the Feminist Case and illegitimate in the Pharma Case. In the more detailed analysis that follows, Hicks deals with three major approaches to legitimacy vs. illegitimacy. In his conclusion, he outlines his own approach, which he claims emphasizes ethics alongside epistemology.

Hicks presents useful theoretical tools for analyzing values – e.g. direct, indirect, and cultural impacts of values – and finds inadequacies in using these tools. He implicitly suggests an Aristotelian virtue-based ethical framework to supplement these theories with an ethical perspective. Hicks reaches beyond the discussion on science and values: researchers should emphasize the virtues of good scholarship and grow as scientists to full maturity.

Our discussion

Some TSElosophers were more sympathetic to the dawning virtue ethics in science than others, but all agreed that it is surely thought-provoking – and far from unproblematic. In particular, Hicks’ position on the Feminist and Pharma Cases seems to be predetermined by his own values.

In general, Hicks claims to avoid “pernicious relativism” (p. 3291) but fails to provide argumentation for his claim (and openly admits as much). Relativism is apparent in Hicks’ own distinction between constitutive (e.g., seeking truth in science) and contextual (e.g., profit-making in the Pharma Case) values, which seems to allow the corruption of epistemic values. From the point of view of science, epistemic values are constitutive, while from Hicks’ point of view, the epistemic values were contextual for the agents in the Pharma Case. In contrast, TSElosophers insisted that epistemic values are, and should be, at the core of research in the epistemic phase.

The three different phases of the research process inspired a vivid conversation. At which stage do values impact the research process? In the Feminist Case, values influence the pre-epistemic phase, during which “research programs are chosen, hypotheses are formulated, and experiments are designed and conducted.” Initially, there is no evidence to back up the new paradigm, research program, or theory; such evidence is produced in the course of the research process. In the Pharma Case, a severe problem arises when unwanted values influence the epistemic phase, in which “hypotheses are evaluated in terms of their relationship to empirical evidence, among other things, and accepted or rejected.” (p. 3273)

For TSElosophers, the pre-epistemic phase turned out to be about paradigmatic or attitude-related matters and about engaging in everyday research practice. For instance, the currently ruling publish-or-perish mentality encourages an instrumental interest in science. Researchers need to pay heed to, e.g., the paradigmatic or methodological tastes and values of journal editors and the potential referees of their papers. Researchers frame their research and articles in such a way that they might appeal to the publishers. If this effect extends to the epistemic phase, all the worse.

What about the values of the epistemic phase? Is it possible that different epistemic values conflict? Or can there be important cases of epistemological underdetermination? For example, scientists choose methodologies under considerable epistemological underdetermination in long-term prediction models of complex phenomena, such as climate change. Epistemological uncertainties leave elbow room for other decision-making procedures. In such cases, should one, for example, exaggerate the effects of climate change if the evidence is ambiguous? The suggested solution was to communicate the unknown, or the degree of uncertainty, more effectively, e.g. with a scenario analysis. In sound science, we report the range of uncertainty. Another beacon of hope is the self-correcting process of science.

Finally, TSElosophers scrutinized the issues related to the third phase of the scientific process, the post-epistemic phase, “during which accepted hypotheses are utilized in other research (whether to produce more knowledge or new technology or both); this phase also includes the impacts of the accepted hypotheses on the broader society.” (p. 3273) For the sake of argument, let us assume that there might someday be research that could fuel racism – for example, studies that corroborate the hypothesis of biologically based racial differences, for instance regarding IQ. Should we refrain from publishing such research, or even from conducting it, foreshadowing the likely ensuing problematic public discourse or cultural processes? Another tricky example is the Manhattan Case, the project that resulted in the creation of the nuclear bomb, which would not have been possible without Einstein’s theory of relativity. Should we stop doing any research that might lead to disastrous applied science and technology?

TSElosophers concluded that ethics are embedded in the scientific process and tend to be included in all three phases. That said, in the epistemic phase, precisely epistemic values should be kept as dominant as possible – this is the very lifeline of scholarly work, without which it ceases, sooner or later, to be meaningful. However, both the results and the ethical justifiability of the research process need to be taken into account. An example of an unethical process is the utterly dehumanizing studies on human subjects by Josef Mengele.

TSElosophers wrapped up by realizing that science is both a logical process and a historical one. The cultural context has an impact on the concepts we use and the values we employ. However, epistemic values are the inalienable core of science: prioritizing the truth, in the sense of the purpose of well-grounded scholarship, is the right procedure in the ethics of science. It is also pragmatically the most prolific policy: in the long run, trustworthiness pays off. Truthfulness is the basis of reliability – and it can often be communicated most effectively by direct reference to the epistemic phase and epistemic values, even though Hicks is correct in that advanced scholarship does well to analyze the entire research process with a broader ethical framework.

© 2023 TSElosophers

Theme by Anders NorenUp ↑