One to Many, Many to One

Included in the same 2020 book edited by David Malinowski and Stefania Tufi, ‘Reterritorializing Linguistic Landscapes: Questioning Boundaries and Opening Spaces’, William Amos and Barbara Soukup address the categorization of data in quantitative linguistic landscape studies in their book chapter ‘Quantitative 2.0: Toward Variationist Linguistic Landscape Study (VaLLS) and a Standard Canon of LL Variables’. This is the book chapter that led me to this book, and it’s actually the one I intended to check out first, but, following the introduction, I got distracted by a couple of other book chapters (which I covered in the previous essay).

So, to quickly summarize this book chapter, it has to do with quantitative approaches and data annotation categories, as is evident from the chapter title. To be more specific, their chapter focuses on data annotation and suggests certain categories that would be useful to incorporate into all quantitative linguistic landscape studies. This would help with comparing data and conducting meta-analyses. If everyone does things their own way, it doesn’t become impossible to compare studies, but it’s difficult nonetheless. This is very much a practical issue and I agree that a shared set of categories would be beneficial in the sense that it’d be less of a hassle to compare the results of different studies.

To get on with this, Amos and Soukup (56) summarize how things were in the past and what the current situation is. It used to be the case that many linguistic landscape studies were quantitative, but, as they (56) point out, this ‘first wave’ of studies was followed by a turn toward qualitative studies. Following the shift from quantitative to qualitative, those opting for the quantitative were heavily (if not harshly) criticized for applying methodology that results in “undue simplification of their data’s character and context, to the point where the approach has been dismissed as merely ‘counting signs’”, as they (56) go on to point out. I agree and, to be clear, I am one of these heretics. I’ve gotten a lot of flak from my ‘peers’ who, behind the veil of anonymity, appear to have opted to quell such heresy. Nothing like getting feedback where it is apparent that there’s nothing wrong with one’s work, as such, except that it’s not done the orthodox way. I’m actually quite surprised that Amos and Soukup managed to get this text published. Maybe it’s just a slip-up and future copies will have this chapter torn from the book when you open it. Nothing to see here!

Anyway, joking aside, Amos and Soukup (56) state, boldly and bravely, that they are on a mission to challenge such ‘criticism’, to show that quantitative approaches “are capable of capturing and explicating details regarding the appearance and context of LL signs and their function in public space, by their power to throw into relief general patterns and trends of distribution and co-occurrence.” I want to emphasize the last word here, co-occurrence, because it’s something that has not been given due emphasis in many studies. Like I’ve explained in my essays and in my published work, it’s one thing to work with a single variable and a whole other thing to work with two or more variables at a time. Sure, dealing with multiple variables does not, in itself, tell us why this and/or that occurs alongside this and/or that and, conversely, why that isn’t the case with some other phenomena, but it does tell us that, somehow, these things seem to be linked, that there seems to be a pattern. This is something you can’t do with qualitative analyses. Or, well, maybe you can, but then we need to have a serious discussion as to what counts as qualitative and quantitative. On a general level, I’m a bit puzzled why people like me, Amos and Soukup, among others, have to even explain the usefulness of quantitative analysis. Hello, ever heard of Gabriel Tarde or, if he’s too obscure, Pierre Bourdieu? Oh, and if your response to that is that, well, they are not linguists or sociolinguists, not to mention linguistic landscape scholars, and thus that proves nothing, then what can I say, except that how parochial and dogmatic of you, how convenient that their work doesn’t count just because they are not part of your crew.
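To make the co-occurrence point concrete, here’s a toy cross-tabulation of two annotation variables. To be clear, the data and the variable values here are invented for illustration, not taken from the chapter:

```python
# Toy example: cross-tabulating two annotation variables (discourse type
# and language) to see whether certain values tend to occur together.
from collections import Counter

# Invented annotations: one (discourse type, language) pair per item.
items = [
    ("commercial", "English"), ("commercial", "English"),
    ("commercial", "Finnish"),
    ("infrastructural", "Finnish"), ("infrastructural", "Finnish"),
    ("infrastructural", "Finnish"),
]

crosstab = Counter(items)
# Counts like these don't explain *why* English clusters with commercial
# discourse, but they do show that the two variables pattern together.
print(crosstab[("commercial", "English")])       # 2
print(crosstab[("infrastructural", "Finnish")])  # 3
```

The point is simply that once each item carries two or more annotations, patterns of co-occurrence fall out of the counts, which is exactly what a single-variable tally cannot show.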

While I’m not 100 percent on board with having an “agreed method”, nor with canonizing anything, as I like to think everyone should get to do their own thing and be creative with their work, Amos and Soukup (56) make another good point when they state that there was some discussion of proposed categories that would work in all quantitative studies, but that the discussion got muddled when, all of a sudden, there was basically no room for quantitative studies. I think it’s great that they get to even discuss this, no matter how ‘wrong’ others think it is. As I pointed out already, there is something to this, being able to compare sets of data, rather than having to parse them in various ways before being able to compare them. At least it saves time, if nothing else.

To justify their project, why they propose what they propose in this book chapter, Amos and Soukup (57-58) cover how things were in the past. The gist is that back in the day when the first special issue dedicated to what became known as linguistic landscape studies (LLS) came out, in 2006, much of the work was actually quantitative, but the data categorization was rather simplistic. This ‘first wave’ or ‘empirical-distributive’ approach was criticized for being overly reductive. For example, authorship or agency, whatever you want to call it, was presented as a binary, ‘top-down’ or ‘bottom-up’, as in ‘public’ or ‘private’. As they (58) point out, this criticism was actually warranted, that is to say, constructive. Classifying something as this or that language was another point of criticism, not in the sense that you have much trouble distinguishing, let’s say, Finnish from English, but in the sense of how you decide between two very similar languages, let’s say Finnish and Meänkieli, especially when what you are dealing with may be limited to a couple of overlapping words. While that may seem trivial, it’s not when you have to deal with a large set of data and be consistent with the data annotation.

However, if you ask me, none of that was, nor is, the main issue that led people to opt for qualitative over quantitative. As Amos and Soukup (58) go on to point out, the biggest issue was the unit of analysis itself. In short, the biggest issue (no pun intended, you’ll see) was that size does matter (or does it) and, if it does (or doesn’t) matter, to what extent it matters (or doesn’t matter). As each unit or instance is, by default, valued as ‘1’ in the data, something like a small sticker is thus seen as equally important as a large billboard. That’s an obvious issue. Then again, for some reason there seemed to be only talk of size, not of other properties that may, in fact, cancel out the importance of sheer size. For example, let’s examine a billboard the size of a football field (to exaggerate things, just a bit) on which what’s depicted is in very low contrast. Let’s say it consists of a white background with writing on it in light yellow. It’s like looking at a blank canvas. Conversely, we can have something much smaller, nothing in comparison to the humongous billboard, for example a poster, yet the expression of its content is in high contrast. Which one is now more important? We could do similar things with orientation. Let’s assume that we have ten billboards, placed near one another. For the sake of simplicity, let’s also assume that the content is the same. They are all the same size, but one of them is vertically oriented (taller than wide), whereas the others are horizontally oriented (wider than tall). Which one of them stands out? We can even make the others bigger than the vertically oriented one and we’d still be paying attention to the vertically oriented one. We could even flip that vertically oriented one on its side, make it horizontally oriented, and it would still stand out from the nine others because it is smaller than them.
We can also extend this discussion to the conditions that pertain to the observation, such as the reflectivity of the materials used, how something may, for example, shimmer, thus catching your attention, or be lit, so that you pay attention to it, regardless of its features, but not to something else. I actually mentioned this issue in my own methodology article a couple of years ago. It used to have a segment dedicated to this issue pertaining to visual attention, the different factors involved, but, for some reason, it got axed as unnecessary and thus reduced to a mere mention.

Anyway, Amos and Soukup (58) argue that instead of addressing these issues (which, I acknowledge, are quite daunting), most researchers opted to go the qualitative route, because “[i]n this strand, discrete elements of a given LL are selected and discussed individually, and are not typically compared with other items in that space or elsewhere in terms of quantitative distribution patterns.” They don’t say this, but I reckon that many opted for the qualitative route because it’s just less work. No need to spend your time figuring out solutions to the aforementioned issues. No need to spend that much time gathering the data. No need to spend that much time processing that data. No need to make sure that the data was processed consistently. This is not even just about who wants to dedicate their time to arduous and often simply boring tasks, but about productivity. You do not need all that data to produce an article, which is now a standard measurement of productivity in universities. I mean it only makes sense to do one-off studies. They are self-contained. Quantitative studies tend to need a lot of time for preparation, execution and processing, so they don’t have a high initial turnover. When contracts are like half a year, a year, maybe two years, you need to produce in that time. It took me, what, a week to gather the data, and a couple of weeks to process it, add a week or two for going through it, again, and again, and again, for the sake of consistency, for a total of a couple of months. Then again, those were really long days, something like 14 to 16 hours a day going through the data. At a more relaxed pace, that would have taken me like half a year, give or take. This doesn’t mean that qualitative work is easy. All I’m saying is that it takes way less time to work with the data, because the assessment of linguistic landscape data is always manual, dare I say, qualitative.
It’s not like you have an online form that people complete, selecting from two to five options per question, so that a database forms automatically. You actually have to analyze each item, one by one, multiple times, often just for the sake of consistency. In terms of effort and efficiency, yeah, I’d also pick a handful of items to analyze instead of thousands. Then again, as also pointed out by Amos and Soukup (56), you can’t really say anything about patterns or trends, about how systematic something is, by looking at a handful of items, which is why I went through all that.

Amos and Soukup (59) acknowledge the limitations of the so-called ‘first wave’ of studies, noting that they were, indeed, simplistic and rather unsatisfactory when compared with what the qualitative assessments dealt with. They (59) gather that the simplicity of these studies had to do with how data handling becomes more complex the more categories you deal with. I can’t say if it was intentional or not, I wouldn’t know, but it would make sense. I mean the more categories you deal with, the more work it involves. For example, if you annotate 100 items, you have to make 100 annotations per category. That means that if you have two categories, you have to go through 200 annotations, involving 200 clicks. If you have three categories, you have to go through 300 annotations, involving 300 clicks. On top of that, you probably have to go through them again and again, to make sure you were consistent. This issue grows with the amount of data. So, yeah, it would make sense that prior studies had only a limited number of categories, because the more categories you have, the harder it becomes to manage the data. It’s sort of obvious that if you deal with a small set of data, assessing it in multiple ways is easier.
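The scaling here is simply linear in items, categories, and consistency passes. As a back-of-the-envelope sketch (the function is my own, not anything from the chapter):

```python
def annotation_workload(items: int, categories: int, passes: int = 1) -> int:
    """Total annotation decisions ('clicks'): one per item, per category,
    per full pass through the data."""
    return items * categories * passes

# The example from above: 100 items, two or three categories, one pass.
print(annotation_workload(100, 2))  # 200
print(annotation_workload(100, 3))  # 300

# With consistency checking the numbers balloon: three passes over
# 1000 items annotated in 5 categories.
print(annotation_workload(1000, 5, passes=3))  # 15000
```

Nothing fancy, but it makes plain why adding categories, or data, makes the manual workload grow so quickly.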

Amos and Soukup seek to rework the quantitative side (hence the 2.0 in the chapter title) by taking cues from variationist sociolinguistics. They (59) state that the goal is to better understand the “interactions between linguistic and social structures and dynamics” by exploring “the relationship between written language use in public space and its social character, context and contingencies.” What I’d like to emphasize here is contingency. This is not a matter of necessity. There’s nothing inevitable about this. What you are looking into is just that this occurs alongside that, in this data, gathered in this time and place. You can, of course, infer all kinds of things based on that, but it doesn’t mean that things will stay that way. Maybe, maybe not. You might also want to compare the findings with findings from some other study conducted elsewhere, hence the interest in comparable sets of data.

Anyway, they (60) state that the proposed variationist approach has three basic requirements. Firstly, they (60) indicate that “an objectively imposable, ex ante definition of unit of analysis under study … that is ideally applicable across a wide variety of research settings is necessary.” I agree that the unit of analysis should not be defined too narrowly. That said, I don’t really know what is meant by objective in this context, some transcendent truth or just what is considered truth in a regime of truth? I don’t think it is possible to be objective, nor subjective. Both presuppose the self as a starting point. To me, what matters is that it makes sense, and what makes sense is always a collective matter (pertaining to collective assemblages of enunciation, as Gilles Deleuze and Félix Guattari would put it). It may seem a bit crazy, but, in a way, it’s a matter of intuition. You can’t really explain it. Either you get it, or you don’t (like with the Möbius strip). Secondly, Amos and Soukup (60) state that the data should be clearly defined and delineated. This is known as sampling. What’s important here is that all data points should be recorded. In other words, you shouldn’t be picky with what you include in the data. This is known as the count-all procedure, as they (60) point out. Oddly enough, I’ve received flak for that specific wording, because, apparently, for a detractor, it means that there are just these things, a fixed number of them, waiting for me to record. Now, coming from a detractor, that comment was, of course, in bad faith. What is meant by it should be rather obvious. It just means that you aren’t skipping data points because you couldn’t be arsed to do a better job. That’s all. Thirdly, they (60) state that the data categories should be relevant to the study, whatever it is that one is studying.

Following the set criteria, what they think should be the basic requirements, they (60) further elaborate on these three requirements. They (60) endorse what is known as the physical definition of the unit of analysis, arguing that it “sets a reasonable, satisfactory, and workable precedent” and that it “is required for a true count-all collection of items in an LL if surveyed at a comprehensive level.” In the notes (72) this is also referred to as “spatial (material) delimitation”. I’m actually quite surprised that they propose the much criticized physical, material or spatial definition as the basis. I mean it’s so easy to tear apart, as I’ve pointed out in my own published work. The gist of the definition is that each unit of analysis is contained in its own spatially definable frame, which may or may not be the same thing as its carrier. For example, a road sign is typically a sheet of metal bolted to a pole that is stuck to the ground. The sheet of metal is the frame. The pole is the carrier. What’s depicted on the frame is often painted on the sheet of metal, though it could also be taped/laminated on to it (I’ve seen both types). In some cases the frame and the carrier can be one and the same thing. For example, if something is applied directly to a wall, then the wall is the frame and the carrier, unless one considers the paint on the wall as its own frame and the wall the carrier of that frame. They (72) refer to this as layering: in problematic cases each layer is considered its own frame and thus its own unit of analysis. Now, to be fair, this makes a lot of sense, but I’m still not convinced, because, as they (72) point out themselves, “this particular operationalization remains subject to grey areas and ad hoc judgement calls.” It’s kind of hard to assess whether, for example, graffiti is one layer, one unit, or a number of layers, multiple units.

Things get way more problematic when we encounter a number of frames that seem to form a larger whole but aren’t connected to one another physically. For example, I’ve encountered writing on taping that is contained on that frame, the tape itself, only to encounter the same taping, made from the very same material, containing the same writing, but split into two pieces of tape. In the latter case the tape is split in two because it appears to have been retrofitted, put into place to replace prior taping. Because there’s a protruding object, it would have been impossible to replace the original tape without cutting the tape that replaced it in half. The thing is that both pieces of tape are necessary because they pertain to health and safety: the same information must be provided in Finnish and Swedish. If we apply the physical definition, there are three units of analysis, one that contains Finnish and Swedish, one that contains Finnish and one that contains Swedish. Applying this definition ends up obscuring this requirement. This may be only a minor thing in the data, inconsequential, but if there are many instances where the two languages are presented on separate frames, it may end up distorting the results, giving a wrong impression of the situation. This is by no means the only issue with the physical definition.

This becomes even more of an issue when we address something like taping on a carrier/frame, such as a door. Let’s say that the door has writing on it, applied to it in the form of taping. Each letter is a physically separate entity, distinct not only from the door, but also from the other letters. If we go with a strict physical definition, each piece of the tape is, by all logic, its own unit of analysis. The problem with this is that it just makes no sense. It’s not wrong. It’s just absurd.

They (60) further comment on the second basic requirement. They emphasize the point I made about how the so-called count-all procedure has to do with accountability, taking into account not only what strikes your fancy but also what doesn’t. I agree. I remember gathering data, being drawn to all that’s unexpected, only to subsequently realize that these cases were actually fairly rare in the data. Had I focused solely on such aspects, I would have erred in presenting what I was studying as far more linguistically heterogeneous than was actually the case. In fact, the data was highly homogeneous in this respect. So, yeah, I agree with Amos and Soukup (60) that it is important to assess what patterns emerge, what features are and aren’t salient, as well as in what contexts they are or aren’t salient.

With regard to the third basic requirement, they (61) comment on it by adding that one needs to be careful when selecting the area that one seeks to study. It’s unwise to attempt to do more than what’s possible. It’s unlikely that a researcher or even a research team has unlimited resources, so it’s advisable to focus on something that one can actually pull off. Trying to cover a whole city or town will likely result in something that never gets done. They (61) suggest a problem-oriented approach, basing the choice on hypotheses or research questions. For example, one might focus on an area that is considered tourist oriented. This also helps with coming up with data annotation categories, as they (61) point out.

Following the first part of their book chapter that focuses on prior research and addresses basic requirements for conducting research, Amos and Soukup (61) turn their attention to the data annotation categories that they propose to be common across studies in order to make it easier to compare findings of different studies. In summary, this second part introduces categories that pertain to the material or physical aspects and discursive or symbolic aspects. The former pertains to the material and spatial aspects, the appearance of an item and its placement, whereas the latter pertains to what the item is about.

The first category pertains to the physical location. For them (62-63) it has to do with where something is located. They (62) suggest that one should input details such as country and municipality (city, town, village etc.) and, if possible, the street name and number. Now, I have a better idea. If located outdoors, why not just use GPS coordinates? It’s easy to do. I did such marking in undergrad geography like a decade ago and I bet the tools are much better and easier to use these days. Cameras tend to have GPS these days as well, but the problem with that is that it records the location of the camera, not the location of what it is that you photograph. In addition, they (63) suggest indicating whether the item is, for example, on a wall, on the ground (pavement, road, grass) etc. They (62-63) also propose taking into account the physical dimensions of the items, indicating whether they are, for example, in square meters or larger/smaller than … but not smaller/larger than … in terms of ISO standardized paper sizes. They (62-63) suggest a quick on-the-spot approximation of size, to get a rough idea of the size. I’d add here that if one wanted to be more accurate, one could use a laser distance measurement tool to figure out the surface area. That of course depends on the available resources, whether one has the budget for such a tool and the time to do the measurements. That’s probably not doable for solo projects, but it should not be much of an issue for a well coordinated team. Their (63-64) third category deals with the actual material that the items are made out of, for example, glass, metal, paper, plastic or wood. Their (63-64) fourth category has to do with how whatever is displayed is presented, whether it’s, for example, embossed, engraved, written on the surface or printed. The third and the fourth categories also include the option of indicating whether the item is a digital display.
While I consider the third and fourth categories useful and rather easy to implement in a study (which is exactly why they suggest these categories), I think I would separate the digital/non-digital into its own category, because not all digital displays are alike. In terms of material durability, it makes a difference whether we are talking about a flimsy LCD panel meant to be used indoors or a more robust LED display meant to be viewed at a distance outdoors. I would also separate lighting from the materiality to indicate whether the item is illuminated and how it is illuminated, from within (lighting incorporated into the item) or from without (separate lighting).
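To get a feel for what a per-item record along these lines might look like in practice, here’s a minimal sketch, including my suggested digital and lighting splits. All field names and example values are my own, not the chapter’s:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLItem:
    # Category 1: physical location.
    country: str
    municipality: str
    street: Optional[str] = None        # street name and number, if known
    latitude: Optional[float] = None    # GPS fix, if recorded outdoors
    longitude: Optional[float] = None
    placement: str = "wall"             # wall, ground, pole, door, ...
    # Category 2: physical dimensions (rough on-the-spot size class).
    approx_size: str = "A4"
    # Category 3: material; category 4: mode of presentation.
    material: str = "metal"             # glass, metal, paper, plastic, wood
    presentation: str = "printed"       # embossed, engraved, written, printed
    # My suggested separate categories.
    digital: bool = False
    illumination: Optional[str] = None  # None, "internal" or "external"

# An invented example item.
item = LLItem(country="Finland", municipality="Turku",
              latitude=60.4518, longitude=22.2666)
print(item.digital)  # False
```

The point of a fixed record like this is exactly what Amos and Soukup are after: once every study fills in the same fields, data sets become directly comparable.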

The third part of the book chapter deals with what they call the discourse level categories. They (64) propose three categories: authorship, contextual setting and discourse type. Firstly, instead of going with the ‘top-down’ or ‘bottom-up’ classification, they (64-65) argue in favor of having three options to choose from in the authorship category: ‘official’, ‘private’ and ‘unauthorized’. The first two options match the binary classification, but they are also considered ‘authorized’, whereas the third is not, hence the moniker ‘unauthorized’. They (65) acknowledge that it is possible to further differentiate these categories, but leave it up to people to figure out whether their work warrants such differentiation or not. Their (66) interest lies in adding or enhancing compatibility between data sets, to make it easier to do subsequent meta-analyses. Secondly, they (66) indicate that ‘contextual setting’ pertains to the discursive context and “thus provides important metadata about the roles of certain types of places and groupings in the LL and their relationship(s) with specific objects, authors, and languages.” The purpose of this category is to go “beyond the official/private and authorized/unauthorized dichotomies”, as they (66) go on to clarify. Without going through the specifics, they (66-68) propose a categorization that is based on an existing classification of economic activities known as ISIC (International Standard Industrial Classification of All Economic Activities), with 21 options. The ISIC documentation includes further information that can be used to select the appropriate option. The obvious strength of using this preset is that there’s no need to start from scratch. It’s also connected to anything else that uses the classification, so one can draw from those statistical resources as well, as they (68) point out.
The obvious negative is that the listing is rather generic (yes, I know, that’s also kind of the point here), unless one implements the further levels of categorization presented in the ISIC documentation. Be that as it may, I can’t vouch for its usefulness, but that’s only because I have not assessed it myself, to see if something is missing or contradictory in the classification. Thirdly, they (68-69) state that the ‘discourse type’ category functions to indicate the type of discourse one is dealing with in the case of each item, for example, artistic, commercial, infrastructural or political discourse (to name some in their non-exhaustive list). This is what I’ve called genre in my own work.

What I like about what Amos and Soukup propose in this book chapter is that the classification is simple enough and that it is not presented as an exhaustive, one-size-fits-all approach. It serves more as a template that is intended to facilitate subsequent meta-analyses. I also like the general emphasis on co-occurrence, assessing two or more categories or variables at a time. Things get particularly interesting when you can assess the co-occurrence of phenomena context sensitively, as promoted in this book chapter. I like that they have the guts to go against the grain by advocating for quantitative studies. I think people don’t really understand what the point of quantitative social studies is. I think the issue has to do with the definitions of qualitative and quantitative.

I subscribe to the Tardean definitions of qualitative and quantitative, which are, more or less, the exact opposite of how they are typically understood. Bruno Latour explains this well in his book chapter titled ‘Tarde’s idea of quantification’. I’ve covered this issue in a previous essay, but I’ll summarize the main points because they are highly relevant here. He (147) states that:

“In the twentieth century, the schism between those who dealt with numbers and those who dealt with qualities was never bridged. This is a fair statement given that so many scholars have resigned themselves to being partitioned into those who follow the model of the ‘natural’ sciences, and those who prefer the model of the ‘interpretive’ or ‘hermeneutic’ disciplines.”

In other words, there’s this deeply ingrained notion of natural sciences as being the ones that deal with numbers, that is to say hard, cold facts. Conversely, social sciences (or arts and humanities) are often referred to as the soft sciences that deal with what can’t be quantified or generalized. In Latour’s (147) words:

“All too often, fields have been divided between number crunching, devoid (its enemies claim) of any subtlety; and rich, thick, local descriptions, devoid (its enemies say) of any way to generalize from these observations.”

Which means that (156):

“The division between a qualitative and a quantitative social science is in essence the same as the division between individuals and society, tokens and type, actors and system.”

And (149):

“In the tired old debate pitting a naturalistic versus an interpretative social science, a strange idea appears: that if we stick to the individual, the local, the situated, you will detect only qualities, while if we move towards the structural and towards the distant, we will begin to gather quantities.”

Now, if you’ve read Tarde, you know that he is not buying into this. As explained by Latour (149):

“For Tarde the situation is almost exactly the opposite: the more we get into the intimacy of the individual, the more discrete quantities we’ll find; and if we move away from the individual towards the aggregate we might begin to lose quantities, more and more, along the way because we lack the instruments to collect enough of their quantitative evaluations.”

Make note of how Latour points out that the issue with aggregation was, for Tarde, an issue of not having appropriate instruments. Latour (159-161) expands on this issue, noting that we need to keep in mind that there were no computers when Tarde was around (1843-1904). I mean typewriters were a recent invention in the late 1800s. Now “we can produce out of the same data points, as many aggregates as we see fit, while reverting back at any time, to the individual components”, which also means that “the whole has lost its privileged status” as it’s “now nothing more than a provisional visualization which can be modified and reversed at will, by moving back to the individual components, and then looking for yet other tools to regroup the same elements into alternative assemblages”, as elaborated by Latour (160-161).

To make sense of this, Latour (148, 160) points out that for Tarde everything is a society (which, by the way, reminds me of how for Deleuze and Guattari all they know are assemblages), be it molecules, cells or people, “ants, bacteria, cells, scientific paradigms, or markets.” There’s no need to think otherwise, which, I realize, will come across as a monstrosity to many on the qualitative side because it undermines their starting point: that the individual is a given, autonomous and free of any constraints to thought and action. The point is to abandon objectivity and subjectivity and replace them with collectivity. Humans are components of society, just as cells are components of humans. Importantly, as Latour (148) points out, you cannot separate the components from what they are components of. The gist of this is that you cannot explain anything with recourse to society (as that’s an abstraction that needs to be explained) or to the individual (as that’s yet another abstraction that needs to be explained). You can’t explain what people do by stating that it’s because of social structure (society, culture, etc., they are all abstractions that need to be explained) or the free will of the individual (it’s the same problem again). As explained by Latour (148), we “should not be (mis)led into imagining that there could be a strict distinction between structural features and individual or sub-individual components.”

To make more sense of this, Latour (156) points out that “[t]he quantitative nature of all associations will seem bizarre if we mistakenly impute an idea of the individual element seen as an atom to Tarde.” In fact, Tarde is against “the very idea of an individual as an atom” as it “is a consequence of the social theory he is fighting against”, how “quantification starts when we have assembled enough individual atoms so that the outline of a structure begins to appear, first as a shadowy aggregate, then as a whole, and finally as a law dictating how to behave to the elements”, as clarified by Latour (156). It’s therefore crucial to emphasize that neither the one nor the whole enjoys any privileged status in how Tarde conceptualizes a society. To put it bluntly, “[t]he reason why there is no need for an overarching society is because there is no individual to begin with, or at least no individual atoms”, as stated by Latour (156). If we do not have ones (atoms) and/or the (overarching) whole, what do we have then? Well, as aptly summarized by Latour (156), instead of an atom, one, “[t]he individual element is a monad, that is … an interiorization of a whole set of other elements borrowed from the world around it”, and the whole is merely “a vast crowd of elements already present in every single entity.” You still have ones and wholes, but they function in reciprocal presupposition. So, as I already pointed out, we can certainly think of a human as a whole, but the whole is only an aggregate of elements, such as cells, that have come together. We can also think of the human cell as a whole, consisting of elements. We can also think of humans as cells of a whole, such as a society. However, what’s crucial here is to understand that there’s no overarching or all encompassing whole that covers everything (no fixed structure, no law that we can uncover). It’s equally important to understand that there are no wholes that have fixed elements.
In summary, as explained by Latour (157), “if we are able to quantify an individual ‘one,’ it is because this instance is already ‘many.’ Behind every ‘he’ and ‘she,’ one could say, there are a vast numbers of other ‘he’s’ and ‘she’s’ to which they have been interrelated.” So, paradoxically, one is simultaneously an individual, in the sense that one is indivisible (a haecceity), but also a dividual, in the sense that one is divisible into many (quiddities), as he (157) points out.

To get back to the book chapter, to wrap things up, for now: there are some things that I’d do differently, as I pointed out on a couple of occasions, but nothing major when it comes to the proposed categories. It’s more like fine-tuning. My biggest gripe concerns the unit of analysis. While I think that the physical definition may serve as a starting point, it just doesn’t work across the board. It works in many cases, but it still runs into issues way too often. There are just too many cases where it fails to make sense. It results in absurdity.

References

  • Amos, H. W. and B. Soukup (2020). Quantitative 2.0: Toward Variationist Linguistic Landscape Study (VaLLS) and a Standard Canon of LL Variables. In D. Malinowski and S. Tufi (Eds.), Reterritorializing Linguistic Landscapes: Questioning Boundaries and Opening Spaces (pp. 56–76). London, United Kingdom: Bloomsbury Academic.
  • Deleuze, G., and F. Guattari ([1980] 1987). A Thousand Plateaus: Capitalism and Schizophrenia (B. Massumi, Trans.). Minneapolis, MN: University of Minnesota Press.
  • Latour, B. (2010). Tarde’s idea of quantification. In M. Candea (Ed.), The Social After Gabriel Tarde: Debates and Assessments (pp. 145–162). London, United Kingdom: Routledge.
  • Malinowski, D., and S. Tufi (Eds.) (2020). Reterritorializing Linguistic Landscapes: Questioning Boundaries and Opening Spaces. London, United Kingdom: Bloomsbury Academic.