
With the Lights Off

I don’t know what it is or was that pushed me in this direction, or at least I can’t remember. Well, I guess that doesn’t matter anyway. Perhaps it just struck me one day how dark it gets in Finland and how little thought I had given that previously. Anyway, I did a bit of digging and came across an article published in ‘cultural geographies’. Nina Morris addresses the issue in her article titled ‘Night walking: darkness and sensory perception in a night-time landscape installation’, so I’ll be discussing it in this essay. I will then move on to the differences between human eyes and cameras, because they are highly relevant to what I’m interested in: conducting linguistic landscape research in the dark.

Morris (315-316) begins the article with a quote by Robert Macfarlane, whom she refers to as a mountaineer and outdoor enthusiast. I’ll add a bit from the original source, ‘The Wild Places’ by Macfarlane. The pagination is from ‘Noctambulism’ in ‘The Way of Natural History’, because that’s the version I got my hands on, so it’ll have to do. Macfarlane first explains his interest in the topic (29):

“Winter is the best time to night-walk, you see, because snow perpetuates the effect of moonlight, which means that on a clear night, in winter hills, you see for a distance up to thirty miles or so.”

Going slightly off topic first, it is indeed the case that you can see quite well at night, provided the conditions are suitable for it. I’ve been to Lapland. Upon arrival there was no snow on the ground. It was pitch black outside at night, and, well, even late in the afternoon for obvious reasons, but once the snow fell, it was easy to see. The snow acts as an extensive reflector for moonlight, in case you didn’t get that already. Anyway, that’s a bit on the side of things, but I’ll let Macfarlane continue (30):

“[D]ark confers on even a mundane landscape. Sailors speak of the uncanny beauty of seeing a well-known country from the sea, the way that such perspective can make the homeliest coastline seem strange. Something similar happens to a landscape in darkness.”

Skipping a bit here, Macfarlane continues (30):

“At night new orders of connection assert themselves: sonic, olfactory, tactile. The sensorium is transformed. Associations swarm out of the darkness. You become even more aware of landscape as a medley of effects, a mingling of geology, memory, movement, life. New kinds of attention are demanded of you, as walker, as human. The landforms remain, but they exist as presences: inferred, less substantial, more powerful. You inhabit a new topology.”

Now, for my purposes, I can sort of skip the parts that don’t have to do with vision. That said, I think Macfarlane is on to something. Unlike Macfarlane, my own interest is in urban areas rather than rural areas, but I agree that darkness demands new kinds of attention. You can no longer rely on your vision to the degree that you do in daylight conditions. Macfarlane (31) elaborates on this:

“Homo sapiens evolved as a diurnal species, adapted to excel in sunlit conditions, and ill equipped to maneuver at night.”

For the purposes of this essay, I’m not too fussed about what he goes on to say after this statement, as it’s unrelated to the topic at hand, but anyway, he (31) rightly points out that humans overcome this, let’s say, limitation by artificial lighting. Yes, we can keep on doing things when it gets dark by using artificial lighting. At first one could, for example, light torches, candles and lamps. Now there are plenty of fixed sources of light that switch on and off at certain times of the day or based on how much ambient (sun)light there is at any given time. Vehicles also tend to have lights for this very purpose. There’s less of a need to carry sources of light as they are fixed into our surroundings, operating when needed. Anyway, the main purpose of lights in the dark is to overcome this limitation, our shoddy vision in the dark, to make it easier and safer for us to go about our business. Fair enough, that seems all well and good, but it also guides us, our seeing and our acting.

Simply put, you can’t see in the absence of light. I don’t mean that you cannot see in the dark. You actually can, but it’s probably not much of a comfort to you if you can’t make much out of it. Try a room with no windows and close the door behind you. Switch off the lights, obviously. That’s what I mean. Human eyes adapt to different lighting conditions, but there are tradeoffs. You might remember some school class, I think it was biology, where this gets covered: the rods and cones. If my memory serves me, the rods are the ones sensitive to light, but not that great when it comes to resolution. With cones it’s the other way around, unless I get the two mixed up. Now, in daylight conditions there is plenty of ambient light, so you don’t need that much sensitivity to it, meaning that cones are fine and you get the resolution, which I think is preferred. In low light conditions, say at night time, you’d rather trade the resolution for the ability to see anything at all.

You’ll encounter this in photography as well. Your camera will have a hard time in low light, so you’ll either have to let it do its thing, meaning a longer exposure to the available light, or make it more sensitive to light, meaning using a more sensitive film or bumping up the sensor sensitivity. Like with human vision, the latter option comes with certain tradeoffs, meaning that it will be at the expense of detail. With digital technology, this isn’t as big an issue as it used to be, but it still holds that low light is not preferable and you’d rather not amplify the signal if you can help it. If resolution is what you are after, then you’ll fix the camera, typically to a tripod, and use a longer exposure. Now, obviously that’s not an option for the human eye, which you have little control over. You wouldn’t even want long exposure vision for that matter. Just imagine getting a snapshot of what was every ten seconds or so, not to mention it being all muddled if you or your eyes move even just a bit. Managing the aperture, the thing that mimics the iris, on the camera lens is one option as well, but opening it up like a dilated pupil leads to a shallow depth of field, which isn’t preferable if you want to look at the whole scene rather than just a part of it in the distance. If you wonder what is meant by a shallow depth of field, take a look at a wall in your room at a distance, then bring your hand in front of you and stare at your hand, then the wall, then the hand again. See the difference? When it comes to landscape photography, it’s generally preferable to have the whole landscape rendered sharp, not just some near object. This does actually result in a representation of the view that is overall sharper than what human vision can render at any given time, but that’s a point tackled next. So, human eyes and cameras are sort of the same, but different. As a result, you can mimic the eye on a camera to a large degree, but you lack that control over your eyes.

There’s plenty more to the comparison of human eye(s) and cameras. The notable difference is obviously that humans have two eyes, whereas most cameras have only one (film/sensor). So, what we see is quite wide, wider than what most camera lenses can do, but the peripheries of our vision, left, right, top, bottom, not to mention the corners, are hardly sharp. It’s all there, but you can’t make out the details, unless you move your eyes and/or reorient yourself to center on it. Of course then it’s no longer in the periphery. I remember being told that in terms of lenses made for the 35mm (36x24mm) film format cameras, nowadays referred to as ‘full frame’ on digital cameras, a 50mm lens, also known as a normal focal length lens, is equivalent to how the human eye perceives the world. I believe it’s not exactly 50mm, but a bit less. Anyway, that doesn’t make a great difference here. So when you look through a viewfinder, objects appear the same size and at the same distance as they do … normally. Now, that doesn’t mean that the field of view of such a lens matches human vision, not even when you close one of your eyes. The angle of view is actually rather narrow on a normal focal length lens. So, it corresponds to the central area of the human field of view. In summary, the field of view of human vision is wide, very wide, but only a small part of it is offered in high detail. So, when you engage with the landscape, the totality of your view, it’s not all perfectly tack sharp, as only the central area of your vision manages that. In that sense a landscape photo, when taken on settings that maximize the overall quality of the captured scene, as mentioned above, creates a more detailed representation of the world than we can hope for with our own eyes. That said, human vision does not suffer from the defects that tend to plague wide angle photography. When you look at a scene, you don’t see distortion and things falling over. So straight lines actually look like straight lines rather than oddly curving or falling over. High quality camera lenses manage these quite well these days and there are also tilt shift lenses that, as far as I know, are favored in architectural photography. This can now be tackled by software as well. Anyway, the point is that you don’t have to do any corrections to your eyes; it all happens automatically when you look around. It’s not that you see everything as equal size regardless of the distance. That’d be terrible, everything flattened to a plane. It’s that you don’t see the world appear wonky or distorted. One can also use non-wide angle lenses in landscape photography. It’s not actually forbidden. Obviously you won’t capture your view from where you are standing with a normal focal length lens, only the central portion of your vision. You have to fall back. Then again, you are then no longer perceiving the world from that spot, but attempting to simulate the view, the landscape, as it appears from where you are not, in order to get things to appear proportionate and at the right distance. In practice that means that the depth of field must be taken into account, otherwise you’ll get plenty of blur. For example, on a full frame sensor, a 22mm wide angle lens focusing on an object at 1.5 meter distance with the aperture stopped down to f/11 yields everything in focus from about 0.74 meters in front of you to infinity. If a 50mm lens is used on the same settings, only about 0.6 meters of depth remains in focus, roughly from 1.26 meters to 1.87 meters. Now of course, what’s in the frame will not be that close, so the subject distance must be changed. The subject distance must then be set to approximately 7.5 meters, yielding things in focus from about 3.7 meters to infinity. I haven’t really experimented with this on normal focal lengths, but at least on paper it seems doable. What I take issue with is rather the mind warp that it involves. The way I understand landscape, it unfolds from where you are, not from an artificially set distance from yourself.
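If you want to check figures like these yourself, here’s a small Python sketch of the standard thin-lens depth of field formulas. The 0.03 mm circle of confusion is my assumption for full frame, and the exact results shift a little depending on which value you plug in, so treat the output as ballpark rather than gospel:

```python
import math

def hyperfocal_mm(f_mm, n, coc_mm=0.03):
    """Hyperfocal distance in millimeters for focal length f_mm and aperture n."""
    return f_mm ** 2 / (n * coc_mm) + f_mm

def dof_limits_m(f_mm, n, subject_m, coc_mm=0.03):
    """Near and far limits of acceptable sharpness (in meters) when focused at subject_m."""
    s = subject_m * 1000.0
    h = hyperfocal_mm(f_mm, n, coc_mm)
    near = s * (h - f_mm) / (h + s - 2 * f_mm)
    far = math.inf if s >= h else s * (h - f_mm) / (h - s)
    return near / 1000.0, far / 1000.0

print(dof_limits_m(22, 11, 1.5))  # roughly (0.75, inf): everything from ~0.75 m to infinity
print(dof_limits_m(50, 11, 1.5))  # roughly (1.26, 1.86): only ~0.6 m of depth in focus
print(dof_limits_m(50, 11, 7.5))  # roughly (3.8, 450+): focusing near the hyperfocal distance
```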

I think it’s worth pointing out, even though it’s obvious, that you can’t store what you see, at least not in the way that cameras do. In the world of consumer digital cameras, it is, or at least used to be (less so nowadays), all about the megapixels. First we had some megapixels, then five megapixels was considered good, then ten, then sixteen, then twenty, then twenty-four and so on. Now twenty or so megapixels is nothing out of the ordinary. My first DSLR (K10D) has ten megapixels, the second one I got (K-5) has sixteen and the one bought last year (K-3II) has twenty-four. My compact camera (GR) has sixteen megapixels. All offer plenty, provided you know what you are doing and pair them with good lenses. Now, rather obviously, the newer cameras outperform the older ones. There have been additional changes to the sensor technology, not only an increase in megapixels. Handling noise at higher sensitivities has improved significantly, despite cramming more pixels into the same sized sensor. Also, at least my camera manufacturer removed the low-pass (anti-aliasing) filter to improve sharpness. Anyway, this is about megapixels, so back to that. My cameras have APS-C format sensors (24x16mm), which are smaller than the ones in the so called full frame cameras (36x24mm) and in the much more expensive medium format cameras (typically 44x33mm). While you can cram a high number of megapixels into the smaller sensors, even the ones used in phones, you’ll find 50, 60 and up to 100 megapixels in the medium format cameras. There’s also multi-sampling now offered in certain cameras, like my K-3II, meaning that each pixel on the sensor is sampled in each color, red, green and blue, instead of each pixel recording only one of these and the result being interpolated. The multi-sampling is used to overcome this limitation of the Bayer filter, improving the image quality. How does this fare in comparison to a human eye then? Well, if we can trust the measurements stated by Christine Curcio, Kenneth Sloan, Robert Kalina and Anita Hendrickson (497) in ‘Human Photoreceptor Topography’, on average a human retina “contains 4.6 million cones”, ranging from 4.08 to 5.29 million cones, and “92 million rods”, ranging from 77.9 to 107.3 million rods. Now, they only examined seven individuals aged between 27 and 44 years, so take that into consideration. Age and health do impact this. Not everyone is eagle eyed. Some other studies report slightly different numbers, but that’s not what’s really at stake here. Anyway, rounding things a bit first to get even numbers (for the sake of clarity), we could say 5 million cones yield five megapixels and 100 million rods yield 100 megapixels. Remember that unlike the cones, the rods are highly light sensitive but not color sensitive. Conversely, only five megapixels’ worth of human vision is sensitive to color. To complicate things further, the area in the human eye responsible for sharp central vision, the fovea centralis, is very small and does not contain all of the cones. The Curcio, Sloan, Kalina and Hendrickson (497) article reports 199 000 cones/mm² on average, with variation from 100 000 to 324 000 cones/mm². The density seems to vary a bit from study to study, as the authors note. Assuming that the fovea centralis is 1.5mm in diameter, its area is approximately 1.8mm², meaning that there are some 360 000 cones in the fovea centralis. With the same math, 150 000 cones/mm² would result in approximately 265 000 cones in the area. Therefore, the output is smaller than the five megapixels, some 0.265 to 0.36 megapixels if we use those numbers. Even if those numbers don’t hold exactly, the point stands: what we see accurately is sensed by a tiny number of photoreceptors.
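Since numbers like these are easy to get wrong, here’s the back-of-the-envelope arithmetic as a Python snippet. The 1.5 mm diameter of the fovea centralis is an assumption, as noted, and the densities are the ones discussed above:

```python
import math

fovea_diameter_mm = 1.5                                   # assumed diameter of the fovea centralis
fovea_area_mm2 = math.pi * (fovea_diameter_mm / 2) ** 2   # about 1.77 mm^2

# cone densities per mm^2: the 150 000 figure used above, plus the average
# and the high end of the range reported by Curcio and colleagues
for density in (150_000, 199_000, 324_000):
    cones = density * fovea_area_mm2
    print(f"{density} cones/mm2 -> about {cones / 1e6:.3f} 'megapixels' in the fovea")
# prints roughly 0.265, 0.352 and 0.573 megapixels respectively
```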

It would be far too simplistic to claim that observing the world with one eye yields worse results than my first DSLR just because the color sensitive part is in the central area of the eye. Conversely, claiming that it matches a state of the art medium format sensor is equally off. The thing is that what is seen, even with only one eye, is parsed together by the brain, making it all work in a way that doesn’t have a sharp falloff in what we see. Our eyes are not locked, but move about ever so slightly, parsing it all together like a constantly evolving panorama. Roger Clark calculates it at about 576 megapixels. Those who want a good summary of this can take a look at his site. Of course this does not change the fact that the central area of our vision, that is to say what we pay attention to, has a much, much lower resolution. So when we compare cameras and human vision, it’s not a straightforward comparison in terms of the resulting resolution. Simply put, as a snapshot, a camera can push things further than our eyes across the entire field of view, resulting in a higher still resolution.
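If I read Clark’s reasoning right, the 576 megapixel figure comes from assuming that the eye resolves about 0.3 arc-minutes and scans a field of roughly 120 by 120 degrees. The little sketch below is my paraphrase of that calculation, not his code, so treat the two numbers as his starting assumptions rather than hard facts:

```python
# Reconstructing an estimate in the ballpark of Clark's ~576 megapixels.
field_deg = 120          # assumed field of view per side, in degrees
resolution_arcmin = 0.3  # assumed angular resolution of the eye, in arc-minutes

pixels_per_side = field_deg * 60 / resolution_arcmin  # 24 000 'pixels' per side
print(pixels_per_side ** 2 / 1e6)                     # 576.0 megapixels
```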

Now, what about low light performance? How do cameras fare in comparison to human eyes? The dynamic range (DR, the span from the maximum to the minimum light intensity, white to black) of the human eye is vastly superior to camera sensors. Based on the measurements conducted by DxO Labs, the dynamic range of my cameras ranges from 11 to 14 stops, i.e. exposure values (EV), at camera base ISO (the lowest possible). An increase of one stop means doubling the amount of light, i.e. one EV, and vice versa. The logic here is that the wider the range, the more detail is retained in the whites and the blacks. If the range is low, the whites appear white and the blacks black regardless of whether they are actually some shade of gray. Your camera can only perform so well and the measured results decline when the sensor sensitivity is bumped up. For example, my K-5 drops to about 11 EVs from its base 14 EVs when going from ISO100 to ISO1600, a four stop bump in sensor sensitivity. Unlike the human eye, a camera sensor does not need to be bumped up in sensitivity to capture a scene in low light, unless you want to stop motion, that is. This means that it is possible to retain the baseline DR in photography. The human eye is similar to a camera sensor, yet again clearly different. The human eye adapts to the situation at all times, so, for example, unlike a camera lens aperture that remains constant unless changed, the pupil opening changes accordingly, but not at will. The human eye doesn’t measure light the way cameras do either. It rather brackets it. This means that when it is said that the human eye has a DR of up to a whopping 30 EVs (or at least 20+ EVs), it is only when stargazing in the dark, if we follow Roger Clark on this. Now that’s the extreme end of it, benefiting from those light sensitive rods, the opposite of bright sunlight during the day time, which makes use of the cones. That’s not the range in use at all times. Your eyes adjust accordingly. For example, when I play outdoor hockey in the winter time and get out of the building after putting the skates on, the sunshine reflected from the snow and ice is blinding at first, my eyes weren’t ready for that, but it goes away once I realize that I have to deal with it. The actualized DR is thus lower, close to that of a camera sensor it seems. In a way the human visual complex cheats, working constantly, changing settings accordingly and parsing the world together seamlessly as you go about your life. In other words, that range fluctuates. You aren’t stuck with a certain DR, unless you are fixed on to something (on purpose). What you can do with cameras is to fix things and adjust only the shutter speed between each shot, then fuse the shots into one in order to gain a high(er) dynamic range (HDR). It is a form of stacking, using a stack of images and blending them together to achieve a desired outcome. With HDR what one is after is maxing the DR, the overall resolution, and/or a certain visual appeal. It simply gives you more to work with. This can be done on a computer in post-processing with suitable software. Some cameras can do it for you, but then you have less control over the outcome.
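Since stops and EVs keep coming up from here on, here’s a tiny Python sketch of the arithmetic: a stop is just a base-2 logarithm of a ratio, whether that ratio is a sensor’s contrast range or an ISO bump. The example values are the ones mentioned above:

```python
import math

def stops(ratio):
    """Dynamic range in stops (EV) for a ratio of the brightest to the darkest usable intensity."""
    return math.log2(ratio)

def ratio(stops_ev):
    """The contrast ratio corresponding to a given number of stops."""
    return 2 ** stops_ev

print(ratio(14))          # 16384 -> a 14-stop sensor spans roughly a 16 000:1 intensity range
print(ratio(11))          # 2048  -> an 11-stop range is roughly 2000:1
print(stops(1600 / 100))  # 4.0   -> going from ISO100 to ISO1600 is a four stop bump
```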

Is DR or HDR relevant in photography? Yes and no. Starting from no, to my knowledge LCD computer screens can usually handle a DR of up to 10 EVs, give or take, assuming the conditions are ideal, which tends not to be the case. The technology is getting better and improvements are being made to address this, so throwing money at it helps, but only to some extent at the moment. You also have to take into consideration that while your screen might have a good DR, that’s not exactly the case with every screen out there. One also has to take into consideration that most images viewed on screens are JPEGs, which limit the DR to about 11 EVs. Now that’s not bad, not bad at all, and your screen might be able to manage that. The issue here, the no part, is that currently your camera outperforms the screens and the commonly used image format. Both can be fixed by upgrades in technology, but as I pointed out, that involves having to pay for it. The yes part is that the cameras, while outperforming the screens, give you more to work with. A RAW file allows more room in post-processing than a JPEG does, even if you end up converting from RAW to JPEG in the end. Conversely, starting from a JPEG, something may already have been lost, so you cannot use that lost data to fine tune the image, to max it out. HDR imaging allows you to surpass these limitations further. Capturing a single scene multiple times with different exposures gives you even more to work with. What can be learned from this is that one should be aware of these things and detect the possible bottlenecks.

What is relevant for my purposes is comparing human vision and camera output at night time in surroundings that contain artificial lighting, such as cities. The human eye won’t reach that peak of its DR unless it gets to adapt to the lack of light, so it’s somewhat obvious that the DR comes down from that peak figure when it must handle light in the dark. The problem for cameras, as good as they are, generally speaking matching the human eye in terms of DR, is that they have to make compromises. The human visual complex makes do with all kinds of situations. Most importantly, it doesn’t need to produce images. If high overall quality is what we are looking for, then the camera lens aperture must be stopped down, meaning that it won’t get as much light as it could, resulting in slower shutter speeds, which then necessitates a tripod to get a good exposure. The other alternative is to bump up the sensor sensitivity, but that results in noise and loss of DR. Noise isn’t as big a problem as one might think, but it’s still hardly ideal. You end up going for the stationary long exposure if you want to max the quality. The problem with long exposure is coping with the DR. The human eyes can see the differences in dark areas quite well, even in the presence of light sources. I’d say there are limits to that and at least I struggle to retain the detail in the dark areas when the outdoor lighting floods the scene. How to explain it … you are temporarily partially blinded by the light to an extent that everything in the dark near it is obscured, often to the extent that you miss all the detail. Nevertheless, I still reckon that the human eyes manage it quite well, better than cameras by default. Not staring at bright lights and/or changing the vantage point helps substantially to begin with. The problem for cameras is tied to the long exposure. In order to get the detail in the dark end of the DR, long exposures end up blowing the highlights. In simple terms that means that bright sources of light, such as street lamps, end up overexposed. The alternative is a shorter exposure, so the highlights look fine, but then everything else the human eye could manage ends up too dark. HDR can solve that to a great extent if DR is the issue. However, for research purposes blowing the highlights is not an issue unless it ends up exaggerating what is depicted. To be more specific, it’s hardly an issue if some street lamps end up blown all the way to pure white. The light sources themselves are not what’s important. What they make possible to see and pay attention to is. Now, if only it was that simple, to say to hell with the highlights!
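To make the bracketing concrete, here’s a minimal sketch of how an HDR series for a night scene could be planned: keep the aperture and ISO fixed and vary only the shutter speed in whole stops around a base exposure. The base exposure and the bracket width are made-up example values, not a recommendation:

```python
base_shutter_s = 2.0            # assumed base exposure in seconds for the example scene
offsets_ev = [-4, -2, 0, 2, 4]  # five frames, two stops apart

for ev in offsets_ev:
    shutter_s = base_shutter_s * 2 ** ev  # one stop = doubling or halving the exposure time
    print(f"{ev:+d} EV -> {shutter_s:g} s")
# -4 EV -> 0.125 s ... +4 EV -> 32 s: the short frames keep the street lamps from blowing out,
# the long frames dig out the detail in the dark areas, and the frames are blended afterwards.
```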

If you’ve ever paid attention to bright light sources in the dark and squinted your eyes, you’ll have noticed that the light bursts into the shape of a star. This should happen anyway, but squinting your eyes will get you there faster. Why does this happen? The reason is not entirely clear, but if we are to trust science, then it is plausible that it has to do with suture lines (the junction lines of lens fibres), as explained by Rafael Navarro (10) in a review article titled ‘The Optical Design of the Human Eye: a Critical Review’:

“Suture lines of the lens are transparent structures formed by the union of lens fibres, which grow from the periphery to the centre of the lens.”

He (11) clarifies how they work:

“[S]utures are transparent objects which could affect light propagation; i.e. they diffract the incident wavefront.”

It should be noted that Navarro (10) does not go as far as to claim that this causes the star pattern, but rather that:

“[Sutures] could explain the well-known phenomenon of star images by diffraction at the suture lines.”

He (11) is making the case, but acknowledges that there are “too many unknowns” when it comes to crystalline lenses, such as the ones in human eyes, so instead he (11) is content to point out that:

“Therefore, diffraction at the suture lines of the lens might cause star images.”

What I take from this is that it is likely the cause, but there might be other factors that at least contribute to the phenomenon. Anyway, one way or another, this is something that does happen, even if the exact details as to why are a bit murky. Interestingly, the same phenomenon, or at least what we think is the same phenomenon, occurs with cameras. The camera aperture mechanism mimics the iris, using a diaphragm constructed of a number of blades placed in relation to one another so that moving them simulates the opening and closing of the iris. The blades can be straight or rounded, which affects the rounding of the simulated iris to a certain extent. Rudolf Kingslake (61-62) explains this phenomenon as it occurs with a non-circular iris in ‘Optics in Photography’:

“Light waves passing through this opening will be slightly spread out by diffraction in a direction perpendicular to the edge of each diaphragm blade.”

This results in what Kingslake (61-62) calls diffraction spikes, projecting from light sources seen against dark backgrounds. He (62) states that there are two ways to counter this phenomenon:

“[W]ork with the iris wide open or … use an iris with so many blades that its closed-down shape is virtually a circle.”

If the intention is to maximize the depth of field, like it typically is in landscape photography, then the first remedy is out of the question. To the second one then. In my own collection of lenses, my 15mm wide angle lens has seven straight (unrounded) aperture blades, and it is known for the resulting spikes, often referred to as starbursts. An updated version of the lens is largely the same, but among the changes made to it, the blades are rounded, which results in less spiking. However, while this can be seen as an advantage, many actually like the spiking and find it a highly desirable effect. My GR has a fixed 18.3mm lens with nine aperture blades. In my experience it’s less prone to spiking, but it still happens. The same applies to my 70-200mm lens with its nine rounded aperture blades, albeit I haven’t really used it in circumstances where spiking would occur much. Having more than nine aperture blades, rounded or not, is quite rare, so I wouldn’t expect the second remedy to work out great either.
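As a side note, there’s a commonly cited rule of thumb for how many spikes to expect: with an odd number of blades the spikes from opposite edges don’t line up, so you get twice as many spikes as blades, whereas an even count gives one spike per blade. A trivial sketch, assuming that rule holds:

```python
def spike_count(blades: int) -> int:
    """Rule of thumb: odd blade counts produce twice as many spikes as blades,
    even counts produce one spike per blade (opposite edges overlap)."""
    return blades * 2 if blades % 2 else blades

for blades in (7, 8, 9):
    print(blades, "blades ->", spike_count(blades), "spikes")
# 7 blades -> 14 spikes (the 15mm), 8 -> 8, 9 -> 18 spikes (the GR and the 70-200mm)
```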

You might be wondering what the fuss is all about here. Well, considering that in this case it is desirable to maximize the depth of field, as well as corner to corner resolution, one is in desperate need of light, especially when there is an overall lack of ambient light. Fixing the camera on a tripod will remedy that issue, but the needed long exposures will result in spiking. What matters is that whatever needs to be examined in the scene should be legible. In some cases the spiking won’t be an issue, but with long exposures, it for sure can be, resulting in gigantic starbursts.

One workaround for spiking is stacking. Using a larger aperture will prevent the spiking from happening, but the depth of field will end up on the shallow side, meaning that the whole scene, from close up to where you want it, won’t be sharp. That’s exactly what we don’t want! One can overcome the depth of field limitation by taking multiple photos (with the same settings, but changing the focus) and then blending the stack of images, sort of like with HDR. If corner to corner image sharpness is desired, this might not be optimal though. Some lenses perform better at the more open apertures than others, so you need to know your gear as well. Typically the more expensive gear yields better results at extreme settings. It’s not a guarantee that expensive glass will perform great wide open, but you have only yourself to blame if you expect great results from cheap glass. Some fare better, some worse. Knowing your gear helps a ton. So, throwing money at the issue is no guarantee, but it tends to help. The differences tend to be small, but often enough to warrant the high prices. Great glass just has that … that something you just can’t quite put your finger on in order to say what it is, but it’s there alright.
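For what it’s worth, here’s a rough sketch of how such a focus stack could be planned at a wider aperture: step the focus distance so that the depth of field of each frame overlaps the next. It reuses the thin-lens formulas from earlier, the 0.03 mm circle of confusion is again my assumption, and the 50mm f/4 settings are just an example, not a recommendation:

```python
import math

def dof_limits_m(f_mm, n, subject_m, coc_mm=0.03):
    """Near and far limits of acceptable sharpness (in meters) when focused at subject_m."""
    s = subject_m * 1000.0
    h = f_mm ** 2 / (n * coc_mm) + f_mm  # hyperfocal distance in mm
    near = s * (h - f_mm) / (h + s - 2 * f_mm)
    far = math.inf if s >= h else s * (h - f_mm) / (h - s)
    return near / 1000.0, far / 1000.0

def focus_stack(f_mm, n, start_m, coc_mm=0.03):
    """Yield focus distances from start_m outwards until one frame reaches infinity."""
    focus = start_m
    while True:
        _, far = dof_limits_m(f_mm, n, focus, coc_mm)
        yield focus
        if math.isinf(far):
            break
        focus = far  # focus the next frame where this one stops being sharp

print([round(d, 2) for d in focus_stack(50, 4, 1.5)])
# At 50mm f/4, covering 1.5 m to infinity takes a stack of over a dozen frames,
# which the stacking software then blends into one image that is sharp throughout.
```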

Now, I’ve covered quite a bit here and I think it’s time to wrap things up, at least for now. Morris doesn’t go into these details in her article, at all, but she (318) does acknowledge that darkness poses major issues for landscape research. No light, no vision, no photography. That’s the baseline here. The issue is less pertinent for linguistic landscape research as it tends to focus on highly populated areas, unlike the Isle of Skye examined by Morris. Nevertheless, it’s still a major issue. That’s why I spent hours in the night, trying out things, things that I sort of understood on paper, but just couldn’t wrap my head around without having a go myself. That’s why I wrote all this. One night, standing in an empty parking lot next to a car dealership, testing different settings on a stationary camera, minding my own business, a police van stopped by me. I noticed the police and turned towards them. The officer in the driver’s seat rolled down the window and asked something along the lines of: “And who might you be?” Instead of saying that I’m a photographer, I pointed towards a front lit sign and stated that I was having trouble capturing the vibrancy of the colors on the sign (which I did not cover here, but maybe one day…), getting it to match how I see it rather than only sort of, while retaining the details in the darker areas of the photo. The officer was silent for a moment, then replied something like: “Oh, a photographer. Right.” The window rolled up and they left the scene. Is this relevant to nocturnal landscape research, all the stuff that went into this? Well, at least there was that moment, if nothing else came out of this.

This essay may seem unnecessarily technical, but unless you are an experienced photographer, you are probably … out of luck trying to figure out how to do linguistic landscape research in the dark. Anyway, I think I still owe the reader a clear cut answer to the most important question for nocturnal landscape research. So, can it be done? Yes, for sure. It’s far from easy though. After all, it’s a walk in the dark, not a walk in the park. If you want to capture scenes, then, well, it can be tricky. It’s not impossible, but you may have to do all kinds of unconventional things, far from simply pressing a button. Ideally you’d have a budget high enough to warrant expensive cameras and lenses, screens, computers and software. They do not overcome all limitations, no light is still no light, but they do help quite a bit. I chose not to discuss non-consumer grade cameras, that is, digital medium and large format cameras, in much detail because, well, even on the cheap side the Pentax 645Z, a medium format camera body, costs about 8000-9000€ (depending on taxes etc.). A wide angle lens for it costs a bit under 2000€, give or take. So, that’d be 10 to 11K in total. Somehow I just can’t see a university, a foundation or whatever entity looking at a project funding proposal being like, yeah, sure, that sounds reasonable, and blowing 10K on this. Another alternative is to figure out what it is in the nocturnal landscapes that one wants to focus on, say all the blatantly obvious lit signage that stands out in high contrast from the rest of the landscape. It won’t be an all inclusive endeavor, but unless one can come up with reasonably priced and convenient options, I don’t see why one would try to accomplish something as … draining as what I’ve tried to explain here. I’m not an expert in videography, but considering it requires a constant flow of imaging, namely at shutter speeds at least as fast as the frame rate (24 to 60 frames per second), I don’t see that as a viable option either. Having to bump up the sensitivity will affect the quality negatively. I might be wrong and throwing a ton of money at the issue might just do it, but as I pointed out, I just don’t see that happening. Unless you are made of money, buying expensive gear only makes sense if it makes you money and, well, this kind of academic research hardly does. Lowering quality standards is also possible, but come on, what kind of empiricist would do that? They say that you either do something properly or not at all. I realize I may have forgotten something or just haven’t thought of it yet, but for now, good night, may it be dark!

References

  • Clark, R. N. (n.d.). Notes on the Resolution and Other Details of the Human Eye (http://www.clarkvision.com/articles/human-eye/).
  • Curcio, C. A., K. R. Sloan, R. E. Kalina, and A. E. Hendrickson (1990). Human Photoreceptor Topography. The Journal of Comparative Neurology, 292 (4), 497–523.
  • Kingslake, R. (1992). Optics in Photography. Bellingham, WA: SPIE Optical Engineering Press.
  • Macfarlane, R. ([2007] 2008). The Wild Places. New York, NY: Penguin.
  • Macfarlane, R. ([2007] 2011). Noctambulism. In T. L. Fleischner (Ed.), The Way of Natural History (pp. 29–41). San Antonio, TX: Trinity University Press.
  • Morris, N. J. (2011). Night walking: darkness and sensory perception in a night-time landscape installation. cultural geographies, 18 (3), 315–342.
  • Navarro, R. (2009). The Optical Design of the Human Eye: a Critical Review. Journal of Optometry, 2 (1), 3–18.