AALAW background

Algorithmic agency is coming. Snippets of code and formulas, taking in informational inputs and producing outputs, increasingly control our lives, from Facebook feeds to interactions with authorities and safety in the kinetic world. Two issues are fundamental in these developments: scale and quality. Even if superhuman artificial intelligence (Bostrom 2014) were never to emerge, algorithms and machine learning will move many decisions away from human actors to new, opaque logic formations and inaccessible processes, possibly beyond the effective control of any single human or human collective (Hildebrandt 2016; O’Neil 2017; Pasquale 2015). Social and economic structures, work and leisure, value chains and industries (Brynjolfsson and McAfee 2014; Cheney-Lippold 2017; Ford 2015; Schwab 2016), even our future as a species (Kurzweil 2005) will hang in the balance as the new non-human actors we call algorithmic agencies appear.

Law and regulation will not be exempt from this process. Algorithms are poised to cut deep into their fabric. Explorations of our non-human legal futures, however, are still in a nascent stage (Hildebrandt 2011). Even if early research identified some of the problems already during the 1980s and 1990s (Cole 1990; Gemignani 1984; Solum 1992; Wein 1992), systematic accounts of what algorithmic agency will entail in legal settings have only emerged during the last few years. The existing research falls roughly into four categories. First, plans for and actual deployments of algorithmic and AI technologies have triggered very practical discussions on what legislative changes, if any, are needed to accommodate these new things. These discussions have, in particular, centred on problems of bias, inequality and power in data-driven algorithmic decision-making systems (Citron and Pasquale 2014; O’Neil 2017; Pasquale 2015), and on liability for autonomous vehicles and vessels (Boeglin 2015; Duffy and Hopkins 2013; Gurney 2016; Hooydonk 2014). Second, the use of robots in armed conflict has sparked a literature on the transformations of the ethics of killing, human rights and the laws of war (Chamayou and Lloyd 2015; Singer 2010). Third, a budding literature addresses the conceptual repercussions of algorithmic agency, for example in relation to legal personhood (Chopra and White 2011; Hubbard 2010; Solum 1992) and privacy rights (Calo 2010; Gutwirth and Hildebrandt 2010). Fourth, these studies seem to be the forebears of a more general jurisprudential engagement with imagining a new legal future where non-human agents exist in parallel to humans (Balkin 2015; Brownsword 2015; Calo 2015; Hildebrandt 2015, 2016; Wallach 2011).

The AALAW research horizon builds on this hypothesis of profound discontinuity. Algorithms will introduce non-human agency onto the regulatory playing field. As legal imaginations are inherently humanist offshoots of Cartesian thought, the traditional regulatory modalities (Friedman 2016; McAdams 2015; Schauer 2015; Shapiro 2011) seem bound to lose their grip. On one hand, we will need novel responses, adaptations and legal modalities. On the other hand, the ensuing reconceptualizations are likely to contribute to reimaginings within the existing anthropocentric conceptual structures.

Scientific objectives

The project consists of four work packages. In WP 1 and WP 2, the project will explore what legal and regulatory transformations the proliferation of non-human algorithmic agencies will trigger in two thematic fields: criminal law (WP 1) and tort law (WP 2). In WP 3, the project will engage with the theme on a theoretical, jurisprudential level, using new materialist and traditional analytical philosophy as intellectual resources. WP 4 is dedicated to ensuring that the project stays tied to real-world understandings of algorithms, a key challenge for the project.

WP 1: Criminal law and non-human agents?

WP 1 focuses on what implications the increasing presence of algorithmic agents has for criminal law.

Our hypothesis is that algorithmic agency will pose a serious challenge to the traditional structures of criminal law. Traditional accounts build on a narrative where criminal law uses multiple pathways to evoke behavioural changes in humans (Duff 2009; Tadros 2005; Vincent 2010). Intervention triggers have anthropomorphic frames such as volitional acts, mens rea, intent, and negligence (Ashworth and Horder 2013; Duff 2009; Moore 2012). Law also punishes, gives just deserts, incapacitates, rehabilitates and deters individuals (Duff 2001; Moore 2012). In short, criminal law makes sense when deployed to govern free and rational human cognitions, but will, presumably, struggle with algorithms.

WP 1 consists of three tasks discussing criminal law in algorithmic contexts.

Task 1: Individual criminal responsibility for algorithms

Criminal law serves a crucial, yet contested (Robinson and Darley 2004), behavioural control function in governing societies. Consequently, the key pragmatic question is whether and how we can affect the decisions algorithms make (Hallevy 2013).

Algorithmic agents will put a strain on the anthropomorphic core doctrines of mens rea, intent, and negligence (Kroll et al. 2017; Schuppli 2014) as humans play an ever smaller role as causative agents. The key question is whether we can sensibly deploy the traditional doctrines in algorithmic contexts and what dysfunctions, if any, are likely to emerge. Specific problems include, for example, ascription (who or what stands for algorithms?) and the role of knowledge or control (what happens if no human has exact knowledge of or control over an algorithm?).
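
A minimal, hypothetical sketch (in Python, with invented data and a simple perceptron chosen purely for illustration) may make the knowledge and control problem concrete: the programmer authors only a learning procedure, while the operative decision rule emerges as numeric weights that no human ever wrote down.

```python
import random

# Hypothetical, randomly generated training data; in a real system
# these records would come from operational logs, not a known rule.
random.seed(0)
points = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
data = [(x, 1 if x[0] + 0.3 * x[1] > 0.1 else 0) for x in points]

# A perceptron: the human author writes only the *learning rule*;
# the decision rule itself is induced from the data.
weights, bias = [0.0, 0.0], 0.0
for _ in range(50):
    for x, y in data:
        pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
        err = y - pred
        weights = [w + 0.1 * err * xi for w, xi in zip(weights, x)]
        bias += 0.1 * err

# The operative "norm" the system applies is just these numbers.
# No human wrote them, and in larger models no human can read them.
print(weights, bias)
```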

Much of the work under the task takes the form of doctrinal speculation. Researchers will chart the boundaries of existing legal rules and speculate on their usefulness, possible deployments, and probable directions of development in algorithmic contexts.

Task 2: Collective criminal responsibility for algorithms

Redefining criminal law agency conceptions seems a promising avenue for algorithmic governance (Hallevy 2015). Instead of focusing on individuals, criminal law could target the collectives that create, shape, and possibly control the algorithms.

In Task 2, the explorations of this theme will find their starting point in recent studies on corporate criminal liability (Bittle 2012; Pieth and Ivory 2011; Slapper and Tombs 1999). The objective of the task is to chart whether the current European doctrines of collective criminal responsibility (Gobert and Pascal 2014) can provide effective tools for sustaining criminal law interventions without suffocating innovation and development. As failure seems likely, Task 2 also engages with the theoretical discussion of collective criminal responsibility to contribute to understanding its limits, pitfalls, and potential in algorithmic contexts. The development is likely to trigger fundamental questions about the distinct characteristics and utility of criminal law as law moves away from imposing responsibility on moral agents for wrongful actions.

Task 3: Conceptual transformations

Work in Tasks 1 and 2 will feed into Task 3, which will integrate with the work conducted under WP 3. All evidence suggests that multiple theoretical problems will emerge. As the distance between humans and decisions grows, the entire criminal law paradigm will likely become strained, forcing the reconsideration of its relevance, functions, modalities, and justifications. Criminal responsibility without at least a semblance of effective control by a reflexive moral agent (Duff and Green 2011, chaps. 7–10, 22) is an oxymoron.

Should criminal law’s relevance and power be resuscitated, the traditional cognitive frames for criminal responsibility must first be reimagined. The project will attempt this by introducing the tools developed in WP 3, primarily by deploying post-humanist theory futures in criminal contexts. Where this process will lead remains uncertain. Criminal law for algorithms may be a dead end, forcing us to abandon the traditional categories and design new legal modalities (Brownsword 2015; Hildebrandt 2011, 2016) or to reconsider our framing of algorithms and grant them the status of legal subjects (Hildebrandt 2011; Teubner 2006).

Second, the reimaginings triggered by algorithmic agents are poised to re-enter the remaining human contexts as destabilizing factors. Should we start, for example, to view humans increasingly as cybernetic or algorithmic selves (Pardo and Patterson 2015), introduce novel strict responsibility regimes, or attenuate the boundaries between criminal law and other regulatory modalities, the traditional anthropomorphic patterns and criminal law truisms will need to be reconsidered.

WP 2: Liability for non-human agents?

WP 2 deals with the tort liability and insurance implications of algorithmic agents.

The basic premise of WP 2 is reminiscent of WP 1: algorithms are bound to destabilize established tort law and insurance structures. The field, however, seems more robust, as tort law and insurance already recognize non-human actors as capable of acting and bearing legal responsibility through doctrines such as respondeat superior and vicarious, strict, and product liability. Further, and in stark contrast to criminal law, corporate entities are treated as bearers of civil rights and duties without qualms. Nevertheless, significant challenges remain.

Task 1: Options for algorithmic tort liability

A number of avenues exist for using tort law to govern algorithmic contexts. Negligence-based attribution strategies target owners, users, and other controlling parties, but may result in a narrow scope of liability or in risible intellectual contortions (Chopra and White 2011, 124–127). A strict liability approach, in turn, focuses on allocating externalities to the party in control, who may be hard to identify in algorithmic contexts (Duffy and Hopkins 2013). In a product liability frame (Garza 2012; Wu 2011; Lohmann 2016), producers typically have more control over algorithms than, for example, owners, users or operators, making the frame attractive. Insufficient safety as a ground of liability similarly seems a good fit for algorithmic contexts, as it sidesteps objectionable human choices as its nexus. Doctrinal answers to these questions are, however, far from settled, both domestically and internationally. WP 2 Task 1 will explore these options to identify the benefits, drawbacks, and limits of each in a Nordic frame.

Task 2: Insurance and production in algorithmic contexts

Insurance seems a natural way out of the liability quagmire (Hubbard 2014, 1866–1868; Calo 2011, 609–611; Schroll 2015), but it is no panacea. Many problems are likely to emerge. First, even if, for example, the Nordic approach to traffic insurance already seems algorithm-proof, the appearance of non-human agents is likely to undermine existing insurance categories, triggering changes in both insured interests and policyholders. Second, algorithmic agency will destabilize the concept of insurable risk. Insurance traditionally builds on natural and human processes being actuarially governable en masse (Ewald 1993), while in algorithmic contexts technological uncertainty becomes a key component of the risk. Third, the liability and insurance transformations are crucial to the future outlook of production arrangements and contracting patterns. This may create further pressure to develop legal tools for governing global value chains (The IGLP Law and Global Production Working Group 2016).
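
To make the actuarial point concrete, the following deliberately simplified sketch (in Python, with all figures invented for illustration) contrasts classical pooled pricing, where the premium roughly equals expected claim frequency times severity plus a loading, with a correlated algorithmic failure mode that pooling cannot tame.

```python
# Hypothetical figures for a classical, independent-risk portfolio.
claim_frequency = 0.02          # 2% of policyholders claim per year
average_severity = 5_000.0      # average cost per claim
loading = 1.25                  # safety and expense loading

pure_premium = claim_frequency * average_severity   # 100.0 per policy
gross_premium = pure_premium * loading              # 125.0 per policy

# Algorithmic twist: a single shared software defect can hit every
# policy at once, so losses arrive correlated rather than independent.
defect_probability = 0.001      # chance the shared algorithm fails
defect_loss_per_policy = 50_000.0

expected_defect_loss = defect_probability * defect_loss_per_policy  # 50.0
print(f"classical gross premium:           {gross_premium:.2f}")
print(f"add-on for correlated defect risk: {expected_defect_loss:.2f}")

# The expected value of the correlated loss is easy to price; its
# variance is not tamed by pooling, which is where insurability erodes.
```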

Task 2 is two-pronged. First, the work focuses on what implications algorithmic agents might have for the insurantial imaginaries (Baker and Simon 2002) underlying current insurance schemes. Second, the team will tackle the contract law issues the transformations will raise.

Task 3: Interferences

Taken together, the changes in liability and insurance patterns pose a fundamental challenge to tort law. This challenge resembles, to a large degree, that in WP 1 Task 3. The transformations, first, seem likely to trigger a reconsideration of the basic structures and functions of tort law, much as in criminal law. In algorithmic contexts, the behavioural control aspects of tort law will likely become increasingly attenuated, as tort law strategies are likely to be inefficient in guiding algorithm development. Simultaneously, the allocation of externalities will command more weight. This seems conducive to reactivating long-forgotten disputes on the goals, functions, and utility of tort law strategies, particularly in the Nordic countries (Hellner 1972; Strahl 1959).

The team will, first, explore the theoretical transformations that changes in liability and insurance structures will trigger as they facilitate the migration to algorithmic systems. Akin to WP 1 Task 3, the findings may well be negative, signifying the impotence of tort law and the need for a radical reconfiguration of insurance.

WP 3: Theoretical non-human agencies?

Ultimately, the thematic explorations in WP1 and WP2 will allow the project to move to the metatheory level and make sense of the chain of affectivities between existing law, algorithms and the future. The key move is to deploy two divergent sets of philosophical resources: new materialism (Barad 2007; Coole and Frost 2010; Dolphijn and Tuin 2012) and post-humanism (Braidotti 2013), on one hand, and analytical philosophy (Glock 2009), on the other.

WP 3 will consist of three tasks.

Task 1: Tracing and theorizing emerging legal agencies

Algorithmic agencies seem too un-human to be governed like humans. Thus, conceptualizing algorithms as mere extensions of human agency or as machinic humans will likely fail. Both approaches perpetuate a humanist, Cartesian ontological fallacy, missing the polyvalence and multiplicity of the agencies we will face and have to tackle. This trend is amplified by the emergence of other monstrous (post)human agencies: technologically enhanced cyborg-humans (Haraway 1991), neurobiological selves (Morse 2008; Pardo and Patterson 2015; Vincent 2013), virtual but embodied mindbodies (Hayles 2002) and human-animal chimaeras. Human imaginaries will not be enough. Something else, a reimagined posthumanist theory of law’s subjects, is urgently needed.

To facilitate the reimaginings, the project will, first, continue engaging with the emerging legal and regulatory subjectivities that are currently on the verge of disrupting traditional conceptions of legal personhood (Selkälä and Rajavuori 2017; Viljanen 2017). Here, the emphasis will be on considering the new subjectivities as precursors and intellectual resources for imagining frameworks for legal algorithmic agencies, with new materialism as the methodological frame.

Task 2: Tracing emerging laws

The second theoretical objective is to imagine legal interventions capable of affecting the new agencies. To do that, the project will seek to map the plural impact mechanisms law utilizes to work on its subjects, whatever they may be. The hypothesis is that law works through multiple subjects, levers and mechanisms of affectivity. Law typically performs and enacts interventions as reasons in mental operations, incentives in utility calculi, and moral decisions (Friedman 2016; McAdams 2015; Schauer 2015), but sometimes also as a blueprint of cyborg cognitions (Viljanen 2017), as physical and virtual architectures that shape behavioural responses (Balkin 2014), or as fleeting cognitive reorderings (Thaler and Sunstein 2008). Ultimately, these explorations in non-standard regulatory imaginations might inform the dreaming up of new laws for the disruptive algorithmic contexts from which humans have disappeared, allowing us to devise attractive, proactive rather than reactive strategies for governing these things (for existing explorations, see e.g. Brownsword 2008, 2015; Hildebrandt 2016; Yeung 2008, 2017).

Task 3: Pitting (legal) philosophies against each other

In WP 3, the project will deploy new materialist (Barad 2007; Coole and Frost 2010; Dolphijn and Tuin 2012) and post-humanist (Braidotti 2013) approaches, on one hand, and analytical philosophy (Glock 2009), on the other, to make sense of algorithmic contexts. One objective of the project is to trace the differences in the approaches’ potential to provide foundations for an algorithmic legal theory. The project will, for example, explore the differences between analytical accounts of group agency (List and Pettit 2011) and post-humanist accounts of object-actors and their affectivity.

WP 4: Staying up-to-date on algorithm development

The primary objectives and methodological focus of the project are internal to legal studies. The future- and technology-oriented theme, however, requires strong interdisciplinary cooperation to ensure that the legal researchers understand the novel technologies that provide the context for the project. The learning curves will be steep, but understanding what algorithms, in fact, are and what they do is a crucial success factor for the project. The gravity of the matter is reflected in its status as a separate work package.

Methodologies

The project concentrates on the legal study of algorithmic agencies.

The researchers will deploy two distinct methodological toolsets. The first, doctrinal and theoretical analysis of existing law, is standard fare in legal studies, but gains non-standard undertones as the project deploys it in a radically uncertain, prospective setting. As the subject matter of the investigations is only emerging, the law is not yet there. This means that the team will have to parse together the future law applicable to algorithmic agencies from weak signals, speculative accounts of the direction of conceptual structures, argumentative openings and silences, and materials emanating from other contexts. Such doctrinal speculation borders on a contradiction in terms and carries significant risks. The team, however, has strong expertise in both thematic fields under study and an established track record in conducting research on theoretical and conceptual structures. This experience will help the team in exploring the conceptual changes that may be triggered and in understanding the limits of the law’s immanent possibilities to facilitate these adaptive processes.

WP 3, in turn, requires a very different toolset. As the task is to reimagine legal subjects and legal ontologies, a break with standard legal approaches is inevitable. The project will use two divergent approaches. First, the project will explore how new materialist (Barad 2007; Dolphijn and Tuin 2012) and post-humanist philosophy (Braidotti 2013) could be deployed to make sense of algorithmic agencies in legal settings. These philosophical approaches seem uniquely capable of facilitating the radical reimagination of existing legal structures the project calls for, as they brazenly break free from the anthropocentric obsessions of most other philosophical approaches. Second, traditional analytical philosophy (Glock 2009) serves as a counterpoint to this avant-garde theory framework.

A key challenge for the project is to facilitate fruitful cooperation and interaction between the “black letter lawyers” and the more theoretically oriented team members. The project will implement a monthly workshop structure to ensure this. All deliverable drafts will be subjected to multiple revision rounds in intra-project workshops. The key objective of the work mode is to identify common themes and theoretical motifs that could feed into the project’s work.

The project will mainly rely on publicly available materials. Any materials that are collected will be handled in accordance with established best practices.

BIBLIOGRAPHY

Ashworth, Andrew, and Jeremy Horder. 2013. Principles of Criminal Law. Oxford: Oxford University Press.

Baker, Tom, and Jonathan Simon. 2002. Embracing Risk. The Changing Culture of Insurance and Responsibility. Chicago: University of Chicago Press.

Balkin, Jack M. 2014. “Old-School/New-School Speech Regulation.” Harvard Law Review 127(8): 2296–2342.

———. 2015. “The Path of Robotics Law.” California Law Review Circuit 6(June): 45–60.

Barad, Karen. 2007. Meeting the Universe Halfway. Durham: Duke University Press.

Bittle, Steven. 2012. Still Dying for a Living. Corporate Criminal Liability after the Westray Mine Disaster. Vancouver: UBC Press.

Boeglin, Jack. 2015. “The Cost of Self-Driving Cars: Reconciling Freedom and Privacy with Tort Liability in Autonomous Vehicle Regulation.” Yale Journal of Law & Technology 17: 171–203.

Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Braidotti, Rosi. 2013. The Posthuman. Cambridge: Polity.

Brownsword, Roger. 2008. “So What Does the World Need Now? Reflections on Regulating Technologies.” In Regulating Technologies. Legal Futures, Regulatory Frames and Technological Fixes, eds. Roger Brownsword and Karen Yeung. Oxford: Hart, 23–48.

———. 2015. “In the Year 2061: From Law to Technological Management.” Law, Innovation and Technology 7(1): 1–51.

Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age. Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W.W. Norton & Company.

Calo, Ryan. 2010. “Peeping HALs: Making Sense of Artificial Intelligence and Privacy.” European Journal of Legal Studies 2(3): 168–192.

———. 2015. “Robotics and the Lessons of Cyberlaw.” California Law Review 103(3): 513–63.

Chamayou, Grégoire, and Janet Lloyd. 2015. Drone Theory. London: Penguin Books.

Cheney-Lippold, John. 2017. We Are Data. Algorithms and the Making of Our Digital Selves. New York: New York University Press.

Chopra, Samir, and Laurence F. White. 2011. A Legal Theory for Autonomous Artificial Agents. Ann Arbor: University of Michigan Press.

Citron, Danielle Keats, and Frank Pasquale. 2014. “The Scored Society: Due Process for Automated Predictions.” Washington Law Review 89(1): 1–33.

Cole, George S. 1990. “Tort Liability for Artificial Intelligence and Expert Systems.” Computer/Law Journal 10: 127–232.

Coole, Diana, and Samantha Frost, eds. 2010. New Materialisms. Ontology, Agency, and Politics. Durham: Duke University Press.

Dolphijn, Rick, and Iris van der Tuin. 2012. New Materialism: Interviews & Cartographies. Ann Arbor: Open Humanities Press.

Duff, R. A. 2001. Punishment, Communication and Community. Oxford: Oxford University Press.

———. 2009. Answering for Crime. Responsibility and Liability in the Criminal Law. Oxford: Hart.

Duff, R. A., and Stuart Green, eds. 2011. Philosophical Foundations of Criminal Law. Oxford: Oxford University Press.

Duffy, Sophia H., and Jamie Patrick Hopkins. 2013. “Sit, Stay, Drive: The Future of Autonomous Car Liability.” SMU Science & Technology Law Review 16(Winter): 101–123.

Ford, Martin R. 2015. Rise of the Robots. Technology and the Threat of a Jobless Future. New York: Basic Books.

Friedman, Lawrence M. 2016. Impact. How Law Affects Behavior. Cambridge, MA: Harvard University Press.

Gemignani, Michael. 1984. “Laying Down the Law to Robots.” San Diego Law Review 21: 1045–60.

Glock, Hans-Johann. 2009. What Is Analytic Philosophy? Cambridge: Cambridge University Press.

Gobert, James J., and Ana-Maria Pascal. 2014. European Developments in Corporate Criminal Liability. London: Routledge.

Gurney, Jeffrey K. 2016. “Crashing into the Unknown: An Examination of Crash-Optimization Algorithms through the Two Lanes of Ethics and Law.” Albany Law Review 79(1): 183–267.

Gutwirth, Serge, and Mireille Hildebrandt. 2010. “Data Protection in a Profiled World.” In Data Protection in a Profiled World, eds. Serge Gutwirth, Yves Poullet, and Paul De Hert. Dordrecht: Springer, 31–41.

Hallevy, Gabriel. 2013. When Robots Kill. Artificial Intelligence under Criminal Law. Boston: Northeastern University Press.

———. 2015. Liability for Crimes Involving Artificial Intelligence Systems. Cham: Springer.

Haraway, Donna. 1991. “A Cyborg Manifesto: Science, Technology and Socialist-Feminism in the Late Twentieth Century.” In Simians, Cyborgs and Women: The Reinvention of Nature, 149–81.

Hayles, N. Katherine. 2002. “Flesh and Metal: Reconfiguring the Mindbody in Virtual Environments.” Configurations 10(2): 297–320.

Hellner, Jan. 1972. Skadeståndsrätt. Stockholm: Almqvist & Wiksell.

Hildebrandt, Mireille. 2011. “Criminal Liability and ‘Smart’ Environments.” In Philosophical Foundations of Criminal Law, eds. R A Duff and Stuart Green. Oxford: Oxford University Press, 507–532.

———. 2015. Smart Technologies and the End(s) of Law. Novel Entanglements of Law and Technology. Cheltenham; Northampton: Edward Elgar Publishing.

———. 2016. “Law as Information in the Era of Data‐Driven Agency.” The Modern Law Review 79(1): 1–30.

Hooydonk, Eric Van. 2014. “The Law of Unmanned Merchant Shipping – an Exploration.” The Journal of International Maritime Law 20: 403–23.

Hubbard, F. Patrick. 2010. “Do Androids Dream: Personhood and Intelligent Artifacts.” Temple Law Review 83: 405.

Kroll, Joshua A. et al. 2017. “Accountable Algorithms.” University of Pennsylvania Law Review 165: 633–706.

Kurzweil, Raymond. 2005. The Singularity Is Near: When Humans Transcend Biology. London: Duckworth Overlook.

List, Christian, and Philip Pettit. 2011. Group Agency. The Possibility, Design, and Status of Corporate Agents. Oxford: Oxford University Press.

McAdams, Richard H. 2015. The Expressive Powers of Law: Theories and Limits. Cambridge, MA: Harvard University Press.

Moore, Michael S. 2012. Placing Blame. A General Theory of the Criminal Law. Oxford; New York: Oxford University Press.

Morse, Stephen J. 2008. “Psychopathy and Criminal Responsibility.” Neuroethics 1(3): 205–12.

O’Neil, Cathy. 2017. Weapons of Math Destruction. New York: Crown.

Pardo, Michael S., and Dennis Michael Patterson. 2015. Minds, Brains, and Law. The Conceptual Foundations of Law and Neuroscience. New York: Oxford University Press.

Pasquale, Frank. 2015. The Black Box Society. The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.

Pieth, Mark, and Radha Ivory. 2011. Corporate Criminal Liability. Dordrecht: Springer.

Robinson, P. H., and John M. Darley. 2004. “Does Criminal Law Deter? A Behavioural Science Investigation.” Oxford Journal of Legal Studies 24(2): 173–205.

Schauer, Frederick. 2015. The Force of Law. Cambridge, MA: Harvard University Press.

Schuppli, Susan. 2014. “Deadly Algorithms: Can Legal Codes Hold Software Accountable for Code That Kills?” Radical Philosophy 187: 2–6.

Schwab, Klaus. 2016. The Fourth Industrial Revolution. Cologny: World Economic Forum.

Selkälä, Toni, and Mikko Rajavuori. 2017. “Traditions, Myths, and Utopias of Personhood: An Introduction.” German Law Journal 18(5): 1017–1068.

Shapiro, Scott J. 2011. Legality. Cambridge, MA: Belknap Press.

Singer, Peter Warren. 2010. Wired for War. The Robotics Revolution and Conflict in the Twenty-First Century. London: Penguin Books.

Slapper, Gary, and Steve Tombs. 1999. Corporate Crime. Harlow: Pearson Longman.

Solum, Lawrence B. 1992. “Legal Personhood for Artificial Intelligences.” North Carolina Law Review 70: 1231–88.

Strahl, Ivar. 1959. Tort Liability and Insurance. Stockholm: Almqvist & Wiksell.

Tadros, Victor. 2005. Criminal Responsibility. Oxford: Oxford University Press.

Teubner, Gunther. 2006. “Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law.” Journal of Law and Society 33(4): 497–521.

Thaler, Richard H, and Cass R. Sunstein. 2008. Nudge. Improving Decisions about Health, Wealth, and Happiness. New Haven: Yale University Press.

Viljanen, Mika. 2017. “A Cyborg Turn in Law?” German Law Journal 18(5): 1277–1308.

Vincent, Nicole A. 2010. “On the Relevance of Neuroscience to Criminal Responsibility.” Criminal Law and Philosophy 4(1): 77–98.

———. 2013. Neuroscience and Legal Responsibility. New York: Oxford University Press.

Wallach, Wendell. 2011. “From Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies.” Law, Innovation and Technology 3(2): 185–207.

Wein, Leon E. 1992. “The Responsibility of Intelligent Artifacts: Toward an Automation Jurisprudence.” Harvard Journal of Law & Technology 6(Fall): 103–154.

Yeung, Karen. 2008. “Towards an Understanding of Regulation by Design.” In Regulating Technologies. Legal Futures, Regulatory Frames and Technological Fixes, eds. Roger Brownsword and Karen Yeung. Oxford: Hart, 79–107.

———. 2017. “‘Hypernudge’: Big Data as a Mode of Regulation by Design.” Information, Communication & Society 20(1): 118–36.