Saturday, February 28, 2009

One of the dominant definitions of privacy—particularly in the policy world but by no means confined there—is that of control over personal information. Certainly it influences data protection law in Canada, which requires organizations to obtain the consent of individuals for the collection, use and disclosure of personal information. One of the great advantages of such a model is that it does not limit protection to a particular sub-class of personal information such as information that is sensitive and intimate—“personal information” is simply information about an identifiable individual. This makes such models potentially more responsive to information practices that rely less on intruding into a sensitive sphere and more upon compiling pieces of information that, on their own, are not sensitive and may even be “public.” However, the breadth of a control model is also its Achilles’ heel: to create a workable scheme one needs many exceptions, and without careful thought these may be clumsily introduced. Canada’s experience with these regimes bears this out, and I have documented these problems elsewhere.
For the purposes of this blog, I want to focus here on a particular strategy for limiting the breadth of a control-over-personal-information model of privacy that is popular in Canadian jurisprudence: the idea of a “biographical core.” Canadian Supreme Court constitutional privacy jurisprudence (arising out of the search and seizure context) has often endorsed ideas like control over personal information in relation to informational privacy. However, most of the real work is in fact being done by a much narrower idea. Informational privacy is said to protect one’s “biographical core of personal information,” which has been defined as including “information which tends to reveal intimate details of the lifestyle and personal choices of the individual.” (Plant) This narrowing of personal information to one’s biographical core is also present in data protection regimes, although less explicitly, because of the need to provide some personal information with stronger protection than other information (for example, this sometimes plays out in debates regarding the type of consent required or in how a balancing test is implemented).
I have pointed out this trend at a number of practice-oriented forums and usually get one of two responses. The first, from decision makers, is that of course they have to operate with some idea of a “biographical core” because some information is more sensitive than other information and this is the only way to properly engage in a privacy risk assessment. The second, from various privacy advocates, is shock and dismay that the privacy community is reverting to what looks like an idea of sensitive and intimate information that seems wholly unsuited to meet current privacy challenges.
I, however, think that privacy-as-protection-of-one’s-biographical-core has far more in common with privacy-as-control-over-personal-information than simply its pragmatic use to narrow an overly-broad definition. They both draw upon a similar idea of the self.
This becomes readily apparent if we consider the work of Alan Westin in his influential book, Privacy and Freedom. Westin is often cited for this classic privacy-as-control statement:
Privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others. (p. 7)
But Westin also goes on to write:
privacy is the voluntary and temporary withdrawal of a person from the general society through physical or psychological means, either in a state of solitude or small-group intimacy or, when among larger groups, in a condition of anonymity or reserve. … [E]ach individual is continually engaged in a personal adjustment process in which he balances the desire for privacy with the desire for disclosure and communication of himself to others (p. 7)
…
The most serious threat to the individual’s autonomy is the possibility that someone may penetrate the inner zone and learn his ultimate secrets, either by physical or psychological means. (p. 33)
From this we can see that Westin’s claims regarding control over information are in service of an idea of privacy as social withdrawal—an idea that lines up with more traditional privacy ideas such as the protection of secret, sensitive and intimate information. Moreover, this withdrawal is ultimately in service of the protection of an “inner zone” that parallels the Supreme Court of Canada’s biographical core. Social interaction is something that is balanced against this need for withdrawal, something that is in constant tension with it—which echoes the difficulty that many judges have in understanding why someone might have a privacy interest in information that has been voluntarily disclosed to others, or in regard to something that has in some context been made “public.”
There are other alternatives for thinking about the self and privacy. Suppose instead that we took up the challenge posed by some of the first generation philosophers of technology that we need to rethink the modern subject if we are to properly respond to the challenges of technology. Suppose, for example, that instead of the idea of an individual with an inner core transparent to itself upon solitary introspection, we posited a self that is in fact formed through social interaction. The point of privacy would not be to protect the conditions of social withdrawal in order to maintain the integrity of such a self—it would be to protect the conditions of social interaction in order to provide the basis for identity formation in the first place.
I am currently working on outlining an account of privacy such as this. Inspired explicitly by Goffman, but influenced by many others, I want to claim that privacy should be understood in terms of protecting our capacity for self-presentation. This “self” that is presented may or may not be different in relation to different “others,” may or may not be constituted through these relationships, and may or may not vary over time and across contexts in contradictory ways—in other words, it stays far away from positing anything like an “inner zone” or “biographical core.” What becomes important is not the protection of different layers of an already-constituted self but rather an individual’s ability to know the others to whom she presents herself—and even, in some cases, to be able to choose these others. For example, if I take a photo of you in a public place and publish it in a magazine I have dramatically changed the nature of the others to whom you were presenting yourself—the “audience” shifts from the other people sharing this public space to the other people reading the magazine. This shift, I want to argue, undermines one’s capacity for self-presentation and therefore raises at least a prima facie privacy claim—even though the photo was taken in “public” and even though it reveals nothing embarrassing or sensitive (I have written elsewhere about the Aubry case, which has these facts).
There is, of course, much more to say and this is what my current work is focusing on. My point in these two blog posts has been to try to show that the first generation of philosophers of technology raise an intriguing challenge to legal theorists regarding the need to examine the view of the self that we adopt in thinking about technological questions. I think that privacy law and theory would do well to rise to the challenge.
Thursday, February 26, 2009
Are We All Control Freaks Now?
Earlier this month, Facebook quietly changed its terms of service and waded into what I will call the “control wars” over personal information. Facebook’s changes would enhance its control over users’ posted information, including material that had been deleted. The response was swift and angry. A Facebook Group, “People Against the New Terms of Service,” attracted over 130,000 members to pressure Facebook to revert to its old terms of use. The Electronic Privacy Information Center (EPIC) threatened to file a complaint with the Federal Trade Commission. Facebook backed down.
This incident is interesting for many reasons. For one, it illustrates public anxieties regarding personal information. Tracked by public surveillance cameras, profiled by marketers, tagged by Facebook friends—increasingly we fear that information and communications technology has placed our personal information beyond our control. And, given that one of the most popular definitions of privacy is “control over personal information”, any loss of control is viewed as a problematic loss of privacy.
The Facebook incident also highlights the accepted solutions to this problem. The way to halt the rapid erosion of privacy, on this view, is to provide individuals with more control over their personal information. This has both a technological and a legal aspect. The technological aspect can be seen in the use of technology itself (a Facebook group) to mobilize individuals into an effective pressure group. The legal aspect can be seen through the threat of legal action. In fact, EPIC claims that this incident is evidence of the need for more comprehensive privacy laws in the United States. Canada has such legislation, including our federal Personal Information Protection and Electronic Documents Act (PIPEDA), which aims to provide individuals with greater control over the collection, use and disclosure of their personal information. Even before this recent controversy, the Canadian Internet Policy and Public Interest Clinic (CIPPIC) filed a complaint with the federal Privacy Commissioner alleging that Facebook was in violation of its obligations under PIPEDA.
I am a supporter of comprehensive privacy legislation and, as a Facebook user, am happy that Facebook reversed its decision. Nonetheless, I think we should be concerned about the prevalence of “control” as the paradigm for framing both the problem posed by information and communication technology and its solution.
What interests me here are the striking parallels between contemporary privacy angst and technological fears from an earlier era. Like the “information age,” the modern industrial age engendered dystopian visions of out-of-control technology, technology that did not simply herald a new age of freedom but rather brought with it new types of threats to human autonomy, health, communities and the environment. This spawned a great deal of academic commentary across many disciplines; I want to focus here specifically on the philosophy of technology and what it can both contribute to, and learn from, the control wars.
Hans Achterhuis usefully distinguishes first- and second-generation philosophers of technology. Perhaps the most influential philosopher of the first generation is Martin Heidegger. According to Heidegger, the instrumental conception of technology—that technology is simply a means that we create and use to further our chosen ends—blinds us to the true essence of technology. As he famously—and rather cryptically—argued in The Question Concerning Technology, “the essence of technology is by no means anything technological.” Instead, the essence of technology is more akin to what we might now call a cultural paradigm that conditions us to view the world as resources at our disposal. Moreover, for him the essence of technology is intrinsically tied to the project of modernity itself. In this way, his work fits within a general category of primarily European thinkers who made technology an explicit theme in their reflections and who argued—although each in quite different terms—that the significance of modern technology does not lie in specific features of its machinery but rather in a kind of rationality and cultural milieu intimately linked with the project of modernity and the Enlightenment values that animate it, yet one that simultaneously threatens to undermine human freedom. In addition to Heidegger, Jacques Ellul, Gabriel Marcel, and the Frankfurt School were all influential in this regard.
Second-generation philosophers of technology share a general rejection of instrumental definitions of technology but have largely tried to distance themselves from the strong dystopian flavour of these earlier, more radical critiques. According to these second-generation thinkers, these earlier critiques fail because they are essentialist in talking about “Technology” rather than “technologies,” and determinist in not seeing the myriad ways in which human contexts and values shape and constrain the uses of technology. In a world where modern technology is ubiquitous and most often welcomed, they argue, we need a more nuanced view of technology, one that has a place to laud the victories of technology and a program for technological design that enhances democratic and ethical values. Indeed, as Hans Achterhuis has argued, second-generation philosophers of technology have largely taken an “empirical turn.”
This second-generation empirical turn can enrich legal discussions of technology by opening legal discussion to the insights of theorists from a variety of disciplines who have indicated that technology is in fact not neutral, that it often embodies important social and political values and therefore can have unintended and undesirable effects beyond simply physical consequences. It can also point to the ways in which we have the resources to think about, and build, technologies in a number of different ways and give us a richer basis upon which to think about law’s role in this.
However, in distancing themselves from these earlier critiques, second-generation philosophers of technology have largely lost sight of the normative elements of those critiques. The danger is that in showing how technologies are shaped by a complex of social forces, as well as how they open up a plurality of options, these theories fall into a kind of descriptive obscurity. Indeed, Langdon Winner accuses some expressions of this “empirical turn” of ignoring—even disdaining—any normative inquiry into technology in favour of highlighting the interpretive flexibility of any particular technology. As Winner argues, the important question is not how technology is constructed but which norms we should invoke to evaluate technologies, their design, implementation, and effects.
This is where legal scholars need to intervene.
What some of the legal debates regarding technology highlight is that it is not clear that the traditional normative strategies we might employ to evaluate technologies are adequate. And many of these normative strategies center on a particular idea of the self. For example, in an earlier posting, Frank Pasquale indicated that the question of the acceptance of self-enhancing technologies is not being driven by the technology itself but rather by a conception of the self that should be questioned. Kieran Tranter wrote of the need for alternative stories of self-creation.
These observations—with which I agree—suggest that we should rethink the empirical turn. What the first generation of philosophers of technology understood was that at the root of their questioning of technology lay the need to question the modern self itself. At the end of the day, this was Heidegger’s message regarding technology: the instrumental definition of technology blinds us to the real essence of technology, but the supreme danger of this is that we are thereby also blinded to the true nature of what it means to be a human being. Discussions of controlling technology, through law or other means, miss this entirely and in fact risk perpetuating a problematic view of the self.
In my next post, I will try to show how this insight can be helpful in understanding the limits of a privacy paradigm centered on control of personal information even if we don’t return to the radical excesses of first generation philosophy of technology.
But in closing let me respond to one possible objection to my claim that law is an important site for normative engagement with technology and, in particular, claims of control. One might ask whether law itself is a technology and therefore not something that can be easily and straightforwardly enlisted to judge other technologies. Ellul, who has already been mentioned in a number of previous posts, himself wrote of “judicial technique,” placing it in the realm of calculative rationality that characterizes other techniques. Nonetheless, because law is a site of justice, it is also in a kind of privileged position in relation to technology, as that which can never fully become technique. He argues:
Judicial technique is in every way much less self-confident than the other techniques, because it is impossible to transform the notion of justice into technical elements. Despite what philosophers may say, justice is not a thing which can be grasped or fixed. If one pursues genuine justice (and not some automatism or egalitarianism), one never knows where one will end. A law created as a function of justice has something unpredictable in it which embarrasses the jurist. Moreover, justice is not in the service of the state; it even claims the right to judge the state. Law created as a function of justice eludes the state, which can neither create nor modify it. The state of course sanctions this situation only to the degree that it has little power or has not yet become fully self-conscious; or to the degree that its jurists are not exclusively technical rationalists and subordinated to efficient results. Under these conditions, technique assumes the role of a handmaiden modestly resigned to the fact that she does not automatically get what she desires. (The Technological Society, p. 292)
One might say that justice eludes control and we would do well to attend to this and its significance.
Wednesday, February 25, 2009
Introducing Lisa Austin
Thanks to Lyria and all of the previous bloggers for their many thought-provoking posts. We're now in the home stretch with two bloggers left to go.
Our next blogger, Lisa Austin from the University of Toronto, conducts research in areas that include privacy law and the ethical and social justice issues raised by emerging technologies. A recent work focuses on the challenges to privacy rights and interests presented by state information-sharing practices. Lisa is currently working on a research project involving privacy and identity.
Monday, February 23, 2009
Technology bias
In her comment on my previous post, Gaia Bernstein asks an important question:
The question is should the autonomy of scholars be constrained and their efforts be directed to areas of law where their insights would be most effective?

Actually, I agree with Gaia that the answer is "no." I am not attempting to cramp the autonomy of legal scholars to write about what they wish, only to encourage greater self-reflection.
No single article or author writing about virtual worlds is doing any wrong or harm. Having read 126 such articles, I can say that many of them are very interesting - as I have said previously, I love legal hypotheticals involving new technologies. I am not the only one - analysis of legal issues surrounding new technologies (from virtual worlds to genetics) can often be found in the mainstream media. And no one is harmed by an exploration of how transactions concerning a moon platform or a virtual mace are classified from a legal perspective.
But there are concerns that result from legal scholars' interest in technology. The first is that raised by Beebe: it allows lawyers to pretend that law is still in control. We "domesticate" technological innovation by analysing it in legal terms.
Interest of this sort is usually short-lived, so that we still have cyberlaw (though much of this is being assimilated) and virtual law, but no longer railroad law. And we now explore property concepts by testing them against virtual objects rather than space platforms. If the point is to understand "property" better, why no longer space platforms?
The other concern is that legal scholars might focus on technological aspects of particular issues, while ignoring broader questions. It is one thing to say that the law can control technological monsters, but another to see only technological monsters.
For example, technology might be portrayed as a “monster” while analogous non-technological threats recede into the background. Consider Frank Pasquale’s discussion on this blog and in a previous article of the dangers of technologies that offer competitive advantage. As I said in my comment, I personally find the idea of neurocosmetics pretty horrific. But I have no trouble using parenting techniques to manipulate my children's personalities. In using such techniques, I am taking advantage of my children’s neuroplasticity to alter (to some extent at least) their future "selves." In this way, parenting can operate as an alternative path to the ends achieved by neurocosmetics. But parenting is not “scary,” not even if I know that it gives some children an “advantage” over children whose parents, perhaps due to socio-economic disadvantage, lack the resources to learn and utilise various parenting strategies. Which leads back to the question, if the concern is competitive advantage, is it reasonable to focus on the newest technological means of gaining a competitive advantage? Frank would say "yes" because
Technology is often far more sudden, effective, and commodifiable than social or cultural methods of accomplishing ends.

This suggests that technological means to achieving competitive advantage are of more concern than non-technological means. But it might be argued that a technological focus also deflects attention away from the (currently) greater social problem. I would perhaps justify a technological focus in a different way in this case - absent a rejection of capitalism in its current form, the only regulation likely is restrictions on technological means of gaining competitive advantage. Thus I am not saying that a technological focus cannot be constructive, nor that a particular article cannot choose to focus on technological aspects of a problem. But by focusing on the technological, we should not ignore the non-technological. In other words, it is important to consider the broader question about competitive advantage, in particular any other aspects of it that can realistically be limited. We should still consider, for example, whether students ought to be obliged to disclose the use of tutoring colleges when applying for university or jobs.
Even where the problem is not containing technological "monsters," but merely exploring uncertainties or filling legal gaps, it is important to justify a technological focus. I have tried to do this in Recurring Dilemmas and Why Have a Theory of Law and Technological Change. Others will judge my efforts. One interesting observation I made, though, was the tendency for lawmakers to use technological change as an excuse to change a law where that is not the real or only reason they wish to do so. We are used to the story of law falling behind technology and needing to be updated. While this narrative is sometimes pertinent, it is important to remain vigilant as to the bias it can cause. In some cases, portraying a new technology as the problematic element is used to advance a particular perspective. For example, digital copying and peer-to-peer technologies have been portrayed by organisations like the RIAA as requiring "updating" of copyright law (eg the DMCA). The narrative is one of an existing status quo, upset by technological change, requiring new laws to ensure reversion to the status quo. The DMCA may or may not be a good idea, but portraying technology as the disruptive element in need of a legal "fix" is not the only story to be told.
So, what lessons to draw? I am still unsure which aspects of virtual world scholarship can fairly be distinguished from golden age space law. But I think it is an important question to ask. Given our autonomy, why do we so often choose to explore legal issues surrounding new technologies? What justifications can we offer to counter any dangers of an overly technological focus?
Saturday, February 21, 2009
Turning the lenses inward
With my posts, I am going to do a different blend of the concepts of autonomy, law and technology, and explore the reasons why legal scholars use their autonomy to focus on issues surrounding new technologies. By “issues surrounding new technologies” I don’t mean why we are here discussing law and technology theory (there are, after all, relatively few of us, and many justifications we could offer for our choice of scholarship, some of which were collected in the MJLST symposium). Rather, I am referring to the vast fields of scholarship exploring particular legal issues surrounding particular technologies.
In my first post, I will set up the question, and in the second go some way towards an answer. One caution – I have much further to go with this project before producing a piece for publication, so my ideas are still tentative. Hopefully, these two posts will generate critique and suggestions! But on with the show…
Beebe, in an excellent note entitled Law’s Empire and the Final Frontier: Legalizing the Future in the Early Corpus Juris Spatialis (108 Yale L.J. 1737), discusses the fate of “space law.” He describes the “Golden Age” of space law in which lawyers debated such questions as whether title to a space platform would be transferred by bill of sale or deed. Far from lagging behind technology, lawyers were leaping ahead. He argues that lawyers’ focus on outer space was an attempt, as Kieran Tranter might put it, to ensure that the “law” story won over the “technology” story, and hence that lawyers had a place in the future.
Note that Beebe does not deny that new technologies generate new legal issues. In an earlier piece, I categorised legal issues generated by technological change. It might in fact be uncertain, on the basis of pre-existing law, how title to a space platform would be transferred. Beebe’s point is not that this issue was meaningless or easy, but rather that the purpose of discussing it is to assert the dominance of a legal narrative in a technological future rather than to set out an authoritative, coherent statement of legal doctrine. “Space law” still exists, although Beebe distinguishes modern space law from “golden age” space law by describing the former as “a highly technical discourse spoken primarily by specialist practitioners.”
Might today's legal scholars, with the freedom to discuss whatever they wish, fall into the same trap as "golden age" space lawyers? One area where this might be happening is the scholarship surrounding legal issues in virtual worlds. I should start by admitting my own musings on this topic in an article on the scope of property law which employed virtual property as one of its examples. So, why am I worried about the parallels? First, it is not self-evident why legal scholars would be concerned with virtual worlds. Unlike a technology such as cloning, there is no “obvious” role for law to play. Second, people spending time and doing business in constructed virtual worlds arguably pose a similar "threat" to lawyers to that posed by the possibility of space travel in the 1960s.
With the help of a research assistant, I am in the process of compiling a list of all articles dealing with legal issues in virtual worlds published (or appearing on-line) before the end of 2008. We have over 100 articles dealing with legal issues in virtual worlds. I am not currently including books such as that by Duranske on Virtual Law (published by the American Bar Association). As well as getting a sense of numbers, I have “coded” them for explanations offered as to why the issue being discussed is important or urgent. Some articles gave more than one reason, in which case more than one coding was allocated. My “coding” is necessarily subjective (as the justification for exploring issues in virtual worlds was often implied from introductions rather than explicitly identified as a rationale). But what I wanted was a sense of whether there was any expressed need for legal scholarship on virtual worlds that could take it outside the realm of Beebe's concern.
Most articles offered at least some rationale for finding the topic of interest. A few (including my own) were concerned with broader legal development, using virtual worlds as a launching pad to explore more general legal issues. Of the ones that considered the resolution of legal issues in virtual worlds important in itself, the most popular reason was the rate of growth of virtual worlds, by reference to changes in population or profit. A few raised the need to ensure continuing growth and productivity of virtual worlds as a rationale for their discussion. Government and judicial activity was sometimes mentioned as justifying legal analysis. Quite a few articles referred to the fact that virtual world transactions have corresponding “real money” values, with others referring to “real world” effects of virtual activity more broadly. Some articles referred to the importance of virtual worlds in the lives of (at least some of) their residents. There is also a cumulative effect, with some articles referring to previous media or academic interest in virtual worlds as a rationale for further discussion of virtual worlds.
So, is there anything in all this that might explain the popular focus on legal issues in virtual worlds? Some, still tentative, thoughts:
Growth: The growth of virtual worlds might be important for two (related) reasons: (1) if there are legal dilemmas, it is possible that more and more people will encounter them, and (2) if laws are going to be made, they need to be made soon before the technological status quo becomes entrenched.
The first of these is true, but statements about the number of citizens in Second Life are no more impressive than lists of man's accomplishments in outer space in the 1970s. Neither tells us whether resolution of the legal issues is timely or premature. Growth itself might signal either - ongoing growth and development might make early legal responses obsolete. Growth might also be illusory - a passing fad.
The second of these does seek to explain the urgency of attention to legal issues. However, “growth” as such may not be the relevant factor. According to Gaia Bernstein, diffusion patterns can signal a need for urgent consideration of legal issues. Diffusion patterns are not, however, mere reference to rate of uptake but rather features such as centralisation and the existence of a critical mass point. Although virtual worlds are decentralised, their fate (in terms of a critical mass point) is less clear than the fate of the Internet discussed by Gaia in her paper. Still, a demonstration that the diffusion pattern of virtual worlds made particular legal problems more urgent would satisfactorily distinguish virtual law from space law.
Technology promotion: Where a technology is independently desirable, but diffusion is stymied for an external reason, then law reform to remove the blockage might be desirable. Gaia Bernstein gives an example of this in her discussion of privacy concerns inhibiting the diffusion of genetic testing technologies. Whether this scenario (or something similar) applies in the case of virtual worlds would require demonstration. I am not so sure that promoting virtual worlds is a high government priority right now anyway.
Government and judicial activity: Certainly, a judicial decision, proposed law or proposed agency action can be a good reason for legal commentary. However, in the case of virtual worlds, few decisions and little action tend to lead to plenty of commentary. Bragg v Linden Labs only reached the interlocutory stage before being settled, yet academic commentary is plentiful.
Real world implications (including the possibility of exchange between virtual currency and real currency): The fact that actions in virtual worlds can have real world implications is generally a pre-requisite for their being of interest to lawyers at all. However, given the vast amount of possible activity that has such implications, including financial implications, this cannot be a reason in itself. That said, if, for example, large amounts of money depended on the answer to a legal issue arising in virtual worlds, that could justify further exploration. Some virtual worlds literature falls into this category.
Importance to individuals using the technology: This seems a good reason to resolve legal issues surrounding virtual worlds. If the lives of many individuals would be enhanced by particular legal treatment of virtual worlds, then advocating such treatment seems sensible. Of course, ideally, one would have empirical proof of what legal issues virtual citizens are concerned about, rather than mere supposition.
In summary, there are some glimmers of hope that virtual law scholarship will turn out to be less humorous in retrospect than "golden age" space law scholarship, although the jury is still out. Most likely, as in the case of space law, some aspects of virtual law jurisprudence will become relevant and important, perhaps confined to true specialists. Other areas may seem, in retrospect, a distraction, motivated by legal academics’ desire to explore strange new worlds.
But, if we scholars can do what we like, why does this matter? The answer (or at least further musings) will have to wait until my next post.
Introducing Lyria Bennett Moses
Our next blogger, Lyria Bennett Moses, hails from the University of New South Wales.
An earlier paper by Lyria discussed how the law deals with 'recurring dilemmas' when confronted with new technologies, as well as the ways that technological change differs from other social changes that challenge traditional legal interests. In this paper and elsewhere, Lyria has been developing a framework for legal analysis at the intersection of law and technology.
Her earlier posts at this blog can be found here.
Friday, February 20, 2009
Two Technological Tales: Email and Minitel
We tend to think that a technology which failed to diffuse must have been a bad idea. But there are technologies that undergo long social adoption processes and eventually achieve mainstream adoption. These long adoption processes, if acknowledged at all, are usually attributed off-handedly to technical issues. Yet diffusion delays often stem from a complex interaction of factors, many of which are related not to technical difficulties but to individual adoption decisions. In this post I want to use the stories of two eventually successful technologies that underwent long social adoption processes in order to underscore the need to focus legal attention and resources on the user as an adopter.
The first story is about videotext systems. We often marvel at how the Internet transformed our lives: from the abundance of information to the conveniences of online shopping. The Internet reached mainstream adoption in the mid-1990s. But few realize that the majority of the French population enjoyed the conveniences of the Internet from the early 1980s, through use of a videotext system called Minitel. Minitel consisted of a small monitor and keyboard and used the phone connection to transmit information. It was used for online banking, travel reservations, information services, online grocery shopping and messaging services. All in all, it encompassed many of the features we have come to associate with the Internet.
The Minitel was introduced in France in 1982 and reached mainstream adoption by 1985. Similar videotext systems were launched in the United States, most European countries and Japan, yet these systems were not adopted. The residents of most of the world had to wait until the mid-1990s to enjoy the conveniences the French had enjoyed a decade earlier.
The second tale is about email. Most people consider email to be a 1990s technology, but it was in 1971 that the first email was sent between computers. The major technological difficulties were overcome by the early 1980s with the adoption of the uniform TCP/IP standard. Commercial email, in fact, existed during the 1970s. The Queen of England sent her first email over the Atlantic in 1976, and Jimmy Carter’s campaign also used email in 1976. Why, then, did most of us start using email only in the mid-1990s? Technological issues alone fail to account for the time lag.
The stories of the videotext systems and email leave many questions unanswered. What prevented users from adopting these technologies earlier? What could have been done to accelerate diffusion? I hope to explore these issues further. But my main goal in this post was to use these stories to illustrate the importance of shifting the legal regime’s attention and resources toward regulating user adoption behavior, given its important role in technological diffusion delays.
Wednesday, February 18, 2009
The User as a Resister of New Technologies (or Hail the Couch Potato)
Legal scholars have recently discovered the user of new technologies. But we tend to concentrate on a specific type of user – the user as an innovator. We look at the user who designs, who changes a technology to reflect his needs. For example, much has been written about users innovating with open source software. We also pay ample attention to users’ abilities to create using digital technology and the abundant content available on the Internet.
I do not wish to belittle this recent focus on the user as an innovator. But I believe our concern with users should be significantly broader. After all, the user as an innovator is not our typical user. I want to suggest in this post that we begin paying attention to the ordinary user – the couch potato.
You may be wondering – why dedicate our time to the couch potato? Isn’t our goal to encourage users to actively participate and innovate in order to promote progress? I propose that we focus on the ordinary user because, despite the common belief that a failed technology was inherently destined for failure, it is this user who routinely makes the decisions about whether or not to adopt new technologies. Users resist new technologies in different ways. Sometimes they actively resist them; demonstrations against nuclear weapons are an example of active resistance. But most commonly, users engage in avoidance resistance, and examples are plentiful: from a woman not buying genetically modified food in the supermarket to an aging poet refusing to replace his typewriter with a computer.
I suggest that we start focusing on the user as an adopter of new technologies. The importance of concentrating on users’ daily adoption decisions lies in our emphasis on progress as an important socio-legal value. We care about the user as an innovator because we believe that innovation promotes progress and human welfare. But if a brilliant new technology is not adopted, the progress goal itself is frustrated, and our investment in innovation is wasted.
In my next post, I will use the stories of two technologies – videotext systems and email – to illustrate the importance of paying attention to user resistance.
Tuesday, February 17, 2009
Introducing Gaia Bernstein
Our next blogger is Gaia Bernstein from Seton Hall Law School.
Gaia, along with Frank Pasquale, organized and hosted our earlier law and technology theory blog.
Gaia also organized the first symposium issue on works that considered the development of a general theory of law and technology. In this issue, her own contribution built on her earlier research to focus on the role of law with respect to the diffusion of new technologies. This work helped me to understand how law interacts with our (mainly) love/hate relationship with new technologies, as we for the most part embrace these technologies while, in certain cases, fearing their individual or social consequences.
Drumroll please ...
Sunday, February 15, 2009
Stories of Autonomy, Technology and Law II
The Autonomy Story
Freedom has exercised a particular attraction on the modern imagination. The technology story saw the tool-using human as freeing humanity from the constraints of a fickle and oppressive nature. The legal story saw contract and government as freeing humans from too much freedom in the state of nature. In both, freedom is defined relationally, as a freedom from. The concept of will that Nietzsche exalted (as a rejection of the orthodoxy that ‘freedom’ had become) turns out, on a simplistic analysis, to be ‘freedom from’ on steroids. Freedom from, or the pure exercise of will, has a tinge of irresponsibility about it, as first-year law students demonstrate when they are allowed to play, under close supervision, with negative rights in tutorials (I am free to swing my fist to within 1/1000 of an inch of your nose). Autonomy can suggest something else; and that something else can be seen in the autonomy story of autonomy, technology and law.
The autonomy story emerges from critiques of both the technological and the legal story. One of the first disciplines to question the technological vision of humanity as the freed being of brain and tool was technology studies. I am referring to Lewis Mumford’s canonical two-part Myth of the Machine (1966). In it, drawing upon the breadth of human diversity as catalogued by mid-twentieth century cultural anthropology, Mumford argued that it was not tool use that defined humans but language and culture, and that the evolution of our mental hardware was stimulated by increasing sophistication in the usage of signs and symbols. Human freedom from nature came not from tools but from culture, which allowed more effective domination – technology was the material manifestation of culture, not the substratum on which the superstructure of culture was erected.
This meant that for Mumford culture – law, morals, myths and technology – is what liberated humans. Notice that unlike the other stories there are no second order consequences. Law and technology, as culture, are tied to human freedom. Mumford’s project was clear – to show that modern accounts of technology that posited technology as outside of human control were false and ‘placed our whole civilisation in a state of perilous unbalance: all the more because we have cast away at this critical moment, as an affront to our rationality, man’s earliest forms of moral discipline and self-control’ (Mumford 1966: 52). Mumford regarded law (moral discipline and self-control) and technology as elements of a cultural whole. The need for law – for discipline and control – of technology was self-evident.
The spectre of the noble savage haunts Mumford’s work, along with a sort of negative ethnocentrism – which became obvious in the appropriate technology movement of the 1970s that his writing helped found – in favour of indigenous society against the ‘unbalanced’ West and all its works. However, this extremism is not core to the story that Mumford tells. Indeed, what this cultural re-reading of the technological story posits is a relation between law and technology that does not reify technology as either essentially human, and by location ‘good’ (as in the technology story), or inessentially secondary, and by location ‘bad’ (as in the legal story). What Mumford’s story allows is a freedom to choose, but in that freedom hides responsibility. Humans, through culture, are the creators of their own destiny, and law and technology are equal partners in this self-creation.
This still talks about freedom, but it is a qualified freedom: not a freedom from but a freedom to. It seems that a vision of the human in the world that involves culture and self-creation also includes a concept of responsibility. It is this freedom to, and the normative demand of responsibility, that is captured by autonomy. This can be glimpsed in the critique of the legal story.
A fundamental challenge to the legal story of autonomy, technology and law comes, like Mumford’s critique of the technology story, from the social sciences. As early as when the lawyer-turned-sociologist Max Weber began the task of cataloguing legal systems, it became increasingly clear that social contract narratives failed to account for what it meant to live with a fully rationalised legal system, modern executive government and industrial capitalism. In this mass urban context of the machine (it must be remembered that ‘technology’ only became common parlance in the 1950s), concepts like nature, reason, freedom, sovereign, contract and rights had difficulty being matched to identifiable ‘things.’ The US realists of the 1920s and 1930s tried to grasp this, but were hampered by their common law training and law school context and remained, in the main, fixated on judicial decision-making. It is the work of Michel Foucault that fundamentally challenges the legal story of autonomy, technology and law. Instead of postulating a natural human and a state of nature, Foucault presents a plastic human constructed by techniques. Human subjectivity (that place where one feels free or otherwise) was not a private zone of autonomy that survived, and was to be guaranteed by, the social contract, but a product of context. Foucault talks about the cultural processes in modernity through which humans are made: the processes that Mumford glosses with his broad brush strokes. These processes are the discourses of the self (medical, sexual, legal) and the mundane training, through routine, reports and discipline by panoptic institutions (the family, schools, hospitals, army, prisons, churches, and especially universities), that construct the ‘I’ of modern life. There is not the binary sovereign-subject but ever-changing and ever-to-be-negotiated networks of power relations. Here ‘law’ is more properly experienced as mores, authority, disciplines and punishments, and ‘technology’ is more properly experienced as techniques for self-control and for power over others. Talk of autonomy is a relative and negotiated affair that can be represented spatially as zones where reflective choice is possible. However, that is not freedom from; the range of choices is always limited and circumscribed.
In Foucault’s story the emphasis is on how the individual as a self negotiates the everyday – through using techniques and in being subjected to techniques, and in so doing changing. I am suggesting that, notwithstanding their obvious differences, Foucault fits within Mumford’s very grand account: Mumford insists on the primacy of culture, and with that humankind’s responsibility for self-creation, while Foucault explains the processes, at the level of the individual, through which an individual is made to be responsible for the self.
Now this autonomy story might seem quite removed from the mainstream of law and technology scholarship. However, I would submit that the more complex assessments of technology that are being voiced in this forum owe their formative moment to a realisation that it is human doing with technology – that is, the cultural register – that is the frame from which law and technology need to be considered. Further, the existence of this forum, with all these signs and symbols (Mumford would be proud, and ‘signs and symbols’ sounds more retro-cool than the po-mo ‘discourse’), is itself an exercise of our autonomy to reflect on our freedom to, and our responsibility for, the world that we make through technology and law.
In short, and this is the punch line of my argument, we tell stories. I continually and on purpose used the noun ‘story’ and verbs like ‘talk’ and ‘telling’ throughout. What I have endeavoured to show is how law and technology thinking replicates and transmits fundamental narratives about autonomy, technology and law, even in the guise of practicality. What I have also suggested, in concluding with the autonomy story, is a realisation that these stories, embedded and persuasive as they are, are cultural, and that we have responsibility for them. This is why my research continues to circle back to science fiction (even when I feel I should grow up, get grants and do practical law and technology research). Putting aside the mountains of chaff within the opus of science fiction, there are some grains – some concepts, characters, plots, narratives – that are resources to write alternative stories about the relation between humans, technology and law.
Friday, February 13, 2009
Stories of Autonomy, Technology and Law
I’ll address the most important topic raised in Art’s introduction. Re: Galactica. I am planning to do some more writing on Galactica later in the year and that might answer the question whether I am ‘enjoying’ Season 4. The enjoyment has morphed into a compulsion...
On the matters at hand.
I am very glad that Art has suggested this topic for this year’s blog as it has allowed me to untangle some ideas that have lain undisturbed by my past thinking about law and technology.
Like Jennifer’s, what follows are new ideas (at least for me) – I have welcomed this as a forum for expressing new thoughts and I would be very keen to engage in a dialogue. It also means that I do not have the solidity of a worked paper behind these thoughts, so please forgive the roughness of ideas and expression.
In recent years, due to teaching and editing responsibilities, I have found myself becoming more and more a legal philosopher. This, I think, is a good discipline to bring to a discussion on human autonomy, technology and law. My argument in what follows is that specific engagements with law and technology tend to be scripted by stories that posit a fundamental relationship between human autonomy, technology and law. There is direction to my narrative. I examine three of these stories – the ‘technology’ story, the ‘legal’ story and the ‘autonomy’ story – concluding with the autonomy story as exposing the truth of the task at hand.
The Technology Story
The technology story begins with the populist definition of the human as tool user. The origins of this story run deep in Western culture, but a specific beginning lies in the palaeoanthropological theorising of the nineteenth and early twentieth centuries: that the evolution of the human – the specific chance relationship that accelerated natural selection – was tool-use by distant apelike ancestors. It was claimed that the chipping of flint and the domestication of fire set the cortex alight. Tool use facilitated greater resource utilisation, which in turn gave stimulus to brain development, which in turn led to greater creativity and experimentation in tool use; and very rapidly (in evolutionary time), our hairy ancestors moved from flints and skins to not-so-hairy modern humans with Blackberries in Armani. In this story what distinguished modern humans was tool use. The sub-text is autonomy. Tools and brain freed humans from nature. In Bernard Stiegler’s nice phrase from Technics and Time, technology allowed ‘…the pursuit of the evolution of the living by other means than life’ (Stiegler (1998): 135). In this story technology fundamentally relates to autonomy.
What this story about technology and human autonomy does not tell is law. Indeed, law’s absence is telling. As a fundamental myth, the tool-using-free human (TUFH?) is before law. Law emerges later, as a second order consequence, a supplement laid over the top of humanity’s essential nature.
The state of debate in contemporary palaeoanthropology is that this story, as an account of the evolution of Homo sapiens, is problematic and simplistic. Further, deep ecologists have been keen to point out since the 1970s that humans share the planet with other tool-using species, and a claim of superiority on the basis of tool use is anthropocentric. But it is a good story – a modern version of the myth in Plato’s Protagoras of Epimetheus, Prometheus and the gifts of traits – and it is an entrenched, and often repeated, narrative within Western culture.
The essential elements of this story are repeated again and again in the assumption of techno-determinism. It is the meta-form that scripts the arguments of those who enthusiastically embrace technological change as a good in itself. It is also the narrative that animates the legal mind when it turns casually to the question of technology and thinks that law must ‘catch up’ or that law is ‘marching behind and limping.’ In these phrases technology is placed at the core of what it means to be human, while law is located at the periphery. Its influence can also be seen in the ‘can’t’ or ‘shouldn’t’ regulate technological change arguments. Being technological is regarded as the essence of humanity, and artificial attempts to regulate the ever-flowering of this being will either fail (can’t) or end in debasement and corruption (shouldn’t).
The Legal Story
The legal story mixes the relationship of human autonomy, technology and law according to a different recipe. This story comes down to us from the social contract tradition of early modernity. In this story the roles of law and technology are reversed. The story goes that humans lived wretched (Hobbes) or simple (Locke) lives in the state of nature, living by passions with only the spark of reason to distinguish humans from animals. This state was the state of complete freedom. However, that spark of reason eventually led to the realisation that a compact between humans could secure a more peaceful (Hobbes) or propertied (Locke) existence. The social contract was formed and, bingo: government, law, economy, society and the global financial crisis followed. In the social contract some freedoms were sacrificed to preserve others. Here law is fundamentally tied to human freedom at two levels: first, it is the legal form of a contract that binds the natural human; and second, freedom, reason and covenant combine to provide a justification for the posited legal system. One of the benefits, to use Hobbes’s phrase, of the ‘sovereign’s peace’ was technology. As humans were no longer in the ‘war of all against all’ (Hobbes) or worrying about where the next meal would come from (Locke), they could get on with learning about the world and making use of that knowledge. Hence technology emerges as a second order consequence.
Like the technology story, this story permeates Western culture. It remains law’s formal story of origin, and so ingrained is it in modern jurisprudence that explanations of legal orders that do not include such concepts as nature, reason, freedom, sovereign, contract and rights seem irrelevant. It shows its influence in law and technology scholarship. Fukuyama’s clarion call for law to ‘save’ humanity from biotechnology is an example. Driving Fukuyama’s argument is the social contract vision of the human as a reasoning being who is biologically vulnerable; this combination, on which the Western apparatus for the expression of freedom (government and market) has been constructed, is under threat from technology. The core needs to, and it is legitimate for it to, secure itself against change. In this account technology, as a second order consequence, is a threat, but also a threat that can be met. There is a fundamental confidence in legal mastery of technology that is absent in the technology story.
To recap: the technology story posits human autonomy and technology as essential, with law a second order consequence. In the alternative, the legal story narrates human autonomy and law as essential, with technology a second order consequence. My argument has been that much of the scholarship on law and technology emanates (that is, draws its fundamental structure) from one or other of these narratives (and sometimes, in the guise of practicality, both). What has happened in my telling of these stories has been a muffling of ‘autonomy.’ I moved from autonomy to freedom, and since treating these two words as synonyms is common, I should have got away with it. But perhaps I shouldn’t have. This opens onto the autonomy story.
Introducing Kieran Tranter
We will hear next from Kieran Tranter of Griffith University.
I first came across Kieran's law and technology work in an article where he studied the complex historical processes that influenced the regulation of automobiles in early 20th Century Australia. (Peter Yu and Greg Mandel have also written and posted views that discuss how history can drive law and technology developments.)
Kieran has also managed to cast a critical eye on the ways that philosophies of technology can assist with the development of law and technology theories, including a discussion of how Battlestar Galactica challenges Heideggerian views on the metaphysics of technology! More importantly, one wonders whether Kieran is enjoying this final season of Galactica ...
Kieran's earlier posts at this blog can be found here.
Thursday, February 12, 2009
Does technology make "an offer you cannot refuse"? Some thoughts on human autonomy and technology.
Autonomy is the state of freedom from external control and constraint on one’s decisions and actions. We are constrained by many things such as, for example, the earth’s gravity. Interestingly, many of our technologies increase our autonomy in the face of some of these constraints. For example, our experience of the constraining effect of gravity is greatly altered when we are on one of the thousands of airplanes circling the earth every day.
However, despite the range of decisions and actions that technologies open to us, there is a way in which we come to feel forced to adopt and use technologies, whether we like it or not. In some cases, this is because the technology becomes an indispensable part of the material or cultural infrastructure of a society and we must use it in order to participate in that society. For example, the widespread use of the automobile has led to styles of life and urban layouts that presuppose mechanical transportation.
In addition to the ways in which some of our technologies cause us to restructure society in a way that presupposes their use, the issues of human competition and equality are perhaps also at the heart of why we feel forced to adopt technologies.
In asking about the interaction of equality and technology, I am adopting the following understanding of human equality: I am interested here in the equality of resources, understood broadly to include not just external resources (e.g. wealth, natural environment, social and cultural resources), but also internal or personal resources (e.g. personality, physical and mental abilities). This is a provisional (“half-baked”) definition, and I am launching into this discussion with some trepidation. However, since this blog is a great opportunity to ventilate and develop ideas – here goes.
Technologies can be used to alter one’s endowment of both internal and external resources. Where there is a pre-existing inequality or disadvantage with regard to some resource (e.g. physical strength), a party may seek a technology to neutralize this disadvantage. Note, for example, that the 19th century nickname for the Colt handgun was “the Equalizer.”
Others may seek to go further with technologies and to create a positive advantage over others, whether they started from a position of pre-existing disadvantage or not. Frank discusses the competitive pursuit of technological enhancement in a fascinating post dealing with “positional competition.” It may be that the social pressure to neutralize disadvantages or to seize advantages is one reason why people feel obliged to adopt technologies.
Another reason why people may feel obliged to adopt technologies arises from a problem at the heart of using technological fixes for socially-constructed disadvantages. By “socially-constructed disadvantages” I mean human characteristics that do not entail any actual harm to an individual other than the negative social valuation of those characteristics. Paradoxically, attempts to neutralize socially-constructed disadvantages through technology merely strengthen that social construction. This has the effect of reinforcing the pressure on the disadvantaged group to “fix” itself to conform to the social expectation.
Several examples could be cited here. As Clare Chambers discusses, the availability of “virginity-restoring” surgery for women may enable them to elude the effects of a double standard applicable to men and women with respect to sexual freedom. At the same time, it strengthens the double standard that forces women in some places to resort to the surgery. In other words, the technological response and the discriminatory norm are in a mutually-reinforcing feedback loop.
In The Case Against Perfection, Michael Sandel discusses the government-approved use of human growth hormone as a height-increasing drug for healthy children whose adult height is projected to be in the bottom first percentile. This allows a few to gain in stature, leaving the rest to seem even more unusually short due to their decreased numbers. It does nothing to disrupt the socially-constructed disadvantage of being short.
In other cases, a technology offers an escape from what appears to be a real rather than a socially-constructed disadvantage. For example, the discovery of insulin and methods to produce it cheaply and efficiently have proven to be helpful in promoting equality at least with respect to pancreatic functioning and health. Interestingly, insulin is an excellent example of a technology that cannot fuel a technological enhancement arms race. As far as I know, insulin is of no use to non-diabetics. As a result, it can only close an inequality, without offering the possibility of seizing an advantage through supra-normal amounts of insulin.
All of this suggests to me that technology has a peculiar effect on human autonomy. The technologies offer us opportunities which, at first glance, would seem to promote autonomy. They expand the range of options open to the individual, and leave it to each person to adopt them or not.
However, there are various reasons that technologies become “offers you cannot refuse.” Society restructures itself to presuppose the use of certain technologies so that it becomes hard to exist in society without them. In addition, human competition for advantage maintains a continuous pressure to adopt technological enhancement. Finally, technologies offer the opportunity to people to neutralize socially-constructed disadvantages. This is most insidious from the perspective of human autonomy since the social expectations that fuel the demand for the technologies are reinforced by those very technologies.
Tuesday, February 10, 2009
"Science discovers, genius invents, industry applies, and man adapts himself..."
One of the slogans of the 1933 Chicago World’s Fair was the following: “Science discovers, genius invents, industry applies, and man adapts himself to, or is molded by, new things...Individuals, groups, entire races of men fall into step with science and industry."
This wasn’t a new idea. There is a long-standing strand in human thinking about technology that emphasizes the important (and sometimes apparently decisive) effect of our technologies on society. In 1620 Sir Francis Bacon wrote in the Novum Organum that:
“…it is well to observe the force and virtue and consequences of discoveries, and these are to be seen nowhere more conspicuously than in those three which were unknown to the ancients, …; namely, printing, gunpowder, and the magnet. For these three have changed the whole face and state of things throughout the world; the first in literature, the second in warfare, the third in navigation; whence have followed innumerable changes, insomuch that no empire, no sect, no star seems to have exerted greater power and influence in human affairs than these mechanical discoveries.”
Numerous subsequent writers have raised the same suggestion that the technologies that we create and use have profound effects on social structures and on human history. At the extreme, technology itself is viewed as a phenomenon that drives history and society. It seems likely that this idea is, in part, true. However, at the same time, technologies are produced and used by a given society and so are themselves determined by that society. In other words, the influence appears to flow in two directions between technology and society, and it is difficult to untangle the primary cause (if there is one).
And yet, the complexity of this interaction makes it seem sometimes as if technology calls the shots. As they said at the World’s Fair, society and individuals “fall into step with,” “adapt to,” or are “molded by” the technology. In this pair of blog postings, I would like to tackle the following two questions. First, have technology and technological ideology so pervaded the law and judicial thinking that it can be said that the law is determined by technology rather than that technology is controlled by the law?
The second blog posting will look at the effects of technology on the autonomy of the individual human being rather than the effects of technology on the collective self-determination of humans in a society. In that second post, I would like to explore the mechanisms by which individuals come to feel obliged to adopt a given technology, and how inequality (of power, natural or material resources) between humans drives this process. With this second posting, I am indebted to Frank Pasquale, whose excellent recent posts in this blog and previous writing on equality and technology have spurred my thinking in this direction. My discussions with my good friend and extremely insightful colleague at the University of Ottawa, Ian Kerr, on the complex effects of technology on human equality were both fun and deeply illuminating too!
Onward with the first posting!
A year or so ago I published an article that asked whether courts control technology or simply legitimize its social acceptance. I raised this possibility because I kept coming across judgments suggesting that either (1) our legal rules are biased in favour of technologies and against competing non-technological values, or (2) judges find ways to reframe disputes in ways that tend to favour technologies. This is a bold accusation, and it is possible that counter-examples could be proposed. However, let me give two examples to illustrate what I mean.
The doctrine of mitigation in tort law states that a plaintiff who sues a defendant for compensation cannot recover compensation for those damages that could reasonably have been avoided. So far, so good. It makes sense to encourage people to take reasonable steps to limit the harm they suffer. In practice, however, this rule has been applied by the courts to require plaintiffs to submit to medical treatments involving various invasive technologies to which they deeply objected, including back surgery and electro-shock therapy. Although plaintiffs have not been physically forced to do so, a seriously-injured plaintiff may face considerable economic duress. Knowing that compensation will likely be withheld by the courts if they do not submit to a majoritarian vision of reasonable treatment, they may submit unwillingly to these interventions. I think that this doctrine operates in a way that normalizes the use of medical technologies despite legitimate objections to them by individual patients.
In the trial level decision in the Canadian case of Hoffman v. Monsanto, a group of organic farmers in Saskatchewan attempted to start a lawsuit against the manufacturer of genetically-modified canola. The farmers argued that because of the drift of genetically-modified canola pollen onto their crops, their organic canola was ruined and their land contaminated. The defendants responded that their product had been found to be safe by the Canadian government and that it had not caused any harm to the organic farmers. Instead, the organic farmers had brought harm upon themselves by insisting on adhering to the organic standards set by organic certifiers and the organic market. The trial judge was very receptive to this idea that the losses flowed from actions of organic certifiers and markets in rejecting genetically-modified organisms, and not from the actions of the manufacturers. I find this to be a very interesting framing of the dispute. In essence, it identifies the source of harm as the decision to reject the technology, rather than the decision to introduce the technological modification to the environment. Once again, the technology itself becomes invisible in this re-framing of the source of the harm.
These judges do not set out to make sure that humans adapt to the technologies in these cases. Instead, I think these cases can be interpreted as being driven by the ideological commitments of modernity to progress and instrumental rationality. An interpretation of the facts or a choice of lifestyle that conflicts with these ideologies sits highly uneasily within a legal system that itself also reflects these ideologies.
More recently, I have begun to explore a second question along these lines. If judges and our legal rules are stacked in favour of technologies and against other values, what happens when it is the judges themselves who are in conflict with the technologies. Do the judges adapt? Here I turned to the history of the polygraph machine (lie detector), and the attempts to replace the judicial assessment of veracity with evidence from the machine. The courts have generally resisted the use of polygraph evidence on two bases. First, they say, it is unreliable. Second, the assessment of veracity is viewed as a “quintessentially human” function, and the use of a machine for this function would dehumanize the justice system. While the judges appear to be holding the line at the attempted usurpation by the machine of this human role in justice, it is interesting to speculate about how long they will be able to do so. Will they be able to resist admitting reliable machine evidence, particularly given concerns about how reliable humans actually are at detecting lies. Novel neuro-imaging techniques such as fMRI which purport to identify deception by patterns of activity in the brain, represent the next step in this debate. If these neuro-imaging techniques are refined to the point that they are demonstrably superior to human beings in assessing veracity, would it be fair to exclude this evidence in a criminal trial? The right to make a full answer and defence to criminal charges may say “no.”
I am currently researching neuro-imaging technologies and their use in the detection of deception in order to predict how our law may be affected by them. In the background is the continued question: Is it true that “Science discovers, genius invents, industry applies, and man adapts himself to, or is molded by, new things...Individuals, groups, entire races of men fall into step with science and industry"?
This wasn’t a new idea. There is a long-standing strand in human thinking about technology that emphasizes the important (and sometimes apparently decisive) effect of our technologies on society. In 1620 Sir Francis Bacon wrote in the Novum Organum that:
“…it is well to observe the force and virtue and consequences of discoveries, and these are to be seen nowhere more conspicuously than in those three which were unknown to the ancients, …; namely, printing, gunpowder, and the magnet. For these three have changed the whole face and state of things throughout the world; the first in literature, the second in warfare, the third in navigation; whence have followed innumerable changes, insomuch that no empire, no sect, no star seems to have exerted greater power and influence in human affairs than these mechanical discoveries.”
Numerous subsequent writers have made the same suggestion: that the technologies we create and use have profound effects on social structures and on human history. At the extreme, technology itself is viewed as a phenomenon that drives history and society. It seems likely that this idea is, in part, true. However, at the same time, technologies are produced and used by a given society and so are themselves determined by that society. In other words, the influence appears to flow in two directions between technology and society, and it is difficult to untangle the primary cause (if there is one).
And yet, the complexity of this interaction sometimes makes it seem as if technology calls the shots. As they said at the World’s Fair, society and individuals “fall into step with,” “adapt to,” or are “molded by” the technology. In this pair of blog postings, I would like to tackle the following two questions. First, have technology and technological ideology so pervaded the law and judicial thinking that it can be said that the law is determined by technology rather than that technology is controlled by the law?
The second blog posting will look at the effects of technology on the autonomy of the individual human being rather than the effects of technology on the collective self-determination of humans in a society. In that second post, I would like to explore the mechanisms by which individuals come to feel obliged to adopt a given technology, and how inequality (of power, natural or material resources) between humans drives this process. For this second posting, I am indebted to Frank Pasquale, whose excellent recent posts in this blog and previous writing on equality and technology have spurred my thinking in this direction. My discussions with my good friend and extremely insightful colleague at the University of Ottawa, Ian Kerr, on the complex effects of technology on human equality were both fun and deeply illuminating!
Onward with the first posting!
A year or so ago I published an article that asked whether courts control technology or simply legitimize its social acceptance. I raised this possibility because I kept coming across judgments suggesting that either (1) our legal rules are biased in favour of technologies and against competing non-technological values, or (2) judges find ways to reframe disputes so as to favour technologies. This is a bold accusation, and counter-examples could no doubt be proposed. However, let me give two examples to illustrate what I mean.
The doctrine of mitigation in tort law states that a plaintiff who sues a defendant for compensation cannot recover compensation for those damages that could reasonably have been avoided. So far, so good. It makes sense to encourage people to take reasonable steps to limit the harm they suffer. In practice, however, this rule has been applied by the courts to require plaintiffs to submit to medical treatments involving various invasive technologies to which they deeply objected, including back surgery and electro-shock therapy. Although plaintiffs have not been physically forced to do so, a seriously-injured plaintiff may face considerable economic duress. Knowing that compensation will likely be withheld by the courts if they do not submit to a majoritarian vision of reasonable treatment, they may submit unwillingly to these interventions. I think that this doctrine operates in a way that normalizes the use of medical technologies despite legitimate objections to them by individual patients.
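To make that economic duress concrete, here is a toy illustration in code. The figures are entirely hypothetical (mine, not drawn from any case), and I set aside the costs and risks of the treatment itself for simplicity:

```python
# Toy illustration of the mitigation doctrine, using invented figures.
# If a court deems a refused surgery part of "reasonable" mitigation, it
# subtracts the losses the surgery would have averted from the recovery.

total_loss = 100_000           # plaintiff's full losses from the injury
avoidable_by_surgery = 60_000  # portion the court finds surgery would avert

recovery_if_submits = total_loss                         # full compensation
recovery_if_refuses = total_loss - avoidable_by_surgery  # mitigation applied

# The gap is the effective price of declining the medical technology.
print(recovery_if_submits - recovery_if_refuses)  # 60000
```

On these invented numbers, declining the operation costs the plaintiff $60,000 in foregone compensation; that gap is the pressure the doctrine exerts.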
In the trial-level decision in the Canadian case of Hoffman v. Monsanto, a group of organic farmers in Saskatchewan attempted to bring a lawsuit against the manufacturer of genetically-modified canola. The farmers argued that because of the drift of genetically-modified canola pollen onto their crops, their organic canola was ruined and their land contaminated. The defendants responded that their product had been found to be safe by the Canadian government and that it had not caused any harm to the organic farmers. Instead, the organic farmers had brought harm upon themselves by insisting on adhering to the organic standards set by organic certifiers and the organic market. The trial judge was very receptive to this idea that the losses flowed from the actions of organic certifiers and markets in rejecting genetically-modified organisms, and not from the actions of the manufacturers. I find this to be a very interesting framing of the dispute. In essence, it identifies the source of harm as the decision to reject the technology, rather than the decision to introduce the technological modification to the environment. Once again, the technology itself becomes invisible in this re-framing of the source of the harm.
These judges do not set out to make sure that humans adapt to the technologies in these cases. Instead, I think these cases can be interpreted as being driven by the ideological commitments of modernity to progress and instrumental rationality. An interpretation of the facts or a choice of lifestyle that conflicts with these ideologies sits uneasily within a legal system that itself reflects these ideologies.
More recently, I have begun to explore a second question along these lines. If judges and our legal rules are stacked in favour of technologies and against other values, what happens when it is the judges themselves who are in conflict with the technologies? Do the judges adapt? Here I turned to the history of the polygraph machine (lie detector), and the attempts to replace the judicial assessment of veracity with evidence from the machine. The courts have generally resisted the use of polygraph evidence on two bases. First, they say, it is unreliable. Second, the assessment of veracity is viewed as a “quintessentially human” function, and the use of a machine for this function would dehumanize the justice system. While the judges appear to be holding the line at the attempted usurpation by the machine of this human role in justice, it is interesting to speculate about how long they will be able to do so. Will they be able to resist admitting reliable machine evidence, particularly given concerns about how reliable humans actually are at detecting lies? Novel neuro-imaging techniques such as fMRI, which purport to identify deception by patterns of activity in the brain, represent the next step in this debate. If these neuro-imaging techniques are refined to the point that they are demonstrably superior to human beings in assessing veracity, would it be fair to exclude this evidence in a criminal trial? The right to make a full answer and defence to criminal charges may say “no.”
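For readers wondering what “identifying deception by patterns of activity in the brain” amounts to computationally, here is a minimal sketch of the pattern-classification approach used in this research literature. Everything in it is an illustrative assumption: the data are synthetic stand-ins for voxel measurements, and the trial counts, signal strength, and resulting accuracy are invented, not results from any study:

```python
# A minimal sketch (not any court-tested system): fMRI "lie detection"
# research typically trains a classifier on brain activity recorded during
# truthful vs. deceptive responses. Synthetic data stand in for real scans.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical data: 80 trials x 500 voxels; deceptive trials receive a
# weak systematic shift in a subset of voxels, mimicking a neural signature.
n_trials, n_voxels = 80, 500
X = rng.normal(size=(n_trials, n_voxels))
y = np.repeat([0, 1], n_trials // 2)  # 0 = truthful, 1 = deceptive
X[y == 1, :50] += 0.5                 # invented "deception" signal

# Cross-validated accuracy on held-out trials: above chance on this toy
# signal, but far from the certainty a courtroom might demand.
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
print(f"Mean classification accuracy: {scores.mean():.2f}")
```

The legal question lives in the gap this sketch makes visible: accuracies above chance but below certainty, and it is in that gap that judges would have to decide whether the machine truly outperforms the human assessor of veracity.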
I am currently researching neuro-imaging technologies and their use in the detection of deception in order to predict how our law may be affected by them. In the background is the continued question: Is it true that “Science discovers, genius invents, industry applies, and man adapts himself to, or is molded by, new things...Individuals, groups, entire races of men fall into step with science and industry"?
Monday, February 9, 2009
Introducing Jennifer Chandler
Up to bat next is Jennifer Chandler from the University of Ottawa.
Jennifer writes in the areas of tort law, health law and cyberlaw, including discussions of network and software security. She also teaches an interesting course on 'technoprudence: legal theory in the information age.'
A previous article examined whether courts control technology or whether they simply legitimize the social acceptance of new technologies. This work is particularly relevant to our blog's current topic of 'Human Autonomy, Technology, and Law' as it queries whether courts and other legal institutions are merely passive observers of changing technologies or whether they help to shape these technologies to protect perceived values.
Sunday, February 8, 2009
More on the Tech-Driven Rat Race: From Professors to Police
I hope to get a chance to address the great comments on my last post soon. I'm going to do this second post now so I can get my contribution to this online symposium in the right order.
In a recent discussion of the Nature Editorial I mentioned in my last post, one of its authors came under serious criticism over several flaws in the editorial's reasoning. Thomas Murray of the Hastings Center characterized calls for "responsible use" of cognition-enhancing drugs in the healthy as utterly naive, especially given the editorial authors' reluctance to specify much in the way of strong legal rules to guarantee the "responsibility" qualifier. Nora Volkow also argued that it's unrealistic to expect any given drug to just make people "smarter" overall--there are trade-offs between focus and creativity, among other mental traits.
Faced with this onslaught, Martha Farah fell back on the old reliable defense of complacent continuumism (which I describe more fully in this paper). This is just like cosmetic surgery, she claimed--people were at first really disturbed about that, but they got used to it.
I think Farah's comparison is more revealing than she would like it to be. Like cosmetic surgery, a market for brain-enhancing drugs may draw drug companies away from real human needs and into more intense service of an already privileged elite. Such drugs also promise to spur positional competition, at younger and younger ages. (One can imagine an unenhanced high school student sullenly blaming her parents for her failure to get into college if the parents refused to ply her with the best mind enhancers at an early age.) I foresee something even more insidious with the mind-enhancing drugs--a fetishization of qualities that can be enhanced by technology over those which cannot. Rather than simply letting, say, academics perform old duties better, they will slowly change our conception of those activities.
Consider the role of steroids in policing. The Village Voice has a long story on some possibly inappropriate steroid/HGH use in the NYPD. I say "possibly" for two reasons: 1) the slippery "therapy/enhancement" distinction here and 2) the threat posed by bulked up criminals. The Voice reports that "the Brooklyn District Attorney's Office knows of 29 cops and at least 10 NYPD civilian employees—all well under the age of 60—who have received prescriptions for [steroids for] hypogonadism." Doctors quoted in the story find it implausible that so many officers would have this disorder--but there are probably other physicians who have a much broader concept of disease. And if suspects are bulking up on illegal substances, who can blame the cops for trying to catch up?
Now consider the spread of concentration-enhancing drugs from students (an old problem) to professors. Andrew Sullivan asks, "So if a prof wants to do a little Provigil, it's no worry for me. Why should it be a worry for anyone but the prof himself?" I think there are several reasons, not least the potential for medicalized competition to invade spheres of life we now deem constitutive of our identity. But for now let me just focus on how the police and profs examples intersect.
Think about the balance of scholarship produced in a regime where some labor under the supercharging influence of Provigil, and others forbear. The former will presumably generate more work than the latter. That may be fine in relatively technical fields (who wants to slow down the sequencing of a genome?). But in areas where ideology matters, the potential power of the pill-poppers can be a problem. We need to ask: what are the reasons people are not taking the drugs? A (wise) risk-aversion? A fear of disadvantaging others who can't afford them? A religious concern about "playing God"? And finally, are the people who have all these concerns really the ones we want to be drowned out by super-stimulated, super-productive others?
My basic point here is that Sullivan (and many other libertarians) make an erroneous presumption that the decision to use the drug is wholly distinct from whatever ideology a particular person has. To them, the technology is neutral in itself, and can be freely used (or not used) by anyone. In fact, the drugs fit in very well with certain ideologies and not at all with others. This is an old theme in the philosophy of technology, but is hard to encapsulate in a soundbite (itself a technology far more amenable to some ideologies than others).
At risk of stretching an analogy to the breaking point, I think professors and police face a similarly competitive landscape. The former battle for "mind share," the latter for order. The more we understand the true lesson of Darwin/Dawkins--the pervasiveness of competitive struggle in daily life--the better we can see the need for "arms control agreements" regarding enhancement technologies. (Hopefully they will be more effective than the failed policies of the past.) The question is whether we will permit ourselves to direct evolution or to be the mere products of blind technological forces. Those opting for the latter route make Benjamin's words on the "angel of history" all too prophetic:
This is how one pictures the angel of history. His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. This storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress.
--Walter Benjamin, "On the Concept of History", cited here.
Friday, February 6, 2009
Cognition-Enhancing Drugs: Can We Say No?
A recent book on health care rationing in the US, Can We Say No?, worries that political pressures for health spending will ultimately bankrupt the US economy. This idea of a spending ratchet is a commonplace of the health care finance literature. Less well-covered has been a creep toward performance-enhancing drugs. Though less of a threat to the public till, they raise fundamental questions about individuals' capacity for autonomous reactions to technological trends.
Consider a recent discussion in Edge, a fascinating online salon/magazine which asked 151 luminaries "What Will Change Everything?" Marcel Kinsbourne predicts a growing market for "neurocosmetics" which translate the benefits of cosmetic surgery to the social world:
[D]eep brain stimulation will be used to modify personality so as to optimize professional and social opportunity, within my lifetime. Ethicists will deplore this, and so they should. But it will happen nonetheless, and it will change how humans experience the world and how they relate to each other in as yet unimagined ways. . . . We read so much into a face — but what if it is not the person's "real" face? Does anyone care, or even remember the previous appearance? So it will be with neurocosmetics.
Consider an arms race in affability, a competition based not on concealing real feelings, but on feelings engineered to be real. Consider a society of homogenized good will, making regular visits to [a] provider who advertises superior electrode placement? Switching a personality on and then off, when it becomes boring? . . .
We take ourselves to be durable minds in stable bodies. But this reassuring self-concept will turn out to be yet another of our so human egocentric delusions. Do we, strictly speaking, own stable identities? When it sinks in that the continuity of our experience of the world and our self is at the whim of an electrical current, then our fantasies of permanence will have yielded to the reality of our fragile and ephemeral identities.
It's one thing to read these imaginings in the fiction of a Houellebecq, Franzen, or Foster Wallace; it's quite another to see them predicted by a Professor of Psychology at the New School for Social Research. I have also predicted an arms race in the use of personality-optimizing drugs, but I believe such an arms race would defeat, rather than reveal, humanity's true nature. My difference with Kinsbourne suggests a technophilic bias at the heart of Edge's inquiry: an implicit belief that certain technologies will inevitably change us, rather than being changed or stopped by us.
We need to understand that it's a conception of the self that is driving the acceptance of new technologies of self-alteration here, rather than vice versa. Consider eHarmony consultant Helen Fisher's acceptance of the arms race metaphor in the same issue of Edge:
As scientists learn more about the chemistry of trust, empathy, forgiveness, generosity, disgust, calm, love, belief, wanting and myriad other complex emotions, motivations and cognitions, even more of us will begin to use this new arsenal of weapons to manipulate ourselves and others. And as more people around the world use these hidden persuaders, one by one we may subtly change everything. [emphasis added]
In a recent editorial in Nature entitled Towards responsible use of cognitive-enhancing drugs by the healthy, distinguished contributors have endorsed a "presumption that mentally competent adults should be able to engage in cognitive enhancement using drugs." Against various Luddites who worry about the rat races such drug use could spark, the editorialists argue that cognitive enhancement is here to stay: "From assembly line workers to surgeons, many different kinds of employee may benefit from enhancement and want access to it, yet they may also need protection from the pressure to enhance." Instead of the regulation encouraged by Francis Fukuyama, they would have us rely on robust professional standards to guide "appropriate prescribing of cognitive enhancers."
But it's easy to see where this arms race can lead. Perhaps at some point we'll all end up like those apostles of reductionist philosophy Patricia and Paul Churchland, who, rather than acting out, expressing, or displaying emotions, appear to prefer to refer to their supposed chemical determinants:
One afternoon recently, Paul says, he was home making dinner when Pat burst in the door, having come straight from a frustrating faculty meeting. "She said, 'Paul, don't speak to me, my serotonin levels have hit bottom, my brain is awash in glucocorticoids, my blood vessels are full of adrenaline, and if it weren't for my endogenous opiates I'd have driven the car into a tree on the way home. My dopamine levels need lifting. Pour me a Chardonnay, and I'll be down in a minute'."
Nicholas Carr has noted that "institutionally supported programs of brain enhancement [may] impose on us, intentionally or not, a particular ideal of mental function." Fisher, Kinsbourne, and the Churchlands suggest the metaphysical foundations of self-mechanization. It's a vision of the self as "multiple input-multiple output transducer," which, as I said in this article, follows a long line of reducing "soul to self, self to mind, and mind to brain." This last step of understanding what the brain is as what it does is a functionalism that begs the question Bourne used to put to Dewey: what exactly is the point of this pragmatic deflation of our self-understanding?
In a recent series of posts at PopMatters, Rob Horning has explored the psychology of consumerism, a condition we are endlessly told by elites to consider the linchpin of global development, economic growth, and domestic order.
[Harry Frankfurt] calls attention to “second-order desires”, or the desires we have about our primary desires. These are what we want to want and, according to Frankfurt, make up the substance of our will . . . . [W]e often have multiple sets of preferences simultaneously, which foils the more simplistic models of neoclassical economics with regard to consumer demand. . . .
The persuasion industry is seeking always to confuse the communication between our first- and second-order desires; it’s seeking to short circuit the way we negotiate between the many things we can conceive of wanting to come up with a positive will to want certain particular things at certain moments. It seeks to make us more impulsive at the very least; at worst it wants to supplant our innate will with something prefabricated that will orient us toward consumer goods rather than desires that are able to be fulfilled outside the market.
The neurocosmetics forecast in Edge have the same place in the social world that marketing has in the worlds of goods and services. For example, the complex mixture of ennui, detachment, skepticism, and embers of warmth in office life limned in Joshua Ferris's And Then We Came to the End could be flattened into the glad-handing grin of an unalloyed will-to-succeed. Horning suggests that "consumerism makes the will and ability to concentrate seem a detriment to ourselves:"
Dilettantism is a perfectly rational response to the hyperaccessibility of stuff available to us in the market, all of which imposes on us time constraints where there was once material scarcity. These time constraints become more itchy the more we recognize how much we are missing out on (thanks to ever more invasive marketing efforts, often blended in to the substance of the material we are gathering for self-realization).
Similarly, neurocosmetics promises to relieve the mental effort of crafting a genuine response to events from the welter of conflicting emotions they generate, leaving only the feeling induced by drugs.
In a world of neurocosmetics, emotions lose their world-disclosive potential and moral force. Rather than guiding our choices, they are themselves one among many choices. The industrial possibilities are endless, and I'm sure some rigorous cost-benefit analyses will prove the new soma's indispensability to such varied crises as aging, unemployment, and gender imbalances.
I shudder at such a world, but I doubt economic analysis can provide any basis for rejecting it. Neurocosmetics and consumerism are but two facets of the individualist, subjectivist, economic functionalism that's become our default language for judging states of the world.
If I were asked to participate in Edge's salon, I think I'd flip the question and ask "what kind of common moral language do we need to stop random technological developments from changing everything?" Philosophers like Langdon Winner and Albert Borgmann have started answering that question as they consider technology and the character of contemporary life. Borgmann notes that "simulations of reality can lead to disastrous decisions when assumptions or data are faulty." Perhaps we should start thinking of neurocosmetics as a faulty source of emotional data about our responses to the world around us. As Ellen Gibson reported at Businessweek, autonomous decisions to compete can adversely affect everyone's identity:
Dr. Anjan Chatterjee, a neurologist at the University of Pennsylvania Hospital, raises [a] red flag. Creative insights often arise when the mind is allowed to wander, he says. If drugs that sharpen concentration become widespread in the workplace, they may nurture "a bunch of automatons that are very good at implementing things but have nothing to implement."
From autonomy to automata: a provocative possibility.
Thursday, February 5, 2009
Introducing Frank Pasquale
Next up, we will hear from Frank Pasquale, currently visiting at Yale Law School.
A serial blogger at Concurring Opinions and madisonian.net, Frank researches and writes in the areas of health law and intellectual property law.
One of Frank's interesting law and technology papers 'Technology, Competition, and Values' reviews the role of law in promoting or inhibiting the diffusion of new technologies, and how this process affects social values. Frank's earlier posts at this blog, which also explored the relationship between technologies and values, can be found here.
Tuesday, February 3, 2009
Privacy and State Investigations Using New Technologies
Part of my research focuses on trying to understand how privacy laws and interests are challenged by technological change in the context of post-9/11 government surveillance.
Countries such as Canada have constitutional protections against unreasonable state searches. In Canada and elsewhere, legal analysis has traditionally emphasized the individual rights aspects of privacy in the context of police investigations. This view has sometimes led to the notion that privacy is an interest that competes with security, and hence that privacy must be diluted to protect the public against criminals and terrorists.
In an era where governments are deploying powerful information, communication, and genetic technologies that greatly enhance their ability to collect, use and share personal information as part of their investigations, a broader consideration of the privacy interests at stake is required.
The problem is that, within a 'surveillance society' where the state increasingly watches us through closed-circuit television (or digital) cameras, cameras that can peer through our walls to take heat pictures (i.e., Forward-Looking Infra-Red technology), software programs that log every keystroke we make, and so on, there is a danger of unduly encroaching on values that keep our democracies vibrant and secure. Privacy is an enabler, for instance, of freedom of expression and the right to engage in political dissent.
From a substantive theoretical perspective, we are heading down a road where our governments deploy powerful new technologies to keep us safe ... but with the unintended effect of undermining our safety, at least in the long run. It's not so much that the machines are controlling us, but more that we are adopting technologies (ostensibly for sound policy reasons) that may end up backfiring. It's getting so bad I recently worried that even Santa might be tracking us.
One example: if an identifiable group believes it is being unfairly targeted by ongoing (and surreptitious) surveillance then members of this group may be less willing to help authorities with investigations, making us all less secure.
Our research team's comparative survey of nine countries (Canada, the United States, France, Hungary, Spain, Mexico, Japan, China and Brazil) suggests that individuals in these countries believe their governments have not struck the right balance in protecting their privacy, leading to views that their privacy interests are being undermined by a combination of legal reforms and the usage of powerful new surveillance technologies.
Some privacy researchers are relying on earlier concepts of the 'social' value of privacy (an idea developed by Priscilla Regan in her book Legislating Privacy). The social value of privacy can be understood as a broader societal interest in privacy beyond the interests of particular individuals.
As such, the preservation of the social value of privacy can be portrayed as consistent--and not competing--with security interests. In an article, I've argued the Canadian Supreme Court implicitly recognized the social value of privacy so that, for state searches to be constitutionally permissible, the government must establish that it has reasonable policies and practices in place to govern its personal information collection practices.
(On the other hand, 'privacy panic' can also lead to legislative over-reach, such as the zany bill introduced last week in the U.S. Congress to force all cell phone manufacturers to install 'beeps' when a picture is taken! As always, it's all about striking the right balance ... )