Wednesday, January 31, 2007

New Media at the Turn of the Century

Thank you, Frank, for the very kind introduction, and Gaia, for inviting me to participate in this symposium. In an earlier article, I discussed how the encounter of the Church, medieval scribes and Venetian printers with the Gutenberg press had provided interesting insights into our current response to the "digital dilemma" created by the Internet and new media technologies. In this week's entries, I will use a similar approach and offer my thoughts on what a general theory of law and technology would, or should, look like.

My first entry focuses on a new media technology at the turn of the century--the turn of the last century, that is: the motion picture. When the motion picture first emerged, it was the "new, new thing." Except for a few technology enthusiasts, the public rarely saw motion pictures and had limited interest in the new technology. Indeed, when Chief Justice Edward Douglass White and his colleagues were asked to view the oft-banned film The Birth of a Nation, the Chief Justice responded: "Moving picture! It's absurd, Sir. I never saw one in my life and I haven't the slightest curiosity to see one."

Within a decade, however, motion pictures had attracted considerable interest and attention. While movies were popular among low-income households, in particular immigrants and new urban migrants, they also became a major concern of social reformers, who considered them a "new kind of urban vice" and called for tougher regulation.

To protect public morality, many states and municipalities enacted censorship laws to regulate the operation and exhibition of motion pictures. In 1907, the Chicago City Council enacted the nation's first motion picture censorship law, which prohibited "immoral or obscene" pictures while requiring exhibitors of motion pictures to first obtain permits from the police department. The States of Pennsylvania, Ohio, Kansas and Maryland soon followed suit by establishing statewide censorship boards, while major cities, like Birmingham, Detroit, Kansas City, Los Angeles, Louisville, St. Louis, San Francisco, Trenton and Washington, introduced local legislation or censorship boards.

The first challenge to film censorship laws as an abridgement of freedom of the press came in the 1915 Supreme Court case of Mutual Film Corp. v. Industrial Commission. There, a motion picture distributor challenged the constitutionality of the Ohio censorship law, asserting that the statute violated the freedom of speech and press guarantees of the Ohio Constitution. (The distributor relied on the Ohio Constitution, because the Supreme Court, at that time, had yet to include freedom of speech and press among the fundamental rights and liberties protected by the Due Process Clause of the Fourteenth Amendment.)

Although the Mutual Film Court recognized that motion pictures might be used for worthy purposes, it underscored the technology's capacity for evil and potential to corrupt the public--children, in particular. Writing for a unanimous court, Justice McKenna distinguished motion pictures from other mediums of expression and found that the exhibition of motion pictures was "a business pure and simple, originated and conducted for profit." The Court therefore held that motion pictures were not part of the press and did not warrant protection under the Ohio Constitution.

The Mutual Film decision was initially well received by the legal community, but its desirability and rationale were soon attacked by commentators and in the academic literature. The debate became even more intense when the Supreme Court of Tennessee upheld a ban by the Memphis censorship board on a movie showing a desegregated school class on the ground that "the south [did] not permit negroes in white school nor recognize social equality between the races even in children."

Meanwhile, the technological medium had evolved, partly as a result of the emergence of "talkies" in the late 1920s and partly in response to the industry's self-regulation efforts. By the late 1930s, motion pictures had become a dominant communication medium in American culture. During the Great Depression and the Second World War, movies provided Americans not only with a shared visual experience, but also with a "common bond of language" that helped unify the country.

Moreover, newer media technologies, like radio, television and the sound truck, had emerged since the 1915 decision. As more technologies were developed, motion pictures were no longer the "new, new threat" the Mutual Film Court had perceived. The subject matter of motion pictures also gradually moved away from the early focus on sex and scandal toward the discussion of racial, social and political matters. As movie content became more serious, the medium gained greater respectability and a closer connection to the civil liberties concerns that usually justify First Amendment protection.

Against this background, the Supreme Court began to reconsider its earlier treatment of motion pictures. Shortly after the Second World War, the Court noted in dicta in an antitrust case that motion pictures, along with newspapers and radio, were part of the press as defined by the First Amendment. In another case a year later, three Supreme Court justices again aligned motion pictures with other mediums of communication.

In 1952, the Court finally overruled the unpopular Mutual Film decision in Joseph Burstyn, Inc. v. Wilson. There, a film distributor challenged a New York statute that had permitted Roberto Rossellini's The Miracle to be banned on the ground that the movie was "sacrilegious." This time, the distributor won. Unlike the Mutual Film Court, the Burstyn Court found that the exhibition of motion pictures was no longer "a business pure and simple." Rather, the medium fell squarely within the free speech and free press guarantees of the First and Fourteenth Amendments.

After 37 years, a Great Depression, and two World Wars, the Court finally extended free speech and free press protections to this once-new technology. So, what lessons can we learn from this historical account? Why did the Court treat the Internet differently from its earlier treatment of the motion picture (at least in Reno v. ACLU and other early Internet cases)? Could the comparison between the two contribute to our discussion of a general theory of law and technology? I have some ideas but don't know exactly where I will be going. Comments and feedback will certainly help me find my way forward.

Monday, January 29, 2007

Introducing Peter Yu

Thanks very much for those fascinating posts, Arthur. I look forward to offering some comments once the infamous "March window" of American law review publication passes.

This week, I'm honored to welcome Peter K. Yu (余家明) to Law and Technology Theory. Prof. Yu is the founding director of the nationally ranked Intellectual Property & Communications Law Program at Michigan State University College of Law. He holds appointments in the Asian Studies Center and the Department of Telecommunication, Information Studies and Media at Michigan State University. He is also a research fellow of the Center for Studies of Intellectual Property Rights at Zhongnan University of Economics and Law in Wuhan, China, and a member of the affiliated faculty of the Working Group on Property, Citizenship, and Social Entrepreneurism at Syracuse University College of Law.

Born and raised in Hong Kong, Professor Yu is a leading expert in international intellectual property and communications law. He also writes and lectures extensively on international trade, international and comparative law, and the transition of the legal systems in China and Hong Kong. An editor or coeditor of three books, Professor Yu has spoken at events organized by the ITU, UNCTAD, WIPO and the U.S. and Hong Kong governments and at leading research institutions from around the world. His lectures and presentations have spanned more than ten countries on four continents, and he is a frequent commentator in the national and international media. His publications are available at his website.

One last note: congratulations to Prof. Yu on the recent publication of his magnum opus, Intellectual Property and Information Wealth: Issues and Practices in the Digital Age. Just glance at the table of contents of this four-volume set and you'll be impressed by the comprehensiveness and importance of this work.

Thursday, January 25, 2007

A Synthetic Theory of Law and Technology

I’d like to start my last post by thanking Gaia, Frank and Jim for putting this blog together. I also now see that my link to ‘digital biosphere’ in yesterday's post was broken; it should be okay now.

Today’s post will discuss a forthcoming work co-authored with Jason Pridmore, ‘A Synthetic Theory of Law and Technology’, Minnesota Journal of Law, Science & Technology (forthcoming 2007), in which we discuss how a synthetic theory of law and technology could inform law and tech analysis. I don’t have a copy posted anywhere, but I’d be happy to email you a copy of the draft if you’re interested.

The theory draws from existing literature, mainly developed by sociologists. I suppose it might be possible to develop a theory from scratch, examining issues such as the definition of technology, but it may make more sense to draw from a mature body of literature. Theories from other disciplines, such as economists' theories of economic diffusion, might also serve to ground a law and tech theory.

First off, why a synthetic theory? Why not say polyester or perhaps a nice cotton blend? The synthetic theory is a synthesis of two broad theories of technology: instrumental theories and substantive theories. Instrumental theories (probably more social perspectives than outright theories) tend to treat technology as a neutral tool without examining its broader social and cultural impact. In contrast, substantive theories emphasize the ways that technological systems can exert ‘control’ over individuals, often without their knowing that this process is taking place.

From our perspective, each theory, standing alone, has disadvantages that reduce its utility for legal analysis. Instrumental theories do not take full account of the contextual complexities that could inform legal analysis in search of optimal policy solutions in an environment of tech change. Substantive theories, on the other hand, over-emphasize the social impact of technological structures at the expense of a fuller consideration of human agency and of the particular facts and circumstances of each case. We tried to draw out and integrate the most helpful elements of both theories to create the synthetic theory.

It may be helpful to offer an example of the ways that technologies can have a substantive impact (whether political, social, cultural or otherwise) on society so that, according to the substantive theories, they should not be viewed as merely neutral tools. In Do Artifacts Have Politics?, Langdon Winner takes it as a given that technologies are interwoven into modern politics and in fact embody specific forms of power and authority. To sustain this point, Winner uses the examples of low highway overpasses and mechanical iron moulding machines. The overpass bridges were built low to deliberately prevent low-income transportation (e.g., buses) from travelling out of New York towards the homes of the wealthy on Long Island. The iron moulding machines did not work as well, or as cheaply, as skilled iron workers, but they were implemented to prevent the iron workers from unionizing, as the factory owners now had an alternative if needed. To Winner, it is obvious that technologies stack the deck in favour of certain social or political interests and, as such, have a substantive impact on society that exists outside of their intended use.

For a more modern example, consider cell phones: they were developed to enable wireless communications, but an unintended consequence is that they reveal the geographic location of the user, potentially for state investigatory purposes at some later date. Many of us now carry around a state tracking device without a second thought. Substantive theorists—including critical theorists and thinkers like Max Weber and Jacques Ellul—worry that technology is embedded within social structures such as capitalism (or Ellul’s technique) that render the actions of human agents insignificant. We no longer seem to mind carrying around tracking devices, which may help to change or ‘determine’ individual and social expectations about privacy in the context of state searches.

We propose a synthetic theory that tries to balance the restrictive and beneficial potentials of social structure against the limitations and potentials of human agency. The synthetic theory could then be directed at the analysis of the three broad themes or general principles at the intersection of law and technology discussed in the last post: (a) the analysis needs to account for the complex and interdependent relationship between law and tech; (b) the analysis needs to explore how the regulation of tech could indirectly protect legal interests; and (c) the analysis should explore whether tech change is subverting traditional legal interests and, if so, deploy creative analysis that is less deferential to traditional doctrine in order to preserve those interests.

Consider briefly the deployment of new surveillance technologies and the enhanced sharing of personal information among governments in the post-9/11 environment, coupled with legal changes in many countries that reduce traditional protections against unreasonable state searches. A judge presented with a case involving state searches of terrorist suspects could, by drawing from substantive perspectives of technology, gain a more accurate assessment of the risks associated with reducing legal protections in an era of enhanced surveillance technologies. Under the substantive view, legal analysis should recognize the ‘public’ or ‘social’ aspect of privacy, which is society’s interest in preserving privacy apart from a particular individual’s interest.

Priscilla Regan, for instance, argues that privacy serves purposes beyond those it performs for a particular individual: she notes that one aspect of the social value of privacy is that it sets boundaries that the state’s exercise of power should not transgress, in order to preserve, for example, freedom of speech and association within a democratic political system (see Priscilla Regan, Legislating Privacy: Technology, Social Values, and Public Policy (1995)). Under this view, even if privacy becomes less important to certain individuals, it continues to serve other critical interests in a free and democratic state (e.g., the need to protect political dissent). As such, the preservation of the social value of privacy can be portrayed as consistent with the promotion of long-term security interests.

Consistent with this view, research by sociologists, political scientists and others discusses how advances in surveillance technology heighten the risk of unanticipated adverse social consequences. These outcomes include the repression of political dissent, as surveillance technologies are used to target identifiable groups such as Muslims despite no evidence of individual wrongdoing; this sort of profiling also tends to lead to social alienation of the targeted group, who increasingly take on an ‘us’ versus ‘them’ mentality. Our research team, the Queen’s Surveillance Project, discusses some of these issues in the context of a recent public inquiry involving a Canadian citizen who was sent by U.S. authorities to Syria, where he was tortured for over a year, in part as a result of inaccurate information provided by Canadian police agencies. We are trying to reform Canadian law to exert more public oversight over Canadian agencies and their sharing of information about Canadians with foreign agencies.

Wednesday, January 24, 2007

Promoting Conversations among Different Tech Law Analysts

Can the regulation of cars tell us something about proposed Internet laws? Can the approval process for new biotech drugs help us to understand copyright law? Does legal analysis share common attributes when faced with situations involving technological change? Is it worth studying technology change and its broad interplay with law? If a tree falls in the forest does it make a sound? The hope is that working toward a law and technology theory could help us to answer at least some of these questions (although the last one is tricky).

As mentioned in my last post, it is possible to identify the three following themes in scholarly works that deal with law and technology matters: (1) an understanding of the complex and non-linear relationship between law and technology; (2) an exploration of the ways that laws could shape technological developments to protect legal interests; and (3) an awareness of different ways that law could respond to technology change that threatens legal interests. I also said that a critical examination of these matters might bear fruit by encouraging a fuller exploration of the legal interests at stake and thereby promoting more sound policy outcomes. Today’s post will try to show how this categorization process could help scholars in different technology law areas enter into a conversation with each other and provoke a deeper understanding of their own fields of research.

I’d like to take up these themes in the context of technology change and tax policy (a strange obsession of mine for which I am considering seeking counseling). More specifically, I’ll focus the discussion on challenges to traditional tax law jurisdiction by enhanced cross-border consumer sales promoted over the Internet.

Here is the basic issue: The last few decades have witnessed increased amounts of cross-border consumer sales of goods and services, in part as a result of tech developments like enhanced mail-order sales. The policy issue has become more pressing since the mid-1990s as a result of an increase in cross-border consumer sales over the Internet (e.g., book sales via Amazon.com to foreign consumers). The problem is that many subnational governments (i.e., state, provincial or local governments) as well as federal governments have a tough time enforcing their tax systems outside of their borders. I’ll address the problem by discussing the three themes noted previously.

Complex relationship: Technology scholars often assert that there is generally not a linear relationship between legal and technological developments—Marshall McLuhan, for instance, proposed four ‘laws’ to help understand how media and technologies interact with culture (He asked: What does the technology extend? What does it make obsolete? What is retrieved? What does the technology reverse into if it is over-extended?). As an explanatory device, I’ve analogized the Internet to a digital biosphere to help show how the law, the network, real-world values and cyberspace values all interact in a dynamic and interdependent, almost organic, relationship. The Internet promotes cross-border sales, but a likely unintended use of the medium is that it also permits automated tax collection that could help tax authorities collect taxes on international transactions. Such automated tracking and tax collection, which could identify the Internet consumer’s geographic location and purchasing habits, runs up against privacy concerns. Could the automated collection system be used by the state in the war on terror and inhibit freedom of expression, as folks won’t purchase certain books if records are kept? The lesson here is to tread carefully where laws and policies surrounding technology could have a substantive impact on society apart from the technology’s intended use.

Technology is Law: Under the technology is law approach, governments should consider regulating the development of technology to promote other goals, such as their ability to collect taxes. Consistent with this view (aka 'code is law'), some governments are seeking to extend their tax jurisdiction over remote sales with help from an automated online tax collection system. For example, U.S. state tax authorities are worried that they are losing roughly $15 billion a year because they can’t collect sales (and use) taxes on sales to consumers living outside of their state borders. As a result, they are sponsoring something called the Streamlined Sales Tax Project, whereby they have agreed to adopt a common tax base for sales and use taxes for state and, gulp, over 7,000 local governments (this is actually a radical reconception of U.S. fiscal federalism, as many state governments have agreed to give up significant fiscal sovereignty; they used to be able to design their state sales tax systems as they saw fit). To promote tax compliance, the state tax authorities are immunizing businesses from liability for uncollected sales taxes—as long as these businesses ‘voluntarily’ sign on to the new regime. Privacy protections are also being built into the design of the online collection system, as sketched below. Some of the researchers who write in this area, like Walter Hellerstein and Charles McLure Jr., suggest the states have reacted properly by designing laws to preserve existing values (i.e., their ability to collect taxes on consumer sales). In this case, a radical legal change was thought to be needed to confront the challenges posed by technology change—state governments were prepared to sacrifice one set of values (fiscal sovereignty) to salvage another (the desire to inhibit tax revenue losses) via a technology is law approach.
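
To make the design concrete, here is a minimal sketch, in Python, of how a destination-based automated collection system of this kind might work. The rate table, jurisdiction keys and hashing step are my own illustrative assumptions, not details of the actual Streamlined Sales Tax system; the point is only that a common tax base reduces collection to a lookup, and that pseudonymizing the consumer is one way privacy protections can be ‘built in’.

    import hashlib

    # Illustrative rates only -- the real project harmonizes a common tax
    # base across states and over 7,000 local jurisdictions.
    TAX_RATES = {
        ("MI", "48823"): 0.0600,  # hypothetical combined state + local rate
        ("NY", "10001"): 0.0888,
    }

    def collect_tax(amount, state, zip_code, consumer_id):
        """Compute destination-based sales tax and log a pseudonymous record."""
        rate = TAX_RATES.get((state, zip_code), 0.0)
        tax = round(amount * rate, 2)
        # Store a one-way hash rather than the consumer's identity, so the
        # audit trail cannot double as a record of purchasing habits.
        record = {
            "consumer": hashlib.sha256(consumer_id.encode()).hexdigest(),
            "jurisdiction": (state, zip_code),
            "tax": tax,
        }
        return tax, record

    tax, record = collect_tax(100.00, "MI", "48823", "consumer@example.com")
    print(tax)  # 6.0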

Law is Technology: Under the law is technology approach, an attempt is made to see what legal response, if any, is necessary to address tech change that subverts legal interests. In contrast to the U.S. state tax authorities, governments have done a poor job of addressing the international income tax jurisdiction challenges promoted by the Internet. After years of deliberation, governments within the Organisation for Economic Co-operation and Development (OECD) have agreed that they won’t tax profits from foreign Internet businesses unless these businesses sell goods or services through a computer server (i.e., a computer that has been networked to the Internet) located in the consumer’s country (more technically, the OECD model tax treaty Commentaries were amended to reflect this new rule). This was, to put it charitably, an unhelpful policy change, as it will promote aggressive tax planning and, more importantly, will not ensure a fair sharing of tax revenues between countries—there are now tens of millions of servers around the world that could serve as a nexus for international income tax purposes. In most other tax areas the OECD has done a good job, but this experience shows that, when a legal response is needed, regulators must take care not to over-reach or to develop rules that will not effectively protect existing values.

So really all I’ve done above is group the interests into three categories that, I have claimed, are broadly shared with other law and technology analysis: the categories are fuzzy and fairly obvious (self-evident?), so their usage could permit a conversation among lawyers and researchers in different tech law fields; a tax lawyer could learn from a patent lawyer and vice versa (Gaia’s posts discussed some of the problems associated with this approach). I’ve also said that law and technology analysts should undertake a more critical examination of the legal issues within the three categories. In my last post this week, I’ll discuss how a ‘synthetic theory’ that combines instrumental and substantive theories of technology could inform this analysis. Not exactly a cliffhanger ending, I know . . . .

Monday, January 22, 2007

On The Potential for a Law and Technology Theory

Thanks for the kind introduction, Gaia. I thought I’d start off with a boosterish post on the potential for a law and technology theory.

What could such a theory accomplish? Could it ensure world peace? Solve ongoing scientific attempts to generate a unified field theory? Provide every hungry child with a bowl of steaming porridge? Well, maybe or maybe not …

Here’s the real potential: law and technology theory could help us to organize and make sense of the various areas of law with law and technology themes (copyright, biotech, cyberlaw, new media, etc.) to promote more fully informed legal analysis. To assist with this organization, a law and technology theory could help us to generate common themes or general principles that run through these seemingly disparate areas of tech law. Once these general principles have been discerned, a law and technology theory could reflect back on the different areas of tech law: the theory would act as one more tool within a scholar’s methodological toolbox to promote legal analysis that strives to determine optimal social policy. At the end of the day, a law and technology theory could counter Judge Easterbrook’s view that studying technology in compartmentalized areas like cyberlaw is like studying the ‘law of the horse’ (see Frank H. Easterbrook, Cyberspace and the Law of the Horse, 11 U. Chi. Legal F. 207 (1996)). The broader perspective could help to shed light on the whole law (or, in Easterbrook’s words, “illuminate the entire law”).

So the trick, then, will be to generate general principles applicable to different areas of technology law. Lyria Bennett Moses, Greg Mandel and others have discussed these matters in different posts, and I’ll briefly touch on one possible approach. For a symposium issue on 'What is legal knowledge?', I wrote an article, Towards a Law and Technology Theory, 30 Man. L.J. 383 (2004), in which I tried to set out common themes or general principles at the intersection of law and technology analysis:
First, the relationship between law and technology is complex and non-linear.
Second, the regulation of technology itself can indirectly promote interests and values (also known as ‘technology is law’, which is really just Lessig's ‘code is law’ writ large).
Third, law seeks to address technological developments that destabilize traditional interests and values protected by law (or ‘law is technology’).

In the article, I made the claim that the better analytical approach at the intersection of law and technology is to critically examine these three principles (I called this approach the ‘liberal approach’, a confusing term as it gives rise to assumptions about the political philosophy of liberalism, but I digress …). The other ‘conservative’ form of analysis—less sensitive to the ways that technology and law interact—is less helpful. I then went on to claim that the liberal and conservative approaches both get integrated into the law in different ways. The liberal approach destabilizes the law as novel or creative ways of preserving traditional interests in light of tech change become integrated in other areas of law that have not witnessed similar tech change (for example, a more flexible interpretation of offer and acceptance for shrinkwrap contracting purposes will eventually become integrated into other areas of contract law). As such, the liberal approach undermines the common law principle of stare decisis because old decisions may be less helpful as precedents for present or future cases, making it more difficult for lawyers to predict the outcome of a case when they advise clients.

But the conservative approach leads to even more instability due to the need for a later and greater ‘correction’ to attempt to recapture or preserve traditional values. By way of example, consider wiretap searches and U.S. constitutional protections against unreasonable state searches. In its first review of this issue (Olmstead in 1928), the U.S. Supreme Court ruled that wiretap searches did not involve a physical search of the home and thus did not implicate the Fourth Amendment's protection against unreasonable state searches. Justice Brandeis, in his well-known dissent, deployed more forward-looking and flexible analysis to show how a wiretap search invades privacy and potentially enables abusive state actions that, at least in the long run, would make the public less secure. For nearly forty years, there was significant instability in this area of law until the Supreme Court reversed itself and adopted Brandeis’ views (Katz in 1967). In other words, the initial ‘conservative’ form of analysis led to significant instability until the correction took place in Katz. If this vision of the transformation of the law is accurate, it serves as evidence that the liberal analytical approach is the preferred one.

The hope is that this transformative process will help provide insight into the ways the law reacts to technological change. Because tech change can differ from other forms of social change (as Lyria discussed in a previous post), I was trying to set out a few thoughts on a law and technology theory rather than a general theory about how the law works.

If you are still with me at this point, here's my plan for the rest of the week. My next post will address the three themes identified above with respect to tax policy developments. Then I’ll follow with a post on our more recent work in this area where Jason Pridmore and I discuss how a ‘Synthetic Theory of Law and Technology’ builds on the view that we need to incorporate more critical analysis of the ways that technology change can undermine legal interests.

Introducing Arthur Cockfield

Thank you to Greg Mandel for a week of very interesting posts! It is my great pleasure to introduce Arthur Cockfield. Art is Associate Dean and Associate Professor at Queen’s University Faculty of Law in Canada. He received his J.S.D. and J.S.M. from Stanford Law School, his LL.M. from Queen’s University and his B.A. from the University of Western Ontario.

Art writes in the areas of law and technology, privacy and tax. Among his recent and forthcoming publications are a book titled Technology, Privacy and Justice (co-edited with Lisa Austin) (forthcoming Montreal: Éditions Thémis 2007); Protecting the Social Value of Privacy in the Context of State Investigations Using New Technologies, University of British Columbia Law Review (forthcoming 2007); and Towards a Law and Technology Theory, 30 Manitoba Law Journal 383 (2004). Art has also authored a novel titled The End.

Seeing Art’s article Towards a Law and Technology Theory on SSRN about a year and a half ago made me realize that there are several of us in different countries who are writing and thinking about these issues, and that it would be helpful to open a dialogue among this emerging group of scholars. Art was one of the participants in the Law & Society panel from which this symposium originated. This week, he is going to discuss his paper A Synthetic Theory of Law and Technology, which he is co-authoring with Jason Pridmore. I am sure this will prove to be a very interesting week.

Friday, January 19, 2007

Guideline III: The Types of New Technology Disputes are Unforeseeable

The final guideline that I offer here for a general theory of law and technology is that decision-makers must remain cognizant of the limits of their knowledge about new technology and the unforeseeability of what new issues will arise in the future. Particularly in the initial stages of technological development, it is inevitable that legal disputes concerning a new technology will be handled under preexisting legal schemes. In early stages, there often will not be enough information and knowledge about nascent technologies to develop or modify appropriate legal rules, or there may not have been enough time to establish new laws or regulations for managing the technology. There also often is an inclination to handle new technology disputes under existing rules; this is usually the easiest response both administratively and psychologically. Not surprisingly, however, preexisting legal structures may prove a poor match for new technology.

The regulation of biotechnology serves as one example (among many). As the biotechnology industry developed in the early 1980s, the federal government determined that bioengineered products generally would be regulated under the already-existing statutory and regulatory structure. The basis for this decision was a determination that the process of biotechnology was not inherently risky, and therefore that only the products of biotechnology, not the process itself, required oversight. This decision has proven to be at least questionable. As a result of this decision, biotechnology products are regulated under a dozen statutes and by five different agencies and services. Experience has revealed gaps in biotechnology regulation; inefficient overlaps in regulation; inconsistencies among agencies in their regulation of similarly situated biotechnology products; and instances of agencies acting outside of their areas of expertise. I will not go into the specific problems in this post; they are discussed comprehensively in an earlier article.

The admonition to be aware of what you do not know and to recognize the limits of foresight is clearly a difficult one to follow. This guideline highlights the need for legal regimes governing new technologies to be flexible and reveals that it should be anticipated that preexisting legal regimes may run into problems when being used to govern technology that did not exist when the regimes were created. A leading current candidate for application of these understandings is the management of nanotechnology.

I will conclude my posts by responding to a potential critique of these guidelines generally: that the guidelines describe a general legal theory, one not limited to law and technology. The suggestion to consider the legal basis for existing doctrine before extending it to new applications, for instance, is appropriate for all manner of legal decisions. There are two broad reasons why the theory offered here is one particular to law and technology. First, certain of the guidelines are only applicable to law and technology issues—for example, that legal decision-makers should not let their amazement with new technology overrun their legal analysis, or that legal regimes developed prior to the advent of a technology often reveal gaps and other problems when applied to future technology issues. Second, for the guidelines that do have significant general application, the interaction of technological development and the legal system renders the guidelines particularly apposite for resolving new technological disputes. Determining the basis for legal constructs before extending them does apply in many situations, but the nature of technological advance means that this consideration is a ubiquitous concern in handling new legal disputes caused by technological advance.

Thursday, January 18, 2007

Guideline II: Do Not be Blinded by the Technology

A second guideline for law and technology is that decision-makers must look through the technology involved in a dispute to focus on the legal issues in question. Sometimes decision-makers have a tendency to be blinded by spectacular technological achievement. I’ll again offer examples from historic and modern technological advances.

At the beginning of the 20th Century, courts for the first time confronted the admission of fingerprint evidence to prove identity. In several murder cases, courts admitted fingerprint identification testimony—evidence that was often critical to conviction—without any concrete evidence of the accuracy or reliability of fingerprint identification. Rather, courts simply relied on the testimony of law enforcement officials who worked with fingerprints. These officials, however, did not testify to the reliability of the fingerprint identification method, but rather to there being a resemblance between a defendant's prints and the prints found at a crime scene. Reading the early opinions, one is left with the impression that courts were simply very impressed with the concept of fingerprint identification. Fingerprinting was perceived to be an exciting new scientific ability and crime-fighting tool. The opinions are rife with substantial description of the fingerprint identification method and the experts’ qualifications, but they lack analysis of the reliability of fingerprint identification or recognition that the testifying experts had a significant self-interest in having their new line of work justified by judicial approval.

At the end of the 20th Century, courts confronted the admission of DNA evidence to prove identity. Despite a century of scientific advance, courts were prone to strikingly similar errors. Oregon v. Lyons, for instance, concerned the admissibility of a new method of DNA identification, the “PCR replicant method,” a process for determining the probability of a match between a defendant’s DNA and DNA from a crime scene. As in the earlier fingerprint cases, the Lyons court admitted the DNA evidence relying on the expert’s own testimony that the method was reliable and that there were no errors in his method or analysis. Also similarly, the DNA identification testimony was admitted without evidence concerning the reliability of the method under crime scene conditions and without analysis of the expert's self-interest in the admission of the evidence (an even greater conflict here, as it was a private company that conducted the test). Like the fingerprint cases, the court appears to have been amazed by the technology—the opinion includes not only a lengthy description of the PCR replicant method, but also an extended discussion of DNA, all irrelevant to the issue of reliability.

Lest the above discussion be dismissed as nit-picking critique, it is worth noting that both fingerprint and DNA identification evidence came under later scrutiny concerning reliability. A number of significant problems were identified concerning methods of DNA identification, and courts in some instances held DNA evidence inadmissible. Eventually, new procedures were instituted and standardized, and sufficient data was gathered such that courts now generally routinely admit DNA evidence. Intriguingly, the challenges to DNA identification methods led to challenges to fingerprint identification evidence. Despite its long use and mythical status in crime-solving lore, at the end of the 20th Century fingerprint identification methods still lacked established criteria for what constitutes a fingerprint match, data on how likely it is for different individuals’ prints to match, or data on how likely it is for an expert to err in identification. In 2002, a district court held fingerprint identification evidence inadmissible as unreliable. Following an uproar and a hearing at which multiple FBI agents testified, the court reversed its decision.

In sum, decision-makers must not be blinded by the wonder or promise of technology when judging the new legal issues created by impressive technological advance. It is a lesson that is easy to state, but more difficult to apply, particularly when a decision-maker is confronted with a new technology for the first time and a cadre of experts testifies to its spectacular abilities.

Wednesday, January 17, 2007

Guideline I: Examine the Basis for Legal Constructs

The first guideline for a general theory of law and technology I propose is that one must examine the basis for preexisting legal categories before extending them to new technology issues. Examples from the invention of the telegraph 150 years ago and the development of the Internet today help to elucidate this point.

The advent of the telegraph led to disputes over telegraph company liability for miscommunicated telegraph messages. Two different courts confronted this same issue in Parks v. Alta California Telegraph Co. and Breese v. U.S. Telegraph Co. Both courts concluded that the outcome hinged on whether a telegraph company was a common carrier. Common carriers, such as companies that transported goods, were automatically insurers of the delivery of the goods. The Parks court concluded that telegraph companies were common carriers, and therefore liable for the loss caused by miscommunicated messages; after all, telegraph companies delivered messages just like companies that delivered physical goods also delivered messages (letters). The Breese court concluded that telegraph companies were not common carriers, reasoning that the law of contract should govern, and therefore that telegraph companies were liable for no more than the cost of the telegraph in the case of a miscommunicated message.

The problem with each court’s analysis lies in comparing the function of the new technology to the function of the prior technology as a basis for deciding whether to handle a new legal dispute under preexisting legal rules and categories. A decision-maker, rather, should consider the rationale for the existing legal categories in the first instance, and then determine whether that rationale applies to the new technology. In the case of the telegraph, for example, the rationale for common carrier liability may have been to institute a least-cost avoider regime and reduce transaction costs (among other reasons). This rationale may not apply to telegraphs because they offered a new, easy, cheap method of self-insurance—having the message returned to the sender to check its accuracy.

The same problems can be seen in issues concerning how to resolve disputes brought about by modern advances in communication. Students of internet law are familiar with cases in which courts prohibited the sending of unsolicited email (spam) pursuant to the ancient common law doctrine of trespass to chattels. Courts got around the requirements of physical contact with the chattel, dispossession, and impairment by considering the electronic signals to be physical, bandwidth to have been dispossessed, and the computer to have been impaired. While one can understand a desire to limit spam, these holdings present the same problem discussed above. In extending a doctrine developed for dispossession of a physical chattel, courts failed to realize the implications of their decisions. The holdings, for instance, would render all unsolicited email, physical mail (junk mail), telephone calls, and even advertisements on broadcast television trespasses to chattels.

Preexisting legal categories may be applicable in some cases, but the only way to determine this is to examine the basis for the categories in the first instance, and whether that basis is satisfied by extension of the doctrine. Legal categories (such as common carrier) are only that—legal constructs. Such constructs may need to be revised in the face of technological change.

Tuesday, January 16, 2007

History Lessons for a General Theory of Law and Technology

Thank you for the introduction, Frank, and thank you, Gaia and Frank, for organizing this discussion. I am excited to take part in it.

I want to elaborate on a theme that has been touched on in several posts and comments: whether certain legal issues that arise as a result of technological change are recurring. Stated another way, can we frame a general theory of law and technology by studying how prior law and technology issues have been handled, and by developing a set of guidelines for how the legal system should respond to future law and technology issues as they arise?

I believe that examining historic responses to new legal issues brought about by technological advance reveals that we can develop such common guidelines. Considering historic responses will not provide a complete road map for responding to each new law and technology issue—such a goal is unachievable considering the wide variety of technological change and wide variety of legal disputes—but the history lessons can offer a number of useful guidelines for how to confront novel law and technology issues. In following posts I will discuss three lessons: (1) that preexisting legal categories may not apply to new technology issues, (2) that decision-makers should not be blinded by the wonders of a new technology in deciding how to handle disputes concerning the technology, and (3) that the types of new disputes created by technological advance are unforeseeable.

These three guidelines are only intended to be examples, not a comprehensive list. I welcome any other examples. Critically, I contend that these guidelines are applicable across a wide variety of disparate technologies, even technologies that we cannot conceive of presently. In this manner, the guidelines represent one form of a general theory of law and technology.

Welcome Gregory Mandel

It is my great pleasure to introduce Professor Gregory Mandel to Law & Technology Theory. I found Mandel's Technology Wars: The Failure of Democratic Discourse to be one of those rare monographs indispensable to understanding current technology policy. Mandel's empirical scholarship also made a stir in the IP field last year when it helped inspire a leading IP academic/practitioner to reverse course on one of the most important patent disputes to reach the Supreme Court in decades.

Prof. Mandel is Associate Dean for Research and Scholarship, and Professor of Law, at Albany Law School. He is currently on an American Bar Association task force briefing the Environmental Protection Agency on emerging nanotechnology issues, on the Advisory Board of the Science and Technology Law Center, and on the Faculty of the Alden March Bioethics Institute.

Prof. Mandel specializes in the interface among technology, science and the law. He is the author of numerous publications, including articles on patent law, nanotechnology law, biotechnology law, and on how society should handle new technologies and technological risk. Prof. Mandel has presented his work internationally at over 20 law schools and other institutions, including for the United Nations. He has consulted with a variety of senators, representatives, administrative agencies, and private entities concerning technology legislation, regulation, and social and economic effects.

Many Thanks to Andrea Matwyshyn

Thanks very much to Andrea for a week of fascinating posts. I'm afraid I was in the midst of writing a technology self-study for my law school, so didn't have much time to comment at the time, but hope to later. One of the nice things about this format is that the opportunity for comment that disappears at the end of "real-space" conference panels is always available in cyberspace.

Friday, January 12, 2007

Case Study in Legal Linearity: The Children's Online Privacy Protection Act (COPPA)

As discussed previously, developmental psychology has moved toward a nonlinear paradigm driven by studying individuals in social context. The Children's Online Privacy Protection Act framework, however, presents a static framework that does not take into account the nonlinear nature of development.

COPPA requires that websites targeting children under age 13 provide notice of privacy practices and obtain verifiable parental consent prior to collecting data from a child. The statute also empowers the Federal Trade Commission to promulgate additional regulations requiring the operator of a website subject to COPPA to establish and maintain reasonable procedures “to protect the confidentiality, security, and integrity of personal information collected from children.” Specifically, COPPA stipulates that prior to the collection of data from a child under 13, a website “operator” must obtain “verifiable parental consent”. The preferred medium for this verifiable parental consent is receipt of a fax from the parent; however, an email exception was originally crafted as an interim measure for a limited amount of time. This email exception evolved into a “sliding scale approach”, which is still applied by the FTC in COPPA inquiries. Depending on the character of the data collection and the intended use, the FTC’s analysis varies, roughly as sketched below.
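
As a rough illustration of that sliding scale, here is a minimal Python sketch of the kind of decision logic involved. The function name, the single yes/no input and the listed methods are my own simplifications for illustration, not the FTC's actual rule text.

    def required_consent_method(discloses_to_third_parties: bool) -> str:
        """Sketch of the FTC 'sliding scale': the more sensitive the
        intended use of a child's data, the more reliable the required
        consent mechanism. Simplified assumption, not the rule text."""
        if discloses_to_third_parties:
            # External disclosure: more reliable methods, such as a signed
            # form returned by mail or fax, a credit card transaction, or
            # a toll-free call staffed by trained personnel.
            return "signed form / fax / credit card / toll-free call"
        # Internal use only: the lighter 'email plus' route -- email
        # consent from the parent plus a follow-up confirmation step.
        return "email plus follow-up confirmation"

    print(required_consent_method(True))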

During its first six years in effect, COPPA has received mixed reviews at best. The deterrent effect of prosecutions appears to have been limited. As a practical matter, a large number of websites governed by COPPA are simply noncompliant, willingly risking prosecution rather than investing effort in attempting to comply. As several studies demonstrate, compliance levels are generally under 60%, and even websites that attempt facial compliance frequently have age verification processes that are easily circumvented. From the perspective of the child user, COPPA has been viewed as protecting only the data of children who wish to have their data protected. For children who simply wish content access, in many instances immediate workarounds are readily available. Often the child merely needs to log in again and provide a false birthdate to gain access to the material to which s/he was denied access, as the sketch below illustrates.
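
The workaround is easy to see in code. A typical self-reported age gate looks something like the following sketch (my own reconstruction for illustration, not any particular site's implementation); nothing stops a child from reloading the form and typing in a different birthdate.

    from datetime import date

    def age_gate(claimed_birthdate: date, today: date) -> bool:
        """Return True if the self-reported birthdate makes the user at
        least 13. The gate trusts whatever birthdate is typed in -- which
        is exactly why it is trivially circumvented."""
        age = today.year - claimed_birthdate.year - (
            (today.month, today.day)
            < (claimed_birthdate.month, claimed_birthdate.day)
        )
        return age >= 13

    today = date(2007, 1, 12)
    # First attempt: a truthful birthdate, and access is denied ...
    print(age_gate(date(1996, 6, 1), today))  # False -- user is 10
    # ... second attempt: the same child enters a false birthdate.
    print(age_gate(date(1990, 6, 1), today))  # True -- access granted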

COPPA makes linear developmental assumptions. First, COPPA is predicated on the idea that an adult parent’s development and proficiency with technology surpass those of her child, an assumption research demonstrates is unsustainable. Technology learning and development do not always cleanly map onto chronological age. Parents frequently feel their ability to monitor their children’s activities online is limited.

Second, the age of capacity to consent to data gathering stipulated in COPPA, age 13, appears to have been selected arbitrarily. During early adolescence, large divergences in development are visible, perhaps even more so than in later life. Particularly since the issue at hand relates to data security contracting, a more logical age of consent might mirror contractual capacity generally. The usual age of contractual capacity is 18.

Third, COPPA takes into account only one computing context, the home, and presumes that a parent is available during the child’s internet time. However, children frequently access the internet and give away information about themselves using computers at school, at friends’ houses and in the library. Therefore, a regulatory paradigm presuming parental presence does not reflect the reality of children’s situated learning in multiple contexts.

Fourth, both technology use and development are emergent phenomena. COPPA did not take into account the norms of corporate conduct that would arise to circumvent its restrictions. Because COPPA grants no private rights of action to parents, enforcement of COPPA is the sole province of the FTC, which is an understaffed and overburdened agency. As demonstrated by widespread noncompliance, companies frequently run a risk-benefit calculus regarding the likelihood of prosecution and decide to risk regulatory action rather than invest in compliance structure.

Finally, COPPA presents a technology-focused regulatory design; the focus is on each website that chooses to collect children’s data. However, as technology evolves, a website-centric approach is destined for obsolescence. A more promising regulatory design would be constructed in a human-centric manner, focusing on the child and the child’s information. Such an approach would not only demonstrate greater versatility and regulatory longevity, but would also produce systemic efficiencies. In lieu of each website instituting a separate age verification process for each child, and each parent approving each website, a child-focused approach could be constructed to allow for a single parental approval and a single registration (see the sketch below). In this way, economies of scale could be created through a child data protection structure focused on the child rather than on the website operator. Such an approach would also acknowledge that parents may be less knowledgeable than their children, and may need more protection, making them suboptimally suited to the role of gatekeeper.
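
A minimal sketch of what such a child-centric design could look like follows; the registry, identifiers and method names are hypothetical, offered only to illustrate the single-approval, single-registration idea.

    class ConsentRegistry:
        """Hypothetical child-centric consent store: one parental
        approval, consulted by any number of websites."""

        def __init__(self):
            self._approved_by = {}  # child_id -> parent_id

        def register_parental_approval(self, child_id: str, parent_id: str):
            # The parent approves once, for the child, rather than once
            # per website.
            self._approved_by[child_id] = parent_id

        def may_collect_data(self, child_id: str) -> bool:
            # Each website makes one lookup instead of building its own
            # age verification process -- the economies of scale
            # described above.
            return child_id in self._approved_by

    registry = ConsentRegistry()
    registry.register_parental_approval("child-42", "parent-7")
    print(registry.may_collect_data("child-42"))  # True
    print(registry.may_collect_data("child-99"))  # False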

Crafting Nonlinear Technology Regulation

Nonlinear developmental theory offers five concrete lessons for crafting successful technology regulation.

First, nonlinear developmental theory instructs us that human development and learning are always situated; the zone of proximal development varies across individuals. Development is not something that happens to humans in a preordained manner; development is an interactive process that occurs not within the individual, but on the person-society border. Therefore, the society the person experiences pushes the course of development, and vice versa. The same biological individual in two different technology-mediated social contexts will arrive at two different developmental outcomes and potentially two different regulatory prescriptions.

Second, development is an emergent phenomenon. The social context – including the technology itself – changes in frequently unpredictable ways. Thus, regulating in a manner predicated on static assumptions about people and technology results in law destined for quick obsolescence. Both human behavior and technology will evolve in response to law. Nonlinear developmental theory shows us that effects on individuals’ development and behavior are emergent across multiple layers of context. Multiple developmental layers must coincide in pushing humans in the direction sought by the regulation. The influence of the exosystem of social norms, the mesosystem of peer groups and economic exchange, and the microsystem of the individual’s current state of development all come into play. Without considering all of these, regulation can frequently be circumvented or ignored.

Third, learning and development do not always cleanly map onto chronological age. An adult user whose only interactions with a software application occur once a week for an hour in a library on a shared machine experiences technology development and learning differently than does the ten-year-old child with a dedicated laptop in her bedroom. Technology can act as both an equalizer of abilities and an exacerbator of differences.

Fourth, regulating the way that humans interact with technology means contemplating multiple layers of context that cooperate or conflict to generate development. At various stages of life, developmental progress intersects with identity goals, creating another lens guiding individual behavior and developmental outcomes. Because these identity goals are inherently social in nature, two layers of context push on the individual – first the context shaping development through interactions and second the context in which the individual attempts to work toward identity goals.

Finally, technology is merely a tool that assists humans in achieving more than they otherwise could; the regulatory and developmental focus should always remain human-centric. New technologies should be analyzed merely as tools in a Vygotskian sense. They enable a user to accomplish more than the user ordinarily could without the tool. As such, the conduct that arises from this assisted action is not new; it is merely amplified conduct. Regulating technology creation is, however, not the answer; regulating humans, their conduct and their use of that technology is a more promising approach. These humans, perhaps unlike the technology itself, can demonstrate extreme levels of variation but provide a more efficacious, though more complicated, point for regulation.

Placed in regulatory context, the Children’s Online Privacy Protection Act demonstrates how ignoring these five lessons of contextualist developmental theory can result in regulatory suboptimality.

Wednesday, January 10, 2007

Humans + Technology = Emergent Behaviors, part II

Albert Bandura’s Social Learning Theory presents an analysis consonant with Vygotsky and Bronfenbrenner. Bandura's theory views the person-environment interaction as a three-way exchange in which the person, an entity with unique characteristics, performs a behavior in an environment that responds to the person and the behavior in a process of reciprocal determinism; it is an idiosyncratic interaction. According to Bandura, models can serve to instruct, motivate, disinhibit, inhibit, socially facilitate, and arouse emotion in a process of vicarious reinforcement. Essentially, development is viewed as a process of quantitative change, during which learning episodes gradually accumulate over time. Although Social Learning Theory does not directly address historical or cultural context, it reflects the tradition of Vygotsky and the contextualist approach by recognizing the dialectical process of a person who is working within and shaped by an environment; a triadic reciprocal determinism occurs among behavior, cognitive factors and the environment. Also, as in the theory of Vygotsky, there is no endpoint to development, and universal behaviors are rare. Thus, children are developmentally malleable, but only within the constraints of biology and environment, an environment replete with technology.

Finally, Erikson frames development through eight stages, each defined by a dichotomy of human development and identity formation: (1) basic trust versus mistrust, (2) autonomy versus shame, (3) initiative versus guilt, (4) industry versus inferiority, (5) identity versus role confusion, (6) intimacy versus isolation, (7) generativity versus stagnation, and (8) ego integrity versus despair. Erikson’s stages 1, 2, and 3 represent childhood stages in which the individual is not yet capable of interacting with (to borrow a Vygotskian phrase) “cultural tools” such as the internet. Stage 8 is similarly a stage in which the individual is primarily conquering internal dynamics; interaction with culture, its tools, and other individuals is not the stage’s primary focus. Conversely, in stages 4, 5, 6, and 7, the individual is learning from, and making a place in, society. The child becomes a different person in each stage, with different cognitive capacities, and progressively achieves a greater ability to interact with a wider range of people. For Erikson, the ego can remain strong only through interactions with cultural institutions that enable the development of the child’s capacities and potential. Technology is a key component of these interactions.

These four schools of nonlinear developmental theory offer useful analytical lenses for (re)theorizing and assessing technology regulation. A discussion of some of the insights these theories may provide for technology regulation follows.

Humans + Technology = Emergent Behaviors, part I

A group of other developmental theorists, however, developed decidedly nonlinear approaches that offer important insights contrary to the approach of Piaget and other linear developmental theorists. The work of these theorists argues that development and identity are inherently dialectical, interactionist joint constructions: an individual interacts with and within a particular social and technological context to generate development in an emergent manner.

Lev Vygotsky, a key figure of contextualist developmental theory and a contemporary of Piaget, introduced the importance of analyzing development in cultural context. The smallest unit of analysis for Vygotsky is the child in a particular social context – an inherently variable construction across milieus and individuals. Learning and development occur on the person-society border as an individual interacts inside the “zone of proximal development,” the gap between the child’s actual developmental level and the higher level of potential development achievable with help from adults or more advanced peers. Help in development comes not only from humans in the environment but also from self-help using cultural “tools” such as computers. For Vygotsky, humans master themselves from the outside through psychological and technical tools, which allow individuals to achieve more in a given context. Tools, in turn, vary across cultures and social contexts. In other words, assessment under a Vygotskian developmental paradigm focuses less on the static question of who the child currently is and more on the dynamic question of who the child can become, depending on context and tools.

An elaboration on the evolving, nonlinear nature of the contexts that shape development can be found in the work of Urie Bronfenbrenner. Bronfenbrenner presents an ecological model that illustrates the importance of reviewing dynamics across multiple levels of social context. Specifically, he identifies four levels of analysis – (1) macrosystem; (2) mesosystem; (3) exosystem; and (4) microsystem. Macrosystem-level analysis requires examination at the level of culture as a whole, along with the belief systems and ideologies underlying cultural rules and norms; in other words, it focuses on the mechanisms of social governance and the worldview prevalent in civil society. Mesosystem-level analysis focuses on interpersonal dynamics and the dynamics between the individual and secondary settings, such as work. Exosystem-level analysis contemplates interactions outside the primary sphere of analysis that nevertheless affect, or are affected by, what happens in the primary setting. On the microsystem level, individuals and their psychological development in a particular context are the primary focus of analysis. The individual interacts within and across all four levels and consequently develops. Technology affects each of these levels of social context.

Humans + Technology = A Straight Line?

Linear and nonlinear developmental psychology differ in key assumptions about the manner in which humans interact with the world around them, in particular with “tools” like technology.

Linear developmental psychology theory, as exemplified by the work of Jean Piaget, posits an age-contingent, lock-step trajectory for human development. Piaget divided development into four periods, each with substages: the sensorimotor period, the preoperational period, the concrete operational period, and the formal operational period. The sensorimotor period lasts from birth to age two and is characterized by the child’s movement from simple reflexes to organized behaviors oriented toward the external world, including goal-directed exploration and object permanence. It is followed by the preoperational period, which spans ages two to seven and involves the development of the semiotic function (the ability to use symbols), incomplete differentiation of other people from the self, and a loosely logical interpretation of the world in terms of the self. Next, the concrete operational period lasts from ages seven to eleven and is marked by the ability to perform logical mental operations, which are internalized and can be reversed. Finally, the formal operational period, from age eleven to age fifteen, is characterized by abstract thinking in which mental operations are not necessarily tied to concrete objects. At this point in the linear developmental paradigm, adulthood arrives and development stops. Adulthood is therefore the goal, the highest level of development, the “achievement” of development.

In summary, linear developmental theory presumes that all humans develop in a similar fashion, along an upward developmental trajectory tied to chronological age. Consequently, a linear approach to technology regulation presumes homogeneity among users of a given chronological age with respect to their sophistication and comfort with technology; chronologically older individuals, in other words, should demonstrate greater technology proficiency than younger ones.

Nonlinear developmental theory adopts the opposite approach. It asserts that assumptions about development cannot necessarily be tied to chronological age; development is an inherently social process that occurs in a particular real-world context using the “tools” of that context. Nonlinear developmental psychology theory is perhaps best reflected in the work of Lev Vygotsky, Urie Bronfenbrenner, Albert Bandura, and Erik Erikson, which the next two entries will explore.

Monday, January 8, 2007

The Most Volatile Technology: The User

Thank you for the introduction and for including me in this project.

Perhaps the first step in generating a successful broader theory of technology and law is identifying the defining characteristics of such a theory. One of these defining characteristics is the theory’s ability to co-evolve with the particular incarnations of the technologies we seek to govern. A successful theory of technology and law will therefore possess an emergent quality; it will develop in response to the interactions of technologies with the humans who use them.

In this way, any successful theory of technology and law is inextricably bound up with human development. Yet the law’s assumptions about human development, and about how humans interact with technology, are rarely examined closely. Legislative approaches are usually compartmentalized around a particular technology or a particular legal idea, and technology legislation rarely makes users’ perceptions or development its primary focus. The dominant human development paradigm adopted by technology regulation is one which, by default, presumes that users are a one-dimensional, linear, stagnant piece of the regulatory picture.

Although this assumption of linear human development is rooted in early developmental psychology theory, it ignores later bodies of human development theory that are better suited to informing the regulation of technology-mediated interactions. Nonlinear developmental theory contemplates the emergent learning and behaviors in which users participate through technology better than traditional linear paradigms do.

By inserting the dynamic nature of users and their development into the technology regulation picture, we begin to generate law that approaches the levels of complexity that actually exist in technology-mediated social systems. The remaining posts this week will examine the work of nonlinear developmental psychologists and its lessons for technology regulation.

Introducing Andrea Matwyshyn

After a two-week hiatus, Law & Tech Theory is back! It is my pleasure to introduce this week's main contributor, Andrea Matwyshyn. I have followed her work for some time and look forward to her writing this week.

Prof. Andrea M. Matwyshyn is the Executive Director of the Center for Information Research and an Assistant Professor of Law at the University of Florida Law School. She is an interdisciplinary researcher in the area of innovation policy and enterprise risk management, focusing her work on the legal and social implications of information technology and data security.

In addition to her appointment at the University of Florida, she is an Affiliate of the Centre for Economics & Policy in the Institute for Manufacturing at the University of Cambridge in the United Kingdom, where she is part of an international group of academics exploring issues at the intersection of information technology and manufacturing. She also lectures regularly to academic and industry groups, both within the United States and internationally, on ethical enterprise risk management strategy, information technology regulation, and proprietary information security.

Her recent presentations include talks at University of Oxford, University of Cambridge, Stanford University, University of Edinburgh, Kellogg Graduate School of Management, Wharton School of Business, RSA Security, and BlackHat. Prior to joining the University of Florida faculty, she taught at Northwestern University School of Law and was a corporate technology transactions attorney in private practice in Chicago.

Welcome Prof. Matwyshyn!