Tuesday, December 28, 2010

Failure is Not an Option

2010 is drawing to an end amidst a flurry of activity in the Health IT field. In a few short days 2011, the year of Meaningful Use, will be upon us and the stimulus clocks will start ticking furiously. In addition to the yearlong visionary activities from ONC, December 2010 brought us two landmark opinions on the future of medical informatics. The first report, from the President’s Council of Advisors on Science and Technology (PCAST), recommended the creation of a brand new extensible universal health language, along with accelerated and increased government spending on Health IT. Exact dollar amounts were not specified. The second report, from the Institute of Medicine (IOM), is a preliminary summary of a three-part workshop conducted by the Roundtable on Value & Science-Driven Health Care with support from ONC, titled “Digital Infrastructure for the Learning Health System: The Foundation for Continuous Improvement in Health and Health Care”. The IOM report, which incorporates the PCAST recommendations by reference, is breathtaking in its vision of an Ultra-Large-Scale (ULS) system consisting of a smart health grid spanning the globe, collecting and exchanging clinical (and non-clinical) data in real time. Like PCAST, the IOM report focuses on the massive research opportunities inherent in such a global infrastructure, and like the PCAST report, the IOM summary makes no attempt to estimate costs.

Make no mistake, the IOM vision of a Global Health Grid is equal in magnitude to John Kennedy’s quest for “landing a man on the moon and returning him safely to the earth” and may prove to be infinitely more beneficial to humanity than the Apollo missions were. However, right now, Houston, we’ve had a problem here:
  1. The nation spent upwards of $2.5 trillion on medical services this year
  2. Over 58 million Americans are poor enough to qualify for Medicaid 
  3. Over 46 million Americans are old enough to qualify for Medicare 
  4. Another 50 million residents are without any health insurance
  5. The unemployment rate is at 9.8% with an additional 7.2% underemployed
  6. This year’s federal deficit is over $1.3 trillion and the national debt is at $13.9 trillion
In all fairness, the recent Federal investments in Health IT were spurred by the HITECH Act, which was part of the ARRA, a recession stimulus bill aimed at injecting money into an ailing economy and creating jobs while improving national infrastructure. It was not explicitly intended to reduce health care costs or improve access and affordability (that came later with PPACA). Perhaps adding an EHR to every doctor’s office was viewed as the first step towards building the Learning Health System. However, somewhere along the road to fame, EHRs were magically endowed with powers to provide patients “with improved quality and safety, more efficient care and better outcomes”. Perhaps these claims came from EHR vendors’ glossy marketing collateral, or perhaps it was just wishful thinking, or perhaps this was a forward-looking statement about the fully operational grid of a Learning Health System, or maybe this is just incorrect use of terminology. Health IT is much more than EHRs, and Health IT can indeed help improve efficiency, i.e., cut costs, in several ways.

Administrative Simplifications

Section 1104 of the PPACA contains a roadmap for administrative simplifications “to reduce the clerical burden on patients, health care providers, and health plans”. Eligibility transactions must be standardized and deployed by 2012, electronic payments by 2014, and claims, certifications and authorizations by 2016. Physicians spend about 14% of revenue on billing and insurance-related functions, while hospitals spend 7% - 11% and health plans spend around 8%, not to mention the aggravation involved. Why do we have to wait six years before this particularly wasteful activity is completely addressed? If there is a place where health care can learn from other industries, this is the one. Both the banking and retail industries solved this problem many years ago. It is trivial to imagine swiping a magnetic card at the doctor’s office to verify eligibility, obtain authorizations and exact dollar amounts for patient responsibility, and initiate a real-time payment transaction from insurer to provider. The complexities of a thousand different plans can be easily accommodated by computer algorithms, and the technology is available in every supermarket and every gas station. For all those joining Congress in 2011 with the intent of altering PPACA, could we alter Section 1104 and shorten the timeline by a few years?
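For the skeptics: the computation involved is trivial. Here is a minimal sketch, in Python, of the kind of real-time adjudication such a card swipe would trigger. The plan structure and dollar amounts are invented for illustration, and a production system would of course ride on the standardized HIPAA X12 transactions rather than a dictionary:

    # Toy adjudication of one office visit against a patient's plan rules.
    # Plan fields and amounts are hypothetical, for illustration only.

    def adjudicate(charge, plan, deductible_met):
        """Split a charge into patient and insurer portions."""
        remaining_deductible = max(plan["deductible"] - deductible_met, 0.0)
        patient = min(charge, remaining_deductible)   # deductible first
        covered = charge - patient
        patient += plan["copay"] + covered * plan["coinsurance"]
        patient = min(patient, charge)
        return round(patient, 2), round(charge - patient, 2)

    plan = {"deductible": 500.00, "copay": 20.00, "coinsurance": 0.20}
    patient_owes, insurer_pays = adjudicate(150.00, plan, deductible_met=500.00)
    print(patient_owes, insurer_pays)  # 50.0 100.0

Run this against a thousand different plans and it is still the same dozen lines; the hard part was never the algorithm.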

Fraud

The National Health Care Anti-Fraud Association estimates the cost of health care fraud at 3% to 10% of expenditures. Despite all the publicity, credit card fraud is estimated to cost 7 cents per $100 in transactions, or 0.07%, with issue resolution times estimated at 21 hours. This is yet another lesson health care can learn from the financial industry. Granted, purchase patterns in health care are different from those in the market at large, so the anti-fraud algorithms will need to be tweaked and specialized. Computers are very good at this, and from watching the President’s bipartisan meeting on health care reform last year, I thought this was one area where everybody agreed that something needs to be done. Yet there is nothing tangible in PPACA regarding the use of Health IT for fraud reduction.
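To make the point concrete, here is a toy sketch of the kind of outlier screening the card networks run routinely, applied to claims. The data is invented and real anti-fraud systems are vastly more sophisticated, but the principle fits in a dozen lines of Python:

    from collections import defaultdict
    from statistics import mean, stdev

    # Toy claims: (provider, procedure code, billed amount). Data is invented.
    claims = [
        ("A", "99213", 75), ("B", "99213", 80), ("C", "99213", 78),
        ("D", "99213", 82), ("E", "99213", 310),
    ]

    by_code = defaultdict(list)
    for provider, code, amount in claims:
        by_code[code].append((provider, amount))

    # Flag any claim sitting far above its peers for the same procedure.
    for code, rows in by_code.items():
        for provider, amount in rows:
            peers = [a for p, a in rows if p != provider]
            mu, sigma = mean(peers), stdev(peers)
            if sigma and (amount - mu) / sigma > 3:
                print(f"review: provider {provider}, {code}, ${amount}")  # provider E

Nothing here requires new science; it requires the will to point the computers at the claims.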

Duplication of Tests

If you prescribe electronically through Surescripts, you can see a patient’s medication list courtesy of the PBM. PBMs and insurers know exactly what medications they paid for. They also know exactly what procedures, tests and visits they paid for, and who performed them. Would it be a huge stretch of the imagination to envision a display of the last six months of tests paid for by the insurer every time you attempt to order a test? No, insurers don’t have the results, but if you saw that the patient had an MRI last week, would you order another one today? Or would you call the facility for a copy? When you prescribe electronically, the PBM insists on showing you the formulary and the drug price for the individual patient. Why not show you prices for the tests you are about to order, and help you and the patient choose lower-priced facilities, just like they steer folks to prescribe generics? This has nothing to do with clinical decision support or changing the way medicine is practiced. These are examples of very simple, common-sense, immediate solutions for reining in costs without disturbing quality of care.
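A sketch of what that duplicate-test check could look like, assuming the insurer exposes its paid claims as simple (patient, CPT code, date of service) records; the identifiers, codes and the six-month window below are illustrative only:

    from datetime import date, timedelta

    # Toy paid-claims feed from the insurer: (patient, CPT code, date of service).
    paid_tests = [
        ("pt-42", "70553", date(2010, 12, 20)),  # brain MRI, last week
        ("pt-42", "80053", date(2010, 6, 1)),    # metabolic panel, in June
    ]

    def recent_duplicates(patient_id, cpt_code, today, window_days=180):
        """Return prior paid instances of the same test within the window."""
        cutoff = today - timedelta(days=window_days)
        return [(c, d) for p, c, d in paid_tests
                if p == patient_id and c == cpt_code and d >= cutoff]

    prior = recent_duplicates("pt-42", "70553", today=date(2010, 12, 28))
    if prior:
        print("already paid for within 6 months:", prior)
        # prompt the physician: re-order anyway, or request a copy of the results?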

The Global Learning Health System presents a compelling vision. I wish the President would commission the necessary budget estimates, go before Congress and, in a JFK-style oration, request appropriations for defeating cancer (or some other scary thing), appropriations which would include funding for the Learning Health System global grid. It is possible that if such a Learning Health System existed today, or could be quickly deployed, it would provide solutions for most health care problems we currently have. However, it is pretty clear that such a system will take many years and many billions of dollars to build. In the meantime we have an immediate problem, which requires an immediate solution with immediately available tools, and no, failure is still not an option.

Wednesday, December 22, 2010

Health IT and the Carob Tree

At a certain point in time, somewhere in America, someone stated that people should all have lifetime, complete medical records. Sounds reasonable, and I presume nobody ever asked, “Why?” As time goes by and health care services in America approach an unsustainable 18% of GDP, the mythical lifetime record is quickly becoming a panacea for the obvious problem health care has become. Americans accustomed to thinking about their health care as “the best in the world” are now being instructed that American health care is fraught with errors, needless deaths, unsafe treatments, uninformed physicians and unsanitary hospitals, and is basically stuck in the stone age of technology. And all this while sucking inordinate amounts of cash from simple-minded folks who have “no skin in the game” and are thus completely oblivious to being robbed, bankrupted, maimed and killed by greedy health care providers and industry financiers. Don’t know about anybody else, but I am positively terrified... mortified... petrified... stupefied... by this.

Enter the aforementioned lifetime health record, a.k.a. “EHR for every American by 2014”. EHR in this context denotes a collection of information or data, not a software product. Instead of overstuffed manila folders and oversized yellow envelopes, each one of us will have a complete electronic dossier, stored somewhere TBD later, chock full of every lab result and imaging study we ever had; every blood pressure, weight, height, temperature, etc. ever taken; every pre-op, post-op, consultation and progress note ever written; all diagnoses and medications, all cuts and bruises, all chief complaints and histories, and all treatment plans that we followed and even those that we did not. When our EHR is ready for use, doctors will make fewer errors, order fewer unnecessary tests, make more informed decisions, prescribe safer treatments and charge less money for more thorough work. Well, maybe the last one is a bit of a stretch….

Problem #1: Do we really need a comprehensive lifetime health record? Here and there, particularly for small children with chronic conditions, such a record will be clinically meaningful. For the vast majority of Americans, a lifetime EHR may be a cute thing to have but not really a necessity. One may need records from recent years if managing chronic disease or battling a potentially fatal diagnosis, but for everybody else, including the exotic case of someone ending up in the ER unconscious, buck naked and all alone, the most you will need is a brief summary of vital information. So if we don’t need our pre-school growth charts, and we don’t need an itemized litany of every URI we ever had, every story we told our doctor and every “RRR, normal S1, S2 and without murmur, gallop, or rub” ever recorded, what is it that we do need? I guess a reasonably healthy 40-year-old could derive some joy from perusing his comprehensive lifetime record – “Look honey, that awful cold I had in the winter of ’87 when we went skiing for the first time was really pneumonia. No wonder I broke my leg the next day… It’s all here. Isn’t this great?” When the same 40-year-old goes to see his new family doc the next day for persistent “heartburn”, his ’87 adventure will be largely irrelevant, and if he ends up unconscious and naked at the ED that night, they may be interested in his recent “heartburn”, but will still have no use for information on his hapless skiing vacation 23 years ago.

As Dr. David Kibbe aptly observed, what we, or our health care providers, need very much depends on the context. Defining a relevant superset of information should of course be left to practicing physicians, but if I had to define such a superset, I would go with immunizations, problem list and medications (current, with the option to view historical), allergies, a couple of years of lab results and imaging studies (longer for certain studies), standard major medical and family histories and, for chronic or serious conditions, the last few physician notes. Interestingly enough, these data elements are already being captured in a structured and codified manner by most currently available technologies. If money were no object, I don’t see a downside to cataloging and retaining every tiny piece of information, provided that it can be contextually filtered for different circumstances. But judging by the billions of dollars being spent on HIT, money is a very big object indeed, and either way, those who care for unconscious, naked people presenting at the ED in the middle of the night should not be expected to peruse lifetime records.
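For illustration only, here is roughly what that superset might look like as a data structure, with one contextual filter attached. The field choices simply mirror the wish list above; they follow no particular standard:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class RelevantSummary:
        immunizations: List[str] = field(default_factory=list)
        problem_list: List[str] = field(default_factory=list)
        current_medications: List[str] = field(default_factory=list)
        historical_medications: List[str] = field(default_factory=list)  # on demand
        allergies: List[str] = field(default_factory=list)
        recent_labs: List[str] = field(default_factory=list)       # ~2 years
        recent_imaging: List[str] = field(default_factory=list)    # longer for some studies
        medical_history: List[str] = field(default_factory=list)
        family_history: List[str] = field(default_factory=list)
        recent_notes: List[str] = field(default_factory=list)      # chronic/serious only

    def ed_view(s: RelevantSummary) -> dict:
        """Contextual filter: what the 3 a.m. ED team actually needs."""
        return {"problems": s.problem_list, "meds": s.current_medications,
                "allergies": s.allergies, "labs": s.recent_labs}

The point is not the particular fields; it is that the contextual filter is a four-line function, not a research program.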

Problem #2: How do we get access to either comprehensive or contextually appropriate information? As we all know, our “fragmented” health care system is nothing but a collection of data “silos” maintained mainly on paper under lock and key by greedy providers, no doubt purposely so in order to maintain a competitive advantage in a brutal health care market where an overabundance of physicians are fiercely competing for an ever-dwindling number of patients. Or maybe not…. Perhaps traffic of clinical information has been severely hampered by that one antiquated oath physicians still take, which commands doctors to keep patient information downright secret. Either way, since in most instances people are treated by multiple providers, medical information must be shared between providers and certainly must be available to patients electronically (faxing, copying and phone calls are so uncool). Unfortunately, we don’t have a national health care system where all providers are employed by one entity, conform to one set of policies, use one technology platform and share clinical data easily. We do, however, have a few “look alike” entities such as Kaiser and the VA. Why not do away with the remaining “fragments” and consolidate our health care into a handful (a single one would be too Socialist) of fully integrated systems? It would certainly simplify things for HIT grand-designers and programmers.

The financial system, our beacon of informatics wisdom, resolved this pesky problem long ago, as is evident in the world-spanning network of ATMs, where card-carrying customers with unique identifiers can exchange several bytes of information with their remote financial institutions. For those desiring comprehensive financial records, there are Yodlee and Mint, which will aggregate all your financial accounts in one cloud-based dashboard free of charge (any takers?). Strangely enough, this hallmark interoperability accomplishment did not require federal funding, government committees or a compulsory “universal financial language” (arithmetic seldom does). One can never be certain, but it is possible that financial IT experts were less obsessed with fostering/stifling innovation and more concerned with providing pragmatic solutions to real problems, without requiring that banks change the way “financial services are delivered” or that smaller banks cease to exist in order to simplify software programming.

Problem #3: Should we plant a carob tree? Legend has it that carob trees require 70 years to reach maturity and bear fruit (more like 7, really), thus planting a carob tree is a selfless act to benefit posterity. There is a remarkable disconnect between the voice of physicians who treat twenty, thirty patients every day, one patient at a time, and that of physicians in academia and those in “leadership” roles who routinely converse about population health, bio-surveillance and clinical research. Doctors who make a living by touching patients today, not tomorrow and not after Meaningful Use Stage 5 has been achieved, usually find that an EHR has very little to contribute to the quality of care they deliver to the one patient in front of them. Health IT is promising them a paperless future, devoid of software and hardware both, where every metadata-tagged digital piece of information about their patient is “a click of a button” away. Health care delivery will become well informed, efficient and flawless to the point that the patient may not even need to be “seen” in order to be treated. Magically frightening? No; just futuristic technology which may come to fruition in, say, 70 years. Perhaps EHRs are our carob trees.

Moral: If you insist on planting nothing but carob trees, you will starve to death and there will be no one left to enjoy the fruits of the carob tree.

Thursday, December 16, 2010

Thoughts on the PCAST Report

The President’s Council of Advisors on Science and Technology (PCAST) released a report this month ambitiously titled “REALIZING THE FULL POTENTIAL OF HEALTH INFORMATION TECHNOLOGY TO IMPROVE HEALTHCARE FOR AMERICANS: THE PATH FORWARD”, complete with an analysis of the current state of HIT and authoritative recommendations to ONC, CMS and HHS on how to proceed. Initially, I skimmed through the 90 pages of the report and very much liked what I saw. PCAST is recommending a federated model for health information, with medical records stored where they are created and a comprehensive view aggregated on the fly, on an as-needed basis, by authorized users, including patients and their families. PCAST is urging ONC to significantly accelerate efforts in this direction. Perfect. And then I took a deeper dive into the details of the report, and disappointingly came across a series of misconceptions and questionable assumptions surrounding what is basically a very good, albeit expensive, strategy.

The State of Affairs

The classic opening to all HIT reports seems to be the obligatory comparison to “other industries”: “Information technology, along with associated managerial and organizational changes, has brought substantial productivity gains to manufacturing, retailing, and many other industries. Healthcare is poised to make a similar transition, but some basic changes in approach are needed to realize the potential of healthcare IT”. While this is true, we should also recognize that medicine is very different from other “industries” in that it lacks 100% repeatable processes. For example, the entire process of manufacturing, packaging, ordering, delivering, stocking and selling a box of Fruit Loops is exactly the same for every single Fruit Loops box. Automation of such a process is easy. Unfortunately, people are not very similar to Fruit Loops boxes, and paradoxically, the lack of appeal and utility of current EHRs is in large part due to EHR designers thinking about Fruit Loops instead of the many ways in which people express Severity or Location.

The PCAST report continues with the common mantra regarding our “fragmented” health care system and the fee-for-service model, which is preventing the availability of complete health information at the point of care, and the advantages of larger health systems which, unlike small practices, “have an incentive to provide care efficiently and reduce duplication or extraneous services when possible”. They must be referring to Integrated Delivery Networks (IDNs), since large hospitals and multi-specialty groups have many incentives, but reducing extraneous services is not one of them, and if there is an entity which lives or dies by achieving efficiency, it is the solo or small practice operating on abysmal margins. Either way, we need technology recommendations for the existing health care system, not so much for some utopian system where payers and providers cheerfully align their interests with those of patients and taxpayers in general.

Legacy and Condensed Water Vapors

Unsurprisingly, the overarching thread in the report is the description of current EHR systems as “legacy” systems, which will eventually become obsolete and make room for “innovative” technologies delivered via the Clouds. I can see how using centrally deployed and managed applications (not really a new concept) can be easier and more cost effective for most providers, but does the software have to reside in a vendor datacenter in order to qualify for a “no more software” stamp of approval? Would the many Epic installations accessed remotely by physicians qualify as Clouds? Or must they be deployed in an Epic-owned datacenter, and be natively browser based, in order for the software to mysteriously vanish? Or perhaps Epic is too large, and thus too heavy, to ascend to the clouds? Epic, the fastest-selling EHR system in the country, is a “legacy” product built on the 1960s MUMPS programming language, and so is VistA, the Veterans Affairs (VA) EMR, which seems to be a physicians’ all-time favorite. Perhaps the European Space Agency knows something we don’t, since they are taking MUMPS straight through the clouds and all the way up into outer space to map the Milky Way Galaxy.

After affixing the “legacy” label on the EHR industry incumbents, the PCAST report repeatedly emphasizes the government’s role in creating a “vibrant market of innovators”, particularly the “disruptive” type, citing the example of ARRA incentives leading to “substantial innovation and competition” and “more affordable systems and improved products”. This is pure fantasy. Some of the smaller, previously most affordable EHRs have been forced to double their prices in order to comply with regulations. Most of the larger (legacy?) products have maintained their pre-ARRA pricing, or increased it. And then there are the literally all-cloud, no-software-to-speak-of vendors, who preyed on physicians before ARRA and are continuing to do so after. It seems that Apple and its iTunes App Store have created the illusion that any kid with a completed “Programming for Dummies” curriculum can whip up a useful clinical decision support application in a couple of weeks, sell it on the App Store for $0.99 and help us all get healthy, if we would only give him access to mountains of personal medical records.

Privacy or Lack Thereof

The beauty of the PCAST-recommended solution for health information exchange (details below) is that privacy preferences are built into each data element. Anyone attempting to access personal health information would be required to authenticate and to validate that the patient’s privacy policy allows access to the requested data element. Moreover, all data will be encrypted both at rest and in transit between users, thus barring all intermediaries from reading or storing any personal information. Rock solid plan; that is, until you run into this statement: “It seems likely that the modifications to HIPAA enacted in Subtitle D of the HITECH Act—in particular those that require covered entities to track all disclosures to associates—will further stifle innovation in the health IT field while offering little additional real-world privacy protection”. The PCAST authors seem to dislike the idea of tracking disclosures of personal health information. Somehow, uninhibited disclosure (or outright wholesaling) of patients’ private information to “associates” is a necessary condition for “innovation”. Makes you wonder what exactly is meant by “innovation”.

The Grand Solution

Let me say this again. I love the distributed data concept at the heart of PCAST’s recommendations. No big databases in the sky here. All the pieces of one’s medical record are housed by the institution that created each piece. When someone needs access to a record, or part of a record, an authorized query is issued (think Google search) and the requested information is located and aggregated across multiple data stores and displayed on the requestor’s computer screen (think Google again). The mechanism to achieve this wondrous task is by and large the same one Google uses, with added layers of security and privacy. These tools are dubbed “data element access services (DEAS)”, which are nothing more than customized search engines for health information, maintained and operated by large health systems or purposefully built entities. As is the case with Google search, medical records will need to be indexed if the DEAS are going to find them. For this purpose, PCAST suggests “a universal extensible language for the exchange of health information” based on “metadata-tagged data elements”. In plain English, medical records will be broken into atomic data elements, each having attached information describing the element (patient identifiers, what it is, when recorded, how, by whom, etc.) and most importantly who can access it (patient-directed privacy rules). The search engines will presumably use these metadata tags to locate actual data elements, without ever needing to read the data itself, and return the results to the querying user. Of course, there are more questions than answers at this point, and some interesting discussions too, but the general concept is sound and very innovative.
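To make this concrete, here is a minimal sketch of a metadata-tagged data element and the privacy gate a DEAS would apply before returning a hit. The tag fields are improvised from the report’s description rather than taken from any published schema, and a real DEAS would also have to verify the requester’s identity cryptographically:

    from dataclasses import dataclass

    # Improvised PCAST-style tagged element: the index holds the tag and a
    # pointer; the payload itself stays encrypted at the source institution.
    @dataclass
    class TaggedElement:
        patient_id: str           # pseudonymous patient identifier
        kind: str                 # what it is
        recorded: str             # when it was recorded
        source: str               # by whom
        allowed_roles: frozenset  # patient-directed privacy policy, per element
        location: str             # pointer to the encrypted data, not the data

    index = [
        TaggedElement("p-17", "hba1c-result", "2010-11-02", "Clinic A",
                      frozenset({"treating-physician", "patient"}),
                      "https://clinic-a.example/elements/991"),
        TaggedElement("p-17", "psych-note", "2010-08-15", "Clinic B",
                      frozenset({"patient"}),
                      "https://clinic-b.example/elements/204"),
    ]

    def deas_query(patient_id, kind, requester_role):
        """Return only the pointers this patient's policy permits."""
        return [e.location for e in index
                if e.patient_id == patient_id and e.kind == kind
                and requester_role in e.allowed_roles]

    print(deas_query("p-17", "hba1c-result", "treating-physician"))  # one hit
    print(deas_query("p-17", "psych-note", "treating-physician"))    # [] - policy says no

Note that the index never reads or stores the clinical payload; change the policy on one element and that element alone goes dark.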

Beware the Legacy Giants

While PCAST was deliberating and formulating its recommendations, at least one “legacy” EHR vendor was implementing the solution. During its user conference in October, Cerner unveiled “Chart Search”, a semantic search engine that uses Natural Language Processing (NLP) and a specific ontology to allow users to intelligently search a patient’s chart. As in PCAST’s recommendations, Cerner is indexing all medical records and is storing the indices in its own datacenter (cloud). The use of NLP and clinical terminologies, such as SNOMED, allows Cerner to perform contextual searches by concept (searching for beta-blocker will return all occurrences of Atenolol, Metoprolol, etc.) and rank the most clinically pertinent results on top. You can view a very brief presentation of this feature, shown by my favorite family doc, Dr. Karl Kochendorfer, here. The Cerner semantic search is different from PCAST’s recommended solution in many ways and, of course, right now it is limited to Cerner charts in one physical location, but it is real and currently used by actual physicians. Looks like those old “legacy” giants are still packing some punch after all.
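The concept-expansion part is simple enough to sketch in a few lines of Python. The ontology mapping below is a crude stand-in for SNOMED, and Cerner’s actual NLP pipeline is obviously far richer:

    # Toy concept search: expand the query term through an ontology mapping,
    # then match chart entries. The one-entry mapping stands in for SNOMED.
    ontology = {
        "beta-blocker": {"atenolol", "metoprolol", "propranolol", "carvedilol"},
    }

    chart = [
        "2010-03-01 started metoprolol 25mg bid",
        "2010-06-12 lisinopril increased to 20mg",
        "2010-09-30 atenolol substituted for metoprolol",
    ]

    def concept_search(term, entries):
        """Match entries containing the term itself or any of its children."""
        targets = {term} | ontology.get(term, set())
        return [e for e in entries if any(t in e.lower() for t in targets)]

    for hit in concept_search("beta-blocker", chart):
        print(hit)  # the metoprolol and atenolol entries; no lisinopril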

In summary, PCAST’s basic concept of where HIT should be headed is very appealing and properly ambitious. The serious consideration given to privacy, security and patient preferences is refreshing, but in order to support the fascinating research agenda proposed in the report, government will at some point need to step in and curb the enthusiasm of Cloud owners, severely curtailing the commerce in medical records. I would have preferred that PCAST refrain from the fashionable and rather baseless assumptions on how innovation occurs, and from the equally worn-out subtle advocacy for unproven changes to our health care delivery system. Other than that, a very interesting report.

Full disclosure: I have no financial interests in Epic, Cerner or any other EHR vendor

Tuesday, June 1, 2010

The Laws of Technology and the Technologies of Law Event 2011

Overview


Lawyers and legal institutions regularly face technological change. The public record of the twentieth century and this one is populated by numerous crisis events surrounding emerging technology, where law was called forth to channel, to regulate, or to prohibit certain technologies and technologically mediated activities. This rich history, coupled with the ever-present concern of technological change, would suggest that there is a detailed scholarly reflection on the relationship between law and technology. However, this is not necessarily the case. Most scholarship on law and technology is reactive to concerns surrounding a specific technology or technologically mediated activity. This orthodox scholarship remains within a reasonably narrow frame of reference concerned with securing a desirable future through law as an instrument of public policy. In this, the lawyer-scholar’s task is primarily descriptive; it involves the identification of the ‘issues’, ‘uncertainties’ and ‘gaps’ to be addressed by policy-makers and legislators. This symposium aims to challenge this orthodoxy at three key points.


The first challenge can be through a taking seriously of the past of law’s engagement with technology. Instead of issue-specific, piecemeal engagements that look narrowly to the future, it is hoped, through archival, historical and cultural sources, to glean a more sophisticated account of the social, political, economic and cultural factors that gave form to concrete law and technology moments.


The second challenge can be through a taking seriously of the present of law’s engagement with technology. Law faces profound technological change. However, instead of falling back on the narrow nomology of the orthodox scholarship, what is hoped for is a diverse array of methods and resources – social scientific, cultural and literary studies for example – to expose, critique and understand the current political-legal engagements with technological change.

The third challenge can be through a taking seriously of the future of law’s engagement with technology. The predominant theory of law in the orthodox scholarship is instrumental and sovereign. At a fundamental level, law is conceived as a process, a machine that can be deployed. And significantly, it is a process that can claim sovereignty over the future. Ironically, the law called forth by technology can be characterised as technological. Through jurisprudential, philosophic, semiotic, psychoanalytic and other theoretically informed discourses, it is hoped to question and think through these deep connections between law and technology.

Organiser

Kieran Tranter, Deputy Director of the Socio-Legal Research Centre and Managing Editor of the Griffith Law Review.

Event


The focus is a one-day workshop at Griffith Law School, Griffith University Gold Coast Campus, to be held on 3 May 2011.

While it is hoped that presenters can present in person, presenters will also be able to contribute through Skype and video-linking technologies. The workshop will be run afternoon to evening to allow northern hemisphere presenters to be involved.

There will be no cost for presenters to attend the workshop.

Outcomes

A selection of the papers presented at the workshop will be refereed and edited for appearance as a symposium in the Griffith Law Review (2011) 20(2).

An edited volume comprising all the presented papers, with a well-regarded law publisher, is planned.

Process


Proposals, including a title and a 300-word abstract, are due 28 January 2011. Send proposals to glr@griffith.edu.au

Confirmed Participants

Lyria Bennett Moses, Faculty of Law, University of New South Wales, “Agents of Change.”

Gaia Bernstein, Seton Hall Law, Seton Hall University, “When is Timing Important in the Regulation of New Technologies?”

Arthur Cockfield, Faculty of Law, Queens University, “From Cyberlaw to Law and Technology.”

Jennifer Chandler, Faculty of Law, University of Ottawa, “Technological Justice: Identification and Distribution of the Benefits and Harms of Cognitive Enhancements and Therapies.”

Charles Lawson, Griffith Law School, “Deploying Law to Bound Nature.”

Joseph Pugliese, Faculty of Arts, Macquarie University, "Drone Technologies and the Inexecution of Law."

Kieran Tranter, Griffith Law School, “Gaming the Speculative Jurisdiction in Law and Technology.”

Monday, February 22, 2010

Welcome Boing Boing Readers

Welcome Boing Boing readers and thanks for taking a minute to look around my blog. I generally focus on science, technology, and DIY topics although there's more here than that. Please use the links to the bottom right or type a topic such as "animal" or "DIY" to get to some interesting subjects.

Is your organization looking for a speaker for an upcoming event? Visit www.WilliamGurstelle.com

Also, here are links to some of my favorite NFTTU posts:

1000 Dead Men:
A description of the Gerry Report, perhaps the most grotesque bureaucratic report in all of American history.

The 10 Best North American Geek Fests
A link to a recent article I wrote for Wired Magazine

The Rise, Fall, and Rise of Robotic Combat
Remember Robot Wars? Many are still at it.

Hollywood's Catapult Warrior
Orlando Bloom's catapult fetish.

Celebratory Gun Firing: Good Idea or Not?
What goes up, must come down. A lot of comments on this one.

Nitric Acid Acts Upon Trousers
Ira Remsen, a chemist with a great sense of humor.

Fun With Jet Engines
Cool video.

Dippy Bird Power
My idea to end the energy crisis.


Navy Swimmer Nullification Program
A bizarre government defense program comes to light

My Name is Bond; Covalent Bond
Chemistry sets ain't what they used to be.

Water Bears - The World's Toughest Animal
Fun with tardigrades.

Sunday, February 7, 2010

Kinds of Computer

Desktop PC

An abbreviation for "Personal Computer"; about 93% of computers in use are PCs. PCs for personal use come in almost any shape and design. They usually run Microsoft Windows (for example, Windows XP, Vista, or Windows 7), are fast, and are compatible with almost all available computer applications. Most large businesses, corporations, schools, and home users operate PCs because of their customizable features, performance, and generally low price. Leading PC manufacturers include Dell, HP, Samsung, Sony, Toshiba, and many others.

Laptop

A compact, battery-powered version of a PC. Also called a "Notebook."

Mac Computer

Even though the Mac (short for "Macintosh") is a form of personal computer, it is different from a PC in that it does not use Microsoft Windows as its Operating System. Instead, it uses the Mac OS series, such as OS X Leopard.

Macs are known for their incredible system stability, quality designs, unique programs and features, and their usually fast speed. Macs are becoming more popular in society, generally used in people's homes or in digital graphic design studios. Unfortunately, many people do not invest their money in a Mac because of the steep price (a Mac with roughly the same specifications as a $1,500 Dell PC can cost around $2,400) and the relative lack of programs made specifically for Macs (although the number is increasing).

Mainframe

Computers with large hard drives, lots of memory (RAM), and multiple CPUs running together. They perform large amounts of computing, depending upon the speed of the processors used and the amount of RAM included.

Micro Computer

A very small computer, usually embedded in devices such as cameras.

Super Computer

A computer with many processors, ALUs, lots of memory (RAM), etc., usually used in scientific research work or by governments. For large problems, a supercomputer typically "breaks down" the problem, solves it in small "bits," and then puts the answer "back together." A single supercomputer can have the capability of some 14,000 microcomputers.
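That "break down, solve, put back together" pattern is ordinary parallel divide-and-conquer. A small Python sketch, with an everyday multicore machine standing in for the thousands of processors:

    from multiprocessing import Pool

    def partial_sum(chunk):
        """Solve one small "bit" of the problem."""
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1000000))
        chunks = [data[i::8] for i in range(8)]     # break the problem down
        with Pool(processes=8) as pool:
            pieces = pool.map(partial_sum, chunks)  # solve the bits in parallel
        print(sum(pieces))                          # put it back together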

PDA

"Personal Digital Assistant" or Palmtop.

Analog

Older, outdated computers that calculate using continuous physical quantities such as voltage.

Friday, January 22, 2010

Announcing Upcoming Virtual Conference

On March 18, 2010, from 1 pm to 5 pm (Eastern Standard Time), we will hold a 'virtual' conference in Second Life (at the Queen's University Faculty of Education virtual island). The topic of this conference will be the same as the one for our most recent blog: 'Human Autonomy, Law and Technology.'

Dean Jim Chen will open the virtual conference with a keynote speech. Then professors, lawyers and others from different countries will appear as avatars to give papers and commentary.
More information on this virtual conference, including a draft agenda, is located here and will be updated as the conference date approaches.

Individuals can view the conference proceedings in three ways: (a) as avatar audience members attending the conference; (b) via a live video feed; or (c) by later viewing an archived digital copy of the conference proceedings.

Thursday, January 7, 2010

The DIY Chip


The January 2010 issue of The Atlantic contains an article I wrote about the way small, cheap, and easy-to-program computers are turning artists into technologists and technologists into artists. It's about the concept of physical computing, or the way people use computers to sense environments and do great things with that sort of information.

The entire issue of The Atlantic is available for reading online without cost. Read the whole article here, or browse to http://www.theatlantic.com/doc/201001/robot-art
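At its heart, physical computing is just a sense-decide-act loop. Here is a toy Python sketch with a simulated sensor; an actual build would read a real input pin instead of calling random():

    import random
    import time

    def read_light_sensor():
        """Stand-in for a real analog read; returns a 0-1023 value."""
        return random.randint(0, 1023)

    def set_lamp(on):
        """Stand-in for driving a real output pin."""
        print("lamp", "ON" if on else "OFF")

    # Sense, decide, act, repeat: the whole idea in four lines.
    for _ in range(5):
        level = read_light_sensor()
        set_lamp(level < 300)   # dark room -> light on
        time.sleep(0.5)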

How to Build a Double Pendulum

I've written a detailed article for Make Magazine, issue 22, describing all things related to building a double pendulum. When Mark Frauenfelder first suggested this project, I wasn't familiar with the device. But the more I found out about it, the more I wanted to make it! They are wonderful, mesmerizing, simple, and complex all at the same time.

Full instructions are in the magazine, which will be available in March or April 2010, I think. I've also produced a video that provides a pretty good introduction. Don't worry too much about dimensions: you can make them just about any size and they still look interesting.

The video is posted on YouTube: http://www.youtube.com/watch?v=4W-lRO9kyqk
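If you want to preview the motion on screen before cutting any metal, the double pendulum's standard textbook equations of motion integrate numerically in a few lines of Python. Masses, lengths and starting angles below are arbitrary, per the note above:

    import numpy as np
    from scipy.integrate import solve_ivp

    g, m1, m2, L1, L2 = 9.81, 1.0, 1.0, 1.0, 1.0  # arbitrary units

    def deriv(t, y):
        """Standard double-pendulum equations of motion."""
        th1, w1, th2, w2 = y
        d = th1 - th2
        den = 2 * m1 + m2 - m2 * np.cos(2 * d)
        a1 = (-g * (2 * m1 + m2) * np.sin(th1) - m2 * g * np.sin(th1 - 2 * th2)
              - 2 * np.sin(d) * m2 * (w2**2 * L2 + w1**2 * L1 * np.cos(d))) / (L1 * den)
        a2 = (2 * np.sin(d) * (w1**2 * L1 * (m1 + m2) + g * (m1 + m2) * np.cos(th1)
              + w2**2 * L2 * m2 * np.cos(d))) / (L2 * den)
        return [w1, a1, w2, a2]

    # Start both arms horizontal; nudge the angles a hair and the trajectory
    # diverges completely -- that sensitivity is the "complex" part.
    sol = solve_ivp(deriv, (0, 20), [np.pi / 2, 0.0, np.pi / 2, 0.0], max_step=0.01)
    print(sol.y[0][-1], sol.y[2][-1])  # final angles of the two arms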

Sunday, January 3, 2010

Job for the next decade: Spider Farmer


A couple of years ago the always cutting-edge David Pescovitz of BoingBoing fame gave me a book called Mr. Wilson's Cabinet Of Wonder: Pronged Ants, Horned Humans, Mice on Toast, and Other Marvels of Jurassic Technology. It is an interesting look at the "off-kilter scientific oddities that challenge the traditional notions of truth and fiction."

Since then I've been wanting to build my own cabinet of wonders. I'm not sure where I'd put it, but I envision a big oak curio cabinet with shrunken heads, an umbrella stand made from an elephant's foot, a meteorite or two, and of course, some large, preserved insects.

Meteorites are readily available on the Internet and in rock stores. The elephant stand is probably not available, and the shrunken head sounds hard to get.

Giant preserved insects? They're for sale on the Etsy website. They look cool. What's interesting is that they're claimed to be raised on spider farms.
The insects used in our framed shadowbox butterfly art have been raised on natural cruelty free tropical farms around the world

Wow, spider farming. That's a job for Mike Rowe if I ever heard of one.