February 19, 2014

"Yes you do. I just told you."

Today at Smith Tower (a century-old Seattle building with partly manual elevators: buttons for the floors but levers to open and close the doors):

Me: Five, please.

Elevator operator: (Presses "5", closes door.) You need to press the floor.

Me: Pardon me?

Elevator operator: It's not my job to press the button. You need to do that.

Me: But you just did it.

Elevator operator: I'm saying it's not my job.

Me: Why?

Elevator operator: We're not psychics. We don't know what floor you want to go to.

Me: Yes you do. I just told you.

Elevator operator: But that's your job.

Me: So why don't they let me do your job, too?

Elevator operator: You'd have to ask the management that.

Exit Frank.

February 16, 2014

Feelings, Invalidated

"How was your shower?"

"I feel a thousand percent better."

"That's impossible."

"You're invalidating my feelings. Don't invalidate my feelings."

February 14, 2014

PrimeSense and the iWatch

Why did Apple buy 3D sensor technology vendor PrimeSense? Before I speculate, let me make a necessary digression as to the nature of the so-called "iWatch".

I don't know what Apple is building. I can guess as well (or as poorly) as the next person, and my guess is that they're going to do to the wearable biometrics market with the iWatch what they did to the tablet market with the iPad. Pre-iPad, tablets were terrible, and a common industry question about Apple's widely anticipated entry was, "What's the killer app?" In fact the killer app was the entire iPad experience. Apple took their time, and when they came out with their own tablet, it was so much better than anything that had come before that it single-handedly defined the category.

The world is full of biometric wearables now, and I've owned two myself: the Jawbone Up and (now) the Fitbit Force -- four if you count my two heart rate monitoring Garmin GPS watches. Most of the current crop (the Up, the Force, Nike's FuelBand, et al) are glorified pedometers. My guess is that where the Up, Force, and FuelBand each collect one or two streams of data, the iWatch will collect half a dozen or more: motion, location (via tethered iPhone), heart rate, respiration, blood pressure, blood oxygenation... the list goes on. And my guess is that Apple will tie all this data together in a coherent way that makes it incredibly compelling -- and, dare I say, fun -- to track one's health and achieve personal health-related goals. The killer app will be the entire experience.

What would I like the iWatch to look like? In my dreams, it would look something like this incredible design concept from Todd Hamilton. But who knows? In my experience, Apple's new products tend to be less fanciful than our imaginations (unencumbered by reality such as they are), yet more useful on a quotidian basis.

In any case, to return to the opening question: why did Apple buy PrimeSense? Remember, PrimeSense is the Israel-based company that Apple bought for a reported $360 million late last year. PrimeSense developed the 3D sensing technology used as the basis of the first version of Microsoft's Kinect system.

When the acquisition was announced, I kept waiting for a coherent explanation of why Apple would buy them. The answers seemed to be that Apple wanted to add Kinect-style capabilities either to AppleTV or to the iPhone. Neither explanation holds much water, managing to seem at once both too obvious and insufficiently useful.

But there is an explanation that makes sense to me, and I haven't heard it anywhere else, so I'll put it forward here. What if Apple bought PrimeSense for the iWatch? One of the limitations of biometric wearables is that, being attached to the body at a single point, they typically don't have a great idea of what the wearer is doing. They can measure motion through space and time, but they're only measuring motion of one body part (usually the wrist). That's incredibly limiting. You'd like your wearable to know that you're running uphill, not downhill; that you're doing dumbbell presses, not bench presses; that you're walking on a treadmill, not on a trail. (I'd just like my Fitbit Force to know that it's not on my wrist as it thinks, but sitting on the floor of the aircraft cabin where it fell without my knowledge the other day, where it proceeded to rack up thousands of phantom steps due to turbulence.)

Remember that one of PrimeSense's key product areas was mobile. Their Capri 3D sensor is claimed to be the smallest in the world. It's nowhere near small enough for a wrist-based wearable, but with Apple's silicon design expertise, one could imagine this changing rapidly, along with power requirements coming down. (Also, PrimeSense first showed Capri over a year ago; who knows how much smaller and more watt-frugal it is by now?)

So why embed a Kinect-style sensor in the iWatch? Because it would give Apple an incredible amount of information about what its wearer was doing. It wouldn't just know you were doing dumbbell presses; it would be able to critique your form. It wouldn't just know you were running uphill, it would know the slope and your stride length. It wouldn't just know you were sitting at your desk, it would know your posture. Combined with the appropriate iPhone- or iPad-based software, a PrimeSense-equipped iWatch could be an all-encompassing activity coach, ready to help you run faster, lift more, even hit a tennis ball better than you ever have. And combining that motion data with biometric sensing capabilities would give Apple unprecedented accuracy in calculating energy usage and body efficiency. If Apple could solve the technical challenges, it would be a home run for the iWatch.

I don't underestimate the challenges in pulling this off. As noted above, the Capri technology would need to be substantially miniaturized, and put on a serious energy diet (whatever the version shown consumed would almost certainly be too much for a wearable). Further, all the 3D sensing solutions I can think of use fixed sensors -- think of the Kinect sitting on your TV stand. An iWatch-based sensor would need to cope with its own movement through space. All this is no small amount of work -- maybe too much even for Apple in the short term. I don't know.

It's just a theory for now. But it's the best one I've heard.

January 14, 2013

No, No, No

In the past few weeks, various people have been linking to this page, which contains an animated startup screen for a hypothetical 16-bit video game based on Calvin and Hobbes. In case it goes away, here's a capture of it:

People seem to love this idea. And who wouldn't? It's Calvin and Hobbes, which for many people (including me) is and forever will be the greatest comic strip of all time, whose creator walked away at his peak -- the Sunday-funnies equivalent of quarterbacking your team to a Super Bowl win and then retiring on the spot. Add in the retro 16-bit look and it's just overflowing with cuteness.

But what would the actual game play be for a Calvin and Hobbes game? What would you as the player do in the game? Take a look at Wikipedia's list of video game genres and ask yourself whether even one of them would lend itself to a Calvin and Hobbes game that would be anything other than terrible.

I'll save you the trouble: no, there does not exist a video game genre that would make for a decent -- even halfway-decent -- Calvin and Hobbes game.

Partly this is because Bill Watterson was so protective of his creation, so consistently unwilling to ever commercialize his characters in any way beyond the strip itself. But partly it's because the joy we derive from Calvin and Hobbes comes from interactions at which computers are -- currently, at least, and for the foreseeable future -- terrible. His best friend and his parents make wry observations about his actions that are humorous and provide insight into human nature. Computers aren't wry; they aren't humorous; and they certainly don't understand human nature.

Don't misunderstand me; this isn't some sort of "computers are mindless automatons and ever shall be" rant. Far from it. I'm convinced there's no fundamental difference between biological and electronic computing, and I'm convinced that computers will eventually reach human levels of intelligence, and possibly beyond -- hopefully in my lifetime. But we're not even close yet. That means that a Calvin and Hobbes video game would necessarily be based on existing game design paradigms, which require very little or no machine intelligence to implement.

I can imagine what would happen if a video game company got its hands on Calvin and Hobbes. We'd have a Calvin-vs-Susie snowball fight artillery game, a Spaceman Spiff shooter game, a seek-and-find game with Calvin's mutant snowmen...

I need to go take a shower now.

September 08, 2012

Simulation-Based Science

[This is cross-posted from the original entry on the Z Shift corporate blog.]

This is adapted and extended from ideas I first put forth within a keynote delivered at MODSIM World Canada in Montréal in June 2010.

On the Navajo Indian Reservation in northern Arizona is a place called Long House Valley. This valley was inhabited by a people known as the Anasazi from 1800 BC to 1300 AD, at which point they abandoned it for reasons that remain unclear (1). This abandonment is a mystery that archaeologists have been studying for decades. Why did the Anasazi disappear? There's no written record, and no obvious catastrophe -- no meteors, no volcanoes, no anachronistic herds of saber-toothed cats. Traditional archaeological approaches haven't yielded a definitive answer. We know what killed the dinosaurs 65 million years ago, but we don't know why the Anasazi left Long House Valley 700 years ago.

We can't go back and observe the Anasazi directly. But what if we could simulate their society in an attempt to understand what happened? As it happens, many researchers have done just that.

Long House Valley has been described as "one of the icon models of the agent-based modeling community" (2). It's relatively small (just 96 square kilometers) and well-bounded. There is a rich paleoenvironmental record that can be used as the basis of simulations. And there's a mystery to be solved. As a result, numerous Anasazi simulations have been built. These simulations typically cover periods of hundreds of years and model everything from family size and composition to population growth, weather patterns, agricultural productivity, and the like.

What have we learned from these simulations? What they tend to show, consistently, is that environmental factors by themselves don't explain the complete abandonment of Long House Valley. There was a 300-year drought in North America beginning around 1150 AD, and that seems to have contributed to the departure of the Anasazi. There was a drop in water table levels that also contributed. And overfarming seems to have taken its toll. But even with all these factors taken into account, the valley could have supported a smaller population.

In other words, despite all the work that has been done, there is still a significant mystery attached to Long House Valley. But thanks to simulation, the mystery is smaller. It's not why the Anasazi left, but instead, and more precisely, why all the Anasazi left. A logical reason would be social pressures, but that remains to be addressed by future simulations.
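To make the idea concrete, here is a toy agent-based sketch in the spirit of these models. Every number and rule in it is invented for illustration -- it is not taken from any published Long House Valley simulation. Households farm a shared valley; per-plot yields shrink as the population presses against the arable land, and a long drought cuts fertility partway through the run.

```python
import random

SUBSISTENCE = 800    # maize (kg) a household needs per year (illustrative)
SPLIT_STORE = 1600   # surplus at which a household fissions
CAPACITY = 100       # households the land can feed at full fertility

class Household:
    def __init__(self, store=1000.0):
        self.store = store

def step(households, fertility, rng):
    """Advance the valley one year; households below subsistence leave."""
    n = len(households)
    if n == 0:
        return []
    pressure = min(1.0, CAPACITY / n)  # overfarming reduces per-plot yield
    survivors = []
    for h in households:
        harvest = max(rng.gauss(fertility * pressure, 200), 0.0)
        h.store += harvest - SUBSISTENCE
        if h.store <= 0:
            continue  # household emigrates (or starves)
        if h.store >= SPLIT_STORE:
            h.store /= 2
            survivors.append(Household(h.store))  # daughter household
        survivors.append(h)
    return survivors

def run(years, fertility_for_year, seed=1):
    rng = random.Random(seed)
    households = [Household() for _ in range(20)]
    history = []
    for year in range(years):
        households = step(households, fertility_for_year(year), rng)
        history.append(len(households))
    return history

# Fertility drops when a long drought begins at year 100:
history = run(200, lambda y: 900 if y < 100 else 600)
```

Even in a toy like this, the drought tends to shrink the population rather than empty the valley outright -- which is precisely the shape of the gap the real models leave open.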

Can we ever prove what happened to the Anasazi via simulation? No. Simulation can tell us what might have happened, what likely happened, but it can't tell us definitively what did happen. But knowing what might have happened, what likely happened -- these are valuable in and of themselves. They help us narrow our future efforts. They provide a base upon which future researchers can build. And as agent-based computational simulations improve in quality, and as more and more independently developed simulations return similar results, we can move from likely to probable.

Studying the reasons for the disappearance of the Anasazi probably doesn't hold a great deal of relevance for most people on a daily basis. But it does point to something that is important to many people: the use of simulation to understand things we otherwise can't.

The use of simulation as a scientific research tool is spreading rapidly, from simulations of sociological events like the disappearance of the Anasazi to simulations of galactic formation taking place over tens of millions of years; from simulations of rat brains at a level of detail sufficient to replicate their functioning to simulations of global climate patterns and how they are changing (and might change) as the result of human activity. This presents tremendous opportunities for us to know the previously unknowable. The problem with this new new kind of science (with apologies to Stephen Wolfram) is that simulation has not traditionally been a component of the scientific method.

The first person I heard articulate the problem of integrating simulation into the scientific method was Dr. Rick Satava, Professor in the Department of Surgery at the University of Washington. As he put it in the abstract to his 2005 paper on the subject (3):

The scientific method has been the mainstay of scientific inquiry and clinical practice for nearly a century. A new methodology has been emerging from the scientific (nonmedical) community: the introduction of modeling and simulation as an integral part of the scientific process. Thus, after the hypothesis is proposed and an experiment is designed, modern scientists perform numerous simulations of the experiment. An iterative optimization of the design of the experiment is performed on the computer and is seen in virtual prototyping and virtual testing and evaluation. After this iterative step, when the best design has been refined, the actual experiment is conducted in the laboratory. The value is that the modeling and simulation step saves time and money for conducting the live experiment. The practice of medicine should look to the tools being used by the rest of the scientific community and consider adopting and adapting those new principles.

As Rick says in the talk he gives based on these concepts, "Would you rather run eight iterations of an experiment, or two to the eighth iterations?"

Rick's focus is on medicine, so he sees (if I understand him correctly) simulation as a way to shrink the experimental space when the experiment moves from the virtual to the real. In other words, if we can use simulation to discard thousands or even millions of scenarios, we can focus our limited dollars for expensive real-world experimentation on the most promising possibilities. But what happens when real-world experimentation isn't possible, as in the case of the Anasazi?

The problem isn't limited to just the study of historical events such as those at Long House Valley. Climate change is an area of tremendous interest to researchers right now. Arguably, one of the reasons that humanity has yet to address climate change with a level of seriousness appropriate to the findings of the scientific community is that scientists can't prove beyond a shadow of a doubt whether carbon dioxide levels will continue to rise, or what will happen to the Earth if they do. Proof is difficult to come by when the duration of a real-world experiment stretches beyond weeks and months into decades and even centuries. Or take the challenge of understanding how galaxies form. We have theories about how they do so, and these theories often involve galactic collisions that take place over tens of millions of years. How can we "prove" our theories when the timescale of the experiment is an order of magnitude (or more) greater than human beings have existed?

I believe that we're going to have to come to a new understanding of the scientific method. For domains in which we can validate the results of a simulation using real-world experimentation, Rick's description of Stephen Wolfram's methodology is good: "Build the computer model, add the data from a real world experiment, see if the results match real-world expectations, change the input data to more closely approximate the model, and run the next iteration... until there is concurrence with the evidence of real-world results." But for domains in which real-world validation is impractical for one reason or another, then we will have to agree on a standard of evidence that allows us to accept a theory as provisionally proven, as the best evidence available at the time, and move on, knowing that revisions may be necessary as better evidence is developed.

The obvious danger with this approach is that we could find ourselves building scientific houses of cards. Simulations based on inaccurate starting data, inaccurate simulation algorithms, or both, will produce inaccurate results. It is entirely conceivable that disparate researchers around the world could simulate the same events using different techniques and come to such similar results that all concerned accept these results as provisionally proven and begin building upon them -- and yet for these disparate researchers to all be wrong, even profoundly so.

This will be a growing challenge to the simulation and scientific communities for many years to come. But the rewards are too great to not solve the problem. And I'm sure that we are -- collectively -- up to the task.


1. Axtell RL, Epstein JM, Dean JS, Gumerman GJ, et al. Population growth and collapse in a multiagent model of the Kayenta Anasazi in Long House Valley. Proceedings of the National Academy of Sciences. 2002;99:7275-7279. Full text (PDF).

2. Janssen MA. Understanding artificial Anasazi. Journal of Artificial Societies and Social Simulation. 2009;12:13. Full text.

3. Satava RM. The scientific method is dead -- long live the (new) scientific method. Surgical Innovation. 2005;12(2):173-176. Abstract. Full text (PDF).

September 05, 2012

Research Results: Can Simulation Improve Healthcare System Performance?

[This is cross-posted from the original entry on the Z Shift corporate blog.]

Using simulation to improve medical training is a fairly well researched and understood phenomenon at this point. Many articles point to the effectiveness of using simulator-based systems to improve the results of clinician training. (I'm planning on providing an overview of this in a future blog post.) But can simulation be used to improve the systemic performance of clinical care? In other words, can we use simulation to model clinical healthcare systems and then improve their real-world performance based on the results of the simulation? This is one of our focus areas at Z Shift and so it's of great importance to us. Together with my co-author Dr. Robert Szczerba of Lockheed Martin, I hypothesized about this in a paper that won the Best Paper award at I/ITSEC (Interservice/Industry Training, Simulation, and Education Conference) in 2010, "Simulated clinical environments and virtual system-of-systems engineering for health care". (Full text (PDF). Slides (PDF).) But what about results from actual clinical trials?

As it turns out, a number of papers have been written on various aspects of this subject, over a period of decades, and the results have been positive. A brief survey of the available literature revealed the following:

  • A 1989 article published in Annals of Emergency Medicine described a "computer simulation model of emergency department [ED] operations". The authors concluded that "simulation is a potentially useful tool that can help predict the results of changes in the ED system without actually altering it and may have implications for planning, optimizing resources, and improving the efficiency and quality of care". (1)

  • A 2007 study published in Pediatric Emergency Care found that a model of patient flow within a pediatric emergency department based on discrete event simulation "accurately represents patient flow through the department and can provide simulated patient flow information on a variety of scenarios. It can effectively simulate changes to the model and its effects on patient flow". (2)

  • A 2008 article from AORN Journal [Association of periOperative Registered Nurses] described a simulation project "performed to assist with redesign of the surgery department of a large tertiary hospital and to help administrators make the best decisions about relocating, staffing, and equipping the central sterilization department". The authors found that "simulation can facilitate the design of a central sterilization department and improve surgical sterilization operations". (3)

  • An article published in the Journal of Healthcare Management in 2011 discussed the use of discrete event simulation as a "cost-effective way to diagnose inefficiency and create and test strategies for improvement". The authors concluded that while discrete event simulation "is not a cure-all for clinic throughput problems," it can nevertheless "be a strong tool to provide evidentiary guidance for clinic operational redesign". (4)

  • A 2011 article in the Journal of Healthcare Quality described the use of discrete event simulation to "model the operations and forecast future results for four orthopedic surgery practices". The authors concluded that simulation "was found to be a useful tool for process redesign and decision making even prior to building occupancy". (5)

As stated, this was from a relatively brief survey. I suspect that a more thorough search of the available literature would reveal additional similar articles. I should also point out that I haven't cherry-picked these results; all the relevant articles I found had positive conclusions on the question of using simulation to systemically improve healthcare. It's highly encouraging.
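Most of the studies above use discrete event simulation, and the core of the technique fits in a page. Here's a minimal sketch of an emergency department as a queueing system -- every rate and staffing number is hypothetical, not drawn from any of the cited papers. What it shares with those studies is the essential move: change the system in software without actually altering it.

```python
import heapq
import random

def simulate_ed(n_patients=500, n_clinicians=3, seed=7):
    """Return the average patient wait (minutes) for one simulated day... or several."""
    rng = random.Random(seed)
    # Patient arrivals as a Poisson process (mean 10 min between arrivals).
    t, arrivals = 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(1 / 10.0)
        arrivals.append(t)
    # A heap of times at which each clinician next becomes free.
    free_at = [0.0] * n_clinicians
    heapq.heapify(free_at)
    waits = []
    for arrive in arrivals:
        soonest = heapq.heappop(free_at)      # next clinician to free up
        start = max(arrive, soonest)          # wait if everyone is busy
        waits.append(start - arrive)
        service = rng.expovariate(1 / 25.0)   # mean 25 min of treatment
        heapq.heappush(free_at, start + service)
    return sum(waits) / len(waits)

# "Predict the results of changes in the ED system without actually
# altering it": compare two staffing levels on the same arrival stream.
wait_3 = simulate_ed(n_clinicians=3)
wait_4 = simulate_ed(n_clinicians=4)
```

Running the comparison shows the kind of question these papers answer: what does one more clinician buy you in waiting time, before you hire anyone?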


1. Saunders CE, Makens PK, Leblanc LJ. Modeling emergency department operations using advanced computer simulation systems. Annals of Emergency Medicine. 1989 Feb;18(2):134-40. Abstract.

2. Hung GR, Whitehouse SR, O'Neill C, Gray AP, et al. Computer modeling of patient flow in a pediatric emergency department using discrete event simulation. Pediatric Emergency Care. 2007 Jan;23(1):5-10. Abstract.

3. Lin F, Lawley M, Spry C, McCarthy K, et al. Using simulation to design a central sterilization department. AORN Journal. 2008 Oct;88(4):555-67. Abstract.

4. Parks JK, Engblom P, Hamrock E, Satjapot S, et al. Designed to fail: how computer simulation can detect fundamental flaws in clinic flow. Journal of Healthcare Management. 2011 Mar-Apr;56(2):135-44; discussion 145-6. Abstract.

5. Montgomery JB, Linville BA, Slonim AD. Desktop microsimulation: a tool to improve efficiency in the medical office practice. Journal of Healthcare Quality. 2011 Sep 13. doi: 10.1111/j.1945-1474.2011.00166.x. Abstract.

September 03, 2012

Healthcare Simulation in the Post-Mannequin Era

[This, which I co-authored with Dr. Robert Szczerba of Lockheed Martin, is cross-posted from the original entry on my corporate Z Shift blog, where it was in turn posted after originally being published on the Intelligent Hospital Today website.]

In 2010, Apple Inc. CEO Steve Jobs used the words "post-PC era" to describe the impact of new computing devices such as tablet computers. (1) In this description, he was clear that personal computers (PCs) would remain with us for a long time to come. By "post-PC era," Jobs meant the era in which PCs are no longer the focal point of computing for most people.

In a manner similar to the early PC market, the healthcare industry has mainly focused on the interactions with a single technology: the mannequin. When healthcare professionals use the terms "simulation" and "simulation center" to describe how they train clinicians, what they typically mean is "mannequin" and "mannequin center."

Good reasons exist for this. The focus of a clinician's work is, by definition, interacting with patients, and the mannequin is the best technology currently available for the immersive simulation of patients, providing for both input (clinician actions) and output (patient responses) using a variety of mechanisms. Mannequin technology is well-understood, and studies across a variety of clinical task domains have validated the value of mannequin-based training. (2, 3)

Yet with this in mind, we believe that the focus on the mannequin for clinician training is retarding progress in the evolution of healthcare simulation technologies in general, impacting our ability to significantly improve patient safety and the overall quality of care.

We believe that the focus of healthcare simulation can, should, and must switch away from the mannequin and toward the software-based simulation of clinical systems, which integrates virtual patients, clinicians, devices, settings, and processes. Moving our focus away from hardware and toward software will enable more rapid progress in fidelity, functionality, and price-performance.

To illustrate our point, it is useful to examine the history of flight simulation, which serves as the leading example of how training via simulation can improve real-world performance in complex environments.

Four decades ago, creators of flight simulators began to shift their development approach in a way that would profoundly impact the future of aviation. Prior to this time, flight simulators had been based upon simple analog electronics. However, computer developments in the 1960s offered the opportunity to move the task of the simulation of flight from analog electronics to computer software, with the software controlling a physical simulator. This is not dissimilar from how mannequins have worked since the first modern computer-controlled mannequin, Sim One, was introduced in 1967. (4, 5)

After switching to software-based technologies, the next generation of flight simulators used computer graphics to replace the mechanical and video-based systems heretofore used to display terrain to simulator-based pilots. Prior to this change, a typical high-end flight simulator would have included a "model board," a large physical model of terrain, over which a computer-controlled camera would "fly," providing images to the pilots.

While model boards provided a reasonable method of providing imagery given the technology of the time, they were ultimately limiting. Simulators could only "fly" over terrain that had been physically modeled. The model boards took up substantial space and required effort to handle. A model board could only be used by one flight crew in one simulator at a time. The view through the camera was limited by the degrees of freedom provided by the camera control mechanism.

The change to computer-generated graphics removed these limitations and more. Any environment that could be imagined could be drawn algorithmically, from any viewpoint, in a variety of simulated conditions. Further, this development eliminated flight simulation's last remaining dependency upon hardware for user interaction. Although flight simulators continued to make use of physically simulated cockpits and motion control systems, these were no longer essential. A flight simulator could be defined purely in terms of software and interacted with using mass-market input devices such as keyboards and mice. The result of this was that the rate of improvement in the fidelity of flight simulators was limited only by increases in computer processing power, which has grown exponentially since the invention of digital computing. (6)

Around this time, one other development took place that dramatically impacted the evolution of flight simulators. The introduction of personal computers in the late 1970s enabled anyone to develop or extend flight simulator technology. The first amateur flight simulator for a popular personal computer was released in 1980. (7) While it was primitive, within a decade, its successors were capable enough to be certified for pilot training. By the 2000s, entrepreneurs and hobbyists had released tens of thousands of add-ons that extended the functionality of multiple flight simulators.

With this history in mind, we re-examine healthcare simulation.

In flight simulation terms, healthcare simulation is approximately at the model board level of the early 1970s. Mannequins are computer-controlled, but rely on expensive, proprietary hardware for user interaction. The focus of healthcare simulation development is on improving the interactivity and fidelity of mannequins themselves. This would be as if the only focus of flight simulation today were on making better cameras and more realistic-looking model boards.

We do not claim that mannequins are going away. As with PCs in the post-PC era, the post-mannequin era will be one in which mannequins play an important but not central role. In the post-mannequin era, the mannequin will be seen as one of many possible methods of enabling user interaction with rapidly improving simulations of clinical system-of-systems. (8) As a result, we expect to see less investment in proprietary mannequins and associated hardware, and more investment in the research, development, and deployment of software simulations. (9)

Given current practices in the larger software development community and the certain involvement of academia, we expect that most useful and popular software simulations of clinical systems will be partly or fully open. This will move responsibility for the advancement of healthcare simulation away from a relatively small group of for-profit firms and toward the broadest possible community, including healthcare researchers and providers. Experience with prominent open software projects (10) suggests that this will have a dramatic and positive effect on the pace of development and the quality of the results.

The next generation of healthcare simulation has the potential to revolutionize the practice of medicine, from improving the quality of care to reducing overall waste and errors. However, for this "evolution to revolution" to occur, the industry needs to stop thinking in mannequin-centric terms and start thinking in terms of the holistic software simulation of clinical systems, with the mannequin as one piece of a much larger puzzle. To the extent that the healthcare industry adopts this attitude, it will enable dramatic advances in simulation for healthcare, and we will usher in a new era: the post-mannequin era.


August 29, 2009

A World with More Guns

Guest-blogging for Andrew Sullivan, Patrick Appel writes (here and here) about the idiocy of people bringing guns to political protests.

I remember how, after the Virginia Tech massacre, anti-gun groups used it as a justification to call for additional restrictions on gun purchases. Pro-gun groups replied by saying, in essence, "if just one other student had been armed, the shooter could have been stopped, so clearly we need more guns."

In an abstract sense, I see both sides' points. Generally speaking, the nations with fewer guns and greater restrictions on purchasing them and owning them have much lower rates of gun violence, so restricting them here might have a beneficial effect. And on the other side of the debate, yes, it's true that one gun-carrying student might have been able to stop the massacre soon after it started -- or even intimidate the shooter into not going through with it in the first place.

But in the real world, both sides' arguments break down.

There are so many guns in this country, so easily purchasable, that it's hard to see how tweaking the system at its edges is going to have any effect at all. If you want a gun, you're going to be able to get a gun. It's that simple. I imagine that gun control advocates are pursuing a policy of incrementalism. They might admit in a private moment that a particular restriction won't have any practical effect on gun violence, but that as part of a gradual progression over many decades, it makes sense. But that's not how it's presented, and I'm not sure that it does make sense in the end. I wonder if, when it comes to the US, the genie is permanently out of the bottle, and in the end there's nothing we can do about it.

It's the gun rights advocates' vision of the world that I ultimately find horrifying, though. Whenever a shooting spree occurs, they remind us of how they've been calling for fewer restrictions on carrying guns, and how if just one person there had been armed, this latest incident wouldn't have had to happen. Have they thought this through to its conclusion? A nation in which a substantial percentage of people walk around carrying guns? Seriously, have they thought this through? Guns at the grocery store? At church? At football games? At political rallies? In hospitals? At business meetings? Can this be the world in which they want to live? Can they believe we'd be safer as a result in this world? Can they believe fewer people would die from guns in such a world?

August 26, 2009

The Stock Market Under Obama

As of yesterday, with a closing mark of 9,539.29, the Dow Jones Industrial Average was up:

  • 20.0 percent since Barack Obama's inauguration (7,949.09 on 20 January)
  • 26.3 percent since the signing of the American Recovery and Reinvestment Act of 2009 (7,552.60 on 17 February)
  • 45.7 percent since the market's low point this year (6,547.05 on 9 March)
Now, compare that to the eight years of the Bush administration. From 19 January 2001 to 20 January 2009, the Dow Jones fell 24.9 percent, from 10,587.59 to 7,949.09.

In case you're curious, by my calculations, the Dow Jones rose 228.2 percent during the eight years of the Clinton administration; rose 45.4 percent during the four years of the first Bush administration; and rose 130.6 percent during the eight years of the Reagan administration.
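For the curious, the figures above are plain percent-change calculations from the closing values quoted. A quick sketch (mine, not the post's) that reproduces the three bulleted numbers:

```python
def pct_change(start: float, end: float) -> float:
    """Percent change from start to end, rounded to one decimal place."""
    return round((end - start) / start * 100, 1)

# Dow Jones closing values quoted in the post.
CLOSE_AUG_25_2009 = 9539.29

print(pct_change(7949.09, CLOSE_AUG_25_2009))  # since the inauguration (20 January)
print(pct_change(7552.60, CLOSE_AUG_25_2009))  # since the ARRA signing (17 February)
print(pct_change(6547.05, CLOSE_AUG_25_2009))  # since the year's low (9 March)
```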

Am I suggesting that presidents be judged by the performance of the stock market during their terms? No. But I do wonder how it is that Democrats are still seen by many as bad for business. At least one important type of empirical evidence would suggest otherwise.

August 25, 2009

"Degraded, Barbaric, and Depraved"

From Glenn Greenwald's article on the CIA Inspector General's 2004 report on torture:

The fact that we are not really bothered any more by taking helpless detainees in our custody and (a) threatening to blow their brains out, torture them with drills, rape their mothers, and murder their children; (b) choking them until they pass out; (c) pouring water down their throats to drown them; (d) hanging them by their arms until their shoulders are dislocated; (e) blowing smoke in their face until they vomit; (f) putting them in diapers, dousing them with cold water, and leaving them on a concrete floor to induce hypothermia; and (g) beating them with the butt of a rifle -- all things that we have always condemned as "torture" and which our laws explicitly criminalize as felonies ("torture means... the threat of imminent death; or the threat that another person will imminently be subjected to death, severe physical pain or suffering...") -- reveals better than all the words in the world could how degraded, barbaric and depraved a society becomes when it lifts the taboo on torturing captives.
I hope that one day -- in my lifetime -- we as a society come to realize just how wrongly we behaved in the years following 9/11. No one I know would support internment camps like those in which we placed Japanese-Americans during World War II. So there is precedent. And hope.

August 23, 2009


I loved this postcard from a recent edition of PostSecret:

From PostSecret
It reads:
I am a 55 year old agnostic. If, when I die, I find out there is a heaven, I can't be certain which of the many people I have shared my life with will be part of the people I share heaven with, but I am positive I will spend eternity tossing this dog a Frisbee.

Alaska's Prayer Cards

I've been doing a fair amount of flying on Alaska Airlines this year, and so have seen their "prayer cards" on many occasions. Prayer cards are included with meals and are printed with Biblical quotations, such as:

I will be glad and rejoice in you;
I will sing praise to your name
O most high.
I will praise God's name in song
and glorify Him with thanksgiving.
PSALM 69:30
Over the years, various people and groups have raised the issue of whether Alaska's prayer cards are appropriate (see here and here). Salon devoted an article to the topic, defending Alaska's right to distribute the cards while taking exception to their official corporate response to complaints, which links "Judeo-Christian beliefs" with the US government.

I generally agree with Salon's analysis, but a couple of points:

First, it seems to me that Alaska is missing an opportunity to draw from a much larger body of religion than simply the Old Testament of the Bible (if they have used verses from the New Testament, I haven't seen them). Other religious texts have words of wisdom, and anything that helps educate people on the broad spectrum of beliefs in the world, hopefully leading to more tolerance, can only be a good thing. For example, this from the Talmud:

Whoever destroys a soul, it is considered as if he destroyed an entire world. And whoever saves a life, it is considered as if he saved an entire world.
Or this from the Koran:
God helps those who persevere.
Or this saying attributed to Buddha:
Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense.
Second, assuming that Alaska is going to use only Christian Biblical quotes, why are the two I could find online (I don't save them) both about praising God?

Perhaps this is the agnostic in me, but I've never understood the obsession in some religious texts with singing the praises of one's chosen divine being. (Doing research for this article, it seemed like most of the quotes I found from the Koran were about the importance of prayer.) Contrast this with the quote from Buddha above, encouraging skepticism, including of himself.

In any case, if you're going to choose to draw from the Bible, how about this from 1 John 4:18:

There is no fear in love. But perfect love drives out fear, because fear has to do with punishment. The one who fears is not made perfect in love.
Or this from Proverbs 12:16:
A fool shows his annoyance at once, but a prudent man overlooks an insult.
Or this from Acts 20:35:
It is more blessed to give than to receive.

July 07, 2009

Snakes, Multiple Varieties of

The Seattle Times ran a story on a snake shelter built in Jose Rizal Park in Beacon Hill -- a place for garter snakes to hibernate and hide from predators:

Beacon Hill's newest community center isn't much to look at -- a jumbled pile of rocks, really. But for some of Jose Rizal Park's most secretive and slithery residents, it's a place to hang out.

Last month, garter snakes populating the park's tree-tangled west side got a construction project designed just for them. Called a herpetarium, it's a shelter fit for any reptile, with its small, pyramid-shaped form and walls lined with recycled scrap...

The reptile-friendly pyramid has a base 4 feet long on each side, and an apex 2 ½ feet tall...

Such chambers let snakes hibernate during the winter. When the snakes emerge, they warm themselves on sunlit rocks, with easy escape should predators pass by.

The best thing about the story, though, was the first comment posted:

Odds are good I will find a couple of ex-boyfriends residing here. Posted on July 5, 2009 at 6:57 AM by cheezybreezy.

June 28, 2009

This Week's Tweets


...likes Starbucks' new store design direction: local craftsmen, regional motifs, recycled materials, and LEED as well. http://bit.ly/Tq88K
9:41 AM Jun 28th from bit.ly

While the new store designs are highly interpretive, they share several core characteristics:
  • Celebration of local materials and craftsmanship;
  • Focus on reused and recycled elements;
  • Exposure of structural integrity and authentic roots;
  • Elevation of coffee and removal of unnecessary distractions;
  • Storytelling and customer engagement through all five senses; and
  • Flexibility to meet the needs of many customer types – individual readers and computer users, as well as work, study and social groups.
...highly recommends this article debunking Canadian health care myths. They spend "less money... to get better outcomes." http://bit.ly/u44Sg
7:31 AM Jun 28th from bit.ly

As a Canadian living in the United States for the past 17 years, I am frequently asked by Americans and Canadians alike to declare one health care system as the better one.

Often I'll avoid answering, regardless of the questioner's nationality. To choose one or the other system usually translates into a heated discussion of each one's merits, pitfalls, and an intense recitation of commonly cited statistical comparisons of the two systems.

Because if the only way we compared the two systems was with statistics, there is a clear victor. It is becoming increasingly more difficult to dispute the fact that Canada spends less money on health care to get better outcomes.

...likes living in a place where to say "I'll have the salmon" would be strange and unhelpful -- like saying "I'll have the chicken" elsewhere.
7:29 PM Jun 27th from web

...can't get enough of Jabo0odyDubs' versions of Billy Mays infomercials. They get funnier with repeated viewings. http://bit.ly/2qFuwD
9:36 AM Jun 26th from bit.ly

Wow, was my timing bad on this one. Billy Mays died yesterday. Rest in peace, you overachieving pitchman.

...just read how one travel blogger thinks DL should buy AS. Would it be good for DL? Yes. Good for AS or Seattle? Um, no. http://bit.ly/Vs3hO
9:01 AM Jun 26th from bit.ly

...would very much like US politicians to look overseas as they redesign our health care system, but that's not happening. http://bit.ly/yeMH4
7:03 AM Jun 25th from bit.ly

Every day Washington's leaders tell us that we live in an interdependent world with a globalized economy. A butterfly beats its wings in Guangdong province, and four Wal-Marts materialize in Duluth. The peso plunges, and 30 Honda workers get laid off in Marysville. A coal-fired power plant belches carbon dioxide in Prague, and Lohachara Island sinks into the Bay of Bengal.

But change the subject to reform of the health care system, and the community of nations abruptly vanishes. No France, no Canada, no Germany, no Japan. Let there be no mention of any industrialized democracy save that of the United States, which is proud to claim 37th place in the World Health Organization's rankings of the world's health systems and 15th in the Commonwealth Fund's ranking by avoidable mortality of 19 industrialized countries (the highest rank indicates the fewest such deaths). To achieve a better score would be unpatriotic!

The political establishment's hubristic refusal to consider how other countries manage health care is encapsulated in the cliché "uniquely American," which is what Sen. Max Baucus, D-Mont., the lead legislator on health care reform, says he wishes his bill to be. It therefore goes without saying that the finance committee Baucus chairs could find no place in this year's exhaustive health care hearings for a single expert on how other countries achieve better health outcomes for their populations while typically spending, on a per capita basis, half what we do. When the finance committee releases its draft bill this week, it will be almost completely free of foreign influence.

...just enabled emoji on my iPhone. With everything happening in the world, how trivial a tweet is that? Share and enjoy. http://bit.ly/6YAeG
6:01 PM Jun 24th from bit.ly

...is delighted to see that Doug Coupland is updating "City of Glass", his zine-like guide to Vancouver. I can hardly wait! http://bit.ly/dVOSR
8:43 AM Jun 24th from bit.ly

...thinks that, based on the PBS documentary I saw, this upcoming TR Reid book on health care should be required reading. http://bit.ly/1nT9iG
7:18 AM Jun 24th from bit.ly

June 24, 2009

"...How and If You Truly Were Ever Alive"

Dr Jerri Nielsen FitzGerald, famous for diagnosing and treating her own breast cancer from the South Pole, is gone. The Times quotes from an e-mail she sent to her parents while she was there:

More and more as I am here and see what life really is, I understand that it is not when or how you die but how and if you truly were ever alive.
You will be well remembered, Dr FitzGerald.

June 23, 2009

Two Weeks' Tweets


...is agnostic, but stories like this make me think that maybe I'm atheist after all. Certainly I lean that way. http://bit.ly/yndDT
1:26 PM Jun 23rd from bit.ly

A religious ruling permits ultra-orthodox Jews to operate their mobile phones on the Sabbath and religious holidays with their teeth...

Many of the ultra orthodox volunteers... work on the Sabbath and were confronted with the dilemma of how to activate their mobile phones without violating religious rules...

Rabbi Levy Yitzhak Halperin issued a new set of rules involving the use of a specially designed case that prevents phones from being shut down accidentally. To confirm response to dispatch, workers are permitted to hold a small metal pin between their teeth and press the necessary buttons on the phones.

...heard from his neighbors (she Eritrean; he Ethiopian), "we feel like we've found our long-lost brother." What a wonderful thing to say!
6:49 AM Jun 23rd from web

...already has a gift, a hug, and a phone call to show for Father's Day. What a great way to start the day.
9:17 AM Jun 21st from web

Every day, I think a little more that when it comes to loved ones and friends, the very best gift they can ever give me is their time.

...would love to get together with @DavidRPickering and @Joi in Amman or Dubai. That would be a trip to get excited about.
6:52 PM Jun 20th from web

...had tears in his eyes while reading this article about Pixar, "Up", and a dying girl's wish. http://bit.ly/174lGI
11:43 AM Jun 19th from bit.ly

Colby Curtin, a 10-year-old with a rare form of cancer, was staying alive for one thing -- a movie.

From the minute Colby saw the previews to the Disney-Pixar movie Up, she was desperate to see it. Colby had been diagnosed with vascular cancer about three years ago, said her mother, Lisa Curtin, and at the beginning of this month it became apparent that she would die soon and was too ill to be moved to a theater to see the film.

After a family friend made frantic calls to Pixar to help grant Colby her dying wish, Pixar came to the rescue.

The company flew an employee with a DVD of Up, which is only in theaters, to the Curtins' Huntington Beach home on June 10 for a private viewing of the movie.

The animated movie begins with scenes showing the evolution of a relationship between a husband and wife. After losing his wife in old age, the now grumpy man deals with his loss by attaching thousands of balloons to his house, flying into the sky, and going on an adventure with a little boy.

Colby died about seven hours after seeing the film.

...saw Lauren, Carissa, and Clay for dinner. As in Kelsey's former choir-mate, his former office manager, and Aiken. A dinner of coincidences.
7:46 PM Jun 18th from web

...likes the @jheitzeb rule: if you tweet more than 4x/day, you're at risk of being unfollowed. Yes, this means you. No, not you. Yes, you.
4:49 AM Jun 18th from web

...just 'greened' his Facebook picture. "Where is their vote?" -- that's the rallying cry. May fortune favor the protesters in Iran.
11:03 PM Jun 15th from web

See http://helpiranelection.com.

...is getting his Iran updates from Andrew Sullivan -- highly recommended. As for the mainstream media? Epic fail. http://bit.ly/xj3Q2
6:27 PM Jun 15th from bit.ly

...is just back from a visit to Paradise. The kind on the south slope of Mt Rainier. Far more snow than we expected to see. http://bit.ly/Jbb8c
6:49 PM Jun 14th from bit.ly

...will never, ever tire of listening to the version of "One" by Mary J Blige and U2. She practically defines the term "soaring vocals".
8:41 AM Jun 14th from web

...is actually impressed with someone on Fox News. How exactly does Shepard Smith manage to keep his job there? http://bit.ly/t1I6
6:24 AM Jun 11th from bit.ly

...doesn't want to think about what he'd do for this house. Funny, funny video, especially for sci-fi fans. Enjoy! http://bit.ly/ZRhSA
7:56 PM Jun 10th from bit.ly

...recently recommended an article by Atul Gawande. Now Obama is recommending the same article, according to the NYT. http://bit.ly/106NT6
7:06 PM Jun 9th from bit.ly

...says the next time someone claims that blogs are inferior to traditional media and journalism, point them to this page. http://bit.ly/WTw4d
2:53 PM Jun 7th from bit.ly

June 07, 2009

This Week's Tweets

I do much more tweeting than blogging these days, which is leaving my blog feeling unloved. I've been wondering how to address the problem. Cross-posting tweets as individual blog entries feels like overkill, even for someone like me who typically tweets once a day (and tries to make it meaningful). What I've come up with is cross-posting my tweets weekly as a single blog entry. I'm going to give it a try for a while and see how it goes. To make it more interesting, I'll occasionally add more commentary here than is possible within the confines of 140 characters. Consider this an experiment.


...misses Rob Riggle on The Daily Show. This segment on cloned steak is my favorite of his -- especially the "mutant pit". http://bit.ly/oqnE3
6:41 AM Jun 7th from bit.ly

...will never, ever, ever, ever, ever, ever, ever, fly Ryanair. (With apologies to Winston Churchill.) http://bit.ly/15hbGI
4:03 PM Jun 6th from bit.ly

It may not have been a publicity stunt after all. Ryanair CEO Michael O'Leary says the European low-cost giant will indeed start charging customers one pound (about $1.65) to use the toilets on its flights...

O'Leary adds that he's asking Boeing to look into putting credit-card readers on toilet locks for new jets. "We are serious about it," O'Leary is quoted as saying...

O'Leary didn't stop there, taking the toilet idea one step farther. "He's now proposing ripping out two of the three loos on a Boeing 737 to make way for a further six seats, claiming passengers can learn to cross their legs on flights of only an hour or so," writes Alistair Osborne of the London Telegraph. The London Daily Mail quotes O'Leary as saying: "We are flying aircraft on an average flight time of one hour around Europe. What the hell do we need three toilets for?"

...read about a new tomato-derived supplement that eliminates "the oxidation of harmful fats in blood" within eight weeks. http://bit.ly/vl4bQ
7:43 AM Jun 5th from bit.ly

The tomato pill contains an active ingredient from the Mediterranean diet -- lycopene -- that blocks "bad" LDL cholesterol that can clog the arteries.

Ateronon, made by a biotechnology spin-out company of Cambridge University, is being launched as a dietary supplement and will be sold on the high street...

Preliminary trials involving around 150 people with heart disease indicate that Ateronon can reduce the oxidation of harmful fats in the blood to almost zero within eight weeks, a meeting of the British Cardiovascular Society will be told at Ateronon's launch on Monday.

...wonders what happened to the tradition of American elected officials not criticizing the President while he's abroad. http://bit.ly/dfO3X
4:38 AM Jun 5th from bit.ly

...just saw "Up" again, this time with Kelsey. A beautiful film with my beautiful daughter.
7:23 PM Jun 4th from web

...has posted his blog entry on the coming generation of "gestural natives". Remember, you heard it here first. http://bit.ly/12HXD9
12:56 PM Jun 4th from bit.ly

...likes this: "my problems with Obama... fade away when I read the speech. He is absolutely the right man for the job." http://bit.ly/IzjgM
10:07 AM Jun 4th from bit.ly

All my problems with Obama's handling of the financial crisis, the details about Gitmo, footdragging on DADT, etc. or any other details since he took office fade away when I read the speech. He is absolutely the right man for the job.

There is no other candidate that ran for President that could deliver this speech. They couldn't write it, they couldn't deliver it with any sort of credibility, and in all likelihood wouldn't even want to try.

...needs to check his tweets before saying EA did something that was in fact built by Microsoft (Lionhead, to be precise). http://bit.ly/rDV0m
5:02 PM Jun 3rd from bit.ly

...thinks EA's Project Natal looks stunning. But I'd love some one-on-one time with it to understand its limitations. http://bit.ly/rDV0m
2:20 PM Jun 3rd from bit.ly

I actually meant to say "Project Milo" here. Two mistakes in one tweet. A new record.

...would like to know how a list of the "13 best burgers" in Seattle could miss both Lunchbox Laboratory and Quinn's. http://bit.ly/KTUlV
9:46 PM Jun 1st from web

...checked in to find that his SEA-IAD flight was no more. Bird strike. Fan blade damage. Bad weather -> risky connections -> fly tomorrow.
6:32 PM Jun 1st from web

June 04, 2009

Gestural Natives

I've been thinking about the implications of the advances in gestural technology shown at E3 this week: Microsoft's Natal and Sony's 3D input technology. On reflection, I think the most profound implications for gestural technology are going to be in the longer term.

Just as we've been raising a generation of digital natives, the Wiimote and its more advanced successors could be the start of a generation of gestural natives. Remember that Marc Prensky's digital native thesis is that children raised in an environment of interactive technologies are wired differently than their digital immigrant predecessors. They perceive, process, and respond to information differently. (Not better or worse, just differently.)

It seems quite possible to me that children raised in an environment that includes the Wiimote, Natal, Sony's 3D input device, and even (to a lesser degree) multi-touch devices such as the iPhone could be wired differently from their D-pad-using older brothers and sisters. We've raised a generation of kids who are extremely proficient at making what is a fairly abstract connection between mashing buttons and seeing the corresponding results on the screen. (Yes, if you're in or past your early 40s, this is one of the reasons your kids thrash you at video games.) This next generation, the gestural natives, could be equally proficient at using gestural interfaces.

So what are the implications of this? I can think of two.

First, as the children and teenagers of today become the workforce of tomorrow, they're going to expect gestural interfaces and be frustrated and less productive when they don't have them -- just as the digital natives of today are frustrated by linear, non-interactive experiences. We have to be aware of this as we're designing the information technology tools of tomorrow.

Second, we all know from experience that great artists generally have to grow up with the media in which they practice. Think about the earliest movies that you truly enjoy, not as historical artifacts, but as legitimately good cinema. My guess is that most people would point to Gone With the Wind, The Wizard of Oz, Citizen Kane, or Casablanca, all movies made in the late 1930s and early 1940s -- a good 25-30 years after The Birth of a Nation. I would argue that modern media and Moore's Law are shortening cycle times for familiarity with new technologies, but still, I don't think we're going to see the full potential of gestural input until we have designers who have been immersed in it for many years. So don't look for the DW Griffith of gestural input -- much less the Victor Fleming, Orson Welles, or Michael Curtiz -- anytime in the immediate future.

May 30, 2009

No, Sotomayor Isn't a Racist

I was thinking that the "Sotomayor is a racist" meme being pushed by the far-right reminds me in a roundabout way of US-Canada relations. No, bear with me.

Canadians think about the US all the time. They have to. We're ten times their size. Virtually everything we do has the potential to dramatically affect their world.

Meanwhile, most Americans barely think about Canada at all. Can Americans name any two of the five largest cities? The capital? Any province? The name of the prime minister? Right.

As a Caucasian, I didn't think about race much growing up. I had no reason to.

But if I had grown up Hispanic, in the projects, in an era when role models for me of my own ethnicity -- at least in popular culture -- were essentially non-existent?

Or if I had grown up African-American and spent many a dinner wondering if I'd be able to flag a taxi to take my date and me home, or if we'd have to walk?

Or if I had grown up Asian-American knowing that less than 20 years before I was born, my ancestors were being held in domestic prison camps on account of the color of their skin?

How could I not look at the world -- at least in part -- in terms of race? How could I not feel that my ethnicity shaped me and gave me a viewpoint distinct from that of people from different backgrounds?

And would my race-influenced viewpoint make me a racist? No, not in the slightest. It would mean that my world view was informed by race, not necessarily governed by it. It would mean that I took note of race, not necessarily that I discriminated on the basis of it.

To suggest otherwise is to lack empathy. Somehow I think the Rush Limbaughs and Newt Gingriches of the world might see this issue differently if they had ever suffered the effects of discrimination, even indirectly, even if only for a day.

And don't get me started on the "Sotomayor isn't bright" meme. Because stupid people are high school valedictorians, graduate summa cum laude from Princeton, and edit the Yale Law Journal. For Pete's sake.

May 15, 2009

Listening to a Former SERE Instructor

I've been lazily tweeting instead of blogging of late, but this is worth jumping back into the fray for. Earlier this week, Rachel Maddow interviewed a former master instructor and chief of training at the Navy's Survival, Evasion, Resistance and Escape school, better known as the SERE school. The subject was torture. I'm reprinting the interview in its entirety because I think it should be required reading in the debate.

RACHEL MADDOW: Joining us now is Malcolm Nance, former master instructor and chief of training at the Navy's Survival, Evasion, Resistance and Escape school, better known as the SERE school. He's now a U.S. government consultant on terrorism and counterterrorism.

Mr. Nance, thank you so much for joining us.


MADDOW: When you heard Ali Soufan today testify about his interrogation techniques as an experienced FBI interrogator versus these force-based techniques that were reverse-engineered from some of the SERE techniques, does that resonate with you in terms of what you understand about the appropriateness of those techniques as interrogation methods?

NANCE: Well, it resonates with me for a very particular reason. One, the SERE program was started in the 1950s exactly because these techniques were being used against American servicemen. The SERE program and all the techniques carried out that we call enhanced interrogation techniques -- these were reverse-engineered from communists, from totalitarian states, and the Nazis.

So, of course, everything that he said about that -- about bringing the prisoner in, interrogating the prisoner and then him becoming recalcitrant and resistant, that's exactly what we want. And, of course, al Qaeda, of course, won by that, because they defeated the purpose of all the interrogations.

MADDOW: In terms of the argument that SERE-based techniques, these techniques, reverse-engineered as you say from what was done by totalitarian regimes, reverse-engineered and figured out in the '50s -- the argument has been made that because we do it to American troops as part of training it can't be torture, because then people like you who were an instructor at SERE could be charged with torture.

NANCE: That's ridiculous on its face. Listen, there's a whole class of people who I call now "torture apologists." And their full-time job is to go out and find spurious arguments in order to justify exactly why they violated, you know, U.S. legal code. And, of course, the standing order from General George Washington to treat prisoners with dignity.

And so, it's ridiculous. What we're doing is we're allowing a service member the opportunity to practice in a controlled environment over a few moments how to behave and how to react in order to act like Abu Zubaydah, in order for them to become resistant and for them to make sure that the techniques that are being applied to them don't work.

MADDOW: On the issue of sleep deprivation, specifically, sleep deprivation is one of those things that I think is at the top of the slippery slope when people start talking about torture. Well, sure, you don't want to get down to things like waterboarding or pulling people's fingernails out, but a little sleep deprivation never really hurt anybody. We've heard testimony that maybe some forms of mild sleep deprivation were used even before there were any new legal justifications ginned up in Washington to explain that.

What do you think about sleep deprivation in terms of its effect on prisoners in custody, whether it should be seen as part of torture?

NANCE: Well, these are softening techniques. All they did was they decide to bring the person up, keep him awake, whether they were going to walk him around, whether they're going to stand him up, whether they're going to give him loud music. And what you're doing is softening that person.

You're making that person, putting him into a state where you think he's going to be susceptible to answer questions. In fact, it's going to be even more difficult to get him to answer questions. And that, of course, you hit them with a harsh interrogation technique right after that, whether it's slapping or walling or some other physical harm or waterboarding, and you think that's going to snap them out of it -- when, in fact, that's the state we want you in. That's where you're going to be least susceptible to answer honestly. You'll answer gibberish.

MADDOW: That's the situation we want you in if you are an American and we want you to...

NANCE: Absolutely. So...

MADDOW: Well, in the case, then, so I guess we can sort of get there, we can follow the math problem. In the case of an actual ticking time bomb scenario, which is a faulty premise because things don't work out this way in the real world -- would you do SERE techniques on a prisoner in that scenario?

NANCE: No, of course not.

MADDOW: Any of them?

NANCE: No, of course not. Because one, it defeats the ticking time bomb scenario, in that all the prisoner has to do is not answer the question, or, better yet, the prisoner will lie.

And once the prisoner lies -- especially with al Qaeda members, let me tell you something about their ideology -- they have a concept within their ideology called Al-wara wal'bara (ph), and that is absolute devotion to their god, but absolute disavowal and hatred of anything that's not their god. Therefore, anything that they do to foil you is well within their plan. And they take great pride in that. And I'm sure when he was brought back to his little cage or to his holding cell, he saw every time that he defeated us, every time we didn't get an answer out of him or got some gibberish out of him, he saw that as a victory.

MADDOW: Yes. So, you got to outwit him.

NANCE: Well, what we've done is we've created al Qaeda SERE school for them.

MADDOW: Malcolm Nance, former master instructor and chief of training at Navy SERE schools. We've talked a few times over the years about this, and every time, I'm very grateful to have the chance to ask you these questions. So thank you.

NANCE: My pleasure.

April 24, 2009

Research = Bad

The nomination of Governor Kathleen Sebelius of Kansas to head the Department of Health and Human Services is being held up in the Senate:

President Obama will have to wait a bit longer to round out his Cabinet. Senate Republicans refused today to allow a confirmation vote on his health secretary nominee Gov. Kathleen Sebelius (D-Kan.)...

At the start of the session today, Senate Majority Leader Harry Reid (D-Nev.) proposed taking a vote after five hours of debate. But Senate Minority Leader Mitch McConnell (R-Ky.) objected, arguing that lawmakers needed more time to consider her "fairly contentious" selection.

A handful of Republicans have complained about Sebelius' support for abortion rights and her failure to report the full extent of campaign contributions she received from a physician who performs abortions.

Fine, this is how the game is played. I get that.

But then comes the next paragraph of the story, which so neatly, so concisely, so perfectly encapsulates what has gone wrong with the Republican party over the last eight years:

Sen. Jon Kyl (R-Ariz.) opposes Sebelius because of the Obama administration's support for research on the comparative effectiveness of disease treatments. He said he fears the evidence-based approach, coupled with information on price, could lead to rationing of care.
In other words, research is bad if it might lead to a conclusion you don't like.

I'm an Independent, not a registered Republican or Democrat. I enthusiastically voted for Barack Obama, and am delighted to have him as our President, but I don't agree with everything he has done to date. I want an effective opposition challenging him and the Democrats. But this is more of the same ideology-over-facts approach that has led to such disaster in recent times.

April 02, 2009

Interiority, Inner Light, and Living On

In 1993, Douglas Hofstadter, author of the brilliant Gödel, Escher, Bach: An Eternal Golden Braid and many subsequent books, lost his wife Carol quite suddenly while on sabbatical in Italy. From the Carol Ann Brush Hofstadter Memorial Scholarship page:

In 1985, Carol married Doug Hofstadter, who had been an IU professor from 1977 to 1983 but who had recently moved to the University of Michigan. In 1988 Doug was hired back by IU and the young couple returned to Bloomington and started a family -- Danny, born in 1988, and Monica, born in 1991. One of Carol and Doug's fondest dreams was to bring up bilingual children, and so they decided to spend Doug's first sabbatical year (1993-94) in Italy. In August of that year, the family moved to Trento, in the Italian Dolomites, with views of beautiful mountains on every side. In early December, Carol had a series of intense headaches and on December 12, she suddenly fell into a coma. Ten days later, never having regained consciousness, she died of a brain tumor. Though devastated, Doug resolved to finish out the year in Italy in the way Carol would have wanted, and Danny and Monica became truly bilingual, thus realizing Carol's dream.
This is from a January 2007 American Scientist interview with Hofstadter:
In the book you mention losing your wife quite suddenly in 1993, and I was struck by how that affected your thinking and your work. It's a consoling idea that your wife's personality or point of view might persist somehow. Do you still feel that way?

Absolutely. I have to emphasize that the sad truth of the matter is, of course, that whatever persists in me is a very feeble copy of her. Whatever persists of her interiority is not her full self. It's reduced, a sort of low-resolution version, coarse-grained. Otherwise it would be a claim that "it's all fine, she didn't die, she lives on in me just as much as she ever did." And of course I don't believe that. I believe that there is a trace of her "I", her interiority, her inner light, however you want to phrase it, that remains inside me and inside some other people, people who really had internalized her viewpoint, people who really had interacted intimately with her over years, and that trace that remains is a valid trace of her self -- her soul, if you wish. But it's diminished; it's very dilute relative to what existed in her own brain. So there are two sides to the coin. It's consoling on the one hand that there's something left, but of course it doesn't remove the sting of death. It doesn't say, "Oh, well, it didn't matter that she died because she lives on just fine in my brain." Would that it were.

I find this passage quite affecting. Partly it's because it so closely mirrors how I think I would feel in Hofstadter's place. Even more, though, I think it's because it reminds me of the reality of my agnosticism and what I believe the limits are to "living on" after death.

March 29, 2009

The "Feel-Good Movie of the Year"

Writing in the New York Times, Frank Rich described Slumdog Millionaire as follows:

Our feel-good movie of the year is “Slumdog Millionaire,” a Dickensian tale in which we root for an impoverished orphan from Mumbai’s slums to hit the jackpot on the Indian edition of “Who Wants to Be a Millionaire.”
In fact, searching "slumdog millionaire 'feel good movie of the year'" on Google turns up over a thousand hits.

I thought Slumdog Millionaire was an excellent film, but the feel-good movie of the year? That would be like getting a sugar-free lollipop on your way out after a root canal and calling it the "feel-good dentist visit of the year".

March 28, 2009

The Inhumanity of Solitary Confinement

One of my favorite non-fiction authors, Atul Gawande (author of Complications: A Surgeon's Notes on an Imperfect Science and Better: A Surgeon's Notes on Performance), has written a typically excellent article for The New Yorker, "Hellhole", on the effects of long-term solitary confinement:

Consider what we’ve learned from hostages who have been held in solitary confinement [such as journalist Terry Anderson]...

Most hostages survived their ordeal... although relationships, marriages, and careers were often lost. Some found, as John McCain did, that the experience even strengthened them. Yet none saw solitary confinement as anything less than torture. This presents us with an awkward question: If prolonged isolation is -- as research and experience have confirmed for decades -- so objectively horrifying, so intrinsically cruel, how did we end up with a prison system that may subject more of our own citizens to it than any other country in history has?

So hostages held in solitary confinement uniformly view it as a form of torture. What do courts say?

Our first supermax -- our first institution specifically designed for mass solitary confinement -- was not established until 1983, in Marion, Illinois. In 1995, a federal court reviewing California’s first supermax admitted that the conditions "hover on the edge of what is humanly tolerable for those with normal resilience." But it did not rule them to be unconstitutionally cruel or unusual, except in cases of mental illness. The prison's supermax conditions, the court stated, did not pose "a sufficiently high risk to all inmates of incurring a serious mental illness." In other words, there could be no legal objection to its routine use, given that the isolation didn't make everyone crazy.
So that's the legal argument? If something doesn't make everyone crazy, then it's an allowable form of punishment?

What about other Western countries? Do they follow our practices? In a word, no. Gawande describes a British prison program begun in the 1980s that de-emphasized solitary confinement in favor of techniques designed to prevent prison violence, then writes:

The results have been impressive. The use of long-term isolation in England is now negligible. In all of England, there are now fewer prisoners in "extreme custody" than there are in the state of Maine. And the other countries of Europe have, with a similar focus on small units and violence prevention, achieved a similar outcome.
So what about results? At least we can point to results from our efforts, right?
In this country, in June of 2006, a bipartisan national task force, the Commission on Safety and Abuse in America's Prisons, released its recommendations after a yearlong investigation. It called for ending long-term isolation of prisoners. Beyond about ten days, the report noted, practically no benefits can be found and the harm is clear -- not just for inmates but for the public as well. Most prisoners in long-term isolation are returned to society, after all. And evidence from a number of studies has shown that supermax conditions -- in which prisoners have virtually no social interactions and are given no programmatic support -- make it highly likely that they will commit more crimes when they are released. Instead, the report said, we should follow the preventive approaches used in European countries.

The recommendations went nowhere, of course.

So let me see if I have this straight. According to Gawande, we hold probably "the vast majority of prisoners who are in long-term solitary confinement" in the world, at a cost of over $50,000 per inmate per year. Other Western nations have rejected our approach. Hostages held in solitary confinement come to view it as a form of torture. The legal argument in favor of it is that it doesn't make everyone crazy. And studies show that long-term solitary confinement increases the risk of recidivism among inmates eventually released.

Can someone explain to me why we're doing this?

March 27, 2009

Revisiting Antibiotics Usage in Farming

I've blogged before about the nontherapeutic use of antibiotics by the cattle, swine, and poultry industries:

[T]he hog industry alone uses three times the amount of all antibiotics used to treat human illnesses... when we count poultry and cattle, nontherapeutic use of antibiotics in livestock rises to eight times total human usage.
Now Bill Niman and Nicolette Hahn Niman have written a short piece for The Atlantic on the issue, following up on Nicolette's new book:
Over the past several years, each of us have toured numerous industrial-style animal operations, and they were not pretty. We saw pigs confined in metal buildings living on hard, slatted floors and fed daily rations that include such unsavory ingredients as bone meal, blood meal, and drugs, including antibiotics. Stepping into these buildings, we were immediately enveloped by the stench of rotting eggs. The pigs spend 24 hours of every day in crowded conditions standing over their own liquefied manure, bathing in the odor of decaying feces and continually breathing its fumes...

[F]eeding farm animals daily drugs began in poultry production in the 1950s, both as a means to speed animal growth and as a way to control diseases -- an increasingly daunting problem in the crowded confinement buildings that were coming into vogue at the time. Today, confinement operations commonly feed antibiotics to pigs, as well as chickens and turkeys.

It seems to me that the language "nontherapeutic use of antibiotics by the cattle, swine, and poultry industries" is too antiseptic, too kind, too easy to favorably misinterpret. Let me reword that:

In the interest of holding down their expenses, many farmers who raise cows, pigs, chickens, and turkeys in disease-encouraging conditions give these animals antibiotics to keep them from becoming sick and to induce them to grow faster. This practice, which accounts for eight times the total human usage of antibiotics, is contributing to the spread of antibiotic-resistant illnesses, including methicillin-resistant Staphylococcus aureus (MRSA), which may now be responsible for more deaths in the US than AIDS.

When are we going to put an end to this?

Creative Commons License
This weblog is licensed under a Creative Commons License.