prolost

vivere est cogitare

to err is human

Photo by Flickr user chasingfun/Mark Trammell

Every fall, the 120 teams in the NCAA Football Bowl Subdivision (FBS) play 12 or so weeks of college football. At the end of this regular season, the Bowl Championship Series (BCS) releases its final rankings; the teams ranked 1 and 2 are awarded the privilege of competing for the BCS National Championship.

And that’s it.1

The other bowl games select their participants in rather arbitrary fashion, whether by historical conference affiliations (most famously the venerable Rose Bowl Game, which historically pits a team from the West/PCC/AAWU/Pac-8/10/12 against one from the East/Big Nine/Ten), by selecting the best teams available (the bowls have an arcane but ostensibly logical selection hierarchy), or simply by ignoring all traditional rankings and picking the most financially lucrative matchup for the bowl game itself.

The nature of the championship (a single game between teams ranked 1 and 2 by the BCS) is rather frustrating because in almost all other forms of competition the champion is determined by an elimination tournament. The college football model seems not only arbitrary, but unjustifiably so; often more than two teams (maybe many more) can make a reasonable case for being in the championship game. Consequently, the BCS receives considerable and (in my opinion) completely deserved criticism.

What baffles me the most, however, is the disdain for the use of computer models by the BCS. If anything, they are (or ought to be)2 the best part of the entire college football circus.

In brief, the BCS gives equal weight to the Harris Interactive Poll (a media poll), the USA Today Coaches Poll, and the average of the middle four of six computer models in determining the BCS rankings. The computer models thus account for one third (1/3) of the result.
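
(For the curious, the arithmetic is simple enough to sketch. The following is a toy illustration of how such a composite could be computed for a single team, assuming each poll component is expressed as the team's share of the maximum possible poll points and assuming the usual 25-points-for-first-place scoring for the computer rankings; the numbers themselves are invented.)

    def bcs_score(harris_share, coaches_share, computer_points):
        """Blend the three BCS components with equal (one-third) weight.

        harris_share, coaches_share: a team's poll points divided by the
        maximum possible points in that poll (a value between 0 and 1).
        computer_points: six computer rankings converted to points
        (assumed here to be 25 for 1st place down to 1 for 25th).
        """
        ordered = sorted(computer_points)
        # Drop the highest and lowest computer score and average the
        # remaining four against their 100-point maximum.
        computer_share = sum(ordered[1:-1]) / 100.0
        return (harris_share + coaches_share + computer_share) / 3.0

    # Hypothetical team: strong in both polls, slightly weaker by computer.
    print(bcs_score(0.95, 0.93, [25, 24, 25, 22, 23, 21]))  # ~0.94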

It is extremely difficult for humans to make dispassionate analyses. We struggle to identify the sources of our own biases, we subconsciously process information selectively, and we make mistakes. Computers do none of these things. They perform no more or less than the tasks with which they are entrusted, barring technical errors (which are exceedingly uncommon). Moreover, the decisive element of the “computer rankings” of the BCS is not the computers themselves (modern computers being more or less fungible), it is the mathematical formulae by which the rankings are computed. The entire endeavor can only be criticized on the basis of the soundness of said formulae.

And therein lies my primary objection to the way the BCS implements computer rankings, an objection that can hardly be expressed more eloquently or scathingly than Bill James already did in an article in 2009. What the BCS has right now is not a good representation of what mathematical and statistical modeling has to offer for college football, so to criticize it on the basis of its performance is akin to criticizing automobile safety on the basis of a 2007 Brilliance BS6 crash test. The computer models are hampered neither by any flaw inherent to the concept of computer rankings, nor by a lack of football knowledge on the part of their creators. Their shortcomings are symptomatic of an institutional sluggishness on the part of college football, wherein age-old truisms supersede contradictory evidence.

That most of the six computer models employed by the BCS are run by individuals who like the current system is not insignificant. Some of the justifications for the considerable role of human polls in the BCS ranking are downright silly. This gem appeared in a Daily Fix (a Wall Street Journal sports blog) post about the BCS computer models:

[Jeff Anderson, co-creator of the Anderson & Hester computer ranking] argues that human voters are better equipped to judge scores, and distinguish between a 24-14 game where the losing team scores two touchdowns in garbage time and a 24-14 game where the losing team trailed by three late but threw an interception returned for a touchdown while attempting to mount a game-winning drive. “If margin of victory is going to be included in any part of the rankings, it should be included only in the subjective part,” Anderson says. Others point out that in many other sports, playoff seedings are determined solely by won-loss record, and the computer rankings account for the unique nature of college football by accounting for strength of schedule.

“It’s a matter of sportsmanship,” [Bill Hancock, executive director of the BCS] says. “You don’t want a team to run up the score on their opponents, merely so they can move up in the computer rankings.” [1]

So instead of giving the computer models the freedom to employ the soundest methods, the BCS bars them from considering the margin of victory, ostensibly to encourage sportsmanship. Yet it gives two thirds of the vote to humans, who will vote not only on the basis of margin of victory, but really on the basis of whatever the hell they feel like. How is that any more fair? And Jeff Anderson, are you sure computers can’t tell the difference between garbage time and a late win?

I would argue that most people vastly overestimate the value of human polls and desperately underestimate the extent of human biases, particularly their own. If you perceive a computational model to be biased, I can assure you it is not (unless it’s Richard Billingsley’s, but that’s for another time). You are biased.

From 2001 to 2004, the BCS gradually eliminated the use of margin of victory in its computer models. It also doubled the weight of human polls (from 1/3 to 2/3) in 2004, largely in response to the controversy of a split championship between the BCS and the AP poll. The message sent by the BCS (and much of the media, and pretty much everyone else who supported the change) was that the computer models exist only to corroborate and legitimize the human polls. When the computer models diverge meaningfully from human polls or the hopelessly vague and utterly uninformative “eyeball test,” they are made the scapegoat and forced to fall in line.

“Throughout this process, we’ve met the most resistance from the computer people,” [Grant Teaff, executive director of the American Football Coaches Association] said. “But that’s their deal. They talk about numbers and figures, and we talk about our responsibility to the game and responsibility to coaches and players emotionally. And besides, the polls that are done by the coaches and the writers will probably still make margin of victory a factor still anyhow.” [2]

Responsibility to the game and coaches and players emotionally? What does that even mean? This quotation says everything you need to know about the BCS. Yes, the polls will indeed probably still make margin of victory, and the relative strength of the conferences in 1997, and in which time zone the games were played, and how the outcome will impact the coach’s own shot at the national championship game, and whether the team’s conference is spelled SEC, and on which team a writer’s son is a third-string kicker, a factor. And they will do it arbitrarily, without telling you. And if the computers don’t match the completely transparent and fair gold standard set by the polls, it’s because they were programmed by some scrawny, glasses-wearing, pocket-protecting brainiac at MIT who doesn’t know anything about what it’s like to coach or play football. Right?

References
[1] Bialik, Carl. “College Football’s Top Six Computers.” Wall Street Journal Blogs, December 8, 2011. Accessed December 8, 2011. http://blogs.wsj.com/dailyfix/2011/12/08/college-footballs-top-six-computers/.
[2] Drehs, Wayne. “BCS figures new formula makes for a better title game.” ESPN.com, July 12, 2001. Accessed December 8, 2011. http://static.espn.go.com/ncf/s/2001/0712/1225482.html.


1Okay, well, other polls (notably the Associated Press, a fascinating tale in its own right) rank teams outside of the BCS, and it is possible for the final AP champion to differ from the BCS champion, but the latter arguably carries more weight de facto.
2If all of the computer models employed were methodologically sound, I would not qualify this statement; sadly this is not currently the case, for all the reasons outlined above.

concerning the dead

This week we learned of some notable deaths, most prominently that of Steve Jobs. He was 56. He had a relatively public fight with pancreatic cancer. Jobs co-founded Apple, and was widely credited with the meteoric rise of his company’s fortunes in the last decade. His death has prompted a rather effusive outpouring of eulogies from almost everybody.

Moments after I saw the first Facebook status updates about Jobs’ passing, I happened to wander into my parents’ kitchen to clean some dishes. KUOW, a local public radio station, was on in the background. I heard the closing minutes of a moving story about the life of the Reverend Fred L. Shuttlesworth. This was followed by an announcement of the news about Jobs. I made a mental note to look up Shuttlesworth when I got back to my computer.

As it turned out, the Reverend Fred L. Shuttlesworth had also died on Wednesday. One could (and many people did, without question) make it through the day without the vaguest notion of who this Shuttlesworth character might have been. I am quite confident that, were it not for the timing of my trip to the kitchen, I would probably never have known who he was, what he did, or when he died. But that would have been a pity, because he was by all accounts a pivotal figure in the American civil rights movement of the 1960s. He led the demonstrations in Birmingham in 1963 that generated the horrifying images of Theophilus Eugene “Bull” Connor and his officers unleashing fire hoses and police dogs against peaceful protesters. To read Connor’s opinion of Shuttlesworth is to recognize at once the latter’s tenacity and the former’s moral depravity:

Mr. Shuttlesworth suffered chest injuries when the pummeling spray of fire hoses was turned on him. “I’m sorry I missed it,” Mr. Connor said when told of the injuries, The New York Times reported in 1963. “I wish they’d carried him away in a hearse.” [1]

It is fitting to remember Steve Jobs. He and his company had a profound influence upon our lives, particularly with respect to the technology that is now so pervasive. He was a visionary of uncommon clarity, and sold his vision to us with unrivaled effectiveness, and became fabulously wealthy as a result. His legacy will be vast.

But its value is more equivocal. While the role of Apple in advancing the ubiquity of very portable electronics is indisputably pivotal, the balance of benefit and detriment to our society and the world is extremely difficult to gauge or even evaluate.

In contrast to Jobs’ morally ambiguous legacy, there can be little doubt Shuttlesworth contributed materially to the advancement of social justice in the United States of America. He was outspoken, iconoclastic, and through his actions revealed his opponents as morally bereft. What sort of a person would rather live in 1960s Birmingham than a world without Apple computers? (Please don’t say a white person.)

I understand that, as it concerns an indescribably more public and dramatically more contemporary figure (Shuttlesworth’s major body of work was nearly 50 years ago; Apple announced a new iPhone, a product and market for which Jobs is largely responsible, the day before he died), Jobs’ death was always going to draw more attention than that of an 89-year-old civil rights leader. We have been affected more visibly and proximally by modern technologies than the (relative) absence of discriminatory laws and racial violence that Shuttlesworth fought. But I think it becomes us to remember not only the famous and influential, but the just and brave.

There are some who point to our men and women in uniform, and they declare that freedom is not free. It is fought for with blood, toil, tears, and sweat. Both of those sentiments are true, but to suggest that our armed forces are the only ones who make grave sacrifices in the name of freedom (if indeed they do that at all, particularly in modern times) is terribly misleading. It is only thanks to those like Shuttlesworth that there is freedom and justice for our fine armed forces to defend.

I do not mean to diminish Jobs’ legacy. I think his impact on technology and, by extension, our lives has been undeniable. And certainly, the technological landscape he helped to shape has played a role in political movements around the world. But I don’t think it is fair to attribute to him such distal effects, especially when there is no evidence of intent on his part. He was not in Tahrir Square. He was not gunned down by Syrian troops in Homs.

He changed the world, but so have many others, at much greater cost and to much less fanfare. So while we remember the significance of a figure like Steve Jobs, let us not lose perspective. We call ourselves the “land of the free and the home of the brave,” not the land of the wealthy and the home of the iPhone. Though perhaps we should change it to be more accurate?

References
[1] Nordheimer, Jon. “Rev. Fred L. Shuttlesworth, 89, Dies; Fought on Front Lines for Civil Rights.” New York Times, October 6, 2011. Accessed October 6, 2011.

pall of ignorance

Contemplative
Photo by Flickr user squishband/Richard White

[Edited 2011 March 29 to reflect the printed title of Gawande’s New Yorker article, as opposed to the Web page title]

A fascinating new study of palliative care conducted by Jennifer S. Temel et al. [1] was published today in the New England Journal of Medicine. It examines the effect on quality-of-life and end-of-life care of initiating palliative care (along with standard oncologic therapy) within eight weeks of diagnosis of metastatic non-small-cell lung cancer.

Remarkably, early palliative care was associated with statistically significant improvements in measures of quality-of-life1, mood2, and length of survival. Patients randomly assigned to early palliative care were less likely to receive aggressive end-of-life care3 and more likely to have documented resuscitation preferences (which is a major element of an advance directive). The study was conducted at Massachusetts General Hospital.

I first read about this study in an article in the New York Times, which also led me to a recent New Yorker piece by Atul Gawande, “Letting Go.” I once bought Gawande’s Better on something of an impulse (which is strange for me), and I enjoyed it, though I didn’t find it as profound as The House of God, which I happened to be reading at the time. Admittedly, it is not an entirely fair comparison.

In any case, I think Gawande’s latest New Yorker article is very effective at conveying a very important idea — that the modern institution of medicine struggles with the concept of death, and that patients suffer needlessly as a consequence. At least, that was what I got out of it. It is quite long. I suggest you read it for yourself.

Of the many responses to the piece, at once the most and least interesting was by Avik Roy, on the National Review Online. The author’s position is so willfully ignorant that one can only sit and marvel at the ferocity of cognitive dissonance necessary to sustain it.

Roy argues that Gawande “falls flat” in “[trying] to extrapolate public-policy recommendations from” the stories in his piece, and points specifically to Gawande’s reference to the (demonstrably false) “death panel” accusation raised during the recent effort to pass health care reform legislation. In defense of “the understandable fear that Americans have that, in a state-run system, [end-of-life care] decisions won’t be theirs,” Roy writes:

“[i]n Britain’s National Health Service, for example, terminally ill patients are incorrectly classified as “close to death” so as to allow the withdrawal of expensive life support” (emphasis mine). (Roy)

As evidence, he offers (a link to another NRO article containing a link to) a Daily Telegraph report on the Liverpool Care Pathway (LCP), a report ostensibly written in response to a letter from a number of UK physicians concerned about the implementation of the LCP throughout the National Health Service (NHS).

To begin, there are a few key points to take away from the discussion regarding the LCP. The first is that neither the Daily Telegraph article nor the concerned UK physicians ever so much as suggested that the LCP facilitated the deliberate, premeditated withdrawal of care for budgetary reasons. Yet this is clearly the conclusion at which Roy intends his readers to arrive, given the wording of his accusation: "[action] so as to [outcome]" can only reasonably be construed to suggest that the action was taken with the outcome in mind, and the use of "expensive life support" gestures conspicuously toward financial considerations (and not at all toward the idea that perhaps such interventions are medically ineffective and cause suffering).

In contrast, the concerns voiced by the physicians are essentially intrinsic to any discussion of death, because they are about the act of deciding whether someone is inevitably going to die. Such a decision is necessarily challenging and never as clear-cut as we would like, but more importantly, it is governed by the laws of nature, our understanding thereof, our ability to observe the human body, and how we define concepts such as life and death. It is not subject to legislation, except to a slight degree in the last criterion (though in any case, Roy is not discussing legislation governing the concepts of life and death, because no such legislation has recently seen serious discussion).

The LCP was also recently revised, addressing some of the substantive criticisms directed at it, and the British Medical Journal featured an excellent (and, refreshingly, fact-based) editorial on the LCP and the media frenzy surrounding it. I think I have established, then, that Roy’s reference to supposed efforts by the NHS to kill people to save money is not related to Gawande’s article, and indeed not even true.

Though he fails to establish any substance to the accusation of “death panels” even in the thoroughly socialistic NHS, Roy forges on with this gem:

But to Gawande, it’s not enough that other hospitals adopt [early discussions about end of life care] on their own. A provision in Obamacare was to provide government funding for doctors to have end-of-life discussions with their patients; to Gawande’s dismay, “it was deemed funding for ‘death panels’ and stripped out of the legislation.” The obvious question doesn’t seem to occur to him: Why do we need a government program to pay doctors to have thoughtful conversations about their patients’ eschatological desires — something they should be doing already, and that doesn’t cost a dime? (Roy)

I have a truly outlandish proposal for answering Roy’s question: RTFA. Usually reserved for seriously addle-brained posts in comment threads, the obviousness of this particular RTFA would make William F. Buckley turn over in his grave. In fact, I will spare Mr. Roy the (surely monumental) effort and post the contextualized excerpt right here:

Given how prolonged some of these conversations have to be, many people argue that the key problem has been the financial incentives: we pay doctors to give chemotherapy and to do surgery, but not to take the time required to sort out when doing so is unwise. This certainly is a factor. (The new health-reform act was to have added Medicare coverage for these conversations, until it was deemed funding for “death panels” and stripped out of the legislation.) But the issue isn’t merely a matter of financing. It arises from a still unresolved argument about what the function of medicine really is—what, in other words, we should and should not be paying for doctors to do. (Gawande)

First of all, the notion that “to Gawande, it’s not enough…” is an utter fabrication. Gawande mentions the “death panel” idiocy to explain the continued lack of compensation for such discussions, and moreover, he does not ever state that federal legislation is the preferred (or even a practical) solution. Secondly, the idea that a physician’s time “doesn’t cost a dime” is complete nonsense. It is so obviously wrong that I don’t even know how to respond to it; how can someone’s time be free? Especially a physician’s time? Gawande’s whole point is that the structure of financial incentives does not encourage physicians to take the (considerable) time necessary to really find out what someone wants. I thought free-marketers were familiar with this concept.

As if to compound his failure to read with failure to comprehend what little he apparently did read, Roy concludes vapidly:

There are legislative reforms that can help address these problems. But they involve reducing, not expanding, government control of the health-care system. They involve letting patients decide for themselves, with the aid of their doctors and their families, how best to negotiate their last days on earth. If a free country can’t be about that, it can’t be about much. (Roy)

Okay. Back to Gawande’s article, which clearly establishes the importance of discussing end of life care well before the moment arrives. Theoretical autonomy is nice to talk about, but real, substantive autonomy only exists if one’s wishes are carried out. As Gawande elegantly puts it:

All-out treatment, we tell the terminally ill, is a train you can get off at any time—just say when. But for most patients and their families this is asking too much. They remain riven by doubt and fear and desperation; some are deluded by a fantasy of what medical science can achieve. But our responsibility, in medicine, is to deal with human beings as they are. People die only once. They have no experience to draw upon. They need doctors and nurses who are willing to have the hard discussions and say what they have seen, who will help people prepare for what is to come—and to escape a warehoused oblivion that few really want. (Gawande)

There is no such thing as “letting” someone make a decision that they cannot make independently. Perhaps more to the point, there is no sense in supposing that, as a population, patients and their families are capable of making and conveying these choices proactively, though it is not for lack of desire. Instead, as Gawande argues (but somehow Roy could not or would not acknowledge), it is necessary for physicians and health care practitioners in general to engage them in the process, and moreover systemic changes are likely needed for such behavior to become the norm (for the reasons outlined in the article itself). It is rather astounding how little evidence Roy offers that he has understood or even read “Letting Go.”

And recall the findings reported by Temel et al., that palliative care is in a sense self-reinforcing; those patients placed in early palliative care were more likely to utilize it than those who were not. Are we to suppose that patients in the control group wanted to suffer horribly before they died? Or can we conclude, far more sensibly, that the presence of palliative support structures increased substantive patient autonomy by enabling them to realize their wishes? And finally, can Roy offer any evidence that less government control actually leads to greater patient autonomy? Consider two alternatives: legislation requiring the acknowledgement and enforcement of advance directives, versus the lack of such legislation. Someone will always control the health care system, and you are fooling yourself if you think it could ever be the patients and their families. At best, we can hope to find ourselves well and faithfully represented by those in charge, which is not coincidentally the same philosophy behind our system of government (though whether it lives up to this is a different question altogether).

Serious discussions about medical ethics, the structure of our system of medical care, etc. such as Gawande’s wonderful work are essential to improving life (and death) in the United States. They are discussions we must have, sooner better than later, and whether we want to or not. But the sort of fact-free drivel found in Avik Roy’s response contributes nothing to the endeavor. It would be great if conservative commentators such as Roy rested their arguments on factually sounder foundations, but of course, reality has a well-known liberal bias.

Additional reading

Atul Gawande, “Letting Go.”
Avik Roy, “Letting Go of Death Panels.”

References

[1] Temel JS et al. Early palliative care for patients with metastatic non–small-cell lung cancer. N Engl J Med. 2010 Aug 19;363(8):733–42.


1Quality-of-life was measured by the Functional Assessment of Cancer Therapy–Lung (FACT-L) scale, the lung-cancer subscale (LCS) of the FACT-L, and a Trial Outcome Index (TOI), the latter being a sum of the LCS and the physical and functional well-being FACT-L subscales.
2Mood was measured by the Hospital Anxiety and Depression Scale (HADS) and the Patient Health Questionnaire 9 (PHQ-9).
3Aggressive end-of-life care was defined as “chemotherapy within 14 days before death, no hospice care, or admission to hospice 3 days or less before death.”

a train that will take you far away

Last Wednesday, on a whim, I decided to watch a movie in a theater (crazy, I know). I wanted to see what the fuss was about this new Christopher Nolan film exploring the fuzzy line between perception and reality, Memento (sorry, Inception).

Inception is a good movie that has a somewhat complicated plot and very impressive effects. But I completely failed to understand what people seemed to think was so “whoa” about it. I also have some thoughts about the closing shot. It’s hard to say much more without giving away the plot, so…

If you haven’t seen either or both of these movies, and you don’t want to know what happens, you should probably stop reading.

.
.
.
Here is a space so you can avert your gaze.
.
.
.

WARNING: SPOILERS BELOW

I think the main shortcoming of Inception (this is not to say that it wasn’t good, but just something it could have done better) was that it failed to produce any “oh, now I get it” moments. This is because of two things:

1. Dreams in Inception are insanely realistic. Real dreams are utter nonsense when considered in the light of day.
2. Arguably, the realism of the dreams in the film is necessary to keep the audience engaged and convinced.

I think 2 is the strongest possible justification for 1. But 2 is utterly squandered by the absence of any hidden dream layers, except for the relatively brief introduction of Saito. I think the movie would have been much more mind-bending and reality-altering if it had revealed the plot and the shape of the world in a novel way, like Memento did. Of course you can’t keep making the same movie over and over again, but I still feel as though Inception was a charismatic, dazzling movie that happened to be about dreams whereas Memento was a more subtle and careful study of memory and reality.

Anyway.

At the very end of Inception, Cobb spins his totem and greets his children, but we never see if the totem falls or not (which obviously has implications for whether the world is real or not). I think it doesn’t matter whether it falls or not, for the following reasons:

1. It is obviously meant to be a cliffhanger, so that the audience is supposed to wonder. There isn’t really a meaningful “right answer.”
2. The totem might only be useful for distinguishing reality from someone else’s dream (at least, this is all that is offered to us explicitly in the film). If Cobb is himself dreaming all of what we see, the totem might not be diagnostic. However, he does use it after waking in Yusuf’s basement, so it’s not clear what the totem really does.
3. Cobb explained his totem to Ariadne halfway through the film. From this point on, it is no longer safe to assume that his totem is diagnostic. Arthur also mentions how often Cobb does things he tells others not to do, suggesting that Cobb’s totem is a pretty poorly-kept secret.

If we take 1 to be true, then the answer is of no real interest because the movie is not supposed to give us an answer, so anything we come up with is just our own imagination. If we take 2 or 3 to be true, then a world in which the totem falls is indistinguishable from a world in which it does not fall, again rendering the answer meaningless.

In particular, the confusion regarding the totem in reason 2 is representative of the plot holes in Inception. A lot of the questions about the movie seem to arise not from any deliberate secrecy or a particularly rich world, but simply from inconsistencies introduced by the insane number of plot devices and machinations. Things like kicks, limbo, and totems are all used repeatedly but inconsistently, which naturally invites a lot of questions.

To my mind, Inception is fundamentally a film about Cobb’s love, guilt, and regret. It is about his struggle to come to terms with his wife’s death. It’s only sort of about dreams, and its treatment of the former topics is better than of the latter, appearances aside. My favorite part of Inception was actually when Cobb explains to Mal that he had realized his dream of living a life with her, and the audience is shown how old the two really were at the end of their time in limbo (in contrast to what was presumably Cobb’s idealized recollections). I heard some people in the theater say “aw.” It was rather touching.

consequence

I heard a mention of an interesting story this weekend on NPR’s fantastic Wait Wait… Don’t Tell Me! about an Irish man who saved Adolf Hitler’s life. It’s one of the best good news…bad news stories I’ve come across.

It also highlights one of the biggest reasons why I think a robustly consequentialist moral framework is not workable. Perfectly successful consequentialism requires exact knowledge of the outcome of every action ad infinitum. And in the case of a probabilistic model (i.e. taking actions most likely to lead to positive consequences), the likelihood of attaining the desired outcome shrinks with every additional link in the causal chain that must be predicted. Thus, in the absence of perfect knowledge, consequentialism actually fails to achieve its intended outcome (to wit, the maximization of some desirable quantity). By its own metric, consequentialism is at best very flawed.
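
(A toy calculation, with numbers I have made up, illustrates how quickly this falls apart: even if each individual prediction about an action's downstream effects is quite reliable, the chance of correctly foreseeing an entire chain of consequences collapses as the chain grows.)

    # Toy illustration: the probability of correctly predicting an entire
    # chain of consequences, assuming each step is predicted independently
    # with the same (generously high) accuracy. All numbers are invented.
    step_accuracy = 0.9

    for chain_length in (1, 5, 10, 20):
        p_whole_chain = step_accuracy ** chain_length
        print(f"{chain_length:2d} steps ahead: {p_whole_chain:.2f}")
    # 90% accuracy per step leaves roughly a 35% chance of being right ten
    # steps out, and about 12% at twenty.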

Whereas consequentialism almost necessarily fails to satisfy its own criterion, well-conceived deontological approaches ought not to present such a paradox. The “duty” of the deontologist is simply to adhere to deontology.

This is not to say that deontology is better than consequentialism. We all care about the consequences of our actions to (at least) some degree. But robust, exclusive consequentialism just makes no sense in a world where it is impossible to know perfectly what will happen as a result of our actions.

delusion angel

Richard Linklater’s Before Sunrise and its sequel, Before Sunset, are fantastic. They are everything romantic films ought to be (but usually are not).

daydream delusion
limousine eyelash
oh baby with your pretty face
drop a tear in my wineglass
look at those big eyes
see what you mean to me
sweet-cakes and milkshakes
i’m a delusion angel
i’m a fantasy parade
i want you to know what i think
don’t want you to guess anymore
you have no idea where i came from
we have no idea where we’re going
lodged in life
like branches in a river
flowing downstream
caught in the current
i carry you
you’ll carry me
that’s how it could be
don’t you know me?
don’t you know me by now?

changes

This past Tuesday, Michelle and I made a day trip up to Vancouver, BC. We saw a black squirrel, ate some plants, and warmed our hands. It was fun.

I was going through some old photographs with my mom, and it’s really interesting how old age is like adolescence. Except backwards. While my cousin and I “grew up” considerably over the past few years, my grandparents have gotten noticeably older. And of course, old age can resemble childhood, in the sense that the elderly often need to be cared for in ways similar to children.

On a slightly less depressing note, this is what the Lions Gate Bridge (connecting downtown Vancouver with North & West Vancouver) looked like in 2005.

The Lion's Gate Bridge in 2005

And a beautiful song by Kate Havnevik:

se meg
som jeg er
ta det som kommer
viser meg
hvor jeg er
hvor jeg skal
og hvem du er

(see me / as I am / take what comes / show me / where I am / where I am going / and who you are)

belated

I have trouble keeping a blog updated. If you know me at all, this fact should not be difficult to understand. Indeed, I rewrote the previous sentence about 6 times, and this very sentence several times. I may even rewrite them again after finishing this paragraph; they just don’t sound quite right. Nevertheless, I shall soldier on, as it were.

When I make a blog post, I have an expectation that it will be interesting, full of content, and well-cited or linked. A good example is my 2009 influenza A(H1N1) S-OIV “swine flu” (and a variety of other uninformative names) post. It’s a thing of beauty, brimming with citations and…other things. Nobody really cares.

Unfortunately, while it can be very time-consuming to develop a thought into a well-conceived contribution worthy of posting, doing so is only for my own benefit. I’m pretty sure no more than 3 people found my aforementioned post the least bit interesting. I occasionally drop incomplete or less tedious ideas into this blog, but that is mostly out of a feeling of obligation to prevent it disintegrating entirely from lack of use.

All of this is a very long way of saying that, perhaps, I will make an effort to post more spontaneous or raw ideas, generally briefer (and coincidentally more interesting) than I would otherwise prefer, for the sake of actually putting something into this sad excuse for a blog.

With luck, I will also learn to write. Seriously.

acer palmatum dissectum


I made a typology for ART 140. I am rather fond of it. Perhaps I will write about it at length when I am not so exhausted. Perhaps I will not. It is a mystery.

july 5

God speed the year of jubilee
The wide world o’er!
When from their galling chains set free,
Th’ oppress’d shall vilely bend the knee,
And wear the yoke of tyranny
Like brutes no more.
That year will come, and freedom’s reign,
To man his plundered rights again
Restore.

God speed the day when human blood
Shall cease to flow!
In every clime be understood,
The claims of human brotherhood,
And each return for evil, good,
Not blow for blow;
That day will come all feuds to end,
And change into a faithful friend
Each foe.

God speed the hour, the glorious hour,
When none on earth
Shall exercise a lordly power,
Nor in a tyrant’s presence cower;
But to all manhood’s stature tower,
By equal birth!
That hour will come, to each, to all,
And from his Prison-house, the thrall
Go forth.

Until that year, day, hour, arrive,
With head, and heart, and hand I’ll strive,
To break the rod, and rend the gyve,
The spoiler of his prey deprive –
So witness Heaven!
And never from my chosen post,
Whate’er the peril or the cost,
Be driven.

watchmaker

This photo needs to be viewed in really large format to be fully appreciated. The sheer size and beauty of the universe are simply staggering. Reality trumps fiction any day (sorry BSG, you’re still pretty cool).

The Watchmaker analogy is totally broken, but it’s easy to see why such a sentiment is appealing; our world is amazing.

MKAILVVLLY

Those of you who do not live directly beneath a rock may have heard about this whole “swine flu” thing. Unfortunately, there is a considerable amount of misinformation and confusion in the public consciousness, and the media at large seems not to be helping much in the panic-mitigation department.

So before you start building your vault, a few points to keep in mind:

1. First of all, calm down.

2. There is still no compelling reason to believe that this strain, influenza A(H1N1)1, is significantly more virulent than a typical seasonal influenza.

Your run-of-the-mill flu season has a case-fatality ratio of very roughly 0.1%, or 32% of hospitalizations [1]. Let’s narrow that to the 19-to-64 demographic, which could be most susceptible to this current outbreak (an unusual pattern seen in pandemic flus and likely caused by an overly robust immune response in healthy adults [2]), and is least susceptible to the seasonal flu. Within that population, CFR is about 0.03%, or 7% of hospitalizations [1]. Past influenza pandemics have had CFRs of anywhere from 0.1% in the 1957 and 1968 outbreaks to 2.5%2 in the 1918 “Spanish flu” [3].

In contrast, the CFR in the case of influenza A(H1N1) could be anywhere from 3.1% (an upper bound, based on a maximum of 8 laboratory-confirmed influenza A(H1N1) deaths out of a minimum of 257 laboratory-confirmed influenza A(H1N1) cases worldwide, from WHO figures available at time of writing) to 0.0016% (a very conservative lower bound, based on an approximate hospitalization rate of 0.4% of all cases in the 19-64 demographic in a typical flu season [1], with which an attack rate was extrapolated from 2000 estimated hospitalizations in Mexico).

Using figures that are quite popular in the press gives a CFR of about 7.5% in Mexico (some 150 deaths in 2000 hospitalizations, the latter very dubiously assumed to be equal to the number of cases). Because of the unreliability of the “suspected” case count in Mexico, I am not convinced that this particular CFR estimate is useful at all, even as an upper bound. It’s far more likely that the actual CFR falls somewhere between 0.0016% and 3.1%.
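
(The bounds above are nothing more than ratios; here is my reconstruction of the arithmetic, using only the provisional figures quoted in this post.)

    # Reconstruction of the case-fatality ratio (CFR) bounds discussed above,
    # using the provisional figures cited in the text.
    confirmed_deaths = 8            # laboratory-confirmed A(H1N1) deaths (WHO)
    confirmed_cases = 257           # laboratory-confirmed A(H1N1) cases (WHO)
    mexico_hospitalizations = 2000  # estimated hospitalizations in Mexico
    mexico_suspected_deaths = 150   # widely reported (suspected) deaths
    seasonal_hosp_rate = 0.004      # ~0.4% of cases hospitalized, ages 19-64 [1]

    # Upper bound: confirmed deaths over confirmed cases only.
    cfr_upper = confirmed_deaths / confirmed_cases                    # ~3.1%

    # Lower bound: extrapolate total cases from Mexican hospitalizations via
    # the seasonal hospitalization rate, then divide the confirmed deaths.
    estimated_cases = mexico_hospitalizations / seasonal_hosp_rate    # ~500,000
    cfr_lower = confirmed_deaths / estimated_cases                    # ~0.0016%

    # The press favorite: suspected deaths over hospitalizations, with
    # hospitalizations dubiously treated as the total number of cases.
    cfr_press = mexico_suspected_deaths / mexico_hospitalizations     # 7.5%

    print(f"upper bound: {cfr_upper:.2%}")
    print(f"lower bound: {cfr_lower:.4%}")
    print(f"press figure: {cfr_press:.1%}")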

All of these numbers don’t tell us very much (except that it is highly unlikely that this is some epic killer virus), but that’s exactly the point. Just because this (potential) pandemic has been spotted (thanks in large part to the surveillance infrastructure put into place in the wake of the “avian flu” panic) does not mean we have any solid evidence that the virulence of this pathogen is particularly high. However, this may very well change as time goes on and as the situation becomes clearer, and it certainly does not mean that the virus is not dangerous.

3. Virulence is not the same as pathogenicity. Perhaps more precisely, the concepts are not the same, though the terms may often become scrambled in the fray. The salient point is that while influenza A(H1N1) has proven highly pathogenic (i.e. it is highly infectious and spreads rapidly), there is not much evidence to suggest that it is especially virulent (i.e. it has not been associated with unusually high mortality or morbidity). So while governments everywhere are preparing for the possibility of a pandemic, the severity of the disease (to wit, the “causing serious illness” criterion from the linked WHO document) is far from clear at this point. And hopefully I was able to convince you in Point 2 that there is as yet no reason to suspect any greater virulence from this strain than a typical seasonal flu strain.

4. Influenza A(H1N1) has a few key differences from Severe Acute Respiratory Syndrome (SARS) and influenza A(H5N1) or “avian flu”. For one, both SARS and avian flu were much deadlier; the SARS outbreak in Hong Kong had a CFR of about 14-17% [4], while the avian flu has a CFR of something like 14-33% [3]. However, avian flu never demonstrated efficient human-to-human transmission, which made it a very deadly disease that was unlikely to spread quickly. Likewise, SARS has never been observed to be contagious before the onset of symptoms, which significantly increases the likelihood that a person at risk of transmitting SARS can be identified by basic surveillance. Influenza A(H1N1), while appearing (for now) to be far less virulent than either of these two recent serious respiratory disease outbreaks, is also considerably more likely to spread rapidly and become pandemic.

5. There is a lot of talk in the news about “suspected” and “probable” cases of influenza A(H1N1). When these words are used by a media outlet, then frankly all bets are off. On the other hand, if a news report quotes a health official referring to a case as “probable” or “suspected,” that official is (hopefully) adhering to the CDC’s Case Definitions for Infection with Swine-origin Influenza A (H1N1) Virus (S-OIV):

A confirmed case of S-OIV infection is defined as a person with an acute febrile respiratory illness with laboratory confirmed S-OIV infection at CDC by one or more of the following tests:

  1. real-time RT-PCR
  2. viral culture

A probable case of S-OIV infection is defined as a person with an acute febrile respiratory illness who is positive for influenza A, but negative for H1 and H3 by influenza RT-PCR

A suspected case of S-OIV infection is defined as a person with acute febrile respiratory illness with onset

  • within 7 days of close contact with a person who is a confirmed case of S-OIV infection, or
  • within 7 days of travel to community either within the United States or internationally where there are one or more confirmed cases of S-OIV infection, or
  • resides in a community where there are one or more confirmed cases of S-OIV infection.

You can make of that what you will. It seems to me that there is probably no logistical barrier preventing health care entities other than the CDC from confirming the influenza A(H1N1) subtype, except that, for one reason or another, it doesn’t count as “confirmed” unless the CDC does it.
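
(Restated as a purely illustrative decision procedure, since the tiers are just a cascade of criteria; this is nothing more than the quoted definitions rearranged, not a clinical tool.)

    def classify_soiv(febrile_respiratory_illness, cdc_lab_confirmed,
                      influenza_a_positive, h1_negative, h3_negative,
                      epidemiologic_link):
        """Toy restatement of the CDC S-OIV case definitions quoted above.

        cdc_lab_confirmed: positive real-time RT-PCR or viral culture at CDC.
        epidemiologic_link: close contact with a confirmed case or travel to
        an affected community within 7 days, or residence in a community
        with one or more confirmed cases.
        """
        if not febrile_respiratory_illness:
            return "not a case"
        if cdc_lab_confirmed:
            return "confirmed"
        if influenza_a_positive and h1_negative and h3_negative:
            return "probable"
        if epidemiologic_link:
            return "suspected"
        return "not a case"

    # Example: flu-like illness, untyped influenza A, no CDC confirmation yet.
    print(classify_soiv(True, False, True, True, True, False))  # -> probable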

6. When I first began considering and looking into the actual severity of the whole “swine flu” panic, I thought exactly the same thing that Obama said earlier this week: this flu outbreak (and likely pandemic) is, based on the information we currently have, a cause for concern but not alarm.

If there is one good thing that has come out of what is arguably a gross overreaction by the American media, it is a heightened awareness of the importance of public health and good hygiene. So remember kids, listen to the President and wash your hands.

References

[1] Weycker, D. et al. Population-wide benefits of routine vaccination of children against influenza. Vaccine 23, 1284-1293 (2005).

[2] Kobasa, D. et al. Enhanced virulence of influenza A viruses with the haemagglutinin of the 1918 pandemic virus. Nature 431, 703-707 (2004).

[3] Li, F. C. K. et al. Finding the real case-fatality rate of H5N1 avian influenza. J Epidemiol and Community Health 62, 555-559 (2008).

[4] Jewell, N. P. et al. Non-parametric estimation of the case fatality ratio with competing risks data: an application to Severe Acute Respiratory Syndrome (SARS). Statist Med 26, 1982-1998 (2006).


1I have used the nomenclature preferred by the World Health Organization as of 30 April 2009.

2The 2.5% CFR figure for the 1918 pandemic, though almost canonical, seems highly questionable given the estimates of 20-100 million deaths at a time when the world had a population under 2 billion. In any case, data from that pandemic are likely iffy at best.

we don’t want your kind here

Q: “I don’t trust Obama, I have read [sic] about him. He’s not… He’s not… Errr… He’s an Arab.”
A: “No, ma’am. No, ma’am. He’s a decent, family man, a citizen that I just happen to have disagreements with…”

Colin Powell offers the correct answer to the Muslim/Arab “attacks,” and it is truly a pity that Obama has not yet spoken out about this:

But the really right answer is, what if he is? Is there something wrong with being a Muslim in this country? The answer’s no, that’s not America.

We’ve come so far since those bad old days, one could be forgiven for believing that we’ve made some progress. That is, until one observes the conservative “pro-America” (if by America you mean bigotry) population.

McCarthy would be proud

Michele Bachmann expresses a nonsensical, ideologically inspired and dangerous view that demonstrates a complete lack of understanding about America and its founding ideals. To call those who are critical of government policy and structural inequities that cause suffering (i.e. a failure to realize the ideals of America, whatever that even means) anti-American is not only profoundly idiotic, but smacks of the violently nationalistic attitude that empowered the most terrible regimes the world has ever seen.

hocus focus

arm is moved
hat is different
leaf is missing
foot is moved
skirt is shorter
sleeve is shorter
summer is shorter
temper is shorter
something is different
heart on sleeve is missing
heart fell off shorter sleeve
chest is different
chest is emptier
smile is smaller
hand is missing
john is patrick
hot is colder
time is longer
life is shorter
young is older
you are missing
you are moved
i am not
everything is different

by Rebecca Hoogs (rendered as faithfully as possible from a recording)

this makes no sense

A seagull nearly landed right on me today. I’ve no idea why.

what riders taught me

The other day I was reading a book on the bus, the first chapter of which features the town of Bethel, Alaska. I happened to be reading that particular part when a lady came to be seated next to me. Remarkably, it turned out that she had been a resident of said town, which has a population somewhere in the neighborhood of 5,000. We talked briefly about rural medicine and cultural encounters. She and her daughter were in Seattle because the girl required surgery.

It was amazing.

RCW 46.61.667

People need to stop calling Bluetooth headsets (that is, headsets and similar devices utilizing the Bluetooth wireless communication protocol) “bluetooths.” It makes you sound stupid.

Northwest Profile #76

So make mine a triple venti grande americano chai vanilla minty mochaccino caramel macchiato; half caf, half decaf, one Equal, one Sweet ‘n Low, one raw sugar; skinny organic extra virgin olive oil soy breve with no foam, extra whip extra hot; and oh yes, leave off the top so I can put on my own sprinkles.

maybe i’m just an elitist

For some reason, when I saw the previews for last night’s episode of America’s Got Talent, I thought the American version of the opera singer dude would actually be good. Alas, just like Mr. Potts, he was okay but really not all that good. A decent performance and much better than anything I could muster, but far from any standard that would be applied to a professional tenor.

Of course any such comparison is ludicrous, but the fact is that people have made comparisons between Paul Potts and real opera tenors. I’ve seen remarks about how “emotional” his singing is. That’s nice, but good tenors are emotional and have good technical skills. The technique of Potts and Boyd just really doesn’t stand up to scrutiny.

And what is it about Nessun dorma? Seriously, can’t you come up with something more technically challenging, that will demonstrate your ability a little better? The only reason that aria is popular is because Pavarotti made it so. Hell, they even repeat his inserted syllable in “vincero” (probably without realizing it). There are plenty of beautiful, challenging arias out there.
