Wednesday, September 29, 2010

The Difficulties of Anger

Mark Twain once said, "When angry, count four.  When very angry, swear."

As I watch the austerity protests sweep across Europe and listen to the rhetoric of the Tea Party and other (in many cases, justifiably) disgruntled groups here in America, I've been thinking a lot about the phenomenon of anger.  As I look to other writers and thinkers for advice and inspiration, I'm particularly struck by the fact that many of them express serious doubts about the value and efficacy of anger, even as they acknowledge the prominence of this very troublesome and all-too-common human emotion.

For instance, Albert Einstein cautions, "Anger dwells only in the bosom of fools."  Benjamin Franklin admits, "Anger is never without a reason, but seldom with a good one," and as a result, "Whatever is begun in anger ends in shame."  Likewise, Seneca, the Roman philosopher and tragedian, observes, "Anger, if not restrained, is frequently more hurtful than the injury that provokes it."

Philosophers seem particularly concerned with remaining calm.  Not surprisingly, the Greek Stoic Epictetus tells us, “When you are offended at any man's fault, turn to yourself and study your own failings. Then you will forget your anger.”  (He doesn't mention, of course, that we'll forget our anger because we've now made ourselves majorly depressed.)

Others seem more goal-oriented.  Plato argues, "There are two things a person should never be angry at, what they can help, and what they cannot."  So, really, if you can do something about it, do it, and if you can't, you can't.  In effect, Plato seems to suggest that anger is simply a waste of time: it's either a pointless substitute for action or an emotional response to helplessness that won't change the fact of our powerlessness.

Aristotle offers food for further thought.  He notes, "Anyone can become angry--that is easy, but to be angry with the right person at the right time, and for the right purpose and in the right way--that is not easy."  How to tap into useful, productive, well-directed anger, that is the question.  How can we know when the time is right for anger?  We usually know it only after the fact, because we're already angry--and at that point, the urge to justify ourselves by whatever means possible is nearly overpowering.

The idea that there are various ways of being angry is worth considering.  We tend to treat anger or rage as a blanket emotion: it covers a multitude of situations and sensations.  But if we force ourselves to be precise about the nature and cause of our anger, a funny thing happens.  We calm down.  We get clear--very, very clear--about what exactly it is that's bothering us and why, and we can even sometimes begin to let people know what they need to do to stop ticking us off.

I think what concerns me about the many angry people and political movements that I see is their almost complete lack of clarity, precision and--in the most extreme cases--coherence.  People are angry about anything and everything.  Mention something annoying, and it's quickly added to a list of grievances, with no time for thought or reflection.  We're just mad, damn it.

I think crowds can be useful manifestations of anger, but too often they come to represent forces of unthinking rage--something that's often dangerous and usually unproductive.  There is strength in numbers, but there is also chaos in crowds.  They are easily led by emotion and rarely follow a logical train of thought to its many and multifarious conclusions. 

When we're angry, we stop asking questions.  We yell, insist and assert.  And we almost never listen.

One of my favorite quotations about anger comes from the Greek playwright Euripides.  In his play, The Medea, Euripides dramatizes the nearly nuclear fallout that comes when the Greek hero Jason decides to leave his witchy wife Medea and their two children and marry--of course--a much-younger princess.  As Jason and Medea angrily accuse one another of being the one who screwed everything up, the Chorus comments, "It is a strange form of anger, difficult to cure/ When two friends turn upon each other in hatred."

I think this is the difference between "objective" anger and "personal" anger.  When some idiot cuts us off on the highway, we feel angry, but the anger is somehow detached from us.  It isn't personal; it's just a guy--or girl--doing something annoying that can create really serious consequences that we'll have to deal with.

When we're angry at a friend, though, we feel betrayed.  When we feel this way, getting clarity is much, much more difficult than it might be in a case of objective anger.  We find it much harder to be precise about the cause of our anger, and articulating a solution is often nearly impossible.  We're just hurt--really, really freakin' hurt--and we can't believe that this one person of all people is the one who has caused our pain.

I think that in some way, this is what is lurking behind the anger of the crowds and political movements awash on the American and European landscapes lately.  A strange form of anger, difficult to cure.  A collective sense of betrayal at something or someone we trusted. 

But have we earned that sense of betrayal?  When all of the banks and governments were doing what they were all doing for a decade or more, we weren't angry.  We weren't even paying all that much attention, because hey, it wasn't our problem if no one knew where the money was, so long as the stock market was up.  It wasn't up to us to keep tabs on it all, to make people and institutions accountable, if we weren't feeling the consequences of their actions.

But as it turns out, it was up to us and it still is--and unfortunately, it will be for months to come.  So which is it: are we angry because of something we couldn't do anything about, or are we angry about something we could have done something about, but chose not to?  Were we helpless all those years, or did we just choose not to act? 

I think if we all consider and answer these questions, we can begin to move from anger as a personal outlet to an anger that can be used at the right time, for the right purposes and in the right way.

Tuesday, September 28, 2010

Creative and Intellectual Productivity

This week, the MacArthur Foundation announced the MacArthur Fellows for 2010.  These lucky men and women will each receive $500,000 over the next five years, with no strings attached and no need to report to the Foundation about how they ultimately spend it.  Known as the "Genius Awards," these grants reward productive and creative individuals with the money (which in turn gives them the time) to do what they want.

I love this idea.  Every year, I eagerly await the news of the new Fellows and, for some strange, vicarious reason I have yet to comprehend, I'm always extremely excited for them.  I actually spend a few minutes trying to imagine what each of them will do over the next few years.  I even email all of my friends to let them know that the award winners have been announced.  (Over the years, my friends have learned to use the delete button regularly, and they either completely ignore or, if trapped, quietly humor my idiosyncrasies.  It's just one of the drawbacks of having a friend who's a bookworm and a complete nerd.)

I'm reading Dostoevsky's Crime and Punishment right now, and this announcement started me thinking.  On the one hand, I think giving people lots and lots of money for creative and intellectual projects is wonderful, but on the other hand, here I am, reading an amazing novel by an amazing novelist, and he wrote it totally under the gun. 

Because he had backed himself into a serious financial corner, Dostoevsky was compelled to borrow money from his publisher in exchange for a promise to produce a new novel.  If he didn't produce one by November 1, 1866, the publisher would be granted the rights to publish all of Dostoevsky's works for the next nine years, without compensating the writer at all.  Basically, in July of 1865, Dostoevsky gambled all future profits from his writing on the certainty that he could produce an entire novel in a little over a year.

It was an insane bargain.

He had no idea at the time that he would write anything like Crime and Punishment.  But as the months wore on, Dostoevsky became consumed with writing what would become one of the greatest novels in world literature.  He postponed writing the novel he had promised his publisher in order to focus on publishing Crime and Punishment in serialized installments.  In October 1866, he finally admitted to a friend that he was, in fact, screwed: he was not going to be able to meet his end of the bargain.

Realizing his predicament, the friend suggested that Dostoevsky hire a secretary and dictate the novel instead of writing it out himself.  He did and, in an odd twist of fate, ended up not only keeping his end of the bargain but also meeting the woman who would eventually become his second wife.  Oh, and he finished writing Crime and Punishment too.

Literature abounds with stories like these: William Faulkner claimed to have written As I Lay Dying in the space of six weeks while working nights at a power plant.  He said that he set out to write a novel by which he could stand or fall as a writer, whether he ever wrote anything else afterward.  And I must say, it is an awfully good novel...

Interestingly, Robert Boice, Professor Emeritus of Psychology at SUNY-Stony Brook, has written extensively on the phenomenon of productive writers in academia, and he takes issue with the idea that people are most productive as writers and thinkers when they are granted extended amounts of time in which to write.  In fact, Boice argues, the most productive writers and thinkers are often the ones who are busiest with other things: they manage to fit their writing into an already overbooked schedule and somehow, some way, they actually make far more progress on their work than those who are given world enough and time.

Some good news for all the rest of us out there who don't stand a snowball's chance of ever getting a Genius Award.

Monday, September 27, 2010

Teach Your Children (A Whole Lot of Unanswered Questions)

One of the recent trends in American media coverage that both fascinates and dismays me is the focus on education--or, more specifically, on teachers.  Bad teachers, that is.  How many of them there are.  How we're all sick of it, goddammit.  How they're wasting our money, and we're not going to take it anymore. 

Look at them, you know who they are, you probably had one yourself at some point.  They do nothing all year, and then they have summers off.  You can tell they're doing nothing, because look at how stupid everyone is (present company excepted, of course).  Look at how stupid children and teenagers are today; it's embarrassing.  None of us were that stupid when we were fifteen or ten or five.  We know, because we remember.  We were definitely a lot smarter, no question.

The debate is interesting, strange and somewhat terrifying to me, particularly since I'm a teacher myself.  I don't have any answers to the problems it poses, just a whole lot of unanswered questions and some observations that I'm not sure I can coherently connect, but well, here goes nothing. 

On the one hand, I can't help but notice that golden parachutes are STILL floating to the ground all over Wall Street, but we've all become strangely fixated on how teachers' salaries are way too high, given what they "do" (i.e., nothing). 

But if teachers really are doing nothing, well, at least they aren't doing what Kenneth Lay and Andrew Fastow and Jeffrey Skilling did.  That should count for something nowadays, shouldn't it?

In a world of corporate fraud and financial collapse, it seems to me that we've become obsessed with results and accountability, but only in the classroom, not the boardroom.  It is very odd to me that there have been no discussions whatsoever of administrative salaries in public education.  If you think you're pissed at Bobby's fourth-grade social studies teacher, check out what the Assistant to the Assistant Principal at his school made last year, and then find out what exactly he or she did to earn that salary.  See what I mean?

We can always find someone to blame, someone who's making more than they should be and doing far less than (we think that) they should be doing.  But what good is finger-pointing, when the facts are clear: the money isn't there, it hasn't been there for a while, and it isn't going to be there for quite some time to come. 

So what the hell are we going to do now?

President Obama has weighed in, indicating that it's not about money, it's about standards, and that American kids may need to go to school longer as a way of offsetting their apparent inability to retain information over the dog days of summer.  In a really interesting blog posting on nj.com, journalist Bob Braun argues that no, in fact, it is about money, and that there is a direct correlation between money and academic performance.  According to Braun, "Wealth and achievement are inextricably linked. Give the College Board, the agency that produces the SAT Reasoning Test, your family income numbers and your race and educational level of your parents and it will predict your scores and almost always be right."

But this means, of course, that the SATs, like most standardized tests, don't measure "knowledge" at all--something all of us in higher education have known for a very long time.  And yet, we still elevate them as a way of evaluating whether teachers are doing right by our children: if test scores are low, somebody's not doing what they're supposed to be doing and the country is suffering as a result.

I think we hold these kinds of standardized, evaluative measures in high regard because they're a ticket to something we've decided we should all strive for.  Something expensive and time-consuming, but ultimately very valuable.  What that "something" really is, though, we're not so clear about.

I think that what is at issue is an underlying crisis of American identity and self-definition.  Who are we?  Who do we want our children to be, and what do we want them to have?  If we don't want our children to be the Kenneth Lays and the Jeffrey Skillings of the future (and in some cases, I wonder whether maybe we don't care if they are, so long as they're smart enough not to get caught), what do we want them to know, value and strive towards?

A hundred years ago, the answers to these questions may have seemed far simpler, although I don't think they ever really were.  If your family was poor, you went to school only if and when your family could afford to do without your labor.  In many rural communities, education was more or less seasonal.  If you weren't in school, chances are, you were working, and at that time, most people had to work. 

In today's world, we have come to regard labor with an odd kind of voyeurism.  Like many, I'm a fan of Mike Rowe's show, "Dirty Jobs."  But when you think about it, this show's popularity is a really interesting comment on contemporary American culture.  We don't want to actually DO these jobs, we want to watch someone else do them and get dirty, so that we can laugh. 

If we're being honest, we'd all prefer to stay clean and get paid a lot more money doing a lot less "work," because if you're smart enough, "real" work should be nothing more to you than a form of entertainment.  And, at the end of the day, this is what we want for our children, to some extent.

I find it interesting that Rowe's experience with the show has alerted him to a crisis in America's infrastructure, and he is committed to raising awareness about and reevaluating the definition and significance of manual labor in the United States (check out his site for more about his project, "Mikeroweworks"). 

I think his arguments are interesting and persuasive, and they lead me to wonder: is our current focus on teachers' salaries in public education really "about" something besides education and the opportunities that we want for our children?  Is it really about the financial identity that separates the middle class from the working class?  Is our anxiety about upholding the social markers of education a way of deflecting our fears about the harsh realities that have been spawned in the wake of Wall Street's collapse?

Saturday, September 25, 2010

More About Intention

As you can probably tell by now, my brain is definitely my favorite body part.  I spend a lot of time reading and thinking about what makes other people do what they do, and then I try to figure out what the rest of us are supposed to do with them once they've done what it is that they've done.

So I found Tim Wogan's article "Murder or Accident: The Brain Knows" (ScienceNOW, March 29, 2010) really interesting--especially since I'm reading Dostoevsky's Crime and Punishment (1866) right now.

Briefly, Wogan describes a series of scans and studies designed to identify which area of the human brain "allows us to make moral judgments of another person's motives."  So, if you've ever found yourself thinking, "Why, that sneaky little no-good bastard...", chances are, the right temporoparietal junction (RTPJ) of your brain was all lit up at the time.

According to Wogan, this area of the brain, located "just above the right ear," "receives more blood than usual when we read about people’s beliefs and intentions, particularly if we use the information to judge people negatively."

In an effort to determine whether this brain activity actually shapes our moral judgments or merely accompanies them, scientists used a magnetic field to temporarily disable the region in a group of volunteers, and then gave them stories of accidental and attempted murder to read.

I have several reactions to this.  First, I say "kudos!" to all of the brave volunteers between the ages of 18 and 30 who nonchalantly agreed to let other people use a really big magnet to temporarily shut off part of their brains.  Personally, I stare anxiously at my computer screen every morning, praying that it will boot up same as always. 

Second, it is interesting that, with the RTPJ turned off, research subjects gave attempted murderers a pass--their perceptions of other people's intentional states seemed to be altered in favor of forgiveness.  But forgiveness is a value-laden term, dependent on context.  In Crime and Punishment, for example, the protagonist's act of murder stems from a well-intentioned but inherently wrong idea pursued in single-minded isolation.

Raskolnikov's intentions constantly shift and morph throughout the first part of the novel, and his quest for redemption and forgiveness hinges on his own eventual understanding of why he did what he did, and whether or not he is genuinely remorseful.

The thing is, though, sometimes he is, and sometimes he isn't.  Like all of us, Raskolnikov has feelings of terrible guilt interspersed with periods of denial, self-justification, and just plain old orneriness (the "what's-done-is-done-and-I-can't-help-it-so-stop-bothering-me-about-it" mindset).

For Dostoevsky, intentions are never fixed, even if our perceptions of them are (or seem to be).  More importantly, Dostoevsky repeatedly suggests that our perceptions of others' intentions should not cause us to assume that we know the full story about their inherent moral goodness or future prospects for ethical decency and humane action.

For Dostoevsky, forgiveness can't be measured in brain waves or blood flow; it is a leap of faith that defies logic.  It comes from the heart, not the head.  To the psychological adage, "The best predictor of future behavior is past behavior," Dostoevsky would say, "Nonsense.  The future is never predictable.  People are never predictable.  You can never know for sure what someone will do or why they'll do it."

And that's what makes us all so interesting.

Friday, September 24, 2010

The Measure of Our Mistakes

In George Eliot's novel Adam Bede (1859), after the carpenter Adam Bede fights with a longtime friend who has (rather easily) seduced the woman that Adam himself loves, he muses, "I seem as if I'd been measuring my work from a false line, and had got it all to measure over again."

We often accidentally measure our work from a false line, but we are rarely inclined to measure it all over again, because to do so means openly admitting that we've made a mistake.  This admission is a hard one to make to ourselves or to others, and it becomes even more difficult when the measurements seem particularly serious or important.  

Instead, we seem to prefer to go through life with ill-fitting careers or relationships, hoping that no one will notice the awkward gaps or the mental and emotional patches we've plastered over our obvious mismeasurements.  Because starting over again is too much work and leaves us open to the negative judgments of others, we'd rather just carry on as if everything is fine.  Because who knows?  Maybe it will all work out in the end.

As any good carpenter can tell you, though, it won't. 

In Mindset (2006), Carol Dweck argues that the mental viewpoint that we adopt about success and failure dramatically influences--you guessed it--our successes and failures.  According to Dweck, "[b]elieving that your qualities are carved in stone--the fixed mindset--creates an urgency to prove yourself over and over" (6).  By contrast, if you adopt what Dweck labels a "growth mindset," you assume that "the hand you're dealt is just the starting point for development" (7).  

In essence, growth-minded individuals tend to roll with the punches whereas people with a fixed mindset know when they're beaten.  And both groups act accordingly.

A case in point: I love to cook.  Nothing makes me happier than making a feast for someone else to eat.  Don't get me wrong, I like to eat, but I love to cook. 

When I first started cooking, I made horrible mistakes.  Used the wrong ingredients, took bad short cuts, spilled stuff, burned stuff--you name it.  It used to make me terribly self-conscious.  I loved to cook, but I couldn't cook without constantly worrying that I would make a mistake while cooking for someone else, and that they would notice it and hate it.  And, by extension, that they would then hate me for it. 

Finally, as I was standing over the stove obsessing one night, a longtime friend told me very gently, "You know, you kinda have a little thing about things turning out perfectly." 

I rounded on him immediately; I was positively indignant.  I told him right then and there, in no uncertain terms: "I do not have a little thing about things turning out perfectly.  I have a big thing about things turning out perfectly."

When I thought about it later that night, I remembered something that had happened several years previously, when I was in college.  A guy I was dating invited me to have dinner with his family, and he cooked for us.  He made chicken and rice, and the rice was undercooked and crunchy.   I remember that at the time, all I could think was, "I hope he doesn't think it matters, because it really doesn't.  It's not his fault.  Rice is tricky sometimes--everyone knows that.  It was so nice of him to cook all of this for all of us, no one cares if one little thing didn't turn out right."

And, as it turns out, no one did.

So, I wondered, why couldn't I extend the same courtesy to myself?  If I didn't care when someone else made a mistake and if I was always still aware of their kindness and generosity and goodness, why did I assume that people would lose sight of my own good qualities in a mad rush to judge me on the basis of a faulty flan or an undercooked carbohydrate?

As Dweck observes, whether one holds a fixed or a growth mindset makes a great deal of difference in how one perceives one's own efforts and their results.  Dweck notes that seventh graders with a fixed mindset typically blame themselves for their failures ("I am the stupidest," "I suck in math") or else they blame someone else ("[The math teacher] is a fat male slut ... and [the English teacher] is a slob with a pink ass") (57).

As a similarly pink-assed English teacher, I have frequently marveled at how worried students can be about their mistakes, whether real or potential, no matter how small or negligible those errors might be.  I often have to remind them that I lead a rich, full, and happy life outside of class, so that they should not for a single second assume that I'm lying in bed late at night thinking, "Wow.  I can't believe Susie didn't know that participles can also function as adjectives.  I mean, where has she been for the last twenty years?!  And Bobby.  What's his story?  He didn't even know that George Eliot was actually a woman."

In the mid-70s, Andy Pratt sang, "If you could see yourself through my eyes/ You'd never be worried/ Just do what you gotta do."  When we start to feel the pull of the fixed mindset, it would be nice if we could remember those words and then just do what we've gotta do.   

As for me, I've solved the problem of my obsession with cooking perfection.  I recently hung a sign in my kitchen that reads, "I kiss better than I cook."   

Never underestimate the power of distraction.  

Sunday, September 19, 2010

Mindful Memorizing

In "Drill, Baby, Drill," Virginia Heffernan questions the practice of rote memorization--formally known as "distributive practice"--in the learning process.  Referred to disparagingly in academic circles as "drill and kill," the assumption is that the repetition characteristic of rote memorization deadens students' enthusiasm and does more harm than good, if it does any good at all.

The problem, of course, is that every field has some information that has to be "automatic," readily available at your fingertips when you need it.  Searching for the right word once in a while is not a problem, but not knowing what words mean and when and how to use them in a sentence, is.  Counting on a calculator to do simple math assumes there is no real benefit to knowing how to do it yourself--an assumption that simply isn't true.

I'm not a huge fan of rote memorization as it is typically practiced, but I think that one of the advantages of drilling is that, when done thoughtfully, it promotes pattern recognition.  When I was in 3rd grade, my teacher insisted that we all had to memorize our multiplication tables and then recite them in front of her.  By the end of the year, we had to know them, cold.  No complaints, no excuses.  She gave us a chart of our "times tables" from 0-10, and told us to get cracking.

After weeks of procrastinating, thinking, "this is impossible!" "this is going to be boring!" and--of course--"this is stupid!", I still remember the epiphany that I had as I was studying the chart one evening.  All of a sudden it dawned on me: "multiplication is addition, only faster, and that means that division is really multiplication in reverse."  I realized that math was not a series of discrete, unrelated activities, but a system of very useful interconnections between numbers. 
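
(To make the epiphany concrete: 4 x 3 is just 4 + 4 + 4 = 12, which means that 12 ÷ 4 simply asks how many 4s add up to 12.  One fact, two directions.)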

What I've since realized is that, in that moment, I transcended the handicap of rote memorization.  As Ellen Langer has argued in The Power of Mindful Learning (1998), one of the problems of memorization is that it is often "a strategy for taking in material that has no personal meaning."  Divorced from context, packets of facts don't stay with us because they don't possess personal relevance.  We're "learning" them, but not thinking about them.

The result is that they typically don't stay with us unless they become so ingrained that we simply spout them out on cue, but getting to this point of automatic retention is tiresome and, for most people at least, painfully dull.  Chances are, when we do learn things in this way, we don't really know what the facts mean or how they're useful.  If we had to explain them to someone else, we probably couldn't.  We don't know why we know what we know, we just know it, but not in any conscious, constructive or useful way.

So, really, what's the point of "knowing" it at all?  We're right back where we started, caught in a vicious cycle of mindless repetition that, for many, passes as "knowledge" or "learning."

The problem, Langer suggests, is that academia often rewards this kind of thoughtless retention of information.  Although there has been a decided shift in approaches to teaching, we still tend to embrace a model that rewards the students who sit still and "learn" the information that is delivered to them, and we typically test this retention through--you guessed it--tests that reward memorization.  As Langer shows, however, this really isn't the most effective way for anyone to learn anything.

Our brains retain information best when it stimulates our neural pathways in new or interesting ways, either because it matters to our gray matter or because it somehow leaves an impression.  Like when you put your hand on a hot stove.  You realize, in rapid succession, two basic things: 1) what "hot" means and 2) what stoves "do," and as a result, you don't ever deliberately put your hand on one again.

Although it doesn't have to be physically painful, forging new neural pathways can be uncomfortable because we haven't yet made the mental connections that render those neural exchanges automatic.  As a result, we pay attention in these moments of mental activity because something very different is going on inside our heads all of a sudden.

Similarly, we can also retain information better if we memorize facts while performing some kind of kinesthetic activity.  Because we're stimulating our neural pathways at the same time that we're taking in information that is inherently less-than-stimulating, our brains keep the fodder that accompanies the fuel. 

I actually tested this approach.  I was studying Japanese, and after weeks of trying, I decided that the only way I could possibly learn my numbers well enough to be able to shop in Tokyo one day would be to put away the vocabulary lists and figure out a way to apply them.  This accounts for an odd and completely random fact about me: when I swim, I count laps in Japanese.  I set myself a basic rule before I started: if I couldn't remember a number or got stuck, I had to start over from 1.  Since I really, really, REALLY didn't want to end up swimming 3000 laps in increments of 5 instead of my usual mile, I was motivated to retain the information.  And retain it I did: I can now count as well in Japanese as I can in English (and probably better, if you dunk me in water first).
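
(For the curious, the first ten are ichi, ni, san, shi, go, roku, shichi, hachi, kyu, ju.  After ten, the numbers mostly recombine--ju-ichi is eleven, ni-ju is twenty--so the counting itself becomes a little exercise in pattern recognition.)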

So although Heffernan resorts to "colorful, happy apps" as a way of counteracting the boredom that accompanies memorization, programs that offer "just enough screen magic to surprise and engage the mind and memory" may be no more effective than, say, getting out of the chair and taking a walk with a set of flashcards or mentally reciting a list of words while riding a bike.  If the adage "use it or lose it" applies to the brain--and in many ways, it does--then we need to motivate ourselves to memorize by finding personal uses for the facts we'd rather not lose.

Underground Compassion

I have read Dostoevsky's Notes from Underground (1864) many times.  Although most critics focus on the philosophical argument against rational egoism presented in the first part, the emotional crux of the novel really lies in Part Two.

Having gone to a party (to which he was not invited and at which he was not welcome), the narrator gets drunk, insults the other guests and heads off to a whorehouse, where he has sex with a prostitute named Liza, who strikes his fancy because of her kind and unusually serious eyes and expression.  As he remarks, the fact that she looks so thoughtful can only work to her detriment in her profession, and the narrator revels in their mutual degradation--she is a quiet and serious whore, he is an agitated and repulsive john.

Afterward, he finds out that, in fact, her parents sold her to the madam.  Because he has been humiliated at the party that precedes his trip to the brothel, the narrator decides to taunt and torment Liza, to even the scales by reasserting his superiority over her.  He paints a picture of the future that awaits her--mental and moral degradation, followed by poverty, illness, and death--and then poetically suggests that someone who could see past all of this might one day come along and save her, giving her a chance at a home, a family, and love.

The narrator comments, "What chiefly attracted me was the game itself"--the chance to see how good a player he really is.  He wants to see if he can convince Liza that he is the kind of man that he pretends to be, to see if he can get her to overlook the obvious and painful realities that should have already clued her in about the type of guy he really is.  Much to his surprise, he succeeds.  She falls for it.  He wins.

As it turns out, Liza does hold out hope that she will be able to leave the life to which her parents have condemned her.  She has recently reconnected with a childhood friend at a party, a man who ostensibly knows nothing of her current profession and who has written to her, telling her that he loves her.  She believes she can still escape her fate; she still has hope.

Fascinated, the narrator realizes that he has miscalculated the depth of Liza's spirit and integrity.  Nevertheless, he tells her to come and see him at his apartment.  He immediately regrets it.  He has presented himself as confident, well-off, intelligent and altruistic, as a thoughtful and compassionate man-of-the-world, who just happened to drop by the whorehouse drunk and disheveled that one time.  In fact, as he well knows, he is insecure, poor, disgusting and spiteful, and he knows that the minute that Liza sees where and how he lives, she will realize the truth about him.

Days pass, and she doesn't come.  When she finally does, the narrator is in the middle of a frantic shouting match with his servant--in short, Liza couldn't have picked a worse time to drop by.

At this point, the narrator unloads all of his anger and resentment.  He laughs at Liza for believing in him, tells her it was all just a game to him and that he thinks she is ridiculous for believing that he is a better man than he actually is.  He lets her know that, basically, she needs to get a clue and face reality, that this is how people are and that this is how the world really works. 

As he goes on and on, he describes how "something exceedingly strange happened": "Liza, whom I had so abused and humiliated, understood a great deal more than I imagined.  She understood that ... I myself was unhappy."

Instead of being crushed by his words, Liza sees through them to the suffering underneath.  She realizes that the games that the narrator plays with words and sex have only left him lonely and very unhappy, and that he is too frightened to ever break out of the emotional underground that he has created for himself.  He has buried himself alive, and although he claims that he prefers it there and that he is happy and entertained watching it all unfold from the safety of his "corner," she knows that really, he is tired, anxious and miserable.

Because she understands, she offers him a genuine connection of sympathy and compassion, in spite of the pain that he has caused her.  She looks past her own pain and humiliation because she sees that they are simply a result of the actions of a man who is, in the end, emotionally and spiritually lost.  And she realizes that this is a terrible feeling.

The narrator, true to form, takes advantage of her emotional generosity, sleeps with her again, and then kicks her out.  As he shows her the door, he pays her and then turns away.

When he turns around, Liza has left.  And she has left the money behind. 

Faced with the narrator's repeated efforts to showcase human nature as fundamentally egotistical and painfully confronted with his need to reduce relationships to a game of words that everyone supposedly plays and enjoys, Liza silently asserts her own understanding of life and its purposes. 

It is one of my favorite moments in all of literature. 


Saturday, September 18, 2010

Whistling in the Dark

In Walt Disney's "Pinocchio," Jiminy Cricket advises Pinocchio (and by extension, all of us) to "Always Let Your Conscience Be Your Guide."  In particular, he tells us that "When you get in trouble and you don't know right from wrong" or "When you meet temptation and the urge is very strong," you should "Give a little whistle!"

Personally, even as a five-year-old, I found this advice completely unhelpful.  Standing there with my little hand poised over the cookie jar, I'd give a little whistle, at which point my mom would call from the other room, "What are you whistling about?"  I'd answer, "Ohhhh, nnothhinng...," and be right back where I started, wondering whether it was really so very wrong to just take a little cookie.   

More powerfully, when asked to inform against friends who had connections to the Communist Party, writer Lillian Hellman stated in her letter of May 19, 1952 to the US House of Representatives Committee on Un-American Activities, "I am ready and willing to testify before the representatives of our Government as to my own opinions and my own actions, regardless of any risks or consequences to myself...  But to hurt innocent people whom I knew many years ago in order to save myself is, to me, inhuman and indecent and dishonorable. I cannot and will not cut my conscience to fit this year's fashions...".     

So what is human integrity made of?  Is it solid and substantial, something that can be tailored (for better or for worse) by our circumstances?  Or is it ephemeral, a little whistle in the (sometimes overwhelming) moral darkness that surrounds us? 

A recent study published in Science suggests that the capacity for introspection (a key component of any functioning conscience) may be connected to specific areas of the brain.  In "Relating Introspective Accuracy to Individual Differences in Brain Structure" (Science 329 [September 17, 2010]: 1541), Stephen M. Fleming, Rimona S. Weil, Zoltan Nagy, Raymond J. Dolan, and Geraint Rees argue that "introspective ability" correlates with both "gray matter volume" and "white-matter microstructure" in the anterior prefrontal cortex (the region of the brain that tends to be more highly developed in humans and that is associated with problem-solving and planning).

Put much too simply, it would seem that the ability to inwardly debate one's own perceptions and "discriminate correct from incorrect perceptual decisions" with respect to the outside world may be shaped by one's neuroanatomy.

At the same time, however, neuroscientists have increasingly acknowledged the fact that "brain maps"--schematics that associate certain skills or abilities with specific regions of the brain--tell only part of the story.  Recent studies in neuroplasticity suggest that our brains form and reform neural connections throughout our lives, both in response to new experiences and information and in order to recover from injury and disease.  The brain can--and often will--reorganize itself in order to function more effectively, provided it receives appropriate stimuli.  (For more on this subject, check out this lecture by Michael Merzenich or Norman Doidge's fascinating book, The Brain That Changes Itself [2007]).

So, it may be that, whatever our particular allotment and configuration of gray matter, our conscience can develop through use or, conversely, atrophy through disuse.  Although it may seem that one day, we're pilfering a cookie, and the next, we're Eliot Spitzer, hiring a high-priced call girl to help us celebrate The-Day-Before-Valentine's-Day, our brains are not to blame.  Consider instead the many days in between, when the mental connections between right and wrong and our place in the world remain severed--the times when the slippery slope of moral bankruptcy still provides the illusion of traction and so we don't even so much as whistle.

Thursday, September 16, 2010

Thinking Inside of the Box

In "Boxing Lessons," boxing coach and philosopher Gordon Marino argues that "Aside from the possibilities for self-fulfillment, boxing can also contribute to our moral lives."  I've taken boxing lessons, but for many of the reasons mentioned in the comments on Marino's article, I've never actually sparred with anyone in the ring.  Enduring repeated blows to the head simply isn't good for your brain, and since I (allegedly) make my living using my noggin', I thought it best to put my relatively successful career ahead of my slightly aggressive impulses.  

Although I also question the link that Marino draws between morality and physically assaulting an opponent, I think that there is a link to be made between athletics and the intellectual life.  I box, I swim, I bike, and I do Pilates, and in all of these activities, I have a very clear sense that I am not simply warding off osteoporosis and flab.  I am also becoming a better thinker.

As Marino notes, even before Descartes, Western philosophy tended to separate the mind from the body, a practice in distinct contrast to many of the teachings of various Eastern philosophies.  The Roman satirist Juvenal suggested that we should pray for "mens sana in corpore sano" ("a sound mind in a sound body"), but the fact that he offers this suggestion in an ostensibly satiric context makes it difficult to determine whether he was advocating this conjunction or acknowledging its ultimate impossibility.  (Juvenal also tells us that, while we're at it, we should "Ask for a brave soul that lacks the fear of death,/ which places the length of life last among nature's blessings,/ which is able to bear whatever kind of sufferings,/ does not know anger, lusts for nothing and believes/ the hardships and savage labors of Hercules better than/ the satisfactions, feasts, and feather bed of an Eastern king."  Well, I guess it never hurts to ask...)

So, setting aside the question of whether we can all be both brainy and buff, I'd like to meditate a bit on what it is that I learn from boxing that can be transferred to my intellectual life.  While Marino argues that boxing teaches and develops courage, I would argue that the ability to conquer fear is the least of boxing's influences on my own experience.  I think you have to be a pretty bold young lady to begin with if you're going to decide to go see what boxing is all about.

What I did learn from boxing, though, is how important forethought and anticipation--to say nothing of endurance--really are.  It's not about brute strength, although that certainly helps, and it's not about pounding someone to a pulp, although that sometimes happens.  Boxers have to assess their opponents on a wide range of levels: size, speed, skill, and attitude vary greatly from one boxer to the next.   You have to have the ability to think through different strategies for coping with the specific challenges posed by an opponent's unique combination of these qualities, and you have to know how to switch gears when a particular strategy simply isn't working.  All of this is particularly difficult to do, of course, when someone is simultaneously trying to out-think you and hit you very hard in return.

Boxing teaches you to keep your guard up: there's no point in being the smartest, fastest, strongest boxer out there, if you leave yourself vulnerable. (This was what drove me nuts about the film "Million Dollar Baby"--no good fighter would "forget" to keep his or her guard up while fighting.  As my boxing instructor told me, "After you get punched in the face, you remember, no problem.")  As I also learned, it's not always about punching; it's also about not being punched.  You have to stay on your toes and assess when it's more prudent to hang back and try to wear your opponent down.  If they're smart, they'll be trying to cut you off so that you can't wear them down, so once again, you need to think on your feet, conserve your energy and put your strength to good use.

Speed, precision and quick decision-making are crucial: you need to think about where you'll be striking, when, and why.  You want to strike fast, so that you can surprise your opponent and get your guard back up before s/he hits back.  You have to persevere and practice, practice, practice: reactions need to become automatic, muscle-memory effectively channeled through conscious thought and skill.

Boxing uses your entire body.  Your core has to stay balanced, centered and strong (this is where Pilates really helps), because let's face it: an off-balance punch is no good to anyone but your opponent, and you can't slip (duck and weave) effectively (if at all) if you're off-center.  You throw a punch by pivoting your hips around the axis of your own centered body, so that the energy of the blow travels up your leg, through your arm and out your fist.  The force of a shot seems to start in the tips of your toes, gain momentum and power through your hips, and then explode out the end of your arm.  The faster you can channel this force and momentum, the better.

So, setting aside its debatable morality, I think there are reasons that boxers love to box, and their decision to pursue the sport may have very little to do with an unthinking lust for violence.  Boxing can cultivate a kind of self-awareness that has applications to all areas of life and its challenges.  Given the chance, who wouldn't want to float like a butterfly and sting like a bee?

Monday, September 13, 2010

Moral Rumble Strips

In 1965, Eric Burdon of The Animals mournfully pleaded, "I'm just a soul whose intentions are good, Oh Lord, please don't let me be misunderstood."  It's an apt plea and an old one: the 12th-century French abbot St. Bernard of Clairvaux is credited with originating the aphorism "the road to hell is paved with good intentions"--ironic testimony to the fact that "meaning well" is all-too-often an immoral asshole's brand of asphalt, designed to cover a multitude of sins.

Political theorist Hannah Arendt coined the phrase "the banality of evil" in 1963, while reporting on the trial of Adolf Eichmann for The New Yorker.  In effect, Arendt confronts the notion that--Hollywood and fiction aside--people who commit actions that most of us would consider "evil" don't wear black and hide in corners, don't rub their hands together gleefully while chuckling grimly, and don't have a tell-tale gleam of malice in their eyes.  They walk, talk, look, sound, and even think just like you and me--except that they somehow end up doing terrible things.

This bothers us because, in our own eyes at least, we're always souls whose intentions are good (the many misunderstandings of others notwithstanding).  And yet, our actions may not always bear up quite so well under a moral microscope.  We typically don't think of "bad" behavior as a series of small-scale decisions, some of which may seem trivial and nearly microscopic at the time that we make them.  As a result, we often don't accurately assess the overall direction of our lives, thoughts and intentions until it is too late--until someone else has branded us "bad" or, even worse, "evil."

In "Random Thoughts On American Beauty," I wrote about the phenomenon of "sedation" in American society, and in "Dostoevsky's The House of the Dead," I considered the role of intention in human action.  I think there exists a kind of "moral sedation"--in effect, a merger of the two ideas.  All too often, we do bad things without realizing they're "bad," per se, because we see our actions solely through the distorting lens of our own intentions.  We know, in our own minds, what was supposed to happen--what we thought would happen and what we wanted to happen--and this exerts a powerful influence over our ability to see what has happened, what is happening, what will happen.  

Over the course of a lifetime, we morally sedate ourselves, in a sense, because whenever we are confronted with our own sketchy behavior by someone who necessarily views our actions from a perspective other than that of our own intentions, our first impulse is to defensively inoculate ourselves.  We insist, "But I never meant to...", even as friends or strangers continue screaming at us in ever-increasing anger.

It is extremely painful to realize that we've hurt someone, particularly if it's someone we care about; the tendency to invoke our own good intentions effectively numbs the pain that accompanies that realization.  Over time, good intentions can become a form of moral sedation, rendering us unable to accurately assess the pain that we're causing others because we no longer register what has--in someone else's mind at least--"actually" happened. 

What we need, really, are moral rumble strips.  On the highway, rumble strips alert inattentive drivers that they've crossed the lane markers and are headed for the shoulder (and, by extension, for potential disaster).  They're effective, because although most of us pride ourselves on our excellent driving abilities, all of us make mistakes, big and small.  Rumble strips ensure that a small, innocent mistake, committed in a moment of inattention or distraction, can be corrected before it turns into a fatal, five-car pileup.

Rumble strips only work, of course, if the shoulder is wide enough to accommodate them.  Putting a rumble strip on the shoulder of the highway that runs along the Amalfi Coast isn't going to do anyone much good--a driver obviously has to have enough room to correct his or her error, and some roads just don't leave that kind of margin.

I think it is the same with the moral pathways of our lives: sometimes, we can't really afford to screw up, and if we do, we may be faced with consequences that we don't enjoy or we may end up having to pay hugely for mistakes that simply can't be fixed. 

Most of the time, however, we can correct our behavior and get ourselves back on track pretty early on.  The problem is, if we don't have moral rumble strips in place, we don't realize how far over the line we've slowly drifted.  All too often, even when we're headed straight for the moral equivalent of a jersey barrier, our impulse is to floor it: we insist that we're still in control and capable of getting back up on the high road, but really, we need to just hit the brakes before it's too late.

What are these moral rumble strips, you ask?  Well, humility, for one.  We live in a culture of self-esteem that values (and perhaps overvalues) assertion and self-validation. Don't get me wrong: I'm a confident, assertive person, and I think those are definitely positive attributes.  But sometimes, I think we've lost sight of the virtue of humility.  In Dostoevsky's novel, The Brothers Karamazov, the Russian monk Father Zosima repeatedly asks, "Who am I that others should wait on me?" and insists, "Everyone is responsible for everyone else."

We don't like such ideas, because we only want to be responsible for ourselves and we'd prefer to take responsibility only for the things in our lives that are going well.  Instead of the reflexive reaction of "I meant well," humility compels us to think, "Maybe I'm wrong," and to follow up on this possibility with a willingness to listen to and think through the situation from another perspective.

Which brings us to a second moral rumble strip: empathy.  As I mentioned in a previous posting ("Some Unfinished Thoughts on Happiness"), we don't like to defer to the judgment of others when it comes to determining the paths of our own lives.  If we practice empathy, we retain the right to make our own decisions about where we're headed, but we consciously opt to look at the world from someone else's perspective for a period of time.  As the old, pre-automotive adage has it, we show our willingness to "walk a mile" in someone else's shoes when we empathize with them. 

Rather than finding ourselves "long regretting/ Some foolish thing, some little simple thing" we've done, consciously installing moral rumble strips along the pathways of our lives is an overt acknowledgment that "no one alive/ Can always be an angel."  If we check our thoughts and behavior through the practice of humility and empathy, we may find that, in the end, we're not misunderstood.

Saturday, September 11, 2010

Getting By With a Little Help From Our Friends

The inimitable and wonderfully cynical Irish dramatist Oscar Wilde once claimed, "True friends stab you in the front."  Decidedly less urbane and somewhat more optimistic, the Greek philosopher Aristotle argues in his Ethics that genuine friendship occurs when there is a mutual sense of eunoia, often translated as "good will."  Basically, according to Aristotle's conception of friendship, true friends will never stab you anywhere, unless they happen to be performing a C-section, tracheotomy or emergency appendectomy on you.  Because genuine friendship involves a reciprocal willingness on the part of each friend to always act in the best interests of the other, it is premised on a mutual recognition of and appreciation for each individual's good character and virtue.

Essentially, Aristotle argues that we like people for three possible reasons: 1) they are good; 2) they are useful; or 3) they are pleasant.  The quality of the friendship formed depends upon the reasons that bind the respective individuals to one another.  While we all value people who are useful and pleasant, Aristotle argues that these friendships are ultimately "imperfect" because they lack a firm foundation in trust, they are prone to dissolve suddenly, and they often erupt in quarrels.  

The quality of eunoia or "good will," Aristotle suggests, is only present in friendships based on goodness and virtue.  Although we can obviously help and benefit people who are pleasant and useful, when we do so, we are ultimately acting for our own benefit.  Any benefit that accrues to the other person is a coincidence and, according to Aristotle, “Those who wish good things to their friends for the sake of the latter are friends most of all, because they do so because of their friends themselves, and not coincidentally” (1156b9-11).  

Self-interest does have a place in Aristotle's conception of friendship: it is not simply the case that friends continually sacrifice themselves for one another.  According to Aristotle, the decision to act for the good of someone other than oneself is an exercise of one's own ethical values.  For Aristotle, it is by being ethical--that is, by being a person of good character who actively places someone else's interests ahead of his or her own immediate pleasure and advantage--that we achieve happiness.

In short, your true friends do what they do with respect to you because of their overwhelming admiration for who you are, and not because of what they themselves like, desire or need at any given moment.  Because your goodness is what turns them on--and because theirs is what turns you on--you work together to promote a respective sense of ethical well-being and mutual happiness, both as friends and as morally well-developed individuals.

In an essay that appeared in The Chronicle of Higher Education in December of 2009, writer William Deresiewicz examines the phenomenon of friendship in cyberspace.  In "Faux Friendship" (The Chronicle of Higher Education 56, no. 16 [December 11, 2009]: B6-B10), Deresiewicz castigates contemporary electronic media as vehicles of friendship.  Taking issue with social networking sites such as Facebook and MySpace, Deresiewicz asks, "Having been relegated to our screens, are our friendships now anything more than a form of distraction? When they've shrunk to the size of a wall post, do they retain any content? If we have 768 'friends,' in what sense do we have any?"

I think an even more interesting question to consider is, what happens to the idea of true friendship when it is considered in conjunction with the opportunities for limited, broad-based social interaction offered by social networking sites?  If true friends stab you in the front, does that mean that the guy--or girl--who breaks up with you on Facebook by changing his or her relationship status or by some other equally (and embarrassingly) public means, is really your best friend?  I think we would all say, probably not.  Okay, definitely not.

More importantly, is it possible to be a good friend--a person of good character who actively works for the advantage of the person s/he cares about--if one conducts the important exchanges of friendship through emails, Tweets, texts, and social network postings?  Aristotle is quite clear on this point: true friendship requires time spent with the other person, in face-to-face interactions or shared activities that are mutually beneficial and exercise each individual's capacity for virtue.

Aristotle recognizes, however, that genuine friendships are rare, precisely because it is not possible to invest this kind of time in relationships with a large number of people.  What email, IM, Facebook, Twitter, and MySpace offer is an illusion of time spent with friends.  If we're logged on more or less constantly, we may come to feel that we have spent a large portion of our day with those we care about--even if they're vacationing in Crete and we're sitting at home in Connecticut.  

Who among us hasn't wasted upwards of an hour on a social networking site lately?  (I'll save a discussion of time spent in Farmville for another post.)  In that time, what did we actually learn about the character of our so-called friends--about their essential goodness, their sense of moral virtue and their desire to support and cherish our own ethical values?  Something?  Anything?  

More likely, nothing. 

To compound the problem, our electronic exchanges are increasingly limited--and for many, comfortably so--by the imposition of character limits and by our ability to type or text rapidly and semi-coherently.  If we can only respond and post in so many characters (420 on Facebook, 140 on Twitter), how then can we continue to expand and develop our own characters as good friends and virtuous individuals?  If everything must always be boiled down to a concise posting accessible to anyone and everyone, how can we work through the nuances of our thoughts and elaborate our ethical positions with respect to those specific individuals whose goodness we purport to admire and advance? 

If social networking is slowly consuming the bulk of our social interactions--and for many, it is--does that mean that we will gradually find ourselves losing the capacity to form bonds based on a genuine understanding of the character and goodness of others?  Perhaps more disturbingly, does it mean that we may find ourselves increasingly unwilling (or even fundamentally unable) to contemplate what might be in the best interests of the person we have identified as a "friend"?  Will we one day become incapable of setting aside considerations of what is pleasant or useful for ourselves in order to comprehend the character, goodness and emotional well-being of another?

Monday, September 6, 2010

Honor Thy Father and Thy Mother

At the end of May 2006, my father was diagnosed with stage 4 lung cancer: it had metastasized to his brain and his kidneys.  He died on July 31st.  

In January of 2009, my mother was diagnosed with osteomyelitis of the sternum.  Decades earlier, in 1973, she had received extensive rounds of cobalt radiation after a radical mastectomy.  One of the potential long-term effects of cobalt radiation treatment is what is known as "osteonecrosis" or, literally, "bone death."  In my mother's case, the bones in her sternum began to change abnormally sometime in the mid-1980s, creating what appeared to be bone spurs.  These continued to grow and eventually broke through the skin on her chest.  When one of these spurs subsequently fell off, it became apparent that a chronic bone infection had eaten a hole right through her sternum.  There is no cure for this kind of infection.  She died in March of 2010 of congestive heart failure brought about by acute infection.

My father received in-home hospice care in the last two months of his life; in my mother's case, this was simply not an option, given the nature of her condition.  I sat alone in my parents' living room with my father when he died and, three and a half years later, I sat alone in my mother's hospital room when she died.

To watch someone who has been an important part of your life deteriorate and die is an incredibly painful and ugly process.  In his final weeks, my dad frequently suffered from hypoxia (lack of oxygen to the brain), which caused him to experience panic attacks and hallucinations.  He coughed up blood, experienced referred pain in his right shoulder, and became incontinent.  

My mother became extremely paranoid and also experienced hallucinations: she died insisting that I had caused her death by putting her in the hospital.  She was also incontinent and, as a result of the infection, her skin became mottled and her hair fell out.  She allowed herself to starve and become dehydrated because she believed that her food and water were poisoned.  

Both my mom and my dad suffered from edema, or fluid build-up: when they slept on one side, the fluid would pool on that side, and their faces would become abnormally swollen, distorting their features.  Both of them had a strange, nearly unbearable smell that seemed to come from deep inside of them when they breathed.  In their final hours, they experienced what is called Cheyne-Stokes breathing: at times, they would stop breathing entirely, sometimes for almost a minute, only to resume.  Because the air is traveling over membranes that have become congested, a dying person sometimes sounds as if they are drowning, trying to breathe under water--in my dad's case, this sound lasted for two days; in my mom's case, for about 12 hours.  In both cases, their mouths became puckered and fish-like, their limbs turned blue, and their skin became swollen and mottled and cold.  When a person dies, their bladder and bowels empty. 

In both cases, my parents were given every form of medical attention possible (morphine, oxygen, etc.).  What I have described are simply the physical facts of death.  It is part of the story of my parents' lives: it is not the entire story, obviously.

After my dad died, I went through a phase of extreme anger, which I think I (more or less) successfully suppressed for well over a year.  Eventually, I simply stopped speaking about what I had seen, felt, and experienced.  When my mom was dying, the nurses repeatedly asked me whether there was anyone who could come sit with me.  I thanked them for their concern, but I told them there was no one.  In the movies, no one is ever alone at the end.  In life, sometimes we have to bear what we have to bear on our own.  That's just how it is.

I have never questioned the goodness of people's intentions.  I question the way in which we, as a culture, represent and conceptualize death and dying.  Americans don't like the idea of dying, and in many ways, we have constructed a culture that refuses to consider it.  We don't want to see it if it isn't gentle and serene and peaceful (and it isn't, ever), and we don't want to hear about it unless we're at a safe and comfortable distance.  

We live in a culture of medical miracles, and we all want to believe in them, regardless of the circumstances.  Death can always be avoided, except when it can't, and when that happens, we would all prefer to look the other way.  

In Tolstoy's short story "The Death of Ivan Ilych," the narrator summarizes the servant Gerasim's reaction to his dying master, Ivan Ilych: "Once when Ivan Ilych was sending him away he said straight out: 'We shall all of us die, so why should I grudge a little trouble?'--expressing the fact that he did not think his work burdensome, because he was doing it for a dying man and hoped someone would do the same for him when his time came."

Saturday, September 4, 2010

Intellectuals or Birdbrains?

Several years ago, I read an article about a phenomenon known as "mobbing."  Mobbing is something birds do when they perceive a threat: they gang up and attack the potential predator, driving it away.  As John Gravois writes in "Mob Rule" (The Chronicle of Higher Education, 4/14/2006, Vol. 52, Issue 32), "The birds seldom actually touch their target (though reports from the field have it that some species can defecate or vomit on the predator with 'amazing accuracy'). The barrage simply continues until the intruder sulks away."

In the 1980s, Heinz Leymann, a German psychiatrist and industrial psychologist working in Sweden, applied the term "mobbing" to workplace behavior that constitutes "an impassioned, collective campaign by co-workers to exclude, punish, and humiliate a targeted worker" (qtd. in Gravois).  Leymann offered an extensive list of characteristics of mobbing: "You are interrupted constantly"; "you are isolated in a room far from others"; "management gives you no possibility to communicate"; "you are given meaningless work tasks"; "you are given dangerous work tasks"; "you are treated as if you are mentally ill" (qtd. in Gravois).

Although Leymann's description might seem slightly humorous (or unfortunately all-too-familiar), mobbing is no laughing matter.  In fact, Leymann would go on to suggest that over 10 percent of all suicides in Sweden could be linked to incidents of workplace mobbing.  Leymann's work led to the creation of anti-mobbing laws in France and raised awareness of the problem throughout Europe.  Recently, Leymann's work has been continued by Dr. Kenneth Westhues, a sociologist at the University of Waterloo in Ontario, Canada. (For an extensive list of resources on mobbing, see Westhues' website).

What is particularly striking and, in my opinion, particularly sad is the fact that, in the studies conducted by Leymann, Westhues and others, universities in general and academic departments in particular are among the most frequent sites for incidents of mobbing.  What begins as the gradual isolation of a "target" individual (as Gravois describes it, "Colleagues begin to roll their eyes at you during meetings. You get the sense that more people dislike you than you once thought") quickly escalates into "petty harassment" ("Your administrative requests are repeatedly delayed or misplaced. Your parking space is moved to the outer reaches of the lot. Your classes or meetings get scheduled at odd times") and culminates in a "critical incident" that leads colleagues to insist on administrative action--in effect, "the incident confirms what they have always suspected about you" and "makes them wonder aloud what you're really capable of."  This confirmation and concern often take the form of a petition, signed by many, if not most, of the target's colleagues.  Whether or not the behavior is ultimately stopped at the administrative level, the target generally leaves the job, usually feeling mentally and emotionally traumatized in the process.

As Gravois notes, incidents of mobbing overlap with a parallel principle of human behavior, what Cass R. Sunstein, Professor of Law and Political Science at the University of Chicago, has identified as "The Law of Group Polarization."  In "The Law of Group Polarization" (The Journal of Political Philosophy, Vol. 10, No. 2, 2002, pp. 175-195), Sunstein argues that "members of a deliberating group predictably move toward a more extreme point in the direction indicated by the members' predeliberation tendencies" (176, italics in original).  So, if we are all in agreement before we meet to deliberate an issue, we will predictably endorse a more extreme collective position after the meeting is over.  Ultimately, Sunstein's research has shown that regular meetings of like-minded individuals "without sustained exposure to competing views" frequently result in extreme movements (176). 

Sunstein summarizes the resulting dilemma quite compellingly: "If deliberation predictably pushes groups toward a more extreme point in the direction of their original tendency, whatever it may be, do we have any reason to think that deliberation is producing improvements?" (177).  Obviously, this is a crucial question for every American living in a representative democracy founded on principles of rational deliberation and debate.  

Sunstein's conclusions are interesting: on the one hand, "enclave deliberations" (discussions among like-minded individuals already in agreement) can lead to extremism, but on the other hand, in discussions conducted in heterogeneous groups, "participants ... tend to give least weight to the views of low-status members" (177).  Thus, low-status members of a social group can benefit from meetings of like-minded individuals because these deliberations "might be the only way to ensure" that "unjustly suppressed views" can be further developed and eventually heard (177).

In the end, for me, all of this seems like an interesting merging of the ideas I've tried to flesh out in two of my earlier posts--my reflections on Madison's arguments against domestic factions in The Federalist Papers #10 and my thoughts about the representation of suburban conformity in Sam Mendes' film "American Beauty."  On the one hand, as free-thinking, rational human beings, we all support the idea of dissension and debate.  On the other hand, we have all witnessed some form of birdbrained behavior--like mobbing or group polarization--that is ultimately unworthy of free-thinking, rational human beings.  To sustain the one and eliminate the other is clearly no easy task.