Sunday, February 26, 2017

Slow and Steady

Recently, I happened upon a couple of texts that have been helping me think about the main challenge I've faced over the past year or two: how to stay productive and not get down on yourself when you can no longer count on feeling “great” (or even just “healthy”) from one day to the next.

It’s an exercise in patience, obviously, but it’s a bit more than that as well. It requires a new mindset.

I gained some insight into how to implement this mindset by reading Anthony Ongaro’s October 22, 2016 blog post, “25 Simple Habits You Can Build From Scratch.” While I’m not necessarily interested in building a new habit, per se, I have been seeking a way to achieve “the consistency needed to make significant changes over time.”

I’m a planner and a doer. If I plan it, I want to do it. So if I plan it and then I can’t do it, I resort to doing one of two things: 1) feeling bad, and 2) beating up on myself for being a failure. Neither of these behaviors is good for the long haul—and they accomplish nothing.

I’ve found myself inadvertently facing a situation not unlike the one that Ongaro describes, in which I find myself “jogging for 10 hours straight then not jogging for 19 days.” Except that I’m not jogging, of course, because I don’t jog. But you get the idea.

As Ongaro points out, “jogging for 30 minutes every day for 20 days in a row” would result in the same number of hours spent on the activity, but lead to a far more productive (i.e., “beneficial”) outcome. But based on my own experience, it can be hard to make that switch to an exponentially reduced level of activity if you’re used to being able to work for 10 hours straight.

It forces us to rethink how we define ourselves in relation to our world and our productivity. I don’t think this is too broad of a statement to make, because it echoes the insights offered by Maggie Berg and Barbara K. Seeber’s book, The Slow Professor: Challenging the Culture of Speed in the Academy (2016). Berg and Seeber suggest that we need to rethink how we define what constitutes “productivity,” particularly when it comes to the profession of college teaching.

Building upon the mindset of “the Slow movement” (Slow Food as opposed to fast food, for example), Berg & Seeber argue that “[b]y taking the time for reflection and dialogue, the Slow Professor takes back the intellectual life of the university” (x).

My goal was much less far-reaching and far more self-involved: I wanted to take back my own intellectual life and productivity, so that I felt less disheartened by the way that my body was not syncing with the dictates of my mind.

Going to bed at night, night after night, thinking, “Tomorrow, I will feel better and I’ll be able to do X. And Y. And Z!! Yes, Z needs to get done tomorrow, and I will do it!!” and then awaking to realize, “I’ll be lucky to do X today,” and midway through the morning it becomes clear that it is unlikely Y will get done and Z is just a pipe dream… well, that can be discouraging, to say the least.

In my own case, it wasn’t that I was simply feeling “lazy” or “procrastinating” (although I’m as guilty of those feelings as the next person), it was that my physical health was affecting my mental focus and determination to such an extent that I could no longer “be” the kind of writer, scholar, and thinker that I had always prided myself on being.

The insights of Berg and Seeber helped me to recalibrate, both emotionally and intellectually, by offering a new way of thinking about how my mindset might have been shaped by the “corporatization of the university”—that is, by the idea that we’re always racing against the clock, fixated on the idea of productivity and efficiency.

As Berg and Seeber point out, there is a “link between time pressure and feelings of powerlessness” (26)—if we feel we have to finish by X date (or hour), the realization that we aren’t going to be able to do that can leave us feeling particularly helpless and drained.

But what if we simply reframe our thinking, so that we don’t succumb to (or, at least, try not to succumb to) “time pressure”? Berg and Seeber argue that this would help us to develop a sense of—and a “place” for—“timeless time” in our lives. Ideally, we’d silence the “inner bully” in our minds, tune out the voices of all of the people out there who think that professors in particular and teachers in general aren’t “really” doing anything anymore these days (the “must be nice to get the entire summer off” contingent of the population), and realize that any given writing and research task, in order to be well done, will probably take at least twice as long as you had hoped.

Sounds easy, I know. But reading that sentence actually makes me wince. (“TWICE as long?! I don’t want it to be like that! Because my summers are more relaxed than what most people experience, working 9 to 5 year round, so I really don’t have any excuses, and hey, didn’t I just take a full hour off to watch an X-Files rerun? Well, the research certainly isn’t going to get done if I keep doing that, now is it?!”) 

The point, I think, is not the question of time or productivity—it’s a question of attitude. To do your best work, you have to be your best self, and you simply can’t do that if you’re constantly setting the bar too high and then failing (or crawling to bed bruised and defeated because the bar actually fell from that great height when you accidentally knocked into the goal post… and both the bar and the goal post subsequently hit you on the head). As Berg and Seeber note, “If we think of time only in terms of things accomplished (‘done and done’ as the newly popular saying goes), we will never have enough of it” (55).

Yes, even those of us with the (alleged) “summers off.”

Because, as Berg and Seeber point out, “Slowing down is about asserting the importance of contemplation, connectedness, fruition, and complexity” (57). The first idea is not always the best idea, and it takes time to work towards what might eventually be the best—the most connected, fruitful, and complex—iteration of an idea.

The Slow Professor reminded me of what I’ve always known, more or less, but chosen not to highlight about the nature of my own work. That “periods of rest” also have “meaning” because “research does not run like a mechanism; there are rhythms, which include pauses and periods that may seem unproductive” (57). The British novelist Virginia Woolf used to remind herself of this in her journal: when she got on her own case for not writing enough, she would recollect that the creative and intellectual life requires periods of “wool gathering.”

As Berg and Seeber point out, we need to learn to wait (64), to openly acknowledge “the list of detours, delays, and abandoned projects” that we typically hide from view (65), to recognize that “More is not necessarily better”—although paradoxically, sometimes, it actually is (66)—and to give ourselves the time we need to read, think, and reflect (the essence of “research”) so that we can “follow our heart” (i.e., pursue projects “driven by genuine curiosity about a problem even if that is not a ‘hot’ topic at the moment” [68]).

As the fable of the Tortoise and the Hare teaches us, “Slow and steady wins the race.” But more importantly, we need to realize that it isn’t always—because it simply can’t and shouldn’t be—a race.

Monday, February 13, 2017

Who's In Charge Here?

In the wake of the social and political turmoil that has marked the past several weeks, I spent the weekend reading Albert Bandura’s Moral Disengagement: How Good People Can Do Harm and Feel Good About Themselves (2015). As his title suggests, Bandura—an Emeritus Professor of Social Science in Psychology at Stanford and the founder of what has come to be known as “social cognitive theory”—is interested in the question of “moral agency,” the question of when, how, and why human beings behave in ways that align with their stated moral values.

In particular, Bandura is interested in identifying those points at which human beings set aside their moral values and behave inhumanely towards others. In these instances, people rationalize and justify doing harm to others, Bandura argues, not because they are immoral, but because they are “morally disengaged” from the situation and/or the people affected by their behavior.

Bandura’s concept of moral agency thus dispenses with the idea that some people are just innately “bad” or “evil,” while others are inherently “good” or “virtuous.” Instead, Bandura argues that “Moral agency is … exercised through the constraint of negative self-sanctions for conduct that violates one’s moral standards and the support of positive self-sanctions for conduct faithful to personal moral standards” (1)—in short, that we beat ourselves up for doing things that are not in keeping with our moral values, but feel good about ourselves if we do things that align with our values.

Remember Jiminy Cricket’s advice, to “always let your conscience be your guide”? Well, Bandura argues that this kind of moral guidance has “dual aspects”—it can be “inhibitive” (“don’t do that—you know it’s wrong!”) or “proactive” (“I want to do this, because that’s the kind of person I am.”) 

The problem, according to Bandura, is that “Moral standards, whether characterized as conscience, moral prescripts, or principles, do not function as unceasing internal regulators of conduct” (2)—to put it more simply, our internal moral guidance system isn’t always automatically switched “on.” And that’s not just because Jiminy Cricket is taking a nap or because he stepped out to use the bathroom or go get a sandwich. He might be right where he always is, but he’s… staring at a speck on his sleeve, pretending he doesn’t see what you’re up to.

Even more troubling are the moments in life when you’re doing something quite immoral or “wrong” and Jiminy Cricket is not only not objecting to it, he’s actually cheering you on (“anything your heart desires will come to you—no request is too extreme—who said it’s embezzlement?—fate is kind!!”). These moments—when Jiminy Cricket goes rogue or offline, so to speak—are points of “moral disengagement.” As Bandura notes, “People often face pressures to engage in harmful activities that provide desired benefits but violate their moral standards. To engage in those activities and live with themselves, they have to strip morality from their actions or invest them with worthy purposes. Disengagement of moral self-sanctions enables people to compromise their moral standards and still retain their sense of moral integrity.” (2)

This is why, in many instances, people who have engaged in behavior that strikes the rest of us as blatantly immoral remain convinced that they are “good” people who didn’t really do anything “wrong,” per se. And why they will insist on trying to convince the rest of us that we have them all wrong, that they aren’t callous or cruel or deceitful or downright “evil.” As Bandura puts it, “people sanctify harmful means by investing them with worthy social and moral purposes” (2). Or, as the adage has it, “The road to hell is paved with good intentions.” In the moments when they’re behaving immorally, people are quite convinced that, because their intentions are good, their behavior isn’t all that bad.

This continued belief in one’s own moral stature and integrity stems from the fact that, in instances of “moral disengagement,” people do not alter or abandon their stated moral values. Instead, they “circumvent” their standards by opting to “strip morality from harmful behavior and their responsibility for it” (3). This kind of seductive moral strip-tease, if you will, can occur in various ways. On the behavioral level, people can believe that they’re ultimately preventing more suffering than they’re currently causing, and they can use language in ways that enable them to uphold this perception (through euphemisms that cast their behavior as “innocuous” or that work to “sanitize” it) (2). On the level of agency, people operate by “displacing responsibility to others and by dispersing it widely so that no one bears responsibility” (3)—personally, I think of this as the “everyone-does-it-and/or-anyway-s/he-started-it” system of moral reasoning.

Finally, on the level of what Bandura labels the “victim locus,” people can circumvent their own stated moral values by dehumanizing the people affected by their behavior—in particular, “by divesting them of human qualities or attributing animalistic qualities to them” (3). This is, on one level, the impetus behind the “blame the victim” mentality in general. It is a mindset that is even more toxic than “blame the victim,” however, because it insists that the victim isn’t “one of us,” so s/he isn’t fully human and thus not capable of feeling the way we might feel if these things happened to us.

And if you’re about to deploy the “Well-but-there’s-nothing-anyone-can-do-about-that-because-people-are-naturally-aggressive-and-this-mentality-has-been-around-since-the-dawn-of-time” argument, I’ve got news for you. Research has shown that “stimulating the same brain site activated aggression or submissiveness, depending on the animal’s level of social power” (18, emphasis added).

So don’t blame it all on the human brain. Research shows that, if we want to regulate aggression by biochemical or neurological means, we still need to examine how it is shaped and how it operates “in the context of power relations and other social determinants of aggression” (18).

Perhaps more importantly, Bandura insists that we must regard morality as a more complex operation than we typically do. As he points out, “Higher levels of moral ‘maturity’ do not necessarily foretell stronger commitment to humane conduct. This is because justifications, whatever their level, can be applied in the service of harmful activities as well as benevolent ones” (25). As we all know, context is important: soldiers in battle override their personal moral standard not to kill because they justify what would otherwise be considered a harmful activity as a potentially “benevolent” one (by adopting the moral reasoning for a “just cause” or a “just war,” for example). As a result, we do not morally evaluate all acts of killing in the same way. But at the same time, when a drug dealer refers to his “soldiers” and we characterize their acts of murder as components of an ongoing “war,” for example, we are using language to sanitize—if not legitimate—behavior that does not align with our stated moral values. (The drug dealer and his flunkies aren’t actually “soldiers” at all and their “war” is not only unjustified, but morally unsanctioned and ultimately illegal.)

Which brings me to the point that Bandura makes that I find most compelling, as an English professor and general wordsmith. According to Bandura, “individuals can exercise some measure of control over how situations influence them and how they shape their situations” (45)—in particular, Bandura argues, “People do not usually engage in harmful conduct until they have justified to themselves the morality of their actions” (49). In short, we can strive to cultivate a mindset that helps to prevent moral disengagement.

One of the most potent ways of doing this is by looking at how we speak to—and about—others that we are inclined to dislike, disagree with, or otherwise condemn. As Bandura points out, the increasing predominance of online communications has “ushered in a ubiquitous vehicle for disengaging moral self-sanctions from transgressive conduct” (68). A “highly decentralized system that defies regulation,” the internet is a place where, “Anybody can get into the act, and nobody is in charge” (68).

Except that, ultimately, we are in charge. If nothing else, when we go online, we can strive to ensure that our own moral self-sanctions continue to operate. If I would hesitate to look someone in the eye and call him/her, for example, a “friggin’ orange Cheeto who spews nothing but lies and hate” or a “pathetic little snowflake who needs to grow up and stop expecting mommy to change their diaper,” I can refrain from typing it on Twitter or Facebook or in the comments section on someone’s webpage.

Because, let’s be honest, most of us will self-censor (to an extent) when face-to-face with someone we strongly dislike or vehemently disagree with, because we know that such behavior is potentially rude and unkind and may make us look bad in the eyes of others. But, as Bandura acknowledges, “Cloaked in the anonymity afforded by the Internet, people say cruel things that they would never dare say publicly” (39). The problem, however, is that over time, if you say enough cruel things anonymously or repeatedly justify your own contempt for others who disagree with you (because even if you don’t say it out loud or type it online, odds are, you’re saying it to yourself), your moral self-sanctions begin to weaken with respect to the individuals or groups that you’re targeting.

Keep it up, and you’ll start speaking—and behaving—in ways that you would never condone if you were morally “engaged” with the humanity of those around you, because you’ll cease to see the objects of your contempt as “human” in the same way that you yourself are. You’ll psychologically justify a lack of compassion, the practice of overt aggression, and a mindset that sees your victims as always “guilty,” never potentially “innocent.” And often without even realizing that this is what is happening to you.

This is not to say that we should all just agree with, accept, or normalize words and behavior that we fundamentally cannot agree with or accept. And this is certainly not about silencing or circumscribing the voices of others. It’s not about “going along to get along”—a phrasing which itself seeks to “sanitize” unacceptable or potentially immoral behavior.

What I’m suggesting is that empirical psychological research has shown that it is all too easy to get used to stereotyping and dehumanizing people if you never actually have to see the people you’re “talking” to or see the effect of your words. And if the algorithms that structure your social media feeds lead you to believe that “everyone” does it and that “most people” agree with you, you’ll be less likely to perceive your words and actions as unkind or immoral or “wrong.” As Bandura points out, the “unknowns” that go hand-in-hand with online communications “make it easy to persuade oneself that one’s conduct caused little or no harm and that one is merely exercising the constitutionally protected right of free speech” (69).

So what can the morally engaged do, in times of collective crisis, when everyone has gotten into the act but nobody seems to be in charge? 

Well, for starters, we can remember that, although behavioral psychology often focuses on what used to be labelled “man’s cruelty to man,” there is “equally striking evidence that most people refuse to behave cruelly toward humanized others under strong authoritarian commands, and when they have to inflict pain directly rather than remotely” (91).

The key phrase in that statement, of course, is “humanized others.” Not “Cheetos” or “snowflakes” or “deplorables” or “libtards” or “Trumpsters.”

When typing hostile comments or aggressively “debating” on social media, maybe we should imagine ourselves, not saying these things publicly to someone—since moral disengagement can quickly enable us to tell ourselves that the objects of our scorn “started it” and/or “deserve it” and because we’ve all been raised on the very (VERY) flawed maxim that “sticks and stones may break our bones but names will never hurt us”—but instead imagine ourselves walking up to a total stranger on the street and repeatedly punching or kicking him/her because we think we “should” because something about them has told us that we “know” what they’re all about.

And, when or if that person begins to scream or kick back, we can imagine ourselves simply laughing gleefully as we continue to punch and kick them, while others crowd around (and maybe join in too). When we’re done, we can just imagine ourselves walking away to go get a coffee, comfortable in the idea that there’s been “no harm done.” And the next day, we’ll do it again—and maybe even look for the person we did it to the day before.

That’s the scenario of moral disengagement.

By contrast, Bandura notes, “The affirmation of common humanity can bring out the best in others” (91). This doesn’t mean we insist that we’re “all the same,” somehow. It simply means that we realize that we’re human beings who often disagree, that we can control our own levels of moral engagement (or disengagement), and that social forces and the language we use to talk to and about others have not only a measurable, but also a profound effect on our own levels of aggression and moral engagement.

If, instead of labeling someone, we take a moment to sympathetically think about what the world might look like from that person’s perspective—even if it is a viewpoint we cannot possibly agree with, support, or condone—we strengthen and affirm our own moral functioning. 

Empathy is not a sign of stupidity, weakness, indecisiveness, or capitulation to injustice. It’s an exercise in moral reasoning that strengthens our own moral fiber and the integrity of our moral musculature.

And, believe it or not—and despite what may seem to be overwhelming evidence to the contrary—we really do change the world a little whenever we practice empathy.

Because, as Bandura points out, “vicarious influence”—that is, simply watching someone else behave kindly and morally—helps us to “acquire lasting attitudes, emotional reactions, and behavior proclivities toward persons, places, or things that have been associated with the emotional experiences of others” (93). In these moments, we “make what happens to others predictive of what might happen to oneself” (93).

“I’m not going to say that to this person because that could be me.” “I’m not going to say that because I’ve been on the receiving end of comments like that and I know they stay with you and hurt.” “If someone called me a stupid snowflake or a paunchy Cheeto or deplorable, well, gosh, I’d be kind of upset about that.” “She can’t help who her father is.” “I’m just not going to shop there anymore.”

There are plenty of ways to make our voices heard—and our strength and integrity and moral convictions felt—without resorting to the dark side of anger and the language of dehumanization and cruelty.

If no one is in charge, then we each need to be in charge—of ourselves.

If we want to be morally engaged agents of change, I think we need to resist the constant temptation to succumb to the social pressures and the language of moral disengagement.

And yes, we’ll all slip up from time to time. We’re only human.

And that is precisely the point: we’re all—and only—human.