Monday, February 13, 2017

Who's In Charge Here?

In the wake of the social and political turmoil that has marked the past several weeks, I spent the weekend reading Albert Bandura’s Moral Disengagement: How People Do Harm and Live with Themselves (2016). As his title suggests, Bandura—an Emeritus Professor of Social Science in Psychology at Stanford and the founder of what has come to be known as “social cognitive theory”—is interested in the question of “moral agency”: when, how, and why human beings behave in ways that align with their stated moral values.

In particular, Bandura is interested in identifying those points at which human beings set aside their moral values and behave inhumanely towards others. In these instances, people rationalize and justify doing harm to others, Bandura argues, not because they are immoral, but because they are “morally disengaged” from the situation and/or the people affected by their behavior.

Bandura’s concept of moral agency thus dispenses with the idea that some people are just innately “bad” or “evil,” while others are inherently “good” or “virtuous.” Instead, Bandura argues that “Moral agency is … exercised through the constraint of negative self-sanctions for conduct that violates one’s moral standards and the support of positive self-sanctions for conduct faithful to personal moral standards” (1)—in short, that we beat ourselves up for doing things that are not in keeping with our moral values, but feel good about ourselves if we do things that align with our values.

Remember Jiminy Cricket’s advice, to “always let your conscience be your guide”? Well, Bandura argues that this kind of moral guidance has “dual aspects”—it can be “inhibitive” (“don’t do that—you know it’s wrong!”) or “proactive” (“I want to do this, because that’s the kind of person I am.”) 

The problem, according to Bandura, is that “Moral standards, whether characterized as conscience, moral prescripts, or principles, do not function as unceasing internal regulators of conduct” (2)—to put it more simply, our internal moral guidance system isn’t always automatically switched “on.” And that’s not just because Jiminy Cricket is taking a nap or because he stepped out to use the bathroom or go get a sandwich. He might be right where he always is, but he’s… staring at a speck on his sleeve, pretending he doesn’t see what you’re up to.

Even more troubling are the moments in life when you’re doing something quite immoral or “wrong” and Jiminy Cricket is not only not objecting to it; he’s actually cheering you on (“anything your heart desires will come to you—no request is too extreme—who said it’s embezzlement?—fate is kind!!”). These moments—when Jiminy Cricket goes rogue or offline, so to speak—are points of “moral disengagement.” As Bandura notes, “People often face pressures to engage in harmful activities that provide desired benefits but violate their moral standards. To engage in those activities and live with themselves, they have to strip morality from their actions or invest them with worthy purposes. Disengagement of moral self-sanctions enables people to compromise their moral standards and still retain their sense of moral integrity” (2).

This is why, in many instances, people who have engaged in behavior that strikes the rest of us as blatantly immoral remain convinced that they are “good” people who didn’t really do anything “wrong,” per se. And why they will insist on trying to convince the rest of us that we have them all wrong, that they aren’t callous or cruel or deceitful or downright “evil.” As Bandura puts it, “people sanctify harmful means by investing them with worthy social and moral purposes” (2). Or, as the adage has it, “The road to hell is paved with good intentions.” In the moments when they’re behaving immorally, people are quite convinced that, because their intentions are good, their behavior isn’t all that bad.

This continued belief in one’s own moral stature and integrity stems from the fact that, in instances of “moral disengagement,” people do not alter or abandon their stated moral values. Instead, they “circumvent” their standards by opting to “strip morality from harmful behavior and their responsibility for it” (3). This kind of seductive moral strip-tease, if you will, can occur in various ways. On the behavioral level, people can believe that they’re ultimately preventing more suffering than they’re currently causing, and they can use language in ways that enable them to uphold this perception (through euphemisms that cast their behavior as “innocuous” or that work to “sanitize” it) (2). On the level of agency, people operate by “displacing responsibility to others and by dispersing it widely so that no one bears responsibility” (3)—personally, I think of this as the “everyone-does-it-and/or-anyway-s/he-started-it” system of moral reasoning.

Finally, on the level of what Bandura labels the “victim locus,” people can circumvent their own stated moral values by dehumanizing the people affected by their behavior—in particular, “by divesting them of human qualities or attributing animalistic qualities to them” (3). This is, on one level, the impetus behind the “blame the victim” mentality in general. It is a mindset that is even more toxic than “blame the victim,” however, because it insists that the victim isn’t “one of us,” so s/he isn’t fully human and thus not capable of feeling the way we might feel if these things happened to us.

And if you’re about to deploy the “Well-but-there’s-nothing-anyone-can-do-about-that-because-people-are-naturally-aggressive-and-this-mentality-has-been-around-since-the-dawn-of-time” argument, I’ve got news for you. Research has shown that “stimulating the same brain site activated aggression or submissiveness, depending on the animal’s level of social power” (18, emphasis added).

So don’t blame it all on the human brain. Research shows that, if we want to regulate aggression by biochemical or neurological means, we still need to examine how it is shaped and how it operates “in the context of power relations and other social determinants of aggression” (18).

Perhaps more importantly, Bandura insists that we must regard morality as a more complex operation than we typically do. As he points out, “Higher levels of moral ‘maturity’ do not necessarily foretell stronger commitment to humane conduct. This is because justifications, whatever their level, can be applied in the service of harmful activities as well as benevolent ones” (25). As we all know, context is important: soldiers in battle override their personal moral standard not to kill because they justify what would otherwise be considered a harmful activity as a potentially “benevolent” one (by adopting the moral reasoning for a “just cause” or a “just war,” for example). As a result, we do not morally evaluate all acts of killing in the same way. But at the same time, when a drug dealer refers to his “soldiers” and we characterize their acts of murder as components of an ongoing “war,” for example, we are using language to sanitize—if not legitimate—behavior that does not align with our stated moral values. (The drug dealer and his flunkies aren’t actually “soldiers” at all and their “war” is not only unjustified, but morally unsanctioned and ultimately illegal.)

Which brings me to the point that Bandura makes that I find most compelling, as an English professor and general wordsmith. According to Bandura, “individuals can exercise some measure of control over how situations influence them and how they shape their situations” (45)—in particular, Bandura argues, “People do not usually engage in harmful conduct until they have justified to themselves the morality of their actions” (49). In short, we can strive to cultivate a mindset that helps to prevent moral disengagement.

One of the most potent ways of doing this is by looking at how we speak to—and about—others that we are inclined to dislike, disagree with, or otherwise condemn. As Bandura points out, the increasing predominance of online communications has “ushered in a ubiquitous vehicle for disengaging moral self-sanctions from transgressive conduct” (68). A “highly decentralized system that defies regulation,” the internet is a place where “Anybody can get into the act, and nobody is in charge” (68).

Except that, ultimately, we are in charge. If nothing else, when we go online, we can strive to ensure that our own moral self-sanctions continue to operate. If I would hesitate to look someone in the eye and call him/her, for example, a “friggin’ orange Cheeto who spews nothing but lies and hate” or a “pathetic little snowflake who needs to grow up and stop expecting mommy to change their diaper,” I can refrain from typing it on Twitter or Facebook or in the comments section on someone’s webpage.

Because, let’s be honest, most of us will self-censor (to an extent) when face-to-face with someone we strongly dislike or vehemently disagree with, because we know that such behavior is potentially rude and unkind and may make us look bad in the eyes of others. But, as Bandura acknowledges, “Cloaked in the anonymity afforded by the Internet, people say cruel things that they would never dare say publicly” (39). The problem, however, is that over time, if you say enough cruel things anonymously or repeatedly justify your own contempt for others who disagree with you (because even if you don’t say it out loud or type it online, odds are, you’re saying it to yourself), your moral self-sanctions begin to weaken with respect to the individuals or groups that you’re targeting.

Keep it up, and you’ll start speaking—and behaving—in ways that you would never condone if you were morally “engaged” with the humanity of those around you, because you’ll cease to see the objects of your contempt as “human” in the same way that you yourself are. You’ll psychologically justify a lack of compassion, the practice of overt aggression, and a mindset that sees your victims as always “guilty,” never potentially “innocent.” And often without even realizing that this is what is happening to you.

This is not to say that we should all just agree with, accept, or normalize words and behavior that we fundamentally oppose.
And this is certainly not about silencing or circumscribing the voices of others. It’s not about “going along to get along”—a phrasing which itself seeks to “sanitize” unacceptable or potentially immoral behavior.

What I’m suggesting is that empirical psychological research has shown that it is all too easy to get used to stereotyping and dehumanizing people if you never actually have to see the people you’re “talking” to or see the effect of your words. And if the algorithms that structure your social media feeds lead you to believe that “everyone” does it and that “most people” agree with you, you’ll be less likely to perceive your words and actions as unkind or immoral or “wrong.” As Bandura points out, the “unknowns” that go hand-in-hand with online communications “make it easy to persuade oneself that one’s conduct caused little or no harm and that one is merely exercising the constitutionally protected right of free speech” (69).

So what can the morally engaged do, in times of collective crisis, when everyone has gotten into the act but nobody seems to be in charge? 

Well, for starters, we can remember that, although behavioral psychology often focuses on what used to be labeled “man’s cruelty to man,” there is “equally striking evidence that most people refuse to behave cruelly toward humanized others under strong authoritarian commands, and when they have to inflict pain directly rather than remotely” (91).

The key phrase in that statement, of course, is “humanized others.” Not “Cheetos” or “snowflakes” or “deplorables” or “libtards” or “Trumpsters.”

When typing hostile comments or aggressively “debating” on social media, maybe we shouldn’t imagine ourselves saying these things publicly to someone, since moral disengagement can quickly enable us to tell ourselves that the objects of our scorn “started it” and/or “deserve it,” and since we’ve all been raised on the very (VERY) flawed maxim that “sticks and stones may break our bones but names will never hurt us.” Instead, we should imagine ourselves walking up to a total stranger on the street and repeatedly punching or kicking him/her because we think we “should,” since something about them has told us that we “know” what they’re all about.

And, when or if that person begins to scream or kick back, we can imagine ourselves simply laughing gleefully as we continue to punch and kick them, while others crowd around (and maybe join in too). When we’re done, we can just imagine ourselves walking away to go get a coffee, comfortable in the idea that there’s been “no harm done.” And the next day, we’ll do it again—and maybe even look for the person we did it to the day before.

That’s the scenario of moral disengagement.

By contrast, Bandura notes, “The affirmation of common humanity can bring out the best in others” (91). This doesn’t mean we insist that we’re “all the same,” somehow. It simply means that we realize that we’re human beings who often disagree, that we can control our own levels of moral engagement (or disengagement), and that social forces and the language we use to talk to and about others have not only a measurable, but also a profound effect on our own levels of aggression and moral engagement.

If, instead of labeling someone, we take a moment to sympathetically think about what the world might look like from that person’s perspective—even if it is a viewpoint we cannot possibly agree with, support, or condone—we strengthen and affirm our own moral functioning. 

Empathy is not a sign of stupidity, weakness, indecisiveness, or capitulation to injustice. It’s an exercise in moral reasoning that strengthens our own moral fiber and the integrity of our moral musculature.

And, believe it or not—and despite what may seem to be overwhelming evidence to the contrary—we really do change the world a little whenever we practice empathy.

Because, as Bandura points out, “vicarious influence”—that is, simply watching someone else behave kindly and morally—helps us to “acquire lasting attitudes, emotional reactions, and behavior proclivities toward persons, places, or things that have been associated with the emotional experiences of others” (93). In these moments, we “make what happens to others predictive of what might happen to oneself” (93).

“I’m not going to say that to this person because that could be me.” “I’m not going to say that because I’ve been on the receiving end of comments like that and I know they stay with you and hurt.” “If someone called me a stupid snowflake or a paunchy Cheeto or deplorable, well, gosh, I’d be kind of upset about that.” “She can’t help who her father is.” “I’m just not going to shop there anymore.”

There are plenty of ways to make our voices heard—and our strength and integrity and moral convictions felt—without resorting to the dark side of anger and the language of dehumanization and cruelty.

If no one is in charge, then we each need to be in charge—of ourselves.

If we want to be morally engaged agents of change, I think we need to resist the constant temptation to succumb to the social pressures and the language of moral disengagement.

And yes, we’ll all slip up from time to time. We’re only human.

And that is precisely the point: we’re all—and only—human.
