In this section, we will explore the established basics of how to communicate effectively with stakeholders when they are upset or angry. This section takes about twelve minutes to read.
Most of us understand how communication works between people when their emotional stress is low. However, the rules of communication change drastically when stakeholders are subjected to the high stress of a risk controversy. Researchers in risk communication have boiled these rules down to five principles:
Principle 1: When under stress, people tend to lose at least some of their ability to comprehend, accept, or recall information.
Risk communicators call this the Mental Noise Theory. “Research shows that mental noise can reduce a person’s ability to process information by more than 80 percent,” risk communications researcher Vincent T. Covello says (Covello, Minamyer and Clayton, 2007). “This is mostly due to trauma and a heightened emotional state during a crisis.”
In the traditional low-stress model, communication begins with a sender who transmits a message to a receiver via a channel. For example, if I am talking to you, then I (the sender) am sending a message to you (the receiver) through the air between us (the channel). If you are reading The New York Times, then the reporter (sender) is sending a message to you (the receiver) via newsprint (the channel).
Sometimes, a message is distorted by noise. For example, if you are watching television during a thunderstorm, the signal may cut in and out. This “noise” interferes with the message that the network (the sender) is attempting to send to you (the receiver).
However, noise is not always physical. Sometimes noise is mental.
Let’s say that I want to send a message to you. I choose to do this in person, verbally. If I present my message in a calm tone, and I am able to attract your interest in what I have to say, then odds are good that you will receive this message with at least a moderate degree of accuracy.
However, if I put a loaded gun to your head while I deliver my message, what are the odds that you will accurately remember my message? Poor, at best. That’s because the loaded gun will probably raise your stress levels and thus create “mental noise” that interferes with your brain’s ability to comprehend what I’m saying. In essence, you have lost your capacity to trust me or my message.
High stress causes stakeholders to lose much of their ability to process complex messages. The mental noise created by the stress interferes with comprehension of messages; the higher the stress, the lower the comprehension. According to Covello’s research, here’s what we can expect from our audiences in low vs. high stress situations (see Fig. 1):
(Fig. 1) Sources: Covello (2003); NCFPD (2007; 2016)
As a result, in any high-stress situation, our messages must be simple. They must be “dumbed down” to their most fundamental levels. Why? Because if we present a complex message to an audience that is under high levels of stress, our message will get lost in the mental noise. Indeed, if our subject-matter experts complain, “You are oversimplifying the message,” then we probably are on the right track.
Principle 2: People under high stress tend to focus their attention less on the positive and more on the negative.
There’s a lopsided relationship between how people under stress react to good news and to bad news, as well as to positive words and to negative words. Covello calls this the Negative Dominance Theory, which is based on research showing that one negative word or message carries roughly the weight of three positive words or messages.
When delivering bad news, an organization should plan to deliver three positive messages for every negative message. This is not to say that the organization should seek to over-reassure its stakeholders about a given risk controversy; as we shall see during our discussion of outrage management, over-reassurance tends to make upset stakeholders even more upset. This principle does mean the organization should emphasize the positive whenever doing so is honest and forthright. This can be a difficult line to walk.
Principle 3: People under high stress place far more value on empathy than expertise.
In a high-stress situation, stakeholders want to know that our organization cares about their outrage, Covello says. Only then will they listen to what we know. Covello calls this the Trust Determination Model. We have to be willing to listen to their concerns. We have to be willing to express our empathy. If we fail to meet this standard, we lose their trust and their willingness to heed our advice.
In a low-stress situation, Covello says, the receiver’s trust in the sender’s message is based largely on the sender’s competence and expertise (2003). But when the stress is increased (such as during an outbreak of foodborne illness), the receiver’s trust in the sender’s message is based on how the receiver perceives the sender’s:
- Listening, caring and empathy (50 percent).
- Honesty and openness (15 to 20 percent).
- Competence and expertise (15 to 20 percent).
- Dedication and commitment (15 to 20 percent).
Now here’s the really bad news, according to Covello: During a high-stress situation, the sender of a risk message has roughly thirty seconds to establish trust with the receiver. As senders, if we fail to demonstrate listening, caring, and empathy in the time it takes to watch a television commercial, we’re in trouble.
This contradicts traditional public relations, which says we should send in the calm, cool, rational expert to speak to the public: a lawyer, an official spokesperson, or a scientist. In a high-stress situation, our audience will not invest its trust in a spokesperson on the basis of competence and expertise alone. The public is looking for someone who can communicate simply and clearly with candor and empathy.
Principle 4: People under high stress tend to trust experts and other authorities who acknowledge the situation’s uncertainty.
Stakeholders will cast a jaundiced eye on messages that are overly reassuring, risk communication consultant Peter M. Sandman says (1993). Over-confidence tends to provoke acrimony with stakeholders and to destroy our organization’s credibility.
Instead, we must learn to talk about risk honestly and forthrightly. We should be ready to clearly distinguish between what we know, what we think we know, and what we do not yet know. This means we must often talk to stakeholders about the risk before we are ready to talk about it. “If you’re going to communicate about risk, you will need the courage to talk when your information is uncertain,” Sandman and psychiatrist Jody Lanard write in a co-authored column about the relationship between risk, communication, and uncertainty (2011, Aug. 14). “And you will need the skill to express uncertainty in ways that guide your audience’s decisions and minimize the cost (to you and your audience both) if you turn out mistaken. The communication of uncertainty is a central risk communication capability.”
Principle 5: People who are under high stress tend to become far more concerned with their outrage than with any hazard.
During a risk controversy, the subject-matter experts want to talk about the potential for death and injury; meanwhile, the non-experts want to talk about their anger or their fear, Covello and Sandman say (2001). This dynamic tends to generate rancor between stakeholders and organizations in high-stress situations. It’s important to understand why.
As early as 1993, psychology professor Paul Slovic identified two specific trends in how American society was dealing with the perception and management of risk.
First, he noted, as life in the United States has grown healthier and safer since the 1970s, the American public has become more concerned with the risks that may affect its health and safety: “We have come to perceive ourselves as increasingly vulnerable to life’s hazards and to believe that our land, air, and water are more contaminated by toxic substances than ever before.”
Second, he noted, the assessment and management of these risks has become a greater source of contention between the technical experts in those risks and the non-expert public: “Polarized views, controversy, and overt conflicts have become more pervasive.”
In frustration, scientists and industrialists often scold the non-expert public for behaving in ways that may appear irrational or ignorant, Slovic says. However, research demonstrates that such criticism is misplaced. The research shows that the non-experts’ reactions to risk are often guided by a general sensitivity to technical, social, and psychological qualities of a given hazard that are missing from the technical models. These qualities may include a lack of certainty, a lack of control, a lack of fairness, or just an overall sense of dread (Slovic, 2000).
Stakeholders and experts rarely share the same concerns about a given risk. With that in mind, Covello and Sandman say (2001), suppose we generate a long list of risks and ask a group of technical experts to rank them from the most dangerous to the least dangerous. Next, suppose we take that same list to a group of non-experts and ask them to rank the same risks from the most upsetting to the least upsetting.
If we compare those two versions of the list, we will find a statistical correlation of about 0.2.
“There is virtually no correlation between the ranking of hazards according to statistics on expected annual mortality and the ranking of the same hazards by how upsetting they are,” Covello and Sandman say. “There are many risks that make people furious even though they cause little harm – and others that kill many, but without making anybody mad.”
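The 0.2 figure describes a rank correlation between the two orderings. As a rough sketch of how such a number is computed, the following applies Spearman’s rank-correlation formula to invented rankings; the risk names and ranks are hypothetical illustrations, not Covello and Sandman’s actual survey data.

```python
# Hypothetical illustration: how weakly two rankings of the same risks
# can agree. All risk names and rank values below are invented for
# demonstration; they are not Covello and Sandman's survey results.

def spearman(x_ranks, y_ranks):
    """Spearman rank correlation for two rankings with no ties."""
    n = len(x_ranks)
    d_sq = sum((x - y) ** 2 for x, y in zip(x_ranks, y_ranks))
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

risks       = ["smoking", "driving", "pesticides", "flu", "flying", "nuclear power"]
expert_rank = [1, 2, 3, 4, 5, 6]  # ranked by expected annual mortality
public_rank = [2, 5, 1, 6, 4, 3]  # ranked by how upsetting each risk is

print(round(spearman(expert_rank, public_rank), 2))  # prints 0.2
```

A coefficient of 1.0 would mean the experts and the public ordered the risks identically; a value near 0.2 means the two orderings barely agree at all.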
By taking this insight to its logical conclusion, Sandman arrived at a game-changing realization. Most technical experts consider “risk” to be synonymous with “hazard”; however, Sandman says (1993), the 0.2 correlation indicates that non-experts are clearly looking at “risk” very differently.
Recognizing this, Sandman redefined risk. He took what the technical experts call “risk” – that is, anything that presents a tangible threat to life, health, safety or property – and renamed it “hazard.” Then, he took what the non-experts call “risk” – that is, anything that communities tend to find upsetting – and re-named it “outrage.” Sandman then created a formula to express the overall concept: Risk = Hazard + Outrage.
“The implication for those who are communicating risk (whether in care, consensus, or crisis communication) is that a presentation of the technical facts will not necessarily give most audiences the information they want,” Lundgren and McMakin say (2013). “Indeed, the audience will probably not ever listen to those facts until their concerns and feelings have been addressed.”
Sandman (1993) recognizes twelve primary factors that tend to trigger outrage among communities of non-experts. He frames each factor as a question, “Is it X or is it Y?” If the answer is “Y,” then the risk is likely to include high outrage:
- Voluntary or coerced? When a community believes it has no real choice in accepting a risk, it tends to become outraged.
- Natural or industrial? A community can accept an act of God far more readily than a man-made threat.
- Familiar or exotic? If the threat is new to a community’s experience, then it is more likely to provoke outrage.
- Not memorable or memorable? A risk tends to provoke outrage when it is tied to easily recalled images, metaphors, icons, slogans, or nicknames.
- Not dreaded or dreaded? A feeling of disgust or fear will tend to provoke outrage. (Sandman sometimes refers to this as the Yuck Factor.)
- Chronic or catastrophic? A risk that kills or injures many people at once tends to provoke more outrage than one that takes its toll gradually over time. (Sandman uses the example of planes vs. automobiles. Cars kill far more people than planes do, but far more people are scared of flying than of driving.)
- Knowable or unknowable? If a perceived hazard exists beyond the reach of the five human senses, then it tends to trigger outrage.
- Controlled by me or controlled by others? When a community is offered little or no control over a risk, it tends to become outraged.
- Fair or unfair? When a community shoulders most of the risk, but someone else gets all or most of the benefits, the community will tend to become outraged.
- Morally irrelevant or morally relevant? If a community can clearly identify a moral issue in the context of the risk, it tends to become outraged.
- Trustworthy or untrustworthy? When the perceived source of a risk is deceitful or dishonest, the community tends to become outraged.
- Responsive or unresponsive? When the perceived source of a risk fails to respond in a manner the community considers appropriate or effective, then the community tends to become outraged.
Another way to think of these factors, Sandman says (2011), is to divide them into two categories: “safe” and “risky” (Fig. 2). The first item in each row (that is, voluntary, natural, familiar, etc.) represents a “safe” factor when it comes to outrage. These are factors that will leave folks calm, Sandman says, even if they kill them. Each of the second items (that is, coerced, industrial, exotic, etc.) represents a “risky” factor. These are factors that tend to upset people even if they pose no actual threat to their well-being.
Any of these twelve factors may become dominant in a controversy. However, Sandman says (2011), the three factors that most frequently become dominant are a lack of control, a lack of trust, and a lack of response.
(Fig. 2) Source: Sandman (1993)
In addition to the twelve primary factors, Sandman (1993) identifies eight secondary factors.
- Vulnerable populations: The public is more likely to become outraged if a risk affects the elderly, the very young, the sick, the poor, and the otherwise helpless.
- Delayed vs. immediate effects: A risk that lies in wait to strike is more likely to trigger outrage than will an immediate threat.
- Effect on future generations: Stakeholders want to know how a risk might harm their great, great, great grandchildren.
- Identifiability of the victim: Large numbers of faceless victims will trigger less outrage than will a single, easily recognizable victim.
- Reduction of risk: Stakeholders want to eliminate the risk, not merely reduce it, whenever possible.
- Risk-benefit ratio: Stakeholders in general are willing to look at the big picture. If their sacrifice makes sense, they may accept it. If not, watch out.
- Media attention: The media cannot cause stakeholders to become outraged, but they can amplify existing outrage.
- Opportunity for collective action: Stakeholders are far more likely to become outraged if they can identify the chance for effective, collective action, such as calling a neighborhood meeting or marching on city hall.
It is vital to understand that Sandman (2011) considers each of these factors to be a component of risk, and not a misperception of risk. Indeed, he says, this is his “explicit argument”: that outrage is just as real and just as measurable as any hazard. “Social scientists can tell you to within three decimal places the impact of most controversial risks on people’s opinions; no one can tell you to within three decimal places their impact on people’s health,” Sandman says (1993). “So if we are going to get into a competition over which of the two is science, I am in grave danger of winning.”