From ScienceWriters: It’s all in your head, or maybe not

Scholarly Pursuits: Academic research relevant to the workaday world of science writing

By Ben Carollo and Rick Borchelt

People make judgments about science based on many factors, and many of these factors may be well outside of our control.

As readers of this column likely experience on a regular basis, the way other people perceive our work can be quite context dependent. We are particularly sensitive to contextual variables like our audience's scientific literacy and socioeconomic background. When it comes to science communication, however, we probably won't get off the hook looking at things so simply (as if those were simple tasks in and of themselves). This issue we feature three pieces that reinforce the importance of context in science communication.

Kahneman, Daniel. Two Systems in the Mind. Bulletin of the American Academy of Arts & Sciences 65(2) (2012): 55-59.

In this piece, Kahneman focuses on the way in which humans process thoughts and interact with the world. This work is broadly interesting, but we think it has particularly important ramifications for science communication — implications that become even clearer when viewed through the lens of the other featured pieces. Kahneman describes two kinds of thinking — intuition and computation (referred to as System 1 and System 2, respectively). We will start with System 2, which reflects higher-order thinking skills. Finding the answer to a math problem like 245 × 587 requires that you stop other thought processes and perform a computation. This level of effort is characteristic of System 2 thinking and would also come into play if you needed to do something like remember a new 10-digit telephone number. Attention is a limited resource, and being asked to perform multiple complex tasks at once will usually impair your ability to keep performing all of them. For instance, you are unlikely to recall the new telephone number somebody just told you if you are then asked to solve the complex math problem.

The defining characteristic of System 1, or intuitive thinking, is automaticity. It does not require that you do anything; it just happens to you — for example, when we judge a person's mood from a facial expression. It also happens when we develop an intuitive expertise: a chess master can look at a chess board and instantly know the right next move. Likewise, most people can look at the math problem 2+2 and intuitively know that the answer is 4. This happens because System 1 is a repository of all of the information that we have accumulated over the years. System 1 is a huge network of ideas, activated by the most minimal of stimuli to prepare us for additional ideas. System 1 is also where context comes into play quite significantly.

System 1 automatically generates causal connections between the things we experience. That is, we automatically develop stories in response to stimuli that help us understand what is happening. System 1 also seeks to suppress ambiguity and will draw on whatever context exists in one's knowledge repository to create a coherent background story. This is great when we have accurate contextual information, but what will likely strike most readers of this column as disturbing is that System 1 will still generate a story in the absence of expertise on a matter. Kahneman describes this process as "judgment by heuristics," whereby we answer difficult questions by substituting an easier, seemingly related question. We are generally unaware that we do this and consequently unaware that there could be an alternative, possibly more accurate, narrative.

We expect that this will make most of you reflect deeply about your audience and the approaches you take to engage them about science issues: What is my audience's preexisting context? Is it the right context? Can I create enough context in 140 characters so that my audience does not jump to an erroneous conclusion? The answer is probably "no" to all of those questions, unfortunately, which underscores the need to be careful about how we engage people in discussions about science and to continue creating new, innovative approaches that can provide the appropriate context for general audiences to develop accurate personal narratives about science.

Hanson, Valerie. Amidst Nanotechnology’s Molecular Landscapes: The Changing Trope of Subvisible Worlds. Science Communication, published online 19 May 2011. (Accessed online 2/10/12 at http://scx.sagepub.com/content/34/1/57)

One very important contextual cue is an image. In our experience, science communicators are often strictest about whether a particular image works with a story. Even though the general observer would never know the difference, we never want to put a picture of an osteosarcoma cell in a story about melanoma, on the off chance that a pathologist will see the image and cry bloody murder. Science is a field where accuracy is important, and our credibility as communicators depends on our ability to represent the science accurately. It turns out, however, that there may be broader implications for choosing one scientific image over another.

In this paper, Hanson explores the metaphors associated with nanoscale phenomena and compares them to the metaphors associated with simple microscopy. One argument of the paper is that the images used to represent nanoscale phenomena meaningfully change the perspective people develop on the nanoscale world. In particular, the paper notes that the visualization techniques used for nanoscale images are more closely associated with familiar images of "participatory" worlds like computer-generated graphics or virtual reality programs. Hanson notes that this promotes a way of understanding the nanoscale that is quite different from how we understand the microscopic world.

By creating images that are reminiscent of familiar interactive environments, and often explicitly noting when artistic rendering has taken place, these images reinforce how nanoscale worlds are not observed or discovered like microscopic worlds, but are created through human manipulation.

The context of a built world provided in nanoscale images is subtle and arose without explicit intent. Yet these subtle cues reinforce a critical aspect of nanotechnology. Viewed through the System 1 framework Kahneman describes, this kind of contextual cue is critical to understanding an advanced concept. We suspect that image cues alone will not lead to a complete understanding of nanotechnology and the nanoscale environment, but this additional detail clarifies the ambiguity inherent in a new, advanced concept.

O’Brien, Timothy L. Scientific authority in policy contexts: public attitudes about environmental scientists, medical researchers, and economists. Public Understanding of Science, published online 22 February 2012. (Accessed online 2/23/12 at http://pus.sagepub.com/content/early/2012/02/22/0963662511435054)

The final article we’re highlighting in this column has some fascinating implications. The statistical analysis underlying the work is complex, but the observed outcomes are fairly simple. Using data from the U.S. General Social Survey, the author explores several variables that contribute to an individual’s feelings about how much influence scientists should have over public policy decisions.

The paper notes that a majority of adults support some level of reliance on scientific expertise in political decision making. However, there is significant variation in the extent to which people believe scientists should influence policy. In the United States this plays out in a very complicated environment, where policy makers must decide whether to rely on various scientific opinions based not only on their own personal beliefs but also on the beliefs they perceive their constituents to hold. Accordingly, scientific authority in this context is tied to a knowledgeable public that is predisposed to accept technocratic authority.

The paper investigates three variables for a link to the level of policy influence people feel scientists should have: the extent to which one feels that a scientist (1) is knowledgeable about the subject at hand, (2) has the national interest in mind as opposed to personal interests, and (3) is in agreement with the broader scientific community on the topic. Not surprisingly, all three variables predict whether an individual supports scientist involvement in the policy process. The strongest predictor, however, is whether an individual believes that the scientist has the national interest in mind. This is somewhat concerning given recent polls showing dipping levels of trust in scientists' ability to put public interests ahead of their own.

If we apply the model Kahneman describes to this situation, we gain some additional insight. Kahneman notes that non-experts substitute easy questions for difficult ones. Making a judgment about the level of influence scientists should have in the policy process is complicated and requires some expertise to answer. Answers to this question are influenced by personal background, and respondents likely used some simpler surrogate question to come to a conclusion. The data in this paper suggest that one of the most important surrogates people use for answering the question about scientists' influence in policy making is whether they feel that scientists have the national interest in mind.

An additional insight Kahneman shares is that people are very good at remembering agents and what they do. We create mental lists of agents and assign certain attributes to the people in these roles. Scientists occupy a very specific role in the minds of most people, one that may not be consistent with participating in the policy process. With all of this in mind, there are a few potential implications for science communicators. First, we need to be particularly mindful of how we describe the role of scientists and how that description may conflict with historically formed context. Second, this reinforces the importance of building public trust in scientists and the need to engage people in meaningful ways about scientists' motivations. These additional pieces of context may make a significant difference in the collective mind of your audience, and color the outcomes of many strategies for engaging scientists with policymakers and the public.

Ben Carollo leads the issues analysis and response team at the National Cancer Institute at NIH. Rick Borchelt is special assistant for public affairs to the director at the National Cancer Institute at NIH.

Scholarly Pursuits features articles from journals produced in the United States and abroad. If you read an article you think would make a good candidate for this column, send it along to rickb@nasw.org.

June 30, 2012
