Chris Mooney had a great piece at Mother Jones recently that has been making the rounds. The title is “The Science of Why We Don’t Believe in Science” and it’s a good primer on some of the literature on how we rationalize to protect our biases and more generally our worldview. If you haven’t read it yet I highly recommend it. Here’s the gist:
Consider a person who has heard about a scientific discovery that deeply challenges her belief in divine creation—a new hominid, say, that confirms our evolutionary origins. What happens next, explains political scientist Charles Taber of Stony Brook University, is a subconscious negative response to the new information—and that response, in turn, guides the type of memories and associations formed in the conscious mind. “They retrieve thoughts that are consistent with their previous beliefs,” says Taber, “and that will lead them to build an argument and challenge what they’re hearing.”
In other words, when we think we’re reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt: We may think we’re being scientists, but we’re actually being lawyers (PDF). Our “reasoning” is a means to a predetermined end—winning our “case”—and is shot through with biases. They include “confirmation bias,” in which we give greater heed to evidence and arguments that bolster our beliefs, and “disconfirmation bias,” in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.
This is related to the concept of “sacred beliefs” that I’ve been harping on lately. The general point I want to make is that the problem Mooney sketches out can be viewed as a challenge that media can help overcome. What if the media you’re consuming knew, from your history and profile, that you had certain biases, and therefore presented information in a way that made it easier for you to overcome them?
There are clearly a number of challenges here. Determining bias is tricky in the first place, because it has to be done in reference to some “truth,” which, given the nature of the problem, is likely controversial. And even once that is done, there would need to be a way to measure progress in overcoming biases. But we take for granted that digital media offers the opportunity to design experiences that are customized and interactive in a way newspapers and other “old media” are not. Why not focus that personalization on helping us think more rationally?