A limited version of objectivity worth defending

Objectivity was a major topic at the Nieman Foundation’s 80th anniversary event this weekend, especially during a panel on the line between activism and journalism. Nieman Reports has a new(ish) article on that subject, too. “Impartiality,” “fairness,” and “accuracy” all came up as possible replacements for “objectivity.” The article and the event together raised a lot of interesting questions, most of which I won’t even try to address.

I want to focus more narrowly, offering a limited defense of a certain kind of objectivity. Here’s a great quote from Harvard’s Yochai Benkler, from the Nieman Reports piece:

“Professional journalism needs to shift away from the way in which it performs objectivity. The critical move needs to be from objectivity as neutrality to objectivity as truth-seeking. That’s how you avoid false equivalencies. In a propaganda-rich system, to be neutral is to be complicit.”

“Truth” can mean many things, so I’ll narrow it even further: from objectivity as neutrality to objectivity as empirical truth-seeking.

The first advantage of objectivity as the search for empirical truth is that it flat out doesn’t apply to some key journalistic questions to which “objectivity” was offered as an answer. What stories should a newspaper cover? That just plainly isn’t an empirical question; there is no “objective” answer in the sense of objectivity as empirical truth-seeking.

A newspaper that tries to remain “neutral” in what it chooses to cover might opt to defer to other institutions like political parties to set the agenda. Claiming that this strategy is “objective” is nonsensical and harmful. That doesn’t mean “neutrality” can’t ever be defensible. A trade publication might look to trends and attention within the industry it covers to decide what it should report on. Claiming that this is being “objective” is deeply misguided, but adopting this neutral posture might make sense for the business.

Civic journalism can do better. Decisions like what to cover depend on values, and the best journalistic institutions won’t simply punt on questions of values in order to maintain some appearance of neutrality.

But those publications can still aim for “objectivity” in the sense of empirical truth-seeking, and I’m partial to that term over either “fairness” or “accuracy.” Fairness is an important value, especially for journalism, but it doesn’t proceed from the search for empirical truth. Accuracy doesn’t have that problem and so is closer, but the word can be misconstrued in ways that let journalists off the hook. If you write about a thorny empirical topic like climate change or fiscal policy and faithfully report everyone’s opinions, you’ve in one sense accurately described the debate. But you may not be helping readers understand the truth.

Objectivity remains, in my view, the best word for conveying a commitment to the search for empirical truth — particularly in areas where that truth is more complicated than straightforward matters of fact. Objectivity is not an appropriate answer to many of journalism’s toughest questions, but, understood narrowly, it can still be useful.

UPDATE: More from Benkler in a Q&A with Boston Review. This was interesting, on why objectivity-as-neutrality works less well than it once did:

Journalistic core practices have never been perfect but, broadly speaking, they have worked reasonably well. That is largely because, until recently, both political parties in the United States and the major actors—corporations, unions, nonprofits—more or less complied with a set of elite norms about how much you could attack basic foundational facts, how much you could fabricate. This meant that the model of journalistic objectivity and balance—being neutral and reporting on both sides—was not systematically biased in favor of one major party or the other. It reflected, more or less, the elite consensus range of views. Trust in media largely oscillated with the party in power: critical coverage meant that if your party was in power, your trust in journalism declined, and then rebounded when the other party took power.

Examples of how media could help overcome bias

I have a piece up at The Atlantic (went up Friday) titled “The Future of Media Bias” that I hope you’ll read. I suppose the title is deliberately misleading, since the topic isn’t media bias in the typical sense. Here’s the premise:

Context can affect bias, and on the Web — if I can riff on Lessig — code is context. So why not design media that accounts for the user’s biases and helps him overcome them?

Head over to The Atlantic to read it. In the meantime, I want to expand a bit on some of my ideas.

1) This is not just about pop-up ads. The conceit of the post is visiting a conservative friend’s site and being hit with red-meat pop-ups that act as priming mechanisms. But that was just a way of introducing my point. (Evidence from comments and Twitter suggests this may have distracted some.) So while pop-ups can illustrate the above premise, the premise is in no way restricted to the impacts of pop-ups, either as they exist in practice to sell ads or as they might be used in theory.

2) More on self-affirmation. I might have been clearer on how self-affirmation exercises work, since they were not described in detail in either paper I referenced. Here’s how I understand them: you’re asked to select a value that is important to your self-worth – maybe something like honesty – and then you write a few sentences about how you live by that value. Writing out a few sentences about what an honest and therefore valuable person you are makes you less worried about information threatening your self-worth.

I want to address a few potential objections to embedding an exercise like this in media. One might argue that no one would complete the exercise (I’m imagining it as a pop-up right now). Perhaps. But you could incentivize it: anyone who completes it might get their comments displayed higher, or something like that, with the incentive built into a community reputation system. A second objection is that maybe you could get people to complete it once, but it’s impractical to think anyone would do it before every article. Fair point. But perhaps you only need people to do it once, after which it’s displayed alongside or above the content to prime the reader. Finally, I want to note that this is just one example, and I don’t think my argument rides too much on it. I used it because a) there was lots of good research behind it and b) it fit nicely with the pop-up conceit of the post.
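To make the incentive idea concrete, here’s a minimal sketch of how a comment-ranking function might reward users who have completed the one-time exercise. Everything here is hypothetical – the field names, the boost factor, and the idea that a simple score multiplier is the right mechanism are all my invention for illustration:

```python
# Hypothetical: users who completed the self-affirmation exercise get a
# modest visibility boost when comments are ranked. The 1.25 multiplier
# is arbitrary; a real reputation system would tune this carefully.
AFFIRMATION_BOOST = 1.25

def rank_comments(comments):
    """Sort comments by score, boosting affirmed users' comments."""
    def effective_score(comment):
        boost = AFFIRMATION_BOOST if comment["user_affirmed"] else 1.0
        return comment["score"] * boost
    return sorted(comments, key=effective_score, reverse=True)
```

With this, a comment scored 9 from an affirmed user (effective 11.25) would outrank a comment scored 10 from a user who skipped the exercise.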

3) More examples. One paper I referenced on global warming suggests that the headline can affect susceptibility to confirmation/disconfirmation biases. So what if the headline changed depending on the user’s biases? This would be tricky in various ways, but it’s hardly inconceivable. In fact, I wish I’d mentioned it, since in some ways it seems more practical than the self-affirmation exercises. It would, however, introduce a lot of new difficulties into the headline-writing process.
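Mechanically, the adaptive-headline idea is simple to sketch. The stance labels, the example headlines, and the notion that a site could infer a reader’s stance at all are assumptions of mine, not anything from the research:

```python
# Hypothetical: serve a headline variant matched to the reader's inferred
# stance, falling back to a neutral framing when the stance is unknown.
HEADLINES = {
    "skeptical": "Scientists Debate How Best to Measure Warming Trends",
    "accepting": "New Data Sharpen Estimates of Warming Trends",
    "unknown":   "What the Latest Climate Data Show",
}

def pick_headline(inferred_stance):
    """Return the variant for this stance, or the neutral fallback."""
    return HEADLINES.get(inferred_stance, HEADLINES["unknown"])
```

The hard part, of course, isn’t the lookup; it’s writing multiple honest variants per story and inferring the stance responsibly.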

Another thing I might have mentioned is the ordering of content. Imagine you’re looking at a Room for Debate at NYT. Which post should you see first? In the course of researching the Atlantic piece, I came across some evidence that the order in which you receive information matters (with the first piece privileged), but I’m having trouble finding where I saw that now. And it’s not obvious that that kind of effect would persist for political information. Still, there may well be room to explore ordering as a mechanism for dealing with bias.

Finally – and at this point I’m working off of no research and just thinking out loud – what if you established the author’s credibility by showing the work the user was most likely to agree with, in cases of disconfirmation bias (and the reverse in confirmation bias cases)? So, say I’m reading about climate change and you knew I’d be biased against evidence for it. But the author making the case for that evidence wrote something last week that I do agree with, that does fit my worldview. What if a teaser for that piece was displayed alongside the global warming content? Would that help?
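Since I’m thinking out loud anyway, here’s one way the teaser idea could be sketched. The agreement scores would have to come from some recommender-style model; the function, the threshold, and the sample article titles are all invented:

```python
# Hypothetical: next to a piece the reader is likely to resist, surface
# the same author's past article the reader most agrees with, as a
# credibility bridge. Agreement scores (0.0-1.0) are assumed inputs.
def pick_teaser(past_articles, reader_agreement, threshold=0.5):
    """Return the past article the reader most agrees with, or None."""
    if not past_articles:
        return None
    best = max(past_articles, key=lambda a: reader_agreement.get(a, 0.0))
    return best if reader_agreement.get(best, 0.0) > threshold else None
```

The threshold matters: showing a teaser the reader only weakly agrees with would presumably do little to establish credibility, so it may be better to show nothing.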

I’ve also wondered if asking users to solve fairly simple math problems would prime them to think more rationally, but again, that’s not anything based on research; just a thought.

So that’s it. A few clarifications and some extra thoughts. My hope for this piece is to inject the basic idea into the dialogue, so that researchers start to think of media as an avenue for testing their theories, and so that designers, coders, journalists, and others start thinking of this research as input for innovation in how they create new media.

UPDATE: One more cool one I forgot to mention: there’s some evidence that presenting information graphically makes a point more forcefully, such that it would take too much mental energy to rationalize around it. You can read more about that here. This column in the Boston Globe refers to a study concluding – in the columnist’s words – that “people will actually update their beliefs if you hit them ‘between the eyes’ with bluntly presented, objective facts that contradict their preconceived ideas.” This strikes me as along the same lines as the graph experiment, and these are things to keep in mind as well.

Exposing sacred arguments

Moral psychologist Jonathan Haidt gave a talk in February arguing that the social psychology field was a “moral community” by virtue of its political liberalism, and that this was compromising its ability to do good science. I want to use one piece of his argument as a jumping off point to discuss what I see as one of the biggest obstacles to productive public discussion. Haidt:

Sacredness is a central and subtle concept in sociology and anthropology, but we can get a simple working definition of it from Phil Tetlock [a social psychologist at the University of Pennsylvania]. Tetlock defines a sacred value as “any value that a moral community implicitly or explicitly treats as possessing infinite or transcendental significance …” If something is a sacred value, you can’t make utilitarian tradeoffs; you can’t think in a utilitarian way. You can’t sell a little piece of it for a lot of money, for example. Sacredness precludes tradeoffs. When sacred values are threatened, we turn into “intuitive theologians.” That is, we use our reasoning not to find the truth, but to find ways to defend what we hold sacred…

…Sacralizing distorts thinking. These distortions are easy for outsiders to see, but they are invisible to those inside the force field.

For the most part there’s nothing wrong with sacredness, per se. The problem arises when the sacred principle is challenged by someone outside the moral community. As Haidt notes, the result is that reasoning comes to the aid of justifying a principle, and that leads to sloppy arguments. If your commitment to the principle of nonviolence is challenged, for instance, you may start arguing about the ineffectiveness of military interventions. But what’s really driving that argument isn’t the facts; it’s the desire to defend a principle that in your moral vision really doesn’t even need defending. If you’re a pacifist, that’s fine. What’s not fine is marshalling weak arguments when a sacred view is challenged.

Now in practice my guess is that few things are held as entirely sacred, but many things are held nearly sacred. By that I mean that for most people, few beliefs are beyond any tradeoff, but quite a few principles are sacred enough that an exceptionally high bar must be cleared before they’re willing to start trading them away.

There’s been some good back-and-forth in the libertarian blogosphere recently on the extent to which policy differences between liberals and libertarians are caused by different opinions on empirical matters, versus different values or principles. Ilya Somin at Volokh Conspiracy is thinking along the same lines as I am, writing:

Within political philosophy, many scholars are either pure utilitarian consequentialists (thinkers who believe that we are justified in doing whatever it takes to maximize happiness) or pure deontologists (people who argue that we must respect certain rights absolutely, regardless of consequences)… Outside philosophy departments, however, few people endorse either of these positions.

So sacredness in practice is probably a matter of extent. But that does nothing to detract from its importance in public debate. If someone is arguing in favor of a principle they hold sacred, I want to know. If you’ve written an op-ed detailing all the reasons military intervention in Libya would be ill-advised, the fact that you’re a pacifist – that nonviolence is a sacred principle for you – is extremely relevant.

I see the identification of sacredness as a crucial challenge in the public sphere, and therefore a crucial challenge within media. As I’ve mentioned before, there’s lots of talk about the importance of transparency in the brave new world of online media, and I’m in favor of that. But transparency means a lot of things (again, as I’ve discussed before). It’s easy to say “I’m a liberal, I generally favor x, y and z and am a fan of these thinkers or politicians. I voted for so-and-so for president.” That’s one kind of transparency. But it’s a very thin transparency. I’d love to see some media experiments that go further and try to identify sacred principles. Let’s play around with ways of telling me the author is a pacifist.

This is a hard problem because most of us have a rough time identifying what we consider sacred. As Haidt notes, it’s often obvious only to outsiders. And once extents are thrown into the mix, things get even messier. In a way, the blogosphere offers a rudimentary partial fix just by removing word and page limits: when there’s no limit on length, you can talk endlessly about an author’s underlying principles, as the libertarian discussion makes clear. But I think we can do better. I don’t have many good specifics on how just yet, but it’s something I think about. Ideas?