Beware the platform business

Here’s Ben Parr:

If you want to build a multi-billion dollar business, you can’t just build a product: you need to build a platform.

That was my major takeaway from the news that Spotify is launching an app platform. It’s in its infancy (thus why journalists are rightfully bashing it), but it’s the start of a transformation for Spotify.

In my opinion, the major inflection point for Facebook — the moment it transformed from a million dollar business into a billion dollar one — was May 2007, when it launched the Facebook Platform. When it opened up its APIs to developers, it created an ecosystem that drove up Facebook’s value. The next year, Facebook hit 100 million users and the engagement skyrocketed from there.

The web was a platform too, but a platform built around an open set of standards not controlled by any single corporation. If you want to build a billion dollar corporation you need to build a platform, and you need to own it. But is that really the best outcome for the rest of us?

Willpower and belief

I’ve blogged a bunch now about Roy Baumeister’s work on self-control, including the idea that willpower is finite in the short-term, and is depleted throughout the day as you use it. So I feel compelled to post this NYT op-ed claiming something quite different. I don’t know who’s right, but here’s the gist:

In research that we conducted with the psychologist Veronika Job, we confirmed that willpower can indeed be quite limited — but only if you believe it is. When people believe that willpower is fixed and limited, their willpower is easily depleted. But when people believe that willpower is self-renewing — that when you work hard, you’re energized to work more; that when you’ve resisted one temptation, you can better resist the next one — then people successfully exert more willpower. It turns out that willpower is in your head…

…You may contend that these results show only that some people just happen to have more willpower — and know that they do. But on the contrary, we found that anyone can be prompted to think that willpower is not so limited. When we had people read statements that reminded them of the power of willpower like, “Sometimes, working on a strenuous mental task can make you feel energized for further challenging activities,” they kept on working and performing well with no sign of depletion. They made half as many mistakes on a difficult cognitive task as people who read statements about limited willpower. In another study, they scored 15 percent better on I.Q. problems.

I’ll keep my eyes open for a response to this from Baumeister or his colleagues; let me know if you see one. Meanwhile, this reminded me of a similar phenomenon with respect to IQ:

Yet social psychologists Aronson, Fried, and Good (2001) have developed a possible antidote to stereotype threat. They taught African American and European American college students to think of intelligence as changeable, rather than fixed – a lesson that many psychological studies suggests is true. Students in a control group did not receive this message. Those students who learned about IQ’s malleability improved their grades more than did students who did not receive this message, and also saw academics as more important than did students in the control group. Even more exciting was the finding that Black students benefited more from learning about the malleable nature of intelligence than did White students, showing that this intervention may successfully counteract stereotype threat.

Both of these lines of research suggest that belief matters. Fascinating stuff.

Don’t blog on an empty stomach

(The clip above covers some basics of mental energy and depletion.)

The alternative title for this post was “I’m hungry; you’re wrong.” I’m not sure which is better… In any case, consider this bit from Kahneman:

Resisting this large collection of potential availability biases is possible, but tiresome. You must make the effort to reconsider your intuitions… Maintaining one’s vigilance against biases is a chore — but the chance to avoid a costly mistake is sometimes worth the effort.

Now as I understand it, this is basically a function of self-control. By taxing your brain to counteract biases, you’re drawing on a finite pool of mental energy. We know from studies of willpower that doing so can cause problems. As John Tierney reported in an excellent NYT Magazine piece on decision fatigue:

Decision fatigue helps explain why ordinarily sensible people get angry at colleagues and families, splurge on clothes, buy junk food at the supermarket and can’t resist the dealer’s offer to rustproof their new car. No matter how rational and high-minded you try to be, you can’t make decision after decision without paying a biological price. It’s different from ordinary physical fatigue — you’re not consciously aware of being tired — but you’re low on mental energy.

He also relates a fascinating study of Israeli parole hearings:

There was a pattern to the parole board’s decisions, but it wasn’t related to the men’s ethnic backgrounds, crimes or sentences. It was all about timing, as researchers discovered by analyzing more than 1,100 decisions over the course of a year. Judges, who would hear the prisoners’ appeals and then get advice from the other members of the board, approved parole in about a third of the cases, but the probability of being paroled fluctuated wildly throughout the day. Prisoners who appeared early in the morning received parole about 70 percent of the time, while those who appeared late in the day were paroled less than 10 percent of the time.

It gets more interesting:

As the body uses up glucose, it looks for a quick way to replenish the fuel, leading to a craving for sugar… The benefits of glucose were unmistakable in the study of the Israeli parole board. In midmorning, usually a little before 10:30, the parole board would take a break, and the judges would be served a sandwich and a piece of fruit. The prisoners who appeared just before the break had only about a 20 percent chance of getting parole, but the ones appearing right after had around a 65 percent chance. The odds dropped again as the morning wore on, and prisoners really didn’t want to appear just before lunch: the chance of getting parole at that time was only 10 percent. After lunch it soared up to 60 percent, but only briefly.

So, returning to the Kahneman bit, I wonder if we might observe a similar phenomenon with respect to political bloggers. Would ad hominem attacks follow the same pattern throughout the day? Might bloggers who had just eaten have the mental energy to counter their biases, to treat opponents with respect, etc.? And might that ability be depleted as the time between meals wears on and their mental energy is lowered? This could be tested pretty easily by analyzing the frequency of certain ad hominem clues like, say, the use of the word “idiot”, and then checking frequency against time of day. I’d love to see this data, and not just because I want an excuse to snack while I write.
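The test sketched above could look something like this. A minimal sketch, assuming we already have each post's text and publication timestamp (the marker word list and the sample posts are made up for illustration):

```python
from collections import Counter
from datetime import datetime

# Crude, hypothetical list of ad hominem markers
AD_HOMINEM_WORDS = {"idiot", "moron", "hack"}

def insult_rate_by_hour(posts):
    """posts: iterable of (timestamp, text) pairs.
    Returns {hour of day: ad hominem words per 1,000 words}."""
    words_by_hour = Counter()
    hits_by_hour = Counter()
    for ts, text in posts:
        tokens = [w.strip(".,!?;:").lower() for w in text.split()]
        words_by_hour[ts.hour] += len(tokens)
        hits_by_hour[ts.hour] += sum(t in AD_HOMINEM_WORDS for t in tokens)
    return {h: 1000 * hits_by_hour[h] / words_by_hour[h]
            for h in words_by_hour}

# Toy data standing in for a real blog archive
posts = [
    (datetime(2011, 12, 1, 9, 15), "A thoughtful reply to my critics."),
    (datetime(2011, 12, 1, 16, 40), "Only an idiot would believe this."),
]
rates = insult_rate_by_hour(posts)
```

If the decision-fatigue story holds, the hypothesis is that `rates` would climb as the hours since the last meal accumulate.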

Algorithms and the future of divorce

In Chapter 21 of Thinking, Fast and Slow Daniel Kahneman discusses the frequent superiority of algorithms over intuition. He documents a wide range of studies showing that algorithms tend to beat expert intuition in areas such as medicine, business, career satisfaction and more. In general, the value of algorithms tends to be in “low-validity environments”, which are characterized by “a significant degree of uncertainty and unpredictability.”*

Further, says Kahneman, the algorithms in question need not be complex:

…it is possible to develop useful algorithms without any prior statistical research. Simple equally weighted formulas based on existing statistics or on common sense are often very good predictors of significant outcomes. In a memorable example, Dawes showed that marital stability is well predicted by a formula:

frequency of lovemaking minus frequency of quarrels

You don’t want your result to be a negative number.

Kahneman concludes the chapter with an example of how this might be used practically: hiring someone at work.

A vast amount of research offers a promise: you are much more likely to find the best candidate if you use this procedure than if you do what people normally do in such situations, which is to go into the interview unprepared and to make choices by an overall intuitive judgment such as “I looked into his eyes and liked what I saw.”
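The procedure Kahneman recommends amounts to a simple equally weighted formula: pick a handful of traits relevant to the job, score each candidate on every trait, and rank by the sum. A minimal sketch, where the trait names and ratings are invented for illustration:

```python
# Equally weighted hiring formula in the spirit of Kahneman's suggestion.
# The six traits below are hypothetical examples, not his actual list.
TRAITS = ["technical skill", "reliability", "communication",
          "initiative", "teamwork", "composure"]

def score_candidate(ratings):
    """ratings: {trait: score from 1 to 5}. Returns the equally weighted sum."""
    assert set(ratings) == set(TRAITS), "score every trait, nothing else"
    assert all(1 <= r <= 5 for r in ratings.values())
    return sum(ratings.values())

# Two made-up candidates, each rated 1-5 on every trait
candidates = {
    "A": dict(zip(TRAITS, [4, 3, 5, 2, 4, 3])),
    "B": dict(zip(TRAITS, [3, 5, 4, 4, 4, 4])),
}
best = max(candidates, key=lambda name: score_candidate(candidates[name]))
```

The point of the equal weighting is precisely that it leaves no room for “I looked into his eyes and liked what I saw” to override the scores.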

All of this makes me think of online dating. This is an area where we are transitioning from almost entirely intuition to a mixture of algorithms and intuition. Though algorithms aren’t making any final decisions, they are increasingly playing a major role in shaping people’s dating activity. If Kahneman is right, and if finding a significant other is a “low-validity environment”, will our increased use of algorithms lead to more optimal outcomes? What truly excites me about this is that we should be able to measure it. Of course, doing so will require very careful attention to the various confounding variables, but I can’t help but wonder: will couples that meet online have a lower divorce rate in 20 years than couples that didn’t? Will individuals who spent significant time dating online be less likely to have been divorced than those that never tried it?

*One might reasonably object that this definition stacks the deck against intuition, and I think this aspect of the debate deserved a mention in the chapter. The focus on “low-validity environments” is the focus on areas where intuition is lousy. So how shocking is it that these are cases where other methods do better? And yet, the conclusions here are extremely valuable. Even though we know that these “low-validity” scenarios are tough to predict, we still generally tend to overrate our ability to predict via intuition and underrate the value of simple algorithms. So in the end this caveat – while worth making – doesn’t really take away from Kahneman’s point.

Open source and inequality

I finally got around to this piece at The Atlantic, “Why Workers Are Losing the War Against Machines,” by two MIT professors who’ve written a book on the subject. It’s a good piece focused on three disparities:

1. High-Skilled vs. Low-Skilled Workers

2. Superstars vs. Everyone Else

3. Capital vs. Labor

I want to focus on something the authors mention in #2 and make a quick point:

Technology can convert an ordinary market into one that is characterized by superstars. Before the era of recorded music, the very best singer might have filled a large concert hall but at most would only be able to reach thousands of listeners over the course of a year. Each city might have its own local stars, with a few top performers touring nationally, but even the best singer in the nation could reach only a relatively small fraction of the potential listening audience. Once music could be recorded and distributed at a very low marginal cost, however, a small number of top performers could capture the majority of revenues in every market, from classical music’s Yo-Yo Ma to pop’s Lady Gaga.

Economists Robert Frank and Philip Cook documented how winner-take-all markets have proliferated as technology transformed not only recorded music but also software, drama, sports, and every other industry that can be transmitted as digital bits. This trend has accelerated as more of the economy is based on software, either implicitly or explicitly. As we discussed in our 2008 Harvard Business Review article, digital technologies make it possible to replicate not only bits but also processes. For instance, companies like CVS have embedded processes like prescription drug ordering into their enterprise information systems. Each time CVS makes an improvement, it is propagated across 4,000 stores nationwide, amplifying its value. As a result, the reach and impact of an executive decision, like how to organize a process, is correspondingly larger.

This touches on themes I write about here frequently, and I see it as the link between intellectual property + open source, and inequality. The music example should be familiar to readers of this blog by now. Changing copyright terms could help transform a superstar market for music back into a folk or peer-to-peer market.

The software example is also amenable to more equitable IP approaches. What if the software that CVS used was open source? That would negate the winner-take-all nature of the example. I’m not recommending anything specific here, but just making the point that, given technology’s role in inequality via the creation of superstar markets, open source and intellectual property have to be part of the inequality discussion.

Fight bias with math

I just finished the chapter in Kahneman’s book on reasoning that dealt with “taming intuitive predictions.” Basically, we make predictions that are too extreme, ignoring regression to the mean, assuming the evidence to be stronger than it is, and ignoring other variables through a phenomenon called “intensity matching.” 

Here’s an example (not from the book; made up by me):

Jane is a ferociously hard-working student who always completes her work well ahead of time.

What GPA do you think she graduates college with? Formulate it in your mind, an actual number.

So Kahneman explains “intensity matching” as being able to toggle back and forth intuitively between variables. If it sounds like Jane is in the top 10% in motivation/work ethic, she must be in the top 10% in GPA. And our mind is pretty good at adjusting between those two. I’m going to pick 3.7 as the intuitive GPA number; if yours is different you can substitute it in below.

Kahneman says this is biased because you’re ignoring regression to the mean, which is another way of saying that GPA and work ethic aren’t perfectly correlated. So here’s a model for using Kahneman’s trick to tame your prediction.

GPA = work ethic + other factors

What is the correlation between work ethic and GPA? Let’s guess 0.3 (it can be whatever you think is most accurate).

Now what is the average GPA of college students? Let’s say 2.5 (again, the exact number doesn’t matter).

Here’s Kahneman’s formula for taming your intuitive predictions:

0.3 × (3.7 − 2.5) + 2.5 = statistically reasonable prediction (2.86 with these numbers)

So apply the correlation between GPA and work ethic to the difference between your intuitive prediction and the mean, and then go from the mean in the direction of your intuition by that amount.
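The taming step above is small enough to write as a one-line function. This uses the same illustrative numbers as the post (intuitive guess 3.7, correlation 0.3, mean GPA 2.5), which were chosen for the example rather than taken from real data:

```python
def tame_prediction(intuition, mean, correlation):
    """Shrink an intuitive prediction toward the mean, in proportion to the
    correlation between the evidence and the outcome being predicted."""
    return mean + correlation * (intuition - mean)

# Jane's case from the post
tamed = tame_prediction(intuition=3.7, mean=2.5, correlation=0.3)
# With these numbers the tamed prediction is 2.86: still above average,
# but far less extreme than the intuitive 3.7.
```

Note the two limiting cases: with a correlation of 1 you keep your intuitive guess unchanged, and with a correlation of 0 you just predict the mean.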

I played around with some different examples here because my intuition was grappling with some issues around luck vs. static variables, but those aside, this is a neat way to counter one’s bias in the face of limited information.

I can’t help but wonder, though, if the knowledge that this exercise was designed to counter bias led anyone to avoid or at least temper intensity matching. In other words, what were your intuitions for the GPA she’d have after just reading the description of her hard work? Did the knowledge that you were biased lead you to a lower score than the one I mentioned?

Here’s what I’m getting at… If it’s possible (and this is just me riffing right now) to dial down your biases, consciously or not, when the issue of bias is on your mind, then your intuitions could already be dialed down going into this exercise, at the point of the original GPA intuition, which could ruin the outcome. Put another way, the math above relies on accurate intensity matching, which is itself a bias! Someone who came into the exercise with that bias dialed down might actually end up with a worse prediction if they also applied Kahneman’s suggested process.

Why we need journalists (good ones)

I’m in the middle of Daniel Kahneman’s Thinking, Fast and Slow. From Chapter 16:

Nisbett and Borgida found that when they presented their students with a surprising statistical fact, the students managed to learn nothing at all. But when the students were surprised by individual cases – two nice people who had not helped – they immediately made the generalization and inferred that helping is more difficult than they thought. Nisbett and Borgida summarize the results in a memorable sentence:

Subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular.

Consider this the psychological case for man-on-the-street stories. Humanizing data with individual examples is essential to helping people absorb information.

But journalists aren’t themselves immune to this phenomenon. It’s essential that reporters assess the evidence behind their stories, and consciously try to overcome their bias to react more strongly to individual anecdotes than to data. But if journalists are able to overcome this bias and base their stories on good data, then their ability to apply individual cases to explain larger trends can be a crucial mechanism for informing the public.


Follow the script

I was alerted via Twitter to the NYT Opinion page today, where the descriptions of the columnists’ latest offering amounted to “self-parody”:

[Screenshots of the NYT Opinion columnists’ column blurbs]
You have to admit that this is pretty funny, at least if you’re a regular NYT reader. For my money, this also shows why Kristof is so necessary. As I think Ezra Klein once said, he takes the moral weight of his column inches more seriously than perhaps any writer I know. But yeah, this image is just great.

Why ThinkProgress needs a Technology channel

ThinkProgress is losing one of its major voices – blogger Matt Yglesias – to Slate, which is as good a reason as any for me to lay out an idea I pitched on Twitter a few weeks ago. As ThinkProgress (TP from here on) considers how to make up for the traffic Yglesias will inevitably take with him, it should consider adding a channel dedicated specifically to Technology.

This may end up being a fairly long post, but ultimately what I hope to argue is simple:

1. Technology and technological change are seldom, if ever, politically neutral.
2. Covering technology through the various lenses of the economy, the environment, etc. is necessary but inadequate; technology is its own useful lens for thinking about the world.

So… why does TP need a technology channel?

Technology is changing the nature of jobs and work

Technology is always transforming the way we work. New technologies are constantly displacing workers, but – so the story goes – by enhancing productivity, technology also creates wealth (and jobs) that more than make up for the initial displacement. Assuming that’s the case, it is imperative that society help prepare workers for new jobs, assuage their fears of new technology, and ensure that an adequate safety net does exist. In my view, much of the fear of technology comes from public misunderstanding of the relationship between technology, productivity, and prosperity. That we are currently facing a jobs crisis makes this imperative all the more pressing.

There is, however, another view that has garnered attention as of late. This view holds that technological change is accelerating so quickly that human labor cannot keep up. You can read about it here and here. If this view is right, it’s even more obvious that we need to closely consider our relationship with technology.

Technology is reshaping the public sphere

This is perhaps the most obvious sense in which technology is political, as anyone who cares about what TP names its content channels well knows. Has the internet revolution made the public sphere any more democratic? Here there are competing views. Yochai Benkler presents evidence that it has in The Wealth of Networks. Matthew Hindman makes a compelling case that it has not in The Myth of Digital Democracy. This question matters tremendously, and the way that we build and use technology can have major ramifications for our politics.

There is also the digital divide to consider. As political speech and activism continues to migrate online, it is important to consider who does and does not have access to the digital public sphere. The availability of technology – along with the requisite skills to effectively use it – poses a major justice consideration.

A challenge to the free market paradigm

I’ve written previously that our experience with the internet thus far “invites us to question some of the most basic premises that have led us to organize our society around the market.” Most of us liberals accept the efficacy of markets, provided they are properly regulated and supplemented by a safety net. But this isn’t out of any deep love for capitalism.

What we see when we look at Wikipedia or Linux leads us to directly question the assumption that humans are basically self-interested. Similarly, new models for aggregating information online are relevant to Hayekian justifications built around the difficulties of aggregating preferences.

To be less theoretical: open source and similar models of collaboration offer a production model that in many ways fits better with liberals’ preference for justice and equality. Liberals need to be paying more attention to open source software and similar experiments, and thinking deeply about how such models might be adopted in as-yet-unexplored domains.

Privacy, corporations and user control

Technology now allows both corporations and the government to gain unbelievably detailed knowledge about consumers. Many if not most users have no idea what information they are making available when they surf, communicate and shop online. To make matters more complicated, giving sites access to personal information can in many cases be quite useful. There is an argument for highly targeted ads, for restaurant recommendations based on your social network and current location, etc. All of this makes protecting privacy today that much more difficult.

What rights do we have when we go online? Will the onus to protect privacy fall solely on the user? Or will websites and the corporations behind them bear some of the burden? These questions are fraught with difficulty, but they cannot go unanswered.

Technology is not a black box

My aim so far has been to demonstrate that technology is political in nature. Technology, broadly defined, is the practical application of knowledge, and in that sense it overlaps with nearly everything TP writes about. So why is it necessary to have a separate Technology channel? Why not just cover technology as it relates to the economy, to the environment, to justice, etc.?

I want to give two reasons. The first is a simple case of focus and bandwidth. Everyone is inundated with information and stretched thin these days, and I’m sure the TP writers are no exception. With so much to write about in any of the TP channels, technology issues will necessarily be a mere piece of each one’s focus. But, as we’ve seen, technology is an essential piece of any number of challenges that the world faces.

Perhaps more importantly, a direct focus on technology would help move beyond the notion of technology as a black box, or as a given. Too often, all of us tend to consider technologies only as they exist, rather than as they might exist. Technology is, by definition, created by humans. As such, it is a mistake to ignore the process by which technology comes to exist.

And yet that is what we frequently do. Even the discipline of economics treated technological change as “exogenous” until recently; it was a separate process that just happened. Now, more attention is paid to the circumstances under which technological progress happens, and some of this – like advocating for R&D – fits nicely under discussion of the economy. But a TP Technology channel could spend considerable effort considering not just the use of and access to technology, but debating, informing, and influencing the very process by which technology is created.

Are open software systems better for the public than closed systems? When is the use of digital rights management (DRM) technologies justified? Should the government be using only open source software? Do incentives exist for mobile payment technologies that actually aim to help consumers improve purchasing habits? How can we make it easier for users to recognize and control the extent to which they are tracked online?

To think about these questions from the perspective of technology is to ask two things: how can we design systems that further liberal goals, and what barriers to the creation of such systems might we mitigate through policy or other means? A TP Technology channel would consider technological development as a process to be optimized, rather than just something that happens and then gets put to work.

Why ThinkProgress?

The arguments I’ve made apply broadly to organizations and publications covering politics. I’d like to see Technology called out as a focus at The American Prospect and The Nation too (props to TPM for having a Tech channel). But the rare combination of its youth and prominence makes TP an ideal target. TP functions both as a rapid-response network helping liberals fight back against the right and as a home for up-and-coming liberal writers. It should set an example by making Technology a focus.

The center and right tend to assume technological change will work out for the best, ignoring issues of distribution, access, cost, etc. But considerable portions of the left view technology primarily as a risk or disturbance, rather than an opportunity. In reality, technology is a driver of economic growth, of welfare enhancements, and of sustainability. But it is also a process. Liberals need to engage that process to ensure that it meets our goals. I’d love to see TP give it a shot.