On Wikileaks

I’ve held off posting anything about Wikileaks, as the subject’s complexity is a little daunting.  I still don’t have any polished thoughts, but I’ll offer a few unpolished ones alongside some reading recommendations:

If you haven’t been following this story, check out The Beginner’s Guide to Wikileaks at The Atlantic.

One of the basic aspects of the story I was missing at first was the extent to which Wikileaks worked with media organizations and even governments to redact the documents and decide what to publish.  To learn more about that, read Glenn Greenwald here.

The one question that consistently hurt my ability to think clearly about this story was “Is Wikileaks good or bad?” or, put another way, “Should these cables have been published?”  So my advice is to put that aside for now and focus on a few other interesting aspects of the story, like…

Is Wikileaks a new kind of media organization or a new kind of source? The New York Times treats it as “a source, not a partner”, according to NYT executive editor Bill Keller.  An excellent summary of his comments on Wikileaks is available at the Nieman Lab.  For a different perspective try Matthew Ingram of GigaOM arguing “Like It or Not, WikiLeaks is a Media Organization”. NYT’s David Carr has thoughts here.

Another interesting line of inquiry looks at how governments can exert indirect control over organizations like Wikileaks in cases where they lack the ability to exert direct control.  Henry Farrell at Crooked Timber has a good post on this topic.

Another thing I’ve been pondering is what predispositions predict one’s opinion of Wikileaks.  This post by Tom Slee, which I found via both Clay Shirky and Crooked Timber, puts it this way:

Your answer to “what data should the government make public?” depends not so much on what you think about data, but what you think about the government.

I think he’s only partly right.  What you think about government matters tremendously.  But I wouldn’t downplay data.  I’m finding, in reading and conversations, that what you think about Wikileaks also hinges on what you think about technology.  All else equal, if you’re bullish about technology’s prospects for improving the world, you’re more likely to approve of Wikileaks’ data dump.  Ditto if you’re already sympathetic to hacker culture.  Or if you generally view increased access to information as crucial to improving society.

Put these two items – thoughts on government and thoughts on technology – together and I think it explains much of the disconnect between the standard Washingtonian’s view of Wikileaks and the standard geek view.  The latter is dominated by a combination of liberals and libertarians, both of whom are likely to harbor deep suspicions about the government’s handling of international affairs.  Add to that a predisposition towards technology – as opposed to a view where tech causes as many problems as it solves – and a true disconnect is revealed.  For these two reasons, the geek world has a much stronger bias towards transparency than the beltway world does.

I don’t mean “bias” in a pejorative way, and certainly don’t mean to suggest that one or the other view is closer to being right.  My own sympathies in this case are all over the map.  But I’d love to test my theory.  How much power would questions about Iraq and waterboarding have in predicting sympathy to Wikileaks? I imagine quite a lot.  But what about one’s reaction to a statement like “information wants to be free”? I’d bet that has some predictive power as well.

In closing, in place of any master synthesis or confident opinion, I’ll simply link to Clay Shirky’s post on the topic, which I think lays out the issues nicely.

For more reading, The Atlantic has a terrific roundup of reactions here.  My Delicious links on Wikileaks are here.

Bad arguments against net neutrality

Net neutrality is something where my bias is so clear (in favor) that I try to be extra careful not to stake out a position before I’ve thoroughly researched the issue.  For that reason I’m still not sure what I think about it even in principle, much less the FCC’s recently proposed rules.

So I was interested to see what points libertarian magazine Reason put forth in this anti-net neutrality video:

To start, I want to zero in on this line:

If AT&T DSL blocked your access to Google because they wanted you to use Yahoo, what would you do? Probably cancel your plan and go to a provider that gives you easy access to your favorite sites.

There are multiple things wrong with this statement.

First, this scenario completely ignores the widespread lack of competition between ISPs.  The consumer behavior the video describes only makes sense in the context of competitive internet service.  The libertarian response is that the answer is to reform telecom regulation to foster competition.  But if that’s really your argument, you have to make some reference to it instead of misleading viewers into thinking that competition already exists.  In other words, the fact that the scenario Reason describes is only even possible if significant reforms pass first seems relevant.

Ok, now put that aside for a moment, and imagine enough competition existed for this kind of consumer behavior to be possible.  Is it plausible?  The video uses a nice trick to make us think it is: appealing to our universal love of Google.  So let’s flip it around and try that on for size:

If AT&T DSL blocked your access to Yahoo because they wanted you to use Google, what would you do?  Off the top of my head my answer is “Probably nothing.”

The third misleading thing about that example is the choice between incumbents.  Yahoo or Google?  If you tell me I can’t use Google I know enough about how much I love Google to seek out another provider.

But what about Google vs. the next great search company?  If Google can reach into its deep pockets to ensure its searches are delivered faster, that makes it a lot harder for an emerging company/technology to compete for market share.  New entrants not only lack the deep pockets to pay ISPs, they lack the name recognition required to convince consumers to seek out neutral ISPs.  Even if I would switch ISPs to make sure Google isn’t disadvantaged relative to Yahoo, would I actively switch from an ISP that equally prioritized incumbents in order to access new entrants I’d never heard of?

I don’t consider any of these to be case-closed arguments in favor of net neutrality.  But if these are the best arguments against net neutrality, they’re fairly weak.  The libertarian trump card, played at the end of the video, is unforeseen consequences, and I take that point seriously.  That’s one of the reasons I’ve not yet reached a firm position on net neutrality.

But while I may not have a stance on the issue, I do have a starting point, and it’s this: something incredibly important is at stake here.  Both at the software layer and the content layer we are seeing the rise of a fascinating model of information production.  I’ll once again defer on defining that model, except to say that it is significantly non-commercial.  Non-commercial production is threatened by a non-neutral net, even more so than emergent commercial entities are.  If you care about preserving that non-commercial aspect, if only to learn more about it and to see its full potential, you should care a lot about net neutrality.

My parents’ coffee table, online

Growing up I remember my parents subscribing to three magazines: The New Yorker, The Atlantic and Harper’s.  Over the past couple of weeks I’ve seen updates on how each is faring online.  Results vary.  A lot.

Let’s start with the worst…


It wasn’t all that long ago that I was considering subscribing to Harper’s.  They publish some great essays, including the excellent August nonfiction piece “Happiness Is a Worn Gun”.  But if this week’s column by the magazine’s publisher is any indication, Harper’s holds the web in outright contempt.  Publisher John MacArthur describes his web strategy as “protectionist”, and that says it all.  MacArthur is impervious to the recommendations of “internet hucksters”.  Moreover, he’s “offended” by “the online sensibility”.

The internet’s impact on our politics is a controversial topic.  There is a diverse range of respectable views on the subject.  But it’s clear MacArthur has no interest in pro-internet arguments, no matter their merit or credentials.  He prefers to dismiss them, just like he dismissed email.  And we all know how that turned out… No joke, that’s his actual argument.  You can’t make this up.

With a publisher like this at the helm, I’m not optimistic about Harper’s chances.  And that’s a real shame.

The New Yorker

The New Yorker is my boring, middle-of-the-road example here.  It’s not a natural web innovator, but it hasn’t rolled over either.  It’s got a robust blogs section.  And it recently redesigned its site, which now feels more web-like yet preserves much of the classic New Yorker look.

The Atlantic

Full disclosure: The Atlantic was my favorite of the three growing up.  And it still is.  These days a lot of that is due to its excellence online.  The magazine has assembled a top-notch team of bloggers, built around the celebrity of Andrew Sullivan.  And just months ago it launched a terrific Technology channel which quickly became my favorite source of tech news and analysis.  On top of that, they’ve parlayed it all into profit.

Online matters

I could go on at length about all the reasons online matters, but for now I’ll address only how it impacts my consumption and support of these magazines.  I can say with confidence that so long as John MacArthur is publisher of Harper’s I will never subscribe.  I was given a subscription to The New Yorker as a gift, and I’m really enjoying it.  But would I purchase it on my own?  Doubtful.  On any given week I read maybe 2 New Yorker pieces.  But the Atlantic?  It’s increasingly a go-to source for me.  I haven’t subscribed to the magazine, but that’s more a reflection of how I read the news.  If they ever offered a donation model, I’d happily support them.  Not that they’d need it, given their success with advertising revenue.

Markets and Networks

Several weeks ago Steven Johnson took to the op-ed page of The New York Times to defend his excellent new book on innovation and to declare “I am not a Communist.”  The question of possible communist sympathies was raised, apparently, on a book tour, in reference to his support of what he dubs “fourth quadrant” innovation.  The “fourth quadrant” refers to innovations produced by networked non-market actors, a category including open-source software, among other things, which Johnson argues has an unparalleled track record in fostering breakthroughs.

Does that make him a communist?  He doesn’t think so:

the problem is that we don’t have a word that does justice to those of us who believe in the generative power of the fourth quadrant… The choice shouldn’t be between decentralized markets and command-and-control states.

And he’s right.  The rise of the web has exposed the market-state dichotomy as transparently inadequate.  Projects like Linux and Wikipedia hint at the existence of a very different model of economic organization, one that seemingly fits neither category.

It is a model we are only beginning to understand, and yet in many ways it challenges some of our core beliefs about how to organize a society.  In the contest between markets and central planning, the market has been largely (and largely justifiably) ascendant.  Yet the lessons of its ascendancy are subtly and not-so-subtly contradicted by the ways in which we organize, communicate and produce information online.

To understand how, we have to temporarily return to the battle between market and state.  In The Future of Ideas Lawrence Lessig writes:

Over the past hundred years, much of the heat in political argument has been about which system for controlling resources – the state or the market – works best.  That war is over.  For most resources, most of the time, the market trumps the state.  There are exceptions, of course, and dissenters still.  But if the twentieth century taught us one lesson, it is the dominance of private over state ordering.*

Why?  That is, of course, a question fit for a lifetime of inquiry.  But let me take a stab at summing it up: because humans are selfish and stupid.


Markets motivate us by aligning incentives.  We are more likely to exert effort when doing so directly benefits us.  A considerable portion of social science revolves around this tenet, which might be expressed in shorthand as Most of us are self-interested most of the time.  We often simplify even further by treating selfishness as profit maximization.  As Harvard’s Yochai Benkler explains in his masterpiece The Wealth of Networks:

Much of economics achieves analytic tractability by adopting a very simple model of human motivation… Adding more of something people want, like money, to any given interaction will, all things considered, make that interaction more desirable to rational people.  While simplistic, this highly tractable model of human motivation has enabled policy prescriptions that have proven far more productive than prescriptions that depended on other models of human motivation — such as assuming that benign administrators will be motivated to serve their people, or that individuals will undertake self-sacrifice for the good of the nation or the commune. (pg. 92)


Markets prevail over central planning in large part due to the cognitive constraints of central planners.  We can only gather and process so much information.  Which means our actions have unforeseen consequences, the future is hard to predict, and so on.  Here I’ll lean on Cass Sunstein channeling Hayek in his book Infotopia:

Hayek claims that the great advantage of prices is that they aggregate both the information and the tastes of numerous people, incorporating far more material than could possibly be assembled by any central planner or board… For Hayek, the key economics question is how to incorporate that unorganized and dispersed knowledge.  That problem cannot possibly be solved by any particular person or board.  Central planners cannot have access to all of the knowledge held by particular people.  Taken as a whole, the knowledge held by those people is far greater than that held by even the most well-chosen experts. (pg. 119)

Similarly, in his 1977 book “Politics and Markets”, political scientist Charles Lindblom describes the “key difference” between markets and central planning as “the role of intellect in social organization” with “on the one side, a confident distinctive view of man using his intelligence in social organization [central planning]; on the other side, a skeptical view of his capacity [markets].” (pg. 248)

The Networked Information Economy

At the macro level markets continue to maintain these advantages over planning.  But is there another game in town?  What we see on the web challenges us to at least reconsider the unassailability of markets, both with respect to motivation and information.  Asks Benkler:

Why can fifty thousand volunteers successfully coauthor Wikipedia… and then turn around and give it away for free?  Why do 4.5 million volunteers contribute their leftover computer cycles to create the most powerful supercomputer on Earth, SETI@Home?

Econ 101 has a hard time answering.  The high-profile success of these and other projects forces us to remember that the simplistic model of human motivation, central as it is to our faith in markets, was never universally true.  Further, these projects invite us to revisit the usefulness of such an assumption, and to strive for a more complete model of human motivation.  We create and produce for any number of reasons beyond profit, including altruism, status, or even – in a world of low transaction costs – boredom.

Just as the market’s claim to dominance in motivating us is starting to be challenged, some are revisiting its dominance in aggregating information.  Sunstein explores the subject in Infotopia and highlights increasing efforts to aggregate human preferences online, by Amazon and Netflix among others.  If it’s obvious that we are doing better and better at aggregating information thanks to the Net, it’s less obvious how this might challenge the role of the market.

Imagine that Netflix has a small, fixed number of copies of a rare movie to rent, and that it’s in high demand.  Who should get it first?  Auction the privilege off to the highest bidder, responds the free market advocate.  And, particularly in a scenario where customers have equal wealth at their disposal, this method has a lot to recommend it.  The market is incredibly efficient at allocating resources under ideal conditions.  Tremendous gains in human welfare have been predicated on this fact.  But Netflix is developing sophisticated algorithms that use your ratings of movies you’ve seen to predict what movies you’ll like.  Is it so hard to believe that some day an algorithm could – given the aim of maximizing viewer enjoyment – “beat the market” in determining how to distribute the movie?
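To make the contrast concrete, here’s a toy sketch of the two allocation rules side by side.  Everything in it is invented for illustration – the customers, the bids, and the naive “predicted enjoyment” scores – and it obviously says nothing about how Netflix actually works:

```python
# Toy comparison of two ways to allocate scarce copies of a movie:
# (1) auction them to the highest bidders, or (2) give them to the
# viewers an algorithm predicts will enjoy them most.
# All names and numbers below are invented for illustration.

customers = {
    "alice": {"bid": 12.00, "predicted_enjoyment": 0.55},
    "bob":   {"bid": 3.50,  "predicted_enjoyment": 0.97},
    "carol": {"bid": 9.00,  "predicted_enjoyment": 0.60},
}
copies = 2  # only two copies to hand out

def allocate(customers, copies, key):
    """Return the `copies` customers ranked highest by `key`."""
    ranked = sorted(customers, key=lambda c: customers[c][key], reverse=True)
    return ranked[:copies]

by_auction = allocate(customers, copies, "bid")
by_algorithm = allocate(customers, copies, "predicted_enjoyment")

print(by_auction)    # ['alice', 'carol'] – the deepest pockets win
print(by_algorithm)  # ['bob', 'carol'] – the keenest viewers win
```

The point of the sketch is just that the two rules answer the “who gets it first?” question differently whenever willingness to pay and predicted enjoyment diverge, as they do for bob here.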

We are undoubtedly in the early stages of understanding what motivates us to collaborate online (and off), and probably even less far along in our efforts to manage and make useful the wealth of information online, including identifying and aggregating our preferences.  I’ve been purposefully vague here in describing the new model I’m discussing.  Better defining that model will be the topic of a future post.

My argument here is simply that our increasingly connected world – what Benkler calls the “networked information economy” – invites us to question some of the most basic premises that have led us to organize our society around the market.  It would be foolish to let those premises, and the new models that challenge them, go unexamined.

*Lessig is explicit that he is talking about consumption, not production.  It’s a useful distinction, however the two are more related than he seems to admit in this instance.
**In borrowing Lessig’s words here I don’t mean to subscribe to any aggressively free-market worldview.  The choice between a centrally planned economy and a mostly privately organized one may be settled.  The battle over where to draw the lines in the mixed economy rages on.

The home page still matters

I honestly couldn’t tell you what the home pages of several of my favorite websites look like.  I do a lot of my reading through an RSS reader, which delivers new content from the blogs and news feeds to which I subscribe.  When I do visit the actual sites, it’s almost exclusively through links from Twitter, Facebook, Gchat, email, etc., which direct me to specific pieces of content.  I almost never head over to a site’s home page to see what’s promoted there.

But the home page still matters.  A lot.  That’s my takeaway from Gawker founder Nick Denton’s description of the site’s redesign.  The redesign “represents an evolution of the very blog form that has transformed online media over the last eight years,” according to Denton.  And one of the central changes is replacing the reverse-chronological content display of the typical blog with “one visually appealing ‘splash’ story, typically built around compelling video or other widescreen imagery and run in full.”

There’s a lot more in the post, including on the merits of video and of scoops over aggregation.  But more than anything, I was surprised by the enduring emphasis on the home page, even in the age of Twitter.  Maybe Beyond the Times oughta mix up the home page a bit?

(If you don’t have time to read the whole Denton post, The Atlantic’s Alexis Madrigal has a Gawker-esque top 5 takeaways post.)

The creative case for work-life balance

A while back I read an interesting NYT piece on how entrepreneurs often exhibit manic tendencies.  Most extreme was Scvngr CEO Seth Priebatsch:

To keep the pace of his thoughts and conversation at manageable levels, he runs on a track every morning until he literally collapses. He can work 96 hours in a row. He plans to live in his office, crashing in a sleeping bag. He describes anything that distracts him and his future colleagues, even for minutes, as “evil.”

Intense.  After reading this, I began to wonder about how crucial this sort of intensity and stamina is to success.  Is it possible to compete with this personality type while getting 8 hours of sleep every night?  While having a life?

I was reminded of this by a post by Matt Douglas, himself a startup CEO.  Douglas zeroes in on a number of Priebatsch quotes from various sources and argues the merits of work-life balance.  It’s well worth a read.

As I reconsidered Priebatsch’s case, I recalled a line from Steven Johnson’s latest book: Where Good Ideas Come From.

I’ll be posting more about the book – and on topics more relevant to this blog’s core focus – but I wanted to share a bit that I count as an argument against the Priebatsch model.

Writing about the importance of “exapting” ideas from one field to another, Johnson relates the discovery of the double-helix structure of DNA, including this bit:

It is a fitting footnote to the story that Watson and Crick were notorious for taking long, rambling coffee breaks, where they tossed around ideas in a more playful setting outside the lab – a practice that was generally scorned by their more fastidious colleagues.  With their weak-tie connections to disparate fields, and their exaptative intelligence, Watson and Crick worked their way to a Nobel Prize in their own private coffeehouse.

An anecdote like that is hardly compelling evidence on its own, but the lesson here is consistent with the book’s larger thesis.

On the one hand, work-life balance recommends itself and doesn’t need to lean on arguments about fostering innovation.  On the other, I’d sure love to be able to work effectively on 3 or 4 hours of sleep every night.

But just in case other mere mortals are discouraged by stories of the Priebatsches of the world, they ought to take heart: a coffee break, a bit of pleasure reading, perhaps even a bit of daydreaming can foster creativity.  It seems at least possible that the very same focus that is helping Priebatsch succeed could also be holding him back.

Imagine a smart chair

Hearing others’ visions for the future of the Net can be inspiring.  But a lot of the time it’s not.  One thing I’m struck by with the explosion of social media, in particular, is the shallow nature of the industry’s ambition.  For every person writing about how Twitter can enable political change, five others are preparing slidedecks on how social media can offset your firm’s direct mail budget.  There’s a place for that, of course.  But one of the great things about the internet is that it invites us to consider more radical possibilities for change.

The Success of Open Source

As I was thinking about this I was reminded of a quote from the end of Steven Weber’s 2004 book The Success of Open Source, and I decided it was worth sharing.

(He’s just finished describing Wired editor Kevin Kelly’s vision of smart objects, priced in real-time.)  Weber:

Imagine a smart chair, connected to a lot of other smart things, with huge bandwidth between them, bringing transaction costs effectively to zero.  Now ask yourself, With all that processing power and all that connectivity, why would a smart chair (or a smart car or a smart person) choose to exchange information with other smart things through the incredibly small aperture of a “price”? A price is a single, mono-layered piece of data that carries extraordinarily little information in and of itself.  (Of course it is a compressed form of lots of other information, but an awful lot is lost in the process of compression.)  The question for the perfect market that I’ve envisioned above is, Why compress?  My point is that even a perfect market exchange is an incredibly thin kind of interaction, whether it happens between chairs or between people, whether it is an exchange of goods, ideas, or political bargains.  I want to emphasize that communities, regimes, and other public spheres can come in many different shapes and forms.  The “marketized” version is only one, and it is in many ways an extraordinarily narrow one that barely makes use of the technology at hand.

So there you are.  The point of this blog, really, is to take the internet up on its invitation, and to think more creatively about society and its future.

Who are you calling reduced?

Zadie Smith has a… I’ll say frustrating… essay in The New York Review of Books about Facebook, The Social Network and Jaron Lanier’s book You Are Not a Gadget.  While she raises some interesting questions, and while I look forward to reading Lanier’s book, there’s a lot I don’t accept.  Over at The Atlantic Alexis Madrigal has a smart and tempered response taking on, among other things, the charge that Facebook promotes homogenization.

Smith’s central point, as I read her, is this:

When a human being becomes a set of data on a website like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendships. Language. Sensibility. In a way it’s a transcendent experience: we lose our bodies, our messy feelings, our desires, our fears.

Who among us has lost our messy feelings, our desires and our fears?  Anyone? Bueller?  I thought not.  As I keep pointing out, we use online tools to supplement our offline lives rather than to replace them.

That doesn’t mean that there’s no reason for concern.  Smith quotes Lanier:

Different media designs stimulate different potentials in human nature.

So what potentials are we stimulating with today’s web tools?  Smith seems to think we’re not stimulating much worthwhile. Ezra Klein has a different take:

…if you’re someone who likes to spend Saturday in a quiet room with a good book and a long time to think about it, you might find Facebook unnerving. And Zadie Smith and Ross Douthat do. Sometimes, I’d guess, we all do. Conversely, if you’re someone who likes people but has trouble meeting them, or gets shy in unfamiliar social settings, you probably don’t think the Internet has made you less human.

It’s worth reading Ezra’s whole post.  He references “Alter Ego”, a book matching photos of online gamers with their avatars, and highlights a particularly compelling example of “becoming human” online.

For a more philosophical examination of how the web is contributing to human self-actualization, I recommend Yochai Benkler’s The Wealth of Networks.

Benkler argues persuasively that the ‘net is enhancing our autonomy and enabling our individuality.  This is not guaranteed by the technology.  And so many of Smith’s concerns end up being important to guaranteeing that the net continues to improve human welfare.  Yet, we have not been reduced and little has been lost.  Rather, much has already been gained.

Code is law, and also romance

Alexis Madrigal has an interesting column in this month’s Atlantic on the use of algorithms in online dating.  If data mining and algorithms can help people more efficiently find matches, what could be wrong with that?  Plenty, says Madrigal:

The company can quantify things you could guess but might rather not prove. For instance, all races of women respond better to white men than they should based on the men’s looks. Black women, as a group, are the least likely to have their missives returned, but they are the most likely to respond to messages.

I asked Yagan whether OkCupid might try tailoring its algorithm to surface more statistically successful racial combinations. Such a measure wasn’t out of the question, he said. “Imagine we did a lot of research, and we found that there were certain demographic or psychographic attributes that were predictors of three-ways. Hispanic men and Indian women, say,” Yagan suggested. “If we thought that drove success, we could tweak it so those matches showed up more often. Not because of a social mission, but because if it’s working, there needs to be more of it.”

So perhaps it’s a bit trickier than we might think.  Moreover, it’s hard to disagree with Madrigal’s basic point:

Algorithms are made to restrict the amount of information the user sees—that’s their raison d’être. By drawing on data about the world we live in, they end up reinforcing whatever societal values happen to be dominant, without our even noticing. They are normativity made into code—albeit a code that we barely understand, even as it shapes our lives.

We’re not going to stop using algorithms. They’re too useful. But we need to be more aware of the algorithmic perversity that’s creeping into our lives.

Quite so.  This point is in line with Lawrence Lessig’s argument that “code is law”, and I certainly agree that we need to care, as a society, about the values underlying our code.

That said, Madrigal points out that dating algorithms 1) are not transparent and 2) can accelerate disturbing social phenomena, like racial inequity.

True enough, but is this any different from offline dating?  The social phenomena in question are presumably the result of the state of the offline world, so the issue then is primarily transparency.

Does offline dating foster transparency in a way online dating does not?  I’m not sure.  Think about the circumstances by which you might meet someone offline.  Perhaps a friend’s party.  How much information do you really have about the people you’re seeing?  You know a little, certainly.  Presumably they are all connected to the host in some way.  But beyond that, it’s not clear that you know much more than you do when you fire up OkCupid.  On what basis were they invited to the party?  Did the host consciously invite certain groups of friends and not others, based on who he or she thought would get along together?

Is it at least possible that, given the complexity of life, we are no more aware of the real-world “algorithms” that shape our lives?

None of this takes away from the salience of Madrigal’s point: we should want to know more about the algorithms that dictate our online behavior.  Not because we aren’t used to the opaque complexity of circumstance, but because we are.
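To see how a ranking rule can quietly entrench whatever pattern it starts with, here’s a toy feedback loop – the groups, rates, and mechanics are all invented, and far simpler than anything a real dating site runs.  A matcher that shows pairings in proportion to their historical response rate ends up showing the already-favored pairing more and more, regardless of why the initial gap existed:

```python
# Toy illustration of an algorithmic feedback loop: pairings that
# historically earn more responses get shown more often, which earns
# them still more responses. Groups and rates are invented.

exposure = {"pairing_A": 0.5, "pairing_B": 0.5}         # share of matches shown
response_rate = {"pairing_A": 0.30, "pairing_B": 0.20}  # fixed underlying rates

for _ in range(5):
    # Observed successes this round: exposure times response rate.
    successes = {p: exposure[p] * response_rate[p] for p in exposure}
    total = sum(successes.values())
    # Re-weight exposure toward whatever "worked" last round.
    exposure = {p: successes[p] / total for p in exposure}

# After five rounds pairing_A's share has grown from 0.50 to roughly 0.88,
# even though its underlying advantage never changed.
print(exposure)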

(FWIW, I highly recommend OkCupid’s blog, OkTrends.  They put the scary amount of data to which they have access to consistently interesting use.)

Facebook and face-to-face

I’ve blogged about this before, but I wanted to share a great post from Ed Glaeser at NYT’s Economix on how social networking – in this case Facebook – supplements in-person interaction rather than replacing it:

it isn’t clear if Facebook will increase or decrease the demand for face-to-face interactions.  When theory is ambiguous, we need to turn to the data, and it seems empirically that Facebook supports, rather than replaces, in-person meetings. For example, surveys of Facebook users have found that the use of “Facebook to meet previously unknown people remained low and stable” and that “students view the primary audience for their profile to be people with whom they share an offline connection.” In other words, Facebook seems to be typically used to connect people who have connected through some other medium, like being in the same class or meeting at a party, which seems to suggest complementarity between meeting face-to-face and connecting on Facebook.

Another paper looks at whether people who are good at face-to-face interactions made greater use of social-networking sites. The study examined a group of 13- to 14-year-olds in 1998-9 and rated their ability to connect well in person with a close friend. In 2006-8, those same people were asked about their involvement with social-networking sites.

The people who were better at interacting face-to-face in adolescence had more friends on social-networking sites as young adults. Again, electronic interactions seem to complement face-to-face connections.