Social networks and the app ecosystems that surround them may find themselves at odds over user privacy.

Cult of Mac's original screencap of the Girls Around Me application.

Last week, Cult of Mac had a fascinating, stomach-churning story about an application called Girls Around Me that scraped public Foursquare and Facebook checkins onto a map that showed people in your vicinity. Its branding was crass -- "In the mood for love, or just after a one-night stand? Girls Around Me puts you in control!" -- but, as the developers of the app argued, they had technically done nothing wrong aside from being piggish and crude.

Oddly, they were right.

They took publicly available data and put it on a map. The sexysexy frame they put around it made it *seem* creepier, but in terms of the data they accessed and presented, everything was within the rules of the game. They had done nothing that couldn't be done by another app developer.

This is basically how app ecosystems built on data from Foursquare, Facebook, and Twitter are supposed to work: developers get an idea for something the main services never thought of and build it out of whatever data is available.
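To get a sense of how low the bar is, here is a rough sketch -- my own, not the app's actual code -- of what "put publicly available data on a map" amounts to in practice. The endpoint, field names, and key below are hypothetical stand-ins for whatever a real service exposes; the point is only that the whole pattern fits in a few dozen lines.

```python
# A minimal sketch of the general pattern, NOT the actual Girls Around Me code.
# "PUBLIC_CHECKINS_URL" and its response fields are hypothetical stand-ins for
# any API that exposes public check-in data; real endpoints and auth differ.
import requests
import folium

PUBLIC_CHECKINS_URL = "https://api.example.com/v1/checkins/nearby"  # hypothetical


def checkins_near(lat, lng, radius_m=1000, api_key="YOUR_KEY"):
    """Fetch public check-ins near a point from the (hypothetical) API."""
    resp = requests.get(
        PUBLIC_CHECKINS_URL,
        params={"lat": lat, "lng": lng, "radius": radius_m, "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("checkins", [])


def map_checkins(lat, lng):
    """Drop every nearby public check-in onto an interactive map."""
    m = folium.Map(location=[lat, lng], zoom_start=15)
    for c in checkins_near(lat, lng):
        folium.Marker(
            [c["lat"], c["lng"]],
            popup=c.get("user_name", "unknown"),
        ).add_to(m)
    m.save("nearby.html")


# map_checkins(40.7308, -73.9973)  # e.g., the blocks around Washington Square Park
```

Nothing in that sketch breaks a rule; it only reads what the services already make public.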

Using the traditional privacy idea that once something's public, it is public for any purpose, you're led down a very narrow path of reasoning about this app's appropriateness and what the services it was built on could do about it. If app developers can use any data that's out there and there is data that can be assembled into a creepy application, then there will be creepy applications. Foursquare, Apple, and Facebook are absolved of responsibility because of the technical impossibility of checking on each and every app's creepiness factor. As Amit Runchal put it on his blog:

The implicit blame in these conversations is confusing. What, exactly, is Foursquare et al. expected to do? Comb through each and every request to their API? As of last year there were 10,000 developers using the Foursquare API. There are 146,000 publishers in iOS and likely a similar number to be found on Facebook. Collectively there are well over a million apps between all three platforms, with far fewer employees available to review these apps. What are the solutions? Hire more employees? There will never be enough. Do more in-depth testing of apps? That will merely slow down the already frustrating-to-many approval process and likely threaten the ecosystem that we have come to depend upon. And still creepiness will get through. It's inevitable.

This is where we have to break down the concept of creepiness, and where NYU philosopher Helen Nissenbaum's concept of privacy in context becomes so important. First, a quick recap of her idea from last week's story:

The standard explanation for privacy freakouts is that people get upset because they've "lost control" of data about themselves or there is simply too much data available. Nissenbaum argues that the real problem "is the inappropriateness of the flow of information due to the mediation of technology." In her scheme, there are senders and receivers of messages, who communicate different types of information with very specific expectations of how it will be used. Privacy violations occur not when too much data accumulates or people can't direct it, but when one of the receivers or transmission principles changes. The key academic term is "context-relative informational norms." Bust a norm and people get upset.
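Her framework is abstract, but it is concrete enough to caricature in code. What follows is strictly my own toy model, not a formalism Nissenbaum provides: a "norm" pins down who sends what kind of information to whom, in what context, under what terms, and a flow that matches no established norm counts as a violation.

```python
# An illustrative toy model of "context-relative informational norms" -- my own
# sketch of the idea, not anything Nissenbaum herself specifies.
from dataclasses import dataclass


@dataclass(frozen=True)
class Norm:
    context: str                 # e.g., "social check-in"
    sender: str                  # who shares the information
    receiver: str                # who is expected to receive it
    info_type: str               # what kind of information flows
    transmission_principle: str  # the terms under which it may flow


@dataclass(frozen=True)
class Flow:
    context: str
    sender: str
    receiver: str
    info_type: str
    transmission_principle: str


def violates(flow: Flow, norms: list[Norm]) -> bool:
    """A flow is a violation if no established norm matches it exactly."""
    return not any(
        (n.context, n.sender, n.receiver, n.info_type, n.transmission_principle)
        == (flow.context, flow.sender, flow.receiver, flow.info_type,
            flow.transmission_principle)
        for n in norms
    )


norms = [Norm("social check-in", "user", "friends on the service",
              "current location", "shared voluntarily with my network")]

repurposed = Flow("social check-in", "user", "strangers browsing a map app",
                  "current location", "aggregated without my knowledge")

print(violates(repurposed, norms))  # True: the receiver and the terms changed
```

In this toy version, the data never changed; only the receiver and the transmission principle did. That is exactly the kind of shift Nissenbaum says makes people feel violated.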

According to the traditional privacy framework, there is no reason for people to get upset about Girls Around Me, or less crass apps that do the same thing. They'd already given up their data to the public, so why was anyone upset? But using Nissenbaum's theory, the bad feelings that people have around the app make sense: People gave data to Foursquare or Facebook in one context and then it showed up in another context that they weren't expecting.

I still take Runchal's point: as long as there are thousands of applications being produced using data from Foursquare, Twitter, and Facebook, it is going to be very, very hard to keep these kinds of contextual privacy violations from occurring. I would go so far as to say that Nissenbaum's ideas challenge the very notion of how application ecosystems are supposed to work. Social app developers, almost by definition, are supposed to come up with new uses -- undreamed-of uses -- for people's personal data.

As more data about people flows into the Internet's core social networks, I think their role as platforms that feed information to ecosystems of apps and their role as services that people use directly may find themselves at odds. Doing one of these jobs well might preclude doing the other well. After all, retaining the contextual integrity of information is how you get people to share with you in the first place. Think about it this way: Who would you tell your secrets to? The person who keeps his mouth shut or the person who allows your information to be repurposed because other people might find it entertaining?

If violations like this continue, respecting the context in which data is given might not just be a good privacy practice; it might become a good business practice. And I'm not sure what that would mean for app developers who've become dependent on data from the big services.
