Danah Boyd – Gigaom (https://gigaom.com)

When it comes to social media, teens are not all created equal
Mon, 12 Jan 2015
https://gigaom.com/2015/01/12/when-it-comes-to-social-media-teens-are-not-all-created-equal/

I wrote a post recently based on something that 19-year-old Andrew Watts published on Medium, about the way he uses various social media networks. It was a fascinating look at how different services — from SnapChat and Instagram to Facebook and Twitter — appeal or don’t appeal to Andrew and his friends. But as sociologist and teen expert danah boyd rightly points out in a follow-up post at Medium, it’s dangerous to extrapolate too much from what one teenager like Andrew does with social media.

This is something that boyd (who chooses to spell her name without using capital letters) is in a position to know better than just about anyone, because she has spent more than a decade studying the way that teenagers use social media, and has written a number of fascinating papers as well as a book devoted to what they do and don’t do.

In her post, boyd notes that there is a tendency in tech and media circles to see teenagers as a kind of undifferentiated mass about which little is known, and so when someone like Andrew speaks up about their own experience, their contributions are inevitably used to generalize about what teenagers as a whole want or don’t want from social media. As boyd puts it:

“I work hard to account for the biases in whose voices I have access to because I’m painfully aware that it’s hard to generalize about a population that’s roughly 16 million people strong. They are very diverse and, yet, journalists and entrepreneurs want to label them under one category and describe them as one thing.”

Social media and race

In my own defense, I did point out in my post that Andrew is a single user (something he noted as well) and also that he is a 19-year-old student at the University of Texas, so his behavior and usage patterns wouldn’t necessarily match those of other users. But boyd goes into more detail about why we shouldn’t generalize — and one of the reasons is that Andrew is a white male university student, and therefore likely from a relatively privileged background compared to lots of other social-media users.


This is especially important when considering what Andrew says about Twitter, boyd says: He mostly dismisses it as something that he and his friends don’t really understand or use, except perhaps to complain about school. But boyd notes that Twitter has played a central role in the discussion around events such as the shooting of Michael Brown in Ferguson, and has been used by many members of the black community and others to spread both information and to rally support using hashtags like #BlackLivesMatter.

“Let me put this bluntly: teens’ use of social media is significantly shaped by race and class, geography and cultural background. The world of Twitter is many things and what journalists and tech elites see from Twitter is not even remotely similar to what many of the teens that I study see, especially black and brown urban youth.”

While there’s nothing wrong with Andrew’s viewpoint per se, boyd says, it represents only a specific segment of the population, and one that routinely gets more attention than other perspectives do — in large part because it comes from someone who is more or less like those in the tech industry (i.e., predominantly male, upper-middle-class and white). The problem is that this attention “whitewashes teens’ practices in deeply problematic ways,” boyd says.

Others like us

To take another example, boyd notes that Andrew talks about WhatsApp as only being useful if you travel abroad, which “renders invisible” the way that many teenagers from immigrant families use the app to communicate with loved ones in other countries, where text messaging is prohibitively expensive (one recent study showed that in some countries like Qatar, the app is also used for general news and discussion, even more than Facebook or Twitter).

I think boyd makes an important point, which is that by paying more attention to narratives that fit our own experiences, we not only miss interesting aspects of those networks as they apply to the lives of different users, but we also help reinforce what could be damaging biases about what those networks are for, or how they should be used — and that in turn can help determine which services succeed or fail. As she puts it:

“The fact that professionals prefer anecdotes from people like us over concerted efforts to understand a demographic as a whole is shameful. More importantly, it’s downright dangerous. It shapes what the tech industry builds and invests in, what gets promoted by journalists, and what gets legitimized by institutions of power.”

There’s no question that the temptation to focus solely on the perspective of users who are similar to us is a powerful one, and one I myself have succumbed to on more than one occasion, because doing so reinforces or validates our own behavior. But in many ways, looking at how people different from us are using those networks is not only more interesting but more valuable as well — because it can highlight strengths or weaknesses that would never have become obvious if all we looked at was the experience of our peer group.

This might be the best thing anyone can do with data
https://gigaom.com/2013/04/10/this-might-be-the-best-thing-anyone-can-do-with-data/
Wed, 10 Apr 2013

Sometimes, when I read about new ways to serve better ads or recommendations, or to analyze who likes what on Twitter, I find myself asking who the hell cares. That’s because, sometimes, it all seems beyond trivial. When I imagine myself in the shoes of a modern-day slave being forced to work grueling hours under grueling conditions in a developing country, or a child whose parents are pimping her out to pedophiles, I can’t seem to figure out why it matters that my Starbucks coupon is delivered at the ideal time when I’m approaching the store.

So when I wrote on Monday about the work of the SumAll Foundation to bring the world of business and next-generation data analytics to non-profits, I was genuinely excited about what they were doing. The foundation’s first effort was around quantifying human trafficking and raising awareness of the problem. One of its next projects has to do with analyzing the online behavior of pedophiles. And the SumAll Foundation isn’t just gathering data and making infographics, but rather sharing deeper data with the relevant organizations and teaching them how to do some of this work themselves.

I was even happier on Tuesday when I began going through my Google Reader feeds and read about two other efforts dedicated to using data to fight human trafficking and sexual exploitation. One is from Microsoft researcher and Ivy League academic danah boyd. The other is from Google.

Tech can help when it understands human nature

Boyd’s work isn’t so much a project as it is a framework for helping the growing number of technologists she sees working with non-profit organizations and government institutions to fight the exploitation of children. On her blog, boyd notes that technology certainly can help combat human trafficking, but that there are very human and complex factors that need to be considered before simply building a system the way one would, presumably, for serving targeted ads.

“On too many occasions, I’ve watched well-intentioned technologists approach the space with a naiveté that comes from only knowing about human trafficking through media portrayals. While the portraits that receive widespread attention are important for motivating people to act, understanding the nuance and pitfalls of the space are critical for building interventions that will actually make a difference.”

You can read the full four-page primer on her site, but here are some of the 10 points she addresses. She learned these lessons in part from discussions with leading scholars — some of whom Microsoft funded — researching the role that technology plays in facilitating human trafficking:

Post-identification support should be in place before identification interventions are implemented.

Evaluation, assessment, and accountability are critical for any intervention.

Efforts need to be evidence-based.

The cleanliness of data matters.

Civil liberties are important considerations.

A global network, backed by some data heavyweights

Then there’s Google, which awarded $3 million to three anti-trafficking organizations based in the United States, Asia and Europe in order to establish a Global Human Trafficking Hotline Network. The goal of the network, Google’s blog post explains, is to “collect data from local hotline efforts, share promising practices and create anti-trafficking strategies that build on common patterns and focus on eradication, prevention and victim protection.” This is critical: As the team at SumAll pointed out, one of the hardest things to do is facilitate effective data sharing across organizations so everyone has a clearer picture of what’s actually happening.

Here’s how Google explains the role of data sharing:

“Appropriate data can tell the anti-trafficking community which campaigns are most effective at reducing slavery, what sectors are undergoing global spikes in slavery, or if the reduction of slavery in one country coincides with an increase right across the border.”

This isn’t Google’s first foray into funding anti-trafficking efforts. In 2011, the company donated $11.5 million to the cause. This time, though, it’s joined by the intelligence sector’s favorite data-analysis startup, Palantir, as well as Salesforce.com, which is helping to scale the call-tracking infrastructure.

And of course I understand that advertising and other commercial efforts are a necessary part of the economy, but watching data-analysis technology do little else but line the pockets of already-rich individuals and corporations does get a bit old. When the money these efforts generate and the technologies they inspire help fund the fight against some of the most egregious abuses on the planet — abuses that affect individuals from demographics no advertiser really cares about, and that sometimes help corporations drive larger profits — the whole discussion around the importance of data starts to seem a lot more meaningful.

Here’s Google’s three-minute video explaining the problem and how it thinks its hotline network can help:

Elsevier buys Mendeley, many prominent users migrating
Tue, 09 Apr 2013
https://gigaom.com/2013/04/09/elsevier-buys-mendeley-many-prominent-users-migrating/

Elsevier, the publisher, has acquired Mendeley, the London/NYC developer of a very popular academic/research social network. Elsevier is hoping to become a dominant player in the expanding market for open and online academic sharing, and Mendeley has grown to 2.3 million users. The acquisition is reported to be for as much as £65m.

There has been a loud outcry from many prominent researchers who believe that Elsevier uses its position in the academic publishing world in ways that hurt researchers, citing its monopolistic sales of bundled journals to libraries and organizations, the company’s support of SOPA (the Stop Online Piracy Act), and related positions that seem counter to openness and the best interests of researchers.

Here’s danah boyd, who says she plans to export her data and find an alternative.

David Weinberger engaged in a discussion with William Gunn, the head of Academic Outreach at Mendeley, but remains unconvinced by Gunn’s expressions of sincerity.

It looks like a rocky road ahead for Mendeley. Perhaps Elsevier will turn out to be benevolent in its intentions, but in the meanwhile a great many of those who participated in Mendeley will be leaving, it seems.

This may be an opportunity for others — like LinkedIn or MightyBell — to create an alternative to Mendeley: start with a small subset of the most critical capabilities and build up from there.

It was a year ago that Andy Baio posted this tweet, and in his (far fewer than) 140 characters he perfectly captured the key elements of this blog post: trending topics, geographies, and exogenous/endogenous events.

There are two types of trending topics on Twitter: endogenous and exogenous. Endogenous TTs happen when a topic has a viral spread. Once it becomes a TT, everyone jumps onto it to spread it even further. So when we see a hashtag like #intenyears we know it didn’t happen naturally [snip]. Exogenous TTs happen when everyone is talking about the same thing simultaneously, not really responding to each other or to the trending topic per se but responding to a cultural moment. This often happens when there are major news events or TV shows that are broadcasting something of great interest.

There is some value judgment implicit in Danah’s post, which I still cannot pinpoint quite clearly; but it does seem that exogenous trends are of some higher value, perhaps because they reflect on events that occur in the physical world, not just the virtual one (granted, virtual events can eventually spread to the real world and affect it — ask Anthony Weiner — but it is not clear how often that happens).

My co-authors and I recently published a paper (pdf) that asked two related questions: What types of trending topics can be captured in Twitter data (using standard techniques)? What are the important dimensions according to which these trends can be characterized, and what are the key distinguishing features of trends that can assist in automatically categorizing and differentiating the different trending topics?

To keep this post short, I will just mention that we did some qualitative coding and affinity analysis of a sample of trends from one geographic location (New York). This work was done with a computer science PhD student — Hila Becker — in her first foray into qualitative methods (yes, I am pretty proud of that, and of her; Luis Gravano is another co-author on this NSF-funded work). The emerging categories we identified were split into “endogenous” and “exogenous” types, and included categories like breaking news events, broadcast media events (like the type Danah alludes to), planned events (like the pep rally mentions in Danah’s post) and so forth.

We quickly transitioned to more traditional computer science tools, and took a sample of trends and all their associated data: tweets, networks of users and a bunch of derived variables (or “features”). The features were chosen with the hypothesis that they can help us automatically differentiate between different types of trends. We grouped these computed features into five categories:

Content features (based on the content of messages for the trend, e.g., proportion of posts with URLs)

Interaction features (the characteristics of conversation/social interaction in the trend’s content)

Time features (how quickly the trend develops and dies)

Participation features (how spread out versus centralized the trend is — are there a few key users posting content or not)

Social network features (how dense is the network between users posting about that trend)
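As a rough illustration of how these five feature groups might be computed — this is a sketch, not the paper’s actual code; the Tweet fields and feature names below are my own assumptions:

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tweet:
    user: str
    text: str
    timestamp: float          # seconds since epoch
    reply_to: Optional[str] = None  # user being replied to, if any

def content_features(tweets):
    # Content: e.g., proportion of posts containing URLs.
    n = len(tweets)
    with_url = sum(1 for t in tweets if "http" in t.text)
    return {"url_fraction": with_url / n if n else 0.0}

def interaction_features(tweets):
    # Interaction: how conversational the trend's content is.
    n = len(tweets)
    replies = sum(1 for t in tweets if t.reply_to is not None)
    return {"reply_fraction": replies / n if n else 0.0}

def time_features(tweets):
    # Time: how quickly the trend develops and dies.
    times = sorted(t.timestamp for t in tweets)
    return {"duration": times[-1] - times[0] if len(times) > 1 else 0.0}

def participation_features(tweets):
    # Participation: centralized (a few dominant authors) vs. spread out.
    counts = Counter(t.user for t in tweets)
    top = counts.most_common(1)[0][1] if counts else 0
    return {"top_author_fraction": top / len(tweets) if tweets else 0.0}

def network_features(tweets, follows):
    # Social network: density of follow edges among the trend's authors,
    # where `follows` is a set of (follower, followee) pairs.
    users = {t.user for t in tweets}
    pairs = [(a, b) for a in users for b in users if a != b]
    linked = sum(1 for p in pairs if p in follows)
    return {"density": linked / len(pairs) if pairs else 0.0}
```

In practice each trend would yield one feature vector built by merging these dictionaries, computed over all tweets collected for that trend.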

The social network features, for example, can perhaps help differentiate exogenous trends (where many people at once react to the same “event”) and endogenous trends (that may spread through connected people in the Twitter network). And indeed, as we show in the paper, that set of features did turn out to be different, in general, between endogenous and exogenous trends.

I leave you with the full paper for more details about differences between trend types (clue: Table 5). However, this finding raises the question: can we use these features to automatically classify trends as exogenous or endogenous?

Well yes we can, at least to some degree. We show this result in our most recent paper, “Beyond Trending Topics: Real-World Event Identification on Twitter” (pdf, to be presented in Barcelona next week at the ICWSM conference). While we use a slightly different method to compute and detect “message clusters” (such as tweets that correspond to the same topic), we show that we can pretty robustly classify these clusters into those that reflect some real-world occurrence and those that do not. Future work: show that this works on the full stream of Twitter data, and determine how early we can do it after a new trending topic is detected. In the meantime, in our second ICWSM paper, we show that we can also select top representative tweets for each cluster.
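To make the classification idea concrete, here is a toy nearest-centroid rule over two hypothetical features (reply fraction and network density) — not the actual classifiers, features, or values from the paper:

```python
import math

def centroid(vectors):
    # Component-wise mean of a list of equal-length feature vectors.
    dims = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dims)]

def classify(x, centroids):
    # Assign x to the class whose centroid is nearest (Euclidean distance).
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Toy training vectors: [reply_fraction, network_density]. The intuition:
# endogenous trends spread through the follower graph (denser networks,
# more replies); exogenous trends are many people reacting independently.
train = {
    "endogenous": [[0.6, 0.30], [0.5, 0.25]],
    "exogenous":  [[0.1, 0.02], [0.2, 0.05]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}

print(classify([0.55, 0.28], centroids))  # → endogenous
print(classify([0.15, 0.03], centroids))  # → exogenous
```

Any standard classifier would do here; the point is only that the feature groups above carry enough signal to separate the two trend types.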

So, if we were in Portland with @waxpancake, our system could perhaps show that #rain is *not* a “Justin Bieber” because unlike most tweets about the young pop star, rain tweets and trending topics will correspond to a real-world occurrence (note to Bieber fans: I am not saying he is not real, but I guess this depends on how you define “real”).

There are many other interesting social media papers at ICWSM, so be sure to check them out.

If there’s one issue that unites major Internet giants like Google (s goog) and Facebook, it’s privacy. Google tries to offer a new service with Buzz, and triggers a series of privacy land mines; Facebook tries to offer new services and runs afoul of privacy concerns as well, then it changes its privacy settings and (according to some) makes the problem worse instead of better. Why is privacy so hard? Sociologist Danah Boyd, who specializes in the way people use social networks, says in the latest issue of MIT’s Technology Review magazine that it’s because “the way privacy is encoded into software doesn’t match the way we handle it in real life.”

Privacy settings are often binary: show this photo to these people, but not this update, and so on. Check a box, click a button. But the real world allows for many shades of grey when it comes to privacy, says Boyd. If you happen to be in a restaurant having a meal with someone, for example, you both know implicitly that you’re in public and therefore whatever you do is public by default, without having to click on any terms-of-use agreements or read pop-up disclosure statements. At the same time, you can easily lean close to the other person and whisper a word or two privately. As Boyd describes it:

We count on what Erving Goffman called “civil inattention”: people will politely ignore us, and even if they listen they won’t join in, because doing so violates social norms. Of course, if a close friend sits at the neighboring table, everything changes. Whether an environment is public or not is beside the point. It’s the situation that matters.

In other words, we all view privacy differently based on the situation we’re in, the other people around us and our relationships with them, our goals and desires within that particular situation, and so on. These things combine to create a complex web of competing pressures and incentives related to whether we keep something private or not: a web so complex that it makes a mockery of the various tools that most services such as Facebook use to help you manage your privacy. Even the ability to create specific lists of friends who have access to certain things quickly becomes cumbersome, as Facebook CEO Mark Zuckerberg has acknowledged. However, Facebook has to make the attempt because it is being pressured by both users and governments over the issue.
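The binary model boyd critiques can be caricatured in a few lines of code — access as a bare set-membership check, with nothing corresponding to situation or context (the names here are illustrative, not any real platform’s API):

```python
# Audience lists: either you are on the list or you are not.
audiences = {
    "close_friends": {"alice", "bob"},
    "everyone": {"alice", "bob", "carol", "dave"},
}

def can_view(item_audience, viewer):
    # A pure membership test -- there is no equivalent here of the
    # restaurant's "civil inattention" or of a situation changing
    # when a close friend sits down at the next table.
    return viewer in audiences.get(item_audience, set())

print(can_view("close_friends", "alice"))  # True
print(can_view("close_friends", "carol"))  # False
```

Everything situational that boyd describes — who is nearby, what the relationship is, what the moment calls for — falls outside what a check like this can express.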

At the other end of the spectrum is a site like 4chan, where founder Christopher “Moot” Poole has pursued a defiantly anonymous approach to community, allowing almost total freedom for users to post content without any repercussions whatsoever. The result is a kind of anything-goes Wild West atmosphere — as described, coincidentally enough, in another piece in the latest issue of Technology Review magazine. This kind of community can have some positive aspects as well, as Poole argued in a recent presentation at the TED conference (embedded below). Anonymity can often allow people to do constructive things as well as destructive things.

So how do we go about managing our multiple online selves and our constantly shifting spectrum of privacy demands? Some companies are trying to make it easier for users to take an ad-hoc approach to divulging privacy information such as location, for example. A startup called EchoEcho offers a service that allows you to request someone’s location quickly and easily, and they can decide to tell you or not, depending on where they are, what they’re doing, and what relationship you have with them. EchoEcho founder Nick Bicanic says this makes it easier for people to change their minds on who they want to tell, rather than just constantly broadcasting their location to everyone.

As Liz has described, however, privacy isn’t just a technical problem, and it isn’t just a business problem. It’s also a cultural problem and to some extent an educational or behavioral problem. And it’s one that’s likely to get harder before it gets easier.

Is the rise of Facebook a result of “white flight” away from MySpace? That’s the argument made by sociologist Danah Boyd, in a chapter from a yet-to-be-released anthology (PDF link) on the issue of race and how it affects the way we behave online. The fact that Facebook has claimed the social-networking crown is relatively obvious — it will soon hit the half-billion user mark, if it hasn’t already, and revenue is expected to hit $1 billion this year. Meanwhile, MySpace seems to have more or less fallen off the map, and has been struggling with a number of management and other issues. But is there really a racial element behind Facebook’s success? Boyd says there is, but her case is far from convincing.

In the chapter (hat tip to Christopher Mims at MIT’s Technology Review), Boyd builds on the research she did for her doctorate (she is now a researcher at Microsoft and a fellow with the Harvard Berkman Center for Internet & Society), which looked at teenage behavior online, based on a series of interviews she did with individual Internet users in a number of different states. In 2007, Boyd wrote an essay saying her research showed Facebook users were primarily white and from middle-class or more affluent households and neighborhoods, while MySpace users were more likely to be from immigrant or working-class families and households — a conclusion that got a lot of attention, and also some criticism. Boyd said at the time that it was just a blog post, not scholarly research, but her latest offering carries a similar message.

The book chapter is entitled “White Flight in Networked Publics? How Race and Class Shaped American Teen Engagement with MySpace and Facebook,” and is part of a book called Digital Race Anthology, which is being published later this year by Routledge Press. In it, Boyd describes how during her research in 2007, one teenaged interview subject named Kat said that she didn’t like MySpace any more because it was what she called “ghetto”:

It’s not really racist, but I guess you could say that. I’m not really into racism, but I think that MySpace is now more like ghetto or whatever.

Boyd says that this kind of comment in a number of interviews drove home the point that “Facebook went beyond simple consumer choice; it reflected a reproduction of social categories that exist in schools throughout the United States. Because race, ethnicity and socio-economic status shape social categories, the choice between MySpace and Facebook became racialized.” Later on, Boyd uses the metaphor referred to in her chapter’s title when she says that one way to understand the shift that teenagers appeared to make in 2007 from using MySpace to using Facebook is to see it “through the lens of white flight” — that is, the departure of white and middle-class residents from inner-city neighborhoods.

But is that really a fair comparison, or is Boyd overplaying the race card when it comes to looking at the shift in fortunes of these two social networks? Without exhaustive demographic analysis of the user bases of MySpace and Facebook — which even Boyd admits she has not done — it’s impossible to say what role race might have played in the rise of one and the fall of the other. It could just as easily be explained by looking at the obvious differences between the two sites and what they had to offer users, regardless of what color those users were.

It’s true that Facebook started as a university-based network, and so likely got a jump start in terms of middle-class and higher-income users, a group that likely included a greater proportion of white users (although again there is little data to show this). And it’s true that MySpace became associated with alternative music, in many cases hip-hop and other “street” culture. But this doesn’t necessarily imply a strictly racial divide — as Boyd acknowledges in her chapter, the term “ghetto” has two distinct meanings: one referring to a specific location in a city or town that is defined by race and class, and the other meaning a whole subculture of fashion, music and other tastes that got their start in such inner-city neighborhoods.

So perhaps MySpace got associated with hip-hop music — that doesn’t mean people leaving were necessarily engaged in “white flight,” so much as “anti-hip-hop” flight. Boyd also said when her essay came out in 2007 that MySpace was a home for “freaks, geeks and queers”, which is hardly an explicitly racial group. And it’s also true that MySpace had (and in many ways still has) a truly hideous user interface, and that there is very little users can do on a MySpace page except post comments or listen to a snippet of music. Even in its early days, Facebook offered what was arguably a much cleaner, friendlier and more appealing experience, with more social elements.

Facebook has also arguably done a far better job of monetizing and expanding its social network and its features, while MySpace has not had much success, despite repeated attempts to monetize its user base through the use of widgets and other features. Could it not be that one network prospered because it was just better, easier to use, offered better features and was better managed? And that the other has declined because it is ugly, the user interface is terrible, and all you can do is listen to small snippets of largely irritating music? Does it necessarily have to be driven by some kind of “white flight” from an online ghetto?

Maybe my perceptions are a result of some hidden racial divide as well, but Boyd is going to have to do a little more work before she can make that case.

When It Comes to Open Data, Is Transparency Enough?
Fri, 28 May 2010
https://gigaom.com/2010/05/28/open-data-transparency-is-not-enough-boyd-says/

The push to free up more public information and make government more transparent is one of the primary goals of open data advocates and the “Government 2.0” movement. But sociologist and Microsoft (s msft) researcher Danah Boyd warned attendees at the recent Gov 2.0 conference that there’s a downside to such efforts. “Transparency is not enough,” she said (video is embedded below), and raw information without the ability to understand or make sense of it does no one any good.

The issues with transparency are similar to the issues with Internet access and the digital divide. In focusing on the first step – transparency or access – it’s easy to forget the bigger picture. Internet access does not automagically create an informed citizenry. Likewise, transparent data doesn’t make an informed citizenry. Transparency is only the first step. And when we treat transparency as an end in itself, we can create all sorts of unintended consequences.

Boyd described how laws require the publication of lists of registered sex offenders, which she agreed is a positive thing in most ways. But as she noted, many of the names on those lists belong to young people who were charged with criminal acts for having a consensual relationship with someone under the official age of consent. Having just the raw information about such cases can actually make things worse if it isn’t interpreted correctly:

Information is power. This is precisely why we want to get information into the hands of more people. But as we do, we need to account for a new twist in all of this: Spinning the interpretation of the information is even more powerful. And the more that we make information available, the more that those in power twist it to tell their story. When everyone has information, information is no longer nearly as powerful as the ability to control its narrative.

Information alone doesn’t empower people, she said, adding that: “Information is never neutral. Neutrality is another one of those lovely ideals. But Wikipedia entries are not neutral nor is the algorithm that produces Google (s goog) News.” To be able to take advantage of all the transparency and open data that people are calling for, Boyd argued, we need information literacy, which includes the skills to interpret information in context. “If you want information access because you want a better-informed citizenry and a fairer society, you must start embracing the importance of information literacy and the need to provide infrastructure to help people build these skills.”

Boyd isn’t the only prominent figure to raise the issue of the unintended consequences of transparency in government: Harvard Law professor Larry Lessig wrote an essay last year that argued transparency can be a double-edged sword.

Does Android Have a Target on its Back?
Mon, 10 May 2010
https://gigaom.com/2010/05/10/does-android-have-a-target-on-its-back/

Android (s goog) is currently the hot smartphone platform, and that means the competition has it squarely in its sights. Apple (s aapl) fired the first salvo with its patent infringement claims against HTC. HTC is the largest maker of Android phones, so the suit is a shot across the bow of Android. Then we had HTC sign a deal with Microsoft (s msft) that gives the handset maker protection against potential infringement of Redmond’s intellectual property (IP) for all Android handsets sold. No matter what you think about Apple’s claims, the HTC deal with Microsoft may have the biggest long-term impact on Android.

Android is hanging in the breeze a bit due to Google’s lack of a mobile IP portfolio. The company is new to the smartphone game, so it lacks years of patents to cover its back like most of the competition. No matter which side of the fence you are on in regards to IP protection actions, having its own IP would at least give Google a fallback position against claims, current and future.

It’s hard to predict what the outcome of the Apple/HTC case will be, but the Microsoft deal HTC signed may have a bigger effect on Android than the Apple situation. HTC is the largest maker of Android phones, and by signing the deal with Microsoft it has basically admitted that Android may indeed infringe on Redmond’s technology. HTC will pay a royalty to Microsoft on every Android phone it sells, so it’s not likely the protection deal was signed “just in case.”

Now that HTC has taken this position with Microsoft, it may behoove other Android phone makers to do the same. Motorola (s mot) has emerged as a big player in the Android space, and I wouldn’t be surprised if it were in conversation with Microsoft about a deal similar to HTC’s. No matter how this all shakes out, there is a potential disruption to the growth Android has achieved since its launch. It’s worth keeping an eye on this whole Android/IP mess, and for those wanting a closer look at the situation, check out my analysis (subscription required).

SXSW: Google Accepts Buzz Criticism, Invites Boyd to Speak on Privacy
https://gigaom.com/2010/03/14/google-accepts-buzz-criticism-invites-boyd-to-speak-on-privacy/
Mon, 15 Mar 2010

Google (s GOOG), to its credit, is rolling with the punches thrown in response to last month’s Buzz launch, making changes to the product to address user concerns and staying committed to it despite a messy debut. Members of the Buzz product team spoke on a behind-the-scenes-of-Gmail panel at SXSW today, addressing industry-wide criticism as well as a cutting attack over privacy issues that SXSW keynoter and researcher Danah Boyd delivered on Saturday.

[Photo: Todd Jackson, product manager for Gmail and Buzz]

Google has been criticized far and wide for the way it launched and implemented Buzz, its new social web aggregation product. The primary complaint was over privacy, since Buzz exposed, and made assumptions about, pre-existing relationships among Gmail users. Buzz also has other problems: it’s noisy, limited to just a few services, and for many users does not serve a purpose distinct enough from competitors’.

Buzz product manager Todd Jackson, who spoke on the panel and to GigaOM afterwards, said he attended Boyd’s talk and found it “extremely insightful, fair and something we could work from.” He said he personally emailed Boyd afterwards and invited her to deliver the same talk at Google.

“It’s a challenging set of questions navigating in this space that’s fairly new and especially for Google,” Jackson said. He said some of Boyd’s most resonant points concerned how users perceive their different social groups, and the idea that making public information more public could violate users’ privacy.

Jackson said that he and his team are committed to Buzz as an idea and a product, and will continue to have Google’s support to evolve it; this week, for instance, Buzz released better inbox alert controls. Buzz now has “millions” of users (Google will only say that Gmail, Buzz’s host and parent product, has “hundreds of millions” of users).

Next on the Buzz to-do list are more ways to control users’ streams, and moderation for how often an item bumps to the top, Jackson said. He also said that Buzz for Google Apps should be out “in next couple months-ish.”

The Buzz team also plans to experiment more with communicating with users before launching new features, through things like its own “Google Buzz” account. “In general we don’t like preannouncing things,” Jackson said on the panel, pointing out that users want to try something as soon as they hear about it. But the team is finding, through experience and through feedback such as Boyd’s speech, that talking with users helps them appreciate, anticipate and even influence product development.

You’d still hope they would have gotten this right (or a lot more right) the first time, but at least now they have the right attitude.

For the GigaOM network’s complete SXSW coverage, check out this round-up.

SXSW: Is Privacy on the Social Web a Technical Problem?
https://gigaom.com/2010/03/13/sxsw-is-privacy-on-the-social-web-a-technical-problem/
Sun, 14 Mar 2010

Updated

How to deal with user privacy on social networks as they grow, mature and become more sophisticated has been a frequent topic of conversation at this year’s SXSW — and not just in researcher Danah Boyd’s keynote address that argued aggregating public information can be a privacy breach, and slammed Google (s GOOG) and Facebook for their missteps with users’ expectations.

Is privacy just a technical problem? That’s what Google engineer Brett Slatkin, co-creator of the PubSubHubbub real-time syndication protocol, proposed on a Saturday morning panel. WebFinger, a cross-platform standard that conveys user preferences (Update: which could include explicit privacy settings) from one social network to another, could take care of understanding the relationships between users and the information they want to control, Slatkin said. He added that he felt users are confused about privacy because of inconsistency among the social sites they use.
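To make the idea concrete, here is a minimal sketch of a WebFinger lookup. This is illustrative only, not Google's or Slatkin's implementation: it follows the JSON form WebFinger was later standardized into (RFC 7033); the 2010-era drafts under discussion at SXSW used host-meta XRD documents instead. The account name is a hypothetical example.

```python
# Illustrative sketch: resolving a WebFinger resource descriptor, the
# mechanism Slatkin suggested could carry user preferences (and perhaps
# privacy settings) from one social network to another.
import json
import urllib.parse
import urllib.request


def webfinger_url(acct: str) -> str:
    """Build the well-known lookup URL for an account like 'alice@example.com'."""
    host = acct.split("@", 1)[1]
    query = urllib.parse.urlencode({"resource": f"acct:{acct}"})
    return f"https://{host}/.well-known/webfinger?{query}"


def webfinger(acct: str) -> dict:
    """Fetch the JSON Resource Descriptor: the account's profile links and
    service endpoints, i.e. the cross-site data another network could consume."""
    with urllib.request.urlopen(webfinger_url(acct)) as resp:
        return json.load(resp)


print(webfinger_url("alice@example.com"))
```

The point of the design is that any site can discover another site's description of a user from nothing but the account name, which is exactly why consistent semantics across networks matter so much.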

But Microsoft (s MSFT) program manager Dare Obasanjo contended that for-profit social web companies’ interests will always be at odds with user privacy, because there’s too much value in harnessing the crowd for things like Twitter’s trending topics and search. He said he felt the industry needs to “clean up [its] act” on privacy, citing Netflix’s cancellation of its second Netflix Prize contest this week due to concerns that the dataset it provided competitors could be matched to its customers.

Obasanjo also argued that approaches like WebFinger might not work because asking users to specify privacy controls introduces friction into their use of your site.

Collecta co-founder Jack Moffitt also disagreed with Google’s Slatkin, but for a different reason. “I don’t think the solution is purely technical,” he said. “I don’t think users will understand the repercussions of these decisions.”

Google’s approach to the social web, where it has fallen behind competitors like Facebook, is heavily focused on standards, and Slatkin described things like OStatus, DiSo, Activity Streams and AtomSource as the key to solving all sorts of usability problems, privacy included. Slatkin is far from the only true believer in standards, though. Jon Phillips of Status.net, on a later panel about whether tweets can be copyrighted, spoke about how interoperability can help pass along metadata about who owns a piece of content and where it came from. Both Slatkin and Phillips said they feel that if information can be properly sourced and transmitted, rather than replicated, it can be better owned by users.

Still, companies like Google and Facebook don’t have the luxury of being able to start with a clean slate of user expectations and privacy settings, because they’re evolving to adapt to the enormous growth of online sharing and the increasing influence of social sites on the rest of the web through things like real-time search. Kerfuffles like the ones Boyd highlighted in her keynote are the result of changing what information is private and public after people have come to expect something else.

In yet another discussion at the conference, Flickr (s YHOO) head of product Matthew Rothenberg said his team has had to deal with similar privacy issues around publicly displaying when a user has favorited a photo (Flickr has always defaulted images to public). “Our bet is that by enforcing public behavior we’re going to change the nature of what they’re doing,” he said. But “you have to properly educate people as to what you’re doing.”

So privacy, it seems, is indeed a technical problem, but also a cultural problem, an educational problem and a business problem. Not to mention endless conversation fodder.

For the GigaOM network’s complete SXSW coverage, check out this round-up.