
Tag Archives: Twitter

In the centuries since William Shakespeare wrote one of Juliet’s most enduring lines in Romeo and Juliet, that “A rose by any other name would smell as sweet”, it has almost always been interpreted as meaning that the mere names of people, by themselves, have no real effect upon who and what they are in this world.

This past week, the following trio of related articles was published that brought this to mind, specifically about the modern meanings, values and analytics of words as they appear online:

All of these are highly recommended and worth reading in their entirety for their informative and thought-provoking reports containing so many words about, well, so many words.

Then to reframe and update the original quote above to serve as a starting point here, I would like to ask whether a post by any other name in Twitter’s domain would smell as [s/t]weet? To try to answer this, I will focus on the first of these articles in order to summarize and annotate it, and then ask some of my own non-theatrical questions.

According to the Phys.org article, which nicely summarizes the study of a team of US and UK university scientists that was published on PLOS|ONE.org entitled Studying User Income through Language, Behaviour and Affect in Social Media by Daniel Preotiuc-Pietro, Svitlana Volkova, Vasileios Lampos, Yoram Bachrach and Nikolaos Aletras, a link exists between the language used in tweets and the authors’ income. (These additional ten Subway Fold posts covered other applications of demographic analyses of Twitter traffic.)

Methodology

Using only the actual tweets of Twitter users, that often contain “intimate details” despite the lack of privacy on this social media platform, the two researchers on the team from the University of Pennsylvania’s World Well-Being Project are actively investigating whether social media can be used as a “research tool” to replace more expensive surveys that can be “limited and potentially biased”. (The work of the World Well-Being Project, among others, was first covered in a closely related Subway Fold post on March 20, 2015 entitled Studies Link Social Media Data with Personality and Health Indicators.)

The full research team began this study by examining “Twitter users’ self-described occupations”. They then gathered a “representative sampling” of 10 million tweets from 5,191 users spanning each of the nine distinct groups classified in the UK’s official Standard Occupational Classification guide and calculated the average income for each group. Using this data, they built an algorithm upon “words that people in each code use distinctly”. That is, the algorithm parsed which words had the highest predictive value for determining which of the classification groups the users in the sample were likely to fall within.
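While the study’s actual model is considerably more sophisticated, the general idea of scoring tweets against class-specific word profiles can be sketched as follows. Everything here, the class names, the tweets and the words, is invented purely for illustration and is not the researchers’ data or method:

```python
from collections import Counter

# Toy training data: tweets labeled with an occupational class.
# These classes and tweets are invented for illustration only.
TRAINING = {
    "managers": ["quarterly board meeting today", "reviewing the budget forecast"],
    "creatives": ["new sketch posted on my blog", "love this gallery opening"],
}

def build_word_profiles(training):
    """Count how often each word appears in each class's tweets."""
    profiles = {}
    for label, tweets in training.items():
        counts = Counter()
        for tweet in tweets:
            counts.update(tweet.lower().split())
        profiles[label] = counts
    return profiles

def predict_class(tweet, profiles):
    """Score a tweet against each class profile; highest word overlap wins."""
    words = tweet.lower().split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in profiles.items()}
    return max(scores, key=scores.get)

profiles = build_word_profiles(TRAINING)
print(predict_class("board meeting about the budget", profiles))  # → managers
```

A real system would, at minimum, weight distinctive words more heavily and validate against held-out data, but the core pipeline of profile-building followed by scoring is the same shape.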

Results

Some of the team’s results “validated what’s already known”, such as a user’s words can indicate “age and gender” which, in turn, are linked to income. The leader of the researchers, Daniel Preoţiuc-Pietro, also cited the following unexpected results:

Higher earners on Twitter tend to:

write with “more fear and anger”

discuss “politics, corporations and the nonprofit world” more often

use it to distribute news

use it more for professional than personal purposes

Lower earners on Twitter tend to:

be optimists

swear more in their tweets

use it more for personal communication

This study will be used as the basis for future efforts to evaluate the correlations between user incomes with other data from the real world. (Please see also these eight Subway Fold posts on the distinctions between correlation and causation.)

My Questions

Might the inverse of these findings, that certain language could draw users with certain income levels, be used by online marketers, advertisers and content specialists to attract their desired demographic group(s)?

Does this type of data, on the particularly sensitive subject of income, risk segmenting users in some form of de facto discriminatory manner? If this possibility exists, how can researchers avoid it in the future?

Might a follow-up study find that certain words are used in tweets by authors who aspire to move up from one income level to the next? If so, how could this data be used by the same specialists mentioned in the first two questions above?

It is a simple and straightforward business concept in any area of commerce: do not become overly reliant upon a single customer or supplier. Rather, try to build a diversified portfolio of business relationships to diligently avoid this risk and, at the same time, assist in developing potential new business.

Starting in May 2015, Facebook instituted certain limits upon access to the valuable data about its 1.5 billion user base¹ for commercial and non-commercial third parties. This has caused serious disruption, and even the end of operations, for some of those who had so heavily depended on the social media giant’s data flow. Let’s see what happened.

This story was reported in a very informative and instructive article in the September 22, 2015 edition of The Wall Street Journal entitled Facebook’s Restrictions on User Data Cast a Long Shadow by Deepa Seetharaman and Elizabeth Dwoskin. (Subscription required.) If you have access to WSJ.com, I highly recommend reading it in its entirety. I will summarize and annotate it, and then pose some of my own third-party questions.

This change in Facebook’s policy has resulted in “dozens of startups” closing, changing their approach or being bought out. This has also affected political data consultants and independent researchers.

This is a significant shift in Facebook’s approach to sharing “one of the world’s richest sources of information on human relationships”. Dating back to 2007, CEO Mark Zuckerberg opened access to Facebook’s “social graph” to outsiders. This included data points, among many others, about users’ friends, interests and “likes“.

However, the company recently changed this strategy due to users’ concerns about their data being shared with third parties without any notice. A spokeswoman from the company stated this is now being done in a manner that is “more privacy protective”. This change has been implemented to give greater control to their user base.

Other social media leaders including LinkedIn and Twitter have likewise limited access, but Facebook’s move in this direction has been more controversial. (These 10 recent Subway Fold posts cover a variety of ways that data from Twitter is being mined, analyzed and applied.)

Examples of the applications that developers have built upon this data include requests to have friends join games, vote, and highlight a mutual friend of two people on a date. The reduction or loss of this data flow from Facebook will affect these and numerous other services previously dependent on it. As well, privacy experts have expressed their concern that this change might result in “more objectionable” data-mining practices.

Others view these new limits as a result of the company’s expansion and “emergence as the world’s largest social network”.

Facebook will provide data to outsiders about certain data types like birthdays. However, information about users’ friends is mostly not available. Some developers have expressed complaints about the process for requesting user data, as well as about “unexpected outcomes”.

These new restrictions have specifically affected the following Facebook-dependent websites in various ways:

The dating site Tinder asked Facebook about the new data policy shortly after it was announced because they were concerned that limiting data about relationships would impact their business. A compromise was eventually reached, but it limited the site’s access to only “photos and names of mutual friends”.

College Connect, an app that provided forms of social information and assistance to first-generation students, could no longer continue its operations when it lost access to Facebook’s data. (The site still remains online.)

An app called Jobs With Friends that connected job searchers with similar interests met a similar fate.

Social psychologist Benjamin Crosier was in the process of creating an app searching for connections “between social media activity and ills like drug addiction”. He is currently trying to save this project by requesting eight data types from Facebook.

An app used by President Obama’s 2012 re-election campaign was “also stymied” as a result. It had been used to identify potential supporters, encourage them to vote, and prompt their friends on Facebook to vote or register to vote.²

Other companies are trying an alternative strategy to build their own social networks. For example, Yesgraph Inc. employs predictive analytics³ methodology to assist clients who run social apps in finding new users by data-mining, with the user base’s permission, through lists of email addresses and phone contacts.
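Yesgraph’s actual methods are proprietary, but the core idea of mining permissioned contact lists to suggest likely new users can be illustrated with a toy sketch. The names, addresses and scoring rule below are all invented for illustration:

```python
from collections import Counter

# Invented sample data: each existing user of a hypothetical app
# and the contacts they have shared with the service.
existing_users_contacts = {
    "alice": {"dana@example.com", "erin@example.com"},
    "bob":   {"dana@example.com", "frank@example.com"},
    "carol": {"dana@example.com", "erin@example.com", "grace@example.com"},
}

def rank_invite_candidates(contacts_by_user, already_users=()):
    """Rank outside contacts by how many existing users know them."""
    tally = Counter()
    for contacts in contacts_by_user.values():
        tally.update(contacts)
    for user in already_users:
        tally.pop(user, None)  # skip people who have already joined
    return tally.most_common()

print(rank_invite_candidates(existing_users_contacts))
# dana@example.com appears in all three contact lists, so it ranks first
```

Real predictive systems would layer in many more signals than raw contact overlap, but this captures the basic logic of finding the people most connected to an app’s existing user base.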

My questions are as follows:

What are the best practices and policies for social networks to use to optimally balance the interests of data-dependent third parties and users’ privacy concerns? Do they vary from network to network or are they more likely applicable to all or most of them?

Are most social network users fully or even partially concerned about the privacy and safety of their personal data? If so, what practical steps can they take to protect themselves from unwanted access and usage of it?

For any given data-driven business, what is the threshold for over-reliance on a particular data supplier? How and when should their roster of data suppliers be further diversified in order to protect themselves from disruptions to their operations if one or more of them change their access policies?

1. Speaking of interesting data, on Monday, August 24, 2015, for the first time ever in the history of the web, one billion users logged onto the same site, Facebook. For the details, see One Out of Every 7 People on Earth Used Facebook on Monday, by Alexei Oreskovic, posted on BusinessInsider.com on August 27, 2015.

2. See the comprehensive report entitled A More Perfect Union by Sasha Issenberg in the December 2012 issue of MIT’s Technology Review about how President Obama’s team made highly effective use of data analytics and social network apps in their winning 2012 re-election campaign.

First, for some initial perspective, on January 21, 2015, a Subway Fold post entitled The Transformation of News Distribution by Social Media Platforms in 2015 examined how the nature of news media was being dramatically impacted by social media. This new Pew Research Center report focuses on the changing demographics of Facebook and Twitter users for news consumption.

This new study found that 63% of both Twitter and Facebook users are now getting their news from these leading social media platforms. As compared to a similar Pew survey in 2013, this is up from 52% for Twitter and 47% for Facebook. Among those following a live news event as it occurs, the split is more pronounced: 59% of Twitter users versus 31% of Facebook users are engaged in viewing such coverage.

According to Amy Mitchell, one of the report’s authors and Pew’s Director of Journalism Research, each social media site has “adapt[ed] to their role” and provides “unique features”. As well, the ways in which US users connect differently “have implications” for how they “learn about their world” and partake in their democracy.

In order to enhance their growing commitment to live coverage, both sites have recently rolled out innovative new services. Twitter has a full-featured multimedia app called Project Lightning to facilitate following news in real-time. Facebook is likewise expanding its news operations with its recently announced launch of Instant Articles, a rapid news co-publishing app created in cooperation with nine of the world’s leading news organizations.

Further parsing the survey’s demographic data for US adults generated the following findings:

Sources of News: 10% of US adults get their news on Twitter while 41% get their news on Facebook, with an overlap of 8% using both. This difference is partly due to the fact that Facebook has a much larger user base than Twitter. Furthermore, while the total US user bases of both platforms currently remain steady, the percentage of those users seeking news on each is increasing.

Comparative Trends in Five Key Demographics: The very enlightening chart at the bottom of Page 2 of the report breaks down Twitter’s and Facebook’s percentages and percentage increases between 2013 and 2015 for gender, race, age, education level, and incomes.

Relative Importance of Platforms: These results are further qualified in that those surveyed still see both of these platforms overall as “secondary news sources” and “not a very important way” to stay current.

Age Groups: When the responses were broken down by age, nearly 50% of those between 18 and 35 found Twitter and Facebook to be “the most important” sources of news. Among those over 35, the numbers declined to 34% of Facebook users and 31% of Twitter users responding that these platforms were among the “most important” news sources.

Content Types Sought and Engaged: Facebook users were more likely than Twitter users to click on political content, 32% versus 25%, respectively. The revealing charts in the middle of Page 3 demonstrate that Twitter users see and pursue a wider variety of 11 key news topics. As well, the percentage tallies of gender differences by topic and by platform are also presented.
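As a quick arithmetic check on the Sources of News figures above, the share of US adults getting news on at least one of the two platforms follows from simple inclusion-exclusion:

```python
# Figures from the Pew report summarized above.
twitter = 0.10   # share of US adults getting news on Twitter
facebook = 0.41  # share getting news on Facebook
both = 0.08      # share using both platforms for news

# Inclusion-exclusion: add the two shares, subtract the double-counted overlap.
either = twitter + facebook - both
twitter_only = twitter - both
facebook_only = facebook - both

print(f"either: {either:.0%}, Twitter only: {twitter_only:.0%}, "
      f"Facebook only: {facebook_only:.0%}")
# → either: 43%, Twitter only: 2%, Facebook only: 33%
```

In other words, roughly 43% of US adults were getting news from at least one of the two platforms, with Facebook-only users making up the large majority of that group.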

My own questions are as follows:

Might Twitter and Facebook benefit from additional cooperative ventures to further expand their comprehensiveness, target demographics, and data analytics for news categories by exploring projects with other organizations? For instance, among many other possibilities, there are Dataminr, which tracks and parses the entirety of the Twitterverse in real-time (as previously covered in these three Subway Fold posts); Quid, which tracks massive amounts of online news (as previously covered in this Subway Fold post); and GDELT, which translates online news in real-time in 65 languages (as previously covered in this Subway Fold post).

What additional demographic categories would be helpful in future studies by Pew and other researchers as this market and its supporting technologies, particularly in an increasingly social and mobile web world, continue to evolve so quickly? For example, how might different online access speeds affect the distribution and audience segmentation of news distributed on social platforms?

Are these news consumption demographics limited only to Twitter and Facebook? For example, LinkedIn has gone to great lengths in the past few years to upgrade its content offerings. How might the results have differed if the Pew questionnaire had included LinkedIn and possibly others like Instagram?

How can this Pew study be used to improve the effectiveness of marketing and business development by news organizations for their sponsors, content strategists for their clients, and internal and external SEO professionals for their organizations?

On a daily basis, we see news, commentary, videos, photos, tweets, blog posts, podcasts, articles, rumors and memes go viral where they spread rapidly across the web like a propulsive digital wave. From YouTube postings of dogs and cats doing goofy things to in-the-moment hashtags and tweets about late-breaking current events, attention grabbing content now spreads at nearly the speed of light.

All content creators, strategists and distributors want to know how to infuse their offerings with this elusive clickable contagion. A new article entitled The Science Behind What Content Goes Viral, by Sarah Snow, posted on SocialMediaToday.com on July 6, 2015, provides eight very useful and scientifically supported elements that can, at the very least, increase the probability of new content going viral. I will sum it up, annotate it, and pose some not entirely scientific questions of my own.

For further reading I also highly recommend clicking through and reading The Secret to Online Success: What Makes Content Go Viral, by Liz Rees-Jones, Katherine L. Milkman and Jonah Berger (the second and third of whom are professors at the University of Pennsylvania – – the “U of P”), posted on ScientificAmerican.com (“SciAm”) on April 14, 2015. Two fully detailed and fascinating reports by Milkman and Berger that underlie their SciAm article are available here and here. Ms. Snow’s article cites many of the findings in the SciAm piece. As well, I suggest checking out a May 22, 2015 blog post by Peter Gasca entitled The 4 Essentials of the Most Read Content posted on Entrepreneur.com for some additionally effective content strategies, not to mention a hilarious picture of a dog wearing glasses.

Ms. Snow organized her article into a series of eight individual hypotheses about online virality, which she then supports with references. I will put each of these in bold and quotes below as she stated them in her text. (My own highlights in orange are explained afterwards.)

“Long, in-depth posts tend to go viral more than short ones.”: Drawing from the findings of Milkman’s and Berger’s studies that, among other things, examined the data from the Most Emailed feature on the home page of the NYTimes.com, longer articles had a higher tendency to be shared. As also stated by Carson Ward of the search engine optimization (SEO) consulting firm Moz, of all possible variables, word count most closely correlates with the breadth of online sharing. Further, he believes this is a directly causal relationship. (The distinctions between correlation and causation have been previously raised in various other contexts in these six Subway Fold posts.) See also Mr. Ward’s practical and informative January 14, 2013 posting on Moz’s site entitled Why Content Goes Viral: the Theory and Proof.

“Inspire anger, awe, or anxiety and your post will go viral.”: Evidence shows that “high energy emotions” such as awe and anger, as opposed to “low energy emotions”, are more likely to spur virality. Among them, anger is the most effective, but it must be, well, tempered without insulting the audience. It is best for content authors to write about something that angers them, which, in turn, requires “some tolerance” from their readers. In terms of usage data, blog content that engages controversial topics generates twice as many comments in response. Alternatively, awe is a better emotion for those who wish to avoid controversy and instead focus on the positive effects of brands and heroic acts.

“Showing a little vulnerability or emotion helps content go viral.”: This is indeed true according to the U of P studies. Readers respond to emotional content because they “want to feel things when they read”. The author Walter Kirn is quoted recommending that writers should begin with what they feel “most shameful about”. This is where conflict resides, and writing about it makes you vulnerable to your readers. Content creators who would rather not start from shame can begin with some other genuine “human emotion”.

“Viral content is practically useful, surprising, and interesting.”: Clearly, engaging and practical content beats boring and dull any day of the week. Content that is useful generates the highest levels of online sharing. For example, posting pragmatic suggestions and solutions to “how-to” questions is going to draw many more clicks.

“Content written by known authors is more likely to go viral.”: Milkman’s and Berger’s reports further showed that being a known writer had a significant impact on the sharing of a news article. Name recognition translates into credibility and trust.

“Content written by women is more likely to go viral.”: The U of P professors also reported that on NYTimes.com, the gender of a writer had an effect insofar as the data showed that articles by female authors had a tendency to be shared more than stories by male authors.

“Posts that spend a lot of time on the home page are more likely to go viral.”: Yes, insofar as the NYTimes.com goes. (The article does not mention whether other sites have been tested or are planning to be tested for this variable.)

“Content that is truly and broadly viral is almost always funny“: This quote about humor from Ward’s post (linked above in the first factor about blog post length) is helpful because it gives all content authors an opportunity to be funny. This is particularly so in efforts to make online ads go viral.

I propose the following mnemonic, tracking the key words highlighted above in orange, to assist in remembering all of these variables:

Is going viral purely an objective and quantifiable matter of the numbers of clicks and visitors, or are there some more qualitative factors involved? For instance, might marketing specialists and content strategists be more interested in reaching a significant percentage of traffic among a particular demographic group or market segment than in just attaining X clicks and Y visitors regardless of whether or not they involve identifiable cohorts?

Do the above eight factors lend themselves to being transposed into an algorithm? Assuming this is possible, how would it be applied to optimize viral content and, in turn, overall SEO strategic planning?
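One naive way such an algorithm might begin, purely as an illustration, is a weighted checklist over the eight factors. The feature names and weights below are invented and unvalidated; a real model would need labeled sharing data to fit them properly:

```python
# Illustrative only: the eight virality factors mapped to boolean features
# with invented weights. These numbers are assumptions, not study results.
FACTOR_WEIGHTS = {
    "long_form": 2.0,            # long, in-depth post
    "high_energy_emotion": 2.0,  # anger, awe or anxiety
    "vulnerability": 1.0,        # shows emotion or vulnerability
    "practically_useful": 2.0,   # surprising, interesting, useful
    "known_author": 1.5,         # name recognition
    "female_author": 0.5,        # per the NYTimes.com finding
    "home_page_time": 1.5,       # time spent on the home page
    "funny": 1.0,                # humor
}

def virality_score(features):
    """Sum the weights of whichever factors a piece of content exhibits."""
    return sum(w for name, w in FACTOR_WEIGHTS.items() if features.get(name))

draft = {"long_form": True, "practically_useful": True, "funny": True}
print(virality_score(draft))  # → 5.0
```

Even this toy version makes the limits clear: a linear checklist cannot capture interactions between factors, which is one reason the correlation-versus-causation question in the next item matters.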

Besides the length of content discussed as the first factor above, how do the other seven factors lend themselves to being evaluated for degrees of correlation with, and causation of, viral results?

We interface with our devices’ screens for inputs and outputs nearly all day, every day. What many of these gadgets will soon be able to display and, moreover, understand about digital imagery is about to take a significant leap forward. This will be due to the pending arrival of new chips embedded in their circuitry that are enabled by artificial intelligence (AI) algorithms. Let’s have a look.

The key technology behind these new chips is an AI methodology called deep learning. In these 10 recent Subway Fold posts, deep learning has been covered in a range of applications in various online and real world marketplaces including, among others, entertainment, news, social media, law, medicine, finance and education. The emergence of these smarter new chips will likely bring additional significant enhancements to all of them and many others insofar as their abilities to better comprehend the nature of the content of images.

Two major computer chip companies, Synopsys and Qualcomm, and the Chinese search firm Baidu, are developing systems based upon deep learning for mobile devices, autos and other screen-based hardware. They were discussed by their representatives at the Embedded Vision Summit held on Tuesday, May 12, 2015, in Santa Clara, California. The companies’ representatives were:

Pierre Paul, the director of Research and Development at Synopsys, who presented a demo of a new chip core that “recognized speed limit signs” on the road for vehicles and enabled facial recognition for security apps. This chip uses less power than current chips on the market and, moreover, could add some “visual intelligence” to phone and car apps, and security cameras. (Here is the link to the abstracts of the presentations, listed by speaker, including Mr. Paul’s, entitled Low-power Embedded Vision: A Face Tracker Case Study, from the Summit’s website.)

Ren Wu, Distinguished Scientist, Baidu Institute of Deep Learning, said that deep learning-based chips are important for computers used for research, and called for making such intelligence as ubiquitous as possible. (Here is the link to the abstracts of the presentations, listed by speaker including Mr. Wu’s, entitled Enabling Ubiquitous Visual Intelligence Through Deep Learning from the Summit’s website.)

Both Wu and Gehlhaar said that adding more intelligence to mobile devices’ ability to recognize photos could address the privacy implications of some apps by lessening the quantity of personal data they upload to the web.

My questions are as follows:

Should social networks employ these chips, and if so, how? For example, what if such visually intelligent capabilities were added to the recently rolled out live video apps Periscope and Meerkat on Twitter?

Will these chips be adapted to the forthcoming commercial augmented and virtual reality systems (as discussed in these five recent Subway Fold posts)? If so, what new capabilities might they add to these environments?

What additional privacy and security concerns will need to be addressed by manufacturers, consumers and regulators as these chips are introduced into their respective marketplaces?

There have been many efforts over the past few decades to use visualization methods and technologies to create graphical representations of the law. These have been undertaken by innovative lawyers in a diversity of settings, including public and private practice and legal academia.

I wrote an article about this topic years ago entitled “Graphics and Visualization: Drawing on All Your Resources”, in the August 25, 1992* edition of the New York Law Journal. (No link is currently available.) Not to paint with too broad a brush here, but things have changed dramatically since then in terms of how and why to create compelling legal visualizations.

Two very interesting projects have recently gotten significant notice online for their ingenuity and the deeper levels of understanding they have facilitated.

First are the legal visualizations of Harry Surden. He is a professor at the University of Colorado School of Law. He teaches, researches and writes about intellectual property law, legal informatics, legal automation and information privacy.

I had the opportunity to hear the professor speak at the Reinvent Law NYC program held in New York in February 2014. This was a memorable one-day event with about 40 speakers who captivated the audience with their presentations about the multitude of ways that technology is dramatically changing the contemporary marketplace for legal services.

US Code Explorer 1 consists of a nested tree structure for Title 35 of the US Code, covering patents. Clicking on each level, starting with Part I and continuing through Part V, will, in turn, open up the Chapters, Sections and Subsections. This is an immediately accessible, interactive means to unfold Title 35’s structure.
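The kind of nested structure behind such an explorer can be sketched very simply. The node names below are placeholders standing in for the Parts, Chapters and Sections of Title 35, not the actual statutory text:

```python
# A minimal sketch of a nested code structure like the one the
# US Code Explorer navigates. Node names are illustrative placeholders.
title_35 = {
    "Title 35 - Patents": {
        "Part I": {
            "Chapter 1": ["Section 1", "Section 2"],
        },
        "Part II": {
            "Chapter 10": ["Section 100", "Section 101"],
        },
    }
}

def render_tree(node, depth=0):
    """Recursively render each level of the nested structure as indented lines."""
    lines = []
    if isinstance(node, dict):
        for name, children in node.items():
            lines.append("  " * depth + name)
            lines.extend(render_tree(children, depth + 1))
    else:  # a list of leaf sections
        lines = ["  " * depth + leaf for leaf in node]
    return lines

print("\n".join(render_tree(title_35)))
```

An interactive explorer adds expand-and-collapse behavior on top of exactly this sort of recursive walk, which is what makes the statute’s hierarchy immediately navigable.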

Professor Surden’s visualizations are instantly and intuitively navigable as soon as you view them. As a result, you will immediately be drawn into exploring them. For legal professionals and the public alike, he impressively presents these displays in a clear manner that belies the complexities of the underlying laws. I highly recommend clicking through to check out and navigate all of these imaginative visualizations. Furthermore, I hope his work inspires others to experiment with additional forms of visualization of the other federal, state and local codes, laws and regulations.

The full text of the Law Review article contains the very engaging details and methodologies employed. Moreover, it demonstrates the incredible amount of analytical work the authors spent to arrive at their findings. Just as one example, please have a look at the network visualization on Page 29 entitled Figure 5. LANS Graph of Stylistic Similarity Between Justices. It truly brings the authors’ efforts to life. I believe this article is a very instructive, well, case where the graphics and text skillfully elevate each other’s effectiveness.

* To get online then you needed something called a Lynx browser that only displayed text after you connected with a very zippy 14.4K dial-up modem. What fun it was back then!

As incredibly vast as New York City is, it has always been a great place to walk around. Its multitude of wonderfully diverse neighborhoods, streets, buildings, parks, shops and endless array of other sites can always be more fully appreciated by going on foot here and there in – – as we NYC natives like to call it – – “The City”.

The April 26, 2015 edition of The New York Times Magazine was devoted to this tradition. The lead-off piece by Steve Duenes was entitled How to Walk in New York. This was followed by several other pieces and then reports on 15 walks around specific neighborhoods. (Clicking on the Magazine’s link above and then scrolling down to the second and third pages will produce links to nearly all of these articles.) I was thrilled by reading this because I am such an avid walker myself.

The very next day, on April 27, 2015, Wired.com carried a fascinating story, in a report by Angela Watercutter entitled How the NY Times is Sparking the VR Journalism Revolution, about how one of the issue’s rather astonishing supporting graphics was actually made. But even that’s not the half of it – – the NYTimes has made available for downloading a full virtual reality file of the full construction and deconstruction of the graphic. The Wired.com post contains the link as well as a truly mind-boggling high-speed YouTube video of the graphic’s rapid appearance and disappearance, and a screen capture from the VR file itself. (Is “screen capture” really accurate to describe it, or is it something more like a “VR frame”?) This could take news reporting into an entirely new dimension where viewers literally go inside of a story.

This all began on April 11, 2015 when a French artist named JR pieced together and then removed in less than 24 hours, a 150-foot photograph right across the street from the landmark Flatiron Building. This New York Times commissioned image was of “a 20-year-old Azerbaijani immigrant named Elmar Aliyev”. It was used on the cover of this special NYTimes Magazine edition. Upon its completion, JR then photographed it from a helicopter hovering above. (See the March 19, 2015 Subway Fold post entitled Spectacular Views of New York, San Francisco and Las Vegas at Night from 7,500 Feet Up for another innovative project involving highly advanced photography of New York, also taken from a helicopter.)

The NYTimes deployed VR technology from a company called VRSE.tools to transform this whole artistic experience into a fully immersive presentation entitled Walking New York. The paper introduced this new creation at a news conference on April 27th. To summarize the NYTimes Magazine’s editor-in-chief, Jake Silverstein, this project was chosen for a VR implementation because it would so dramatically enhance a viewer’s experience of it. Otherwise, pedestrians walking over the image on the sidewalk would not get nearly the full effect of it.

Viewing Walking New York in full VR mode will require an app from VRSE’s site (linked above), and a VR viewer such as, among others, Google Cardboard.

The boost to VR as an emerging medium from the NYTimes‘ engagement on this project is quite significant. Moreover, it demonstrates how VR can now be implemented in journalism. Mr. Silverstein, to paraphrase his points of view, believes this shows how it can be used to literally and virtually bring someone into a story. Furthermore, by doing so, the effect upon the VR viewer is likely to be an increased amount of empathy for the individuals and circumstances that are the subjects of these more immersive reports.

There will more than likely be a long way to go before “VR filming rigs” can be sent out by news organizations to cover stories as they occur. The hardware is just not that widespread or mainstream yet. As well, the number of people who are trained in how to use this equipment is still quite small and, even for those who are, preparing such a virtual presentation lags behind today’s pace of news reporting.

Let’s assume that on the not too distant horizon VR journalism gains acceptance, its mobility and ease-of-use increase, and the rosters of VR-trained reporters and producers grow so that this field undergoes some genuine economies of scale. Then, as with many other life cycles of emergent technologies, the applications in this nascent field would only become limited by the imaginations of its professionals and their audiences. My questions are as follows:

What if the leading social media platforms such as Twitter, Facebook (which already purchased Oculus, the maker of VR headsets, for $2B last year), LinkedIn, Instagram (VR Instagramming, anyone?), and others integrate VR into their capabilities? For example, Twitter has recently added a live video feature called Periscope that its users have quickly and widely embraced. In fact, it is already being used for live news reporting as users turn their phones toward live events as they happen. Would they just as eagerly swarm to VR?

What if new startup social media platforms launch that are purely focused on experiencing news, commentary, and discussion in VR?

Will previously unanticipated ethical standards be needed, and will new dilemmas likewise arise, as journalists move up the experience curve with VR?