Beginning with the origins: in the Mayan language, Oycib means "the place of honey". In this project, Oycib is an e-Research infrastructure for Collective Intelligence analysis.

With the Oycib infrastructure we propose an analysis model, based on digital practices and collaboration profiles, for the development of Social Learning and Context Awareness in the Collective Intelligence process.

The infrastructure design and the profiles proposed here are based on historical studies of social organization glyphs in Mayan culture by Montgomery (2002) and Calvin (2012).

Initially we worked with four collaboration profiles: the "Itzaat", the "Pitziil", the "Ayuxul" and the "Sajal", but others can be found depending on the organizational context. It is important to mention that each profile is identified using the e-Xploración model and represents a qualitative and quantitative interpretation of collaborative practices. Accordingly, we propose methods based on Social Network Analysis for learning and knowledge management.

Thus, the network in Oycib is called "Kaan" (sky, or network, in the Mayan language). In the "Kaan" we visualize subjects and objects, such as persons, forums, blogs, files and groups, together with all the interactions among them. Additionally, each profile and its interactions are presented.

The visualization was constructed entirely in HTML5 and JavaScript. Four major libraries were used:

- Sigma.js for displaying the networks. The latest version lacks some key functionality for dynamically and additively loading and unloading subgraphs into and out of the main graph, so the source code was extended with the required methods. A separate article on that topic is upcoming.

- Three.js for the rotating Earth and all geographically related work.

- Simile Timeline for the timeline.
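The subgraph handling added to Sigma.js is not shown in this post. As a rough illustration of the idea only, here is a plain-JavaScript sketch (hypothetical helper names and data shapes, not the sigma.js API) of additively merging a subgraph into a main graph and unloading it again:

```javascript
// Merge a subgraph into the main graph without duplicating
// nodes or edges that are already loaded.
function mergeSubgraph(graph, sub) {
  const knownNodes = new Set(graph.nodes.map(n => n.id));
  for (const node of sub.nodes) {
    if (!knownNodes.has(node.id)) {
      graph.nodes.push(node);
      knownNodes.add(node.id);
    }
  }
  const knownEdges = new Set(graph.edges.map(e => e.id));
  for (const edge of sub.edges) {
    if (!knownEdges.has(edge.id)) {
      graph.edges.push(edge);
      knownEdges.add(edge.id);
    }
  }
  return graph;
}

// Unload a subgraph: drop its nodes and every edge touching them.
function unloadSubgraph(graph, sub) {
  const ids = new Set(sub.nodes.map(n => n.id));
  graph.nodes = graph.nodes.filter(n => !ids.has(n.id));
  graph.edges = graph.edges.filter(
    e => !ids.has(e.source) && !ids.has(e.target)
  );
  return graph;
}
```

In a real sigma.js integration the same merge/unload logic would be followed by a refresh of the renderer so the view reflects the updated graph.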

On January 15, BuzzFeed released a large dataset of Donald Trump’s connections, including people, organizations and the nature of their relationships.

In their article “Help Us Map TrumpWorld”, the four authors of the investigation, John Templon, Anthony Cormier, Alex Campbell and Jeremy Singer-Vine, asked the public to help them understand and analyse the data.

“Now we are asking the public to use our data to find connections we may have missed, and to give us context we don’t currently understand. We hope you will help us — and the public — learn more about TrumpWorld and how this unprecedented array of businesses might affect public policy.”

So we decided to see what it would look like in Linkurious, our graph analysis and visualization tool.

The dataset is publicly available in a Google spreadsheet. We imported it into a Neo4j graph database using an import script inspired by Michael Hunger’s work.
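The import script itself is not reproduced here. As an illustration of the mapping involved, here is a minimal JavaScript sketch that turns one spreadsheet row into two nodes and a labeled edge; the column names are assumed from the published dataset and the helper is hypothetical:

```javascript
// Map one TrumpWorld spreadsheet row to a tiny graph fragment:
// entity A and entity B become nodes, the connection becomes an edge.
function rowToGraph(row) {
  return {
    nodes: [
      { id: row['Entity A'], type: row['Entity A Type'] },
      { id: row['Entity B'], type: row['Entity B Type'] },
    ],
    edge: {
      source: row['Entity A'],
      target: row['Entity B'],
      label: row['Connection'],
    },
  };
}
```

An actual importer would apply this mapping to every row and deduplicate entities that appear in several rows before writing them to the database.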

This research measures and maps the production of nineteenth-century space using the tools of the digital age. Computational analysis allowed me to quantify how late nineteenth-century newspapers crafted a view of the world for their readers. Specifically, I examine the Houston Daily Post from 1894 to 1901 to study how late nineteenth-century America appeared from a specific vantage point in time and space. What places loomed large in the paper’s imagined geography? How did large-scale processes of incorporation, standardization, and nationalization shape the paper’s production of space and place? What was the relationship between region and nation? To answer these questions I combine traditional historical research with digital analysis of the paper.

A new set of search tools called Memex, developed by DARPA, peers into the “deep Web” to reveal illegal activity

luiy's insight:

DARPA has said very little about Memex and its use by law enforcement and prosecutors to investigate suspected criminals.

According to published reports, including one from Carnegie Mellon University, the NYDA’s Office is one of several law enforcement agencies that have used early versions of Memex software over the past year to find and prosecute human traffickers, who coerce or abduct people—typically women and children—for the purposes of exploitation, sexual or otherwise. “Memex”—a combination of the words “memory” and “index” first coined in a 1945 article for The Atlantic—currently includes eight open-source, browser-based search, analysis and data-visualization programs as well as back-end server software that perform complex computations and data analysis.

Such capabilities could become a crucial component of fighting human trafficking, a crime with low conviction rates, primarily because of strategies that traffickers use to disguise their victims’ identities (pdf). The United Nations Office on Drugs and Crime estimates there are about 2.5 million human trafficking victims worldwide at any given time, yet putting the criminals who press them into service behind bars is difficult. In its 2014 study on human trafficking (pdf) the U.N. agency found that 40 percent of countries surveyed reported fewer than 10 convictions per year between 2010 and 2012. About 15 percent of the 128 countries covered in the report did not record any convictions.

Social scientists have never understood why some countries are more corrupt than others. But the first study that links corruption with wealth could help change that.

One question that social scientists and economists have long puzzled over is how corruption arises in different cultures and why it is more prevalent in some countries than others. But it has always been difficult to find correlations between corruption and other measures of economic or social activity.

Michal Paulus and Ladislav Kristoufek at Charles University in Prague, Czech Republic, have for the first time found a correlation between the perception of corruption in different countries and their economic development.

The data they use comes from Transparency International, a nonprofit campaigning organisation based in Berlin, Germany, and which defines corruption as the misuse of public power for private benefit. Each year, this organization publishes a global list of countries ranked according to their perceived levels of corruption. The list is compiled using at least three sources of information but does not directly measure corruption, because of the difficulties in gathering such data.

Instead, it gathers information from a wide range of sources such as the African Development Bank and the Economist Intelligence Unit. But it also places significant weight on the opinions of experts who are asked to assess corruption levels.

The result is the Corruption Perceptions Index, which ranks countries on a scale from 0 (highly corrupt) to 100 (very clean). In 2014, Denmark occupied the top spot as the world’s least corrupt nation, while Somalia and North Korea propped up the table in an unenviable tie for the most corrupt countries on the planet.
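As a toy illustration of the kind of rank correlation such a study involves (invented numbers, not the paper's data or its actual method), here is Spearman's rank correlation in JavaScript:

```javascript
// Assign 1-based ranks to a list of values (assumes no ties,
// to keep the ranking step simple).
function ranks(values) {
  const sorted = values.slice().sort((a, b) => a - b);
  return values.map(v => sorted.indexOf(v) + 1);
}

// Spearman's rank correlation. With no ties it reduces to
// 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is the rank difference.
function spearman(xs, ys) {
  const rx = ranks(xs);
  const ry = ranks(ys);
  const n = xs.length;
  let sumD2 = 0;
  for (let i = 0; i < n; i++) sumD2 += (rx[i] - ry[i]) ** 2;
  return 1 - (6 * sumD2) / (n * (n * n - 1));
}

// Hypothetical CPI scores and GDP per capita for five made-up countries.
const cpi = [92, 74, 55, 40, 17];
const gdp = [61000, 48000, 15000, 9000, 2000];
console.log(spearman(cpi, gdp)); // 1 for this perfectly monotonic toy data
```

A value near 1 means that countries perceived as cleaner also tend to be richer, which is the shape of correlation the study reports.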

The spear-phishing email contained a link directing the employees to a malicious, faux-Google website that would request their login credentials and then hand them over to the hackers. The NSA identified seven “potential victims” at the company. While malicious emails targeting three of the potential victims were rejected by an email server, at least one of the employee accounts was likely compromised, the agency concluded. The NSA notes in its report that it is “unknown whether the aforementioned spear-phishing deployment successfully compromised all the intended victims, and what potential data from the victim could have been exfiltrated.”

VR Systems declined to respond to a request for comment on the specific hacking operation outlined in the NSA document. Chief Operating Officer Ben Martin replied by email to The Intercept’s request for comment with the following statement:

Phishing and spear-phishing are not uncommon in our industry. We regularly participate in cyber alliances with state officials and members of the law enforcement community in an effort to address these types of threats. We have policies and procedures in effect to protect our customers and our company.

In the past two years, ProtonMail has grown enormously, especially after the recent US election, and today we are the world’s largest encrypted email service with over 2 million users. We have come a long way since our user community initially crowdfunded the project.

ProtonMail today is much larger in scope than what was originally envisioned when our founding team met at CERN in 2013. As ProtonMail has evolved, the world has also been changing around us.

Civil liberties have been increasingly restricted in all corners of the globe. Even Western democracies such as the US have not been immune to this trend, which is most starkly illustrated by the forced enlistment of US tech companies into the US surveillance apparatus.

In fact, we have reached the point where it is simply not possible to run a privacy- and security-focused service in the US or in the UK. At the same time, the stakes are also higher than ever before. As ProtonMail has grown, we have become increasingly aware of our role as a tool for freedom of speech, and in particular for investigative journalism.

Last fall, we were invited to the 2nd Asian Investigative Journalism Conference and were able to get a firsthand look at the importance of tools like ProtonMail in the field.

A sorting algorithm is an algorithm that organizes the elements of a sequence in a certain order. Since the early days of computing, the sorting problem has been one of the main battlefields for researchers. The reason behind this is not only the need to solve a very common task but also the challenge of solving a complex problem in the most efficient way.

SORTING is an attempt to visualize and help to understand how some of the most famous sorting algorithms work. This project provides two standpoints from which to look at algorithms: one is more artistic (apologies to any real artist out there), the other is more analytical, aiming to explain each algorithm step by step.

This project does not aim to teach the theory of sorting algorithms; there are amazing resources, books and courses for that purpose. SORTING is for those who want to see these algorithms in a different light and hopefully appreciate the processing and brain power behind these pieces of genius that have, in many ways, changed the way we live.
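As a taste of how such a visualization can be driven, here is a minimal JavaScript sketch (hypothetical, not the project's code) of a bubble sort that records a frame after every swap, so each step can be rendered:

```javascript
// Bubble sort that captures a snapshot of the array after every swap.
// The resulting "frames" array is the kind of trace a step-by-step
// sorting visualization could play back.
function bubbleSortTrace(input) {
  const a = input.slice();          // don't mutate the caller's array
  const frames = [a.slice()];       // frame 0: the initial state
  for (let i = 0; i < a.length - 1; i++) {
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]];
        frames.push(a.slice());     // one frame per swap
      }
    }
  }
  return { sorted: a, frames };
}

const trace = bubbleSortTrace([3, 1, 2]);
console.log(trace.frames.length); // 3: the initial state plus two swaps
```

Other algorithms can be traced the same way by recording a frame at each comparison or move, which also makes their differing amounts of work directly visible.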

How to Transition from Excel to R - An Intro to R for Microsoft Excel Users

luiy's insight:

In today's increasingly data-driven world, business people constantly say they want more powerful and flexible analytical tools, but they are usually intimidated by the programming knowledge these tools require and by the learning curve they must overcome just to reproduce what they already know how to do in the programs they are accustomed to using. For most business people, the go-to tool for anything analytical is Microsoft Excel.

We were looking for a different type of visualization for a project at work this past week and my thoughts immediately gravitated towards streamgraphs. The TLDR on streamgraphs is that they are generalized versions of stacked area graphs with free baselines across the x axis. They are somewhat controversial but have a “draw you in” […]

luiy's insight:

Streamgraphs require a continuous variable for the x axis, and the streamgraph widget/package works with years or dates (support for xts objects and POSIXct types coming soon). Since they display categorical values in the area regions, the data in R needs to be in long format, which is easy to do with dplyr & tidyr.

The package recognizes when years are being used and does all the necessary conversions for you. It also uses a technique similar to expand.grid to ensure all categories are represented at every observation (not doing so makes d3.stack unhappy).

How does a company keep tabs on thousands of suppliers? That’s the question Bruce Arntzen tried to answer when he started the Hi-Viz Research Project. As Executive Director of MIT’s Supply Chain Management Program, Arntzen works with corporations to find innovative solutions to supply chain problems. The idea for the Hi-Viz project came during a 2011 meeting of the Supply Chain Risk Leadership Council. A survey of attendees listed Supply Chain Visibility as the top concern. Why? With thousands of suppliers and sub-suppliers, it can be very time-consuming to find the weakest link in a supply chain. Arntzen’s solution: an automatic visualization of the end-to-end supply chain where the weakest links could be seen in real time. Watch his interview to learn how MIT and Sourcemap developed the first automated risk visualization [more details below the fold].

In 2015, the Hi-Viz project is partnering with actuarial data providers to provide predictive risk analytics. Sourcemap is making available inventory risk mapping as part of its enterprise software-as-a-service. Want to get involved? Learn more about the Hi-Viz project, or contact Sourcemap for a demo.
