Online consumer behavior in general, and online customer engagement with brands in particular, has become a major focus of research activity, fuelled by the exponential increase in the interactive functions of the internet and of social media platforms and applications. Current research in this area is mostly hypothesis-driven, and considerable debate about the concept of Customer Engagement and its related constructs persists in the literature. In this paper, we propose a novel methodology for reverse engineering a consumer behavior model for online customer engagement, based on a computational and data-driven perspective. This methodology could be generalized and prove useful for future research on consumer behavior using questionnaire data, as well as for studies investigating other types of human behavior. The method we propose comprises five main stages: symbolic regression analysis, graph building, community detection, evaluation of results and, finally, investigation of directed cycles and common feedback loops. The ‘communities’ of questionnaire items that emerge from our community detection method form possible ‘functional constructs’ inferred from data rather than assumed from literature and theory. Our results show consistent partitioning of questionnaire items into such ‘functional constructs’, suggesting that the method proposed here could be adopted as a new data-driven approach to modeling human behavior.
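As a concrete illustration of the graph-building and community-detection stages, here is a minimal Python sketch. It substitutes a simple correlation threshold for the symbolic regression stage, and the networkx greedy-modularity algorithm and the 0.4 threshold are illustrative choices rather than the authors' method.

```python
# A minimal sketch of the graph-building and community-detection stages.
# Assumptions: responses is a pandas DataFrame (rows = respondents,
# columns = questionnaire items); edges link items whose responses are
# strongly associated. The threshold and the greedy-modularity algorithm
# are illustrative stand-ins for the paper's own pipeline.
import pandas as pd
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def item_communities(responses: pd.DataFrame, threshold: float = 0.4):
    """Partition questionnaire items into data-driven 'functional constructs'."""
    corr = responses.corr().abs()  # pairwise association strength between items
    graph = nx.Graph()
    graph.add_nodes_from(responses.columns)
    for i in responses.columns:
        for j in responses.columns:
            if i < j and corr.loc[i, j] >= threshold:
                graph.add_edge(i, j, weight=float(corr.loc[i, j]))
    # Each detected community is a candidate 'functional construct'.
    return [set(c) for c in greedy_modularity_communities(graph, weight="weight")]
```

Communities that stay stable when the respondents are resampled would correspond to the consistent partitions the abstract reports.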

The total knowledge contained within a collective exceeds the knowledge of even its most intelligent member. Yet this collective knowledge will remain inaccessible to us unless we are able to find efficient knowledge aggregation methods that produce reliable decisions based on the behavior or opinions of the collective’s members. It is often stated that simple averaging of a pool of opinions is a good, and in many cases optimal, way to extract knowledge from a crowd. The method of averaging has been applied to the analysis of decision-making in very different fields, such as forecasting, collective animal behavior, individual psychology, and machine learning. Two mathematical theorems, Condorcet’s theorem and Jensen’s inequality, provide a general theoretical justification for the averaging procedure. Yet the necessary conditions that guarantee the applicability of these theorems are often not met in practice. Under such circumstances, averaging can lead to suboptimal, and sometimes very poor, performance. Practitioners in many different fields have independently developed procedures to counteract the failures of averaging. We review such knowledge aggregation procedures and interpret them in the light of a statistical decision theory framework to explain when their application is justified. Our analysis indicates that, in the ideal case, the aggregation procedure should be matched to the nature of the knowledge distribution, its correlations, and the associated error costs. This leads us to explore how machine learning techniques can be used to extract near-optimal decision rules in a data-driven manner. We end with a discussion of open frontiers in the domain of knowledge aggregation and collective intelligence in general.
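A small numerical sketch (not from the paper; all parameters are invented for illustration) shows the core failure mode: averaging works well when members' errors are independent, but a shared error component, which violates the independence assumption behind Condorcet-style guarantees, survives averaging no matter how large the crowd.

```python
# Illustrative simulation: averaging independent vs. correlated estimates.
import numpy as np

rng = np.random.default_rng(0)
truth = 10.0
n_members, n_trials = 25, 10_000

# Independent errors: averaging sharply reduces the collective's error.
independent = truth + rng.normal(0, 2.0, size=(n_trials, n_members))
err_independent = np.mean((independent.mean(axis=1) - truth) ** 2)

# Shared (correlated) errors: a common bias term survives averaging.
shared_bias = rng.normal(0, 2.0, size=(n_trials, 1))
correlated = truth + 0.8 * shared_bias + rng.normal(0, 1.2, size=(n_trials, n_members))
err_correlated = np.mean((correlated.mean(axis=1) - truth) ** 2)

print(f"MSE, independent errors: {err_independent:.3f}")  # ~ 4/25 = 0.16
print(f"MSE, correlated errors:  {err_correlated:.3f}")   # bias term remains, ~2.6
```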

Author summary A central challenge in cognitive neuroscience is decoding mental representations from patterns of brain activity. With functional magnetic resonance imaging (fMRI), multivariate decoding methods like multivoxel pattern analysis (MVPA) have produced numerous discoveries about the brain. However, what information these methods draw upon remains the subject of debate. Typically, each voxel is thought to contribute information through its selectivity (i.e., how differently it responds to the classes being decoded), with improved sensitivity reflecting the aggregation of selectivity across voxels. We show that this interpretation downplays an important factor: MVPA is also highly attuned to noise correlations between voxels with opposite selectivity. Across several analyses of an fMRI dataset, we demonstrate a positive relationship between the magnitude of noise correlations and multivariate decoding performance. Indeed, voxels more selective for one class, or heavily weighted in MVPA, tend to be more strongly correlated with voxels selective for the opposite class. Furthermore, using a model to simulate different levels of selectivity and noise correlations, we find that the benefit of noise correlations for decoding is a general property of fMRI data. These findings help elucidate the computational underpinnings of multivariate decoding in cognitive neuroscience and provide insight into the nature of neural representations.
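To make the mechanism concrete, the following simulation, which is ours rather than the paper's and uses purely illustrative parameters, models two voxels with opposite selectivity and a tunable noise correlation. A simple difference readout, standing in for an MVPA classifier's weighting, cancels the shared noise when the correlation is positive, so decoding accuracy rises with the correlation.

```python
# A toy model (illustrative assumptions, not the paper's analysis) of two
# voxels with opposite selectivity for classes A and B. Positive noise
# correlation between them is cancelled by a difference readout.
import numpy as np

rng = np.random.default_rng(1)

def decode_accuracy(rho: float, n_trials: int = 20_000) -> float:
    labels = rng.integers(0, 2, n_trials)        # 0 = class A, 1 = class B
    signal = np.where(labels == 0, 1.0, -1.0)
    cov = [[1.0, rho], [rho, 1.0]]               # voxel noise covariance
    noise = rng.multivariate_normal([0.0, 0.0], cov, size=n_trials)
    v1 = signal + noise[:, 0]                    # voxel selective for class A
    v2 = -signal + noise[:, 1]                   # voxel selective for class B
    decision = v1 - v2                           # readout: 2*signal + (n1 - n2)
    return float(np.mean((decision > 0) == (labels == 0)))

for rho in (-0.5, 0.0, 0.5):
    print(f"noise correlation {rho:+.1f}: accuracy {decode_accuracy(rho):.3f}")
```

The variance of the readout noise is 2(1 - rho), so a positive rho between oppositely selective voxels directly sharpens the decision variable.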

High-throughput technologies provide genomic and transcriptomic data that are suitable for biomarker detection for classification purposes. However, the high dimensionality of the output of such technologies and the characteristics of the data sets analysed pose a challenge for the classification task. Here we present a new feature selection method, based on three steps, to detect class-specific biomarkers in high-dimensional data sets. The first step detects the genes that are differentially expressed across the experimental conditions tested in the experimental design, the second step filters out the features with low discriminative power, and the third step detects the class-specific features and defines the final biomarker as the union of the class-specific features. The proposed procedure is tested on two microarray data sets: one characterized by a strong imbalance between class sizes and one where the class sizes are perfectly balanced. We show that, using the proposed feature selection procedure, the classification performance of a Support Vector Machine on the imbalanced data set reaches 82%, whereas other methods do not exceed 73%. Furthermore, on the perfectly balanced data set, the classification performance is comparable with that of other methods. Finally, the Gene Ontology enrichments performed on the signatures selected with the proposed pipeline confirm the biological relevance of our methodology. The package implementing Peculiar Genes Selection, ‘PGS’, is available for R users at: http://github.com/mbeccuti/PGS.
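The published implementation is the R package linked above; as a rough illustration of the three-step logic, here is a Python analogue in which the ANOVA filter, the mutual-information filter, the thresholds, and the per-class ranking are all our own illustrative substitutes for the authors' criteria.

```python
# A minimal Python analogue (not the authors' R code) of the three-step
# selection: (1) differential-expression filter, (2) discriminative-power
# filter, (3) union of class-specific features, followed by an SVM.
import numpy as np
from scipy.stats import f_oneway
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def pgs_like_selection(X, y, p_max=0.05, mi_min=0.05, top_k=20):
    classes = np.unique(y)
    # Step 1: keep genes differentially expressed across conditions (ANOVA).
    pvals = np.array([f_oneway(*(X[y == c, j] for c in classes)).pvalue
                      for j in range(X.shape[1])])
    keep = np.where(pvals < p_max)[0]
    # Step 2: drop features with low discriminative power (mutual information).
    mi = mutual_info_classif(X[:, keep], y, random_state=0)
    keep = keep[mi > mi_min]
    # Step 3: rank each class's one-vs-rest mean difference and take the
    # union of the per-class top features as the final signature.
    signature = set()
    for c in classes:
        gap = X[y == c][:, keep].mean(0) - X[y != c][:, keep].mean(0)
        signature |= set(keep[np.argsort(-np.abs(gap))[:top_k]])
    return sorted(signature)

# Usage: fit an SVM on the selected signature.
# sel = pgs_like_selection(X_train, y_train)
# clf = SVC(kernel="linear").fit(X_train[:, sel], y_train)
```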

Israeli startup Oryx Vision has raised a $50 million Series B round led by Third Point Ventures and WRV to help continue to develop and commercialize its innovative LiDAR tech, which is designed to be as simple as a digital camera, with greater reliability and sensitivity than existing LiDAR, while also achieving low cost.

Oryx’s LiDAR has no moving parts, and uses antennas in place of photodetectors to retrieve both range and velocity information for the points of light in its high-resolution scans of its surroundings. Oryx says its unique method means that the system is “a million times more sensitive” than existing LiDAR systems, and is also able to deal better with interference from sunlight, and from other LiDARs in operation on the road.

SpaceX's drone landing ships have already proven that uncrewed vessels can handle some of the most dangerous jobs at sea. Now, two Norwegian companies are poised to put robo-boats to work on one of the dullest: hauling cargo down the fjord.

Two Norwegian companies are teaming up to construct a short-range, all-electric coastal container ship that will eventually operate autonomously, eliminating up to 40,000 diesel truck trips per year. The ship, the Yara Birkeland, will begin operations in 2018 with a crew, but it is expected to operate largely autonomously (and crewless) by 2020 (regulatory clearance permitting, of course).

The UK government has announced plans to introduce drone registration and safety awareness courses for owners of small unmanned aircraft. The rules will affect anyone who owns a drone weighing more than 250 grams (8oz). Drone maker DJI said it was in favour of the measures. There is no time frame or firm plan for how the new rules will be enforced, and the Department for Transport admitted that "the nuts and bolts still have to be ironed out".

Partially autonomous and intelligent systems have been used in military technology since at least the Second World War, but advances in machine learning and Artificial Intelligence (AI) represent a turning point in the use of automation in warfare. Though the United States military and intelligence communities are planning for expanded use of AI across their portfolios, many of the most transformative applications of AI have not yet been addressed.

In this piece, we propose three goals for developing future policy on AI and national security: preserving U.S. technological leadership, supporting peaceful and commercial use, and mitigating catastrophic risk. By looking at four prior cases of transformative military technology—nuclear, aerospace, cyber, and biotech—we develop lessons learned and recommendations for national security policy toward AI.

Lyft has been coming after Uber’s crown at full force, and it’s showing no signs of slowing. The ridesharing company has continued to charge through the door that’s been left wide open by Uber, and in its latest move, has begun developing self-driving technology of its own. On Friday, the firm announced that it was venturing into autonomous vehicles, and has opened a new self-driving-research center in Palo Alto, California. In the next few weeks, Lyft expects to hire a number of new engineering and technical folks to staff this new facility, and hopefully, overtake Uber as the leader in the future of transportation.

During nearly every discussion about organizational change, someone makes the obvious assertion that “change is hard.” On the surface, this is true: change requires effort. But the problem with this attitude, which permeates all levels of our organizations, is that it equates “hard” with “failure,” and, by doing so, it hobbles our change initiatives, which have higher success rates than we lead ourselves to believe.

Our bias toward failure is wired into our brains. In a recently published series of studies, University of Chicago researchers Ed O’Brien and Nadav Klein found that we assume failure is a more likely outcome than success and, as a result, we wrongly treat successful outcomes as flukes and bad results as irrefutable proof that change is difficult.

Rolls-Royce Marine has partnered with Google Cloud to apply neural network and machine learning technology to shipping. Their grand plan is to have a fully autonomous ship at sea by 2020, but the technology will also drive safety and efficiency improvements across the shipping industry.

The Internet of Things (IoT) is an ecosystem of connected physical objects that are accessible through the internet. Machine-to-Machine (M2M) and IoT projects follow a common technological paradigm: intelligent devices, seamlessly connected to the Internet, enable remote services and provide actionable data.
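As a minimal sketch of that paradigm, consider a device that periodically pushes telemetry to a remote service. The endpoint URL, device name, and payload below are placeholders, and real deployments typically use a purpose-built protocol such as MQTT rather than plain HTTP.

```python
# A minimal sketch of the device-to-cloud pattern described above: a
# connected device periodically reporting actionable telemetry.
import json
import time
import urllib.request

ENDPOINT = "https://iot.example.com/telemetry"  # hypothetical ingestion endpoint

def publish(reading: dict) -> None:
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # fire-and-forget; add retries in production

while True:
    publish({"device": "sensor-01", "ts": time.time(), "temperature_c": 21.7})
    time.sleep(60)  # report once a minute
```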

The report offers a multi-step view of the global Internet of Things (IoT) software market. The first section provides an overview of the market, covering its key segments, definitions, the industry chain as a whole, and the various applications of the technology. This section also integrates a comprehensive analysis of the growth plans and government policies that influence the market, as well as its cost structures and manufacturing processes. The market's current growth and development patterns are also summarized in the study.

Artificial intelligence (AI) has the potential to dramatically transform huge swathes of the economy and society for the better, and as the technology continues to make headlines, many countries are developing plans to ensure they can take full advantage of these benefits. Below is a high-level overview of those national plans.

Artificial intelligence is now powering a growing number of computing functions, and today the developer community is getting another AI boost, courtesy of Yandex. The Russian search giant, which, like its US counterpart Google, has extended into a myriad of other business lines, from mobile to maps and more, announced the launch of CatBoost, an open source machine learning library based on gradient boosting: the branch of ML designed to help “teach” systems when you have a very sparse amount of data, and especially when the data may not all be sensorial (such as audio, text or imagery) but also includes transactional or historical data.
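For a sense of what that looks like in practice, here is a minimal sketch using CatBoost's Python API on an invented transactional-style data set; the column layout, labels, and hyperparameters are all illustrative assumptions.

```python
# A minimal sketch of CatBoost on tabular data with categorical columns.
# The toy dataset below is invented for illustration.
from catboost import CatBoostClassifier

# Transactional-style records: [customer_segment, channel, purchase_amount]
X_train = [
    ["retail", "web", 120.0],
    ["retail", "store", 40.0],
    ["business", "web", 900.0],
    ["business", "phone", 640.0],
]
y_train = [0, 0, 1, 1]  # e.g., 1 = repeat buyer

model = CatBoostClassifier(iterations=200, learning_rate=0.1, verbose=False)
# cat_features marks the categorical columns CatBoost should encode natively.
model.fit(X_train, y_train, cat_features=[0, 1])

print(model.predict([["business", "web", 750.0]]))
```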

In 1899, the world’s most powerful nations signed a treaty at The Hague that banned military use of aircraft, fearing the emerging technology’s destructive power. Five years later the moratorium was allowed to expire, and before long aircraft were helping to enable the slaughter of World War I. “Some technologies are so powerful as to be irresistible,” says Greg Allen, a fellow at the Center for a New American Security, a non-partisan Washington DC think tank. “Militaries around the world have essentially come to the same conclusion with respect to artificial intelligence.” Allen is coauthor of a new 132-page report on the effect of artificial intelligence on national security. One of its conclusions is that the impact of technologies such as autonomous robots on war and international relations could rival that of nuclear weapons. The report was produced by Harvard’s Belfer Center for Science and International Affairs, at the request of IARPA, the research agency of the Office of the Director of National Intelligence. It lays out why technologies like drones with bird-like agility, robot hackers, and software that generates photo-real fake video are on track to make the American military and its rivals much more powerful.

XL Catlin today announced the launch of its Cyber and Data Protection Insurance policy in Asia Pacific. With cyber risk ranking among the top 10 of today’s emerging risks, the new policy is designed to protect businesses from the increasing exposures they face from malicious network compromise and data breach.

The insurance solution covers business interruption arising from a network compromise, associated extortion demands, and first-party incident response costs such as notification of the compromise, forensic investigations and public relations support. Importantly, the policy covers third-party liability costs that organisations face as a result of a data breach, including any regulatory investigation or contractual liability associated with the Payment Card Industry Data Security Standard. Additionally, it offers broad coverage for liability associated with media exposures such as copyright infringement, trademark infringement, invasion of privacy and false advertising, in both offline and online content as well as social media.
