Posts by Jacobo Elosua:

Project Aiur’s Knowledge Validation Engine will receive an input text, either a research paper or a self-written text, and return additional knowledge ultimately connected to the input text’s scientific quality. This process will initially consist of three sequential steps:

Enrichment of the input document’s content;

Preparation of its hypothesis tree; and,

Validation of the document’s building blocks.

Understanding the essence of an input research document is a prerequisite. Hence, the project’s initial MVP will identify the main topic and sub-topics discussed in a given input document. Building from there, Milestone 1 will focus on extracting the main structural elements contained therein, i.e. Problem, Solution, Evaluation and Result descriptions. Milestone 2 will then use this extracted information to build knowledge graphs. Lastly, Milestone 3 will incorporate information from images, tables, graphs, etc. (i.e. ‘black data’) into these graphs.
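The three sequential steps described above can be sketched as a simple pipeline. To be clear, everything below is illustrative: the names (Document, enrich, build_hypothesis_tree, validate) are our assumptions, not a published Aiur interface, and the "topic extraction" is a trivial placeholder standing in for real topic modelling.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these names are assumptions, not a published
# Aiur API, and the "topic extraction" below is a trivial placeholder.

@dataclass
class Document:
    text: str
    topics: list = field(default_factory=list)           # step 1 output
    hypothesis_tree: dict = field(default_factory=dict)  # step 2 output

def enrich(doc: Document) -> Document:
    """Step 1 (enrichment / MVP scope): identify main topic and sub-topics."""
    # Placeholder heuristic standing in for real topic modelling.
    doc.topics = sorted({w.lower() for w in doc.text.split() if len(w) > 8})
    return doc

def build_hypothesis_tree(doc: Document) -> Document:
    """Step 2: arrange the extracted elements into a hypothesis tree."""
    doc.hypothesis_tree = {topic: [] for topic in doc.topics}
    return doc

def validate(doc: Document) -> dict:
    """Step 3: score each building block of the document (stubbed)."""
    return {topic: "unvalidated" for topic in doc.hypothesis_tree}
```

Running the three functions in sequence, validate(build_hypothesis_tree(enrich(doc))), mirrors the enrichment, hypothesis-tree and validation order described above.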

In a nutshell, our development roadmap points to the creation of causal connections between scientific articles by finding similarities at a lower structural level (e.g. a common solution, evaluation method or set of results). The aim is to build software that can extract a pseudo-hypothesis or argument (e.g. a Problem — Solution — Evaluation — Result sequence) from any given scientific input text. This output will then be used to develop more advanced science-assistance functionalities, such as summarizing information, recommending similar articles, identifying building blocks, or finding true references, among others.
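A minimal container for the pseudo-hypothesis just described, plus a naive overlap measure for finding structurally similar articles, might look like the following. This is purely a sketch of ours: the class, the word-overlap heuristic and the equal weighting of the four slots are all assumptions, not Aiur's actual method.

```python
from dataclasses import dataclass

# Illustrative only: a minimal container for the pseudo-hypothesis
# (Problem - Solution - Evaluation - Result) the post describes, plus a
# naive overlap measure. Nothing here reflects Aiur's implementation.

@dataclass(frozen=True)
class Hypothesis:
    problem: str
    solution: str
    evaluation: str
    result: str

def similarity(a: Hypothesis, b: Hypothesis) -> float:
    """Fraction of the four structural slots whose wording overlaps."""
    slots = ("problem", "solution", "evaluation", "result")

    def overlap(x: str, y: str) -> bool:
        # Crude check: do the two slot descriptions share any word?
        return bool(set(x.lower().split()) & set(y.lower().split()))

    return sum(overlap(getattr(a, s), getattr(b, s)) for s in slots) / len(slots)
```

Two papers sharing a solution and an evaluation method but differing in results would score 0.5 under this toy measure; a real system would of course compare semantic representations rather than raw words.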

As relayed in project Aiur’s white paper, the ultimate goal of the Knowledge Validation Engine is, err, validating knowledge! In practice, this means that a user should be able to enlist the Knowledge Validation Engine’s help in automating certain tasks when approaching a given research challenge. Answering, for example: to what extent are the assertions made in a paper backed up by previous research? What is the quality of that supporting evidence? Is it conflicting? Are there boundary conditions identified in the literature that are not being taken into account?

These are just a few of the questions that the Knowledge Validation Engine should help authors, reviewers and implementers alike address. With one final question asked at the end of the journey: can this paper be reproduced and, in that case, replicated?

A Knowledge Validation Engine should thus significantly improve scientific publishing, peer review and industrial innovation alike. This intelligent machine should help bring about new standards in transparency and accountability, laying a solid foundation to address some of the critical shortcomings of the inefficient, traditional processes plaguing science today.

With an open-for-scrutiny, decentralized, community-owned Knowledge Validation Engine, scientists, acting in their different roles, would face a whole new set of incentives better aligned with bringing about disintermediation, raising quality standards and collectively advancing scientific knowledge. In fact, as members of arguably the world’s most exciting community (at least from a future-impact viewpoint!), the engine’s users could go a long way in addressing some of the most often cited issues they face, namely: (1) information overload; (2) access barriers; (3) reproducibility issues; (4) built-in biases; and (5) incentive misalignment.

As pointed out by Lambert Heller, blockchain technology holds great potential to bring us closer to forging a technology-friendly, new-generation scholarly commons. On the path towards that vision, it is our firm belief that the world’s broad science ecosystem would benefit immensely from a functioning, community-owned Knowledge Validation Engine like Aiur. We sincerely hope you consider joining us in this attempt to conceptualize, design and execute it in a truly joint effort.

There are plenty of disagreements around blockchain today. But one of the few statements that generates a really high level of consensus is that blockchains have phenomenal potential as incentive machines. At the same time, creating a new ecosystem to re-align incentives requires making some choices, including who bears the cost of what, and how.

Unlike blockchains, governments have been around forever. And everyone, progressives and libertarians alike, concedes that governments have accumulated vast experience dealing with incentives too (with more or less productive lessons learned, depending on perspective). When one looks at the suite of tools deployed by governments to introduce public-policy incentives over the history of civilization, there are a couple of sure hits, and they invariably include taxation.

However, blockchain developments to date have conspicuously ignored this powerful lever in the design of targeted incentive structures. Economists have long regarded taxation as a crucial mechanism for ensuring communal, systemic sustainability, yet blockchain projects have so far focused more on monetary-policy mechanisms than on fiscal-policy ones.

We see this as a sign of current governance immaturity in the blockchain space. In fact, in the utility vs. security token dichotomy, we believe taxation will soon acquire particular relevance. This is because when generating artificial scarcity to fuel token-price upward spirals is not on the cards, neither as a goal nor as a desired outcome, taxation can incentivize and penalize user behaviors very effectively (e.g. disincentivizing ‘hodling’).

Utility-focused blockchain projects, and particularly those true to the spirit of generating value through disintermediation and placing the community at the center of their endeavours, should carefully consider introducing taxation provisions. In a field ripe for experimentation, blockchains such as Ethereum currently allow for great flexibility to customize single-transaction taxation algorithms to an extent never possible in the world of offline, analog governance. But we wanted to go further.

With project Aiur we aim to provide a fresh perspective on the sharing of collective and societal burdens by ecosystem members. In a novel experiment in the blockchain space, we will do so through the introduction of preliminary taxation logic, with specific use-cases coded into smart contracts.

Aiur is a project that aims to build an open, community-governed AI Engine for Knowledge Validation. In making it a reality we will place a bet on the development of new community-oriented taxation systems, in which each transaction is taxed individually through the application of transparent criteria designed to reward and penalize different behaviors towards the Aiur ecosystem.

Let’s zoom in on the Tax Man mechanism of project Aiur. The service will assess the health of the token economy by monitoring a basket of indicators, including the evolution of the market value of AIUR tokens expressed in ETH. This assessment will drive the general tax level, which is adjusted fluidly at any given point in time.

With the general tax level set, all AIUR transactions will then be taxed individually. Tax rates will be calculated based on the seller’s status and other circumstances, as a function of four key factors:

— What percentage of the seller’s stake consists of generated vs. acquired tokens?

— How long has the seller held the tokens?

— Is the account of the seller public or anonymous?

— Is this a transaction where the institution acting as central bank is involved?
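The two-stage logic above (a fluid, economy-wide tax level, then a per-transaction rate derived from the four factors) can be sketched in a few lines. Important caveat: the post specifies the indicators and the four factors but no formula, so every weight, clamp and exemption below is an assumption of ours purely for illustration.

```python
# Hedged sketch of the Tax Man logic. The weights, clamps and the
# central-bank exemption are illustrative assumptions: the post names
# the indicators and factors but does not specify a formula.

def general_tax_level(token_price_eth: float, reference_price_eth: float) -> float:
    """Economy-wide tax level from AIUR's ETH market value.

    Assumed rule: a falling price raises the level (discouraging selling),
    clamped to a 5%-30% band.
    """
    ratio = token_price_eth / reference_price_eth
    return min(0.30, max(0.05, 0.15 / ratio))

def transaction_tax_rate(base_level: float,
                         generated_fraction: float,   # factor 1: generated vs acquired
                         holding_days: int,           # factor 2: holding period
                         is_public_account: bool,     # factor 3: public vs anonymous
                         involves_central_bank: bool  # factor 4: central-bank party
                         ) -> float:
    """Per-transaction rate from the four factors listed above."""
    rate = base_level
    rate *= 1.0 - 0.5 * generated_fraction       # earned tokens taxed less
    rate *= max(0.5, 1.0 - holding_days / 730)   # long holders taxed less
    if not is_public_account:
        rate *= 1.25                             # assumed anonymity surcharge
    if involves_central_bank:
        rate = 0.0                               # assumed central-bank exemption
    return round(rate, 4)
```

On a blockchain such as Ethereum, logic of this shape would live in a smart contract and run on every transfer; the point of the sketch is only that each transaction gets an individually computed rate from transparent criteria.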

We are very excited to be exploring this logic in project Aiur, initially via prototype smart contracts. We have put a great deal of effort into overcoming some of the biggest barriers to implementation, including how to tax not only the more straightforward transactions between two third parties (akin to sales or value-added taxation), but also ecosystem status (more similar to property or wealth taxation).

On common goals, natural alliances and short-term compromises.

Some thirty years ago Tim Berners-Lee invented the World Wide Web – and he did so, to a large extent, for society to share scientific knowledge much more freely and effectively, without artificial boundaries between disciplines and geographies. However, we seem to be only marginally closer to that goal today than we were back in 1989.

I have been deeply involved in the Open Data and Open Government movements since 2009. Through this neighboring activism, together with deep personal experience in a case of much-needed access to specific medical knowledge, I have been a staunch supporter of the Open Science agenda for a good number of years now.

But what does that mean? Per its Wikipedia article, Open Science is “the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional.” That definition raises a number of questions. Here are a few:

(1) Are actions directed at improving access only for some members of society (and not all) compatible with Open Science practices? Can partial improvements ‘redeem’ themselves via wider accessibility goals over a longer timeframe?

(2) Are all actions towards making research, data and dissemination more accessible congruent or necessarily compatible with each other? What if a research team publishes the research article, but not the data? Does that count as Open Science?

(3) What about derivative works? Do those need to be equally accessible for an initiative to qualify as aligned with Open Science?

When dealing with broad-based, largely decentralized movements like Open Science, there is rarely anyone at the other end offering clear-cut answers to these sorts of questions. Here I try to offer some reflections in connection with Project Aiur.

When we launched Iris.ai back in 2015, with the stated ambition of building the world’s first AI-driven science assistant, we saw ourselves fitting quite nicely into the broad movement demanding more openness around scientific knowledge. Almost three years and four full product releases later, today we can proudly say that we have provided new open tools to power more effective contextual scientific discovery for any user of scientific research world-wide – and we, of course, publish our own scientific research Open Access.

With Project Aiur, we now have the goal of creating an open, community-governed AI Engine for Knowledge Validation. In the brave new world we envision, any knowledge seeker should be able to input a scientific text, be it an existing research paper or a self-written problem statement, and query the system, Aiur, to get a number of related outputs, including a validation of the input’s hypotheses and building blocks against all of the world’s existing science.

Bearing that in mind, we believe our project’s goals are fundamentally aligned with the Open Science agenda. But, going back to the questions above, there can be friction too. Can a new ecosystem be built, establishing an entirely rethought set of economic incentives, with universal access granted to all knowledge for everyone at once?

We believe blockchain technology has immense potential to drive true industry and status quo disruption. In contrast with other, more gradual approaches limited to the incremental adoption of gold or green open access standards, blockchain developments, such as the ones we propose with Project Aiur, can redesign an entire ecosystem at remarkable speed, creating better-aligned incentives for all system participants. But not everything can be achieved simultaneously.

And herein lies the potential tension. Blockchain-based initiatives depend on the definition of new user communities for their success. And whilst these new communities might boast universal growth ambitions, community membership cannot, by definition, encompass the whole population at launch. Does that equate to parting ways with the Open Science movement? We believe not.

Also, importantly, can new initiatives only aspire to be subsidized through public funding mechanisms, including indirect taxation-based ones, or should they, on the contrary, experiment with creating new economic and user-payment models aimed at attaining system sustainability? We are of the latter opinion, and we fail to see a wedge between our initiative and the broad Open Science movement here either.

In our view, via its explicit theory of change, Project Aiur is fundamentally aligned with Open Science advocacy. And this alignment should result in natural alliances cemented with other ecosystem actors around a common vision and shared long-term goals. We aim to bypass what we see as naïve, maximalist restrictions curtailing our project’s ambition to help generate better, more ubiquitous science.

With that said, we look forward to working together with the Open Science movement side-by-side to achieve a world where the research community generates better, more ubiquitous science. That is indeed the very reason why we exist!

How to build a rocket with composite materials? Together with the leading European research institute Swerea SICOMP and Chalmers University we organised a science hackathon, or scithon, as we call it. The goal of the 4-hour sprint was to map out solutions to this space challenge and, in the process, get a grasp of where Iris.AI 2.0* stands compared to traditional science discovery tools.

On September 20th two teams of cross-disciplinary Masters and Ph.D. students, from fields spanning mechanical engineering and industrial design to computer science, astrophysics and entrepreneurship, were handed a research challenge: Is it possible to build a reusable rocket made completely out of composite materials?

This challenge provided by Swerea SICOMP is particularly difficult due to issues like the performance of composites at extreme temperatures, the limited durability against UV and space radiation, chemical resistance issues with rocket fuels, and oxidation in high concentrations of oxygen.

After introducing the challenge and the rules of the game, the teams were pitted against each other. They both had four hours to achieve two goals: (1) map and categorise related scientific articles; and, (2) summarise the key findings by skimming through the categories and papers. Only one of the teams had access to Iris.AI.

The specific criteria they would be evaluated on were the relevance, breadth and completeness of the research papers identified. Teams’ work was also assessed based on the quality of the conclusions drawn, including elements like issues surfaced, key trends and current research directions identified.

After the sprint, an expert panel evaluated the results obtained by both teams. Team 2, using Iris.AI as the tool, generated a score of 95%. Team 1, using the current market standard product, scored 45%.

The number of generally relevant papers identified was similar for both teams. The different angles covered by these papers (with categories like validation research vs. evaluation research vs. solution proposals vs. philosophical papers vs. opinion papers vs. experience papers) were broadly similar, too.

The scithon jury attributed a significantly higher score to Team 2, i.e. the team that had used Iris.AI, on three accounts: (1) finding three papers with a top score in terms of fitting the problem statement; (2) showing higher quality of key findings structured around identified topics; and, (3) drawing superior conclusions and summarising the relevant knowledge.

While the team using the existing market-standard tool struggled to formulate the relevant keywords to optimise their searches (running into issues such as dated terminology), members of the jury from our co-organiser Swerea SICOMP were particularly impressed by the papers identified by the team using Iris.AI. More specifically, that team found papers on silicon-based nanoparticles and on a distributed health-monitoring system for reusable liquid rocket engines. These two key avenues of research could bring us a lot closer to building reusable rockets made of composite materials!

This means that version 2.0 of Iris.AI, with its full-text search, unbiased mathematical architecture, neural topic modelling and visual navigation interface, is beginning to show significant added value for researchers looking to speed up the effective discovery and deployment of scientific research.

The scithon also allowed us to gather invaluable feedback from researchers around the importance of features like filtering (including search criteria refinements) and interaction (including discarding concepts presented by Iris.AI in results maps), which will be included in our near term product roadmap.

The next scithon will be organised on October 28th in Stockholm in collaboration with Iris.AI and Future Earth. If you are in the area and would like to join us to identify solutions to climate change, contact Maria at maria@iris.ai.

*The new version of Iris.AI will be launched on the 22nd of September. Be the first one to hear about it by signing up to our newsletter at www.iris.ai.

Interested in having a look at the Scithon material? Here’s the Dropbox link to view the results delivered by both teams as well as the full version of the problem statement.

What did Iris.AI think of it when she read the transcript? What was the talk about, in her mind?

Weeelll. Sexy? Really? The comparison between pleasure derived from finally understanding the right reasoning to solve a mathematical problem and sexual pleasure was not picked up. In fact, it went completely over young Iris.AI’s head.

Iris.AI today struggles with elements such as metaphors, figurative speech or ironic remarks built within scripts. But then, didn’t you back in the day?

Through overlaying supervised training efforts on top of the currently deployed unsupervised NLP algorithms, our goal is for her to get better at these things over time. Much like for humans, the more context the brain is exposed to, the better its ability to make sense of things.
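As a toy illustration of the overlay idea, and assuming nothing about Iris.AI's actual stack, consider "unsupervised" bag-of-words vectors with a small supervised layer (a nearest-centroid labeller built from human-provided examples) trained on top. All function names and the heuristics are ours, for illustration only.

```python
from collections import Counter
import math

# Toy illustration only: "unsupervised" bag-of-words vectors with a
# supervised nearest-centroid labeller layered on top. Every choice
# here is an assumption; Iris.AI's real pipeline is not public.

def vectorize(text: str) -> Counter:
    """Unsupervised step: represent a text by its raw word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(labelled: dict[str, list[str]]) -> dict[str, Counter]:
    """Supervised overlay: one centroid per human-provided label."""
    centroids = {}
    for label, texts in labelled.items():
        c = Counter()
        for t in texts:
            c.update(vectorize(t))
        centroids[label] = c
    return centroids

def classify(text: str, centroids: dict[str, Counter]) -> str:
    """Assign the label whose centroid is closest to the input text."""
    v = vectorize(text)
    return max(centroids, key=lambda lbl: cosine(v, centroids[lbl]))
```

The supervised labels here play the role of the "more context" mentioned above: the more labelled examples the centroids absorb, the better the layer on top can disambiguate what the unsupervised representation alone cannot.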

There is little doubt that we are in the middle of a surge in Artificial Intelligence research and development. New advances are being aggressively pursued in a wide variety of fields, from autonomous driving to automatic scheduling, to name a couple.

One of the aspects we spend the most time reflecting on at Iris.AI is who will capture the expected benefits of this ongoing technological progress. Or, to be more precise: what shape will the who-will-drive-it and who-will-reap-the-benefits equation take?

In the good ol’ days the answer to this type of question was relatively straightforward. Prospective inventors understood the intellectual-property legal framework well enough to figure out where they stood. They had to race to a finish line, largely leveraging their individual effort, and then a number of more or less predictable things would follow.

That traditional IPR system has come under growing criticism over time, but as professor James Robinson –co-author of the highly refreshing and thought-provoking essay ‘Why Nations Fail: The Origins of Power, Prosperity, and Poverty’– reiterated at a recent conference, that patent-based system could be categorized as an inclusive economic institution largely beneficial to society as a whole. It had the merit of aligning effort and reward.

Fast forward now to 2016 and you will find a digital environment where there is ever less clarity over what the equivalent system will look like in the future. In addition, a lot of very smart people happen to be expressing serious concerns about where this fascinating AI road might lead us in the coming years. The highly publicized launch of OpenAI is but a testimony to those concerns prompting action, and not an insignificant one!

One of the key things that have changed from then to now, we would contend, is that merit has become fuzzier and more complex. Leaving aside some sordid stories about ruthless appropriation of other colleagues’ work, the basic rule that held true in the past was that merit accrued to the inventor registering the patent.

When it comes to AI –and supervised machine learning in particular–, however, a new category of stakeholders is emerging: AI trainers. How will their role be recognized in distributing the value generated through future incremental algorithm advances?

These issues might seem a bit far-fetched today, but we believe they provide a solid foundation for the vision we have chosen to embrace: one of developing our AI in the open, with maximum levels of transparency and encouraging broad reusability to address different use cases that should bring about different social and economic benefits.

That is our commitment at Iris.AI, both to our staunch and growing supporter base and to the community at large: one of transparency and openness every step of the way, whilst we pursue our ambitious product roadmap going forward.

We hope we can contribute to a healthy discussion and, hopefully, that more people join us in promoting this vision. One thing we know for sure: we will not get very far unless we are joined by a critical mass of researchers, AI practitioners, corporate innovators, fellow startups, investors, advocates and knowledge seekers.