To build Project Aiur, we at Iris.ai decided to distribute free tokens to researchers and students through the Aiur Airdrop campaign. We did so simply because we wanted to reward individuals for the work they have already done, and because we believe that a strong community of supporters and contributors is key to fixing the challenges of the current scientific system.

The campaign exceeded all our expectations. In the past week, 10,000 Aiur tokens were claimed by researchers and students all over the world. Given the high demand we’ve decided to launch the Aiur Airdrop Vol. 2 towards the end of the month. It’s a shout-out to researchers, students and anyone who connects with the project and wants to contribute. Tokens will be distributed upon completion of simple tasks to be outlined on the Aiur Airdrop website.

If you missed the first opportunity, make sure you’re among the first ones to be notified about the second one through our newsletter! We’ll ping you once we’re all set.

Through Project Aiur we will distribute 10,000 AIUR tokens free of charge to students and researchers who have already contributed, or plan to contribute, to the world of open scientific knowledge – simply because the work that has already been done deserves to be rewarded, and because we believe in building a solid foundation for our community.

Project Aiur exists because science needs to be democratized. Luckily, we don’t need to start from scratch. A vast group of people has been working relentlessly towards the same goal for years: the supporters of open science, who, among other things, publish their papers open access, and by doing so make sure that anyone with internet access can leverage the results that have already been discovered.

Project Aiur would never have a chance without the efforts already made on that front, and as my co-founder Jacobo mentions, our long-term goals are very much aligned with those of the Open Science movement. To reward the current and future contributors of open scientific knowledge, we decided to launch the Aiur Airdrop Campaign. Just in case you haven’t heard the term before, an airdrop is a free distribution of tokens. Unlike most airdrops around, the Aiur Airdrop is available only to a specific audience and only to people who ask for it.

Through the Aiur Airdrop, we will give 10,000 AIUR tokens to early-adopter scientists, researchers and students. We currently expect the price of AIUR tokens to be initially set at ETH 0.01. 15 AIUR tokens will be made available to each researcher who has already published a paper open access. Students and non-published researchers can get 5 AIUR tokens each. An Ethereum wallet ID will be required from all participants. If you don’t know how to set that up, no worries: detailed instructions for creating a wallet are available here (scroll down to the FAQ section).
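For readers who like to see the numbers, the allocation above is easy to sketch. The figures below come straight from this post (a 10,000-token pool, 15 tokens per published researcher, 5 per student, and an expected initial price of ETH 0.01); the code itself is only an illustration, not part of any Iris.ai tooling:

```python
# Illustrative arithmetic only; all figures are taken from the post.
POOL = 10_000            # total AIUR tokens in the airdrop
RESEARCHER_GRANT = 15    # tokens per open-access-published researcher
STUDENT_GRANT = 5        # tokens per student / non-published researcher
PRICE_ETH = 0.01         # expected initial price per token, in ETH

def grant_value_eth(tokens: int) -> float:
    """ETH value of a grant at the expected initial price."""
    return tokens * PRICE_ETH

# At most how many participants of each kind could the pool cover on its own?
max_researchers = POOL // RESEARCHER_GRANT   # 666 published researchers
max_students = POOL // STUDENT_GRANT         # 2,000 students
```

So a researcher grant would be worth roughly 0.15 ETH and a student grant roughly 0.05 ETH at the expected initial price.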

To succeed, Project Aiur needs a rock-solid community of people who are driven to change the status quo and solve the problems we currently face in the world of science. We hope you will join the ride and invite your friends along!

On common goals, natural alliances and short-term compromises.

Some thirty years ago Tim Berners-Lee invented the World Wide Web – and he did so, to a large extent, so that society could share scientific knowledge much more freely and effectively, without artificial boundaries between disciplines and geographies. However, we seem to be only marginally closer to that goal today than we were back in 1989.

I have been deeply involved in the Open Data and Open Government movements since 2009. Through this neighboring activism, together with deep personal experience in a case of much needed access to specific medical knowledge, I have been a staunch supporter of the Open Science agenda for a good number of years now.

But what does that mean? Per its Wikipedia article, Open Science is “the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional.” That definition raises a number of questions. Here are a few:

(1) Are actions directed at improving access only for some members of society (and not all) compatible with Open Science practices? Can partial improvements ‘redeem’ themselves via wider accessibility goals over a longer timeframe?

(2) Are all actions towards making research, data and dissemination more accessible congruent or necessarily compatible with each other? What if a research team publishes the research article, but not the data? Does that count as Open Science?

(3) What about derivative works? Do those need to be equally accessible for an initiative to qualify as aligned with Open Science?

When dealing with broad-based, largely decentralized movements like Open Science, there is rarely anyone at the other end offering a clear-cut response to this sort of question. Here I try to offer some reflections in connection with Project Aiur.

When we launched Iris.ai back in 2015, with the stated ambition of building the world’s first AI-driven science assistant, we saw ourselves diving quite nicely into the broad movement demanding more openness around scientific knowledge. Almost three years and four full product releases later, today we can proudly say that we have provided new open tools to power more effective contextual scientific discovery for any user of scientific research world-wide – and we, of course, publish our own scientific research Open Access.

With Project Aiur, we now have the goal of creating an open, community-governed AI Engine for Knowledge Validation. In the brave new world we envision, any knowledge seeker should be able to input a scientific text, be it an existing research paper or a self-written problem statement, and query the system, Aiur, to get a number of related outputs, including a validation of the input’s hypotheses and building blocks against all of the world’s existing science.
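To make that envisioned flow concrete, here is a purely hypothetical sketch of what querying such an engine could look like from a knowledge seeker's point of view. Every name in it (`KnowledgeValidationEngine`, `ValidationReport`, the toy corpus) is invented for illustration and does not describe a real Aiur API:

```python
# Hypothetical sketch of a Knowledge Validation Engine interaction.
# A toy "corpus" maps paper ids to the claims each paper supports or refutes;
# a real engine would match hypotheses against the literature semantically.
from dataclasses import dataclass, field

@dataclass
class ValidationReport:
    hypothesis: str
    supporting_papers: list = field(default_factory=list)
    contradicting_papers: list = field(default_factory=list)

    @property
    def contested(self) -> bool:
        # A hypothesis is contested if any paper refutes it.
        return bool(self.contradicting_papers)

class KnowledgeValidationEngine:
    """Toy stand-in: validates a hypothesis against a tiny claim corpus."""

    def __init__(self, corpus):
        # corpus: {paper_id: (claims_supported, claims_refuted)}
        self.corpus = corpus

    def validate(self, hypothesis: str) -> ValidationReport:
        report = ValidationReport(hypothesis)
        for paper_id, (supports, refutes) in self.corpus.items():
            if hypothesis in supports:
                report.supporting_papers.append(paper_id)
            if hypothesis in refutes:
                report.contradicting_papers.append(paper_id)
        return report

# Usage: a seeker submits one claim from a self-written problem statement.
engine = KnowledgeValidationEngine({
    "paper-a": ({"X lowers Y"}, set()),
    "paper-b": (set(), {"X lowers Y"}),
})
report = engine.validate("X lowers Y")
```

The point of the sketch is the shape of the output: rather than a yes/no answer, the seeker receives the evidence on both sides of each building block of their input.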

Bearing that in mind, we believe our project’s goals are fundamentally aligned with the Open Science agenda. But, going back to the questions above, there can be friction too. Can a new ecosystem be built, establishing an entirely rethought set of economic incentives, with universal access granted to all knowledge for everyone at once?

We believe blockchain technology has immense potential to drive true industry and status quo disruption. In contrast with other, more gradual approaches limited to the incremental adoption of gold or green open access standards, blockchain developments such as the ones we propose with Project Aiur can redesign an entire ecosystem at remarkable speed, creating better-aligned incentives for all system participants. But not everything can be achieved simultaneously.

And herein lies the potential tension. Blockchain-based initiatives depend on the definition of new user communities for their success. And whilst these new communities might boast universal growth ambitions, their membership cannot, by definition, encompass the whole population at launch. Does that equate to parting ways with the Open Science movement? We believe not.

Also, importantly: can new initiatives only aspire to being subsidized through public funding mechanisms, including indirect taxation-based ones, or should they, on the contrary, experiment with creating new economic and user payment models aimed at attaining system sustainability? We are of the latter opinion, and we fail to see a wedge between our initiative and the broad Open Science movement here either.

In our view, via its explicit theory of change, Project Aiur is fundamentally aligned with Open Science advocacy. And this alignment should result in natural alliances cemented with other ecosystem actors around a common vision and shared long-term goals. We aim to bypass what we see as naïve, maximalist restrictions curtailing our project’s ambition to help generate better, more ubiquitous science.

With that said, we look forward to working together with the Open Science movement side-by-side to achieve a world where the research community generates better, more ubiquitous science. That is indeed the very reason why we exist!

The world needs science. Complex challenges ranging from climate change to preventive medicine require us to put our best minds together to solve them. And we do live in a world where more scientific knowledge is available to us than ever before. The irony is that our politicians doubt its legitimacy; our researchers, pressed for time and resources, lack the capacity to communicate across even the closest hallways of a university; and publishing houses generate profit by keeping vital results hidden behind heavy paywalls, aggressively pursuing those who breach them. In addition, the big software players are opaque and seemingly impossible to hold accountable for their data, their algorithms and the implications of these. In spite of Tim Berners-Lee creating the World Wide Web to share scientific knowledge, it seems we have come only marginally further today than we had back then.

Two years ago at Iris.ai we set out on quite the ambitious journey: to build what we call an “AI Scientist”, a system that can augment our human intelligence by connecting the dots of all of the world’s research. The months since have been filled with hard work, progress, setbacks and a lot of rejections, but also with so much love, support, understanding and encouragement from our wonderful community.

In these two years we’ve built a system that reduces the time a human needs to map out existing scientific knowledge by up to 90%, while increasing serendipity and interdisciplinary discovery. This is the first important step towards what we call the Knowledge Validation Engine, a core feature of the AI Scientist. We have a dedicated team that has built this, a number of budding university collaborations and a group of lovely investors who believe in us, and we’ve published several open access research papers. Most importantly, this past year we’ve seen an amazing community of AI trainers grow up around us: more than 8,000 individuals who volunteer their time to help Iris.ai learn. We’ve seen a desire to be part of our journey, a wish to help us achieve our mission, a community coming together to tell us that what we do is important. We have done our best to honor their help, but we have not done enough.

Transparency, openness and fighting bias have been our core values from day one, but we find ourselves not living up to our own standards. On one hand, we are torn between servicing big corporate clients and satisfying a European venture capital community single-mindedly focused on revenue (with some very honest impact-focused exceptions); on the other, we are trying to bring what we build to as many people as possible.

For us to truly make an impact in the world, it is not enough to build some great tools; we need to disrupt and uproot an entire industry. We cannot do that on our own: it’s a grassroots challenge. We need your help.

Scientific knowledge is arguably the ultimate decentralized application. It is not controlled by a central agent, it is independent of any individual node, it is exposed to public scrutiny and constant challenge, and it is preciously valuable to a large and fast-growing cohort of current and future users.

The technology development of this decade is thrilling, and today we are taking advantage of a new opportunity. Utilizing the decentralized nature of the blockchain, we have decided to give power to our community by tokenizing our functionalities: anyone who contributes to the tool generates tokens by doing so, tokens they can later use directly to access our core services.

An AI Trainer who annotates research papers, a coder who commits to our increasingly open source code, a user who reports a bug or a researcher who uses the Iris.ai Knowledge Validation Engine to publish their research Open Access: they will all be rewarded with tokens for contributing. The tokens can be used to access any of the Iris.ai tools. All token holders will have a voice, will have transparent insight into our core technology and will be asked to hold us accountable for openness and for de-biasing our algorithms and data. And as corporates purchase access to the Iris.ai tools and the algorithms of Iris.ai improve over time, the value of the tokens held by our community will increase.
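As a thought experiment, the reward-and-spend loop described above can be sketched in a few lines. The reward amounts and the access cost below are invented numbers, not actual Iris.ai token economics, and the in-memory ledger is a toy stand-in for any on-chain implementation:

```python
# Hypothetical token economics; all amounts are invented for illustration.
REWARDS = {
    "annotate_paper": 2,            # AI Trainer annotating research papers
    "code_commit": 3,               # coder contributing to open source code
    "bug_report": 1,                # user reporting a bug
    "open_access_publication": 10,  # researcher publishing Open Access
}
ACCESS_COST = 5                     # tokens to access an Iris.ai tool

class TokenLedger:
    """Toy in-memory ledger tracking community token balances."""

    def __init__(self):
        self.balances = {}

    def reward(self, member: str, contribution: str) -> None:
        """Credit a member for a recognized contribution type."""
        self.balances[member] = self.balances.get(member, 0) + REWARDS[contribution]

    def spend_on_access(self, member: str) -> bool:
        """Debit tokens for tool access; refuse if the balance is too low."""
        if self.balances.get(member, 0) < ACCESS_COST:
            return False
        self.balances[member] -= ACCESS_COST
        return True

# Usage: a contributor annotates a paper and commits code, then uses
# the earned tokens (2 + 3 = 5) to access a tool.
ledger = TokenLedger()
ledger.reward("ada", "annotate_paper")
ledger.reward("ada", "code_commit")
granted = ledger.spend_on_access("ada")
```

The closed loop is the point: value flows from contribution to access without a payment intermediary, which is what the blockchain layer would make auditable.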

We’re excited, thrilled and a little scared: as per usual, we’re traversing uncharted territory. And we cannot do this alone. Please join us in making science transparent, open and accessible.

Our white paper with the details will be made available early 2018. Until then we would love your ideas, thoughts and feedback — on tokens@iris.ai or our Telegram channel.

There is little doubt that we are in the middle of a surge in Artificial Intelligence research and development. New advances are being aggressively pursued in a wide variety of fields, from autonomous driving to automatic scheduling, to name a couple.

One of the aspects we spend the most time reflecting on at Iris.AI is who will capture the expected benefits of this ongoing technological progress. Or, to be more precise, what shape will the who-will-drive-it and who-will-reap-the-benefits equation take?

In the good ol’ days the answer to this type of question was relatively straightforward. Prospective inventors understood the intellectual property legal framework well enough to figure out where they stood. They had to race to a finish line, largely leveraging their individual effort, and then a number of more or less predictable things would follow.

That traditional IPR system has come under growing criticism over time, but as professor James Robinson – co-author of the highly refreshing and thought-provoking essay ‘Why Nations Fail: The Origins of Power, Prosperity, and Poverty’ – reiterated at a recent conference, that patent-based system could be categorized as an inclusive economic institution largely beneficial to society as a whole. It had the merit of aligning effort and reward.

Fast forward now to 2016 and you will find a digital environment where there is ever less clarity over what the equivalent system will look like in the future. In addition, a lot of very smart people are expressing serious concerns about where this fascinating AI road might lead us in the coming years. The highly publicized launch of OpenAI is but a testimony to those concerns prompting action, and not an insignificant one!

One of the key things that has changed from then to now, we would contend, is that merit has become fuzzier and more complex. Leaving aside some sordid stories about the ruthless appropriation of colleagues’ work, the basic rule that held true in the past was that merit accrued to the inventor registering the patent.

When it comes to AI – and supervised machine learning in particular – however, a new category of stakeholders is emerging: AI trainers. How will their role be recognized in distributing the value generated through future incremental algorithm advances?

These issues might seem a bit far-fetched today, but we believe they provide a solid foundation for the vision we have chosen to embrace: one of developing our AI in the open, with maximum levels of transparency and encouraging broad reusability to address different use cases that should bring about different social and economic benefits.

That is our commitment at Iris.AI, both to our staunch and growing supporter base and to the community at large: one of transparency and openness every step of the way, whilst we pursue our ambitious product roadmap going forward.

We hope we can contribute to a healthy discussion and, hopefully, that more people join us in promoting this vision. One thing we know for sure: we will not get very far unless we are joined by a critical mass of researchers, AI practitioners, corporate innovators, fellow startups, investors, advocates and knowledge seekers.

Yesterday we published our list of the TOP 3 talks at TED 2016. This task turned out to be challenging simply because Iris.AI loved the vast majority of the talks, making it hard for us to pick just three of them.

Hence, we expanded the concept of TOP 3 to TOP 10. Here’s part 2 of the series shedding light on the future of flying objects and understanding the code of life. It comes with the open access research articles selected by our baby AI.

Hope you enjoy these picks!

#4 Riccardo Sabatini: Understanding the code of life

Now that we can read the human genome, what can we use it for?

Riccardo Sabatini from Human Longevity Inc introduces the TED audience to, hands-down, the most relevant code determining the future of humanity: the human genome. He literally showcases the DNA of one of the first humans ever sequenced by bringing every single detail of that code onto the stage in the form of 175 printed books.

It took more than 40 years to read the human genome after its structure was first pictured in the 1950s. Since then the entire field of research has achieved several breakthroughs. Sabatini’s is one of the leading teams working on this topic: at their company they can now predict height, eye color, skin color and even facial structure from a person’s genome.

But that is just the starting point for more important questions: how your body works, how it ages, how disease develops in your body, how drugs work in your body – if they work, that is. These advances make it possible to move from a statistical approach, where you as a human being are depicted as an average, to a personalized one, where machines read the 175 books on you to get an exact understanding of who you are.

Iris.AI cracked the code of Sabatini’s talk relatively well – although hearing the word “apparently” repeatedly made her think that it’s an essential part of the story, too… 😉 Have a look at her selection of research articles on genome sequencing and 3D-printing, for example: https://the.iris.ai/map/6695.

#5 Raffaello D’Andrea: Dazzling flying machines of the future

When will we need to implement revolutionary new rules for flying machines?

A group of “fireflies” (i.e. 33 micro-quadcopters) dancing together in unison above the TED audience and a flying lampshade (i.e. a two propeller flying machine) that looks like the one in your parents’ living room – these are just some of the examples that Raffaello D’Andrea, a professor at ETH Zurich, demoed on stage to expand the audience’s understanding of what exactly is meant by autonomous flight and what needs to happen before the promise of these flights can fully materialize.

“Inspection, environmental monitoring photography, film and journalism are just some of the potential applications for autonomous flight. Before we can fully leverage this potential and welcome the flying objects into our everyday lives, they will need to become extremely safe and reliable”, D’Andrea explains.

That’s the goal D’Andrea’s team is working towards. Building drones that can hover and resist disturbance, move anywhere in space irrespective of where they are facing, and recover if anything goes wrong – like a motor, a propeller or even a battery pack failing – these are just some of the projects his research team is working on and delivering results on. So, we might need to implement revolutionary new rules for flying machines sooner than we thought!