Posts tagged publishing

The Standard Ebooks project is a volunteer-driven, not-for-profit effort to produce a collection of high-quality, carefully formatted, accessible, open-source, and free public domain ebooks that meet or exceed the quality of commercially produced ebooks. The text and cover art in our ebooks are already believed to be in the public domain, and Standard Ebooks dedicates its own work to the public domain, thus releasing the whole ebook files themselves into the public domain.

Science takes place in a Madisonian system, where competing forces push us from all directions. The good guys don’t always win. The powerful guys push hard. The direction science moves depends on the sum of the forces on it. A well-meaning push in the right direction doesn’t always end up moving science that way. The reliability project and the open science foundation have recently come into a bit of power and public influence. So far, they’ve pushed for greater reliability and more careful methods. But they’ve yet to ask “Why are scientists using poor methods?”, “What are the forces that drive reliability down?”, and “Why were past methods more reliable?” The surge in retractions is relatively new. There wasn’t an OSF in 1957, and yet, somehow, my grandfather was able to publish a reliable effect.

Imagine, for a moment, if it were possible to provide access not just to those books, but to all knowledge for everyone, everywhere—the ultimate realisation of Panizzi’s dream. In fact, we don’t have to imagine: it is possible today, thanks to the combined technologies of digital texts and the Internet. The former means that we can make as many copies of a work as we want, for vanishingly small cost; the latter provides a way to deliver those copies to anyone with an Internet connection. The global rise of low-cost smartphones means that group will soon include even the poorest members of society in every country. That is to say, we have the technical means to share all knowledge, and yet we are nowhere near providing everyone with the ability to indulge their learned curiosity.

I started the website because there was a great demand for such a service in the research community. In 2011, I was an active participant in various online communities for scientists (i.e. forums, the technology preceding social networks and still surviving to the present day). What all students and researchers were doing there was helping each other download literature from behind paywalls. I became interested and very involved. Two years before, I had already had to pirate many paywalled papers while working on my final university project (which was dedicated to brain-machine interfaces). So I knew well how to do this and had the necessary tools. After sending tens or hundreds of research papers manually, I wanted to develop a script that would automate my work. That’s how Sci-Hub started.

A researcher in Russia has made more than 48 million journal articles - almost every single peer-reviewed paper ever published - freely available online. And she’s now refusing to shut the site down, despite a court injunction and a lawsuit from Elsevier, one of the world’s biggest publishers. For those of you who aren’t already using it, the site in question is Sci-Hub, and it’s sort of like a Pirate Bay of the science world. It was established in 2011 by neuroscientist Alexandra Elbakyan, who was frustrated that she couldn’t afford to access the articles needed for her research, and it’s since gone viral, with hundreds of thousands of papers being downloaded daily. But at the end of last year, the site was ordered to be taken down by a New York district court - a ruling that Elbakyan has decided to fight, triggering a debate over who really owns science.

A forest has been planted in Norway, which will supply paper for a special anthology of books to be printed in one hundred years’ time. Between now and then, one writer every year will contribute a text, with the writings held in trust, unpublished, until 2114.

For scholarly publishing, the secret sauce - the essential thing - is a mechanism for review. Even open archives like arXiv.org have review in the sense of only letting people who are endorsed by an existing community post, but here we’ll assume that we’re doing something more like traditional academic publishing - reviewing something called a paper (it could actually be code, or a figure, or a paragraph but let’s stick to papers as that’s easier to think about). Peer review at present is something that belongs to a journal; it’s a set of rules and procedures, written and enforced by an editorial board and supported by a lot of email and some fairly wonky software.

The publishers Springer and IEEE are removing more than 120 papers from their subscription services after a French researcher discovered that the works were computer-generated nonsense. Over the past two years, computer scientist Cyril Labbé of Joseph Fourier University in Grenoble, France, has catalogued computer-generated papers that made it into more than 30 published conference proceedings between 2008 and 2013. Sixteen appeared in publications by Springer, which is headquartered in Heidelberg, Germany, and more than 100 were published by the Institute of Electrical and Electronics Engineers (IEEE), based in New York. Both publishers, which were privately informed by Labbé, say that they are now removing the papers. Among the works was, for example, a paper published as a proceeding from the 2013 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering, held in Chengdu, China. (The conference website says that all manuscripts are “reviewed for merits and contents”.) The authors of the paper, entitled ‘TIC: a methodology for the construction of e-commerce’, write in the abstract that they “concentrate our efforts on disproving that spreadsheets can be made knowledge-based, empathic, and compact”. (Nature News has attempted to contact the conference organizers and named authors of the paper but received no reply; however, at least some of the names belong to real people. The IEEE has now removed the paper.)

It’s a beautiful business to be in: publish research that you took no part in, claim the copyrights to the results of that research, publish the research in a very expensive journal, publish reprints at exorbitant fees and finally, when a more efficient distribution method appears, get rid of all the costly components of the business but keep the prices the same. According to one person I spoke to who is knowledgeable about the publishing field, the profit margins dwarf even those of the publication of pornography.

Journal of Errology (JoE) is a research repository that enables sharing and discussion of those unpublished futile hypotheses, micro research papers, errors, iterations, negative results, false starts, shortfalls, and other original stumbles that are part of a larger successful research effort in the biological sciences.

Philip M. Parker, Professor of Marketing at INSEAD Business School, has had a side project for over 10 years. He’s created a computer system that can write books about specific subjects in about 20 minutes. The patented algorithm has so far generated hundreds of thousands of books. In fact, Amazon lists over 100,000 books attributed to Parker, and over 700,000 works listed for his company, ICON Group International, Inc. This doesn’t include the private works, such as internal reports, created for companies or licensing of the system itself through a separate entity called EdgeMaven Media.

In a memorable blogpost, Gowers announced that henceforth he would not be submitting articles to Elsevier’s journals and that he would also be refusing to peer-review articles for them. His post struck a nerve, attracting thousands of readers and commenters and stimulating one of them to set up a campaigning website, The Cost of Knowledge, which enables academics to register their objections to Elsevier. To date, more than 9,000 have done so. This is the beginning of something new. The worm has finally begun to turn. The Wellcome Trust and other funding bodies are beginning to demand that research funded by them be published outside paywalls. Some things are simply too outrageous to be tolerated. The academic publishing racket is one. And when it’s finally eliminated, Professor Gowers should get not just a knighthood, but the Order of Merit.

PUBLISHING obscure academic journals is that rare thing in the media industry: a licence to print money. An annual subscription to Tetrahedron, a chemistry journal, will cost your university library $20,269; a year of the Journal of Mathematical Sciences will set you back $20,100. In 2011 Elsevier, the biggest academic-journal publisher, made a profit of £768m ($1.2 billion) on revenues of £2.1 billion. Such margins (37%, up from 36% in 2010) are possible because the journals’ content is largely provided free by researchers, and the academics who peer-review their papers are usually unpaid volunteers. The journals are then sold to the very universities that provide the free content and labour. For publicly funded research, the result is that the academics and taxpayers who were responsible for its creation have to pay to read it. This is not merely absurd and unjust; it also hampers education and research.