Tuesday, March 31, 2015

Librarians have been at the forefront of the open access
movement since the beginning, not least because in 1998 the Association of
Research Libraries (ARL) founded the
Scholarly Publishing and Academic Resources Coalition (SPARC). Today SPARC is arguably the world’s
most active and influential OA advocacy organisation.

Marcus Banks

It is important to note that librarians’ interest in open
access grew primarily out of their frustration with the so-called “serials crisis” — the
phenomenon that has seen the cost of scholarly journals
consistently grow at a higher rate than library serials budgets.

SPARC’s initial strategy, therefore, was to encourage the
growth of new low-cost, non-profit, subscription journals able to compete with the
increasingly expensive ones produced by profit-hungry commercial publishers. As
SPARC’s then Enterprise Director Rick Johnson wrote in 2000,
“In 1998, after years of mounting frustration with high and fast-rising
commercial journal prices, a group of libraries formally launched SPARC to
promote competition in the scholarly publishing marketplace. The idea was to
use libraries’ buying power to nurture the creation of high-quality, low-priced
publication outlets for peer-reviewed scientific, technical, and medical
research.”

In the wake of the 2002 Budapest Open Access
Initiative (an event attended by Johnson), however, SPARC began to focus
more and more of its efforts on open access. The assumption was that this would
not only allow research to be made freely available, but finally resolve the
affordability problem faced by the research community. As the BOAI text expressed it, “the
overall costs of providing open access to this literature are far lower than
the costs of traditional forms of dissemination.”

Ironically, despite their high-profile advocacy for open
access, many librarians have proved strangely reluctant to practise what they
preach, and as late as last year calls were
still being made for the profession to start “walking the talk”.

On the other hand, many librarians have embraced OA, particularly medical librarians. In
2001, for instance, the Journal of
the Medical Library Association (JMLA) began to make its content freely
available on the Internet. And in 2003 Charles Greenberg, then at the Yale University Medical Library, launched
an open access journal with BioMed
Central called Biomedical Digital
Libraries (BDL). One of the first to join the editorial board (and later to
take over as Editor-in-Chief) was Marcus Banks, who was then working
at the US National Library of Medicine.

Four years later, however, BDL became a victim of BMC’s decision to increase the
article-processing charges (APCs) it
levies. This meant that few librarians could any longer afford to
publish in the journal, and submissions began to dry up. Despite
several attempts to move BDL to a different
publishing platform, in 2008 Banks had to make the hard decision to cease
publishing the journal.

What do we learn from BDL’s
short life? In advocating for pay-to-publish gold OA, did open access advocates
underestimate how much it costs to publish a journal? Or have publishers simply
been able to capture open access and use it to further ramp up what many
believe to be their excessive profits? Why has JMLA continued to prosper under open access while BDL has withered and died? Was BDL unable to compete with JMLA on a level playing field? Could the
demise of BDL have been avoided? What, if anything, does the journal’s fate
tell us about the future of open access?

I discuss these and other questions with Banks below. The
issue of affordability, it seems to me, is particularly apposite, as librarians
are having to confront the harsh truth that, far from reducing the costs of
scholarly communication, open access appears more likely to increase
them.

It turns out that Banks has an interesting perspective on
this issue. As he puts it, “At the risk of frustrating many librarian
colleagues, I must say that the framing of open access as a means of saving
money has been and remains a serious strategic error.”

He adds, “A fully open access world may not save any money
and could cost more than we pay now — this world would include publication
charges as well as payments for tools that mined and sorted the now completely
open literature. That’s fine with me, because in this world we’d be getting
better value for money.”

The interview begins …

RP: Can you say something about your background and career to date?

MB: I have been a
librarian since 2002. My first position after earning my Masters of Library and
Information Science was as an Associate Fellow at the US National Library of
Medicine (NLM), from 2002 to 2004. During
this time NLM was developing PubMed Central (PMC)
as a freely accessible digital archive of biomedical literature.

Growth at PMC was slow, as deposits to it were voluntary — this
was years before PMC became the required repository under the terms of the NIH
Public Access Policy. Publishers rightly worried that a fully open access
archive would challenge their business model, a concern that persists today.

Watching this debate unfold raised my awareness of the
various agendas in scholarly publishing, as well as of the potential for open
access publishing to expand the reach of biomedical literature.

RP: What are you doing currently?

MB: My most
recent position was as the Director of Library/Academic & Instructional
Innovation at Samuel Merritt University
in Oakland, California. Since then my wife and I have returned to the Chicago
area for both personal and professional reasons. I am currently pursuing
employment while building a consulting practice devoted to transformation in
scholarly communication. Even with “gainful employment” I would continue the
consulting.

RP: You said that the growing debate about scholarly communication made
you aware of the potential for open access publishing. You were later involved
in the creation of an open access journal called Biomedical
Digital Libraries, which I think was launched in 2004 but
ceased operations in 2007. Can you say what your role at the journal was, why
the journal was created, and why it did not succeed?

MB: Charles Greenberg, then at the
medical library of Yale, launched Biomedical
Digital Libraries (BDL) at the Medical Library Association meeting in May
2003. It was an open access title published by BioMed Central (BMC). His first task was to recruit an
editorial board, and I joined as an Associate Editor. Our first papers
appeared in 2004. As Charlie moved on to other projects, I became co-editor and
then sole Editor-in-Chief in 2006.

Thursday, March 26, 2015

Guest Post by Professor David Price, Vice-Provost (Research), University College London

David Price

Research Councils UK (RCUK) has today released the Report of an independent review body on the implementation of its Open Access
policy.

It is not a review of Open Access policies and their
implementation in the UK. The Report is quite clear about this – it is a review
of the impacts of the implementation of the RCUK Policy on Open Access for its funded research outputs. This is a review which is being undertaken at
an early stage in the history of that OA policy. As such, there is much that is
good and helpful about the Report’s findings and I will touch on some of these
points below.

Overall, however, the Report is a missed opportunity to look
at the deeper implications of the move to Open Access in the UK. There are
broader issues, in many of which RCUK is a leader, which would have benefited
from a more confident treatment by the panel. There is still a great deal of
work to do!

The Report looks in some detail at the question of
embargoes. While the short embargoes of 6 and 12 months have been taken up by the
research community, there is still unhappiness. As the Report says, some of
this is due to poor communication of the policy and resulting confusion in the
academic community. Another aspect of it, however, is a genuine concern among
some communities, for example History scholars, that short embargo periods are
harmful to academic freedom to choose where to publish. RCUK needs to look at
the issue of embargo periods again.

The Report also highlights a number of problems with the
RCUK recommendation of a CC-BY licence for research outputs. If this is the RCUK position, then compliance with
the policy would require academics to use this licence. In its review of policy
implementation, the Report shows that this has not always been the case. The
Report also, quite rightly, highlights the unhappiness of the Arts and
Humanities community with the requirement for a CC-BY licence. From the evidence
presented, it looks as though this community feels they are being made to dance
to a biomedical and scientific tune, where CC-BY is more acceptable. The Report
is right to highlight the need for further investigation.

The Report has further nuggets of wisdom. It highlights the
administrative costs for universities of implementing the RCUK Open Access
policy, building on the London Higher Report supported by SPARC Europe. It also suggests that university and publisher
systems should be developed to accommodate ORCID (for author IDs) and FundRef (for
funder information), which will help monitor implementation of the policy in
future years.

Why are the costs in the final column for Hybrids so much
bigger than the rest? It was beyond the remit of the review to investigate this
in detail, but this question does need further study. RCUK derives its money
from public funds and this is a question which the taxpayer would certainly
have a right to understand in more detail.

While the Report contains much that is useful and
thought-provoking, there are some big gaps that it should have covered. The
Report consciously limits itself to the implementation of the RCUK policy, and
does not look at the wider UK Open Access scene in detail. This is a mistake
because the RCUK position would be more intelligible if such a wider comparison
had taken place. The Report says that the RCUK policy position is broadly
complementary to other UK OA policies. Any misunderstandings on this front may
be due, it says, to poor communication of the policies. Really? Are there many
universities who believe this? The new HEFCE policy for REF 2020 seems to me to
be quite different from the RCUK policy, and it is the REF policy that is
capturing university attention at the moment. It is only the REF policy which
is insisting on ‘deposit on acceptance’. And it is the RCUK policy which
encourages Gold OA publications and requires the use of a CC-BY licence. The
REF policy is neutral, for example, as to the colour (Gold or Green) of the OA
output. To say that the RCUK and REF policies are complementary defies logic.
The RCUK Review panel needs to think this one through again.

The Report highlights the shortcomings of universities in
gathering data for the review. It is right to do so. There needs to be more
accurate reporting next time. In that respect, I would have expected the Review
panel to draw up a template for reporting, addressing the issues it identified
as weaknesses in the first set of reports. The Report recommends that a
template be constructed, but why (when this is such an important issue) did it
not draw up this template itself? Not good practice.

Finally, the Report cautiously advocates that RCUK look at
the level of funding it gives to fund OA dissemination in future years. A
welcome recommendation, but rather weak. Wellcome funds all OA outputs that
emanate from its funded research. Why did the RCUK review not make a similar
recommendation? As things stand, once RCUK funds are exhausted, universities
either have to find monies for APCs themselves or advise authors to take the
Green route instead. This is unsatisfactory and will lead to a
fragmented publication framework for RCUK research which is in no-one’s
interests.

To conclude: the independent Review panel which has produced
the review of the implementation of the RCUK Open Access policy has only half
done its job. It has produced a detailed analysis about implementation, which
is useful. But, in walking away from broader policy issues, it leaves many
questions unanswered which should have been tackled. Will future reviews take
these issues forward? They should.

Sunday, March 22, 2015

Contrary
to what one might expect, not all the items in open access repositories are publicly available. Estimates of the percentage of the content in repositories that is not in fact open access tend
to range from around 40% to 60%. This will include bibliographic records containing only metadata, plus full-text documents that have been placed on “dark deposit” — i.e. documents that are present
in the repository but not freely available, either because they are subject to
a publisher’s embargo or because the author(s) asked for the full-text to be deposited
on a closed access basis. To enable researchers to nevertheless obtain copies of items that have been placed on
dark deposit, OA advocates developed the request eprint button. But how does the button work, and how
effective is it? Below Eloy Rodrigues, Director of Documentation Services at the University
of Minho, discusses the issues,
and outlines the situation at UMinho.

Eloy Rodrigues

RP: How many scholarly items are currently deposited in the University
of Minho’s institutional repository RepositóriUM, and
what are the growth rates?

ER: Currently we have more
than 32,600 items in RepositóriUM, with around 5,000
being deposited yearly since the upgrade
of our policy (effective since January 2011). Since 2011 more than 20,000
items have been deposited.

RP: Of these, how many are full text and freely available to the
public (i.e. they are not metadata alone, not currently subject to publisher
embargo, and not restricted to members of the university — as in requiring
login)?

ER: Almost 26,000 (25,932)
are freely available, which is more than 79% of the total.

RP: As I understand it, repository users can ask that a private copy of
any document on dark deposit be made available to them by using the request eprint
button built into the repository. In 2010 you co-authored a paper about this button,
which was then more frequently called the “Fair Dealing” button. Your paper included
data on “approval success rates” (i.e. the frequency with which authors
sanctioned a copy of their work being made available to those requesting it).
These data came from three universities: Southampton, Stirling and UMinho (your
institution). The approval success rates were, respectively, 47%, 60% and 27%,
with many requests simply ignored or lost. How has the situation at the
University of Minho changed since then? What are the current figures?

ER: The overall response
rate has remained basically the same, or even a little lower. In 2014 we had a
global response rate of around 23%, with 21% sending the requested documents
and 2% denying the request.

However, the global response rate is highly “biased” by the effect of
theses and dissertations. Theses and dissertations (T&Ds) account for around
21% of the total number of documents in RepositóriUM, and around 30% of the
total number of restricted or embargoed access documents (currently around 6,700),
but I estimate (based on some small “samples”) they represent far more than 50%
(probably around 60% to 70%) of the requests received.

Because most authors of T&Ds don’t maintain any connection with the
university after completing their thesis and dissertation, and they often
change the email that was registered at the time the document was deposited in the
repository (which is the email used to send the requests to authors), the T&Ds
response rate is very low (probably lower than 10%), and that obviously affects
the global response rate.

We really don’t have data on this (we would need to “manually” look
into the request logs we have, as we are not registering the document type from
the requests), but based on some anecdotal evidence I estimate the response rate
from UMinho members (professors and researchers) will be at least two times higher
than the global average. So, excluding T&Ds, I “guess” the current response
rate will be around 50%, or even a little bit higher (from 50% to 60%).
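The back-of-the-envelope arithmetic behind that guess can be sketched in a few lines of Python. The input figures (a 65% T&D share of requests and an 8% T&D response rate) are illustrative assumptions drawn loosely from the estimates above, not measured data:

```python
# The global response rate is a weighted average of the T&D rate and the
# rate for everything else; solving for the latter checks the ~50% guess.

def implied_other_rate(global_rate, td_share, td_rate):
    """Solve global = td_share*td_rate + (1 - td_share)*other for other."""
    return (global_rate - td_share * td_rate) / (1 - td_share)

other = implied_other_rate(global_rate=0.23, td_share=0.65, td_rate=0.08)
print(round(other, 2))  # ≈ 0.51, consistent with the "around 50%" estimate
```

Under these assumed inputs, a 23% global rate does indeed imply a non-T&D response rate of roughly 50%.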

Eprint fatigue

RP: In 2010 you made the following comment on a
blog: “Our experience is that authors get ‘tired’ of replying to copy requests,
especially when requests are very frequent. The consequence is that some start
not replying at all, and others ask to change to open access
articles/papers/theses there were in closed/embargoed access. We had more than
20 of those requests just on the last year…” Is that still your experience, or
have authors’ attitudes and behaviour changed since then?

ER: In the last couple of
years I haven’t had regular conversations or feedback from Minho researchers
about the copy requests, in the way I did in the first few years after the
introduction of the button. But I know we still receive frequent (approximately
on a weekly basis) requests to change the access status of closed/embargoed
documents to open access.

RP: Presumably if a paper is on closed access as a result of a
publisher embargo it is not possible to change the status to open access?

ER: Presumably yes. But
there is a wide variety of behaviour from UMinho authors. While some are
confident and fearless, others are fearful at the time of deposit, especially with
papers published in journals or conference proceedings which do not have well
formalised self-archiving/OA policies. Afterwards they tend to become less
timid about their publications.

We inform authors about possible access permissions or restrictions to
their deposited publications, but we respect their wishes about the access
status.

RP: I assume most institutional repositories now have a request eprint
button. But I think not all IRs implement the button in the same way. Can you
talk me through the process at RepositóriUM once a user hits the eprint button?
Is it fully automated, or is there some manual intervention? What happens
behind the scenes when a user requests a copy of an item in the repository?

ER: The way we implement the
process in RepositóriUM (and I assume it will be similar in other DSpace based
repositories, as the request-copy addon to DSpace was developed here at UMinho)
is the following: When users hit the button (actually it is a closed access
logo) and fill in a form with their name and email (and an optional message),
an automatic email is immediately sent to the author.

That message contains a token URL, directing the author to a
RepositóriUM page, where there are two buttons – Send copy / Don’t send copy.
After choosing one of the options another page is displayed with a template
message, which can be edited by the replier. The final step is hitting the send
button.

So, in summary, the text is always provided by the author (and not
automatically or by the repository staff), and the process requires just 3
clicks, plus editing the reply message if the author chooses to do so.
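The token-based flow described above can be sketched as follows. This is only a minimal illustration of the mechanism, not the actual DSpace addon (which is written in Java); the function names, fields, and URL are all invented for the example:

```python
import secrets

REQUESTS = {}  # token -> request details (stands in for the repository database)

def submit_request(item_id, requester_name, requester_email, message=""):
    """Reader hits the closed-access logo and fills in the request form."""
    token = secrets.token_urlsafe(16)
    REQUESTS[token] = {
        "item": item_id, "name": requester_name,
        "email": requester_email, "message": message, "status": "pending",
    }
    # The repository would now email the author a link such as this one:
    return f"https://repositorium.example/request/{token}"

def author_reply(token, send_copy, reply_text=""):
    """Author follows the token URL and clicks Send copy / Don't send copy."""
    req = REQUESTS[token]
    req["status"] = "sent" if send_copy else "denied"
    req["reply"] = reply_text
    return req["status"]
```

The point of the token URL is that the author can respond without logging in to the repository, which keeps the author's side of the process down to the three clicks described.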

RP: Advocates for use of the button believe that it is a much more
effective way for researchers to get access to papers on dark deposit than,
say, by directly emailing the authors. I note a paper
published in PLOS ONE in 2011 tested the email approach. A group of researchers
sent out a number of email requests for papers in the area of HIV vaccine
research. The success rates they reported were between 54% and 60%, which is
perhaps a little higher than the rates described in your 2010 paper. What do we
make of that?

ER: I can only speculate
about it. The button simplifies the process, both for the requester (who only
needs to make two clicks and, if they want, customise a model message to the
author) and for the author (who receives an email from the repository and just
needs to make three clicks, and if they want customise a reply message). But
maybe, at least for some people, this may appear completely impersonal and they
prefer the more personal and human direct email contact.

That said, I’m not convinced that email contact will get a higher response
rate than the button, and you cannot infer that from the PLOS paper. To test
that hypothesis you would need to test both approaches for the same universe of
publications and authors.

RP: The PLOS ONE study reported that two thirds of the papers (where
the author responded positively) were received “on the same day or the next.
However, the other third of respondents took on average 11 days to reply
(median 3 days, maximum 54 days).” Do you have any information on turnaround
time for those who use the button at UMinho?

ER: We just have data on the
mean response time. In 2014 the mean response time was nearly six days for accepted
requests, and 3.5 days for rejected requests. Again I think this result may be
slightly biased by a higher response time from T&Ds authors, but that would
need to be investigated.

User friendly?

RP: On March 2nd I tried to access a paper in RepositóriUM
called “Academic job satisfaction and motivation: findings from a nationwide
study in Portuguese higher education”. On trying to open the paper I was told
that it was on restricted access and invited to request a copy of it, which I
did. As the image below shows, I was informed that my request had been
successful. However, I never heard anything further, and was left in the dark
as to what had happened to my request. It is not a very user-friendly system is
it? Might not most readers be inclined to give up after even a couple of such
failed attempts to get a paper?

ER: Yes, I recognise that.
It is not very user friendly, and people may be inclined to give up after a
couple of “non-answers”. We’ve focused the development of the addon on making
it very easy and simple to use by external readers and especially by UMinho
authors.

At the time of development we really didn’t consider the issues around
monitoring, reporting, collecting statistics on the use of the button, or
providing feedback to requesters. And after the initial development we have really
just made some minor improvements/adjustments (like spam control through a
captcha feature) and upgraded it to the newest DSpace releases.

RP: My experience with the ORBi
repository at the University of Liège was somewhat different. I tried the
button there twice. On both occasions I received the full text (or a link to
it) within 24 hours. Paul Thirion, Head
librarian at the University of Liège, reports that the approval success rates
for requests made using the button built into the ORBi repository are higher
than average, ranging from 67% in 2009 to 81% in 2014. Do you have any sense of
why Liège is more successful at getting researchers to approve eprint requests
than other universities?

ER: I really don’t know. I
imagine that, apart from some subjective aspects (like cultural and organisational
differences and/or a different relationship to and perception of open access
and the institutional repository between researchers at Liège and Minho etc.),
there are some objective factors to explain it: probably the T&Ds effect is
not present at ORBi, and I can speculate that there is a difference in the percentage of closed/embargoed access documents in ORBi (which I think is
higher than in RepositóriUM), and maybe there is also a lower percentage of
documents for which the access status is changed to open after deposition. [RP: Paul Thirion
reports that around 62% of the documents in ORBi are full-text].

To what end?

RP: The paper you co-authored in 2010 goes on to say, “Given a
significant number of button requests which are ignored or lost, one might be
tempted to assume that it has not worked. However, this is not true. The
principal impact of the Button has been to enable the adoption of institutional
ID/OA [immediate-deposit/optional-access]
mandates.” This left me wondering as to the point of the button. I had
assumed the sole purpose was to ensure that those who want access to papers
under publisher embargo can nevertheless obtain a copy of them. For instance, in
commenting on the open access policy being introduced by the Higher Education
Funding Council for England Stevan Harnad described the
purpose of the button as being to “tide over the usage needs of UK and
worldwide researchers for the deposited research during the allowable embargo.”
Your paper, however, suggests that the objective is rather to encourage funders
and institutions to introduce OA mandates. What are your views today on the
purpose of the button?

ER: I think the introduction
of the button had both the immediate and practical objective of providing
access to papers which were deposited with temporary (embargo period) or
definitive access restriction, and the more strategic objective of helping in
the introduction of mandates (by creating a mechanism that allows mandating
universal deposit, regardless of eventual access restrictions, while offering a
“second class” access procedure).

In my opinion both purposes remain important today.

RP: How would you describe the success of the button today, and what do
you predict for its future success?

ER: I don’t know what the
global response rate to the button requests is.
But even if it is closer to the UMinho 50% estimate than the Liège 80%
result, it means that tens or hundreds of thousands of papers were made
available to many readers who would otherwise not have access to them.

So, I think the button is relatively successful, both in actually
providing access to closed/embargoed access publications and in helping
institutions and funders to define self-archiving mandates, without pushing
them into spending yet more money on APCs on top of their
subscription costs.

For the immediate future, I predict the button will remain useful and
hopefully more successful, as the number of mandatory policies, as well as
embargoes, grows.

RP: One thing I find striking is that advocates for the button seem to
have done very little research into its efficacy. Why do you think that is?

ER: I can only reply for
myself and for UMinho’s RepositóriUM. I think the first reason is that our main
focus is on managing and running the repository as a critical service of the
university, with limited capacity to do research and development. So we use
that limited capacity for very practical and applied developments, not for “non-applied
research”.

The second reason is that, despite being important and useful, the
button is not in our top three priorities for work on the repository. We’ve
focused much of our effort on improving the repository’s interoperability and
integration with other services/systems, on facilitating and simplifying the
deposit/self-archiving of publications into the repository, on collecting and
providing usage statistics to authors of publications in the repository, on
guaranteeing/improving repository visibility in the global search engines
(especially Google), etc. All those issues have higher strategic relevance for
us given the current state of policy implementation and repository development
at UMinho.

RP: Do you think there is a danger that if the button were to prove too
successful publishers might seek to curtail or prevent its use in some way?

ER: I don’t think so. It is
at least very questionable that publishers would have any solid legal ground to
act against the button use, and, on the other hand, it would give them very bad
publicity. So, from a cost-benefit point of view, I think the button is not a
high priority for publishers either.

RP: Thank you for taking the time to answer my questions.

_____

I am currently working on a longer
document about dark deposit and the request eprint button. As such, I would welcome people’s thoughts about and experiences of these two things. I can be contacted here.

Sunday, March 08, 2015

As
the open access train rolls towards the future, more and more traditional scholarly
publishers are jumping on board. When and how they do so is not an easy
decision—as Wiley’s Alice Meadows pointed
out recently on the Scholarly Kitchen.
Nevertheless, OA is now inevitable, so the plunge has to be taken sooner or
later.

Collabra is a mega journal that will
initially focus on three broad disciplinary areas (life and biomedical
sciences, ecology and environmental science, and social and behavioural
sciences), and then expand into other disciplines at a later date. Collabra is expected
to publish its first articles in the next month or so.

Luminos is an open access monograph publisher that
will publish its first book this autumn.

What
is the context in which UC Press’ move needs to be seen?

The key
challenge open access poses for publishers is how to develop a workable business
model. After all, since OA requires that research publications are made freely
available, the traditional subscription model no longer works. Understandably, therefore, publishers have concluded that the costs of producing OA journals
and books will have to be recovered at the author’s side of the process (via author-side
fees) rather than at the reader’s side (via subscriptions).

The question therefore is: how
can this be done in a way that it is both workable and sustainable? Today there
are two primary ways of attempting to do this—the article-processing charge (APC) and the membership scheme.

In the former
case, the onus for finding the funds needed to pay to publish falls on authors.
This means that if they cannot persuade their institution or funder (assuming they
have one) to pay the bill, they may have to pay it themselves. (Most OA
publishers advertise fee waivers, but it is not entirely clear how many
researchers benefit from these, especially those offered by commercial publishers).

In the latter
case, the author’s institution takes on the responsibility—by bulk-buying APCs
(publication rights if you like) for all its researchers. Normally, this means the
institutional library will pay subscription-like annual fees to a number of open
access publishers. For authors this has the benefit of making OA publication
services free at the point of use, although there are variations on this model—e.g.
here
and here.

And as large subscription publishers like Springer ramp up their open access
activities we are seeing new-style
big deals emerge whereby libraries pay a single annual fee that covers both
access to the publisher’s paywalled content and
publishing rights for researchers who want to publish in their open access
journals.

These new models
have their critics, and OA advocates frequently point out that the majority of
OA journals today do not charge
a publication fee. The implication is that there are other, better, ways of
funding open access. Nevertheless, as large commercial subscription publishers increasingly
move into the open access space (offering OA journals and, increasingly, OA
books), the tide is currently moving strongly in the direction of author-side pay-to-publish
models.

Today, therefore, unless their institution has a membership scheme with the
OA journal in which they want to publish, authors looking to embrace OA still face
the challenge of finding some way of paying the publication fee. This can be very
difficult, particularly for researchers who have little or no funding (as UCLA behavioural
and evolutionary ecologist Peter Nonacs
describes here).

Those who work
in subjects where the monograph is the primary vehicle for communicating
research find themselves in a particularly hard place. Consider, for instance, that
where a commercial publisher like Springer charges $3,000
to make an article open access (and non-profit OA publisher PLOS charges between $1,350 and $2,900)
the cost of publishing an OA book can be as much as $17,500 + taxes (which is
what Palgrave
Macmillan charges). Clearly, this poses a huge challenge.

In the hope of
addressing this issue Knowledge Unlatched—a not-for-profit organisation coordinating a global consortium of
libraries to share the costs of making books open access—has pioneered a
library consortium approach.

The model used here is not unlike the membership
schemes used by OA journal publishers, but what libraries pay depends not on
the number of texts their researchers publish, but on how many other libraries join
the consortium. Basically, publication costs are shared between institutions on
a per title basis. Knowledge Unlatched estimates these
costs at around $13 to $60 per library, per book. Clearly, time will tell how
successful this approach proves.
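The arithmetic behind this cost-sharing model can be sketched as follows. The title fee and consortium size used here are hypothetical examples, not actual Knowledge Unlatched figures; only the per-library range quoted above comes from the source.

```python
# Illustrative sketch of per-title cost sharing in a library consortium.
# The figures below are hypothetical; actual title fees and consortium
# sizes vary.

def per_library_cost(title_fee: float, num_libraries: int) -> float:
    """Each participating library pays an equal share of a title's fee.

    The more libraries that join the consortium, the lower each
    library's share becomes.
    """
    return title_fee / num_libraries

# A hypothetical $12,000 title fee shared among 300 libraries:
share = per_library_cost(12_000, 300)
print(f"${share:.2f} per library")  # prints "$40.00 per library"
```

As the sketch shows, what a library pays per book falls as the consortium grows, which is why the estimate above is a range rather than a fixed price.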

Variations on a theme

So what is UC Press
bringing to the party? Essentially, while embracing the two primary author-side
payment models, the Press has introduced some interesting innovations. Let’s
therefore describe its approach as variations on a theme.

The first point
to make is that as a non-profit publisher subsidised by its host university, and
with its own foundation,
UC Press has been able to set Collabra’s APC at $875. This is not only significantly
lower than what commercial publishers charge, but considerably lower than the
$1,350 per-paper fee charged by PLOS ONE, the pioneering mega journal launched
by non-profit publisher Public Library of Science in 2006.

Moreover, only
$625 of this fee will go to Collabra, with $250 being pooled in what the publisher
calls a “Research Community Fund”. This fund is then used to pay editors and
reviewers a fee for their services. Explaining how it works to Scholastica, UC
Press’ director of digital development Neil Christensen said, “[O]n a quarterly basis we look at activities:
Reviewer A had X many decisions, Editor A had X many decisions, and for each
decision there is a point value. You take the total sum of the money in the
pool and then divide it by the total sum of the points that have been generated
for that period, and then allocate the money based on how many points or value
each individual has contributed.”
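The quarterly allocation Christensen describes amounts to a simple pro-rata formula: divide the pooled money by the total points earned, then pay each contributor at that per-point rate. A minimal sketch follows; the point values, contributor names, and pool size are hypothetical illustrations, not UC Press figures.

```python
# A minimal sketch of the quarterly point-based payout described above:
# each editorial or review decision earns points, and the pooled funds
# are divided pro rata by points. All figures here are hypothetical.

def allocate_pool(pool: float, points: dict[str, int]) -> dict[str, float]:
    """Split the pooled funds in proportion to each contributor's points."""
    total_points = sum(points.values())
    rate = pool / total_points  # dollars per point for this quarter
    return {person: pts * rate for person, pts in points.items()}

# Hypothetical quarter: $2,500 pooled, three contributors with
# decision points accumulated over the period.
payouts = allocate_pool(2500, {"Editor A": 30, "Reviewer A": 15, "Reviewer B": 5})
# payouts == {"Editor A": 1500.0, "Reviewer A": 750.0, "Reviewer B": 250.0}
```

Note that under this scheme the per-point rate floats from quarter to quarter: a busy quarter with many decisions dilutes the value of each point, while the total paid out always equals the pool.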

It
is this novel feature that has attracted most attention for
Collabra (see here and here for instance). But
in fact the more interesting aspect of Collabra’s model is that editors and
reviewers are invited not to take the money they have earned, but to give it
away—either by donating it to the Collabra Waiver Fund, or to their own
institutional open access fund. By doing so, they
can help researchers who do not have the money needed to publish make their
work open access too.

What this does is draw our attention to
the fact that scholarly publishing is essentially a communal and collaborative
activity, and one that works best when scientists and scholars are able to
share their findings in as frictionless a way as possible. While the Internet
has made it technically much easier
to share research, current models of open access have made it financially harder (since authors now
need money to pay to publish). As noted, this is especially difficult for those
in subjects with little in the way of funding. With Collabra, UC Press is
proposing that a possible way of mitigating this new obstacle is to invite researchers to share the costs of open access publishing amongst themselves in an equitable way.

And with this same aim in mind, Collabra plans to “pair” different research fields. As Mudditt explains
below, “One of Collabra’s core innovations is to test the thesis that we can
use income from fields with higher research funding to support those with
little or no funding. As such, this requires us to publish both in fields that
have substantial funding (such as the life sciences) and those that have far
less (in this case, social and behavioural sciences).”

The
same community-focussed approach is also inherent in the Luminos model. While its
publication fee ($15,000) is comparable to that charged by other publishers, UC
Press will subsidise the fee through a library membership scheme (research libraries are being asked to pay an
annual fee of $1,000 in order to “directly support researchers in getting vital
work into the world” and to “help ensure access to this work is open and free
to everyone”). The publishing costs will also be directly subsidised by UC
Press. As a result, it is expected that the cost to the author will be halved
to around $7,500. UC Press assumes that in most cases the author’s institution
will pay the subsidised fee, but it has also created a Luminos fee waiver fund
for those unable to obtain institutional support.

And
in a similar collaborative spirit, UC Press is working with the California Digital Library (courtesy of a $750,000 grant from the Andrew W.
Mellon Foundation) to develop a web-based open-source content management system
to support the publication of open access monographs in the humanities and
social sciences. When complete, the system will be made available to the wider community
of academic publishers, especially university presses and library publishers.

So
far as licensing goes, UC Press has decided to directly emulate what other OA
publishers are doing. All the papers published by Collabra, for instance, will be licensed
under a CC-BY licence—as they are with PLOS, eLife, PeerJ, and F1000Research. And authors publishing with Luminos will be able
to choose from a range of Creative Commons licences, as they can with Knowledge Unlatched. When asked on the Scholarly Kitchen blog about the latter decision, Mudditt explained that research undertaken by UC Press (and by
Knowledge Unlatched) had “unearthed significant concerns from authors
about losing control of their material.”

In
summary, while UC Press’ OA programmes could be described as variations on a
theme, they come with some interesting innovations. These innovations remind us
that scholarly communication works best when it experiences as little friction (both technical
and financial) as possible. They also remind us that communicating research is
essentially a communal and collaborative process. And since for some authors open access introduces financial obstacles that did not previously exist, it follows that the research community needs to come
up with new non-discriminatory ways of sharing the costs of scholarly
communication.

It is also possible that today’s author-side pay-to-publish OA models may not
prove workable in the long term. The OA membership schemes being introduced by large journal publishers, for instance, seem destined to recreate the
dysfunctional market conditions that subscription publishers are accused of creating
with the big
deal. As such, it is not currently clear that open
access will solve the affordability problem that caused many to join the OA movement
in the first place.

But
the good news is that if publishers like UC Press continue to experiment, and to
innovate, both the accessibility and the
affordability problems may eventually be solved.

To
find out more about UC Press’ open access plans please read Mudditt’s answers
to my questions below.