Promising Drugs, Pricing and Access

The drug pricing debate rages on. How do we continue to foster research and innovation while ensuring access and affordability for patients? Can biosimilars and generics expand market access in the U.S.?

Robert: He sees many players in the oncology space discovering new drugs while other drugs are going generic (that is what is working). However, are we spending too much on cancer care relative to other diseases? (He cites their initiative, Going Beyond the Surface.)

Steven: the advent of biosimilars is good for the industry

Patrick: There is a large effort in oncology, maybe too much (750 trials on Keytruda alone). He says pharma is spending heavily on R&D (however, clinical trials take a large chunk of this money).

Robert: Cancer has gotten a free ride, but cost per year relative to benefit looks different than for other diseases. Are we overinvesting in cancer, or is that a societal decision?

Gary: As we become more specific with precision medicines, high prices may be a result of our success in targeting a particular mutation. We need to understand the targeted drugs and their outcomes.

Patrick: “Cancer is the last big frontier,” but he says prices will come down in most cases. He gives the example of Hep C treatment: previously the only therapeutic option was a very toxic yearlong treatment, but the newer drugs may be more cost-effective and safer.

Steven: Blockbuster drugs could spread the expense over many patients, but with precision medicine we can no longer spread the expense over a large number of patients.

President’s Cancer Panel Recommendation

Six recommendations

promoting value-based pricing

enabling communication of costs to patients

minimizing financial toxicity

stimulating competition (including biosimilars)

promoting value-based care

investing in biomedical research

Patrick: The government pricing regime is hurting. There are a lot of practical barriers, but Merck has over 200 studies on cost basis.

Robert: Much of the concern and impetus on pricing started in Europe, which uses a set-price model (the EU won’t pay more than x for a drug). The US is moving more toward outcomes-based pricing. For every health outcome study that showed a benefit, three did not. With cancer it is tricky to establish specific health outcomes. Also, Medicare gets best-price status, so there needs to be a safe harbor for payers, and the biggest constraint is regulatory issues.

Steven: They all want value-based pricing, but we don’t have that yet, and there is a challenge in understanding the nuances of new therapies. It is hard to align all the stakeholders until legislation starts to address the reimbursement-clinic-patient-pharma obstacles. Possibly the big data efforts discussed here may help align each stakeholder’s goals.

Gary: What data are necessary to understand what is happening to patients? Until we have that information, it will remain complicated to determine where investors in health care stand in this discussion.

Robert (who sits on an ICER methods advisory board): 1) there is great concern about costs and how we determine the fair value of a drug; 2) ICER is the only game in town, as other organizations only give recommendations; 3) ICER evaluates long-term value (cost per quality-adjusted year of life) and budget impact (will people go bankrupt); 4) ICER is getting traction with the public and with advocates; 5) the problem is that ICER is not ready for prime time, as the evidence keeps changing, it is unclear whether they are keeping societal factors in mind, and they do not have total transparency in their methodology.
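To make the cost-per-QALY metric concrete, here is a minimal worked example of an incremental cost-effectiveness ratio; the dollar figures and QALY values are hypothetical and serve only to show how the ratio is formed.

```latex
% Incremental cost-effectiveness ratio (cost per QALY gained); all numbers are hypothetical.
\[
\mathrm{ICER} \;=\; \frac{C_{\text{new}} - C_{\text{standard}}}{E_{\text{new}} - E_{\text{standard}}}
\;=\; \frac{\$150{,}000 - \$60{,}000}{2.1\ \mathrm{QALYs} - 1.5\ \mathrm{QALYs}}
\;=\; \$150{,}000\ \text{per QALY gained}
\]
```

Payers and review groups then compare such a ratio against a willingness-to-pay threshold (figures in the range of $50,000 to $150,000 per QALY are often cited) when judging long-term value.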

Steven: We need more transparency into all the costs associated with a drug and its therapy and into value-based outcomes. Right now price is more of a black box.

Moderator: He pointed to a recent study which showed that outpatient costs are going down while hospital-based care costs are rising rapidly (the cost of the site of care), so we need to figure out how to get people into lower-cost settings.

Breaking Down Silos in Research

“Silo” is healthcare’s four-letter word. How are researchers, life science companies and others sharing information that can benefit patients more quickly? Hear from experts at institutions that are striving to tear down the walls that prevent data from flowing.

Seqster: Seqster is a secure platform that helps you and your family manage medical records, DNA, fitness, and nutrition data—all in one place. The founder has a genomic sequencing background but realized that sequence information needs to be linked with medical records.

HealthShare Exchange envisions a trusted community of healthcare stakeholders collaborating to deliver better care to consumers in the greater Philadelphia region. HealthShare Exchange will provide secure access to health information to enable preventive and cost-effective care; improve quality of patient care; and facilitate care transitions. They have partnered with multiple players in healthcare field and have data on over 7 million patients.

Data can be overwhelming, but it doesn’t have to be this way. To drive healthcare efficiency, we designed a modular suite of products for a smooth transition into a data-driven world within 4 weeks. Why does it cost so much, and take so long, to move data around?

What is interoperability?

Ardy: In the genomics field we knew how to build algorithms to analyze big data, but how do we expand this from a consumer standpoint so you can see and share your own data?

Lauren: How can we use the data shared between patients, doctors, and researchers? On the research side, genomics represents only 2% of the data. Silos are one issue, but the standards for data (collection, curation, analysis) are not yet set. We still need to improve semantic interoperability. For example, Flatiron had well-annotated data on male metastatic breast cancer.

David: There is technical interoperability (platform), semantic interoperability (meaning or word usage), and format (syntactic) interoperability (data structure). There is technical interoperability between health systems, and some semantic interoperability, but the formats are all different (pharmacies use different systems and write prescriptions differently using different suppliers). In any value-based contract this is now a big issue (if we are going to pay based on the quality of performance, then there is a big need to coordinate across platforms). We can solve it by bringing the data into one place in real time, using mapping to integrate the formats (with quality control), and then democratizing the data among the players.
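As a concrete illustration of the format (syntactic) mapping David describes, the minimal sketch below normalizes prescription records from two hypothetical pharmacy systems into one common schema with a basic quality check; the field names and source systems are invented for illustration and are not drawn from any actual HealthShare Exchange implementation.

```python
# Minimal sketch: map prescription records from two hypothetical pharmacy
# systems into one common schema so they can be compared and quality-checked.

COMMON_FIELDS = ["patient_id", "drug_name", "dose_mg", "prescriber"]

def from_system_a(record: dict) -> dict:
    """System A uses terse, lowercase field names."""
    return {
        "patient_id": record["pid"],
        "drug_name": record["drug"].strip().lower(),
        "dose_mg": float(record["dose_mg"]),
        "prescriber": record["md"],
    }

def from_system_b(record: dict) -> dict:
    """System B nests the dose and spells its fields differently."""
    return {
        "patient_id": record["patientId"],
        "drug_name": record["medication"].strip().lower(),
        "dose_mg": float(record["dose"]["value"]),  # assumes the dose is reported in mg
        "prescriber": record["prescribingPhysician"],
    }

def quality_check(record: dict) -> bool:
    """Basic quality control before a record enters the shared store."""
    return all(record.get(f) not in (None, "") for f in COMMON_FIELDS) and record["dose_mg"] > 0

if __name__ == "__main__":
    raw_a = {"pid": "12345", "drug": "Atorvastatin ", "dose_mg": "20", "md": "Dr. Lee"}
    raw_b = {"patientId": "12345", "medication": "atorvastatin",
             "dose": {"value": 20, "unit": "mg"}, "prescribingPhysician": "Dr. Lee"}

    unified = [from_system_a(raw_a), from_system_b(raw_b)]
    print([r for r in unified if quality_check(r)])
```

The design mirrors David’s point: each source keeps its native format, and a thin mapping layer plus quality control produces a single, shareable representation.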

Rakesh: Patients’ data should follow the patient. Across Philadelphia’s 12 health systems we had a challenge to make data interoperable, so they told providers not to rely on portals and made sure hospitals were sending standardized data. Health care data is complex.

David: 80% of clinical data is noise. For example, most electronic medical records are free text. Another problem is defining a universal patient identifier, which the US has not adopted.

Please follow on Twitter using the following #hash tags and @pharma_BI

Imagine a world where doctors have at their fingertips the information that allows them to individualize a diagnosis, treatment or even a cure for a person based on their genes. That’s what President Obama envisioned when he announced his Precision Medicine Initiative earlier this year. Today, with the launch of FDA’s precisionFDA web platform, we’re a step closer to achieving that vision.

PrecisionFDA is an online, cloud-based, portal that will allow scientists from industry, academia, government and other partners to come together to foster innovation and develop the science behind a method of “reading” DNA known as next-generation sequencing (or NGS). Next Generation Sequencing allows scientists to compile a vast amount of data on a person’s exact order or sequence of DNA. Recognizing that each person’s DNA is slightly different, scientists can look for meaningful differences in DNA that can be used to suggest a person’s risk of disease, possible response to treatment and assess their current state of health. Ultimately, what we learn about these differences could be used to design a treatment tailored to a specific individual.

The precisionFDA platform is a part of this larger effort and through its use we want to help scientists work toward the most accurate and meaningful discoveries. precisionFDA users will have access to a number of important tools to help them do this. These tools include reference genomes, such as “Genome in a Bottle,” a reference sample of DNA for validating human genome sequences developed by the National Institute of Standards and Technology. Users will also be able to compare their results to previously validated reference results as well as share their results with other users, track changes and obtain feedback.
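As a rough illustration of what comparing results to previously validated reference results can mean in practice, the hypothetical sketch below scores a set of called variants against a validated reference set such as Genome in a Bottle using precision and recall; variants are reduced to simple tuples here, whereas real comparisons operate on VCF files with dedicated benchmarking tools, and nothing below reflects precisionFDA’s actual code.

```python
# Hypothetical sketch: score called variants against a validated reference set
# (e.g., Genome in a Bottle) using precision and recall. Variants are modeled
# as (chromosome, position, ref, alt) tuples for simplicity.

def compare_to_reference(called: set, reference: set) -> dict:
    true_positives = called & reference    # calls confirmed by the reference
    false_positives = called - reference   # calls absent from the reference
    false_negatives = reference - called   # reference variants that were missed
    precision = len(true_positives) / len(called) if called else 0.0
    recall = len(true_positives) / len(reference) if reference else 0.0
    return {"precision": precision, "recall": recall,
            "false_positives": len(false_positives),
            "false_negatives": len(false_negatives)}

if __name__ == "__main__":
    reference_calls = {("chr1", 101, "A", "G"), ("chr1", 250, "T", "C"), ("chr2", 77, "G", "A")}
    my_calls = {("chr1", 101, "A", "G"), ("chr2", 77, "G", "A"), ("chr3", 12, "C", "T")}
    print(compare_to_reference(my_calls, reference_calls))
```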

Over the coming months we will engage users in improving the usability, openness and transparency of precisionFDA. One way we’ll achieve that is by placing the code for the precisionFDA portal on the world’s largest open source software repository, GitHub, so the community can further enhance precisionFDA’s features. Through such collaboration we hope to improve the quality and accuracy of genomic tests – work that will ultimately benefit patients.

precisionFDA leverages our experience establishing openFDA, an online community that provides easy access to our public datasets. Since its launch in 2014, openFDA has already resulted in many novel ways to use, integrate and analyze FDA safety information. We’re confident that employing such a collaborative approach to DNA data will yield important advances in our understanding of this fast-growing scientific field, information that will ultimately be used to develop new diagnostics, treatments and even cures for patients.

The opinions expressed in this blog post are the author’s only and do not necessarily reflect those of MassDevice.com or its employees.

So What Are the Other Successes With Such Open Science 2.0 Collaborative Networks?

In the following post there are highlighted examples of these Open Scientific Networks and, as long as

transparency

equal contributions (lack of hierarchy)

exist, these networks can flourish and add interesting discourse. Scientists are already relying on these networks to collaborate and share, although resistance from certain members of an “elite” can still exist. Social media platforms are now democratizing this new Science 2.0 effort. In addition, the efforts of multiple biocurators (who work mainly for the love of science) have organized the plethora of data (genomic, proteomic, and literature) to provide ease of access and analysis.

Curation: an Essential Practice to Manage “Open Science”

The web 2.0 gave birth to new practices motivated by the will to have broader and faster cooperation in a more free and transparent environment. We have entered the era of an “open” movement: “open data”, “open software”, etc. In science, expressions like “open access” (to scientific publications and research results) and “open science” are used more and more often.

Another area, where there are most likely fewer barriers, is scientific and technical culture. This broad term involves different actors such as associations, companies, universities’ communication departments, CCSTI (French centers for scientific, technical and industrial culture), journalists, etc. A number of these actors do not limit their work to popularizing the scientific data; they also consider they have an authentic mission of “culturing” science. The curation practice thus offers a better organization and visibility to the information. The sought-after benefits will be different from one actor to the next.

Using Curation and Science 2.0 to build Trusted, Expert Networks of Scientists and Clinicians

Given the aforementioned problems of:

I. the complex and rapid deluge of scientific information

II. the need for a collaborative, open environment to produce transformative innovation

III. need for alternative ways to disseminate scientific findings

CURATION MAY OFFER SOLUTIONS

I. Curation exists beyond the review: curation decreases the time needed to assess current trends, adding multiple insights and analyses WITH an underlying METHODOLOGY (discussed below), while NOT acting as mere reiteration or regurgitation

III. Curation makes use of new computational and Web-based tools to provide interoperability of data, reporting of findings (shown in Examples below)

Therefore a discussion is given on methodologies, definitions of best practices, and tools developed to assist the content curation community in this endeavor, which has created a need for more context-driven scientific search and discourse.

However, another issue is individual bias if these networks are closed; protocols need to be devised to reduce bias from individual investigators and clinicians. This is where CONSENSUS built from OPEN ACCESS DISCOURSE would be beneficial, as discussed in the following post:

a systematic error of methodology as it pertains to measurement or sampling (e.g., selection bias),

a systematic defect of design that leads to estimates, in experimental and control groups and in effect sizes, that substantially deviate from true values (e.g., information bias), and

a systematic distortion of the analytical process, which results in a misrepresentation of the data with consequential errors of inference (e.g., inferential bias).

This post highlights many important points related to bias, but in summary there are methodologies and protocols that can be devised to eliminate such bias. Risk of bias can seriously adulterate the internal and external validity of a clinical study and, unless it is identified and systematically evaluated, can seriously hamper the process of comparative effectiveness and efficacy research and analysis for practice. The Cochrane Group and the Agency for Healthcare Research and Quality have independently developed instruments for assessing the meta-construct of risk of bias. The present article begins to discuss this dialectic.

Information dissemination to all stakeholders is key to increase their health literacy in order to ensure their full participation

So what about the safety and privacy of Data?

A while back I did a post and some interviews on how doctors in developing countries are using social networks to communicate with patients, either over established networks like Facebook or more private in-house networks. In addition, these doctor-patient relationships in developing countries are remote, using the smartphone to communicate with rural patients who don’t have ready access to their physicians.

According to International Telecommunication Union (ITU) statistics, world-wide mobile phone use has expanded tremendously in the past 5 years, reaching almost 6 billion subscriptions. By the end of this year it is estimated that over 95% of the world’s population will have access to mobile phones/devices, including smartphones.

This presents a tremendous and cost-effective opportunity in developing countries, and especially rural areas, for physicians to reach patients using mHealth platforms.

In summary, although there are restrictions here in the US governing what information can be disseminated over social media networks, developing countries appear to have more loosely defined regulations, as they are more dependent on these types of social networks given the difficulties in patient-physician access.

His article “Academic Publishing Can’t Remain Such a Great Business” discusses the history of academic publishing and how consolidation of smaller publishers into large scientific publishing houses (bigger publishers bought smaller ones) has produced a monopoly-like environment in which prices for journal subscriptions are rising. He also discusses how the open access movement is challenging this model and may one day replace the big publishing houses.

A few tidbits from his article:

Publishers of academic journals have a great thing going. They generally don’t pay for the articles they publish, or for the primary editing and peer reviewing essential to preparing them for publication (they do fork over some money for copy editing). Most of this gratis labor is performed by employees of academic institutions. Those institutions, along with government agencies and foundations, also fund all the research that these journal articles are based upon.

Yet the journal publishers are able to get authors to sign over copyright to this content, and sell it in the form of subscriptions to university libraries. Most journals are now delivered in electronic form, which you think would cut the cost, but no, the price has been going up and up:

This isn’t just inflation at work: in 1994, journal subscriptions accounted for 51 percent of all library spending on information resources. In 2012 it was 69 percent.

Who exactly is getting that money? The largest academic publisher is Elsevier, which is also the biggest, most profitable division of RELX, the Anglo-Dutch company that was known until February as Reed Elsevier.

RELX reports results in British pounds; I converted to dollars in part because the biggest piece of the company’s revenue comes from the U.S. And yes, those are pretty great operating-profit margins: 33 percent in 2014, 39 percent in 2013. The next biggest academic publisher is Springer Nature, which is closely held (by German publisher Holtzbrinck and U.K. private-equity firm BC Partners) but reportedly has annual revenue of about $1.75 billion. Other biggies that are part of publicly traded companies include Wiley-Blackwell, a division of John Wiley & Sons; Wolters Kluwer Health, a division of Wolters Kluwer; and Taylor & Francis, a division of Informa.

And gives a brief history of academic publishing:

The history here is that most early scholarly journals were the work of nonprofit scientific societies. The goal was to disseminate research as widely as possible, not to make money — a key reason why nobody involved got paid. After World War II, the explosion in both the production of and demand for academic research outstripped the capabilities of the scientific societies, and commercial publishers stepped into the breach. At a time when journals had to be printed and shipped all over the world, this made perfect sense.

Once it became possible to effortlessly copy and disseminate digital files, though, the economics changed. For many content producers, digital copying is a threat to their livelihoods. As Peter Suber, the director of Harvard University’s Office for Scholarly Communication, puts it in his wonderful little book, “Open Access”:

And while NIH Tried To Force These Houses To Accept Open Access:

About a decade ago, the universities and funding agencies began fighting back. The National Institutes of Health in the U.S., the world’s biggest funder of medical research, began requiring in 2008 that all recipients of its grants submit electronic versions of their final peer-reviewed manuscripts when they are accepted for publication in journals, to be posted a year later on the NIH’s open-access PubMed depository. Publishers grumbled, but didn’t want to turn down the articles.

Big publishers are making money by either charging as much as they can or focusing on new customers and services

For the big publishers, meanwhile, the choice is between positioning themselves for the open-access future or maximizing current returns. In its most recent annual report, RELX leans toward the latter while nodding toward the former:

Over the past 15 years alternative payment models for the dissemination of research such as “author-pays” or “author’s funder-pays” have emerged. While it is expected that paid subscription will remain the primary distribution model, Elsevier has long invested in alternative business models to address the needs of customers and researchers.

Artificial Intelligence Versus the Scientist: Who Will Win?

Will DARPA Replace the Human Scientist: Not So Fast, My Friend!

Writer, Curator: Stephen J. Williams, Ph.D.

Last month’s issue of Science included an article by Jia You, “DARPA Sets Out to Automate Research”[1], that gave a glimpse of how science could be conducted in the future: without scientists. The article focused on the U.S. Defense Advanced Research Projects Agency (DARPA) program called “Big Mechanism”, a $45 million effort to develop computer algorithms that read scientific journal papers with the ultimate goal of extracting enough information to design hypotheses and the next set of experiments,

all without human input.

The head of the project, artificial intelligence expert Paul Cohen, says the overall goal is to help scientists cope with the complexity of massive amounts of information. As Paul Cohen stated for the article:

“Just when we need to understand highly connected systems as systems, our research methods force us to focus on little parts.”

The Big Mechanism project aims to design computer algorithms to critically read journal articles, much as scientists do, to determine what the information contributes to the knowledge base and how.

As a proof of concept, DARPA is attempting to model Ras-mutation-driven cancers using previously published literature in three main steps:

One team is focused on extracting details on experimental procedures, mining certain phraseology to determine a statement’s worth (for example, phrases like ‘we suggest’ or ‘suggests a role in’ might be considered weak, whereas ‘we prove’ or ‘provide evidence’ might be identified by the program as marking articles worthwhile to curate; a toy sketch of such phrase-based scoring appears after this list). Another team, led by a computational linguistics expert, will design systems to map the meanings of sentences.

Integrate each piece of knowledge into a computational model to represent the Ras pathway on oncogenesis.

Produce hypotheses and propose experiments based on the knowledge base, which can be experimentally verified in the laboratory.
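Purely as an illustration of the phrase-based scoring described in the first step above, the toy sketch below ranks sentences by a crude measure of evidence strength; the phrase lists and weights are invented for this example and do not come from the DARPA teams’ systems.

```python
# Toy sketch of phrase-based evidence scoring for sentences in a paper.
# Hedging phrases down-weight a statement; assertive phrases up-weight it.
# The phrase lists and weights are illustrative only.

WEAK_PHRASES = ["we suggest", "suggests a role in", "may contribute to", "is consistent with"]
STRONG_PHRASES = ["we prove", "provide evidence", "we demonstrate", "is required for"]

def evidence_score(sentence: str) -> int:
    """Return a crude evidence score: +1 per assertive phrase, -1 per hedging phrase."""
    text = sentence.lower()
    score = sum(text.count(p) for p in STRONG_PHRASES)
    score -= sum(text.count(p) for p in WEAK_PHRASES)
    return score

def rank_sentences(sentences: list) -> list:
    """Rank sentences from most to least assertive, as a curation priority hint."""
    return sorted(sentences, key=evidence_score, reverse=True)

if __name__ == "__main__":
    sample = [
        "We suggest that KRAS mutation may contribute to resistance.",
        "We demonstrate that KRAS(G12D) is required for tumor maintenance and provide evidence of pathway rewiring.",
    ]
    for s in rank_sentences(sample):
        print(evidence_score(s), s)
```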

The Human No Longer Needed? Not So Fast, My Friend!

The problems the DARPA research teams are encountering are:

Need for data verification

Text mining and curation strategies

Incomplete knowledge base (past, current and future)

Molecular biology does not necessarily “require causal inference” as other fields do

Verification

Notice this verification step (step 3) requires physical lab work, as do all other ‘omics strategies and other computational biology projects. As with high-throughput microarray screens, verification is usually needed, in the form of qPCR, or interesting genes are validated in a phenotypic (expression) system. In addition, there has been an ongoing issue surrounding the validity and reproducibility of some research studies and data.

Therefore as DARPA attempts to recreate the Ras pathway from published literature and suggest new pathways/interactions, it will be necessary to experimentally validate certain points (protein interactions or modification events, signaling events) in order to validate their computer model.

Text-Mining and Curation Strategies

The Big Mechanism Project is starting very small; this reflects some of the challenges of scale in this project. Researchers were given only six-paragraph-long passages and a rudimentary model of the Ras pathway in cancer and then asked to automate a text-mining strategy to extract as much useful information as possible. Unfortunately this strategy could be fraught with issues frequently encountered in the biocuration community, namely:

Manual or automated curation of scientific literature?

Biocurators, the scientists who painstakingly sort through the voluminous scientific literature to extract and then organize relevant data into accessible databases, have debated whether manual, automated, or a combination of both curation methods [2] achieves the highest accuracy for extracting the information needed to enter into a database. Abigail Cabunoc, a lead developer for the Ontario Institute for Cancer Research’s WormBase (a database of nematode genetics and biology) and Lead Developer at Mozilla Science Lab, noted on her blog, on the lively debate on biocuration methodology at the Seventh International Biocuration Conference (#ISB2014), that the massive amounts of information will require a Herculean effort regardless of the methodology.

The Big Mechanism team decided on a fully automated approach to text-mine their limited literature set for relevant information; however, they were able to extract only 40% of the information relevant to the given model from these six paragraphs. Although the investigators were happy with this percentage, most biocurators, whether using a manual or automated method to extract information, would consider 40% a low success rate. Biocurators, regardless of method, have reported the ability to extract 70-90% of relevant information from the whole literature (for example, for the Comparative Toxicogenomics Database)[3-5].

Incomplete Knowledge Base

In an earlier posting (which was actually a press release for our first e-book) I discussed the problem of the “data deluge” we are experiencing in the scientific literature, as well as the plethora of ‘omics experimental data which needs to be curated.

Tackling the problem of scientific and medical information overload

Figure. The number of papers listed in PubMed (disregarding reviews) per ten-year period has steadily increased since 1970.

Analyzing and sharing the vast amounts of scientific knowledge has never been so crucial to innovation in the medical field. The publication rate has steadily increased since the 1970s, with a 50% increase in the number of original research articles published from the 1990s to the previous decade. This massive amount of biomedical and scientific information has presented the unique problem of information overload, and the critical need for methodology and expertise to organize, curate, and disseminate this diverse information for scientists and clinicians. Dr. Larry Bernstein, President of Triplex Consulting and previously chief of pathology at New York’s Methodist Hospital, concurs that “the academic pressures to publish, and the breakdown of knowledge into “silos”, has contributed to this knowledge explosion and although the literature is now online and edited, much of this information is out of reach to the very brightest clinicians.”

Traditionally, organization of biomedical information has been the realm of the literature review, but most reviews are performed years after discoveries are made and, given the rapid pace of new discoveries, this is appearing to be an outdated model. In addition, most medical searches depend on keywords, adding complexity for investigators trying to find the material they require. Third, medical researchers and professionals are recognizing the need to converse with each other, in real time, on the impact new discoveries may have on their research and clinical practice.

These issues require a people-based strategy: expertise across a diverse, cross-integrative set of medical topics to provide in-depth understanding of the current research and challenges in each field, as well as a more conceptual search platform. To address this need, human intermediaries, known as scientific curators, are needed to narrow down the information and provide critical context and analysis of medical and scientific information in an interactive manner powered by web 2.0, with curators acting as the “researcher 2.0”. This curation offers better organization and visibility of the critical information useful for the next innovations in academic, clinical, and industrial research by providing these hybrid networks.

Yaneer Bar-Yam of the New England Complex Systems Institute was not confident that using details from past knowledge could produce adequate roadmaps for future experimentation and noted for the article, “The expectation that the accumulation of details will tell us what we want to know is not well justified.”

In a recent post I curated findings from four lung cancer omics studies and presented some graphics on bioinformatic analysis of the novel genetic mutations resulting from these studies (see link below), which showed that, while multiple genetic mutations and related pathway ontologies were well documented in the lung cancer literature, there existed many significant genetic mutations and pathways identified in the genomic studies with little literature attributed to these lung cancer-relevant mutations.

This ‘literomics’ analysis reveals a large gap between our knowledge base and the data resulting from large translational ‘omic’ studies.

A ‘literomics’ approach focuses on what we do NOT know about genes, proteins, and their associated pathways, while a text-mining machine learning algorithm focuses on building a knowledge base to determine the next line of research or what needs to be measured. Using each approach can give us different perspectives on ‘omics data.

Deriving Causal Inference

Ras is one of the best-studied and best-characterized oncogenes, and the mechanisms behind Ras-driven oncogenesis are well understood. This, according to computational biologist Larry Hunt of Smart Information Flow Technologies, makes Ras a great starting point for the Big Mechanism project. As he states, “Molecular biology is a good place to try (developing a machine learning algorithm) because it’s an area in which common sense plays a minor role.”

Even though some may think the project would not be able to tackle other mechanisms that involve epigenetic factors, UCLA’s expert in causality Judea Pearl, Ph.D. (head of the UCLA Cognitive Systems Lab) feels it is possible for machine learning to bridge this gap. As summarized from his lecture at Microsoft:

“The development of graphical models and the logic of counterfactuals have had a marked effect on the way scientists treat problems involving cause-effect relationships. Practical problems requiring causal information, which long were regarded as either metaphysical or unmanageable can now be solved using elementary mathematics. Moreover, problems that were thought to be purely statistical, are beginning to benefit from analyzing their causal roots.”

According to him, one must first:

1) articulate assumptions

2) define the research question in counterfactual terms

Then it is possible to design an inference system using calculus that tells the investigator what they need to measure.
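As one concrete instance of the calculus Pearl describes, the back-door adjustment formula below expresses the effect of intervening on X in terms of ordinary observational quantities, provided a covariate set Z is assumed to block all confounding paths; choosing that Z is precisely the kind of assumption that must be articulated first.

```latex
% Back-door adjustment (Pearl): effect of setting X = x on Y, given a covariate
% set Z assumed to satisfy the back-door criterion.
\[
P\bigl(Y = y \mid \mathrm{do}(X = x)\bigr) \;=\; \sum_{z} P\bigl(Y = y \mid X = x, Z = z\bigr)\, P(Z = z)
\]
```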

The key for the Big Mechanism Project may be in correcting for the variables among studies, in essence building a model system which may not rely on fully controlled conditions. Dr. Peter Spirtes of Carnegie Mellon University in Pittsburgh, PA is developing the TETRAD project with two goals: 1) to specify and prove under what conditions it is possible to reliably infer causal relationships from background knowledge and statistical data not obtained under fully controlled conditions, and 2) to develop, analyze, implement, test and apply practical, provably correct computer programs for inferring causal structure under conditions where this is possible.

In summary, such projects and algorithms will provide investigators with the what, and possibly the how, of what should be measured.

PROGRAM ANNOUNCEMENT

Wednesday, 19th March 2014
Registration and coffee begins – 08.00
Program begins – 08.15
Networking reception will take place at 18.00 – 20.00

Once you arrive at 7 World Trade Center (250 Greenwich St, New York, NY 10007, USA), please use the D Elevator Bank to the 40th floor, where the Sachs Team will welcome you at the registration desk.
For urgent issues, please contact: Tomas@sachsforum.com (cell number +44 77 043 158 71)
or Mina@sachsforum.com (cell number +44 74 636 695 04). Cells available from 15th March.

Announcement

LEADERS IN PHARMACEUTICAL BUSINESS INTELLIGENCE will cover the event for the Scientific Media

Sachs Cancer Bio Partnering &
Investment Forum

Promoting Public & Private Sector Collaboration & Investment

in Drug Development

The 2nd Annual Sachs Cancer Bio Partnering & Investment Forum is designed to bring together thought leaders from cancer research institutes, patient advocacy groups, pharma and biotech to facilitate partnering and funding/investment. We expect around 200 delegates and there is an online meeting system and meeting facilities to make the event transactional. There will also be a track of about 30 presentations by listed and private biotechnology companies seeking licensing/investment.


Presenting at the forum offers excellent opportunities to showcase activities and highlight investment and partnership opportunities. Biotech companies will be able to communicate investment and licensing opportunities. These are for both public and private companies. The audience is comprised of financial and industry investors. These are streamed 15 minute presentations. The patient advocacy presentations are 30 minutes.

Sachs forums are recognised as the leading international stage for those interested in investing in the biotech and life science industry and are highly transactional. They draw together an exciting cross-section of early-stage/pre-IPO, late-stage and public companies with leading investors, analysts, money managers and pharmas. The Boston forum provides the additional interaction with the academic/scientific and patient advocacy communities.

Sponsorship and Exhibition

Sachs Associates has developed an extensive knowledge of the key individuals operating within the European and global biotech industry. This together with a growing reputation for excellence puts Sachs Associates at the forefront of the industry and provides a powerful tool by which to increase the position of your company in this market.

Raise your company’s profile directly with your potential clients. All of our sponsorship packages are tailor made to each client, allowing your organisation to gain the most out of attending our industry driven events.

The 2nd Annual Sachs Cancer Bio Partnering & Investment Forum will cover the following topics in the program:

Advances in Translational Research

Strategies for Small Molecule and Biologicals Drug Development

Deal Making

Public & Private Partnerships

Confirmed Speakers & Chairs include:


PeerJ
Hi Aviva,
We are very pleased to announce <http://blog.peerj.com/post//celebrating-the-one-year-anniversary-of-peerj> that this is the one year anniversary of PeerJ – it was on June 12th, 2012 that we first announced ourselves and started the process towards becoming a fully-fledged publishing company. Today, just 12 months later, PeerJ is completely up and running; we are publishing high quality peer-reviewed science; and we are doing our very best to change the world by pushing the boundaries of Open Access!

To briefly overview what has been achieved in the last year – we announced ourselves on June 12th 2012 and opened the PeerJ doors for submissions on December 3rd. We published our first PeerJ articles on Feb 12th 2013, and followed up by launching PeerJ PrePrints on April 3rd 2013. This last year has been spent recruiting an Editorial Board of 800 world renowned researchers; building cutting edge submission, peer-review, publication and pre-print software from scratch; establishing ourselves with all the major organizations who archive, index, list and certify new publications; and building an entirely new type <http://blog.peerj.com/post/46261563342/6-reasons-to-publish-with-peerj> of publishing company from the ground up.

We are celebrating this milestone with a new PeerJ Competition. On June 19th, we will give away 12 “complimentary publication” passes (the ability to publish one paper with us at no cost to you or any of your co-authors) + a PeerJ Charlie T-Shirt + a pin + a fridge magnet (!) to a random selection of 12 people (one for each month of our first year) who publicly post some variation of the following message:

“PeerJ just turned one! Open access publishing, for just $99 for life – check them out and submit now!”

Please include a link to us as well (you choose the best one!).

The last year has been an intense journey, and to be honest we have been so busy we almost missed the anniversary! We would like to take this opportunity to thank the many thousands of researchers who have signed up as PeerJ Members; all those who have authored or reviewed articles; all those who have joined our Editorial Board; and anyone who have simply expressed their support – without the involvement and enthusiasm of these people we would not be where we are today. Of course, we must also thank our dedicated staff (Alf Eaton, Patrick McAndrew and Jackie Thai) and Tim O’Reilly, who collectively took a chance on a brand new publishing concept, but who have been irreplaceable in making us what we are today!

Part One

e-Recognition for Author Views is presented below for a pioneering launch of the ONE and ONLY web-based Open Access Online Scientific Journal on frontiers in Biomedical Technologies, Genomics, Biological Sciences, Healthcare Economics, Pharmacology, Pharmaceutical & Medicine.

Friction-free Collaboration over the Internet: An Equity Sharing Venture for “Open Access to Curation of Scientific Research” launched THREE TYPES of Scientific Research Sharing

Published: January 16, 2012

The New England Journal of Medicine marks its 200th anniversary this year with a timeline celebrating the scientific advances first described in its pages: the stethoscope (1816), the use of ether for anesthesia (1846), and disinfecting hands and instruments before surgery (1867), among others.


For centuries, this is how science has operated — through research done in private, then submitted to science and medical journals to be reviewed by peers and published for the benefit of other researchers and the public at large. But to many scientists, the longevity of that process is nothing to celebrate.

The system is hidebound, expensive and elitist, they say. Peer review can take months, journal subscriptions can be prohibitively costly, and a handful of gatekeepers limit the flow of information. It is an ideal system for sharing knowledge, said the quantum physicist Michael Nielsen, only “if you’re stuck with 17th-century technology.”

Dr. Nielsen and other advocates for “open science” say science can accomplish much more, much faster, in an environment of friction-free collaboration over the Internet. And despite a host of obstacles, including the skepticism of many established scientists, their ideas are gaining traction.

Open-access archives and journals like arXiv and the Public Library of Science (PLoS) have sprung up in recent years. GalaxyZoo, a citizen-science site, has classified millions of objects in space, discovering characteristics that have led to a raft of scientific papers.

On the collaborative blog MathOverflow, mathematicians earn reputation points for contributing to solutions; in another math experiment dubbed the Polymath Project, mathematicians commenting on the Fields medalist Timothy Gowers’s blog in 2009 found a new proof for a particularly complicated theorem in just six weeks.

And a social networking site called ResearchGate — where scientists can answer one another’s questions, share papers and find collaborators — is rapidly gaining popularity.

Editors of traditional journals say open science sounds good, in theory. In practice, “the scientific community itself is quite conservative,” said Maxine Clarke, executive editor of the commercial journal Nature, who added that the traditional published paper is still viewed as “a unit to award grants or assess jobs and tenure.”

Dr. Nielsen, 38, who left a successful science career to write “Reinventing Discovery: The New Era of Networked Science,” agreed that scientists have been “very inhibited and slow to adopt a lot of online tools.” But he added that open science was coalescing into “a bit of a movement.”

On Thursday, 450 bloggers, journalists, students, scientists, librarians and programmers will converge on North Carolina State University (and thousands more will join in online) for the sixth annual ScienceOnline conference. Science is moving to a collaborative model, said Bora Zivkovic, a chronobiology blogger who is a founder of the conference, “because it works better in the current ecosystem, in the Web-connected world.”

Indeed, he said, scientists who attend the conference should not be seen as competing with one another. “Lindsay Lohan is our competitor,” he continued. “We have to get her off the screen and get science there instead.”

Facebook for Scientists?

“I want to make science more open. I want to change this,” said Ijad Madisch, 31, the Harvard-trained virologist and computer scientist behind ResearchGate, the social networking site for scientists.

Started in 2008 with few features, it was reshaped with feedback from scientists. Its membership has mushroomed to more than 1.3 million, Dr. Madisch said, and it has attracted several million dollars in venture capital from some of the original investors of Twitter, eBay and Facebook.

A year ago, ResearchGate had 12 employees. Now it has 70 and is hiring. The company, based in Berlin, is modeled after Silicon Valley startups. Lunch, drinks and fruit are free, and every employee owns part of the company.

The Web site is a sort of mash-up of Facebook, Twitter and LinkedIn, with profile pages, comments, groups, job listings, and “like” and “follow” buttons (but without baby photos, cat videos and thinly veiled self-praise). Only scientists are invited to pose and answer questions — a rule that should not be hard to enforce, with discussion threads about topics like polymerase chain reactions that only a scientist could love.

Scientists populate their ResearchGate profiles with their real names, professional details and publications — data that the site uses to suggest connections with other members. Users can create public or private discussion groups, and share papers and lecture materials. ResearchGate is also developing a “reputation score” to reward members for online contributions.

ResearchGate offers a simple yet effective end run around restrictive journal access with its “self-archiving repository.” Since most journals allow scientists to link to their submitted papers on their own Web sites, Dr. Madisch encourages his users to do so on their ResearchGate profiles. In addition to housing 350,000 papers (and counting), the platform provides a way to search 40 million abstracts and papers from other science databases.

In 2011, ResearchGate reports, 1,620,849 connections were made, 12,342 questions answered and 842,179 publications shared. Greg Phelan, chairman of the chemistry department at the State University of New York, Cortland, used it to find new collaborators, get expert advice and read journal articles not available through his small university. Now he spends up to two hours a day, five days a week, on the site.

Dr. Rajiv Gupta, a radiology instructor who supervised Dr. Madisch at Harvard and was one of ResearchGate’s first investors, called it “a great site for serious research and research collaboration,” adding that he hoped it would never be contaminated “with pop culture and chit-chat.”


COME TOGETHER Bora Zivkovic, a chronobiology blogger, is a founder of the ScienceOnline conference.

Dr. Gupta called Dr. Madisch the “quintessential networking guy — if there’s a Bill Clinton of the science world, it would be him.”

The Paper Trade

Dr. Sönke H. Bartling, a researcher at the German Cancer Research Center who is editing a book on “Science 2.0,” wrote that for scientists to move away from what is currently “a highly integrated and controlled process,” a new system for assessing the value of research is needed. If open access is to be achieved through blogs, what good is it, he asked, “if one does not get reputation and money from them?”

Changing the status quo — opening data, papers, research ideas and partial solutions to anyone and everyone — is still far more idea than reality. As the established journals argue, they provide a critical service that does not come cheap.

“I would love for it to be free,” said Alan Leshner, executive publisher of the journal Science, but “we have to cover the costs.” Those costs hover around $40 million a year to produce his nonprofit flagship journal, with its more than 25 editors and writers, sales and production staff, and offices in North America, Europe and Asia, not to mention print and distribution expenses. (Like other media organizations, Science has responded to the decline in advertising revenue by enhancing its Web offerings, and most of its growth comes from online subscriptions.)

Similarly, Nature employs a large editorial staff to manage the peer-review process and to select and polish “startling and new” papers for publication, said Dr. Clarke, its editor. And it costs money to screen for plagiarism and spot-check data “to make sure they haven’t been manipulated.”

The largest journal publisher, Elsevier, whose products include The Lancet, Cell and the subscription-based online archive ScienceDirect, has drawn considerable criticism from open-access advocates and librarians, who are especially incensed by its support for the Research Works Act, introduced in Congress last month, which seeks to protect publishers’ rights by effectively restricting access to research papers and data.

In an Op-Ed article in The New York Times last week, Michael B. Eisen, a molecular biologist at the University of California, Berkeley, and a founder of the Public Library of Science, wrote that if the bill passes, “taxpayers who already paid for the research would have to pay again to read the results.”

In an e-mail interview, Alicia Wise, director of universal access at Elsevier, wrote that “professional curation and preservation of data is, like professional publishing, neither easy nor inexpensive.” And Tom Reller, a spokesman for Elsevier, commented on Dr. Eisen’s blog, “Government mandates that require private-sector information products to be made freely available undermine the industry’s ability to recoup these investments.”

Mr. Zivkovic, the ScienceOnline co-founder and a blog editor for Scientific American, which is owned by Nature, was somewhat sympathetic to the big journals’ plight. “They have shareholders,” he said. “They have to move the ship slowly.”

Still, he added: “Nature is not digging in. They know it’s happening. They’re preparing for it.”

Science 2.0

Scott Aaronson, a quantum computing theorist at the Massachusetts Institute of Technology, has refused to conduct peer review for or submit papers to commercial journals. “I got tired of giving free labor,” he said, to “these very rich for-profit companies.”

Dr. Aaronson is also an active member of online science communities like MathOverflow, where he has earned enough reputation points to edit others’ posts. “We’re not talking about new technologies that have to be invented,” he said. “Things are moving in that direction. Journals seem noticeably less important than 10 years ago.”

Dr. Leshner, the publisher of Science, agrees that things are moving. “Will the model of science magazines be the same 10 years from now? I highly doubt it,” he said. “I believe in evolution.

“When a better system comes into being that has quality and trustability, it will happen. That’s how science progresses, by doing scientific experiments. We should be doing that with scientific publishing as well.”

Matt Cohler, the former vice president of product management at Facebook who now represents Benchmark Capital on ResearchGate’s board, sees a vast untapped market in online science.

“It’s one of the last areas on the Internet where there really isn’t anything yet that addresses core needs for this group of people,” he said, adding that “trillions” are spent each year on global scientific research. Investors are betting that a successful site catering to scientists could shave at least a sliver off that enormous pie.

Dr. Madisch, of ResearchGate, acknowledged that he might never reach many of the established scientists for whom social networking can seem like a foreign language or a waste of time. But wait, he said, until younger scientists weaned on social media and open-source collaboration start running their own labs.

“If you said years ago, ‘One day you will be on Facebook sharing all your photos and personal information with people,’ they wouldn’t believe you,” he said. “We’re just at the beginning. The change is coming.”

Views of Célya Gruson-Daniel, October 29, 2012, MyScienceWork

The Internet now makes it possible to publish and share billions of data items every day, accessible to over 2 billion people worldwide. This mass of information makes it difficult, when searching, to extract the relevant and useful information from the background noise. It should be added that these searches are time-consuming and can take much longer than the time we actually have to spend on them. Today, Google and specialized search engines such as Google Scholar are based on established algorithms. But are these algorithms sufficiently in line with users’ needs? What if the web needed a human brain to select and put forward the relevant information and not just the information based on “popularity” and lexical and semantic operations?

To address this need, human intermediaries, empowered by the participatory wave of web 2.0, naturally started narrowing down the information and providing an angle of analysis and some context. They are bloggers, regular Internet users or community managers – a new type of profession dedicated to the web 2.0. A new use of the web has emerged, through which the information, once produced, is collectively spread and filtered by Internet users who create hierarchies of information. This “popularization of the web” therefore paves the way to a user-centered Internet that plays a more active role in finding means to improve the dissemination of information and filter it with more relevance. Today, this new practice has also been categorized and is known as curation.

The term “curation” was borrowed from the world of fine arts. Curators are responsible for the exhibitions held in museums and galleries. They build these exhibitions and act as intermediaries between the public and works of art. In contemporary art, the curator’s role is also to interpret works of art and discover new artists and trends of the moment. In a similar way on the web, the tasks performed by content curators include the search, selection, analysis, editorial work and dissemination of information. Curators can also share online the most relevant information on a specific subject. Instead of acting as mere echo chambers, they provide some context for their searches. For example, they address niche topics and themes that do not stand out in a traditional search. They prioritize the information and are able to find new means of presenting it, new types of visualization. Their role is, therefore, to find new formats, faster and more direct means of consultation for Internet users, in a context in which the time we spend reading the information is more and more limited. Curation on the web has a social and relational dimension that plays a central role in the curator’s work. Anyone can act as a curator and personalize information, providing an angle that he or she invites us to discover. This means that curation can be carried out by individuals who do not have an institutional footing. The expression “powered by people” exemplifies this possibility of democratizing information searches.

The world of scientific research and culture is no exception to this movement. The web 2.0 offers the scientific community and its surrounding spheres the opportunity to discover new tools that transform practices and uses, not only of researchers, but also of all the actors of scientific and technical culture (STC).

Curation: an Essential Practice to Manage “Open Science”

The web 2.0 gave birth to new practices motivated by the will to have broader and faster cooperation in a more free and transparent environment. We have entered the era of an “open” movement: “open data”, “open software”, etc. In science, expressions like “open access” (to scientific publications and research results) and “open science” are used more and more often.

The concept of “open science” emerged from the web and created bigger and bigger niches all around the planet. Open science and its derivatives such as open access make us dream of an era of open, collective expertise and innovation on an international scale. This catalyst in the field of science is only possible on one condition: that it be accompanied by the emergence of a reflection on the new practices and uses that are essential to its conservation and progress. Sharing information and data at the international level is very demanding in terms of management and organization. As a result, curation has established itself in the realm of science and technology, both in the research community and in the world of scientific and technical culture.

In the world of research, curation appears as a logical extension of the literature review and bibliographic search, the pillars of a researcher’s work. Curation on the web has brought a new dimension to this work of organizing and prioritizing information. It makes it easier for researchers to collaborate and share, while also bringing to light some works that had previously remained in the shadows.

Mendeley and Zotero are both search and bibliographic management tools that assist you in the creation of an online library. Thus, it is possible to navigate in this mass of bibliographic data, referenced by the researcher, through multiple gateways: keywords, authors’ names, date of publication, etc. In addition, these programs make it possible to automatically generate article bibliographies in the formats specified by each scientific journal. What is new about these tools, apart from the “logistical” aid they provide, is that they are based on collaboration and sharing. Mendeley and Zotero let you create private or public groups. These groups make it possible to share a bibliography with other researchers. They also give access to discussion forums that are useful for sharing with international researchers. Other tools like EndNote and Papers exist, but these paid programs are less collaborative.

New platforms, real scientific social networks, have also appeared. The leading platform ResearchGate was founded in 2008 and now counts 1.9 million users (August 2012). It is an online search platform, but it is used above all for social interaction. Researchers can create a profile and discussion groups, make their work available online, job hunt, etc. Other professional social networks for researchers have emerged, among them MyScienceWork, which is devoted to open access.

Curation, in the era of open science, accelerates the dissemination of information and provides access to its most relevant parts. Post-publication comments add value to the content. Beyond the benefits for the community, these new practices change the role of researchers in society by offering them new public spaces for expression. Curation on the web opens the way to an e-reputation and a new form of celebrity in the world of international science. It gives everyone the opportunity to show the cornerstones of their work, much as the research notebooks of Hypothèses.org have been used in the humanities and social sciences. This system, based on the dual role of “observer/observed”, may also impose limits on researchers, who will have to be more rigorous in choosing the articles they list.

Have we entered the era of the “researcher 2.0”? Undoubtedly, even if it still concerns only a small group of people. The tools described above are widely used for bibliographic management, but their collaborative functions are still underused. It is difficult to change researchers’ practices and attitudes. To move from a closed science to an open science in a world of cutthroat competition, researchers will have to feel their way forward. These new means of sharing are still sometimes perceived as a threat to researchers’ work, or as an excessively long and tedious activity.

Another area where there are most likely fewer barriers is scientific and technical culture. This broad term covers different actors such as associations, companies, universities’ communication departments, the CCSTI (French centers for scientific, technical and industrial culture), journalists, etc. A number of these actors do not limit their work to popularizing scientific results; they also see themselves as having an authentic mission of bringing science into the culture. Curation offers them better organization and visibility of information, though the sought-after benefits differ from one actor to the next. University communication departments are using the web 2.0 more and more to promote their values; this is the case, for example, of the French Université Paris 8. For companies, curation offers the opportunity to become a reference on the themes tied to their corporate identity. MyScienceWork, for example, began curating three collections around the key themes of its identity: open access, new uses and practices of the web 2.0 in the world of science, and women in science. It is essential to keep abreast of the latest news from large institutions and traditional media, but also to take into account bloggers’ articles and links that offer a different viewpoint.

Some tools have also been developed to meet the expectations of these various users. Pearltrees and Scoop.it are non-specialized curation tools that are widely used in the world of scientific and technical culture. Pearltrees offers a visual representation in which each listed page appears as a pearl connected to the others through branches; the result is a prioritized data tree. These mind maps can be shared with one’s contacts. A good example is the work of Sébastien Freudenthal, who uses the tool daily and offers rich content, organized by theme, in the field of sciences and the web. Scoop.it offers a more traditional presentation, with an attractive, magazine-like page layout; a plugin makes it possible to list articles quickly and almost automatically, and to share them. A tool specific to the world of scientific and technical culture is Knowtex, a social network of scientific culture that, in addition to referencing and assessing links, seeks to create a space interconnecting journalists, artists, communicators, designers, bloggers, researchers, etc.
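
As a rough illustration of what such a “prioritized data tree” of curated pages might look like as a data structure, here is a small Python sketch. It is not how Pearltrees is actually implemented; the topic names and URLs are placeholders chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Pearl:
    """One node of the curated tree: a page or a sub-topic (illustration only)."""
    title: str
    url: str = ""
    children: list["Pearl"] = field(default_factory=list)

def show(pearl, depth=0):
    """Print the tree, with indentation reflecting the hierarchy of topics."""
    suffix = f"  <{pearl.url}>" if pearl.url else ""
    print("  " * depth + pearl.title + suffix)
    for child in pearl.children:
        show(child, depth + 1)

# A placeholder collection: broad theme at the root, sub-topics and pages below.
root = Pearl("Sciences and the Web", children=[
    Pearl("Open access", children=[
        Pearl("PLoS", "https://plos.org"),
    ]),
    Pearl("Science 2.0 tools", children=[
        Pearl("Zotero", "https://www.zotero.org"),
    ]),
])

show(root)
```

The point of the hierarchy is prioritization: the curator decides which themes sit near the root and which pages hang from them, which is exactly the editorial choice that distinguishes curation from a flat list of links.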

These different tools are used on a daily basis by various actors of technical and scientific culture, but also by researchers, teachers, etc. They gather these communities around a shared practice and favor multiple conversations. The development of these hybrid networks is surely a cornerstone in the building of open science, encouraging the creation of new ties between science and society that go beyond the traditional geographical limits.

Summary

This article has two parts. The first presents a pioneering experience in curation of scientific research in an open access online scientific journal, in a BioMed e-Books series, and in the curation of a Scoop.it! journal on medical imaging.

The second part presents the views of two curators on the transformation of scientific publishing and the functioning of the scientific agora (the marketplace of the ancient Greek city of Athens).

The changes described above are irrevocable; they foster the progress of civilization by providing access to the scientific process and its resources through collaboration among peers.

Published: April 7, 2013

The scientists who were recruited to appear at a conference called Entomology-2013 thought they had been selected to make a presentation to the leading professional association of scientists who study insects.

But they found out the hard way that they were wrong. The prestigious, academically sanctioned conference they had in mind has a slightly different name: Entomology 2013 (without the hyphen). The one they had signed up for featured speakers who were recruited by e-mail, not vetted by leading academics. Those who agreed to appear were later charged a hefty fee for the privilege, and pretty much anyone who paid got a spot on the podium that could be used to pad a résumé.

“I think we were duped,” one of the scientists wrote in an e-mail to the Entomological Society.

Those scientists had stumbled into a parallel world of pseudo-academia, complete with prestigiously titled conferences and journals that sponsor them. Many of the journals and meetings have names that are nearly identical to those of established, well-known publications and events.

Steven Goodman, a dean and professor of medicine at Stanford and the editor of the journal Clinical Trials, which has its own imitators, called this phenomenon “the dark side of open access,” the movement to make scholarly publications freely available.

The number of these journals and conferences has exploded in recent years as scientific publishing has shifted from a traditional business model for professional societies and organizations built almost entirely on subscription revenues to open access, which relies on authors or their backers to pay for the publication of papers online, where anyone can read them.

Open access got its start about a decade ago and quickly won widespread acclaim with the advent of well-regarded, peer-reviewed journals like those published by the Public Library of Science, known as PLoS. Such articles were listed in databases like PubMed, which is maintained by the National Library of Medicine, and selected for their quality.

But some researchers are now raising the alarm about what they see as the proliferation of online journals that will print seemingly anything for a fee. They warn that nonexperts doing online research will have trouble distinguishing credible research from junk. “Most people don’t know the journal universe,” Dr. Goodman said. “They will not know from a journal’s title if it is for real or not.”

Researchers also say that universities are facing new challenges in assessing the résumés of academics. Are the publications they list in highly competitive journals or ones masquerading as such? And some academics themselves say they have found it difficult to disentangle themselves from these journals once they mistakenly agree to serve on their editorial boards.

The phenomenon has caught the attention of Nature, one of the most competitive and well-regarded scientific journals. In a news report published recently, the journal noted “the rise of questionable operators” and explored whether it was better to blacklist them or to create a “white list” of those open-access journals that meet certain standards. Nature included a checklist on “how to perform due diligence before submitting to a journal or a publisher.”

Jeffrey Beall, a research librarian at the University of Colorado in Denver, has developed his own blacklist of what he calls “predatory open-access journals.” There were 20 publishers on his list in 2010, and now there are more than 300. He estimates that there are as many as 4,000 predatory journals today, at least 25 percent of the total number of open-access journals.

“It’s almost like the word is out,” he said. “This is easy money, very little work, a low barrier start-up.”

Journals on what has become known as “Beall’s list” generally do not post the fees they charge on their Web sites and may not even inform authors of them until after an article is submitted. They barrage academics with e-mail invitations to submit articles and to be on editorial boards.

One publisher on Beall’s list, Avens Publishing Group, even sweetened the pot for those who agreed to be on the editorial board of The Journal of Clinical Trails & Patenting, offering 20 percent of its revenues to each editor.

One of the most prolific publishers on Beall’s list, Srinubabu Gedela, the director of the Omics Group, has about 250 journals and charges authors as much as $2,700 per paper. Dr. Gedela, who lists a Ph.D. from Andhra University in India, says on his Web site that he “learnt to devise wonders in biotechnology.”

Another Beall’s list publisher, Dove Press, says on its Web site, “There are no limits on the number or size of the papers we can publish.”

Open-access publishers say that the papers they publish are reviewed and that their businesses are legitimate and ethical.

“There is no compromise on quality review policy,” Dr. Gedela wrote in an e-mail. “Our team’s hard work and dedicated services to the scientific community will answer all the baseless and defamatory comments that have been made about Omics.”

But some academics say many of these journals’ methods are little different from spam e-mails offering business deals that are too good to be true.

Paulino Martínez, a doctor in Celaya, Mexico, said he was gullible enough to send two articles in response to an e-mail invitation he received last year from The Journal of Clinical Case Reports. They were accepted. Then came a bill saying he owed $2,900. He was shocked, having had no idea there was a fee for publishing. He asked to withdraw the papers, but they were published anyway.

“I am a doctor in a hospital in the province of Mexico, and I don’t have the amount they requested,” Dr. Martínez said. The journal offered to reduce his bill to $2,600. Finally, after a year and many e-mails and a phone call, the journal forgave the money it claimed he owed.

Some professors listed on the Web sites of journals on Beall’s list, and the associated conferences, say they made a big mistake getting involved with the journals and cannot seem to escape them.

Thomas Price, an associate professor of reproductive endocrinology and fertility at the Duke University School of Medicine, agreed to be on the editorial board of The Journal of Gynecology & Obstetrics because he saw the name of a well-respected academic expert on its Web site and wanted to support open-access journals. He was surprised, though, when the journal repeatedly asked him to recruit authors and submit his own papers. Mainstream journals do not do this because researchers ordinarily want to publish their papers in the best journal that will accept them. Dr. Price, appalled by the request, refused and asked repeatedly over three years to be removed from the journal’s editorial board. But his name was still there.

“They just don’t pay any attention,” Dr. Price said.

About two years ago, James White, a plant pathologist at Rutgers, accepted an invitation to serve on the editorial board of a new journal, Plant Pathology & Microbiology, not realizing the nature of the journal. Meanwhile, his name, photograph and résumé were on the journal’s Web site. Then he learned that he was listed as an organizer and speaker on a Web site advertising Entomology-2013.

“I am not even an entomologist,” he said.

He thinks the publisher of the plant journal, which also sponsored the entomology conference, simply pasted his name, photograph and résumé onto the conference Web site. At that point, he said, outraged that the conference and the journal were “using a person’s credentials to rip off other unaware scientists,” he asked that his name be removed from both.

Weeks went by and nothing happened, he said. Last Monday, in response to this reporter’s e-mail to the conference organizers, Jessica Lincy, who said only that she was a conference member, wrote to explain that the conference had “technical problems” removing Dr. White’s name. On Tuesday, his name was gone. But it remained on the Web site of the journal.

Dr. Gedela, the publisher of the journals and sponsor of the conference, said in an e-mail on Thursday that Dr. Price and Dr. White’s names remained on the Web sites “because of communication gap between the EB member and the editorial assistant,” referring to editorial board members. That day, their names were gone from the journals’ Web sites.

“I really should have known better,” Dr. White said of his editorial board membership, adding that he did not fully realize how the publishing world had changed. “It seems like the Wild West now.”

This article has been revised to reflect the following correction:

Correction: April 8, 2013

An earlier version of this article misstated the name of a city in Mexico that is home to a doctor who sent articles to a pseudo-academic journal. It is Celaya, not Ceyala.