Chemical space, high throughput screening and the world of blockbuster drugs

By Dr Hakim Djaballah

Spring 2013

We are constantly told that if a high throughput screen does not identify hit(s), the blame lies with the compounds in your library. The community has accepted this notion and unleashed an exploration of chemical space, using novel or pre-existing synthetic chemistries, to supposedly generate better ones.

These efforts, unfortunately, have not resulted in a giant step for mankind in combating disease. Rather, this article argues that we are still at the same starting line as we were 30 years ago, and should perhaps listen to the call of nature and revive natural product research, since nature has amassed the most diverse libraries over millions of years.

The most memorable and thought-provoking editorial was by Roger Lahana in 1999 on combinatorial chemistry (CC) and high throughput screening (HTS)1, appropriately entitled ‘How many leads from HTS?’ It appeared during a phase of heavy investment by large Pharma and biotech companies alike in CC, automation, novel assay technologies and HTS as the only way forward to discover drugs and reduce overall cost and time.

This revolutionary phase became obsessed with numbers and turned HTS into a numbers generator; companies were viewed as successful if they had assembled multimillion-compound libraries and generated several million data points per year. It is not surprising, then, that when attendees of the CHI High-Throughput Technologies meeting, back in May 1999 in Washington DC (USA), were asked how many leads we had obtained from CC and HTS so far, their overwhelming response was: none!1 A few months later, a response to Lahana’s editorial was published2. Its author claimed that the purpose of screening is not to identify the drug molecule itself, but to identify good leads from which clinical candidates could be synthesised using rational/classical chemistry approaches; they further added that if screening fails to identify good leads, then either the assays were no good or the quality of the library was poor2. Surprisingly, Fox and colleagues in 2001 reported what was thought to be the earliest sign of success for HTS, with up to 46 drug candidates apparently obtained from HTS over an eight-year period (1990 to 1997), as recorded through their worldwide survey of 52 individual HTS lab directors3. A year later, Fox and colleagues surprised the drug discovery world by reporting that up to 63 drug candidates, obtained from HTS efforts from the early 1990s to 2000, were indeed in human clinical trials4. Does this mean we can declare CC and HTS a success in providing us with the much-needed drugs? There is a sharp contrast between the Lahana editorial, with no leads in sight, and the Fox reports, with up to 63 candidates in human clinical trials. Almost 11 years later, one would expect to see at least a favourable outcome for some of these candidates, at which point we could collectively declare success.
With limited information at hand, it is an impossible task to track each of the anonymous 63 clinical candidates through their trials and approval processes.

Scannell and colleagues, in a recent review, reported for the first time a thoughtful analysis of R&D efficiency over a 60-year period5, which may provide us with a partial answer as to the success or failure of CC and HTS. They report that despite an 800-fold increase in CC output during the 1980s and 1990s resulting in large chemical libraries, innovations in HTS including a 10-fold cost reduction in testing these libraries, and vast improvements to the R&D process including outsourcing, the number of new drug approvals has halved roughly every nine years from the 1950s up to the 2010 approvals – an 80-fold fall in productivity5. Their analysis seems to point towards failure rather than success for CC and HTS. However, a more optimistic view was recently presented by Berggren and colleagues at McKinsey & Company, addressing the crisis in R&D productivity with a five-year forecast of drug innovation6. The probability of success (POS) is much higher in their forecast, with an average annual number of approvals/launches above 30 during a five-year period (2011-15), and a rapid decline to below 10 in 2016. Their positive outlook seems to be driven by the unique number of novel molecules in development, reaching 7,709 compounds in 2011. They further estimate the total number of novel compounds between 2006 and 2011 at more than 13,000 potential drug products, based on their analysis of the Pharmaprojects pipeline database6. Berggren’s assessment is suggestive of success if these 13,000 compounds were indeed the fruits of CC and HTS; but their outlook of potential disaster by 2016 somehow contradicts that success, and raises the question of whether the industry can indeed generate an additional 13,000 novel compounds beyond 2015.
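The ‘halving roughly every nine years’ and the 80-fold fall over 60 years are two views of the same exponential decline. A minimal sketch of the arithmetic (the 80-fold and 60-year figures are from the Scannell review5; the rest is simple exponential decay):

```python
import math

fold_fall = 80.0   # reported fall in R&D productivity (ref 5)
years = 60.0       # roughly 1950 to 2010

# If productivity halves every h years, then fold_fall = 2 ** (years / h).
# Solving for h gives the implied halving time:
halving_time = years * math.log(2) / math.log(fold_fall)
print(f"implied halving time: {halving_time:.1f} years")   # ~9.5 years

# Conversely, halving exactly every nine years over 60 years would give:
fold_from_nine = 2 ** (years / 9.0)
print(f"fold fall at a nine-year halving time: {fold_from_nine:.0f}")  # ~102
```

The two figures agree to within rounding, which is all the consistency the review claims.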

Chemical library: good or bad?

Like many of my colleagues who are in the business of running chemical screening operations in academia, I am often told by customers that the libraries we use in HTS are ‘no good’, since their friends were extremely lucky in finding hit(s) using someone else’s library; we are sometimes even criticised by colleagues on grant review panels for proposing to screen libraries that are also ‘no good’. I often wondered whether this is an academic naivety problem to do with the novelty of chemical screening, or whether there was an underlying message in there. When I joined Memorial Sloan-Kettering Cancer Center in 2003 to set up the HTS Lab, I was instructed by a particular member of faculty to purchase the ChemBridge DIVERSet E library containing 50,000 compounds; as he put it, “many people have screened it and published on it” was their selection criterion for a chemical library, further adding “if no one has published on it then it will be hard to publish on it in the future”. I found their instructions and insistence remarkable considering that they are chemists by training. Instead, I took the simplest approach to building what I believed would be a more comprehensive chemical library to fit all of the biologies being studied at the centre, with four simple rules: 1) The build has to be over time; 2) The overall size should not exceed 500,000 molecules; 3) Loose exclusion criteria/rules for molecules; and 4) Enrichment in non-Lipinski-compliant compounds, ie natural products, is a must. Ten years later, my chemical library is beautiful, approaching 400,000 chemicals, with a good track record of identifying actives in screening campaigns7-18.

Surprisingly though, when Fox and colleagues sought opinions on the success of HTS from HTS lab heads back in 2001, only 12.5% felt that chemical libraries were the problem and that better ones were needed for screening4; hardly the majority one would have expected. However, Scannell and colleagues would argue that chemical library content is a huge problem, considering that the chemical space coverage of today’s libraries is infinitesimally small5. They further add that Pharma and biotech merger activities in the past few years have revealed substantial overlaps between the companies’ respective chemical libraries5; not surprising, considering that similar brains think alike, there are only a handful of commercial vendors to go to, and similar chemistries were applied. It also means that we are all screening similar compounds, whether we are aware of it or not. If that is truly reflective of the dire state of chemical library composition and content, then how do we explain the source/origin of the 13,000 novel and unique compounds alluded to by Berggren6?

How do you begin to judge whether a chemical library is good or bad? In my opinion, you cannot really judge a library; you can only describe its content in terms of unique chemotypes, how big a cluster they represent, and the presence of nuisance chemistries or reactive groups; and if you are really desperate for a validation statement, then by all means apply Lipinski’s rule of five, segregating the library into categories of ‘compliance’. As an example, a recent publication by Baell, addressing the coverage of lead-like molecules in commercial libraries using their in-house developed PAINS filters, reported only 6,000 lead-like molecules in a vendor library of 400,000 chemicals19; that is a very high attrition rate, with only 1 in 67 molecules being lead-like and perhaps worth screening. I would take a completely different approach and claim that each molecule in your library presents an opportunity; I do not know what it is yet, and it does not really matter. The discovery depends on the biological question being asked, and the assay used for the screen is highly critical. Nowadays, there are fewer and fewer reports addressing or questioning assay suitability/validity for HTS. This is unfortunate, considering the overwhelming investment in screening from both the public and private sectors. We have also been led to believe that a single measure, known as the Z-factor20, is all you need to determine whether an assay is good for HTS or not: with a Z-factor value of >0.5, your assay must be robust and ready for screening. This magic factor has industrialised the screening world and, unfortunately, is the sole culprit of misguidance in assay development for HTS. The Z-factor is defined as a measure of ‘statistical effect size’, meaning that to be statistically relevant you need a large data set to assess the separation of your assay signal from its noise or background.
What the Z-factor does not tell us is: 1) The relevance of your assay signal to the question you are trying to study, since we are fully vested in sensitivities, miniaturisations and ultra-HTS; 2) Assay dynamics and sensitivity to specific and non-specific modulators; and 3) Heterogeneity of the biology being studied. In essence, it is an abused measure and sometimes irrelevant, as is the case for high-content cell-based assays18; but we must have it to get approval to carry out our screens, to get our manuscripts published as reviewers will more often than not ask for it, and to get our grant proposals funded as it is part of the funding requirements.
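For readers unfamiliar with the measure being criticised here: the Z′-factor of Zhang and colleagues20 is computed from the means and standard deviations of the positive and negative control wells on a plate. A minimal sketch, with invented control values for illustration:

```python
import statistics

def z_factor(positives, negatives):
    """Z'-factor (Zhang et al, 1999):
    1 - 3*(SD_pos + SD_neg) / |mean_pos - mean_neg|.
    Values above 0.5 are conventionally read as 'assay ready for HTS' --
    the single pass/fail number this article argues is over-relied upon."""
    mu_p, mu_n = statistics.mean(positives), statistics.mean(negatives)
    sd_p, sd_n = statistics.stdev(positives), statistics.stdev(negatives)
    return 1.0 - 3.0 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Hypothetical control-well readings from one plate:
pos = [980, 1010, 995, 1005, 990]  # e.g. maximal (uninhibited) signal
neg = [102, 98, 95, 105, 100]      # e.g. background signal
print(round(z_factor(pos, neg), 3))  # 0.947 -> 'ready for screening'
```

Note that nothing in the number says whether the signal means anything biologically, which is precisely the point being made above.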

Many of us forget that screening these libraries represents the largest live casino you will ever play in: whether you are a blackjack, poker, roulette or slot-machine player, the odds are always in favour of the house, unless of course you cheat or get lucky. So, today’s chemical libraries are indeed one-size-fits-all and, as far as I am concerned, nothing is wrong with them; they are also the only available and accessible tools to use, and should be used across a multitude of biologies and screens with the hope of getting lucky one day. Baell’s comment on the sheer level of artifacts generated by screening these libraries is duly noted19, but biology without noise is not worth studying. I would also argue that the reported pan-active hit(s) from screening these libraries are the best metric of the suitability/validity of your assay in the first place.

Chemical space exploration

Curiosity is one of the many traits of human beings, leading us always to want to explore and better understand our world. In ancient times, our ancestors observed objects in the skies and predicted their motions, though only for those they could see with the naked eye. Through perseverance and technological advancement, the field has enabled the human race to go beyond the skies and the solar system into deep space exploration, using both manned and unmanned missions, the purpose being to unravel the mysteries of our universe and to better understand our place in the cosmos. After several thousand years of research and a gigantic budget, have we explored or conquered space?

By comparison, have we successfully conquered chemical space? The answer is not yet, because we have only begun to explore its complexities. As disappointing as this may be, our ability to synthesise complex chemical molecules is still rather primitive compared to what nature can do, and has been doing, over billions of years. As an example, bacteria can make a complex molecule in a few hours, versus several years of hard work by a super-creative synthetic chemist with an army of students and post-doctoral researchers, resulting in one high-profile publication with a gigantic authorship line-up 10 to 20 years later and a synthetic route of more than 60 steps, hardly amenable to conventional process chemistry for scale-up. So, in essence, we can replicate or mimic what nature has already made, given resources and time; but is that enough to conquer the space and make novel drugs? Newman and Cragg would argue that we are at least making some progress, and suggest that natural product research should be expanded significantly, as assessed through their 30-year analysis of drug approvals: 64 New Chemical Entities (NCEs) were natural products, 299 NCEs were derived from a natural product with a semisynthetic modification, 268 NCEs were natural product mimics, and 55 NCEs were made by total synthesis but with a pharmacophore that may have a natural product origin21.

According to Scannell and colleagues, the world’s chemical libraries combined represent a minuscule fraction of the vast chemical space5; perhaps the best comparison would be the size of a grain of salt versus the size of the Earth. They further argue that these libraries contain redundant regions of chemical space. Surprisingly, the CAS registry counter lists ~68 million commercially available chemicals22. Based on the various suggestions as to the size of the chemical space23-24, I would propose that, empirically, it may contain 10^180 possible organic and inorganic chemical structures. I would also predict that up to 10% have evolved over billions of years to interact with living matter, from bacteria, fungi, worms, scorpions, frogs, dinosaurs and plants to humans. This would leave us with up to 10^18 possible chemical structures to explore and synthesise, clearly an unattainable task. But if I were to further predict, empirically, that there are only 10^9 known chemicals on Earth, it would mean that we have covered only 7%, leaving 93%, or 930 million molecules, up for chemical exploration. How realistic is it to engage in making these compounds over the next 30 years? That is 31 million compounds per year, or ~2.58 million compounds per month, or ~85,000 compounds per day. Even with access to the most sophisticated high-output chemical synthesis strategies and robot-enabled facilities in the world, this task would, simply put, also be unattainable. In a recent report, Reymond and Awale predict that their exploration strategies, including improved compound enumeration, classification and virtual screening schemes, together with the implementation of chemical synthesis resources, would give us an opportunity to explore the chemical space better25.
Thirty years of CC and chemical innovation have left us stagnant at the same starting line in the race to produce better drugs, with only 387 NCEs discovered by random screening or modification of existing molecules21.
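The exploration rates quoted above are straight division; a quick sketch of the arithmetic using the article’s own figures (and assuming a 365-day year):

```python
# Figures from the article: ~68 million CAS-registered chemicals against
# an empirical guess of 10**9 knowable chemicals, leaving ~93% unexplored.
cas_registered = 68_000_000
known_chemicals = 1_000_000_000
print(f"coverage: {cas_registered / known_chemicals:.0%}")  # 7%

remaining = 930_000_000  # the article's rounded 93% of 10**9
years = 30

per_year = remaining / years
per_month = per_year / 12
per_day = per_year / 365
print(f"{per_year:,.0f} per year")    # 31,000,000
print(f"{per_month:,.0f} per month")  # 2,583,333
print(f"{per_day:,.0f} per day")      # 84,932
```

Even before considering purification, registration and quality control, the per-day figure alone makes the point about unattainability.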

Drug discovery under siege

A total of 1,073 drugs are considered unique NCEs approved over a 30-year period21,26. If these numbers truly hold, then the combined might of the Pharma and biotech companies has produced an average of only ~36 NCEs per year over 30 years of R&D. At face value, this analysis does not make sense; otherwise these companies would be out of business and long gone. Actually, it makes perfect sense for those companies playing the dangerous game of chasing blockbuster drugs, knowing how much they will be worth in the end. A blockbuster drug is defined as one that achieves global revenues of more than $1 billion per year for its owner27; it is estimated that 125 drugs have met this sales target. The top 10 best-selling drugs in the United States alone generated more than $70 billion in sales in 201128-29. Seven of these blockbusters (Table 1) have an origin linking them to products naturally produced by nature, including the number one blockbuster of all time, Lipitor, which has generated more than $125 billion in sales for Pfizer29.
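The ~36 NCEs-per-year average follows directly from the approval counts cited21,26; a one-line check:

```python
nces_approved = 1073  # unique NCEs over the 30-year window (refs 21, 26)
period_years = 30
print(round(nces_approved / period_years, 1))  # 35.8, i.e. ~36 NCEs per year
```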

The search for blockbuster drugs is more lucrative and important than ever; Pfizer’s success with Lipitor makes the chase even more addictive and risky, though Pharma chiefs claim that recent restructuring and portfolio de-risking activities are gaining traction towards rebuilding a strong and potentially profitable pharmaceutical industry29. I would argue that this is hardly the case, considering several indicators: prescription costs are under pressure, the patent landscape is sensitive, generic sales have reached up to 50% of the market, and only two in 10 drugs are profitable. Combined, these indicators show that the drug discovery business is indeed under siege (Figure 1). Furthermore, the patent cliff of the best-selling drugs is upon us, and the hope that recent approvals will yield unanticipated best-seller replacements is wishful thinking by the Pharma chiefs. In 2013, it is a reality that making any investment in R&D is even more challenging than gambling, leaving the dreaded question of where the next blockbuster drug will come from.

The Pharma and biotech industries have been exploring chemical space for more than 30 years through the use of sophisticated synthetic methodologies and CC, with the resulting multi-million-compound libraries screened against a diverse range of targets and biologies. The outcome of these huge investments is rather disappointing, with only 36% of approved NCEs attributable to this gigantic endeavour. This is not that surprising, considering that man-made chemistries will always rely on simple coupling reactions, with nitrogen enrichment in the final molecules. Figure 2 summarises how unlikely chemical exploration by synthetic chemistries is to lead to success. The numbers clearly favour molecules with natural-source origins, which contribute 64% of approved NCEs. Newman and Cragg alluded to various academic chemistry groups modifying active natural product skeletons as leads for novel drugs21, but failed to caution against the continued use of combinatorial chemistry approaches under a new coat of diversity paint. If these approaches were going to be successful, we would have observed their impact by now. This leaves us with only one choice: to return to Mother Nature as a golden source of novel agents and drugs. Billions of years of evolution can only make better chemical molecules than mankind will ever come close to making synthetically.

A call to revive natural product research

Will the Pharma industry answer the call? Most likely not, as reflected in its recent activities of consolidation, layoffs and site closures. I would also speculate that Pharma will eventually eliminate the ‘R’ from its R&D efforts and shift it towards the public sector; in turn, government agencies, academics and not-for-profit research organisations will answer the call and, hopefully, revive natural product research through a global initiative to benefit mankind. Combined, they would provide Pharma with unprecedented access to the largest research enterprise in the world at little or no cost. Academic and not-for-profit screening operations throughout the world are better positioned than ever to screen natural products as extracts or purified molecules. The global nature of this initiative would require special collaborative agreements protecting the rights of all parties involved, especially the biodiversity-rich source countries providing access to their resources, which should in return be rewarded in a fair and equitable manner. Such efforts have been ongoing for many years now, and the hope is that we are indeed close to the ratification of the Nagoya Protocol, which requires ratification by 50 countries30.

Acknowledgements

I wish to thank members of my lab and colleagues at the centre for fruitful discussions over the past 10 years. The HTS Lab is partially supported by William H. Goodwin and Alice Goodwin and the Commonwealth Foundation for Cancer Research, the Experimental Therapeutics Center of the Memorial Sloan-Kettering Cancer Center, the William Randolph Hearst Fund in Experimental Therapeutics, the Lillian S. Wells Foundation and by an NIH/NCI Cancer Center Support Grant (5 P30 CA008748-44).

Dr Hakim Djaballah, molecular pharmacologist and technologist, has been the Director of the HTS Core Facility at Memorial Sloan-Kettering Cancer Center since its establishment in 2003. In 1992, he received his PhD in biochemistry from the University of Leicester, England. He was the recipient of the 2007 Robots and Vision User Recognition Award.

28 Nisen, M. The 10 Best Selling Prescription Drugs in the United States. Business Insider, June 28, 2012; http://www.businessinsider.com/10-best-selling-blockbuster-drugs-2012-6?op=1#ixzz2KzPtaOWz.