The scientific idea that is most ready for retirement is the scientific method itself. More precisely, it is the idea that there is only one scientific method, one exclusive way of obtaining scientific results. The problem is that the traditional scientific method, as an exclusive approach, is not adequate to the new situations of contemporary science such as big data, crowdsourcing, and synthetic biology. Hypothesis-testing through observation, measurement, and experimentation made sense in the past, when information was scarce and costly to obtain, but this is no longer the case. In recent decades, we have already been adapting to a new era of information abundance that has facilitated experimental design and iteration. One result is that there is now a field of computational science alongside nearly every discipline, for example computational biology and digital manuscript archiving. Information abundance and computational advances have propelled the evolution of a scientific model that is distinct from the traditional scientific method, and three emerging areas are advancing it even further.

Big data, the creation and use of large and complex cloud-based data sets, is one pervasive trend that is reshaping the conduct of science. The scale is immense: organizations routinely process millions of transactions per hour into hundred-petabyte databases. Worldwide annual data creation is currently doubling and is estimated to reach 8 zettabytes in 2015. Even before the big data era, modeling, simulating, and predicting had become a key computational step in the scientific process, and the new methods required to work with big data make the traditional scientific method increasingly less relevant. Our relationship to information has changed with big data. Previously, in the era of information scarcity, all data was salient. In a calendar, for example, every data element, or appointment, is important and intended for action. With big data, the opposite is true: 99% of the data may be irrelevant (immediately, over time, or once processed into higher resolution). The focus becomes extracting points of relevance from an expansive whole, separating signal from noise and surfacing anomalies and exceptions, for example genomic polymorphisms. The next level of big data processing is pattern recognition. High sampling frequencies allow not only point-testing of phenomena (as in the traditional scientific method) but their full elucidation over multiple time frames and conditions. For the first time, longitudinal baseline norms, variance, patterns, and cyclical behavior can be obtained. This requires thinking beyond the simple causality of the traditional scientific method into extended systemic models of correlation, association, and episode triggering. Some of the prominent methods used in big data discovery include machine learning algorithms, neural networks, hierarchical representation, and information visualization.
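The shift described above, from treating every data point as salient to extracting rare points of relevance from an expansive whole, can be sketched in a few lines. This is an illustrative toy example, not any specific big data pipeline: it simulates a longitudinal stream of readings, establishes a baseline norm and variance from the whole, and flags the small fraction of points that deviate strongly (the injected indices and thresholds are arbitrary choices for the demonstration).

```python
import random
import statistics

random.seed(42)

# Simulated longitudinal readings: a stable baseline with noise, plus a
# handful of injected rare events -- the "signal" hidden in the noise.
readings = [random.gauss(100.0, 5.0) for _ in range(10_000)]
for i in (1234, 5678, 9012):
    readings[i] += 50.0  # three anomalies among 10,000 points (~0.03%)

# Establish the baseline norm and variance from the full data set,
# then flag points that deviate strongly from it.
mean = statistics.fmean(readings)
stdev = statistics.pstdev(readings)
anomalies = [i for i, x in enumerate(readings)
             if abs(x - mean) > 5 * stdev]

print(anomalies)  # the injected indices should stand out
```

The point is the inversion the essay describes: nearly all of the 10,000 values are discarded as uninteresting, and the analysis consists of characterizing the whole only in order to isolate the few exceptions.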

Crowdsourcing is another trend reshaping the conduct of science. This is the coordination of large numbers of individuals (the crowd) through the Internet to participate in some activity. Crowd models have led to the development of a science ecosystem that includes the professionally-trained institutional researcher using the traditional scientific method at one end, and the citizen scientist exploring issues of personal interest through a variety of methods at the other. In between are different levels of professionally-organized and peer-coordinated efforts. The Internet (and the trend to connect all people to it: roughly 2 billion now, an estimated 5 billion by 2020) enables very large-scale science. Not only are existing studies cheaper and quicker to run with crowdsourced cohorts, but studies 100x the size and detail of previous studies are now possible. The crowd can provide volumes of data by automatically linking quantified self-tracking gadgets to data commons websites. Citizen scientists participate in light information-processing and other data collection and analysis activities through websites like Galaxy Zoo. The crowd is engaged more extensively through crowdsourced labor marketplaces (initially general-purpose, like Mechanical Turk, now increasingly skill-targeted), data competitions, and serious gaming (like predicting protein folding and RNA conformation). New methods for the conduct of science are emerging through DIY efforts, the quantified self, biohacking, 3D printing, and collaborative peer-based studies.

Synthetic biology is a third widespread trend reshaping the conduct of science. Lauded as the potential 'transistor of the 21st century' given its transformative possibilities, synthetic biology is the design and construction of biological devices and systems. It is highly multi-disciplinary, linking biology, engineering, functional design, and computation. One of the key application areas is metabolic engineering: working with cells to greatly expand their usual production of substances that can then be used for energy, agricultural, and pharmaceutical purposes. The nature of synthetic biology is proactively creating de novo biological systems, organisms, and capacities, which is the opposite in spirit of the passive characterization of phenomena for which the original scientific method was developed. While it is true that optimizing genetic and regulatory processes within cells can be partially construed under the scientific method, the overall scope of activities and methods is much broader. Innovating de novo organisms and functionality requires a significantly different scientific methodology than that supported by the traditional scientific method, and includes a re-conceptualization of science as an endeavor of both characterizing and creating.

In conclusion, we can no longer rely exclusively on the traditional scientific method in the new era of science emerging through areas like big data, crowdsourcing, and synthetic biology. A multiplicity of models must be employed for the next generation of scientific advance, supplementing the traditional scientific method with new ways that are better suited and equally valid. Not only is a plurality of methods required; it also opens up new tiers for the conduct of science. Science can now be carried out downstream, at increasingly detailed levels of resolution and permutation, and upstream, with broader systemic dynamism. Temporality and the future become more knowable and predictable as all processes, human and otherwise, can be modeled with continuous real-time updates. Epistemologically, how we know, and what counts as the truth of the world and reality, is changing. In some sense we may be at an intermediary 'dark ages node', from which the multiplicity of future scientific methods can pull us into a new era of enlightenment just as surely as the traditional scientific method pulled us into modernity.