Recent papers point to serious problems in science: published claims are failing to replicate. For large observational medical studies, the failure rate appears to be as high as 80 to 90%. In one survey of experimental biology papers, 47 of 53 claims did not replicate. Among retracted papers, fraud was proven or suspected in over 40% of cases. Funding agencies need to alter the incentive system in science. This lecture is aimed at statistical consumers of science; we are all consumers in that we read the mass media and see science claims every day. The number of questions under consideration in a study should be clearly disclosed. Journal editors should accept papers whose claims appear valid; a p-value below 0.05 should not be their touchstone of acceptance. We also need increased scientific oversight beyond journal refereeing. The presentation will give examples of failures to replicate, simple data processing and analysis strategies editors can adopt, and statistical methods for dealing with multiple testing and multiple modeling.
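As a small illustration (not taken from the lecture itself) of why the number of questions tested matters, the sketch below computes the chance of at least one spurious p < 0.05 result when many independent null hypotheses are tested, and shows the standard Bonferroni adjustment; function names here are illustrative, not from the source.

```python
def familywise_error_rate(k, alpha=0.05):
    """Probability of at least one false positive among k independent null tests,
    each run at significance level alpha."""
    return 1 - (1 - alpha) ** k

def bonferroni_threshold(k, alpha=0.05):
    """Bonferroni-adjusted per-test threshold that keeps the familywise
    error rate at or below alpha across k tests."""
    return alpha / k

# With 20 questions tested at the conventional 0.05 level, a "significant"
# finding is more likely than not even when every null hypothesis is true.
for k in (1, 5, 20, 100):
    print(f"{k:>3} tests: P(at least one p < 0.05) = {familywise_error_rate(k):.2f}")
```

For 20 tests the familywise error rate is about 0.64, which is one reason a single p < 0.05 cannot serve as a touchstone of acceptance when the number of questions asked is undisclosed.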