Statistical predictive techniques have their roots in the small-data setting of their day, roughly 200 years ago. Since then, a large body of literature has accumulated that 1) deepened the theory, providing a close-up view of the inner workings of the traditional techniques, and 2) widened the theory, providing a broader view through new methods that arose in part from non-theoretical considerations such as data type; addressing categorical data, for example, produced the log-linear model. These newer and better-understood methods still take the small-data setting as their working ground, and thus have two zones of weakness that prevent them from gaining a predictive information advantage. First, the data analyst must “fit the (data to a) model” under the assumption that the analyst’s chosen pre-specified model did in fact generate the data at hand, an assumption that was problematic then and is untenable now. Second, these methods are at best optimal for the small data of yesteryear and do not scale to today’s big-data setting. Today’s model input process can effortlessly ingest big data, thanks to enormous computer memory and storage devices, but the model’s output process remains stuck in the “fit the model” paradigm. The implication of these weaknesses is simply that such models cannot deliver a predictive information advantage.
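
To make the “fit the (data to a) model” workflow concrete, the following is a minimal sketch of fitting a log-linear (Poisson) model to categorical count data with Python’s statsmodels. The contingency table, variable names, and choice of an independence model are illustrative assumptions, not examples taken from the text; the point is only that the analyst pre-specifies the model form and then fits the data to it.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical 2x2 contingency table of counts: treatment by outcome.
df = pd.DataFrame({
    "treatment": ["A", "A", "B", "B"],
    "outcome":   ["yes", "no", "yes", "no"],
    "count":     [30, 70, 45, 55],
})

# Log-linear model for the cell counts, fit as a Poisson GLM.
# The analyst pre-specifies the model form (here, independence of
# treatment and outcome: main effects only, no interaction) and then
# "fits the data to the model."
model = smf.glm("count ~ treatment + outcome", data=df,
                family=sm.families.Poisson()).fit()

print(model.summary())

# The residual deviance measures how badly the pre-specified
# independence model describes the observed table.
print("Residual deviance:", model.deviance)
```

In this paradigm, the analysis stands or falls with the pre-specified model form; the deviance check above only tells the analyst how poorly that chosen form fits, which is precisely the limitation the paragraph describes.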