The message here is: do not make your product more robust or complex than you know it needs to be, and do not waste time planning for what will most likely never happen.
Tip
Always plan for reasonable levels of safety, complexity, and performance.
Premature optimization
Is your software fast enough? Don't know? Then why are you optimizing that code, my friend? If no one has complained about the software being slow and you don't notice it being slow in daily use, then spending time optimizing it is premature optimization, and probably a waste of your time.
And so, on to Flask.
Blueprints 101
So far, our applications have all been flat: beautiful, single-file web applications (templates and static resources aside). In some cases, that's a nice approach: fewer imports, easy maintenance with simple editors, and so on. But…
As our applications grow, we need to arrange our code by context.
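To give a taste of where we're headed, here is a minimal sketch of a blueprint; the pages name, route, and URL prefix are illustrative, not from a real project:

from flask import Blueprint, Flask

# Related views live together on a blueprint in their own module
pages = Blueprint("pages", __name__)

@pages.route("/about")
def about():
    return "About us"

# The application object just wires the blueprint in
app = Flask(__name__)
app.register_blueprint(pages, url_prefix="/site")
# GET /site/about now returns "About us"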

You may purchase a nice Integrated Development Environment (IDE) such as PyCharm or WingIDE to improve your productivity, or hire third-party services to help you test your code or manage your development schedule, but these can only do so much. Good architecture and task automation will be your best friends in most projects. Before discussing how to organize your code and which modules will save you some typing here and there, let's discuss premature optimization and overengineering, two terrible symptoms of an anxious developer/analyst/nosy manager.
Overengineering
Making software is like building a condo, in some ways. You plan ahead what you want to create before starting, so that waste is kept to a minimum. Unlike a condo, though, where it's advisable to plan the whole project before you start, you do not have to plan out your software completely: it will most likely change during development, and much of that planning would just go to waste.

This flies in the face of the folk wisdom that you can code like hell and then test all the mistakes out of the software. That idea is dead wrong. Testing merely tells you the specific ways in which your software is defective. Testing won't make your program more usable, faster, smaller, more readable, or more extensible.
Premature optimization is another kind of process error. In an effective process, you make coarse adjustments at the beginning and fine adjustments at the end. If you were a sculptor, you'd rough out the general shape before you started polishing individual features. Premature optimization wastes time because you spend time polishing sections of code that don't need to be polished. You might polish sections that are small enough and fast enough as they are, you might polish code that you later throw away, and you might fail to throw away bad code because you've already spent time polishing it.

…

When you tune code, you're implicitly signing up to reprofile each optimization every time you change your compiler brand, compiler version, library version, and so on. If you don't reprofile, an optimization that improves performance under one version of a compiler or library might well degrade performance when you change the build environment.
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Donald Knuth
You should optimize as you go? False! One theory is that if you strive to write the fastest and smallest possible code as you write each routine, your program will be fast and small. This approach creates a forest-for-the-trees situation in which programmers ignore significant global optimizations because they're too busy with micro-optimizations.

…

Developers immerse themselves in algorithm analysis and arcane debates that in the end don't contribute much value to the user. Concerns such as correctness, information hiding, and readability become secondary goals, even though performance is easier to improve later than these other concerns are. Post hoc performance work typically affects less than five percent of a program's code. Would you rather go back and do performance work on five percent of the code or readability work on 100 percent?
In short, premature optimization's primary drawback is its lack of perspective. Its victims include final code speed, performance attributes that are more important than code speed, program quality, and ultimately the software's users. If the development time saved by implementing the simplest program is devoted to optimizing the running program, the result will always be a program that runs faster than one developed with indiscriminate optimization efforts (Stevens 1981).

Optimization can fine-tune the performance of a system, but it can rarely deliver a miracle.
Although the quote is often attributed to Donald Knuth, who popularized it, it was Tony Hoare who originally said, “Premature optimization is the root of all evil.” This statement has long been the rallying cry of software engineers who avoid any thought of application performance until the very end of the software-development cycle—at which point the optimization phase is typically ignored for economic or time-to-market reasons. However, Hoare did not say, “Concern about application performance during the early stages of an application’s development is the root of all evil.” He specifically said premature optimization, which, back then, meant counting cycles and instructions in assembly language code—not the type of coding you want to do during initial program design, when the code base is rather fluid.

…

The following excerpt from a short essay by Charles Cook (www.cookcomputing.com/blog/archives/000084.html) describes the problem with reading too much into this statement:
I’ve always thought this quote has all too often led software designers into serious mistakes because it has been applied to a different problem domain to what was intended.
The full version of the quote is “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.” and I agree with this. It’s usually not worth spending a lot of time micro-optimizing code before it’s obvious where the performance bottlenecks are. But, conversely, when designing software at a system level, performance issues should always be considered from the beginning. A good software developer will do this automatically, having developed a feel for where performance issues will cause problems.

…

These chapters describe disassemblers, object code dump tools, debuggers, various HLL compiler options for displaying assembly language code, and other useful software tools.
The remainder of the book, Chapter 7 through Chapter 15, describes how compilers generate machine code for different HLL statements and data types. Armed with this knowledge, you will be able to choose the most appropriate data types, constants, variables, and control structures to produce efficient applications.
While you read, keep Dr. Hoare’s quote in mind: “Premature optimization is the root of all evil.” It is certainly possible to misapply the information in this book and produce code that is difficult to read and maintain. This would be especially disastrous during the early stages of your project’s design and implementation, when the code is fluid and subject to change. But remember: This book is not about choosing the most efficient statement sequence, regardless of the consequences; it is about understanding the cost of various HLL constructs so that, when you have a choice, you can make an educated decision concerning which sequence to use.

That’s something you should fix.
Sometimes a user will report that there’s a bug, when actually it’s the program behaving exactly as you intended it to. In this case, it’s a matter of majority rules. If a significant number of users think that the behavior is a bug, it’s a bug. If only a tiny minority (like one or two) think it’s a bug, it’s not a bug.
The most famous error in this area is what we call “premature optimization.” That is, some developers seem to like to make things go fast, but they spend time optimizing their code before they know that it’s slow! This is like a charity sending food to rich people and saying, “We just wanted to help people!” Illogical, isn’t it? They’re solving a problem that doesn’t exist.
The only parts of your program where you should be concerned about speed are the exact parts that you can show are causing a real performance problem for your users.

Most attempts at optimization — tying something down very explicitly — reduce the breadth and scope of interactions and relationships, which are the very source of emergence. In the flocking-birds example, as with a well-designed system, it's the interactions and relationships that create the interesting behavior.
The harder we tighten things down, the less room there is for a creative, emergent solution. Whether it's locking down requirements before they are well understood or prematurely optimizing code, or inventing complex navigation and workflow scenarios before letting end users play with the system, the result is the same: an overly complicated, stupid system instead of a clean, elegant system that harnesses emergence.
Keep it small. Keep it simple. Let it happen.
—Andrew Hunt, The Pragmatic Programmers
The Three Musketeers
Use a team of three for version 1.0
For the first version of your app, start with only three people.

A profiler is a valuable tool—perhaps even a necessity—in producing the most efficient code. If your Lisp implementation provides one, use it to guide optimization. If not, you are reduced to guessing where the bottlenecks are, and you might be surprised how often such guesses turn out to be wrong.
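Graham is writing about Lisp, but the workflow carries over to any language with a profiler. Here is a minimal sketch using Python's standard-library cProfile; slow_report is a hypothetical stand-in for your own code:

import cProfile
import pstats

def slow_report(n):
    # Deliberately naive: repeated string concatenation
    out = ""
    for i in range(n):
        out += str(i) + "\n"
    return out

# Profile first, then let the measurements guide optimization
cProfile.run("slow_report(50000)", "profile.out")
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(5)  # top 5 hot spots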
A corollary of the bottleneck rule is that one should not put too much effort into optimization early in a program's life. Knuth puts the point even more strongly: "Premature optimization is the root of all evil (or at least most of it) in programming." It's hard to see where the real bottlenecks will be when you've just started writing a program, so there's more chance you'll be wasting your time. Optimizations also tend to make a program harder to change, so trying to write a program and optimize it at the same time can be like trying to paint a picture with paint that dries too fast.

Violating the prime directive
Star Trek was notorious for having rules that brash starship captains routinely violated. Well, by indexing TMDB directly, you violated some advice we gave earlier. You directly placed the source data model into Elasticsearch. Shouldn’t you have done some signal modeling? If you use this data directly to create a search index, won’t you end up with relevance problems?
Well, yes, but that’s for a good reason. Search is a place ripe for premature optimization. You’re likely to reach the heat death of the universe before achieving a perfect search solution in every direction. You know there will be relevance problems, but you don’t quite know what those are until you experiment with user searches. There are few areas that emphasize “fail fast” as much as search relevance. Load your data, get something basic working, find where it’s broken, reconfigure, reindex if need be, requery, rinse, and repeat.
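As a sketch of that loop, using recent (8.x) versions of the official Python client; the tmdb index name and document shape echo the TMDB example but are assumptions here:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Load the source data model directly; signal modeling deferred
es.index(index="tmdb", id=1,
         document={"title": "Star Trek", "overview": "A brash captain..."})

# Get something basic working, then inspect where relevance breaks
results = es.search(index="tmdb", query={"match": {"title": "star trek"}})
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])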

So now I say, “When Alan Turing wrote the first programming manual for the Mark I, in 1950. …”
Mathematical things: similarly I'll get people who miss it. So then I'll say, you know, I actually said it correctly, but I know I still have to change it and make it better.
Seibel: When you publish a literate program, it's the final form of the program, typically. And you are often credited with saying, “Premature optimization is the root of all evil.” But by the time you get to the final form it's not premature—you may have optimized some parts to be very clever. But doesn't that make it hard to read?
Knuth: No. A good literate program will show its history. A good literate program will say, "Here's the obvious way to do it, and then here's why we don't follow that road."
When you put subtle stuff in your program, literate programming shines because you don't just have the code that does it but also your documentation.

…

And read version two or you'll never understand version three.”
I write a whole variety of different kinds of programs. Sometimes I'll write a program where I couldn't care less about efficiency—I just want to get the answer. I'll use brute force, something that I'm guaranteed I won't have to think—there'll be no subtlety at all so I won't be outsmarting myself. There I'm not doing any premature optimization.
Then I can change that into something else and see if I get something that agrees with my brute-force way. Then I can scale up the program and go to larger cases. Most programs stop at that stage because you're not going to execute the code a trillion times. When I'm doing an illustration for The Art of Computer Programming I may change that illustration several times and the people who translate my book might have to redo the program, but it doesn't matter that I drew the illustration by a very slow method because I've only got to generate that file once and then it goes off to the publisher and gets printed in a book.

This book breaks down the internals of various databases and data processing systems, and it's great fun to explore the bright thinking that went into their design.
Sometimes, when discussing scalable data systems, people make comments along the lines of, "You're not Google or Amazon. Stop worrying about scale and just use a relational database." There is truth in that statement: building for scale that you don't need is wasted effort and may lock you into an inflexible design. In effect, it is a form of premature optimization. However, it's also important to choose the right tool for the job, and different technologies each have their own strengths and weaknesses. As we shall see, relational databases are important but not the final word on dealing with data.
Scope of This Book
This book does not attempt to give detailed instructions on how to install or use specific software packages or APIs, since there is already plenty of documentation for those things.

…

A single integrated software product may also be able to achieve better and more predictable performance on the kinds of workloads for which it is designed, compared to a system consisting of several tools that you have composed with application code [23]. As I said in the Preface, building for scale that you don't need is wasted effort and may lock you into an inflexible design. In effect, it is a form of premature optimization. The goal of unbundling is not to compete with individual databases on performance for particular workloads; the goal is to allow you to combine several different databases in order to achieve good performance for a much wider range of workloads than is possible with a single piece of software. It's about breadth, not depth—in the same vein as the diversity of storage and processing models that we discussed in "Comparing Hadoop to Distributed Databases" on page 414.

This is where the design features that enable further scaling come into play.
While every effort is made to foresee potential scaling issues, not all of them can receive engineering attention. The additional design and coding effort that will help deal with future potential scaling issues is lower priority than writing code to fix the immediate issues of the day. Spending too much time preventing scaling problems that may or may not happen is called premature optimization and should be avoided.
5.1.1 Identify Bottlenecks
A bottleneck is a point in the system where congestion occurs. It is a point that is resource starved in a way that limits performance. Every system has a bottleneck. If a system is underperforming, the bottleneck can be fixed to permit the system to perform better. If the system is performing well, knowing the location of the bottleneck can be useful because it enables us to predict and prevent future problems.
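As a toy illustration, timing each stage of a pipeline is often enough to locate the bottleneck; the stage functions below are hypothetical placeholders:

import time

def timed(stage, fn, *args):
    # Report wall-clock time per stage; the slowest stage is the bottleneck
    start = time.perf_counter()
    result = fn(*args)
    print(f"{stage}: {time.perf_counter() - start:.3f}s")
    return result

data = timed("fetch", lambda: list(range(1_000_000)))
data = timed("transform", lambda d: [x * 2 for x in d], data)
timed("store", lambda d: sum(d), data)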

Best Isn't Always Best
You also need to be pragmatic about choosing appropriate algorithms—the fastest one is not always the best for the job. Given a small input set, a straightforward insertion sort will perform just as well as a quicksort, and will take you less time to write and debug. You also need to be careful if the algorithm you choose has a high setup cost. For small input sets, this setup may dwarf the running time and make the algorithm inappropriate.
Also be wary of premature optimization. It's always a good idea to make sure an algorithm really is a bottleneck before investing your precious time trying to improve it.
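To make that concrete, here is a toy Python sketch (input size and repetition counts are arbitrary); on a handful of elements, a hand-rolled insertion sort is competitive with the library's highly tuned sort:

import random
import timeit

def insertion_sort(items):
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:  # shift larger items right
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

data = [random.random() for _ in range(8)]  # a small input set
print(timeit.timeit(lambda: insertion_sort(data), number=100_000))
print(timeit.timeit(lambda: sorted(data), number=100_000))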
Related sections include:
Estimating, page 64
Challenges
Every developer should have a feel for how algorithms are designed and analyzed. Robert Sedgewick has written a series of accessible books on the subject ([Sed83, SF96, Sed92] and others).

There’s no way to stop the process of natural selection. A pesticide might work for a decade or two. Beyond that, natural selection is likely to render the compound ineffective. Companies in the pesticide market need to continually synthesize new compounds to combat resistance. Many, many hundreds of different synthesized pesticides exist for this reason. It’s a costly endeavor with no endpoint. Resistance put a big dent in the premature optimism that DDT would once and for all make humanity the victor in the battle against pests.
Pest resistance wrought by natural selection wasn’t the only problem with the DDT bonanza. The pesticide, when sprayed across fields and forests and inside homes, attacked all living organisms with which it came into contact. Again, this wasn’t new. Before DDT, strychnine to control rodents killed quail and songbirds, and arsenic to control tree diseases killed deer.

One of the downsides is that this navigation of controls can be quite chatty, as the client needs to follow links to find the operation it wants to perform. Ultimately, this is a trade-off. I would suggest you start with having your clients navigate these controls first, then optimize later if necessary. Remember that we have a large amount of help out of the box by using HTTP, which we discussed earlier. The evils of premature optimization have been well documented before, so I don’t need to expand upon them here. Also note that a lot of these approaches were developed to create distributed hypertext systems, and not all of them fit! Sometimes you’ll find yourself just wanting good old-fashioned RPC.
Personally, I am a fan of using links to allow consumers to navigate API endpoints. The benefits of progressive discovery of the API and reduced coupling can be significant.
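For illustration, here is a hedged sketch of a client navigating such controls with Python's requests library; the /orders resource and the payment link relation are invented for the example:

import requests

order = requests.get("https://api.example.com/orders/42").json()

# Follow the link the server advertised instead of hard-coding the
# payment URL; the coupling to URL structure disappears, at the cost
# of an extra, chattier round of navigation.
payment_link = next(link["href"]
                    for link in order["links"]
                    if link["rel"] == "payment")
payment = requests.get(payment_link).json()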

The goal of MOPED is to define a process by which we can take a rough use case for a new distributed application, and go from “Hello World” to fully working prototype in any language in under a week.
Using MOPED, you grow, more than build, a working ØMQ architecture from the ground up with minimal risk of failure. By focusing on the contracts rather than the implementations, you avoid the risk of premature optimization. By driving the design process through ultra-short test-based cycles, you can be more certain that what you have works before you add more.
We can turn this into five real steps:
Internalize the ØMQ semantics.
Draw a rough architecture.
Decide on the contracts.
Make a minimal end-to-end solution.
Solve one problem and repeat.
Step 1: Internalize the Semantics
You must learn and digest ØMQ’s “language,” that is, the socket patterns and how they work.
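For instance, the minimal end-to-end solution of step 4 might be no more than the following sketch using pyzmq (one REQ client and one REP server in a single process, purely to prove the contract works):

import threading
import zmq

def server(ctx):
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://127.0.0.1:5555")
    sock.recv()              # wait for a request
    sock.send(b"World")      # reply
    sock.close()

ctx = zmq.Context()
t = threading.Thread(target=server, args=(ctx,))
t.start()

client = ctx.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5555")
client.send(b"Hello")
print(client.recv())         # b'World'

client.close()
t.join()
ctx.term()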

The purpose of understanding and analyzing systems is to improve them, which is often tricky—changing systems can often create unintended consequences.
In this chapter, you’ll learn the secrets of Optimization, how to remove unnecessary Friction from critical processes, and how to build Systems that can handle Uncertainty and Change.
SHARE THIS CONCEPT: http://book.personalmba.com/improving-systems/
Optimization
Premature optimization is the root of all evil.
—DONALD KNUTH, COMPUTER SCIENTIST AND FORMER PROFESSOR AT STANFORD UNIVERSITY
Optimization is the process of maximizing the output of a System or minimizing a specific input the system requires to operate. Optimization typically revolves around the systems and processes behind your Key Performance Indicators, which measure the critical elements of the system as a whole.

If a "maintenance crew" is left guessing about the architecture of the system or must deduce the purpose of system components from their implementation, the structure of a system can deteriorate rapidly under the impact of local patches. Documentation is typically much better at conveying details than at helping new people understand key ideas and principles.
23.4.7 Efficiency [design.efficiency]
Donald Knuth observed that "premature optimization is the root of all evil." Some people have learned that lesson all too well and consider all concern for efficiency evil. On the contrary, efficiency must be kept in mind throughout the design and implementation effort. However, that does not mean the designer should be concerned with micro-efficiencies, but that first-order efficiency issues must be considered.
The best strategy for efficiency is to produce a clean and simple design.

We saw this in the railroad frenzy of the nineteenth century, which was followed by widespread bankruptcies. (I have some of these early unpaid railroad bonds in my collection of historical documents.) And we are still feeling the effects of the e-commerce and telecommunications busts of several years ago, which helped fuel a recession from which we are now recovering.
AI experienced a similar premature optimism in the wake of programs such as the 1957 General Problem Solver created by Allen Newell, J. C. Shaw, and Herbert Simon, which was able to find proofs for theorems that had stumped mathematicians such as Bertrand Russell, and early programs from the MIT Artificial Intelligence Laboratory, which could answer SAT questions (such as analogies and story problems) at the level of college students.163 A rash of AI companies occurred in the 1970s, but when profits did not materialize there was an AI "bust" in the 1980s, which has become known as the "AI winter."

That’s an optimization and should only be taken on when absolutely necessary, especially given the costs associated with it: efficient field access ties code that uses it to a particular type, which often complicates the implementation of generic functionality and limits composability.[432]
* * *
[431] The canonical and up-to-date version of this flowchart is maintained at https://github.com/cemerick/clojure-type-selection-flowchart along with a number of translations, including Dutch, German, Japanese, Portuguese, and Spanish so far.
[432] Recall that “premature optimization is the root of all evil.” Thank you, Professor Knuth.
Chapter 19. Introducing Clojure into Your Workplace
(or, Sneaking Clojure Past the Boss[433])
It is a sad fact that many programmers, if not the majority, use languages and tools every day that they begrudge. Either through historical accident, organizational inertia, or hard facts of the business, we often find ourselves stuck wishing we were using something, anything else to get our jobs done.

Finally, I would have used the $animate service, which I describe in Chapter 23, to display short, focused animations to ease the transition from one view to another when the URL path changes.
AVOIDING OPTIMIZATION PITFALLS
You will notice that I say that I could consider reusing the category and pagination data, not that I would definitely do so. That's because any kind of optimization should be carefully assessed to ensure it is sensible and that it avoids two main pitfalls that dog optimization efforts.
The first pitfall is premature optimization, which is where a developer sees an opportunity to optimize an operation or task before the current implementation causes any problems or breaks a contract in the nonfunctional specification. This kind of optimization tends to make code more specific in its nature than it would otherwise be, and that can kill the easy movement of functionality from one component to another that is typical of AngularJS (and is one of the most enjoyable aspects of AngularJS development).

Design Reflections
My experience in working on Graphite has reaffirmed a belief of mine that scalability has very little to do with low-level performance but instead is a product of overall design. I have run into many bottlenecks along the way but each time I look for improvements in design rather than speed-ups in performance. I have been asked many times why I wrote Graphite in Python rather than Java or C++, and my response is always that I have yet to come across a true need for the performance that another language could offer. In [Knu74], Donald Knuth famously said that premature optimization is the root of all evil. As long as we assume that our code will continue to evolve in non-trivial ways then all optimization is in some sense premature.
One of Graphite's greatest strengths and greatest weaknesses is the fact that very little of it was actually "designed" in the traditional sense. By and large Graphite evolved gradually, hurdle by hurdle, as problems arose. Many times the hurdles were foreseeable and various pre-emptive solutions seemed natural.

for each, brainstorming the ramifications.
Can You Flip the Deferred-Life Plan and Make It Work?
“Many, many people are working very hard, trying to save their money to retire so they can travel. Well, I decided to flip it around and travel when I was really young, when I had zero money. And I had experiences that, basically, even a billion dollars couldn’t have bought.”
“You Don’t Want ‘Premature Optimization’”
“I really recommend slack. ‘Productive’ is for your middle ages. When you’re young, you want to be prolific and make and do things, but you don’t want to measure them in terms of productivity. You want to measure them in terms of extreme performance, you want to measure them in extreme satisfaction.”
The Ideas You Can’t Give Away or Kill . . .
“I became a proponent of trying to give things away first.

Technically an index-time boost is distributed (multiplied) into each term’s relevancy, which is somewhat different than using a function query against a popularity field, which is added to the overall score. Although it’s possible to construct your function queries in such a way as to mimic the index-time boost, in practice the additive boost will likely accomplish your desired outcome, so too much focus on this detail is likely a premature optimization until you discover a problem with this approach.
Both the index-time document boost and the boosting of a document by a function on a popularity field are focused upon globally boosting a document’s relevancy versus all other documents. This might make sense for an e-commerce application in which certain products tend to sell better overall or for a news website where certain popular articles are trending.
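A toy sketch of the arithmetic behind that distinction, with numbers invented for illustration:

base_score = 2.0   # relevancy score before boosting
popularity = 0.5   # per-document popularity signal

# Index-time-style boost: multiplied into the score, so it scales
# high-scoring documents more than low-scoring ones.
multiplicative = base_score * (1.0 + popularity)   # 3.0

# Function-query boost on a popularity field: added to the score,
# so it shifts every document by the same absolute amount.
additive = base_score + popularity                 # 2.5

print(multiplicative, additive)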