(This is mainly aimed at those who have specific knowledge of low latency systems, to avoid people just answering with unsubstantiated opinions).

Do you feel there is a trade-off between writing "nice" object-oriented code and writing very fast, low-latency code? For instance, avoiding virtual functions in C++ and the overhead of polymorphism, or rewriting code so that it looks nasty but runs very fast?

It stands to reason: who cares if it looks ugly (so long as it's maintainable)? If you need speed, you need speed.

I would be interested to hear from people who have worked in such areas.
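To make the virtual-function concern concrete, here is a hedged sketch (all type names are hypothetical, not taken from any answer) of the classic run-time-polymorphic design next to a compile-time (CRTP) alternative that avoids the vtable lookup and lets the call be inlined:

```cpp
#include <cstdint>

// "Nice" OO design: one indirect (virtual) call per price lookup.
struct Quote {
    virtual ~Quote() = default;
    virtual std::int64_t price() const = 0;
};

struct BondQuote : Quote {
    std::int64_t px;
    explicit BondQuote(std::int64_t p) : px(p) {}
    std::int64_t price() const override { return px; }
};

// The "uglier" alternative: compile-time polymorphism via CRTP.
// The call resolves statically, so no vtable lookup and full inlining.
template <class Derived>
struct QuoteCRTP {
    std::int64_t price() const {
        return static_cast<const Derived&>(*this).price_impl();
    }
};

struct FastBondQuote : QuoteCRTP<FastBondQuote> {
    std::int64_t px;
    explicit FastBondQuote(std::int64_t p) : px(p) {}
    std::int64_t price_impl() const { return px; }
};
```

Both give the same answer; the trade is that the CRTP version gives up runtime substitutability (you can no longer hold a heterogeneous collection of `Quote*`), which is exactly the kind of trade-off the question is about.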

@user997112: The close reason is self-explanatory. It says: "We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion." That doesn't necessarily mean the voters are correct, but that was the close reason chosen by all three close voters.
–
Robert Harvey Dec 18 '12 at 22:30

Anecdotally, I'd say that the reason this question is attracting close votes is that it may be being perceived as a thinly-veiled rant (although I don't think it is).
–
Robert Harvey Dec 18 '12 at 22:35

I'll stick my neck out: I cast the third vote to close as "not constructive" because I think the questioner pretty much answers his own question. "Beautiful" code that doesn't run fast enough to do the job has failed to meet the latency requirement. "Ugly" code that runs fast enough can be made more maintainable through good documentation. How you measure beauty or ugliness is a topic for another question.
–
Blrfl Dec 18 '12 at 22:39

@Carson63000, user1598390 and whoever else is interested: If the question ends up closed, feel free to ask about the closure on our Meta site, there's little point in discussing a closure in comments, especially a closure that hasn't happened. Also, keep in mind that every closed question can be re-opened, it's not the end of the world. Except of course if the Mayans were right, in which case it was nice knowing you all!
–
Yannis Rizos♦ Dec 20 '12 at 1:42

8 Answers

Do you feel there is a trade-off between writing "nice" object
orientated code and writing very [sic] low latency code?

Yes.

That's why the phrase "premature optimization" exists: to force developers to measure their performance and optimize only the code that will actually make a difference, while sensibly designing the application architecture from the start so that it doesn't fall down under heavy load.

That way, to the maximum extent possible, you get to keep your pretty, well-architected, object-oriented code, and only optimize with ugly code those small portions that matter.
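One way to keep that discipline honest is to measure before declaring anything a hotspot. A minimal scoped-timer sketch (illustrative, not from the answer; the class name is invented) might look like:

```cpp
#include <chrono>
#include <cstdio>

// Prints the elapsed wall-clock time of a scope when it exits.
// Useful as a first, crude probe before reaching for a real profiler.
class ScopedTimer {
    const char* label_;
    std::chrono::steady_clock::time_point start_;
public:
    explicit ScopedTimer(const char* label)
        : label_(label), start_(std::chrono::steady_clock::now()) {}

    long long elapsed_us() const {
        return std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start_).count();
    }

    ~ScopedTimer() {
        std::printf("%s: %lld us\n", label_, elapsed_us());
    }
};
```

Usage: wrap a suspect region in a scope, e.g. `{ ScopedTimer t("parse"); parse(input); }`, and only reach for "ugly" optimizations in the regions the numbers condemn.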

I think putting some thought into basic work avoidance as you go is worthwhile, as long as it doesn't come at the expense of legibility. Keeping things concise and legible, and doing only the obvious things they need to do, leads to a lot of indirect long-term perf wins, like other developers knowing what the heck to make of your code so they don't duplicate effort or make bad assumptions about how it works.
–
Erik Reppen Dec 20 '12 at 3:23

On "premature optimization": that still applies even if the optimized code would be just as "nice" as the unoptimized code. The point is not to waste time aiming for speed (or whatever else) that you don't need to achieve. In fact, optimization isn't always about speed, and arguably there's such a thing as unnecessary optimization for "beauty". Your code doesn't need to be a great work of art in order to be readable and maintainable.
–
Steve314 Dec 21 '12 at 4:02

Yes. The example I give is not C++ vs. Java but Assembly vs. COBOL, since that is what I know.

Both languages are very fast, but even compiled COBOL emits many more machine instructions into the instruction stream than strictly need to be there, compared with writing those instructions yourself in Assembly.

The same idea applies directly to your question of writing "ugly-looking code" versus using inheritance/polymorphism in C++. I believe it is sometimes necessary to write ugly-looking code: if the end user needs sub-second transaction timeframes, then it's our job as programmers to give them that, no matter how it happens.

That being said, liberal use of comments greatly improves maintainability for the programmers who follow, no matter how ugly the code is.

Yes, sometimes code has to be "ugly" to make it work in the required time, but not all of the code has to be ugly. Performance should be tested and profiled first to find the bits of code that need to be "ugly", and those sections should be noted with a comment so future devs know what is purposefully ugly and what is just laziness. If someone writes lots of poorly designed code claiming performance reasons, make them prove it.

Speed is just as important as any other requirement of a program: giving the wrong corrections to a guided missile is equivalent to providing the right corrections after impact. Maintainability is always a secondary concern to working code.

Yes, a trade-off exists. By this, I mean that code that is faster and
uglier is not necessarily better: the quantitative benefits of "fast
code" need to be weighed against the maintenance complexity of the
code changes needed to achieve that speed.

The trade-off comes from business cost. Code that is more complex
requires more skilled programmers (and programmers with a more focused
skill set, such as CPU architecture and design knowledge), and it
takes more time to read, understand, and fix bugs in.
The business cost of developing and maintaining such code can be
in the range of 10x-100x that of normally-written code.

This maintenance cost is justifiable in some industries, in which
customers are willing to pay a very high premium for very fast software.

Some speed optimizations make better return-on-investment (ROI) than
others. Namely, some optimizations techniques can be applied with
lesser impact on code maintainability (preserving higher-level structure
and lower-level readability) compared to normally-written code.

Thus, a business owner should:

- Look at the costs and benefits;
- Make measurements and calculations:
  - Have the programmer measure the program speed,
  - Have the programmer estimate the development time needed for optimization,
  - Make their own estimate of the increased revenue from faster software,
  - Have software architects or QA managers gauge qualitatively the drawbacks of reduced intuitiveness and readability of the source code;
- Prioritize the low-hanging fruit of software optimization.

These trade-offs are highly specific to circumstances.

These cannot be optimally decided without the participation of managers and
product owners.

These are highly specific to platforms. For example, desktop and mobile
CPUs have different considerations. Server and client applications
also have different considerations.

Yes, it is generally true that faster code looks different from
normally-written code. Any code that is different will take more time
to read. Whether that implies ugliness is in the eye of the beholder.

The techniques that I have had some exposure to (without trying to
claim any level of expertise) are: short-vector optimization (SIMD),
fine-grained task parallelism, and memory pre-allocation and object reuse.

SIMD typically has severe impacts on low-level readability, even though
it typically doesn't require higher-level structural changes (provided
that the API is designed with bottleneck-prevention in mind).

Some algorithms can be transformed into SIMD easily (the embarrassingly-
vectorizable ones). Others require more rearrangement of the computation
in order to use SIMD. In extreme cases, such as wavefront SIMD parallelism,
entirely new algorithms (and patentable implementations) have to be
written to take advantage of it.
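As a toy illustration of the readability cost (not from the answer; it assumes an x86 target with SSE), compare a scalar loop with its hand-vectorized form, even for an embarrassingly-vectorizable addition:

```cpp
#include <immintrin.h>  // SSE intrinsics; assumes an x86 target
#include <cstddef>

// Scalar version: clear, normally-written.
void add_scalar(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

// SSE version: four floats per iteration. Note how low-level readability
// drops even for this trivial loop, while the function's interface and
// the program's higher-level structure stay unchanged.
void add_sse(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
    for (; i < n; ++i) out[i] = a[i] + b[i];  // scalar tail
}
```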

Fine-grained task parallelization requires rearranging algorithms into
data-flow graphs and repeatedly applying functional (computational)
decomposition to the algorithm until no further marginal benefit can be
gained. Decomposed stages are typically chained in continuation-passing
style, a concept borrowed from functional programming.

By functional (computational) decomposition, algorithms which could
have been normally-written in a linear and conceptually clear sequence
(lines of code that are executable in the same order they are written)
have to be broken down into fragments, and distributed into multiple
functions or classes. (See algorithm objectification, below.) This
change will greatly impede fellow programmers who are not familiar
with the decomposition design process which gave rise to such code.
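A toy sketch of such continuation-chained decomposition (all names illustrative, not from the answer) shows how a linear computation gets fragmented into stages:

```cpp
#include <functional>
#include <vector>

// A stage receives a value and the continuation to invoke next,
// instead of simply returning - the continuation-passing style the
// answer describes.
using Continuation = std::function<void(int)>;
using Stage = std::function<void(int, Continuation)>;

// Chains the stages back-to-front so that stage i calls stage i+1,
// then kicks off the pipeline with the input value.
void run_pipeline(const std::vector<Stage>& stages, int input,
                  Continuation done) {
    Continuation next = done;
    for (auto it = stages.rbegin(); it != stages.rend(); ++it) {
        Stage stage = *it;
        Continuation after = next;
        next = [stage, after](int v) { stage(v, after); };
    }
    next(input);
}
```

What would normally be two consecutive lines of code (`v = v * 2; v = v + 3;`) becomes two separately-defined stages, which is exactly the loss of linear readability the answer warns about.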

To make such code maintainable, its authors must write
elaborate documentation of the algorithm, far beyond the kind of
code commenting or UML diagramming done for normally-written code,
similar to the way researchers write their academic papers.

No, fast code need not be in contradiction with object-orientedness.

Put another way, it is possible to implement very fast software that
is still object-oriented. However, toward the lower end of that
implementation (at the nuts-and-bolts level where the majority of
computation occurs), the object design may deviate significantly from
designs obtained through object-oriented design (OOD). The lower-level
design is geared toward algorithm-objectification.

A few benefits of object-oriented programming (OOP), such as
encapsulation, polymorphism, and composition, can still be reaped from
low-level algorithm-objectification. This is the main justification for
using OOP at this level.

Most benefits of object-oriented design (OOD) are lost. Most
importantly, there is no intuitiveness in the low-level design.
A fellow programmer cannot learn how to work with the lower-level
code without first fully understanding how the algorithm had been
transformed and decomposed in the first place, and this understanding
is not obtainable from the resulting code.
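The memory pre-allocation and object-reuse technique mentioned earlier in this answer can be sketched as a minimal free-list pool (illustrative names, not from the answer):

```cpp
#include <vector>
#include <cstddef>

struct Order { int id = 0; double qty = 0.0; };  // hypothetical hot-path object

// All Order objects are allocated up front; acquire/release never touch
// the heap on the hot path, avoiding allocator latency and jitter.
class OrderPool {
    std::vector<Order> storage_;   // pre-allocated backing store
    std::vector<Order*> free_;     // free list of available slots
public:
    explicit OrderPool(std::size_t capacity) : storage_(capacity) {
        free_.reserve(capacity);
        for (auto& o : storage_) free_.push_back(&o);
    }
    Order* acquire() {             // returns nullptr when exhausted
        if (free_.empty()) return nullptr;
        Order* o = free_.back();
        free_.pop_back();
        return o;
    }
    void release(Order* o) {       // reset and return to the free list
        *o = Order{};
        free_.push_back(o);
    }
    std::size_t available() const { return free_.size(); }
};
```

Note how the design already deviates from what textbook OOD would produce: object lifetime is managed by the pool, not by ownership semantics, which is part of the lost intuitiveness described above.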

Extracts of some studies I have seen indicate that clean, easy-to-read code is often faster than more complex, hard-to-read code. In part, this is due to the way optimizers are designed. They tend to be much better at optimizing a variable into a register than at doing the same with an intermediate result of a calculation. Long sequences of assignments using a single operator leading to the final result may be optimized better than one long, complicated expression. Newer optimizers may have reduced the difference between clean and complicated code, but I doubt they have eliminated it.

Other optimizations like loop unrolling can be added in a clean fashion when required.
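For instance, a 4-way unrolled sum can stay behind the same clean signature as the obvious loop (an illustrative sketch; as the answer says, measure before and after rather than assuming a win):

```cpp
#include <cstddef>

// Optimization: 4-way unroll with independent accumulators to expose
// instruction-level parallelism. Added as an optimization - verify with
// a profiler before keeping, per the advice above.
long long sum(const int* data, std::size_t n) {
    long long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += data[i];
        s1 += data[i + 1];
        s2 += data[i + 2];
        s3 += data[i + 3];
    }
    long long s = s0 + s1 + s2 + s3;
    for (; i < n; ++i) s += data[i];  // remainder
    return s;
}
```

The caller sees an ordinary `sum()`; the unrolling is an internal detail flagged by a comment, exactly the kind of annotated optimization the next paragraph calls for.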

Any optimization added to improve performance should be accompanied by an appropriate comment. This should include a statement that it was added as an optimization, preferably with measures of performance before and after.

I have found the 80/20 rule applies to the code I have optimized. As a rule of thumb, I don't optimize anything that isn't taking at least 80% of the time. I then aim for (and usually achieve) a 10-fold performance increase in that code, which improves overall performance about 4-fold. Most optimizations I have implemented haven't made the code significantly less "beautiful". Your mileage may vary.
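The arithmetic behind that rule of thumb is Amdahl's law; a small helper (illustrative, not from the answer) makes the claim concrete:

```cpp
// Amdahl's law: overall speedup when a given fraction of the runtime
// is accelerated by local_speedup while the rest is untouched.
double overall_speedup(double fraction, double local_speedup) {
    // New runtime = untouched part + optimized part (as fractions of 1).
    return 1.0 / ((1.0 - fraction) + fraction / local_speedup);
}
```

With `fraction = 0.8` and `local_speedup = 10`, the new runtime is 0.2 + 0.08 = 0.28 of the original, i.e. a speedup of about 3.6x, which matches the "about 4-fold" figure above.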

If by ugly you mean difficult to read or understand at the level where other developers will be re-using it or needing to understand it, then I would say that elegant, easy-to-read code will almost always ultimately net you a performance gain in the long run, in an app that you have to maintain.

Otherwise, sometimes there's enough of a performance win to make it worth putting ugly code in a beautiful box with a killer interface on it, but in my experience this is a pretty rare dilemma.

Think about basic work avoidance as you go. Save the arcane tricks for when a performance problem actually presents itself. And if you do have to write something that somebody could only understand through familiarity with the specific optimization, do what you can to at least make the ugly part easy to understand from a re-use point of view. Code that performs miserably rarely does so because the developers were thinking too hard about what the next guy was going to inherit. But if frequent changes are the only constant of an app (most web apps, in my experience), rigid, inflexible code that's difficult to modify is practically begging for panicked messes to start popping up all over your code base. Clean and lean is better for performance in the long run.

I'd like to suggest two changes: (1) There are places where speed is needed. In those places, I think it is more worthwhile to make the interface easy to understand, than to make the implementation easy to understand, because the latter may be a lot more difficult. (2) "Code that performs miserably rarely ever does so ...", which I would like to rephrase as "A strong emphasis on code elegance and simplicity is rarely the cause of miserable performance. The former is even more important if frequent changes are anticipated, ..."
–
rwong Dec 20 '12 at 5:35

Implementation was a poor choice of words in an OOPish conversation. I meant it in terms of ease of re-use and edited. #2, I just added a sentence to establish that 2 is essentially the point I was making.
–
Erik Reppen Dec 20 '12 at 5:48

Complex and ugly aren't the same thing. Code that has many special cases, that's optimized to eke out every last drop of performance, and that looks at first like a tangle of connections and dependencies may in fact be very carefully engineered and quite beautiful once you understand it. Indeed, if performance (whether measured in terms of latency or something else) is important enough to justify very complex code, then the code must be well designed. If it's not, then you can't be sure that all that complexity is really better than a simpler solution.

Ugly code, to me, is code that's sloppy, poorly considered, and/or unnecessarily complicated. I don't think you'd want any of those features in code that has to perform.

Do performance diagnosis, and fix the problems it tells you, not the ones you guess. Guaranteed, they will be different from what you expect.

You can do these fixes in a way that is still clear and maintainable, but you will have to add commentary so people who look at the code will know why you did it that way. If you don't, they will undo it.