I am the only developer working on a project. The functionality I have coded works as expected (produces the desired result), but since I work alone, I don't know whether I could do anything better. How do I determine if my code is optimized? (I want to increase the performance of some features.)

One thing that comes to mind is a code review, but no one here is free to review my code. I just keep trying alternate logic, but at times things get stagnant and really dirty. Is there any other possible way?

6 Answers

The first question to ask yourself is: do I really need to optimize for this particular thing, or is it actually good enough?

As far as "what for" goes, here are a few possible answers and what you should do for each:

maintainability - the best test here is real-world production usage, but relying on that alone creates a vicious circle. However, you can have someone else review your code; you can check whether others can work with it easily; you can run through a checklist of what you consider code smells or, more positively, a checklist of what you consider must-haves for well-written code. Document everything, and have someone else read the documentation and tell you which parts are unclear.

correctness and reliability - write automated tests that cover everything that can be meaningfully tested, and design your code to be as testable as possible. Put on your destruction monkey hat and see if you can use your own code in a way that breaks it; then go fix it. Write a detailed test plan that covers all the functional and technical requirements, including 'negative' cases (i.e., describe what you expect the code to do when you use it in unintended ways); let someone else execute the test plan and consider all negative responses bugs in either the design, the code, or the test plan.
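
The testing advice above can be sketched with Python's built-in unittest module. Here, parse_age is a hypothetical function under test (not from the original post); note the explicit 'negative' cases describing what should happen when the code is used in unintended ways:

```python
import unittest

def parse_age(text):
    """Hypothetical function under test: parse a non-negative age from a string."""
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

class ParseAgeTests(unittest.TestCase):
    # Positive case: the documented, intended usage.
    def test_valid_age(self):
        self.assertEqual(parse_age("42"), 42)

    # Negative cases: we expect a clean error, not silent bad data.
    def test_negative_age_rejected(self):
        with self.assertRaises(ValueError):
            parse_age("-3")

    def test_non_numeric_rejected(self):
        with self.assertRaises(ValueError):
            parse_age("forty-two")

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Every behaviour promised in your documentation should have a test like one of these; every bug you fix should gain one too, so it can never silently return.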

run-time performance - there is no replacement for profiling here. Before attempting to optimize anything for performance, first make sure it is correct; then make sure the part you are going to optimize is indeed a problem area. Then change one thing, and measure the performance before and after.

Don't forget to test against realistic data, including huge datasets - many algorithms work fine (even faster) on small datasets, but come to a grinding halt when the dataset exceeds a certain size. Optimize algorithms before code - a better algorithm can make an exponential difference, while optimizing code practically never wins you more than a linear gain.

Also, go for the easy fixes first: compiler optimization settings (these are basically free), simple refactorings (pass-by-const-reference instead of pass-by-value and such), caching (can be done at several levels; you'll have to judge where it makes the most sense), and whatever can be implemented easily in the code you already have.

Note that there are several orthogonal measurements of "performance", such as CPU load, latency, memory footprint, etc.; quite often, you have to balance one of those against another, e.g. reducing CPU usage at the cost of an increased memory footprint. See Mark Booth's comment for more details. And don't forget, sometimes (though not often) "throw more hardware at it" is the correct answer.
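
The "change one thing, measure before and after" discipline can be sketched in Python. The naive Fibonacci function below is just a hypothetical stand-in for any expensive pure computation; the single change measured is adding a cache (one of the easy fixes mentioned above):

```python
import functools
import time

def fib_slow(n):
    # Naive recursion: exponential time, recomputes the same subproblems.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@functools.lru_cache(maxsize=None)
def fib_cached(n):
    # Identical algorithm, but memoized: each subproblem is computed once.
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

def measure(fn, n):
    start = time.perf_counter()
    result = fn(n)
    return result, time.perf_counter() - start

# Measure before and after the change, on the same input.
slow_result, slow_time = measure(fib_slow, 28)
fast_result, fast_time = measure(fib_cached, 28)

# First make sure the optimized version is still correct...
assert slow_result == fast_result
# ...then compare the numbers instead of guessing.
print(f"naive: {slow_time:.4f}s, cached: {fast_time:.6f}s")
```

The same before/after harness works for any change; if the numbers don't improve on realistic inputs, revert the change rather than keeping "optimizations" on faith.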

profitability - analyze which features make you the most money, and what parts of the application are possible stumbling blocks for generating more revenue. For web applications, tracking usage patterns is probably the most promising way to do this. Then estimate how much you would have to spend on each part to fix it, and by how much it would increase your revenue. From there, it should be clear which issues you should take up first.

Great answer, it might be worth mentioning that even run-time performance can be split into several separate performance metrics such as memory, latency and throughput. Optimising for a small memory footprint could be a requirement for scalability, while optimising for throughput could affect latency and vice versa. If writing an enterprise app, being able to handle 1000 transactions a second with a one-second response time might be best, whereas a heart monitor might need to handle only 100 events per second, but need to respond within a tenth of a second - a one-second response could be fatal.
– Mark Booth, May 18 '12 at 13:01

@MarkBooth: Excellent point. I tried to keep the answer concise, which is why I didn't go into more detail. I added a bit referring to your comment though.
– tdammers, May 18 '12 at 14:51

Thanks, and if I could I'd +1 again for sometimes... "throw more hardware at it" is the correct answer. *8')
– Mark Booth, May 18 '12 at 15:07

Profile your app and see in which functions your code spends the most time, then go about optimising those specific functions. You could spend days optimising one function only to discover that it's called just a couple of times, whereas some little function that gets called hundreds of times is the real bottleneck; those are the ones to look at.
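
For example, with Python's built-in cProfile and pstats modules (tiny_helper, hot_loop, and rarely_called are hypothetical names standing in for your own functions), sorting by cumulative time surfaces exactly the functions described above, the cheap ones called very many times:

```python
import cProfile
import io
import pstats

def tiny_helper(x):
    # Cheap on its own, but called hundreds of thousands of times.
    return x * x

def hot_loop():
    return sum(tiny_helper(i) for i in range(200_000))

def rarely_called():
    # Looks similar, but contributes almost nothing to total runtime.
    return sum(i for i in range(1000))

profiler = cProfile.Profile()
profiler.enable()
hot_loop()
rarely_called()
profiler.disable()

# Sort by cumulative time so the dominant call chains appear first.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
print(out.getvalue())
```

In the printed table, tiny_helper shows a huge call count while rarely_called barely registers; that call count, not gut feeling, tells you where optimisation effort pays off.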

Replacing objects with low-level data types sometimes helps; also, make sure that in conditionals the most likely (and cheapest) conditions are evaluated first.
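
One way to read the advice about conditionals is that it relies on short-circuit evaluation: put the cheap checks that usually decide the outcome first, so the expensive check rarely runs. A small Python sketch (both functions are hypothetical illustrations, not from the answer):

```python
def is_expensive_match(record):
    # Stand-in for a costly check (regex scan, database lookup, etc.).
    return sum(record) % 7 == 0

def should_process(record, enabled=True):
    # Cheap, usually-decisive conditions first: `and` short-circuits,
    # so the expensive check only runs when the cheap ones pass.
    return enabled and len(record) > 0 and is_expensive_match(record)

print(should_process([7, 7]))             # → True: cheap checks pass, expensive check matches
print(should_process([], enabled=True))   # → False: short-circuits before the expensive check
```

The same reordering applies in most languages with short-circuiting && / and; just make sure the conditions are truly independent before reordering them.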

Also try playing with build settings, like compiler optimisation levels, stripping debug symbols and so on.

If the code actually conforms to the implicit and explicit expectations then you might not need to optimize it at all. Your time might be better spent elsewhere adding more functionality.

The way you phrase your question makes me think that you feel there are some implicit expectations that you have not fully achieved with your code though. Maybe you should check with your client first to see what they actually expect.

Is it fast enough? I wouldn't spend too much time bringing page load time from 1 second down to 0.1 seconds for a site that will see 50 visitors a day.

Open your solution and try to look at your code as if you've never seen it before. Does it make sense? Can you read it and understand what it does and why it does it that way? Maintainability is key (unless it's a one-off piece that you will never run again).

If you want a somewhat better quantified measure, Visual Studio (if you're using .NET) has a code analysis tool that will give you a rough idea of your code's maintainability/complexity.

There hardly exists any code that can't be made faster.
As this post shows, no matter what you do to speed up the code, there is almost always something more you could do.
That's because changes that initially would not have been much help become a lot more significant after other sources of slowness have been removed.

Where did you get the idea that code reviewers could tell you how to make code faster?
I often get asked how to make code fast, but that's like asking a doctor to diagnose without examining the patient. Even eyeballing the code is of little use.

The fundamental skill of performance tuning is diagnosis.
Anybody who suggests otherwise is giving you a WAG.
Here's how I do it.