
I disagree on the software patterns; imho they do not limit your ability to think outside the box (you are never forced to use a specific pattern), but help you keep your mind free for the important things. For example, MVC makes it easy to collect your research data without having to think about its presentation, or to present it without having to think about how it's collected.
–
keppla Jul 18 '11 at 15:18

@keppla: I agree, in a non-research environment.
–
Robert Harvey Jul 18 '11 at 15:24

What makes the difference for you? I provided an example of why I think it's no problem; could you provide one?
–
keppla Jul 19 '11 at 6:16

@keppla: Software patterns are already researched. Certainly, if you're working on a problem that's already been solved, by all means use software patterns. If you are trying to solve a research problem using a large number of lightweight objects, then perhaps the Flyweight pattern might be of some use, and yes MVC will help you display your data, but I don't consider that software research proper. You can display data with Excel. Sure, you still need pencil and paper; I'm not saying any of those tools are not valuable, they're just not core research.
–
Robert Harvey Jul 19 '11 at 14:34

He did not state that he would research software, as I understand it, but that he would work in a research environment (which, for example, would include writing software for a biolab).
–
keppla Jul 19 '11 at 14:41

Keep a wiki, and spend lots of effort to extract "wisdom" from your work.

Use version control. However, keep good algorithm candidates in the current system, even if they are not actively used.

It allows you to tinker with an older algorithm on the spur of the moment.

Stale performance data can be misleading.

For example, the old data may be based on a less accurate metric.

To get fresh performance data, re-run the algorithm(s).
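As a minimal sketch of this idea (all names here are illustrative, assuming a Python codebase): keep every algorithm candidate, including retired ones, registered in the current system so fresh timings can be regenerated on demand instead of trusting stale numbers.

```python
import time

# Illustrative registry: every candidate algorithm, even ones no longer
# in active use, stays callable so benchmarks can be re-run at any time.
CANDIDATES = {}

def candidate(fn):
    """Register an algorithm version under its function name."""
    CANDIDATES[fn.__name__] = fn
    return fn

@candidate
def sum_naive(xs):
    # An older, slower candidate kept around for comparison.
    total = 0
    for x in xs:
        total += x
    return total

@candidate
def sum_builtin(xs):
    # The currently preferred candidate.
    return sum(xs)

def rerun_benchmarks(data):
    """Regenerate fresh performance data for every registered candidate."""
    results = {}
    for name, fn in CANDIDATES.items():
        start = time.perf_counter()
        out = fn(data)
        results[name] = (out, time.perf_counter() - start)
    return results

fresh = rerun_benchmarks(list(range(10_000)))
for name, (out, secs) in fresh.items():
    print(f"{name}: result={out}, {secs:.6f}s")
```

Because the old candidates remain runnable, a comparison is always against data produced today, on today's metric and today's inputs.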

Prefer dynamic typing and flexibility.

Use the right language.

If almost all successful researchers in the field use one particular language, then use it. Don't fight the wisdom of the crowd.

Instead, find ways to integrate smaller components into that language, whether those components are developed in a language suited to computation such as C/C++, or come from existing open source code.
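If the crowd's language is Python, for example, one common integration route is `ctypes`, which loads a compiled C/C++ component directly. A minimal sketch, using the system C math library as a stand-in for your own compiled component:

```python
import ctypes
import ctypes.util

# Locate and load the C math library; in practice you would load your
# own compiled shared object (e.g. "./libmysolver.so") the same way.
path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(path)

# Declare the C signature so ctypes converts arguments correctly:
# double cos(double x);
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # prints 1.0
```

The same pattern applies to any exported C function: load the library, declare argument and return types, then call it as if it were Python.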

Ask fellow researchers for their source code.

Many researchers are actually quite friendly to such requests, given proper credit and data sharing.

This will save a lot of trouble, because their published papers will only cover the high-level picture, yet the devil is in the details.

Always push yourself, but don't timebox.

Timeboxes don't work because of the unpredictability of research work.

An example of how to use backlog in research: Suppose in the beginning there are items A, B, C, ..., X, Y, Z.

A

B

C

...

Over time, you work on a number of items, and you develop a sense of how promising each item is, not just the items you have worked on but also those you haven't. The updated backlog becomes:

A (promising: 90, progress: 70% done)

B (promising: 70, progress: 60% done)

Z (promising: 65, not started)

...

C (seems it won't work, don't bother)

Notice how item C sank to the bottom because of research insights gained from working on A and B. Also notice how Z floats to the top. Learning about what other researchers are doing will also help float items to the top.
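The reprioritization above can be sketched as a simple sorted list. The item names and the scores for A, B, and Z come from the example; the low score for C is an assumption standing in for "don't bother":

```python
# Each backlog item carries a "promising" score and a progress note,
# both updated as research insights come in.
backlog = [
    {"item": "A", "promising": 90, "progress": "70% done"},
    {"item": "B", "promising": 70, "progress": "60% done"},
    {"item": "Z", "promising": 65, "progress": "not started"},
    {"item": "C", "promising": 5,  "progress": "seems it won't work"},  # assumed score
]

def reprioritize(items):
    """Float the most promising items to the top, sink the rest."""
    return sorted(items, key=lambda it: it["promising"], reverse=True)

for it in reprioritize(backlog):
    print(it["item"], it["promising"], it["progress"])
```

Re-sorting after every score update is what lets Z float past C without any manual bookkeeping.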