creating indicators

I notice that in the default system, Indicators are created as Indicators["BBL"].CreateIndicator(new BollingerBandLower(14, 1.5));

What I would like to know is the following:

1. Does "Indicators" above refer to the IndicatorManager class? Why do I need to feed it a "BBL" argument?
2. Can I create an indicator off an indicator? An example could be a moving average of a moving average.

There are a few data questions that I would like to pose to you:

1. Before the system runs, do you compute the indicators for all stocks in the portfolio and store them in memory?
2. Are the indicator values stored for future reuse?

1. Yes, Indicators is an instance of the IndicatorManager class. It manages all of the indicators in your system. "BBL" is just the identifier chosen for this specific indicator. You need to give it this name so you can refer to the indicator later.

When you call CreateIndicator, the indicator manager actually creates a copy of that indicator for each symbol you are running the system with. So Indicators["BBL"] refers to the collection of "BBL" indicators, where there is one indicator for each symbol. To get the BBL indicator for MSFT, you would use Indicators["BBL"]["MSFT"].
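In code, the create-then-access pattern looks roughly like this (the `CurrentValue` property name below is an assumption for illustration; the actual member on the indicator interface may differ):

```csharp
// Register the indicator once; the manager creates one copy per symbol.
Indicators["BBL"].CreateIndicator(new BollingerBandLower(14, 1.5));

// Later, read the latest value of the MSFT copy
// (the CurrentValue property name is assumed here for illustration):
double bbl = Indicators["BBL"]["MSFT"].CurrentValue;
```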

2. Yes, you can create an indicator off of another indicator. We call this using one indicator as the input for another. If you wanted a 5-period simple moving average of the BBL indicator, you would do it like this:
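A sketch of what that could look like, assuming a SimpleMovingAverage indicator class and a CreateIndicator overload that accepts another indicator as its input series (both of those names are assumptions; check the actual API):

```csharp
// Create the BBL indicator as before.
Indicators["BBL"].CreateIndicator(new BollingerBandLower(14, 1.5));

// Create a 5-period SMA that consumes BBL's output instead of price data
// (the two-argument overload shown here is assumed for illustration).
Indicators["SMABBL"].CreateIndicator(new SimpleMovingAverage(5), Indicators["BBL"]);
```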

The main reason for this is to enforce the fact that an indicator should not be able to cheat and look into the future. The indicator values are computed as the system is running. When the system is running, here is what happens for each bar:

1. The latest bar for each symbol is added to the user-accessible bar collections.
2. Positions are opened or closed based on orders that were submitted for the previous bars.
3. Indicators are updated with the latest bar.
4. Your system code is run (the NewSymbolBar or NewBar function).
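The per-bar sequence can be sketched as a simple loop (all type and member names here are illustrative, not the actual RightEdge internals):

```csharp
foreach (Bar bar in dataFeed)
{
    bars.Add(bar);                  // 1. expose the latest bar to user code
    broker.FillPendingOrders(bar);  // 2. open/close positions for earlier orders
    indicators.Update(bar);         // 3. update indicators with the new bar
    system.NewBar(bar);             // 4. run the user's trading logic
}
```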

I hope that helps you, feel free to ask any other questions you might have.

Thanks,
Daniel

EDIT: I added another step to what happens when the system is running, since it relates to a question you asked in another forum.

Also, I should mention that another reason we do not compute the indicator values all at once before the system is run is because this will not work for running a live, realtime system. We want to have our backtesting be as close as possible to running a system in realtime mode. So in both cases, for each bar, we update our indicators and then run our logic.

Your approach seems quite robust. Let me try out some of the Indicator functions and get back to you. Also, since you are designing an IDE, I'm sure you are considering intellisense - that'd be very useful.

Absolutely. It's killing us personally not having that and we're not going to pass that pain along. I'm sorry that the beta testers must endure this, but we do know what you're going through.

Although this is quite a large feature to implement, we plan to have it in 1.0.

Henrik (6/29/2006): Your approach seems quite robust. Let me try out some of the Indicator functions and get back to you. Also, since you are designing an IDE, I'm sure you are considering intellisense - that'd be very useful.

OK - I think it all makes sense, but there's one other point I'd like to raise with you. I'm predominantly interested in intraday bar and tick-by-tick simulations. Thus, one of my main concerns with any product is how much data I can load into RAM without the program crashing, and whether the program has any built-in strategies for dealing with huge amounts of data. For example, if I have 1000 instruments with over 10 years of tick-by-tick data each, that is definitely not going to fit into my RAM!

Thus, what I'd like to ask you is the following:

1. Are there any strategies in place for loading huge amounts of data? Say, 1-minute bars for 2000 instruments over a period of 10 years? Clearly, I can't imagine a moderate machine (2 GB RAM, 3 GHz) loading so much data into memory.
2. Have you tested your product in such a scenario? I don't immediately have access to the data, so I can't run the tests right now.

This is probably a good 'stress-test' for your product, but one that you should definitely consider.

Right now, it would attempt to load all that data into memory. Assuming your swap file is big enough, the OS should use virtual memory to store the data that can't fit into your RAM. This will slow down your system. Depending on how you access the data, the slowdown may not be too bad. If you are only accessing fairly recent data while your system is running, then there may not be too much page swapping (where the OS has to write something currently in memory to the swap file, and then read something from the swap file into memory).

Right now I am sure that running a system with all that data would be very slow. We have not done a stress test with that amount of data.

If you want to run a system on that amount of data without loading it all into memory, then your system must only look at part of the data when making trading decisions. For example, say you have 10 years of 1-minute bar history. If you are making a trading decision about what to do today, will you need access to all 10 years of data? If so, then it is all going to have to be loaded into memory anyway. You might want to load some of it, then unload that and load another part, and so on. But this is pretty much what your OS does with virtual memory and the swap file, and doing it yourself won't change your performance drastically.

Anyway, I'm sure we can figure out a way to help you run systems with this kind of data. Can you give us more detail about what data the system might need to access each bar or tick?

It's just that we have around 30 gigs of intraday data sitting in the office, and if we were to come up with a viable strategy, we'd have to present potential investors with a simulation over the entire period - so there really isn't any easy way out of this, I'm afraid.

Well, we would like to provide support for running systems against such large datasets. However, the system would need to be designed so that it did not need to access the data all at once. For example, it seems like the further back you go, the less detail you would probably need about your data. So if you were running a system for 10 years of 1 minute data, maybe we could discard bar data older than 1 month, 6 months, or whatever. Your system might want to keep track of daily averages for the older data. But you would not have to keep the full data set in memory.
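One way to sketch that idea is a rolling history that keeps recent bars in full detail and folds evicted bars into a summary statistic. The class below is a hypothetical illustration of the approach, not part of RightEdge:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical rolling history: keep only the most recent 'capacity' bars
// in full detail; older bars are folded into a running average.
public class RollingHistory
{
    private readonly int capacity;
    private readonly Queue<double> recent = new Queue<double>();
    private double discardedSum;
    private long discardedCount;

    public RollingHistory(int capacity) { this.capacity = capacity; }

    public void Add(double close)
    {
        recent.Enqueue(close);
        if (recent.Count > capacity)
        {
            // Evict the oldest bar, keeping its contribution to the summary.
            discardedSum += recent.Dequeue();
            discardedCount++;
        }
    }

    // Number of bars still held in full detail.
    public int DetailedCount { get { return recent.Count; } }

    // Average of the bars that have been discarded from detailed storage.
    public double OlderAverage
    {
        get { return discardedCount == 0 ? 0.0 : discardedSum / discardedCount; }
    }
}
```

With a capacity of 5, feeding in closes 1 through 10 leaves bars 6-10 in full detail and an older-bar average of 3.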

That's how I see a system working reasonably well for large data sets. If this sounds like it would help you, we can work together on how RightEdge can help support this.

What you're saying makes sense. I think that might work. One system that you can play with is a simple crossover system - like the one you have up as a sample trading system - tested from, say, 1995-2005 over 1-minute bars across, let's say, 1000 stocks. Most trend-following systems are some version or derivative of the crossover system, so it can be a good proxy.