I just found out that if I use the plot function with a very large offset, the study does its usual thing and re-runs itself. Before anyone says "well, just set the 'max number of bars study will reference' to a sufficiently large number and it will be fine" - I can't. The simple reason is that the offset can vary, and can be thousands of bars back for just one instance. So it would be ridiculous to set it to some huge number when most of the time it's far too much. Obviously, the plot function was never designed to be used this way, and I think the only way around this is to use trend lines (provided I store the necessary data for the end points in advance, otherwise I will suffer the exact same problem - or perhaps I will suffer the same issue regardless; I'm not sure).

The trouble is I like to use the plot function, since it allows the user to change the characteristics of the line very easily (e.g., color, width, make it invisible, etc.). Trend lines can slow down the study significantly and use too many resources if too many are drawn. My wish is for MC to provide an option to turn off the "max number of bars study will reference" feature for those who understand the ramifications and take the necessary precautions to avoid errors. This is actually no different from what happens now if one uses arrays and tries to reference an element outside the array bounds; the study promptly aborts with a message. Such an option would let me handle data values the same way, with the same precautions. So, am I the only one who needs this option?
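For what it's worth, the behavior I'm wishing for - abort with a clear error instead of silently re-running the study - is easy to sketch outside EasyLanguage. Here's a minimal Python sketch of the idea; the class name and bar-indexing convention are my own illustration, not anything MC actually provides:

```python
from collections import deque

class BoundedSeries:
    """Keeps only the last `max_bars_back` values; referencing further
    back raises an error instead of silently forcing a re-run."""

    def __init__(self, max_bars_back):
        self._buf = deque(maxlen=max_bars_back)

    def push(self, value):
        self._buf.append(value)

    def __getitem__(self, bars_back):
        # bars_back = 0 is the current bar, 1 is one bar ago, etc.
        if bars_back >= len(self._buf):
            raise IndexError(
                f"lookback of {bars_back} exceeds stored history "
                f"({len(self._buf)} bars)")
        return self._buf[-1 - bars_back]

close = BoundedSeries(max_bars_back=3)
for price in [100.0, 101.0, 99.5, 102.0]:
    close.push(price)

print(close[0])  # 102.0 - current bar
print(close[1])  # 99.5  - one bar back
```

A reference like `close[3]` here raises a descriptive IndexError, much the way MC already aborts on an out-of-bounds array element, which is exactly the contract I'd be happy to code against.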

I have had some studies running with MaxBarsBack at 8000. It was my old binary search logic doing it (now fixed). The main point is it ran okay. I can't tell the difference now that it is running lower. Not sure if that helps at all.

Now that I have it fixed, it stays at 1000 or 500. I haven't checked, but I don't think it goes above that now. It's just that I can't really notice any difference in execution speed manually. Of course, other factors have changed too, such as a big upgrade in MC from 2.1.999.999 to 6.1 beta.

I am going to put in a function soon to notify me if currentbar = 1 ever gets executed again after the start of the trading day (the hidden recalc, I call it). When I do that, I will have a record of all MaxBarsBack settings on all my studies.

Part of the issue is that I have a lot of advanced programming experience, so "features" like "max number of bars study will reference" are more of a restriction to me, given the type of studies I develop. I've become accustomed to it for now, but I do sense that it's preventing a lot of other users from coming on board; they are instead heading off to other trading platforms that offer a more programmer-friendly approach to the handling of data.

It's a memory optimization, and something of an over-simplification to make things easier (so you don't have to specify it for every data series and every variable you want to track individually - something most users would commonly not get right the first few dozen times). Many other non-EasyLanguage platforms don't use this concept at all, but interestingly, several are in the process of moving in this direction because of the excessive memory usage of retaining every value of every series on large charts.

I do understand what you're saying. Doubtless, the best of all possible worlds is when a platform gives you all of the options: limit it the easy way (as now), limit it the hard way (letting the programmer specify each series' limit), or don't limit it at all (meaning you can always look all the way back, but at the cost of gigabytes of memory when looking at, for instance, a 1-tick chart of ES going back a few years - something a 32-bit Windows app like MC cannot do).
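A rough back-of-the-envelope comparison shows the scale of the trade-off. The numbers below (bar count, series count, bytes per value) are purely illustrative assumptions on my part, not MC's actual internals:

```python
# Rough arithmetic behind the memory trade-off: retaining every value of
# every series vs. capping lookback at a MaxBarsBack-style limit.
BYTES_PER_VALUE = 8            # one double-precision float
bars = 5_000_000               # e.g. a multi-year fine-grained chart
series_count = 20              # plots, variables, data series in a study

full = bars * series_count * BYTES_PER_VALUE           # keep everything
capped = 1000 * series_count * BYTES_PER_VALUE         # keep last 1000 bars

print(f"full retention : {full / 2**20:7.1f} MiB")     # ~762.9 MiB
print(f"capped at 1000 : {capped / 2**20:7.1f} MiB")   # ~  0.2 MiB
```

Under these assumptions, one uncapped study costs hundreds of MiB while the capped version is negligible, which is why a 32-bit process with a handful of heavy studies runs out of address space so quickly.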

I didn't like the backward subscripting reference at first, but as I learn it better I too have gotten used to it. Today, for the first time, I actually used a "MyRSI(NumericSeries)" in a function, and that was pretty cool. I had seen it before but did not know what it meant. Then I wanted to pass the MyRSI field into the function to reduce repeated coding, and it hit me that maybe this was what I had to do. Sure enough, it worked.
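For anyone curious what the series-into-a-function pattern looks like outside EL, here's a rough Python analogue. Note this is a simplified RSI (straight sums over the window rather than Wilder's smoothing), and `my_rsi` is just an illustrative name, not the actual function from the post above:

```python
def my_rsi(series, length=14):
    """Simplified RSI over a plain sequence of closes: a rough analogue
    of an EasyLanguage function taking a NumericSeries input."""
    if len(series) < length + 1:
        raise ValueError("need at least length + 1 closes")
    window = series[-(length + 1):]     # last `length` bar-to-bar changes
    gains = losses = 0.0
    for prev, cur in zip(window, window[1:]):
        change = cur - prev
        gains += max(change, 0.0)
        losses += max(-change, 0.0)
    if losses == 0.0:
        return 100.0                    # window was all gains
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

print(my_rsi(list(range(15))))          # steadily rising closes -> 100.0
print(my_rsi([1, 2] * 8, length=14))    # alternating +1/-1 -> 50.0
```

The EL equivalent of passing `series` here is declaring the function's input as NumericSeries, so the function can index back into the caller's series rather than receiving a single value.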

I have been programming since age 26. I learned assembler in school and was the excited-about-programming student who came up with the idea of defining instructions in tables and using code to move them into the executable section of the program, so the instruction printout was of little value after a while. These days I have no excitement about advanced programming. I just want it to do what I want it to do as fast as possible and be done with it. The easier it is for me, the better. I get somewhat upset if I start some programming I figure is going to take 2 hours and it takes me 2 days. At age 26 I would not have minded so much; now I hate it when that happens. My excitement now is trading, so EL is okay for me.

Welcome to the 21st century of 64-bit computing and cheap multi-GB memory PCs. Very soon we will have PCs with 100+ GB of RAM as standard, running super fast. Storing 10 years of 1-minute data in memory will not be an issue, that's for sure. In fact, with just the 64-bit systems available today, one should be able to store centuries of 1-minute data entirely in memory.
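Out of curiosity, the arithmetic on that claim checks out. Assuming a round 64 bytes per 1-minute bar (OHLC, volume, timestamp - my assumption, not any platform's actual layout) and a 24-hour market:

```python
# Back-of-the-envelope check on "centuries of 1-minute data in memory".
MINUTES_PER_YEAR = 365 * 24 * 60   # ~525,600 bars/year for a 24h market
BYTES_PER_BAR = 64                 # assumed per-bar size, not MC's actual

def gib_for_years(years):
    """Approximate GiB needed to hold `years` of 1-minute bars."""
    return years * MINUTES_PER_YEAR * BYTES_PER_BAR / 2**30

print(f"10 years : {gib_for_years(10):6.2f} GiB")    # ~0.31 GiB
print(f"100 years: {gib_for_years(100):6.2f} GiB")   # ~3.13 GiB
```

So even a full century of 1-minute bars is on the order of 3 GiB - trivial for a modern 64-bit machine, provided the application itself is 64-bit and can address it.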

This sort of reminds me of the days before virtual memory mapping, when we had to write code using overlay segments that had to be squeezed into a tiny amount of RAM. Ever since virtual memory became popular, the tyranny of physical memory was removed and programming became more productive. The same can be said about the "max number of bars study will reference" feature. One day soon it will be redundant, as 64-bit computing not only becomes more popular but ends up being the norm. In fact, I can buy a very powerful 12 GB desktop for less than $2,000 right now! Eventually, for that price one could buy a 144 GB server. Things are moving along in computing, and applications should move along with them to take advantage of these huge capabilities - no doubt most will. Those that don't, but should because they are resource-intensive to begin with, will die.

What you describe, Janus, sounds good. Looking forward to it. I think everyone knows it will get better (including MC too).

I suppose we can use the usual "head in the sand" approach and just wait for PCs to get much faster, and not worry about the "feature" in question. That should be fine for most compute-intensive traders. The more advanced ones that need to employ far more complex trading methods, such as neural networks, will still be best off writing their code in an external language and using any trading platform that provides the necessary drawing and order-management tools they require. This is where MC may hit a hurdle or two, unless they remove the plotting limitation I've encountered or provide a way to group trend lines together and allow the user and study to change the characteristics of all of them in one step.

I started using 64-bit hardware around 20 or so years ago (DEC and HP), but they were expensive back then. Now it's becoming commonplace. Let's hope MC fully utilizes the 64-bit architecture in the not-too-distant future.

I should know better... I should add that retail 64-bit OSes have been around for ~10 yrs.

Research workstations (e.g., RISC stations) and expensive business computers (e.g., IBM mainframes and minis and their competitors) have had 64-bit hardware available since the 1960s and 1970s. The first "regular PC" (i.e., one that runs Windows and that regular folk who aren't hobbyists/enthusiasts could commonly have at home) with 64-bit capability became possible when Intel introduced the Itanium in 2001, and Windows XP 64-Bit Edition (for Itanium) was introduced the same year; the more familiar Windows XP Professional x64 Edition followed in 2005.

It is actually still quite early in the 64-bit adoption cycle for trading applications. Only one major retail platform has a 64-bit edition, and it's plagued with difficulties, including limited data-feed and broker support, because very few of them are ready for this yet. I've had a significant look at it and keep it installed for testing, and can say it's pretty early yet: tools and interoperability are still somewhat experimental, those using it need to be much more willing to tolerate problems since there are still a lot of them, and we're somewhat on our own until there's more support out there. The thing is, trading applications aren't like Photoshop (which has a good, working 64-bit implementation largely because it can stand on its own) - they have to interoperate in real time with a bunch of other things. It's this interoperability, and the need not to introduce any slow-downs while crossing back and forth between 32-bit and 64-bit, that largely dictates the limits of what can be done today without it being more trouble than it's worth (although, again, this can be expected to change gradually over the coming time).

There can be no doubt that MC should be in the planning stages (if not the early implementation stages) of a 64-bit version (importantly, this has to be kept in the context of what's needed most, which currently is probably not 64-bit support). But understand: if it were released today, it would have little support out there, because the data feeds and brokers are largely not ready for this, and interoperability with 32-bit APIs is not as straightforward and trouble-free as we would all like it to be, and as Microsoft led us to think it could be. However, over the next year or two this situation should change, as it becomes more common for "regular PCs" to ship with a 64-bit OS and the third-party vendors very gradually ramp up on this technology. Currently, we're recommending in almost all cases that new PCs be purchased with at least 4 GB of RAM (at least 6 GB for i7 processors with the X58 triple-channel architecture) and 64-bit Windows 7 installed, so that as 64-bit applications become available, get debugged, and become more useful over the 1-to-3-year typical "primary use" life cycle of these PCs, clients are ready.

Getting closer to Janus's original topic, where he touched on what I call "MC's hidden recalculate": I just finished the initial version of the function I will be dropping into all my studies to detect and report on this hidden recalculate. I installed it in one fairly simple study that has an RSI with a default setting of 14. It was set to auto-detect MaxBarsBack, and in the end my function reported that it needs a MaxBarsBack of 22. At this point I am not sure why; it may have just blindly jumped past 14 and landed at 22, so I might try a setting of 14. For the moment I have changed the study to a "user specified" MaxBarsBack of 22.
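The function itself is EasyLanguage, but the core detection logic is simple enough to sketch in Python. The class and method names here are hypothetical; the real version lives inside the study and would report via print or an alert:

```python
# Sketch of the "hidden recalculate" detector: a recalculation shows up
# as the bar counter dropping back to 1 after the session is under way.

class RecalcDetector:
    def __init__(self):
        self._seen_first_bar = False
        self.recalcs = []              # timestamps of hidden recalcs

    def on_bar(self, current_bar, timestamp):
        if current_bar == 1:
            if self._seen_first_bar:
                # currentbar went back to 1 mid-session: hidden recalc
                self.recalcs.append(timestamp)
            self._seen_first_bar = True

det = RecalcDetector()
for bar, ts in [(1, "09:30"), (2, "09:31"), (3, "09:32"),
                (1, "10:05"),  # study silently recalculated here
                (2, "10:06")]:
    det.on_bar(bar, ts)

print(det.recalcs)  # ['10:05']
```

In the EL version, the equivalent of `on_bar` runs on every bar, and the detector state has to be kept in variables that survive the recalculation itself (or written out externally), since the recalc resets study variables.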

So it seems that for tweaking MaxBarsBack, writing and using this function is worth the effort (certainly informative). However, my main use is to catch the need to change MaxBarsBack when I may have missed doing so in any future EL code changes I make.

Maybe by the weekend I will put this function out in the user-contributed studies (with an example call, as I am using it). Anyone using it will need to tailor it, of course. It was harder to complete than I figured, so for others who are interested, it is probably worth using as a base.