Basic Programming model....

Do you think it is worth the effort to write code that is efficient in CPU cycles and small in ...

Personally, I say that the software company that insists on forcing a hardware upgrade to use the new version of their software should have to purchase the new hardware for their clients.

Except that hardware does have limits. Let's do an exercise:

I am Squaresoft, it's 1997 and Final Fantasy VII is scheduled for release. I have a huge Nintendo fanbase, but Nintendo (at the time) refused to use anything other than cartridges, because gamers are hardcore.

I'm not going to even try putting polygon graphics in a game on some cartridge: it's just not going to work. A CD would be so much better just because it can handle so much more data.

That's the kind of stuff it comes down to. Companies are polite: they do write system requirements in the manual, and if you don't want to follow their suggestions, then you're SOL. Companies that are ethical routinely push hardware to its limits to reduce upgrade costs for their customers, but asking companies to sell hardware along with a game is a bit of overkill--you might as well buy a new PC in a bundle with games or something. That's one way to ensure that customers will have what they need when they play your game.

And I've seen examples where good coding would improve efficiency, particularly when it comes to memory management.

There was this example where an array of ints would be created. They said that you should take into account how the OS allocates pages in memory when doing this.
In the example they used pages of 4 kB, and the program needed a matrix of 4 MB. In this case it would be very efficient to create 1024 arrays of 4 kB, so that each array fills up exactly one page.
In the end all of this would decrease the working set (the number of active pages).
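For what it's worth, here's a minimal sketch in C of what that layout could look like, assuming 4 kB pages and a POSIX system; posix_memalign keeps each 4 kB row on its own page (the names and sizes are just illustrative):

Code:
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096                 /* assumed page size: 4 kB  */
#define NUM_ROWS  1024                 /* 1024 * 4 kB = the 4 MB   */
#define INTS_PER_ROW (PAGE_SIZE / sizeof(int))

int main(void)
{
    int *rows[NUM_ROWS];

    /* Give every 4 kB row its own page-aligned block, so a row
       never straddles two pages. */
    for (int i = 0; i < NUM_ROWS; i++) {
        if (posix_memalign((void **)&rows[i], PAGE_SIZE, PAGE_SIZE) != 0) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }
    }

    /* Touching one row now touches exactly one page. */
    for (size_t j = 0; j < INTS_PER_ROW; j++)
        rows[0][j] = 0;

    for (int i = 0; i < NUM_ROWS; i++)
        free(rows[i]);
    return 0;
}

With a plain malloc of one 4 MB block (or of unaligned 4 kB chunks) you lose that guarantee, which is the whole point of the example.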

Then again, apart from restructuring your code to gain efficiency, how will your compiler handle/optimize all of these things...

Anyway, I think it's better to write efficient code than to rely on hardware improvements.

In general I agree with GanglyLamb, however the other side of the coin is worth a look. I could not count the number of days I have wasted obsessing over tiny fragments of code which I eventually threw out or rewrote anyway. You have to know where to draw the line. I hate to admit that there's ever a time when something shouldn't be done "perfectly", but there is, and I admit it. It all comes down to the matter of diminishing returns.

You want efficient code that will run fast, but you don't want to spend decades writing it. You have to decide where your application falls; it'll have to be balanced. Write the most efficient code you can within the time frame you have.

Sure, if the application is small you can get by with slow code you slapped together in a few hours. But for large applications you need to optimize the slowest parts (mainly long, intensive algorithms and memory management).
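Of course, before you can optimize the slowest parts you have to find them. A rough sketch of the usual first step, timing a suspect routine with the standard clock() function; work() here is just a stand-in for whatever you think is the bottleneck:

Code:
#include <stdio.h>
#include <time.h>

/* Stand-in for the expensive routine you suspect is the bottleneck. */
static long work(void)
{
    long sum = 0;
    for (long i = 0; i < 100000000L; i++)
        sum += i % 7;
    return sum;
}

int main(void)
{
    clock_t start = clock();
    long result = work();
    clock_t end = clock();

    printf("result = %ld, cpu time = %.3f s\n",
           result, (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}

A real profiler gives you much more detail, but even this tells you whether the routine is worth obsessing over.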

There was this example where an array of ints would be created. They said that you should take into account how the OS allocates pages in memory when doing this.
In the example they used pages of 4 kB, and the program needed a matrix of 4 MB. In this case it would be very efficient to create 1024 arrays of 4 kB, so that each array fills up exactly one page.
In the end all of this would decrease the working set (the number of active pages).

Why would there be a difference between 1024 4 kB arrays and one 4 MB array? My understanding is that the working set is determined by which memory pages are touched, rather than by the details of the allocation.

Well, you could make the 4 MB up of 2048 arrays of 2 kB... if no optimization is done by your compiler (or whatever), then there's a good chance all these arrays would end up on different pages, creating more internal fragmentation. Plus, depending on which algorithm is used to decide which pages should be evicted, there could even be a lot more page faults. Since you will probably be working on that whole 4 MB at approximately the same time, most of the pages that make it up will either have just been used or be about to be used.
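A quick back-of-envelope calculation of the worst case, assuming 4 kB pages and an allocator that gives no alignment guarantees (the formula just counts the extra page a misaligned chunk can straddle):

Code:
#include <stdio.h>

/* Back-of-envelope: how many distinct 4 kB pages a 4 MB matrix can
   touch, depending on how it is chopped up.  A chunk that is not
   page-aligned may straddle a page boundary, so in the worst case
   (scattered, misaligned chunks) every chunk costs an extra page. */
int main(void)
{
    const long page  = 4096;
    const long total = 4L * 1024 * 1024;        /* the 4 MB matrix */
    const long chunk_sizes[] = { 4096, 2048 };

    printf("page-aligned minimum: %ld pages\n", total / page);

    for (int i = 0; i < 2; i++) {
        long s      = chunk_sizes[i];
        long chunks = total / s;
        /* a misaligned chunk of s bytes can span (s - 1) / page + 2 pages */
        long worst  = chunks * ((s - 1) / page + 2);
        printf("%ld chunks of %ld bytes: up to %ld pages\n",
               chunks, s, worst);
    }
    return 0;
}

That prints 1024 pages for the aligned layout, up to 2048 for misaligned 4 kB chunks, and up to 4096 for 2 kB chunks, which is the internal fragmentation being described.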

Now suppose you have a counter that keeps track of how many times a page has been used, and a timer that decrements the counter as time goes by. Then use an LFU algorithm.
With this setup you are likely to get more page faults, since there are more pages. As time goes by, the counters decrease, making a page more likely to be evicted... while it actually shouldn't be evicted, since it's still a very active page.
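Something like this toy sketch of LFU with aging counters; the decay policy and frame count are made up for illustration, not taken from any real OS:

Code:
#include <stdio.h>

#define NUM_FRAMES 8

static unsigned counters[NUM_FRAMES];  /* per-frame use counters */

/* Called on every reference to a resident page. */
static void touch(int frame)
{
    counters[frame]++;
}

/* Called from a periodic timer: age every counter so that pages
   that were hot long ago do not stay "hot" forever. */
static void age(void)
{
    for (int i = 0; i < NUM_FRAMES; i++)
        counters[i] /= 2;
}

/* LFU victim: evict the frame with the smallest counter. */
static int pick_victim(void)
{
    int victim = 0;
    for (int i = 1; i < NUM_FRAMES; i++)
        if (counters[i] < counters[victim])
            victim = i;
    return victim;
}

int main(void)
{
    touch(3); touch(3); touch(5);   /* simulate some references */
    age();                          /* one timer tick           */
    printf("evict frame %d\n", pick_victim());
    return 0;
}

The more pages your working set is spread over, the more of them sit with low counters at any moment, so a still-active page can look cold to the eviction pass.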

It's not the algorithm that is bad in this case, it's the design of the program. Since you have more pages, you need more lookups, so more time passes before all the pages we use in the 4 MB are handled... which could eventually lead to a kind of thrashing effect (although that depends on how many processes are in memory at the time and how much of the memory is in use).

Of course, an OS has mechanisms in place to deal with this thrashing...

Anyhow, this is all very theoretical; in practice it is almost impossible to do, since for every other OS/architecture you would need to restructure your entire code (which you don't really expect to do when working with very high-level languages; you expect the compiler to do this for you...).

Personally, I say that the software company that insists on forcing a hardware upgrade to use the new version of their software should have to purchase the new hardware for their clients.

If there were a way to program that gave you a perfect speed-to-size ratio, and you didn't have to sacrifice anything to get the best of everything, everybody would program that way. Considering that that approach hasn't come around yet in practical, modern software development, it kind of tells you something, huh.

You forgot about the 128 MB of video RAM Vista needs just to achieve some simple effects OS X has had for years. Every screenshot of Vista I see just looks more and more like OS X gone wrong. If they're gonna copy Apple, they shouldn't ruin all the features... more here: Visual Tour: 20 Things You Won't Like About Windows Vista