Game development is heavy on C/C++; business applications are mostly C#. I have seen C/C++ devs express concern over how a single line of code translates to assembly. In .NET, some dig into the IL, but rarely.

In C#, "micro-optimizing" is frowned upon, rare and usually a waste of time. This does not appear to be the case in game development.

What specifically creates this inconsistency? Do games constantly push the limits of hardware? If so, as hardware improves, should we expect higher-level languages to take over the gaming industry?

I'm not looking for a debate on the feasibility of C# as a game dev language; I know it's been done to some degree. Focus on micro-optimization, specifically the difference between game dev and application dev.

I've worked as a games developer and as an application developer, and the differences are moot. Micro-optimisation without profiling is frowned upon in both. Many games don't have very demanding requirements and don't require any optimisation. Some business applications have far more stringent requirements (e.g. uptime and real-time guarantees) than an average 60Hz game.
–
Dave Hillier Dec 30 '13 at 16:18

I would point out that code that optimizes database queries can greatly improve the usability of business applications.
–
HLGEM Mar 31 '11 at 15:14


+1. Database and network optimization would usually give more bang for the buck in a business application — e.g. the choice of JSON vs XML, or tuning DB indexes.
–
Shamit Verma Mar 31 '11 at 15:16


+1, but you should add the other side of the equation: the "main loop(s)" and rendering in games, on which the fluidity of the game relies, make each microsecond lost a loss of value, because quality is perceptible to the eye and other senses.
–
Klaim Mar 31 '11 at 17:41


Well said. And indeed, having done business apps and game development, I have spent time poring over a complex SQL query trying to eke out some more performance, much the same as I have spent time poring over an inner loop in a game.
–
Carson63000 Mar 31 '11 at 20:30

Most business applications are written as in-house tools. Expectations about the usability of these tools are much lower than for software sold to mass customers. It is quite common for an in-house business app to have menus and dialogs that react slowly to mouse clicks, windows that redraw with a delay, or even a GUI written in Swing (the horror!). This is due to a number of reasons: it is more important that the software is customizable than that it is very "snappy", the users of the software have no choice about whether to use it, and the people who decide to install the software do not use it themselves. The consequence of all this is that the developers of these tools do not spend much time optimizing the responsiveness of the application, but care a lot about extensibility and the number of features. Different client base => different design goals => different methodology.

Note that a business application targeting a mass audience, such as Excel, IS heavily optimized.

In business applications, it's very rare for microseconds to matter. In games, it's a fact of life.

If you want a game running at 60 frames per second, you have roughly 16.67 milliseconds to do everything that needs to be done for that frame: input, physics, gameplay logic, audio, networking, AI, rendering, and so on. If you're lucky, you'll run at 30 fps and have a luxurious 33.3 milliseconds. If a frame takes too long, your reviews will suffer, your players will fill internet forums with bile, and you won't sell as much as you might (not to mention the blow to your professional pride). And if you're really unlucky, you will find your team coding business applications for a living.

Of course, game developers don't worry about every single line as, with experience and a decent profiler, you learn which lines need worrying about. On the other hand, those worries will sometimes touch things that in the business world would probably be considered nano-optimizations rather than micro-optimizations.

Don't expect any high-level language to kick C++ out the door until one offers comparable, and predictable, performance.

@quant: As with most stream-processing applications - robotics, power grids, rocketry, medical technology, etc. Build up too much of a backlog and it may be too late by the time you catch up.
–
Aaronaught May 11 '11 at 22:39

If yes, as hardware improves should we expect higher level languages to take-over the gaming industry?

Not really - because as hardware improves, consumers expect games to improve too. They don't expect to see the same quality of game developed more efficiently because the developers used a higher-level language. They expect to have their socks blown off by every new platform.

Of course, there is some movement. When I was a lad and first interested in game development, it was handwritten assembly or get the hell out. This was the Commodore 64 era. Nowadays, of course, C++ is the lingua franca of most game development. And indeed, we've even seen movement towards using C++ for engine code and a higher-level scripting language for game logic, e.g. Lua; the Unreal engine even has its own UnrealScript language.

+1 a good portion of game devs these days use a hyper-optimized engine layer written by someone else, then use something like Python, or less meticulous C++ to wrap things together.
–
Morgan Herlocker May 11 '11 at 21:44

Okay, so you've seen C and C++ developers obsessing over individual lines. I'd bet they don't obsess over each and every line.

There are cases where you want maximum performance, and that includes a lot of games. Games have always tried to push the performance limits in order to look better than their competition on the same hardware. This means you apply all the usual optimization techniques: start with algorithms and data structures, and work inward from there. By using a profiler, you can find where the most time is being taken, and where it's possible to get significant gains from micro-optimizing a few lines.

This isn't because the languages force people into it; it's that people choose languages based on what they want to do. If you want to wring the last bit of performance out of a program, you won't write C#, compile to IL, and hope the CLR's JIT compiler does a good job; you'll write it in something where you can largely control the output. You'll use C or C++ (and probably a restricted subset of C++) and study the assembly-language output and profiler results.

There are plenty of people who use C and C++ and don't worry too much about the details of translation, as long as it seems to be fast enough.

"Game" is quite an encompassing term. If you had, say, an MMORPG, smaller optimisations would affect many players.

Gamers are, and have probably always been, used to a comparatively large amount of things happening at once, in realtime. Sure, at one time, having a responsive Pacman or Tetris was the goal — but they still had to be responsive. Nowadays, it's 3D MMORPGs over packet-dropping network connections.