
ChiefMonkeyGrinder writes "This summer, the London Stock Exchange decided to move away from its Microsoft .Net-based trading platform, TradElect. Instead, they'll be using the GNU/Linux-based MillenniumIT system. The switch is a pretty savage indictment of the costs of a complex .Net system. The GNU/Linux-based software is also faster, and offers several other major benefits. The details provide some fascinating insights into the world of very high performance — and very expensive — enterprise systems. ... [R]ather than being just any old deal that Microsoft happened to lose, this really is something of a total rout, and in an extremely demanding and high-profile sector. Enterprise wins for GNU/Linux don't come much better than this."

Might I suggest teak? I suppose you could go with faux finished MDF if you are on a budget, but teak is beautiful and will last forever. Then if you have any money left over, nothing says "I have arrived" like a porcelain fountain.

What's with the anal retentive GNU/Linux all the time? Just call it linux already. With a GNU/Linux name, linux is never going to get any popularity. Yeah yeah, I know, linux officially just denotes the kernel etc., but really, no-one cares. It's an image thing. How many people do you hear saying that they have "microsoft windows ixpee" on their computer? That's right. None. They just state that they've got "XP", or "Vista", or "Mac" / "Apple".

I don't think I've ever heard anyone bragging or claiming to run "ntoskrnl.exe" or "Mach" on their Windows/Apple box, whereas I hear plenty of people who say they run Redhat, Ubuntu, SuSE, etc. I don't see the problem.

The guys that run Enterprise Oracle databases might not want to run the desktop distribution that has bleeding edge video driver support and automates the installation of video codecs and browser plugins. Imagine that?

Actually, from what I have seen, .Net is a good development environment. Mono has produced some very nice software for the Linux desktop that lots of people use. What I didn't get from this story was just what they were using for the development system? I doubt that it is all in C or C++, so maybe they are using Mono.

The TradElect system was originally developed by Microsoft and Accenture, so it was to be a showcase of how excellent .NET was.

Unfortunately, other companies showed how good their stuff was for this kind of work, and MS showed that you cannot polish a turd.

(Well, OK, .NET would have created a system that would run your line-of-business apps without problem, but when it comes to very high performance, low latency systems, it's simply not suitable, a bit like Java is not suitable for nuclear reactors.)

The new system will be written entirely in a lower-level language, and MillenniumIT does use C++ - take a look at their jobs board and you'll see the only skill referenced is C++.

It doesn't take a rocket scientist to work out that a GC-based, VM-based language that has layers of intermediate execution is going to be slower than is required for a trading system. What I don't get is that MS thought they could throw hardware at it until it worked. Don't forget that MillenniumIT was also bought for $30m, which is roughly half what the .NET system cost (£40m).

The moral is that you don't want to use the simple-to-code MS platform when you can get a best-of-breed system, based on Linux and good engineering for a lot less. IT managers around the world should be looking at this and thinking what similar lessons their IT departments could learn.

Actually you can polish a turd, it was a surprising result from a Mythbusters episode a while back. Denser turds turn out especially shiny.

Otherwise your post was spot on. I think the concept behind .Net can be very useful in the Enterprise environment, primarily in areas where efficient memory/processor management of apps from multiple vendors is required. This particular case seems more suited to the pure process efficiency you get with C and C++. The only thing that seems strange is the cost disparity. .NET apps are usually much cheaper to develop, and the hardware costs would have to be astronomical to make up the difference were that the case. Obviously something about the structure of .Net did not fit the application and had to be worked around in a big way. Of course, I've also seen projects whose primary costs have nothing to do with actually engineering or delivering the system.

The only thing that seems strange is the cost disparity. .NET apps are usually much cheaper to develop,

You caught that they had Accenture as the integrator. That's where your cost came from. And probably where the relatively poor performance came from. Accenture gets projects done, for the most part, but their history is full of situations like this. .Net is not productive enough to overcome the bad effects of legions of inexperienced developers.

I'd be surprised if a strong team couldn't develop a fast, economical replacement using .Net. Or Java. Or Scala.

Disclosure: I worked for Accenture for 10 years, a long time ago, before they were Accenture. I was one of the more technical folk, which should scare you.

The right tool for the job? .NET is fantastic for many different things. In fact, I have written high-end video encoder systems in .NET that performed all real-time and file-based management for multiple 1.5Gbps streams. However, for the high-demand code, I used C++ and even a few lines of assembly (I can't resist it, I just have to write them; it helps me sleep at night).

Developing high performance systems using .NET is ENTIRELY possible and even practical. Unfortunately, in a company like Microsoft, the developer with the skill set for such a job will almost always end up on development teams for Windows, .Net itself, Visual Studio, even Office. The developers left over to write database programs for customers will be of a much lower grade. Besides, there are very few good real-time systems developers that would choose to work on a database program rather than on something more interesting, like... I don't know... shaving toe nails for old ladies. Really, database programming is what people do when they can't do anything else, it's the data-entry job of programmers.

Java, at least, is still a modern VM environment. The CLR generally IS NOT. It has some features you would expect of a VM runtime system, but if anything, those features improve performance over straight-out compiling the MSIL code. Ideally, a modern VM would allow trace metrics to be calculated, and where branches can be predicted, long traces can be compiled without cache misses and penalties... creating MUCH higher performance code.

As for GC: well, unless you can develop a system that eliminates memory allocation altogether, and uses no threading while doing it, good GC-based environments (like CLR/Mono) are almost always faster than straight memory allocation. I highly recommend you research it... and if you're going to try and prove it with 5 lines of code, don't waste your time. That's not a real-world test. Test it instead with, for example, an XML parser that generates a DOM tree and then deletes/dereferences it.

As a religious non-Java programmer and a devout Java basher, I'll shoot down the "Not suitable for nuclear reactors" thing. Java is 100 times more suitable for a nuclear reactor in most cases than C or C++ since the "object model" you would use in a Java program would centralize most critical bugs to a few lines of code that can be fixed to repair the whole program instead of spending months on diddling all the little memory and pointer related bugs you're likely to encounter. Also, for applications that are heavy allocators, relocatable memory in a Java environment can cause a system written by an "average programmer" to run much longer without crashing because of one memory abuse situation or another. Almost no "average programmers" even know where to begin to deal with memory fragmentation issues, yet they DO cause tons of problems.

In the case of this trading system, it's obvious Microsoft tried throwing hardware at the problem. That was all fine and good. Hell, add 500 more web servers and use 5 of those 32-processor Xeon machines from Unisys to drive the database. .NET will make NO difference at this level. The flaw at this point was poorly coded SQL. After all, by distributing the load of the web traffic across 500 blade servers, there was little chance that the .NET program they were running was the problem.

Instead, it's FAR more likely that abuse of the database was the real problem. Most database front ends querying data from SQL servers are written by mediocre database UI developers that have no respect for what the SQL server might actually have to do in order to process their queries. On top of that, they like to do things like create tons of views and indexes that all need to be updated constantly. Queries get SLOWED down and it doesn't matter how fast the application is, the SQL server can't keep up with the crap code on the back end.

As for GC: well, unless you can develop a system that eliminates memory allocation altogether, and uses no threading while doing it, good GC-based environments (like CLR/Mono) are almost always faster than straight memory allocation. I highly recommend you research it... and if you're going to try and prove it with 5 lines of code, don't waste your time. That's not a real-world test.

Speaking as a real-time programmer, GC and memory allocations are enormously damaging to system performance. You really do need to switch to an almost statically allocated approach, with no memory allocations in real-time execution segments. The x86 architecture has special instructions that make Base Pointer, Stack Pointer, and Index Pointer based memory access usable. If you ever program on a less powerful processor, like an 8-bit PIC microcontroller, you quickly discover that indirect memory accesses carry significant timing penalties. Direct memory access, where data is at fixed locations in system memory, can be done in a single instruction on almost all architectures.

The second problem is that dynamic memory allocation has an unbounded maximum execution time. It can also be incredibly difficult to prove that memory accesses do not fragment, and that the program can execute in bounded memory space. Proving finite execution times and finite resource use is a major issue for a real-time system. In soft real-time systems, some forgiveness is tolerable. However, if you are in a language like C# and discover that one block of code is rate limiting because of memory allocation issues, how do you overcome the problem? In C/C++, you can statically allocate the memory blocks and work around the problem. In Java/C#, the issue is pretty much the end of the project.

Test it instead for example with an XML parser that generates a DOM tree and then deletes/dereferences it.

Simply put, you can't have algorithms like that in programs with bounded maximum execution times. What happens if the XML file is corrupt? Excessively large? A pathological case deliberately designed to take down the London Stock Exchange? An unbounded tree based on a customer provided data file is a bug in a LSE style application.

Whenever I am looking at code blocks that need to execute quickly, the first thing I look for is blocks of code with unbounded memory, or unbounded execution times. C# encourages using these blocks of code. Real-time software requires using a small subset of available computer science techniques. Language and library support for this must be present.

These sites are focussed on Java, but the points are applicable to .NET also, as it's on par nowadays. In .NET you also get the option of using unmanaged code anyway, so you can have areas that don't require the VM to underlie execution.

I'd imagine the real problem in this case was a combination of poor project management with poorly skilled developers, in an attempt to make the profit margins for Microsoft and Accenture as big as possible. The net result though, as you can see, is quite bad. I do not believe for a second .NET was the problem, as there is no reason it can't be used in a way that performs as well as or better than a C++ application. It would use a bit more memory to achieve that performance, but memory is cheap enough for this not to be an issue in most cases nowadays, particularly when you factor in the benefits of security and resilience you get from the managed parts of the codebase.

Every time there's a discussion of .Net, .Net developers defend the framework with a list of reasons intended to demonstrate how much better/easier it is to write applications in Java (oops, I mean .Net).

_Every_ time I use one of these applications I am _very_ unimpressed.

Those two things together have me thinking that all the mediocre programmers in the world gravitate to these 'easy' languages, and we get TradElect.

Seriously, as an end user I couldn't give a damn how easy it is for you to crank out your ideas/applications, so please stop with the aforementioned approach to defending .Net and provide me with some examples of useful, solid, fast, not-buggy applications that can be written with .Net. As it stands right now, applications written in .Net/Java/Ruby-on-Rails/etc. have no chance of making it into my infrastructure; sometimes there's an argument, but even then the guy who wrote it starts in with how easy it was to write, at which point he's lost the argument.

1) Your programs don't even do the same thing, since some of them have multiple increments of the loop variable.
2) Your C++ program is not idiomatic STL. If it were, you'd use an iterator (which is typically a pointer for vector) as the loop variable.

I'm not of the opinion that insert-high-level-language-with-GC-here is necessarily slower than insert-C-or-C++-or-whatever-here, but bullshit benchmarks aren't going to help make that case.

Anyone can use open source in their profit-generating business, it's only newsworthy when a major player makes a significant contribution back to the community whose shoulders it stands upon.

It is this type of thinking that prevents Linux from moving beyond being the behind-the-scenes "muscle" in an enterprise-level environment. Microsoft, on the other hand, wants as many people as possible to use their systems, and various memo and PowerPoint leaks over the years have shown that internally they focus on keeping their customers as satisfied as possible. Obviously, they fail sometimes (i.e. TFA), but overall this focus has allowed them to maintain their dominance, whereas Linux has somewhat pl

The article was clear that they were not dumping .Net just because it was .Net, but dumping the application and system that just happened to use .Net. They also dumped the outsource model and actually bought the company that makes the new product, so that they would have more in-house control. The .Net thing is just a minor quibble in the big story.

The languages are merely tools; the bigger picture is about the application and the customers. The problem with Microsoft's approach to being a "solutions provider" is that they're too focused on pushing their tools and technology, with the actual application being an afterthought. That is, they're Microsoft focused, not customer focused.

At the end of the day, the London Stock Exchange could not care less what the developers think about the language they're using, or even what the operating system is. They're not trying to stick a finger in the eye of Microsoft or promote open source, they just want a product that does what they want at the best price they can get.

OT I know, but I never have understood why the Linux guys hate the registry so much. Do you really think hunting down a bazillion config files and editing them is better? I can just hand my customers Tuneup utilities and it automatically cleans out any cruft left over from uninstallers, and you really can't beat .reg files for fixing little niggling problems remotely. Got a borked sound server? Just double-click this .reg and reboot. Display problem? Borked autorun? .reg file. For having to work on a machine across town or across the country, it sure is a hell of a lot easier for me to just send them a .reg than it is walking them through a bunch of CLI.

So really, what is the hate? As someone who lived through the .ini days, I'm quite happy with the way the reg works on WinNT-based machines. I'm sure for Linux guys with lots of IT experience editing a bunch of config files is just fine, but as someone who has to deal with home users on a daily basis, it is just easier IMHO to deal with the registry, especially with sites like this [kellys-korner-xp.com] at your fingertips. So I just don't get it. The MSFT Bob jokes I understand, and as someone who suffered with WinME and Vista before tossing them for the goodness of Win2K and WinXP Pro respectively, yeah, I get the joke there. But seriously, what is wrong with having a registry?

...
#X11Forwarding - Specifies whether X11 forwarding is permitted. Default is "no"
#rastos: changed to "yes" because boss B asked for it in e-mail sent on 7th of Oct 2009
#rastos: with Subject "I'll throw some more chairs if this does not work tonight!"
X11Forwarding yes
...

The .reg files are about as repeatable as can be. In fact I keep a couple of them on my flash drive for the more common "gotchas", like the new Realtek HD borking the Windows sound server. Just clicky clicky and reboot; it don't get easier than that. The Linux guys can scream about homogeneous environments, but to me that is one of the nice things about dealing with Windows. WinXP is WinXP is WinXP, and the reg works the same pretty much everywhere. And why bother writing my own, when there are prebuilt solutions [kellys-korner-xp.com]?

"Enterprise wins for GNU/Linux don't come much better than this."
Enterprise wins like this are happening all the time for Linux and other free software options. What makes this one unique is that MS touted LSE running their system as a huge win for their solution. The fact that it gets ripped out a year later for Linux is marketing gold, if free software cared to market.

Part of the problem is that they DO need to market. They just don't, for various reasons usually having to do with $$$.

It's one of the main complaints about Linux adoption. If the only two groups doing any form of real marketing are Novell and Red Hat, don't expect the platform as a whole to make more than an extremely small dent against Microsoft corporate and home solutions.

As one other note, while OSS fanatics (I'm quite keen about OSS, but not quite a fanatic) go apeshit about this - This was more "switched from Accenture to running it `in house' in the form of a large team of low-paid talent in Sri Lanka" way more than it was "abandoned .NET for Linux! Rah rah rah!". The fact that people are hilariously so focused on the latter while missing the former speaks to how incredibly myopic people can be.

This was more "switched from Accenture to running it `in house' in the form of a large team of low-paid talent in Sri Lanka" way more than it was "abandoned .NET for Linux! Rah rah rah!". The fact that people are hilariously so focused on the latter while missing the former speaks to how incredibly myopic people can be.

Horseshit. This is switching from "Accenture writing a slow unstable trading platform with .NET via cheap labor in India" to "buying the company that produces a fast, stable platform on Linux".

As far as reliability goes, purportedly the LSE had a single day of problems, caused by reasons that were never qualified.

Purportedly a single day of problems?

The exchange shut down during a high-volume trading session. That's not purported, that's fact. What's purported is the number of times HVTs observed execution delays on the LSE at other high-volume times... and that's one reason Euronext has been claiming increasing market share from LSE.

why didn't the LSE hire a team to develop a .NET system in-house, then?

My takeaway from the article (yeah, I know, I read it, sorry) was they bought the company because it had a solution built. I'd be surprised if the platform used was anywhere near as important as functionality and performance. Which is as it should be.

Now, it would be interesting to understand the history of the purchased outfit: how did they arrive at their decisions, and what would they do differently?

The LSE sounds like it has very incompetent technical leadership, and this sounds very pie-in-the-sky-ish. So now, in return for selecting this Sri Lankan company, they get 100% ownership (???) of some speculative wish. Great. .NET is a fantastic development environment, and it is fantastic for websites of virtually any size. Probably not so great for real-time trading, though throw enough specialization at it and you can get whatever you want out of it.

Wait, Microsoft + Accenture built a piss-poor platform. As you may recall, Accenture is a giant in the consulting business. Their combined efforts failed miserably.

Linux is the OS of most large trading systems. This has been covered on slashdot before.

MilleniumIT has a proven product in deployment in several exchanges. Their product is not pie-in-the-sky. It works. They've had several big wins in the past decade. They've been collaborating with Intel on optimizing their platform. Their transaction processing times are an order of magnitude better than LSE's current system.

So, I'm not sure what your angle is... are you trolling? Astroturfing? Or just spouting knee-jerk reactionism without any kind of basis in reality? A quick googling might have helped you out a bit.

and the "pie in the sky" element is that one of the reasons they decided to acquire this company is because now they have stars in their eyes about the great things they are going to do.

Pie-in-the-sky is unobtainable by definition. Are you claiming that LSE won't be able to implement a trading platform with lower latency and better uptime than their current system? Or are you just claiming that LSE & MilleniumIT are being a little too optimistic in their press releases? Because the latter of those two is probably true.

Gosh, you got it all covered there. I guess you provided a savage indictment of my post. Or maybe I'm actually a realist, and see a lot of people doing a hilarious happy dance far too prematurely. That's what she said!

You made a very generic post about pie-in-the-sky cheap outsourcing to Sri Lanka. You appeared to have little-to-no actual knowledge of the subject, since none was communicated in your post (except the mention of Sri Lanka, which was gleanable from the first comment on the article on the site where it was originally posted). You do not appear to be familiar with MillenniumIT.

You call yourself a realist... yet realistic perspective is dependent upon knowledge of the subject. It's well known that most trading platforms are faster than the piece of crap they had on the LSE... often more than 25ms faster, which means that it was faster to trade on Euronext.

But you know... whatever man... you can try to backtrack and defend your reactionary post however you want... you simply made claims that don't stack up to reality.

LSE isn't going to run setup.exe (sorry, ./setup.so), they're going to have to do some large-scale integration work and customization to make it work with their system...?

Huh? Trading platforms are trivial applications. Send data down the wire. Commit it. Get data back. Typically, these systems have multiple servers per stock offered at the exchange, each of them acting as a market maker/auctioneer to the others (trivial; a 10KB binary can do it, VERY QUICKLY). Each of the machines buffers trading history until it can be sent to the clearing house.

There's little need to "customize" Linux. Linux already deals with the networking part just fine.

The issue is writing the software using an easily maintainable, testable, and rigorously provable language. Credit Suisse is using Haskell for this purpose, very successfully. The only real difficulty is implementing the exchange rules regarding sorting the stock orders. That's going to be a real issue in any language. Sorting large sets is always expensive (but can be done in parallel).

I find it humorous how quickly so many want to bask in the glow of this, using it as proof of something, when I'm fairly certain that it was discarded as proof of nothing when the LSE first went the.NET route.

Well, someone [microsoft.com] certainly thought LSE was proof of something, why otherwise would they have bragged [microsoft.com] about it? Now that that bragging has been shown [guardian.co.uk] to be moot [computerworlduk.com] surely you can understand this modest amount of schadenfreude?

I could see .NET being good for a stock exchange, if coded properly. The thread management libraries in .NET make it really easy to develop a massively multithreaded piece of software which handles atomic changes properly and efficiently. That's basically what a stock exchange program would be, right?

.Net is just a specification and a bunch of languages. There is an open source implementation of .Net itself, and certainly many open source projects written in C#. "Rejects Windows for open source" would have been a more appropriate headline. I hope they still use some kind of language with bounds checking and type safety, given the dangers of buffer overrun exploits in a national stock trading system.

It should also come as absolutely no surprise that a C++ pointer-based linked list running natively on the OS performs faster than a .Net Generics List running as CLR in the .Net runtime environment.

What do you mean by "performs faster"? Iteration? Indexing? Insertion at front? Insertion at end? Removal? This is a surprisingly vague statement...

I can bet you $1000 that System.Collections.Generic.List<int> will significantly outperform std::list<int> on indexed access on lists of significant size, for example, simply because the former is array-backed, and the latter is a doubly linked list. This is just to show how meaningless your comparison is.

Now, yes, if you write idiomatic C# code for a linked list (using GC-heap-allocated objects and tracked references), it will be slower than equivalent C++ code because of all the safety checks (like null checks). But, of course, you can also use C# raw pointers and structs to write the exact same code you would write for a linked list in C, and that would run just as fast (since it would compile to pretty much the same native code in the end).

This is all nice and stuff in theory. Every so often, people like to try to argue that code running under a VM such as Java or C# with .Net is "as fast" or faster than machine-compiled code from C or C++, because of JIT and runtime optimizations and whatnot. Unfortunately, the reality just doesn't follow the theory. In real-world benchmarks, managed code is not faster than pre-compiled machine code. Period. This is just more evidence of what we already know. If the goal is sub-ten-millisecond latency (and it is for stock exchange systems), LSE apparently never met that goal while other C++ solutions have for years. We can talk to death about data structure implementations and whatnot, but at the end of the day, we'll need to look around and see what the real-world results are telling us.

This is all nice and stuff in theory. Every so often, people like to try to argue that code running under a VM such as Java or C# with .Net is "as fast" or faster than machine-compiled code from C or C++, because of JIT and runtime optimizations and whatnot.

In case you haven't noticed, I'm not arguing that. I'm arguing that C# has all the low-level operations that C has, which allows you to write C# code at the same level of abstraction as C code. Naturally, pointer arithmetic gets compiled to the same native instructions in any language. The optimizer can improve things somewhat, and I won't argue that the .NET JIT optimizer is on par with, say, gcc, but that difference is very circumstantial, and small even in the worst cases.

Unfortunately, the reality just doesn't follow the theory. In real-world benchmarks, managed code is not faster than pre-compiled machine code. Period.

I never claimed it's faster, either. It is still slower even with hand-tuning that I've mentioned, simply because you cannot kill GC entirely (though if you never allocate from managed heap, GC will simply never run).

Also, please don't drag Java into this. Java and C# are two very different languages by now, with C# having a much richer feature set, which is very much relevant to this discussion, since parts of that feature set are what enables C-like performance when needed. Furthermore, the two most common runtime implementations for those languages, Microsoft .NET and the Sun JVM, have radically different implementation strategies. As such, you cannot meaningfully translate your Java/JVM experience to C#/.NET, or vice versa.

Incidentally, you could use a shared_ptr and a vector to get the exact same behaviour as your C# list, including the same array-of-ptrs indexing, and the same cache misses for every object access.

Where do you get the notion that C# List is an "array-of-ptrs"? It's simply false. If you use a reference type there, then, naturally, it will be that, but you don't have to use reference type in a C# List, just as you don't have to use pointers in std::vector. Use a C# struct, and you'll get the same contiguous memory block.

Also, if you actually use vector<shared_ptr>, it will quite likely be slower than a List<T> of reference types on insertion, depending on the C++ implementation, because of the unnecessary refcounting.

Yes, the list is contiguous in memory, but that list is just a list of object pointers. The data is scattered around the heap just like the linked list data is scattered around the heap. Fast access to the object pointer does not yield any speed boosts. In C++ you could create an array of actual objects; then all the objects are contiguous in memory, and incrementing to the next object is incrementing a pointer by sizeof(theObject). For small objects, you might stay within the memory cache on each increment. The managed object system most likely cannot put the actual objects into contiguous memory, and so you still have the cache misses when dereferencing the object pointers.

So, tell us again who understands access characteristics of linked lists and array lists better?

Yes, the list is contiguous in memory but that list is just a list of object pointers... In C++ you could create an array of actual objects and then all the objects are contiguous in memory and incrementing to the next object is incrementing a point by sizeof(theObject). For small objects, you might be within the range of the memory cache on each increment. The managed object system most likely cannot possibly put the actual objects into contiguous memory and so you still have the cache misses when dereferencing the object pointers.

Not necessarily - this isn't true for any primitive types like int or float, and this isn't true for any user-defined structs.

Unlike Java, C# lets you define your own types that don't have to be heap-allocated. For such types, exact same technique that you describe for C++ can also be used.

Won't it depend on typical behaviour of the system at hand? Program for the typical case but prepare a single memcopy (not that I use them) for the 3 standard deviations case (and one for the outlier). Copying the memory, while expensive, *may* not be as expensive as extensive cache missing in the typical case.

Yes, which is why I raised the point that in C++ you can choose the backing for your linked list. Back it on an array for iterating performance. Back it with truly linked nodes for better insertion properties.

A generic list, even if it is array-based, is going to be an array of pointers referring to other spots on the stack and the heap.

If you use the STL, then std::vector will also allocate its backing-store array on the heap.

On the other hand, if you use C#, you can use stackalloc [microsoft.com] to get a stack-allocated, non-GC-tracked array.

Managed .NET arrays (not stackalloc or unmanaged heap-allocated) will still be slower because there are bounds checks on element access (though the JIT can sometimes eliminate them, when it sees that they can never fail).

Mutable generic collection classes are even slower, because they also have safeguards to do things like throwing an exception if you get an enumerator for a collection, then remove an item from that collection, and then try to move the enumerator (whereas in C++, doing the same thing to a vector would just render all active iterators invalid, and their use would lead to a crash at best, and silent data corruption at worst). This is achieved by storing a "version number" for the collection (just a plain int) which is incremented on every insertion/removal, and which enumerators check against every time you move them. Naturally, this increment happening on every insert also slows things down.

The .NET JIT is fairly pessimistic and generally simple in the kinds of optimizations that it performs. The reason is that .NET doesn't have a bytecode interpreter at all, only a JIT; therefore, the JIT has to be reasonably fast, otherwise too much time would be wasted in it alone. As a result, it cannot be too sophisticated.

In contrast, Java HotSpot has both a bytecode interpreter and a JIT; the interpreter is used by default, with the JIT being triggered by frequent invocation of one particular method, and just for that method.

This is part correct, part wrong, and part outdated. In particular, branching by itself never blocked inlining, though the complexity that results from it may. Loops (i.e. any branching instruction that is potentially iterative, as there is no if/else or do/while/for at the IL level) are not inlined. The detailed list is here [msdn.com], but note that it is pre-SP1.

The JIT inliner was made more aggressive [microsoft.com] in 3.5 SP1, and will, in particular, inline methods with struct parameters.

Why is this news? Sun/Solaris dominated the high-end financial sector for ages...any exchange/trading house/equity firm/etc that is using Windows is insane IMHO. Linux is just the most recent unix platform to show up in the sector, it's not revolutionary...

That's great, but there are plenty of enterprise situations where .NET is being utilized. It all boils down to using what works. In this case, Microsoft's solution failed. Hopefully their Linux-based solution works better. But let's say that it craps out and crashes too (yes, programs running on Linux systems do crash)... will you be out here saying that Linux isn't ready for enterprise deployment either?

Why is this news? Sun/Solaris dominated the high-end financial sector for ages...any exchange/trading house/equity firm/etc that is using Windows is insane IMHO. Linux is just the most recent unix platform to show up in the sector, it's not revolutionary...

any exchange/trading house/equity firm/etc that is using Windows is insane IMHO

You mean like an exchange that was the cornerstone of MS's advertisements for 2 years? About how .NET was so scalable, it was used in the exchange, and SQL Server was so wonderful, it was used in the exchange...

Well, it was the cornerstone of advertising until the exchange had a day-long technical outage a year or so ago. That left people in the dark, and they had to suspend all trading. Suddenly, the ads stopped.

Why is this news? Sun/Solaris dominated the high-end financial sector for ages...any exchange/trading house/equity firm/etc that is using Windows is insane IMHO.

It's news because, in fact -- whether or not it was "insane" to do so -- the London Stock Exchange was relying on Windows, .NET, and other Microsoft products: "As part of its strategy to win more trading business and new customers, the London Stock Exchange needed a scalable, reliable, high-performance stock exchange ticker plant to replace its earlier..."

Didn't the New York Stock Exchange move over to Linux because Microsoft couldn't provide a good, low-latency RT kernel? They begged Microsoft, wanted to stay with Microsoft, and Microsoft couldn't provide them with a solution.

Didn't the New York Stock Exchange move over to Linux because Microsoft couldn't provide a good, low-latency RT kernel? They begged Microsoft, wanted to stay with Microsoft, and Microsoft couldn't provide them with a solution.

I could be wrong, but IIRC the NYSE has never been a Microsoft shop for the hard-core trading systems. They may have wanted to switch to Microsoft from the previous big Unix iron, but Linux won out.

...TradElect platform, supplied by Accenture, has finally been answered: yes, it will. This hardly comes as a surprise - the issue of the platform's speed and efficiency, as well as Accenture's support, has been a hot topic for the market in the last couple of months.

Accenture? Not exactly a low-cost vendor there. Meaning, much of the "costs" of this .NET system is Accenture's high fees.

"We want to address the entire suite of products and MillenniumIT gives us that scale." Indeed, its offshore development centre - "a hotbed of top graduates" - with 94 per cent being top-class alumni from Sri Lanka and around the world, including MIT in the US, caters for such magnitude of scope.

Offshoring. They're going with a cheaper, although quite smart, set of folks.

If you had read the earlier articles on the TradElect fiasco, you would have known that it was basically written and designed by Microsoft itself. Accenture had a very heavy involvement in the project straight from Redmond.

So yes, this is an outright condemnation of the quality of Microsoft's products.

They are buying the whole company in Sri Lanka, not just hiring them to build a project for them. The software in question already exists, the company in Sri Lanka already built it and is selling it today to other exchanges.

Further, your statement that it's about "going with a cheaper vendor and a software platform that GIVES THEM MORE CONTROL" is very much a damning of Microsoft and its technology. With Microsoft you don't have control, THEY do.

while TradElect is based on Microsoft's .Net technology. The choice of the latter, which has raised quite a few eyebrows in the market, is defended by Lester. He claims that LSE is coming off TradElect not because of the .Net technology itself (although its trading speed is 2.7 milliseconds compared to Linux-based Chi-X's 0.4 milliseconds), but "for more control, less costs, and the ability to build and innovate". Furthermore, he describes LSE's experience with .Net as "very positive".

I will grant that the 2.7 ms benchmark is definitely slower than .4 ms. However, I don't think you can benchmark the trading speed of .Net, only the trading speed of TradElect. Last time I checked MSDN, there was no System.StockExchange namespace provided with the .Net Framework.

These articles sound more like MillenniumIT just has a faster, nicer, cheaper product than TradElect. It sounds to me like Accenture failed, not .Net.

Strictly speaking, it isn't. However, back before they made the change, the deployment of the app was supposed to be a ringing endorsement of the platform. It was one of the most prominent "Get the Facts" cases. So, although the relationship between the quality of the app and the quality of the platform isn't obvious in either direction, there is a certain symmetry here. If the app's success was going to be an endorsement, and was hyped as such, its failure can plausibly be considered an indictment.

From what I understand, it was the app that sucked. Why is this then a stinging indictment of the platform?

Because Microsoft used the app, and its supposed superiority in the area it was deployed in, as a major case study in the strength of the Microsoft programming and platform components used in the implementation: the products called out in their case study include the .NET Framework, Windows Server, SQL Server, Visual Studio .NET, Microsoft Operations Manager, ASP.NET, and Visual C# .NET (I may have missed some).

The platform was limited by the runtime, which was Windows/.NET. If you dig into the stories you can find quotes indicating that a crucial advantage of the new platform is that they were able to examine and tune every layer of the stack, from the kernel upwards, to avoid latencies right down to the processor level itself. Only Microsoft can do that for Windows, and they aren't in a hurry to make customized versions of their stack for individual applications.

Having read the article, and having traded equities on the London Stock Exchange and Borsa Italiana for twenty years, I must say that I believe the declaration that it was not a performance issue is correct... to the point that I suspect no amount of performance gains on Microsoft's part would have turned the scales. Stock exchanges are not national monopolies anymore, even if the few remaining big ones are gobbling each other up. Controlling the technology involved is much more important than a slight performance hit.

The London Stock Exchange scores a double hit on this one, since not only will it own the system, but the internals of said system will be open source, freeing it, for example, from US-government limitations on sale to third parties. And anyway, when an institution that big uses only Microsoft in-house, it is like having another stakeholder on your back, with an agenda of its own, such as having you switch soon to the latest and greatest of its server suite, if only for its publicity value. By making the move, the LSE is back to setting its own pace. I wish I could do the same on my desktop in the office.

Anyone who was ever fool enough to believe that Microsoft software was
good enough to be used for a mission-critical operation had their face
slapped this September when the LSE (London Stock Exchange)'s
Windows-based TradElect system brought the market to a standstill for
almost an entire day. While the LSE denied that the collapse was
TradElect's fault, they also refused to explain what the problem really
was. Sources at the LSE tell me to this day that the problem was with
TradElect.

Since then, the CEO that brought TradElect to the LSE, Clara Furse, has
left without saying why she was leaving. Sources in the City (London's
equivalent of New York City's Wall Street) tell me that TradElect's
failure was the final straw for her tenure. The new CEO, Xavier Rolet,
is reported to have immediately decided to put an end to TradElect.

TradElect runs on HP ProLiant servers running, in turn, Windows Server
2003. The TradElect software itself is a custom blend of C# and .NET
programs, which was created by Microsoft and Accenture, the global
consulting firm. On the back-end, it relied on Microsoft SQL Server
2000. Its goal was to maintain sub-ten millisecond response times,
real-time system speeds, for stock trades.

It never, ever came close to achieving these performance goals. Worse
still, the LSE's competition, such as its main rival Chi-X with its
MarketPrizm trading platform software, was able to deliver that level
of performance, and in general it was running rings around TradElect.
Three guesses what MarketPrizm runs on and the first two don't count.
The answer is Linux.

It's not often that you see a major company dump its infrastructure
software the way the LSE is about to do. But, then, it's not often you
see enterprise software fail quite so badly and publicly as was the
case with the LSE. I can only wonder how many other Windows enterprise
software failures are kept hidden away within IT departments by
companies unwilling to reveal just how foolish their decisions to rely
on archaic, cranky Windows software solutions have proven to be.

I'm sure the LSE management couldn't tell Linux from Windows without a
techie at hand. They can tell, however, when their business comes to a
complete stop in front of the entire world.

So, might I suggest to the LSE that they consider Linux as the
foundation for their next stock software infrastructure? After all,
besides working well for Chi-X, Linux seems to be doing quite nicely
for the CME (Chicago Mercantile Exchange), the NYSE (New York Stock
Exchange), etc., etc.

From now on, it will no longer be possible to take refuge in the idea that you can't get fired for sticking with Microsoft.

the CEO that brought TradElect to the LSE, Clara Furse, has left without saying why she was leaving. Sources in the City (London's equivalent of New York City's Wall Street) tell me that TradElect's failure was the final straw for her tenure.

So, when Microsoft makes so much noise with press adverts and "Get the Facts" campaigns, it's 'marketing', and when FOSS supporters rejoice they are 'fanatics'!! Just STFU and get back to your Windows 7 house party.

But they gained a lot anyway. Sure, the LSE has moved on, but the fact that a .Net application on Windows Server 2003/SQL Server 2000 could handle the LSE at all isn't a total loss for MS. I'm sure they learned a lot in this failure. Seriously, .Net 1.0 came out in 2002. In five or six years, the VM held up to a pretty high standard. Sure, the damn thing melted down for a day, and getting five or six nines out of any Windows system is and will be a total black art, but doing so for a Linux system isn't a walk in the park either. There is plenty of room for this new deployment to crash. If it does, it's not an indictment of Linux; it is a statement of how hard such systems are at all levels.

Rock-solid systems can be built on both platforms. It is just a matter of whether the costs and benefits are worth it. In this case, it doesn't seem the Windows solution held up. But to call this proof of how bad MS software is seems to be hyperbole that misses the fact that, for a good while, it did work. Given how new all the software involved is, I'm surprised.

This is not my area of expertise, but it strikes me that for these applications, the rule of thumb for decades has been to get as close to the iron as you can. There are just way too many layers of abstraction in Windows and .Net, and damned little control over what goes on under the hood. With a Linux kernel, if you need very high RT performance you can cherry-pick your hardware and compile the kernel and various other supporting apps with that in mind. I would imagine that they've also moved to Oracle here.

Hardly. My complaint isn't about the TradElect software's performance. It was slower. But why was it slower? Is the implementation crap? Could it be redesigned to run faster while still running on the .Net framework? Or is it the inherent lag of running inside a sandbox that prevents it from executing as fast as the "GNU/Linux" solution?

My complaint is that the author is roasting the .Net platform as compared to "GNU/Linux". That is like comparing the performance of Java to OS/2. One is a programming platform, the other an operating system.

You are incorrect: they are trading a $65 million piece of software for a $30 million COMPANY. They bought the MillenniumIT company that had ALREADY IMPLEMENTED a trading platform. They bought the company for the platform, and now they control the development of the platform going forward, in house. They are not trading one IT consultancy for another, as they now OWN the software and the company that built it.

However, they state the platform they bought ALREADY achieves 6 times the performance of the piece of software built by Accenture (.4 ms vs 2.7 ms transaction times).

While I agree that this is more of an indictment of Accenture's apparently shoddy work than of .NET itself, the fact that they've had 6 years (the article states the TradElect software/project was started in 2003) and $65 million thrown at the problem and haven't been able to make the software perform better does raise some eyebrows about the underlying technology as well.

If you're over 300 kilometers away from the server, a one-way transaction will take more than 1 millisecond at the speed of light anyway. If millisecond gaps were that important, you'd hear about global disparities directly related to distances from the stock exchange servers.

Democratic Senators Charles Schumer and Ted Kaufman urged the commission to halt the practice, arguing frequent traders use technology to profit from access to information not available to retail investors.

Flash traders have direct connections to the NYSE exchange and pay large sums just for bandwidth, to make sure the trades are almost real-time. Goldman Sachs is a key participant in this.

That said, their trades often have no human interaction and generally are computers following trading algorithms only a block away from the exchange with a direct fiber line to the office. It would be impossible otherwise.

Some traders have been raising a stink over this, but generally the milliseconds do count.

The maximum allowable time for a flash is 500 milliseconds, or half a second, although most of the markets flash routable orders for under 30 milliseconds.

Of course, I don't know how the LSE handles flash trading, or even whether it wants it, but I'm going to assume they need everything to be as real-time as possible. You just don't hear the financial firms complaining about the disparities, simply because they have the money to set up their servers pretty much next to the exchange itself (if not in the same building).

It was cheaper for them to buy the WHOLE COMPANY that had built this technology than it was to continue running/maintaining a .NET application. The .NET application was built and maintained by Accenture, who can just as easily hire cheap devs in India or Sri Lanka as any other outsourced IT consultancy.

Also, they specifically state multiple times that the .NET solution would not scale to meet their needs. The quoted stats are 2.7 ms/transaction in .NET, while the Linux app performs the same transaction in .4 ms... so the Linux system can handle 6-7 times the transactions on the same hardware...

They are talking about scaling up from 100 million transactions a day to 5-6 billion, so yeah, having to buy one-sixth the hardware will probably save them some cash.

It could be that the cheaper labor, who can create a good-enough product, does not have the resources to acquire a fully licensed MS platform. Instead, they may have grown up with older computers running Linux and open-source tools. One can imagine a motivated student, who has been told that unlicensed software is stealing, and stealing is wrong, choosing to learn the cheaper tools. One can also imagine a company, wary of the costs of an MS development solution, and able to hire local developers who are not...

Uhh... I've never seen this level of RTFA, and... man, this is Slashdot, where that is the norm.

The LSE ALREADY ENTERED A PURCHASE AGREEMENT TO BUY THE COMPANY that ALREADY BUILT A TRADING PLATFORM THAT IS BEING USED TODAY IN OTHER EXCHANGES! The deal closes in the next week or two. The article says 95% of the "non-refundable" parts of the deal have already been transacted. Neither the LSE nor MillenniumIT (the Sri Lankan company that is being purchased) is walking away from this deal.

You don't spend $30 million and purchase a company if you aren't moving your software to that platform. The article states they already had a trial phase: they originally brought in 20 platforms, shortlisted 4, ran those for a period, and MillenniumIT won. They then decided to purchase the entire company. This process is much further along the road than you seem to think.

You are not accurate. The LSE bought a dev shop that ALREADY BUILT A TRADING PLATFORM that is being used today in other exchanges. The platform in question ALREADY achieves 6 times the performance of their existing platform (built by Accenture), and has MORE FEATURES.

And they are moving from an outsourced dev model to an in-house model, as they now own the devs and the software. Sure, the devs are still in Sri Lanka, but Accenture could just as easily hire people in India or Sri Lanka to get the same cost savings.

Yeah, we're a Microsoft "Partner" also where I work. It means free software and great support. What it does not mean is that they write our software, so I'm a bit skeptical of them actually putting their "in house" stamp on it. It sounds like marketing spin when the going was good.

I bet they will use Mono to ease the transition. If they've already got a huge codebase written for .NET, wouldn't it be insane to throw it away?

They don't like the performance, or the feature set of the current codebase. They are buying an entirely new system to address those issues. It would be far more insane to keep any of it, or have to maintain it - they want it out wholesale.

Well, they also increased performance. If throwing money and systems at a problem can't get you performance, maybe the other guys really are better?
Oh, it is interesting that an ex-consultant would spell "Disclosure" as "Discloser". On an Internet forum, where your spelling and language become a major indicator of credibility, you might want to use a browser like Firefox, which will correct errors for you.