Archive for 03/2004

Over at Falafel land, Steve Teixeira and Charlie Calvert have been bickering (actually, having a stimulating and interesting discussion) about open standards. While I pretty much agree with both of them, I think Steve summed it up best for me when he wrote:

"While Charlie articulated what I'll call the developer's point of view, I would like to present a different point of view on the matter, a point of view that is perhaps a bit more in tune with the economics of software development."

Open standards are indeed a good thing, for a variety of reasons: they allow better communication between systems and components, provide the basis for system interoperability, and - in some cases - lower both development and maintenance costs. They are not, however, a silver bullet. Here's a little story:

As Chief Technologist at SintecMedia, I was assigned the task of designing the architecture for their flagship product, OnAir. The system was targeted at the high-end broadcasting market, and I had to design a scalable, robust architecture as well as select tools, development languages and supporting technologies for the project. When considering possible technologies, I had to evaluate not only their technical merit, but also their effect on the company's business process. This included things like maintenance costs, technology lock-ins, and even reputations (when trying to sell a multi-million dollar system, which of these statements sounds better: "we're using Oracle because it is designed for the enterprise" or "we're using [obscure database server here] because we've found it to be technically superior"?).

I did, however, follow some guidelines in making my decisions, which I'm happy to say paid off really well:

Use industry-accepted standards. There's an important distinction here - it doesn't matter if the standard is open or proprietary, as long as it is widely accepted in the industry. For example, Java is a proprietary standard, although many people tend to forget that fact. It is, on the other hand, an industry-accepted standard. (By the way, we actually wrote the system in C++ and Delphi, because they better suited the job.)

Get source code. This is critical. No matter whether you're using an open or proprietary tool, make sure you get the source code to any library you include in your project. When there's a bug in your software and thousands of users can't do their job, you need to be able to debug and fix every piece of code in your project. Even if you can't fix external libraries, reading the source code is tremendously helpful in finding workarounds.

Get tools that just work. If you find a tool that does what you need, and you're sure it works and is the best tool for the job, hardly anything else matters. For example, we needed a central database server that would carry the huge workload of the application and serve thousands of users, and went with Oracle. Now, Oracle doesn't give you the source code, and uses a proprietary variant of SQL (until version 9, you couldn't even use the standard syntax for joins). But it just works. For the benefits provided by the Oracle database server, we were willing to ignore the previous guidelines.

Use technologies that customers can accept. It doesn't matter if you think your system will look better on the Mac, or run better on Linux: if you're developing software for large corporations, chances are your user interface (at least) should run on Windows. The same goes for the servers - at the time, large clients were much more likely to accept Oracle than, say, MySQL (although this has changed slightly in recent years).

Reduce development costs and time-to-market. This may or may not apply to you. We had a limited window of opportunity to release a new product. As for development costs, this requirement pretty much applies to everybody. In our case, it meant building a Windows client rather than a web client, and using RAD tools to build it.

Test everything. Past experience, good advice and product demonstrations are all great, but your final decision should always be based on actual test results. Test every tool, library or technology you're considering. Build tests that match your project. Try to break things. You don't want to get halfway through development (or worse, customer installations) and find out you made the wrong choice.

You may still make mistakes following these guidelines, but they won't be as bad as choosing technologies based on arbitrary criteria such as open standards, open source, or tools from big companies.

A new virus has been spreading around the last couple of days. The "Witty" virus, as it is affectionately called, is a real virus - not one of those script thingies. It targets computers running the BlackICE firewall, and basically pulverizes them by writing random data to their hard disks.

Personally, I don't get people who sit down, write a malicious piece of code designed to ruin computers and those who depend on computers (which includes pretty much everybody today), and unleash it on the world. I know I'm not alone on this. To me, this is the digital equivalent of terrorist activity. It has absolutely no positive value.

Fortunately, most virus writers are either lazy, or stupid, or both. They write their nasty little programs, then e-mail them to every living thing in the known universe, hoping that some of the recipients would install the virus on their system and commit virtual suicide. Unfortunately, other computer users are not much smarter, so it actually works.

Anyway, last week I was working at a customer's site, and the system administrator was trying to figure out why users couldn't get their e-mail. Although the problem turned out to be something completely different (the messages were stuck in the incoming queue of a spam blocker - restarting the right server solved it), he found some traffic that indicated that one of the machines on the network was infected with a variant of the NetSky virus. He was trying to figure out which machine was the infected one by checking the registry - the virus registers itself under the HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run key, so Windows automatically runs it on startup. It occurred to me that a lot of viruses do this, and checking the relevant registry keys could be a very simple way of detecting them. A program could get the list of all the computers on the network, then check each one to see if a specified value exists in the registry.

So I sat down and wrote this program, and I'm giving it away for free. The program starts by enumerating the computers on the network, using the same API functions Windows uses to display the "Network Neighborhood" ("My Network Places" in recent versions of Windows). When you run a search, it connects to each of these computers and checks for the requested data. It took me a couple of seconds to check 75 machines. It could take longer if some of the machines are disconnected but still listed in Windows' network browser, but that's still a lot faster than manually checking each machine.
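The core of the idea - scan a list of machines and flag any whose autorun registry key contains a known virus entry - can be sketched as follows. This is only an illustrative sketch, not the actual program: the real version used Windows networking and remote-registry APIs to enumerate computers and read their registries, while here the registry contents are represented as plain dictionaries so the matching logic stands on its own. The machine names and the "service" value (a name reportedly used by some NetSky variants to register itself) are examples.

```python
# Hypothetical sketch of the detection logic. In the real program, the
# per-machine data would come from reading each computer's
# HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run key
# over the network; here it is supplied as a dictionary.

AUTORUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def find_infected(machines, value_name):
    """Return the names of machines whose Run key contains value_name.

    `machines` maps a computer name to the name/data pairs read from its
    autorun key; unreachable machines (listed in the network browser but
    disconnected) map to None and are skipped.
    """
    infected = []
    for name, run_values in machines.items():
        if run_values is None:  # machine listed but not reachable
            continue
        if value_name in run_values:
            infected.append(name)
    return infected

# Example network snapshot (names and values are made up):
network = {
    "PC-01": {"SoundMan": "soundman.exe"},
    "PC-02": {"service": r"C:\WINDOWS\services.exe -serv"},
    "PC-03": None,  # offline
}
print(find_infected(network, "service"))  # ['PC-02']
```

The same loop works for any value name, so one scan can check for several known viruses by running it once per signature.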

The Delphi Information Website has a new home
Thursday, March 11, 2004 04:43 PM

Borland's Steve Trefethen has updated his Delphi Information Website. The site has a new home and new content. Still, one of the most valuable items on this site is the unofficial ActionBands update (for Delphi 7 and Delphi 6).