I'm an "engineer" at my job, which means that I do the same things that any general-purpose computer guy does, but I do them for lots of different companies, all in the same day. I get to work within lots of different companies' work environments.

We got a gig to spend a lot of time helping a giant telephone company. This company was recently called one of Corporate America's most evil in Slashdot postings.

Successful VoIP carriers have some things in common: a small, smart staff (two or three people, at least at first) with a strong need for their system to work. They use the SIP phones on their desks as soon as the system barely works. They're involved in the business decisions. Technical control tends to be centralized in the people who actually use the enable password. Meetings are hard work. Regular upgrades are a part of life, so they learn to deal with them. VoIP is treated like real lifeline telephone service. They accept responsibility for making the system work.

Big slow companies lack these features. (At least this one did.) Yes, the staff was smart. But as an organization they were practically neurotic -- every risk had to be controlled, every server had to be redundant, every action planned with multiple people, everything scheduled with a pseudo-deadline. Every meeting took a while to schedule, so everything moved slowly.

I just had a blinding flash of the obvious as I was slogging through the gorp of VoIP in my job: the sorts of things I do generally aren't respected by the Computer Science academic community because they're not simple enough; they're not elegant enough.

It seems that only simple things generally receive high praise from academics. Or perhaps it is that academia is about simple things that follow simple, elegant, knowable laws.

We live and work in a world of immense complexity. Much of that complexity is unnecessary, but somebody still has to tame it and make things work properly. But my advisor had no respect for this type of work . . . and I have a hard time convincing myself that it is respectable.

How easily can you communicate? Do they listen to your issues and try to understand them fully, or do they assume they know what you mean before you even state all the facts? Can they explain themselves fully?

Secrets. Are they careful with other customers' trade secrets? If not, then they won't be careful with yours. It's a good sign if your introductory interview includes the consultant saying of another client, "sorry, we're under NDA so we can't say any more".

How much work will they do for your business? Don't expect the consultant to do a tremendous amount of work just to win your business. Examine their history, and consider a slow-start approach.

Appropriate experience. Don't assume the consultant needs to know everything you know and then some. Choose a consultant whose knowledge complements yours.

Organization. Are they organized? Can they structure ideas into an organized document? Do they lose emails or voice mails?

This allows the underlying data structure that encodes the vendor count to change independently of the code that needs it. E.g., we might want to compute the vendor count instead of just storing it:

public int getVendorCount() { return this.vendors.size(); }  // assuming vendors is a java.util.List

There's no special support in Java for this particular programming idiom. But VB.NET and C# do have properties for doing just this. E.g., in C#:

private int _vendorCount;
public int vendorCount {
    get { return this._vendorCount; }
    // "value" is an implicit parameter of the same datatype as the property
    set { this._vendorCount = value; }
}

You can then use vendorCount in C# essentially as if it were a field in the class; e.g.,

vendorCount += 2;

This seems nicer than:

setVendorCount(getVendorCount() + 2);

Properties aren't amazingly wonderful, they're just nice. But properties are being badly abused by the Microsoft class writers and those who follow MS coding standards.

The problematic use occurs when the class accepts multiple 'input' types through a single property. For example, an ADO.NET SqlCommand has a collection of SqlParameter objects; these are used to specify parameters for a stored procedure. Only certain types can meaningfully be assigned to a SqlParameter's Value property -- e.g., string, int, DBNull -- but Value is declared simply as Object.

The problem is that I can't know -- without just trying and failing -- exactly which types can be assigned to the Value property to get a useful result. Only certain types will work there -- but just saying that you can assign any Object doesn't help. If I pass in a ServiceProvider object (a class I wrote), will that work? Certainly not.

Likewise, I don't know what I'm going to get back when I read the Value property. All I know is that it'll be an object. It might be a DateTime, or a string, or anything; I have to try casting from Object to other types by hand just to figure out how the class library works.
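
Here's a minimal sketch of what that looks like in practice (assumes using System, System.Data, and System.Data.SqlClient; the stored-procedure and parameter names are made up, and ServiceProvider is the class of mine mentioned above):

void Sketch(SqlConnection conn) {
    SqlCommand cmd = new SqlCommand("usp_AddVendor", conn);   // hypothetical stored procedure
    cmd.CommandType = CommandType.StoredProcedure;

    SqlParameter p = new SqlParameter("@startDate", SqlDbType.DateTime);
    p.Value = DateTime.Now;           // fine
    p.Value = DBNull.Value;           // fine
    p.Value = new ServiceProvider();  // compiles, because Value is typed as Object -- but it's useless
    cmd.Parameters.Add(p);

    object raw = p.Value;             // statically, all I know is that it's an object
    DateTime when = (DateTime) raw;   // I have to guess which cast is sensible
}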

The Java way of doing this would be to use method overloading to define multiple setters and getters -- one for each of the valid types that can be set and gotten. For example, here's a sketch of what an overloaded parameter class might look like (the names are mine, not from any real library):
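
// hypothetical parameter class with type-specific accessors
public class Parameter {
    private Object value;

    public void setValue(String v)         { this.value = v; }
    public void setValue(int v)            { this.value = new Integer(v); }
    public void setValue(java.util.Date v) { this.value = v; }

    // getters can't be overloaded on return type alone, so they get distinct names
    public String getStringValue()         { return (String) this.value; }
    public int getIntValue()               { return ((Integer) this.value).intValue(); }
    public java.util.Date getDateValue()   { return (java.util.Date) this.value; }
}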

Properties are a nice language feature, but I hate that they're being abused. This reduces the overall quality of the .NET programming platform, because it reduces the effectiveness of .NET programmers.

Several of my respected friends/colleagues view Perl, PHP, and their ilk as great -- for "fast" jobs. We've had experiences with these scripting languages that steer us away from them for "serious" work. We all like scripting, but we don't want to be stuck hacking on the same scripts for several months.

What does "fast" or "serious" mean? Some of the problems we've had with scripting include --

Difficulty finding bugs. These languages are flexible -- very flexible. Logic bugs and even syntax errors can be difficult to find, because the language checks so few assumptions for you -- the programmer has to check them everywhere.

Difficulty reading the code. Perl is like "interpreted line noise", according to jcv, one of my friends. Paul Graham also notes this phenomenon in his essay Hackers and Painters:

Many a hacker has written a program only to find on returning to it six months later that he has no idea how it works. I know several people who've sworn off Perl after such experiences.

Inconsistent language/library design. PHP, for example, seemed to be a mish-mash of useful routines, each with its own special calling conventions and partial documentation.

Maintenance becomes a problem with scripting languages because I can't fit the entire system into my head. I don't remember all of the rules and assumptions that are intrinsic to my application. (I'm speaking mostly of the gnarly vertical-market business applications here.) I'm crafting new software, yes; and I need to be able to change it rapidly -- therefore, I need system support to keep me in the sandbox of my application.

The reason why dynamic languages like Perl, Python, and PHP are so important is key to understanding the paradigm shift. Unlike applications from the previous paradigm, web applications are not released in one to three year cycles. They are updated every day, sometimes every hour. Rather than being finished paintings, they are sketches, continually being redrawn in response to new data.

I'll buy this comment about rapidly-changing applications -- but I don't understand why scripting languages are perceived as being more suitable for this.

Indeed, to make changes rapidly, you need a system that helps you notice when you've violated assumptions -- this is why agile programming methods are often associated with testing. And this is why we need strong language support for limiting the behavior of a system -- especially for rapidly-changing applications.

I haven't found this kind of support in scripting languages, by and large. But what attracts us to scripting languages in the first place? Maybe it's the fact that we don't have to define a class and stick a method in it just to do a simple job. This suggests that what attracts us to scripting is precisely the lack of system support for constraint validation -- the very thing rapidly-changing applications need.

etrepum noted that "programming languages should do what you want" when I questioned the wisdom of the C# designers for not forcing you to explicitly handle exceptions. In Java terms, all C# exceptions are effectively "unchecked".

In practical terms, this just means the runtime environment (the "CLR") provides an exception handler at the bottom of the stack, as does Java. It also means that the programmer may be unaware of the exceptions that may be thrown by the routine that he's writing. The question for debate is whether that's a bad thing.

It might be a bad thing in cases where the programmer could recover from an exception; e.g., if a DBMS transaction fails due to deadlock, then maybe we should just try it again. Maybe this is what Drayton, Albahari, and Neward mean in C# in a Nutshell when they say that "Application exceptions should be treated as nonfatal."
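
Here's a hedged sketch of that deadlock-retry idea (DoTransaction is a made-up helper; the only library-specific bit is SqlException.Number, where 1205 is SQL Server's "deadlock victim" error):

// assumes: using System; using System.Data.SqlClient;
void RunWithRetry(SqlConnection conn) {
    for (int attempt = 0; attempt < 3; attempt++) {
        try {
            DoTransaction(conn);   // hypothetical helper that runs the whole transaction
            return;                // success
        } catch (SqlException e) {
            if (e.Number != 1205)  // 1205 = SQL Server's "deadlock victim" error
                throw;             // anything else we treat as fatal
            // deadlocked: fall through and try again
        }
    }
    throw new ApplicationException("gave up after repeated deadlocks");
}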

OTOH, as a programming culture, we really haven't made peace with exceptions as they appear in Java and in the .net languages. Are exceptions supposed to be used as a fancy means to control flow during normal execution? Or are they intended to make it easy to detect errors? Recall that in system programming, virtually every important function/system call returns a value that must be checked. Exception handling seems to have been intended to factor all of those checks out to the "catch" clauses, so that the kernel of logic within a routine can be evident.

My current situation has afforded me the chance to use Microsoft's .net-brand collection of programming tools. I'm glad to make this report: they've started to learn a lot from us.

For example, the language "C#" is clearly largely modeled after Java. Good for them! I'm glad to see that they recognize good ideas when they see them. There are some notable differences, though.

C# doesn't force you to keep track of all the types of exceptions that can be thrown and handle each one. This seems bad.

In fact, programmer-defined exceptions are supposed to subclass "ApplicationException"; according to a normally reputable source, such exceptions are supposed to be non-fatal. The reasoning remains a mystery.

"delegates" are a new way of identifying a static method; it seems to be a grown-up replacement for function pointers, in that you can be sure that the delegatee adheres to a specific call signature. A single delegate reference can refer to multiple functions; when you call the delegate reference all of the attached functions are called, and in a deterministic order.

C# allows crazy stuff like pointer arithmetic, though only inside code explicitly marked unsafe.

There are other differences. These come to mind.

My primary desktop platform at work is Windows. I shuddered at first, but with cygwin and a dose of humility, I'm making peace with it. It's too bad, though -- shuddering was great exercise.

I like programming in Java. But I can't find any tools for rapidly developing simple database applications in Java -- even if I'm willing to spend money doing it.

Maybe there are some out there -- has anybody seen any? I really just need something that'll help me with form validation, and creating/modifying/deleting rows from a database. I need more than just a one-shot code generator; I'll need to modify the web forms and reports quite often. I.e., code generators are okay -- but I must be able to specify all of my page markup and such in the input language to the code generator.

Foray into Microsoft

Given the need for a rapid web-application development environment, my boss and I have started looking at Microsoft ways of doing it. To be honest, the asp.net stuff seems to come pretty close to doing what we need.

I have three initial impressions about the world of MS-oriented software development:

There are lots of fuzzy, feel-good technical terms for establishing a relationship between two entities; e.g., linking and wiring. The late Edsger Dijkstra, in his EWD 1044: To hell with "meaningful identifiers"!, discusses the danger of using such fuzzy terms to mean specific things. I buy his argument -- that it can be misleading to use terms that appear to have an intuitive meaning. However, I don't completely buy his conclusion; carefully chosen identifiers can be helpful without implying too much.

There seem to be lots of related ways to do the same thing -- or rather, the explanations of precisely what each one does seem vague. E.g., I'm accustomed to seeing a single locale specification in the Unix/Java world; but in asp.net there's Culture, UICulture, and LCID (Locale ID), all of which seem to have something to do with changing the language and locale of the page. I'm sure each one has its own purpose, and there's probably a good explanation for splitting them up; I just wish I knew that explanation.
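
For what it's worth, the first two do correspond to two distinct thread-level settings in the CLR, which hints at the intended split; here's a sketch (how LCID fits in, I still don't know):

class CultureSketch {
    static void Main() {
        // CurrentCulture governs formatting: dates, numbers, currency, sorting
        System.Threading.Thread.CurrentThread.CurrentCulture =
            new System.Globalization.CultureInfo("fr-FR");
        // CurrentUICulture governs which localized resources (translated strings) get loaded
        System.Threading.Thread.CurrentThread.CurrentUICulture =
            new System.Globalization.CultureInfo("fr-FR");
    }
}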

MS devotees seem to be heavy on marketing. It's as if Microsoft programmers feel the need to validate and be validated in their choice of MS development environments. Everything is splendiforous to them.

Back in April, I posted a comment about the Earliest-Deadline-First (EDF) scheduling discipline that was not fully accurate. I had commented that the original, overly-simplified explanations of EDF claimed that task sets would be `feasible', even though the explanations did not include any accounting for context switching.

It can be shown [...] that an edf schedule on n jobs will have <= 2n-1 context switches, rather than oodles. In analyzing a system, this context-switch overhead is accounted for by "inflating" (in the analysis) the execution requirement parameters of each job by the amount of time taken to perform 2 context switches.

Of course, he's right. He's published several important papers in the Real-Time area and knows it all far better than I do.

In retrospect, I was really commenting on the methodology of the proof more than on EDF per se; the original proofs of EDF's optimality (from Liu and Layland) do not account for context-switching cost. As such, I found the proofs unconvincing -- at least for practicable definitions of feasible.
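
To make that accounting concrete for the periodic-task case (my own paraphrase, not his): with worst-case execution times C_i, periods T_i, and a cost s per context switch, the usual EDF feasibility test sum_i C_i/T_i <= 1 becomes sum_i (C_i + 2*s)/T_i <= 1 -- which is all that "inflating the execution requirement parameters" amounts to.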

I've recently started an MPEG-2 video decoder project; our goal is to make analysis and processing of the video stream easy. Thus, we're taking a modern approach to the interpreter, with various parser/decoder classes that publish decoded objects as they appear in the bitstream.

For example, an external object could subscribe to the Macroblock events which occur in the bitstream, or it could subscribe to the Picture event to get whole, decoded pictures from the stream.
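
Roughly, the shape we have in mind looks something like this (a sketch only -- the class and method names here are illustrative, not the project's actual API):

// one listener interface per event of interest; the parser publishes to whoever subscribed
public class DecoderSketch {
    public static class Picture { /* decoded planes, timestamps, ... */ }
    public static class Macroblock { /* coefficients, motion vectors, ... */ }

    public interface PictureListener { void pictureDecoded(Picture p); }
    public interface MacroblockListener { void macroblockDecoded(Macroblock mb); }

    private final java.util.List pictureListeners = new java.util.ArrayList();

    public void addPictureListener(PictureListener l) { pictureListeners.add(l); }

    // the bitstream parser calls this each time it finishes decoding a picture
    void firePictureDecoded(Picture p) {
        for (java.util.Iterator it = pictureListeners.iterator(); it.hasNext(); )
            ((PictureListener) it.next()).pictureDecoded(p);
    }
}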

We're intentionally not spending a lot of effort on efficiency; in fact, we chose to write a new decoder specifically for analysis purposes because the existing software decoders tend to be obscure and optimized for speed -- e.g., frame-rate processing on 1996-era computers.

Contact me if you're interested in participating. The project (me!) is funded currently through UNC-CH Computer Science.
I'm going to have to find something better than CVS to manage my code, though.