[q]One might also ask, does it really matter? Sure enough, it does. As it turns out, the Borland Turbo Pascal 6.0 run-time, and probably a few related versions, handle keyboard input in a rather unorthodox way. The run-time installs its own INT 9/IRQ 1 handler (keyboard interrupt) which reads port 60h (keyboard data) and then chains to the original INT 9 handler, which reads port 60h again, expecting to read the same value.

That is a completely crazy approach, unless there is a solid guarantee that the keyboard can’t send a new byte of data before port 60h is read the second time. The two reads are done more or less back to back, with interrupts disabled, so much time cannot elapse between the two. But there will be some period of time where the keyboard might send further data. So, how quickly can a keyboard do that?

And Turbo Pascal 6.0 was released in 1990! Intel was already selling the i486.[/q]

Those were still using PS/2, and the PS/2 timing didn’t change.
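The quoted behavior can be modeled with a toy sketch (this is an illustration only, not real port I/O; the class and scancodes are made up for the example). A latch stands in for the 8042 output buffer behind port 60h; if the "controller" delivers a new byte between the two reads, the chained handler sees a different value than Borland's handler did.

```python
# Toy model of the 8042 output buffer (port 60h): the latch simply
# holds the most recent scancode the keyboard delivered.
class KeyboardLatch:
    def __init__(self):
        self.value = 0x00

    def send(self, scancode):
        """Keyboard delivers a byte to the controller."""
        self.value = scancode

    def read_port_60h(self):
        """What an IN AL, 60h would return in this model."""
        return self.value

def double_read(latch, byte_arrives_between=None):
    """Borland-style sequence: read port 60h, chain to the old INT 9,
    which reads port 60h a second time and expects the same value."""
    first = latch.read_port_60h()            # Borland's own read
    if byte_arrives_between is not None:
        latch.send(byte_arrives_between)     # fast controller slips a byte in
    second = latch.read_port_60h()           # chained BIOS read
    return first, second

latch = KeyboardLatch()
latch.send(0x1E)                                     # 'A' make code
assert double_read(latch) == (0x1E, 0x1E)            # classic PS/2: safe
assert double_read(latch, 0x9E) == (0x1E, 0x9E)      # fast controller: differs
```

On real PS/2 hardware the first case is the only one that can occur, because the keyboard cannot clock in another frame that quickly; the second case is what a fast emulated or translated controller can produce.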

[q]Such a hack could be excusable for 8088 IBM PC software, but, by that time, expecting any kind of timing guarantees on computers able to multitask and run VMs was ludicrous.[/q]

No 1990s PC was able to run VMs, they were far too slow and didn’t have the necessary virtualisation possibilities (other than running DOS VMs). And timing guarantees were fine when pertaining to hardware.

Some early software/games would rely on unchanging hardware performance and consequently are unusable on modern systems. IMHO this was amateurish back then (I was guilty of hard-coding timing assumptions when I was learning to program). But there was less hardware variety back then, so it could sometimes pass even in commercial software. This is obviously bad practice today.

Worst case scenario would be a 4.77 MHz 8088 with a PS/2 interface attached.

Worst-case timing would be the time from the keyboard buffer read in the interrupt routine to the read of the keyboard buffer in the chained routine. Will that interval ever exceed the minimum time between keyboard interrupts? Nope.

This is basic real-time stuff.
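The back-of-the-envelope numbers behind that argument can be checked directly. The PS/2 clock rate and the cycle budget below are my own rough assumptions for the era, not figures from the thread, but they show how wide the margin is:

```python
# Rough numbers; the clock rate and cycle budget are approximations.
PS2_CLOCK_HZ = 16_700        # PS/2 keyboard clock tops out around 10-16.7 kHz
BITS_PER_FRAME = 11          # start + 8 data + parity + stop bits
min_gap_between_bytes = BITS_PER_FRAME / PS2_CLOCK_HZ      # seconds

CPU_HZ = 4_770_000           # 4.77 MHz 8088, the slowest plausible host
CYCLES_BETWEEN_READS = 500   # deliberately generous budget for the code
                             # between the two port 60h reads (IF clear)
window = CYCLES_BETWEEN_READS / CPU_HZ

print(f"min time between scancodes:  {min_gap_between_bytes * 1e6:.0f} us")
print(f"window between the two reads: {window * 1e6:.0f} us")
assert window < min_gap_between_bytes   # the race cannot fire on real PS/2
```

Even at the fastest PS/2 clock and on the slowest CPU, the keyboard needs roughly 660 µs to deliver the next byte, while the two reads complete in around 100 µs; that is the guarantee the double-read leans on.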

[q]Some early software/games would rely on unchanging hardware performance and consequently are unusable on modern systems. IMHO this was amateurish back then (I was guilty of hard-coding timing assumptions when I was learning to program). But there was less hardware variety back then, so it could sometimes pass even in commercial software. This is obviously bad practice today.[/q]

This isn’t a hard-coded timing assumption, so that’s not relevant. If things get faster the routine still works, and the timing can’t get worse in hardware!

If one claims to emulate hardware and doesn’t actually do it, well, the problem isn’t in the original code.

Edit: quotes in bold as the ****** comment system doesn’t accept quote tags.

[q]Worst case scenario would be a 4.77 MHz 8088 with a PS/2 interface attached.

Worst-case timing would be the time from the keyboard buffer read in the interrupt routine to the read of the keyboard buffer in the chained routine. Will that interval ever exceed the minimum time between keyboard interrupts? Nope.

This is basic real-time stuff.[/q]

The breakage is real, at least on some newer hardware. This was at a job I had years ago, but as I recall everything in DOS would work perfectly, including things like DOS edit.com. However, when you started interactive Borland apps the keyboard would act up. We never solved it, but this theoretically explains the symptoms: “As it turns out, the Borland Turbo Pascal 6.0 run-time, and probably a few related versions, handle keyboard input in a rather unorthodox way. The run-time installs its own INT 9/IRQ 1 handler (keyboard interrupt) which reads port 60h (keyboard data) and then chains to the original INT 9 handler… which reads port 60h again, expecting to read the same value.”

This is an inherently fragile hack that may or may not keep working as hardware gets cloned and evolves. Maybe they had not anticipated how it could break, but at least in hindsight, software should not assume that the hardware is too slow to update the value between reads. That assumption may have happened to be true on the original hardware, but it clearly isn’t great practice. This is all IMHO, of course.

[q]This isn’t a hard-coded timing assumption, so that’s not relevant. If things get faster the routine still works, and the timing can’t get worse in hardware!

If one claims to emulate hardware and doesn’t actually do it, well, the problem isn’t in the original code.[/q]

Re-read the part I quoted. I’m taking it at face value, but the hardware might not wait for the software to read the same value twice.

I was sitting in front of an F5 BIG-IP back in the 1999/2000 timeframe. At the time, it was BSDI as the base operating system (they’ve since moved to Linux). It was being slammed by web requests and was frozen, because someone on live TV said “go to our website” and kabooom.

It exhibited a behavior I’ve not seen before or since. I would type something on the keyboard (old PS/2 interface), and on the VGA screen it would take several seconds for it to appear. I’ve always seen overwhelmed systems at least echo my typing back when on the VGA console. But this one didn’t. I wonder if it’s related.

[q]I was sitting in front of an F5 BIG-IP back in the 1999/2000 timeframe. At the time, it was BSDI as the base operating system (they’ve since moved to Linux). It was being slammed by web requests and was frozen, because someone on live TV said “go to our website” and kabooom.

It exhibited a behavior I’ve not seen before or since. I would type something on the keyboard (old PS/2 interface), and on the VGA screen it would take several seconds for it to appear. I’ve always seen overwhelmed systems at least echo my typing back when on the VGA console. But this one didn’t. I wonder if it’s related.[/q]

I don’t know anything about that specific computer system, but I would guess it has to do with the screen interaction code not being interrupt driven.

When the screen is updated from within the keyboard interrupt handler, it ought to update immediately regardless of system activity. Technically, code executing “cli” would temporarily inhibit all system interrupt handlers, but interrupts don’t get disabled for a prolonged period in a normal application/OS setting, even on a busy system.

However, in applications that don’t use interrupt handlers and process screen interactions outside of interrupts, it could mean that the keystrokes wait in a buffer doing nothing until the application polls for them.

On a related note, I believe many operating systems handle the mouse pointer in an interrupt handler to minimize pointer latency even under high system load.
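The difference between the two echo strategies can be reduced to a toy latency model (the function and numbers are illustrative, not how BSDI's console actually worked): interrupt-driven echo appears immediately regardless of load, while polled echo waits until the busy application next services its input buffer.

```python
def echo_latency_ms(arrival_ms, busy_until_ms, interrupt_driven):
    """Milliseconds until a keystroke appears on screen.

    arrival_ms    -- when the key is pressed
    busy_until_ms -- when the overloaded main loop next polls for input
    """
    if interrupt_driven:
        return 0                              # handler echoes on arrival
    # Polled: the keystroke sits in the buffer until the next poll.
    return max(0, busy_until_ms - arrival_ms)

# A key pressed 100 ms into a 5-second stall:
assert echo_latency_ms(100, 5000, interrupt_driven=True) == 0
assert echo_latency_ms(100, 5000, interrupt_driven=False) == 4900
```

Under this model, a several-second echo delay on the console is exactly what you would expect from a polled design on a machine that is too busy to reach its input-polling code.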

Borland’s solution isn’t a hack – it’s a proper design that works. There are no timing differences that matter: faster hardware will work, and the slowest hardware possible will work. It works.

So you say some buggy software will fail to handle this case correctly? Sucks, but the fault is in that software that doesn’t handle things correctly. And handling it correctly isn’t exactly hard – real emulators do much worse things than emulating ~1msec signals.

[q]Borland’s solution isn’t a hack – it’s a proper design that works. There are no timing differences that matter: faster hardware will work, and the slowest hardware possible will work. It works.

So you say some buggy software will fail to handle this case correctly? Sucks, but the fault is in that software that doesn’t handle things correctly. And handling it correctly isn’t exactly hard – real emulators do much worse things than emulating ~1msec signals.[/q]

To me, these two paragraphs contradict each other, since Borland’s own software breaks on some modern hardware/controllers.

Expecting the same value twice from port I/O creates a timing race condition that would not exist if you only read it once. Perhaps they assumed that the race would be fairly safe on the hardware they had then, but it has the potential to introduce fragility with modern controllers (USB/Bluetooth/etc.) that might send key sequences immediately as they are read via port I/O, without adding the PS/2 inter-character delays. “Normal” keyboard handlers are ready to handle the next character as soon as they have read the last. Borland’s handler, on the other hand, can’t, because of its unique requirement to read the same input character twice.

If you want, I can budge and meet you somewhere in the middle: Borland’s approach worked back when hardware was more homogeneous and everyone’s computer used identical controllers, but they made assumptions that could break with new hardware.