Thursday, 19 January 2017

Looking again at the VHDL for my interrupt router, I realised that adding a priority encoder was fairly simple:

Instead of presenting the masked and active interrupt lines directly, ROUTED is now the number (bit position plus one) of the highest-priority active line. 0 is reserved as a special case: no interrupt line that is enabled by the mask is active. This should never be seen inside the ISR, because when there are no active interrupts the MPU's interrupt lines should stay high, but it is catered for "just in case". The only downside of this method is that interrupt priorities are fixed in the VHDL. In other words, the fact that a QUART interrupt overrides (and hides) an RTC interrupt can't be changed by MPU code.
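The logic is easiest to see as a model. This is a hedged C sketch of the priority-encoding behaviour described above, not the actual VHDL; the assumption that bit 0 is the highest-priority line is mine, made for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Model of the ROUTED register's priority encoder. Returns the bit
 * position plus one of the lowest-numbered line that is both active
 * and enabled by the mask, or 0 if no enabled line is active.
 * (Which end of the register has highest priority is an assumption.) */
static uint8_t routed(uint8_t active, uint8_t mask)
{
    uint8_t enabled = active & mask;
    for (uint8_t bit = 0; bit < 8; bit++) {
        if (enabled & (1u << bit))
            return (uint8_t)(bit + 1);
    }
    return 0; /* the "should never happen inside the ISR" case */
}
```

Note that a higher-priority line simply hides any lower-priority ones until it is serviced, which is exactly the fixed-priority behaviour the VHDL bakes in.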

This all means my main interrupt service routine is reduced to something approximating:

As each driver is opened, it will set up its interrupt handler routine pointer in the appropriate table, and set DISCo's interrupt mask bit to allow the incoming interrupt through to the MPU. For example, here is the relevant code in the UART driver:
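The dispatch scheme can be sketched in C; the real code is 6809 assembly, and all the names here (handler_table, disco_mask, install_handler) are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_LINES 8

typedef void (*int_handler)(void);

static int_handler handler_table[NUM_LINES + 1]; /* index 0 unused */
static uint8_t disco_mask;                       /* model of DISCo's mask register */

static int uart_events;
static void uart_handler(void) { uart_events++; } /* stand-in driver handler */

/* Called from a driver's open routine: install the handler pointer and
 * set the mask bit so the interrupt reaches the MPU. */
static void install_handler(uint8_t line, int_handler h)
{
    handler_table[line + 1] = h;  /* ROUTED is bit position plus one */
    disco_mask |= (uint8_t)(1u << line);
}

/* The entire top-level ISR: read ROUTED, jump through the table. */
static void main_isr(uint8_t routed_reg)
{
    if (routed_reg != 0 && handler_table[routed_reg] != NULL)
        handler_table[routed_reg]();
}
```

The point of the design is visible here: the top-level ISR is a single table lookup and indirect call, with no mask manipulation or looping at all.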

This is much more efficient than before, when the main ISR had to manipulate mask bits while looping over a list of possible interrupt lines. The top-level (IC-level) interrupt processing is now nice and fast, and most of the time spent handling an interrupt goes into dealing with the actual interrupt-generating peripheral IC's registers rather than "housekeeping".

I've also been working on the keyboard microcontroller code and have implemented, at last, key repeat and support for caps lock.

Key repeat was easier than I expected. Key repeat on any keyboard generally consists of two separately configured delay values:

Typematic delay: the pause between a key being pressed and the first repeat press being generated.

Typematic rate: the interval between each subsequent repeat press while the key remains held down.

Both values have reasonable default values set in the MCU code. I have also introduced a command code so the 6809 can change these values via the UART channel.

Because I want to keep each command in a single byte, I have had to be a bit frugal. The command byte is arranged as follows:

0b00000000 - red LED off

0b00000001 - red LED on

0b00000010 - green LED off

0b00000011 - green LED on

0b00000100 - blue LED off

0b00000101 - blue LED on

0b00000110 - re-initialize the controller state

0b01xxxxxx - set typematic delay to xxxxxx

0b10xxxxxx - set typematic rate to xxxxxx

The delay values are scaled such that 63 (decimal) is a suitable, slow enough, value. There is currently no acknowledgement of bytes sent to the keyboard controller; I will look at adding that when I have figured out transmission interrupts in the main computer.
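For clarity, this is how the two six-bit command encodings fall out in C; the constant and function names are my own, not from the MAXI09 source:

```c
#include <assert.h>
#include <stdint.h>

/* LED and reset commands, straight from the table above. */
#define CMD_RED_LED_OFF   0x00
#define CMD_RED_LED_ON    0x01
#define CMD_GREEN_LED_OFF 0x02
#define CMD_GREEN_LED_ON  0x03
#define CMD_BLUE_LED_OFF  0x04
#define CMD_BLUE_LED_ON   0x05
#define CMD_REINIT        0x06

/* 0b01xxxxxx: set typematic delay to the six-bit value 0..63 */
static uint8_t cmd_set_delay(uint8_t delay)
{
    return (uint8_t)(0x40 | (delay & 0x3f));
}

/* 0b10xxxxxx: set typematic rate to the six-bit value 0..63 */
static uint8_t cmd_set_rate(uint8_t rate)
{
    return (uint8_t)(0x80 | (rate & 0x3f));
}
```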

From the point of view of 6809 user-level code, sending a command to the controller is accomplished via a syscontrol call on any open console. However, the delay values impact all consoles. Eventually I will write a system utility to set these values when the system starts up, as well as interactively.

I had a couple of ideas for how to implement the operation of the caps lock key and its LED. Ideally I wanted the LED to be turned on and off by the 6809 sending a control byte to the MCU when the caps lock key was pressed. This would allow the caps lock LED to be used, in a crude way, to determine whether the system was responsive: the LED would only toggle if the 6809 was running and able to pick up the keypress, generating a command to turn the LED on or off in reply.

This would have required a byte to be sent in response to one received, in the console driver's sysread function. The problem is that task switching would have to be disabled while the byte was sent, because another task might also want to send a byte to the keyboard MCU, for the same or a different reason. That is a clear millisecond at 9600 baud, and whilst I could increase the baud rate to sidestep the problem, I instead decided on a simpler approach.
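The "clear millisecond" figure is simple arithmetic: an asynchronous frame is 10 bits (start, 8 data, stop), so one byte at 9600 baud takes 10/9600 seconds, a shade over 1 ms:

```c
#include <assert.h>

/* Time to send one byte over an async serial link, assuming the usual
 * 10-bit frame: 1 start bit + 8 data bits + 1 stop bit. */
static double byte_time_ms(double baud)
{
    return 10.0 / baud * 1000.0;
}
```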

The caps lock LED is handled entirely in the keyboard MCU code, which tracks whether caps lock is on or not. Caps lock changes are communicated to the main 6809 MPU by sending a key down scancode byte (high bit clear) when caps lock turns on, and a key up byte (high bit set) when it turns off. Cunningly, this mirrors what an ordinary shift key does. This means that the actual event of the caps lock key being released is never sent, but there seems to be no reason why the 6809 would ever need to know about that occurrence.
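The toggle-and-mirror behaviour can be sketched like this; the scancode value and the high-bit convention's constant name are assumptions for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define KEY_UP_BIT 0x80   /* high bit set marks a key up event */
#define SC_CAPS    0x3a   /* hypothetical caps lock scancode */

static uint8_t caps_on;   /* MCU-side caps lock state */

/* Called on each caps lock key *press*; releases are swallowed.
 * Returns the single byte sent to the 6809: a key down when caps lock
 * turns on, a key up when it turns off, just like a shift key. */
static uint8_t caps_pressed(void)
{
    caps_on = !caps_on;
    return caps_on ? SC_CAPS : (uint8_t)(SC_CAPS | KEY_UP_BIT);
}
```

Because the 6809 side already treats high-bit-set bytes as key releases, no new protocol is needed at all.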

If anyone is interested, the code for my keyboard controller is available on GitHub as normal. I'm pleased with how relatively simple the controller code has remained with these two bits of extra functionality.

One small "annoyance" I noticed after making the initial changes to scancode translation in the console driver: caps lock does not behave exactly like shift on "real" keyboards. Indeed, some early computer keyboards had two "lock" keys, a shift lock and a caps lock. If caps lock is on, numbers should still be produced instead of punctuation. This has complicated the 6809 scancode translation a little. Instead of yet another translation table, a call to a toupper subroutine converts a byte, if it is a lowercase letter, to uppercase. This occurs, only if caps lock is on, after the shift and control key checks have selected an alternative scancode translation table.
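A minimal C model of that final step, applied after table selection (the helper names are mine, not the 6809 routine names):

```c
#include <assert.h>

/* ASCII-only toupper equivalent of the 6809 subroutine: upper-cases
 * lowercase letters and leaves everything else, including digits
 * and punctuation, untouched. */
static char to_upper(char c)
{
    return (c >= 'a' && c <= 'z') ? (char)(c - 'a' + 'A') : c;
}

/* Applied to the already-translated byte, only when caps lock is on,
 * so '1' stays '1' rather than becoming '!' as shift would make it. */
static char caps_adjust(char c, int caps_on)
{
    return caps_on ? to_upper(c) : c;
}
```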

I've also been busy improving the core OS itself. Tasks now have a default IO device. The IO routines, character-based ones like sysread and the string-based wrappers like getstr, have been wrapped in *defio variants which pull the default IO channel from a variable in RAM. For efficiency, this variable is updated from the copy held in the task structure each time a task is scheduled. It is also possible to set this IO device when a task is created.

With this work it is possible to use the same task code in multiple tasks. The "echo test" task, as I now call it, allocates the memory to hold the entered string and uses the default IO channel to get and put the string. Multiple tasks all use the same code, without making a copy, and operate on two virtual consoles as well as a UART port. In other words, this task code is reentrant. The same method will eventually be used when the echo test task is replaced with something resembling a Shell, and should be a significant memory saver.
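The caching arrangement behind the *defio wrappers looks roughly like this in C; the structure layout and names are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

struct channel { int id; };          /* stand-in for an open IO channel */

struct task {
    struct channel *default_io;      /* per-task default IO device */
};

static struct channel *current_defio; /* the cached RAM copy */

/* Part of scheduling a task: refresh the cached default channel so the
 * wrappers never need to walk the task structure on every IO call. */
static void schedule(struct task *t)
{
    current_defio = t->default_io;
}

/* What a *defio wrapper does first: grab the cached channel and pass it
 * to the ordinary channel-based routine. */
static struct channel *defio_channel(void)
{
    return current_defio;
}
```

Since the cache is refreshed once per context switch rather than once per IO call, the wrappers stay cheap while the shared task code remains completely channel-agnostic, which is what makes it reentrant.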

One issue to address in any multitasking system is how a task should exit. This is a surprisingly complex operation. I have borrowed some ideas from UNIX (massively simplified) and hold an exit code in the task's structure. The exiting child signals its parent which can, in turn, extract the task structure pointer (now free memory) and exit code, returned in x and a respectively.
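A simplified model of that handshake, in C; the structure layout is invented, and the flag stands in for the signal (in the real OS the pointer and code come back in the X and A registers):

```c
#include <assert.h>
#include <stdint.h>

struct task {
    uint8_t exit_code;  /* stored by the exiting child */
    int     exited;     /* stands in for the signal to the parent */
};

/* Child side: record the exit code and signal the parent. */
static void task_exit(struct task *t, uint8_t code)
{
    t->exit_code = code;
    t->exited = 1;
}

/* Parent side: returns 1 and fills in the exit code once the child has
 * exited, after which the task structure memory can be reclaimed. */
static int task_wait(const struct task *t, uint8_t *code)
{
    if (!t->exited)
        return 0;
    *code = t->exit_code;
    return 1;
}
```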

I am still not sure if I'll use the above described mechanism for the running of all sub-tasks, since it is quite a bit of setup. After assuming for years AmigaDOS used this kind of mechanism in its CLI, I have since learned that instead external commands are simply read off disk and run as subroutines of the CLI task. The reason is obvious: efficiency. However, this simple technique makes pipelining commands more awkward, since the whole output has to be buffered for the next task (this probably explains why AmigaDOS lacked pipes). I will have to experiment when I get closer to having a useable Shell to determine which approach to use.

Finally, I've been working on improving the debug output which is presented, if enabled at assembly time, on UART port 2 (the TTL header). There are improvements both to the code and to the text output.

Each debug message now has a "section type"; for example, messages related to the memory allocator are marked against DEBUG_MEMORY. At assembly time it is possible to select which debug messages should be generated for that particular build. So if I have a problem with interrupt processing, I can choose to output only those messages. Debug messages were previously always included in the generated binary; now they are only included if debug mode is enabled.
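The section filtering amounts to a bitmask test. Here is a hedged C model: the section names match the post, but the mask mechanics and values are assumptions about how the assembly-time selection might work:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical section bits; only DEBUG_MEMORY and DEBUG_TASK are
 * named in the post, the rest is illustrative. */
#define DEBUG_MEMORY 0x01
#define DEBUG_TASK   0x02
#define DEBUG_INT    0x04

/* Chosen per build: here, only task-related messages are wanted. */
#define DEBUG_ENABLED (DEBUG_TASK)

/* A message is emitted only if its section is in the enabled set. */
static int debug_wanted(uint8_t section)
{
    return (DEBUG_ENABLED & section) != 0;
}
```

In the real OS this test happens at assembly time, so disabled messages cost nothing in the binary at all, rather than being skipped at run time as this model suggests.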

The messages also look neater because each one is prefixed with a section label. To print out the task address along with the task name, the following code is used:

debugxtaskname ^'Scheduling task: ',DEBUG_TASK

This required some new uses of macros in ASxxxx. The macro has to allocate space for the message text in the generated code, but it can't go directly into the stream of code, since otherwise the MPU would try to run the message as if it were code. Instead a new "area" was created, called DEBUGMSG. This sits towards the end of ROM and contains all the debug messages. This particular message only shows if DEBUG_TASK is enabled for the build.

Typical output from the debug serial line looks like the following:

All told, I'm pretty pleased with how my weird little OS is coming along. There are still lots of interesting things to work on:

I'm keen to try organising the code a little better. There are now about 30 files, and it would be nice to have a directory layout with drivers in their own directory etc.

Improvements to the console driver: I could look at some of the VT100 codes and try to make the serial terminal more useable with them.

Transmission interrupts on the UART. I have yet to crack them.

Extend my, very preliminary, debug monitor. I have started "porting" my old monitor to the MAXI09OS so I can use it for debugging by dumping out things like task structures, the allocated memory blocks etc.

I could also have a crack at extending the VHDL in MuDdy and DISCo:

The IDE port needs a high byte latch implemented in DISCo

DISCo also needs a proper SPI controller, instead of the basic SPI pin interface which is currently written

Or I could have a think about how I will implement a storage interface in the OS. I have a few ideas, but more thinking and planning is needed...