Bare Metal PC Hacking 4 - 16bit interrupts from protected mode

With loading out of the way, and my whole program securely situated in high
memory (above 1mb), I can finally leave the grotesque horrors of real mode behind, and continue forth in 32bit
protected mode.

So I jump into a C function, and
continue on my merry way. I make a proper GDT and
IDT, write code for interrupt
handlers, exceptions, and spurious interrupts. A keyboard driver, and timer code (to
be honest, I pulled most of that straight from my old
kernel project). But there's
something bugging me. It's just not cool enough to have to switch video modes from
the boot loader before entering protected mode,
and never touch them again.

Video BIOS

Let me explain, for those of you not intimately familiar with low level VGA/SVGA
graphics code. All graphics cards on the PC architecture have a ROM on board,
containing a piece of code called the "video BIOS". During startup, the video BIOS
hooks interrupt vector 10h, and provides the services of a rudimentary but generic
video driver for the particular graphics card it's written for; nothing too fancy,
basically video mode switching, text output, bank switching, and so on. The problem
is, int 10h is a 16bit interrupt vector. I can't call it from
protected mode and expect it to work properly, even if
I do set it up in my protected mode IDT; it must be called from
real mode.
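To make this concrete, here's roughly what a video BIOS call looks like from real
mode (nasm syntax; a sketch, not code from pcboot). Function 0 of int 10h sets the
video mode given in al:

```asm
	bits 16
	mov ax, 0x13	; ah=0: set video mode, al=13h: 320x200 8bpp
	int 0x10	; call whatever the video BIOS hooked at vector 10h
```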

A simple solution, as I've hinted above, is to have the boot loader set the
required video mode before entering protected mode.
That might actually be
sufficient for this project, but not very convenient. If at all possible I'd like to
be able to switch video modes much later, from the application-specific 32bit C
code.

int86

DOS protected mode extenders provide a function
(usually called int86), to allow calling 16bit interrupts from
protected mode. It takes the interrupt
number, and a structure with all the values we want in the registers when the
interrupt is called; after the interrupt returns, the register state is saved
back to the same structure. The way it works is by utilizing the
virtual 8086 mode
of the processor; a mode designed to run 16bit programs under the strict control and
supervision of a 32bit kernel. The same method was also used to run 16bit DOS
programs under Windows 9x.

Initially I thought I'd have to implement int86 in a similar way, using
a v86 task.
I was dreading the prospect, because it's very complicated to set
up, having to essentially execute a 16bit user level process, and then trap and
emulate any privileged instructions that process might attempt to execute.

Thankfully there's a simpler way than using v86 mode
just for calling a BIOS
interrupt. What I ended up doing, which in retrospect should have been the obvious
solution, is to simply write a function which drops back to real mode
(or actually unreal mode), calls the requested
interrupt, and then re-enters protected mode before
returning.

So basically I set the values I want each register to have when the function
calls the actual interrupt, and provide the interrupt number as the first argument.
When the function returns I can check the value of each register for whatever
return values or error codes the interrupt provides.
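Sketching this in C (the structure layout and the prototype are my assumptions, not
necessarily the exact pcboot definitions):

```c
#include <stdint.h>

/* Register block for int86 (hypothetical layout; the real pcboot structure
 * may differ). The general purpose registers are in the order pusha stores
 * them in memory, so a single popa can load them all at once.
 */
struct int86regs {
	uint32_t edi, esi, ebp, esp;	/* the esp slot is skipped by popa */
	uint32_t ebx, edx, ecx, eax;
	uint16_t flags, es, ds;
} __attribute__((packed));

void int86(int inum, struct int86regs *regs);	/* assumed prototype */
```

Setting VGA mode 13h, for instance, would then amount to zeroing the structure,
setting `regs.eax = 0x13`, and calling `int86(0x10, &regs)`.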

8086 vs x86 interrupts

Interrupt vectoring on the 8086 used to work with a simple table of
segment:offset addresses called the IVT
(Interrupt Vector Table), always located at address 0. Each of the addresses in the
table points to the entry point of the corresponding interrupt service routine. For
instance at address 12 would be 4 bytes making up the segment and offset of the
interrupt 3 entry point.
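In C terms, fetching an entry from such a table could look like this (a sketch with
names of my own; when using the int instruction, as I do later, the processor
performs this lookup by itself):

```c
#include <stdint.h>

/* Read interrupt vector n from a (copy of the) real mode IVT. Each entry is
 * 4 bytes starting at n * 4: a 16bit offset first, then a 16bit segment.
 */
static uint32_t ivt_entry_linear(const uint8_t *ivt, int n)
{
	uint16_t off = ivt[n * 4] | ((uint16_t)ivt[n * 4 + 1] << 8);
	uint16_t seg = ivt[n * 4 + 2] | ((uint16_t)ivt[n * 4 + 3] << 8);
	return (uint32_t)seg * 16 + off;	/* real mode linear address */
}
```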

Later x86 processors use a much more complicated scheme which I'm sure I've
mentioned before. Interrupt entry points are defined by a number of interrupt
descriptors placed in the Interrupt Descriptor Table
(IDT). Each of those descriptors contains the 32bit address of the entry point,
the selector for its code segment, and a number of protection and privilege
bits. The table itself can be located anywhere in memory, as long as we make sure
the idtr register points to it, so that the processor can find it.

As I've also mentioned time and time again, on reset the x86 starts in
real mode, which emulates the 8086. It still uses the
idtr to locate the IVT, which is initialized with a base of 0 and limit
3ffh, but accesses it in the simpler 8086 way, expecting addresses there, and
not descriptors.
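The 6 byte pseudo-descriptor that idtr holds (and which lidt/sidt operate on) can be
expressed as a packed C structure; the names here are mine, and the initializer
shows the real mode reset values. A limit of 3ffh covers exactly 256 vectors of 4
bytes each:

```c
#include <stdint.h>

/* The operand of lidt/sidt: a 16bit limit followed by a 32bit base address.
 * Loading these values makes the processor find the IVT at address 0 again.
 */
struct idtr_desc {
	uint16_t limit;
	uint32_t base;
} __attribute__((packed));

static const struct idtr_desc rm_ivt = { 0x3ff, 0 };	/* 256 * 4 - 1, base 0 */
```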

int86 implementation

I can see two ways of going about calling the 16bit interrupt (after we drop into
real mode): either set everything up so that I can simply
issue an int instruction, or fake the interrupt stack frame, manually look up the
entry point in the original 16bit IVT, and
jump to it. I thought the first way would be less error prone,
so I went with that. The first step therefore is to save the contents of
idtr, which is currently pointing to my protected mode IDT, and then point it to address 0, where the original
IVT has remained unchanged.

A slight complication is that I want to grab the first argument of the int86
function, and use it as the interrupt number in the int instruction, but
the x86 instruction set does not have an int opcode with a register or memory
operand. I'm glad about that omission, because it's very rare to find a genuine
excuse for self-modifying code these days. The int
instruction is encoded as two bytes: the opcode cd followed by the
interrupt number. So, making sure there is a label at the int instruction, it suffices
to simply plonk the correct number at offset 1 from that label.
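The patch itself is trivial; here's the same byte-poking demonstrated on a plain
buffer rather than the actual instruction stream:

```c
#include <stdint.h>

/* The int instruction is encoded as cd xx, so writing the interrupt number
 * at offset 1 past the opcode selects which vector it will call.
 */
static uint8_t int_instr[2] = { 0xcd, 0x00 };	/* currently "int 0" */

static void patch_int(uint8_t inum)
{
	int_instr[1] = inum;	/* now encodes "int inum" */
}
```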

To correctly execute the 16bit code in the interrupt service routine, I'm going
to need to call it from code running in a 16bit code segment. For this reason I've
included a 16bit code segment descriptor in my GDT at
index 6, and I'll have to load the appropriate selector into the cs
register before dropping to real mode and loading
cs with 0. Obviously I
also need to make sure this code can be reached from a 16bit segment with base at 0,
which is why I placed it alongside the second stage boot loader in src/boot2.s.
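For reference, such a 16bit code segment descriptor can be constructed like this (a
sketch with values I'd consider typical, not necessarily the exact pcboot
descriptor; with the descriptor at GDT index 6, the selector to load into cs is
6 * 8 = 30h):

```c
#include <stdint.h>

/* Build a 16bit code segment descriptor: access byte 9ah (present, ring 0,
 * executable, readable), and all the high flag bits clear, since a 16bit
 * segment has neither the granularity nor the default-size (D) bit set.
 */
static uint64_t seg_desc16(uint32_t base, uint32_t limit)
{
	uint64_t d = limit & 0xffff;			/* limit bits 15-0 */
	d |= (uint64_t)(base & 0xffffff) << 16;		/* base bits 23-0 */
	d |= (uint64_t)0x9a << 40;			/* access byte */
	d |= (uint64_t)((limit >> 16) & 0xf) << 48;	/* limit bits 19-16 */
	d |= (uint64_t)(base & 0xff000000) << 32;	/* base bits 31-24 */
	return d;
}
```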

Loading the registers with the values provided in the int86regs
structure is easily done by setting the stack pointer to point to the structure, and
then performing a number of pop operations, with all the general purpose
registers loaded with a single popa.
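In nasm-style pseudocode the trick looks something like this (a sketch; regs_ptr is
a placeholder name for wherever the structure pointer ends up):

```asm
	mov esp, [regs_ptr]	; point the stack at the int86regs structure
	popad			; pops edi, esi, ebp, (esp value discarded),
				; ebx, edx, ecx, eax in one go
	; segment registers and flags would be popped similarly
```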

I won't tire you with listing the code entering real mode,
returning to protected mode, and pushing all the
registers back into the regs structure, since you've seen all that before.
Feel free to visit the pcboot repository. I've tagged this version of the code as
int86_done.

Here's a short video of my old pentium retro-pc booting the current pcboot test
code, listing all available VESA video modes,
and displaying a test pattern in a 640x480 high color mode (16bpp 565):