I'm usually just lurking in these kinds of communities, but since I'm a condemned procrastinator that demands INSTANT GRATIFICATION, I never had enough patience and willpower to actually learn programming myself, and chances are I will never acquire enough skill to make decent progress in a project as involved as an OS. Still, I love to see other people progress, and I always have fun reading comments from people who develop hobby OSes and program in assembler, or throwing out ideas myself.

The reason for this post is that something strongly caught my attention: the UEFI specification, besides its powerful pre-boot environment, allows for the creation of "Runtime Drivers", which can be called by the OS even after the infamous UEFI ExitBootServices() call that ends the pre-boot stage, and these Drivers live until system shutdown. I occasionally google this matter, and in years I couldn't find anyone either claiming to implement them or asking about them, so I don't recall seeing any discussion weighing the pros and cons of moving Drivers to the UEFI layer.

Some of the things I could think about:

-OS independence: Runtime Drivers would be OS independent. Assuming people managed to standardize a framework, a naming convention, how to deal with Hardware Devices of the same type but with vastly different capabilities, etc., the end result would be something similar to BIOS Interrupts (INT x). Instead of each OS needing its own Drivers to initialize and set up the Hardware, a UEFI-aware OS could rely on UEFI having preinitialized the Devices. The usefulness of this is that Driver development and OS development wouldn't be so tightly coupled any more. As far as I know, nearly every hobbyist has had to reimplement the same set of basic Drivers every time they wanted to do something (USB support, video acceleration, etc.). This may partially tackle that. People who like Hardware interaction could fine-tune the Drivers to get the most out of the Hardware (be it in features or in speed via optimizations), while people caring about the "user experience" (in marketing terms, heh) could focus more on making the OS useful and functional instead of having to learn all the quirks of each piece of Hardware to make it work. It would look very similar to the Linux world, where you have the main Linux Kernel, then a lot of distributions using it or tweaking it for their own needs.

-Bare Metal (Type I) Hypervisors: I became a fan of virtualization a few years ago, since with a good infrastructure, virtualization is ridiculously versatile. Why focus on a single OS when you can run all of them in parallel? The most interesting Hypervisors are the Bare Metal ones, and one such Hypervisor exists: Xen. Xen can optionally be compiled as an EFI Executable and thus be loaded directly by the UEFI Firmware; you could even skip having a Boot Loader/Manager if your UEFI Firmware allows you to manually choose which EFI Executable you want to load. However, Xen has a sort of drawback: while the Hypervisor itself is bare metal, it has no Drivers and a lot of other things of its own, so it instead relies on an administrative domain (called Dom0), usually a Linux distribution, to provide Drivers and management tools. Since Dom0 could be either a minimalistic console-based distribution or a full-blown one with a GUI like Ubuntu, the Bare Metal label becomes almost meaningless, since in actual usage the end result is extremely close to KVM for Linux, which is a Type II Hypervisor. And both rely on QEMU for the emulation of Devices inside the VM anyway, making them even closer...
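For reference, the boot-loader-free setup works because when Xen is built as `xen.efi` it reads a plain-text `xen.cfg` from the same directory of the EFI system partition. A minimal sketch of such a config, where the kernel and initrd file names and the root device are placeholders rather than real paths:

```ini
[global]
default=xen

[xen]
options=console=vga dom0_mem=2048M
kernel=vmlinuz root=/dev/sda2 ro
ramdisk=initrd.img
```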

This project would have loved UEFI Runtime Drivers, since they could allow augmenting a Bare Metal Hypervisor before even loading an OS. To be honest, I believe a Bare Metal Hypervisor should be fully functional in the UEFI pre-boot environment instead of needing an OS to fill in the missing gaps, which is why UEFI Runtime Drivers would be critical for it. However, as far as I know, no other projects about this have surfaced in nearly a decade. Nor do I know whether such a Driver could be as complex and full-featured as an OS Driver, since there is a lack of them to begin with.

I would like to know if someone else has experience with these Runtime Drivers, or thoughts on their potential and use cases.

Have yet to really screw with UEFI, but I wanted to randomly interject: Having done a whole bunch of low-level work on old 68k macs lately, how has no one come up with firmware that good since? Like, UEFI should be the modern world's implementation of the Toolbox, and yet, having reviewed a lot of the UEFI literature, it kind of sucks donkey **** at doing the core thing it was intended for.

The reason for this post is that something strongly caught my attention: the UEFI specification, besides its powerful pre-boot environment, allows for the creation of "Runtime Drivers", which can be called by the OS even after the infamous UEFI ExitBootServices() call that ends the pre-boot stage, and these Drivers live until system shutdown. I occasionally google this matter, and in years I couldn't find anyone either claiming to implement them or asking about them, so I don't recall seeing any discussion weighing the pros and cons of moving Drivers to the UEFI layer.

An OS can only rely on the "runtime services", which exclude normal drivers and are limited to drivers for things like the RTC ("GetTime()") and drivers for accessing/setting UEFI variables. UEFI also says things like "Due to the complexity of performing a virtual relocation for a runtime image, this driver type is discouraged unless it is absolutely required".
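To make the distinction concrete, here is a minimal C sketch of how an OS consumes runtime services after ExitBootServices(): purely through a function-pointer table handed over by the firmware. The types are deliberately simplified stand-ins for the spec's EFI_TIME and EFI_RUNTIME_SERVICES (the real headers, e.g. in EDK II, define many more fields and services), and the stub "firmware" side exists only so the sketch is self-contained:

```c
#include <stdint.h>

/* Simplified stand-ins for the spec's EFI_TIME and EFI_RUNTIME_SERVICES. */
typedef struct { uint16_t Year; uint8_t Month, Day, Hour, Minute, Second; } EfiTime;
typedef uint64_t EfiStatus;
#define EFI_SUCCESS 0

typedef struct {
    /* After ExitBootServices(), only this small set of calls survives. */
    EfiStatus (*GetTime)(EfiTime *Time);
    /* ... SetTime, GetVariable, SetVariable, SetVirtualAddressMap, ... */
} EfiRuntimeServices;

/* Stub "firmware" side, so the sketch is self-contained. */
EfiStatus stub_get_time(EfiTime *t) {
    *t = (EfiTime){ .Year = 2017, .Month = 1, .Day = 1 };
    return EFI_SUCCESS;
}

EfiRuntimeServices g_rt = { .GetTime = stub_get_time };

/* OS side: it never links against the firmware; it only calls through
 * the function-pointer table it was handed at boot. */
uint16_t os_read_rtc_year(EfiRuntimeServices *rt) {
    EfiTime t;
    if (rt->GetTime(&t) != EFI_SUCCESS) return 0;
    return t.Year;
}
```

Everything outside that small table (disk, network, USB, graphics drivers) is gone once boot services end, which is exactly the gap the original post hopes runtime drivers could fill.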

Also note that UEFI drivers in general are "synchronous, single-tasking", which makes them useless for any OS that doesn't suck because you don't want the entire OS to stall while you're (e.g.) transferring data to/from disk. What you do want is all the hardware working in parallel wherever possible (e.g. all CPUs doing useful work while disk controller transfers data to/from disk while network card sends/receives packets while USB controller transfer data to/from USB devices while sound card plays and/or records while...).

zirblazer wrote:

-OS independence: Runtime Drivers would be OS independent.

Only if they suck; and don't honour the OS's IO priority scheme, don't/can't reorder requests to improve performance (e.g. to minimise seek times), don't participate in the OS's power management scheme, and don't support anything special (e.g. don't support "unused VRAM as swap space", don't support the OS's hardware usage tracking/hardware failure prediction scheme, etc).

Note that the idea of "universal drivers" is a recurring theme (someone suggests it in one form or another about every 12 months on average); and more practical suggestions have always failed. The most prominent example of this is UDI, which is superior to UEFI's driver model (e.g. actually designed for high-performance operating systems and not designed to be "minimal, to get an OS booted only"), but still failed (became "abandon-ware") despite being backed by multiple large companies (Intel, IBM, Sun, HP, Compaq, ...).

Cheers,

Brendan

_________________For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

UEFI also says things like "Due to the complexity of performing a virtual relocation for a runtime image, this driver type is discouraged unless it is absolutely required".

I have been wondering how this complexity can be handled at all, even if we accept that it is merely "complex". At first glance it sounds more like "nearly impossible", so calling it "just complex" might be overly optimistic. Of course I believe it is possible, and it has been done for the existing runtime services, but in general, making sure a runtime driver handles the ever-changing virtual memory map reliably, including all the corner cases, might just be too hard. If third-party driver writers can actually do this, I have been wrong all along. I believe there are programming problems where you can easily tell in advance that they will be too difficult in practice, and relocation at run time falls in that category.

UEFI also says things like "Due to the complexity of performing a virtual relocation for a runtime image, this driver type is discouraged unless it is absolutely required".

I have been wondering how this complexity can be handled at all, even if we accept that it is merely "complex". At first glance it sounds more like "nearly impossible", so calling it "just complex" might be overly optimistic. Of course I believe it is possible, and it has been done for the existing runtime services, but in general, making sure a runtime driver handles the ever-changing virtual memory map reliably, including all the corner cases, might just be too hard. If third-party driver writers can actually do this, I have been wrong all along. I believe there are programming problems where you can easily tell in advance that they will be too difficult in practice, and relocation at run time falls in that category.

I'd assume it mostly just re-uses the same method of relocation used by Windows for DLLs (which would also need to be used by UEFI drivers that aren't built into the motherboard's firmware anyway). The next problem would be figuring out what is in each "runtime memory range" described by the OS (via the "SetVirtualAddressMap()" runtime service). It'd be messy, but not impossible.
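For illustration, the core of that PE-style base relocation is just "add the load delta at every offset the image recorded". A toy model in C; the real .reloc section is organized in 4 KiB blocks of typed 16-bit entries, which this sketch deliberately ignores:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy model of PE-style base relocation: an image built for a preferred
 * base address stores absolute addresses; when the loader places it
 * somewhere else, it adds the load delta at every offset recorded in
 * the relocation table. */
void apply_base_relocs(uint64_t *image_words,
                       const size_t *reloc_offsets, size_t nrelocs,
                       uint64_t preferred_base, uint64_t actual_base) {
    uint64_t delta = actual_base - preferred_base;
    for (size_t i = 0; i < nrelocs; i++)
        image_words[reloc_offsets[i]] += delta;  /* patch each absolute address */
}
```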

For the OS; "SetVirtualAddressMap()" is a run-time service and has to be called while using physical mappings (e.g. with paging disabled for 32-bit UEFI or paging set to identity mapping for 64-bit UEFI). This is awkward/ugly.
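A sketch of the pointer-fixup work that SetVirtualAddressMap() and ConvertPointer() imply, using a simplified descriptor (the real EFI_MEMORY_DESCRIPTOR also carries Type and Attribute fields, and the real ConvertPointer() takes a pointer-to-pointer plus a disposition flag):

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified runtime-memory descriptor. */
typedef struct {
    uint64_t PhysicalStart;
    uint64_t VirtualStart;   /* chosen by the OS */
    uint64_t NumberOfPages;  /* 4 KiB pages */
} RtDescriptor;

/* The work implied for every pointer the runtime code keeps across
 * SetVirtualAddressMap(): find the runtime range containing the
 * physical address and rebase it into the OS-chosen virtual range. */
int convert_pointer(const RtDescriptor *map, size_t n, uint64_t *addr) {
    for (size_t i = 0; i < n; i++) {
        uint64_t start = map[i].PhysicalStart;
        uint64_t size  = map[i].NumberOfPages * 4096;
        if (*addr >= start && *addr < start + size) {
            *addr = map[i].VirtualStart + (*addr - start);
            return 0;   /* converted */
        }
    }
    return -1;          /* not inside any runtime range */
}
```

The awkwardness Brendan describes is that this transition is one-shot and has to happen while physical mappings are still in force; any pointer the firmware forgets to convert is a latent crash.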

For my case; the OS is not supposed to know or care what the firmware was (so the OS can support "anything" with nothing more than a suitable boot loader), so I refuse to use UEFI's run-time services in the first place. Ignoring that, for security (micro-kernel) and to support "64-bit kernel booted from 32-bit UEFI" I'd want to implement "UEFI runtime" as a (32-bit or 64-bit) process in user-space, which is likely to cause problems if UEFI uses privileged instructions; and makes the "32-bit only kernel/OS booted from 64-bit UEFI" a nightmare because now you're looking at a 32-bit kernel (that was designed for "assume long mode doesn't exist at all") trying to support 64-bit processes.

The "slightly more sane" alternative would be for the OS to support UEFI drivers directly, without relying on UEFI run-time services. In that case the OS could use UEFI drivers even if it booted from BIOS or something else. Of course this has been suggested before.

Cheers,

Brendan


Hi zirblazer, I can confirm that the project from 2008 you mentioned in the link is dead. Hypervista Technologies was founded by Don (aka Hypervista), and two of my friends, Vid and MazeGen, did the programming of the hypervisor; they tried to start the hypervisor from UEFI before the OS. Don had the idea and bought one of the first available motherboards with UEFI (it was some Intel MoBo).

A few years later I did my own hobby work on these pre-boot hypervisors (BIOS, UEFI), and I bypassed the runtime driver necessity: I preferred to load the hypervisor manually (using the UEFI shell; the hypervisor was compiled as a UEFI app). It just required stealing/hiding some memory from the system, so the OS that booted afterwards would not try to use the memory where the hypervisor body was copied and executed. The biggest problem was surviving INIT-SIPI when the OS initialized all CPUs, an especially challenging task to implement for AMD64, easier for Intel. I wanted to avoid loading a UEFI runtime driver on every boot, because in case of a severe error in such a driver I wouldn't be able to boot the PC at all; hence my preference for manual loading.
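The "steal/hide some memory" trick can be sketched as an edit to the firmware memory map before the OS consumes it: shrink a conventional-memory descriptor and describe the stolen tail as reserved (the type numbers 7 and 0 are EfiConventionalMemory and EfiReservedMemoryType per the UEFI spec). This is a simplified model for illustration, not feryno's actual code:

```c
#include <stdint.h>
#include <stddef.h>

/* UEFI memory type numbers (per the spec). */
enum { MEM_RESERVED = 0, MEM_CONVENTIONAL = 7 };

typedef struct { uint64_t Start; uint64_t Pages; uint32_t Type; } MemDesc;

/* Steal `pages` 4 KiB pages from the end of the first large-enough
 * conventional region: shrink that descriptor and describe the stolen
 * tail as reserved in a spare map slot, so the booting OS never
 * allocates it. Returns the stolen base address, or 0 on failure. */
uint64_t hide_pages(MemDesc *map, size_t *count, size_t capacity, uint64_t pages) {
    for (size_t i = 0; i < *count; i++) {
        if (map[i].Type == MEM_CONVENTIONAL && map[i].Pages > pages &&
            *count < capacity) {
            map[i].Pages -= pages;
            uint64_t base = map[i].Start + map[i].Pages * 4096;
            map[*count] = (MemDesc){ base, pages, MEM_RESERVED };
            (*count)++;
            return base;
        }
    }
    return 0;
}
```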

I'd assume it mostly just re-uses the same method of relocation used by Windows for DLLs (which would also need to be used by UEFI drivers that aren't built into the motherboard's firmware anyway). The next problem would be figuring out what is in each "runtime memory range" described by the OS (via the "SetVirtualAddressMap()" runtime service). It'd be messy, but not impossible.

Any "general" driver model would need to assume the operating mode of the processor, the paging setup and similar things. In addition, there is the multiprocessor/multitasking model to consider.

Brendan wrote:

For my case; the OS is not supposed to know or care what the firmware was (so the OS can support "anything" with nothing more than a suitable boot loader), so I refuse to use UEFI's run-time services in the first place. Ignoring that, for security (micro-kernel) and to support "64-bit kernel booted from 32-bit UEFI" I'd want to implement "UEFI runtime" as a (32-bit or 64-bit) process in user-space, which is likely to cause problems if UEFI uses privileged instructions; and makes the "32-bit only kernel/OS booted from 64-bit UEFI" a nightmare because now you're looking at a 32-bit kernel (that was designed for "assume long mode doesn't exist at all") trying to support 64-bit processes.

Exactly. If the operating mode of EFI doesn't agree with the operating mode of the OS, then the OS will simply need to turn off EFI and run its native drivers, no matter how good the EFI drivers are. It's easier to run a real-mode driver in a virtual environment than a 32-bit or 64-bit EFI driver.

Still, I disagree that a 32-bit kernel can't be extended to long mode. This is very much possible and has been done too (I can add a long-mode driver to the configuration which makes it possible to run 64-bit apps with the 32-bit kernel).

Only if they suck; and don't honour the OS's IO priority scheme, don't/can't reorder requests to improve performance (e.g. to minimise seek times), don't participate in the OS's power management scheme, and don't support anything special (e.g. don't support "unused VRAM as swap space", don't support the OS's hardware usage tracking/hardware failure prediction scheme, etc).

If a UEFI Driver can't be high performance, that sadly kills the entire idea, as it can't replace an OS Driver...

feryno wrote:

Hi zirblazer, I can confirm that the project from 2008 you mentioned in the link is dead. Hypervista Technologies was founded by Don (aka Hypervista), and two of my friends, Vid and MazeGen, did the programming of the hypervisor; they tried to start the hypervisor from UEFI before the OS. Don had the idea and bought one of the first available motherboards with UEFI (it was some Intel MoBo).

A few years later I did my own hobby work on these pre-boot hypervisors (BIOS, UEFI), and I bypassed the runtime driver necessity: I preferred to load the hypervisor manually (using the UEFI shell; the hypervisor was compiled as a UEFI app). It just required stealing/hiding some memory from the system, so the OS that booted afterwards would not try to use the memory where the hypervisor body was copied and executed. The biggest problem was surviving INIT-SIPI when the OS initialized all CPUs, an especially challenging task to implement for AMD64, easier for Intel. I wanted to avoid loading a UEFI runtime driver on every boot, because in case of a severe error in such a driver I wouldn't be able to boot the PC at all; hence my preference for manual loading.

I think I recall your nickname from FASM Forums, vid too. Maybe I even know that history already, and just forgot it.

Xen succeeded in becoming a production UEFI Hypervisor, but last time I checked, it was still not common practice to use it that way. I was a rather early adopter and jumped to Linux Kernel 3.17 as soon as it was released, since it was the first one that supported being used as Dom0 for Xen in UEFI Mode (previously Xen itself worked, but couldn't chain-boot Linux).

I lost track of Xen news around one year ago, since I started to research KVM, as its PCI/VGA Passthrough capabilities (VFIO) are much better developed and maintained, so I was going to migrate to it, but I still like the Type I Hypervisor concept. However, as I stated, while Xen is a Type I Hypervisor, it is too dependent on Dom0 for administrative work, and if you're going for a full-featured VM (in Xen jargon called HVM), you also require QEMU for device emulation, and that makes it too close to standalone QEMU with KVM.

I was thinking that if you attempt to implement a Type I Hypervisor that provides a full VM environment, it has to absorb QEMU's device emulation capabilities and provide the emulated Hardware itself, and at that point, you can't really avoid UEFI Drivers. Imagine that you want to provide an emulated Sound Card for the VM (which QEMU currently does); for the sound to be output on the host, you need a UEFI Driver for the Sound Card. Xen instead relies on the Linux Dom0 to provide the Drivers. So, in order to provide a full-featured Type I Hypervisor, the entire backbone of Drivers and tools must be in the UEFI layer.

My wet dream was something like a Firmware with Coreboot, TianoCore, and a self-contained Type I Hypervisor as payload that could create VM instances from within the Firmware itself, needing nothing else, which obviously requires UEFI Drivers. Someone else had an extremely similar idea, Coreboot AVATT, also made during 2008: https://www.coreboot.org/AVATT
Sadly, it didn't go anywhere as far as I know. But since it had Linux as its payload, I suppose you could have an embedded Linux as payload and use its Drivers instead of the UEFI layer. Results should be around the same.

Hi zirblazer, I forgot to mention f0dder as one of the main developers. I do not know whether the team successfully built a running hypervisor started as a UEFI runtime driver; we met at fasm con 2007 and 2009 (iirc in 2009 without Don or f0dder).

Yes, Xen, KVM and QEMU are huge projects done by teams of programmers. I had an idea for a lightweight type-1 hypervisor which just starts before the OS and watches the running OS, a job that can be done by a single programmer working alone in their free time. But it was very long ago, and 90% of the work was debugging using SimNow 4.6.2 (AMD) and BOCHS (Intel); the programming itself was 10%. Much later I realized it was not wasted time, because it paid off when I was working on a nested hypervisor implementation. My hypervisors (AMD, Intel) were small and did not require device enumeration. The running OS caused only a few VM exits, and it was possible to intercept more events by enabling some bits in the VMCS / VMCB. I wanted the hypervisor just to "live and watch".
