Posted
by
timothy
on Tuesday January 27, 2009 @10:42AM
from the now-you're-just-messing-with-me dept.

billybob2 writes "CoreBoot (formerly LinuxBIOS), the free and open source BIOS replacement, can now boot Windows 7 Beta. Videos and screenshots of this demonstration, which was performed on an ASUS M2V-MX SE motherboard equipped with a 2GHz AMD Sempron CPU, can be viewed on the CoreBoot website. AMD engineers have also been submitting code to allow CoreBoot to run on the company's latest chipsets, such as the RS690 and 780G."

What is the benefit of writing a BIOS in C over assembly code? Is it for transparency? Easier to catch bugs? Does compiling from C to machine assembly protect you from obvious errors in assembly? Is it for reusability of procedures, modules & packages?

Oftentimes I have wished I knew more assembly so I could rewrite frequently used or expensive procedures to fit the target machine and try to optimize them. I don't know assembly well, however, and therefore don't mess with this. Doesn't handwritten assembly have the potential to be much faster than assembly compiled from C? I thought frequently run pieces of the Linux kernel were being rewritten in architecture-specific assembly for exactly this reason?

I'm confused why mainboard companies don't write their BIOS in C if this is an obvious benefit--or is it that they do and all we ever get to see of it is the assembly that results from it?

Can anyone more knowledgeable in this department answer these questions?

I would have thought that immediately ( 160 ), since I did so much assembly with CGA; then again, I may be even older, and some of the displays were 40 characters wide, so 80 would be correct for those. On coreboot: that is fantastic, and I want it for my machine now. I want instant boot to Linux and ext4 for my next upgrade. On the other issue, of asm being faster: I bet I could make some of it faster, but gcc is very good these days. I often objdump my C code to look at the assembly, and the people who write the compiler are virtually magicians with that code. I have tried competing with the compiler, and it is a waste of time for most things; unless I was doing firmware or a device driver, I wouldn't even consider assembly. As for the code, the one thing I wouldn't do is a "mul", just for the cycle cost; I would combine shifts and adds to get 80x as (16x + 64x).
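The shift-and-add trick for multiplying by 80 mentioned above, i.e. 80x = 64x + 16x, might look like this in C (the function name is just for illustration):

```c
/* Multiply by 80 without a MUL instruction: 80x = 64x + 16x,
   so shift left by 6 and by 4, then add the results. */
static unsigned mul80(unsigned x)
{
    return (x << 6) + (x << 4);
}
```

Whether this actually beats a `mul` depends on the CPU; modern compilers make exactly this substitution themselves when it pays off.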

I thought C compilers had gotten to the point where C was just a convenient syntax for assembly these days?

I'm only half-kidding here. I'm sure the main reason is for portability across different chipsets, as well as ease of debugging. But, as I said, I think a lot of current C compilers can generate code that's not appreciably larger than hand-written assembly.

Doesn't handwritten assembly have the potential to be much faster than assembly compiled from C?

Short answer: no.

Long answer: rarely. Optimizing compilers are so good these days that very few humans would be capable of writing better assembler, and I contend that no humans are capable of maintaining and updating such highly-tuned code.

Embedded assembler makes a lot of sense when you're embedding small snippets inside the inner loops of computationally expensive functions. Outside that one specific case (and disregarding embedded development on tiny systems), there's not much need to mess with it. Note that need is not the same as reason. Learning assembler is good and valuable on its own, even if there are few practical applications for it. If nothing else, it'll cause you to write better C.
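As a sketch of what such an embedded snippet looks like (GCC extended-asm syntax on x86; purely illustrative, since `__builtin_bswap32` or plain C would serve just as well):

```c
#include <stdint.h>

/* Byte-swap a 32-bit word. On x86 with GCC-style compilers this uses a
   single BSWAP instruction via inline assembly; elsewhere it falls back
   to a plain C shift-and-mask version. */
static uint32_t swap32(uint32_t v)
{
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
    __asm__("bswap %0" : "+r"(v));
    return v;
#else
    return (v >> 24) | ((v >> 8) & 0x0000ff00u)
         | ((v << 8) & 0x00ff0000u) | (v << 24);
#endif
}
```

The fallback branch also shows why this rarely matters in practice: a good compiler will recognize the C version and emit BSWAP on its own.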

I've had to optimize generated code (for space and speed) a long time ago

I did too. But how can you write assembler for a 31-stage pipeline by hand? Or out-of-order instructions? I'm pretty sure that's completely impossible, unless you insert 30 NOPs between each instruction, and by that time it's far from optimized anymore!


Out-of-order execution is there to compensate for suboptimal instruction ordering, so I'm not sure why you think it would make it harder to write assembly for an OOO chip. You just worry slightly less about sequencing.

Specialized instructions (MMX, SSE, etc) can provide substantial speed boosts with certain code. Unfortunately no C compiler really takes full advantage of those features (if at all) despite them being widely available nowadays.

So in those cases it may be a whole lot faster to use assembly. Usually this is just embedded within a C function because of the specialized nature.

Turns out that the number of bugs per line of code is fairly constant, regardless of language. Thus, the same program takes fewer lines in C, which means fewer bugs.

Also, it is extremely rare that hand-written assembly beats what the compiler can emit; compilers are extremely good at optimizing these days. The more common approach is to provide hints and use intrinsics, so that you keep all the benefits of writing C (type checking, more readable code) while the compiler is better able to generate the assembly you want.
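A small sketch of the hints-and-intrinsics idea (function and parameter names are made up for illustration): `restrict` promises the pointers don't alias, which lets the loop be vectorized, and `__builtin_expect` (a GCC/Clang builtin) marks the early-out as the unlikely path.

```c
#include <stddef.h>

/* Scale an array by a constant. The hints tell the compiler what a
   hand-coder would otherwise exploit by hand. */
static void scale(float *restrict dst, const float *restrict src,
                  size_t n, float k)
{
    if (__builtin_expect(n == 0, 0))   /* rare case: nothing to do */
        return;
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

/* Tiny wrapper so the sketch is easy to exercise. */
static float scale_one(float x, float k)
{
    float out;
    scale(&out, &x, 1, k);
    return out;
}
```

You keep ordinary, debuggable C, and the compiler gets enough information to emit the tight code you would have written by hand.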

You will almost never write better assembly than what the compiler outputs - remember, the compiler takes a "whole program" approach in that it makes optimizations across a larger section of code so that everything is fast. It is highly unlikely that you will be able to match this - your micro-optimization is more likely to slow things down.

There is actually very little in the Linux kernel that is written in assembly (relative to the amount of C code), and where there is, it's because assembly is the only way to do the job on a given architecture, not for performance. For performance, the kernel code is overwhelmingly written in C, and the kernel developers work with the compiler people to make sure that code is optimal.

Now, we're not saying that Perl has a lot of bugs. But it's a program, and every program has at least one bug. Programmers also know that every program has at least one line of unnecessary source code. By combining these two rules and using logical induction, it's a simple matter to prove that any program could be reduced to a single line of code with a bug.

You have some insight, but not into how compilers work, nor how good programmers improve code. It has been a very long time since I wrote any C, but the last C I wrote was much of a (non-optimizing) C compiler. It must be said, though, that one reason I didn't finish it was looking at all the cool ways to optimise :)

If I were writing/designing a BIOS (which I must admit I am glad I am not) I would also pick C as the "best" language for the job. I'd then write the cleanest possible implementation of the design, dropping to assembly only where C couldn't do the job.

Well, it's much easier to write anything in C than assembly, but assembly lends itself to small pieces of self-contained code that do one thing only.

The idea is that assembly is used only where it needs to be: where you have to do something that you can't do in C, such as fiddling around with the CPU's internal state. The rest is written as a collection of modules in C. To build a BIOS for a particular board, you just link the required modules together.
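A sketch of that modular structure (all names here are made up, not coreboot's actual API): each piece of init code is a plain C module, and a board build just links the ones it needs.

```c
#include <stddef.h>

/* Each module exposes an init routine; a board is a list of modules. */
struct module {
    const char *name;
    int (*init)(void);   /* returns 0 on success */
};

/* Stand-ins for real hardware init code. */
static int ram_init(void)  { return 0; }
static int sata_init(void) { return 0; }

/* "Build a BIOS for this board" = link exactly these modules. */
static const struct module board_modules[] = {
    { "ram",  ram_init },
    { "sata", sata_init },
};

static int board_init(void)
{
    for (size_t i = 0; i < sizeof board_modules / sizeof board_modules[0]; i++)
        if (board_modules[i].init() != 0)
            return -1;   /* abort on the first failed module */
    return 0;
}
```

The CPU-state fiddling would hide inside one or two of those modules as tiny inline-assembly wrappers; everything above them stays portable C.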

That suggests the question: why not write the BIOS in C++, or Java, or whatever? Anything higher-level than C tends to require a more complex runtime environment (usually itself written in C), while C requires nothing more than assembly underneath. It's the highest-level language commonly available that can run with absolutely no OS support at all.

Doesn't handwritten assembly have the potential to be much faster than assembly compiled from C?

For a piece of software that gets run once per boot, speed is probably not very critical. A typical BIOS completes its run in a couple of seconds.

Using an optimizing C compiler also has a further potential benefit -- given that motherboards specifically target certain CPUs, you can optimize the BIOS code for that CPU family. Not sure how much improvement this will yield, though.

Writing in 'C' is an order of magnitude faster than writing in assembler; if you're building a system with 10 man-years of coding in it, that becomes really, really important.

Imagine writing a host-side USB stack in assembler; a BIOS has to have that. Or writing an Ethernet driver and TCP/IP stack in assembler. Or any of the other large subsystems of a BIOS; the task would be daunting to me, a 20 year veteran of embedded systems (yes, my 'C' and Assembly mojo is strong).

Assembler has proven its worth when sprinkled through embedded systems. When profiling finds the routines that are bottlenecks for time-critical functions, a good assembly programmer can often speed up the 'C' code by a factor of 2 to 10. But, this generally involves very small chunks of code - 10 to 50 lines of assembly.

In most real systems, the vast majority of the code is executed rarely, and rarely has a performance impact. For example, on a modern dual-core, 2 GHz processor with a GB of RAM, the code used to display the BIOS setup UI and handle user input will execute faster than human perception in almost any language you could imagine (say, a PERL interpreter written in VB which generates interpreted LISP). There is no reason in the world to try to optimize performance here. Even in things like disk I/O, the BIOS' job is mostly to boot the OS, then get the hell out of the way.

A BIOS actually does not have that much to do, by definition. The problem is that PC architecture is crusty and hasn't evolved all that much since the '80s. It should not be the BIOS' job to handle USB/Ethernet or any other hardware niggling; such feats belong in the hardware's own controller. If each component did its job and presented a uniform, reliable interface to the BIOS, we could be writing very simple BIOSes that glue it all together and give us a simple UI to configure the pre-boot stage.

AMD engineers have also been submitting code to allow CoreBoot to run on the company's latest chipsets, such as the RS690 and 780G."

Now that would freakin' rock!!!

Until now, CoreBoot has been really hampered by the fact that it has mostly been supported on server boards [coreboot.org], with little to no support on desktop and laptop chipsets. This is mostly the fault of the chipset/mobo manufacturers, who have zealously guarded their legacy BIOS crap for reasons that are pretty unfathomable to me.

I would love to be able to run CoreBoot on my desktop and laptops. It would help to fix soooo many of the legacy BIOS issues that people tear their hair out over.

Alright, at the risk of further revealing my stupidity--what does this matter? I mean, isn't the BIOS tied to the architecture of the chipset anyway? It's not like I'm going to write a C program that compiles into the BIOS for an x86 chipset and--oh, by the way--thank god I can also compile that down to a PowerPC binary! I don't think that any piece of that integrated circuit is going to be developed in a mirror fashion on a PPC architecture... or is that common practice?

"The real accomplishment was to be able to write memory and other early initialization code in C, which is much easier to write and maintain than assembler. Assembly code is fragile when you change it, especially when you don't have a stack. C is much more robust: the code is easier to change without breaking everything. This makes coreboot easier to work on, to contribute to and to maintain."

Actually, Coreboot is faster. The record from power on to Linux login is, according to their FAQ [coreboot.org], 3 seconds. Writing it in C speeds up development compared to writing it in assembly and allows compilers to optimize it.

As another poster pointed out, Coreboot (which is written in C now) already boots Linux in 3 seconds, which is far faster than normal BIOSes. So obviously, performance isn't a problem here.

Custom assembly language routines, written by an expert, may be helpful for certain things which are executed a lot. However, BIOS code is NOT executed a lot: it's executed just before booting the OS, and that's it. After the OS boots, BIOS code is never seen again. So if you're trying to eke out every bit of performance on your computer, writing custom assembly language code for your BIOS is probably the biggest possible waste of time. Instead, you should concentrate on things like OS interrupt routines, or certain software libraries which demand high performance (video codecs for example). Of course, even then it's debatable how well your asm routines will perform compared to well-written C code that's compiled by a good compiler with optimization.

Actually you get massive speed gains if you use SSE assembly (and your app benefits from SSE) because the compiler often doesn't produce it willingly.

SSE intrinsic functions are much better, though.
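For instance, a sketch using the standard SSE intrinsics header `<xmmintrin.h>` (x86 only; helper names here are illustrative): the code stays type-checked C while the compiler picks registers and schedules the instructions.

```c
#include <xmmintrin.h>   /* SSE intrinsics (x86/x86-64) */

/* Add two arrays of four floats with a single packed SSE add. */
static void add4(const float *a, const float *b, float *out)
{
    __m128 va = _mm_loadu_ps(a);            /* unaligned 4-float load */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb)); /* one packed addition */
}

/* Scalar helper to make the sketch easy to check. */
static float add4_at(const float *a, const float *b, int i)
{
    float out[4];
    add4(a, b, out);
    return out[i];
}
```

Compared with raw SSE assembly, you lose nothing in throughput for a snippet like this, but you keep inlining, type checking and portability across compilers that support the intrinsics.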

I'm not sure what the SSE instructions are, I have never coded assembly outside of one class in college. However, maybe the key is to fix the compiler to produce the proper SSE code. If a human can produce better assembly than a compiler, he can probably teach the compiler to do the same.

They probably need to give compiler-specific hint flags, thereby marrying the build to a specific compiler. This still produces code that can be changed quickly.

There may be SOME architecture-specific code, but even a lot of that can probably be written in C. 99% of the Linux kernel is C, and that has to interact with hardware too.

As far as efficiency goes, in the old days it was true that a coder with an intimate knowledge of the architecture could usually hand code more efficient assembly. Modern C compilers however can do a LOT of optimization and generally the resulting code is faster than anything that could be coded by hand, or at least AS fast. Even if it is microscopically slower it is still a LOT easier to use C. Plus if hardware abstraction is done properly even a low level driver back end should be portable for the most part.

Manufacturer BIOSes may be written in assembly since they are A) targeting a specific board, which is obviously only going to run that one family of chip, and B) probably sitting on a lot of legacy assembly code they would rather not bother to port to C. Neither of those applies to Coreboot.

what does this matter? I mean, isn't the BIOS tied to the architecture of the chipset anyway?

I'm not a BIOS writer, but I am a software developer.

My best guess is there are parts of a BIOS that are tied to the hardware architecture, and there are parts that aren't.

For instance, what if you want to write a BIOS that can read an EXT3 partition? Or has a TCP/IP stack in it? These might be bad examples, but I can certainly see that there's generic things to be done that aren't necessarily tied to a particular processor.

My best guess is there are parts of a BIOS that are tied to the hardware architecture, and there are parts that aren't.

For instance, what if you want to write a BIOS that can read an EXT3 partition?

Actually, a BIOS that can read EXT would be kickass. My bios can only read FAT12 (and maybe FAT32 for a hdd) off the floppy. If I wanted EXT3 (or 2), I'd have to put that stuff in the "kernel" that gets loaded by the boot sector. That kernel, however is on a FAT12 partition =P
As for TCP/IP, that would be nice to allow diskless boots. PXE anyone?

As for TCP/IP, that would be nice to allow diskless boots. PXE anyone?

Not only that, but a minimal TCP/IP stack in the BIOS would remove much of the reason for purchasing expensive remote-management add-in cards (and sacrificing PCI slots as a result) in order to perform hard reboots and view the boot console over a network. (Those cards are in themselves an alternative to even more expensive out-of-band management systems, using either the serial port or proprietary hardware interfaces.)

Although there would be some obvious security concerns with such a system -- you wouldn't want to enable it by default on non-headless systems, clearly -- it would be a pretty neat feature and would go a long way towards making commodity servers (built up from semi-generic components, like Rackspace's) feature competitive with the big names. And it'd be nice, just in general, to get a standardized approach to true headless operation that was vendor-agnostic and didn't require the purchase of additional addon parts.

Alright, at the risk of further revealing my stupidity--what does this matter? I mean, isn't the BIOS tied to the architecture of the chipset anyway?

No, not really. Small parts of a standard bios are, but if you notice BIOS version numbers for say, Award or AMI, you'll see that they don't actually vary much by motherboard; just by manufacturing date. There are also major bits that stay the same -- PCI drivers, for instance, even across different architectures like x86, AMD64, PowerPC, Alpha, etc.

I'd really like to see the buggy vendor BIOSes get the boot and be replaced by this. The BIOS on my motherboard has all sorts of quirks, from randomly missing one stick of my RAM during detection to really laggy page switches. Windows support is what CoreBoot needs to get accepted.

Some fully supported desktop mobos is what coreboot needs ;) If a mobo was fully supported, that would be a huge plus when I'd choose. We've seen a lot of cases where, even if a BIOS isn't massively buggy by itself, future hardware development leaves a lot to be desired, but the vendor has dropped any support. This includes servers by IBM and HP, desktop boards... The problems have varied over the years (and I really mean only the problems that can be fixed in the BIOS): larger disk drive support, larger memory support, proper booting from HDD (for example, an IBM Netfinity 5000 stops booting when an IDE drive is attached), proper booting from all CD-ROMs, USB booting... So, AMD, if your products will be fully supported by, or even shipped with, coreboot before everybody else's, it is very likely that my future purchases will go to you :)

Each time I do a Coreboot/LinuxBIOS announcement on Freshmeat, I usually add a whole bunch of chipsets and a fair dollop of motherboards. I don't, as a rule, state the level of completeness, simply because there's barely enough space to list just the components.

Having said that, assume the web page is out of date when it comes to fully supported motherboards. I know for a fact that I've seen a lot more motherboards get listed as complete in the changelog than are listed on the website, even though I started tracking those changes relatively recently, and there were plenty of mobos complete even then.

One of the important things to remember about LinuxBIOS/Coreboot (the new name doesn't have the same ring to it, for me) is that it's a highly modular bootstrap, so it has a high probability of working on just about anything, so long as the components you need are listed and ready. I feel certain that a few good QA guys with a bit of backing from mobo suppliers could pre-qualify a huge number of possible configurations. The developers, as with most projects, don't have time to validate, debug and extend, and their choice has (wisely) been to put a lot of emphasis on the debugging and extending.

Of course, Coreboot isn't even the only player in the game. OpenBIOS is out there. That project is evolving a lot more slowly, and seems to have suffered bit-rot on the Forth engine, but that's a damn good piece of code and it deserves much more attention than it is getting.

Intel also Open Sourced the Tiano BIOS code, but as far as I know, the sum total of interest in that has been zero. I've not seen a single Open Source project use it, I don't recall seeing Intel ever release a patch for it. That's a pity, as there's a lot of interesting code there with a lot of interesting ideas. I'd like to see something done with that code, or at the very least an assessment of what is there.

It's not AMD's decision whether or not Coreboot is used in place of AMI/Award/Phoenix/etc.; that's up to the motherboard makers themselves. Coreboot has to be stable and fully support all the chipsets, CPUs, hardware and operating systems attached to that board. And on top of that, it has to have a full tool kit to enable the maker to easily customize the BIOS for their exact board configuration.

But I will add that I too have been eagerly awaiting a free open source BIOS that will ship on mainstream boards.

Well, there are two issues there. One is that vendors haven't cared a lot about getting it right, and two, that the BIOS itself as a specification is pretty limited.

Replacing the BIOS with EFI or something more up to date and extensible could potentially solve the second problem.

But ultimately, vendors are lazy and tend not to bother doing it right. More often than not they just use a stock BIOS which is itself buggy. Really, it's probably the BIOS manufacturers that ought to be taken to task for screwing it up.

Wow. Sucks to be your board. I don't think I've ever had any big problems with BIOSes on desktop boards (or even on server boards past the "a few updates after public release" point). My current setup doesn't have ANY that I know of, or care about... and newer versions of the BIOS simply add support for newer procs/steppings.

Now, on server boards, I've only really had problems with boards from Tyan, though mostly it was because they mislabeled things or couldn't spell ("CPU1 FAN DOESN'T DETECTED" comes to mind).

Reprogramming the BIOS is not a good idea unless you've some method of recovery. This is true whether you are timid or brave. CoreBoot goes through a lot of bugfixes each day, every day, and there's no telling that tomorrow's patch might relate to a problem with your hardware.

If there's a way to flash your BIOS externally, such as via JTAG, your number one concern should be to get the hardware you need. Dump the contents of the flash to some backup storage (that you can access without a working flash), then you can experiment, knowing you can restore a known-good image if a flash goes bad.

Excuse my ignorance but is it already possible to have a fully working computer that doesn't perform a single unknown operation?

Possible? Yes. Feasible for an enthusiast? Not in the first quarter of 2009. Intel and AMD CPUs contain secret microcode. There exist Free CPU cores such as the MIPS-compatible Plasma [opencores.org], but as far as I know, none are commercially fabricated in significant quantities.

I've been buying Intel because they support their 3D graphics with open source code really well under Linux, unlike AMD/ATI. But Coreboot says to support AMD, because AMD helps them run on AMD chipsets, unlike Intel.

What's more important to you? OSS graphics drivers or OSS BIOS? And by the way, if you need a decent graphics card, you're gonna need ATI or nVidia anyways, Intel doesn't make really high performance cards.

Also, ATI has open source 2D drivers and just yesterday released specs that should allow for good open source 3D drivers. Sometime in the next 6 months, their graphics cards should support OpenCL, too. ATI is the way to go for open hardware support at the moment.

ATI's binary drivers actually work, too. They had problems in the past, but I've recently bought a new card from ATI to replace my Nvidia card and I can say easily that they both work very well with the binary drivers.

I beg to differ: if you want a stable system capable of running well, consuming little power, suspending, doing compositing, Flash, etc., Intel is the only choice. Unless you play recent games, Intel really kicks the pants off ATI/Nvidia for stability (the same can be said for Windows drivers, tbh).

Booting Linux (and other free operating systems) is relatively simple: they're quite robust against quirks in the BIOS, as they're usually not part of the BIOS vendors' test suites. It's also possible to boot Linux (and a smaller set of other free operating systems) without any PCBIOS interface (int 0x13 etc.), as they don't rely on it.

Windows does. There has been, for a couple of years, a useful but very fragile hack called ADLO, which was basically bochsbios ported onto coreboot to provide the PCBIOS. Recently, SeaBIOS (a port of bochsbios to C) appeared and is a more stable, more portable choice (across chipsets) in that regard.

So yes, we're proud that we can run the very latest Microsoft system, simply because it's less of a given than booting Linux. Even VirtualBox (commercially backed, and all) seems to require an update (very likely to its BIOS!) to support Windows 7. "We were first" ;-)

I had to write a bootloader for an embedded PPC board recently (and the associated Linux kernel). I was really surprised at how easy it was. Basically three lines of C: main(), an almost-fake function using a static pointer, and a return!
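I can't speak for that poster's exact code, but the shape of such a loader is roughly this (a hosted-mode sketch with a stub "kernel" so it can actually run; on real hardware the static pointer would hold a fixed entry address):

```c
/* The loader's whole job: jump to the kernel entry through a static pointer. */
static int kernel_started;

static void kernel_entry(void)   /* stub; a real kernel never returns */
{
    kernel_started = 1;
}

typedef void (*entry_fn)(void);

/* On real hardware this would be something like (entry_fn)0x00400000. */
static entry_fn entry = kernel_entry;

int boot(void)
{
    entry();                /* hand off control to the "kernel" */
    return kernel_started;  /* only reachable because the stub returns */
}
```

All the genuinely hard parts (setting up RAM, caches, the stack) happen before this point, which is exactly the code coreboot writes in C.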

Just a random ramble, but why change the name from LinuxBIOS? Surely it would have been easier to point out the irony of Windows needing Linux to start itself. Maybe it would have gotten some people to think more of the capabilities of Linux then?

We are working on making Linux our BIOS. In other words, we plan to replace the BIOS in NVRAM on our Rockhopper cluster with a Linux image, and instead of running the BIOS on startup we'll run Linux. We have a number of reasons for doing this, among them:... [LinuxBIOS.org, Aug. 2000 [archive.org], at the bottom of the page]

This may not be the reason this project changed its name, and IANAL, so take this with a block of salt, but one reason I can think of immediately is trademark dilution. Since the BIOS has little to do with Linux (and vice versa), using Linux in the name simply confuses things by suggesting a connection that isn't there. Really, "Open BIOS" is more accurate than LinuxBIOS, and CoreBoot is probably better yet from a trademark standpoint.

EFI is useful in the same way Open Firmware on PowerPC and SPARC is useful. It gives you an extensible system that can do different things with devices. This is great on a system where you don't know what the hardware may be (i.e. workstations), but it starts to fall down when you get to servers, blades or embedded systems.

On most systems these days, a BIOS of any type takes between 3 and 30 seconds to boot to the OS. This is simply not acceptable for many blade and embedded system designs (even for some server designs it isn't acceptable).

I can boot a system with coreboot in a second or less to the OS. This is really the most important part of coreboot. (For embedded systems, most of the time our target is in the .2 to .5 second range from power-on to OS start... this all but excludes ia32 from many embedded applications today.)

Coreboot by itself is initialization firmware only. That means it doesn't provide any callable interfaces to the operating system or its loader, so you cannot ask coreboot to load a block from disk. That's where BIOS, OpenFirmware and (U)EFI come into play to fill the gap. They don't define the firmware, but its interface.

I haven't read the article, but I'm quite sure that they're using SeaBIOS - running on top of coreboot - to boot Windows. In this setup, coreboot performs hardware initialization and SeaBIOS provides the legacy BIOS services Windows expects.

It's an interesting idea. I think open source toner might be a bit tricky (likely the manufacturing process is difficult) but it certainly might be possible to work out the components for an open source driver board that could be used as a module in existing printers, bypassing the "chips" and allowing the simple use of third party toner. It might then be possible to move forward with open source printer hardware from there.

The answer is simple: just don't buy a printer that requires chips embedded in the toner cartridges. Most laser printers, AFAIK, still don't. Even my HP Laserjet 2300 doesn't require it (though it will give you a warning message when you turn on the printer with a non-HP cart, but that doesn't matter).

It's mainly the stupid inkjet printers that have this problem, and anyone stupid enough to use an inkjet printer deserves to be ripped off, when laser printers are so much cheaper to operate.

Looking at the CoreBoot site, it seems their best support is for the AMD Geode chips. It is ironic that this Slashdot article comes right after the one saying AMD has no successor planned for the Geode line and it may fade away.

BIOS is way, way obsolete. The bigger question is whether or not Windows 7 will be bootable on EFI machines. This article [is.gd] says that Windows 7 "delivers support" for EFI. It is likely that means that it will be bootable on EFI equipped machines, but there's wiggle room there.

With Intel, Apple, and rapidly the entire rest of the x86, x64, and IA64 hardware world moving to (or explicitly running on, in the case of Apple and IA64) EFI, what is the whole point of CoreBoot besides being a nifty experiment? Intel won't let anything replace EFI anytime soon. Trust me - I've had EFI shoved at me for almost a decade.

BIOSes only boot MBR/FDISK disks, and MBR tops out at 2 TB. So now that the largest drive MBR can support was announced today, isn't it time to move forward to a new boot system that can boot GPT disks (which can go well past a petabyte)?
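The 2 TB figure follows directly from the MBR partition table layout: partition start and size are stored as 32-bit sector counts, and sectors are traditionally 512 bytes.

```c
#include <stdint.h>

/* MBR stores partition sizes as 32-bit sector counts, so with 512-byte
   sectors the largest addressable partition is (2^32 - 1) * 512 bytes,
   just under 2 TiB. */
static uint64_t mbr_max_bytes(void)
{
    return (uint64_t)UINT32_MAX * 512u;
}
```

GPT uses 64-bit sector counts (LBAs) instead, which is why it scales far past the petabyte mark with the same 512-byte sectors.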

True, some of what a BIOS does (e.g. initializing the display, sending reset to printers etc) is not really needed. Including the POST. But much of it makes diagnostics easier, in case of an OS boot failure.

You can follow the early boot process if the graphics is initialized - if it fails, control is never returned to the BIOS, so you would end up with a black screen and a dead computer without graphics.

You still need a video BIOS to display VGA until/unless the OS provides real drivers.

Perhaps more importantly, you still need the BIOS to provide hard drive support until the native support is loaded.

In the case of Linux, all the needed drivers will be loaded from the initramfs within the first few seconds of boot, but the bootloader (Grub or Lilo) still needs the BIOS to read the kernel and initrd from disk. The only way around that, that I can think of, is for coreboot to natively support loading and running the kernel itself.

I don't know either, but I would guess that Coreboot has a menu you could bring up when you power up which would allow you to choose your boot device order. However, many modern motherboards have many settings in BIOS for things you mention, like RAM timings, voltages, CPU multipliers, etc., and it seems like most of those would be board-specific.