
dartttt writes, quoting Ubuntu Vibe: "Dmitry Grinberg has successfully booted Ubuntu 9.04 on an 8-bit micro with an effective CPU speed of 6.5 KHz and 16 MB RAM. Grinberg ran this experiment on an ATmega1284p, an 8-bit RISC microcontroller clocked at 24 MHz and equipped with 16 KB of SRAM and 128 KB of flash storage. Since the onboard RAM was far too small, he added a 30-pin 16 MB SIMM to the machine and a 1 GB SD card to host the Ubuntu image. ... To get the world's slowest Linux computer running, he had to write an ARMv5 emulator that supports a 32-bit processor and MMU. A similar machine can be built very easily, and everything should come in at about $20."
There is source code available, but it's under a non-commercial use only license. Just how slow is it? "It takes about 2 hours to boot to bash prompt ('init=/bin/bash' kernel command line). Then 4 more hours to boot up the entire Ubuntu ('exec init' and then login). Starting X takes a lot longer. The effective emulated CPU speed is about 6.5KHz, which is on par with what you'd expect emulating a 32-bit CPU & MMU on a measly 8-bit micro. Curiously enough, once booted, the system is somewhat usable. You can type a command and get a reply within a minute." If you like watching a whole lot of nothing, there's a video of the boot process below the fold.

Maybe someone desperately needs to retrofit modern code to crappy old equipment? Maybe the ultra-low power requirements of an extreme low-end machine make this a fit somewhere?

Most importantly though, he did it because he could. Doing it puts his skill set far above that of most people, and having that on the resume would get him in good with nearly any semiconductor corp on the planet that needs a software or firmware developer.

It would come in handy when we decided to retroactively upgrade that space probe we sent out 30 years ago. Failure to upgrade the OS on that baby will result in the end of Earth as we know it. Hey, it could happen.

Alternatively, someone might want to design a new 8-bit CPU for certain embedded tasks where it's essential to have low power consumption and a high-end, sophisticated OS. There are plenty of extremely slow mechanical operations (combine harvesting, for example) where millisecond responses are not going to be useful, but where the complexity of the problem (varying evenness of the ground, varying field shapes, etc) means you do want to be able to handle many different types of sensor, sophisticated algorithms, etc, within something that needs to be extremely cheap to build/replace and extremely low power to run, in order to be more cost-efficient than a farmhand (who is likely to be earning minimum wage or below).

Another option is a System-on-a-Chip. At present, SoCs run into all kinds of problems because of the compromises you have to make to fit everything onto one die. If you can reduce the transistor count of the CPU component, you can spend those transistors somewhere else, which means this knowledge increases your flexibility in such systems. That's extremely valuable to know, even if you never go to this extreme.

For deep space probes, radiation is a major concern. Well, for anything in space it's a major concern, but the deeper you go into space the nastier the radiation. It's why the highest-end space-rated CPUs are so primitive compared to commercial CPUs. Being able to reduce the complexity of the CPU and use the extra space for redundancy, without reducing the complexity of the software the CPU can run, is great news for anyone wanting to rival the Pioneer 10 & 11/Voyager 1 & 2 missions in terms of longevity whilst equally wanting to match Deep Space 1 or the Mars Rovers in terms of flexibility. Knowing that you don't strictly need a 32-bit architecture to run Linux, and that you can slice out huge chunks of the architecture, gives you tremendous power.

The question is not "why make an 8-bit CPU and run stuff on it?". The question is "why load Linux on an 8-bit CPU where it's unusable?".

There are tons of embedded 8-bit processors, and all can run very complex software written in C/C++/etc.

I think this is cool, but the answer is simply "as a challenge." The microcontroller problem, and making it either powerful or easy to use, has been solved for years and is evolving. Running Linux on them was never what was holding them back.

You will have learnt so much about the ARM architecture, and have a really good view of how things really work. I'm pretty sure that if you got your hands on an FPGA board you now have the skills to make your own ARM processor.

This puts you head and shoulders above a lot of the 'ubergeeks' that lurk on Slashdot discussing how they can get another 1% from their shiny purple Corsair RAM - they just don't get it!

That may be, but DOS was 16-bit, as were Windows 3.0 and 3.1. Until OS/2 2.0 and Windows 95, there was no particular need for a 386 - a 286 would have worked just fine. In fact, the minimum one needed was an 8086. However, for an 8-bit 8085, CP/M and CCP/M were the OSes, but DOS didn't run on that. So the right comparison would be Ubuntu 9.04 on this 8-bit AVR vs. CP/M on an 8085.

Incidentally, why are they running Ubuntu on this? They could have taken Minix, which was originally written for 16 bit CPUs, and tried compiling it on this one. That's the smallest Unix that would run on an 8-bit CPU.

The problem is that the native processor has very little memory and storage. I ran Minix on an 8-bit Tandy machine with 640K of RAM as an experiment, and that had several orders of magnitude more storage, both disk and RAM, than this processor. As seen above, he interfaced a SIMM as memory, but that won't work natively, so he has to emulate.

No version of Windows ever ran on an 8-bit processor. Windows 1.0-3.0 would run on an 8086, but that is still 16-bit, and Windows 3.1 won't even run on that; it needs a 286 or higher.

But you could run Windows on an 8-bit processor the same way this guy ran Linux on one: write an emulator, on the 8-bit chip, for the 32-bit platform you want to run. Except that I don't think you could get any semi-modern Windows to run in 16 MB of RAM.

Intel sold the 8088 CPU as an 8-bit processor. It really was a 16-bit machine inside, but with an 8-bit data bus that could use all the 8-bit parts that worked with the 8085 CPU. So many people are confused and think Windows ran on an 8-bit machine. BTW, Windows versions 1 and 2 could run on a 16-bit 8088 PC XT machine, as could 3.0 in real mode; 3.1 dropped XT support but continued to run on 286 AT machines.

Good memory, you are correct. It had "real mode" (640K) for XT/8086/8088 clones, "standard mode" (16 MB, 286 protected mode) for AT/286-class machines, and "386 enhanced mode", which gave you virtual-8086 DOS multitasking along with Windows proper, in a 64-megabyte address space. No one had more than 8 MB in a 386 at the time anyway.

An 8088 is still a 16 bit machine. It has 16 bit registers and supports arithmetic operations on 16 bit values. Just the data bus is 8 bit and hence pushing those 16 bit values around the system takes more time...

The 'bitness' is the #bits in the ALU - the width of inputs to and outputs from the ALU. If that number is 8, it's an 8-bit CPU, no matter what the data bus is. However, if the registers of a CPU are 128 bit and the operations are 128 bit e.g. you can add 2 128-bit numbers, then it's a 128 bit CPU.

On one level this shows just how clever Dmitry is, and it shows excellent problem-solving skills. However, I would be more impressed if he could do something interesting with more modern technology. The technical challenges of booting a modern OS on dinosaur hardware are amazing, and if he could take his innovation ability and apply it to state-of-the-art technology, imagine what he could achieve.

If you're curious what he does and has done in his day jobs, see my response to your other post on this topic [slashdot.org] (I may have incorrectly assumed you weren't just trolling on this topic, as it's not hard to find his "Work" pages if you actually follow the link to TFA, and I suspect most Slashdot readers have at least enough familiarity with the world to be familiar with the concept of a "hobby").

Well, I hope the guy got plenty of "personal enjoyment" because I think it's a lame hack. He didn't actually get Linux working on an 8-bit processor. He got it working in an emulator, which apparently he DID write. At no point did he port the Linux kernel to a new platform. This is right up there with booting Linux on a GP2X console via Bochs.

So, to recaption this article:

"ARMv5 emu for underpowered and rarely used AVR chip. ATmega community baffled and bewildered. Oh, and it boots Linux in half a day."

To this project or to this discovery? To the project, probably no. Well, other than being excellent practice in problem-solving. To the discovery, probably yes. There have long been arguments over the minimum complexity requirements for a general-purpose OS, which is an important problem to solve as complexity is a governor of many things (cost, durability, power requirements, heat generation, etc). We already know from Turing that any CPU can run any software for any other CPU, provided the memory is available and the CPUs are Turing Machine equivalents. What we've been less clear on is what this means in practice, how to exploit it, and whether architectural limitations violate the Turing Machine equivalency requirement. We now have numbers to work with, a case study, and a proof by example that equivalency is satisfied.

On one level this shows just how clever Dmitry is, and it shows excellent problem-solving skills. However, I would be more impressed if he could do something interesting with more modern technology. The technical challenges of booting a modern OS on dinosaur hardware are amazing, and if he could take his innovation ability and apply it to state-of-the-art technology, imagine what he could achieve.

It's called a "hobby project"; you might have heard the term "hobby" on occasion - people occasionally do not-necessarily-useful-in-the-Real-World(TM) things as hobbies, such as getting old {radios, cars, airplanes, computers, etc.} to work, because it's fun for them.

If you're curious what he achieves when he's not working on his hobbies, you might want to check his [dmitry.co] work [dmitry.co] pages [dmitry.co], which are linked to from the sidebar on the site to which the article refers.

I do plenty of other things (flying planes, collecting speeding tickets, etc). This was just for fun. And it was quite fun. I never expected it to be fast enough to use, and am still quite amazed that it is usable (for some definitions of "usable").

Nice trick. However, let me point out that in 1990 Geoworks GEOS was capable of running a preemptive multitasking GUI looking much like Qt but with better automatic widget layout, on an 8 MHz 8088. I will just heave a great sigh in the name of the lost art of tight coding. No, Linux is not tightly coded. I should know. The best you can say about it is, the other guys are worse.

My old stuff works so much better than my new stuff...%#@&*&(@+++ NO CARRIER Sorry, computer crashed. Because old software was so optimized... $#@%^^++ NO CARRIER For the old equipment. The only trade-off was fault tolerance.

You will not believe how much computing power goes into making sure your computer doesn't crash every day.

Back in the old days, computers crashed much more than they do now. And it isn't that today's programmers are better; it's that there was a trade-off in how much back-end code was devoted to protecting the system.

"better automatic widget layout" - this made my day. I remember using GEOS as a boy, on a C64. It was a lot of fun going from text menus to an actual mouse-relevant UI, but sophisticated it was NOT. Automatic widget layout? There were 8 icons per window and if you didn't like where they were you could (a)bort, (r)etry, (i)gnore.

That was a limitation of the C64 video chip. Only 2 colors were allowed per 8x8 square, so any attempt to move the icons would have led to a graphical mess (like macroblocking in heavily compressed video).

In order to avoid that mess, GEOS assigned every icon to a fixed location. It was intended to fit inside just 0.06 meg of RAM, not to be fancy. (For contrast the Mac OS ran in 0.5 meg of RAM.)

My first and lasting impression of GEOS was how fast it felt. Everything I clicked on seemed instantaneous. Despite the massive advances in technology, I've yet to experience another GUI as responsive as that.

I wrote some code for it: see here [cowlark.com] (including a Linux86 execution environment, that would allow you to directly run Linux86 binaries from GEOS, that I was really rather proud of).

I can sum up the coding experience with the phrase: THE HORROR, THE HORROR.

In order to write code for GEOS you needed a monstrous, badly written and badly documented SDK and a copy of Borland C. The actual code you wrote was in a C superset called GOC, which was compiled via a buggy preprocessor into incredibly cryptic C, which was then compiled with Borland and linked with a custom linker. Alternatively, if C wasn't your thing, there was an object-oriented dialect of 8086 assembler available. The OO system was bizarre, and allowed for classes to have unspecified superclasses, where the superclass was determined at run time: the system used this to great effect in the UI, where the app author's generic UI was turned into a specific UI implementation for the device. The C bindings were full of bugs, too, including function calls which didn't save all the registers properly...

The actual architecture exploited the hell out of the 8086 segmentation architecture. Memory was organised as a set of relocatable blocks which were referred to by handles (which, under the hood, were usually segment descriptors). To dereference the memory, you had to lock the block, do your manipulation, then mark the block as dirty if you had changed it, and unlock it. The lock/unlock procedure allowed the system to ensure that the block was in memory, by paging it in if necessary, either from EMS RAM or disk. It was incredibly, utterly, un-Posix, and a complete pain to do anything in. The learning curve was insane.

Where GEOS really did well was the application stack, which was subtle and elegant. There was a mechanism to allow you to use a file as a heap backing store (using a very similar but annoyingly different API to the block API described above, but that's not really important). The system automagically loaded and saved data from the file as you locked and unlocked blocks. There were standard components for everything up to and including a complete bitmap paint package, a vector drawing package, and a word processor --- and these all used these file heaps as storage. And, of course, you could have multiple components in the same file. OLE! But done right.

By today's standards, of course, it's all a huge pile of incomprehensible, unmaintainable cruft, all inextricably linked to 16-bit 8086 code. It wasn't just utter mismarketing that killed GEOS: it was the inexorable march of time. It was simply unable to adapt to the 32-bit world. All the clever tricks they did to get decent performance out of an 8086 were liabilities on more modern hardware.

That said, towards the end, when Geoworks was in its death spiral, they did produce two different attempts to rewrite GEOS for 32-bit RISC processors: GEOS-SE and GEOS-SC. I know absolutely nothing about these other than on the wikipedia page, and if anyone has any info, I'd be fascinated to hear about it.

Interesting. I did a little bit of coding to the GEOS API as well, not much more than building their hello-world application, whatever that was. I noticed that the SDK itself would tend to crash if you looked at it sideways. I think the SDK ran on a Sun 1; I could be wrong about that.

As far as the segment architecture goes, I did a similar thing myself with the 8086 segment architecture, which extended to the 286 protected mode model nicely, and also worked fine on 386, still using the 286 memory model but

I beat that score by a large margin. Years ago I took an old 386 laptop that ran at 25 MHz (I don't recall how much RAM it had, but I am going to go with "not much") and booted DSL (Damn Small Linux) in just over 21 hours, which is over 10x as slow as the one in the article! So technically I think I had the "slowest Linux computer".

Why did it boot so slowly? Well, that was also the reason I used DSL: because it was less than 50 MB, I could fit it on a Zip drive attached via a parallel cable. It did work, and it did eventually boot, but I had to leave it overnight (I thought it would eventually just crash); it worked its way through. Also, on a fun note, when typing and executing commands it was like telnetting to the moon: there was a 4-5 second delay between typing any command and getting any sort of result. I really just wanted to see if it was possible to install and run an OS on a Zip drive connected via a parallel port. The answer is yes, but not very well.

Now that I think about it, the laptop's hard drive was also so small that even DSL was much too large for it. It probably only had a 20 MB hard drive in the thing, which is what made it necessary to try the Zip drive at all if I wanted to use it as a Linux machine. I think that necessity is what gave me the idea: I had a useless piece of hardware sitting around that I thought might be useful for something if I could get Linux on it. Turns out I was wrong... Still useless. :)

It was designed for much the same reasons the 8088 was: motherboards and the relevant chipsets were all 16-bit, so a company could save a bit of cash and development time and get an SX out there faster and cheaper than a DX.

Neither the SX nor the DX had a coprocessor. You could buy one and install it if the board supported it; I never actually saw an Intel 80387 chip, but I saw a few clones.

The whole SX/DX thing got more confusing with the 486, since it didn't mean

That's how this is best thought of. In effect, he used an AVR chip as the microengine for a vertically microcoded implementation of ARMv5, with some extensions. It's not as if Linux is running natively on an 8-bit architecture; that'd be like saying OS/360 was running natively on a 90-bit-instruction/32-bit-data VLIW-ish Harvard-architecture machine when it ran on a System/360 Model 50.

So how exactly is a processor running a program to implement another instruction set architecture, with the main memory used by the implemented ISA being accessed by special operations, and with the program and its internal data existing in a separate block of memory, different from, say, a (vertical) microcode engine, running microcode to implement another instruction set architecture, with the main memory used by the implemented ISA being accessed by special microcode operations, and with the microprogram and its internal data existing in a separate block of memory?

I've worked as a logic monkey building CPUs in the past. This is SOP in our world: we'd boot Linux on our hardware in the Verilog simulator as part of our QA. Two hours is nothing...

It's not even a new idea. Twenty years ago I used to port Unix for a living (no Linux yet); when the early RISCs came out, they came with architectural simulators, and while waiting for real silicon we'd spend the time bringing up the kernel (and compiler).

... install Linux on a '486 system with a mere 16MB of RAM? I still recall how POed I was when I needed to borrow RAM from another system to install Red Hat because the new Anaconda required 32MB. (Because, you know, all that additional memory was required for that slide show showing you all the cool features that you were probably going to be too lazy to read about.)

That's simply not true. Those little 8-bit microcontrollers are used all over the place. You probably have several in your desktop, some in your monitor, more in your TV, a whole bunch in your car. You just never see anyone trying to run one as the primary CPU on an interactive computer these days.

I design musical synthesizers using Atmega MCUs. They work really well as controllers in price-sensitive consumer applications, but booting linux on one is about as sensible as fixing your car with a spoon.

Main engine fuse blew out, I was 60 miles from anywhere, and for whatever reason, had a cheap ass fork in my car. Bent up the middle two tines, shoved the outer tines in the fuse holder, taped the hell out of it to prevent shorting and away I went.

486 came in a DX model which ran at 33/66Mhz. The 1st Pentiums came in at 75Mhz. The only 286 i remember was a Unisys 8 or 10Mhz. I'm just sayin.

The 486DX4s ran at 75 MHz (with a 25 MHz bus; despite the name, they only had a 3x multiplier, and the DX4-100 had a 33 MHz FSB). The first Pentiums were 60 or 66 MHz, with no multiplier (i.e. the CPU and FSB were clocked the same). The 75 MHz Pentiums came a year later and ran on a 50 MHz FSB (at 1.5x), and were cheaper (or at least the same price) compared to the 66 MHz model (since you had a faster CPU but slower bus), if I recall correctly.

Back in the days of dial-up and time-outs I had a co-worker running linux on two stripped down 386 machines. They didn't do anything but run ping periodically to keep the connection open. Still... it's nice to know you can still do it if you have such a limited needs as that.

1. the emulated CPU's *effective clock speed* averages 6.5 KHz in the released code (10 KHz with better RAM code, which I am releasing later today)
2. the site is still up (it occasionally hiccups with a 4xx or 5xx HTTP error, but mostly it is still up)
3. the Linux RAM is a 30-pin SIMM of 16 MB capacity, the interface to which (incl. refresh) I bit-banged using three 8-bit I/O ports. The AVR's internal RAM is used for emulator SoC state, the AVR stack, and the icache