Posted by Unknown Lamer on Wednesday December 26, 2012 @01:00PM
from the just-a-flesh-wound dept.

After two years of work, Debian m68k has working build servers, and is slowly working through the backlog of stale packages. "Contrary to some rumours which I've had to debunk over the years, the m68k port did not go into limbo because it was kicked out of the archive; instead, it did because recent versions of glibc require support for thread-local storage, a feature that wasn't available on m68k, and nobody with the required time, willingness, and skill set could be found to implement it. This changed a few years back, when some people wrote the required support, because they were paid to do so in order to make recent Linux run on ColdFire processors again. Since ColdFire and m68k processors are sufficiently similar, that meant the technical problem was solved. However, by that time we'd fallen so far behind that essentially, we needed to rebootstrap the port all over again. Doing that is nontrivial, and most of the m68k porters team just didn't have the time or willingness anymore to work on this; and for a while, it seemed like the m68k port was well and truly dead."
The tales of acquiring the needed hardware are pretty interesting (one machine is an Amiga in a custom tower case).
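For the curious, the thread-local storage feature in question looks like this from userland. A minimal C sketch (plain pthreads; nothing in the source itself is m68k-specific, which is the point: glibc needs the toolchain and kernel of every port to make __thread work):

    /* Minimal sketch of the TLS feature recent glibc depends on:
     * each thread gets its own copy of a __thread variable. On m68k
     * this needed new relocations, a TLS ABI convention, and kernel
     * support for locating the per-thread block.
     * Build with: gcc -pthread tls_demo.c */
    #include <pthread.h>
    #include <stdio.h>

    static __thread int counter;    /* one independent copy per thread */

    static void *worker(void *arg)
    {
        ++counter;                  /* touches only this thread's copy */
        printf("thread %ld: counter = %d\n", (long)arg, counter);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;                   /* both threads print counter = 1 */
    }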

You boot into some sort of Kickstart/Workbench, then run an AmigaOS program that acts as the Linux bootloader, passing it the kernel and, if needed, the initrd from the AmigaOS filesystem. It loads them into memory and jumps into Linux. From then on, Linux is the OS in charge, making ext4 available, etc.

Sadly, no kexec yet. Having to copy out the kernel instead of being able to load it directly from ext4 (or whatever you choose) is a bit of a pain.
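What kexec would buy here: the running kernel could load its successor directly via the kexec_load(2) syscall instead of round-tripping through AmigaOS. A hedged sketch of just the call shape (buffer, sizes, and entry point are all placeholders; this is not a working loader, and m68k lacked the kernel side of this at the time):

    /* Sketch of kexec_load(2): hand the running kernel a successor
     * image. All values below are placeholders; a real loader fills
     * in image data and machine-specific load/entry addresses. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    struct kexec_segment {          /* layout as in kexec_load(2) */
        const void *buf;  size_t bufsz;
        const void *mem;  size_t memsz;
    };

    int main(void)
    {
        struct kexec_segment seg = { NULL, 0, NULL, 0 };
        if (syscall(SYS_kexec_load, 0UL /* entry */, 1UL, &seg, 0UL) == -1)
            perror("kexec_load");   /* this sketch expects EPERM/EINVAL */
        return 0;
    }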

> because they were paid to do so in order to make recent Linux run on ColdFire processors again.

Do you have the attention span of a gnat?

Do you have the reading comprehension of a gnat? The 'Why' that was postulated was 'why resurrect the m68k port'. It had nothing to do with someone else paying to port it to Coldfire. The latter just gave the m68k geeks an opening. The rest of the 'Why' is 'because we can.'

Something I know that hasn't been mentioned: Freescale released at least some cores under a sorta-free license nearly a decade ago, so I would think it amusing to make a multi-machine build farm out of a big FPGA (or a board full of FPGAs...). At least back then, there were not many options for running Linux on an officially released FPGA soft core. I would imagine there are more options now if you want to run Linux on an official soft core.

"not every application needs multi-gigaFLOP/second performance or even an integrated FPU"Well yeh, not everybody does. But do they even sell cheap 68k chips? And if they do, don't they sell *cheaper* ARM chips! Just because you don't need it, doesn't mean there's any advantage in using this.

If they make it, will they come? Because if nobody uses it, it isn't properly tested, and if it isn't properly tested, nobody will use it.

ARM isn't always the right choice, and it does have its problems. Additionally, if you have to interface with an old system, it's often easier to just grab an M68K or an old Intel 8xxx-series device; in a lot of cases the interface was already designed for the older devices. And the M68K is advanced and fast enough to easily interface with modern hardware, so it actually makes for a pretty good bridge when you have to get two incompatible systems working together and don't want to go through the trouble of starting from scratch. Not to mention that the M68K is a great device for introducing people to the hardware side of embedded system design: fairly cheap, comes in easy-to-solder packages unlike most ARM processors, robust, well documented, loads of software written for it...

Totally worth the effort! Just because something is old doesn't mean it's not worth using anymore. Look at the Intel 8051 architecture. You'll find several microcontrollers based on that architecture in your house at this very moment. Sure, it's an ancient 8-bit CISC architecture, but most designers are very familiar with it and it's one of the cheapest microcontrollers available, so it still sees quite a lot of use. A fun fact: it's commonly used as a USB host controller.

I keep hoping someone will take the 6809 architecture, extend it to 64 bits wide per register, add an MMU, implement underneath a modern microcoded engine (the original was random logic), and throw an FPU on-board. Maybe add a few megs of register pages for context switching, a few instructions to give it supervisor/user smarts.

It was *so* easy to write code for that thing; it had pretty much the perfect mix of instructions -- way better than the 68000, for instance. The 6809 was the best 8 bit uP ever from a programming POV. I wrote a couple of compilers for it over the years, it felt like the uP designers totally knew what I was going to need.

Yes, that's true. But if the software is 10x faster because the instruction set allows the compiler to produce excellent code, but the hardware is 1/2 as fast, you end up with software that is "only" 5x as fast.

Just throwing numbers out, admittedly, because I don't really have deep familiarity with today's machine code, but in the 6809's day it could do with one instruction what took several on the 6800 or many on the 6502, and the register capabilities were such that you didn't need to be constantly storing temporaries back to memory.

Thing is, we have been avoiding microcode. It's slow; RISC makes for faster designs that are easier to design and optimize. You can make a simple hardware FSM that gets the job done, and it uses less area as well. We don't quite care about assembly programmers anymore, I guess. This leads to ugly but fast instructions. RISC also makes pipelining damn easy, and the gain from that outweighs your coding-efficiency increase by an astronomical margin. And in the end the compilers don't seem to mind much if they use good optimizations.

Yeah, I probably just don't really understand the difference. Looking at the code the GCC compiler produces, all I can think is "that's awful"... but if it's some crazy factor faster -- like twenty or so -- then yeah, it'd come out ahead of something that took 2...6 clocks per instruction (which is where most of the 6809's instructions landed.) Also, the 6809 was random logic... one of the reasons they said they couldn't really speed it up much.

Well yes, but you have levels of complication in that. I'd hardly call the 6809 a RISC architecture. Complicated asynchronous logic becomes rather unreliable if you keep demanding faster speeds from it.

I don't have the patience to try and find a USB->serial adapter to use my ancient GraphLink, otherwise I'd load it on my 89 and see how true-to-life it is.

The REAL challenge would be to make a passable port of Sonic The Hedgehog to the Z80. Sega did this [wikipedia.org] both as a last hurrah for the Sega Master System, and a port to promote the Game Gear.

Right, but we are doing this "the Debian way", that is, running a native compilation and package generation in clean throw-away chroots. Debian package generation is not just compilation, it's a bunch of other stuff (dependency management, shared library management, etc.) and, personally *and* from my experience with the BSDs and FreeWRT, I am of the opinion that cross-compiling is only good for initially bootstrapping a port.

I doubt I'll *ever* make use of this project myself, but I'm inspired by the tale of how it went from "left for dead" to a full-on revival, based on something as unexpected as a rather unrelated third-party software project (an Atari emulator that happened to allow the m68k developers to work on their code from any laptop they happened to be using), as well as a single motivated individual bent on making his shell run on all known variants of Debian.

I still have many 68k Macs that could be put to some kind of use if they could run a modern OS. The issue is that everything that sits on top of the Linux kernel has unfortunately followed the Windows and Mac OS trend of requiring GPU support. I don't know (yes, I could Bing it) if LXDE requires compositing to run decently...

Not just Debian. Two other people interested in porting mksh to anything possible and then some, as well as I, are trying to get an A/UX box running.

Also, what's the leading GNU/Linux distribution on cris (ETRAX 100)? Debian doesn't support that… (Also, I dabble in klibc and dietlibc a bit, and the former's got cris support code that warrants testing.)

The metaphor is all wrong. It's Christmas, not Easter. You're supposed to say that an updated version of the Debian m68k port was delivered by Santa, or that Rudolph helped them find their way back to the main branch, or that wise men brought Debian gifts of gold, frankincense, and m68k ports.

Like anything - if someone does the hard work, and it's supported enough, and it doesn't break OTHER architectures, there's no reason why not.

It just seems that m68k (and other projects along the same lines) have people willing to do all that work, whereas the 386 architecture doesn't (yet?).

This is the thing I actually quite like about Linux. MCA support? Few used it, fewer wanted it enough to do the work, so bye-bye. But other buses? They are still around. This applies to buses, architectures, drivers, features, even "helper code" of one type or another.

If someone's willing to put in the back-breaking work to get it up to standard, there's no reason NOT to let it in. Unfortunately, that standard has to be high for a number of reasons (e.g. legal obligations like licensing, coding quality, support, ongoing maintenance, etc.). And for some, it's so high it doesn't justify the work.

Linux is a meritocracy, like most open-source code. If there's a reason to do something, and it's done well, it happens. If not, it doesn't. If only parts of law and government were like that.

The specific reasons to drop 386 support from the kernel were because 1) its MMU is substandard compared to 486 and later and causes a lot of complications in the kernel, 2) it doesn't have CMPXCHG which is used for semaphores (in glibc, not just the kernel), and 3) it doesn't have the byte swap instruction which makes a big difference in network code.

Dropping 386 support is like dropping 68000 and 68010 support. It's the oldest sub-architecture, lacking a lot of good improvements that came in the next generation.
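To make points 2 and 3 concrete, here's a hedged C sketch of the two operations; the builtins used are standard GCC, and on i486 and later they compile down to single CMPXCHG/BSWAP instructions, while a plain-386 target has to fall back to slower multi-instruction sequences:

    /* The two i486+ instructions at issue, via portable GCC builtins. */
    #include <stdint.h>
    #include <stdio.h>

    /* Atomic compare-and-swap: the fast path of glibc's semaphores
     * and mutexes. Compiles to LOCK CMPXCHG on i486 and later. */
    static int try_lock(volatile int *lock)
    {
        int expected = 0;
        return __atomic_compare_exchange_n(lock, &expected, 1, 0,
                                           __ATOMIC_ACQUIRE,
                                           __ATOMIC_RELAXED);
    }

    /* Endianness flip, ubiquitous in network code (htonl and friends).
     * A single BSWAP on i486+; shifts, ANDs, and ORs on a plain 386. */
    static uint32_t bswap32(uint32_t x)
    {
        return __builtin_bswap32(x);
    }

    int main(void)
    {
        volatile int lock = 0;
        printf("lock acquired: %d\n", try_lock(&lock));       /* 1 */
        printf("0x%08x -> 0x%08x\n", 0x11223344u, bswap32(0x11223344u));
        return 0;
    }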

As a matter of fact, I do have gear in use that is affected by the removal of 386 support. (The Linux Terminal Server Project crowd in particular is affected by this also.) If I were trying to troll, I think I'd have been a bit more... obnoxious with my wording? Back to the topic at hand, my understanding was that it wasn't the 386's shortcomings that doomed it, it was that they had to invoke workarounds in the x86 branch for them, and THAT was where the hardship came from when trying to move the ball forward.

> Nobody uses anything anymore that won't work a 486 build and thus requires 386, aside from someone with a 20-year old PC.

This is factually untrue. The chip was in production until 2007 and shows up in all sorts of odd/interesting things. There's an entire ecosystem of STD-BUS and Multibus 386 systems that are still supported and could run Linux, not to mention things like the Nokia 9000.

I'm somewhat surprised they go to the trouble of resurrecting and upgrading Amigas to do this work. There are plenty of recent ColdFire dev boards that could be used, if they can get someone to donate one. A coworker tells me the default install for some of those boards is dicey and could stand to benefit from some attention.

We (author of the article being quoted here ;-) actually do own ColdFire V4E boards, which Freescale donated at some point. Unfortunately they can't be used for the plain m68k port without some substantial work.

While the ColdFire is sufficiently similar to the m68k that code written to support one processor (at least in userspace) benefits the technical situation for the other, they are unfortunately also sufficiently different that you can't just take binaries for one processor and try to run them on the other.

Perhaps because you are more of an "appliance operator" you don't appreciate the science and engineering behind the scenes.

Working with old hardware, like new hardware, presents a lot of challenges. The learning that takes place is very useful.

Unlike new hardware, old hardware is cheap and plentiful. Yard sales, garages, surplus stores... this is the place to go. For new hardware, you are looking at some money.

The learning that takes place on the old hardware is useful on problems beyond this "ancient platform". The folks that accomplished this port have flexed their brains around complicated problems, and are thus able to process other complicated problems more efficiently.

Bottom line, some people are passionate about engineering and science, and do it because they enjoy the learning process.

Ever hear of ColdFire? [wikipedia.org] It isn't nostalgia (not yet, at least); it's still a viable embedded CPU architecture, less than 10 years old. It's a RISC-ified 68K, with a few instructions removed (they can be implemented via the illegal-instruction trap) to make the RISC design work. If you had bothered to read TFS, you would see that was what started all this.

Maybe you should put your time into something more constructive instead of trolling for no useful purpose at all.

> June 2010, Freescale announced the ColdFire+ line, which is a ColdFire V1 core using a 90 nm TFS technology

90nm? In 2010? That should be enough to tell you that Freescale doesn't care. A chip announced in 2010 (no idea when, or even if, it actually shipped), using a process that was state of the art in 2002. Cheap parts were using 65nm in 2010. 90nm is the stuff you stick in the fabs that you don't have the spare capital to refurbish and want to keep ticking over. Followed by:

> The future of the ColdFire architecture is uncertain given that Freescale has been focusing on ARM-based cores in this market segment

I'm not familiar with ColdFire, but the feature size of the manufacturing process is no way of measuring the relevance of a given product. There are a ton of applications that actually require reliable processors rather than the latest tech. Some embedded applications may require a 10-20 year lifespan under radiation, extreme heat, magnetic interference, and so on. Just because they aren't the best choice for building handheld devices to play Angry Birds, or desktop computers, doesn't mean they aren't relevant.

> 90nm? In 2010? That should be enough to tell you that Freescale doesn't care. A chip announced in 2010 (no idea when, or even if, it actually shipped), using a process that was state of the art in 2002.

ColdFire is a line of microcontrollers. Microcontrollers are not built on state-of-the-art CMOS processes, partly to keep the cost down, partly to keep the power consumption down, and partly because they need high-quality embedded NOR flash, making them not pure CMOS anymore. This means you're making a whole separate, specialized process for your MCUs. In that context, 90nm is pretty good. I think the absolute top of the line for flash MCUs right now is 55nm, which might not even be in production yet.

Please put your time into something more constructive than yet another implementation of the standard slashdot "work on a project I like, not that thing you find interesting" post that serves no purpose aside from trolling.

Finding bugs in Debian, gcc, eglibc, and the Linux kernel by running them on minority systems is a decent outcome of this, I'd say.

The purpose of having bragging rights that mksh works on all platforms, no matter how obscure, is personal, so you can't measure relevance anyway. I've even done DEC ULTRIX and Haiku successfully. Oh, and Plan 9…

You can really breathe new life into older computers. The results are often startling, and better than their Intel cousins from the same era. Not to say that this is a good "production environment" strategy, but if you have old Macs collecting dust and you'd like to learn some real Linux-fu, install m68k Linux on them. You will end up with useful computers, sometimes even useful for a light desktop, and definitely useful for low-volume web servers.

Needless to say, even *if* there's an exploit out there for, say, the web server: nobody's going to write shellcode for m68k.

For the same reason, Miod Vallat of OpenBSD fame runs his website on a VAX, and the BSI is said to still use BS/2000 somewhere. Even if it's not unbreakable, nobody's going to be able to exploit it ;-) At least not your average run-of-the-mill script kiddie.

I have a stash of retrocomputers and consoles, and for every one of them that I can get to run *nix, it's always cool.
The Amiga now has Debian m68k and NetBSD in new versions, the PS2 has the kernelloader live CD, my old PPC Mac has Linux Mint, and my Sam460 has Debian too.
As for the Speccy, well, at least it got esxDOS :)

I'm glad there is an apparent consensus on "cool." I just went through a recent horrible forced move and I was thinking how much of an idiot I could be for hanging onto all the really old mac stuff plus documentation.

It's nice seeing Linux run in WinUAE, but the distro is rather dated. It would be nice to have something recent running in WinUAE. And before you ask, I have no idea why this is so cool to me and why I want this so much. I just know that I do. Having a recent distro running in WinUAE is for some odd reason very nifty.

It would be more interesting to get it to run AMIX, though... Linux/m68k can already run under emulation on qemu (generic 68k, not Amiga-specific), and there is very little (if anything) available for Linux/m68k that doesn't run on Linux/x86. I never understood why so many hardware emulators only seem able to run Linux (which the emulator itself generally runs on anyway) and cannot run whatever was the native OS of the time for that hardware.

Last time I looked, qemu-system-m68k lacked MMU support. Someone recently said qemu-user-m68k was usable, but that does syscall-level translation (I wonder what they do about the TLS and atomic-cmpxchg syscalls that are recent-m68k specific) and thus doesn't suffice.

I have a working AMIX system, a genuine Amiga 3000UX, although I don't keep it powered up all the time... It comes with an old version of GCC (1.4.x if I remember correctly), so it may be possible (albeit time-consuming) to compile mksh on it. I will give it a try once I finish moving house.

cbmuser already issued an Intent To Package FS-UAE to Debian, which makes use of WinUAE's "accurate emulation".

I believe that you should be able to use wouter's d-i build from http://people.debian.org/~wouter/d-i/ [debian.org] to install an m68k system from unstable (with the usual caveats, i.e. installing or debootstrapping unstable does not always work). Note that the build is still "fresh" and nobody has tested it yet, so a failure would not mean an emulation problem.

Been following the MMU development in WinUAE for a while, and I think you'd be pleased to know the base MMU code was lifted from ARAnyM in the first place. A few corner-case bugs turned up along the way, which Toni found. [abime.net] I believe there was some effort to back-port those fixes to ARAnyM, or at least let their devs know about them. If you're interested, you could drop him a message and I'm sure he would point them out.

Ah, thanks for the additional background. Yes, a pointer to the problems would probably be appreciated by the ARAnyM developers.

The d-i will not work right now, not with the normal mirrors at least, due to debootstrap being unable to cope with needing to pull packages from *two* distributions (unstable and unreleased), we think. We're working on it.

There's a big difference between being a hobbyist developer for an old platform and maintaining a ported operating system for it. It's time to let it go, folks. I have quite a bit of nostalgia for my old 8088, but it doesn't mean I'm going to put weeks or months of my life into writing code for it anymore. There's quite a lot of low-power modern architectures out there that a person could spend their time porting software to instead.

Right, but I recently tried to install NetBSD/atari on AtariFrosch's box, and the installer died on its own. Having BSD experience, I managed to install it anyway by manually untarring the sets, running MAKEDEV, etc., but the kernel seems to have booting into securelevel -1 and single user hardcoded, so the system doesn't come up afterwards without some manual effort on each boot.

No NetBSD® person I asked could help, and the mailing list was dead as well.

> the kernel seems to have booting into securelevel -1 and single user hardcoded, so the system doesn't come up afterwards without some manual effort on each boot.

> No NetBSD® person I asked could help

Some help from an unexpected place: I suspect it cannot find the root filesystem, then drops into single user and asks you where it is. Is that the case? If so, you can patch the kernel with gdb (or rebuild a kernel, but that takes longer) to hardcode the right root device. Send me a private message if you need assistance.