A one-bit processor

Put on that abstract thinking cap, get out the pen and paper, and spend some time figuring out how this one-bit processor works. [Strawdog] came up with the concept one day during his commute to work (don’t worry, he takes the train… much safer than [Dave Jones’] frightening drive-time podcasts). He sketched it out on paper to make sure he knew where he was going with the project, then collaborated with Legion Labs to implement it in Processing as an easier way to visualize its functionality. Since it’s one-bit, there’s only room for one instruction: a copy-then-branch-if. It copies the current bit to one address, and if that bit was one, it branches to a second address.
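To make the single instruction concrete, here is a minimal simulator sketch. It assumes an instruction is a `(bit, addr_a, addr_b)` tuple — the layout, memory size, and step limit are all illustrative assumptions, not the original design:

```python
# Hedged sketch of the one-instruction machine described above.
# Assumed instruction layout: (bit, addr_a, addr_b).
# Semantics: copy `bit` to memory[addr_a]; if bit == 1, branch to
# instruction addr_b; otherwise fall through to the next instruction.

def run(program, mem_size=32, max_steps=100):
    mem = [0] * mem_size
    pc = 0
    steps = 0
    while pc < len(program) and steps < max_steps:
        bit, a, b = program[pc]
        mem[a] = bit          # the copy
        pc = b if bit == 1 else pc + 1  # the branch-if
        steps += 1
    return mem

# Example: set memory bit 3, then run off the end of the program (halt).
mem = run([(1, 3, 1), (0, 0, 2)])
print(mem[3])  # -> 1
```

Running the program by hand on graph paper, as suggested later in the comments, follows exactly these two steps per instruction.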

Going a bit fast for you? We think the description is fairly good, but if you can’t quite put it together from the article’s description, you may want to build this 2-bit paper processor and learn how it works first. It should teach you the basic concepts you need to understand the 1-bit version. As you can see in the image above, there’s also a single-step feature in the Processing example that lets you analyze the effects of each instruction during program execution.

@Stripe so, how is it “1-bit”? Sounds like the classic 0-bit processor to me.

This is not something new or something this guy just figured out recently. I remember reading about this years ago. Did he rediscover it on his own? Or did he read about this on someone’s blog and build an emulator? I’m not sure what the accomplishment is here.

Excuse me, that should be “technically uses ZERO bits per instruction” (since there is only one instruction, information theory dictates that you don’t need to track which one you expect the computer to execute).
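The information-theory point above can be made explicit: selecting one of n equally likely opcodes takes log2(n) bits, so a single-opcode ISA needs zero bits for the opcode field. A one-line check:

```python
# Bits needed to select one of n equally likely opcodes: log2(n).
import math

def opcode_bits(n_opcodes):
    return math.log2(n_opcodes)

print(opcode_bits(1))  # -> 0.0  (one instruction: no opcode field needed)
print(opcode_bits(2))  # -> 1.0
```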

Yes, sure you could: just imagine there is a serial-to-parallel converter in front of the fetch stage of this processor. Or, if you don’t accept that, a logically equivalent approach is to use extra registers where you set one bit every clock cycle (using implicit operands that you don’t need to store in the instruction), and once that is done, set a flag to perform the actual operation, which uses those registers as operands.
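The serial-to-parallel fetch the comment describes can be sketched as a shift register: clock the instruction in one bit per cycle, then present the assembled word in parallel. The 11-bit width is an assumption taken from the encoding discussed later in the thread:

```python
# Sketch of a serial-to-parallel fetch: shift one bit in per "clock
# cycle" until the full (assumed 11-bit) instruction word is assembled.

def shift_in(bits):
    reg = 0
    for b in bits:              # one bit arrives each cycle
        reg = (reg << 1) | b    # shift left, insert new bit at LSB
    return reg

word = shift_in([1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1])
print(bin(word))  # -> 0b10001110001
```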

Whoa – I appreciate the attention but i didnt do this all myself. LegionLabs deserves the credit (or perhaps blame) for thinking it up. I dont think on my ride to work, its a good day if im even awake. My participation was more on the simulator writing end.

While designing it, I wasn’t sure whether it was truly one bit either. I suppose it depends on what part of the processor you consider 1 bit. While trying to reduce the ALU for another design, I first reduced that part to 1 bit, then started eliminating transistors from the ALU design (until it was just a wire, actually). That’s where the name came from, and it sort of stuck.

While there is only one instruction, a NOP can be emulated by adding a line of code that copies a 0 to an address you never use. This is akin to clearing a register, really, but if you don’t use that register it may as well be a NOP.
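In the hypothetical `(bit, addr_a, addr_b)` tuple convention, the NOP trick looks like this — `SCRATCH` is an illustrative unused cell, not part of the original design:

```python
# Hedged sketch of the NOP trick: copy 0 to an address the program
# never otherwise reads, so nothing observable changes and control
# simply falls through (bit = 0 never branches).

SCRATCH = 31  # hypothetical unused memory cell

def step(mem, pc, instr):
    bit, a, b = instr
    mem[a] = bit
    return b if bit == 1 else pc + 1

NOP = (0, SCRATCH, 0)  # addr_b is irrelevant when bit = 0

mem = [0] * 32
pc = step(mem, 4, NOP)
print(pc)  # -> 5 (memory unchanged, control advances)
```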

Anyway, I think it is a fun toy, and I run it in my head or on some graph paper sometimes. I hope you have fun with it too!

Indeed, I think it’s a great way to introduce someone to efficient processor design and understanding how an instruction can be decoded. Unfortunately I can’t see myself playing with this as it’s too close to work (in a professional capacity I work on processor design).

Your point about it being 1-bit depending upon what part of the processor you look at is a good example of the difference between the instruction set and the micro-ops (the internal instructions of the processor). While the processor could be considered 0-bit (it’s the same instruction executed each cycle), internally it sounds like you’ve decoded it into a 1-bit micro-op.

I think one could also argue this is anything but a 1-instruction CPU, and that you “self-modify” the code, because each instruction includes a 1-bit value and two 5-bit operands. Another way to look at it is that you have a 1-bit ISA:
0 = clear bit at #A
1 = set bit at #A, branch to #B
where #A and #B are supplied literals, but as they’re literals they could be argued to be part of the ISA (if I were to draw a classic fetch-decode-execute diagram, it would show that the whole 11-bit instruction is read in the ‘fetch’ stage).
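The 11-bit word suggested above can be packed and unpacked with a few shifts. The bit ordering (opcode/data bit first, then #A, then #B) is an assumption for illustration:

```python
# Sketch of the 11-bit encoding discussed above (assumed bit order):
# [ bit (1) | #A (5) | #B (5) ]

def encode(bit, a, b):
    assert bit in (0, 1) and 0 <= a < 32 and 0 <= b < 32
    return (bit << 10) | (a << 5) | b

def decode(word):
    return (word >> 10) & 1, (word >> 5) & 0x1F, word & 0x1F

word = encode(1, 3, 17)
print(decode(word))  # -> (1, 3, 17)
```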

@Sheldon: Thanks for drawing out the explanation and looking at it from various angles… Particularly regarding the micro-ops point of view. I quite enjoy looking at simple processor designs like this and thinking about how they work, probably because it doesn’t resemble what I do at work.

Hey, I know it’s been a while, but I moved to Vietnam and I have some time over New Year to implement this. I’ll take the lazy route and use a few lines of ASM on an ATmega to implement the opcode, and memory-map some I/O with the MCU ports. I have some IS61C64AL 8-bit CMOS static RAM and some MAX232 level converters. I figure I can load software into the SRAM via RS232, then flip a switch to put it into ‘CPU mode’. Since it has fully static operation, I’m not sure what to use as a clock. I don’t have an Arduino, my Raspberry Pi is occupied, and my quantum TRNG (for some Bell’s-inequality + halting-problem fun) is on the other end of the planet. That rules out most of the ‘trolling’ options!

I guess I could use an actual wallclock as the clock, or a 555/hex inverter with a knob to change the speed dynamically. Obviously the speed knob would go up to 11.

Hey, I really have to point out for the record that this is a joint project between Sean (aka LegionLabs) and me. It’s not all my work, and I never meant to take credit for it all. Credit where it’s due: Sean thought it up and worked out the theory, and I mainly worked on the simulator and assembly. And I made the blog post, which is I guess where the confusion started. In any case, glad to see some people enjoying it.

It’s amazing how many people find the “bitness” of a processor hard to understand. The “bitness” or “width” of the processor is the number of bits it can naturally and generally process at a time. This device operates on a single bit at a time, so it’s a one-bit computer. It doesn’t matter how many instructions it has, or how many addresses are encoded in it – that’s the instruction set width.

A Turing machine is a one-bit computer with multiple instructions and an infinite instruction width (since there is no limit to the number of states) – it is still 1-bit.

An x86 processor in a typical PC is 32-bit or 64-bit, since that is the maximum width for general-purpose data. It doesn’t matter that it can also work with 128- or 256-bit data, nor that some instructions are 8 bits long and others can be huge.

@David: I think in this case the “bitness” is muddied because, yes, it has a 1-bit ALU, but it has three datapaths: one to handle the instruction and the others to handle the addresses held in the operands.

The crux is not that it has a 5-bit-wide address bus; it’s that it acts on the data representing each address five bits at a time.

It’s more like a 1-instruction, 1-bit cpu with a 5-bit co-processor (for lack of a better term).

I might have to disagree! If you have an arbitrarily fast 1-bit processor, you can emulate any number of bits as output at some speed using memory-mapped I/O.
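The memory-mapped I/O idea can be sketched as follows: the 1-bit machine writes one bit per step into consecutive “port” addresses, and an external observer reads the whole group back as an 8-bit value. `PORT_BASE` and the MSB-first ordering are illustrative assumptions:

```python
# Sketch of emulating wider output on a 1-bit machine via memory-
# mapped I/O: one single-bit copy per step into consecutive port
# cells, read back together as a byte.

PORT_BASE = 8  # hypothetical base address of an 8-bit output port

def write_byte(mem, value):
    # One 1-bit copy per step, MSB first.
    for i in range(8):
        mem[PORT_BASE + i] = (value >> (7 - i)) & 1

def read_port(mem):
    # The observer reassembles the eight cells into one byte.
    out = 0
    for i in range(8):
        out = (out << 1) | mem[PORT_BASE + i]
    return out

mem = [0] * 32
write_byte(mem, 0xA5)
print(hex(read_port(mem)))  # -> 0xa5
```

An arbitrarily fast 1-bit core just pays eight steps per byte of output, which is the commenter’s speed trade-off.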

Seems silly? Consider that when messing around with new CPU technologies, a 1-bit processor might be suddenly useful. It also might not, I suppose!

Or, consider putting a whole bunch of these in parallel. Define how they are linked together and mapped to memory in software, like logic cells in an FPGA. Probably performance would be unimpressive, but it’s a really interesting thing to think about as an academic exercise.