Custom Chips As A Service

Ages ago, making a custom circuit board was hard. Either you had to go buy some traces at Radio Shack, or you spent a boatload of money talking to a board house. Now, PCBs are so cheap, I’m considering tiling my bathroom with them. Today, making a custom chip is horrifically expensive. You can theoretically make a transistor at home, but anything more demands quartz tube heaters and hydrofluoric acid. Custom ASICs are just out of reach for the home hacker, unless you’re siphoning money off of some crypto Ponzi scheme.

Now things may be changing. Costs are coming down, the software toolchain is getting there, and Onchip, the makers of an Open Source 32-bit microcontroller, are now working on what can only be called an ‘OSH Park for silicon’. They’re calling it Itsy-Chipsy, and it’s promising to bring you your own chip for as low as $100.

The inspiration for this business plan comes from services like MOSIS that allow university classes to design their own chips on multi-project wafers. Aggregating multiple chips onto one wafer brings the cost of a prototype down from tens of thousands of dollars to about five thousand dollars, or somewhere around a thousand dollars a chip.

Itsy-Chipsy takes this batch processing one step further: it’s a platform that combines multiple projects on one die. That thousand-dollar chip is now sixteen different projects, tied together with regulators, current sources, clocks, and process monitors. Using a 2 mm by 2 mm chip size, Itsy-Chipsy gives chip designers a 350 μm by 350 μm block of silicon on a 180 nm CMOS process. That’s enough for a basic 32-bit RISC-V microprocessor, in a QFN or DIP-40 package, for just one hundred dollars.

This project is a contender for The Hackaday Prize — the Prize ends in November and we’d be amazed to see results by then. The Onchip team is talking to foundries, though, and it looks like there’s interest for this model in the industry. We’d guess that the best case scenario is a crowdfunding campaign for an OSH Park-like chip fab sometime in 2019. Whenever it comes, this is something we’re eagerly awaiting.

Given that humans are unable to write code for traditional computers without creating massive security holes, perhaps it is time to invent a new kind of computer. Humans cannot write secure software; this is not a difficult concept to accept. Humans cannot throw baseballs at 400 mph, and they cannot run 100 yards in 2 seconds. Maybe we could have computers that actually know the types of the data they are manipulating. Perhaps we could have computers that manage storage for us. But no, in 2018 we can automate the driving of automobiles, but we cannot automate the destruction of a variable.

Eh. If a security vulnerability cannot be found by humans or human-made tools, I don’t think of it as much of a vulnerability. Also, computers aren’t very good at programming themselves. Humans still need to tell the computers what to do. Hence, until the singularity, we’ll just keep programming at higher and higher abstraction levels.

The most skilled humans in the world still make mistakes. They can still have a bad day, or get distracted, or stressed, or need more sleep. Even the best programmer manages a bug per couple hundred lines.

That’s where higher-level language constructs come in, especially ones based around strong, static typing. The more of your mistakes you convert into compile errors, the fewer of them make it to runtime. A typed container is easier to manage than one that just stores void*: it is type-safe and cleans up after itself.

Just as good code can be written in machine language, bad code can be written in a nanny language. If it’s Turing complete, bad code can happen… All the higher-level constructs do is try to keep you from writing code in ways that are known to cause problems.

And “even the best programmer manages a bug per couple hundred lines”? Citation needed.

Right-click on a property and click “find all usages”: guaranteed to work in statically typed languages. Try that in a dynamic language and you will be lucky to get a few results. Dynamic languages are just unusable on large projects due to the fear that changing anything can cause unforeseeable consequences. This leads to huge technical debt as people code around the old code to avoid changing it.

If you consider C++ a “nanny language”, then you haven’t written any of it. It just gives you tools to train the compiler to catch your screw-ups. Properly applied, you’ll only have to chase down logic errors, not random memory corruptions, double-frees, or memory leaks. Improperly applied, you can still *(int*)314159=1; and crash things, if that’s your thing.

Citations for bug rates are easy to find. A common one, listed as a quote from Code Complete, says “Industry Average: ‘about 15 – 50 errors per 1000 lines of delivered code.'” I estimated a great programmer as being between several times and an order of magnitude better than that. I’ve been at this a long time, and I’ve never seen anyone do better than that in first-pass, untested code. Error rates drop after testing, use of code analysis tools, independent design review, and the like.

Nah, you get SKYNET specifically by giving a military strategy assistant, programmed to maximize the degree of military victory, direct control over a large automated armed force and the ability to break into other computers and distribute itself across the internet to avoid being disrupted, while programming it to mark anyone who attempts to shut it off as a traitor and enemy, and then letting the highest-ranking members of the military do exactly that. Also, time travel is in there somewhere.

I think it’s more like true security is physically impossible. If any sort of communication or recall of data is possible, it’s possible for the wrong person to access it. Sure, a lot of people could do better than they currently are, but breaches would still happen. Breachers would just try harder.

Capability machines ensure nobody can access things they shouldn’t. Combine that with OS software with security graded data and leaking information is very hard.

Type tagging has existed, and exists in some software today.

Managing storage? That is in wide use today. Garbage collection.

Now the problems with the above:

Capabilities are hard to design with. Make a mistake and data can be leaked. Software should check for that, you say? Impossible in general.

Tagging isn’t a cure-all and has both theoretical and practical problems. Even finding unique identifiers for a certain structure is hard as a general problem. Structures being identical doesn’t mean they are of the same type, so automatic identification is impossible.

Garbage collection is nice, but it has several problems: too many to list here. Moving GC to hardware wouldn’t do much better than current software GC, and could even slow down execution! Some extra hardware help could probably help a bit, though.

Back in the 1980s I worked with a company called ECI Semiconductor in Santa Clara, CA, that did custom chip fabrication. They had a catalog of standard chips with a “sea of gates” or “sea of diodes, resistors, and transistors” that were all unconnected. There was layout software (just like PCB layout) that you used to connect them up to form any circuit you wanted, analog or digital. Then you just paid a mask charge (on the order of $500-$1000) plus some small amount per chip. Voila! You had your own custom IC!

I specifically worked with their 700-series linear bipolar array chips. We integrated a 324 quad op-amp, a 78L05 voltage regulator, and a power-on reset, so we had a one-chip “support” chip for our microcomputer. The chip was just under $1 each in 1000s, so quite affordable for modest production runs.

An older, faster, and cheaper way: it’s been around since the ’90s. LSI had RapidChip, and once during the (first) Gulf War, when some defense system was discovered to have a bug in the silicon, they managed to design and fab a chip in 72 hours using a structured ASIC.

It never really caught on, though; the price point wasn’t competitive enough to displace traditional ASICs.

Maybe because recently 80+% of all ICOs were determined to be fraud? That brings it back around to a Slashdot article earlier today about the difference between a good hack and a bad hack, and how we need to find a way to separate the two (which we’ve needed for 20+ years and will probably need for 20+ more, entrenched as “bad hacker” is in the media et al.).

AMAZED at the negativity towards crypto/blockchain tech around here… (yes, they are basically the same thing and inherently inseparable) While there is a lot of fraud in the space, there are also at least hundreds of valid projects… You may think it’s overvalued now, but once widespread adoption really takes hold, today’s prices will be the stuff of dreams. I also think that the fundamental goals of crypto are probably held by most of the readership… even if they don’t know it.

Maybe you should sit on your hands a while longer. As we speak, Intel and possibly a few other players are interested in getting in on those lucrative GPGPU dollars. If left to play out until release, instead of imploding the virtual currency bubble now, a lot more interesting hardware will come to bear in a market that had whittled down to an INTERNATIONAL DUOPOLY. Reflect on that a moment. In the whole world there were only two commercially viable consumer graphics card companies. With the current market demand, however, other players are now seeing $.$

Furthermore, as for your complaints on power supply costs: I can now find >1 kW PSUs for 120 dollars, or 3 kW if you have 220 V circuits. For a SINGLE PSU. We now also have access to large quantities of CHEAP PCIe bridges/port multipliers, allowing you not only to work around the limitations of unslotted PCIe x1 ports with wider-connector PCIe cards, but also to add expansion to boards that otherwise couldn’t be expanded, like SBCs with mini-PCIe or M.2/NVMe sockets. That lets you use traditional PC GPUs, sound cards, Ethernet adapters, USB 3 cards, SATA cards, or PCIe-to-PCI boards to provide hardware access previously inconceivable without a multi-thousand-dollar workstation or server board.

The virtual currency craze may be a setback for many of us short-term, but it is a catalyst for long-term hardware opportunities we have been lacking for 5-10 years.

A blockchain without the currency isn’t useful. If there’s no reward, nobody’s going to bother helping maintain the distributed network. And without a distributed network, you’d be better off with a traditional centralized database.

Yes, but the piece of inherently worthless paper in my pocket is legally defined as having some value by the state, which puts quite a lot of effort into maintaining its market value and will generally resort to any means necessary to enforce the system. They seem to be reasonably good at this, as demonstrated by hundreds of years of history, so I have some faith in the piece of paper. As bitcoin has no state or anything else to back it, I have no faith in it.

That argument goes both ways. Hundreds of years of history have shown that the government doesn’t act on your behalf. It acts on its own behalf, and if your interests line up, you profit. If they don’t, too bad for you. A lot of the time that means the money you have devalues. Again, by design. You can ask the citizens of countries with hyperinflation how they feel about your statement, or how they feel about their tender legally being declared as having value. Value is decided by the faith people have in it, not by law. Hyperinflation happens in countries that were doing well not too long ago, so don’t think yours is immune.

The question remaining is whether you want to ride on the bull that serves another and crushes you without thought, or on the tornado that serves none. Neither is inherently appealing, neither is concerned with your wellbeing.

One has value because it’s backed by people, the other has value because it’s backed by people. A law can’t declare value, nor can an army or land. The value is based on the exact same thing. It’s a system of faith in both cases. Some people see it as a weakness there’s no one tinkering with cryptocurrencies, some see it as a distinct advantage, as the ones tinkering with traditional fiat currencies aren’t serving your interests. If your interests align you get to profit, if they don’t it’s just tough luck.

So you get a chip where 1/16 of the functionality is your design and the rest is functional but randomly selected from whatever projects were part of that run? Well, that could turn out very interesting, particularly if people cooperated and there was some control over grouping projects. On the downside, is your chip going to have 15/16 of its transistors cycling away on the shared clocks, turning money into heat with no benefit to you, if you can’t make use of them?

As Andreas pointed out, clock gating will be implemented. We have fine-grained LDOs that will service the blocks; head transistors might get area-expensive but are an option. Notice that if you reserve 1/16, you will have access to a limited current regulator controlled by regular current-mirror on/off cells.

You could perhaps include some logic in your design where you would need to set, say, an X-bit vector to a certain value for the design to work at all. Being able to use the design would then hinge on solving Boolean SAT. This wouldn’t stop anyone from decapping and imaging your device, but you would at least know that no one should be able to use the chip without knowing the magic bit-pattern. (This might not be feasible if the logic area for a Boolean function that is hard to solve is too large for the blocks.)

I imagine, though, that this might be used in hobbyist/open-source settings. If so, it doesn’t really matter if someone steals your design. On the other hand, for a company, the cost of an MPW tapeout in an older technology is not that high compared to salary and CAD tool costs.

(Although I hope that this could be a good incentive for improving open source CAD tools as well.)

Great initiative! While doing my PhD, I wondered why this kind of service was not available anywhere.

* If possible, include an SRAM in the shared area.
* Perhaps also include a small processor in the shared area, able to talk to each embedded core via a suitable interface.
* If not pad-limited, could custom bonding schemes for each customer be used to increase the I/O pins available?

Andreas,
Great suggestions. SRAMs are area-expensive, but it would be great to place a 256 B bank with a small RISC-V I. For now, basic blocks are going to fill the service area, leaving larger real estate for users. Custom bonding schemes increase setup packaging costs, certainly something to put on the TODO list.

Whether as part of this chip design or another one when funding allows, consider a chainable SDRAM northbridge, if patent/copyright concerns allow. One of the biggest limitations with current ‘open processor’ designs is that all of them seem targeted at embedded uses, with memory controllers (or the lack thereof) limiting the designs to toys. Even if it was only 440BX-era memory performance, the capability to run 2 DIMMs per controller/channel, with a chain of increasing-latency memory controllers off it, could offer both the cost effectiveness and the technical kludge needed to make independent processor designs take off. Obviously there are other bits (bus I/O, IOMMU, etc.) worth looking at, but the sheer lack of RAM capacity and patent-unencumbered memory designs is limiting a lot of the tech available today. I am not in a position to make this happen, but perhaps one of you can.

If companies like this can be trusted, it could change things.
Processors so different the government doesn’t know how to hack them. No Big Brother management engine.
Emulate a ZX Spectrum. Now, how many could you fit on an i7 die and still have them talk to each other?

This type of service isn’t technically something new.
One can always go to Global Foundries, for example, practically give them the schematics, and they can make a chip. Though it does have its costs associated with it. But if one needs a few thousand chips, then it isn’t super expensive most of the time. (Unless one needs a chip with many interconnect layers, copper interconnects, and other such things…)

True, but the added value here is that you don’t have to coordinate yourself with several other potential customers and come up with a shared design to pass on to the foundry – these guys will do it for you. Grossly oversimplifying, you could explain it to the uninitiated as one of those carpooling apps/websites, only for ICs.

I want this to succeed. If this happens, and the barrier to IC tapeout is lowered, it would seem like the next major hurdle would be designing the part itself. Are there affordable options for software which can do place and route from designs created in higher level languages?

I get why fabs (even larger foundries) might be interested in this. 180 nm CMOS is basically HUGE by today’s standards, using dinosaur machines that have long since reached their “pay off” date. Keeping a fab like that running for anything above cost is pretty much just easy money in the bank.

Back in the mid-’90s when I did the MOSIS thing, the basic layout program used was Magic, which is still around and maintained. The tools available then for standard cells and such (Lager and friends) are a little harder to find.

I see this as the beginning of projects to recreate classic chips like the MOS Technology SID IC or the 4008/8008, or chips that replace parts from arcade machines and classic game consoles: the C64 (including the SID chip), the Nintendo NES, or the Atari 2600.