Secret Lab
http://www.secretlab.ca
Grant Likely’s blog

Debugging 96Boards I2C
http://www.secretlab.ca/archives/164
Tue, 03 Nov 2015

I was originally just going to post this to one of the 96Boards mailing lists, but it got sufficiently interesting that I thought I’d make it a blog post instead. I’ve been working on making i2c on the 96Boards sensors adapter work properly, and I’ve made some progress. The problem that users have run into is that the Grove RGB LCD module won’t work when connected to one of the baseboard’s I2C busses. I pulled out the oscilloscope today to investigate.

The LCD module is particularly useful for testing because it actually has 2 i2c devices embedded in it: an LCD controller at address 0x3e, and an RGB controller at 0x62. The two devices operate independently and have different electrical properties.

On Hikey+sensors (TXS0108 level shifter), the RGB device will work, but only after pulling the ribbon cable apart to reduce crosstalk due to insufficient pull-ups. However, the LCD causes the entire bus to lock up, and no further transactions will work.

On Hikey+pca9306, the LCD isn’t detected but the RGB works correctly (it is undetermined whether there are crosstalk issues).

The traces below show both sides of the level shifter: green and blue on the top are the data line; orange and purple on the bottom are the clock.

First, here is what I saw using Hikey+pca9306+RGB:

RGB transaction via PCA9306

And with the LCD:

LCD transaction via PCA9306

In both traces you can see the start condition (data goes low while clock is high), the 7 bits of address (7 rising clock edges), the R/W bit (1 rising clock), and then the acknowledgement bit driven by the device. If the controller doesn’t see the device drive the data line low on the 9th clock, then it decides the device isn’t there and it terminates the transaction. It is easy to recognize the ack bit because the device has a different drive strength and the voltage level is different.
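The transaction anatomy described above can be sketched as a toy decoder. This is an illustrative Python model working on a synthetic sample stream, not a real logic-analyzer tool:

```python
# Toy decoder for the start of an I2C transaction, matching the trace
# description above: start condition, 7 address bits, R/W bit, then the
# ACK bit driven by the device. Input is a list of (scl, sda) samples.

def decode_i2c_header(samples):
    # Find the start condition: SDA falls while SCL is high.
    start = None
    for i in range(1, len(samples)):
        (scl0, sda0), (scl1, sda1) = samples[i - 1], samples[i]
        if scl0 == 1 and scl1 == 1 and sda0 == 1 and sda1 == 0:
            start = i
            break
    if start is None:
        return None

    # Sample SDA on each rising SCL edge: 7 address bits, R/W, ACK.
    bits = []
    i = start
    while len(bits) < 9 and i < len(samples) - 1:
        if samples[i][0] == 0 and samples[i + 1][0] == 1:
            bits.append(samples[i + 1][1])
        i += 1
    if len(bits) < 9:
        return None

    address = 0
    for b in bits[:7]:
        address = (address << 1) | b
    read = bool(bits[7])
    acked = bits[8] == 0        # device pulls SDA low to acknowledge
    return address, read, acked

# A synthetic capture: start condition, then address 0x62, write, ACK.
samples = [(1, 1), (1, 0)]                    # SDA falls while SCL high
for bit in [1, 1, 0, 0, 0, 1, 0,              # 0x62 address, MSB first
            0,                                # R/W = 0 (write)
            0]:                               # ACK: device pulls SDA low
    samples += [(0, bit), (1, bit)]
```

Running the decoder on that stream yields `(0x62, False, True)`: the RGB controller’s address, a write, acknowledged.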

The RGB controller is a happy little device and it jumps at the chance to drive the data line low. It goes down pretty close to 0V. The LCD, on the other hand, is sulky and doesn’t drive the line as low as the controller can: only to about 1V. 1V is recognized fine as a logic low on a 5V bus, but on a 1.8V bus it isn’t even below half the supply voltage. The way the pca9306 level shifter works is that there are pull-up resistors on either side of the device that draw each side up to its respective high level; in this case, 1.8V and 5V. When either side gets driven low, the level shifter begins to conduct and the other side also gets drawn down toward the same voltage, but it can only go as low as the voltage the first side is driven to. If the LCD only drives its side down to 1V, then the line will never get low enough for a 1.8V controller to recognize it as a low state.
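A rough way to see why weaker pull-ups might help is to model the contention as a resistor divider: the device’s output stage is a resistance to ground fighting the pull-up to Vdd. The resistance values below are illustrative guesses, not measurements from the boards:

```python
# Back-of-the-envelope model: the low-level voltage on the line is set by
# the divider formed by the device's drive resistance (to ground) and the
# pull-up resistor (to Vdd). Weaker (larger-value) pull-ups let a weak
# driver reach a lower voltage.

def bus_low_voltage(vdd, r_pullup, r_driver):
    """Voltage on the line when the device drives it low."""
    return vdd * r_driver / (r_driver + r_pullup)

# Suppose the LCD's weak driver looks like ~250 ohms (a guess). With a
# strong 1k pull-up on a 5V bus it only reaches 1V, consistent with the
# trace above:
v_strong = bus_low_voltage(5.0, 1000, 250)    # 1.0V: too high for 1.8V logic
# With a weaker 10k pull-up the same driver gets much closer to ground:
v_weak = bus_low_voltage(5.0, 10000, 250)     # ~0.12V
```

Under this model, swapping the 1k pull-ups for 10k would take the LCD’s low level from 1V to roughly 0.12V, comfortably below a 1.8V controller’s threshold.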

It may be that with weaker pull-ups the LCD will be able to drive the line to a lower voltage level. I’ll need to experiment more, but in the meantime let’s move on to the Sensors board. Back to the traces:

First, here is a transaction to address 0x63 with no device present:

No device

Looks perfectly normal so far. Next, the RGB device at address 0x62:

RGB

Also behaving the same way as it did with the pca9306. Finally, an LCD transaction:

LCD

Again we see the start condition, the 7 address bits and 1 r/w bit, but the ack bit looks weird. The LCD successfully drives the data line low enough to be recognized, but then something strange happens: the data line stays low and the clock stops running. I don’t actually know what is happening here, but I’ve got my suspicions. The LCD is continuing to drive the data line low (you can tell by the slightly different voltage level), but keeping data low should not stop the clock. I suspect the txs0108 is getting confused and driving the clock line high. I’ve come across reports from others having trouble with the txs010x series on i2c; it has ‘one-shot’ accelerators that reduce rise time by driving the line high. I don’t know for sure though.

On the plus side, I now know that the Hikey I2C busses are working correctly. Now I need to decide what to do next. Aside from the i2c problem, Rev B of the sensors board is ready for manufacturing. I either need to make the txs part work, or rework the design to use a pair of pca9306s. I think I’ll try weaker pull-ups on the pca9306 breakout board first and see how that goes. Sadly, I blew up the i2c drivers on my Hikey board while experimenting today, so I need to do the same experiments with my Dragonboard 410c.

Dear lazyweb, do you have any other suggestions on things to try?

Why ACPI on ARM?
http://www.secretlab.ca/archives/151
Sat, 10 Jan 2015

Why are we doing ACPI on ARM? That question has been asked many times, but we haven’t yet had a good summary of the most important reasons for wanting ACPI on ARM. This article is an attempt to state the rationale clearly.

During an email conversation late last year, Catalin Marinas asked for a summary of exactly why we want ACPI on ARM, Dong Wei replied with the following list:
> 1. Support multiple OSes, including Linux and Windows
> 2. Support device configurations
> 3. Support dynamic device configurations (hot add/removal)
> 4. Support hardware abstraction through control methods
> 5. Support power management
> 6. Support thermal management
> 7. Support RAS interfaces

The above list is certainly true in that all of them need to be supported. However, that list doesn’t give the rationale for choosing ACPI. We already have DT mechanisms for doing most of the above, and can certainly create new bindings for anything that is missing. So, if it isn’t an issue of functionality, then how does ACPI differ from DT and why is ACPI a better fit for general purpose ARM servers?

The difference is in the support model. To explain what I mean, I’m first going to expand on each of the items above and discuss the similarities and differences between ACPI and DT. Then, with that as the groundwork, I’ll discuss how ACPI is a better fit for the general purpose hardware support model.

Device Configurations

2. Support device configurations
3. Support dynamic device configurations (hot add/removal)

From day one, DT was about device configurations. There isn’t any significant difference between ACPI & DT here. In fact, the majority of ACPI tables are completely analogous to DT descriptions. With the exception of the DSDT and SSDT tables, most ACPI tables are merely flat data used to describe hardware.

DT platforms have also supported dynamic configuration and hotplug for years. There isn’t a lot here that differentiates between ACPI and DT. The biggest difference is that dynamic changes to the ACPI namespace can be triggered by ACPI methods, whereas for DT, changes are received as messages from firmware and have been very much platform specific (e.g. IBM pSeries does this).

Power Management Model

4. Support hardware abstraction through control methods
5. Support power management
6. Support thermal management

Power, thermal, and clock management can all be dealt with as a group. ACPI defines a power management model (OSPM) that both the platform and the OS conform to. The OS implements the OSPM state machine, but the platform can provide state change behaviour in the form of bytecode methods. Methods can access hardware directly or hand off PM operations to a coprocessor. The OS really doesn’t have to care about the details as long as the platform obeys the rules of the OSPM model.

With DT, the kernel has device drivers for each and every component in the platform, and configures them using DT data. DT itself doesn’t have a PM model. Rather the PM model is an implementation detail of the kernel. Device drivers use DT data to decide how to handle PM state changes. We have clock, pinctrl, and regulator frameworks in the kernel for working out runtime PM. However, this only works when all the drivers and support code have been merged into the kernel. When the kernel’s PM model doesn’t work for new hardware, then we change the model. This works very well for mobile/embedded because the vendor controls the kernel. We can change things when we need to, but we also struggle with getting board support mainlined.

This difference has a big impact when it comes to OS support. Engineers from hardware vendors, Microsoft, and most vocally Red Hat have all told me bluntly that rebuilding the kernel doesn’t work for enterprise OS support. Their model is based around a fixed OS release that ideally boots out-of-the-box. It may still need additional device drivers for specific peripherals/features, but from a system view, the OS works. When additional drivers are provided separately, those drivers fit within the existing OSPM model for power management. This is where ACPI has a technical advantage over DT. The ACPI OSPM model and its bytecode give the HW vendors a level of abstraction under their control, not the kernel’s. When the hardware behaves differently from what the OS expects, the vendor is able to change the behaviour without changing the HW or patching the OS.

At this point you’d be right to point out that it is harder to get the whole system working correctly when behaviour is split between the kernel and the platform. The OS must trust that the platform doesn’t violate the OSPM model. All manner of bad things happen if it does. That is exactly why the DT model doesn’t encode behaviour: It is easier to make changes and fix bugs when everything is within the same code base. We don’t need a platform/kernel split when we can modify the kernel.

However, the enterprise folks don’t have that luxury. The platform/kernel split isn’t a design choice. It is a characteristic of the market. Hardware and OS vendors each have their own product timetables, and they don’t line up. The timeline for getting patches into the kernel and flowing through into OS releases puts OS support far downstream from the actual release of hardware. Hardware vendors simply cannot wait for OS support to come online to be able to release their products. They need to be able to work with available releases, and make their hardware behave in the way the OS expects. The advantage of ACPI OSPM is that it defines behaviour and limits what the hardware is allowed to do without involving the kernel.

What remains is sorting out how we make sure everything works. How do we make sure there is enough cross platform testing to ensure new hardware doesn’t ship broken and that new OS releases don’t break on old hardware? Those are the reasons why a UEFI/ACPI firmware summit is being organized, it’s why the UEFI forum holds plugfests 3 times a year, and it is why we’re working on FWTS and LuvOS.

Reliability, Availability & Serviceability (RAS)

7. Support RAS interfaces

This isn’t a question of whether or not DT can support RAS. Of course it can. Rather it is a matter of RAS bindings already existing for ACPI, including a usage model. We’ve barely begun to explore this on DT. This item doesn’t make ACPI technically superior to DT, but it certainly makes it more mature.

Multiplatform support

1. Support multiple OSes, including Linux and Windows

I’m tackling this item last because I think it is the most contentious for those of us in the Linux world. I wanted to get the other issues out of the way before addressing it.

The separation between hardware vendors and OS vendors in the server market is new for ARM. For the first time, ARM hardware and OS release cycles are completely decoupled from each other, and neither are expected to have specific knowledge of the other (i.e. the hardware vendor doesn’t control the choice of OS). ARM and their partners want to create an ecosystem of independent OSes and hardware platforms that don’t explicitly require the former to be ported to the latter.

Now, one could argue that Linux is driving the potential market for ARM servers, and therefore Linux is the only thing that matters, but hardware vendors don’t see it that way. For hardware vendors it is in their best interest to support as wide a choice of OSes as possible in order to catch the widest potential customer base. Even if the majority choose Linux, some will choose BSD, some will choose Windows, and some will choose something else. Whether or not we think this is foolish is beside the point; it isn’t something we have influence over.

During early ARM server planning meetings between ARM, its partners and other industry representatives (myself included) we discussed this exact point. Before us were two options, DT and ACPI. As one of the Linux people in the room, I advised that ACPI’s closed governance model was a show stopper for Linux and that DT is the working interface. Microsoft on the other hand made it abundantly clear that ACPI was the only interface that they would support. For their part, the hardware vendors stated the platform abstraction behaviour of ACPI is a hard requirement for their support model and that they would not close the door on either Linux or Windows.

However, the one thing that all of us could agree on was that supporting multiple interfaces doesn’t help anyone: It would require twice as much effort on defining bindings (once for Linux-DT and once for Windows-ACPI) and it would require firmware to describe everything twice. Eventually we reached the compromise to use ACPI, but on the condition of opening the governance process to give Linux engineers equal influence over the specification. The fact that we now have a much better seat at the ACPI table, for both ARM and x86, is a direct result of these early ARM server negotiations. We are no longer second class citizens in the ACPI world and are actually driving much of the recent development.

I know that this line of thought is more about market forces rather than a hard technical argument between ACPI and DT, but it is an equally significant one. Agreeing on a single way of doing things is important. The ARM server ecosystem is better for the agreement to use the same interface for all operating systems. This is what is meant by standards compliant. The standard is a codification of the mutually agreed interface. It provides confidence that all vendors are using the same rules for interoperability.

Summary

To summarize, here is the short form rationale for ACPI on ARM:

ACPI’s bytecode allows the platform to encode behaviour. DT explicitly does not support this. For hardware vendors, being able to encode behaviour is an important tool for supporting operating system releases on new hardware.

ACPI’s OSPM defines a power management model that constrains the platform’s behaviour to a specific model while still allowing flexibility in hardware design.

For enterprise use-cases, ACPI has established bindings, such as for RAS, which are used in production; DT does not. Yes, we can define those bindings, but doing so means ARM and x86 will use completely different code paths in both firmware and the kernel.

Choosing a single interface for platform/OS abstraction is important. It is not reasonable to require vendors to implement both DT and ACPI if they want to support multiple operating systems. Agreeing on a single interface instead of being fragmented into per-OS interfaces makes for better interoperability overall.

The ACPI governance process works well and we’re at the same table as HW vendors and other OS vendors. In fact, there is no longer any reason to feel that ACPI is a Windows thing or that we are playing second fiddle to Microsoft. The move of ACPI governance into the UEFI forum has significantly opened up the processes, and currently, a large portion of the changes being made to ACPI is being driven by Linux.

At the beginning of this article I made the statement that the difference is in the support model. For servers, responsibility for hardware behaviour cannot be purely the domain of the kernel, but rather is split between the platform and the kernel. ACPI frees the OS from needing to understand all the minute details of the hardware, so that the OS doesn’t need to be ported to each and every device individually. It allows the hardware vendors to take responsibility for PM behaviour without depending on an OS release cycle that is not under their control.

ACPI is also important because hardware and OS vendors have already worked out how to use it to support the general purpose ecosystem. The infrastructure is in place, the bindings are in place, and the process is in place. DT does exactly what we need it to when working with vertically integrated devices, but we don’t have good processes for supporting what the server vendors need. We could potentially get there with DT, but doing so doesn’t buy us anything. ACPI already does what the hardware vendors need, Microsoft won’t collaborate with us on DT, and the hardware vendors would still need to provide two completely separate firmware interfaces: one for Linux and one for Windows.

git.secretlab.ca is down
http://www.secretlab.ca/archives/144
Fri, 10 Oct 2014

For anyone who has been using git.secretlab.ca, the server is currently down and I don’t know when it will be back up. I’ve moved my Linux kernel tree over to kernel.org. The new tree can be found here:

First, when we’re talking about Linux and ACPI on ARM, we’re talking about general purpose servers. In the general purpose server market, Linux is already the dominant OS, regardless of the CPU architecture. Servers are designed, built and sold to run Linux. It is already the situation that x86 server vendors build their ACPI tables to work with Linux. Supporting Linux on ARM servers is merely an extension of what vendors are already doing to support Linux on x86. Despite Matthew Garrett’s concern, I don’t think we’re entering new territory in this regard.

Second, many of us have bad memories of getting ACPI to work with Linux. However, it is worth remembering that most of our problems have been with machines where the vendor really doesn’t care about Linux – usually desktop or laptop PCs. It’s not surprising that we have problems with these machines since they’ve only been tested with Windows! Server vendors, on the other hand, have a vested interest in ensuring that Linux runs well on their hardware and so they regularly test with Linux. The negative lessons learned in the laptop and desktop markets don’t carry over to machines built to run Linux.

Third, the ACPI world has changed in the last 2 years. It used to be that the ACPI spec was governed in a closed process by 5 companies: HP, Intel, Microsoft, Phoenix, and Toshiba, with nary a Linux person to be seen. Last year ACPI governance was transferred to the UEFI Forum and we’ve got plenty of Linux engineers sitting at the table. In light of that, it is no longer true that ACPI only caters to the needs of Windows, and we have the ability to propose changes to the spec. In fact, if you look at the revision history in version 5.1 of the spec, you’ll find changes that were proposed by Linux engineers to make ARMv8 work.

That said, the issues raised by Matthew are important. There is a big question about how Linux should declare itself to the platform. Claiming to be compatible with “Windows 8” in the ACPI _OSI (Operating System Interface) method obviously isn’t appropriate on ARM. There is some talk about removing _OSI entirely on ARM since the way Linux uses it isn’t actually useful, and the _OSC (Operating System Capability) method has been proposed as a better way to declare what the OS supports. There is also a need to make sure vendors are testing with linux-next and mainline kernels so that we know when breakage happens and we can either do something about it, or work with vendors to fix their firmware.

Both of these are important issues and I think we need to propose solutions before merging ARM ACPI support into the kernel. Some of this work has already started: Linaro is running Canonical’s Firmware Test Suite (FWTS), the ACPI API tests, and the ACPI ASL tests on ARM, and we’re porting the Linux UEFI Verification (LUV) project which packages all the test suites into an easy to use distribution.

While I agree with Matthew that getting the interface between firmware and the OS right is hard, I do not see the nightmare scenario he is describing. It certainly hasn’t played out that way on x86 servers, where Linux is already the preferred OS. Besides, I really cannot agree with the premise that Linux being the dominant OS is a bad thing! We have a lot more influence than we give ourselves credit for.

Christoffer Dall led a session today at Linaro Connect discussing standards for portable ARM virtual machines (video). About a week ago, Christoffer posted a draft specification to the linux-arm-kernel, kvm and xen mailing lists, which attracted lots of useful feedback. Today we went over the major points of issue, and Christoffer is going to take the feedback to prepare a new draft.

Many of the issues raised boil down to how much reach the spec should have. If it specifies too much, then it will be burdensome for vendors to be compliant, but if it specifies too little then it won’t be useful for making portable disk images. Today we talked about how specific it must be on the topics of required hardware, required virtual interfaces (virtio, xenbus), firmware interface (UEFI) and hardware description (ACPI, FDT).

We also talked about the use-cases covered by this spec. For instance, while there is interest in supporting some hypothetical future version of ARM Windows as either a host or a guest, it is pointless to try and guess what requirements Microsoft will have. For now the focus is on Linux hosts running either Xen, KVM or QEMU, with guests running predominantly Linux (while still supporting any guest OS that conforms). OS vendors should be able to use the spec to design installation and update tools that will work with any compliant virtual machine.

The ARM Server Base System Architecture (SBSA) specification defines the basic requirements for ARM server hardware. Christoffer used the SBSA as a starting point, but quickly realized that the peripheral options described in the SBSA make little sense in a virtual environment. For instance, a virtual machine can certainly emulate a SATA controller, but it can provide far better performance with an interface designed for virtualization. It was asked whether the spec should specify a choice of either virtio or xenbus, but the problem with doing so is that it effectively requires OSes to implement support for both in order to be compliant. This isn’t a problem for Linux guests because the kernel already has drivers for both, but it could be a problem for non-Linux guests.

Instead the choice was made to treat virtual buses in exactly the same way we treat real hardware; it is still up to the OS to include driver support for the platform it is running on. OS vendors are strongly encouraged to support both, but the spec does not require them to do so. If only one is supported then the onus is on them to list it in their own requirements.

Particular attention was given to the SBSA serial port requirement. Level 1 of the SBSA requires the platform to implement a debug port which is register compatible with ARM’s pl011 UART. Ian Campbell and Stefano Stabellini from Citrix were concerned that implementing full pl011 emulation would perform poorly and would require a lot of work to implement. However, Alexander Graf pointed out that an always available console device would eliminate a lot of the pain of failed booting without any log output. It was also pointed out that the SBSA does not actually require a full pl011 implementation. DMA and IRQ support are not necessary, which makes emulation trivial, and the virtual UART is only expected to be used during early boot scenarios. Normally console output will be reported first via the UEFI console before ExitBootServices() is called, and then via the VM’s preferred console device. At the close of the discussion we decided to require the SBSA debug port definition in the VM spec.

The requirement of UEFI for the firmware interface was mostly uncontroversial. In the earlier mailing list discussion, Dennis Gilmore did take issue with specifying UEFI over U-Boot, given that UEFI is not in heavy use on 32-bit ARM. U-Boot is also making strides forward in standardizing the boot flow, which would make it more suitable for VM scenarios. Dennis is concerned that UEFI would require a lot of new effort to get working. However, that work has already been completed. There is a 32-bit port of UEFI running under QEMU, mainline GRUB includes ARM UEFI support, and merging kernel support is in progress.

None of the VM developers in the room today seemed concerned about requiring UEFI for virtual firmware, and the UEFI spec covers quite a few standard booting scenarios including, removable media, network booting, and booting from a block device. The feeling is that it is important for both 64-bit and 32-bit virtual machines to have the same behaviour and so the UEFI requirement will remain.

Deciding whether an FDT or an ACPI hardware description is required was more of a concern. Jon Masters from Red Hat has previously stated that Red Hat Enterprise Linux will only support booting with ACPI. There is concern that the specification will not be acceptable to Red Hat if it does not require ACPI. However, ACPI is still a work in progress and we don’t yet know how to implement it in a VM. Since all of the VMs already use FDT, and will continue to do so for the foreseeable future, it was decided to make FDT support mandatory in version 1 of the spec. A future version 2 will allow ACPI to be provided in addition to FDT, with the expectation that an OS vendor can choose to make ACPI support mandatory for their product.

For the next steps, Christoffer is going to take all the comments from the mailing list and today’s meeting and he will post a second draft of the spec. Then after further feedback, the specification will probably get published, possibly as a Linaro whitepaper.

Inside a knockoff Wii Nunchuk
http://www.secretlab.ca/archives/84
Sat, 01 Mar 2014

As part of the Lightsaber project, I’ve been looking for a low pin count way to add controls, since the ATTiny85 that I’m using only has 6 IO pins. For the prototype I connected a button and a potentiometer to a pin each. I’d like to have an accelerometer and another button or two, but that uses up pins pretty quickly. However, if I hang all the controls off an i2c bus, then I only need two IO pins.

The Wii Nunchuk just happens to be an i2c device. It also happens to have 4 inputs built into it: 2 buttons, a 2-axis joystick and a 3-axis accelerometer. That’s pretty close to everything I want. It also aggregates reading all of those sensor inputs into a single i2c transaction, which means less work for the ATTiny85 software.

Official Wii Nunchuks aren’t the cheapest things in the world. Even 8 years after the Wii was first released, a genuine Nintendo Nunchuk is £15. That blows my budget for this project. I can, however, order replica Nunchuks via Aliexpress for a mere £2.95 each including shipping. I ordered a lot of 5 to experiment with a couple of weeks ago, and they arrived today.

For such a low price I was not expecting much, and indeed, my expectations were met. They work, I’ll say that much for them, but I wouldn’t want to use them for actually playing a game. Button presses don’t always make contact and feel a bit sloppy. I’m not too worried about that though, because I’m going to gut them for the electronics and throw away the plastic.

More troublesome, though, is that the clones don’t behave in exactly the same way as an official Nintendo Nunchuk. In fact, in the lot of 5 I purchased I seem to have two different variants, each of which behaves differently. Two of them I was able to get working completely by following the instructions in this forum post. The other three are recognized by the new code, but the event reports are still encrypted. I need to do some debugging to figure out what else is needed.
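For reference, here is a sketch of what decoding the aggregated report looks like, based on the 6-byte layout and decryption transform commonly documented for genuine Nunchuks (which, as just noted, the clones don’t always follow). The sample bytes are made up:

```python
# Sketch of decoding the Nunchuk's single 6-byte i2c report: joystick X/Y,
# accelerometer high bits, then a packed byte with the C/Z buttons and the
# accelerometer LSBs. Clone controllers may deviate from this layout.

def decrypt(byte):
    # Transform commonly documented for the original "encrypted" init
    # sequence (write 0x40, 0x00 at setup).
    return ((byte ^ 0x17) + 0x17) & 0xFF

def parse_report(data, encrypted=False):
    if encrypted:
        data = [decrypt(b) for b in data]
    joy_x, joy_y = data[0], data[1]
    accel_x = (data[2] << 2) | ((data[5] >> 2) & 0x03)
    accel_y = (data[3] << 2) | ((data[5] >> 4) & 0x03)
    accel_z = (data[4] << 2) | ((data[5] >> 6) & 0x03)
    button_z = not (data[5] & 0x01)      # buttons are active low
    button_c = not (data[5] & 0x02)
    return {"joy": (joy_x, joy_y),
            "accel": (accel_x, accel_y, accel_z),
            "c": button_c, "z": button_z}

# Hypothetical report: centred stick, mid-scale accel, no buttons pressed.
sample = [0x80, 0x80, 0x80, 0x80, 0x80, 0x03]
state = parse_report(sample)
```

One transaction gets the whole controller state, which is exactly the property that makes this attractive for the pin-starved ATTiny85.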

Inside the Nunchuk
Electronics from two different Nunchuk clones

Cracking the case open, it’s clear that the plastic moulding is a direct copy of the Nintendo part. The boards in both versions have exactly the same outline as the Nintendo one and all the plastic looks identical. One of them even uses the Nintendo tri-wing screws. The boards themselves look designed to be as cheap as possible. The main controller is an anonymous die-on-board held with a blob of epoxy. Soldering quality is marginal at best.

Not that any of this bothers me. If I can get it mounted inside the Lightsaber hilt then it will do the job nicely for less than it would cost to buy each button and sensor individually.

The New Project
http://www.secretlab.ca/archives/12
Wed, 12 Feb 2014

It’s great to look out and see all the ingenious things people have built with the nifty WS2812b LED pixels from WorldSemi, but I’m quite surprised that I’ve yet to see any project use them in the way they were intended. I speak, of course, of Lightsabers.

Most custom Lightsaber designs are built around either a superbright LED module in the hilt shining up into the blade, or a string of LEDs up through the blade. In both designs it is difficult to emulate the effect of the blade growing up from the hilt when turned on, and then drawn back in when it is turned off. However, WS2812b LEDs are individually addressable, so a small microcontroller can easily animate the blade growing and shrinking effect. Blade shimmer and flash effects are equally easy to implement and it can be any colour you like. Therefore it is obvious that this is what the WS2812b was designed for.
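The growing-blade effect really is just a loop over how many pixels are lit. A minimal sketch (the frame count, pixel count and colour here are made-up values, and real firmware would push each frame to the strip driver rather than build lists):

```python
# Minimal sketch of the ignition effect that individually addressable
# WS2812b pixels make easy: each frame lights one more pixel from the
# hilt outward. Retraction is the same sequence played in reverse.

NUM_PIXELS = 110                    # total pixels in the blade
OFF, RED = (0, 0, 0), (255, 0, 0)   # illustrative GRB/RGB tuples

def ignition_frames(num_pixels=NUM_PIXELS, colour=RED):
    """Yield successive frames of the blade extending from the hilt."""
    for lit in range(num_pixels + 1):
        yield [colour] * lit + [OFF] * (num_pixels - lit)

# A tiny 4-pixel blade for illustration: 5 frames, from all-off to all-on.
frames = list(ignition_frames(4))
```

With a dumb LED string this effect needs per-pixel wiring; with WS2812b it is a dozen lines on an 8-bit micro.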

After finishing a few simple projects, my children and I have been looking for something new to build. When I suggested building Lightsabers they jumped all over the idea. Our friend Mandy, who is also a huge Star Wars nut, was going to be visiting us, so we made sure to start the project while she was in town.

Our Lightsabers

Building Lightsabers at home is by no means a new idea. Plans, kits and guides are all over the Internet. I spent a lot of time reading about what other people have done to prepare for this project and have gotten a lot of good suggestions about what to do. There are some incredibly ambitious projects out there even to the point of precision machining the hilt, but we’re not going to be nearly so ambitious.

Most of our materials will come from the plumbing section of our local B&Q, electronics from Proto-PIC and Aliexpress, and polycarbonate tube from theplasticshop.co.uk. Right now my prototype is running at about £46 in parts, which includes £12.50 for LEDs and £11.50 for electronics, but I’d like to get the total down to about £25.

Shopping for Lightsaber parts

For the prototype we’re using lengths of ABS waste water pipe which is easy to work with. Hopefully we can find paint which will adhere to it well when sanded. The blade is a length of 25mm polycarbonate tubing and a half-sphere polycarbonate cap for the tip. The LEDs are inside a second length of smaller 16mm polycarbonate tube which keeps the LEDs straight and holds them in the center of the blade.

Polycarbonate blade

Polycarbonate tube is transparent and the individual LEDs can be seen right through it, so the light needs to be diffused in some way. Two suggestions I’ve come across are to either sand the inside of the tube or to line the inside of the blade with a roll of diffusing film. I’ve used the sanding method, which makes it look better, but the individual pixels can still be seen, at least in person. In photos it looks awesome, but only because it saturates the camera sensor.

Illuminated blade

You can see in the photo on the right that the individual pixels show up in the reflection off the blade, even though the blade itself looks like a single beam of light. I’m still experimenting with how to make it look better, and I’d love to hear how others have solved the problem.

LED strip and supporting 16mm tube

To light the blade, I used two lengths of 60-pixel/m WS2812b LED strip stuck back-to-back. Originally I used a single 2m length folded in half, but doing it that way requires twice the memory and processor time to drive the pixels.

Lightsaber electronics

The controller is a Digispark, which uses the ATtiny85 microcontroller and is Arduino compatible. The Digispark has an on-board 500mA regulator, but that isn’t nearly enough for 110 WS2812b LEDs, which can draw up to 60mA per pixel when fully lit. Instead I added a Pololu 5V/3.5A regulator module to provide power to the strip. That is still less than the theoretical maximum of 6.6A for all the LEDs, but for how they are being used, somewhere around 2.5A will be the typical draw. Finally, I’m using 6xAA batteries to provide power, which is fine for now, but I will be replacing them with a rechargeable pack soon.
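The power budget above is simple arithmetic, but it is worth making explicit since it drives the regulator choice. The helper name here is mine:

```cpp
#include <cassert>

// Worst-case current draw for a WS2812b strip: each pixel can pull
// roughly 60 mA with red, green and blue all at full brightness.
int worst_case_ma(int num_pixels, int ma_per_pixel = 60) {
    return num_pixels * ma_per_pixel;
}
```

110 pixels at a worst-case 60mA each gives 6600mA, far beyond the Digispark’s 500mA on-board regulator and nearly double the 3.5A module, which is why the typical-use estimate of around 2.5A matters.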

Control is provided via a button and a potentiometer wired to GPIO pins on the ATtiny. The firmware itself is trivial. I’m using the NeoPixel library from Adafruit to drive the control signal. Pressing the button toggles the saber on and off, and the colour is set from the potentiometer voltage reading. I’ve yet to add any other effects.
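As a sketch of how a potentiometer reading can become a blade colour, here is one possible mapping of a 10-bit ADC value onto a red-green-blue colour wheel. This mirrors the common “Wheel” helper from NeoPixel example code, but the exact mapping is my own guess at the approach, not the firmware’s actual code:

```cpp
#include <cassert>
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// Map a 10-bit ADC reading (0-1023) onto a simple three-segment
// colour wheel (red -> green -> blue -> red), so turning the pot
// sweeps the blade through the full hue range.
Rgb pot_to_colour(int adc) {
    uint8_t pos = (uint8_t)((adc * 255) / 1023);   // scale to 0-255
    if (pos < 85)
        return Rgb{ (uint8_t)(255 - pos * 3), (uint8_t)(pos * 3), 0 };
    if (pos < 170) {
        pos -= 85;
        return Rgb{ 0, (uint8_t)(255 - pos * 3), (uint8_t)(pos * 3) };
    }
    pos -= 170;
    return Rgb{ (uint8_t)(pos * 3), 0, (uint8_t)(255 - pos * 3) };
}
```

On the device, the ADC value would come from `analogRead()` on the pot’s wiper pin, and the result would be written to every lit pixel.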

Mechanically, the potentiometers I’ve used have turned out to be a bad choice. I’ve damaged two so far while mounting them inside the case: I ended up flexing the base so that the wiper no longer makes solid contact, which makes the colour flicker badly. I’m considering replacing the pot entirely with the sensor board from a Wii Nunchuk. Cheap replicas can be found on AliExpress for about £3 each, which I can tear down for parts. That would give me two buttons, a 2-axis joystick and a 3-axis accelerometer, all readable in a single I2C transaction.
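As background on why the Nunchuk is attractive: clones in unencrypted mode return the entire controller state as a single 6-byte report over I2C. Based on community reverse-engineering documentation, decoding that report looks roughly like this (the struct and function names are mine):

```cpp
#include <cassert>
#include <cstdint>

// Decoded state from the Nunchuk's 6-byte I2C report (unencrypted
// mode, as used by most third-party clones).
struct NunchukState {
    uint8_t  joy_x, joy_y;              // analogue stick, roughly 0-255
    uint16_t accel_x, accel_y, accel_z; // 10-bit accelerometer axes
    bool     c, z;                      // buttons, true = pressed
};

NunchukState decode_nunchuk(const uint8_t d[6]) {
    NunchukState s;
    s.joy_x = d[0];
    s.joy_y = d[1];
    // Bytes 2-4 hold each accelerometer axis' upper 8 bits; the two
    // low bits of each axis are packed into byte 5 with the buttons.
    s.accel_x = (uint16_t)((d[2] << 2) | ((d[5] >> 2) & 0x3));
    s.accel_y = (uint16_t)((d[3] << 2) | ((d[5] >> 4) & 0x3));
    s.accel_z = (uint16_t)((d[4] << 2) | ((d[5] >> 6) & 0x3));
    s.z = !(d[5] & 0x01);               // button bits are inverted
    s.c = !((d[5] >> 1) & 0x01);
    return s;
}
```

On hardware, the 6 bytes would be read from I2C address 0x52 after the usual clone initialization sequence; only the decoding is shown here.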

The next thing I need to do is get everything to fit inside the hilt. It’s mostly there, but the battery pack is a little large and the controls are interfering with the circuit boards. I may replace the Digispark and power regulator boards with a single custom board to keep the size down. I would also like to add sound effects. A speaker could be attached to one of the ATtiny85’s PWM pins, but generating the sounds may be asking a bit too much of it; it only has 8K of flash and 512 bytes of SRAM, after all. I could switch to a more capable microcontroller, but it is rather fun to see how much I can do with a small device.

Our friend Mandy succumbs to the dark side

¹ I’m blithely ignoring shipping costs here. Once the design is sorted out I’ll purchase enough parts to build about a dozen sabers for my son’s birthday party. Buying that many at once amortizes the shipping costs somewhat. ↩

When will UEFI and ACPI be ready on ARM?
http://www.secretlab.ca/archives/27
Fri, 31 Jan 2014 12:00:52 +0000

As part of the work to prepare for ARM servers, the Linaro Enterprise Group has spent the last year getting ACPI and UEFI working on ARM. We’ve been working closely with ARM and ARM’s partners to make sure the firmware architecture meets the needs of the server market.

Yet this work has raised questions about what it means for the rest of the ARM Linux world. Why are we doing UEFI & ACPI? Who should be using UEFI/ACPI? Will U-Boot and FDT continue to be supported? Can hardware provide both ACPI & FDT? Can ACPI and FDT coexist? And so on. I want to quickly address those questions in this blog post, and then I want to discuss a development plan to get UEFI and ACPI onto shipping servers.


Why UEFI and ACPI?

Note: I am talking only about general purpose ARMv8 servers here; not mobile, not embedded. At present I don’t see any compelling reason to adopt ACPI outside of the server market. If you are not doing server work, you can stop reading right now and keep using what you already have.

The short answer is, “UEFI and ACPI should be used because ARM’s server specifications will require them,” but that just leads to the next question: “Why do the specifications require it?” ARM has spent the last couple of years consulting with its partners to develop a common platform for ARM servers. Those partners include OS, hardware, and silicon vendors as well as other interested parties.

Firmware design was a big part of those consultations. The two big questions were, what firmware interface should be specified, and what hardware description should be used? First of all, it is important to note that while many of the same people are involved, UEFI and ACPI are not the same thing. UEFI is not tied to ACPI and will happily work with an FDT. Similarly, ACPI does not depend on UEFI, and can be made to work just fine with U-Boot.

On firmware interface, choosing UEFI was a pretty easy decision. UEFI has a specification, an open source BSD-licenced implementation, and the mainline project has ARM support. UEFI specifies how an OS loader is obtained from disk or the network and executed, and we have tools to work with it on Linux. Plus it works exactly the same way on x86. This makes life far simpler for vendors who already have tooling based on UEFI, and for end users who don’t have to learn something new. Supporting UEFI has minimal impact and doesn’t impose a major burden on Linux developers. When compared with U-Boot it was no contest. U-Boot is great in the environments that it grew up in, but it doesn’t provide any of the consistency that is absolutely required for a general purpose platform.

ACPI was a harder decision, particularly for us Linux folks. We’ve spent the past 3 years focusing on FDT development, and ACPI uses a different model. FDT is based on the model where the kernel drives all hardware right down to the clocks and regulators; the FDT merely describes how the components are configured and wired together. ACPI, on the other hand, moves a lot of the low-level wiring details into ACPI bytecode so that the kernel doesn’t need to be aware of power management details. For ARM Linux this is an issue because it runs completely counter to all the work we’ve done on the clock, regulator, GPIO and power management frameworks; work that is absolutely essential when using board files or FDT, but which may conflict when power management is controlled by ACPI. There is a lot of work that we need to do in order to get ACPI working on ARM Linux, especially since adding ACPI must not break existing board support.

Hardware and silicon vendors look at ACPI in a very different way than kernel engineers do. To begin with, they already have hardware and processes built around ACPI descriptions. Platform management tools are integrated with ACPI, and they want to use the same technology across their x86 and ARM product offerings. They also go to great lengths to ensure that existing OS releases will boot on their hardware without patches to the kernel. ACPI gives them limited control over low-level details of the platform so that they can abstract away differences between systems.

We kernel engineers don’t like to give up that control. There have certainly been enough instances where firmware has abused that control to the frustration of kernel hackers. Yet by and large the system works and there is a very healthy ecosystem around platforms using ACPI.

Ultimately, ARM and the companies it consulted came to the consensus that ACPI is the best choice for the ARM servers. I personally think it is the right decision. It helps that both UEFI and ACPI specs are maintained under the umbrella of the UEFI Forum, which any company is welcome to join if they want to be involved in specification development. There are a lot of Linux people involved with the UEFI and ACPI working groups these days.

I expect ARM will be publishing a firmware document requiring both UEFI and ACPI in the near future.

Current Status

At this present moment, mainline only supports FDT. I think I’m safe in saying that among the ARM kernel maintainers we’re committed to FDT. It is not going away. Any hardware that provides an FDT that boots mainline Linux will continue to be supported. You can build a device with FDT and it will be supported for the long term. Similarly, there are no plans to deprecate U-Boot support, or any other boot loader for that matter. ACPI and UEFI support will happily coexist with FDT and support for other bootloaders.

ACPI support is not yet in mainline. The patches for ARM are done and have been posted to the mailing list for review. I expect that they will get merged in v3.15 or v3.16 of the kernel. Now, work has shifted to working out best practices for using ACPI on ARM. At the moment we don’t yet know what a “good” set of ARM ACPI tables should look like. Nor do we know how existing kernel device drivers and infrastructure should work when ACPI is provided. Until those questions are answered, ACPI isn’t ready to use. Getting those answers is going to take some time.

So, for the vendors who do want to use ACPI, what are they supposed to do? Ship ACPI (which doesn’t work yet)? Ship FDT and upgrade to ACPI later? Ship both (but how does that work)? In an effort to clarify, here is how I see the world:

What Should Vendors Do?

Given the current state of mainline support, what should vendors ship on their hardware? In typically helpful form, I answer, “it depends”. To keep the answer simple, I’ve split up my suggestions into three categories based on when hardware is going to ship: immediately, in the next year, and in the long term (2+ years).

For Hardware Shipping Very Shortly

There are two questions to answer: which firmware should vendors use, and which hardware description? I’ll start with firmware. At this moment, Linux UEFI support is essentially complete. The patches have been reviewed positively and will probably get merged in the next merge window. UEFI will work equally well with either an FDT or an ACPI hardware description, and the TianoCore UEFI project can already boot a Linux kernel without any additional patches. Anyone planning to ship servers in the near future should plan on using UEFI right from the start.

UEFI is important because it provides a standard protocol and runtime for an OS to install itself. This is critical for distributions because it gets away from the hardware-specific install scripts that they have to maintain for U-Boot right now. UEFI has been working on ARM for years. Kernel patches for CONFIG_EFI_STUB and runtime services are under review for ARM32 and ARM64 and should get merged soon. If you want a generic distribution image to boot on your hardware, then use UEFI.

ACPI is another matter. While basic support patches are in the process of being reviewed for merging, there is still a lot of work to be done on the infrastructure side to get ACPI working well. It is going to take some time before we can claim that the kernel supports ACPI systems. ACPI should be considered experimental at this time, and changes should be expected before it is usable by the kernel. I suggest that any server vendor shipping hardware in the near future have firmware provide an FDT.

Stability also used to be an issue for FDT, but we’ve hit the point where the majority of FDT support is in mainline. It is no longer necessary to update the FDT in lock step with the kernel. We debated the problem at the 2013 ARM kernel summit in Edinburgh and made the decision that the FDT is a stable ABI once it hits mainline. If the ABI gets changed in a way that breaks users, then it is a bug and it must be fixed. Therefore, upgrading the kernel shall not require an FDT upgrade, even if it means we need to carry some legacy translation code for older bindings.4

That said, there are other valid reasons for upgrading the FDT, so vendors should allow for that when designing firmware. For instance, the kernel will not support hardware that isn’t described in the FDT. An FDT update would be required to enable previously hidden functionality. Additionally, bugs in FDT data should be fixed with an FDT update. We don’t want to be dealing with individual bug workarounds in the kernel that can be easily repaired in the data.

A vendor can provide ACPI tables alongside the FDT, but in doing so I would strongly recommend treating ACPI as an experimental feature and not the default boot behaviour.

On a related note, UEFI may also provide SMBIOS data to the kernel regardless of whether ACPI or FDT is used. Vendors who want to provide SMBIOS data should feel free to do so. SMBIOS is an independent table which can provide identification information about the platform that is useful for asset management, and it is maintained as a separate specification. A simple patch enabling SMBIOS on ARM has been posted.

FDT, SMBIOS and ACPI tables are provided to the kernel via the UEFI Configuration Table. The configuration table is a list of key value pairs. Keys are well known GUIDs, and the value is a pointer to the data structure. SMBIOS and ACPI GUIDs are specified in the UEFI spec. The FDT GUID has been posted for review. FDT and SMBIOS data structures must be in memory allocated as EFI_RUNTIME_DATA.
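The lookup an OS loader performs over the configuration table can be sketched as a linear scan of (GUID, pointer) pairs. The types below are simplified stand-ins for the real EFI_GUID and EFI_CONFIGURATION_TABLE definitions; actual code walks SystemTable->ConfigurationTable for NumberOfTableEntries entries.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Simplified stand-ins for EFI_GUID and a configuration table entry.
struct Guid { uint8_t b[16]; };

struct ConfigEntry {
    Guid        vendor_guid;
    const void *vendor_table;   // points at the FDT, ACPI RSDP, SMBIOS...
};

// Scan the configuration table for a well-known GUID, as an OS loader
// does to locate the FDT or ACPI tables. Returns nullptr if absent,
// which is how the kernel stub decides whether to fall back to FDT.
const void *find_table(const ConfigEntry *tbl, int n, const Guid &want) {
    for (int i = 0; i < n; ++i)
        if (std::memcmp(tbl[i].vendor_guid.b, want.b, 16) == 0)
            return tbl[i].vendor_table;
    return nullptr;
}
```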

For a Year From Now

A year from now I expect that ACPI support will be in mainline. My recommendations are the same as above, with the following exceptions:

For widest range of support, platforms should support both FDT and ACPI. Some operating systems will only support ACPI, others only FDT. ACPI will probably be stabilizing to the point that if support is in mainline, then we will continue to support the platform in Linux.

My opinion is that Linux should use only FDT or only ACPI, but not both! [Edit: by this I mean not both at the same time. It is perfectly fine for an OS to have support for both, as long as only one is used at a time] I think that when provided with both, the kernel should default to ACPI and ignore the FDT (this is up for debate; Eventually I think this is what the kernel should do, and I think we should start with that policy simply because trying to change the policy at some arbitrary point in time will probably be a lot more painful than starting with the default that we want to ultimately get to).

The Long View

Servers must provide ACPI, but vendors can optionally choose to provide an FDT if they need to support an OS which doesn’t have ACPI support. For example, this may be an issue for the Xen hypervisor which does not yet have a design for adding ARM ACPI support. The kernel should prefer ACPI if provided, but there are no plans to deprecate FDT support. As far as the kernel is concerned, FDT and ACPI are on equal footing. We will not refuse to boot a server that provides FDT.

I cannot speak for OS vendors and hardware vendors on this topic. They may make their own statements on what is required to support the platform. So, while the kernel will fully support both FDT and ACPI descriptions, vendors may require ACPI.

Implementation Details

Here I’m going to talk about how everything works together. There are a lot of moving parts in the firmware architecture described above, so it helps to have a description of how the parts interact.

UEFI

The TianoCore UEFI project has a complete, open source UEFI implementation that includes support for both 32 and 64 bit ARM architectures. It can be used to build UEFI firmware which is compliant with the UEFI spec. UEFI cannot boot Linux directly, but requires a Linux-specific OS loader which is not part of the UEFI spec. There is a legacy LinuxLoader in the UEFI tree, but as it is not standardized there is no guarantee that it will be included in firmware. Best practice is to use the native UEFI support in the kernel.

GRUB on UEFI

GRUB UEFI support has been ported to ARM and works almost identically to GRUB UEFI on x86. The patches have been merged into mainline and will be part of the GRUB release 2.02.

Internally, the most significant difference between x86 and ARM GRUB support is that on x86 GRUB the boot_params structure is used to pass additional data to the kernel, while on ARM it uses an FDT.

Linux on UEFI (CONFIG_EFI_STUB)

The current set of ready-to-merge patches to the Linux kernel add support for both CONFIG_EFI_STUB and UEFI runtime services. CONFIG_EFI_STUB embeds a UEFI OS loader into the kernel image itself which allows UEFI to boot the kernel as a native UEFI binary. The stub takes care of setting up the system the way Linux wants it and jumping into the kernel. The kernel-proper entry point remains exactly the same as it is now and a CONFIG_EFI_STUB kernel is still bootable on U-Boot and other bootloaders.

The kernel proper still requires an FDT pointer to be passed at boot time, so the UEFI stub is responsible for parsing the UEFI data, setting up the environment (including an FDT), and jumping into the kernel proper. When booting with FDT, the stub will obtain the FDT from UEFI and pass it directly to the kernel. When booting with ACPI, an empty FDT is created and used to pass boot parameters (kernel command line, initrd location, memory map, system table pointer, etc.) similar to how x86 uses the boot_params structure.

If both ACPI and FDT are provided by firmware, then all hardware description in the FDT will be ignored. The kernel should never attempt use ACPI and FDT hardware descriptions at the same time.5

UEFI runtime services are also supported. The stub will pass the UEFI system table pointer through to the kernel and the kernel will reserve UEFI memory regions so that it can call back into UEFI code to query and manipulate boot variables, the hardware clock, and system wakeup.

ACPI

As described above, the kernel will use ACPI if present in the configuration table, and fall back to FDT otherwise. The kernel will not attempt to use both ACPI and FDT hardware descriptions.

One potential problem is that kexec may interact poorly with ACPI. The OS isn’t supposed to unpack the DSDT more than once, which would happen if the kernel kexecs into another kernel (each kernel unpacks it at boot). However, x86 has been doing kexec for years, so this may not actually be a problem in the real world.

With the caveat that if nobody notices, is it really an ABI breakage? There are many embedded platforms which want to keep the FDT in lock step with the kernel and the build toolchain reflects that ↩

This is still up for debate, the priority of ACPI over FDT may yet be changed ↩

Standardization and ARM Servers
http://www.secretlab.ca/archives/39
Thu, 30 Jan 2014 10:50:56 +0000

This is an important week for ARM’s push into server platforms. On Tuesday AMD announced that their Opteron A1100 “Seattle” ARM processor will begin sampling in March, and yesterday ARM announced the availability of the Server Base System Architecture (SBSA) document for ARM servers. Of the two, the SBSA announcement is the more significant, because it begins laying out the platform for ARM server machines that both hardware and software vendors can build to while ensuring compatibility.

The SBSA document is specifically about hardware design and includes requirements on CPU features, cache architecture, MMU organization and standard peripheral interfaces like SATA and USB. Despite media reports, the SBSA does not cover firmware architecture. You can expect ARM to release a separate document specifying firmware requirements (spoiler: UEFI and ACPI will be required).

Full disclosure: I was part of the group consulted by ARM for the drafting of the SBSA

The Difference Between Learning and Understanding
http://www.secretlab.ca/archives/5
Mon, 27 Jan 2014 23:34:03 +0000

There is a big difference between knowing something and really understanding it. That thought occurred to me a few days ago when I made a bad decision on a project I’ve been tinkering with. To explain what I mean, let me describe a bit about my experience as an engineer.

I graduated about 15 years ago with an electrical engineering degree, and I’ve spent my entire career with a desk covered in random bits of electronics. Despite that, it’s rare for me to build a circuit or pick up a soldering iron. Right from the beginning I’ve always been more fascinated with software and what it could make hardware do rather than building the hardware itself. Software was just so much fun that I never seemed to get around to building much.

Arduino projects expertly soldered by my children

Recently, however, I’ve started teaching my kids about electronics and I’ve had to dust off the old skills. It has been incredibly fun to build things with them. Last Christmas we built the excellent electronic dice kit from SpikenzieLabs. This year we built Jimmie Rodgers’ LoL Shield, Adafruit’s MiniPOV3, and SparkFun’s Mr. Roboto, all Arduino-like devices, so that they could try their hand at microcontroller programming. The kits are wonderful, and I heartily recommend them for anyone wanting to learn how to solder. My 8-year-old rocked the soldering iron! Now we want to try something a little bit more ambitious and build something from scratch.

L78S05 linear voltage regulator circuit.

Building from scratch requires thinking about circuits that I usually take for granted. In this case, we’re building a project which will draw more current than the Arduino’s on-board regulator can supply, so I needed to get or build a 5V, 2A regulator. This turns out to be new territory for me; pretty much everything I’ve used so far either didn’t need a regulator or already had one on board. So, I did some research and settled on the well-known L78S05CV linear regulator. I picked one up at the local electronics supply, wired it up on a breadboard and turned on the power. Lo and behold, it worked beautifully, with one exception.

Now, I learned in school that a switching regulator is far more efficient than a linear regulator. I know that the L78xx regulators require a heat sink, and I understand why they generate a lot of heat. However, it wasn’t until I built the circuit and accidentally touched the very hot L78S05 that I came to a full and complete understanding of what that means. Ouch!
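The burn is easy to quantify: a linear regulator dissipates the entire input-to-output voltage drop as heat at the load current. The numbers below assume a 9V input; they are illustrative, not taken from my actual circuit.

```cpp
#include <cassert>

// Heat dissipated in a linear regulator: P = (Vin - Vout) * I.
double linear_reg_watts(double vin, double vout, double amps) {
    return (vin - vout) * amps;
}

// For comparison, loss in a switching regulator delivering the same
// output at a given efficiency (e.g. 0.9 for a typical buck module).
double switching_reg_watts(double vout, double amps, double efficiency) {
    double pout = vout * amps;
    return pout / efficiency - pout;
}
```

Dropping 9V to 5V at 2A burns 8W in the linear regulator while delivering only 10W to the load; a 90% efficient buck converter would lose roughly 1.1W on the same job, which is why switching regulators run cool.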

Should I have known better and chosen a switching regulator at the start? Yes, probably, but now I really know better!