Monday, August 15, 2016

In the previous post we looked at how we can emulate sequencer/driver communication using a lightweight stub of uvm_sequencer. Let's also look at some more tips and tricks I've picked up while writing unit tests for drivers. To mix things up a bit, let's look at the AXI protocol. We're not going to implement a full featured driver; instead, we'll focus on the write channels:

The tests above cover the functionality that we want to implement in our driver. We won't go through each and every one of them and see what production code we need to write, since the actual implementation of the driver isn't really important for this post. We want to focus more on the tests themselves.

When first confronted with these tests, a new developer won't have an easy time understanding what's going on. The tests are pretty verbose and it's not immediately clear what the focus of each one is. Let's see how we could improve this.

First, we'll notice that the one thing we do in each test is to create an item and queue it for the driver. We use randomization to make sure that the item is "consistent", meaning that the constraints defined in it hold. We could set item variables procedurally, but we would need to ensure that the length and the number of transfers match, which would mean even more code. Instead of repeating these steps in each unit test, we could centralize them into one place. Since we use randomization, we can't extract a function. We're forced to use a macro:
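The macro might look something like this (a hypothetical sketch: the item type, the `FAIL_UNLESS usage and the sequencer's add_item(...) method are assumptions based on the sequencer stub from the previous post):

```systemverilog
// Hypothetical sketch of the item creation macro. 'sequencer' is the
// stub from the previous post; 'vgm_axi_item' is an assumed type name.
`define add_item_with(CONSTRAINTS) \
  begin \
    vgm_axi_item item = new("item"); \
    `FAIL_UNLESS(item.randomize() with CONSTRAINTS) \
    sequencer.add_item(item); \
  end
```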

We can even go one step further. If we look at the tests again, we'll notice that we use the same constraint over and over again to get an item without any delay. Also, whenever we want to check some write data channel aspects, we want to constrain the delay of each transfer to be zero. We do have tests where we want non-zero delays, but they are the exceptional cases. What we could do is to add some default values to the delay variables via soft constraints. This way, whenever we use the `add_item_with(...) macro we know that we'll get an item without any delay:
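The soft constraints could be added to the item roughly like this (the field names are assumptions):

```systemverilog
class vgm_axi_item extends uvm_sequence_item;
  rand int unsigned delay;              // cycles before the address phase
  rand int unsigned transfer_delays[];  // cycles before each data transfer

  // Default to back-to-back behavior; individual tests can still
  // override these soft constraints at randomization time.
  constraint default_delays {
    soft delay == 0;
    foreach (transfer_delays[i])
      soft transfer_delays[i] == 0;
  }

  `uvm_object_utils(vgm_axi_item)

  function new(string name = "vgm_axi_item");
    super.new(name);
  endfunction
endclass
```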

If we compare them with the initial version from the beginning of the post, we'll see that they are much more compact. It's also clearer that the handling of the data variables is what we're testing, not anything related to delays.

Since we want to check the behavior of signals at certain points in time, we need to do a lot of waits. The statement @(posedge clk) comes up a lot in our unit tests. We could shorten this by using a default clocking block:
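Assuming the unit test module drives a clk signal, the clocking block could be declared like this:

```systemverilog
// Declared at module level in the unit test; '##n' now means
// "wait n clock edges" without spelling out the clock each time.
default clocking cb @(posedge clk);
endclocking
```

A sequence of three `@(posedge clk);` statements then collapses to `##3;`.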

Having fewer words to parse makes the test's intent clearer. Using a default clocking block is an option most of the time, but if you have a more exotic protocol that uses both edges of the clock, or multiple clocks, it's not going to work.

One thing you may have noticed is that I've marked some wait statements with comments. If you've read Clean Code (and if you haven't you should), you'll call me out on this. Uncle Bob says that comments are a crutch for poorly written code that doesn't express its intent properly. Instead of relying on comments, we could create a named task:

task wait_addr_phase_ended();
  ##1;
endtask

Now, when a test calls this task, it'll be immediately apparent what the intention is:

There is a mismatch between what the task does and its name. The task actually just waits for one cycle. To reflect this, it should have been named wait_cycle(). A task call like this would take us back to square one in terms of test readability; we may as well just use the ##1 statement, as it tells us the same thing. If we want to solve this mismatch between the task name and its implementation, we should change the latter. In the context of our tests, we knew that the address phase was going to end after one clock cycle. Generally, though, what we want is to wait for AWVALID and AWREADY to be high at the same time:
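A sketch of the updated task, assuming the signals are accessed through an interface instance called intf:

```systemverilog
// Block at a clock edge until both AWVALID and AWREADY are sampled
// high, i.e. until the address phase handshake completes.
task wait_addr_phase_ended();
  @(posedge clk iff (intf.AWVALID === 1 && intf.AWREADY === 1));
endtask
```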

Some unit-testing purists might bash this method because it adds complexity to the testing logic. This is something we should try to avoid, since we don't want to have to start testing our tests. The task is lean enough that I'd say this point doesn't apply here.

While we may have streamlined our test preparation code, checking that our expectations are fulfilled is ridiculously long. Procedural code isn't really well suited to check behavior over time. You know what is though? Assertions... Instead of writing a big mess of procedural code that does repeats and cycle delays, we could write a nice property and check that it holds. When people hear the word property, they normally think of concurrent assertions, but this isn't really what we want here. What we want to do is to check that a certain property holds after a certain point in time, not during the whole simulation.

Luckily, SystemVerilog provides the expect construct, which does exactly what we want. Given a property, it will begin evaluating it starting with the first subsequent clock. For example, to check that the driver can drive data transfers with delays, we could write the following:
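As a sketch (the three-cycle delay and the data value are assumptions), such a check could look like this:

```systemverilog
// After queuing an item whose first transfer has a delay of 3, expect
// three idle cycles on the write data channel, then a valid transfer.
expect (
  (intf.WVALID === 0) [*3] ##1 (intf.WVALID === 1 && intf.WDATA === 'hdead_beef)
)
else
  `FAIL_IF(1)
```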

This is much cleaner than the procedural code we had before. It also allows us to structure our unit tests according to the Arrange, Act, Assert pattern (even though for drivers Arrange and Act tend to get a bit mixed, at least Assert is clearly separated).

Since expecting that properties hold is something we'll want to do over and over again, let's wrap it in a utility macro and make it part of the vgm_svunit_utils package:

`define FAIL_UNLESS_PROP(prop) \
  expect (prop) \
  else \
    `FAIL_IF(1)

Using this macro will give the unit tests a more consistent SVUnit look and feel:

As we saw in the previous post, it's very important to use the === (4-state equality) operator instead of ==, otherwise we're writing tests that always pass. I intentionally wrote drive_write_addr_channel with a bug to show this (kudos to anyone who noticed). We also need to watch out for hidden comparisons. In the last test we didn't even use the equality operator:
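Reconstructing that last check as a sketch, the macros received the raw signal value, with no explicit comparison:

```systemverilog
// The macros see the raw (possibly X) value of WLAST.
`FAIL_IF(intf.WLAST)      // while intermediate transfers are driven
// ...
`FAIL_UNLESS(intf.WLAST)  // once the last transfer is driven
```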

Because of the way the `FAIL_* macros are written, they will both always pass, so the test isn't really doing anything. If we were to re-write them using properties, we would notice if WLAST isn't being driven:
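A sketch of the property-based version:

```systemverilog
// An X on WLAST propagates through the negation and fails the check,
// instead of silently passing.
`FAIL_UNLESS_PROP(!intf.WLAST)  // while intermediate transfers are driven
// ...
`FAIL_UNLESS_PROP(intf.WLAST)   // once the last transfer is driven
```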

If WLAST were X, then the negation would also return X, which would be interpreted as 0, causing the property to fail. For single-bit signals, using properties is much safer than comparing for equality. There's also the added bonus that the code is more compact. For vectors, though, we still need to make sure that we're using the === operator.

Another cool thing that properties allow us to do is to focus more on the aspects that are important for a test. For example, in the data_held_until_ready test we want to check that the driver doesn't modify the value of WDATA until it sees a corresponding WREADY. We did this by choosing known values for the data transfers and checking that these values stay on the bus for the appropriate number of clock cycles. Actually, we don't (or shouldn't) care what data is being driven, as long as it stays constant. We could simplify the code to the following:
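A sketch of the simplified check, using $stable(...) inside a property:

```systemverilog
// If WVALID is high but the slave isn't ready, WDATA must be unchanged
// on the next cycle; the actual data value no longer matters. This is a
// single-attempt sketch; the real test would cover each transfer.
`FAIL_UNLESS_PROP(
  intf.WVALID && !intf.WREADY |=> $stable(intf.WDATA)
)
```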

We don't really need to use a property to check that the signal stays stable. As per the LRM we could use the $stable(...) system function inside procedural code as well, but some tools don't allow this, limiting its use to assertions.

While it's usually frowned upon to have randomness in unit tests, I would argue that this isn't necessarily a problem here. Randomness should be avoided because it causes us to write complicated code for our expectations. As long as we don't need to do this (i.e. the checking code stays the same regardless of what we plug into it), everything should be fine. There is one caveat, though. On the off chance that two consecutive transfers contain the same data, it won't be possible to figure out whether the driver moved to the second transfer too early. In the extreme case, all transfers could have the same data value, making it impossible to check that the driver really holds transfers constant for their entire durations. This could be solved by writing a constraint that all data values should be unique.

Last, but not least, I hinted in the previous post that a driver not only drives items from the sequencer. It's also supposed to request items from the sequencer at well defined points in time. For example, a driver should call get_next_item(...) once it's able to process a new item, but not before, to allow the running sequence to randomize items at the latest possible time (so-called late randomization). This is helpful when sequences use the current state of the system to decide what to do next. For simple protocols this is easy: a new item can start exactly after the previous one has finished. For pipelined protocols, though, it's not as easy. The AXI protocol is massively pipelined and can have a lot of ongoing transactions at any time. I don't want to have to think about what a sensible definition of an available slot for a new item would be, because the scheme would probably be too complicated. I do, however, want to show how we could verify that a driver calls sequencer methods at defined times.

We'll take a contrived example of how to handle responses, since I couldn't think of anything better. Whether we're checking that put_response(...) or get_next_item(...) was called when expected doesn't really matter, so this example should be enough to prove a point. Let's say that when a response comes on the write response channel, the driver is supposed to let the sequencer know.

The test is going to pass with this implementation. The problem is, though, that the response is considered to be done only once it has also been accepted, i.e. BREADY goes high. Unfortunately, the previous code sends responses too early. We'll need to update our unit test to check that put_response(...) was called after BREADY was also high and that it was called only once. This is where the test diagnostic information that sequencer_stub provides is going to be useful:

The sequencer_stub class contains a named event put_response_called that, as the name suggests, is triggered when put_response(...) is called by the driver. The `FAIL_UNLESS_TRIGGERED(...) macro is part of the vgm_svunit_utils package. It wraps the code required to check that an event was triggered in the current time step:
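The macro might be implemented roughly like this (a sketch; the actual vgm_svunit_utils code may differ):

```systemverilog
// Sketch: the wait falls through if the event was already triggered in
// the current time step; if time advances first, the test fails. The
// outer fork isolates 'disable fork' from other test threads.
`define FAIL_UNLESS_TRIGGERED(ev) \
  fork \
    begin \
      fork \
        wait (ev.triggered); \
        begin \
          #1; \
          `FAIL_IF(1) \
        end \
      join_any \
      disable fork; \
    end \
  join
```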

The wait statement checks whether the event was already triggered (because the driver code could have executed first) and blocks if it wasn't. If time moves forward and the event didn't get triggered, an error is reported. Since the test will now fail, we'll need to update the driver code (not shown).

We've covered a lot of ground in this post on how to write more comprehensive, readable and maintainable unit tests for UVM drivers. You can find the example code here. I hope this helps you be more productive when developing your own drivers.

Sunday, August 7, 2016

It's that time again when I've started a new project at work. Since we're going to be using some new proprietary interfaces in this chip, this calls for some new UVCs. I wouldn't even consider developing a new UVC without setting up a unit testing environment for it first. Since this is a greenfield project, a lot of the specifications are volatile, meaning that the interface protocol can change at any moment. Having tests in place can help make sure that I don't miss anything. Even if the specification stays the same, I might decide to restructure the code and I want to be certain that it still works.

I first started with unit testing about two years ago, while developing some other interface UVCs. I've learned a few things throughout this time and I'd like to share some of the techniques I've used. In this post we'll look at how to test UVM drivers.

A driver is supposed to take a transaction (called a sequence item in UVM lingo) and convert it into signal toggles. Testing a driver is conceptually pretty straightforward: we supply it with an item and we check that the toggles it produces are correct. As an example, we'll take the Wishbone protocol, revision B.3. Our sequence item models the properties of an access:
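The sequence item might look something like this (the field names are assumptions):

```systemverilog
typedef enum { READ, WRITE } direction_e;

class vgm_wb_item extends uvm_sequence_item;
  rand bit [31:0] address;
  rand bit [31:0] data;
  rand direction_e direction;
  rand int unsigned delay;  // idle cycles before the access starts

  `uvm_object_utils(vgm_wb_item)

  function new(string name = "vgm_wb_item");
    super.new(name);
  endfunction
endclass
```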

Let's look at how to supply a driver with an item. A driver is an active component that asks for items at its own pace. Inside an agent, it's connected to a sequencer that feeds it items when they become available. Inside our unit test, we need to emulate the same relationship by having a test double which the driver can interrogate. There's nothing stopping us from using uvm_sequencer itself:

The first test we want to write is that when the driver gets an item with no delay, it drives CYC_O and STB_O immediately. We first create an item and we start it on the sequencer using execute_item(...). This models a `uvm_send(...) action inside a sequence:
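A sketch of this first test (the interface instance name and item type are assumptions):

```systemverilog
// Queue an item with no delay; execute_item(...) blocks, so fork it.
vgm_wb_item item = new("item");
`FAIL_UNLESS(item.randomize() with { delay == 0; })

fork
  sequencer.execute_item(item);
join_none

@(posedge clk);
`FAIL_UNLESS(intf.CYC_O === 1)
`FAIL_UNLESS(intf.STB_O === 1)
```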

Since execute_item(...) blocks until the driver finishes processing the item, we'll need to fork it out to be able to check what the driver does with it.

After supplying the driver with the item, we need to check that it drives the appropriate signal values. Once it gets an item, we expect the driver to start driving CYC_O and STB_O, with their values valid on the next posedge. We have to check one clock cycle at a time. For example, if we were to drive an item with a delay of three cycles, the unit test would look like this:
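A sketch of such a test:

```systemverilog
fork
  sequencer.execute_item(item);  // item randomized with delay == 3
join_none

// The bus must stay idle during each of the three delay cycles...
repeat (3) begin
  @(posedge clk);
  `FAIL_UNLESS(intf.CYC_O === 0)
  `FAIL_UNLESS(intf.STB_O === 0)
end

// ...and the access must start on the cycle after that.
@(posedge clk);
`FAIL_UNLESS(intf.CYC_O === 1)
`FAIL_UNLESS(intf.STB_O === 1)
```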

It's not enough to just skip the first three clock cycles. We need to ensure that the driver signals idle cycles during that time. Also, notice the use of the === operator (4-state equality). If we were to use the == operator instead, we would get false positives if the driver doesn't drive any of the signals. This is because X (unknown value due to not being driven) matches anything.

We could write a few more unit tests for our driver. For example, a test could check that a read transfer is properly driven:
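As a sketch (field and signal names assumed), the read test could look like this:

```systemverilog
vgm_wb_item item = new("item");
`FAIL_UNLESS(item.randomize() with {
  direction == READ;
  delay == 0;
})

fork
  sequencer.execute_item(item);
join_none

@(posedge clk);
`FAIL_UNLESS(intf.WE_O === 0)              // reads drive WE_O low
`FAIL_UNLESS(intf.ADR_O === item.address)
```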

Notice that in the last two tests we didn't check CYC_O and STB_O anymore. This is because we already checked that they get driven when sending an item. When we write the unit tests, we don't write them in isolation from the production code. They evolve together. To save effort and make the tests more readable, we can focus on certain aspects of the class we want to test.

The implementation of the drive() task would look something like this:
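A rough sketch of it:

```systemverilog
// Sketch: drive idle during the item's delay, then drive all control
// signals in the same cycle and finish the item after one clock.
task drive();
  forever begin
    seq_item_port.get_next_item(req);
    repeat (req.delay) begin
      intf.CYC_O <= 0;
      intf.STB_O <= 0;
      @(posedge intf.clk);
    end
    intf.CYC_O <= 1;
    intf.STB_O <= 1;
    intf.WE_O  <= (req.direction == WRITE);
    intf.ADR_O <= req.address;
    @(posedge intf.clk);
    seq_item_port.item_done();
  end
endtask
```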

We can see that we drive WE_O and ADR_O at the same time as CYC_O and STB_O. It wouldn't gain us much if we, for example, exhaustively checked that the address can be driven with different delays.

Up to now we only tested that the driver properly reacts to requests from the sequencer. Most of the time, the driver also has to react to other events triggered by its partner on the bus. In our case, since we're developing a master driver, it needs to be sensitive to toggles on signals driven by the slave. One such requirement is that a master is supposed to keep the control signals stable until the slave acknowledges the transfer. This is signaled by raising the ACK_I signal.

We need to write a test where, aside from executing an item on the sequencer, we also model the behavior of the slave:
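A sketch of such a test (the exact slave timing is an assumption):

```systemverilog
// Alongside the item, fork a thread that plays the slave and delays
// the acknowledge by a few cycles.
fork
  sequencer.execute_item(item);  // item randomized with delay == 0
  begin : slave
    intf.ACK_I <= 0;
    repeat (3) @(posedge clk);
    intf.ACK_I <= 1;
    @(posedge clk);
    intf.ACK_I <= 0;
  end
join_none

// The control signals must stay asserted until the acknowledge.
repeat (3) begin
  @(posedge clk);
  `FAIL_UNLESS(intf.CYC_O === 1)
  `FAIL_UNLESS(intf.STB_O === 1)
end
```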

We basically model all collaborators of the driver, where some might communicate with it via method calls (like the sequencer) and some might communicate with it via signal toggles on the interface (like a connected slave).

This last test would suggest that we should update the drive() task with the following code:
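The addition would be an extra wait before finishing the item, something like:

```systemverilog
// Hold everything steady until the slave acknowledges the transfer.
@(posedge intf.clk iff intf.ACK_I === 1);
```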

While this is what we would need to do, the test still passes without this line. This is because, once the driver starts driving an item, it won't touch the signals anymore until it gets another item. The extra wait statement would cause the driver to mark an item as finished at a later time. This hints that it isn't enough to just check signal toggles. It's also important that the driver makes calls to item_done() at the right time. This isn't something we can easily check with uvm_sequencer. We'd need to capture information about method calls from the driver to the sequencer.

We could choose to implement such extra testing functionality in a sub-class of uvm_sequencer. We could capture the number of times item_done() (or any other method) got called during a test. This wouldn't be a problem to implement, but have you ever taken a look at uvm_sequencer? That class is massive. The functionality is shared with two of its subclasses, uvm_sequencer_base and uvm_sequencer_param_base. Most of the stuff a sequencer does (prioritization, locking, etc.) we don't even need. Debugging anything would be a nightmare. Interrupting an item in the middle of it being driven (due to a unit test finishing early) is also going to cause fatal errors to be issued (e.g. "get_next_item() called twice without a call to item_done() in between").

A better alternative would be to start from scratch with a lightweight class that mimics the uvm_sequencer functionality we need and provides us with test diagnostic information. When talking about test doubles, I like to use the same terminology as outlined in this article. I can't really decide whether what we want to create should be called a fake (since we'll implement working functionality, but only a limited part of what uvm_sequencer can do) or a stub (since we want to collect information about method calls). I chose 'stub', for now, based on gut feeling.

Our stub has to be parameterizable, just like uvm_sequencer, to be able to interact with any driver. Just like a sequencer, it's going to have a seq_item_export for the driver to connect to:
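The skeleton might look roughly like this (the FIFO-based internals are a sketch, not the exact library code):

```systemverilog
// Parameterized like uvm_sequencer; requests and responses are backed
// by TLM FIFOs instead of the full sequencer machinery.
class sequencer_stub #(type REQ = uvm_sequence_item, type RSP = REQ)
    extends uvm_component;
  uvm_seq_item_pull_imp #(REQ, RSP, sequencer_stub #(REQ, RSP))
    seq_item_export;

  protected uvm_tlm_fifo #(REQ) reqs;
  protected uvm_tlm_fifo #(RSP) rsps;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    seq_item_export = new("seq_item_export", this);
    reqs = new("reqs", this, 0);  // size 0 means unbounded
    rsps = new("rsps", this, 0);
  endfunction

  // get_next_item(), item_done(), etc. implemented in terms of the FIFOs
endclass
```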

The sequencer methods (get_next_item(), item_done(...), etc.) operate on these FIFOs to emulate the behavior of a real sequencer. For example, get_next_item() peeks inside the request FIFO:

task sequencer_stub::get_next_item(output REQ t);
  reqs.peek(t);
endtask

The item_done(...) function pops a request from the FIFO, because it's been handled:
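A sketch of it:

```systemverilog
// Pop the handled request; if the driver passed back a response,
// queue it so the test can inspect it later.
function void sequencer_stub::item_done(RSP item = null);
  REQ t;
  void'(reqs.try_get(t));
  if (item != null)
    void'(rsps.try_put(item));
endfunction
```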

Last, but not least, we'll need to ensure that all unit tests start from the same state. This means that there aren't any items queued from previous unit tests and that the diagnostic information has been cleared. A flush() method (similar to uvm_tlm_fifo::flush()) should be called in teardown() to enforce this rule:
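A sketch of flush(), assuming the diagnostic information is kept in counters (the counter name is hypothetical):

```systemverilog
// Empty both FIFOs and reset the diagnostic state so that each unit
// test starts from a clean slate.
function void sequencer_stub::flush();
  reqs.flush();
  rsps.flush();
  item_done_calls = 0;  // hypothetical diagnostic counter
endfunction
```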

We could also do more funky stuff, like testing that we only get a certain number of calls to item_done(...) in a certain time window. Let's skip this for now to keep the post short.

I've uploaded the code for the sequencer_stub to GitHub under the name vgm_svunit_utils. I'll use this as an incubation area for additions to SVUnit. These could eventually get integrated into the main library if deemed to be worthy and useful to others.

You can also find the example code for this post here. I hope it inspired you to give unit testing a try!

About

I am a Verification Engineer at Infineon Technologies, where I get the chance to work with both e and SystemVerilog.
I started the Verification Gentleman blog to store solutions to small (and big) problems I've faced in my day to day work. I want to share them with the community in the hope that they may be useful to someone else.