Verification Gentleman

Sunday, June 25, 2017

After a pretty long absence, it’s finally time to complete the series on unit testing interface UVCs. I meant to write this post in October/November 2016. While writing the code, I got bogged down by a simulator bug and tried to find an elegant workaround, but failed. I got frustrated and shelved the work for a while. In the meantime I got caught up with technical reading and with taking online courses. I’ve also been pretty busy at work, putting in quite a bit of overtime, which left me without much energy to do anything blog related. Well, enough excuses, it’s time to get to it…

Aside from monitoring and driving signals, an interface UVC is also responsible for checking that the DUT conforms to the protocol specification. This is typically done with SystemVerilog assertions, which provide a compact and powerful syntax for describing temporal behavior at the signal level. I already wrote a bit about unit testing SVAs using SVAUnit, a library from our friends at AMIQ Consulting.

I’ve since had a bit of an epiphany while working on an interface UVC for a new proprietary on-chip interface. On a previous module, I needed to write some bigger SVA properties of the type “when register X is written via AHB, then Y happens”. The AHB UVC I was using (also written by me a while back) only had assertions embedded in a checker; it didn’t export any SVA sequences for users to combine into their own properties. I ended up doing a bit of clipboard-based inheritance and defining the needed sequences in my module SVA checker. I had a similar problem when I was trying to write some simple formal properties for a different module, where I also just copied (gasp!) and patched the needed code. Now, AHB isn’t such a complicated and/or dynamic protocol, but my actions were in direct violation of the DRY principle. For the new UVC I was developing, I decided not to make the same mistake and to build a nice hierarchy of SVA sequences, properties and assertions. These would form part of the UVC API, sort of an SVA API which users could “call” in their own code. As with any important part of the exported API, such members have to be unit tested.

Let’s look at some simple AHB SVA constructs. Since this protocol is so popular, I assume that most of you are already acquainted with it, and for those of you who aren’t I’ll try to keep things simple, but I won’t explain any protocol details. Since the spec isn’t open, I can’t link it here, but a quick search should net you some useful resources. Nevertheless, I’m pretty sure you’ll be able to follow the post without becoming an expert in the protocol.

One of the first non-trivial protocol rules for AHB is that “[when] the slave is requesting wait states, the master must not change the transfer type […]”. Let’s write a property for this with the following signature:

property trans_held_until_ready(HTRANS, HREADY);
  // ...
endproperty
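As a sketch, one possible formulation of the body could look like this (this is an illustration of the rule, not necessarily the exact code from the accompanying examples):

```systemverilog
// While the slave inserts wait states (HREADY low), the master
// must keep the transfer type stable on the next cycle.
property trans_held_until_ready(HTRANS, HREADY);
  !HREADY |=> $stable(HTRANS);
endproperty
```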

In the previous post we saw how we can use the expect statement to check that signal toggles happen as desired. The unit test supplied the property, while the code being tested handled the signals. To check a property, we can reverse the two roles and have the test supply the signal toggles, while the code under test is exactly our property of interest. The same `FAIL_UNLESS_PROP(…) macro can help us check if the property passes for a legal trace:

I’ve omitted the definitions of the signals, which are bundled in a clocking block called cb. Declaring this clocking block as default also allows us to use the cycle delay operator, ##n, which makes the code a bit more readable.

Just checking that properties pass is rather boring though. Not only that, but a property that doesn’t pass when it should results in a false negative, which is instantly visible in the simulation log. It’s much more valuable that a property fail when it should, because false positives are much more insidious and less likely to be caught. We can do this also with an expect statement, but we’ll need to trigger a fail if the corresponding pass block is triggered. As with `FAIL_UNLESS_PROP(…), we can wrap this check inside a macro:

`define FAIL_IF_PROP_PASS(prop) \
  expect (prop) \
    `FAIL_IF(1)

Having HTRANS change before an occurrence of HREADY should cause our property to fail:
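A sketch of such a test (the SVUnit test names and the stimulus details here are illustrative):

```systemverilog
`SVTEST(fails_when_trans_changes_during_wait_states)
  fork
    begin
      cb.HREADY <= 0;
      cb.HTRANS <= NONSEQ;
      ##1 cb.HTRANS <= SEQ;  // illegal: type changes while stalled
    end
  join_none

  `FAIL_IF_PROP_PASS(trans_held_until_ready(cb.HTRANS, cb.HREADY))
`SVTEST_END
```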

Those of you who’ve worked with AHB before might raise an eyebrow looking at that code. What if, for example, HTRANS comes together with HREADY and changes in the following cycle? The property shouldn’t fail, as the first transfer was accepted and a new one can begin. We can add a test for this exact situation:

Some of you might argue that when HTRANS comes at the same time as HREADY, we’re not really “holding” anything. It could be argued that this is a special case and what we’re really interested in for this property are the cases where we actually see some wait states. We could exclude the instant grant case by tweaking the antecedent in such a way that it doesn’t match. This would lead to a vacuous pass of the property. A vacuous pass means that we haven’t really checked anything, because we didn’t particularly care what happened in that situation. Vacuous passes aren’t usually shown in the assertion statistics, so we could “misuse” the number of times an assertion of this property passes as coverage for how many times we’ve seen (correctly) stalled transfers.

A vacuous pass is still a pass though and as per the LRM it should also trigger the execution of an assert/expect statement’s pass block. Some tools don’t work like this, though, choosing instead to not execute pass blocks on vacuous successes (unless the user explicitly enables this, maybe via some command line switch or simulator setting). We can use this to our advantage to distinguish between a “real” pass and a vacuous pass. What we have then, is a sort of ternary logic, where a property can result in one of the following:

(nonvacuous) pass, where the pass block is executed

fail, where the fail block is executed

vacuous pass, where neither block is executed

Note that there’s no concept of a vacuous fail. Something either works, doesn’t work, or isn’t “important”.

Even if a simulator does execute pass blocks for vacuous successes, this behavior can either be turned off via a switch or, in a more portable fashion, via the $assertcontrol(…) system task (if it’s supported by the tool). This means that we can rather safely rely on the behavior described in the outcome list to determine vacuity.

As before, we can wrap such a check inside a macro. Its definition is a bit trickier, since we need to check that neither block was executed. We can do this using variables:

If either of the two blocks gets executed, one of the variables will be set and we can issue an error. This code, while deceptively simple, fails to compile in some simulators, with them complaining that they can’t find the definition of pass_called inside the pass block (and, of course, the same for fail_called). This is the part where I got bogged down trying to find a suitable workaround. The only way I could get this to work was to define the *_called variables inside a package and use the scope operator to reference them in the pass/fail blocks:

package vgm_svunit_utils_sva;
  bit pass_called;
  bit fail_called;
endpackage

This seems rather crazy, because not only does it require a user to include the file with the macro definition, but to also compile the extra support package. It’s a bit much for just a couple of measly variables, but it’s either this or nothing…

Since we’re going to rely on global variables with a persistent lifetime, we’ll need to make sure to set them to 0 before executing the expect:
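Putting it all together, a sketch of such a macro could look like this (the name and the exact formatting are illustrative):

```systemverilog
`define FAIL_UNLESS_PROP_VACUOUS(prop) \
  vgm_svunit_utils_sva::pass_called = 0; \
  vgm_svunit_utils_sva::fail_called = 0; \
  expect (prop) \
    vgm_svunit_utils_sva::pass_called = 1; \
  else \
    vgm_svunit_utils_sva::fail_called = 1; \
  `FAIL_IF(vgm_svunit_utils_sva::pass_called) \
  `FAIL_IF(vgm_svunit_utils_sva::fail_called)
```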

Let’s take a step back now. Remember that for the `FAIL_IF_PROP_PASS(…) macro we were checking whether the pass block is executed and, if it was, we issued an error. This doesn’t fit into the whole “ternary logic” scheme we discussed above when talking about vacuity. If we only did this, we wouldn’t be able to distinguish a fail from a vacuous pass. The macro name is also kind of misleading. What do we want here? Do we want the property to fail? Do we want it to fail or be vacuous, but under no circumstances result in a nonvacuous pass? What I intended was the former, but others could just as well interpret it as the latter.

More explicit macros would better clarify our intent. In this case, what we want is a `FAIL_UNLESS_PROP_FAIL(…) macro, because we are explicitly checking that the property can catch errors. What we should check is that the fail block gets executed:
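A sketch of what this macro could look like, reusing the fail_called flag from the support package (the details are illustrative):

```systemverilog
`define FAIL_UNLESS_PROP_FAIL(prop) \
  vgm_svunit_utils_sva::fail_called = 0; \
  expect (prop) \
  else \
    vgm_svunit_utils_sva::fail_called = 1; \
  `FAIL_UNLESS(vgm_svunit_utils_sva::fail_called)
```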

In some cases, we would want to forbid a property from failing, but it wouldn’t matter whether the pass is vacuous or not. Here we would have a `FAIL_IF_PROP_FAIL(…) macro, which checks that the fail block wasn’t executed. There are six such macros that we can define, a pair for each of the three possible outcomes. We won’t look at their definitions here, but their construction is pretty simple now that we know the behavior of the pass/fail blocks.

It’s time for another realization about our property: the trigger condition is slightly off. Once we assert it, a new evaluation thread is going to be started on each clock cycle where a transfer is stalled. All of these threads run in parallel, perform the same check and end at the very same time, when HREADY finally comes. This isn’t good for performance, especially if we have many AHB interfaces and very long stalls.

The start of an AHB transaction is a pretty interesting event, not only for this property, but potentially for others. A UVC user might be interested in writing their own property that triggers once a transfer has started. We can fix our property and at the same time provide a reusable building block by defining a sequence:

sequence trans_started(HTRANS);
  // ...
endsequence

Just as for properties we wanted to check whether they pass or fail when we want them to, for sequences we want to make sure that they match when they should and don’t match when they shouldn’t. We can check for a sequence match (or lack thereof) by treating it as a property and checking its pass/fail state. Doing the following would look a bit weird, though:

`FAIL_UNLESS_PROP_PASS(trans_started(HTRANS))

The intent of the code becomes a bit muddied: “Are we testing a property? But I thought trans_started(…) was a sequence…”. It would be better to have separate macros for sequence testing:
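A sketch of two such macros, layered on top of the property macros (the names are illustrative):

```systemverilog
`define FAIL_UNLESS_SEQ_PASS(seq) \
  `FAIL_UNLESS_PROP_PASS(seq ##0 1)

`define FAIL_UNLESS_SEQ_FAIL(seq) \
  `FAIL_UNLESS_PROP_FAIL(seq ##0 1)
```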

Using these will make the code a bit clearer. Also, notice the extra ##0 1 fused after the sequence. This is to ensure that we can’t accidentally pass a property as an argument, because the fusion operator will cause a compile error unless what comes before it is a sequence.

Coming back to our trans_started(…) sequence, the first thing we would like it to do is to not match when HTRANS is IDLE:
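A sketch of such a test, assuming a `FAIL_UNLESS_SEQ_FAIL(…)-style macro for checking that a sequence doesn’t match:

```systemverilog
`SVTEST(no_match_when_idle)
  cb.HTRANS <= IDLE;

  `FAIL_UNLESS_SEQ_FAIL(trans_started(cb.HTRANS))
`SVTEST_END
```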

Something still doesn’t feel right, though. What about back-to-back transfers? It’s perfectly legal to start a new transfer immediately after the previous one was granted. In this case, there isn’t any IDLE cycle to use as an anchor. What we can use, however, is the occurrence of HREADY in the previous cycle, which we’ll have to feed to the sequence. Here’s how this could be tested:

If we try to run this in the simulator, though, the test is still going to fail, even though there isn’t anything obviously wrong with our fix. This is because the test contains a very subtle mistake. When the underlying expect statement from the `FAIL_*(…) macro kicks in, in its very first cycle the value returned by $past(HREADY) is 0, because we haven’t actually let it run long enough for there to have been an actual previous cycle. The LRM states that in this case $past(…) returns the default value of the expression passed to it. What we need to do is move the delay operator into the `FAIL_*(…) macro, to allow the expect to sample HREADY first and look for a match of trans_started(…) afterwards:

This way the test does what we intend it to do. We can now instantiate the sequence inside our trans_held_until_ready(…) property to have it trigger at the appropriate times. Since trans_started(…) has already been tested, we don’t have to write too many tests for the property, focusing just on what’s important. Also, by breaking the problem into smaller parts it’s easier to notice what the corner cases might be. As we’ve seen, writing even such a small property can be tricky, so we should make sure that our code works before trusting it to find design bugs.

Regarding the testing of assertions, I’m not saying that this isn’t important as well. A lot of the more complicated assertions we need to write will have to rely on support code (for example when pipelining comes into the mix) and we’re going to want to check that all parts of an assertion – the property, the support code and the connections between them – fit properly together. For protocol assertions and other simple assertions, I favor breaking down into smaller properties and sequences and testing those, not only to make testing easier, but to also provide a set of reusable elements that UVC users can integrate into their own code.

You can find the full code for the examples here and you can also download the SVUnit utils package if you want to start using these techniques for your own code.

There’s still some work to be done regarding unit tests and SVA constructs. For one, we strongly relied on the assumption that vacuous passes don’t trigger action blocks. We could add some code that tests this assumption by doing a trial run of a known vacuous property (e.g. 0 |-> 1). If this isn’t the case, we could try calling the $assertcontrol(…) system task to disable vacuous success execution of pass blocks, if the task is available. Finally, if all else fails, we could inform the user to change the tool invocation to match our required behavior. This plan makes me feel less bad about the extra *_utils_sva package, which we had to use for the workaround with the status variables, because this is where we’d put all of this extra code. I’d also like to see this code merged into SVUnit at some point, but I’m not sure if now is the right time, due to the differences in tool capabilities across simulator vendors.

Now I’d like to conclude this series on unit testing UVCs. The tips in the past few posts should help you develop your UVCs faster and with higher quality, thereby increasing your confidence that they are ready for life in the harsh and unforgiving world of verification.

Monday, August 15, 2016

In the previous post we looked at how we can emulate sequencer/driver communication using a lightweight stub of uvm_sequencer. Let's also look at some more tips and tricks I've picked up while writing unit tests for drivers. To mix things up a bit, let's look at the AXI protocol. We're not going to implement a full featured driver; instead, we'll focus on the write channels:

The tests above cover the desired functionality that we want to implement in our driver. We won't go through each and every one of them and see what production code we need to write, since the actual implementation of the driver isn't really important for this post. We want to focus more on the tests themselves.

When first confronted with these tests, a new developer won't have an easy time understanding what's going on. The tests are pretty verbose and it's not immediately clear what the focus of each one is. Let's see how we could improve this.

First, we'll notice that the one thing we do in each test is to create an item and queue it for the driver. We use randomization to make sure that the item is "consistent", meaning that the constraints defined in it hold. We could set item variables procedurally, but we would need to ensure that the length and the number of transfers match, which would mean even more code. Instead of repeating these steps in each unit test, we could centralize them into one place. Since we use randomization, we can't extract a function. We're forced to use a macro:
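A sketch of what such a macro could look like, assuming the sequencer stub from the previous post exposes an add_item(...) method for queueing (the macro body here is illustrative):

```systemverilog
`define add_item_with(CONSTRAINTS) \
  begin \
    sequence_item item = new("item"); \
    `FAIL_UNLESS(item.randomize() with CONSTRAINTS) \
    sequencer.add_item(item); \
  end
```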

We can even go one step further. If we look at the tests again, we'll notice that we use the same constraint over and over again to get an item without any delay. Also, whenever we want to check some write data channel aspects, we want to constrain the delay of each transfer to be zero. We do have tests where we want non-zero delays, but they are the exceptional cases. What we could do is to add some default values to the delay variables via soft constraints. This way, whenever we use the `add_item_with(...) macro we know that we'll get an item without any delay:
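A sketch of how the delay variables could get their defaults (the field names are illustrative):

```systemverilog
class sequence_item extends uvm_sequence_item;
  rand int unsigned delay;
  rand int unsigned transfer_delays[];

  // Unless a test explicitly constrains them, items come back-to-back
  // and data transfers follow each other without delays.
  constraint default_delays {
    soft delay == 0;
    foreach (transfer_delays[i])
      soft transfer_delays[i] == 0;
  }
endclass
```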

If we compare them with the initial version from the beginning of the post, we'll see that they are much more compact. It's also clearer that the handling of the data variables is what we're testing, not anything related to delays.

Since we want to check the behavior of signals at certain points in time, we need to do a lot of waits. The statement @(posedge clk) comes up a lot in our unit tests. We could shorten this by using a default clocking block:
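For example, with a default clocking block declared in the unit test module, repeated waits collapse into the ##n cycle delay operator:

```systemverilog
default clocking cb @(posedge clk);
endclocking

// instead of '@(posedge clk); @(posedge clk); @(posedge clk);'
##3;
```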

Having fewer words to parse makes the test's intent clearer. Using a default clocking block is an option most of the time, but if you have a more exotic protocol that uses both edges of the clock or multiple clocks, it's not going to work.

One thing you may have noticed is that I've marked some wait statements with comments. If you've read Clean Code (and if you haven't you should), you'll call me out on this. Uncle Bob says that comments are a crutch for poorly written code that doesn't express its intent properly. Instead of relying on comments, we could create a named task:

task wait_addr_phase_ended();
  ##1;
endtask

Now, when a test calls this task, it'll be immediately apparent what the intention is:

There is a mismatch between what the task does and its name. The task actually just waits for one cycle. To reflect this, it should have been named wait_cycle(). A task call like this would take us back to square one in terms of test readability. We may as well just use the ##1 statement, as that tells us the same thing. If we want to solve this mismatch between the task name and its implementation, we should change the latter. In the context of our tests, we knew that the address phase was going to end after one clock cycle. Generally, though, what we want is to wait for AWVALID and AWREADY to be high at the same time:
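A sketch of the reworked task (the exact wait formulation is illustrative):

```systemverilog
task wait_addr_phase_ended();
  // wait for a clock edge on which the address handshake completes
  @(posedge clk iff (AWVALID === 1 && AWREADY === 1));
endtask
```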

Some unit-testing purists might bash this method because it adds complexity to the testing logic. This is something we should try to avoid, since we don't want to have to start testing our tests. The task is lean enough that I'd say this point doesn't apply here.

While we may have streamlined our test preparation code, checking that our expectations are fulfilled is still ridiculously long. Procedural code isn't really well suited to checking behavior over time. You know what is, though? Assertions... Instead of writing a big mess of procedural code full of repeats and cycle delays, we could write a nice property and check that it holds. When people hear the word property, they normally think of concurrent assertions, but this isn't really what we want here. What we want is to check that a certain property holds after a certain point in time, not during the whole simulation.

Luckily, SystemVerilog provides the expect construct, which does exactly what we want. Given a property, it will begin evaluating it starting with the first subsequent clock. For example, to check that the driver can drive data transfers with delays, we could write the following:
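To sketch the idea (the data values here are made up):

```systemverilog
// Two data transfers, the second one delayed by a cycle.
expect (
  WVALID === 1 && WDATA === 32'heeee_eeee
  ##1 WVALID === 0
  ##1 WVALID === 1 && WDATA === 32'hffff_ffff
)
else
  `FAIL_IF(1)
```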

This is much cleaner than the procedural code we had before. It also allows us to structure our unit tests according to the Arrange, Act, Assert pattern (even though most of the time for drivers Arrange and Act get a bit mixed, but at least Assert is separated clearly).

Since expecting that properties hold is something we'll want to do over and over again, let's wrap it in a utility macro and make it part of the vgm_svunit_utils package:

`define FAIL_UNLESS_PROP(prop) \
  expect (prop) \
  else \
    `FAIL_IF(1)

Using this macro will give the unit tests a more consistent SVUnit look and feel:

As we saw in the previous post, it's very important to use the === (4-state equality) operator instead of ==, otherwise we're writing tests that always pass. I intentionally wrote drive_write_addr_channel with a bug to show this (kudos to anyone who noticed). Also, we need to watch out for hidden comparisons. In the last test we didn't even use the equality operator:

Because of the way the `FAIL_* macros are written, they will both always pass, so the test isn't really doing anything. If we were to re-write them using properties, we would notice if WLAST isn't being driven:
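A sketch of the property-based rewrite:

```systemverilog
// mid-burst transfers: WLAST must be driven low
`FAIL_UNLESS_PROP(!WLAST)

// last transfer of the burst: WLAST must be driven high
`FAIL_UNLESS_PROP(WLAST)
```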

If WLAST were to be X, then the negation would also return X, which would be interpreted as a 0, leading to a fail of the property. For single bit signals, using properties is much safer than comparing for equality. There's also the added bonus that the code is more compact. For vectors, though, we still need to make sure that we're using the === operator.

Another cool thing that properties allow us to do is to focus more on the aspects that are important for a test. For example, in the data_held_until_ready test we want to check that the driver doesn't modify the value of WDATA until it sees a corresponding WREADY. We did this by choosing known values for the data transfers and checking that these values stay on the bus for the appropriate number of clock cycles. Actually, we don't (or shouldn't) care what data is being driven, as long as it stays constant. We could simplify the code to the following:
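A sketch of such a check using $stable(...) (the exact antecedent is illustrative):

```systemverilog
`FAIL_UNLESS_PROP(
  WVALID && !WREADY |=> $stable(WDATA)
)
```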

We don't really need to use a property to check that the signal stays stable. As per the LRM we could use the $stable(...) system function inside procedural code as well, but some tools don't allow this, limiting its use to assertions.

While it's usually frowned upon to have randomness in unit tests, I would argue that this isn't necessarily a problem here. Randomness should be avoided because it causes us to write complicated code for our expectations. As long as we don't need to do this (i.e. the checking code stays the same regardless of what we plug into it), everything should be fine. There is one caveat, though. On the off chance that two consecutive transfers contain the same data, it won't be possible to figure out whether the driver moved to the second transfer too early. In the extreme case, all transfers could have the same data value, making it impossible to check that the driver really holds transfers constant for their entire durations. This could be solved by writing a constraint that all data values should be unique.

Last, but not least, I hinted in the previous post that a driver not only drives items from the sequencer. It's also supposed to request items from the sequencer at well defined points in time. For example, a driver should call get_next_item(...) once it's able to process a new item, but not before, to allow the running sequence to randomize items at the latest possible time (so called late randomization). This is helpful when sequences use the current state of the system to decide what to do next. For simple protocols this is easy: a new item can start exactly after the previous one has finished. For pipelined protocols, though, it's not as easy. The AXI protocol is massively pipelined and can have a lot of ongoing transactions at any time. I don't want to have to think what a sensible definition for an available slot for a new item would be, because the scheme would probably be too complicated. I do however want to show how we could verify that a driver calls sequencer methods at defined times.

We'll take a contrived example of how to handle responses, since I couldn't think of anything better. Whether we're checking that put_response(...) or get_next_item(...) was called when expected doesn't really matter, so this example should be enough to prove a point. Let's say that when a response comes on the write response channel, the driver is supposed to let the sequencer know.

The test is going to pass with this implementation. The problem is, though, that the response is considered to be done only once it has also been accepted, i.e. BREADY goes high. Unfortunately, the previous code sends responses too early. We'll need to update our unit test to check that put_response(...) was called after BREADY was also high and that it was called only once. This is where the test diagnostic information that sequencer_stub provides is going to be useful:

The sequencer_stub class contains a named event put_response_called that, as the name suggests, is triggered when put_response(...) is called by the driver. The `FAIL_UNLESS_TRIGGERED(...) macro is part of the vgm_svunit_utils package. It wraps the code required to check that an event was triggered in the current time step:
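A sketch of how the macro could be built (the exact implementation is illustrative):

```systemverilog
`define FAIL_UNLESS_TRIGGERED(ev) \
  fork \
    wait (ev.triggered); \
    begin \
      // if time advances without the event firing, flag an error \
      #1 `FAIL_IF(1) \
    end \
  join_any \
  disable fork;
```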

The wait statement checks if the event was already triggered (because the driver code could have executed first) and if it wasn't, it blocks. If time moves forward and the event didn't get triggered, an error message is triggered. Since the test will now fail, we'll need to update the driver code (not shown).

We've covered a lot of ground in this post on how to write more comprehensive, readable and maintainable unit tests for UVM drivers. You can find the example code here. I hope this helps you be more productive when developing your own drivers.

Sunday, August 7, 2016

It's that time again when I've started a new project at work. Since we're going to be using some new proprietary interfaces in this chip, this calls for some new UVCs. I wouldn't even consider developing a new UVC without setting up a unit testing environment for it first. Since this is a greenfield project, a lot of the specifications are volatile, meaning that the interface protocol can change at any moment. Having tests in place can help make sure that I don't miss anything. Even if the specification stays the same, I might decide to restructure the code and I want to be certain that it still works.

I first started with unit testing about two years ago, while developing some other interface UVCs. I've learned a few things throughout this time and I'd like to share some of the techniques I've used. In this post we'll look at how to test UVM drivers.

A driver is supposed to take a transaction (called a sequence item in UVM lingo) and convert it into signal toggles. Testing a driver is conceptually pretty straightforward: we supply it with an item and we check that the toggles it produces are correct. As an example, we'll take the Wishbone protocol, revision B.3. Our sequence item models the properties of an access:
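A sketch of what such an item could contain (the field names and the direction enum are illustrative):

```systemverilog
class sequence_item extends uvm_sequence_item;
  rand bit [31:0] address;
  rand bit [31:0] data;
  rand direction_e direction;   // READ or WRITE (assumed enum)
  rand int unsigned delay;      // idle cycles before starting the cycle

  `uvm_object_utils(sequence_item)
  // constructor and constraints omitted
endclass
```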

Let's look at how to supply a driver with an item. A driver is an active component that asks for items at its own pace. Inside an agent, it's connected to a sequencer, which feeds it items as they become available. Inside our unit test, we need to emulate the same relationship by having a test double which the driver can interrogate. There's nothing stopping us from using uvm_sequencer itself:

The first test we want to write is that when the driver gets an item with no delay, it drives CYC_O and STB_O immediately. We first create an item and we start it on the sequencer using execute_item(...). This models a `uvm_send(...) action inside a sequence:

Since execute_item(...) blocks until the driver finishes processing the item, we'll need to fork it out to be able to check what the driver does with it.

After supplying the driver with the item, we need to check that it drives the appropriate signal values. Once it gets an item, we expect the driver to start driving CYC_O and STB_O and for their values to be valid on the next posedge. We have to check one clock cycle at a time. For example, if we were to drive an item with a delay of three cycles, the unit test would look like this:
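A sketch of the checking part of such a test (the exact checks are illustrative):

```systemverilog
// three idle cycles first...
repeat (3) begin
  @(posedge clk);
  `FAIL_IF(CYC_O !== 0)
  `FAIL_IF(STB_O !== 0)
end

// ...then the transfer starts
@(posedge clk);
`FAIL_IF(CYC_O !== 1)
`FAIL_IF(STB_O !== 1)
```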

It's not enough to just skip the first three clock cycles. We need to ensure that the driver signals idle cycles during that time. Also, notice the use of the === operator (4-state equality). If we were to use the == operator instead, we would get false positives if the driver doesn't drive any of the signals. This is because X (unknown value due to not being driven) matches anything.

We could write a few more unit tests for our driver. For example, a test could check that a read transfer is properly driven:

Notice that in the last two tests we didn't check CYC_O and STB_O anymore. This is because we already checked that they get driven when sending an item. When we write the unit tests, we don't write them in isolation from the production code. They evolve together. To save effort and make the tests more readable, we can focus on certain aspects of the class we want to test.

The implementation of the drive() task would look something like this:
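A sketch (the signal and field names are illustrative):

```systemverilog
task drive();
  forever begin
    seq_item_port.get_next_item(item);
    repeat (item.delay) begin
      CYC_O <= 0;  // signal idle cycles during the delay
      STB_O <= 0;
      @(posedge clk);
    end
    CYC_O <= 1;
    STB_O <= 1;
    WE_O  <= (item.direction == WRITE);
    ADR_O <= item.address;
    @(posedge clk);
    seq_item_port.item_done();
  end
endtask
```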

We can see that we drive WE_O and ADR_O at the same time we write CYC_O and STB_O. It wouldn't bring us much if we, for example, exhaustively checked that the address can be driven with different delays.

Up to now we only tested that the driver properly reacts to requests from the sequencer. Most of the time, the driver also has to react to other events triggered by its partner on the bus. In our case, since we're developing a master driver, it needs to be sensitive to toggles on signals driven by the slave. One such requirement is that a master is supposed to keep the control signals stable until the slave acknowledges the transfer. This is signaled by raising the ACK_I signal.

We need to write a test where, aside from executing an item on the sequencer, we also model the behavior of the slave:

We basically model all collaborators of the driver, where some might communicate with it via method calls (like the sequencer) and some might communicate with it via signal toggles on the interface (like a connected slave).

This last test would suggest that we should update the drive() task with the following code:
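The line in question would be a wait for the slave's acknowledge before finishing the item (a sketch):

```systemverilog
wait (ACK_I === 1);
```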

While this is what we would need to do, the test still passes without this line. This is because, once the driver starts driving an item, it won't touch the signals anymore until it gets another item. The extra wait statement would cause the driver to mark an item as finished at a later time. This hints that it isn't enough to just check signal toggles. It's also important that the driver makes calls to item_done() at the right time. This isn't something we can easily check with uvm_sequencer. We'd need to capture information about method calls from the driver to the sequencer.

We could choose to implement such extra testing functionality in a sub-class of uvm_sequencer. We could capture the number of times item_done() (or any other method) got called during a test. This wouldn't be a problem to implement, but have you ever taken a look at uvm_sequencer? That class is massive. The functionality is spread across two of its base classes, uvm_sequencer_base and uvm_sequencer_param_base. Most of the stuff a sequencer does (prioritization, locking, etc.) we don't even need. Debugging anything would be a nightmare. Interrupting an item in the middle of it being driven (due to a unit test finishing early) is also going to cause fatal errors to be issued (e.g. "get_next_item() called twice without a call to item_done() in between").

A better alternative would be to start from scratch with a lightweight class that mimics the uvm_sequencer functionality we need and provides us with test diagnostic information. When talking about test doubles, I like to use the same terminology as outlined in this article. I can't really decide whether what we want to create should be called a fake (since we'll implement working functionality, but only a limited part of what uvm_sequencer can do) or a stub (since we want to collect information about method calls). I chose 'stub', for now, based on gut feeling.

Our stub has to be parameterizable, just like uvm_sequencer, to be able to interact with any driver. Just like a sequencer, it's going to have a seq_item_export for the driver to connect to:
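A sketch of the stub's skeleton, using UVM's pull implementation port (the FIFO members are illustrative):

```systemverilog
class sequencer_stub #(type REQ = uvm_sequence_item, type RSP = REQ)
    extends uvm_component;
  uvm_seq_item_pull_imp #(REQ, RSP, sequencer_stub #(REQ, RSP))
    seq_item_export;

  // internal FIFOs that emulate the sequencer's request/response flow
  protected uvm_tlm_fifo #(REQ) reqs;
  protected uvm_tlm_fifo #(RSP) rsps;

  // ...
endclass
```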

Most importantly, the sequencer methods (get_next_item(), item_done(...), etc.) operate on these FIFOs to emulate the behavior of a real sequencer. For example, get_next_item() peeks inside the request FIFO:

task sequencer_stub::get_next_item(output REQ t);
  reqs.peek(t);
endtask

The item_done(...) function pops a request from the FIFO, because it's been handled:
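A sketch of its implementation (the diagnostic counter is illustrative):

```systemverilog
function void sequencer_stub::item_done(RSP item = null);
  REQ t;
  void'(reqs.try_get(t));  // pop the handled request
  if (item != null)
    void'(rsps.try_put(item));
  item_done_calls++;  // diagnostic info for the unit tests
endfunction
```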

Last, but not least, we'll need to ensure that all unit tests start from the same state. This means that there aren't any items queued from previous unit tests and that the diagnostic information has been cleared. A flush() method (similar to uvm_tlm_fifo::flush()) should be called in teardown() to enforce this rule:
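A sketch of flush() (the member names are illustrative):

```systemverilog
function void sequencer_stub::flush();
  reqs.flush();
  rsps.flush();
  item_done_calls = 0;  // clear diagnostic info from the previous test
endfunction
```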

We could also do the more funky stuff, like testing that we only get a certain amount of calls to item_done(...) in a certain time window. Let's skip this for now to keep the post short.

I've uploaded the code for the sequencer_stub to GitHub under the name vgm_svunit_utils. I'll use this as an incubation area for additions to SVUnit. These could eventually get integrated into the main library if deemed to be worthy and useful to others.

You can also find the example code for this post here. I hope it inspired you to give unit testing a try!

About

I am a Verification Engineer at Infineon Technologies, where I get the chance to work with both e and SystemVerilog.
I started the Verification Gentleman blog to store solutions to small (and big) problems I've faced in my day to day work. I want to share them with the community in the hope that they may be useful to someone else.