Contributing to SVUnit

Adding Features

If you’re interested in contributing to SVUnit and serious enough that you got to this point, chances are you know what you want to do and you’re not afraid to figure out how to do it! I’m kind of counting on people to take the initiative when it comes to new features or bug fixes to existing features. Of course, I don’t mind fielding how-to questions, etc., so if there’s something you’re not sure of or you’re looking for an opinion, feel free to get hold of me at neil.johnson@agilesoc.com. I’ll help you out as best I can.

As for a starting point, I’d prefer it if you opened a new issue on github. Give people an idea of what you’re going to do, why, etc. For a bug fix this’ll be pretty easy: just explain what’s broken. If you’ve got a new feature in mind, still open an issue and scribble down the gist of what you want. If you feel like you need to describe the new feature or you need a little advice, say so. But add detail at your discretion; don’t feel like you need to go overboard. A second step for new features would be to post something for review on the SVUnit user group. Maybe someone will chime in with a great idea, save you some trouble and/or come up with some other great idea.

And in general, when you’re adding new features, make sure they are…

simple

usable

thoroughly tested

features others will also use

supported by all relevant simulators

Release Flow

I reckon the best approach to contributing is to fork the svunit-code repository, make your changes and create a pull request so I can integrate and release. This seems to be ‘the way’ people do it and it’s worked for SVUnit so far, so no need to reinvent the wheel. Here’s a reference for forking a github repository. When you’re all done with your new feature or bug fix, you’ll also need the creating a pull request reference. I’ll get a pull request notification and take it from there.

At this point it’s probably worth saying that before you get started you’ll need to be savvy with git. Not a pro by any stretch, but you should know it well enough that you’re comfortable forking a repository and making, committing and pushing changes. If you need it, here’s a git reference to get started.
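If it helps to see the whole fork-branch-commit-push cycle in one place, here’s a rough sketch. It runs against a throwaway local repository standing in for your github fork (the repo and branch names are placeholders, not anything from the SVUnit project); with a real fork you’d clone from github and the final push would land there instead.

```shell
set -e
# Throwaway sandbox; with a real fork you'd 'git clone' your github fork instead
tmp=$(mktemp -d) && cd "$tmp"
git init -q upstream
git -C upstream -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "initial"

# 'Fork' by cloning, then do your work on a topic branch
git clone -q upstream fork && cd fork
git checkout -q -b my-bug-fix
echo "the fix" > fix.txt
git add fix.txt
git -c user.email=you@example.com -c user.name=you \
    commit -q -m "Describe the fix"

# Push the topic branch back; with a real fork this goes to github,
# and github then offers to open a pull request from that branch
git push -q origin my-bug-fix
git log --oneline -1
```

Working on a topic branch (rather than master) keeps your pull request focused on a single feature or fix.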

Testing Features

Pretty much any new code added to SVUnit gets tested before it’s released; simple, complex or otherwise. I’ve been burned by untested code in the past, so I’ve made it a habit of writing a test or two with every new feature. That’s not to say that all the code in the repository has a test, but most of it does at this point, so let’s keep the streak alive!

Running the Testsuite

As of now, the testsuite is pretty brute force. There’s nothing fancy about it. The tests are written in bash and apply the library in a ‘real-life’ mode without any mocking. Maybe we’ll get a little more elegant at some point, but for now it’s done the trick.

The testsuite is in the svunit-code/test directory. To run it…
>cd test
>./run

The ‘run’ script rounds up tests in several different groups and runs them. To be picked up as part of a test group, a new test is a directory that contains its own ‘run’ script and has a name that begins with any of:

frmwrk_: tests that focus on the creation of the verilog framework files like .testrunner, .__testsuite and so on.

sim_: tests that focus on simulator compatibility

svunit_base_: tests that focus on code in the svunit_base library, like does the logging show up properly, do the assertions fire correctly, etc.

example_: tests that run an example (every example in the release package has a corresponding test to make sure it runs as expected)

mock_: tests that focus on any of the mock plugins in svunit_base/*mock

util_: tests that focus on any of the svunit utilities in svunit_base/util

For example, if you’re adding a new verilog function so that users can easily create blibbets in their unit tests, you’re probably going to put your new blibbet-creating utility in svunit_base/util and add a new test called util_blibbet_creater. util_blibbet_creater/run will automatically be run when someone invokes ‘./run’ and it’ll get a pass/fail based on its exit status.
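In case seeing the mechanics helps, the discovery scheme above boils down to something like this. This is a simplified sketch of how a top-level runner could do it, not the actual test/run script, which may differ in its details.

```shell
# Simplified sketch of the test discovery scheme; NOT the real test/run script
run_testsuite() {
  local fails=0 dir
  for dir in frmwrk_* sim_* svunit_base_* example_* mock_* util_*; do
    [ -x "$dir/run" ] || continue          # only directories with a run script count
    if (cd "$dir" && ./run); then
      echo "PASS: $dir"
    else
      echo "FAIL: $dir"
      fails=$((fails + 1))
    fi
  done
  return $fails                            # non-zero if any test failed
}
```

Note that each test’s pass/fail comes straight from its run script’s exit status, which is why getting the exit code right matters so much in the next section.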

After you’ve run the testsuite, you can clean up all the generated output by:

>./clean

The ‘clean’ script dives into all the test directories and does away with everything that shouldn’t be committed to the repository.

Adding a New Test

Which brings us to how to write a new test. If you have a new test directory called util_blibbet_creater, you’ll also need an executable script in there called ‘run’. At a minimum, run will need to ‘exit 0’ because a 0 return code means a passing test. If you have a failing test, you need to exit 1 (or any other non-0 return code).
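A bare-bones run script might look like the following; blibbet is, of course, the made-up example from above, and the pass/fail check is a placeholder you’d replace with something real. The sketch builds the script in a sandbox directory and then tries it out the way the top-level run script would.

```shell
tmp=$(mktemp -d) && cd "$tmp"            # sandbox so nothing pollutes the repo
mkdir -p util_blibbet_creater

cat > util_blibbet_creater/run <<'EOF'
#!/bin/sh
# ... invoke the feature under test and inspect its output here ...
if true; then   # replace 'true' with a real pass/fail check
  exit 0        # 0 return code == passing test
else
  exit 1        # any non-0 return code == failing test
fi
EOF
chmod +x util_blibbet_creater/run

# Try it out the same way the top-level run script would
(cd util_blibbet_creater && ./run) && echo "util_blibbet_creater: pass"
```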

As for what’s in your run script, there’s a fair amount of leeway. A lot of the tests rely on test/test_functions.bsh. Sourcing that gives you a bunch of useful functions for checking logfiles for pass/fail status, comparing golden and actual output, running examples and stuff like that. For now, it’s best if you take a look at a few <test>/run scripts in the test group you’re adding to and follow the pattern that’s there. Especially before adding your first new test, take some time to browse what’s there and get a feel for how we’ve been testing so far. Most importantly though, make sure that whatever you’re doing is fairly rigorous. If you need to bend the ‘rules’ in the name of rigour, so be it.

The last requirement of a new test is to also have a ‘clean’ script that rm’s any output created by the test that shouldn’t be committed to the repository.
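As a sketch, the matching clean script can be as simple as an rm of whatever the run script generates. The file names below are hypothetical; list your test’s actual outputs.

```shell
tmp=$(mktemp -d) && cd "$tmp"            # sandbox so nothing pollutes the repo
mkdir -p util_blibbet_creater

cat > util_blibbet_creater/clean <<'EOF'
#!/bin/sh
# hypothetical generated outputs; list whatever your run script leaves behind
rm -rf run.log transcript obj_dir
EOF
chmod +x util_blibbet_creater/clean

# Fake some generated output, then make sure clean does away with it
touch util_blibbet_creater/run.log
(cd util_blibbet_creater && ./clean)
ls util_blibbet_creater                  # only the clean script should remain
```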

Integration and Release

When you fork the repo and run the testsuite in a clone, you should see all the tests pass. After you’re done adding/fixing features, you should still see all the tests pass :). A passing testsuite means you’re ready for release. Refer to creating a pull request at this point to get in line. Once you create a pull request, I get notified. I’ll take a look through what you’ve got, do a sanity check, run the testsuite, check for any necessary updates to the script usage, etc. If all is well, I’ll integrate with the trunk under a new release number and close the corresponding issue. If it’s a cool new feature that people have been waiting for, I might do a blog post to let people know what’s happening.