Blueshell is a toolkit for building networks on chip using Bluespec System Verilog and various Xilinx components. The purpose of Blueshell is to enable experiments with memory architectures at the system level.

The project currently contains a large number of files. Please see the Blueshell SVN Structure page for more information on the various components.

Two Components

Bluetiles is a Manhattan grid mesh network. It is intended to be used for communication between CPUs, co-processors and I/O devices. Each device on a Bluetiles network is identified by its location (X, Y). Each device may provide multiple services, which are identified by port numbers.
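As an illustration of (X, Y) addressing on such a grid, dimension-ordered (X-then-Y) routing can be sketched as below. This is an assumption for illustration only; the function names are invented here and the actual Bluetiles routing policy may differ.

```python
# Sketch of dimension-ordered (X-then-Y) routing on a Manhattan grid mesh.
# Assumed for illustration; not necessarily the Bluetiles routing policy.

def next_hop(current, destination):
    """Return the next (x, y) tile on an X-then-Y route."""
    cx, cy = current
    dx, dy = destination
    if cx != dx:  # first correct the X coordinate
        return (cx + (1 if dx > cx else -1), cy)
    if cy != dy:  # then correct the Y coordinate
        return (cx, cy + (1 if dy > cy else -1))
    return current  # arrived: deliver to the local device's port

def route(src, dst):
    """Full list of tiles visited from src to dst, inclusive."""
    path = [src]
    while path[-1] != dst:
        path.append(next_hop(path[-1], dst))
    return path
```

Under this scheme a packet from (0, 0) to (2, 1) travels east twice, then north once; the destination port number would be carried in the packet header and only examined at the final tile.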

Bluetree (previously called XPortMC) is a tree network. It is intended to be used for access to a shared memory. The memory appears at the root of the tree, the leaves are CPUs and co-processors, and non-leaf, non-root nodes may be multiplexers or intelligent components.
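One common way such a tree can steer memory responses back to the correct leaf is for each multiplexer to tag a request with the child index it arrived on; the root then unwinds those tags on the way down. The sketch below illustrates that idea only: the field name "path" and both functions are invented here, not Bluetree's actual protocol.

```python
# Illustrative sketch of response steering in a memory tree.
# Requests flow leaf -> root; each mux prepends the child index it saw.
# Responses flow root -> leaf; each mux pops one index to pick a child.
# The "path" field and these helpers are assumptions, not Bluetree's API.

def tag_request(child_index, request):
    """On the way up, prepend the child index so the reply can be routed."""
    return {**request, "path": [child_index] + request.get("path", [])}

def route_response(response):
    """On the way down, pop one index to choose the child subtree."""
    child = response["path"][0]
    return child, {**response, "path": response["path"][1:]}
```

For example, a request that entered its first mux on child 1 and the root mux on child 0 carries path [0, 1]; the response pops 0 at the root, then 1 at the lower mux, arriving back at the originating leaf.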

Implementations

There are a number of Bluetiles components (filenames: Tile*.bsv) and Bluetree components (filenames: Bluetree*.bsv) found within SVN. Some components are implemented in pure BSV. Others rely on Xilinx IP and therefore mix BSV with VHDL/Verilog and XPS projects. See the Bluetiles and Bluetree pages for details of the various components.

General advice on building Blueshell components

The files in each directory are built by first running the "compile" script (which builds the BSV code) and then running the "rebuild" script (which runs the Xilinx tools).

Before you can run either of these commands in any directory, you must run the "setup_dir" script in the "blueshell" directory.
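Putting these steps together, a typical build session might look like the following. The component directory name is only a placeholder; the script invocations are as described above.

```shell
cd blueshell
./setup_dir          # run once per checkout, before anything else
cd some_component    # placeholder: any Bluetiles/Bluetree component directory
./compile            # build the BSV code
./rebuild            # run the Xilinx tools
```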

Porting to new FPGA boards is difficult because the Xilinx components must be recreated. Generally speaking, BSV and VHDL/Verilog are easily ported, but more complex IP such as MicroBlaze and the various memory controllers involves FPGA-specific configuration which is not portable. Creating these "board support packages" is a dark art.

Pre-6th-generation FPGA boards, e.g. the ML505, are not supported because we depend on AXI buses for external connections. For example, ML505 support would require a Bluetree-to-PLB bridge.

Important Note for 7-series Devices

Larger designs for 7-series devices (e.g. the VC707) typically fail timing under the standard ISE/XST flow. Despite all attempts, we could not make these designs route properly with ISE/XST; the Vivado flow gives much better results. Unfortunately, Vivado 2012.4 (which is semi-recommended for use with ISE 14.4) does not infer BlockRAMs from Bluespec correctly; this is fixed in Vivado 2013.2. Remember: Vivado only supports 7-series devices!

If larger 7-series-based designs are required, Vivado 2013.2 is the recommended tool flow. Please bear in mind that this will automatically update the included XPS project to 14.6. This should not be an issue in the future, since we plan to update all projects to 14.6 in the coming months.

The Vivado flow requires a few more careful considerations. At present, Vivado does not support mixed VHDL/Verilog designs that use the work library, so the .vhd files must be modified to add the Verilog component declaration to the top-level VHDL files. See Xilinx AR# 47454 for more details. Additionally, UCF constraints are no longer supported and must be replaced by XDC constraints. This is done automatically for all but the top-level UCF.

This has all been implemented using vc707_4x4, although it is marked as nobuild until Vivado 2013.2 can be installed on all machines. For now, please use this design for guidance when porting designs to Vivado.

If Vivado is not an option, it is possible to over-constrain the failing design. Specify a clock with an over-constrained period (say, 8 ns instead of 10 ns), synthesise (and check that the report meets the original timing!), then feed the resulting files back into a normal synthesis run using SmartGuide. See the "hack_rebuild" flow of the vc707_4x4 project for pointers on this.
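As a sketch, the over-constraint might look like this in the top-level UCF, assuming the real requirement is 10 ns. The net and timegroup names are placeholders; use the names from your own design.

```
# Over-constrain the main clock to 8 ns even though 10 ns is the real target.
# "sys_clk" and the TNM/TIMESPEC names below are placeholders.
NET "sys_clk" TNM_NET = "TN_sys_clk";
TIMESPEC "TS_sys_clk" = PERIOD "TN_sys_clk" 8 ns HIGH 50 %;
```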

Testing

Automatic: the "build_all" script builds bitfiles for each supported board by following the steps listed above. The "test_all" script can then be used to check each bitfile; it requires only that suitable virtual lab keys be placed in the blueshell directory, named atlys.key, ml605.key and vc707.key. On success, it prints the last line of each test log, which should read "That's all folks". These tests form the core of the overnight build system. Once all the tests have completed, you can generate a test report by running "python bluetest/lunix/report.py" from the "blueshell" directory.