Q: Why is 'no blocks' showing up in the Simulink browser?

Q: What 10GbE switch and/or NIC should I buy?

Q: How do I use interchip connections on the BEE2?

A: You must take time-of-flight variations into account. For more information, see the Interchip Interconnects page. NRAO is successfully using interchip connections for their GUPPi design, and Caltech is also using them for GAVRT.

Q: My Platform USB cable has stopped responding, and iMPACT will no longer connect to the cable until I restart the computer.

Q: Where can I find an inexpensive card cage to house my iBOBs?

A: Some have had luck buying old CompactPCI card cages and gutting them to house iBOBs. Vector electronics makes a number of suitable card cages, though it can be a bit tricky figuring out which ones are the right size. Part number CCK220-6U combined with CG1-220/12 card guides provides a suitable minimalist card cage that can then be mounted in a standard 19" rack.

Q: How can I program several iBOBs with one Platform USB or Parallel IV programmer?

A: Since the programming interface is a standard JTAG port, you can daisy-chain multiple iBOBs together. XAPP139 from Xilinx details the connection on page 14; the essentials are that Vref should be connected to only one board, and all TMS and TCK signals should be tied together. The signal labeled TDO on the programmer connects to the TDO pin of the first iBOB; that iBOB's TDI pin connects to the TDO pin of the next iBOB, and so on, until the final iBOB's TDI pin is connected back to the programmer. If all iBOBs are grounded externally, it may be wise to connect only one iBOB to the programmer's ground to avoid a ground loop (although there will still be a ground loop through the ground of the computer controlling the programmer). The maximum TCK frequency will almost certainly have to be reduced in iMPACT to account for the increased latency of the programming chain. So far I have only tested daisy-chaining two iBOBs with one USB programmer.

Q: I'm having trouble getting Impact to run under Linux with a Platform USB cable.

A: This information from Xilinx gives the basics, but it is sparse and difficult to follow. This site gives more details, though with ISE 10.1 most of that hacking was unnecessary. Some snags I came across:

setup_pcusb must be run as root: su to root, then source setup_pcusb.

fxload must be installed on your machine. Xilinx doesn't mention this; presumably it comes standard with RedHat, but it was not installed on Ubuntu.

Setting the XIL_IMPACT_USE_LIBUSB environment variable to 1 is crucial.

Q: How can I use the flexibility of the Xilinx DCM to generate an arbitrary clock from a given reference?

A: I am not an expert on this, but I managed to generate a 250 MHz "simulink" clock (the clock domain of your Simulink design) from a 100 MHz external reference driving the User Clock input on the BEE2. The DCM configuration is done in the XPS_base/system.mhs file. First, compile your design with usr_clk2x selected as the clock source. Then edit the system.mhs to look like this:
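As an illustrative sketch only (the instance and port names here are assumptions, and the multiply/divide ratio is what you would use for 100 MHz in, 250 MHz out; the parameter names come from the EDK dcm_module), a frequency-synthesis stanza might resemble:

```
BEGIN dcm_module
 PARAMETER INSTANCE = dcm_usrclk
 PARAMETER HW_VER = 1.00.a
 PARAMETER C_CLKFX_BUF = TRUE
 PARAMETER C_CLKFX_MULTIPLY = 5
 PARAMETER C_CLKFX_DIVIDE = 2
 PARAMETER C_CLKIN_PERIOD = 10.000000
 PORT CLKIN = usr_clk
 PORT CLKFX = usr_clk2x
 PORT LOCKED = dcm_locked
END
```

Check your generated system.mhs for the actual instance, parameter, and port names before editing; only the CLKFX multiply/divide ratio (5/2 here) sets the output frequency.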

After the modification, it took a few tries to get the system to rebuild the necessary parts (you basically need to run xflow again) without overwriting the changes to the .mhs file.
Note: using the frequency-synthesizer mode seems to make the DCM much more sensitive to variation in the input clock frequency, as might be expected. In this case I found the DCM would lock with a reference of roughly 90-110 MHz.

Q: How do the iBOB LWIP MAC and IP addresses get set?

A: The file XPS_iBOB_base/drivers/xps_lwip/lwipinit.c sets everything up. You can change the IP and MAC there. By default, the lowest two bytes of the IP are equal to the lowest two bytes of the MAC. The lowest 14 bits of the MAC are set with jumpers on the 14 positions of J8, starting from the little dot.

Q: How can I plot/visualize the data coming out of my instrument?

Q: How can I increase or decrease the number of Block-RAMs assigned to the PowerPC?

A: You can change the total size of the memory block allocated to the PowerPC by editing the plb_bram_if_cntlr line in ~design/XPS_iBOB_base/system.mhs. The size must be a power of 2 (e.g., 16k, 32k, 64k). To decrease BRAM usage from 64k (the default) to 32k, change:
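As a hedged illustration (the address values are assumptions; check your own system.mhs): the PowerPC boot BRAM normally sits at the top of the address map, so halving the region means raising the base address while the high address stays put.

```
# before: 64k region
 PARAMETER C_BASEADDR = 0xffff0000
 PARAMETER C_HIGHADDR = 0xffffffff

# after: 32k region
 PARAMETER C_BASEADDR = 0xffff8000
 PARAMETER C_HIGHADDR = 0xffffffff
```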

Then you need to recompile your design. You can do this in Linux by running xps -nw system.xmp from your shell, and then run init_bram at the XPS prompt. You can also recompile in Windows from the XPS GUI, but you should be careful not to rebuild system.mhs.

You must also edit the LinkerScript in ~design/XPS_iBOB_base/Software to reflect the size of the new memory region.
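For example (the region name and addresses below are illustrative; match them to whatever your LinkerScript already defines), the MEMORY entry must shrink to agree with the new 32k region:

```
MEMORY
{
   plb_bram_if_cntlr_1 : ORIGIN = 0xffff8000, LENGTH = 0x8000
}
```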

This problem can also occur in designs with additional code (LWIP takes up a significant amount of space, leaving less room for custom functions). In this case, try editing the system.xmp file in XPS_iBOB_base: find the line CompilerOptLevel: 2 and change it to CompilerOptLevel: 4, which optimizes for size rather than speed.

Q: Why didn't adding latency help my timing closure?

A: If you used a Delay block to add latency, turn on the Enable Register Retiming option. This turns your delay line into a string of registers, which gives the placer the flexibility to spread the delay physically across the chip. By default, a 16-element shift register is packed into a single slice, which does very little to help a signal cross the chip. But use this technique sparingly: if you're just using a Delay block to synchronize signals, leaving the delay line as a shift register greatly reduces resource usage.

If you are having trouble getting an Adder/Subtractor, Convert, or Multiplier block to meet timing, make sure the Pipeline to Greatest Extent Possible option (under Show Implementation Parameters) is enabled. This builds the latency into the operation itself. Without it, the block still tries to complete the operation in a single cycle and then tacks all the latency onto the end.

Q: Why am I getting a brefclk error when building a BEE2 design with 10GbE?

A: On the BEE2, it is standard practice to stop the toolflow and manually trim unused clock nets. Which nets need removing depends on which serdes ports are used.

To get the 10GbE block to compile on the BEE2, you have to stop bee_xps after step 4 (Copy base package) and edit the system.ucf in XPS_BEE2_usr_base/data to remove the references to the unused clock nets. Then, continue with steps 5-9 of bee_xps.

Depending upon which XAUI ports are used, you will have to trim the references to brefclk_top_p and brefclk_top_m or brefclk_bottom_p and brefclk_bottom_m. The error messages will indicate whether top or bottom needs excision.

More specifically, adapted from advice from Jason:
After compiling up to EDK/ISE/Bitgen, open XPS_BEE2_usr_base/data/system.ucf and comment the four lines near the top that mention brefclk_top or brefclk_bottom depending on which is giving an error message. Save the UCF file and run the EDK/ISE/Bitgen backend. From BEE_XPS, unselect all check boxes except Update Design and EDK/ISE/Bitgen.

Q: Why do I get strange results when accessing Block-RAMs, SW Registers, or FIFOs from BORPH in ASCII mode?

A: Accessing Block-RAMs, SW Registers, and FIFOs from BORPH in ASCII mode is known to be buggy. Avoid ASCII mode if at all possible; instead, write your own interactive wrappers that read and write the shared memory regions in binary mode.

Q: Why do I get permissions errors when I run "make bits" from the XPS prompt?

A: This is typically caused by read-only permissions being carried over from directories in EDK. Use Cygwin or the EDK Shell to change the permissions of the EDK/sw/ directory (and all of its subdirectories) to read/write, and also check the permissions of your build directory.

If the permissions already look OK, there might be a more subtle problem. If your Linux build machine is NFS- or SMB-mounting a share on a Windows file server, the Linux permissions might not accurately reflect the permissions on the Windows server. From the Windows machine, make sure that you have "Full Control" of the build directory, and then try building again.