This paper presents the implementation and evaluation of a computer vision application on a Field Programmable Gate Array (FPGA). It builds upon previous work in which the feasibility of application-specific image processing algorithms on an FPGA platform was evaluated experimentally. This work covers the development of a BLOB detection system in Verilog on an Altera Development and Education II (DE2) board with a Cyclone II FPGA. The system detects binary spatially extended objects (BLOBs) in image material and computes their center points. Two methods, Bounding Box and Center-of-Mass, have been applied for estimating the center points of the BLOBs. The results are transmitted via a serial interface to a PC for validation against ground truth and further processing. The evaluation compares precision and performance gains depending on the applied computation method.
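The two center-point estimators can be illustrated with a short sketch, shown here in Python with NumPy rather than in the Verilog used on the FPGA; the function names and the binary-mask input format are illustrative assumptions, not part of the original design.

```python
import numpy as np

def bounding_box_center(mask):
    """Center point as the midpoint of the BLOB's axis-aligned bounding box."""
    ys, xs = np.nonzero(mask)  # coordinates of all BLOB pixels
    return ((xs.min() + xs.max()) / 2.0, (ys.min() + ys.max()) / 2.0)

def center_of_mass(mask):
    """Center point as the mean coordinate of all BLOB pixels."""
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean())

# For an asymmetric (e.g. L-shaped) BLOB the two estimates differ:
blob = np.zeros((4, 4), dtype=bool)
blob[0, 0:3] = True   # three pixels in the top row
blob[1, 0] = True     # one pixel below the left end
print(bounding_box_center(blob))  # (1.0, 0.5)
print(center_of_mass(blob))       # (0.75, 0.25)
```

The Center-of-Mass estimate weights every BLOB pixel equally, while the bounding-box midpoint is determined entirely by the extreme pixels, which is one source of the precision differences the evaluation compares.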

Having multiple talkers on a bus system increases the load on that bus. To monitor the communication on a bus, tools that constantly read the bus are needed. This report presents an implementation of a monitoring system for the CAN bus on the Altera DE2 development board. The Biomedical Institute of the University of New Brunswick is currently developing, together with several partners, a prosthetic limb device, the UNB hand. Communication within this device runs over two CAN buses operating at a bit rate of 1 Mbit/s. The monitoring system has been designed entirely in Verilog HDL. It monitors the CAN bus in real time and allows the load of individual modules as well as the overall bus load to be observed. The calculated data is displayed on the built-in LCD and also transmitted via UART to a PC; a sample receiver programmed in C is also provided. The system has been evaluated using the Microchip CAN Bus Analyzer Tool, connected to the GPIO port of the development board, to simulate CAN communication.
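The load figures mentioned above amount to counting occupied bit times within a measurement window. A minimal sketch of that calculation follows, in Python rather than the system's Verilog; the frame-length formula assumes standard data frames and ignores stuff bits and interframe space, and the function names are illustrative assumptions.

```python
def standard_frame_bits(dlc):
    """Approximate length of a standard CAN data frame in bits:
    SOF(1) + ID(11) + RTR(1) + IDE(1) + r0(1) + DLC(4) + data(8*dlc)
    + CRC(15) + CRC delim.(1) + ACK slot/delim.(2) + EOF(7) = 44 + 8*dlc,
    excluding stuff bits and the interframe space."""
    return 44 + 8 * dlc

def bus_load(dlcs, bitrate_bps, window_s):
    """Fraction of available bit times occupied by the frames observed
    in a measurement window (dlcs: data length codes of those frames)."""
    return sum(standard_frame_bits(d) for d in dlcs) / (bitrate_bps * window_s)

# 100 full-length frames per second on a 1 Mbit/s bus:
print(bus_load([8] * 100, 1_000_000, 1.0))  # 0.0108, i.e. about 1.1 % load
```

Restricting the summation to frames whose identifiers belong to one module yields the per-module load in the same way.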

Nowadays, Field Programmable Gate Arrays (FPGAs) are used in many fields of research, e.g. to create hardware prototypes or in applications where hardware functionality has to be changed frequently. The Boolean circuits that FPGAs implement are the compiled result of hardware description languages such as Verilog or VHDL. Odin II is a tool that supports developers in the research of FPGA-based applications and in FPGA architecture exploration by providing a framework for compilation and verification. In combination with the tools ABC, T-VPACK and VPR, Odin II is part of a CAD flow that compiles Verilog source code targeting specific hardware resources. This paper describes the development of a graphical user interface as part of Odin II. The goal is to visualize the results of these tools in order to explore how the circuit structure changes during the compilation and optimization processes, which can help in researching new FPGA architectures and in improving the workflow.

This paper presents two approaches to accelerating the MMD algorithm in multi-core environments. The MMD algorithm is a transformation-based algorithm from the field of reversible logic synthesis. It is used to synthesize and optimize reversible circuits, which are an integral part of future technologies such as quantum computers. However, the MMD algorithm is computationally intensive, and accelerating it might produce not only faster but also better results. This paper focuses on two parallel hardware environments, the Cell Broadband Engine and the NVIDIA Tesla architecture. In the course of this project, two different parallel algorithmic approaches have been implemented on both hardware architectures. These implementations have been compared in order to find the best combination of algorithmic approach and matching architecture. Additionally, the question of whether parallel hardware architectures are a means to improve algorithms in the field of reversible logic synthesis has been examined.
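The transformation-based idea behind MMD can be sketched as follows: treat the reversible function as a permutation and greedily apply multiple-control Toffoli gates to its output side until it becomes the identity. The Python below is a simplified illustration of that principle under these assumptions, not the authors' implementation; in particular, its control-line choices are more conservative than those in the MMD paper, so it produces larger circuits.

```python
def apply_gate(f, controls, target):
    """Apply a multiple-control Toffoli gate to every output value of f:
    flip the target bit wherever all control bits are 1."""
    mask = sum(1 << c for c in controls)
    return [v ^ (1 << target) if (v & mask) == mask else v for v in f]

def synthesize(perm):
    """Greedy transformation-based synthesis: for i = 0, 1, ... map
    f(i) to i with Toffoli gates whose controls leave rows < i untouched.
    Returns the gate list as (controls, target) pairs of line indices."""
    n = len(perm).bit_length() - 1   # number of circuit lines
    f, gates = list(perm), []
    for i in range(len(f)):
        for j in range(n):           # set bits that i has but f(i) lacks
            v = f[i]
            if (i >> j) & 1 and not (v >> j) & 1:
                controls = [c for c in range(n) if (v >> c) & 1]
                f = apply_gate(f, controls, j)
                gates.append((controls, j))
        for j in range(n):           # clear bits that f(i) has but i lacks
            v = f[i]
            if (v >> j) & 1 and not (i >> j) & 1:
                controls = [c for c in range(n) if (v >> c) & 1 and c != j]
                f = apply_gate(f, controls, j)
                gates.append((controls, j))
    return gates

# CNOT (control line 1, target line 0) written as a permutation:
print(synthesize([0, 1, 3, 2]))  # [([1], 0)]
```

Applying the returned gates in reverse order to an input value reproduces the original permutation. The MMD algorithm additionally applies heuristics (e.g. a bidirectional variant) to shorten the gate sequence, which is where much of the computational cost, and the parallelization opportunity, arises.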

This contribution describes an optical laser-based user interaction system designed for virtual reality (VR) environments. The project's objective is to realize a 6-DoF user input device for interaction with VR applications running in CAVE-type visualization environments with flat projection walls. In a back-projection VR system, in contrast to conventional optical tracking systems, no camera has to be placed within the visualization environment. Instead, cameras observe patterns of laser beam projections from behind the screens. These patterns are emitted by a hand-held input device. The system is robust with respect to partial occlusion of the laser pattern. An inertial measurement unit is integrated into the device in order to further improve robustness and precision.

The FIVIS simulator system addresses the classical visual and acoustical cues as well as vestibular and further physiological cues. Sensory feedback from skin, muscles, and joints is integrated within this virtual reality visualization environment. This allows otherwise dangerous traffic situations to be simulated in a controlled laboratory environment. The system has been successfully applied in road safety education for school children. In further research studies it is used to perform multimedia perception experiments. It has been shown that visual cues dominate the perception of depth by far in the majority of applications, but that the quality of depth perception may depend on the availability of other sensory information. This, however, needs to be investigated in more detail in the future.

This paper presents the newly founded Institute of Visual Computing at the Bonn-Rhine-Sieg University of Applied Sciences in Sankt Augustin, Germany. The institute's research foci as well as an overview of current projects are part of this presentation.