Introduction

It is becoming increasingly difficult for users to interact with the slew of portable gadgets they carry, especially in the area of text entry. Although miniature displays and keyboards make some portable devices, such as cell phones and PDAs, amazingly small, users’ hands do not shrink accordingly.

To solve this problem, we built a Virtual Keyboard: a device that replaces a physical keypad with a customizable keyboard printed on a standard A3 sheet of paper, whose “keystrokes” are read optically and translated into real input. The virtual keyboard can be placed on any flat surface, such as a desktop, airplane tray table, or kitchen counter, and can in principle be interfaced with any computing device that requires text entry. This eliminates the need to carry a keyboard around and, if the paper is protected by a simple lamination, removes any chance of mechanical damage to the keypad in harsh environments. In addition, the buttons on this device can be reconfigured on the fly to give a new keyboard layout using a GUI we built in Java, which transfers the layout data to the device over a computer’s serial port.

High Level Design

The Virtual Keyboard has three main components: the laser, the camera, and the printed keyboard. The laser is a conventional off-the-shelf red laser with a line-generating diffractive optical element (DOE) attached to it. This assembly generates a thin plane of red light hovering a few millimeters above the typing surface; the plane itself goes unseen until an object intersects it. When a finger passes through this plane, it glows bright red in that region.

Figure 2: C3038 image sensor module mounted on a custom PCB

The CMOS camera continuously captures images of the region where the printed keyboard is supposed to be placed and checks these images for red color data above a specified threshold. The threshold concept works in this case because the laser shining on a typical human finger generates saturating values of red color data, which is very easily distinguishable from its surroundings.

Figure 3: Comparing an actual keyboard with a printed keyboard

Lastly, the printed keyboard is simply a standard A3 size paper containing a custom keyboard layout. After rigorous testing, we decided on a black background with blue letters for the printed keyboard: since our device doesn’t use its own light source, proper contrast is necessary to distinguish the typing finger from the surrounding area under various lighting conditions. The printed keyboard layout is programmed into the device over a serial port using a GUI we developed in Java. This GUI presents the user with a blank grid of buttons, and the user can assign any button to any letter or number he/she desires.

Software Implementation

The software was split into five main components:

Implementing the I2C protocol to read and write registers on the camera

Reading values from the camera to obtain 6 frames per second

Processing the images to detect a pressed key

Converting the pressed key into a scan code, which is then transmitted using the PS/2 protocol

Sending serial data from a Java application to update the array of scan codes on the Mega32

Main Operation

First, we initialize PORTA on the Mega32 to take UV input from the camera and PORTC to communicate with the camera over the I2C interface. The baud rate for serial communication is set to 19,200 bps. We then run the calibration function, which looks at the black keyboard to determine a distinguishable red-color threshold. Next we call a function named "init_cam", which performs a soft reset on the camera before writing the required values to the corresponding camera registers. These registers change the frame size to 176x144, turn on auto white balance, set the frame rate to 6 fps, and set the output format to 16-bit Y/UV mode with Y = G G G G and UV = B R B R. The code then enters an infinite loop that checks the status of the PS/2 transmit queue and tries to process the next captured frame if the queue is empty. If not, the queue is updated and the PS/2 transmission is allowed to continue.

Image Processing

The getRedIndex function captures rows of data from the camera and processes each of them. We first wait for a negative edge on VSYNC, which indicates the arrival of new frame data on the UV and Y lines. We then let one HREF pulse go by, since the first row of data is invalid. At this point, we can clock in 176 pixels of data for a given line in the Bayer format.

Figure 4: Bayer color pattern

In the mode where the UV line receives BR data, the output is given by B11 R22 B13 R24 and so on. Since we only needed red data, we stored an array of 88 values, capturing the data on the UV line every 2 PCLKs. The OV6630 also repeats the same set of pixels for consecutive rows, so two processed lines contain data about the same pixels. We considered optimizing this by dropping the even rows entirely, but this would not have saved us anything, since all of our processing could be done between a negative edge and a positive edge (when data becomes valid again) of HREF.
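As a rough, host-testable sketch of that capture step (the function and buffer names are ours, and a memory buffer stands in for clocking bytes on PCLK), extracting the 88 red samples from one BRBR line looks like:

```c
#include <stdint.h>

#define LINE_PIXELS 176
#define RED_SAMPLES (LINE_PIXELS / 2)

/* Given one line of UV data in B R B R ... order, copy the 88 red
 * samples (every second byte, starting at byte slot 1) into red_out.
 * In the real firmware each byte is clocked in on PCLK rather than
 * read from a buffer. */
void extract_red(const uint8_t uv_line[LINE_PIXELS],
                 uint8_t red_out[RED_SAMPLES])
{
    for (int i = 0; i < RED_SAMPLES; i++)
        red_out[i] = uv_line[2 * i + 1];  /* R sits on odd byte slots */
}
```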

Since we don’t have enough memory to store entire frames of data, we do the processing after each line. After each line of valid data, HREF stays low for about 0.8 ms and the camera data becomes invalid; this gives us ample time to process one line’s worth of data. After each line is captured, we loop through the pixels to check whether each exceeds the red threshold found during calibration. For every pixel that meets this threshold, we then check whether the pixel is part of a contiguous run of red pixels, which indicates a key press. If such a pixel is found, we map it to a scan code by binary-searching an array of (x, y) values. If the scan code is valid, we debounce the key by requiring 4 consecutive detections, and then add the key to the queue of keys to send to the PC.
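The per-line threshold-and-run check can be sketched as follows; the function name, minimum run length, and return convention are illustrative, not the project's actual code:

```c
#include <stdint.h>

/* Scan one line of red samples for a contiguous run of at least
 * min_run pixels above the calibrated threshold. Return the index
 * where the run starts, or -1 if no such run exists. A single
 * bright pixel is rejected as noise; only a run wide enough to be
 * a laser-lit fingertip counts as a key press candidate. */
int find_keypress(const uint8_t *red, int n, uint8_t threshold, int min_run)
{
    int run = 0;
    for (int i = 0; i < n; i++) {
        if (red[i] > threshold) {
            if (++run == min_run)
                return i - min_run + 1;  /* start of the run */
        } else {
            run = 0;
        }
    }
    return -1;
}
```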

I2C Communication

A major part of our challenge was figuring out the correct configuration for capturing and processing images from the camera. The communication protocol was not easy to work with, and there were roughly 92 registers available to set up the camera. At first we considered using the TWI interface provided by CodeVision to communicate with the camera, but we were unable to get it working. We therefore modified and used a version developed by Ruibing Wang, which uses many of the TWI settings provided on the Mega32. The protocol uses a 2-wire communication scheme in which each line is held high by a 10 kOhm pull-up resistor. The clock signal to the camera is provided on the SCL line, and its frequency is given by 16 MHz / (16 + 2 x TWBR x 4^TWPS). We satisfied the minimum requirement by setting the bit rate register (TWBR) to 72 and the status register (TWSR, which holds the prescaler bits) to 0, giving the standard 100 kHz I2C clock. The rest of the code follows the standard protocol defined by Philips. The camera registers are written by issuing a start condition, followed by the camera’s slave address, the target register address, and then the target data. We had no need to read from the camera registers except in the initial phase, when we had to make sure the protocol was working properly.
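The bit-rate arithmetic can be checked directly; this helper is our own illustration, not part of the firmware:

```c
#include <stdint.h>

/* TWI/I2C clock on the Mega32: SCL = F_CPU / (16 + 2*TWBR*4^TWPS).
 * With TWBR = 72 and TWPS = 0 (prescaler 4^0 = 1), a 16 MHz CPU
 * clock gives 16 MHz / (16 + 144) = 100 kHz, the standard-mode
 * I2C rate. */
static uint32_t scl_freq(uint32_t f_cpu, uint8_t twbr, uint8_t twps)
{
    uint32_t prescaler = 1;
    for (uint8_t i = 0; i < twps; i++)
        prescaler *= 4;
    return f_cpu / (16 + 2UL * twbr * prescaler);
}
```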

Camera Settings

We decided to use a resolution of 176x144, since that was the minimum required to detect an entire A3 size paper on which the keyboard would be printed. At this resolution, we could capture at most 6 frames of color images per second. The camera output format was set to capture 16-bit UV/Y data, where UV carried BRBR data and Y carried GGGG data. The Y data was ignored entirely.

Programming the EEPROM

Since we wanted to be able to change the key assignments on the fly, we stored the array of scan codes corresponding to each key in EEPROM and enabled the RS-232 receive interrupt. We also wrote a Java application with a simple GUI where the user can enter the scan codes of the keys they desire and transmit them to the microcontroller through a standard COM port on the PC.
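A minimal sketch of how received bytes might update the scan-code table follows. The two-byte (index, scan code) format and all names here are our assumptions rather than the project's actual serial format, and a plain array stands in for the Mega32's EEPROM:

```c
#include <stdint.h>

#define NUM_KEYS 64  /* illustrative table size */

/* Scan-code table; on the Mega32 this lives in EEPROM, modeled
 * here as a plain array for host testing. */
static uint8_t scan_table[NUM_KEYS];

/* Feed each byte delivered by the RS-232 receive interrupt to this
 * handler: the first byte of a pair selects the key index, the
 * second is the new scan code. Out-of-range indices are ignored. */
void rx_byte(uint8_t b)
{
    static int have_index = 0;
    static uint8_t index;
    if (!have_index) {
        index = b;
        have_index = 1;
    } else {
        if (index < NUM_KEYS)
            scan_table[index] = b;
        have_index = 0;
    }
}
```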

Keyboard Output (PS/2)

The code is structured around two timer-compare interrupts: the timer1 compare starts the transmission of each data byte, and the timer2 compare resets the wait period between bytes. Since the protocol allows a range of frequencies that a computer will understand, we used a clock time of 250 and a wait time of 700. When the timer1 interrupt fires and the clock line is high, it transmits the bits in the following order: start bit (0), data bits, parity bit (odd parity), and stop bit (1); otherwise, the clock state is toggled. The rest of the code maintains a queue holding the characters to transmit; the queue has get and put methods that update two pointers into an array.
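The 11-bit frame assembly can be sketched as a pure function (the name and the packed-word convention are ours); shifting the returned word out LSB-first reproduces the bit order described above:

```c
#include <stdint.h>

/* Build the 11-bit PS/2 frame for one byte: start bit (0), 8 data
 * bits LSB-first, odd parity, stop bit (1). The bits are packed
 * LSB-first into a 16-bit word, so bit 0 is the start bit and
 * bit 10 is the stop bit. */
uint16_t ps2_frame(uint8_t data)
{
    uint8_t ones = 0;
    for (int i = 0; i < 8; i++)
        ones += (data >> i) & 1;
    uint8_t parity = !(ones & 1);   /* odd parity: total count of 1s is odd */
    uint16_t frame = 0;             /* bit 0: start bit = 0 */
    frame |= (uint16_t)data << 1;   /* bits 1-8: data, LSB first */
    frame |= (uint16_t)parity << 9; /* bit 9: parity */
    frame |= 1u << 10;              /* bit 10: stop bit = 1 */
    return frame;
}
```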

Hardware Implementation

The three main components of our hardware design are as follows:

Laser module

Camera and its associated circuitry

Outer casing for the entire device

Laser

Figure 6: Red laser module with a line-generating DOE attached

Our original plan, at the time of the project proposal, was to use an infrared laser to detect button presses with the CMOS camera, but we realized that user safety would be a major issue: the user would have no way of knowing if he/she were staring directly into an invisible beam, and therefore no way to prevent eye damage. In addition, we realized that the CMOS camera we are using (OV6630) is not very effective at detecting infrared light. Hence, we decided to use a Class II 635 nm red laser instead.

The laser module we bought came with a built-in driver; therefore, we didn’t have to worry about biasing the laser properly to make it operational. All we had to do was to connect the laser to a 3V power source, which we obtained using a simple 3V voltage regulator.

Figure 7: Laser line generation calculation

The laser module also came with a line-generating diffractive optical element attached to it. However, since we didn’t know the fan-angle for this DOE, we had to experiment with various distances in order to obtain a line length of at least 8.5”, which was required to cover the entire width of our printed keypad. In the end, we had to place the laser at a distance of approximately 12.5” to obtain good results.

Camera

Figure 8: C3038-4928IR 1/4” Color Sensor Module

For this project we decided to use the C3038 1/4” color sensor module with digital output, which uses OmniVision’s CMOS image sensor OV6630. The two primary reasons we chose this specific camera module were its low cost and its ability to output image color data in progressive scan mode. Progressive scanning was an important consideration for us, since we don’t have enough computational power available on the 16 MHz Mega32 microcontroller to process entire frames at once; however, we can certainly process images line-by-line as they come in. After rigorous testing and a lot of research, we realized that we could work with only the red channel data from the camera and still identify keystrokes accurately. Hence, we connected the 8-bit red channel output from the camera (UV[7:0]) to PORTA[7:0] on the Mega32.

Casing

The hardware assembly for our device is designed to hold the camera at a fixed position such that it looks over the appropriate region of the printed keyboard. In addition, it also holds the laser module at a fixed position such that the plane of red light completely covers the area above the printed keyboard. In order to ensure that new custom-printed keyboards can be swapped in-and-out of the device while maintaining proper distances, we permanently attached a piece of black poster board of the right length to the assembly and mounted 4 photo-corners on it.

Figure 9: Hardware casing for Virtual Keyboard device

Testing

Keystroke Accuracy:

As a result of the limited viewing angle of the camera and positioning of the laser, we had to design and calibrate with various keypad layouts to make sure we could detect all of the buttons with reasonable accuracy. Our final design for the generic keypad and testing results (percentage accuracy) for this layout are given in Figure 3. For the testing, we tried 100 keystrokes per key and set the acceptance threshold at 70% for side areas and 80% for the central area. This means that if we can recognize a certain key accurately at least 70 or 80 times, respectively, out of the 100 times that it’s pressed, that key passes the test.

Figure 10: Testing results for keystroke detection accuracy

Conclusion

Although the final project was very satisfying, our results did not completely meet our expectations. The keyboard worked as we predicted but typing speed was minimal (about 60 characters per minute) due to limited processing capabilities of the Mega32 microcontroller.

If we had more time, we would have liked to increase the theoretical maximum typing speed by possibly using another microcontroller in parallel or maybe even an external FPGA to do extra image processing. In addition, we would also like to include sound effects for keystrokes and a dynamic calibration algorithm which can be used to orient the custom-printed keyboard in any direction. This sort of functionality would require performing 2D image transforms on-the-fly, which is not feasible with the existing microcontroller. Last but not least, we could certainly try to improve our current keystroke detection algorithm to improve typing accuracy.

Appendix

Standards

I2C-Bus Specification Version 2.1:

Two wires in an I2C bus, serial data (SDA) and serial clock (SCL), carry information between the devices connected to the bus. Each device is recognized by a unique address and can operate as either a transmitter or receiver, depending on the function of the device. In addition to transmitters and receivers, devices can also be considered as masters or slaves when performing data transfers. A master is the device which initiates a data transfer on the bus and generates the clock signals to permit that transfer. At that time, any device addressed is considered a slave.

PS/2 Keyboard Protocol:

The PS/2 keyboard interface typically uses a bidirectional synchronous serial protocol, but for our implementation we do not need the computer (host) to communicate with the microcontroller (device). Therefore, for our purposes, the device always generates the clock signal and all data is transmitted one byte at a time. Each byte is sent in a frame consisting of 11 bits, in the following order:

1 start bit. This is always 0.

8 data bits, least significant bit first.

1 parity bit (odd parity).

1 stop bit. This is always 1.

The parity bit is set to 1 if there is an even number of 1s in the data bits, and to 0 if there is an odd number; the data bits plus the parity bit therefore always contain an odd number of 1s (odd parity). This is used for error detection. Data sent from the device to the host is read on the falling edge of the clock signal; the clock frequency must be in the range 10–16.7 kHz, which means the clock must be high for 30–50 µs and low for 30–50 µs.

Ethical and Legal Considerations

Throughout the final project, we committed ourselves to the highest ethical and professional conduct and closely adhered to the IEEE Code of Ethics. We placed an extra emphasis on the following points mentioned in the Code of Ethics:

To accept responsibility in making decisions consistent with the safety, health and welfare of the public, and to disclose promptly factors that might endanger the public or the environment.

To avoid real or perceived conflicts of interest whenever possible, and to disclose them to affected parties when they do exist.

To be honest and realistic in stating claims or estimates based on available data.

To improve the understanding of technology, its appropriate application, and potential consequences.

To maintain and improve our technical competence and to undertake technological tasks for others only if qualified by training or experience, or after full disclosure of pertinent limitations.

To seek, accept, and offer honest criticism of technical work, to acknowledge and correct errors, and to credit properly the contributions of others.

To avoid injuring others, their property, reputation, or employment by false or malicious action.

To assist colleagues and co-workers in their professional development and to support them in following this code of ethics.

Since we were using a Class II laser device, we always made sure to keep the laser targeted away from other individuals in the Lab. In addition, we designed an enclosure for our device such that the laser would not be visible to the user.

We did not have any legal considerations, since we did not use code or algorithms from other sources, did not use parts regulated by federal agencies, and did not infringe upon any existing patents. Although a commercial product similar to our device, called the “Virtual Laser Keyboard,” is currently manufactured by a company known as I-Tech, we believe that we have distinguished our product significantly enough to avoid any intellectual-property issues. The commercial product is not dynamically reconfigurable, uses a red laser to project a standard QWERTY keyboard pattern onto a surface, and uses an infrared laser for keystroke detection. Our product, on the other hand, uses a customizable printed keyboard, a red laser for keystroke detection, and custom keystroke detection algorithms.

Safety

Figure 11: EM radiation absorption characteristics of the human eye

The human body is vulnerable to the output of certain lasers, and under certain circumstances, exposure can result in damage to the eye and skin. Research relating to injury thresholds of the eye and skin has been carried out in order to understand the biological hazards of laser radiation. It is now widely accepted that the human eye is almost always more vulnerable to injury than human skin. The cornea (the clear, outer front surface of the eye’s optics), unlike the skin, does not have an external layer of dead cells to protect it from the environment. Hence, the cornea absorbs the laser energy and may be damaged. Figure 11 illustrates the absorption characteristics of the eye for different laser wavelength regions. Since we only used a Class II laser in this project and provided a proper enclosure for the device such that the beam isn’t directly visible to the user, special protection is not required for normal users. People with sensitive eyesight or other severe vision problems, however, might want to take precautionary measures and should not use the device for extended periods of time. In addition, users are strongly advised not to look directly into the laser beam at any time.

Budget

Our total budget for this project was $75.00, and we easily managed to keep our costs less than that.

PART                                  COST     SOURCE
Total                                 $48.41   --
RS-232 Serial Port                    $1.00    ECE 476 Digital Lab
MAX233CPP                             --       Sampled
Red LED                               --       ECE 476 Digital Lab
Jumpers                               --       ECE 476 Digital Lab
Surface mount capacitors, resistors   --       ECE 476 Digital Lab
16 MHz Crystal Oscillator             --       ECE 476 Digital Lab
LM340T5 Voltage Regulator             --       Sampled
Slide Switch                          --       ECE 476 Digital Lab
Atmel ATMega32                        $8.00    ECE 476 Digital Lab
40 pin DIP Socket                     $2.00    ECE 476 Digital Lab
8 pin DIP Socket                      $0.40    ECE 476 Digital Lab
635nm Laser                           $8.00    Ebay
9V Power supply                       --       ECE 476 Digital Lab
OV6630 CMOS Camera                    $25.03   Electronics123
PS/2 cable and USB adapter            --       Previously owned
Poster board                          $3.99    Walmart
Wood                                  --       Scrap

Task Distribution

Naweed Paya:

Solder prototype board and C3038 camera module

Camera implementation

Design and assemble hardware casing

PS/2 communication

Testing and debugging

Final report

Venkat Ganesh:

Laser and camera implementation

Java applet

I2C communication

Testing and debugging

Final report

Acknowledgements

We would like to thank Prof. Bruce Land and the ECE 476 staff for their continual support, insightful comments, and suggestions which altogether made this project possible. We would also like to thank Ruibing Wang, an ECE M. Eng student at Cornell University, for his assistance in getting the I2C protocol to work. This protocol was necessary to communicate with the camera, and was therefore an integral component of our project.

In addition, we would like to reference two past ECE 476 projects. To get an initial idea of the camera settings required for proper operation of the OV6630 camera using a Mega32 microcontroller, we looked at the camera implementation in the project titled “Autonomous SearchBot” by John and Diego. In addition, we used the “Wireless Keyboard” project by Luke Hejnar and Sean Leventhal as an example to implement the PS/2 keyboard protocol using a Mega32 microcontroller.