The videos show a first attempt to demonstrate the application. It is difficult to demonstrate without dumping a stream of frames directly from the tablet, i.e. obtaining a proper screencast. There are 5 parts, split due to a technical issue with the microphone [1, 2, 3, 4, 5] (about 10 minutes in total).

I’ll look for a better way to demonstrate it. Someone told me there is a screencast app for Android.

Cascade classification is easy to work with in OpenCV, but it is not so well documented. This series of short videos explains how it is done on GNU/Linux-based systems (although it may be useful and applicable to other platforms too). The videos were not scripted or planned, so please excuse the occasional stuttering and mistakes.
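For the curious, the cascades in question owe their speed to evaluating rectangle features over an integral image (summed-area table). Here is a minimal pure-Python sketch of that underlying idea (not OpenCV’s actual implementation; the pixel grids are plain nested lists):

```python
def integral_image(img):
    """Compute the summed-area table of a 2-D grid of pixel values.

    The table has an extra zero row and column, so ii[y][x] is the
    sum of all pixels above and to the left of (x, y), exclusive.
    """
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def region_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle at (x, y), in O(1)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]
```

Once the table is precomputed, any rectangle sum costs four lookups, which is what lets a cascade reject most candidate windows after evaluating only a handful of features.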

In order for identification of vehicles to work (for navigation, not a Big Brother application), I am preparing a paper on some existing methods. Along the way I found some interesting videos. The first one shows OpenCV on Android phones:

OpenCV on desktop hardware can achieve more, with high-resolution images as well:

Vehicle classification (not OpenCV):

For collision prevention it helps to estimate the relative speed of nearby vehicles. Here is an application which measures vehicle speed on the fly:
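For a rough idea of what such an application computes: given a tracked vehicle’s displacement in pixels between frames, the camera’s capture rate, and a calibration factor mapping road-plane pixels to metres, relative speed follows directly. A sketch, with all parameter names and values being assumptions of mine:

```python
def estimate_speed_mps(displacement_px, metres_per_px, fps, frame_gap=1):
    """Estimate relative speed (m/s) from pixel displacement between frames.

    displacement_px : how far the tracked vehicle moved, in pixels
    metres_per_px   : calibration factor for the road plane (hypothetical)
    fps             : capture rate of the camera
    frame_gap       : number of frames between the two measurements
    """
    dt = frame_gap / fps  # elapsed time in seconds
    return displacement_px * metres_per_px / dt
```

At 30 fps, for example, a 20-pixel displacement over 2 frames with a 0.05 m/px calibration works out to 15 m/s (54 km/h). The hard part in practice is the calibration and the tracking, not this arithmetic.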

Car navigation using computer vision algorithms/programs (as opposed to GPS/maps) is scarcely explored in the form of mobile applications. With many built-in cameras and increasing processing power/RAM, it would be desirable to exploit — to the extent possible — whatever general-purpose devices have to offer while idle; single-purpose appliances like TomTom make less business sense nowadays, and development frameworks for mobile platforms have become versatile enough to empower third-party developers. Based on conversations with colleagues, OpenCV and its many plugins should be available for Android as well, although it may require some hacking and adaptation to the hardware at hand (high-end ARM for the most part).

If the goal is to make vehicles with cameras mounted onto them interpret a scene like humans do, then analysis of video sequences on mobile hardware (efficient applications) ought to be explored, with special emphasis on performance. C++ has a small memory footprint and high efficiency. Contemporary resolution at a high capture rate is satisfactory for the task, but it is unclear whether a good algorithm that segments and tracks a scene can keep up. GPU-like processing power is available on some phones, but not all (drivers for non-x86 architectures are poor or scarce, too). Mobileye offers peripheral and assistive hardware for this reason, recognising the known caveats. Vuforia does augmented reality for mobile platforms, and a company called ThirdSight also makes mobile applications with computer vision methodologies. Not so long ago (April 2010) it was reported that “development of new automobile safety features and military applications [...] could save lives.” The hardware is not specified in the report. To quote, “Snyder and his co-authors have written a program that uses algorithms to sort visual data and make decisions related to finding the lanes of a road, detecting how those lanes change as a car is moving, and controlling the car to stay in the correct lane.”

While purely automatic driving is currently verboten, computer-aided driving is legal and forms a growing trend. It need not involve any mechanics either, as it’s mostly about message-passing to a human (HCI).

Computer vision is definitely possible on Android using OpenCV. Here is an android-opencv demo app [via] which may come in handy for programming in C/C++. This further and latest exploration complements the earlier post, as car navigation-targeted open source code is absent; what we currently have out there mostly uses maps, not image/video, so there is a gap which, once filled, would augment an open source car, e.g. with open source navigation that incorporates widely-researched methods. Dashboard Cam, an Android application which is demonstrated here, uses GPS and photo overlays, but there is no computer vision/pattern recognition work being done.

A few years ago the DARPA Grand Challenge explored the scarcely understood potential of autonomous vehicle navigation with on-board, non-remote computer/s and a fixed number of viewpoints (an upper bound on apertures, processing power, et cetera). This received a great deal of press coverage owing to public interest, commercial appeal, and the general underlying novelty. While the outcome was promising, not many people are able to afford the equipment involved. With mobile devices proliferating, semi-autonomous or computer-aided driving becomes an appealing option, just as surgeries increasingly involve assistance from computers (cf. MICCAI as a conference). This trend continues as confidence in the available systems increases, and their practical use is already being explored in particular hospitals where human life is at stake.

Road regulations currently limit the level to which computers are able to control vehicles, but in the US those regulations are subject to constant lobbying. Many devices utilise GPS-obtained coordinates, but very few exploit computer vision methods to recognise obstacles that are observed natively rather than derived from a map (top-down). A comprehensive search around the Android repository reveals very little work on computer vision among popular applications. Processor limitations, complexity, and lack of consistency (e.g. among screen sizes and camera resolutions) pose challenges, but that oughtn’t excuse this computer vision ‘drought’. A lot of code can be conveniently ported to Dalvik.

In order to explore the space of existing work and products, with special emphasis on mobile applications, I have begun looking at what’s available for navigation, bar stereovision (as that would require multiple phones or a detached extra camera for good enough triangulation). Tablets and phones make built-in cameras more ubiquitous, but their full potential is rarely realised, e.g. when docked upon a panel in a car with a high-resolution, high capture rate (framerate) camera.
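The reason stereovision needs a second, well-separated camera: depth is recovered from disparity by triangulation, and a short baseline gives too little disparity at road distances. A sketch of the standard pinhole-stereo relation, with hypothetical parameter values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: depth (m) = focal (px) * baseline (m) / disparity (px)."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity (or a matching error)")
    return focal_px * baseline_m / disparity_px
```

With a (hypothetical) 700 px focal length, a 0.5 m baseline puts a 35 px disparity at 10 m depth; shrink the baseline to a phone-sized 5 cm and the same 10 m target yields only 3.5 px, which is well within matching noise — hence the need for a detached extra camera.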

According to Wikipedia, “Mobileye is a technology company that focuses on the development of vision-based Advanced Driver Assistance Systems” and this system is geared towards providing the user with car navigation capabilities that are autonomous and rely only on a single camera, such as the one many phones have. Functionality is said to include Vehicle Detection, Forward Collision Warning, Headway Monitoring & Warning, Lane Departure Warning and Lane Keeping / Lane Guidance, NHTSA LDW and FCW, Pedestrian Detection, Traffic Sign Recognition, and Intelligent Headlight Control. The company received over $100 million in investment as the computer-guided navigation market seems to be growing rapidly. A smartphone application is made available by Mobileye, with a demo version available for Android. “Although the Mobileye IHC icon will appear on the application, it requires additional hardware during installation,” their Web site says. The reviews by users are largely positive (demo version, 1.0, updated and released January 5th, 2012).

The video “Motorola Droid Car Mount Video Camera Test” shows the sort of sequence which needs to be dealt with. Lacking hardware acceleration, it would be hard to process frames fast enough (maybe the difference between frames would be more easily manageable). Response time for driving must avert lag. It’s the same with voice recognition on phones, which is rarely satisfactory in real-time mode. The Galaxy S II, for example, takes a couple of seconds to process a couple of words despite having some very cutting-edge hardware.
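The frame-difference idea hinted at above can be sketched in a few lines on greyscale pixel grids (a real implementation would use OpenCV’s absdiff and thresholding rather than pure Python; the threshold value here is an assumption):

```python
def frame_difference(prev, curr, threshold=25):
    """Return a binary motion mask: 1 where a pixel changed noticeably.

    prev, curr : two greyscale frames as nested lists of pixel values
    threshold  : minimum absolute change to count as motion (hypothetical)
    """
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]
```

Only the changed pixels then need further processing, which is exactly why working with differences between frames is cheaper than analysing every frame in full.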