Alberto Broggi and Ernst D. Dickmanns, "Applications of computer vision to intelligent vehicles," Image and Vision Computing Journal, 18(5):365-366, April 2000.

ABSTRACT

Vision is the main sense that we use to perceive the structure of the surrounding environment. Due to the large amount of information that an image carries, artificial vision is an extremely powerful way for autonomous robots to sense their surroundings as well. It has several advantages over radio, acoustic, and tactile sensors (such as radars, lasers, sonars, and bumpers), notably the ability to acquire data non-invasively, without altering the environment.

On the other hand, active sensors, which measure the alteration of signals that they themselves emit, have some specific advantages:

* active sensors can measure certain quantities more directly than vision. For example, a Doppler radar directly measures the relative motion between an object and the viewer, while vision can detect movement only through complex processing of image sequences;

* active sensors require less powerful computing resources, since they acquire a considerably smaller amount of data.
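As a rough illustration of the first point above, that vision recovers motion only by processing image sequences, the following is a minimal frame-differencing sketch (not from the paper; synthetic frames, NumPy assumed):

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=25):
    """Flag pixels whose intensity changed between two grayscale frames.

    Unlike a Doppler radar, which measures relative velocity directly,
    vision must compare at least two images to infer that anything moved.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold  # boolean mask of "moving" pixels

# Two synthetic 8-bit frames: a bright square shifts right by 2 pixels.
f0 = np.zeros((8, 8), dtype=np.uint8)
f1 = np.zeros((8, 8), dtype=np.uint8)
f0[2:5, 2:5] = 200
f1[2:5, 4:7] = 200
mask = detect_motion(f0, f1)
```

Even this toy example needs two full frames and a per-pixel comparison, whereas a Doppler sensor would return the velocity measurement in a single reading.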

In many indoor applications, such as the navigation of autonomous robots in both structured and unknown settings, vision and active sensors can perform complementary tasks for the recognition of objects, the detection of free space, or the verification of specific object characteristics. Unfortunately, when several robots move within the same environment, active sensors may interfere with each other, thus decreasing their reliability and usefulness. This problem gets even harder in an outdoor unstructured environment, in which a large number of vehicles could be moving simultaneously, as, for example, when autonomous vehicles move on (intelligent) highways.

Hence, in cases in which a massive and widespread use of autonomous sensing agents is envisaged, the use of passive sensors such as cameras has definite advantages over active sensors. These are the cases in which vision becomes of paramount importance.

Furthermore, recent advances in computational hardware, such as a higher degree of integration, make it possible to build machines that deliver high computational power, with fast networking facilities, at an affordable price. Since low-level image processing is computationally demanding, the availability of specific solutions (such as SIMD-like processing paradigms) in low-cost general-purpose processors removes some basic bottlenecks.
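As an illustrative sketch (not from the paper) of why SIMD-like paradigms help low-level image processing, compare the same binary thresholding written pixel by pixel against a single whole-array operation, with NumPy's vectorized arithmetic standing in for the SIMD instructions of a general-purpose processor:

```python
import numpy as np

# Synthetic 8x8 grayscale image with values 0..252.
img = (np.arange(64, dtype=np.uint8).reshape(8, 8) * 4)

# Scalar version: one pixel per step, as a plain CPU loop would do it.
binary_scalar = np.zeros_like(img)
for r in range(img.shape[0]):
    for c in range(img.shape[1]):
        binary_scalar[r, c] = 255 if img[r, c] > 128 else 0

# SIMD-like version: one logical operation applied to all pixels at once.
binary_simd = np.where(img > 128, 255, 0).astype(np.uint8)

assert np.array_equal(binary_scalar, binary_simd)
```

Both versions produce the same binary image; the point is that the second expresses the operation as a single data-parallel step, which is exactly the pattern that SIMD hardware accelerates.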

In addition, current cameras include important new features that make it possible to address and solve some basic problems directly at the sensor level. For example, image stabilization can now be performed during image acquisition, while an extended dynamic range removes the processing otherwise required to adapt the acquisition parameters to the specific lighting conditions. The resolution of the sensors has been drastically enhanced. To decrease acquisition and transfer time, new technological solutions can be found in CMOS cameras, with important advantages such as pixels that can be addressed independently, like cells in traditional memories, and an integration on the processing chip that seems to be straightforward.
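A small sketch (hypothetical names, not from the paper) of why independently addressable pixels reduce transfer time: a region of interest can be read out directly, instead of transferring the whole frame and cropping afterwards.

```python
import numpy as np

# Simulated CMOS sensor array holding a full 480x640 frame.
sensor = np.random.default_rng(0).integers(0, 256, size=(480, 640), dtype=np.uint8)

def read_window(sensor, row, col, h, w):
    """Read only a rectangular region of interest, as a CMOS sensor's
    randomly addressable pixels permit, instead of the whole frame."""
    return sensor[row:row + h, col:col + w].copy()

roi = read_window(sensor, 100, 200, 32, 32)
```

Reading a 32x32 window moves 1024 pixel values instead of the 307200 in a full frame, which is the kind of saving that matters when acquisition time is the bottleneck.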

These technological advances have not only promoted improved hardware devices, but also triggered a renewed interest in techniques for processing iconic information. The success of computational approaches to perception is demonstrated by the increasing number of autonomous systems now being used in structured and controlled industrial environments, and now being studied and implemented to work in more complex and unknown settings. In particular, recent years have witnessed an increasing interest in the use of vision techniques for perceiving automotive environments, both for highway surveillance and for supervised or automatic driving.

The field of Intelligent Vehicles is an application domain in which the words `intelligent autonomous systems' represent not only an important research topic, but also a strategic solution to the mobility problem of the coming years. Vehicles able to move autonomously and navigate in everyday traffic, in both highway and urban scenarios, will become a reality in the next decades. Besides the obvious advantages of increasing road safety and improving the quality and efficiency of mobility for people and goods, the integration of intelligent features and autonomous functionalities into vehicles will lead to major economic benefits, such as reduced fuel consumption and more efficient exploitation of the road network. Furthermore, not only the automotive field (public transportation, trucks, and passenger cars) is interested in these new technologies, but other sectors as well, each with its own target (industrial vehicles, military systems, mission-critical and unmanned rescue robots). The interest in this specific research field is demonstrated by the increasing number of research institutes involved in this application area. In particular, a new group (Intelligent Transportation Systems Council, ITSC) has been formed within IEEE, and a new, highly specialized IEEE Transactions on Intelligent Transportation Systems has recently been launched.
The main conferences organized under this umbrella are the IEEE Intelligent Transportation Systems Conference, ITSC, and the IEEE Intelligent Vehicles Symposium.

This special issue presents seven papers dealing with different aspects of machine vision and proposing different solutions to the problems of perceiving and moving in the surrounding environment. In particular, these papers present systems for driver assistance on roads as well as obstacle detection and tracking, and describe techniques for sensor data fusion.

We would like to thank the Journal Editor, Professor Keith Baker, for giving us the opportunity to bring this extremely topical research to Image and Vision Computing Journal.


For more information, send email to broggi@CE.UniPR.IT.