Mobile Mapping Technology for Road Network Data Acquisition 1
C. Vincent Tao
Department of Geomatics Engineering, The University of Calgary
2500 University Drive, NW, Calgary, Alberta, Canada T2N 1N4 E-mail: [email protected]
Abstract
The advancement of sensor technology enables fast and cost-effective acquisition of spatial data. This paper provides a review of the state-of-the-art development of mobile mapping technology in North America. Following a background introduction, multi-sensor integrated mapping technology is described. The key concept of direct georeferencing is given. The sensors as well as their accuracy aspects are addressed. As a system example, the VISAT mobile mapping system is introduced. The recent progress on automated feature extraction from mobile mapping imagery is reported. Finally, applications of mobile mapping technology are discussed.
1 Introduction
Mapping science has steadily moved into a digital mapping era. The core technologies of photogrammetry, remote sensing, geographic information systems (GIS) and spatial positioning are becoming fully integrated, resulting in the tremendous expansion and rapid growth of mapping markets. Modern digital mapping technology is characterized by the capabilities of multi-discipline combination, multi-platform compensation, multi-sensor integration and multi-data fusion.
Multi-platform, multi-sensor integrated technology has established a trend towards fast spatial data acquisition. Multi-sensor systems can be mounted on various platforms, such as satellites, aircraft or helicopters, land-based vehicles, water-based vessels, or even hand-carried by individual surveyors. As a result, every vehicle or individual surveyor becomes a potential data collector, responsible for globally integrated data acquisition. The recent development of land-based mobile mapping systems represents a typical application of this integrated technology. In fact, the development of mobile mapping systems was largely driven by transportation applications and is being further inspired by the wide implementation of Intelligent Transportation Systems (ITS) and Geospatial Information Systems in Transportation (GIS-T).
2 Development of Mobile Mapping Systems
2.1 Photo-Logging
In the 1970s, photo-logging systems were used by many highway transportation departments to monitor pavement performance, signing, maintenance effectiveness, encroachments, etc. These surveys are typically conducted every two or three years. Film cameras were typically used to capture photos through the windshield of a vehicle (e.g., a van). Inertial devices such as gyroscopes and accelerometers, together with wheel counters, were employed to determine the instantaneous positions of the captured photographs. Each photo was stamped with time and geographic position information. These photos were stored mainly as a pictorial record of highway performance (Birge, 1985).
Due to the poor accuracy of vehicle positioning and the single-camera configuration of these systems, 3-D object measurement was not available. The main drawback of photo-logging is film-based storage and processing. Accessing the photos for engineering, planning, legal or safety activities was time-consuming because film is fragile and the process requires costly film production. For example, the Tennessee Department of Transportation (Tennessee, U.S.A.) maintains a photolog of approximately 27,000 miles of roadway in its state. All pictures were kept on film and could only be viewed using special viewing machines. To make these photos more accessible, the Department converted its existing photolog from film to a digital format and developed a client/server based system. The system allows users to point-and-click any location along the roads and view the logging images through a desktop PC connected to the server.

1 Invited paper presented at the International Symposium of Urban Multimedia/3D Mapping, June 8-9, Tokyo, Japan, 1998
Journal of Geospatial Engineering, Vol. 2, No. 2, pp. 1-13. Copyright © The Hong Kong Institution of Engineering Surveyors
2.2 Video-Logging
With the advent of the Global Positioning System (GPS) and video imaging technology, cumbersome photo-logging systems were replaced by GPS-based video-logging systems. Many projects have demonstrated that GPS-based video-logging systems offer a fast and low-cost approach to highway inventory (Lapucha, 1990; Schwarz et al., 1990). The collected video images can be georeferenced with respect to a global coordinate system by using continuous GPS navigation and positioning information. The turn-around time of data processing is significantly reduced since no film processing is involved. Furthermore, the digitally georeferenced video data allow quick retrieval and effective management. The capability to interpret highway video data is also strengthened through the use of image processing software. This approach has become widely accepted by most transportation departments. Visual inventory and feature documentation along road corridors remain the major purpose of these systems.
Due to the low resolution of video images, quantitative measurements from these images are still limited. The video images are often stored on tape, which may further degrade image quality. Most of these systems have only a single-camera configuration, so precise 3-D measurements are not possible. However, alternative methods can be developed to provide relative measurements from a single image, such as height and offset measurements (Tao, 1997).
2.3 Mobile Mapping
The evolution of mobile mapping systems from video-logging systems was driven mainly by the efforts of two research groups in North America, the Center for Mapping at The Ohio State University, U.S.A., and the Department of Geomatics Engineering at The University of Calgary, Canada (Bossler et al., 1991; Schwarz et al., 1993). Compared to video-logging systems, mobile mapping systems offer full 3-D mapping capabilities, realized through advanced multi-sensor integrated data acquisition and processing technology (El-Sheimy and Schwarz, 1995; Li, 1997; Novak, 1995; Tao, 1997).
A common feature of mobile mapping systems is that more than one camera is mounted on a mobile platform, allowing for stereo imaging and 3-D measurements. Direct georeferencing of digital image sequences is accomplished by multi-sensor navigation and positioning techniques. Multiple positioning sensors, such as GPS, an Inertial Navigation System (INS) and dead-reckoning (DR), can be combined in data processing to improve the accuracy and robustness of georeferencing. The ground control required for traditional mapping is eliminated. The systems can achieve centimeter accuracy of vehicle positioning and meter or sub-meter 3-D coordinate accuracy for objects measured from the georeferenced image sequences.
Another advantage of mobile mapping systems is that the data link to a geospatial database is easy and straightforward. The collected geometric and attribute information can be directly used to build and update a database. With the development of fast communication and image compression technologies, real-time image data link from a field mobile mapping system to an office GIS can be realized. Furthermore, such data can be disseminated and accessed through widely distributed Internet and even wireless networks.

A list of some of the systems available in North America is provided in Table 1 (sources are the literature and company websites). It is worth noting that the data in Table 1 could be outdated and may not be accurate due to continuous system improvements.
Table 1. Some video-logging and mobile mapping systems in North America

System | Developer | Positioning Sensors | Mapping Sensors | Website Reference
ARAN | Roadware Corp., ON, Canada | Accelerometers/IMU/GPS | 1 VHS, 2 or more CCD, Laser | www.roadware.com
DGI | Data Chromatics, Inc., USA | GPS | 1 CCD | www.dcigis.com
GeoVAN | GeoSpan Corp., CO, USA | GPS/DR | 8 CCD, voice recorder | www.geospan.com
GI-Eye | NAVSYS Corp., CO, USA | GPS/IMU | 1 CCD | www.navsys.com
GPSVan | The Ohio State University, Columbus, OH, USA | GPS/Gyro/wheel counter | 2 CCD, voice recorder | N/A
GPSVision | Lambda Tech Int'l Inc., WI, USA | GPS, INS | 2 color CCD | www.lambdatech.com
ON-SIGHT | TransMap Corp., OH, USA | GPS, INS | 4 color CCD | www.transmap.com
Roadview | Mandli Communications, Inc., WI, USA | GPS/IMU/Inclination Odometer/Barometer | Progressive Scan CCD | www.mandli.com
Road Radar | Road Radar Ltd., Canada | GPS | Ground Penetrating Radar, 1 Video | www.rrl.com
TruckMAP | John E. Chance and Associates, Inc., LA, USA | Multi-antenna GPS/gyro | Laser range finder, 1 Video | www.jchance.com
VISAT | The Univ. of Calgary and Geofit Inc., Canada | GPS/INS/Anti-Brake System | 8 B/W CCD, 1 color SVHS | www.visat.com
VMS | Redhen Systems, Inc., CO, USA | GPS | 1 CCD, voice recorder, Laser | http://www.redhensystems.com/vms/

3 Multi-Sensor Integrated Mobile Mapping Technology
A mobile mapping system consists of three components: data acquisition, information extraction and information management. In this section, the data acquisition component is described. This is the first, and a critical, step in the development of a mobile mapping system.
3.1 Direct Georeferencing
The most important concept of mobile mapping is direct georeferencing. The conceptual layout of direct georeferencing is shown in Figure 1. Direct georeferencing refers to the determination of the exterior orientation of the mapping sensor without using ground control points or photogrammetric block triangulation. For example, if a camera sensor is used, any captured image can be "stamped" with the georeferencing parameters, namely three positional parameters and three attitude parameters. As a result, 3-D object measurements can be obtained directly by photogrammetric intersection.
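To make the concept concrete, the sketch below stamps two images with assumed georeferencing parameters (position plus attitude) and intersects the two measured image rays to obtain 3-D object coordinates. All values, the frame conventions and the helper names are illustrative assumptions, not a configuration taken from any system described in this paper.

```python
# A minimal sketch of direct georeferencing followed by photogrammetric
# intersection. Camera positions, attitudes, focal length and image
# measurements are illustrative values only.
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation from the camera frame to the mapping frame (angles in radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def ray_direction(R, xy, f):
    """Unit ray from the perspective centre through image point (x, y)."""
    d = R @ np.array([xy[0], xy[1], -f])
    return d / np.linalg.norm(d)

def intersect(C1, d1, C2, d2):
    """Least-squares intersection of two rays C + t*d."""
    A = np.column_stack([d1, -d2])
    t, s = np.linalg.lstsq(A, C2 - C1, rcond=None)[0]
    return 0.5 * ((C1 + t * d1) + (C2 + s * d2))

# Georeferencing parameters "stamped" on each image: 3 positions + 3 attitudes.
C1, R1 = np.array([0.0, 0.0, 2.0]), rotation_matrix(0.0, 0.0, 0.0)
C2, R2 = np.array([1.8, 0.0, 2.0]), rotation_matrix(0.0, 0.0, 0.0)
f = 0.006                 # 6 mm focal length (assumed)
p1 = (0.0010, 0.0004)     # measured image coordinates in metres (assumed)
p2 = (0.0006, 0.0004)

X = intersect(C1, ray_direction(R1, p1, f), C2, ray_direction(R2, p2, f))
print("3-D object coordinates:", X)
```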
There are three modes for direct georeferencing, namely, stand-alone mode, integrated mode and combined mode.


Figure 1. The concept of direct georeferencing (figure not reproduced; it relates vehicle-oriented positioning sensors, e.g. GPS, and feature-oriented mapping sensors, e.g. a camera, to a global coordinate system, a local coordinate system and an object of interest).

Stand-alone mode: Because GPS satellite signals are unavoidably blocked at times, using GPS alone to provide positional information is not reliable. Although a number of radio navigation systems as well as recent cellular positioning systems, e.g., Loran-C and CDMA-based systems, are available, they do not yield sufficient accuracy for most mobile mapping applications. The stand-alone mode is therefore not a stable georeferencing solution for mobile mapping. However, it has been used very often for video-logging applications where the accuracy requirement is not high.
Integrated mode: The integrated use of external and inertial positioning systems is currently the widely adopted approach for georeferencing (Schwarz and Wei, 1994; Lithopoulos, 1996; Skaloud et al., 1996; Grejner-Brzezinska et al., 1999). Depending on the application, different levels of integration and different combinations of sensors can be developed, for example, GPS with INS, GPS with DR, etc. This mode is discussed further below.
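As a rough illustration of this mode, the sketch below loosely couples a simulated high-rate inertial solution with low-rate GPS position fixes through a one-dimensional Kalman filter; the rates, noise levels, sensor bias and outage interval are invented for illustration and do not correspond to any system discussed here.

```python
# A minimal, loosely-coupled GPS/INS sketch (1-D, illustrative values):
# a high-rate inertial mechanization predicts position and velocity, and
# low-rate GPS fixes correct the accumulated drift. A simulated GPS gap
# (signal blockage) is simply bridged by the inertial prediction.
import numpy as np

dt = 0.01                        # 100 Hz inertial rate
F = np.array([[1, dt], [0, 1]])  # constant-velocity state transition
Q = np.diag([1e-6, 1e-4])        # process noise (inertial sensor errors)
H = np.array([[1.0, 0.0]])       # GPS observes position only
R = np.array([[0.05 ** 2]])      # 5 cm differential GPS position noise

x = np.array([0.0, 15.0])        # state: [position (m), velocity (m/s)]
P = np.eye(2) * 0.01
rng = np.random.default_rng(0)

for k in range(1, 2001):         # 20 s of driving at roughly 15 m/s
    accel = 0.01 + rng.normal(0.0, 0.02)   # small uncorrected bias plus noise
    x = F @ x + np.array([0.5 * accel * dt ** 2, accel * dt])
    P = F @ P @ F.T + Q

    # 1 Hz GPS fixes, with a 4 s outage between k = 800 and k = 1200.
    if k % 100 == 0 and not 800 <= k <= 1200:
        z = 15.0 * k * dt + rng.normal(0.0, 0.05)   # simulated GPS position
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + (K @ (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P

print("estimated position and velocity after 20 s:", x)
```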
Combined mode: In airborne photogrammetry, the combined mode of georeferencing is predominantly used in practice because of its performance, reliability and economy. In this mode, GPS observations and photogrammetric observations are combined into a triangulation block adjustment, so that systematic GPS errors can be controlled and corrected through the combined adjustment (Ackermann, 1996). Very few control points are required in this mode to solve the datum transformation. In mobile mapping systems, the image tie points available from overlapping image sequences can be used to perform a strip-wise photogrammetric triangulation, from which the orientation parameters of the camera can also be derived. This technique can be used for quality control of georeferencing data derived from GPS/INS observations and, furthermore, to bridge gaps where orientation observations are absent. This georeferencing approach has been researched at The University of Calgary (Chaplin and Chapman, 1998; Tao et al., 1999).
Direct georeferencing brings significant benefits to the mapping procedure in terms of rapid turn-around time of data processing and reduced cost of ground control surveys. The direct georeferencing technique makes mobile mapping feasible and has also led to the successful development of a new generation of airborne mobile mapping systems, such as airborne LIDAR mapping systems and airborne SAR systems. The goal is mapping that is independent of any ground control and, ultimately, completely autonomous.


3.2 Positioning and Mapping Sensors
The development of mobile mapping systems is characterized by the use of multiple sensors as well as integrated sensor processing methods. In general, two primary types of sensors are involved, namely, positioning sensors and mapping sensors:

Positioning sensors
a) Environment-dependent external positioning sensors: GPS, radio navigation systems, Loran-C, cellular positioning devices, etc.
b) Self-contained inertial positioning sensors: INS or IMU (Inertial Measurement Unit), dead-reckoning systems, gyroscopes, accelerometers, compasses, odometers, barometers, etc.

Mapping sensors
a) Passive imaging sensors: video or digital cameras, multi-spectrum or hyper-spectrum scanners, etc.
b) Active imaging sensors: laser range finders or scanners, synthetic aperture radar (SAR), etc.
Other sensors such as voice recording and speech recognition devices, touch-screens, temperature or air pressure meters, gravity gauges, etc. may be of use for integration.
The positioning sensors are vehicle-oriented. They are used to determine the absolute location of the mobile mapping platform with respect to a global coordinate system, e.g., WGS-84. The mapping sensors, by contrast, are feature-oriented. They provide the positional information of objects (features) relative to the vehicle in a local coordinate system. In addition, attributes of features can be obtained from the mapping sensors. Precise calibration is required to geometrically align the positioning sensors and mapping sensors, and accurate synchronization (time referencing) of the sensors is also required.
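This alignment can be expressed as a chain of frame transformations: a point measured in the mapping-sensor (camera) frame is moved into the global frame through the vehicle body frame using the calibrated boresight rotation and lever arm together with the georeferencing parameters. The sketch below illustrates the chain; the matrices, offsets and coordinates are placeholders, not calibration results from any system in this paper.

```python
# Sketch of applying a boresight/lever-arm calibration: an object point
# measured in the camera (mapping-sensor) frame is transformed into the
# global mapping frame via the vehicle body frame. All numbers are
# illustrative placeholders.
import numpy as np

# Calibration between the mapping sensor and the vehicle body frame.
R_cam_to_body = np.eye(3)                    # boresight rotation (assumed aligned)
lever_arm_body = np.array([0.5, 0.0, 1.8])   # camera offset from the GPS/INS centre (m)

# Georeferencing of the body frame at the exposure epoch (from GPS/INS).
R_body_to_map = np.eye(3)                    # attitude expressed as a rotation matrix
t_body_in_map = np.array([512345.2, 5645890.7, 1048.3])  # e.g. a WGS-84/UTM position

def camera_to_map(p_cam):
    """Transform a point from camera coordinates to global coordinates."""
    p_body = R_cam_to_body @ p_cam + lever_arm_body
    return R_body_to_map @ p_body + t_body_in_map

print(camera_to_map(np.array([2.0, 25.0, -1.5])))
```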
3.3 Accuracy Assessment of Sensors
The choice of sensors and the effective integration of sensors are key to the development of a mobile mapping system. Accuracy, cost, reliability, data rate, portability, power consumption and integrability are the most important factors to be considered. Among them, accuracy and cost are the primary factors in system design and implementation.

3.3.1 Accuracy of Positioning Sensors

GPS is the most viable choice for the determination of position information due to its wide acceptance and proven performance. GPS can be operated in a variety of modes. The associated accuracies of each mode are summarized in Table 2 (Ellum and El-Sheimy, 2000).

Table 2. Position accuracy of GPS

GPS mode | Horizontal Accuracy (2-D RMS) | Vertical Accuracy (RMS)
Code Differential (Narrow Correlator, Carrier-phase smoothing) | 0.75 m | 1.0 m
L1 Carrier-phase RTK (Float ambiguities) | 0.18 m | 0.25 m
L1/L2 Carrier-phase RTK | 0.03 m | 0.05 m
L1 and L1/L2 Post-mission Kinematic | 0.02 m | 0.03 m
L1 Precise ephemeris (with Ionospheric Modelling) | 1.0 m | 3.0 m

With respect to attitude determination, there are a number of options. Table 3 lists possible sensors for attitude determination. The accuracies stated are for tilt angles (roll and pitch) below 20 degrees. For more information regarding these numbers, one may refer to Ellum and El-Sheimy (2000), and Schwarz and El-Sheimy (1996).

In fact, only the IMU can provide all three attitude angles; the other systems must be combined with additional sensors to do so. An alternative to the IMU approach is to use a GPS multi-antenna system. This has been used in airborne and ship-borne environments, where its accuracy for attitude determination has been acceptable. For land-vehicle-based mobile mapping applications, however, a multi-antenna GPS system cannot reach an acceptable accuracy level because the baseline between the antennas is limited. Most mobile mapping systems therefore use a navigation-grade IMU in order to meet the accuracy requirements, which is one of the reasons the initial development cost is very high.

Table 3. Accuracy of sensors for determining attitude

Sensor Type | Accuracy in Roll and Pitch | Accuracy in Azimuth | Cost (USD)
Navigation Grade IMU | <0.01° | <0.03° | >$100,000
Six-Axis Tactical Grade IMU | 0.25° | 2° | $12,000 – 20,000
Twin Antenna GPS | 0.5°-1.0° | 0.75° | $2,000 – 6,000
High-Accuracy Tilt Sensor | 0.05° | - | $3,500
Low-Accuracy Tilt Sensor | 0.25° | - | $700
Magnetic Azimuth Sensors | - | 1.0° | $250
3-Axis Magnetometer Integrated with 2-Axis Tilt Sensor | 0.25° | 1.0° | $700 – 1,200

3.3.2 Accuracy of Mapping Sensors
Considering the error contribution from the imaging component alone, the positioning accuracy of object point coordinates derived from imagery is determined mainly by four factors: the object distance (Y), the baseline length (B), the focal length (f), and the mean square error of image coordinate measurements (m_pxl). The along-track error component is the predominant contributor to the total positioning error in mobile mapping applications. Tao (1999a) conducted a comprehensive analysis of the achievable accuracy of a stereo imaging system. The main conclusions can be summarized as follows:
a) The maximum baseline length between two cameras is restricted by the desired overlap percentage; the overlap percentage is affected by the field-of-view angle of the camera; and the field-of-view angle is determined by the focal length and the camera sensing area. Therefore, a trade-off among these imaging parameters is required. In Tao (1999a), a method to find an optimal combination of imaging parameters was developed.
b) Under the assumption that the manual measurement accuracy for a single image point is 0.29 pixel, the maximum camera-to-object distance is 35 m if an object positioning accuracy of 30 cm is required (with a standard CCD stereo camera system). If an image coordinate measurement accuracy of 0.2 pixel can be achieved, however, objects as far as 50 m from the camera can be positioned. Therefore, the development of sub-pixel image measurement algorithms is of significant importance; the use of multiple-image matching for point measurement has been considered an effective approach (Tao, 1997). (The growth of the along-track error with object distance is illustrated in the sketch following this list.)
c) The choice of camera type is critical to overall system performance. Camera parameters such as pixel spacing, sensing area, electronic noise, data capture rate and storage requirement must be considered carefully. Cameras with small pixel spacing, a pixel synchronization unit and a built-in A/D converter allow more accurate image coordinate measurements. The use of large-sensor cameras will improve the overall system accuracy and permit flexibility in the configuration of imaging parameters.
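The along-track behaviour summarized in b) follows the standard depth-error relation for a normal stereo pair, sigma_Y ≈ Y^2 · m_pxl / (B · f). The snippet below evaluates this relation with an assumed baseline, focal length and pixel size (not the exact configuration analysed in Tao, 1999a) simply to show how quickly the error grows with object distance.

```python
# Back-of-the-envelope check of the along-track (depth) error of a normal
# stereo pair: sigma_Y ~ Y^2 * sigma_px / (B * f). The baseline, focal
# length and pixel size are assumed values, not those of the cited analysis.
def along_track_error(Y, B, f, pixel_size, pixel_accuracy):
    """Depth error (m) at object distance Y for baseline B and focal length f."""
    sigma_px = pixel_accuracy * pixel_size   # image measurement error in metres
    return (Y ** 2) * sigma_px / (B * f)

B = 1.2              # m, camera baseline (assumed)
f = 0.008            # m, focal length (assumed)
pixel_size = 8.4e-6  # m, cell size of a typical CCD (cf. Table 4)

for Y in (20, 35, 50):
    e29 = along_track_error(Y, B, f, pixel_size, 0.29)
    e20 = along_track_error(Y, B, f, pixel_size, 0.20)
    print(f"Y = {Y:2d} m: sigma_Y = {e29:.2f} m (0.29 px), {e20:.2f} m (0.20 px)")
```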
An accurate calibration of the interior orientation parameters, the rotation angles ϕ (rotation around the Z axis) and the baseline is required. Based on the theoretical analysis (Tao, 1999a), calibration accuracies of 0.3 pixel for the interior orientation parameters, 0.02° for the rotation angles ϕ, 0.03° for the relative orientation angle and 3.5 mm for the camera baseline have to be achieved so that the effects of calibration errors on the total system positioning accuracy are at the same level as those of errors arising from image coordinate measurements. The calibration accuracy for the rotation parameters ω and κ, however, is not as stringent.

Accuracy of time referencing is also important but is technically achievable. Tests showed that if the synchronization error can be kept within 10^-3 s, the resulting position error is less than 4 cm at a vehicle speed of 60 km per hour (Schwarz and El-Sheimy, 1996).
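This figure can be checked with a first-order bound: the along-track position error contributed by a timing error is at most the vehicle speed multiplied by that error.

```python
# First-order bound on the positioning error caused by a synchronization error.
speed_m_per_s = 60.0 / 3.6      # 60 km/h
sync_error_s = 1e-3             # 1 ms timing error
print(f"{speed_m_per_s * sync_error_s * 100:.1f} cm")  # about 1.7 cm, within the 4 cm quoted above
```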
3.3.3 Accuracy Improvement
The accuracy of each individual sensor component was discussed above. Effective combination of these sensors and integrated processing of the sensory data can further improve the total system accuracy. There are basically two levels of integrated processing: (1) integrated processing of multiple positioning sensory data; and (2) integrated processing of positioning and mapping sensory data. For example, if GPS and INS are combined for positioning, the INS drift with time can be largely controlled by using GPS updates, while GPS outages and cycle slips can be corrected by using INS data (Wei and Schwarz, 1990; Schwarz and Wei, 1994; Toth and Grejner-Brzezinska, 1998; El-Sheimy et al., 1999). At the second level of integrated processing, image-based sequential triangulation using tie points from overlapping images can be used to determine the orientation parameters of each image. This technique can augment the georeferencing accuracy derived from the positioning sensors or bridge gaps where the GPS signals are lost. However, it is not as effective as in airborne photogrammetry, mainly because of the poor triangulation geometry of tie points obtained from terrestrial images. The accuracy evaluation and the automatic determination of these tie points can be found in Bruce and Chapman (1998). A good example of the use of this technique for georeferencing terrestrial digital images can be found in Silva et al. (2000).
3.4 The VISAT Mobile Mapping System
The VISAT system was developed by The University of Calgary in cooperation with Geofit, Inc., Canada. The project began in November 1992, and the first demonstration was given in July, 1993. The production system was implemented in 1995. The VISAT has evolved from a 3camera system to a 5-camera system, and is currently an 8-camera system. The overall objective of the VISAT project was to develop a precise mobile mapping system for road inventory and general GIS data acquisition, which is capable of providing an absolute positioning accuracy of 0.3 m and a relative accuracy of 0.1 m for object points within a 30 m corridor, at vehicle speeds of 50-60 km per hour (Schwarz et al, 1993).
The van-based VISAT system is shown in Figure 2. The system consists of a strapdown INS, two L1/L2 GPS receivers, eight black/white CCD video cameras, an Antilock Braking System (ABS), an image control unit, and a color S-VHS camera. The function of each component can be subdivided into primary and secondary functions (Schwarz and El-Sheimy, 1996). In terms of primary functions, the cameras provide 3-D positioning with respect to the VISAT vehicle reference, which in most cases is the perspective center of one of the cameras. The position of this reference with respect to the existing control is determined by differential GPS techniques, while the camera attitude information is given by the INS. The ABS triggers the cameras at constant distance intervals using the VISAT controller trigger channel. In terms of secondary functions, the cameras provide redundant image information of the objects, i.e., more than two images of the same object are available. The GPS is used to control INS drift errors, while the INS is used to bridge GPS outages, correct GPS cycle slips, and perform precise interpolation between GPS fixes. The ABS data can be used to update the INS data in case of GPS signal blockage.
The road-related spatial data can be acquired through the on-line synchronization of GPS, INS, ABS, and image data. The georeferencing parameters of the moving VISAT reference system can then be obtained after processing the DGPS/INS inputs. Locations of objects of interest in the road corridor with respect to the VISAT reference system are determined during post-mission processing, using two or more georeferenced images. The information extracted can be directly used to update or generate GIS databases.

The imaging component of the VISAT consists of eight COHU 4980 video cameras, each with a resolution of 640 x 480 pixels. The technical specifications of the camera are given in Table 4. The cameras are housed in a pressurized case and mounted inside two fixed-base towers on top of the van. Practical tests of the achievable system accuracy can be found in El-Sheimy (1996), and Schwarz and El-Sheimy (1996).

Figure 2. The VISAT mobile mapping system (figure not reproduced; labelled components: S-VHS camera, 8 CCD cameras, GPS antenna, INS system).

Table 4. Technical specifications of the VISAT cameras

Parameter | Specification (RS-170 standard)
Image area | 6.4 x 4.8 mm
Imager type | IT CCD with on-chip microlenses
Cell size | 8.4 x 9.8 micrometers
Resolution | 768 x 494 pixels
Field of view | 43° (H) and 37° (V)
Electronic shutter | Eight options: 1/50 ~ 1/10 000 second

4 Information Extraction and Management
Major research and development efforts have been placed on improving the quality and reliability of image georeferencing. Although automated information extraction from the collected images is of considerable importance, little work has been done in this area. The Ohio State University (OSU), U.S.A., and The University of Calgary (UC), Canada, are the most active in this area (Tao et al., 1999). Nevertheless, related techniques of automatic information extraction from image sequences have been extensively researched, especially in the areas of mobile robotics, autonomous vehicle navigation and intelligent transportation systems.
Information management of large amounts of images (mainly raster data) as well as extracted vector data has been addressed by many GIS researchers and IT (Information Technology) practitioners. Information management has advanced to the point that vast amounts of image data can be handled, archived and retrieved effectively by using spatially enabled relational database technology. At present, however, feature extraction is mainly manual. This labor-intensive process remains fairly costly in mobile mapping production and, therefore, prevents the wide acceptance of mobile mapping as a primary mapping tool. In the following, the current research status of automated information extraction is reported.
4.1 Object Measurement
The objects of interest can be footprints of houses, street edges, centerlines, curbs, lane markers, manholes, culverts, fire hydrants, traffic signs, telephone booths, electric poles, etc. In order to calculate the 3-D coordinates of an object from images, at least two conjugate image points need to be determined. Since the orientation parameters of the cameras are known, the corresponding epipolar lines in a stereo image pair can be computed. Once a point in the left image is measured manually, the corresponding point in the right image can be determined by using an epipolar-line based image matching method. Area-based cross-correlation matching was used by the OSU group for the automatic determination of conjugate image points (He and Novak, 1993).
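A minimal sketch of such epipolar-constrained, area-based matching is given below. For simplicity it assumes a rectified stereo pair, so that the epipolar line of a left-image point is simply the corresponding row of the right image; the window size and search range are arbitrary illustrative choices rather than values from the cited work.

```python
# Minimal sketch of area-based cross-correlation matching along an epipolar
# line, assuming a rectified stereo pair (epipolar line = same image row).
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else -1.0

def match_along_epipolar(left, right, row, col, half=7, search=80):
    """Find the right-image column that best matches left[row, col]."""
    template = left[row - half:row + half + 1, col - half:col + half + 1].astype(float)
    best_col, best_score = None, -2.0
    lo = max(half, col - search)
    hi = min(right.shape[1] - half - 1, col)
    for c in range(lo, hi + 1):              # search along the epipolar row
        window = right[row - half:row + half + 1, c - half:c + half + 1].astype(float)
        score = ncc(template, window)
        if score > best_score:
            best_col, best_score = c, score
    return best_col, best_score

# Sanity check on synthetic data: matching an image against itself returns
# the original column with a correlation of 1.0.
img = np.random.default_rng(1).integers(0, 255, (480, 640))
print(match_along_epipolar(img, img, row=240, col=320))
```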
Further improvement of the 3-D coordinate accuracy of object measurement was researched by Li et al. (1996) and Tao et al. (1997). Their work focused on the use of multiple images to perform photogrammetric triangulation. Their results demonstrated that the final 3-D coordinate accuracy can be improved if multiple corresponding points in image sequences are used; the multiple corresponding points must be established first. In fact, the use of multiple images is valuable not only for enhancing the reliability of image matching, but also for increasing the 3-D coordinate accuracy of photogrammetric triangulation. A semi-automatic multiple-image-based object measurement method was developed by Tao et al. (1997).
4.2 Road Centerline Reconstruction
Road centerline information has been extensively used to generate road network information systems. Such information can also be used to derive road inspection parameters such as the longitudinal profile and surface deformation, which are important indicators for road maintenance. The acquisition of up-to-date road centerline data using conventional field surveys is fairly difficult for cost and logistical reasons.
Since the image features of road centerlines are relatively distinctive, automatic reconstruction of road centerlines from mobile mapping images is expected to be realizable. The first trial was conducted by He and Novak (1992). After manual input of a starting point of a centerline in the image, a line-following algorithm was executed to trace the centerline. However, this method cannot handle difficult road scenarios and lacks the capability of continuous reconstruction of road centerlines from a long image sequence.
A fairly successful approach to automatic centerline reconstruction was developed by Tao (1996) and Tao et al. (1998). This approach is based on a new "shape from image sequences" formulation. In this approach, the GPS/INS trajectory is used as an approximate 3-D road centerline shape model, which is then gradually refined and updated using road feature points extracted from the image sequences. Constraint-based feature extraction and sophisticated multiple-image matching techniques are integrated into the approach. Extensive tests have shown that it functions reliably even under poor imaging and road conditions. Another great advantage of the approach is that continuous 3-D reconstruction of road centerlines from entire image sequences can be realized.
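The following rough sketch conveys only the flavour of this refinement idea: the GPS/INS trajectory supplies an approximate 3-D centerline, and its vertices are updated wherever image-derived centerline points are available. The blending weight, data structures and values are invented for illustration and do not reproduce the actual algorithm of Tao (1996) and Tao et al. (1998).

```python
# Rough sketch of centerline refinement: the GPS/INS trajectory provides an
# approximate 3-D centerline model, which is updated at vertices where
# image-derived centerline points are available. Weights and data are
# invented for illustration only.
import numpy as np

def refine_centerline(trajectory, image_points, weight=0.7):
    """Blend image-derived centerline points into the trajectory model.

    trajectory   : (N, 3) approximate centerline from GPS/INS
    image_points : dict {vertex index: (x, y, z) point measured from imagery}
    weight       : trust given to the image measurement at observed vertices
    """
    refined = trajectory.copy()
    for i, p in image_points.items():
        refined[i] = (1 - weight) * trajectory[i] + weight * np.asarray(p)
    return refined

trajectory = np.column_stack([np.linspace(0.0, 100.0, 11),
                              np.zeros(11), np.zeros(11)])
observations = {3: (30.0, 0.4, 0.05), 7: (70.0, 0.6, 0.02)}
print(refine_centerline(trajectory, observations))
```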
4.3 Object Detection and Recognition
Without sufficient knowledge or image constraints about the objects, the results of automatic object recognition can never be reliable. Although there is an extensive literature covering traffic sign detection and recognition, many problems remain unsolved. He and Novak (1993) published their results on the detection of mile markers from mobile mapping images, with the focus placed on the recognition of numbers appearing on mile-marker images. Geiselmann and Hahn (1994) developed a method for the identification and location of stop signs. First, mathematical morphology-based operations were applied to detect the region of interest; then affine-invariant features were extracted to identify the simple shapes of objects. Many object recognition methods employ a color segmentation approach (Janssen et al., 1993; Priese et al., 1994; Tong et al., 1999). However, the segmentation results may be affected by lighting conditions, shadows, weather, sign painting, etc. The robustness of these approaches is still a major concern.
A semi-automatic object detection scheme presents an alternative, given the complexity of automatic object recognition. The basic principle of this scheme is that the human operator is responsible for object recognition: after an object is recognized by the operator, automatic detection and precise location of the object are then conducted by the system (Tao and Lin, 1994). Tao (1999b) developed an approach to extract and locate vertical line features from mobile image sequences. In this approach, a map database was used to predict the approximate location of the object. A set of algorithms, such as line grouping, feature correspondence and line reconstruction, was then used to detect the presence of the object and locate it precisely in the image.
In addition to the work mentioned above, there is interesting research addressing generic road extraction from image sequences. In fact, automatic road extraction has been a main topic in robot navigation and autonomous vehicle navigation. Interested readers may refer to the literature (Waxman et al., 1987; Turk et al., 1988; Ma et al., 1999; Habib, 2000). Automatic road width measurement has been investigated by Gajdamowicz (1999).
5 Applications of Mobile Mapping Systems
Mobile mapping is an innovative concept. With a mapping system installed on a mobile platform, e.g., a van or helicopter, many costly field surveys can be replaced. Its potential applications appear to be limitless. Typical applications include:
Mapping of roads and railways for asset management and engineering planning
Mapping of roads and railways is perhaps the main application market. Mobile mapping systems can deliver maps of road features at sub-meter accuracy. Such maps can be used by transportation departments for road asset management, transportation planning and engineering. Engineers are then able to access road images and attributes for any engineering purpose without going to the field. The typical end products of a mobile mapping project are:
a) captured digital images of roads and pavements
b) road centerline maps derived from stereo images
c) positions of visible objects, such as traffic signs, manholes, guardrails, bridges, culverts, utility poles, buildings, trees, and road crossings, etc.
d) road-related feature information, such as mile marker positions, distance attributes, road segments, and road attributes
Utility mapping and inventory
These maps are mainly used by utility and telecommunication companies. The accuracy requirement may not be as strict as for land surveying maps, but more frequent updating is required. Utility features such as transformers, power lines, telephone lines and street light poles need to be geocoded and linked to a road centerline based map database. Normally, the heights or the offsets of utility poles and cable attachments need to be measured. This information helps companies with the design, planning and maintenance of electric utilities and phone or cable routing.
Infrastructure mapping for emergency response
With the wide implementation of E-911 services for emergency response in North America, positions and attributes of buildings, important landmarks, telephone booths, etc. are required to establish emergency response databases. Such databases help decision makers to lay out response routes and dispatch emergency vehicles. The images collected can be used to assist appraisal, evaluate conditions and prepare for any special treatment.
It is worth mentioning that ITS presents a huge potential market for mobile mapping applications. It is projected that $209 billion will be invested in ITS between now and the year 2011 in the form of consumer products and services. The full-scale deployment of mobile mapping technology will take place for ITS data acquisition, database updating and information management.
6 Concluding Remarks
Mobile mapping represents a significant advance in multi-sensor integrated digital mapping. It provides a new avenue towards rapid and cost-effective collection of high-quality road related