|
Records |
|
Author |
Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla |
|
|
Title |
Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Sensors Journal |
Abbreviated Journal |
|
|
|
Volume |
Vol. 16 |
Issue |
|
Pages |
pp. 1-15 |
|
|
Keywords |
image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform |
|
|
Abstract |
This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR). |
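A minimal sketch of the kind of DWT fusion setup this study compares, assuming PyWavelets and a pair of registered, same-size grayscale arrays; the wavelet family, decomposition depth, and fusion rule below are illustrative choices, not the configuration the paper identifies as best.

```python
import numpy as np
import pywt

def dwt_fuse(visible, infrared, wavelet="db4", levels=3):
    """Fuse two registered grayscale images with one simple DWT rule:
    average the approximation band, keep the max-magnitude detail
    coefficients. One of many possible setups, not the paper's best one."""
    cv = pywt.wavedec2(visible.astype(float), wavelet, level=levels)
    ci = pywt.wavedec2(infrared.astype(float), wavelet, level=levels)
    fused = [(cv[0] + ci[0]) / 2.0]                    # approximation band
    for dv, di in zip(cv[1:], ci[1:]):                 # detail bands, per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dv, di)))
    return pywt.waverec2(fused, wavelet)
```

Swapping the per-band combination rules (average, max, weighted) and the decomposition parameters yields the family of setups that this kind of comparative study scores against fusion evaluation metrics.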
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
47 |
|
Permanent link to this record |
|
|
|
|
Author |
Angel D. Sappa; Cristhian A. Aguilera; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla; Ricardo Toledo |
|
|
Title |
Monocular visual odometry: a cross-spectral image fusion based approach |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Robotics and Autonomous Systems Journal |
Abbreviated Journal |
|
|
|
Volume |
Vol. 86 |
Issue |
|
Pages |
pp. 26-36 |
|
|
Keywords |
Monocular visual odometry; LWIR-RGB cross-spectral imaging; image fusion |
|
|
Abstract |
This manuscript evaluates the use of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with the monocular visible and infrared spectra are also provided, showing the advantages of the proposed scheme. |
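As a rough illustration of how a mutual-information score can drive the choice of a fusion setup: the histogram-based MI below is a generic formulation, and the fuse callable and candidate setups are hypothetical placeholders (e.g. a DWT routine like the sketch above); the paper's actual metric and search procedure are not reproduced here.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram-based mutual information between two same-size images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def pick_best_setup(visible, lwir, setups, fuse):
    """Score each candidate fusion setup by the information the fused image
    shares with both spectra and return the highest-scoring setup."""
    def score(cfg):
        fused = fuse(visible, lwir, **cfg)
        return mutual_information(fused, visible) + mutual_information(fused, lwir)
    return max(setups, key=score)
```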
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
54 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias |
|
|
Title |
Scene representations for autonomous driving: an approach based on polygonal primitives |
Type |
Conference Article |
|
Year |
2015 |
Publication |
Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 |
Abbreviated Journal |
|
|
|
Volume |
417 |
Issue |
|
Pages |
503-515 |
|
|
Keywords |
Scene reconstruction, Point cloud, Autonomous vehicles |
|
|
Abstract |
In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques. |
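A toy illustration of one building block behind such macro scale polygonal primitives, assuming numpy: a RANSAC plane fit over a point cloud. The paper's full pipeline (polygon extraction and large-scale primitive management) is not reproduced here; repeating this fit on the remaining points and taking the boundary of each inlier set is one simple way to obtain polygon-like primitives.

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.05, rng=None):
    """Fit one dominant plane to an (N, 3) point cloud with RANSAC and
    return (normal, d, inlier_mask) for the model n . p + d = 0."""
    rng = np.random.default_rng() if rng is None else rng
    best_mask, best_model = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                          # degenerate (collinear) sample
        n = n / norm
        d = -np.dot(n, p0)
        mask = np.abs(points @ n + d) < tol   # distance of every point to the plane
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask
```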
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
Second Iberian Robotics Conference |
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
45 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira |
|
|
Title |
Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Robotics and Autonomous Systems Journal |
Abbreviated Journal |
|
|
|
Volume |
Vol. 83 |
Issue |
|
Pages |
pp. 312-325 |
|
|
Keywords |
Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives |
|
|
Abstract |
When an autonomous vehicle is traveling through some scenario, it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial to create and update over time a representation of the environment observed by the vehicle. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques. |
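A hypothetical sketch of what an incremental update of planar primitives could look like, assuming each primitive is a (normal, offset, weight) tuple; the thresholds and the merge rule are illustrative assumptions standing in for the update mechanisms proposed in the paper.

```python
import numpy as np

def update_primitives(primitives, new_plane, ang_tol_deg=10.0, dist_tol=0.2):
    """Merge a newly detected plane (normal, offset, weight) into the list of
    existing primitives when its orientation and offset are close enough to
    one of them, otherwise append it as a new primitive."""
    n_new, d_new, w_new = new_plane
    cos_tol = np.cos(np.radians(ang_tol_deg))
    for i, (n, d, w) in enumerate(primitives):
        if np.dot(n, n_new) >= cos_tol and abs(d - d_new) <= dist_tol:
            n_avg = w * n + w_new * n_new
            n_avg /= np.linalg.norm(n_avg)           # weighted running merge
            primitives[i] = (n_avg,
                             (w * d + w_new * d_new) / (w + w_new),
                             w + w_new)
            return primitives
    primitives.append(new_plane)
    return primitives
```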
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
49 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira |
|
|
Title |
Incremental Texture Mapping for Autonomous Driving |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Robotics and Autonomous Systems Journal |
Abbreviated Journal |
|
|
|
Volume |
Vol. 84 |
Issue |
|
Pages |
pp. 113-128 |
|
|
Keywords |
Scene reconstruction, Autonomous driving, Texture mapping |
|
|
Abstract |
Autonomous vehicles have a large number of on-board sensors, not only to provide coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision-based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures. |
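One small ingredient of texture mapping, sketched under simple pinhole-camera assumptions with numpy: projecting mesh vertices into a camera image to obtain per-vertex texture coordinates. The constrained Delaunay triangulation and the mesh-update operations described in the abstract are not covered by this sketch.

```python
import numpy as np

def vertex_uvs(vertices, K, R, t, width, height):
    """Project (N, 3) mesh vertices into a pinhole camera (intrinsics K,
    pose R, t mapping world -> camera) and return normalized (u, v) texture
    coordinates plus a mask of vertices that land inside the image and in
    front of the camera."""
    cam = R @ vertices.T + t.reshape(3, 1)             # world -> camera frame
    in_front = cam[2] > 1e-6
    pix = K @ cam
    pix = pix[:2] / np.where(in_front, pix[2], 1.0)    # perspective divide
    u, v = pix[0], pix[1]
    visible = in_front & (u >= 0) & (u < width) & (v >= 0) & (v < height)
    uv = np.stack([u / width, 1.0 - v / height], axis=1)  # flip v for texture space
    return uv, visible
```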
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
50 |
|
Permanent link to this record |
|
|
|
|
Author |
Vítor Santos; Angel D. Sappa; Miguel Oliveira |
|
|
Title |
Special Issue on Autonomous Driving and Driver Assistance Systems |
Type |
Journal Article |
|
Year |
2017 |
Publication |
Robotics and Autonomous Systems Journal |
Abbreviated Journal |
|
|
|
Volume |
Vol. 91 |
Issue |
|
Pages |
pp. 208-209 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
gtsi @ user @ |
Serial |
65 |
|
Permanent link to this record |