2019
Vítor Santos, Angel D. Sappa, Miguel Oliveira, & Arturo de la Escalera. (2019). Special Issue on Autonomous Driving and Driver Assistance Systems. Robotics and Autonomous Systems, 121.
2018
Cristhian A. Aguilera, Cristhian Aguilera, & Angel D. Sappa. (2018). Melamine faced panels defect classification beyond the visible spectrum. Sensors, 18.
Abstract: In this work, we explore the use of images from different spectral bands to classify defects in melamine faced panels that may appear during the production process. Through experimental evaluation, we assess the use of images from the visible (VS), near-infrared (NIR), and long wavelength infrared (LWIR) bands to classify the defects using a feature descriptor learning approach together with a support vector machine classifier. Two descriptors were evaluated: Extended Local Binary Patterns (E-LBP) and SURF with a Bag of Words (BoW) representation. The evaluation was carried out on an image set obtained during this work, which contains five defect categories that currently occur in the industry. Results show that using images from beyond the visible spectrum helps to improve classification performance in contrast with a single visible-spectrum solution.
Xavier Soria, Angel D. Sappa, & Riad Hammoud. (2018). Wide-Band Color Imagery Restoration for RGB-NIR Single Sensor Image. Sensors 2018, 18(7), 2059.
Abstract: Multi-spectral RGB-NIR sensors have become ubiquitous in recent years. These sensors allow the visible and near-infrared spectral bands of a given scene to be captured at the same time. With such cameras, the acquired imagery has a compromised RGB color representation due to near-infrared bands (700–1100 nm) cross-talking with the visible bands (400–700 nm). This paper proposes two deep learning-based architectures to recover the full RGB color images, thus removing the NIR information from the visible bands. The proposed approaches directly restore the high-resolution RGB image by means of convolutional neural networks. They are evaluated with several outdoor images; both architectures reach a similar performance when evaluated in different scenarios and using different similarity metrics. Both of them improve on state-of-the-art approaches.
2017
Cristhian A. Aguilera, Angel D. Sappa, & Ricardo Toledo. (2017). Cross-Spectral Local Descriptors via Quadruplet Network. Sensors, 17, 873.
Juan A. Carvajal, Dennis G. Romero, & Angel D. Sappa. (2017). Fine-tuning deep convolutional networks for lepidopterous genus recognition. Lecture Notes in Computer Science.
Vítor Santos, Angel D. Sappa, & Miguel Oliveira. (2017). Special Issue on Autonomous Driving and Driver Assistance Systems. Robotics and Autonomous Systems.
2016
Angel D. Sappa, Juan A. Carvajal, Cristhian A. Aguilera, Miguel Oliveira, Dennis G. Romero, & Boris X. Vintimilla. (2016). Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study. Sensors, 16, 1–15.
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most of the approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR).
Angel D. Sappa, Cristhian A. Aguilera, Juan A. Carvajal Ayala, Miguel Oliveira, Dennis Romero, Boris X. Vintimilla, et al. (2016). Monocular visual odometry: a cross-spectral image fusion based approach. Robotics and Autonomous Systems, 86, 26–36.
Abstract: This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with monocular-visible/infrared spectra are also provided, showing the advantages of the proposed scheme.
Miguel Oliveira, Vítor Santos, Angel D. Sappa, Paulo Dias, & A. Paulo Moreira. (2016). Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives. Robotics and Autonomous Systems, 83, 312–325.
Abstract: When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
Miguel Oliveira, Vítor Santos, Angel D. Sappa, Paulo Dias, & A. Paulo Moreira. (2016). Incremental Texture Mapping for Autonomous Driving. Robotics and Autonomous Systems, 84, 113–128.
Abstract: Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures.