Records
Author Cristhian A. Aguilera; Cristhian Aguilera; Angel D. Sappa
Title Melamine faced panels defect classification beyond the visible spectrum Type Journal Article
Year 2018 Publication Sensors Abbreviated Journal
Volume Vol. 18 Issue 11 Pages
Keywords
Abstract In this work, we explore the use of images from different spectral bands to classify defects in melamine faced panels, which can appear during the production process. Through experimental evaluation, we assess the use of images from the visible (VS), near-infrared (NIR), and long wavelength infrared (LWIR) spectra to classify the defects using a feature descriptor learning approach together with a support vector machine classifier. Two descriptors were evaluated: Extended Local Binary Patterns (E-LBP) and SURF using a Bag of Words (BoW) representation. The evaluation was carried out with an image set obtained during this work, which contains five different defect categories that currently occur in the industry. Results show that using images from beyond the visible spectrum helps to improve classification performance in contrast with a single visible-spectrum solution.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 89
 

 
Author Juan A. Carvajal; Dennis G. Romero; Angel D. Sappa
Title Fine-tuning deep convolutional networks for lepidopterous genus recognition Type Journal Article
Year 2017 Publication Lecture Notes in Computer Science Abbreviated Journal
Volume Vol. 10125 LNCS Issue Pages pp. 467-475
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 63
 

 
Author Cristhian A. Aguilera; Angel D. Sappa; Ricardo Toledo
Title Cross-Spectral Local Descriptors via Quadruplet Network Type Journal Article
Year 2017 Publication Sensors Journal Abbreviated Journal
Volume Vol. 17 Issue Pages pp. 873
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 64
 

 
Author Victor Santos; Angel D. Sappa; Miguel Oliveira
Title Special Issue on Autonomous Driving and Driver Assistance Systems Type Journal Article
Year 2017 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume Vol. 91 Issue Pages pp. 208-209
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 65
 

 
Author Xavier Soria; Angel D. Sappa; Riad Hammoud
Title Wide-Band Color Imagery Restoration for RGB-NIR Single Sensor Images Type Journal Article
Year 2018 Publication Sensors Abbreviated Journal
Volume Vol. 18 Issue 7 Pages 2059
Keywords
Abstract Multi-spectral RGB-NIR sensors have become ubiquitous in recent years. These sensors allow the visible and near-infrared spectral bands of a given scene to be captured at the same time. With such cameras, the acquired imagery has a compromised RGB color representation due to the near-infrared bands (700–1100 nm) cross-talking with the visible bands (400–700 nm). This paper proposes two deep learning-based architectures to recover the full RGB color images, thus removing the NIR information from the visible bands. The proposed approaches directly restore the high-resolution RGB image by means of convolutional neural networks. They are evaluated with several outdoor images; both architectures reach a similar performance when evaluated in different scenarios and using different similarity metrics, and both improve on state-of-the-art approaches.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 96
 

 
Author P. Ricaurte; C. Chilán; Cristhian A. Aguilera; Boris X. Vintimilla; Angel D. Sappa
Title Feature Point Descriptors: Infrared and Visible Spectra Type Journal Article
Year 2014 Publication Sensors Journal Abbreviated Journal
Volume Vol. 14 Issue Pages pp. 3690-3701
Keywords cross-spectral imaging; feature point descriptors
Abstract This manuscript evaluates the behavior of classical feature point descriptors when they are used in images from long-wave infrared spectral band and compare them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise are analyzed using a state of the art framework. Experimental results using a cross-spectral outdoor image data set are presented and conclusions from these experiments are given.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 28
 

 
Author Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla
Title Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study Type Journal Article
Year 2016 Publication Sensors Journal Abbreviated Journal
Volume Vol. 16 Issue Pages pp. 1-15
Keywords image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform
Abstract This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated here result from the combination of different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long-Wave InfraRed (LWIR).
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 47
 

 
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume Vol. 83 Issue Pages pp. 312-325
Keywords Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives
Abstract When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 49
 

 
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title Incremental Texture Mapping for Autonomous Driving Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume Vol. 84 Issue Pages pp. 113-128
Keywords Scene reconstruction; Autonomous driving; Texture mapping
Abstract Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 50
 

 
Author Angel D. Sappa; Cristhian A. Aguilera; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla; Ricardo Toledo
Title Monocular visual odometry: a cross-spectral image fusion based approach Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume Vol. 86 Issue Pages pp. 26-36
Keywords Monocular visual odometry; LWIR-RGB cross-spectral imaging; Image fusion
Abstract This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with monocular visible/infrared spectra are also provided, showing the advantages of the proposed scheme.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 54