Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title Incremental Texture Mapping for Autonomous Driving Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume Vol. 84 Issue Pages pp. 113-128
Keywords Scene reconstruction, Autonomous driving, Texture mapping
Abstract Autonomous vehicles carry a large number of on-board sensors, not only to provide coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm capable of mapping texture collected from vision-based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad-quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine-quality textures.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 50
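The core idea in the record above, attaching texture coordinates to a triangulated scene by projecting mesh vertices into a camera, can be sketched as follows. This is a minimal illustration under assumed conditions (a single pinhole camera at the origin with a hypothetical intrinsic matrix K, and a plain 2.5D Delaunay triangulation), not the paper's constrained-triangulation pipeline.

```python
import numpy as np
from scipy.spatial import Delaunay

def texture_map(points_xyz, K):
    # Build a 2.5D mesh by triangulating the x-y projection of the 3D points.
    faces = Delaunay(points_xyz[:, :2]).simplices
    # Pinhole projection of each vertex: rows are K @ p for every point p.
    proj = points_xyz @ K.T
    # Normalize homogeneous coordinates to per-vertex (u, v) texture coords.
    uv = proj[:, :2] / proj[:, 2:3]
    return faces, uv

# Hypothetical example: four 3D points and an assumed intrinsic matrix.
pts = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0],
                [0.0, 1.0, 2.0], [1.0, 1.0, 3.0]])
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
faces, uv = texture_map(pts, K)  # each face indexes 3 vertices with uv coords
```

A real system would additionally reject triangles seen at grazing angles and resolve overlaps between cameras, which is what the paper's mesh-update operations address.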
 

 
Author Angel D. Sappa; Cristhian A. Aguilera; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla; Ricardo Toledo
Title Monocular visual odometry: a cross-spectral image fusion based approach Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume Vol. 86 Issue Pages pp. 26-36
Keywords Monocular visual odometry LWIR-RGB cross-spectral imaging Image fusion
Abstract This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with monocular-visible/infrared spectra are also provided, showing the advantages of the proposed scheme.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 54
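A DWT fusion scheme like the one described above can be sketched with a one-level Haar transform in plain NumPy. The fusion rules here (average the approximation band, keep the larger-magnitude detail coefficients) are assumed for illustration; the paper instead searches for the best setup empirically with a mutual-information metric.

```python
import numpy as np

def haar_fwd(x):
    # One-level 2D Haar transform: approximation plus three detail bands.
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def haar_inv(a, h, v, d):
    # Exact inverse of haar_fwd.
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(img1, img2):
    c1, c2 = haar_fwd(img1), haar_fwd(img2)
    a = (c1[0] + c2[0]) / 2  # average the approximation bands
    details = [np.where(np.abs(x) >= np.abs(y), x, y)  # max-abs detail rule
               for x, y in zip(c1[1:], c2[1:])]
    return haar_inv(a, *details)

# Stand-ins for a registered visible/infrared pair (synthetic, for illustration).
vis = np.zeros((8, 8)); vis[:, :4] = 1.0
ir = np.zeros((8, 8)); ir[:4, :] = 1.0
fused = fuse(vis, ir)
```

The max-abs detail rule preserves the strongest edges from either spectrum, which is why wavelet fusion can carry infrared structure into a visible-dominated result.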
 

 
Author Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla; Riad Hammoud
Title A Novel Domain Transfer-Based Approach for Unsupervised Thermal Image Super-Resolution Type Journal Article
Year 2022 Publication Sensors Abbreviated Journal Sensors
Volume Vol. 22 Issue 6 Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 170
 

 
Author Cristhian A. Aguilera; Cristhian Aguilera; Cristóbal A. Navarro; Angel D. Sappa
Title Fast CNN Stereo Depth Estimation through Embedded GPU Devices Type Journal Article
Year 2020 Publication Sensors Abbreviated Journal
Volume Vol. 2020-June Issue 11 Pages pp. 1-13
Keywords stereo matching; deep learning; embedded GPU
Abstract Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphic processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the current potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net like architecture for postprocessing the cost-volume, instead of a typical sequence of 3D convolutions, drastically augmenting the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1424-8220 ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 132
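The cost-volume that the network in the record above postprocesses can be illustrated with its classical ancestor: a sum-of-absolute-differences (SAD) cost volume read out by winner-takes-all. This is a hypothetical minimal sketch, not the evaluated CNN models.

```python
import numpy as np

def sad_cost_volume(left, right, max_disp):
    # cost[d, y, x] = |left[y, x] - right[y, x - d]|; invalid shifts stay inf.
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return cost

def wta_disparity(cost):
    # Winner-takes-all: pick the cheapest disparity independently per pixel.
    return cost.argmin(axis=0)
```

CNN models replace the per-pixel absolute difference with learned feature similarities, and the paper's contribution is to replace the 3D-convolution stack that refines this volume with a faster U-Net-like postprocessing network.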
 

 
Author P. Ricaurte; C. Chilán; Cristhian A. Aguilera; Boris X. Vintimilla; Angel D. Sappa
Title Feature Point Descriptors: Infrared and Visible Spectra Type Journal Article
Year 2014 Publication Sensors Journal Abbreviated Journal
Volume Vol. 14 Issue Pages pp. 3690-3701
Keywords cross-spectral imaging; feature point descriptors
Abstract This manuscript evaluates the behavior of classical feature point descriptors when they are used in images from the long-wave infrared spectral band and compares them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results using a cross-spectral outdoor image data set are presented and conclusions from these experiments are given.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 28
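The robustness evaluation described in the record above can be sketched in toy form: treat flattened image patches as descriptors, perturb them with additive Gaussian noise, and count how often each noisy descriptor still finds its clean counterpart as nearest neighbor. The descriptor, noise level, and matching criterion here are all assumptions for illustration; the paper benchmarks real descriptors (SIFT-style, binary, etc.) under a full set of transformations.

```python
import numpy as np

def matching_accuracy(clean, noisy):
    # Pairwise L2 distances: rows index noisy descriptors, columns clean ones.
    dists = np.linalg.norm(noisy[:, None, :] - clean[None, :, :], axis=2)
    # A match is correct when descriptor i's nearest clean neighbor is i.
    return float(np.mean(dists.argmin(axis=1) == np.arange(len(clean))))

rng = np.random.default_rng(1)
clean = rng.random((50, 64))  # 50 stand-in descriptors of dimension 64
noisy = clean + rng.normal(0.0, 0.01, clean.shape)  # mild additive noise
acc = matching_accuracy(clean, noisy)  # fraction of correctly matched pairs
```

Sweeping the noise standard deviation (or applying rotation, scaling, and blur to the underlying patches) and plotting the accuracy curve per descriptor is the shape of the comparison the paper performs across spectra.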
 

 
Author Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla
Title Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study Type Journal Article
Year 2016 Publication Sensors Journal Abbreviated Journal
Volume Vol. 16 Issue Pages pp. 1-15
Keywords image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform
Abstract This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most of the approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR).
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 47
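One of the quantitative metrics used to correlate fusion setups with performance, mutual information, can be estimated from a joint histogram of two images. The 32-bin count below is an assumption; histogram-based MI estimates are bin-sensitive, which is part of why metric-independent conclusions are hard to draw.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Joint intensity histogram normalized to a joint probability table.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0  # restrict to populated cells so log() is defined
    # MI is the KL divergence between the joint and the product of marginals.
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# An image shares maximal information with itself, little with an unrelated one.
rng = np.random.default_rng(0)
x = rng.random((64, 64))
y = rng.random((64, 64))
mi_self, mi_indep = mutual_information(x, x), mutual_information(x, y)
```

Scoring a fused result against each source image with such a metric, and correlating the scores with decomposition/merging setups, is the kind of analysis the paper carries out at scale.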
 

 
Author Nayeth I. Solorzano; L. C. H.; Leslie del R. Lima; Dennys F. Paillacho; Jonathan S. Paillacho
Title Visual Metrics for Educational Videogames Linked to Socially Assistive Robots in an Inclusive Education Framework Type Conference Article
Year 2022 Publication Smart Innovation, Systems and Technologies. International Conference in Information Technology & Education (ICITED 21), July 15-17 Abbreviated Journal
Volume 256 Issue Pages 119-132
Keywords
Abstract In gamification, the development of “visual metrics for educational video games linked to social assistance robots in the framework of inclusive education” seeks to provide support, not only to regular children but also to children with specific psychosocial disabilities, such as those diagnosed with autism spectrum disorder (ASD). However, personalizing each child's experiences represents a limitation, especially for those with atypical behaviors. 'LOLY,' a social assistance robot, works together with mobile applications associated with the family of educational video game series called 'MIDI-AM,' forming a social robotic platform. This platform offers the user curricular digital content to reinforce the teaching-learning processes and motivate regular children and those with ASD. In the present study, technical, programmatic experiments and focus groups were carried out, using open-source facial recognition algorithms to monitor and evaluate the degree of user attention throughout the interaction. The objective is to evaluate the management of a social robot linked to educational video games through established metrics, which allow monitoring the user's facial expressions during its use, and to define a scenario that ensures consistency in the results for its applicability in therapies and reinforcement in the teaching process, mainly adaptable for inclusive early childhood education.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 180
 

 
Author Michael Teutsch; Angel D. Sappa; Riad Hammoud
Title Computer Vision in the Infrared Spectrum: Challenges and Approaches Type Journal Article
Year 2021 Publication Synthesis Lectures on Computer Vision Abbreviated Journal
Volume Vol. 10 Issue No. 2 Pages pp. 138
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 166
 

 
Author Jorge L. Charco; Angel D. Sappa; Boris X. Vintimilla; Henry O. Velesaca
Title Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem Type Conference Article
Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 Abbreviated Journal
Volume 4 Issue Pages 498-505
Keywords Relative Camera Pose Estimation, Siamese Architecture, Synthetic Data, Deep Learning, Multi-View Environments, Extrinsic Camera Parameters.
Abstract This paper presents a novel Siamese network architecture, as a variant of ResNet-50, to estimate the relative camera pose in multi-view environments. In order to improve the performance of the proposed model, a transfer learning strategy, based on synthetic images obtained from a virtual world, is considered. The transfer learning consists of first training the network using pairs of images from the virtual-world scenario considering different conditions (i.e., weather, illumination, objects, buildings, etc.); then, the learned weights of the network are transferred to the real case, where images from real-world scenarios are considered. Experimental results and comparisons with the state of the art show both improvements in the relative pose estimation accuracy using the proposed model, as well as further improvements when the transfer learning strategy (synthetic-world data – transfer learning – real-world data) is considered to tackle the limitation on the training due to the reduced number of pairs of real images in most of the public data sets.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-989758402-2 Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 120
 

 
Author Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla
Title Thermal Image Super-Resolution: a Novel Architecture and Dataset Type Conference Article
Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 Abbreviated Journal
Volume 4 Issue Pages 111-119
Keywords Thermal images, Far Infrared, Dataset, Super-Resolution.
Abstract This paper proposes a novel CycleGAN architecture for thermal image super-resolution, together with a large dataset consisting of thermal images at different resolutions. The dataset has been acquired using three thermal cameras at different resolutions, which acquire images from the same scenario at the same time. The thermal cameras are mounted in a rig, minimizing the baseline distance in order to ease the registration problem. The proposed architecture is based on ResNet6 as a generator and PatchGAN as a discriminator. The novelty of the proposed unsupervised super-resolution training (CycleGAN) is possible due to the existence of the aforementioned thermal images, i.e., images of the same scenario at different resolutions. The proposed approach is evaluated on the dataset and compared with classical bicubic interpolation. The dataset and the network are available.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-989758402-2 Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 121