Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title Incremental Texture Mapping for Autonomous Driving Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume Vol. 84 Issue Pages pp. 113-128
Keywords Scene reconstruction, Autonomous driving, Texture mapping
Abstract Autonomous vehicles carry a large number of on-board sensors, not only to provide coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm capable of mapping texture collected from vision-based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh, which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad-quality textures and ensures that there are no gaps in the texture. Results show that the algorithm is capable of producing fine-quality textures.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 50
Permanent link to this record
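The mesh-update idea summarized in the abstract above can be illustrated with a small sketch. This is a hypothetical illustration, not the authors' implementation: a plain Delaunay triangulation stands in for the constrained one, and each triangle is simply assigned the camera that views it most frontally, a common proxy for avoiding bad-quality textures. The function name and inputs are assumptions.

```python
# Hypothetical sketch, not the authors' code: a plain Delaunay triangulation
# stands in for the constrained one, and each triangle is assigned the camera
# that views it most frontally, a simple proxy for texture quality.
import numpy as np
from scipy.spatial import Delaunay

def triangulate_and_pick_camera(points_2d, tri_normals, cam_dirs):
    """points_2d: (N, 2) projected mesh vertices; tri_normals: (M, 3) unit
    normals, one per triangle (supplied by the caller here, for brevity);
    cam_dirs: (C, 3) unit camera viewing directions. Returns the
    triangulation and, per triangle, the index of the camera whose viewing
    direction is most aligned with the triangle normal."""
    tri = Delaunay(points_2d)
    # score[t, c] = |normal_t . cam_c|; larger means a more frontal view
    scores = np.abs(tri_normals @ cam_dirs.T)
    return tri, scores.argmax(axis=1)
```

Selecting the most frontal camera per triangle is one simple way to avoid the oblique, low-resolution textures the abstract refers to as "bad quality".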
 

 
Author Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic
Title Multi-sensor Fusion Module in a Fault Tolerant Perception System for Autonomous Vehicles Type Journal Article
Year 2016 Publication Journal of Automation and Control Engineering (JOACE) Abbreviated Journal
Volume Vol. 4 Issue Pages pp. 430-436
Keywords Fault Tolerance, Data Fusion, Multi-sensor Fusion, Autonomous Vehicles, Perception System
Abstract Driverless vehicles are currently being tested on public roads in order to examine their ability to perform safely and reliably in real-world situations. However, the long-term reliable operation of a vehicle’s diverse sensors and the effects of potential sensor faults on the vehicle system have not yet been tested. This paper proposes a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented simulating faults by introducing displacements in the sensor information from the KITTI dataset.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 51
Permanent link to this record
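The fault-minimizing fusion idea can be illustrated with a toy consensus-weighted average. This is an illustrative sketch only; the paper's federated architecture is not reproduced here. A reading displaced away from the consensus, like the simulated KITTI displacement faults mentioned in the abstract, receives a near-zero weight.

```python
# Illustrative sketch (assumed, not the paper's federated architecture):
# fuse redundant scalar estimates while discounting an outlying, possibly
# faulty, sensor. A reading displaced away from the consensus receives a
# near-zero weight, so its influence on the fused value is minimized.
import numpy as np

def robust_fuse(readings, scale=1.0):
    """readings: one scalar estimate per sensor. Each sensor is weighted by
    a Gaussian of its distance to the median consensus, then averaged."""
    readings = np.asarray(readings, dtype=float)
    consensus = np.median(readings)
    weights = np.exp(-((readings - consensus) / scale) ** 2)
    return float((weights * readings).sum() / weights.sum())
```

With readings [10.0, 10.2, 9.8, 50.0] the displaced 50.0 contributes almost nothing, so the fused value stays near 10, whereas a plain mean would be 20.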
 

 
Author Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic
Title A Fault Tolerant Perception System for Autonomous Vehicles Type Conference Article
Year 2016 Publication 35th Chinese Control Conference (CCC 2016), Chengdu, China Abbreviated Journal
Volume Issue Pages 1-6
Keywords Fault Tolerant Perception, Sensor Data Fusion, Fault Tolerance, Autonomous Vehicles, Federated Architecture
Abstract Driverless vehicles are currently being tested on public roads in order to examine their ability to perform safely and reliably in real-world situations. However, the long-term reliable operation of a vehicle’s diverse sensors and the effects of potential sensor faults on the vehicle system have not yet been tested. This paper proposes a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented simulating faults by introducing displacements in the sensor information from the KITTI dataset.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 52
Permanent link to this record
 

 
Author Angel D. Sappa; Cristhian A. Aguilera; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla; Ricardo Toledo
Title Monocular visual odometry: a cross-spectral image fusion based approach Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume Vol. 86 Issue Pages pp. 26-36
Keywords Monocular visual odometry; LWIR-RGB cross-spectral imaging; Image fusion
Abstract This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with monocular-visible/infrared spectra are also provided, showing the advantages of the proposed scheme.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 54
Permanent link to this record
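The DWT-fusion step can be sketched with a one-level Haar transform. The details here are assumed; the paper's mutual-information-based selection of the wavelet setup is not reproduced. Approximation (LL) bands are averaged, while each detail band keeps the coefficient with the larger magnitude from either image.

```python
# Minimal sketch of the DWT-fusion idea (assumed details; the paper selects
# the wavelet setup empirically via a mutual-information metric, which is
# not reproduced). One-level orthonormal Haar transform: LL bands are
# averaged, detail bands keep the larger-magnitude coefficient.
import numpy as np

def _blocks(x):
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)

def haar2(x):
    """One-level 2D Haar analysis: returns (LL, LH, HL, HH) bands."""
    p = _blocks(x)
    a, b, c, d = p[..., 0, 0], p[..., 0, 1], p[..., 1, 0], p[..., 1, 1]
    return (a+b+c+d)/2, (a+b-c-d)/2, (a-b+c-d)/2, (a-b-c+d)/2

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2: reassembles the full-resolution image."""
    a = (ll+lh+hl+hh)/2; b = (ll+lh-hl-hh)/2
    c = (ll-lh+hl-hh)/2; d = (ll-lh-hl+hh)/2
    out = np.empty((ll.shape[0] * 2, ll.shape[1] * 2))
    out[0::2, 0::2], out[0::2, 1::2] = a, b
    out[1::2, 0::2], out[1::2, 1::2] = c, d
    return out

def fuse(img1, img2):
    """Fuse two equal-shape, even-sided 2D images (e.g. visible and LWIR)."""
    c1, c2 = haar2(img1), haar2(img2)
    ll = (c1[0] + c2[0]) / 2                            # average approximations
    details = [np.where(np.abs(u) >= np.abs(v), u, v)   # stronger detail wins
               for u, v in zip(c1[1:], c2[1:])]
    return ihaar2(ll, *details)
```

Fusing an image with itself reconstructs it exactly, a quick sanity check that the analysis/synthesis pair is consistent.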
 

 
Author Cristhian A. Aguilera; Cristhian Aguilera; Cristóbal A. Navarro; Angel D. Sappa
Title Fast CNN Stereo Depth Estimation through Embedded GPU Devices Type Journal Article
Year 2020 Publication Sensors Abbreviated Journal
Volume Vol. 2020-June Issue 11 Pages pp. 1-13
Keywords stereo matching; deep learning; embedded GPU
Abstract Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphic processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net like architecture for postprocessing the cost-volume, instead of a typical sequence of 3D convolutions, drastically augmenting the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 14248220 ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 132
Permanent link to this record
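The cost volume that the proposed U-Net postprocesses can be illustrated with a classical sum-of-absolute-differences version resolved by winner-take-all. This is a didactic stand-in: the evaluated models build the volume from CNN features rather than raw intensities.

```python
# Didactic stand-in for the cost volume the proposed U-Net would postprocess:
# a classical sum-of-absolute-differences cost over candidate disparities,
# resolved by winner-take-all (the evaluated models use CNN features instead).
import numpy as np

def disparity_wta(left, right, max_disp):
    """left, right: 2D grayscale arrays of equal shape. Builds an
    (H, W, max_disp) cost volume and returns the per-pixel argmin."""
    h, w = left.shape
    cost = np.full((h, w, max_disp), np.inf)
    for d in range(max_disp):
        # left pixel x corresponds to right pixel x - d
        cost[:, d:, d] = np.abs(left[:, d:] - right[:, :w - d])
    return cost.argmin(axis=2)
```

The (H, W, max_disp) volume is exactly the data structure whose postprocessing cost, a sequence of 3D convolutions in typical models, the paper replaces with a U-Net like architecture.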
 

 
Author Ángel Morera; Ángel Sánchez; A. Belén Moreno; Angel D. Sappa; José F. Vélez
Title SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities. Type Journal Article
Year 2020 Publication Sensors Abbreviated Journal
Volume Vol. 2020-August Issue 16 Pages pp. 1-23
Keywords object detection; urban outdoor panels; one-stage detectors; Single Shot MultiBox Detector (SSD); You Only Look Once (YOLO); detection metrics; object and scene imaging variabilities
Abstract This work compares Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks for the outdoor advertisement panel detection problem by handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages both in the real world as well as in the virtual one. For example, applications like Google Street View can be used for Internet publicity, and when these ad panels are detected in images, the publicity appearing inside the panels could be replaced by another from a funding company. In our experiments, both SSD and YOLO detectors produced acceptable results under variable sizes of panels, illumination conditions, viewing perspectives, partial occlusion of panels, complex background and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was the almost complete elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside the panels is analyzed after detection. On the other hand, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks, using the same evaluation metrics, is also included.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 14248220 ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 133
Permanent link to this record
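The TP/FP counting mentioned in the abstract is typically based on IoU matching between detections and ground-truth boxes. A minimal sketch follows; these exact counting rules are an assumption, not taken from the paper.

```python
# Hedged sketch of the evaluation side of the comparison: IoU-based matching
# of detections to ground-truth boxes to count True/False Positives. These
# exact counting rules are an assumption, not taken from the paper.
def iou(a, b):
    """Boxes given as (x1, y1, x2, y2). Intersection over union in [0, 1]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def count_tp_fp(dets, gts, thr=0.5):
    """Greedy one-to-one matching; dets assumed sorted by confidence.
    A detection is a TP if its best unmatched ground truth has IoU >= thr."""
    matched, tp = set(), 0
    for d in dets:
        best = max(range(len(gts)), key=lambda i: iou(d, gts[i]), default=None)
        if best is not None and best not in matched and iou(d, gts[best]) >= thr:
            matched.add(best)
            tp += 1
    return tp, len(dets) - tp
```

Under rules of this kind, SSD's near-elimination of FP cases and YOLO's higher TP count, as reported in the abstract, are directly comparable numbers.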
 

 
Author Charco, J.L.; Sappa, A.D.; Vintimilla, B.X.; Velesaca, H.O.
Title Camera pose estimation in multi-view environments: from virtual scenarios to the real world Type Journal Article
Year 2021 Publication Image and Vision Computing (Article number 104182) Abbreviated Journal
Volume Vol. 110 Issue Pages
Keywords Relative camera pose estimation, Domain adaptation, Siamese architecture, Synthetic data, Multi-view environments
Abstract This paper presents a domain adaptation strategy to efficiently train network architectures for estimating the relative camera pose in multi-view scenarios. The network architectures are fed by a pair of simultaneously acquired images; hence, in order to improve the accuracy of the solutions, and due to the lack of large datasets with pairs of overlapped images, a domain adaptation strategy is proposed. The domain adaptation strategy consists of transferring the knowledge learned from synthetic images to real-world scenarios. For this, the networks are first trained using pairs of synthetic images, which are captured at the same time by a pair of cameras in a virtual environment; then, the learned weights of the networks are transferred to the real-world case, where the networks are retrained with a few real images. Different virtual 3D scenarios are generated to evaluate the relationship between the accuracy of the results and the similarity between virtual and real scenarios, considering both the geometry of the objects contained in the scene and the relative pose between the camera and the objects. Experimental results and comparisons are provided, showing that the accuracy of all the evaluated networks for estimating the camera pose improves when the proposed domain adaptation strategy is used, highlighting the importance of the similarity between virtual and real scenarios.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 147
Permanent link to this record
 

 
Author Roberto Jacome Galarza; Miguel-Andrés Realpe-Robalino; Chamba-Eras Luis Antonio; Viñán-Ludeña Marlon Santiago; Sinche-Freire Javier Francisco
Title Computer vision for image understanding: a comprehensive review Type Conference Article
Year 2019 Publication International Conference on Advances in Emerging Trends and Technologies (ICAETT 2019); Quito, Ecuador Abbreviated Journal
Volume Issue Pages 248-259
Keywords
Abstract Computer Vision has its own Turing test: can a machine describe the contents of an image or a video the way a human being would? In this paper, the progress of Deep Learning for image recognition is analyzed in order to answer this question. In recent years, Deep Learning has considerably increased the precision rate of many tasks related to computer vision. Many datasets of labeled images are now available online, which leads to pre-trained models for many computer vision applications. In this work, we gather information on the latest techniques for image understanding and description. We conclude that combining Natural Language Processing (using Recurrent Neural Networks and Long Short-Term Memory) with Image Understanding (using Convolutional Neural Networks) could enable new types of powerful and useful applications in which the computer will be able to answer questions about the content of images and videos. Building datasets of labeled images requires a lot of work, and most datasets are built using crowd work. These new applications have the potential to raise human-machine interaction to new levels of usability and user satisfaction.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 97
Permanent link to this record
 

 
Author Cristhian A. Aguilera; Cristhian Aguilera; Angel D. Sappa
Title Melamine faced panels defect classification beyond the visible spectrum. Type Journal Article
Year 2018 Publication Sensors Abbreviated Journal
Volume Vol. 11 Issue 11 Pages
Keywords
Abstract In this work, we explore the use of images from different spectral bands to classify defects in melamine faced panels, which can appear during the production process. Through an experimental evaluation, we assess the use of images from the visible (VS), near-infrared (NIR), and long wavelength infrared (LWIR) spectra to classify the defects using a feature descriptor learning approach together with a support vector machine classifier. Two descriptors were evaluated: Extended Local Binary Patterns (E-LBP) and SURF using a Bag of Words (BoW) representation. The evaluation was carried out with an image set obtained during this work, which contains five different defect categories that currently occur in the industry. Results show that using images from beyond the visual spectrum helps to improve classification performance in contrast with a single visible spectrum solution.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 89
Permanent link to this record
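A basic local binary pattern operator, the descriptor family that the evaluated E-LBP extends, can be sketched as follows. The extended variant, the BoW encoding, and the SVM classifier used in the paper are not reproduced here.

```python
# Basic 8-neighbour local binary pattern, the descriptor family that E-LBP
# extends (the extended variant, BoW encoding and SVM classifier used in
# the paper are not reproduced here).
import numpy as np

def lbp(image):
    """Returns an 8-bit LBP code for every interior pixel of a 2D array:
    bit k is set when neighbour k is >= the centre value."""
    c = image[1:-1, 1:-1]
    # neighbour offsets, clockwise from the top-left corner
    offs = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c, dtype=np.uint8)
    for k, (dy, dx) in enumerate(offs):
        neigh = image[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= (neigh >= c).astype(np.uint8) << k
    return code
```

Histograms of these codes over image patches form the texture features that, quantized into a Bag of Words, can feed a support vector machine classifier of the kind the abstract describes.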
 

 
Author Juan A. Carvajal; Dennis G. Romero; Angel D. Sappa
Title Fine-tuning deep convolutional networks for lepidopterous genus recognition Type Journal Article
Year 2017 Publication Lecture Notes in Computer Science Abbreviated Journal
Volume Vol. 10125 LNCS Issue Pages pp. 467-475
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 63
Permanent link to this record