Author Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla
Title Thermal Image Super-Resolution: A Novel Architecture and Dataset Type Conference Article
Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 Abbreviated Journal
Volume 4 Issue Pages 111-119
Keywords Thermal images, Far Infrared, Dataset, Super-Resolution.
Abstract This paper proposes a novel CycleGAN architecture for thermal image super-resolution, together with a large dataset consisting of thermal images at different resolutions. The dataset has been acquired using three thermal cameras of different resolutions, which capture images of the same scenario at the same time. The cameras are mounted on a rig whose baseline distance is kept small in order to ease the registration problem. The proposed architecture is based on ResNet6 as the generator and PatchGAN as the discriminator. The novel unsupervised super-resolution training (CycleGAN) is made possible by the aforementioned dataset, which provides images of the same scenario at different resolutions. The proposed approach is evaluated on the dataset and compared with classical bicubic interpolation. The dataset and the network are available.
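A minimal PyTorch sketch of the two building blocks the abstract names, a ResNet6-style generator and a PatchGAN discriminator, assuming single-channel thermal input and common channel widths (not the authors' exact configuration; the usual downsampling/upsampling stages are omitted for brevity):

    import torch
    import torch.nn as nn

    class ResnetBlock(nn.Module):
        """Residual block: two 3x3 convs with instance norm and an identity skip."""
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
                nn.InstanceNorm2d(ch), nn.ReLU(True),
                nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
                nn.InstanceNorm2d(ch))
        def forward(self, x):
            return x + self.body(x)

    def resnet6_generator(ch=64):
        # ResNet6: six residual blocks between a 7x7 input and output conv.
        layers = [nn.ReflectionPad2d(3), nn.Conv2d(1, ch, 7),
                  nn.InstanceNorm2d(ch), nn.ReLU(True)]
        layers += [ResnetBlock(ch) for _ in range(6)]
        layers += [nn.ReflectionPad2d(3), nn.Conv2d(ch, 1, 7), nn.Tanh()]
        return nn.Sequential(*layers)

    def patchgan_discriminator(ch=64):
        # PatchGAN: outputs a map of real/fake scores, one per image patch.
        return nn.Sequential(
            nn.Conv2d(1, ch, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.InstanceNorm2d(ch * 2),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 2, 1, 4, 1, 1))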
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-989758402-2 Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 121
 

 
Author Cristhian A. Aguilera; Cristhian Aguilera; Cristóbal A. Navarro; Angel D. Sappa
Title Fast CNN Stereo Depth Estimation through Embedded GPU Devices Type Journal Article
Year 2020 Publication Sensors Abbreviated Journal
Volume 20 Issue 11 Pages 1-13
Keywords stereo matching; deep learning; embedded GPU
Abstract Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphic processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
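As a sketch of the idea of replacing 3D convolutions with a U-Net-like 2D network over the cost volume, the following PyTorch module treats the D disparity planes as channels and regresses disparity with a soft argmin; the layer sizes and the shallow two-level encoder-decoder are assumptions, not the paper's exact network:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CostVolumeUNet(nn.Module):
        """Refines a (B, D, H, W) matching cost volume with 2D convolutions,
        treating the D disparity planes as channels, then applies a soft
        argmin over disparities to regress a disparity map."""
        def __init__(self, max_disp=64, ch=32):
            super().__init__()
            self.max_disp = max_disp
            self.enc1 = nn.Sequential(nn.Conv2d(max_disp, ch, 3, 1, 1), nn.ReLU(True))
            self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, 2, 1), nn.ReLU(True))
            self.dec1 = nn.Sequential(nn.Conv2d(ch * 2, ch, 3, 1, 1), nn.ReLU(True))
            self.out = nn.Conv2d(ch * 2, max_disp, 3, 1, 1)

        def forward(self, cost):
            e1 = self.enc1(cost)                       # full resolution
            e2 = self.enc2(e1)                         # half resolution
            d1 = F.interpolate(self.dec1(e2), size=e1.shape[2:],
                               mode='bilinear', align_corners=False)
            refined = self.out(torch.cat([e1, d1], dim=1))  # skip connection
            prob = F.softmax(-refined, dim=1)          # soft argmin weights
            disp = torch.arange(self.max_disp, device=cost.device,
                                dtype=prob.dtype).view(1, -1, 1, 1)
            return (prob * disp).sum(dim=1)            # (B, H, W) disparity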
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1424-8220 ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 132
 

 
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias
Title Scene representations for autonomous driving: an approach based on polygonal primitives Type Conference Article
Year 2015 Publication Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 Abbreviated Journal
Volume 417 Issue Pages 503-515
Keywords Scene reconstruction, Point cloud, Autonomous vehicles
Abstract In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro-scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large-scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.
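A minimal sketch of one way to obtain such macro-scale polygonal primitives from a point cloud, using greedy RANSAC plane extraction (Open3D's segment_plane is an assumed dependency here; the paper's own pipeline differs in its details):

    import open3d as o3d  # assumed dependency; any RANSAC plane fitter works

    def extract_planar_primitives(pcd, n_planes=5, dist=0.05, min_pts=100):
        """Greedy RANSAC: repeatedly fit a plane to the point cloud, keep its
        inliers as one large-scale primitive, remove them, and continue."""
        primitives, rest = [], pcd
        for _ in range(n_planes):
            model, inliers = rest.segment_plane(distance_threshold=dist,
                                                ransac_n=3,
                                                num_iterations=1000)
            if len(inliers) < min_pts:   # remainder too small/unstructured
                break
            primitives.append((model, rest.select_by_index(inliers)))
            rest = rest.select_by_index(inliers, invert=True)
        return primitives                # list of (plane params, inlier cloud)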
Address
Corporate Author Thesis
Publisher Springer International Publishing Switzerland 2016 Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference Second Iberian Robotics Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 45
 

 
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title Incremental Texture Mapping for Autonomous Driving Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume 84 Issue Pages 113-128
Keywords Scene reconstruction, Autonomous driving, Texture mapping
Abstract Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures.
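A minimal sketch of the projection step that texture mapping rests on: mesh vertices are projected into a calibrated camera to obtain texture (UV) coordinates. The paper's constrained Delaunay update sequence is not reproduced; the function name and the simple pinhole model are illustrative assumptions.

    import numpy as np

    def vertex_uv(vertices, K, R, t, img_w, img_h):
        """Project mesh vertices (N, 3) into a calibrated camera to obtain
        texture coordinates. K: 3x3 intrinsics; R, t: world-to-camera pose."""
        cam = R @ vertices.T + t.reshape(3, 1)   # (3, N) camera coordinates
        visible = cam[2] > 0                     # keep points in front of camera
        pix = K @ cam
        pix = pix[:2] / pix[2]                   # perspective division
        uv = np.stack([pix[0] / img_w, 1.0 - pix[1] / img_h], axis=1)
        return uv, visible                       # (N, 2) UVs and validity mask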
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 50
 

 
Author Jorge L. Charco; Angel D. Sappa; Boris X. Vintimilla; Henry O. Velesaca
Title Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem Type Conference Article
Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 Abbreviated Journal
Volume 4 Issue Pages 498-505
Keywords Relative Camera Pose Estimation, Siamese Architecture, Synthetic Data, Deep Learning, Multi-View Environments, Extrinsic Camera Parameters.
Abstract This paper presents a novel Siamese network architecture, a variant of ResNet-50, to estimate the relative camera pose in multi-view environments. In order to improve the performance of the proposed model, a transfer learning strategy based on synthetic images obtained from a virtual world is considered. The transfer learning consists of first training the network using pairs of images from the virtual-world scenario under different conditions (i.e., weather, illumination, objects, buildings, etc.); then, the learned weights of the network are transferred to the real case, where images from real-world scenarios are considered. Experimental results and comparisons with the state of the art show improvements in relative pose estimation accuracy using the proposed model, as well as further improvements when the transfer learning strategy (synthetic-world data – transfer learning – real-world data) is used to tackle the limitation on training imposed by the reduced number of real-image pairs in most public datasets.
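A minimal PyTorch sketch of a weight-sharing Siamese ResNet-50 for relative pose regression; the quaternion-plus-translation output head is an assumption, not necessarily the authors' exact parametrization:

    import torch
    import torch.nn as nn
    from torchvision import models

    class SiamesePoseNet(nn.Module):
        """Two weight-sharing ResNet-50 branches; concatenated features
        regress the relative rotation (unit quaternion) and translation."""
        def __init__(self):
            super().__init__()
            backbone = models.resnet50(weights=None)
            backbone.fc = nn.Identity()          # keep the 2048-d features
            self.backbone = backbone
            self.fc_rot = nn.Linear(2 * 2048, 4)
            self.fc_trans = nn.Linear(2 * 2048, 3)

        def forward(self, img_a, img_b):
            # Same backbone (shared weights) applied to both views.
            f = torch.cat([self.backbone(img_a), self.backbone(img_b)], dim=1)
            q = nn.functional.normalize(self.fc_rot(f), dim=1)
            return q, self.fc_trans(f)

Under the paper's transfer learning strategy, such a network would first be trained on synthetic virtual-world pairs and then fine-tuned on the scarcer real-image pairs.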
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-989758402-2 Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 120
 

 
Author Ángel Morera; Ángel Sánchez; A. Belén Moreno; Angel D. Sappa; José F. Vélez
Title SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities Type Journal Article
Year 2020 Publication Sensors Abbreviated Journal
Volume 20 Issue 16 Pages 1-23
Keywords object detection; urban outdoor panels; one-stage detectors; Single Shot MultiBox Detector (SSD); You Only Look Once (YOLO); detection metrics; object and scene imaging variabilities
Abstract This work compares Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks for the outdoor advertisement panel detection problem by handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages both in the real world as well as in the virtual one. For example, applications like Google Street View can be used for Internet publicity, and when these ad panels are detected in images, the publicity appearing inside the panels could be replaced by another from a funding company. In our experiments, both SSD and YOLO detectors have produced acceptable results under variable sizes of panels, illumination conditions, viewing perspectives, partial occlusion of panels, complex background, and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was the near elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside the panels is analyzed after detecting them. On the other side, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks, using the same evaluation metrics, is also included.
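The TP/FP comparison rests on matching detections to annotated panels; below is a minimal sketch of the standard IoU-based greedy matching (a common protocol, assumed here, not necessarily the paper's exact one):

    def iou(a, b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / float(area(a) + area(b) - inter)

    def count_tp_fp(detections, ground_truth, thr=0.5):
        """Each detection is a TP if it overlaps a not-yet-matched annotated
        panel with IoU >= thr, otherwise it counts as an FP."""
        matched, tp, fp = set(), 0, 0
        for det in detections:
            best, best_iou = None, thr
            for i, gt in enumerate(ground_truth):
                overlap = iou(det, gt)
                if i not in matched and overlap >= best_iou:
                    best, best_iou = i, overlap
            if best is None:
                fp += 1
            else:
                matched.add(best)
                tp += 1
        return tp, fp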
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1424-8220 ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 133
 

 
Author Jorge L. Charco; Angel D. Sappa; Boris X. Vintimilla
Title Human Pose Estimation through A Novel Multi-View Scheme Type Conference Article
Year 2022 Publication Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) Abbreviated Journal
Volume 5 Issue Pages 855-862
Keywords Multi-View Scheme, Human Pose Estimation, Relative Camera Pose, Monocular Approach
Abstract This paper presents a multi-view scheme to tackle the challenging self-occlusion problem in human pose estimation. The proposed approach first obtains the human body joints from a set of images captured from different views at the same time. Then, it enhances the obtained joints by using a multi-view scheme: the joints from a given view are used to enhance poorly estimated joints from another view, which is especially intended to tackle self-occlusion cases. A network architecture initially proposed for the monocular case is adapted for use in the proposed multi-view scheme. Experimental results and comparisons with state-of-the-art approaches on the Human3.6M dataset are presented, showing improvements in the accuracy of body joint estimation.
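A minimal sketch of the geometric core of such a multi-view enhancement: a joint observed in two calibrated views is triangulated and reprojected, e.g. to refine a poorly estimated 2D location in one view (uses OpenCV's triangulatePoints; the paper's learned architecture is not reproduced):

    import cv2
    import numpy as np

    def refine_joint(P_a, P_b, joint_a, joint_b):
        """Triangulate one body joint seen in two calibrated views (3x4
        projection matrices P_a, P_b) and reproject the 3D point."""
        pts4d = cv2.triangulatePoints(P_a, P_b,
                                      np.float32(joint_a).reshape(2, 1),
                                      np.float32(joint_b).reshape(2, 1))
        X = (pts4d[:3] / pts4d[3]).ravel()       # 3D joint position
        reproj = P_b @ np.append(X, 1.0)
        return X, reproj[:2] / reproj[2]         # refined 2D joint in view B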
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved yes
Call Number cidis @ cidis @ Serial 169
 

 
Author Julien Poujol; Cristhian A. Aguilera; Etienne Danos; Boris X. Vintimilla; Ricardo Toledo; Angel D. Sappa
Title A Visible-Thermal Fusion Based Monocular Visual Odometry Type Conference Article
Year 2015 Publication Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 Abbreviated Journal
Volume 417 Issue Pages 517-528
Keywords Monocular Visual Odometry; LWIR-RGB cross-spectral Imaging; Image Fusion
Abstract The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which are from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In particular, in the current work two different image fusion strategies are considered. Firstly, images from the visible and thermal spectrum are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both monocular-visible and monocular-infrared spectra are also provided, showing the validity of the proposed approach.
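A minimal sketch of the first fusion strategy, a one-level DWT fusion with PyWavelets: approximation subbands are averaged and, per coefficient, the larger-magnitude detail coefficient is kept. This fusion rule is a common choice assumed here, not necessarily the paper's exact one.

    import numpy as np
    import pywt  # PyWavelets

    def dwt_fuse(visible, thermal, wavelet='db1'):
        """One-level DWT fusion of two registered grayscale images: average
        the approximation subbands, keep max-magnitude detail coefficients."""
        cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visible, wavelet)
        cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(thermal, wavelet)
        pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
        fused = (0.5 * (cA_v + cA_t),
                 (pick(cH_v, cH_t), pick(cV_v, cV_t), pick(cD_v, cD_t)))
        return pywt.idwt2(fused, wavelet)        # fused image, same size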
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 44
 

 
Author Angel D. Sappa; Cristhian A. Aguilera; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla; Ricardo Toledo
Title Monocular visual odometry: a cross-spectral image fusion based approach Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume 86 Issue Pages 26-36
Keywords Monocular visual odometry; LWIR-RGB cross-spectral imaging; Image fusion
Abstract This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using datasets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with monocular-visible/infrared spectra are also provided, showing the advantages of the proposed scheme.
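A minimal sketch of the mutual information based evaluation metric used to pick the best fusion setup, computed from a joint histogram of two grayscale images; the bin count is an assumption:

    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """Histogram-based mutual information between two grayscale images;
        higher values mean the fused image preserves more of a source."""
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_xy = hist / hist.sum()                 # joint distribution
        p_x = p_xy.sum(axis=1, keepdims=True)    # marginals
        p_y = p_xy.sum(axis=0, keepdims=True)
        nz = p_xy > 0                            # avoid log(0)
        return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())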
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 54
 

 
Author P. Ricaurte; C. Chilán; C. A. Aguilera-Carrasco; B. X. Vintimilla; Angel D. Sappa
Title Performance Evaluation of Feature Point Descriptors in the Infrared Domain Type Conference Article
Year 2014 Publication Computer Vision Theory and Applications (VISAPP), 2014 International Conference on, Lisbon, Portugal, 2014 Abbreviated Journal
Volume 1 Issue Pages 545-550
Keywords Infrared Imaging, Feature Point Descriptors
Abstract This paper presents a comparative evaluation of classical feature point descriptors when they are used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results using an outdoor image dataset are presented, together with a discussion of the differences with respect to the results obtained when images from the visible spectrum are considered.
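A minimal sketch of one robustness measurement in the spirit of this evaluation: the fraction of keypoints that still match between an image and a transformed version of it (ORB and Lowe's ratio test are assumptions; the paper evaluates several classical descriptors):

    import cv2

    def matching_ratio(img, transformed):
        """Fraction of keypoints surviving Lowe's ratio test between an
        image and a transformed (rotated/scaled/blurred/noisy) version."""
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(img, None)
        k2, d2 = orb.detectAndCompute(transformed, None)
        if d1 is None or d2 is None or len(k1) == 0:
            return 0.0
        matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(d1, d2, k=2)
        good = [m for m in matches
                if len(m) == 2 and m[0].distance < 0.8 * m[1].distance]
        return len(good) / len(k1)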
Address
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference 2014 International Conference on Computer Vision Theory and Applications (VISAPP)
Notes Approved no
Call Number cidis @ cidis @ Serial 26