Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume 83 Issue Pages 312-325
Keywords Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives
Abstract When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
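The abstract's core idea, modeling a scene as a list of large planar polygons extracted from range data, can be sketched in a few lines. The snippet below is not the authors' implementation, only a minimal illustration under our own assumptions: it fits a dominant plane with RANSAC in plain NumPy and reduces the inlier points to a macro-scale polygon via their 2D convex hull on that plane; all function names are ours.

import numpy as np
from scipy.spatial import ConvexHull

def fit_plane_ransac(points, n_iters=200, threshold=0.05, rng=None):
    # RANSAC plane fit: returns (unit normal, offset d, inlier mask) for the
    # plane n.x + d = 0 with the most inliers.
    rng = rng or np.random.default_rng(0)
    best_n = best_d = best_inliers = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:            # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(points @ n + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_n, best_d, best_inliers = n, d, inliers
    return best_n, best_d, best_inliers

def polygonal_primitive(inlier_points, n, d):
    # Reduce the supporting points of a plane to a macro-scale polygon:
    # project onto an orthonormal basis (u, v) of the plane, take the 2D
    # convex hull, and lift the hull vertices back to 3D.
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:                # plane is (near) horizontal
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    uv = np.stack([inlier_points @ u, inlier_points @ v], axis=1)
    hull = ConvexHull(uv)
    origin = -d * n                             # plane point closest to origin
    return origin + uv[hull.vertices, 0:1] * u + uv[hull.vertices, 1:2] * v

# Usage: n, d, inl = fit_plane_ransac(cloud)   # cloud: (N, 3) range points
#        polygon = polygonal_primitive(cloud[inl], n, d)

Incremental updating would then amount to re-fitting on fresh sensor data and merging polygons that share a supporting plane, as the abstract describes.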
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 49
 

 
Author Ángel Morera; Ángel Sánchez; A. Belén Moreno; Angel D. Sappa; José F. Vélez
Title SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities Type Journal Article
Year 2020 Publication Sensors Abbreviated Journal
Volume 2020-August Issue 16 Pages 1-23
Keywords object detection; urban outdoor panels; one-stage detectors; Single Shot MultiBox Detector (SSD); You Only Look Once (YOLO); detection metrics; object and scene imaging variabilities
Abstract This work compares Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks for the outdoor advertisement panel detection problem, handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages both in the real world and in the virtual one. For example, applications like Google Street View can be used for Internet publicity, and when these ad panels are detected in images, the publicity appearing inside the panels could be replaced by another from a funding company. In our experiments, both SSD and YOLO detectors produced acceptable results under variable sizes of panels, illumination conditions, viewing perspectives, partial occlusion of panels, complex background, and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was the near elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside the panels is analyzed after detection. On the other hand, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks, using the same evaluation metrics, is also included.
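The TP/FP terminology used above follows the usual IoU-based matching of detections to ground-truth boxes. As a hedged illustration, not tied to the paper's exact evaluation code, a greedy matcher might look like this; the box format and the 0.5 IoU threshold are our assumptions.

def iou(a, b):
    # Intersection-over-union of two boxes in (x1, y1, x2, y2) format.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_detections(detections, ground_truth, iou_thr=0.5):
    # Greedily match detections (highest confidence first) to ground-truth
    # panels; each ground-truth box may be matched at most once.
    detections = sorted(detections, key=lambda d: d["score"], reverse=True)
    unmatched = list(range(len(ground_truth)))
    tp = fp = 0
    for det in detections:
        best_j, best_iou = None, iou_thr
        for j in unmatched:
            o = iou(det["box"], ground_truth[j])
            if o >= best_iou:
                best_j, best_iou = j, o
        if best_j is None:
            fp += 1                       # no panel overlaps enough: FP
        else:
            tp += 1                       # matched a real panel: TP
            unmatched.remove(best_j)
    return tp, fp, len(unmatched)         # leftovers are missed panels (FN)

Precision is then tp / (tp + fp) and recall tp / (tp + fn), aggregated over all test images.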
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 14248220 Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 133
 

 
Author Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla; Lin Guo; Jiankun Hou; Armin Mehri; Parichehr Behjati Ardakani; Heena Patel; Vishal Chudasama; Kalpesh Prajapati; Kishor P. Upla; Raghavendra Ramachandra; Kiran Raja; Christoph Busch; Feras Almasri; Olivier Debeir; Sabari Nathan; Priya Kansal; Nolan Gutierrez; Bardia Mojra; William J. Beksi
Title Thermal Image Super-Resolution Challenge – PBVS 2020 Type Conference Article
Year 2020 Publication The 16th IEEE Workshop on Perception Beyond the Visible Spectrum, in conjunction with the Conference on Computer Vision and Pattern Recognition (CVPR 2020) Abbreviated Journal
Volume 2020-June Issue 9151059 Pages 432-439
Keywords
Abstract This paper summarizes the top contributions to the first challenge on thermal image super-resolution (TISR), which was organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2020 workshop. In this challenge, a novel thermal image dataset is considered together with state-of-the-art approaches evaluated under a common framework. The dataset used in the challenge consists of 1021 thermal images obtained from three distinct thermal cameras at different resolutions (low-resolution, mid-resolution, and high-resolution), resulting in a total of 3063 thermal images. From each resolution, 951 images are used for training and 50 for testing, while the 20 remaining images are used for two proposed evaluations. The first evaluation consists of downsampling the low-resolution, mid-resolution, and high-resolution thermal images by x2, x3 and x4 respectively, and comparing their super-resolution results with the corresponding ground truth images. The second evaluation consists of obtaining the x2 super-resolution from a given mid-resolution thermal image and comparing it with the corresponding semi-registered high-resolution thermal image. Out of 51 registered participants, 6 teams reached the final validation phase.
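The first evaluation protocol (downsample by a known factor, super-resolve, compare against the original) can be sketched as below. This is an illustration, not the challenge's official scoring code: it uses plain bicubic downsampling and PSNR only, whereas the exact protocol (e.g., any added degradation or SSIM scoring) is defined in the paper; super_resolve is a placeholder for the model under test.

import cv2
import numpy as np

def psnr(ref, est, max_val=255.0):
    # Peak signal-to-noise ratio between reference and estimate.
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def evaluate_downsample_protocol(hr_image, super_resolve, scale):
    # Downsample a ground-truth thermal image by `scale`, super-resolve it
    # back with the model under test, and score against the original.
    # Assumes image height and width are divisible by `scale`.
    h, w = hr_image.shape[:2]
    lr = cv2.resize(hr_image, (w // scale, h // scale),
                    interpolation=cv2.INTER_CUBIC)
    return psnr(hr_image, super_resolve(lr, scale))

# Trivial baseline: plain bicubic upsampling as the "model".
bicubic = lambda lr, s: cv2.resize(lr, (lr.shape[1] * s, lr.shape[0] * s),
                                   interpolation=cv2.INTER_CUBIC)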
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 21607508 ISBN 978-172819360-1 Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 123
 

 
Author Jorge L. Charco; Boris X. Vintimilla; Angel D. Sappa
Title Deep learning based camera pose estimation in multi-view environment Type Conference Article
Year 2018 Publication 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) Abbreviated Journal
Volume Issue Pages 224-228
Keywords
Abstract This paper proposes a deep learning network architecture for relative camera pose estimation in a multi-view environment. The proposed network, a variant of the AlexNet architecture, is used as a regressor that predicts the relative translation and rotation as output. It is trained from scratch on a large dataset of image pairs from the same scene. The new architecture is compared with a previous approach using standard metrics, obtaining better results on the relative camera pose.
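As a rough sketch of the kind of network the abstract describes (an AlexNet-variant regressor over an image pair), the PyTorch module below encodes both views with a shared AlexNet trunk and regresses a 7-vector (translation plus unit quaternion). The layer sizes and the quaternion parameterization are our assumptions, not details confirmed by the paper.

import torch
import torch.nn as nn
from torchvision import models

class RelativePoseNet(nn.Module):
    # Sketch: a shared AlexNet trunk encodes both views; a fully connected
    # head regresses relative translation (3) and a rotation quaternion (4).
    def __init__(self):
        super().__init__()
        self.trunk = models.alexnet(weights=None).features  # shared encoder
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 256 * 6 * 6, 4096),  # 2 views, 256x6x6 each @ 224px
            nn.ReLU(inplace=True),
            nn.Linear(4096, 7),                # tx, ty, tz, qw, qx, qy, qz
        )

    def forward(self, img_a, img_b):
        feats = torch.cat([self.trunk(img_a), self.trunk(img_b)], dim=1)
        out = self.head(feats)
        t, q = out[:, :3], out[:, 3:]
        return t, q / q.norm(dim=1, keepdim=True)  # unit quaternion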
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 93
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud
Title Near InfraRed Imagery Colorization Type Conference Article
Year 2018 Publication 25th IEEE International Conference on Image Processing, ICIP 2018 Abbreviated Journal
Volume Issue Pages 2237-2241
Keywords Convolutional Neural Networks (CNN); Generative Adversarial Network (GAN); Infrared imagery colorization
Abstract This paper proposes a stacked conditional Generative Adversarial Network-based method for Near InfraRed (NIR) imagery colorization. We propose a variant architecture of Generative Adversarial Network (GAN) that uses multiple loss functions over a conditional probabilistic generative model. We show that this new architecture/loss-function yields better generalization and representation of the generated colored IR images. The proposed approach is evaluated on a large test dataset and compared to recent state-of-the-art methods using standard metrics.
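The "multiple loss functions over a conditional probabilistic generative model" can be illustrated with the common conditional-GAN generator objective below: an adversarial term plus a weighted pixel-wise L1 term. This is a generic sketch; the paper's actual loss terms and the weight lam=100.0 (the pix2pix convention) are assumptions here.

import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()   # adversarial term (real/fake logits)
l1_loss = nn.L1Loss()               # pixel-wise color-fidelity term

def generator_loss(disc_logits_fake, fake_rgb, real_rgb, lam=100.0):
    # Combined objective: fool the discriminator while keeping the
    # colorized output close to the ground-truth RGB image.
    real_labels = torch.ones_like(disc_logits_fake)
    return (adv_loss(disc_logits_fake, real_labels)
            + lam * l1_loss(fake_rgb, real_rgb))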
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 81
 

 
Author Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla
Title Thermal Image Super-Resolution: a Novel Architecture and Dataset Type Conference Article
Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 Abbreviated Journal
Volume 4 Issue Pages 111-119
Keywords Thermal images; Far Infrared; Dataset; Super-Resolution
Abstract This paper proposes a novel CycleGAN architecture for thermal image super-resolution, together with a large dataset consisting of thermal images at different resolutions. The dataset has been acquired using three thermal cameras at different resolutions, which capture the same scenario at the same time. The thermal cameras are mounted on a rig, minimizing the baseline distance between them to ease the registration problem. The proposed architecture is based on ResNet6 as the generator and PatchGAN as the discriminator. The novel unsupervised super-resolution training (CycleGAN) is possible due to the existence of the aforementioned thermal images, i.e., images of the same scenario at different resolutions. The proposed approach is evaluated on the dataset and compared with classical bicubic interpolation. The dataset and the network are available.
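For reference, a PatchGAN discriminator of the kind named in the abstract is a small fully convolutional network that outputs a grid of per-patch real/fake logits rather than a single score. The sketch below uses the standard layer pattern and hyperparameters from the original CycleGAN recipe, which may differ from the paper's exact configuration.

import torch.nn as nn

def patchgan_discriminator(in_ch=1, base=64):
    # Fully convolutional discriminator: the output is a map of logits,
    # one per overlapping receptive-field patch of the input image.
    def block(cin, cout, stride):
        return [nn.Conv2d(cin, cout, 4, stride, 1),
                nn.InstanceNorm2d(cout),
                nn.LeakyReLU(0.2, inplace=True)]
    layers = [nn.Conv2d(in_ch, base, 4, 2, 1),    # no norm on first layer
              nn.LeakyReLU(0.2, inplace=True)]
    layers += block(base, base * 2, 2)
    layers += block(base * 2, base * 4, 2)
    layers += block(base * 4, base * 8, 1)
    layers += [nn.Conv2d(base * 8, 1, 4, 1, 1)]   # per-patch logit map
    return nn.Sequential(*layers)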
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-989758402-2 Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 121
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
Title Adaptive Harris Corners Detector Evaluated with Cross-Spectral Images Type Conference Article
Year 2018 Publication International Conference on Information Technology & Systems (ICITS 2018). ICITS 2018. Advances in Intelligent Systems and Computing Abbreviated Journal
Volume 721 Issue Pages
Keywords
Abstract This paper proposes a novel approach that uses cross-spectral images to achieve better performance with the proposed Adaptive Harris corner detector, comparing the obtained results with those achieved on images of the visible spectrum. Images of the urban, field, old-building and country categories were used for the experiments, given the variety of textures present in them, which makes verifying the proposal more challenging. The idea is to improve the detection of feature points by using cross-spectral (NIR, G, B) images and applying pruning techniques: the channel combination chosen for the fusion is the one that generates the largest variance in the intensity of the merged pixels, and therefore the one that maximizes the entropy of the resulting cross-spectral image. Harris is one of the most widely used corner detection algorithms, so any improvement in its efficiency is an important contribution to the field of computer vision. The experiments conclude that including an NIR channel in the image, as a result of combining the spectra, greatly improves corner detection due to the better entropy of the fused image; the fusion therefore also improves the results of subsequent processes such as identification of objects or patterns, classification and/or segmentation.
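A hedged reading of the fusion step described above: among a few (NIR, channel, channel) combinations, keep the fused gray image with the highest entropy, then run Harris on it. The sketch below uses OpenCV's stock cornerHarris rather than the paper's adaptive variant, and the candidate combinations and threshold are illustrative choices of ours.

import cv2
import numpy as np

def entropy(img):
    # Shannon entropy of an 8-bit single-channel image.
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_fusion(nir, bgr):
    # Try a few (NIR, channel, channel) combinations and keep the fused
    # gray image with the highest entropy, per the abstract's criterion.
    b, g, r = cv2.split(bgr)
    candidates = [cv2.merge([nir, g, b]), cv2.merge([nir, r, g]),
                  cv2.merge([nir, r, b])]
    grays = [cv2.cvtColor(c, cv2.COLOR_BGR2GRAY) for c in candidates]
    return max(grays, key=entropy)

def harris_corners(gray, block=2, ksize=3, k=0.04, rel_thr=0.01):
    # Stock Harris response; keep pixels above a fraction of the maximum.
    response = cv2.cornerHarris(np.float32(gray), block, ksize, k)
    return np.argwhere(response > rel_thr * response.max())  # (row, col)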
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 84
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
Title Cross-spectral image dehaze through a dense stacked conditional GAN based approach Type Conference Article
Year 2018 Publication 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) Abbreviated Journal
Volume Issue Pages 358-364
Keywords
Abstract This paper proposes a novel approach to remove haze from RGB images using near infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). The implemented deep network receives, besides the hazy image, its corresponding image in the near infrared spectrum, which serves to accelerate the learning of the images' detailed characteristics. The model uses a triplet layer that allows each channel of the visible-spectrum image to be learned independently, removing the haze on each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results show that the proposed method effectively removes the haze from the images. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
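The "triplet layer" idea, one generator per color channel, each conditioned on the shared NIR image, can be sketched as follows. This is our interpretation of the abstract, not the paper's dense stacked CGAN; tiny_gen is a deliberately minimal stand-in generator.

import torch
import torch.nn as nn

class TripletDehazer(nn.Module):
    # One small generator per RGB channel, each conditioned on the shared
    # NIR image, so the three channels are learned independently.
    def __init__(self, make_generator):
        super().__init__()
        self.gens = nn.ModuleList(
            [make_generator(in_ch=2, out_ch=1) for _ in range(3)])

    def forward(self, hazy_rgb, nir):
        outs = [g(torch.cat([hazy_rgb[:, i:i + 1], nir], dim=1))
                for i, g in enumerate(self.gens)]
        return torch.cat(outs, dim=1)       # re-assembled dehazed RGB

def tiny_gen(in_ch, out_ch):
    # Deliberately minimal stand-in for the paper's dense stacked CGAN.
    return nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1),
                         nn.ReLU(inplace=True),
                         nn.Conv2d(32, out_ch, 3, padding=1),
                         nn.Sigmoid())

# model = TripletDehazer(tiny_gen)  # hazy_rgb: (B,3,H,W), nir: (B,1,H,W)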
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 92
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud
Title Deep Learning based Single Image Dehazing Type Conference Article
Year 2018 Publication 14th IEEE Workshop on Perception Beyond the Visible Spectrum, in conjunction with CVPR 2018, Salt Lake City, Utah, USA Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper proposes a novel approach to remove haze degradations in RGB images using a stacked conditional Generative Adversarial Network (GAN). It employs a triplet of GANs to remove the haze on each color channel independently. A multiple loss function scheme, applied over a conditional probabilistic model, is proposed. The proposed GAN architecture learns to remove the haze using as conditioned input the hazy images from which the clear images will be obtained. Such a formulation ensures fast model training convergence and homogeneous model generalization. Experiments showed that the proposed method generates high-quality clear images.
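The conditional-GAN training scheme the abstract describes (hazy image as the conditioned input, multiple losses) follows the usual alternating update pattern sketched below. This is a generic recipe, not the paper's exact one; the discriminator here scores (condition, image) pairs and the L1 weight is an assumption.

import torch

def cgan_step(gen, disc, g_opt, d_opt, hazy, clear, adv, l1, lam=100.0):
    # Discriminator update: score (condition, image) pairs as real or fake.
    d_opt.zero_grad()
    fake = gen(hazy).detach()
    d_real = disc(torch.cat([hazy, clear], dim=1))
    d_fake = disc(torch.cat([hazy, fake], dim=1))
    d_loss = (adv(d_real, torch.ones_like(d_real))
              + adv(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    d_opt.step()
    # Generator update: adversarial term plus pixel-wise L1 to the clear image.
    g_opt.zero_grad()
    fake = gen(hazy)
    d_fake = disc(torch.cat([hazy, fake], dim=1))
    g_loss = adv(d_fake, torch.ones_like(d_fake)) + lam * l1(fake, clear)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()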
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 83
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud
Title Image Vegetation Index through a Cycle Generative Adversarial Network Type Conference Article
Year 2019 Publication Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States Abbreviated Journal
Volume Issue Pages 1014-1021
Keywords
Abstract This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) just from an RGB image. The NDVI values are obtained by using images from the visible spectral band together with a synthetic near infrared image obtained by a cycled GAN. The cycled GAN network is able to obtain a NIR image from a given gray-scale image. It is trained on an unpaired set of gray-scale and NIR images, using a U-net architecture and a multiple loss function (the gray-scale images are obtained from the provided RGB images). Then, the NIR image estimated with the proposed cycle generative adversarial network is used to compute the NDVI index. Experimental results are provided showing the validity of the proposed approach. Additionally, comparisons with previous approaches are also provided.
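Once the synthetic NIR band is available, the NDVI computation itself is the standard formula NDVI = (NIR - R) / (NIR + R). A minimal sketch follows; the epsilon guard against division by zero is our addition.

import numpy as np

def ndvi(nir, red, eps=1e-6):
    # NDVI = (NIR - R) / (NIR + R), values in [-1, 1]; eps avoids
    # division by zero on dark pixels.
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Here `nir` would be the CycleGAN estimate obtained from the gray-scale
# version of the RGB image, and `red` the image's R channel.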
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 106