Records
Author Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic
Title Multi-sensor Fusion Module in a Fault Tolerant Perception System for Autonomous Vehicles Type Journal Article
Year 2016 Publication Journal of Automation and Control Engineering (JOACE) Abbreviated Journal
Volume 4 Issue Pages 430-436
Keywords Fault Tolerance, Data Fusion, Multi-sensor Fusion, Autonomous Vehicles, Perception System
Abstract Driverless vehicles are currently being tested on public roads in order to examine their ability to perform in a safe and reliable way in real-world situations. However, the long-term reliable operation of a vehicle's diverse sensors and the effects of potential sensor faults on the vehicle system have not yet been tested. This paper proposes a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented in which faults are simulated by introducing displacements into the sensor information from the KITTI dataset.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 51
Permanent link to this record
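As a rough illustration of the fault-simulation and fusion idea summarized in the abstract above (not the paper's actual fusion module), the following Python sketch displaces one sensor's reading, as done with the KITTI data, and fuses the measurements with weights that shrink for sensors disagreeing with the consensus. The function names, offset and sigma value are assumptions made for this example.

# Hypothetical sketch: down-weighting a displaced (faulty) sensor during fusion.
import numpy as np

def simulate_fault(measurement, offset):
    """Emulate a sensor fault by adding a fixed displacement to its reading."""
    return measurement + offset

def robust_fuse(measurements, sigma=0.5):
    """Weight each sensor by its agreement with the per-axis median of all sensors."""
    m = np.asarray(measurements, dtype=float)      # shape: (n_sensors, dims)
    median = np.median(m, axis=0)
    dist = np.linalg.norm(m - median, axis=1)      # disagreement of each sensor
    weights = np.exp(-(dist / sigma) ** 2)         # faulty sensors get tiny weights
    weights /= weights.sum()
    return (weights[:, None] * m).sum(axis=0)

# Example: three sensors observe the same object; the camera estimate is displaced by 2 m.
lidar  = np.array([10.0, 3.0])
radar  = np.array([10.1, 2.9])
camera = simulate_fault(np.array([10.0, 3.0]), np.array([2.0, 0.0]))
print(robust_fuse([lidar, radar, camera]))         # close to (10.05, 2.95)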
 

 
Author Jorge L. Charco; Angel D. Sappa; Boris X. Vintimilla; Henry O. Velesaca
Title Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem Type Conference Article
Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 Abbreviated Journal
Volume 4 Issue Pages 498-505
Keywords Relative Camera Pose Estimation, Siamese Architecture, Synthetic Data, Deep Learning, Multi-View Environments, Extrinsic Camera Parameters.
Abstract This paper presents a novel Siamese network architecture, a variant of ResNet-50, to estimate the relative camera pose in multi-view environments. In order to improve the performance of the proposed model, a transfer learning strategy based on synthetic images obtained from a virtual world is considered. The transfer learning consists of first training the network using pairs of images from the virtual-world scenario under different conditions (i.e., weather, illumination, objects, buildings, etc.); then, the learned weights of the network are transferred to the real case, where images from real-world scenarios are considered. Experimental results and comparisons with the state of the art show improvements in relative pose estimation accuracy using the proposed model, as well as further improvements when the transfer learning strategy (synthetic-world data, then transfer learning, then real-world data) is used to tackle the limited training data caused by the reduced number of real-image pairs in most public datasets.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-989758402-2 Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 120
Permanent link to this record
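A minimal PyTorch sketch of the kind of Siamese architecture described in the record above: a shared ResNet-50 backbone whose paired features are concatenated and regressed to a 7-value relative pose (translation plus quaternion). The head layout, output parameterization and names are assumptions for illustration, not the authors' implementation; the synthetic-to-real transfer step is indicated only by comments.

# Assumed Siamese relative-pose regressor with a shared ResNet-50 backbone.
import torch
import torch.nn as nn
import torchvision.models as models

class SiameseRelPose(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)      # synthetic-world weights could be loaded here
        backbone.fc = nn.Identity()                   # keep the 2048-d global feature
        self.backbone = backbone                      # shared by both branches (Siamese)
        self.head = nn.Sequential(
            nn.Linear(2 * 2048, 512), nn.ReLU(),
            nn.Linear(512, 7),                        # 3 translation + 4 quaternion values
        )

    def forward(self, img_a, img_b):
        fa = self.backbone(img_a)                     # same weights applied to both images
        fb = self.backbone(img_b)
        return self.head(torch.cat([fa, fb], dim=1))

# Transfer-learning idea from the abstract: first train on synthetic (virtual-world) pairs,
# then fine-tune the same weights on the smaller set of real-image pairs.
model = SiameseRelPose()
pose = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
print(pose.shape)                                     # torch.Size([1, 7])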
 

 
Author Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla
Title Thermal Image Super-Resolution: a Novel Architecture and Dataset Type Conference Article
Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 Abbreviated Journal
Volume 4 Issue Pages 111-119
Keywords Thermal images, Far Infrared, Dataset, Super-Resolution.
Abstract This paper proposes a novel CycleGAN architecture for thermal image super-resolution, together with a large dataset consisting of thermal images at different resolutions. The dataset has been acquired using three thermal cameras with different resolutions, which capture images of the same scenario at the same time. The thermal cameras are mounted on a rig, minimizing the baseline distance between them to ease the registration problem. The proposed architecture is based on ResNet6 as the generator and PatchGAN as the discriminator. The proposed unsupervised super-resolution training (CycleGAN) is possible due to the existence of the aforementioned thermal images, i.e., images of the same scenario at different resolutions. The proposed approach is evaluated on the dataset and compared with classical bicubic interpolation. The dataset and the network are available.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-989758402-2 Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 121
Permanent link to this record
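The abstract above names ResNet6 as the generator and PatchGAN as the discriminator. Below is a minimal, assumed PatchGAN discriminator sketch in PyTorch (layer widths, normalization and the single-channel thermal input are illustrative choices, not the paper's exact configuration); it outputs a grid of real/fake scores, one per image patch.

# Assumed PatchGAN-style discriminator for single-channel thermal images.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride, norm=True):
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1)]
    if norm:
        layers.append(nn.InstanceNorm2d(out_ch))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return layers

class PatchGAN(nn.Module):
    """Classifies overlapping patches of the input as real or fake (one score per patch)."""
    def __init__(self, in_ch=1):                      # 1 channel for thermal images
        super().__init__()
        self.net = nn.Sequential(
            *conv_block(in_ch, 64, 2, norm=False),
            *conv_block(64, 128, 2),
            *conv_block(128, 256, 2),
            *conv_block(256, 512, 1),
            nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),   # patch-wise scores
        )

    def forward(self, x):
        return self.net(x)

d = PatchGAN()
print(d(torch.randn(1, 1, 256, 256)).shape)           # torch.Size([1, 1, 30, 30])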
 

 
Author Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla
Title Multi-Image Super-Resolution for Thermal Images Type Conference Article
Year 2022 Publication Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) Abbreviated Journal
Volume 4 Issue Pages 635-642
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 181
Permanent link to this record
 

 
Author A. Amato; F. Lumbreras; Angel D. Sappa
Title A general-purpose crowdsourcing platform for mobile devices Type Conference Article
Year 2014 Publication Computer Vision Theory and Applications (VISAPP), 2014 International Conference on, Lisbon, Portugal, 2014 Abbreviated Journal
Volume 3 Issue Pages 211-215
Keywords Crowdsourcing Platform, Mobile Crowdsourcing
Abstract This paper presents details of a general-purpose micro-task-on-demand platform based on the crowdsourcing philosophy. This platform was specifically developed for mobile devices in order to exploit the strengths of such devices, namely: i) massivity, ii) ubiquity and iii) embedded sensors. The combined use of mobile platforms and the crowdsourcing model allows tackling from the simplest to the most complex tasks. User experience is the highlighted feature of this platform (this extends to both the task-proposer and the task-solver). Proper tools, according to the specific task, are provided to the task-solver in order to perform his/her job in a simpler, faster and more appealing way. Moreover, a task can be easily submitted by just selecting predefined templates, which cover a wide range of possible applications. Examples of its usage in computer vision and computer games are provided, illustrating the potentiality of the platform.
Address
Corporate Author Thesis
Publisher IEEE Place of Publication Lisbon, Portugal Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference Computer Vision Theory and Applications (VISAPP), 2014 International Conference on
Notes Approved no
Call Number cidis @ cidis @ Serial 25
Permanent link to this record
 

 
Author Roberto Jacome Galarza; Miguel-Andrés Realpe-Robalino; Luis Antonio Chamba-Eras; Marlon Santiago Viñán-Ludeña; Javier-Francisco Sinche-Freire
Title Computer vision for image understanding: A comprehensive review Type Conference Article
Year 2019 Publication International Conference on Advances in Emerging Trends and Technologies (ICAETT 2019); Quito, Ecuador Abbreviated Journal
Volume Issue Pages 248-259
Keywords
Abstract Computer Vision has its own Turing test: can a machine describe the contents of an image or a video in the way a human being would? In this paper, the progress of Deep Learning for image recognition is analyzed in order to answer this question. In recent years, Deep Learning has considerably increased the precision of many tasks related to computer vision. Many datasets of labeled images are now available online, which leads to pre-trained models for many computer vision applications. In this work, we gather information on the latest techniques for image understanding and description. We conclude that the combination of Natural Language Processing (using Recurrent Neural Networks and Long Short-Term Memory) with Image Understanding (using Convolutional Neural Networks) could bring new types of powerful and useful applications in which the computer is able to answer questions about the content of images and videos. Building datasets of labeled images requires a lot of work, and most of these datasets are built using crowd work. These new applications have the potential to raise human-machine interaction to new levels of usability and user satisfaction.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 97
Permanent link to this record
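As a hedged illustration of the CNN-plus-LSTM combination the review above points to for image description, the sketch below feeds a CNN image embedding into an LSTM that predicts caption words. Vocabulary size, dimensions and the overall wiring are assumptions made for this example rather than any specific system from the reviewed literature.

# Assumed CNN-encoder + LSTM-decoder sketch for image captioning.
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)            # CNN encoder for the image
        cnn.fc = nn.Linear(cnn.fc.in_features, embed_dim)
        self.encoder = cnn
        self.embed = nn.Embedding(vocab_size, embed_dim)   # word embeddings for the caption
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)        # next-word scores

    def forward(self, image, caption_tokens):
        img_feat = self.encoder(image).unsqueeze(1)          # (B, 1, embed_dim)
        words = self.embed(caption_tokens)                   # (B, T, embed_dim)
        seq = torch.cat([img_feat, words], dim=1)            # image acts as the first "word"
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                              # (B, T+1, vocab_size)

model = CaptionModel()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
print(logits.shape)                                          # torch.Size([2, 13, 10000])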
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
Title Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture Type Conference Article
Year 2017 Publication 19th International Conference on Image Analysis and Processing. Abbreviated Journal
Volume Issue Pages 287-297
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 66
Permanent link to this record
 

 
Author Xavier Soria; Angel D. Sappa; Arash Akbarinia
Title Multispectral Single-Sensor RGB-NIR Imaging: New Challenges and Opportunities Type Conference Article
Year 2017 Publication The 7th International Conference on Image Processing Theory, Tools and Applications Abbreviated Journal
Volume Issue Pages 1-6
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 72
Permanent link to this record
 

 
Author Milton Mendieta; F. Panchana; B. Andrade; B. Bayot; C. Vaca; Boris X. Vintimilla; Dennis G. Romero
Title Organ identification on shrimp histological images: A comparative study considering CNN and feature engineering. Type Conference Article
Year 2018 Publication IEEE Ecuador Technical Chapters Meeting ETCM 2018. Cuenca, Ecuador Abbreviated Journal
Volume Issue Pages 1-6
Keywords
Abstract The identification of shrimp organs in biology using histological images is a complex task. Shrimp histological images pose a big challenge due to their texture and the similarity among classes. Image classification using feature engineering and convolutional neural networks (CNN) are suitable methods to assist biologists when performing organ detection. This work evaluates the Bag-of-Visual-Words (BOVW) and Pyramid-Bag-of-Words (PBOW) models for image classification leveraging big data techniques, and transfer learning for the same classification task by using a pre-trained CNN. A comparative analysis of these two different techniques is performed, highlighting the characteristics of both approaches on the shrimp organ identification problem.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 87
Permanent link to this record
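One of the two approaches compared in the record above is transfer learning with a pre-trained CNN. The sketch below shows the generic recipe under stated assumptions (ImageNet weights, a frozen ResNet-50 backbone and an assumed number of organ classes); it is not the authors' training setup.

# Assumed transfer-learning recipe: pre-trained CNN as a fixed feature extractor plus a new head.
import torch
import torch.nn as nn
import torchvision.models as models

num_organ_classes = 4                                 # assumption for illustration only

model = models.resnet50(weights="IMAGENET1K_V1")      # CNN pre-trained on ImageNet
for p in model.parameters():
    p.requires_grad = False                           # freeze the backbone (transfer learning)
model.fc = nn.Linear(model.fc.in_features, num_organ_classes)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for histological image tensors.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, num_organ_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))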
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud
Title Near InfraRed Imagery Colorization Type Conference Article
Year 2018 Publication 25th IEEE International Conference on Image Processing, ICIP 2018 Abbreviated Journal
Volume Issue Pages 2237-2241
Keywords Convolutional Neural Networks (CNN), Generative Adversarial Network (GAN), Infrared Imagery Colorization
Abstract This paper proposes a stacked conditional Generative Adversarial Network-based method for Near InfraRed (NIR) imagery colorization. We propose a variant architecture of Generative Adversarial Network (GAN) that uses multiple loss functions over a conditional probabilistic generative model. We show that this new architecture/loss-function yields better generalization and representation of the generated colored IR images. The proposed approach is evaluated on a large test dataset and compared to recent state-of-the-art methods using standard metrics.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 81
Permanent link to this record
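The abstract above mentions multiple loss functions over a conditional generative model. The following sketch illustrates one common way to combine an adversarial term with an L1 reconstruction term for conditional image-to-image generation; the 100x weighting and the patch-shaped discriminator logits are assumptions (pix2pix-style), not the paper's stacked architecture or exact losses.

# Assumed combined objective: adversarial loss plus weighted L1 reconstruction loss.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
lambda_l1 = 100.0                                     # assumed weighting, pix2pix-style

def generator_loss(disc_fake_logits, fake_rgb, real_rgb):
    """Adversarial term (fool the discriminator) + L1 term against the ground-truth colors."""
    adv = bce(disc_fake_logits, torch.ones_like(disc_fake_logits))
    rec = l1(fake_rgb, real_rgb)
    return adv + lambda_l1 * rec

def discriminator_loss(real_logits, fake_logits):
    """Conditional discriminator: real NIR/RGB pairs vs. NIR paired with generated colors."""
    real = bce(real_logits, torch.ones_like(real_logits))
    fake = bce(fake_logits, torch.zeros_like(fake_logits))
    return 0.5 * (real + fake)

# Dummy shapes: patch-wise discriminator logits and 3-channel colorized outputs.
print(generator_loss(torch.randn(1, 1, 30, 30), torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)))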