Author: Miguel Realpe; Boris X. Vintimilla; L. Vlacic
Title: Towards Fault Tolerant Perception for Autonomous Vehicles: Local Fusion
Type: Conference Article
Year: 2015
Publication: IEEE 7th International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Siem Reap, 2015
Pages: 253-258
Abstract: Many robust sensor fusion strategies have been developed to reliably detect the surroundings of an autonomous vehicle. However, in real situations there is always the possibility that sensors or other components may fail, so internal modules and sensors need to be monitored to ensure their proper function. This paper introduces a general view of a perception architecture designed to detect and classify obstacles in an autonomous vehicle's environment using a fault tolerant framework, and elaborates on the object detection and local fusion modules proposed to achieve the modularity and real-time processing required by the system.
Approved: no
Call Number: cidis @ cidis @ Serial 37
 

 
Author: Jorge L. Charco; Angel D. Sappa; Boris X. Vintimilla; Henry O. Velesaca
Title: Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem
Type: Conference Article
Year: 2020
Publication: The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020
Volume: 4
Pages: 498-505
Keywords: Relative Camera Pose Estimation; Siamese Architecture; Synthetic Data; Deep Learning; Multi-View Environments; Extrinsic Camera Parameters
Abstract: This paper presents a novel Siamese network architecture, a variant of ResNet-50, to estimate the relative camera pose in multi-view environments. To improve the performance of the proposed model, a transfer learning strategy based on synthetic images obtained from a virtual world is considered. The transfer learning consists of first training the network using pairs of images from the virtual-world scenario under different conditions (i.e., weather, illumination, objects, buildings, etc.); the learned weights of the network are then transferred to the real case, where images from real-world scenarios are considered. Experimental results and comparisons with the state of the art show both improvements in relative pose estimation accuracy using the proposed model and further improvements when the transfer learning strategy (synthetic-world data – transfer learning – real-world data) is used to tackle the training limitation caused by the reduced number of pairs of real images in most public datasets.
ISBN: 978-989758402-2
Approved: no
Call Number: gtsi @ user @ Serial 120
 

 
Author: Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
Title: Vegetation Index Estimation from Monospectral Images
Type: Conference Article
Year: 2018
Publication: 15th International Conference on Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal. Lecture Notes in Computer Science
Volume: 10882
Pages: 353-362
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference between the red and infrared radiances over their sum; in other words, information from the red channel of an RGB image and the corresponding infrared spectral band are required for its computation. In the current work the NDVI index is estimated from the red channel alone by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single-level structure, which combines at the final layer the results of convolutional operations with the given red channel plus Gaussian noise to enhance details, resulting in a sharp NDVI image. The discriminative model then estimates the probability that the generated NDVI index came from the training dataset rather than being automatically generated. Experimental results with a large set of real images show that a single-level Conditional GAN model represents an acceptable approach to estimating the NDVI index.
Approved: no
Call Number: gtsi @ user @ Serial 82
 

 
Author: Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla
Title: Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study
Type: Journal Article
Year: 2016
Publication: Sensors
Volume: 16
Pages: 1-15
Keywords: image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near-InfraRed (NIR) and LongWave InfraRed (LWIR).
Language: English
Approved: no
Call Number: cidis @ cidis @ Serial 47