Author Ricaurte P; Chilán C; Cristhian A. Aguilera; Boris X. Vintimilla; Angel D. Sappa
  Title Feature Point Descriptors: Infrared and Visible Spectra Type Journal Article
  Year 2014 Publication Sensors Journal Abbreviated Journal  
  Volume 14 Issue Pages 3690-3701
  Keywords cross-spectral imaging; feature point descriptors  
  Abstract This manuscript evaluates the behavior of classical feature point descriptors when they are used in images from the long-wave infrared spectral band and compares them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results using a cross-spectral outdoor image data set are presented and conclusions from these experiments are given.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 28  
Permanent link to this record
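The robustness evaluation described in the abstract (descriptor matching under rotation, scaling, blur, and additive noise) can be illustrated with a toy sketch. The normalized-patch descriptor, keypoint list, and distance threshold below are illustrative stand-ins, not the descriptors actually benchmarked in the paper:

```python
import numpy as np

def patch_descriptor(img, y, x, size=8):
    """Toy descriptor: a mean/std-normalized patch flattened to a vector."""
    p = img[y:y + size, x:x + size].astype(float).ravel()
    return (p - p.mean()) / (p.std() + 1e-8)

def matching_ratio(img_a, img_b, keypoints, thresh=4.0):
    """Fraction of keypoints whose descriptors in the two images stay
    within an L2 distance threshold (a crude repeatability score)."""
    ok = 0
    for (y, x) in keypoints:
        da = patch_descriptor(img_a, y, x)
        db = patch_descriptor(img_b, y, x)
        if np.linalg.norm(da - db) < thresh:
            ok += 1
    return ok / len(keypoints)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64))
noisy = img + rng.normal(0, 5, img.shape)      # additive-noise perturbation
kps = [(8, 8), (20, 30), (40, 12), (50, 50)]
print(matching_ratio(img, noisy, kps))
```

Repeating the measurement across perturbation types and strengths reproduces the shape of the paper's evaluation protocol, with real descriptors substituted for the toy one.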
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud
  Title Image Vegetation Index through a Cycle Generative Adversarial Network Type Conference Article
  Year 2019 Publication Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States Abbreviated Journal  
  Volume Issue Pages 1014-1021
  Keywords  
  Abstract This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just an RGB image. The NDVI values are obtained by using images from the visible spectral band together with a synthetic near-infrared image obtained by a cycled GAN. The cycled GAN network is able to obtain a NIR image from a given grayscale image. It is trained on an unpaired set of grayscale and NIR images using a U-net architecture and a multiple loss function (the grayscale images are obtained from the provided RGB images). Then, the NIR image estimated with the proposed cycle generative adversarial network is used to compute the NDVI index. Experimental results are provided showing the validity of the proposed approach. Additionally, comparisons with previous approaches are also provided.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 106  
Permanent link to this record
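The cycle-consistency constraint that lets the cycled GAN train on unpaired grayscale/NIR sets can be sketched numerically. The linear "generators" below are placeholders standing in for the paper's U-net models; the L1 cycle loss itself is the standard formulation:

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L1 cycle loss for unpaired translation: F(G(x)) should
    recover x, and G(F(y)) should recover y."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

# Placeholder generators: gray -> NIR and NIR -> gray as exact inverses.
G = lambda gray: 2.0 * gray    # stand-in for the gray-to-NIR U-net
F = lambda nir: 0.5 * nir      # stand-in for the NIR-to-gray U-net

gray = np.random.default_rng(1).uniform(0, 1, (4, 4))
nir = np.random.default_rng(2).uniform(0, 1, (4, 4))
print(cycle_consistency_loss(G, F, gray, nir))  # 0.0 for exact inverses
```

During real training this term is minimized alongside the adversarial losses, which is what makes the unpaired setup workable.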
 

 
Author Julien Poujol; Cristhian A. Aguilera; Etienne Danos; Boris X. Vintimilla; Ricardo Toledo; Angel D. Sappa
  Title A Visible-Thermal Fusion Based Monocular Visual Odometry Type Conference Article
  Year 2015 Publication Iberian Robotics Conference (ROBOT 2015), International Conference on, Lisbon, Portugal, 2015 Abbreviated Journal  
  Volume 417 Issue Pages 517-528
  Keywords Monocular Visual Odometry; LWIR-RGB cross-spectral Imaging; Image Fusion  
  Abstract The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which are from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In particular, in the current work two different image fusion strategies are considered. Firstly, images from the visible and thermal spectrum are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both the monocular visible-spectrum and monocular infrared-spectrum cases are also provided, showing the validity of the proposed approach.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 44  
Permanent link to this record
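A minimal sketch of the DWT fusion strategy: a single-level Haar transform, averaged approximation coefficients, and max-absolute detail coefficients. This assumes pre-registered single-channel images with even side lengths; the wavelet family and fusion rules used in the paper may differ:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row high-pass
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,  # LL (approximation)
            (a[:, 0::2] - a[:, 1::2]) / 2.0,  # LH
            (d[:, 0::2] + d[:, 1::2]) / 2.0,  # HL
            (d[:, 0::2] - d[:, 1::2]) / 2.0)  # HH

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def dwt_fuse(visible, thermal):
    """Average the approximation bands, keep the stronger detail coefficient."""
    cv, ct = haar_dwt2(visible), haar_dwt2(thermal)
    ll = (cv[0] + ct[0]) / 2.0
    details = [np.where(np.abs(v) >= np.abs(t), v, t)
               for v, t in zip(cv[1:], ct[1:])]
    return haar_idwt2(ll, *details)
```

The fused image is then fed to a standard monocular VO pipeline, which is the evaluation setup the abstract describes.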
 

 
Author Jorge L. Charco; Angel D. Sappa; Boris X. Vintimilla; Henry O. Velesaca
  Title Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem Type Conference Article
  Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 Abbreviated Journal
  Volume 4 Issue Pages 498-505
  Keywords Relative Camera Pose Estimation, Siamese Architecture, Synthetic Data, Deep Learning, Multi-View Environments, Extrinsic Camera Parameters.  
  Abstract This paper presents a novel Siamese network architecture, as a variant of ResNet-50, to estimate the relative camera pose in multi-view environments. In order to improve the performance of the proposed model, a transfer learning strategy, based on synthetic images obtained from a virtual world, is considered. The transfer learning consists of first training the network using pairs of images from the virtual-world scenario considering different conditions (i.e., weather, illumination, objects, buildings, etc.); then, the learned weights of the network are transferred to the real case, where images from real-world scenarios are considered. Experimental results and comparisons with the state of the art show both improvements on the relative pose estimation accuracy using the proposed model, as well as further improvements when the transfer learning strategy (synthetic-world data – transfer learning – real-world data) is considered to tackle the limitation on training due to the reduced number of pairs of real images in most of the public data sets.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-989758402-2 Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 120  
Permanent link to this record
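The relative pose the Siamese network regresses follows directly from two cameras' absolute extrinsics. A numpy sketch under the common world-to-camera convention x_cam = R·x_world + t (the paper's exact parameterization may differ):

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Pose of camera 2 expressed in camera 1's frame,
    assuming world-to-camera extrinsics x_cam = R @ x_world + t."""
    R12 = R2 @ R1.T
    t12 = t2 - R12 @ t1
    return R12, t12

# Example: camera 2 is camera 1 rotated 90 degrees about Z and shifted.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = Rz, np.array([1.0, 0.0, 0.0])
R12, t12 = relative_pose(R1, t1, R2, t2)

# Consistency check: a world point must map the same way via either path.
xw = np.array([2.0, 3.0, 4.0])
x_c1 = R1 @ xw + t1
x_c2 = R2 @ xw + t2
assert np.allclose(R12 @ x_c1 + t12, x_c2)
```

Ground-truth relative poses computed this way from calibrated image pairs are what supervise the network in both the synthetic and real training stages.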
 

 
Author Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla; Lin Guo; Jiankun Hou; Armin Mehri; Parichehr Behjati Ardakani; Heena Patel; Vishal Chudasama; Kalpesh Prajapati; Kishor P. Upla; Raghavendra Ramachandra; Kiran Raja; Christoph Busch; Feras Almasri; Olivier Debeir; Sabari Nathan; Priya Kansal; Nolan Gutierrez; Bardia Mojra; William J. Beksi
  Title Thermal Image Super-Resolution Challenge – PBVS 2020 Type Conference Article
  Year 2020 Publication The 16th IEEE Workshop on Perception Beyond the Visible Spectrum at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) Abbreviated Journal
  Volume 2020-June Issue 9151059 Pages 432-439
  Keywords  
  Abstract This paper summarizes the top contributions to the first challenge on thermal image super-resolution (TISR), which was organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2020 workshop. In this challenge, a novel thermal image dataset is considered together with state-of-the-art approaches evaluated under a common framework. The dataset used in the challenge consists of 1021 thermal images obtained from three distinct thermal cameras at different resolutions (low-resolution, mid-resolution, and high-resolution), resulting in a total of 3063 thermal images. From each resolution, 951 images are used for training and 50 for testing, while the 20 remaining images are used for two proposed evaluations. The first evaluation consists of downsampling the low-resolution, mid-resolution, and high-resolution thermal images by x2, x3 and x4 respectively, and comparing their super-resolution results with the corresponding ground truth images. The second evaluation is comprised of obtaining the x2 super-resolution from a given mid-resolution thermal image and comparing it with the corresponding semi-registered high-resolution thermal image. Out of 51 registered participants, 6 teams reached the final validation phase.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 21607508 ISBN 978-172819360-1 Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 123  
Permanent link to this record
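The first evaluation (downsample by x2/x3/x4, super-resolve, compare against ground truth) hinges on PSNR. A numpy sketch, with nearest-neighbor resampling standing in for whatever resampling kernel the challenge actually used:

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB between reference and estimate."""
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def downsample(img, factor):
    """Nearest-neighbor downsampling by an integer factor (x2, x3, x4)."""
    return img[::factor, ::factor]

hr = np.arange(48 * 48, dtype=float).reshape(48, 48) % 256
lr = downsample(hr, 4)                                     # x4 low-res input
naive_sr = np.repeat(np.repeat(lr, 4, axis=0), 4, axis=1)  # trivial upsampling
print(round(psnr(hr, naive_sr), 2))  # baseline for a real SR model to beat
```

Challenge entries replace the trivial upsampling with a learned SR model and are ranked by exactly this kind of full-reference score.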
 

 
Author Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic
  Title Multi-sensor Fusion Module in a Fault Tolerant Perception System for Autonomous Vehicles Type Journal Article
  Year 2016 Publication Journal of Automation and Control Engineering (JOACE) Abbreviated Journal  
  Volume 4 Issue Pages 430-436
  Keywords Fault Tolerance, Data Fusion, Multi-sensor Fusion, Autonomous Vehicles, Perception System  
  Abstract Driverless vehicles are currently being tested on public roads in order to examine their ability to perform in a safe and reliable way in real-world situations. However, the long-term reliable operation of a vehicle's diverse sensors and the effects of potential sensor faults on the vehicle system have not been tested yet. This paper proposes a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented simulating faults by introducing displacements in the sensor information from the KITTI dataset.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 51  
Permanent link to this record
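The fault-simulation idea (injecting displacements into one sensor's measurements and limiting that sensor's influence on the fused estimate) can be sketched as a disagreement test. The median-distance threshold and equal weighting below are illustrative choices, not the paper's architecture:

```python
import numpy as np

def fuse_with_fault_masking(measurements, thresh=1.0):
    """Average sensor position estimates, discarding any sensor whose
    distance to the median estimate exceeds a threshold (suspected fault)."""
    m = np.asarray(measurements, dtype=float)
    median = np.median(m, axis=0)
    ok = np.linalg.norm(m - median, axis=1) <= thresh
    return m[ok].mean(axis=0), ok

# Three sensors observe the same object; sensor 2 has an injected displacement.
sensors = [[10.0, 5.0], [10.1, 4.9], [14.0, 5.0]]
fused, healthy = fuse_with_fault_masking(sensors)
print(fused, healthy)   # the displaced sensor is excluded from the fusion
```

This mirrors the experiment in the abstract: displacements are injected into otherwise consistent KITTI-style detections and the fusion stage is judged by how little the fault perturbs its output.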
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
  Title Vegetation Index Estimation from Monospectral Images Type Conference Article
  Year 2018 Publication 15th International Conference, Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal. Lecture Notes in Computer Science Abbreviated Journal  
  Volume 10882 Issue Pages 353-362
  Keywords  
  Abstract This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference of the red and infrared radiances over their sum. In other words, information from the red channel of an RGB image and the corresponding infrared spectral band is required for its computation. In the current work the NDVI index is estimated just from the red channel by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single-level structure, which combines at the final layer results from convolutional operations together with the given red channel with Gaussian noise to enhance details, resulting in a sharp NDVI image. Then, the discriminative model estimates the probability that the generated NDVI index came from the training dataset rather than being automatically generated. Experimental results with a large set of real images are provided, showing that a conditional GAN single-level model represents an acceptable approach to estimate the NDVI index.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 82  
Permanent link to this record
 

 
Author Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla; Jin Kim; Dogun Kim et al.
  Title Thermal Image Super-Resolution Challenge Results – PBVS 2022 Type Conference Article
  Year 2022 Publication Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24 Abbreviated Journal CONFERENCE
  Volume 2022-June Issue Pages 349-357
  Keywords  
  Abstract This paper presents results from the third Thermal Image Super-Resolution (TISR) challenge organized in the Perception Beyond the Visible Spectrum (PBVS) 2022 workshop. The challenge uses the same thermal image dataset as the first two challenges, with 951 training images and 50 validation images at each resolution. A set of 20 images was kept aside for testing. The evaluation tasks were to measure the PSNR and SSIM between the SR image and the ground truth (HR thermal noisy image downsampled by four), and also to measure the PSNR and SSIM between the SR image and the semi-registered HR image (acquired with another camera). The results outperformed those from last year's challenge, improving both evaluation metrics. This year, almost 100 teams registered for the challenge, showing the community's interest in this hot topic.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 175  
Permanent link to this record
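Alongside PSNR, entries are ranked by SSIM. A simplified sketch that computes SSIM globally over the whole image, whereas the standard metric averages scores over local (typically Gaussian-weighted) windows:

```python
import numpy as np

def global_ssim(x, y, peak=255.0):
    """Simplified SSIM over the whole image (the standard metric
    instead averages windowed scores); C1/C2 are the usual constants."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2) /
            ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

img = np.arange(64, dtype=float).reshape(8, 8)
print(global_ssim(img, img))   # identical images score 1.0
```

SSIM complements PSNR because it rewards preserved local structure rather than raw pixel-wise fidelity, which matters for the semi-registered second evaluation track.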
 

 
Author Dennis G. Romero; A. Frizera; Angel D. Sappa; Boris X. Vintimilla; T.F. Bastos
  Title A predictive model for human activity recognition by observing actions and context Type Conference Article
  Year 2015 Publication ACIVS 2015 (Advanced Concepts for Intelligent Vision Systems), International Conference on, Catania, Italy, 2015 Abbreviated Journal  
  Volume Issue Pages 323-333
  Keywords Edge width, Image blur, Defocus map, Edge model
  Abstract This paper presents a novel model to estimate human activities – a human activity is defined by a set of human actions. The proposed approach is based on the usage of Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work human activities are inferred considering not only visual analysis but also additional resources; external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can be later associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions, plus contextual information or other kinds of information that could be relevant to describe the activity. Experimental results with real data are provided showing the validity of the proposed approach.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 43  
Permanent link to this record
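The Bayesian-inference side of the model (updating the belief over activities as actions and context are observed) can be sketched as a sequential Bayes update. The activities, actions, and likelihood table below are invented for illustration and are not from the paper:

```python
import numpy as np

# Hypothetical activities and per-activity likelihoods of observed actions.
activities = ["cooking", "cleaning"]
likelihood = {
    "open_fridge": np.array([0.8, 0.2]),   # P(action | activity)
    "grab_sponge": np.array([0.1, 0.7]),
}

def bayes_update(prior, action):
    """Posterior over activities after observing one action."""
    post = prior * likelihood[action]
    return post / post.sum()

belief = np.array([0.5, 0.5])              # uniform prior over activities
for action in ["open_fridge", "open_fridge"]:
    belief = bayes_update(belief, action)
print(dict(zip(activities, belief.round(3))))
```

Each observed action sharpens the posterior; in the paper the same update is driven by RNN-recognized actions plus contextual cues, associated through the predefined semantic structure.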
 

 
Author Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic
  Title Towards Fault Tolerant Perception for autonomous vehicles: Local Fusion. Type Conference Article
  Year 2015 Publication IEEE 7th International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Siem Reap, 2015. Abbreviated Journal  
  Volume Issue Pages 253-258
  Keywords  
  Abstract Many robust sensor fusion strategies have been developed in order to reliably detect the surrounding environments of an autonomous vehicle. However, in real situations there is always the possibility that sensors or other components may fail. Thus, internal modules and sensors need to be monitored to ensure their proper function. This paper introduces a general view of a perception architecture designed to detect and classify obstacles in an autonomous vehicle's environment using a fault tolerant framework, and elaborates on the object detection and local fusion modules proposed in order to achieve the modularity and real-time processing required by the system.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 37  
Permanent link to this record