Author Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla
  Title Thermal Image Super-Resolution: a Novel Architecture and Dataset Type Conference Article
  Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 Abbreviated Journal  
  Volume 4 Issue Pages 111-119  
  Keywords Thermal images, Far Infrared, Dataset, Super-Resolution.  
  Abstract This paper proposes a novel CycleGAN architecture for thermal image super-resolution, together with a large dataset consisting of thermal images at different resolutions. The dataset has been acquired using three thermal cameras of different resolutions, which capture the same scenario at the same time. The cameras are mounted on a rig that keeps the baseline distance as small as possible in order to simplify the registration problem. The proposed architecture uses ResNet6 as the generator and PatchGAN as the discriminator. The proposed unsupervised super-resolution training (CycleGAN) is possible thanks to the aforementioned thermal images, i.e., images of the same scenario at different resolutions. The approach is evaluated on the dataset and compared with classical bicubic interpolation. The dataset and the network are available. (An illustrative architecture sketch follows this record.)
 
  ISBN 978-989758402-2
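The abstract above names ResNet6 as the generator and PatchGAN as the discriminator. As a rough illustration only (not the authors' released code), a minimal PatchGAN-style discriminator for single-channel thermal images could look like the following PyTorch sketch; the layer count, channel widths and use of instance normalization are assumptions.

import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: outputs a grid of real/fake scores, one per image patch."""
    def __init__(self, in_channels=1, base=64):
        super().__init__()
        def block(c_in, c_out, stride):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=stride, padding=1),
                nn.InstanceNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True))
        self.model = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            block(base, base * 2, 2),
            block(base * 2, base * 4, 2),
            block(base * 4, base * 8, 1),
            nn.Conv2d(base * 8, 1, 4, stride=1, padding=1))  # patch-level real/fake logits

    def forward(self, x):
        return self.model(x)

# Example: d = PatchDiscriminator(); scores = d(torch.randn(1, 1, 256, 256))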
 

 
Author Jorge L. Charco; Angel D. Sappa; Boris X. Vintimilla; Henry O. Velesaca
  Title Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem Type Conference Article
  Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 Abbreviated Journal  
  Volume 4 Issue Pages 498-505  
  Keywords Relative Camera Pose Estimation, Siamese Architecture, Synthetic Data, Deep Learning, Multi-View Environments, Extrinsic Camera Parameters.  
  Abstract This paper presents a novel Siamese network architecture, a variant of ResNet-50, to estimate the relative camera pose in multi-view environments. In order to improve the performance of the proposed model, a transfer learning strategy based on synthetic images obtained from a virtual world is considered. The transfer learning consists of first training the network with pairs of images from the virtual-world scenario under different conditions (i.e., weather, illumination, objects, buildings, etc.); the learned weights of the network are then transferred to the real case, where images from real-world scenarios are used. Experimental results and comparisons with the state of the art show improvements in relative pose estimation accuracy with the proposed model, as well as further improvements when the transfer learning strategy (synthetic-world data – transfer learning – real-world data) is applied to tackle the limited number of real image pairs in most public datasets. (An illustrative network sketch follows this record.)
 
  ISBN 978-989758402-2
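As an illustration of the kind of Siamese regressor described above (shared ResNet-50 branches whose features are combined and regressed to a relative translation and rotation), here is a minimal PyTorch sketch; the feature fusion by concatenation and the quaternion output parameterization are assumptions, not the authors' implementation.

import torch
import torch.nn as nn
from torchvision import models

class SiamesePoseNet(nn.Module):
    """Shared-weight ResNet-50 branches regressing a relative pose (translation, rotation)."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)   # random init; train from scratch or fine-tune
        backbone.fc = nn.Identity()                # keep the 2048-d global feature
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2 * 2048, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 7))                     # 3-d translation + 4-d quaternion

    def forward(self, img_a, img_b):
        feats = torch.cat([self.backbone(img_a), self.backbone(img_b)], dim=1)
        out = self.head(feats)
        t, q = out[:, :3], out[:, 3:]
        q = q / (q.norm(dim=1, keepdim=True) + 1e-8)  # normalize to a unit quaternion
        return t, q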
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
  Title Image patch similarity through a meta-learning metric based approach Type Conference Article
  Year 2019 Publication 15th International Conference on Signal Image Technology & Internet based Systems (SITIS 2019); Sorrento, Italy Abbreviated Journal  
  Volume Issue Pages 511-517  
  Keywords  
  Abstract Comparing image regions is one of the core operations used in computer vision for tasks such as image classification, scene understanding, object detection and recognition. This paper proposes a novel approach to determine the similarity of image regions (patches) in order to obtain the best representation of image patches. The problem has been studied by many researchers with different approaches; however, finding a good criterion to measure similarity between image regions remains a challenge. The present work tackles the problem with a few-shot, metric-based meta-learning framework that compares image regions and produces a similarity measure used to decide whether the compared patches match. The model is trained end-to-end from scratch. Experimental results show that the proposed approach effectively estimates the similarity of patches and obtains better results than state-of-the-art approaches.
 
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud
  Title Image Vegetation Index through a Cycle Generative Adversarial Network Type Conference Article
  Year 2019 Publication Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States Abbreviated Journal  
  Volume Issue Pages 1014-1021  
  Keywords  
  Abstract This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) just from an RGB image. The NDVI values are obtained by using the visible spectral band together with a synthetic near infrared (NIR) image produced by a cycle GAN. The cycle GAN network is able to obtain a NIR image from a given grayscale image. It is trained on an unpaired set of grayscale and NIR images, using a U-Net architecture and a multiple loss function (the grayscale images are obtained from the provided RGB images). Then, the NIR image estimated with the proposed cycle generative adversarial network is used to compute the NDVI index. Experimental results are provided showing the validity of the proposed approach, together with comparisons with previous approaches. (A worked NDVI computation follows this record.)
 
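For reference, NDVI is conventionally computed per pixel as (NIR - Red) / (NIR + Red). A minimal NumPy sketch, assuming the red channel comes from the RGB image and the NIR band is the one estimated by the generative model (the array names are hypothetical):

import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Per-pixel NDVI in [-1, 1]; eps avoids division by zero."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Example with placeholder data: red channel of an RGB image plus an estimated NIR band.
rgb = np.random.rand(256, 256, 3)          # stand-in RGB image, channels ordered R, G, B
nir_estimate = np.random.rand(256, 256)    # stand-in for the GAN-estimated NIR band
vegetation_index = ndvi(rgb[..., 0], nir_estimate)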
 

 
Author Rafael E. Rivadeneira; Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
  Title Thermal Image SuperResolution through Deep Convolutional Neural Network. Type Conference Article
  Year 2019 Publication 16th International Conference on Image Analysis and Recognition (ICIAR 2019); Waterloo, Canada Abbreviated Journal  
  Volume Issue Pages 417-426  
  Keywords  
  Abstract Due to the lack of thermal image datasets, a new dataset has been acquired and a super-resolution approach based on a deep convolutional neural network is proposed. This new thermal image dataset is used to carry out the image enhancement process. Different experiments have been performed: firstly, the proposed architecture was trained using only images from the visible spectrum, and later it was trained with images from the thermal spectrum. The results show that the network trained with thermal images obtains better results in enhancing the images while maintaining image details and perspective. The thermal dataset is available at http://www.cidis.espol.edu.ec/es/dataset  
 

 
Author Jorge L. Charco; Boris X. Vintimilla; Angel D. Sappa
  Title Deep learning based camera pose estimation in multi-view environment. Type Conference Article
  Year 2018 Publication 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) Abbreviated Journal  
  Volume Issue Pages 224-228  
  Keywords  
  Abstract This paper proposes the use of a deep learning network architecture for relative camera pose estimation in a multi-view environment. The proposed network is a variant of the AlexNet architecture, used as a regressor that predicts the relative translation and rotation as output. The proposed approach is trained from scratch on a large dataset that takes as input pairs of images from the same scene. This new architecture is compared with a previous approach using standard metrics, obtaining better results on the relative camera pose.  
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
  Title Cross-spectral image dehaze through a dense stacked conditional GAN based approach. Type Conference Article
  Year 2018 Publication 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) Abbreviated Journal  
  Volume Issue Pages 358-364  
  Keywords  
  Abstract This paper proposes a novel approach to remove haze from RGB images with the help of near infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). The implemented deep network receives, besides the hazy image, its corresponding image in the near infrared spectrum, which serves to accelerate the learning of the details and characteristics of the images. The model uses a triplet layer that allows independent learning on each channel of the visible spectrum image, so the haze is removed from each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results show that the proposed method effectively removes the haze from the images. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.  
 

 
Author Milton Mendieta; F. Panchana; B. Andrade; B. Bayot; C. Vaca; Boris X. Vintimilla; Dennis G. Romero
  Title Organ identification on shrimp histological images: A comparative study considering CNN and feature engineering. Type Conference Article
  Year 2018 Publication IEEE Ecuador Technical Chapters Meeting ETCM 2018. Cuenca, Ecuador Abbreviated Journal  
  Volume Issue Pages 1-6  
  Keywords  
  Abstract The identification of shrimp organs in biology using histological images is a complex task. Shrimp histological images pose a big challenge due to their texture and the similarity among classes. Feature engineering and convolutional neural networks (CNNs) are suitable image classification methods to assist biologists when performing organ detection. This work evaluates the Bag-of-Visual-Words (BOVW) and Pyramid Bag-of-Words (PBOW) models for image classification leveraging big data techniques, and transfer learning for the same classification task using a pre-trained CNN. A comparative analysis of these two techniques is performed, highlighting the characteristics of both approaches on the shrimp organ identification problem. (An illustrative transfer-learning sketch follows this record.)
 
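As an illustration of the transfer-learning side of the comparison (not the authors' code), a pre-trained CNN can be adapted to the organ-classification task by swapping its final layer and training only that head; the backbone choice and the number of classes below are assumptions.

import torch.nn as nn
from torchvision import models

NUM_ORGAN_CLASSES = 5   # hypothetical number of shrimp organ classes

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                     # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, NUM_ORGAN_CLASSES)  # new trainable head

# The model can then be trained on the histological images with a standard
# cross-entropy loss, updating only the parameters of model.fc.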
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
  Title Adaptive Harris Corners Detector Evaluated with Cross-Spectral Images Type Conference Article
  Year 2018 Publication International Conference on Information Technology & Systems (ICITS 2018). ICITS 2018. Advances in Intelligent Systems and Computing Abbreviated Journal  
  Volume 721 Issue Pages  
  Keywords  
  Abstract This paper proposes a novel approach that uses cross-spectral images to achieve better performance with the proposed Adaptive Harris corner detector, comparing the obtained results with those achieved on images from the visible spectrum. Images from the urban, field, old-building and country categories were used for the experiments; the variety of textures present in these images makes the verification of the proposal more challenging. The approach improves the detection of characteristic points by using cross-spectral images (NIR, G, B) and applying pruning techniques: the channel combination chosen for the fusion is the one that generates the largest variance of the merged pixel intensities, and therefore maximizes the entropy of the resulting cross-spectral image. Harris is one of the most widely used corner detection algorithms, so any improvement in its efficiency is an important contribution to the field of computer vision. The experiments conclude that including a NIR channel in the fused image greatly improves corner detection due to the better entropy of the resulting image; the fusion therefore also benefits subsequent processes such as object or pattern identification, classification and/or segmentation. (An illustrative Harris sketch follows this record.)
 
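For context only, the classical (non-adaptive) Harris detector on a fused single-channel image can be run with OpenCV as below; the paper's adaptive thresholding, entropy/variance-based channel selection and pruning steps are not reproduced here, and the file names and the simple averaging fusion are hypothetical.

import cv2
import numpy as np

# Hypothetical inputs: co-registered NIR and RGB images of the same scene.
nir = cv2.imread("scene_nir.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
rgb = cv2.imread("scene_rgb.png").astype(np.float32)        # OpenCV loads channels as B, G, R

# Simple (NIR, G, B) fusion into one intensity channel; the paper instead selects
# the channel combination that maximizes variance/entropy of the merged pixels.
fused = ((nir + rgb[:, :, 1] + rgb[:, :, 0]) / 3.0).astype(np.float32)

# Classical Harris response; blockSize/ksize/k are the usual OpenCV parameters.
response = cv2.cornerHarris(fused, blockSize=2, ksize=3, k=0.04)
corners = response > 0.01 * response.max()                  # fixed global threshold
print("detected corners:", int(corners.sum()))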
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud
  Title Deep Learning based Single Image Dehazing Type Conference Article
  Year 2018 Publication 14th IEEE Workshop on Perception Beyond the Visible Spectrum – In conjunction with CVPR 2018. Salt Lake City, Utah. USA Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract This paper proposes a novel approach to remove haze degradations in RGB images using a stacked conditional Generative Adversarial Network (GAN). It employs a triplet of GANs to remove the haze on each color channel independently. A multiple loss function scheme, applied over a conditional probabilistic model, is proposed. The proposed GAN architecture learns to remove the haze using, as conditioned input, the hazy images from which the clear images will be obtained. Such a formulation ensures fast model training convergence and homogeneous model generalization. Experiments show that the proposed method generates high-quality clear images. (An illustrative per-channel sketch follows this record.)
 
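The per-channel idea described above can be sketched as three independent generators, one per RGB channel, whose outputs are re-stacked; the tiny generator below is a placeholder for illustration, not the stacked conditional GAN described in the paper.

import torch
import torch.nn as nn

def tiny_generator():
    """Placeholder single-channel generator (stand-in for one GAN of the triplet)."""
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

class TripletDehazer(nn.Module):
    """One generator per color channel, each conditioned on its hazy channel."""
    def __init__(self):
        super().__init__()
        self.gens = nn.ModuleList([tiny_generator() for _ in range(3)])

    def forward(self, hazy_rgb):                       # hazy_rgb: (B, 3, H, W)
        channels = [g(hazy_rgb[:, i:i + 1]) for i, g in enumerate(self.gens)]
        return torch.cat(channels, dim=1)              # dehazed RGB estimate

# Example: out = TripletDehazer()(torch.rand(1, 3, 128, 128))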