
Author Jorge Alvarez Tello; Mireya Zapata; Dennys Paillacho
  Title Kinematic optimization of a robot head movements for the evaluation of human-robot interaction in social robotics. Type Conference Article
  Year 2019 Publication 10th International Conference on Applied Human Factors and Ergonomics and the Affiliated Conferences (AHFE 2019), Washington D.C.; United States. Advances in Intelligent Systems and Computing Abbreviated Journal  
  Volume 975 Issue Pages 108-118  
  Keywords  
  Abstract This paper presents the simplification of head movements based on the analysis of the biomechanical parameters of the head and neck at the mechanical and structural level, through CAD modeling and construction with additive printing in ABS/PLA, to implement non-verbal communication strategies and establish behavior patterns in social interaction. This is used in the MASHI (Multipurpose Assistant robot for Social Human-robot Interaction) experimental robotic telepresence platform, implemented by a display with a fish-eye camera along with a mechanism that permits 4 degrees of freedom (DoF). The mathematical-mechanical modeling for the kinematics governing the robot and its autonomy of movement comprises the Pitch, Roll, and Yaw movements, and their combination, to establish active communication through telepresence. For the computational implementation, the rotational matrix describing the movement is presented.
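The pitch, roll, and yaw movements described above combine into a single rotation matrix by multiplying the elementary rotations; the Z-Y-X (yaw-pitch-roll) ordering below is an illustrative assumption, not necessarily the convention used on the MASHI platform:

```python
import math

def rot_x(a):
    # elementary rotation about the x-axis (roll)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    # elementary rotation about the y-axis (pitch)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    # elementary rotation about the z-axis (yaw)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def head_rotation(yaw, pitch, roll):
    # combined orientation, Z-Y-X composition
    return matmul(rot_z(yaw), matmul(rot_y(pitch), rot_x(roll)))
```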
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved yes  
  Call Number gtsi @ user @ Serial 108  
 

 
Author Xavier Soria; Edgar Riba; Angel D. Sappa
  Title Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection Type Conference Article
  Year 2020 Publication 2020 IEEE Winter Conference on Applications of Computer Vision (WACV) Abbreviated Journal  
  Volume Issue 9093290 Pages 1912-1921  
  Keywords  
  Abstract This paper proposes a Deep Learning based edge detector, inspired by both HED (Holistically-Nested Edge Detection) and Xception networks. The proposed approach generates thin edge-maps that are plausible to human eyes; it can be used in any edge detection task without a previous training or fine-tuning process. As a second contribution, a large dataset with carefully annotated edges has been generated. This dataset has been used for training the proposed approach as well as the state-of-the-art algorithms for comparison. Quantitative and qualitative evaluations have been performed on different benchmarks, showing improvements with the proposed method when the F-measure of ODS and OIS is considered.  
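The ODS and OIS scores mentioned in the abstract are both built on the F-measure (harmonic mean of precision and recall), differing only in how the binarization threshold is chosen; a minimal sketch of the two aggregation schemes, with hypothetical precision/recall inputs:

```python
def f_measure(precision, recall):
    # harmonic mean of precision and recall
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def ods_f(per_threshold_pr):
    # ODS: one threshold for the whole dataset, chosen to maximize F
    # over dataset-aggregated (precision, recall) pairs
    return max(f_measure(p, r) for p, r in per_threshold_pr)

def ois_f(per_image_pr_curves):
    # OIS: best threshold chosen per image, then averaged over images
    return sum(max(f_measure(p, r) for p, r in curve)
               for curve in per_image_pr_curves) / len(per_image_pr_curves)
```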
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-172816553-0 Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 126  
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
  Title Vegetation Index Estimation from Monospectral Images Type Conference Article
  Year 2018 Publication 15th International Conference, Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal. Lecture Notes in Computer Science Abbreviated Journal  
  Volume 10882 Issue Pages 353-362  
  Keywords  
  Abstract This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference of the red and infrared radiances over their sum. In other words, information from the red channel of an RGB image and the corresponding infrared spectral band is required for its computation. In the current work the NDVI index is estimated just from the red channel by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single-level structure, which combines at the final layer the results of convolutional operations with the given red channel plus Gaussian noise to enhance details, resulting in a sharp NDVI image. The discriminative model then estimates the probability that the generated NDVI index came from the training dataset rather than being automatically generated. Experimental results with a large set of real images show that a Conditional GAN single-level model represents an acceptable approach to estimating the NDVI index.
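The NDVI definition quoted in the abstract (difference of NIR and red over their sum) can be written directly; a minimal per-pixel sketch over nested-list grayscale images, where the epsilon guard against division by zero is an added assumption:

```python
def ndvi(nir, red, eps=1e-6):
    # NDVI = (NIR - Red) / (NIR + Red), computed per pixel;
    # eps avoids division by zero on dark pixels
    return [[(n - r) / (n + r + eps) for n, r in zip(nir_row, red_row)]
            for nir_row, red_row in zip(nir, red)]
```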
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 82  
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud
  Title Image Vegetation Index through a Cycle Generative Adversarial Network Type Conference Article
  Year 2019 Publication Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States Abbreviated Journal  
  Volume Issue Pages 1014-1021  
  Keywords  
  Abstract This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) just from an RGB image. The NDVI values are obtained by using images from the visible spectral band together with a synthetic near infrared image obtained by a cycled GAN. The cycled GAN network is able to obtain a NIR image from a given gray scale image. It is trained on an unpaired set of gray scale and NIR images, using a U-net architecture and a multiple loss function (gray scale images are obtained from the provided RGB images). Then, the NIR image estimated with the proposed cycle generative adversarial network is used to compute the NDVI index. Experimental results are provided showing the validity of the proposed approach. Additionally, comparisons with previous approaches are also provided.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 106  
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud
  Title Deep Learning based Single Image Dehazing Type Conference Article
  Year 2018 Publication 14th IEEE Workshop on Perception Beyond the Visible Spectrum – In conjunction with CVPR 2018. Salt Lake City, Utah. USA Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract This paper proposes a novel approach to remove haze degradations in RGB images using a stacked conditional Generative Adversarial Network (GAN). It employs a triplet of GANs to remove the haze on each color channel independently. A multiple loss function scheme, applied over a conditional probabilistic model, is proposed. The proposed GAN architecture learns to remove the haze conditioned on the hazy input images from which the clear images are obtained. Such a formulation ensures fast model training convergence and homogeneous model generalization. Experiments show that the proposed method generates high-quality clear images.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 83  
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
  Title Cross-spectral image dehaze through a dense stacked conditional GAN based approach. Type Conference Article
  Year 2018 Publication 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) Abbreviated Journal  
  Volume Issue Pages 358-364  
  Keywords  
  Abstract This paper proposes a novel approach to remove haze from RGB images using near infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). Besides the hazy images, the implemented deep network receives their corresponding images in the near infrared spectrum, which serve to accelerate the learning of the details of the image characteristics. The model uses a triplet layer that allows learning each channel of the visible spectrum image independently, removing the haze on each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results have shown that the proposed method effectively removes the haze from the images. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 92  
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
  Title Adaptive Harris Corners Detector Evaluated with Cross-Spectral Images Type Conference Article
  Year 2018 Publication International Conference on Information Technology & Systems (ICITS 2018). ICITS 2018. Advances in Intelligent Systems and Computing Abbreviated Journal  
  Volume 721 Issue Pages  
  Keywords  
  Abstract This paper proposes a novel approach that uses cross-spectral images to achieve better performance with the proposed Adaptive Harris corner detector, comparing its results with those achieved on images of the visible spectrum. Images of the urban, field, old-building and country categories were used for the experiments, given the variety of textures present in these images, which makes verification of the proposal much more challenging. The new scope is to improve the detection of characteristic points using cross-spectral images (NIR, G, B) and pruning techniques: the combination of channels chosen for the fusion is the one that generates the largest variance based on the intensity of the merged pixels, and is therefore the one that maximizes the entropy of the resulting cross-spectral images. Harris is one of the most widely used corner detection algorithms, so any improvement in its efficiency is an important contribution to the field of computer vision. The experiments conclude that the inclusion of a NIR channel in the image, as a result of the combination of the spectra, greatly improves corner detection due to the better entropy of the fused image. The fusion process therefore improves the results obtained in subsequent processes such as identification of objects or patterns, classification and/or segmentation.
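For reference, the classical Harris response that the adaptive detector builds on accumulates a structure tensor of image gradients per pixel and scores it as det − k·trace². A minimal pure-Python sketch on a nested-list grayscale image; the 3×3 window and k = 0.04 are conventional defaults, not the paper's adaptive parameters:

```python
def harris_response(img, k=0.04):
    # img: 2D list of grayscale intensities
    h, w = len(img), len(img[0])
    # central-difference gradients (clamped at the borders)
    Ix = [[(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
           for x in range(w)] for y in range(h)]
    Iy = [[(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
           for x in range(w)] for y in range(h)]
    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # structure tensor summed over a 3x3 window
            sxx = syy = sxy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    sxx += gx * gx
                    syy += gy * gy
                    sxy += gx * gy
            det = sxx * syy - sxy * sxy
            trace = sxx + syy
            R[y][x] = det - k * trace * trace  # high at corners
    return R
```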
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 84  
 

 
Author Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla
  Title Thermal Image Super-Resolution: a Novel Architecture and Dataset Type Conference Article
  Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 Febrero 2020 Abbreviated Journal  
  Volume 4 Issue Pages 111-119  
  Keywords Thermal images, Far Infrared, Dataset, Super-Resolution.  
  Abstract This paper proposes a novel CycleGAN architecture for thermal image super-resolution, together with a large dataset consisting of thermal images at different resolutions. The dataset has been acquired using three thermal cameras of different resolutions, which acquire images of the same scenario at the same time. The thermal cameras are mounted on a rig, minimizing the baseline distance to ease the registration problem. The proposed architecture is based on ResNet6 as generator and PatchGAN as discriminator. The novelty of the proposed unsupervised super-resolution training (CycleGAN) is possible due to the existence of the aforementioned thermal images, i.e., images of the same scenario at different resolutions. The proposed approach is evaluated on the dataset and compared with classical bicubic interpolation. The dataset and the network are available.
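Super-resolution results are commonly compared against bicubic baselines using metrics such as PSNR; the abstract does not name the metric, so the following is only an illustrative sketch of how such a comparison is scored:

```python
import math

def psnr(reference, estimate, peak=255.0):
    # peak signal-to-noise ratio between two same-sized grayscale images
    h, w = len(reference), len(reference[0])
    mse = sum((reference[y][x] - estimate[y][x]) ** 2
              for y in range(h) for x in range(w)) / (h * w)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```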
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-989758402-2 Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 121  
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud
  Title Near InfraRed Imagery Colorization Type Conference Article
  Year 2018 Publication 25 th IEEE International Conference on Image Processing, ICIP 2018 Abbreviated Journal  
  Volume Issue Pages 2237-2241  
  Keywords Convolutional Neural Networks (CNN), Generative Adversarial Network (GAN), Infrared Imagery colorization  
  Abstract This paper proposes a stacked conditional Generative Adversarial Network-based method for Near InfraRed (NIR) imagery colorization. We propose a variant architecture of Generative Adversarial Network (GAN) that uses multiple loss functions over a conditional probabilistic generative model. We show that this new architecture/loss function yields better generalization and representation of the generated colored IR images. The proposed approach is evaluated on a large test dataset and compared to recent state-of-the-art methods using standard metrics.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 81  
 

 
Author Jorge L. Charco; Boris X. Vintimilla; Angel D. Sappa
  Title Deep learning based camera pose estimation in multi-view environment. Type Conference Article
  Year 2018 Publication 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) Abbreviated Journal  
  Volume Issue Pages 224-228  
  Keywords  
  Abstract This paper proposes the use of a deep learning network architecture for relative camera pose estimation in a multi-view environment. The proposed network is a variant of the AlexNet architecture, used as a regressor that predicts the relative translation and rotation as output. The proposed approach is trained from scratch on a large dataset, taking as input a pair of images of the same scene. This new architecture is compared with a previous approach using standard metrics, obtaining better results on the relative camera pose.  
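The relative pose such a network regresses can be computed in closed form when the absolute camera poses are known, which is how training targets are typically derived; a sketch assuming the world-to-camera convention x_cam = R·x_world + t (the convention is an assumption, the abstract does not specify one):

```python
def transpose(M):
    # transpose of a 3x3 matrix (inverse of a rotation matrix)
    return [[M[j][i] for j in range(3)] for i in range(3)]

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(M, v):
    # 3x3 matrix times 3-vector
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def relative_pose(R1, t1, R2, t2):
    # pose of camera 2 expressed in the frame of camera 1:
    # R12 = R2 * R1^T,  t12 = t2 - R12 * t1
    R12 = matmul(R2, transpose(R1))
    rv = matvec(R12, t1)
    t12 = [t2[i] - rv[i] for i in range(3)]
    return R12, t12
```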
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 93  