Author José Reyes; Axel Godoy; Miguel Realpe
  Title Uso de software de código abierto para fusión de imágenes agrícolas multiespectrales adquiridas con drones Type Conference Article
  Year 2019 Publication International Multi-Conference of Engineering, Education and Technology (LACCEI 2019); Montego Bay, Jamaica Abbreviated Journal  
  Volume 2019-July Issue Pages  
  Keywords  
  Abstract Drones, or unmanned aerial vehicles, are very useful for image acquisition, in a much simpler way than satellites or crewed aircraft. However, the images acquired by drones must be combined in some way to become valuable information about a field or crop. Different programs exist that take images and merge them into a single image, each with different characteristics (performance, accuracy, results, price, etc.). This study reviewed different open-source image fusion programs in order to establish which of them is most useful, specifically for use by small and medium-sized farmers in Ecuador. The results may be of interest to software designers, since open-source code makes it possible to modify and integrate the programs into a more streamlined workflow. Open-source software also reduces costs, since no license fees are required, which can translate into greater access to the technology for small and medium-sized farmers. As part of the results of this study, a publicly accessible repository has been created with the pre-processing algorithms needed to manipulate the images acquired by a multispectral camera and then obtain a complete map in RGB, CIR and NDVI formats.
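The repository mentioned in the abstract is not reproduced here, but the kind of pre-processing it describes can be illustrated. Below is a minimal sketch (not the repository's actual code) of building the CIR false-color composite named in the abstract, assuming the multispectral bands are already co-registered as NumPy arrays:

```python
import numpy as np

def cir_composite(nir: np.ndarray, red: np.ndarray, green: np.ndarray) -> np.ndarray:
    """Color-infrared (CIR) false-color composite: NIR -> R, Red -> G, Green -> B.
    Each band is scaled to [0, 1] independently so the result is displayable."""
    def scale(band: np.ndarray) -> np.ndarray:
        band = band.astype(np.float64)
        rng = band.max() - band.min()
        return (band - band.min()) / rng if rng > 0 else np.zeros_like(band)
    return np.stack([scale(nir), scale(red), scale(green)], axis=-1)

# Toy 2x2 bands standing in for a registered multispectral capture.
nir = np.array([[0.9, 0.8], [0.2, 0.1]])
red = np.array([[0.2, 0.3], [0.4, 0.5]])
green = np.array([[0.1, 0.2], [0.3, 0.4]])
print(cir_composite(nir, red, green).shape)  # (2, 2, 3)
```

In a CIR composite, healthy vegetation appears red because it reflects strongly in the near-infrared band mapped to the red display channel.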
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 102  
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
  Title Vegetation Index Estimation from Monospectral Images Type Conference Article
  Year 2018 Publication 15th International Conference, Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal. Lecture Notes in Computer Science Abbreviated Journal  
  Volume 10882 Issue Pages 353-362  
  Keywords  
  Abstract This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference of the red and infrared radiances over their sum. In other words, information from the red channel of an RGB image and the corresponding infrared spectral band are required for its computation. In the current work the NDVI index is estimated just from the red channel by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single-level structure, which combines at the final layer the results of convolutional operations with the given red channel plus Gaussian noise to enhance details, resulting in a sharp NDVI image. The discriminative model then estimates the probability that the generated NDVI index came from the training dataset rather than from the generator. Experimental results with a large set of real images are provided, showing that a Conditional GAN single-level model represents an acceptable approach to estimating the NDVI index.
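The NDVI definition quoted in the abstract (difference of NIR and red over their sum) can be written directly. A minimal NumPy sketch follows; the zero-denominator guard is an implementation assumption, not something stated in the paper:

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red); pixels with a zero denominator map to 0."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Toy 2x2 bands: healthy vegetation reflects strongly in NIR, weakly in red.
red = np.array([[0.1, 0.2], [0.3, 0.0]])
nir = np.array([[0.5, 0.4], [0.3, 0.0]])
print(ndvi(red, nir))  # top-left pixel: (0.5 - 0.1) / (0.5 + 0.1) ≈ 0.667
```

NDVI ranges over [-1, 1], with values near 1 indicating dense, healthy vegetation; the paper's contribution is estimating this map without the NIR band at all.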
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 82  
 

 
Author Henry O. Velesaca, Patricia L. Suárez, Dario Carpio, Rafael E. Rivadeneira, Ángel Sánchez, Angel D. Sappa
  Title Video Analytics in Urban Environments: Challenges and Approaches Type Book Chapter
  Year 2022 Publication ICT Applications for Smart Cities Part of the Intelligent Systems Reference Library book series Abbreviated Journal BOOK  
  Volume 224 Issue Pages 101-122  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 196  
 

 
Author Nayeth I. Solorzano, L. C. H., Leslie del R. Lima, Dennys F. Paillacho & Jonathan S. Paillacho
  Title Visual Metrics for Educational Videogames Linked to Socially Assistive Robots in an Inclusive Education Framework Type Conference Article
  Year 2022 Publication Smart Innovation, Systems and Technologies. International Conference in Information Technology & Education (ICITED 21), July 15-17 Abbreviated Journal  
  Volume 256 Issue Pages 119-132  
  Keywords  
  Abstract In gamification, the development of "visual metrics for educational video games linked to social assistance robots in the framework of inclusive education" seeks to provide support not only to regular children but also to children with specific psychosocial disabilities, such as those diagnosed with autism spectrum disorder (ASD). However, personalizing each child's experiences represents a limitation, especially for those with atypical behaviors. 'LOLY,' a social assistance robot, works together with mobile applications associated with the family of educational video game series called 'MIDI-AM,' forming a social robotic platform. This platform offers the user curricular digital content to reinforce the teaching-learning processes and motivate regular children and those with ASD. In the present study, technical and programmatic experiments and focus groups were carried out, using open-source facial recognition algorithms to monitor and evaluate the degree of user attention throughout the interaction. The objective is to evaluate the management of a social robot linked to educational video games through established metrics, which allow monitoring of the user's facial expressions during use, and to define a scenario that ensures consistency of the results for applicability in therapies and reinforcement of the teaching process, mainly adaptable for inclusive early childhood education.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 180  
 

 
Author Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla
  Title Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study Type Journal Article
  Year 2016 Publication Sensors Abbreviated Journal  
  Volume 16 Issue Pages 1-15  
  Keywords image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform  
  Abstract This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR).  
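The two-stage scheme the abstract describes (wavelet decomposition, then a per-subband fusion rule) can be sketched with a single-level Haar transform. The rule below, averaging the approximation coefficients and keeping the larger-magnitude detail coefficients, is one common choice from the literature, not necessarily the best setup the study identifies. A pure-NumPy sketch, assuming registered grayscale images with even dimensions:

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_step(x):
    """One 1-D orthonormal Haar analysis step along the last axis."""
    return ((x[..., 0::2] + x[..., 1::2]) / SQRT2,
            (x[..., 0::2] - x[..., 1::2]) / SQRT2)

def ihaar_step(lo, hi):
    """Inverse of haar_step (perfect reconstruction)."""
    out = np.empty(lo.shape[:-1] + (2 * lo.shape[-1],))
    out[..., 0::2] = (lo + hi) / SQRT2
    out[..., 1::2] = (lo - hi) / SQRT2
    return out

def dwt2_haar(img):
    """Single-level 2-D Haar DWT: approximation subband + 3 detail subbands."""
    swap = lambda a: a.swapaxes(-1, -2)
    lo, hi = haar_step(img)              # transform along width
    ll, lh = haar_step(swap(lo))         # transform along height
    hl, hh = haar_step(swap(hi))
    return swap(ll), (swap(lh), swap(hl), swap(hh))

def idwt2_haar(ll, details):
    """Inverse single-level 2-D Haar DWT."""
    lh, hl, hh = details
    swap = lambda a: a.swapaxes(-1, -2)
    lo = swap(ihaar_step(swap(ll), swap(lh)))
    hi = swap(ihaar_step(swap(hl), swap(hh)))
    return ihaar_step(lo, hi)

def wavelet_fuse(visible, infrared):
    """Average the approximation coefficients; keep larger-magnitude details."""
    a1, d1 = dwt2_haar(visible.astype(float))
    a2, d2 = dwt2_haar(infrared.astype(float))
    fused_a = (a1 + a2) / 2.0
    fused_d = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                    for x, y in zip(d1, d2))
    return idwt2_haar(fused_a, fused_d)
```

A quick sanity check on the transform pair: fusing an image with itself reconstructs it exactly, since averaging identical approximations and picking between identical details are both identities.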
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 47  
 

 
Author Xavier Soria; Angel D. Sappa; Riad Hammoud
  Title Wide-Band Color Imagery Restoration for RGB-NIR Single Sensor Image Type Journal Article
  Year 2018 Publication Sensors Abbreviated Journal  
  Volume 18 Issue 7 Pages 2059  
  Keywords  
  Abstract Multi-spectral RGB-NIR sensors have become ubiquitous in recent years. These sensors allow the visible and near-infrared spectral bands of a given scene to be captured at the same time. With such cameras, the acquired imagery has a compromised RGB color representation due to the near-infrared bands (700–1100 nm) cross-talking with the visible bands (400–700 nm). This paper proposes two deep learning-based architectures to recover the full RGB color images, thus removing the NIR information from the visible bands. The proposed approaches directly restore the high-resolution RGB image by means of convolutional neural networks. They are evaluated with several outdoor images; both architectures reach a similar performance when evaluated in different scenarios and using different similarity metrics, and both improve on state-of-the-art approaches.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 96  