G.A. Rubio & Wilton Agila. (2019). Transients analysis in Proton Exchange Membrane Fuel Cells: A critical review. In 8th International Conference on Renewable Energy Research and Applications (ICRERA 2019); Brasov, Romania (pp. 249–252).
Abstract: When a proton exchange membrane fuel cell operates, it produces, in addition to electrical energy, heat and water as by-products, which impact the performance of the cell. This paper analyzes the issue of transients and proposes a model that describes the dynamic operation of the fuel cell. The model considers the transients produced by electrochemical reactions, by water flow, and by heat transfer. Two-phase flow transients result in increased parasitic power losses, and thermal transients may result in flooding or dry-out of the GDL and membrane; understanding transient behavior is therefore critical for reliable and predictable performance from the cell.
|
Pabelco Zambrano, F. C., Héctor Villegas, Jonathan Paillacho, Doménica Pazmiño, Miguel Realpe. (2023). UAV Remote Sensing applications and current trends in crop monitoring and diagnostics: A Systematic Literature Review. In IEEE 13th International Conference on Pattern Recognition Systems (ICPRS 2023), July 4–7, 2023.
|
Stalin Francis Quinde. (2019). Un nuevo modelo BM3D-RNCA para mejorar la estimación de la imagen libre de ruido producida por el método BM3D [A new BM3D-RNCA model to improve the estimation of the noise-free image produced by the BM3D method]. M.Sc. thesis (Director: Ph.D. Angel Sappa). Ediciones FIEC-ESPOL.
|
José Reyes, Axel Godoy, & Miguel Realpe. (2019). Uso de software de código abierto para fusión de imágenes agrícolas multiespectrales adquiridas con drones [Use of open-source software for fusion of multispectral agricultural images acquired with drones]. In International Multi-Conference of Engineering, Education and Technology (LACCEI 2019); Montego Bay, Jamaica (Vol. 2019-July).
Abstract: Drones, or unmanned aerial vehicles, are very useful for image acquisition, in a much simpler way than satellites or airplanes. However, the images acquired by drones must be combined in some way to become valuable information about a field or crop. Different programs exist that take images and combine them into a single image, each with different characteristics (performance, accuracy, results, price, etc.). This study reviewed different open-source image-fusion programs in order to establish which of them is most useful, specifically for use by small and medium-sized farmers in Ecuador. The results may be of interest to software designers, since with open-source code it is possible to modify and integrate the programs into a more streamlined workflow. It also reduces costs, because no license fees are required, which can translate into greater access to the technology for small and medium-sized farmers. As part of the results of this study, a publicly accessible repository was created with the pre-processing algorithms needed to manipulate the images acquired by a multispectral camera and then obtain a complete map in RGB, CIR, and NDVI formats.
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Vegetation Index Estimation from Monospectral Images. In 15th International Conference on Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal. Lecture Notes in Computer Science (Vol. 10882, pp. 353–362).
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference of the red and infrared radiances over their sum. In other words, information from the red channel of an RGB image and the corresponding infrared spectral band are required for its computation. In the current work the NDVI index is estimated just from the red channel by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single-level structure, which at the final layer combines the results of convolutional operations with the given red channel plus Gaussian noise to enhance details, resulting in a sharp NDVI image. The discriminative model then estimates the probability that the generated NDVI index came from the training dataset rather than being automatically generated. Experimental results with a large set of real images are provided, showing that a conditional GAN single-level model represents an acceptable approach to estimating the NDVI index.
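The NDVI definition quoted in the abstract (difference of red and infrared radiances over their sum) can be sketched directly; the function name and the epsilon guard below are illustrative assumptions, not part of the paper:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    # Small epsilon avoids division by zero on pixels where both bands are zero.
    return (nir - red) / (nir + red + 1e-12)
```

Values fall in [-1, 1]; dense vegetation reflects strongly in the near-infrared band, pushing the index toward 1, which is what makes the estimation from the red channel alone a non-trivial learning problem.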
|
Henry O. Velesaca, Patricia L. Suárez, Dario Carpio, Rafael E. Rivadeneira, Ángel Sánchez, Angel D. Sappa. (2022). Video Analytics in Urban Environments: Challenges and Approaches. In ICT Applications for Smart Cities, part of the Intelligent Systems Reference Library book series (Vol. 224, pp. 101–122).
|
Nayeth I. Solorzano, L. C. H., Leslie del R. Lima, Dennys F. Paillacho & Jonathan S. Paillacho. (2022). Visual Metrics for Educational Videogames Linked to Socially Assistive Robots in an Inclusive Education Framework. In Smart Innovation, Systems and Technologies. International Conference in Information Technology & Education (ICITED 21), July 15–17 (Vol. 256, pp. 119–132).
Abstract: In gamification, the development of "visual metrics for educational video games linked to socially assistive robots in an inclusive education framework" seeks to provide support not only to regular children but also to children with specific psychosocial disabilities, such as those diagnosed with autism spectrum disorder (ASD). However, personalizing each child's experience represents a limitation, especially for those with atypical behaviors. 'LOLY,' a socially assistive robot, works together with mobile applications associated with the family of educational video game series called 'MIDI-AM,' forming a social robotic platform. This platform offers the user curricular digital content to reinforce teaching-learning processes and to motivate both regular children and those with ASD. In the present study, technical and programmatic experiments and focus groups were carried out, using open-source facial recognition algorithms to monitor and evaluate the degree of user attention throughout the interaction. The objective is to evaluate the management of a social robot linked to educational video games through established metrics, which allow the user's facial expressions to be monitored during use, and to define a scenario that ensures consistency of results for applicability in therapies and in reinforcing the teaching process, mainly adaptable to inclusive early childhood education.
|
Angel D. Sappa, Juan A. Carvajal, Cristhian A. Aguilera, Miguel Oliveira, Dennis G. Romero, & Boris X. Vintimilla. (2016). Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study. Sensors, Vol. 16, pp. 1–15.
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near-InfraRed (NIR) and Long-Wave InfraRed (LWIR).
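As a rough illustration of the kind of pipeline the study compares, the sketch below performs a single-level 2D Haar decomposition of each input, averages the low-frequency approximation coefficients, keeps the larger-magnitude detail coefficient at each position, and inverts the transform. The Haar wavelet, the two fusion rules, and all function names are illustrative assumptions; the paper evaluates many decomposition setups and merging strategies, not this particular one:

```python
import numpy as np

def haar_dwt2(x):
    # Single-level 2D Haar transform: split into approximation (cA)
    # and horizontal/vertical/diagonal detail bands (cH, cV, cD).
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(cA, cH, cV, cD):
    # Inverse of haar_dwt2 (exact reconstruction).
    h, w = cA.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (cA + cH + cV + cD) / 2
    x[0::2, 1::2] = (cA + cH - cV - cD) / 2
    x[1::2, 0::2] = (cA - cH + cV - cD) / 2
    x[1::2, 1::2] = (cA - cH - cV + cD) / 2
    return x

def fuse(visible, infrared):
    # Fusion rules (one of many possible choices): average the
    # approximations, keep the stronger detail coefficient per position.
    cv, ci = haar_dwt2(visible), haar_dwt2(infrared)
    cA = (cv[0] + ci[0]) / 2
    details = [np.where(np.abs(v) >= np.abs(i), v, i)
               for v, i in zip(cv[1:], ci[1:])]
    return haar_idwt2(cA, *details)
```

The max-magnitude rule for detail bands is a common heuristic because edges and textures, the content one wants to carry over from the infrared band, show up as large-magnitude wavelet coefficients.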
|
Xavier Soria, Angel D. Sappa, & Riad Hammoud. (2018). Wide-Band Color Imagery Restoration for RGB-NIR Single Sensor Image. Sensors, Vol. 18(7), 2059.
Abstract: Multi-spectral RGB-NIR sensors have become ubiquitous in recent years. These sensors allow the visible and near-infrared spectral bands of a given scene to be captured at the same time. With such cameras, the acquired imagery has a compromised RGB color representation due to the near-infrared bands (700–1100 nm) cross-talking with the visible bands (400–700 nm). This paper proposes two deep learning-based architectures to recover the full RGB color images, thus removing the NIR information from the visible bands. The proposed approaches directly restore the high-resolution RGB image by means of convolutional neural networks. They are evaluated with several outdoor images; both architectures reach a similar performance when evaluated in different scenarios and using different similarity metrics. Both of them improve on state-of-the-art approaches.
|