G.A. Rubio, & Wilton Agila. (2019). Sustainable Energy: A Strategic View of Fuel Cells. In 8th International Conference on Renewable Energy Research and Applications (ICRERA 2019); Brasov, Romania (pp. 239–243).
Abstract: Starting from a model of the proton exchange membrane fuel cell viewed in a strategic context, this paper develops the issue of energy as one of the pillars for achieving the sustainability of our planet. It considers projected energy scenarios up to the year 2060, hydrogen as a strategic energy vector, and the contribution of the fuel cell to solving the serious problems of environmental pollution and economic inequity that humanity faces, with applications in the power generation, telecommunications, and vehicle manufacturing industries.
|
Wilton Agila, Gomer Rubio, Francisco Vidal, & B. Lima. (2019). Real time Qualitative Model for estimate Water content in PEM Fuel Cell. In 8th International Conference on Renewable Energy Research and Applications (ICRERA 2019); Brasov, Romania (pp. 455–459).
Abstract: To maintain optimum performance of the electrical response of a fuel cell, real-time identification of malfunction situations is required. Critical fuel cell states depend, among other factors, on the variable demand of the electric load and are directly related to the membrane hydration level. Real-time perception of the relevant states in the PEM fuel cell state space is still a challenge for PEM fuel cell control systems. This work presents the design and implementation of a methodology based on fuzzy decision techniques that allows real-time characterization of the dehydration and flooding states of a PEM fuel cell. Real-time state estimation is accomplished through a perturbation-perception process applied to the PEM fuel cell, followed by analysis of the resulting voltage oscillations. The real-time implementation of the perturbation-perception algorithm to detect critical PEM fuel cell states is a novelty and a step toward controlling the PEM fuel cell so that it reaches and maintains optimal performance.
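A rough illustrative sketch of the kind of fuzzy decision logic this abstract describes: the voltage oscillation produced by a small load perturbation is fuzzified and mapped to a hydration state. The membership ranges, the example voltage trace, and the ripple-to-state mapping are assumptions made for illustration only, not values taken from the paper.

```python
# Sketch: fuzzy classification of a PEM fuel cell hydration state from the
# voltage oscillation observed after a small load perturbation.
# All numeric ranges below are illustrative assumptions, not the paper's values.

def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def hydration_state(voltage_trace):
    """Classify the cell state from the peak-to-peak voltage ripple (V)."""
    ripple = max(voltage_trace) - min(voltage_trace)
    memberships = {
        "flooding":    tri(ripple, 0.00, 0.02, 0.05),  # illustrative range
        "normal":      tri(ripple, 0.03, 0.06, 0.10),  # illustrative range
        "dehydration": tri(ripple, 0.08, 0.15, 0.25),  # illustrative range
    }
    return max(memberships, key=memberships.get), memberships

if __name__ == "__main__":
    trace = [0.62, 0.58, 0.65, 0.55, 0.66, 0.57]  # volts, after perturbation
    state, degrees = hydration_state(trace)
    print(state, degrees)
```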
|
Jorge Alvarez Tello, Mireya Zapata, & Dennys Paillacho. (2019). Kinematic optimization of a robot head movements for the evaluation of human-robot interaction in social robotics. In 10th International Conference on Applied Human Factors and Ergonomics and the Affiliated Conferences (AHFE 2019), Washington D.C.; United States. Advances in Intelligent Systems and Computing (Vol. 975, pp. 108–118).
Abstract: This paper presents the simplification of the head movements derived from the analysis of the biomechanical parameters of the head and neck at the mechanical and structural level, through CAD modeling and construction with additive printing in ABS/PLA, in order to implement non-verbal communication strategies and establish behavior patterns in social interaction. The work uses the MASHI (Multipurpose Assistant robot for Social Human-robot Interaction) experimental robotic telepresence platform, implemented as a display with a fish-eye camera together with a mechanism that provides 4 degrees of freedom (DoF). The mathematical-mechanical modeling of the kinematics that governs the robot and its autonomy of movement covers the pitch, roll, and yaw movements, and their combination, to establish active communication through telepresence. For the computational implementation, the rotation matrix describing the movement is presented.
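For reference, a minimal sketch of the pitch/roll/yaw rotation-matrix composition the abstract alludes to; the ZYX (yaw-pitch-roll) composition order and the angle convention are assumptions for illustration and may differ from the parametrization actually used for MASHI.

```python
# Sketch: composing pitch, roll and yaw rotations for a robot head.
# The ZYX (yaw-pitch-roll) order is an assumption for illustration.
import numpy as np

def rot_x(roll):
    c, s = np.cos(roll), np.sin(roll)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(pitch):
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def head_orientation(roll, pitch, yaw):
    """Combined rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)

if __name__ == "__main__":
    R = head_orientation(np.radians(5), np.radians(10), np.radians(20))
    print(R)
```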
|
Angel Morera, Angel Sánchez, Angel D. Sappa, & José F. Vélez. (2019). Robust Detection of Outdoor Urban Advertising Panels in Static Images. In 17th International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS 2019); Ávila, Spain. Communications in Computer and Information Science (Vol. 1047, pp. 246–256).
Abstract: One interesting publicity application for Smart City environments is recognizing brand information contained in urban advertising panels. For such a purpose, a preliminary stage is to accurately detect and locate the position of these panels in images. This work presents an effective solution to this problem using a Single Shot Detector (SSD) based on a deep neural network architecture that minimizes the number of false detections under multiple variable conditions regarding the panels and the scene. The experimental results achieved using the Intersection over Union (IoU) accuracy metric make this proposal applicable to real complex urban images.
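For reference, the Intersection over Union (IoU) metric used to assess the detections can be computed as in the following sketch; the (x1, y1, x2, y2) box format and the example coordinates are assumptions, and this is a generic definition rather than the authors' evaluation code.

```python
# Sketch of the Intersection over Union (IoU) metric for scoring a detected
# panel bounding box against its ground-truth annotation.
# Boxes are assumed to be (x1, y1, x2, y2) in pixel coordinates.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus intersection.
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

if __name__ == "__main__":
    detected = (120, 40, 300, 110)   # predicted panel box (illustrative)
    annotated = (130, 50, 310, 120)  # ground-truth panel box (illustrative)
    print(f"IoU = {iou(detected, annotated):.2f}")
```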
|
Patricia L. Suarez, Angel D. Sappa, Boris X. Vintimilla, & Riad I. Hammoud. (2019). Image Vegetation Index through a Cycle Generative Adversarial Network. In Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States (pp. 1014–1021).
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) just from an RGB image. The NDVI values are obtained by using images from the visible spectral band together with a synthetic near-infrared (NIR) image obtained by a cycled GAN. The cycled GAN network is able to obtain a NIR image from a given grayscale image. It is trained on an unpaired set of grayscale and NIR images, using a U-net architecture and a multiple-term loss function (the grayscale images are obtained from the provided RGB images). The NIR image estimated with the proposed cycle generative adversarial network is then used to compute the NDVI index. Experimental results are provided showing the validity of the proposed approach, and comparisons with previous approaches are also provided.
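For reference, once a (synthetic) NIR band is available the NDVI follows the standard definition NDVI = (NIR - Red) / (NIR + Red), as in the sketch below; the example arrays, value ranges, and the epsilon term are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: per-pixel NDVI from a NIR band and the red channel of an RGB image.
# NDVI = (NIR - Red) / (NIR + Red); values and shapes below are illustrative.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Per-pixel NDVI in [-1, 1] from NIR and red bands scaled to [0, 1]."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

if __name__ == "__main__":
    # Tiny illustrative bands; in the paper the NIR band would come from the
    # cycle GAN, not from a sensor.
    nir_band = np.array([[0.8, 0.6], [0.3, 0.7]])
    red_band = np.array([[0.2, 0.3], [0.4, 0.1]])
    print(ndvi(nir_band, red_band))
```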
|
Armin Mehri, & Angel D. Sappa. (2019). Colorizing Near Infrared Images through a Cyclic Adversarial Approach of Unpaired Samples. In Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States (pp. 971–979).
Abstract: This paper presents a novel approach for colorizing near-infrared (NIR) images. The approach is based on image-to-image translation using a cycle-consistent adversarial network that learns the color channels from unpaired data, so the architecture is able to handle unpaired datasets. As generators, the approach uses tailored networks that require less computation time, converge faster, and generate high-quality samples. The obtained results have been evaluated quantitatively, using standard evaluation metrics, and qualitatively, showing considerable improvements with respect to the state of the art.
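A minimal sketch of the cycle-consistency term underlying this kind of unpaired translation: two generators G (NIR to RGB) and F (RGB to NIR) are trained so that F(G(x)) stays close to x and G(F(y)) stays close to y. The toy generators, batch shapes, and weight lam are placeholders; the authors' tailored generator networks are not reproduced here.

```python
# Sketch: L1 cycle-consistency loss for unpaired NIR <-> RGB translation.
import torch
import torch.nn as nn

def cycle_consistency_loss(G, F, nir_batch, rgb_batch, lam=10.0):
    """lam * (||F(G(nir)) - nir||_1 + ||G(F(rgb)) - rgb||_1)."""
    l1 = nn.L1Loss()
    forward_cycle = l1(F(G(nir_batch)), nir_batch)
    backward_cycle = l1(G(F(rgb_batch)), rgb_batch)
    return lam * (forward_cycle + backward_cycle)

if __name__ == "__main__":
    # Toy 1x1-conv "generators" just to make the sketch runnable.
    G = nn.Conv2d(1, 3, kernel_size=1)   # NIR (1 channel) -> RGB (3 channels)
    F = nn.Conv2d(3, 1, kernel_size=1)   # RGB (3 channels) -> NIR (1 channel)
    nir = torch.rand(4, 1, 64, 64)
    rgb = torch.rand(4, 3, 64, 64)
    print(cycle_consistency_loss(G, F, nir, rgb).item())
```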
|
Jorge Alvarez, Mireya Zapata, & Dennys Paillacho. (2019). Mechanical Design of a spatial mechanism for the robot head movements in social robotics for the evaluation of Human-Robot Interaction. In 2nd International Conference on Human Systems Engineering and Design: Future Trends and Applications (IHSED 2019); Munich, Germany (Vol. 1026, pp. 160–165).
|
Rafael E. Rivadeneira, Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2019). Thermal Image SuperResolution through Deep Convolutional Neural Network. In 16th International Conference on Image Analysis and Recognition (ICIAR 2019); Waterloo, Canada (pp. 417–426).
Abstract: Due to the lack of thermal image datasets, a new dataset has been acquired in order to propose a super-resolution approach using a deep convolutional neural network schema. Different experiments have been carried out: firstly, the proposed architecture was trained using only images of the visible spectrum, and later it was trained with images of the thermal spectrum. The results show that, with the network trained on thermal images, better results are obtained in the image enhancement process, maintaining the image details and perspective. The thermal dataset is available at http://www.cidis.espol.edu.ec/es/dataset
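A generic SRCNN-style sketch of single-image super-resolution applied to a one-channel thermal image, included only to illustrate the task; the layer sizes and upsampling scheme are assumptions for illustration and not the architecture proposed in the paper.

```python
# Sketch: generic super-resolution CNN for a single-channel thermal image
# (bicubic upsampling followed by a small refinement network).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, x):
        # Upsample the low-resolution thermal image, then refine it.
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        return self.net(x)

if __name__ == "__main__":
    low_res = torch.rand(1, 1, 60, 80)   # single-channel thermal image
    print(TinySR()(low_res).shape)       # -> torch.Size([1, 1, 120, 160])
```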
|
José Reyes, Axel Godoy, & Miguel Realpe. (2019). Uso de software de código abierto para fusión de imágenes agrícolas multiespectrales adquiridas con drones [Use of open-source software for the fusion of multispectral agricultural images acquired with drones]. In International Multi-Conference of Engineering, Education and Technology (LACCEI 2019); Montego Bay, Jamaica (Vol. 2019-July).
Abstract: Drones, or unmanned aircraft, are very useful for image acquisition, in a much simpler way than satellites or airplanes. However, the images acquired by drones must be combined in some way to become valuable information about a field or crop. There are different programs that take images and merge them into a single image, each with different characteristics (performance, accuracy, results, price, etc.). In this study, different open-source programs for image fusion were reviewed in order to establish which of them is most useful, specifically for use by small and medium-sized farmers in Ecuador. The results may be of interest to software designers, since, by using open-source code, it is possible to modify and integrate the programs into a more streamlined workflow. It also allows costs to be reduced, because no license fees are required, which can translate into greater access to the technology for small and medium-sized farmers. As part of the results of this study, a publicly accessible repository has been created with the pre-processing algorithms needed to manipulate the images acquired by a multispectral camera and then obtain a complete map in RGB, CIR, and NDVI formats.
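As an example of the kind of pre-processing step the repository mentioned above addresses, the sketch below stacks separate multispectral bands into a color-infrared (CIR) composite (NIR mapped to the red channel, red to green, green to blue); the band names, value ranges, and scaling are assumptions and are not tied to any specific camera or to the repository's actual code.

```python
# Sketch: build an 8-bit color-infrared (CIR) composite from separate
# multispectral bands, using the NIR->R, red->G, green->B convention.
import numpy as np

def cir_composite(nir, red, green):
    """Stack bands (values assumed in [0, 1]) into an 8-bit CIR image."""
    cir = np.stack([nir, red, green], axis=-1)
    return (np.clip(cir, 0.0, 1.0) * 255).astype(np.uint8)

if __name__ == "__main__":
    h, w = 4, 4                      # tiny illustrative image size
    nir = np.random.rand(h, w)
    red = np.random.rand(h, w)
    green = np.random.rand(h, w)
    print(cir_composite(nir, red, green).shape)  # -> (4, 4, 3)
```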
|
Roberto Jacome Galarza, Miguel-Andrés Realpe-Robalino, Luis-Antonio Chamba-Eras, Marlon-Santiago Viñán-Ludeña, & Javier-Francisco Sinche-Freire. (2019). Computer vision for image understanding. A comprehensive review. In International Conference on Advances in Emerging Trends and Technologies (ICAETT 2019); Quito, Ecuador (pp. 248–259).
Abstract: Computer Vision has its own Turing test: can a machine describe the contents of an image or a video the way a human being would? In this paper, the progress of Deep Learning for image recognition is analyzed in order to answer this question. In recent years, Deep Learning has considerably increased the precision rate of many tasks related to computer vision. Many datasets of labeled images are now available online, which leads to pre-trained models for many computer vision applications. In this work, we gather information on the latest techniques for image understanding and description. We conclude that the combination of Natural Language Processing (using Recurrent Neural Networks and Long Short-Term Memory) with Image Understanding (using Convolutional Neural Networks) could bring new types of powerful and useful applications in which the computer will be able to answer questions about the content of images and videos. Building datasets of labeled images requires a great deal of work, and most of these datasets are built using crowd work. These new applications have the potential to raise human-machine interaction to new levels of usability and user satisfaction.
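A minimal sketch of the CNN-plus-LSTM combination the review highlights for image captioning: a convolutional encoder produces an image feature that conditions an LSTM decoder over word embeddings. All layer sizes, the vocabulary size, and the toy encoder are illustrative assumptions, not a model from any surveyed paper.

```python
# Sketch: a tiny CNN encoder + LSTM decoder for image captioning.
import torch
import torch.nn as nn

class CaptionNet(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256):
        super().__init__()
        # Tiny CNN encoder: image -> feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # Prepend the image feature as the first "token" of the sequence.
        feats = self.encoder(images).unsqueeze(1)
        seq = torch.cat([feats, self.embed(captions)], dim=1)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)            # per-step vocabulary logits

if __name__ == "__main__":
    model = CaptionNet()
    imgs = torch.rand(2, 3, 64, 64)
    caps = torch.randint(0, 1000, (2, 12))
    print(model(imgs, caps).shape)         # -> torch.Size([2, 13, 1000])
```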
|