|
Lukas Danev, Marten Hamann, Nicolas Fricke, Tobias Hollarek, & Dennys Paillacho. (2017). Development of animated facial expressions to express emotions in a robot: RobotIcon. In IEEE Ecuador Technical Chapters Meeting (ETCM) (Vol. 2017-January, pp. 1–6).
|
|
|
Xavier Soria, Angel D. Sappa, & Arash Akbarinia. (2017). Multispectral Single-Sensor RGB-NIR Imaging: New Challenges and Opportunities. In The 7th International Conference on Image Processing Theory, Tools and Applications (pp. 1–6).
|
|
|
Milton Mendieta, F. Panchana, B. Andrade, B. Bayot, C. Vaca, Boris X. Vintimilla, et al. (2018). Organ identification on shrimp histological images: A comparative study considering CNN and feature engineering. In IEEE Ecuador Technical Chapters Meeting ETCM 2018. Cuenca, Ecuador (pp. 1–6).
Abstract: The identification of shrimp organs in biology using histological images is a complex task. Shrimp histological images pose a big challenge due to their texture and the similarity among classes. Image classification using feature engineering and convolutional neural networks (CNN) are suitable methods to assist biologists when performing organ detection. This work evaluates the Bag-of-Visual-Words (BOVW) and Pyramid Bag-of-Words (PBOW) models for image classification leveraging big data techniques, and transfer learning for the same classification task by using a pre-trained CNN. A comparative analysis of these two different techniques is performed, highlighting the characteristics of both approaches on the shrimp organ identification problem.
|
|
|
Patricia L. Suarez, Angel D. Sappa, Boris X. Vintimilla, & Riad I. Hammoud. (2018). Near InfraRed Imagery Colorization. In 25th IEEE International Conference on Image Processing, ICIP 2018 (pp. 2237–2241).
Abstract: This paper proposes a stacked conditional Generative Adversarial Network-based method for Near InfraRed (NIR) imagery colorization. We propose a variant architecture of Generative Adversarial Network (GAN) that uses multiple loss functions over a conditional probabilistic generative model. We show that this new architecture/loss function yields better generalization and representation of the generated colored IR images. The proposed approach is evaluated on a large test dataset and compared to recent state-of-the-art methods using standard metrics.
Index Terms: Convolutional Neural Networks (CNN), Generative Adversarial Network (GAN), Infrared Imagery Colorization.
|
|
|
Wilton Agila, Gomer Rubio, L. Miranda, & L. Vázquez. (2018). Qualitative Model of Control in the Pressure Stabilization of PEM Fuel Cell. In 7th International Conference on Renewable Energy Research and Applications, ICRERA 2018. Paris, France (pp. 1221–1226).
Abstract: This work describes an approximate reasoning technique to deal with the non-linearity that occurs in the stabilization of the pressure of anodic and cathodic gases of a proton exchange membrane (PEM) fuel cell. The implementation of a supervisory element in the stabilization of the pressure of the PEM cell is described. The fuzzy supervisor is a reference controller: it varies the reference value given to the classic low-level Proportional–Integral–Derivative (PID) controller according to the rate of change of the measured pressure and the change in the pressure error. The objective of the fuzzy supervisor is to achieve a rapid time response of the pressure variable, avoiding unwanted overshoots with respect to the reference value. A comparative analysis against the classic PID control is detailed to evaluate the operation of the "fuzzy supervisor", with different flow values and different sizes of active area of the PEM cell (electric power generated).
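The supervisory scheme the abstract describes can be sketched in a few lines of Python. This is a minimal illustration, not the authors' controller: the plant model, membership breakpoints, and all gains below are assumptions chosen only to show how a fuzzy rule can soften the reference handed to a low-level PID loop.

```python
# Illustrative sketch of a fuzzy reference supervisor over a classic PID
# controller: the supervisor pulls the setpoint toward the current
# pressure when the pressure is rising fast AND the error is shrinking
# fast, which damps the approach and curbs overshoot. The first-order
# "plant" and every numeric constant here are assumptions for demo only.

def ramp(x, a, b):
    """Saturating ramp membership: 0 below a, 1 above b, linear between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def fuzzy_reference(target, pressure, dp_dt, de_dt):
    """Return a softened reference for the PID loop."""
    fast = ramp(abs(dp_dt), 0.2, 1.5)    # "pressure changing fast"
    closing = ramp(-de_dt, 0.2, 1.5)     # "error decreasing fast"
    damping = min(fast, closing)         # fuzzy AND: rule strength in [0, 1]
    # Pull the reference up to 30% of the way back toward the measurement.
    return pressure + (1.0 - 0.3 * damping) * (target - pressure)

def simulate(steps=300, dt=0.05, target=2.0, kp=2.0, ki=0.5, kd=0.1):
    """Supervised PID driving a toy first-order lag plant dp/dt = u - p."""
    p = prev_p = 0.0
    integral = 0.0
    prev_sp_err = target - p             # setpoint error, for the supervisor
    prev_pid_err = target - p            # reference error, for the PID
    trace = []
    for _ in range(steps):
        dp_dt = (p - prev_p) / dt
        sp_err = target - p
        de_dt = (sp_err - prev_sp_err) / dt
        ref = fuzzy_reference(target, p, dp_dt, de_dt)
        pid_err = ref - p
        integral += pid_err * dt
        u = kp * pid_err + ki * integral + kd * (pid_err - prev_pid_err) / dt
        prev_sp_err, prev_pid_err, prev_p = sp_err, pid_err, p
        p += (u - p) * dt                # Euler step of the toy plant
        trace.append(p)
    return trace
```

Calling `simulate()` returns the pressure trace of the toy plant; the fuzzy term only intervenes while the pressure is moving quickly toward the setpoint, and fades out as the loop settles, so the steady-state behavior is that of the underlying PID.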
|
|
|
Xavier Soria, & Angel D. Sappa. (2018). Improving Edge Detection in RGB Images by Adding NIR Channel. In 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) (pp. 266–273).
|
|
|
Xavier Soria, Angel D. Sappa, & Riad Hammoud. (2018). Wide-Band Color Imagery Restoration for RGB-NIR Single Sensor Image. Sensors, Vol. 18, Issue 7, 2059, 2018.
Abstract: Multi-spectral RGB-NIR sensors have become ubiquitous in recent years. These sensors allow the visible and near-infrared spectral bands of a given scene to be captured at the same time. With such cameras, the acquired imagery has a compromised RGB color representation due to near-infrared bands (700–1100 nm) cross-talking with the visible bands (400–700 nm). This paper proposes two deep learning-based architectures to recover the full RGB color images, thus removing the NIR information from the visible bands. The proposed approaches directly restore the high-resolution RGB image by means of convolutional neural networks. They are evaluated with several outdoor images; both architectures reach a similar performance when evaluated in different scenarios and using different similarity metrics. Both of them improve the state of the art approaches.
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Cross-spectral image dehaze through a dense stacked conditional GAN based approach. In 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) (pp. 358–364).
Abstract: This paper proposes a novel approach to remove haze from RGB images using near infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). The architecture of the implemented deep network receives, besides the image with haze, its corresponding image in the near infrared spectrum, which serves to accelerate the learning of the details of the image characteristics. The model uses a triplet layer that allows independent learning of each channel of the visible spectrum image, so the haze is removed on each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results have shown that the proposed method effectively removes the haze from the images. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
|
|
|
Dennys Paillacho, N. S., Michael Arce, María Plues, & Edwin Eras. (2023). Advanced metrics to evaluate autistic children's attention and emotions from facial characteristics using a human robot-game interface. In Communications in Computer and Information Science. 11th Conferencia Ecuatoriana de Tecnologías de la Información y Comunicación TICEC 2023 (Vol. 1885 CCIS, pp. 234–247).
|
|
|
Marta Diaz, Dennys Paillacho, & Cecilio Angulo. (2015). Evaluating Group-Robot Interaction in Crowded Public Spaces: A Week-Long Exploratory Study in the Wild with a Humanoid Robot Guiding Visitors Through a Science Museum. International Journal of Humanoid Robotics, Vol. 12.
Abstract: This paper describes an exploratory study on group interaction with a robot guide in an open, large-scale, busy environment. For an entire week a humanoid robot was deployed in the popular Cosmocaixa Science Museum in Barcelona and guided hundreds of people through the museum facilities. The main goal of this experience is to study, in the wild, the episodes of the robot guiding visitors to a requested destination, focusing on the group behavior during displacement. The follow-me walking behavior and the face-to-face communication in a populated environment are analyzed in terms of guide–visitor interaction, grouping patterns, and spatial formations. Results from observational data show that the space configurations spontaneously formed by the robot guide and visitors walking together did not always meet the robot's communicative and navigational requirements for successful guidance. Therefore, additional verbal and nonverbal prompts must be considered to effectively regulate the walking-together and follow-me behaviors. Finally, we discuss lessons learned and recommendations for the robot's spatial behavior in dense, crowded scenarios.
|
|