|
Juan A. Carvajal, Dennis G. Romero, & Angel D. Sappa. (2017). Fine-tuning deep convolutional networks for lepidopterous genus recognition. Lecture Notes in Computer Science, Vol. 10125 LNCS, pp. 467–475.
|
|
|
Henry O. Velesaca, S. A., Patricia L. Suarez, Ángel Sanchez & Angel D. Sappa. (2020). Off-the-Shelf Based System for Urban Environment Video Analytics. In The 27th International Conference on Systems, Signals and Image Processing (IWSSIP 2020) (Vol. 2020-July, pp. 459–464).
Abstract: This paper presents the design and implementation details of a system built by using off-the-shelf algorithms for urban video analytics. The system allows connection to public video surveillance camera networks to obtain the information necessary to generate statistics from urban scenarios (e.g., number of vehicles, types of cars, direction, number of persons, etc.). The obtained information could be used not only for traffic management but also to estimate the carbon footprint of urban scenarios. As a case study, a university campus is selected to evaluate the performance of the proposed system. The system is implemented in a modular way so that it can be used as a testbed to evaluate different algorithms. Implementation results are provided showing the validity and utility of the proposed approach.
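As a rough illustration of the kind of modular analytics loop the abstract describes (not the authors' implementation), the following Python sketch turns per-frame detections from an off-the-shelf detector into simple counts; the detector is stubbed out here and its interface is an assumption.

    # Minimal sketch: per-frame detections aggregated into urban statistics.
    from collections import Counter

    def detect_objects(frame):
        """Placeholder for an off-the-shelf detector (e.g., a pretrained model).
        Assumed to return a list of (class_name, bounding_box) tuples; stubbed here."""
        return []

    def analyze_stream(frames):
        counts = Counter()
        for frame in frames:
            for class_name, _box in detect_objects(frame):
                counts[class_name] += 1          # e.g., 'car', 'bus', 'person'
        return counts

    if __name__ == "__main__":
        print(analyze_stream(frames=[]))         # empty stream -> empty statistics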
|
|
|
Wilton Agila, Gomer Rubio, Francisco Vidal, & B. Lima. (2019). Real time Qualitative Model for estimate Water content in PEM Fuel Cell. In 8th International Conference on Renewable Energy Research and Applications (ICRERA 2019); Brasov, Romania (pp. 455–459).
Abstract: To maintain optimum performance of the electrical response of a fuel cell, real-time identification of malfunction situations is required. Critical fuel cell states depend, among other factors, on the variable demand of electric load and are directly related to the membrane hydration level. The real-time perception of relevant states in the PEM fuel cell state space is still a challenge for PEM fuel cell control systems. The current work presents the design and implementation of a methodology based upon fuzzy decision techniques that allows real-time characterization of the dehydration and flooding states of a PEM fuel cell. Real-time state estimation is accomplished through a perturbation-perception process on the PEM fuel cell and subsequent voltage oscillation analysis. The real-time implementation of the perturbation-perception algorithm to detect PEM fuel cell critical states is a novelty and a step towards controlling the PEM fuel cell to reach and maintain optimal performance.
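Purely as a toy illustration of fuzzy-style state characterization from a voltage response (not the paper's model), the sketch below maps an oscillation amplitude to a hydration state; the membership shapes, thresholds, and the direction of the mapping are assumptions.

    # Toy fuzzy-style rule: illustrative only, thresholds are not from the paper.
    def membership(x, low, high):
        """Linear membership degree of x on the ramp [low, high], clipped to [0, 1]."""
        if x <= low:
            return 0.0
        if x >= high:
            return 1.0
        return (x - low) / (high - low)

    def hydration_state(voltage_oscillation_mv):
        # Assumed mapping: large oscillations -> flooding, very small -> dehydration.
        flooding = membership(voltage_oscillation_mv, 20.0, 60.0)
        dehydration = 1.0 - membership(voltage_oscillation_mv, 5.0, 20.0)
        if flooding > max(dehydration, 0.5):
            return "flooding"
        if dehydration > 0.5:
            return "dehydration"
        return "normal"

    print(hydration_state(45.0))  # -> "flooding" with these illustrative thresholds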
|
|
|
Rafael E. Rivadeneira, Angel D. Sappa, Boris X. Vintimilla, Lin Guo, Jiankun Hou, Armin Mehri, et al. (2020). Thermal Image Super-Resolution Challenge – PBVS 2020. In The 16th IEEE Workshop on Perception Beyond the Visible Spectrum at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) (Vol. 2020-June, pp. 432–439).
Abstract: This paper summarizes the top contributions to the first challenge on thermal image super-resolution (TISR), which was organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2020 workshop. In this challenge, a novel thermal image dataset is considered together with state-of-the-art approaches evaluated under a common framework. The dataset used in the challenge consists of 1021 thermal images, obtained from three distinct thermal cameras at different resolutions (low-resolution, mid-resolution, and high-resolution), resulting in a total of 3063 thermal images. From each resolution, 951 images are used for training and 50 for testing, while the 20 remaining images are used for two proposed evaluations. The first evaluation consists of downsampling the low-resolution, mid-resolution, and high-resolution thermal images by x2, x3 and x4 respectively, and comparing their super-resolution results with the corresponding ground truth images. The second evaluation consists of obtaining the x2 super-resolution from a given mid-resolution thermal image and comparing it with the corresponding semi-registered high-resolution thermal image. Out of 51 registered participants, 6 teams reached the final validation phase.
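A minimal sketch of the first evaluation protocol described above, assuming only NumPy: an image is downsampled by each factor, "super-resolved" (a trivial nearest-neighbour upscale stands in for an actual SR model), and compared with the original via PSNR. The exact metrics and preprocessing used in the challenge are not reproduced here.

    # Sketch of the downsample-then-compare evaluation; not the official tooling.
    import numpy as np

    def downsample(img, factor):
        h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
        img = img[:h, :w]
        return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def upsample_nearest(img, factor):
        return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

    def psnr(reference, estimate, max_value=255.0):
        mse = np.mean((reference - estimate) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

    ground_truth = np.random.randint(0, 256, (480, 640)).astype(np.float64)
    for factor in (2, 3, 4):                      # x2, x3 and x4 as in the challenge
        low_res = downsample(ground_truth, factor)
        sr = upsample_nearest(low_res, factor)    # replace with a real SR model
        gt = ground_truth[:sr.shape[0], :sr.shape[1]]
        print(f"x{factor}: PSNR = {psnr(gt, sr):.2f} dB")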
|
|
|
Miguel Realpe, Boris X. Vintimilla, & Ljubo Vlacic. (2016). Multi-sensor Fusion Module in a Fault Tolerant Perception System for Autonomous Vehicles. Journal of Automation and Control Engineering (JOACE), Vol. 4, pp. 430–436.
Abstract: Driverless vehicles are currently being tested on public roads in order to examine their ability to perform in a safe and reliable way in real world situations. However, the long-term reliable operation of a vehicle’s diverse sensors and the effects of potential sensor faults in the vehicle system have not been tested yet. This paper proposes a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented simulating faults by introducing displacements in the sensor information from the KITTI dataset.
|
|
|
Rafael E. Rivadeneira, Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2019). Thermal Image Super-Resolution through Deep Convolutional Neural Network. In 16th International Conference on Image Analysis and Recognition (ICIAR 2019); Waterloo, Canada (pp. 417–426).
Abstract: Due to the lack of thermal image datasets, a new dataset has been acquired to propose a super-resolution approach using a Deep Convolutional Neural Network schema. In order to achieve this image enhancement process, a new thermal image dataset is used. Different experiments have been carried out: firstly, the proposed architecture has been trained using only images of the visible spectrum, and later it has been trained with images of the thermal spectrum. The results showed that with the network trained with thermal images, better results are obtained in the process of enhancing the images, maintaining the image details and perspective. The thermal dataset is available at http://www.cidis.espol.edu.ec/es/dataset
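For orientation only, here is a minimal SRCNN-style network in PyTorch of the general kind the abstract describes, i.e., one that could be trained on either visible or thermal images; the layer sizes and structure are assumptions and do not reproduce the authors' architecture.

    # Minimal sketch of a single-channel super-resolution CNN (not the paper's model).
    import torch
    import torch.nn as nn

    class SimpleSRNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
                nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
                nn.Conv2d(32, 1, kernel_size=5, padding=2),
            )

        def forward(self, x):          # x: upscaled low-resolution input, shape (N, 1, H, W)
            return self.body(x)

    model = SimpleSRNet()
    print(model(torch.rand(1, 1, 64, 64)).shape)   # -> torch.Size([1, 1, 64, 64])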
|
|
|
Low S., Inkawhich N., Nina O., Sappa A., & Blasch E. (2022). Multi-modal Aerial View Object Classification Challenge Results – PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24 (Vol. 2022-June, pp. 417–425).
Abstract: This paper details the results and main findings of the second iteration of the Multi-modal Aerial View Object Classification (MAVOC) challenge. The primary goal of both MAVOC challenges is to inspire research into methods for building recognition models that utilize both synthetic aperture radar (SAR) and electro-optical (EO) input modalities. Teams are encouraged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge showed a proof of concept that both modalities could be used together, the 2022 challenge focuses on more detailed multi-modal models, using the same UNIfied COincident Optical and Radar for recognitioN (UNICORN) dataset and competition format as in 2021. Specifically, the challenge focuses on two techniques: (1) SAR classification and (2) SAR + EO classification. The bulk of this document is dedicated to discussing the top performing methods and describing their performance on our blind test set. Notably, all of the top ten teams outperform our baseline. For SAR classification, the top team showed a 129% improvement over our baseline and an 8% average improvement over the 2021 winner. The top team for SAR + EO classification shows a 165% improvement with a 32% average improvement over 2021.
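As a schematic illustration of SAR + EO fusion of the sort the challenge encourages (not any team's entry), the following PyTorch sketch concatenates features from two single-channel branches before classification; branch sizes and the number of classes are assumptions.

    # Illustrative two-branch SAR + EO classifier; not a challenge submission.
    import torch
    import torch.nn as nn

    def branch():
        return nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    class SarEoClassifier(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.sar_branch = branch()
            self.eo_branch = branch()
            self.head = nn.Linear(16 + 16, num_classes)

        def forward(self, sar, eo):
            # Fuse complementary information from both modalities by concatenation.
            return self.head(torch.cat([self.sar_branch(sar), self.eo_branch(eo)], dim=1))

    model = SarEoClassifier()
    logits = model(torch.rand(2, 1, 32, 32), torch.rand(2, 1, 32, 32))
    print(logits.shape)   # -> torch.Size([2, 10])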
|
|
|
Steven Silva, D. P., David Soque, María Guerra, & Jonathan Paillacho. (2021). Autonomous Intelligent Navigation for Mobile Robots in Closed Environments. In The 2nd International Conference on Applied Technologies (ICAT 2020), December 2-4. Communications in Computer and Information Science (Vol. 1388, pp. 391–402).
|
|
|
Rangnekar, A., Mulhollan, Z., Vodacek, A., Hoffman, M., Sappa, A. D., & Yu, J. et al. (2022). Semi-Supervised Hyperspectral Object Detection Challenge Results – PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24 (Vol. 2022-June, pp. 389–397).
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Cross-spectral image dehaze through a dense stacked conditional GAN based approach. In 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) (pp. 358–364).
Abstract: This paper proposes a novel approach to remove haze from RGB images using near infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). The architecture of the deep network implemented receives, besides the hazy images, their corresponding images in the near infrared spectrum, which serve to accelerate the learning of the details and characteristics of the images. The model uses a triplet layer that allows independent learning of each channel of the visible spectrum image, so the haze is removed on each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results have shown that the proposed method effectively removes the haze from the images. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
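To make the multi-term loss idea concrete, here is a hedged PyTorch sketch of a generator loss combining an adversarial term with per-channel reconstruction terms, in the spirit of the balanced scheme the abstract mentions; the specific terms and weights are assumptions, not the paper's formulation.

    # Illustrative multi-term generator loss; weights and terms are assumed.
    import torch
    import torch.nn.functional as F

    def generator_loss(fake_logits, dehazed, target, adv_weight=1.0, rec_weight=10.0):
        # Adversarial term: push the discriminator to label generated images as real.
        adversarial = F.binary_cross_entropy_with_logits(
            fake_logits, torch.ones_like(fake_logits))
        # Per-channel L1 terms so each RGB channel is reconstructed separately.
        per_channel = sum(
            F.l1_loss(dehazed[:, c], target[:, c]) for c in range(dehazed.shape[1]))
        return adv_weight * adversarial + rec_weight * per_channel

    loss = generator_loss(torch.randn(4, 1), torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))
    print(loss.item())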
|
|