Soria, X., Li, Y., Rouhani, M., & Sappa, A. D. (2023). Tiny and Efficient Model for the Edge Detection Generalization. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023) (pp. 1356–1365).
Rubio, A., Agila, W., González, L., & Aviles, J. (2023). A Numerical Model for the Transport of Reactants in Proton Exchange Fuel Cells. In 12th IEEE International Conference on Renewable Energy Research and Applications (ICRERA 2023), Oshawa, 29 August – 1 September 2023 (pp. 273–278).
Soria, X., Sappa, A. D., Humanante, P., & Akbarinia, A. (2023). Dense extreme inception network for edge detection. Pattern Recognition, 139.
Suarez, P., Carpio, D., & Sappa, A. D. (2023). A Deep Learning Based Approach for Synthesizing Realistic Depth Maps. In 22nd International Conference on Image Analysis and Processing (ICIAP 2023), Udine, 11–15 September 2023, Lecture Notes in Computer Science (Vol. 14234, pp. 369–380).
Silva, S., N. V., Paillacho, D., Millan-Norman, S., & Hernandez, J. D. (2023). Online Social Robot Navigation in Indoor, Large and Crowded Environments. In IEEE International Conference on Robotics and Automation (ICRA 2023) (pp. 9749–9756).
Aguilera, C. A., Aguilera, C., Navarro, C. A., & Sappa, A. D. (2020). Fast CNN Stereo Depth Estimation through Embedded GPU Devices. Sensors, 20(11), 1–13.
Abstract: Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphics processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216×368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
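The cost volume postprocessed in this work can be illustrated with a minimal NumPy sketch: classical absolute-difference matching costs aggregated by winner-take-all, standing in for the paper's learned 3D-convolution or U-Net aggregation. The function names and the toy setup are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cost_volume(left, right, max_disp):
    """Cost volume over disparities: cost[d, y, x] = |L(y, x) - R(y, x - d)|.
    Pixels with no valid match (x < d) keep an infinite cost."""
    H, W = left.shape
    cost = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :W - d])
    return cost

def wta_disparity(cost):
    """Winner-take-all: for each pixel, pick the disparity of minimum cost.
    The paper replaces this naive step with a learned U-Net-like network."""
    return np.argmin(cost, axis=0)
```

With a right image that is the left image shifted by a constant disparity, winner-take-all recovers that disparity in the valid region; the point of the paper's U-Net-like postprocessing is to do this robustly and quickly on real, noisy cost volumes.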
Paillacho, D., N. S., Arce, M., Plues, M., & Eras, E. (2023). Advanced metrics to evaluate autistic children's attention and emotions from facial characteristics using a human-robot game interface. In Communications in Computer and Information Science: 11th Conferencia Ecuatoriana de Tecnologías de la Información y Comunicación (TICEC 2023) (Vol. 1885, pp. 234–247).
Rubio, A., Agila, W., González, L., & Aviles-Cedeno, J. (2023). Distributed Intelligence in Autonomous PEM Fuel Cell Control. Energies, 16(12).
Velesaca, H. O., Araujo, S., Suarez, P. L., Sánchez, Á., & Sappa, A. D. (2020). Off-the-Shelf Based System for Urban Environment Video Analytics. In 27th International Conference on Systems, Signals and Image Processing (IWSSIP 2020) (pp. 459–464).
Abstract: This paper presents the design and implementation details of a system built from off-the-shelf algorithms for urban video analytics. The system connects to public video surveillance camera networks to obtain the information needed to generate statistics from urban scenarios (e.g., number of vehicles, types of cars, directions of travel, number of persons, etc.). The obtained information can be used not only for traffic management but also to estimate the carbon footprint of urban scenarios. As a case study, a university campus is selected to evaluate the performance of the proposed system. The system is implemented in a modular way so that it can be used as a testbed to evaluate different algorithms. Implementation results are provided showing the validity and utility of the proposed approach.
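The modular, testbed-style design described above can be sketched in a few lines of Python. The `AnalyticsPipeline` name and the simplified string-label detections are assumptions for illustration, not the system's actual interface.

```python
from collections import Counter
from typing import Callable, Iterable, List

# A detection is reduced to its class label ("car", "person", ...) for brevity;
# the real system would also carry boxes, directions, etc.
Detection = str

class AnalyticsPipeline:
    """Plug-in architecture: any off-the-shelf detector callable can be
    swapped in, and per-class statistics are aggregated across frames."""

    def __init__(self, detector: Callable[[object], List[Detection]]):
        self.detector = detector
        self.counts: Counter = Counter()

    def process(self, frames: Iterable[object]) -> Counter:
        """Run the detector on every frame and accumulate per-class counts."""
        for frame in frames:
            self.counts.update(self.detector(frame))
        return self.counts
```

Swapping the detector for a different off-the-shelf model requires no change to the aggregation code, which is the property that lets such a system double as a testbed for comparing algorithms.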
Rivadeneira, R. E., Sappa, A. D., Vintimilla, B. X., Guo, L., Hou, J., Mehri, A., et al. (2020). Thermal Image Super-Resolution Challenge – PBVS 2020. In 16th IEEE Workshop on Perception Beyond the Visible Spectrum, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020) (pp. 432–439).
Abstract: This paper summarizes the top contributions to the first challenge on thermal image super-resolution (TISR), organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2020 workshop. In this challenge, a novel thermal image dataset is considered together with state-of-the-art approaches evaluated under a common framework. The dataset used in the challenge consists of 1021 thermal images obtained from three distinct thermal cameras at different resolutions (low, mid, and high), resulting in a total of 3063 thermal images. From each resolution, 951 images are used for training and 50 for testing, while the 20 remaining images are used for two proposed evaluations. The first evaluation consists of downsampling the low-resolution, mid-resolution, and high-resolution thermal images by ×2, ×3, and ×4 respectively, and comparing their super-resolution results with the corresponding ground-truth images. The second evaluation consists of obtaining the ×2 super-resolution from a given mid-resolution thermal image and comparing it with the corresponding semi-registered high-resolution thermal image. Out of 51 registered participants, 6 teams reached the final validation phase.