|
W. Agila, Gomer Rubio, L. Miranda, & D. Sanaguano. (2019). Open Control Architecture for the Characterization and Control of the PEM Fuel Cell. In 2019 IEEE Fourth Ecuador Technical Chapters Meeting (ETCM), Guayaquil, Ecuador (pp. 1–5).
Abstract: Proton exchange membrane (PEM) fuel cells are an efficient and clean source of electrical energy. The analysis of their operation requires experimental work that allows the electrical behavior of PEM fuel cells to be measured, modeled, and optimized under different operating conditions. Therefore, an experimentation platform that makes their study and control straightforward is essential. This research presents the design and development of an open instrumental system for measuring, controlling, and determining the operating parameters of a PEM fuel cell. As results, the polarization (voltage-current) curves obtained by the system itself under different experimental conditions are shown. These curves are a very useful tool to evaluate the electrical behavior of the PEM fuel cell.
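The polarization (voltage-current) curve mentioned in the abstract can be illustrated with the standard empirical single-cell PEM model combining activation, ohmic, and concentration losses. The sketch below is only illustrative: the parameter values and the function name are assumptions, not the ones identified by the authors' platform.

```python
import numpy as np

def pem_polarization_curve(i, e_ocv=1.0, a_tafel=0.06, i0=1e-4,
                           r_ohmic=0.25, m=3e-5, n=8.0):
    """Empirical single-cell PEM polarization model (illustrative parameters).

    V(i) = E_ocv - activation loss - ohmic loss - concentration loss,
    with current density i in A/cm^2 and cell voltage in volts.
    """
    i = np.asarray(i, dtype=float)
    v_act = a_tafel * np.log(np.maximum(i, i0) / i0)   # Tafel activation loss
    v_ohm = r_ohmic * i                                # ohmic (membrane/contact) loss
    v_conc = m * np.exp(n * i)                         # empirical concentration loss
    return e_ocv - v_act - v_ohm - v_conc

# Sweep the load current density and print a few (i, V) operating points.
if __name__ == "__main__":
    currents = np.linspace(0.01, 1.2, 6)               # A/cm^2
    for i, v in zip(currents, pem_polarization_curve(currents)):
        print(f"i = {i:4.2f} A/cm^2 -> V = {v:5.3f} V")
```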
|
|
|
Cristhian A. Aguilera, C. A., Cristóbal A. Navarro, & Angel D. Sappa. (2020). Fast CNN Stereo Depth Estimation through Embedded GPU Devices. Sensors, Vol. 20(11), pp. 1–13.
Abstract: Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphics processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speeds, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
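A minimal sketch (in PyTorch, with made-up layer sizes) of the architectural swap described above: instead of refining the cost volume with a stack of 3D convolutions, the disparity dimension is folded into the channel axis and processed with a small 2D U-Net-style encoder–decoder, which is much cheaper on embedded GPUs. This is not the authors' exact network, only an illustration of the idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CostVolumeUNet2D(nn.Module):
    """Toy U-Net-like 2D refiner for a stereo cost volume (illustrative only).

    The cost volume (B, D, H, W) is treated as a D-channel 2D feature map
    instead of being filtered with expensive 3D convolutions.
    """
    def __init__(self, max_disp=64, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(max_disp, base, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(inplace=True))
        self.out = nn.Conv2d(base, max_disp, 3, padding=1)    # back to D "cost" channels

    def forward(self, cost_volume):
        e1 = self.enc1(cost_volume)                           # (B, base, H, W)
        e2 = self.enc2(e1)                                    # (B, 2*base, H/2, W/2)
        up = F.interpolate(e2, size=e1.shape[-2:], mode="bilinear", align_corners=False)
        d1 = self.dec1(up) + e1                               # skip connection, U-Net style
        refined = self.out(d1)                                # (B, D, H, W) refined costs
        # Soft-argmin over the disparity axis yields a dense disparity map.
        disp = torch.arange(refined.shape[1], device=refined.device).float()
        prob = F.softmax(-refined, dim=1)
        return (prob * disp.view(1, -1, 1, 1)).sum(dim=1)     # (B, H, W)

if __name__ == "__main__":
    vol = torch.randn(1, 64, 368 // 4, 1216 // 4)             # downsampled cost volume
    print(CostVolumeUNet2D(max_disp=64)(vol).shape)           # torch.Size([1, 92, 304])
```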
|
|
|
Ángel Morera, Á. S., A. Belén Moreno, Angel D. Sappa, & José F. Vélez. (2020). SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities. Sensors, Vol. 20(16), pp. 1–23.
Abstract: This work compares Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks for the outdoor advertisement panel detection problem by handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages both in the real world and in the virtual one. For example, applications like Google Street View can be used for Internet publicity, and once these ad panels are detected in images, the publicity appearing inside the panels could be replaced by that of a funding company. In our experiments, both the SSD and YOLO detectors produced acceptable results under variable panel sizes, illumination conditions, viewing perspectives, partial occlusion of panels, complex backgrounds, and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was the near elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside the panels is analyzed after detection. On the other hand, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks, using the same evaluation metrics, is also included.
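For reference, the True Positive / False Positive counts compared in the abstract are typically obtained by greedy IoU matching between detections and ground-truth boxes. The sketch below shows one common way to compute them; the IoU threshold, box format, and example values are assumptions, not the paper's exact evaluation protocol.

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def count_tp_fp(detections, ground_truth, iou_thr=0.5):
    """Greedy matching: each ground-truth panel can be claimed by one detection."""
    matched, tp, fp = set(), 0, 0
    for det in sorted(detections, key=lambda d: d["score"], reverse=True):
        best_iou, best_gt = 0.0, None
        for k, gt in enumerate(ground_truth):
            overlap = iou(det["box"], gt)
            if k not in matched and overlap > best_iou:
                best_iou, best_gt = overlap, k
        if best_iou >= iou_thr:
            matched.add(best_gt); tp += 1
        else:
            fp += 1
    fn = len(ground_truth) - len(matched)   # missed panels
    return tp, fp, fn

# Example: two detections against one annotated panel.
dets = [{"box": (10, 10, 110, 60), "score": 0.9}, {"box": (300, 40, 380, 90), "score": 0.4}]
gts = [(12, 8, 108, 62)]
print(count_tp_fp(dets, gts))   # (1, 1, 0)
```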
|
|
|
Rafael Rivadeneira, H. V., & A. S. (2024). Cross-Spectral Image Registration: A Comparative Study and a New Benchmark Dataset. In Lecture Notes in Networks and Systems: 4th International Conference on Innovations in Computational Intelligence and Computer Vision (ICICV 2024) (Vol. 1117 LNNS, pp. 1–12).
|
|
|
Luis Chuquimarca, B. V., & S. V. (2024). A Review of External Quality Inspection for Fruit Grading using CNN Models (Vol. 14).
|
|
|
Juca Aulestia M., L. J. M., Guaman Quinche J., Coronel Romero E., Chamba Eras L., & Roberto Jacome Galarza. (2020). Open innovation at university: a systematic literature review. Advances in Intelligent Systems and Computing, Vol. 1159 AISC, pp. 3–14.
|
|
|
Suárez P. (2021). Processing and Representation of Multispectral Images Using Deep Learning Techniques. Electronic Letters on Computer Vision and Image Analysis, Vol. 19(2), pp. 5–8.
|
|
|
Morocho-Cayamcela, M. E. (2020). Increasing the Segmentation Accuracy of Aerial Images with Dilated Spatial Pyramid Pooling. Electronic Letters on Computer Vision and Image Analysis (ELCVIA), Vol. 19(2), pp. 17–21.
|
|
|
Viñán-Ludeña M.S., D. C. L. M., Roberto Jacome Galarza, & Sinche Freire, J. (2020). Social media influence: a comprehensive review in general and in tourism domain. Smart Innovation, Systems and Technologies, Vol. 171, pp. 25–35.
|
|
|
Angel D. Sappa, Cristhian A. Aguilera, Juan A. Carvajal Ayala, Miguel Oliveira, Dennis Romero, Boris X. Vintimilla, et al. (2016). Monocular visual odometry: a cross-spectral image fusion based approach. Robotics and Autonomous Systems, Vol. 86, pp. 26–36.
Abstract: This manuscript evaluates the use of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using datasets obtained with two different platforms are presented. Additionally, comparisons with a previous approach, as well as with the monocular visible/infrared spectra, are also provided, showing the advantages of the proposed scheme.
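A minimal sketch of the fusion-and-selection idea described in the abstract, assuming PyWavelets and grayscale images of equal size: two cross-spectral images are fused in the DWT domain, and candidate settings are scored with a histogram-based mutual information metric. The wavelet names, fusion rule, bin count, and the synthetic demo images are illustrative assumptions, not the authors' tuned setup.

```python
import numpy as np
import pywt

def dwt_fuse(img_vis, img_ir, wavelet="db2", level=2):
    """Fuse two grayscale images in the wavelet domain.

    Approximation bands are averaged; detail bands keep the coefficient with
    the largest magnitude (a common max-abs fusion rule).
    """
    ca = pywt.wavedec2(img_vis.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_ir.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]
    for det_a, det_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(det_a, det_b)))
    return pywt.waverec2(fused, wavelet)

def mutual_information(img_a, img_b, bins=64):
    """Histogram-based mutual information between two images (in nats)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Pick the wavelet whose fused image shares the most information with both inputs.
# Random images stand in here for a real visible/infrared pair.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vis, ir = rng.random((128, 160)), rng.random((128, 160))
    for wname in ("haar", "db2", "sym4"):
        f = dwt_fuse(vis, ir, wavelet=wname)[:128, :160]
        score = mutual_information(f, vis) + mutual_information(f, ir)
        print(wname, round(score, 3))
```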
|
|