|
Ulises Gildardo Quiroz Antúnez, A. I. M. R., María Fernanda Calderón Vega, & Adán Guillermo Ramírez García. (2022). Aptitude of coffee (Coffea arabica L.) and cacao (Theobroma cacao L.) crops considering climate change. La Granja, 36(2).
|
|
|
Ángel Morera, Ángel Sánchez, A. Belén Moreno, Angel D. Sappa, & José F. Vélez. (2020). SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities. Sensors, 20(16), pp. 1–23.
Abstract: This work compares the Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO)
deep neural networks on the outdoor advertisement panel detection problem, handling multiple
and combined variabilities in the scenes. Publicity panel detection in images offers important
advantages both in the real world and in the virtual one. For example, applications like Google
Street View can be used for Internet publicity, and when these ad panels are detected in images, it
becomes possible to replace the publicity appearing inside the panels with another from a funding
company. In our experiments, both the SSD and YOLO detectors produced acceptable results under
variable panel sizes, illumination conditions, viewing perspectives, partial occlusion of panels,
complex backgrounds, and multiple panels in scenes. Due to the difficulty of finding annotated
images for the considered problem, we created our own dataset for conducting the experiments. The
major strength of the SSD model was the near elimination of False Positive (FP) cases, a situation
that is preferable when the publicity contained inside the panels is analyzed after detection. On
the other hand, YOLO produced better panel localization results, detecting a higher number of True
Positive (TP) panels with higher accuracy. Finally, a comparison of the two analyzed object
detection models with different types of semantic segmentation networks, using the same evaluation
metrics, is also included.
|
|
|
Cristhian A. Aguilera, Cristhian Aguilera, Cristóbal A. Navarro, & Angel D. Sappa. (2020). Fast CNN Stereo Depth Estimation through Embedded GPU Devices. Sensors, 20(11), pp. 1–13.
Abstract: Current CNN-based stereo depth estimation models can barely run under real-time
constraints on embedded graphics processing unit (GPU) devices. Moreover, state-of-the-art
evaluations usually do not consider model optimization techniques, so the actual potential of
embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models
on three different embedded GPU devices, with and without optimization methods, presenting
performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth
estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture
for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically
increasing the runtime speed of current models. In our experiments, we achieve real-time inference
speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier,
and Jetson Nano embedded devices.
|
|
|
Roberto Jacome-Galarza, Miguel-Andrés Realpe-Robalino, Luis-Antonio Chamba-Eras, Marlon-Santiago Viñán-Ludeña, & Javier-Francisco Sinche-Freire. (2019). Computer vision for image understanding: A comprehensive review. In International Conference on Advances in Emerging Trends and Technologies (ICAETT 2019), Quito, Ecuador.
Abstract: Computer vision has its own Turing test: can a machine describe the contents of an image or a video the way a human being would? In this paper, the progress of Deep Learning for image recognition is analyzed in order to answer this question. In recent years, Deep Learning has considerably increased the precision of many tasks related to computer vision. Many datasets of labeled images are now available online, which has led to pre-trained models for many computer vision applications. In this work, we gather information on the latest techniques for image understanding and description. We conclude that combining Natural Language Processing (using Recurrent Neural Networks and Long Short-Term Memory) with image understanding (using Convolutional Neural Networks) could bring new types of powerful and useful applications in which the computer is able to answer questions about the content of images and videos. Building datasets of labeled images requires a great deal of work, and most datasets are built through crowdsourcing. These new applications have the potential to raise human-machine interaction to new levels of usability and user satisfaction.
|
|
|
Cristhian A. Aguilera, Cristhian Aguilera, & Angel D. Sappa. (2018). Melamine faced panels defect classification beyond the visible spectrum. Sensors, 18(11).
Abstract: In this work, we explore the use of images from different spectral bands to classify defects in melamine faced panels, which can appear during the production process. Through experimental evaluation, we assess the use of images from the visible (VS), near-infrared (NIR), and long-wavelength infrared (LWIR) bands to classify the defects using a feature-descriptor learning approach together with a support vector machine classifier. Two descriptors were evaluated: Extended Local Binary Patterns (E-LBP) and SURF with a Bag of Words (BoW) representation. The evaluation was carried out on an image set obtained during this work, which contains five defect categories that currently occur in the industry. Results show that using images from beyond
the visible spectrum helps to improve classification performance compared with a visible-spectrum-only solution.
|
|
|
Juan A. Carvajal, Dennis G. Romero, & Angel D. Sappa. (2017). Fine-tuning deep convolutional networks for lepidopterous genus recognition. Lecture Notes in Computer Science.
|
|
|
Cristhian A. Aguilera, Angel D. Sappa, & Ricardo Toledo. (2017). Cross-Spectral Local Descriptors via Quadruplet Network. Sensors, 17(4), 873.
|
|
|
Victor Santos, Angel D. Sappa, & Miguel Oliveira. (2017). Special Issue on Autonomous Driving and Driver Assistance Systems. Robotics and Autonomous Systems.
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture. In 19th International Conference on Image Analysis and Processing.
|
|
|
Byron Lima, Ricardo Cajo, Victor Huilcapi, & Wilton Agila. (2017). Modeling and comparative study of linear and nonlinear controllers for rotary inverted pendulum. In Journal of Physics: Conference Series (Vol. 783).
Abstract: The rotary inverted pendulum (RIP) is a difficult control problem, and several studies have applied different control techniques to it. The literature reports that, although the problem is nonlinear, classical PID controllers perform adequately when applied to the system. In this paper, a comparative study of the performance of linear and nonlinear PID structures is carried out. The control algorithms are evaluated on the RIP system using performance and power-consumption indices, which allow the control strategies to be ranked according to their performance. This article also presents the modeling of the system, in which some of the parameters involved in the RIP system were estimated using computer-aided design (CAD) tools and experimental techniques proposed by several authors. The results indicate a better performance of the nonlinear controller, with increased robustness and a faster response than the linear controller.
|
|