Velesaca, H.O., Suárez, P.L., Mira, R. & Sappa, A.D. (2021). Computer Vision Based Food Grain Classification: A Comprehensive Survey. Computers and Electronics in Agriculture, Vol. 187, Article 106287.
|
Suárez, P. (2021). Processing and Representation of Multispectral Images Using Deep Learning Techniques. Electronic Letters on Computer Vision and Image Analysis, Vol. 19(2), pp. 5–8.
|
Silva, S., D.P., Soque, D., Guerra, M. & Paillacho, J. (2021). Autonomous Intelligent Navigation for Mobile Robots in Closed Environments. In The 2nd International Conference on Applied Technologies (ICAT 2020), December 2–4, 2020. Communications in Computer and Information Science, Vol. 1388, pp. 391–402.
|
Santos, V., Sappa, A.D., Oliveira, M. & de la Escalera, A. (2021). Editorial: Special Issue on Autonomous Driving and Driver Assistance Systems – Some Main Trends. Robotics and Autonomous Systems, Vol. 144, Article 103832.
|
Rubio, G.A. & Agila, W.E. (2021). A Fuzzy Model to Manage Water in Polymer Electrolyte Membrane Fuel Cells. Processes, Vol. 9(6), Article 904.
Abstract: In this paper, a fuzzy model is presented to determine in real time the degree of dehydration or flooding of the proton exchange membrane of a fuel cell, in order to optimize its electrical response and, consequently, its autonomous operation. By applying load, current, and flux variations in the dry, normal, and flooded states of the membrane, it was determined that the temporal evolution of the fuel cell voltage is characterized by changes in slope and by voltage oscillations. The results were validated using electrochemical impedance spectroscopy and show slope changes from 0.435 to 0.52 and oscillations from 3.6 mV to 5.2 mV in the dry state, and slope changes from 0.2 to 0.3 and oscillations from 1 mV to 2 mV in the flooded state. The use of fuzzy logic is a novelty and constitutes a step towards the progressive automation of the supervision, perception, and intelligent control of fuel cells, reducing their risks and increasing their economic benefits.
|
Rivadeneira, R.E., Sappa, A.D., Vintimilla, B.X., Nathan, S., Kansal, P., Mehri, A. et al. (2021). Thermal Image Super-Resolution Challenge – PBVS 2021. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2021), June 19–25, 2021, pp. 4354–4362.
|
Pereira, J., M.M. & W.A. (2021). Qualitative Model to Maximize Shrimp Growth at Low Cost. In 5th Ecuador Technical Chapters Meeting (ETCM 2021), October 12–15, 2021.
|
Suárez, P.L., D.C. & Sappa, A.D. (2021). Non-Homogeneous Haze Removal through a Multiple Attention Module Architecture. In 16th International Symposium on Visual Computing, October 4–6, 2021. Lecture Notes in Computer Science, Vol. 13018, pp. 178–190.
|
Suárez, P.L., Sappa, A.D. & Vintimilla, B.X. (2021). Cycle Generative Adversarial Network: Towards a Low-Cost Vegetation Index Estimation. In IEEE International Conference on Image Processing (ICIP 2021), pp. 2783–2787.
Abstract: This paper presents a novel unsupervised approach to estimate the Normalized Difference Vegetation Index (NDVI). The NDVI is obtained as the ratio between information from the visible and near-infrared spectral bands; in the current work, the NDVI is estimated just from an image of the visible spectrum through a Cyclic Generative Adversarial Network (CyclicGAN). This unsupervised architecture learns to estimate the NDVI by means of an image translation between the red channel of a given RGB image and the unpaired NDVI image. The translation is obtained by means of a ResNet architecture and a multiple loss function. Experimental results obtained with this unsupervised scheme show the validity of the implemented model. Additionally, comparisons with state-of-the-art approaches are provided, showing improvements with the proposed approach.
|
Suárez, P.L., Sappa, A.D. & Vintimilla, B.X. (2021). Deep Learning-Based Vegetation Index Estimation. In Generative Adversarial Networks for Image-to-Image Translation, Chapter 9, pp. 205–232.
|