Santos, V., Sappa, A. D., Oliveira, M., & de la Escalera, A. (2021). Editorial: Special Issue on Autonomous Driving and Driver Assistance Systems – Some Main Trends. Robotics and Autonomous Systems, Vol. 144, Article 103832.
Rubio, G. A., & Agila, W. E. (2021). A fuzzy model to manage water in polymer electrolyte membrane fuel cells. Processes, Vol. 9(6), Article 904.
Abstract: In this paper, a fuzzy model is presented to determine in real-time the degree of dehydration or flooding of a proton exchange membrane of a fuel cell, to optimize its electrical response and consequently, its autonomous operation. By applying load, current and flux variations in the dry, normal, and flooded states of the membrane, it was determined that the temporal evolution of the fuel cell voltage is characterized by changes in slope and by its voltage oscillations. The results were validated using electrochemical impedance spectroscopy and show slope changes from 0.435 to 0.52 and oscillations from 3.6 mV to 5.2 mV in the dry state, and slope changes from 0.2 to 0.3 and oscillations from 1 mV to 2 mV in the flooded state. The use of fuzzy logic is a novelty and constitutes a step towards the progressive automation of the supervision, perception, and intelligent control of fuel cells, allowing them to reduce their risks and increase their economic benefits.
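The slope and oscillation ranges reported in the abstract lend themselves to a compact fuzzy classifier. The sketch below (plain Python; the triangular membership shapes and the rule base are illustrative assumptions, not the paper's actual model) fires one rule per membrane state and returns the strongest:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Membership functions built around the ranges reported in the abstract
# (dry: slope 0.435-0.52, oscillations 3.6-5.2 mV; flooded: slope 0.2-0.3,
# oscillations 1-2 mV). The exact breakpoints here are assumptions.
SLOPE = {
    "flooded": (0.15, 0.25, 0.35),
    "normal":  (0.25, 0.37, 0.47),
    "dry":     (0.40, 0.48, 0.57),
}
OSC_MV = {
    "flooded": (0.5, 1.5, 2.5),
    "normal":  (1.5, 2.8, 4.0),
    "dry":     (3.2, 4.4, 5.6),
}

def membrane_state(slope, osc_mv):
    """Fire one rule per state (AND = min) and return the strongest state."""
    degrees = {s: min(tri(slope, *SLOPE[s]), tri(osc_mv, *OSC_MV[s]))
               for s in SLOPE}
    return max(degrees, key=degrees.get), degrees
```

For example, a slope change of 0.48 with 4.4 mV oscillations falls squarely in the dry ranges, while 0.25 with 1.5 mV lands in the flooded ones.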
Santos, V., Sappa, A. D., Oliveira, M., & de la Escalera, A. (2019). Special Issue on Autonomous Driving and Driver Assistance Systems. Robotics and Autonomous Systems, Vol. 121.
Santos, V., Sappa, A. D., & Oliveira, M. (2017). Special Issue on Autonomous Driving and Driver Assistance Systems. Robotics and Autonomous Systems, Vol. 91, pp. 208–209.
Aguilera, C. A., Aguilera, C., & Sappa, A. D. (2018). Melamine faced panels defect classification beyond the visible spectrum. Sensors, Vol. 18(11).
Abstract: In this work, we explore the use of images from different spectral bands to classify defects in melamine faced panels that may appear during the production process. Through experimental evaluation, we assess the use of images from the visible (VS), near-infrared (NIR), and long-wavelength infrared (LWIR) spectra to classify the defects using a feature descriptor learning approach together with a support vector machine classifier. Two descriptors were evaluated: Extended Local Binary Patterns (E-LBP) and SURF using a Bag of Words (BoW) representation. The evaluation was carried out on an image set obtained during this work, which contains five defect categories that currently occur in the industry. Results show that using images from beyond the visible spectrum helps to improve classification performance in contrast with a single visible-spectrum solution.
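The E-LBP descriptor mentioned above extends the plain Local Binary Pattern. As a minimal sketch (the basic 8-neighbour LBP, not the paper's extended variant), one such histogram could be computed per spectral band and the VS/NIR/LWIR histograms concatenated before the SVM:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern histogram over a 2-D
    grayscale array. Each interior pixel gets an 8-bit code: one bit
    per neighbour, set when the neighbour is >= the centre value."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit
    # Normalized 256-bin histogram of the codes
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

In a multi-band setup, `np.concatenate([lbp_histogram(b) for b in bands])` would give one feature vector per panel patch for the classifier.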
Aguilera, C. A., Sappa, A. D., & Toledo, R. (2017). Cross-Spectral Local Descriptors via Quadruplet Network. Sensors, Vol. 17, Article 873.
Oliveira, M., Seabra Lopes, L., Lim, G. H., Kasaei, S. H., Sappa, A. D., & Tomé, A. (2015). Concurrent Learning of Visual Codebooks and Object Categories in Open-Ended Domains. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany (pp. 2488–2495). IEEE.
Abstract: In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are usually constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns in an incremental and online fashion both the visual object category representations as well as the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach contains similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring less examples, and with similar accuracies, when compared to the classical Bag of Words approach using codebooks constructed offline.
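The incremental codebook idea in the abstract can be sketched with running Gaussian statistics per visual word: a new feature either updates its nearest word or spawns a new one. The distance threshold and Welford-style update below are illustrative assumptions, not the paper's exact GMM update rule:

```python
import numpy as np

class OnlineCodebook:
    """Incremental visual codebook: each word is a Gaussian kept as a
    running mean/variance, updated online as new object views arrive."""

    def __init__(self, dim, new_word_dist=1.0):
        self.means = np.empty((0, dim))
        self.vars = np.empty((0, dim))
        self.counts = np.empty(0)
        self.new_word_dist = new_word_dist  # assumed spawning threshold

    def update(self, feature):
        feature = np.asarray(feature, float)
        if len(self.means):
            d = np.linalg.norm(self.means - feature, axis=1)
            k = int(np.argmin(d))
            if d[k] < self.new_word_dist:
                # Welford-style running update of mean and variance
                self.counts[k] += 1
                delta = feature - self.means[k]
                self.means[k] += delta / self.counts[k]
                self.vars[k] += delta * (feature - self.means[k])
                return k
        # Feature far from every word: spawn a new codebook entry
        self.means = np.vstack([self.means, feature])
        self.vars = np.vstack([self.vars, np.zeros_like(feature)])
        self.counts = np.append(self.counts, 1.0)
        return len(self.means) - 1

    def encode(self, features):
        """Bag-of-words histogram of an object view over the current codebook."""
        hist = np.zeros(len(self.means))
        for f in features:
            d = np.linalg.norm(self.means - np.asarray(f, float), axis=1)
            hist[int(np.argmin(d))] += 1
        return hist / max(hist.sum(), 1)
```

Because `update` and `encode` share the same word set, category models encoded later automatically see the refined codebook, mirroring the concurrent-learning setup described above.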
Sappa, A. D. (Ed.). (2022). ICT Applications for Smart Cities. Intelligent Systems Reference Library, Vol. 224. Springer.
Jacome-Galarza, R., Realpe-Robalino, M. A., Chamba-Eras, L. A., Viñán-Ludeña, M. S., & Sinche-Freire, J. F. (2019). Computer vision for image understanding: A comprehensive review. In International Conference on Advances in Emerging Trends and Technologies (ICAETT 2019), Quito, Ecuador (pp. 248–259).
Abstract: Computer Vision has its own Turing test: can a machine describe the contents of an image or a video the way a human being would? In this paper, the progress of Deep Learning for image recognition is analyzed in order to answer this question. In recent years, Deep Learning has considerably increased the precision of many computer vision tasks. Many datasets of labeled images are now available online, which leads to pre-trained models for many computer vision applications. In this work, we gather information on the latest techniques for image understanding and description. We conclude that combining Natural Language Processing (using Recurrent Neural Networks and Long Short-Term Memory) with Image Understanding (using Convolutional Neural Networks) could enable powerful and useful new applications in which the computer answers questions about the content of images and videos. Building datasets of labeled images requires a great deal of work, and most datasets are built using crowd work. These new applications have the potential to raise human-machine interaction to new levels of usability and user satisfaction.
Realpe, M., Paillacho Corredores, J. S., Saverio, J., & Alarcon, A. (2019). Open source system for identification of corn leaf chlorophyll contents based on multispectral images. In International Conference on Applied Technologies (ICAT 2019), Quito, Ecuador (pp. 572–581).
Abstract: It is important for farmers to know the level of chlorophyll in plants, since the treatment they give their crops depends on it. There are two common classic methods to obtain chlorophyll values: laboratory analysis and electronic devices. Both methods measure one sample at a time, and they can be destructive. The objective of this research is to develop a system that obtains the chlorophyll level of plants from images. The solution was developed in the Python programming language using several of its libraries. It comprises an image labeling module, a simple linear regression, and a prediction module. The first module was used to create a database linking the values of the images with those of chlorophyll, which was then used to fit a linear regression and determine the relationship between these variables. Finally, the linear regression was used in the prediction system to obtain chlorophyll values from the images. The regression was trained with 92 images, obtaining a root-mean-square error of 7.27 SPAD units, while testing was performed on 10 values with a maximum error of 15.5%. It is concluded that the system is appropriate for identifying the chlorophyll content of corn leaves in field tests, and it can also be adapted to other measurements and crops. The system can be downloaded at github.com/JoeSvr95/NDVI-Checking [1].
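The regression step described above reduces to fitting y = a·x + b by ordinary least squares, where x is a per-image vegetation-index value and y is the SPAD chlorophyll reading. The sketch below uses plain Python with toy calibration data; the variable names and numbers are illustrative, not the paper's dataset:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def predict_spad(a, b, index_value):
    """Predict a SPAD chlorophyll reading from an image index value."""
    return a * index_value + b

# Toy calibration set: higher vegetation index -> higher chlorophyll
xs = [0.30, 0.45, 0.55, 0.70]   # per-image index values (illustrative)
ys = [20.0, 30.0, 38.0, 48.0]   # paired SPAD readings (illustrative)
a, b = fit_line(xs, ys)
```

With real data, the 92 labeled images mentioned in the abstract would take the place of `xs`/`ys`, and the fitted pair `(a, b)` is all the prediction module needs to store.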