Monica Villavicencio, & Alain Abran. (2011). Educational Issues in the Teaching of Software Measurement in Software Engineering Undergraduate Programs. In Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (pp. 239–244). IEEE.
Abstract: In mature engineering disciplines and science, mathematics and measurement are considered important subjects to be taught in university programs. This paper discusses these subjects in terms of their respective meanings and complementarities. It also discusses their maturity, relevance, and innovations in their teaching in engineering programs. Special attention is paid to the teaching of software measurement in higher education, in particular with respect to mathematics and measurement in engineering in general. The findings from this analysis will be useful for researchers and educators interested in improving the teaching of software measurement.
Milton Mendieta, F. Panchana, B. Andrade, B. Bayot, C. Vaca, Boris X. Vintimilla, et al. (2018). Organ identification on shrimp histological images: A comparative study considering CNN and feature engineering. In IEEE Ecuador Technical Chapters Meeting ETCM 2018. Cuenca, Ecuador (pp. 1–6).
Abstract: The identification of shrimp organs in biology using histological images is a complex task. Shrimp histological images pose a big challenge due to their texture and the similarity among classes. Image classification using feature engineering and convolutional neural networks (CNN) are both suitable methods to assist biologists when performing organ detection. This work evaluates the Bag-of-Visual-Words (BOVW) and Pyramid Bag-of-Words (PBOW) models for image classification leveraging big data techniques, and transfer learning with a pre-trained CNN for the same classification task. A comparative analysis of these two different techniques is performed, highlighting the characteristics of both approaches on the shrimp organ identification problem.
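As a purely illustrative sketch of the BOVW pipeline this abstract compares against CNNs, the following fits a visual vocabulary with k-means, encodes each image as a histogram of visual words, and trains a linear SVM. The descriptors here are synthetic stand-ins for real SIFT/SURF features; all names and values are made up.

```python
# Minimal Bag-of-Visual-Words (BOVW) sketch on synthetic local descriptors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def fake_descriptors(center, n=40, dim=8):
    # Stand-in for the local descriptors extracted from one image.
    return rng.normal(loc=center, scale=0.3, size=(n, dim))

# Two toy "organ classes", each with a characteristic descriptor distribution.
images = [fake_descriptors(0.0) for _ in range(10)] + \
         [fake_descriptors(2.0) for _ in range(10)]
labels = [0] * 10 + [1] * 10

# 1) Build the visual vocabulary by clustering all descriptors.
vocab = KMeans(n_clusters=16, n_init=5, random_state=0)
vocab.fit(np.vstack(images))

# 2) Encode each image as a normalized histogram of visual-word counts.
def bovw_histogram(desc):
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=16).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(d) for d in images])

# 3) Train a linear classifier on the histograms.
clf = LinearSVC().fit(X, labels)
print(clf.score(X, labels))  # training accuracy on the toy data
```

The PBOW variant additionally concatenates histograms computed over a spatial pyramid of image regions, which this sketch omits.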
Mildred Cruz, Cristhian A. Aguilera, Boris X. Vintimilla, Ricardo Toledo, & Ángel D. Sappa. (2015). Cross-spectral image registration and fusion: an evaluation study. In 2nd International Conference on Machine Vision and Machine Learning (Vol. 331). Barcelona, Spain: Computer Vision Center.
Abstract: This paper presents a preliminary study on the registration and fusion of cross-spectral images. The objective is to evaluate the validity of widely used computer vision approaches when they are applied at different spectral bands. In particular, we are interested in merging images from the infrared (both long wave infrared: LWIR and near infrared: NIR) and visible spectrum (VS). Experimental results with different data sets are presented.
Miguel Realpe, Jonathan S. Paillacho Corredores, Joe Saverio, & Allan Alarcon. (2019). Open Source system for identification of corn leaf chlorophyll contents based on multispectral images. In International Conference on Applied Technologies (ICAT 2019), Quito, Ecuador (pp. 572–581).
Abstract: It is important for farmers to know the level of chlorophyll in plants, since the treatment they should give to their crops depends on it. There are two common classic methods to obtain chlorophyll values: laboratory analysis and electronic devices. Both methods obtain the chlorophyll level of one sample at a time, and they can be destructive. The objective of this research is to develop a system that allows obtaining the chlorophyll level of plants from images.
The Python programming language and several of its libraries were used to develop the solution. The system comprises an image labeling module, a simple linear regression, and a prediction module. The first module was used to create a database that links the values of the images with those of chlorophyll, which was then used to fit a linear regression in order to determine the relationship between these variables. Finally, the linear regression was used in the prediction system to obtain chlorophyll values from the images. The linear regression was trained with 92 images, obtaining a root-mean-square error of 7.27 SPAD units, while testing was performed on 10 values, yielding a maximum error of 15.5%.
It is concluded that the system is appropriate for identifying the chlorophyll contents of corn leaves in field tests.
However, it can also be adapted for other measurements and crops. The system can be downloaded at github.com/JoeSvr95/NDVI-Checking [1].
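A minimal sketch of the kind of model this abstract describes: a simple linear regression mapping a per-image vegetation index (e.g., mean NDVI) to SPAD chlorophyll readings, then used for prediction. The data points below are invented for illustration and do not come from the paper.

```python
# Hypothetical training pairs: mean NDVI per leaf image vs. SPAD meter value.
import numpy as np

ndvi = np.array([0.20, 0.35, 0.41, 0.52, 0.60, 0.71, 0.80])
spad = np.array([18.0, 24.5, 27.0, 32.5, 36.0, 41.5, 46.0])

# Least-squares fit of spad ≈ a * ndvi + b.
A = np.column_stack([ndvi, np.ones_like(ndvi)])
(a, b), *_ = np.linalg.lstsq(A, spad, rcond=None)

def predict_spad(x):
    # Prediction module: SPAD estimate from an image-derived index value.
    return a * x + b

# Root-mean-square error on the (toy) training data.
rmse = np.sqrt(np.mean((predict_spad(ndvi) - spad) ** 2))
print(round(float(rmse), 3))
```

The paper reports an RMSE of 7.27 SPAD units on its real 92-image training set; the toy fit above only demonstrates the mechanics.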
Miguel Realpe, Boris X. Vintimilla, & Ljubo Vlacic. (2015). Sensor Fault Detection and Diagnosis for autonomous vehicles. In 2nd International Conference on Mechatronics, Automation and Manufacturing (ICMAM 2015), Singapore, 2015 (Vol. 30, pp. 1–6). EDP Sciences.
Abstract: In recent years testing autonomous vehicles on public roads has become a reality. However, before having autonomous vehicles completely accepted on the roads, they have to demonstrate safe operation and reliable interaction with other traffic participants. Furthermore, in real situations and long term operation, there is always the possibility that diverse components may fail. This paper deals with possible sensor faults by defining a federated sensor data fusion architecture. The proposed architecture is designed to detect obstacles in an autonomous vehicle’s environment while detecting a faulty sensor using SVM models for fault detection and diagnosis. Experimental results using sensor information from the KITTI dataset confirm the feasibility of the proposed architecture to detect soft and hard faults from a particular sensor.
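As a hedged illustration of SVM-based fault detection in the spirit of this abstract: features summarizing the disagreement between one sensor and the fused estimate are fed to an SVM that flags the sensor as healthy or faulty. All data, feature choices, and thresholds below are synthetic and hypothetical, not taken from the paper.

```python
# Toy SVM fault-detection sketch on synthetic residual features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Each sample: [mean residual vs. fused estimate, residual std] (made up).
healthy = rng.normal(loc=[0.1, 0.05], scale=0.05, size=(50, 2))
faulty = rng.normal(loc=[1.0, 0.5], scale=0.1, size=(50, 2))

X = np.vstack([healthy, faulty])
y = np.array([0] * 50 + [1] * 50)   # 0 = healthy sensor, 1 = faulty sensor

clf = SVC(kernel="rbf").fit(X, y)

# A sensor with large disagreement should be flagged as faulty.
print(int(clf.predict([[0.9, 0.45]])[0]))
```

In the paper's federated architecture such a classifier would sit alongside the fusion module, isolating the flagged sensor from the fused obstacle estimate.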
Miguel Realpe, Boris X. Vintimilla, & Ljubo Vlacic. (2016). Multi-sensor Fusion Module in a Fault Tolerant Perception System for Autonomous Vehicles. Journal of Automation and Control Engineering (JOACE), Vol. 4, pp. 430–436.
Abstract: Driverless vehicles are currently being tested on public roads in order to examine their ability to perform in a safe and reliable way in real world situations. However, the long-term reliable operation of a vehicle’s diverse sensors and the effects of potential sensor faults in the vehicle system have not been tested yet. This paper is proposing a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented simulating faults by introducing displacements in the sensor information from the KITTI dataset.
Miguel Realpe, Boris X. Vintimilla, & Ljubo Vlacic. (2016). A Fault Tolerant Perception system for autonomous vehicles. In 35th Chinese Control Conference (CCC 2016), Chengdu, China (pp. 1–6).
Abstract: Driverless vehicles are currently being tested on public roads in order to examine their ability to perform in a safe and reliable way in real world situations. However, the long-term reliable operation of a vehicle’s diverse sensors and the effects of potential sensor faults in the vehicle system have not been tested yet. This paper is proposing a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented simulating faults by introducing displacements in the sensor information from the KITTI dataset.
Miguel Realpe, Boris X. Vintimilla, & Ljubo Vlacic. (2015). Towards Fault Tolerant Perception for autonomous vehicles: Local Fusion. In IEEE 7th International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Siem Reap, 2015 (pp. 253–258).
Abstract: Many robust sensor fusion strategies have been developed in order to reliably detect the surrounding environment of an autonomous vehicle. However, in real situations there is always the possibility that sensors or other components may fail. Thus, internal modules and sensors need to be monitored to ensure their proper function. This paper introduces a general view of a perception architecture designed to detect and classify obstacles in an autonomous vehicle's environment using a fault tolerant framework, and it elaborates on the object detection and local fusion modules proposed in order to achieve the modularity and real-time processing required by the system.
Miguel Oliveira, Vítor Santos, Angel D. Sappa, Paulo Dias, & A. Paulo Moreira. (2016). Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives. Robotics and Autonomous Systems Journal, Vol. 83, pp. 312–325.
Abstract: When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
Miguel Oliveira, Vítor Santos, Angel D. Sappa, Paulo Dias, & A. Paulo Moreira. (2016). Incremental Texture Mapping for Autonomous Driving. Robotics and Autonomous Systems Journal, Vol. 84, pp. 113–128.
Abstract: Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures.
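As a small illustrative aside on the triangulation this abstract builds on: the sketch below uses SciPy's (unconstrained) Delaunay triangulation as a stand-in for the constrained Delaunay triangulation the paper employs, just to show how a point set becomes a triangle mesh. The points are arbitrary.

```python
# Build a 2D Delaunay triangulation of five points (unit square + center).
import numpy as np
from scipy.spatial import Delaunay

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
tri = Delaunay(pts)
print(len(tri.simplices))  # number of triangles in the mesh
```

The constrained variant used in the paper additionally forces chosen edges (e.g., texture boundaries) to appear in the mesh, which SciPy's implementation does not support.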