Records
Author Ma. Paz Velarde; Erika Perugachi; Dennis G. Romero; Ángel D. Sappa; Boris X. Vintimilla
Title Análisis del movimiento de las extremidades superiores aplicado a la rehabilitación física de una persona usando técnicas de visión artificial [Analysis of upper-limb movement applied to a person's physical rehabilitation using computer vision techniques]. Type Journal Article
Year 2015 Publication Revista Tecnológica ESPOL-RTE Abbreviated Journal
Volume 28 Issue Pages 1-7
Keywords Rehabilitation; RGB-D Sensor; Computer Vision; Upper limb
Abstract During physical rehabilitation, the diagnosis given by the specialist is commonly based on qualitative observations that, in some cases, lead to subjective conclusions. This work proposes a quantitative approach, intended to assist physiotherapists, through an interactive, low-cost tool that measures upper-limb movements. These movements are captured by an RGB-D sensor and processed with the proposed methodology, yielding an efficient representation of movement that enables the quantitative evaluation of upper-limb movements.
Address
Corporate Author Thesis
Publisher ESPOL Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 39
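The quantitative evaluation described in the abstract relies on 3D joint positions captured by the RGB-D sensor. As an illustration only (the joint names and the angle metric below are assumptions, not taken from the paper), a minimal sketch of measuring an upper-limb joint angle from three skeleton points:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical skeleton joints in metres (sensor coordinates).
shoulder = [0.0, 1.4, 2.0]
elbow    = [0.0, 1.1, 2.0]
wrist    = [0.3, 1.1, 2.0]
print(round(joint_angle(shoulder, elbow, wrist), 1))  # 90.0
```

Tracking such angles over repetitions would give the kind of quantitative movement record the abstract argues for.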
 

 
Author Cristhian A. Aguilera; Angel D. Sappa; R. Toledo
Title LGHD: A feature descriptor for matching across non-linear intensity variations Type Conference Article
Year 2015 Publication IEEE International Conference on Image Processing (ICIP), Quebec City, QC, 2015 Abbreviated Journal
Volume Issue Pages 178 - 181
Keywords Feature descriptor, multi-modal, multispectral, NIR, LWIR
Abstract This paper presents a new feature descriptor suited to the task of matching feature points between images with non-linear intensity variations. This includes image pairs with significant illumination changes, multi-modal image pairs and multi-spectral image pairs. The proposed method describes the neighbourhood of feature points by combining frequency and spatial information using multi-scale and multi-oriented Log-Gabor filters. Experimental results show the validity of the proposed approach as well as its improvements over the state of the art.
Address
Corporate Author Thesis
Publisher IEEE Place of Publication Quebec City, QC, Canada Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference 2015 IEEE International Conference on Image Processing (ICIP)
Notes Approved no
Call Number cidis @ cidis @ Serial 40
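As a rough illustration of the descriptor family the abstract describes (not the paper's exact LGHD formulation: the single scale, filter parameters, grid size and histogram layout below are simplified assumptions), a sketch that histograms the dominant Log-Gabor orientation per sub-region of a square patch:

```python
import numpy as np

def log_gabor_bank(size, f0=0.1, sigma_f=0.55, n_orient=4):
    """Frequency-domain Log-Gabor filters: radial log-Gaussian x angular Gaussian."""
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                              # avoid log(0); DC is zeroed below
    radial = np.exp(-(np.log(f / f0))**2 / (2 * np.log(sigma_f)**2))
    radial[0, 0] = 0.0
    theta = np.arctan2(fy, fx)
    bank = []
    for k in range(n_orient):
        t0 = k * np.pi / n_orient
        d = np.arctan2(np.sin(theta - t0), np.cos(theta - t0))  # wrapped angle diff
        bank.append(radial * np.exp(-d**2 / (2 * (np.pi / n_orient / 2)**2)))
    return bank

def lghd_like(patch, n_orient=4, grid=4):
    """Histogram of dominant filter orientation per sub-region, concatenated.
    Assumes a square patch whose side is divisible by `grid`."""
    F = np.fft.fft2(patch.astype(float))
    resp = np.stack([np.abs(np.fft.ifft2(F * g))
                     for g in log_gabor_bank(patch.shape[0], n_orient=n_orient)])
    dominant = resp.argmax(axis=0)             # strongest orientation per pixel
    s = patch.shape[0] // grid
    desc = []
    for i in range(grid):
        for j in range(grid):
            cell = dominant[i*s:(i+1)*s, j*s:(j+1)*s]
            desc.append(np.bincount(cell.ravel(), minlength=n_orient))
    desc = np.concatenate(desc).astype(float)
    return desc / (np.linalg.norm(desc) + 1e-12)
```

Because the histogram counts dominant responses rather than raw intensities, descriptors like this degrade more gracefully under non-linear intensity changes than intensity-difference descriptors.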
 

 
Author M. Oliveira; L. Seabra Lopes; G. Hyun Lim; S. Hamidreza Kasaei; Angel D. Sappa; A. Tomé
Title Concurrent Learning of Visual Codebooks and Object Categories in Open-ended Domains Type Conference Article
Year 2015 Publication 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany Abbreviated Journal
Volume Issue Pages 2488 - 2495
Keywords Birds, Training, Legged locomotion, Visualization, Histograms, Object recognition, Gaussian mixture model
Abstract In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are usually constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns, in an incremental and online fashion, both the visual object category representations and the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach shares similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring fewer examples, and with similar accuracy, when compared to the classical Bag of Words approach using codebooks constructed offline.
Address
Corporate Author Thesis
Publisher IEEE Place of Publication Hamburg, Germany Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Notes Approved no
Call Number cidis @ cidis @ Serial 41
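The incremental codebook idea in the abstract can be caricatured with running per-word statistics. This toy sketch (nearest-word assignment with a Euclidean spawn threshold; the paper uses Gaussian Mixture Models and a different update rule) shows how words can be created and refined online as new object views arrive:

```python
import numpy as np

class IncrementalCodebook:
    """Toy online codebook: each word keeps a running mean and unnormalised
    variance (Welford's algorithm). A new feature updates its nearest word,
    or spawns a new word if it is too far from all existing ones."""

    def __init__(self, spawn_dist=2.0):
        self.means, self.vars, self.counts = [], [], []
        self.spawn_dist = spawn_dist

    def update(self, x):
        x = np.asarray(x, float)
        if self.means:
            d = [np.linalg.norm(x - m) for m in self.means]
            k = int(np.argmin(d))
            if d[k] < self.spawn_dist:
                self.counts[k] += 1
                n = self.counts[k]
                delta = x - self.means[k]
                self.means[k] += delta / n                    # running mean
                self.vars[k] += (x - self.means[k]) * delta   # running M2
                return k
        # No word close enough: create a new one centred on x.
        self.means.append(x.copy())
        self.vars.append(np.zeros_like(x))
        self.counts.append(1)
        return len(self.means) - 1
```

Feeding features from two well-separated clusters produces two words whose means track the cluster centres, without ever revisiting old data, which is the property the abstract contrasts with offline codebook construction.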
 

 
Author Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic
Title Sensor Fault Detection and Diagnosis for autonomous vehicles Type Conference Article
Year 2015 Publication 2nd International Conference on Mechatronics, Automation and Manufacturing (ICMAM 2015), Singapore, 2015 Abbreviated Journal
Volume 30 (MATEC Web of Conferences) Issue Pages 1-6
Keywords
Abstract In recent years testing autonomous vehicles on public roads has become a reality. However, before having autonomous vehicles completely accepted on the roads, they have to demonstrate safe operation and reliable interaction with other traffic participants. Furthermore, in real situations and long term operation, there is always the possibility that diverse components may fail. This paper deals with possible sensor faults by defining a federated sensor data fusion architecture. The proposed architecture is designed to detect obstacles in an autonomous vehicle’s environment while detecting a faulty sensor using SVM models for fault detection and diagnosis. Experimental results using sensor information from the KITTI dataset confirm the feasibility of the proposed architecture to detect soft and hard faults from a particular sensor.
Address
Corporate Author Thesis
Publisher EDP Sciences Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 42
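The abstract's fault detection step classifies sensor behaviour with SVM models. A toy sketch with a minimal linear SVM trained by sub-gradient descent on hypothetical residual features (the paper's actual features, kernels and federated fusion architecture are not reproduced here):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM (hinge loss + L2 regularisation) via sub-gradient
    descent. Labels y must be in {-1, +1}; returns weights w and bias b."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                       # margin violation: push
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                # only shrink (regularise)
                w = (1 - lr * lam) * w
    return w, b

# Hypothetical residual features: a healthy sensor agrees with the fused
# estimate (small residuals); a faulty one drifts.
healthy = np.random.default_rng(1).normal(0.0, 0.1, (50, 2))
faulty  = np.random.default_rng(2).normal(1.5, 0.1, (50, 2))
X = np.vstack([healthy, faulty])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print((pred == y).mean())  # expect high accuracy on this separable toy data
```

In a deployed setup the positive class would be trained per fault type (soft drift, hard failure), mirroring the per-sensor diagnosis described in the abstract.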
 

 
Author Julien Poujol; Cristhian A. Aguilera; Etienne Danos; Boris X. Vintimilla; Ricardo Toledo; Angel D. Sappa
Title A Visible-Thermal Fusion Based Monocular Visual Odometry Type Conference Article
Year 2015 Publication Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 Abbreviated Journal
Volume 417 Issue Pages 517-528
Keywords Monocular Visual Odometry; LWIR-RGB cross-spectral Imaging; Image Fusion
Abstract The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which are from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In particular, two different image fusion strategies are considered in the current work. Firstly, images from the visible and thermal spectra are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both the monocular visible spectrum and the monocular infrared spectrum are also provided, showing the validity of the proposed approach.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 44
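The first fusion strategy in the abstract merges visible and thermal images in the wavelet domain. A minimal single-level Haar sketch (averaging the approximation coefficients and keeping the larger-magnitude detail coefficients is one common fusion rule; the paper's exact DWT setup may differ):

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar transform: approximation + 3 detail bands."""
    a = (img[0::2] + img[1::2]) / 2            # rows: average
    d = (img[0::2] - img[1::2]) / 2            # rows: difference
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2], img[1::2] = a + d, a - d
    return img

def fuse(visible, thermal):
    """Average approximations, keep max-magnitude details, reconstruct."""
    bands_v, bands_t = haar_dwt2(visible), haar_dwt2(thermal)
    ll = (bands_v[0] + bands_t[0]) / 2
    details = [np.where(np.abs(v) >= np.abs(t), v, t)
               for v, t in zip(bands_v[1:], bands_t[1:])]
    return haar_idwt2(ll, *details)
```

The max-magnitude rule on the detail bands is what lets edges visible only in the thermal image survive into the fused representation that the odometry front-end then tracks.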
 

 
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias
Title Scene representations for autonomous driving: an approach based on polygonal primitives Type Conference Article
Year 2015 Publication Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 Abbreviated Journal
Volume 417 Issue Pages 503-515
Keywords Scene reconstruction, Point cloud, Autonomous vehicles
Abstract In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.
Address
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference Second Iberian Robotics Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 45
 

 
Author Wilton Agila; Ricardo Cajo; Douglas Plaza
Title Experts Agents in PEM Fuel Cell Control Type Conference Article
Year 2015 Publication 4th International Conference on Renewable Energy Research and Applications (ICRERA 2015) Abbreviated Journal
Volume Issue Pages 896 - 900
Keywords PEM Fuel Cell; Expert Agent; Perceptive Agents; Acting Agent; Fuzzy Controller
Abstract Control of a PEM (Proton Exchange Membrane) fuel cell requires both deliberative and reactive processes that facilitate the control tasks arising from a wide range of operating scenarios and conditions. The latter are essential to adjust its parameters to the multiplicity of circumstances that may occur in the operation of the PEM stack. In this context, the design and development of an expert-agent-based architecture for autonomous control of the PEM stack under optimal working conditions is presented. The architecture integrates perception and control algorithms using sensory and context information. It is structured in a hierarchy of levels with different time windows and levels of abstraction. The monitoring model and autonomic control of the PEM stack have been validated with different types of PEM stacks and operating conditions, demonstrating high reliability in achieving the proposed energy-efficiency objective. Dynamic control of membrane wetting is a clear example.
Address
Corporate Author Thesis
Publisher IEEE Place of Publication Palermo, Italy Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference 2015 International Conference on Renewable Energy Research and Applications (ICRERA)
Notes Approved no
Call Number cidis @ cidis @ Serial 46
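The keywords list a fuzzy controller among the acting agents. A toy single-input sketch (the triangular membership functions and rule table below are invented for illustration; they are not the paper's controller) mapping membrane humidity to a humidifier duty cycle:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def humidifier_duty(rh):
    """Map relative membrane humidity (0..1) to humidifier duty (0..1).
    Rules: DRY -> HIGH duty, OK -> MEDIUM duty, WET -> LOW duty."""
    dry = tri(rh, -0.4, 0.0, 0.6)
    ok  = tri(rh, 0.3, 0.6, 0.9)
    wet = tri(rh, 0.7, 1.0, 1.4)
    # Weighted average of singleton rule outputs (centroid defuzzification).
    num = dry * 0.9 + ok * 0.5 + wet * 0.1
    den = dry + ok + wet
    return num / den if den else 0.5
```

Overlapping memberships make the output change smoothly with humidity, which is the kind of reactive behaviour the architecture's lower levels delegate to fuzzy control.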
 

 
Author Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla
Title Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study Type Journal Article
Year 2016 Publication Sensors Journal Abbreviated Journal
Volume 16 Issue Pages 1-15
Keywords image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform
Abstract This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR).
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 47
 

 
Author Cristhian A. Aguilera; Francisco J. Aguilera; Angel D. Sappa; Ricardo Toledo
Title Learning cross-spectral similarity measures with deep convolutional neural networks Type Conference Article
Year 2016 Publication IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops Abbreviated Journal
Volume Issue Pages 267-275
Keywords
Abstract The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene which cannot be obtained with images from one spectral band alone. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 48
 

 
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume 83 Issue Pages 312-325
Keywords Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives
Abstract When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 49
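The polygonal-primitive representation starts from large planar regions detected in range data. A minimal least-squares plane fit via SVD (only the very first step; segmentation, polygon extraction and the incremental update mechanisms are the paper's contribution):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (centroid, unit normal)."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    # The singular vector with the smallest singular value of the centred
    # cloud is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# Hypothetical ground patch: z = 0 plus mild range-sensor noise.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-5, 5, 200),
                       rng.uniform(-5, 5, 200),
                       rng.normal(0.0, 0.01, 200)])
c, n = fit_plane(pts)
print(abs(n[2]))  # close to 1: the recovered normal is near the z axis
```

Storing each detected region as one such plane plus its boundary polygon, rather than thousands of raw points, is what makes the representation compact enough to update incrementally as new scans arrive.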