Author Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic
  Title Sensor Fault Detection and Diagnosis for autonomous vehicles Type Conference Article
  Year 2015 Publication MATEC Web of Conferences, 2nd International Conference on Mechatronics, Automation and Manufacturing (ICMAM 2015), Singapore Abbreviated Journal
  Volume 30 Issue Pages 1-6
  Keywords  
  Abstract In recent years testing autonomous vehicles on public roads has become a reality. However, before having autonomous vehicles completely accepted on the roads, they have to demonstrate safe operation and reliable interaction with other traffic participants. Furthermore, in real situations and long term operation, there is always the possibility that diverse components may fail. This paper deals with possible sensor faults by defining a federated sensor data fusion architecture. The proposed architecture is designed to detect obstacles in an autonomous vehicle’s environment while detecting a faulty sensor using SVM models for fault detection and diagnosis. Experimental results using sensor information from the KITTI dataset confirm the feasibility of the proposed architecture to detect soft and hard faults from a particular sensor.  
  Address  
  Corporate Author Thesis  
  Publisher EDP Sciences Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 42  
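The abstract above describes SVM models used to decide whether a particular sensor is faulty. As a concrete, illustrative starting point only (not the authors' implementation), the following scikit-learn sketch trains a fault/no-fault SVM on hypothetical per-sensor disagreement features; the feature construction, synthetic data, and RBF-kernel choice are assumptions.

```python
# Minimal sketch: SVM-based sensor fault detection (illustrative only; not the
# paper's implementation). Features and labels are synthetic stand-ins for
# per-sensor residuals computed against the fused obstacle estimate.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical features: mean/max disagreement between one sensor's detections
# and the fused obstacle map over a short time window.
normal = rng.normal(loc=0.1, scale=0.05, size=(500, 3))   # healthy sensor
faulty = rng.normal(loc=0.6, scale=0.20, size=(500, 3))   # drifting sensor
X = np.vstack([normal, faulty])
y = np.hstack([np.zeros(500), np.ones(500)])              # 1 = fault

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM as one plausible choice for the fault/no-fault decision.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```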
 

 
Author Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla
  Title Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study Type Journal Article
  Year 2016 Publication Sensors Abbreviated Journal
  Volume 16 Issue Pages 1-15
  Keywords image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform  
  Abstract This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR).
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 47  
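As a companion to the record above, the following sketch shows one member of the family of wavelet-based fusion setups the paper evaluates: a 2D DWT decomposition of both inputs, averaging of the approximation band, and maximum-absolute selection of the detail bands. The PyWavelets library, the 'db2' wavelet, and the three-level decomposition are arbitrary choices for illustration, not the best setup identified in the paper.

```python
# Minimal sketch of wavelet-based visible/infrared fusion (one possible setup,
# not necessarily the best one reported in the paper): decompose both images
# with a 2D DWT, average the approximation band, keep the detail coefficient
# with the larger magnitude, then reconstruct.
import numpy as np
import pywt  # PyWavelets


def dwt_fuse(visible: np.ndarray, infrared: np.ndarray,
             wavelet: str = "db2", level: int = 3) -> np.ndarray:
    """Fuse two registered single-channel float images of equal size."""
    c_vis = pywt.wavedec2(visible, wavelet, level=level)
    c_ir = pywt.wavedec2(infrared, wavelet, level=level)

    fused = [(c_vis[0] + c_ir[0]) / 2.0]            # approximation band: average
    for dv, di in zip(c_vis[1:], c_ir[1:]):         # detail bands per level
        bands = []
        for bv, bi in zip(dv, di):                  # (cH, cV, cD)
            bands.append(np.where(np.abs(bv) >= np.abs(bi), bv, bi))
        fused.append(tuple(bands))

    rec = pywt.waverec2(fused, wavelet)
    return rec[:visible.shape[0], :visible.shape[1]]  # crop possible padding


if __name__ == "__main__":
    vis = np.random.rand(256, 256)   # stand-ins for a registered NIR/LWIR pair
    ir = np.random.rand(256, 256)
    print(dwt_fuse(vis, ir).shape)
```

Swapping the per-band rules (for example, averaging the details or selecting the approximation by magnitude) reproduces other setups of the kind the paper compares.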
 

 
Author Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic
  Title A Fault Tolerant Perception system for autonomous vehicles Type Conference Article
  Year 2016 Publication 35th Chinese Control Conference (CCC 2016), Chengdu, China Abbreviated Journal
  Volume Issue Pages 1-6
  Keywords Fault Tolerant Perception, Sensor Data Fusion, Fault Tolerance, Autonomous Vehicles, Federated Architecture  
  Abstract Driverless vehicles are currently being tested on public roads in order to examine their ability to perform in a safe and reliable way in real-world situations. However, the long-term reliable operation of a vehicle's diverse sensors and the effects of potential sensor faults on the vehicle system have not been tested yet. This paper proposes a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented in which faults are simulated by introducing displacements into the sensor information from the KITTI dataset.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 52  
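The experiments above simulate faults by displacing sensor information. A toy sketch of that idea, with made-up data and no claim to match the paper's setup, is given below: a constant displacement is injected into one sensor's obstacle positions and the disagreement with a second sensor is measured.

```python
# Toy sketch of the kind of fault injection described above: add a constant
# displacement to one sensor's obstacle positions and watch the disagreement
# with a second (healthy) sensor grow. Data and thresholds are made up.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(0, 50, size=(100, 2))            # obstacle x, y positions (m)

lidar = truth + rng.normal(0, 0.05, truth.shape)     # healthy measurement
camera = truth + rng.normal(0, 0.10, truth.shape)    # healthy measurement
camera_faulty = camera + np.array([1.5, 0.0])        # injected lateral displacement


def mean_disagreement(a, b):
    return float(np.mean(np.linalg.norm(a - b, axis=1)))


print("healthy pair:", mean_disagreement(lidar, camera))
print("faulty pair :", mean_disagreement(lidar, camera_faulty))
# A monitor could flag the camera once the disagreement exceeds a tuned threshold.
```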
 

 
Author Juan A. Carvajal; Dennis G. Romero; Angel D. Sappa
  Title Fine-tuning based deep convolutional networks for lepidopterous genus recognition Type Conference Article
  Year 2016 Publication XXI IberoAmerican Congress on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 1-9
  Keywords  
  Abstract This paper describes an image classification approach oriented to identify specimens of lepidopterous insects recognized at Ecuadorian ecological reserves. This work seeks to contribute to studies in the area of biology about the genus of butterflies and also to facilitate the registration of unrecognized specimens. The proposed approach is based on the fine-tuning of three widely used pre-trained Convolutional Neural Networks (CNNs). This strategy is intended to overcome the reduced number of labeled images. Experimental results with a dataset labeled by expert biologists are presented; a recognition accuracy above 92% is reached.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 53  
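Since the record above centers on fine-tuning pre-trained CNNs with a small labeled set, a minimal PyTorch sketch of that general strategy is included here. The ResNet-18 backbone, the frozen-backbone setup, and the class count are placeholders; the paper fine-tunes three widely used pre-trained networks, not necessarily this one or in this exact way.

```python
# Minimal PyTorch sketch of the fine-tuning strategy described above: take an
# ImageNet-pretrained CNN, freeze its convolutional backbone, and train only
# a new classification head for the butterfly genera. Backbone choice and the
# number of genera are placeholders, not the paper's configuration.
import torch
import torch.nn as nn
from torchvision import models

NUM_GENERA = 15                        # hypothetical number of genera

model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():           # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_GENERA)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (replace with a DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_GENERA, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```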
 

 
Author Cristhian A. Aguilera; Cristhian Aguilera; Cristóbal A. Navarro; Angel D. Sappa
  Title Fast CNN Stereo Depth Estimation through Embedded GPU Devices Type Journal Article
  Year 2020 Publication Sensors Abbreviated Journal
  Volume 20 Issue 11 Pages 1-13
  Keywords stereo matching; deep learning; embedded GPU  
  Abstract Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphic processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so their actual potential on embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically augmenting the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1424-8220 ISBN Medium
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 132  
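To make the cost-volume idea above more tangible, here is a rough PyTorch sketch of a U-Net-like 2D refinement of a stereo cost volume, in the spirit of replacing a sequence of 3D convolutions. Channel widths, depth, and the soft-argmin readout are assumptions, not the paper's network.

```python
# Rough sketch of the idea highlighted above: treat the (D, H, W) cost volume
# as a D-channel 2D map and refine it with a small U-Net built from cheap 2D
# convolutions instead of a stack of 3D convolutions. All sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )


class CostVolumeUNet(nn.Module):
    def __init__(self, max_disp=64):
        super().__init__()
        self.max_disp = max_disp
        self.enc1 = block(max_disp, 64)
        self.enc2 = block(64, 128)
        self.dec1 = block(128 + 64, 64)
        self.out = nn.Conv2d(64, max_disp, 1)

    def forward(self, cost):                     # cost: (B, D, H, W)
        e1 = self.enc1(cost)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        up = F.interpolate(e2, size=e1.shape[-2:], mode="bilinear", align_corners=False)
        d1 = self.dec1(torch.cat([up, e1], dim=1))
        refined = self.out(d1)                   # refined matching costs
        prob = F.softmax(-refined, dim=1)        # soft-argmin over disparity
        disps = torch.arange(self.max_disp, device=cost.device).view(1, -1, 1, 1).float()
        return (prob * disps).sum(dim=1)         # (B, H, W) disparity map


if __name__ == "__main__":
    vol = torch.randn(1, 64, 92, 304)            # dummy quarter-resolution cost volume
    print(CostVolumeUNet(64)(vol).shape)         # -> torch.Size([1, 92, 304])
```

Treating the disparity dimension as image channels keeps every operation a cheap 2D convolution, which is the kind of trade-off that makes such postprocessing attractive on embedded GPUs.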
 

 
Author Ángel Morera; Ángel Sánchez; A. Belén Moreno; Angel D. Sappa; José F. Vélez
  Title SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities. Type Journal Article
  Year 2020 Publication Sensors Abbreviated Journal
  Volume 20 Issue 16 Pages 1-23
  Keywords object detection; urban outdoor panels; one-stage detectors; Single Shot MultiBox Detector (SSD); You Only Look Once (YOLO); detection metrics; object and scene imaging variabilities  
  Abstract This work compares Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks for the outdoor advertisement panel detection problem by handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages both in the real world as well as in the virtual one. For example, applications like Google Street View can be used for Internet publicity, and when these ad panels are detected in images, it could be possible to replace the publicity appearing inside the panels by another from a funding company. In our experiments, both SSD and YOLO detectors have produced acceptable results under variable sizes of panels, illumination conditions, viewing perspectives, partial occlusion of panels, complex background and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was the almost complete elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside the panel is analyzed after detection. On the other hand, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with a higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks, using the same evaluation metrics, is also included.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1424-8220 ISBN Medium
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 133  
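The comparison above is reported in terms of True Positive and False Positive panel detections. The short sketch below shows a typical way such counts are obtained, by greedy IoU matching of predicted boxes against ground-truth panels; the boxes and the 0.5 threshold are illustrative and not taken from the paper.

```python
# Small sketch of the kind of TP/FP accounting used when comparing detectors
# such as SSD and YOLO: greedily match predicted boxes to ground-truth panels
# by IoU and count matches above a threshold as true positives. Boxes are
# made-up examples in (x1, y1, x2, y2) pixel coordinates.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0


def count_tp_fp(predictions, ground_truth, thr=0.5):
    matched, tp = set(), 0
    for p in predictions:                       # assume sorted by confidence
        best, best_j = 0.0, None
        for j, g in enumerate(ground_truth):
            if j not in matched and iou(p, g) > best:
                best, best_j = iou(p, g), j
        if best >= thr:
            matched.add(best_j)
            tp += 1
    fp = len(predictions) - tp
    fn = len(ground_truth) - tp
    return tp, fp, fn


preds = [(48, 30, 120, 90), (200, 40, 260, 110), (400, 10, 430, 50)]
gts = [(50, 32, 118, 92), (198, 38, 262, 112)]
print(count_tp_fp(preds, gts))                  # -> (2, 1, 0)
```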
 

 
Author Roberto Jacome Galarza; Miguel-Andrés Realpe-Robalino; Luis-Antonio Chamba-Eras; Marlon-Santiago Viñán-Ludeña; Javier-Francisco Sinche-Freire
  Title Computer vision for image understanding. A comprehensive review Type Conference Article
  Year 2019 Publication International Conference on Advances in Emerging Trends and Technologies (ICAETT 2019); Quito, Ecuador Abbreviated Journal  
  Volume Issue Pages
  Keywords  
  Abstract Computer Vision has its own Turing test: can a machine describe the contents of an image or a video in the way a human being would? In this paper, the progress of Deep Learning for image recognition is analyzed in order to answer this question. In recent years, Deep Learning has considerably increased the precision of many tasks related to computer vision. Many datasets of labeled images are now available online, which leads to pre-trained models for many computer vision applications. In this work, we gather information on the latest techniques for image understanding and description. We conclude that combining Natural Language Processing (using Recurrent Neural Networks and Long Short-Term Memory) with Image Understanding (using Convolutional Neural Networks) could enable new types of powerful and useful applications in which the computer is able to answer questions about the content of images and videos. Building datasets of labeled images requires considerable effort, and most such datasets are built using crowd work. These new applications have the potential to raise human-machine interaction to new levels of usability and user satisfaction.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 97  
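The review above concludes that pairing a CNN image encoder with an RNN/LSTM language model could enable image description. The following skeleton sketches that pairing in PyTorch; the vocabulary size, embedding sizes, and ResNet-18 encoder are arbitrary illustrative choices, not a model from the reviewed literature.

```python
# Skeleton of the CNN + LSTM pairing the review points to for image description
# (architecture outline only; sizes and backbone are arbitrary).
import torch
import torch.nn as nn
from torchvision import models

VOCAB, EMBED, HIDDEN = 5000, 256, 512


class CaptionModel(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)           # CNN image encoder
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.img_proj = nn.Linear(512, EMBED)
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.lstm = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)             # (B, 512)
        start = self.img_proj(feats).unsqueeze(1)           # image as first "token"
        tokens = self.embed(captions)                       # (B, T, EMBED)
        seq = torch.cat([start, tokens], dim=1)
        out, _ = self.lstm(seq)
        return self.head(out)                               # next-token scores


model = CaptionModel()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, VOCAB, (2, 12)))
print(logits.shape)                                         # -> (2, 13, 5000)
```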
 

 
Author Cristhian A. Aguilera; Cristhian Aguilera; Angel D. Sappa
  Title Melamine faced panels defect classification beyond the visible spectrum. Type Journal Article
  Year 2018 Publication Sensors Abbreviated Journal
  Volume Issue Pages
  Keywords  
  Abstract In this work, we explore the use of images from different spectral bands to classify defects in melamine faced panels, which could appear through the production process. Through experimental evaluation, we evaluate the use of images from the visible (VS), near-infrared (NIR), and long wavelength infrared (LWIR) spectra to classify the defects using a feature descriptor learning approach together with a support vector machine classifier. Two descriptors were evaluated: Extended Local Binary Patterns (E-LBP) and SURF using a Bag of Words (BoW) representation. The evaluation was carried out with an image set obtained during this work, which contains five defect categories that currently occur in the industry. Results show that using images from beyond the visible spectrum helps to improve classification performance in contrast with a single visible-spectrum solution.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 89  
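The pipeline above combines texture descriptors (E-LBP, or SURF with a Bag of Words) with an SVM classifier. A reduced sketch of that kind of pipeline, using plain uniform LBP histograms (not the paper's Extended LBP) on synthetic patches, is shown below; the data, patch sizes, and class labels are all invented for illustration.

```python
# Reduced sketch in the spirit of the pipeline above: describe each panel patch
# with a uniform LBP histogram (plain LBP here, not the paper's Extended LBP)
# and classify defect categories with an SVM. Images and labels are synthetic.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighbourhood


def lbp_histogram(patch):
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist


rng = np.random.default_rng(0)
# Synthetic stand-ins for NIR patches of two defect classes with different textures.
smooth = [rng.normal(0.5, 0.02, (64, 64)) for _ in range(60)]
scratched = [rng.normal(0.5, 0.15, (64, 64)) for _ in range(60)]

X = np.array([lbp_histogram(p) for p in smooth + scratched])
y = np.array([0] * 60 + [1] * 60)

clf = SVC(kernel="rbf", gamma="scale").fit(X[::2], y[::2])   # train on half
print("accuracy on held-out half:", clf.score(X[1::2], y[1::2]))
```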
 

 
Author Juan A. Carvajal; Dennis G. Romero; Angel D. Sappa
  Title Fine-tuning deep convolutional networks for lepidopterous genus recognition Type Journal Article
  Year 2017 Publication Lecture Notes in Computer Science Abbreviated Journal  
  Volume Issue Pages
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 63  
 

 
Author Victor Santos; Angel D. Sappa; Miguel Oliveira
  Title Special Issue on Autonomous Driving and Driver Assistance Systems Type Journal Article
  Year 2017 Publication Robotics and Autonomous Systems Abbreviated Journal
  Volume Issue Pages
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 65  