|
Records |
Links |
|
Author |
Dennis G. Romero; Anselmo Frizera N.; Teodiano Freire B. |

|
|
Title |
Reconocimiento en-línea de acciones humanas basado en patrones de RWE aplicado en ventanas dinámicas de momentos invariantes |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Revista Iberoamericana de Automática e Informática Industrial |
Abbreviated Journal |
|
|
|
Volume |
11 |
Issue |
|
Pages |
202-211 |
|
|
Keywords |
Visión por ordenador, Mapas de profundidad, Reconocimiento de acciones humanas, Relative Wavelet Energy, Distancia de Mahalanobis |
|
|
Abstract |
In recent years there has been a sharp increase in Internet access, forcing data centers (DCs) to dynamically adapt their network infrastructure to cope with possible congestion problems, which does not always happen in a timely manner. In response, new network topologies have been proposed as a way to provide better conditions for handling internal traffic; however, studying these improvements usually requires recreating the behavior of a real DC in simulation/emulation models. It therefore becomes essential to validate such models in order to obtain results consistent with reality. This validation is possible through the identification of certain properties, deduced from the variables and parameters that describe the network, which hold across DC topologies under diverse scenarios and/or configurations. These properties, known as invariants, express the behavior of the network in real environments, such as the longest path between two nodes or the minimum number of links that must fail before one of the network's nodes loses connectivity. In this work, two invariants for the Fat-Tree topology are identified, formulated, and verified, using mininet as the emulation software. The conclusions show agreement between the analytical and the practical results. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher  |
|
Place of Publication |
|
Editor |
|
|
|
Language |
Spanish |
Summary Language |
Spanish |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
30 |
|
Permanent link to this record |
|
|
|
|
Author |
Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla |

|
|
Title |
Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Sensors Journal |
Abbreviated Journal |
|
|
|
Volume |
16 |
Issue |
|
Pages |
1-15 |
|
|
Keywords |
image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform |
|
|
Abstract |
This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and performance of obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR). |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher  |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
47 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira |

|
|
Title |
Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Robotics and Autonomous Systems Journal |
Abbreviated Journal |
|
|
|
Volume |
83 |
Issue |
|
Pages |
312-325 |
|
|
Keywords |
Incremental scene reconstruction, Point clouds, Autonomous vehicles, Polygonal primitives |
|
|
Abstract |
When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher  |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
49 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira |

|
|
Title |
Incremental Texture Mapping for Autonomous Driving |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Robotics and Autonomous Systems Journal |
Abbreviated Journal |
|
|
|
Volume |
84 |
Issue |
|
Pages |
113-128 |
|
|
Keywords |
Scene reconstruction, Autonomous driving, Texture mapping |
|
|
Abstract |
Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher  |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
50 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic |

|
|
Title |
Multi-sensor Fusion Module in a Fault Tolerant Perception System for Autonomous Vehicles |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Journal of Automation and Control Engineering (JOACE) |
Abbreviated Journal |
|
|
|
Volume |
4 |
Issue |
|
Pages |
430-436 |
|
|
Keywords |
Fault Tolerance, Data Fusion, Multi-sensor Fusion, Autonomous Vehicles, Perception System |
|
|
Abstract |
Driverless vehicles are currently being tested on public roads in order to examine their ability to perform in a safe and reliable way in real world situations. However, the long-term reliable operation of a vehicle's diverse sensors and the effects of potential sensor faults in the vehicle system have not been tested yet. This paper proposes a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented simulating faults by introducing displacements in the sensor information from the KITTI dataset. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher  |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
51 |
|
Permanent link to this record |
|
|
|
|
Author |
Angel D. Sappa; Cristhian A. Aguilera; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla; Ricardo Toledo |


|
|
Title |
Monocular visual odometry: a cross-spectral image fusion based approach |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Robotics and Autonomous Systems Journal |
Abbreviated Journal |
|
|
|
Volume |
86 |
Issue |
|
Pages |
26-36 |
|
|
Keywords |
Monocular visual odometry; LWIR-RGB cross-spectral imaging; Image fusion |
|
|
Abstract |
This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with monocular visible/infrared spectra are also provided, showing the advantages of the proposed scheme. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher  |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
54 |
|
Permanent link to this record |
|
|
|
|
Author |
Marjorie Chalen; Boris X. Vintimilla |

|
|
Title |
Towards Action Prediction Applying Deep Learning |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Latin American Conference on Computational Intelligence (LA-CCI); Guayaquil, Ecuador; 11-15 November 2019 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
action prediction, early recognition, early detection, action anticipation, CNN, deep learning, RNN, LSTM |
|
|
Abstract |
Action prediction by video analysis is a computer vision task under incremental development, in which future actions are inferred from incomplete action executions. Deep learning is playing an important role in this task framework. Thus, this paper describes recent techniques and pertinent datasets utilized in the human action prediction task. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher  |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
129 |
|
Permanent link to this record |
|
|
|
|
Author |
Santos V.; Angel D. Sappa; Oliveira M.; de la Escalera A. |

|
|
Title |
Special Issue on Autonomous Driving and Driver Assistance Systems |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Robotics and Autonomous Systems |
Abbreviated Journal |
|
|
|
Volume |
121 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher  |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
gtsi @ user @ |
Serial |
119 |
|
Permanent link to this record |
|
|
|
|
Author |
Cristhian A. Aguilera; Cristhian Aguilera; Cristóbal A. Navarro; Angel D. Sappa |

|
|
Title |
Fast CNN Stereo Depth Estimation through Embedded GPU Devices |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Sensors |
Abbreviated Journal |
|
|
|
Volume |
20 |
Issue |
11 |
Pages |
1-13 |
|
|
Keywords |
stereo matching; deep learning; embedded GPU |
|
|
Abstract |
Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphic processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net like architecture for postprocessing the cost-volume, instead of a typical sequence of 3D convolutions, drastically augmenting the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher  |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1424-8220 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
132 |
|
Permanent link to this record |
|
|
|
|
Author |
Ángel Morera; Ángel Sánchez; A. Belén Moreno; Angel D. Sappa; José F. Vélez |

|
|
Title |
SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Sensors |
Abbreviated Journal |
|
|
|
Volume |
20 |
Issue |
16 |
Pages |
1-23 |
|
|
Keywords |
object detection; urban outdoor panels; one-stage detectors; Single Shot MultiBox Detector (SSD); You Only Look Once (YOLO); detection metrics; object and scene imaging variabilities |
|
|
Abstract |
This work compares Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks for the outdoor advertisement panel detection problem by handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages both in the real world as well as in the virtual one. For example, applications like Google Street View can be used for Internet publicity and when detecting these ads panels in images, it could be possible to replace the publicity appearing inside the panels by another from a funding company. In our experiments, both SSD and YOLO detectors have produced acceptable results under variable sizes of panels, illumination conditions, viewing perspectives, partial occlusion of panels, complex background and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was the almost complete elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside the panel is analyzed after detection. On the other hand, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with a higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks, using the same evaluation metrics, is also included. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher  |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1424-8220 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
133 |
|
Permanent link to this record |