Author Marta Diaz; Dennys Paillacho; Cecilio Angulo
  Title Evaluating Group-Robot Interaction in Crowded Public Spaces: A Week-Long Exploratory Study in the Wild with a Humanoid Robot Guiding Visitors Through a Science Museum. Type Journal Article
  Year 2015 Publication International Journal of Humanoid Robotics Abbreviated Journal  
  Volume Vol. 12 Issue Pages  
  Keywords Group-robot interaction; robotic-guide; social navigation; space management; spatial formations; group walking behavior; crowd behavior  
  Abstract This paper describes an exploratory study on group interaction with a robot guide in a large, open, busy environment. For an entire week, a humanoid robot was deployed in the popular Cosmocaixa Science Museum in Barcelona and guided hundreds of people through the museum facilities. The main goal of this experience was to study, in the wild, the episodes of the robot guiding visitors to a requested destination, focusing on group behavior during displacement. The follow-me walking behavior and face-to-face communication in a populated environment are analyzed in terms of guide–visitor interaction, grouping patterns, and spatial formations. Results from observational data show that the space configurations spontaneously formed by the robot guide and visitors walking together did not always meet the robot's communicative and navigational requirements for successful guidance. Therefore, additional verbal and nonverbal prompts must be considered to effectively regulate the walking-together and follow-me behaviors. Finally, we discuss lessons learned and recommendations for the robot's spatial behavior in densely crowded scenarios.
  Address  
  Corporate Author Thesis  
  Publisher International Journal of Humanoid Robotics Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 34  
Author Ma. Paz Velarde; Erika Perugachi; Dennis G. Romero; Ángel D. Sappa; Boris X. Vintimilla
  Title Análisis del movimiento de las extremidades superiores aplicado a la rehabilitación física de una persona usando técnicas de visión artificial [Analysis of upper-limb movement applied to the physical rehabilitation of a person using computer vision techniques]. Type Journal Article
  Year 2015 Publication Revista Tecnológica ESPOL-RTE Abbreviated Journal  
  Volume Vol. 28 Issue Pages pp. 1-7  
  Keywords Rehabilitation; RGB-D Sensor; Computer Vision; Upper limb  
  Abstract Commonly during physical rehabilitation, the diagnosis given by the specialist is based on qualitative observations that, in some cases, suggest subjective conclusions. This work proposes a quantitative approach, intended to assist physiotherapists, through an interactive, low-cost tool that measures upper-limb movements. These movements are captured by an RGB-D sensor and processed with the proposed methodology, resulting in an efficient representation of movements that enables the quantitative evaluation of upper-limb motion.
  Address  
  Corporate Author Thesis  
  Publisher ESPOL Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 39  
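The quantitative evaluation described in the abstract above ultimately rests on turning 3D joint positions from the RGB-D sensor into joint angles. A minimal sketch of that core computation, assuming three skeleton points per limb (the function name and joint layout are illustrative, not taken from the paper):

```python
import math

def joint_angle_deg(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist for the elbow flexion angle."""
    u = [a[i] - b[i] for i in range(3)]   # vector elbow -> shoulder
    v = [c[i] - b[i] for i in range(3)]   # vector elbow -> wrist
    dot = sum(u[i] * v[i] for i in range(3))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    cosang = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp against rounding
    return math.degrees(math.acos(cosang))

# A right angle at the elbow (approximately 90 degrees):
angle = joint_angle_deg((0.0, 0.0, 0.0), (0.0, -0.3, 0.0), (0.3, -0.3, 0.0))
```

Tracking such an angle frame by frame over a repetition of an exercise is what turns the therapist's qualitative observation into a measurable quantity.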
Author Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla
  Title Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study Type Journal Article
  Year 2016 Publication Sensors Journal Abbreviated Journal  
  Volume Vol. 16 Issue Pages pp. 1-15  
  Keywords image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform  
  Abstract This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR).
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 47  
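As a concrete illustration of the family of strategies the study compares, here is a hedged sketch of one classical setup: a one-level 2D Haar decomposition, with approximation (LL) coefficients averaged and detail coefficients fused by maximum absolute value. This is not the paper's implementation, only a minimal instance of the technique it evaluates:

```python
def haar2d(img):
    """One-level 2D Haar transform over non-overlapping 2x2 blocks.
    Returns (LL, LH, HL, HH) sub-bands, each half the input size."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4  # average
            LH[i // 2][j // 2] = (a + b - c - d) / 4  # vertical detail
            HL[i // 2][j // 2] = (a - b + c - d) / 4  # horizontal detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4  # diagonal detail
    return LL, LH, HL, HH

def ihaar2d(bands):
    """Inverse of haar2d: rebuild the full-resolution image."""
    LL, LH, HL, HH = bands
    h, w = len(LL) * 2, len(LL[0]) * 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(len(LL)):
        for j in range(len(LL[0])):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            out[2 * i][2 * j] = ll + lh + hl + hh
            out[2 * i][2 * j + 1] = ll + lh - hl - hh
            out[2 * i + 1][2 * j] = ll - lh + hl - hh
            out[2 * i + 1][2 * j + 1] = ll - lh - hl + hh
    return out

def fuse(vis, ir):
    """Average LL coefficients; keep the max-absolute detail coefficient."""
    bv, bi = haar2d(vis), haar2d(ir)
    fused = []
    for k in range(4):
        A, B = bv[k], bi[k]
        if k == 0:  # approximation band: average
            fused.append([[(A[i][j] + B[i][j]) / 2 for j in range(len(A[0]))]
                          for i in range(len(A))])
        else:       # detail bands: pick the stronger response
            fused.append([[A[i][j] if abs(A[i][j]) >= abs(B[i][j]) else B[i][j]
                           for j in range(len(A[0]))] for i in range(len(A))])
    return ihaar2d(fused)
```

Swapping the decomposition (wavelet type, number of levels) or the per-band fusion rule yields the grid of setups whose correlations with the evaluation metrics the paper studies.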
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
  Title Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives Type Journal Article
  Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal  
  Volume Vol. 83 Issue Pages pp. 312-325  
  Keywords Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives  
  Abstract When an autonomous vehicle travels through a scenario, it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial to create, and update over time, a representation of the environment observed by the vehicle. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro-scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large-scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 49  
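Working with macro-scale polygonal primitives, as above, typically requires each polygon's supporting plane. A standard, robust way to obtain a polygon's normal directly from its vertex list is Newell's method; a small sketch (illustrative, not the paper's code):

```python
import math

def newell_normal(poly):
    """Unit normal of a planar 3D polygon (list of (x, y, z) vertices)
    via Newell's method; robust to near-collinear vertex triples."""
    nx = ny = nz = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1, z1 = poly[i]
        x2, y2, z2 = poly[(i + 1) % n]  # wrap around to close the polygon
        nx += (y1 - y2) * (z1 + z2)
        ny += (z1 - z2) * (x1 + x2)
        nz += (x1 - x2) * (y1 + y2)
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

The unnormalized vector's magnitude is also twice the polygon's area, which is handy when deciding whether a primitive is large enough to keep in the scene representation.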
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
  Title Incremental Texture Mapping for Autonomous Driving Type Journal Article
  Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal  
  Volume Vol. 84 Issue Pages pp. 113-128  
  Keywords Scene reconstruction; Autonomous driving; Texture mapping  
  Abstract Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 50  
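Mapping texture onto a triangulated mesh, as in the abstract above, ultimately reduces to interpolating per-vertex texture (UV) coordinates across each triangle with barycentric weights. A minimal sketch of that interpolation step (names are illustrative; the paper's constrained Delaunay machinery is not reproduced here):

```python
def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p w.r.t. triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return w1, w2, 1.0 - w1 - w2

def interp_uv(p, tri_xy, tri_uv):
    """Interpolate per-vertex UV texture coordinates at point p."""
    w1, w2, w3 = barycentric(p, *tri_xy)
    u = w1 * tri_uv[0][0] + w2 * tri_uv[1][0] + w3 * tri_uv[2][0]
    v = w1 * tri_uv[0][1] + w2 * tri_uv[1][1] + w3 * tri_uv[2][1]
    return u, v
```

A weight outside [0, 1] signals that the point falls outside the triangle, which is one simple way to detect the texture gaps the paper's mesh-update operations are designed to avoid.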
Author Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic
  Title Multi-sensor Fusion Module in a Fault Tolerant Perception System for Autonomous Vehicles Type Journal Article
  Year 2016 Publication Journal of Automation and Control Engineering (JOACE) Abbreviated Journal  
  Volume Vol. 4 Issue Pages pp. 430-436  
  Keywords Fault Tolerance; Data Fusion; Multi-sensor Fusion; Autonomous Vehicles; Perception System  
  Abstract Driverless vehicles are currently being tested on public roads in order to examine their ability to perform in a safe and reliable way in real-world situations. However, the long-term reliable operation of a vehicle's diverse sensors, and the effects of potential sensor faults on the vehicle system, have not been tested yet. This paper proposes a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented simulating faults by introducing displacements in the sensor information from the KITTI dataset.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 51  
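The fault-minimizing idea in the abstract can be illustrated with the simplest form of redundancy-based fusion: reject any reading that deviates too far from the median of the redundant set, then average the survivors. A toy sketch (scalar readings and a fixed tolerance are assumptions for illustration, not the paper's architecture):

```python
def robust_fuse(readings, tol=1.0):
    """Fuse redundant scalar sensor readings: discard any reading whose
    deviation from the median exceeds tol, then average the survivors.
    A single faulty (displaced) sensor cannot drag the fused estimate."""
    s = sorted(readings)
    n = len(s)
    med = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    ok = [r for r in readings if abs(r - med) <= tol]
    return sum(ok) / len(ok)
```

With three healthy sensors near 10.0 and one displaced to 25.0, the faulty reading is voted out and the fused value stays near 10.0, which is the behavior the paper's displacement experiments probe at the level of full perception pipelines.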
Author Cristhian A. Aguilera; Cristhian Aguilera; Cristóbal A. Navarro; Angel D. Sappa
  Title Fast CNN Stereo Depth Estimation through Embedded GPU Devices Type Journal Article
  Year 2020 Publication Sensors Abbreviated Journal  
  Volume Vol. 2020-June Issue 11 Pages pp. 1-13  
  Keywords stereo matching; deep learning; embedded GPU  
  Abstract Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphics processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1424-8220 ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 132  
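The cost volume that the abstract's U-Net postprocesses has a classical, non-learned counterpart: per-pixel matching costs over candidate disparities, followed by a winner-take-all choice. A small sum-of-absolute-differences (SAD) block-matching sketch of that baseline (purely illustrative; the paper's models are CNNs):

```python
def disparity_map(left, right, max_disp, radius=1):
    """Classical SAD block matching: build a cost volume over candidate
    disparities and take the per-pixel winner-take-all minimum."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best_cost, best_d = float('inf'), 0
            for d in range(max_disp + 1):
                if x - d < 0:          # candidate match falls off the image
                    break
                cost = 0.0             # SAD over a (2*radius+1)^2 window
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yy, xl, xr = y + dy, x + dx, x + dx - d
                        if 0 <= yy < h and 0 <= xl < w and 0 <= xr < w:
                            cost += abs(left[yy][xl] - right[yy][xr])
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y][x] = best_d
    return disp
```

Learned models replace the hand-crafted SAD cost and the argmin with convolutional features and a trainable aggregation stage; the paper's contribution is making that aggregation cheap enough for Jetson-class hardware.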
Author Ángel Morera; Ángel Sánchez; A. Belén Moreno; Angel D. Sappa; José F. Vélez
  Title SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities. Type Journal Article
  Year 2020 Publication Sensors Abbreviated Journal  
  Volume Vol. 2020-August Issue 16 Pages pp. 1-23  
  Keywords object detection; urban outdoor panels; one-stage detectors; Single Shot MultiBox Detector (SSD); You Only Look Once (YOLO); detection metrics; object and scene imaging variabilities  
  Abstract This work compares Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks for the outdoor advertisement panel detection problem by handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages both in the real world as well as in the virtual one. For example, applications like Google Street View can be used for Internet publicity, and when these ad panels are detected in images, it could be possible to replace the publicity appearing inside the panels with another from a funding company. In our experiments, both SSD and YOLO detectors produced acceptable results under variable sizes of panels, illumination conditions, viewing perspectives, partial occlusion of panels, complex background, and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was the near elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside the panel is analyzed after detection. On the other hand, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks, using the same evaluation metrics, is also included.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1424-8220 ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 133  
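The TP/FP detection metrics referenced above come from matching predicted boxes against ground truth with an Intersection-over-Union (IoU) threshold. A minimal sketch of that bookkeeping (the greedy matching rule here is a common convention, not necessarily the exact protocol used in the paper):

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def count_tp_fp(detections, truths, thr=0.5):
    """Greedy matching: a detection is a TP if it overlaps an as-yet
    unmatched ground-truth box with IoU >= thr, otherwise an FP."""
    matched, tp, fp = set(), 0, 0
    for det in detections:
        best, best_j = 0.0, -1
        for j, gt in enumerate(truths):
            if j in matched:
                continue
            o = iou(det, gt)
            if o > best:
                best, best_j = o, j
        if best >= thr:
            tp += 1
            matched.add(best_j)
        else:
            fp += 1
    return tp, fp
```

Sweeping the detector's confidence threshold and recomputing these counts yields the precision-recall curves on which SSD's low-FP and YOLO's high-TP behaviors show up.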
Author M.E. Morocho-Cayamcela; W. Lim
  Title Lateral confinement of high-impedance surface-waves through reinforcement learning Type Journal Article
  Year 2020 Publication Electronics Letters Abbreviated Journal  
  Volume Vol. 56 Issue 23, 12 November 2020 Pages pp. 1262-1264  
  Keywords  
  Abstract The authors present a model-free, policy-based reinforcement learning model that introduces perturbations on the pattern of a metasurface. The objective is to learn a policy that changes the size of the patches, and therefore the impedance on the sides of an artificially structured material. The proposed iterative model assigns the highest reward when the patch sizes allow transmission along a constrained path, and penalties when the patch sizes make the surface wave radiate to the sides of the metamaterial. After convergence, the proposed model learns an optimal patch pattern that achieves lateral confinement along the metasurface. Simulation results show that the learned pattern can effectively guide the electromagnetic wave through a metasurface, maintaining its instantaneous eigenstate when the homogeneity is perturbed. Moreover, the pattern learned to prevent reflections by changing the patch sizes adiabatically. The reflection coefficient S1,2 shows that most of the power is transferred from the source to the destination with the proposed design.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 139  
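The reward-driven search over patch sizes described above can be caricatured with the simplest model-free reinforcement learning machinery: an epsilon-greedy value learner over a discrete set of candidate sizes. This is a toy stand-in for the paper's policy-based method, and the reward function below is a made-up surrogate, not an electromagnetic simulation:

```python
import random

def learn_patch_size(reward, sizes, episodes=2000, eps=0.1, seed=0):
    """Model-free epsilon-greedy learner: estimate the expected reward of
    each candidate patch size by trial, and return the best candidate."""
    rng = random.Random(seed)
    q = {s: 0.0 for s in sizes}   # running reward estimates
    n = {s: 0 for s in sizes}     # visit counts
    for _ in range(episodes):
        if rng.random() < eps:                      # explore
            s = rng.choice(sizes)
        else:                                       # exploit current best
            s = max(sizes, key=lambda k: q[k])
        r = reward(s)
        n[s] += 1
        q[s] += (r - q[s]) / n[s]   # incremental mean update
    return max(sizes, key=lambda k: q[k])

# Hypothetical reward: transmission peaks at patch size 3, penalized elsewhere.
best = learn_patch_size(lambda s: 1.0 - abs(s - 3) * 0.2, [1, 2, 3, 4, 5])
```

The paper learns a full spatial pattern of patch sizes rather than a single scalar, but the reward-shaped trial-and-error loop is the same principle.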
Author J.L. Charco; A.D. Sappa; B.X. Vintimilla; H.O. Velesaca
  Title Camera pose estimation in multi-view environments: from virtual scenarios to the real world Type Journal Article
  Year 2021 Publication Image and Vision Computing (Article number 104182) Abbreviated Journal  
  Volume Vol. 110 Issue Pages  
  Keywords Relative camera pose estimation, Domain adaptation, Siamese architecture, Synthetic data, Multi-view environments  
  Abstract This paper presents a domain adaptation strategy to efficiently train network architectures for estimating the relative camera pose in multi-view scenarios. The network architectures are fed by a pair of simultaneously acquired images; hence, in order to improve the accuracy of the solutions, and due to the lack of large datasets with pairs of overlapped images, a domain adaptation strategy is proposed. The domain adaptation strategy consists of transferring the knowledge learned from synthetic images to real-world scenarios. For this, the networks are first trained using pairs of synthetic images, which are captured at the same time by a pair of cameras in a virtual environment; then, the learned weights of the networks are transferred to the real-world case, where the networks are retrained with a few real images. Different virtual 3D scenarios are generated to evaluate the relationship between the accuracy of the results and the similarity between virtual and real scenarios: similarity in both the geometry of the objects contained in the scene and the relative pose between camera and objects in the scene. Experimental results and comparisons are provided, showing that the accuracy of all the evaluated networks for estimating the camera pose improves when the proposed domain adaptation strategy is used, highlighting the importance of the similarity between virtual and real scenarios.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 147  
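The quantity the networks above regress, the relative camera pose, has a closed-form definition when the absolute world-to-camera poses are known; the learning problem exists precisely because those absolutes are unavailable from images alone. For reference, the geometric relation, sketched with rotation matrices as plain nested lists:

```python
def relative_pose(R1, t1, R2, t2):
    """Relative pose taking camera-1 coordinates to camera-2 coordinates,
    given world-to-camera poses x_ci = Ri * x_w + ti:
        R_rel = R2 * R1^T,   t_rel = t2 - R_rel * t1."""
    Rt1 = [[R1[j][i] for j in range(3)] for i in range(3)]      # R1 transposed
    R_rel = [[sum(R2[i][k] * Rt1[k][j] for k in range(3))       # R2 * R1^T
              for j in range(3)] for i in range(3)]
    Rt = [sum(R_rel[i][k] * t1[k] for k in range(3)) for i in range(3)]
    t_rel = [t2[i] - Rt[i] for i in range(3)]
    return R_rel, t_rel
```

In the virtual scenarios of the paper this ground truth is available by construction, which is what makes synthetic pre-training cheap before the few-shot retraining on real images.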