|
Xavier Soria, & Angel D. Sappa. (2018). Improving Edge Detection in RGB Images by Adding NIR Channel. In 14th IEEE International Conference on Signal-Image Technology & Internet-Based Systems (SITIS 2018).
|
|
|
Xavier Soria, Angel D. Sappa, & Riad Hammoud. (2018). Wide-Band Color Imagery Restoration for RGB-NIR Single Sensor Image. Sensors, 18(7), 2059.
Abstract: Multi-spectral RGB-NIR sensors have become ubiquitous in recent years. These sensors allow the visible and near-infrared spectral bands of a given scene to be captured at the same time. With such cameras, the acquired imagery has a compromised RGB color representation due to the near-infrared bands (700–1100 nm) cross-talking with the visible bands (400–700 nm). This paper proposes two deep learning-based architectures to recover the full RGB color images, thus removing the NIR information from the visible bands. The proposed approaches directly restore the high-resolution RGB image by means of convolutional neural networks. They are evaluated with several outdoor images; both architectures reach a similar performance when evaluated in different scenarios and using different similarity metrics, and both improve on state-of-the-art approaches.
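The paper's two architectures are not reproduced here; as a loose illustration of the restoration setup only, the sketch below assumes PyTorch, a toy residual CNN, and hypothetical tensor sizes, and simply maps a NIR-contaminated RGB image to a restored RGB image.

```python
# Minimal sketch (not the paper's architecture): a small residual CNN that
# maps a NIR-contaminated RGB image to a restored RGB image.
import torch
import torch.nn as nn

class RestorationCNN(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a correction and add it to the input (residual learning),
        # so the network only has to remove the NIR cross-talk component.
        return x + self.body(x)

model = RestorationCNN()
contaminated = torch.rand(1, 3, 128, 128)   # toy NIR-contaminated RGB image
restored = model(contaminated)
print(restored.shape)                        # torch.Size([1, 3, 128, 128])
```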
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Cross-spectral image dehaze through a dense stacked conditional GAN based approach. In 14th IEEE International Conference on Signal-Image Technology & Internet-Based Systems (SITIS 2018).
Abstract: This paper proposes a novel approach to remove haze from RGB images using near-infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). Besides the hazy image, the implemented deep network receives the corresponding image in the near-infrared spectrum, which serves to accelerate the learning of image detail. The model uses a triplet layer that allows each channel of the visible-spectrum image to be learned independently, so the haze is removed from each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results show that the proposed method effectively removes the haze from the images. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
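The dense stacked CGAN itself is not reproduced here; the sketch below is only a hedged illustration of the per-channel ("triplet") idea, assuming PyTorch and toy generators, with the discriminator and the multiple loss scheme omitted.

```python
# Illustrative sketch (not the paper's dense stacked CGAN): each RGB channel
# is dehazed by its own small generator, conditioned on the shared NIR image,
# mirroring the per-channel "triplet" idea.
import torch
import torch.nn as nn

def channel_generator(features=32):
    # Input: one hazy color channel stacked with the NIR image (2 channels).
    return nn.Sequential(
        nn.Conv2d(2, features, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(features, 1, 3, padding=1), nn.Sigmoid(),
    )

class PerChannelDehazer(nn.Module):
    def __init__(self):
        super().__init__()
        self.generators = nn.ModuleList([channel_generator() for _ in range(3)])

    def forward(self, hazy_rgb, nir):
        outputs = []
        for c, gen in enumerate(self.generators):
            pair = torch.cat([hazy_rgb[:, c:c + 1], nir], dim=1)
            outputs.append(gen(pair))
        return torch.cat(outputs, dim=1)

model = PerChannelDehazer()
dehazed = model(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
print(dehazed.shape)  # torch.Size([1, 3, 64, 64])
```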
|
|
|
Dennis G. Romero, A. F. Neto, T. F. Bastos, & Boris X. Vintimilla. (2012). RWE patterns extraction for on-line human action recognition through window-based analysis of invariant moments. In 5th Workshop in Applied Robotics and Automation (RoboControl).
Abstract: This paper presents a method for on-line human action recognition in video sequences. An analysis based on the Mahalanobis distance is performed to identify the “idle” state, which defines the beginning and end of the person's movement, for posterior pattern extraction based on Relative Wavelet Energy from sequences of invariant moments.
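As a rough illustration of the idle-state test described above, the following sketch assumes NumPy/OpenCV, Hu invariant moments as per-frame features, and an arbitrary threshold; it is not the paper's exact pipeline.

```python
# Minimal sketch of idle-state detection with the Mahalanobis distance
# (assumptions: NumPy/OpenCV, Hu moments as frame features, an illustrative
# threshold; not the paper's exact pipeline).
import numpy as np
import cv2

def hu_features(gray_frame):
    # 7 Hu invariant moments summarising one grayscale frame.
    return cv2.HuMoments(cv2.moments(gray_frame)).ravel()

def fit_idle_model(idle_frames):
    # Estimate mean and covariance of the features observed while "idle".
    feats = np.array([hu_features(f) for f in idle_frames])
    mean = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-9 * np.eye(feats.shape[1])
    return mean, np.linalg.inv(cov)

def is_idle(frame, mean, inv_cov, threshold=3.0):
    d = hu_features(frame) - mean
    mahalanobis = float(np.sqrt(d @ inv_cov @ d))
    return mahalanobis < threshold  # small distance -> still in the idle state
```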
|
|
|
Marta Díaz, Dennys Paillacho, & Cecilio Angulo. (2015). Evaluating Group-Robot Interaction in Crowded Public Spaces: A Week-Long Exploratory Study in the Wild with a Humanoid Robot Guiding Visitors Through a Science Museum. International Journal of Humanoid Robotics, 12.
Abstract: This paper describes an exploratory study on group interaction with a robot guide in an open, large-scale, busy environment. For an entire week a humanoid robot was deployed in the popular CosmoCaixa Science Museum in Barcelona and guided hundreds of people through the museum facilities. The main goal of this experience is to study, in the wild, the episodes of the robot guiding visitors to a requested destination, focusing on the group behavior during displacement. The follow-me walking behavior and the face-to-face communication in a populated environment are analyzed in terms of guide-visitors interaction, grouping patterns and spatial formations. Results from observational data show that the space configurations spontaneously formed by the robot guide and visitors walking together did not always meet the robot's communicative and navigational requirements for successful guidance. Therefore, additional verbal and nonverbal prompts must be considered to regulate effectively the walking-together and follow-me behaviors. Finally, we discuss lessons learned and recommendations for the robot's spatial behavior in dense, crowded scenarios.
|
|
|
Mildred Cruz, Cristhian A. Aguilera, Boris X. Vintimilla, Ricardo Toledo, & Angel D. Sappa. (2015). Cross-spectral image registration and fusion: an evaluation study. In 2nd International Conference on Machine Vision and Machine Learning (Vol. 331). Barcelona, Spain: Computer Vision Center.
Abstract: This paper presents a preliminary study on the registration and fusion of cross-spectral images. The objective is to evaluate the validity of widely used computer vision approaches when they are applied to different spectral bands. In particular, we are interested in merging images from the infrared (both long-wave infrared: LWIR, and near-infrared: NIR) and visible spectrum (VS). Experimental results with different data sets are presented.
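As a concrete example of the kind of widely used baseline such an evaluation covers, the sketch below registers a NIR image to a visible one with ORB features and a RANSAC homography, then fuses the aligned pair by averaging; it assumes OpenCV and grayscale inputs and is not the paper's evaluation code.

```python
# Illustrative baseline only (not the paper's evaluation code): register a NIR
# image onto a visible one with ORB features + a RANSAC homography, then fuse
# the aligned pair by simple averaging. Assumes OpenCV and grayscale inputs.
import cv2
import numpy as np

def register_and_fuse(visible, nir):
    orb = cv2.ORB_create(2000)
    kp_v, des_v = orb.detectAndCompute(visible, None)
    kp_n, des_n = orb.detectAndCompute(nir, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_n, des_v), key=lambda m: m.distance)

    src = np.float32([kp_n[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_v[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = visible.shape[:2]
    nir_aligned = cv2.warpPerspective(nir, H, (w, h))
    return cv2.addWeighted(visible, 0.5, nir_aligned, 0.5, 0)
```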
|
|
|
Miguel Realpe, Boris X. Vintimilla, & L. Vlacic. (2015). Towards Fault Tolerant Perception for autonomous vehicles: Local Fusion. In IEEE 7th International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Siem Reap, 2015 (pp. 253–258).
Abstract: Many robust sensor fusion strategies have been developed in order to reliably detect the surrounding environment of an autonomous vehicle. However, in real situations there is always the possibility that sensors or other components may fail. Thus, internal modules and sensors need to be monitored to ensure their proper function. This paper introduces a general view of a perception architecture designed to detect and classify obstacles in an autonomous vehicle's environment using a fault tolerant framework, and elaborates on the object detection and local fusion modules proposed to achieve the modularity and real-time processing required by the system.
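The fault tolerant framework itself is not reproduced here; the toy sketch below only illustrates a possible local fusion step, merging overlapping bounding-box detections from two sensors so that objects reported by a single sensor can be flagged for monitoring (the Detection class, IoU threshold, and fusion rule are illustrative assumptions).

```python
# Toy illustration of a local fusion step (not the paper's framework): merge
# overlapping bounding-box detections from different sensors, keeping track of
# which sensors contributed so a silent or faulty sensor can be flagged upstream.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str
    box: tuple          # (x1, y1, x2, y2)
    confidence: float

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter) if inter else 0.0

def fuse(detections, iou_threshold=0.5):
    fused = []
    for det in sorted(detections, key=lambda d: -d.confidence):
        for group in fused:
            if iou(group["box"], det.box) >= iou_threshold:
                group["sensors"].add(det.sensor)
                break
        else:
            fused.append({"box": det.box, "sensors": {det.sensor}})
    return fused  # objects seen by only one sensor may indicate a fault
```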
|
|
|
Dennys Paillacho, Cecilio Angulo, & Marta Díaz. (2015). An Exploratory Study of Group-Robot Social Interactions in a Cultural Center. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015), Hamburg, Germany, 2015.
Abstract: This article describes an exploratory study of social human-robot interaction with the experimental robotic platform MASHI. The experiences were carried out in La Bòbila Cultural Center in Barcelona, Spain to study the visitors' preferences and to characterize the groups and their spatial relationships in this open and unstructured environment. Results showed that visitors prefer to play and dialogue with the robot. Children showed the highest interest in interacting with the robot, more than young and adult visitors. Most of the groups consisted of more than 3 visitors; however, the size of the groups changed continuously during interactions. In static situations, the observed spatial relationships denote social cohesion in the human-robot interactions.
|
|
|
Miguel Oliveira, Vítor Santos, Angel D. Sappa, & Paulo Dias. (2015). Scene representations for autonomous driving: an approach based on polygonal primitives. In Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 (Vol. 417, pp. 503–515). Springer International Publishing, 2016.
Abstract: In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.
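The paper's pipeline is not given here; as a minimal sketch of the underlying idea, assuming NumPy/SciPy, a single RANSAC plane fit over a point cloud, and the convex hull of the inliers as the macro-scale polygon:

```python
# Minimal sketch of extracting one polygonal primitive from a 3D point cloud
# (assumptions: NumPy/SciPy, a single RANSAC plane fit, convex hull of the
# inliers as the macro-scale polygon; not the paper's full pipeline).
import numpy as np
from scipy.spatial import ConvexHull

def ransac_plane(points, iterations=200, threshold=0.05):
    rng = np.random.default_rng(0)
    best = (np.array([0.0, 0.0, 1.0]), points[0], np.zeros(len(points), dtype=bool))
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(normal) < 1e-9:
            continue  # degenerate (collinear) sample
        normal = normal / np.linalg.norm(normal)
        inliers = np.abs((points - sample[0]) @ normal) < threshold
        if inliers.sum() > best[2].sum():
            best = (normal, sample[0], inliers)
    return best  # (plane normal, point on plane, inlier mask)

def polygon_from_plane(points, normal, origin, inliers):
    # Build an in-plane 2D basis, project the inliers, and take their convex
    # hull as the polygonal primitive.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-9:          # plane is (near) horizontal
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(normal, u)
    uv = (points[inliers] - origin) @ np.stack([u, v], axis=1)
    hull = ConvexHull(uv)
    return uv[hull.vertices]              # polygon vertices in plane coordinates
```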
|
|
|
Cristhian A. Aguilera, Francisco J. Aguilera, Angel D. Sappa, & Ricardo Toledo. (2016). Learning cross-spectral similarity measures with deep convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
Abstract: The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of cross-spectral approaches is to take advantage of the strengths of each spectral band providing a richer representation of a scene, which cannot be obtained with just images from one spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
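The paper's three architectures are not reproduced here; the sketch below shows one common variant of this idea, a 2-channel network that stacks a visible and a near-infrared patch and scores their similarity, assuming PyTorch and 64x64 grayscale patches.

```python
# Rough sketch of a 2-channel cross-spectral similarity network (assumptions:
# PyTorch, 64x64 grayscale patches; one common variant of the idea, not the
# paper's exact architectures).
import torch
import torch.nn as nn

class TwoChannelNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.score = nn.Linear(128, 1)   # higher output = more similar pair

    def forward(self, vis_patch, nir_patch):
        pair = torch.cat([vis_patch, nir_patch], dim=1)  # stack as 2 channels
        return self.score(self.features(pair).flatten(1))

net = TwoChannelNet()
similarity = net(torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64))
print(similarity.shape)  # torch.Size([8, 1])
```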
|
|