|
Miguel Oliveira, Vítor Santos, Angel D. Sappa, Paulo Dias, & A. Paulo Moreira. (2016). Incremental Texture Mapping for Autonomous Driving. Robotics and Autonomous Systems, Vol. 84, pp. 113–128.
Abstract: Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to build a single, unified representation that fuses the data provided by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision-based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad-quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine-quality textures.
|
|
|
Angel D. Sappa, Cristhian A. Aguilera, Juan A. Carvajal Ayala, Miguel Oliveira, Dennis Romero, Boris X. Vintimilla, et al. (2016). Monocular visual odometry: a cross-spectral image fusion based approach. Robotics and Autonomous Systems, Vol. 86, pp. 26–36.
Abstract: This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach, as well as with the monocular visible/infrared spectra, are also provided, showing the advantages of the proposed scheme.
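The general wavelet-fusion idea mentioned in this abstract can be sketched in a few lines. The following is a minimal NumPy illustration, assuming a one-level Haar transform and a simple max-absolute-value rule for the detail sub-bands; it is not the paper's DWT setup and omits the mutual-information-based parameter selection entirely.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) sub-bands.
    Assumes even image dimensions."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # approximation
    lh = (a + b - c - d) / 4.0  # horizontal detail
    hl = (a - b + c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_dwt(vis, ir):
    """Fuse two registered grayscale images: average the approximation
    (LL) bands, keep the detail coefficient with the larger magnitude."""
    bands_v = haar_dwt2(vis)
    bands_i = haar_dwt2(ir)
    ll = (bands_v[0] + bands_i[0]) / 2.0
    details = [np.where(np.abs(dv) >= np.abs(di), dv, di)
               for dv, di in zip(bands_v[1:], bands_i[1:])]
    return haar_idwt2(ll, *details)
```

The max-abs rule keeps the strongest edges from either spectrum; the papers discussed here instead search over decomposition depths and merging rules, scoring each setup quantitatively.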
|
|
|
Rafael E. Rivadeneira, Angel D. Sappa, Boris X. Vintimilla, & R. Hammoud. (2022). A Novel Domain Transfer-Based Approach for Unsupervised Thermal Image Super-Resolution. Sensors, Vol. 22(6).
|
|
|
Cristhian A. Aguilera, Cristóbal A. Navarro, & Angel D. Sappa. (2020). Fast CNN Stereo Depth Estimation through Embedded GPU Devices. Sensors, Vol. 20(11), pp. 1–13.
Abstract: Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphic processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
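To make the cost-volume terminology concrete, here is a minimal classical sketch in NumPy (sum of absolute differences aggregated over patches, with winner-takes-all selection). It is only an illustration of what the cost-volume stage computes; the models evaluated in the paper replace this hand-crafted aggregation with learned 3D convolutions or, as proposed, a U-Net-like network.

```python
import numpy as np

def box_sum(x, k):
    """Sum x over a k x k neighborhood (zero padding at the borders)."""
    p = k // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out

def sad_disparity(left, right, max_disp=16, patch=3):
    """Build a SAD cost volume over candidate disparities and take the
    per-pixel argmin (winner-takes-all). Inputs are rectified grayscale
    images of the same shape."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # left pixel (y, x) is compared against right pixel (y, x - d)
        diff = np.abs(left[:, d:] - right[:, :w - d])
        cost[d, :, d:] = box_sum(diff, patch)
    return cost.argmin(axis=0)
```

The `cost` array here is the cost volume: one matching score per pixel per disparity hypothesis. CNN stereo models keep this structure but learn both the matching scores and the aggregation.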
|
|
|
P. Ricaurte, C. Chilán, Cristhian A. Aguilera, Boris X. Vintimilla, & Angel D. Sappa. (2014). Feature Point Descriptors: Infrared and Visible Spectra. Sensors, Vol. 14, pp. 3690–3701.
Abstract: This manuscript evaluates the behavior of classical feature point descriptors when they are used in images from the long-wave infrared spectral band and compares them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results using a cross-spectral outdoor image data set are presented and conclusions from these experiments are given.
|
|
|
Angel D. Sappa, Juan A. Carvajal, Cristhian A. Aguilera, Miguel Oliveira, Dennis G. Romero, & Boris X. Vintimilla. (2016). Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study. Sensors, Vol. 16, pp. 1–15.
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long-Wave InfraRed (LWIR).
|
|
|
Nayeth I. Solorzano, L. C. H., Leslie del R. Lima, Dennys F. Paillacho, & Jonathan S. Paillacho. (2022). Visual Metrics for Educational Videogames Linked to Socially Assistive Robots in an Inclusive Education Framework. In Smart Innovation, Systems and Technologies. International Conference in Information Technology & Education (ICITED 21), July 15–17 (Vol. 256, pp. 119–132).
Abstract: In gamification, the development of "visual metrics for educational video games linked to social assistance robots in the framework of inclusive education" seeks to provide support, not only to regular children but also to children with specific psychosocial disabilities, such as those diagnosed with autism spectrum disorder (ASD). However, personalizing each child's experiences represents a limitation, especially for those with atypical behaviors. 'LOLY,' a social assistance robot, works together with mobile applications associated with the family of educational video game series called 'MIDI-AM,' forming a social robotic platform. This platform offers the user curricular digital content to reinforce the teaching-learning processes and motivate regular children and those with ASD. In the present study, technical, programmatic experiments and focus groups were carried out, using open-source facial recognition algorithms to monitor and evaluate the degree of user attention throughout the interaction. The objective is to evaluate the management of a social robot linked to educational video games through established metrics, which allow monitoring the user's facial expressions during its use, and to define a scenario that ensures consistency in the results for its applicability in therapies and reinforcement in the teaching process, mainly adaptable for inclusive early childhood education.
|
|
|
Michael Teutsch, Angel D. Sappa, & R. Hammoud. (2021). Computer Vision in the Infrared Spectrum: Challenges and Approaches. Synthesis Lectures on Computer Vision, Vol. 10, No. 2, 138 pp.
|
|
|
Jorge L. Charco, Angel D. Sappa, Boris X. Vintimilla, & Henry O. Velesaca. (2020). Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem. In The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27–29 February 2020 (Vol. 4, pp. 498–505).
Abstract: This paper presents a novel Siamese network architecture, a variant of ResNet-50, to estimate the relative camera pose in multi-view environments. In order to improve the performance of the proposed model, a transfer learning strategy based on synthetic images obtained from a virtual world is considered. The transfer learning consists of first training the network using pairs of images from the virtual-world scenario under different conditions (i.e., weather, illumination, objects, buildings, etc.); then, the learned weights of the network are transferred to the real case, where images from real-world scenarios are considered. Experimental results and comparisons with the state of the art show both improvements in relative pose estimation accuracy using the proposed model, and further improvements when the transfer learning strategy (synthetic-world data – transfer learning – real-world data) is used to tackle the training limitation caused by the reduced number of real-image pairs in most public data sets.
|
|
|
Rafael E. Rivadeneira, Angel D. Sappa, & Boris X. Vintimilla. (2020). Thermal Image Super-Resolution: a Novel Architecture and Dataset. In The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27–29 February 2020 (Vol. 4, pp. 111–119).
Abstract: This paper proposes a novel CycleGAN architecture for thermal image super-resolution, together with a large dataset consisting of thermal images at different resolutions. The dataset has been acquired using three thermal cameras at different resolutions, which capture the same scenario at the same time. The cameras are mounted on a rig that minimizes the baseline distance in order to ease the registration problem. The proposed architecture uses ResNet6 as the generator and PatchGAN as the discriminator. The proposed unsupervised super-resolution training (CycleGAN) is made possible by the aforementioned thermal images, i.e., images of the same scenario at different resolutions. The approach is evaluated on the dataset and compared with classical bicubic interpolation. The dataset and the network are available.
|
|