Viñán-Ludeña, M.S., de Campos, L.M., Jacome Galarza, R., & Sinche Freire, J. (2020). Social media influence: a comprehensive review in general and in tourism domain. Smart Innovation, Systems and Technologies, Vol. 171, pp. 25–35.
|
Santos, V., Sappa, A.D., Oliveira, M., & de la Escalera, A. (2021). Editorial: Special Issue on Autonomous Driving and Driver Assistance Systems – Some Main Trends. Robotics and Autonomous Systems, Vol. 144, Article 103832.
|
Soria, X., Sappa, A.D., Humanante, P., & Akbarinia, A. (2023). Dense extreme inception network for edge detection. Pattern Recognition, Vol. 139.
|
Santos, V., Sappa, A.D., Oliveira, M., & de la Escalera, A. (2019). Special Issue on Autonomous Driving and Driver Assistance Systems. Robotics and Autonomous Systems, Vol. 121.
|
Charco, J.L., Sappa, A.D., Vintimilla, B.X., & Velesaca, H.O. (2021). Camera pose estimation in multi-view environments: from virtual scenarios to the real world. Image and Vision Computing, Vol. 110, Article 104182.
Abstract: This paper presents a domain adaptation strategy to efficiently train network architectures for estimating the relative camera pose in multi-view scenarios. The network architectures are fed with pairs of simultaneously acquired images; since large datasets with pairs of overlapped images are lacking, a domain adaptation strategy is proposed to improve the accuracy of the solutions. The strategy consists of transferring the knowledge learned from synthetic images to real-world scenarios. The networks are first trained using pairs of synthetic images, captured at the same time by a pair of cameras in a virtual environment; the learned weights are then transferred to the real-world case, where the networks are retrained with a few real images. Different virtual 3D scenarios are generated to evaluate the relationship between the accuracy of the results and the similarity between virtual and real scenarios: similarity in both the geometry of the objects in the scene and the relative pose between camera and objects. Experimental results and comparisons show that the accuracy of all the evaluated networks for estimating the camera pose improves when the proposed domain adaptation strategy is used, highlighting the importance of the similarity between virtual and real scenarios.
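The pretrain-on-synthetic, fine-tune-on-few-real-samples idea from this abstract can be sketched with a minimal linear-regression stand-in; the data, model size, and learning rate below are illustrative assumptions, not the paper's network architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Synthetic domain": plentiful labeled samples from a virtual environment.
W_syn = rng.normal(size=(3, 4))               # stand-in pose-regression target
X_syn = rng.normal(size=(1000, 4))
Y_syn = X_syn @ W_syn.T

# "Real domain": same task, slightly shifted geometry, very few samples.
W_real = W_syn + 0.1 * rng.normal(size=(3, 4))
X_real = rng.normal(size=(8, 4))
Y_real = X_real @ W_real.T

# Step 1: pretrain on the synthetic data (closed-form least squares).
W = np.linalg.lstsq(X_syn, Y_syn, rcond=None)[0].T

def real_loss(W):
    return float(np.mean((X_real @ W.T - Y_real) ** 2))

loss_before = real_loss(W)

# Step 2: transfer the pretrained weights and fine-tune with a few
# gradient steps on the small real-world set.
lr = 0.01
for _ in range(200):
    grad = 2 * (X_real @ W.T - Y_real).T @ X_real / len(X_real)
    W -= lr * grad

loss_after = real_loss(W)
print(loss_before, loss_after)  # fine-tuning reduces the real-domain error
```

The point of the sketch is only the two-step schedule: the pretrained weights start close to the real-domain optimum, so a handful of real samples suffices to close most of the gap.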
|
Santos, V., Sappa, A.D., & Oliveira, M. (2017). Special Issue on Autonomous Driving and Driver Assistance Systems. Robotics and Autonomous Systems, Vol. 91, pp. 208–209.
|
Sappa, A.D., Aguilera, C.A., Carvajal Ayala, J.A., Oliveira, M., Romero, D., Vintimilla, B.X., et al. (2016). Monocular visual odometry: a cross-spectral image fusion based approach. Robotics and Autonomous Systems, Vol. 86, pp. 26–36.
Abstract: This manuscript evaluates the use of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using datasets obtained with two different platforms are presented. Additionally, comparisons with a previous approach, as well as with the monocular visible/infrared spectra, are provided, showing the advantages of the proposed scheme.
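The wavelet-fusion step can be sketched with a hand-rolled one-level 2D Haar transform; the fusion rule (average approximation band, keep max-magnitude details) is a common choice assumed here for illustration, and the paper's mutual-information parameter selection is omitted:

```python
import numpy as np

def haar_2d(img):
    """One-level 2D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # rows: averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # rows: differences
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

def ihaar_2d(ll, lh, hl, hh):
    """Exact inverse of haar_2d (image sides must be even)."""
    h, w = ll.shape
    a = np.empty((2 * h, w)); d = np.empty((2 * h, w))
    a[0::2, :], a[1::2, :] = ll + lh, ll - lh
    d[0::2, :], d[1::2, :] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[:, 0::2], img[:, 1::2] = a + d, a - d
    return img

def fuse(img_a, img_b):
    """Fuse two registered images: average the approximation band,
    keep the stronger (max-abs) detail coefficient at each position."""
    A, B = haar_2d(img_a), haar_2d(img_b)
    ll = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(A[1:], B[1:])]
    return ihaar_2d(ll, *details)
```

A sanity check on the scheme: fusing an image with itself reconstructs it exactly, since every fused coefficient then equals the original one.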
|
Oliveira, M., Santos, V., Sappa, A.D., Dias, P., & Moreira, A.P. (2016). Incremental Texture Mapping for Autonomous Driving. Robotics and Autonomous Systems, Vol. 84, pp. 113–128.
Abstract: Autonomous vehicles have a large number of on-board sensors, not only to provide coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision-based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures.
|
Oliveira, M., Santos, V., Sappa, A.D., Dias, P., & Moreira, A.P. (2016). Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives. Robotics and Autonomous Systems, Vol. 83, pp. 312–325.
Abstract: When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro-scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large-scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
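Macro-scale polygonal primitives rest on fitting planes to clusters of 3D range points. A minimal least-squares plane fit via SVD illustrates that core step; the sample plane and noise level are illustrative assumptions, not the paper's update mechanism:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point cloud.
    Returns (centroid, unit normal): the plane passes through the
    centroid, and the normal is the direction of least variance."""
    centroid = points.mean(axis=0)
    # SVD of the centered cloud: last right-singular vector = normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Example: noisy range samples of the plane z = 0.5x - 0.2y + 1.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 1 + 0.01 * rng.normal(size=200)
pts = np.column_stack([xy, z])

c, n = fit_plane(pts)
# The true (unnormalized) normal of z = 0.5x - 0.2y + 1 is (-0.5, 0.2, 1).
true_n = np.array([-0.5, 0.2, 1.0])
true_n /= np.linalg.norm(true_n)
print(abs(n @ true_n))  # close to 1: recovered normal aligns with the truth
```

In an incremental setting, each fitted plane would then be clipped to the polygon supported by its inlier points and refined as new range data arrives.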
|
Velesaca, H.O., Bastidas, G., Rouhani, M., & Sappa, A.D. (2024). Multimodal image registration techniques: a comprehensive survey. Multimedia Tools and Applications, Vol. 83, pp. 63919–63947.
|