Rafael E. Rivadeneira, Angel D. Sappa, & Boris X. Vintimilla. (2020). Thermal Image Super-Resolution: a Novel Architecture and Dataset. In The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 (Vol. 4, pp. 111–119).
Abstract: This paper proposes a novel CycleGAN architecture for thermal image super-resolution, together with a large dataset of thermal images at different resolutions. The dataset has been acquired using three thermal cameras of different resolutions, which capture images of the same scenario at the same time. The cameras are mounted on a rig, minimizing the baseline distance between them to ease the registration problem. The proposed architecture uses ResNet6 as the generator and PatchGAN as the discriminator. The novel unsupervised super-resolution training (CycleGAN) is made possible by the aforementioned thermal images, i.e., images of the same scenario at different resolutions. The proposed approach is evaluated on the dataset and compared with classical bicubic interpolation. The dataset and the network are available.
|
Jorge L. Charco, Angel D. Sappa, Boris X. Vintimilla, & Henry O. Velesaca. (2020). Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem. In The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 (Vol. 4, pp. 498–505).
Abstract: This paper presents a novel Siamese network architecture, a variant of ResNet-50, to estimate the relative camera pose in multi-view environments. In order to improve the performance of the proposed model, a transfer learning strategy based on synthetic images obtained from a virtual world is considered. The transfer learning consists of first training the network using pairs of images from the virtual-world scenario under different conditions (i.e., weather, illumination, objects, buildings, etc.); then, the learned weights of the network are transferred to the real case, where images from real-world scenarios are considered. Experimental results and comparisons with the state of the art show both improvements in relative pose estimation accuracy using the proposed model, and further improvements when the transfer learning strategy (synthetic-world data – transfer learning – real-world data) is used to tackle the training limitation caused by the reduced number of pairs of real images in most public datasets.
|
Ma. Paz Velarde, Erika Perugachi, Dennis G. Romero, Ángel D. Sappa, & Boris X. Vintimilla. (2015). Análisis del movimiento de las extremidades superiores aplicado a la rehabilitación física de una persona usando técnicas de visión artificial. Revista Tecnológica ESPOL-RTE, Vol. 28, pp. 1–7.
Abstract: During physical rehabilitation, the diagnosis given by the specialist is commonly based on qualitative observations that suggest, in some cases, subjective conclusions. This work proposes a quantitative approach, intended to assist physiotherapists, through an interactive, low-cost tool that measures upper-limb movements. These movements are captured by an RGB-D sensor and processed with the proposed methodology, resulting in an efficient representation of movements that enables the quantitative evaluation of upper-limb motion.
|
Dennis G. Romero, A. F. Neto, T. F. Bastos, & Boris X. Vintimilla. (2012). An approach to automatic assistance in physiotherapy based on on-line movement identification. In 2012 VI Andean Region International Conference (ANDESCON). IEEE.
Abstract: This paper describes a method for on-line movement identification, oriented to the evaluation of a patient's movement during physiotherapy. An analysis based on the Mahalanobis distance between temporal windows is performed to identify the "idle/motion" state, which defines the beginning and end of the patient's movement, for subsequent pattern extraction based on Relative Wavelet Energy from sequences of invariant moments.
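As an illustration only (not the paper's implementation), the idle/motion detection step described above can be sketched as thresholding the Mahalanobis distance between a sliding temporal window of feature vectors and a reference "idle" distribution; window size, threshold, and feature dimensionality below are assumptions:

```python
import numpy as np

def mahalanobis(window, reference):
    """Mahalanobis distance between a temporal window's mean feature
    vector and a reference ("idle") feature distribution."""
    mu = reference.mean(axis=0)
    # Regularize the covariance so it stays invertible for degenerate data.
    cov = np.cov(reference, rowvar=False) + 1e-6 * np.eye(reference.shape[1])
    diff = window.mean(axis=0) - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def label_states(frames, reference, win=10, thresh=3.0):
    """Label each non-overlapping temporal window as 'idle' or 'motion'
    by thresholding its distance to the idle distribution."""
    labels = []
    for start in range(0, len(frames) - win + 1, win):
        d = mahalanobis(frames[start:start + win], reference)
        labels.append("motion" if d > thresh else "idle")
    return labels
```

The first "motion" label would mark the beginning of the movement segment passed on to the Relative Wavelet Energy pattern-extraction stage.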
|
Mildred Cruz, Cristhian A. Aguilera, Boris X. Vintimilla, Ricardo Toledo, & Ángel D. Sappa. (2015). Cross-spectral image registration and fusion: an evaluation study. In 2nd International Conference on Machine Vision and Machine Learning (Vol. 331). Barcelona, Spain: Computer Vision Center.
Abstract: This paper presents a preliminary study on the registration and fusion of cross-spectral images. The objective is to evaluate the validity of widely used computer vision approaches when they are applied to different spectral bands. In particular, we are interested in merging images from the infrared (both long-wave infrared: LWIR and near infrared: NIR) and visible spectrum (VS). Experimental results with different datasets are presented.
|
Jorge L. Charco, Angel D. Sappa, & Boris X. Vintimilla. (2022). Human Pose Estimation through a Novel Multi-View Scheme. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) (Vol. 5, pp. 855–862).
Abstract: This paper presents a multi-view scheme to tackle the challenging problem of self-occlusion in human pose estimation. The proposed approach first obtains the human body joints from a set of images captured from different views at the same time. Then, it enhances the obtained joints using a multi-view scheme: the joints from a given view are used to enhance poorly estimated joints from another view, especially in self-occlusion cases. A network architecture initially proposed for the monocular case is adapted for the proposed multi-view scheme. Experimental results and comparisons with state-of-the-art approaches on the Human3.6M dataset are presented, showing improvements in the accuracy of body joint estimation.
|
Julien Poujol, Cristhian A. Aguilera, Etienne Danos, Boris X. Vintimilla, Ricardo Toledo, & Angel D. Sappa. (2015). A Visible-Thermal Fusion Based Monocular Visual Odometry. In Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 (Vol. 417, pp. 517–528).
Abstract: The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective of this evaluation is to analyze whether classical approaches can be improved when the given images, which come from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In the current work, two different image fusion strategies are considered. Firstly, images from the visible and thermal spectra are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both the monocular visible spectrum and the monocular infrared spectrum are also provided, showing the validity of the proposed approach.
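As a minimal sketch (not the paper's code), the DWT fusion strategy mentioned above can be illustrated with a one-level Haar decomposition: average the low-frequency approximation bands of the visible and thermal images and keep the larger-magnitude detail coefficients. The wavelet choice and fusion rule here are common defaults, assumed for illustration:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition: approximation + 3 detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d for even-sized images."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def fuse(visible, thermal):
    """Fuse two registered images: average the approximation bands and
    keep the max-magnitude detail coefficients (a common DWT fusion rule)."""
    bands_v, bands_t = haar2d(visible), haar2d(thermal)
    ll = (bands_v[0] + bands_t[0]) / 2.0
    details = [np.where(np.abs(v) >= np.abs(t), v, t)
               for v, t in zip(bands_v[1:], bands_t[1:])]
    return ihaar2d(ll, *details)
```

The fused image would then be fed to the visual odometry pipeline in place of either single-spectrum input.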
|
Angel D. Sappa, Cristhian A. Aguilera, Juan A. Carvajal Ayala, Miguel Oliveira, Dennis Romero, Boris X. Vintimilla, et al. (2016). Monocular visual odometry: a cross-spectral image fusion based approach. Robotics and Autonomous Systems Journal, Vol. 86, pp. 26–36.
Abstract: This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using datasets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with the monocular visible/infrared spectra are also provided, showing the advantages of the proposed scheme.
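For illustration only (the paper's exact metric may differ), a mutual information based evaluation of a fusion setup can be sketched with histogram estimates: a fused image that shares more information with both source spectra scores higher. Bin count and the sum-of-MI scoring rule are assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images (in nats)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # skip empty cells (0 * log 0 = 0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def fusion_score(fused, visible, infrared):
    """Score a fused image by how much information it retains from
    both of its source images; higher is better."""
    return mutual_information(fused, visible) + mutual_information(fused, infrared)
```

Candidate DWT setups could then be ranked by `fusion_score`, and the highest-scoring one selected for the odometry stage.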
|
Angel D. Sappa, Juan A. Carvajal, Cristhian A. Aguilera, Miguel Oliveira, Dennis G. Romero, & Boris X. Vintimilla. (2016). Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study. Sensors Journal, Vol. 16, pp. 1–15.
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long-Wave InfraRed (LWIR).
|
Dennis G. Romero, A. F. Neto, T. F. Bastos, & Boris X. Vintimilla. (2012). RWE patterns extraction for on-line human action recognition through window-based analysis of invariant moments. In 5th Workshop in applied Robotics and Automation (RoboControl).
Abstract: This paper presents a method for on-line human action recognition in video sequences. An analysis based on the Mahalanobis distance is performed to identify the "idle" state, which defines the beginning and end of the person's movement, for subsequent pattern extraction based on Relative Wavelet Energy from sequences of invariant moments.
|