Henry O. Velesaca, S. A., Patricia L. Suarez, Ángel Sanchez & Angel D. Sappa. (2020). Off-the-Shelf Based System for Urban Environment Video Analytics. In The 27th International Conference on Systems, Signals and Image Processing (IWSSIP 2020) (Vol. 2020-July, pp. 459–464).
Abstract: This paper presents the design and implementation details of a system built using off-the-shelf algorithms for urban video analytics. The system allows the connection to public video surveillance camera networks to obtain the necessary information to generate statistics from urban scenarios (e.g., number of vehicles, type of vehicles, direction of travel, number of people). The obtained information could be used not only for traffic management but also to estimate the carbon footprint of urban scenarios. As a case study, a university campus is selected to evaluate the performance of the proposed system. The system is implemented in a modular way so that it can be used as a testbed to evaluate different algorithms. Implementation results are provided, showing the validity and utility of the proposed approach.
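Since the system is described as a modular composition of off-the-shelf components, the following sketch illustrates one possible per-frame counting module built from a pretrained torchvision detector and OpenCV. The detector choice, class list, score threshold, and camera URL are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch of an off-the-shelf per-frame counting module (assumes torchvision >= 0.13).
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

COCO_IDS = {1: "person", 3: "car", 6: "bus", 8: "truck"}  # classes of interest

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def count_objects(frame_bgr, score_thr=0.6):
    """Return {label: count} for one video frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]
    counts = {}
    for label, score in zip(out["labels"].tolist(), out["scores"].tolist()):
        name = COCO_IDS.get(label)
        if name is not None and score >= score_thr:
            counts[name] = counts.get(name, 0) + 1
    return counts

cap = cv2.VideoCapture("rtsp://camera/stream")  # hypothetical surveillance camera URL
ok, frame = cap.read()
if ok:
    print(count_objects(frame))
```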
|
Henry O. Velesaca, Raul A. Mira, Patricia L. Suarez, Christian X. Larrea, & Angel D. Sappa. (2020). Deep Learning based Corn Kernel Classification. In The 1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) (Vol. 2020-June, pp. 294–302).
Abstract: This paper presents a full pipeline to classify sample sets of corn kernels. The proposed approach follows a segmentation-classification scheme. The image segmentation is performed through a well-known deep learning based approach, the Mask R-CNN architecture, while the classification is performed by means of a novel lightweight network specially designed for this task; good corn kernel, defective corn kernel, and impurity categories are considered. As a second contribution, a carefully annotated multi-touching corn kernel dataset has been generated. This dataset has been used for training the segmentation and the classification modules. Quantitative evaluations have been performed and comparisons with other approaches are provided, showing improvements with the proposed pipeline.
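A minimal sketch of the segmentation-then-classification scheme described in the abstract is given below; the off-the-shelf Mask R-CNN comes from torchvision, while the three-class lightweight classifier is a generic placeholder rather than the authors' network.

```python
# Hedged sketch of a segmentation-classification pipeline (assumes torchvision >= 0.13).
import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

segmenter = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

classifier = nn.Sequential(                      # illustrative lightweight CNN, untrained here
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 3),              # good / defective / impurity
)

def classify_kernels(image, mask_thr=0.5, score_thr=0.7):
    """image: float tensor (3, H, W) in [0, 1]. Returns one class id per detected kernel."""
    with torch.no_grad():
        det = segmenter([image])[0]
    labels = []
    for mask, score in zip(det["masks"], det["scores"]):
        if score < score_thr:
            continue
        m = mask[0] > mask_thr                   # binary mask of one kernel instance
        ys, xs = torch.nonzero(m, as_tuple=True)
        if len(ys) == 0:
            continue
        crop = image[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        crop = nn.functional.interpolate(crop.unsqueeze(0), size=(64, 64))
        labels.append(classifier(crop).argmax(1).item())
    return labels
```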
|
Rafael E. Rivadeneira, Angel D. Sappa, Boris X. Vintimilla, Lin Guo, Jiankun Hou, Armin Mehri, et al. (2020). Thermal Image Super-Resolution Challenge – PBVS 2020. In The 16th IEEE Workshop on Perception Beyond the Visible Spectrum at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) (Vol. 2020-June, pp. 432–439).
Abstract: This paper summarizes the top contributions to the first challenge on thermal image super-resolution (TISR), which was organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2020 workshop. In this challenge, a novel thermal image dataset is considered together with state-of-the-art approaches evaluated under a common framework. The dataset used in the challenge consists of 1021 thermal images, obtained from three distinct thermal cameras at different resolutions (low-resolution, mid-resolution, and high-resolution), resulting in a total of 3063 thermal images. From each resolution, 951 images are used for training and 50 for testing, while the 20 remaining images are used for two proposed evaluations. The first evaluation consists of downsampling the low-resolution, mid-resolution, and high-resolution thermal images by x2, x3 and x4 respectively, and comparing their super-resolution results with the corresponding ground truth images. The second evaluation consists of obtaining the x2 super-resolution from a given mid-resolution thermal image and comparing it with the corresponding semi-registered high-resolution thermal image. Out of 51 registered participants, 6 teams reached the final validation phase.
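The first evaluation protocol (downsample by x2/x3/x4, super-resolve, and compare against ground truth) can be sketched as follows; the PSNR metric, bicubic baseline, and file name are assumptions used only for illustration.

```python
# Sketch of the downsample-and-compare evaluation; the SR model itself is left abstract.
import cv2
import numpy as np

def downsample(img, factor):
    h, w = img.shape[:2]
    return cv2.resize(img, (w // factor, h // factor), interpolation=cv2.INTER_CUBIC)

def psnr(gt, pred):
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

gt = cv2.imread("thermal_hr.png", cv2.IMREAD_GRAYSCALE)  # hypothetical ground-truth image
for factor in (2, 3, 4):
    lr = downsample(gt, factor)
    # A participant's model would super-resolve lr here; bicubic upsampling is
    # used only as the baseline reference.
    sr = cv2.resize(lr, (gt.shape[1], gt.shape[0]), interpolation=cv2.INTER_CUBIC)
    print(f"x{factor}: PSNR = {psnr(gt, sr):.2f} dB")
```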
|
Jorge L. Charco, Angel D. Sappa, Boris X. Vintimilla, & Henry O. Velesaca. (2020). Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem. In The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 (Vol. 4, pp. 498–505).
Abstract: This paper presents a novel Siamese network architecture, as a variant of ResNet-50, to estimate the relative camera pose in multi-view environments. In order to improve the performance of the proposed model, a transfer learning strategy based on synthetic images obtained from a virtual world is considered. The transfer learning consists of first training the network using pairs of images from the virtual-world scenario considering different conditions (i.e., weather, illumination, objects, buildings, etc.); then, the learned weights of the network are transferred to the real case, where images from real-world scenarios are considered. Experimental results and comparisons with the state of the art show both improvements in the relative pose estimation accuracy using the proposed model and further improvements when the transfer learning strategy (synthetic-world data – transfer learning – real-world data) is used to tackle the limitation on training imposed by the reduced number of real image pairs in most public datasets.
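A rough sketch of a Siamese relative-pose network in the spirit of the abstract is shown below; the shared ResNet-50 backbone follows the paper's description, while the head sizes and the 7-DoF translation-plus-quaternion output are assumptions.

```python
# Hedged sketch of a Siamese relative camera pose regressor (assumes torchvision >= 0.13).
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SiameseRelPose(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights="DEFAULT")
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
        self.head = nn.Sequential(
            nn.Linear(2 * 2048, 512), nn.ReLU(),
            nn.Linear(512, 7),  # 3 for translation, 4 for a rotation quaternion
        )

    def forward(self, img_a, img_b):
        fa = self.features(img_a).flatten(1)   # both branches share the same weights
        fb = self.features(img_b).flatten(1)
        return self.head(torch.cat([fa, fb], dim=1))

# Transfer learning idea: pretrain on synthetic image pairs, then fine-tune on real ones.
model = SiameseRelPose()
pose = model(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224))
print(pose.shape)  # torch.Size([1, 7])
```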
|
Rafael E. Rivadeneira, Angel D. Sappa, & Boris X. Vintimilla. (2020). Thermal Image Super-Resolution: a Novel Architecture and Dataset. In The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 (Vol. 4, pp. 111–119).
Abstract: This paper proposes a novel CycleGAN architecture for thermal image super-resolution, together with a large dataset consisting of thermal images at different resolutions. The dataset has been acquired using three thermal cameras at different resolutions, which acquire images from the same scenario at the same time. The thermal cameras are mounted on a rig, trying to minimize the baseline distance to ease the registration problem. The proposed architecture is based on ResNet6 as a generator and PatchGAN as a discriminator. The proposed unsupervised super-resolution training (CycleGAN) is possible due to the existence of the aforementioned thermal images, i.e., images of the same scenario at different resolutions. The proposed approach is evaluated on the dataset and compared with classical bicubic interpolation. The dataset and the network are available.
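The named building blocks (a ResNet-style generator with 6 residual blocks and a PatchGAN discriminator) can be sketched as follows; channel widths and layer counts follow the common CycleGAN reference implementation and are assumptions rather than the exact thermal-SR configuration.

```python
# Hedged sketch of a ResNet6 generator and a PatchGAN discriminator for single-channel images.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

def resnet6_generator(ch=64):
    layers = [nn.Conv2d(1, ch, 7, padding=3), nn.InstanceNorm2d(ch), nn.ReLU(True)]
    layers += [ResBlock(ch) for _ in range(6)]            # the "ResNet6" core
    layers += [nn.Conv2d(ch, 1, 7, padding=3), nn.Tanh()]
    return nn.Sequential(*layers)

def patchgan_discriminator(ch=64):
    return nn.Sequential(                                  # outputs a patch map, not a scalar
        nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
        nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.InstanceNorm2d(ch * 2),
        nn.LeakyReLU(0.2, True),
        nn.Conv2d(ch * 2, 1, 4, padding=1),
    )

x = torch.rand(1, 1, 128, 128)                             # stand-in single-channel thermal image
print(resnet6_generator()(x).shape, patchgan_discriminator()(x).shape)
```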
|
Michael Teutsch, Angel D. Sappa, & Riad I. Hammoud. (2021). Computer Vision in the Infrared Spectrum: Challenges and Approaches. Synthesis Lectures on Computer Vision, Vol. 10, No. 2, 138 pp.
|
Nayeth I. Solorzano, L. C. H., Leslie del R. Lima, Dennys F. Paillacho & Jonathan S. Paillacho. (2022). Visual Metrics for Educational Videogames Linked to Socially Assistive Robots in an Inclusive Education Framework. In Smart Innovation, Systems and Technologies. International Conference in Information Technology & Education (ICITED 21), July 15-17 (Vol. 256, pp. 119–132).
Abstract: In gamification, the development of "visual metrics for educational video games linked to social assistance robots in the framework of inclusive education" seeks to provide support not only to regular children but also to children with specific psychosocial disabilities, such as those diagnosed with autism spectrum disorder (ASD). However, personalizing each child's experience represents a limitation, especially for those with atypical behaviors. 'LOLY,' a social assistance robot, works together with mobile applications associated with the family of educational video game series called 'MIDI-AM,' forming a social robotic platform. This platform offers the user curricular digital content to reinforce the teaching-learning processes and motivate regular children and those with ASD. In the present study, technical and programmatic experiments and focus groups were carried out, using open-source facial recognition algorithms to monitor and evaluate the degree of user attention throughout the interaction. The objective is to evaluate the management of a social robot linked to educational video games through established metrics that allow monitoring of the user's facial expressions during use, and to define a scenario that ensures consistent results for applicability in therapies and in reinforcing the teaching process, mainly adaptable for inclusive early childhood education.
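As a loose illustration of the open-source facial monitoring mentioned above, the sketch below uses OpenCV Haar-cascade face detection as a crude attention proxy; it is a stand-in, not the metrics actually used in the study.

```python
# Illustrative attention-monitoring sketch built on open-source face detection (OpenCV).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frame_has_attentive_face(frame_bgr):
    """Crude attention proxy: a frontal face is visible in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

cap = cv2.VideoCapture(0)          # hypothetical webcam facing the child during the session
attentive, total = 0, 0
for _ in range(300):               # sample roughly 10 s at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    total += 1
    attentive += frame_has_attentive_face(frame)
print(f"attention ratio: {attentive / max(total, 1):.2f}")
```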
|
Viñán-Ludeña M.S., D. C. L. M., Roberto Jacome Galarza, & Sinche Freire, J. (2020). Social media influence: a comprehensive review in general and in tourism domain. Smart Innovation, Systems and Technologies, 171, 25–35.
|
Ricaurte P, Chilán C, Cristhian A. Aguilera, Boris X. Vintimilla, & Angel D. Sappa. (2014). Feature Point Descriptors: Infrared and Visible Spectra. Sensors Journal, Vol. 14, pp. 3690–3701.
Abstract: This manuscript evaluates the behavior of classical feature point descriptors when they are used in images from the long-wave infrared spectral band and compares them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results using a cross-spectral outdoor image dataset are presented and conclusions from these experiments are given.
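The kind of robustness test described above can be sketched as follows: detect and match ORB feature points between an image and a rotated copy, and report how many matches survive a distance test. ORB, the 30-degree rotation, and the distance threshold are illustrative choices, since the paper evaluates several classical descriptors under a broader set of transformations.

```python
# Hedged sketch of a descriptor robustness check under rotation, using OpenCV's ORB.
import cv2
import numpy as np

def rotate(img, angle_deg):
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, M, (w, h))

img = cv2.imread("lwir_scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical LWIR image
rot = rotate(img, 30)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img, None)
kp2, des2 = orb.detectAndCompute(rot, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
good = [m for m in matches if m.distance < 40]              # arbitrary distance threshold
print(f"{len(good)} / {len(matches)} matches below the distance threshold")
```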
|
Angel D. Sappa, Juan A. Carvajal, Cristhian A. Aguilera, Miguel Oliveira, Dennis G. Romero, & Boris X. Vintimilla. (2016). Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study. Sensors Journal, Vol. 16, pp. 1–15.
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most of the approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR).
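A minimal wavelet-fusion baseline in the spirit of the strategies compared above is sketched below, using PyWavelets: average the approximation bands and keep the maximum-magnitude detail coefficients. The 'haar' wavelet, the single decomposition level, and these fusion rules are assumptions, not the paper's best-performing setup.

```python
# Hedged sketch of one wavelet-based visible/infrared fusion strategy (PyWavelets).
import numpy as np
import pywt

def fuse_wavelet(visible, infrared, wavelet="haar"):
    """visible, infrared: float32 grayscale arrays of the same (registered) size."""
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visible, wavelet)
    cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(infrared, wavelet)
    fused_A = (cA_v + cA_i) / 2.0                               # average low frequencies
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)  # max-abs detail coefficients
    coeffs = (fused_A, (pick(cH_v, cH_i), pick(cV_v, cV_i), pick(cD_v, cD_i)))
    return pywt.idwt2(coeffs, wavelet)

vis = np.random.rand(256, 256).astype(np.float32)               # stand-ins for a registered pair
ir = np.random.rand(256, 256).astype(np.float32)
print(fuse_wavelet(vis, ir).shape)
```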
|