Patricia Suárez, H. V., Dario Carpio & Angel Sappa. (2023). Corn Kernel Classification From Few Training Samples. Artificial Intelligence in Agriculture, Vol. 9, pp. 89–99.
Dennis G. Romero, A. F. N., & Teodiano Freire B. (2014). Reconocimiento en-línea de acciones humanas basado en patrones de RWE aplicado en ventanas dinámicas de momentos invariantes [On-line recognition of human actions based on RWE patterns applied to dynamic windows of invariant moments]. Revista Iberoamericana de Automática e Informática Industrial, Vol. 11, pp. 202–211.
Daniela Rato, M. O., Victor Santos, Manuel Gomes & Angel Sappa. (2022). A Sensor-to-Pattern Calibration Framework for Multi-Modal Industrial Collaborative Cells. Journal of Manufacturing Systems, Vol. 64, pp. 497–507.
Xavier Soria, A. S., Patricio Humanante, & Arash Akbarinia. (2023). Dense extreme inception network for edge detection. Pattern Recognition, Vol. 139.
Patricia L. Suárez, A. D. S. and B. X. V. (2021). Deep learning-based vegetation index estimation. In Generative Adversarial Networks for Image-to-Image Translation (Chapter 9, pp. 205–232).
Velesaca, H. O., Suárez, P. L., Sappa, A. D., Carpio, D., Rivadeneira, R. E., & Sanchez, A. (2022). Review on Common Techniques for Urban Environment Video Analytics. In WORKSHOP BRASILEIRO DE CIDADES INTELIGENTES (WBCI 2022) (pp. 107–118).
Emmanuel Moran Barreiro & Boris Vintimilla. (2023). Towards a Robust Solution for the Supermarket Shelf Audit Problem: Obsolete Price Tags in Shelves. In 26th Iberoamerican Congress on Pattern Recognition, Lecture Notes in Computer Science (Vol. 14469, pp. 257–271).
Rafael E. Rivadeneira, A. D. S., Boris X. Vintimilla, Jin Kim, Dogun Kim et al. (2022). Thermal Image Super-Resolution Challenge Results - PBVS 2022. In Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24 (Vol. 2022-June, pp. 349–357).
Abstract: This paper presents results from the third Thermal Image
Super-Resolution (TISR) challenge organized in the Perception Beyond the Visible Spectrum (PBVS) 2022 workshop.
The challenge uses the same thermal image dataset as the
first two challenges, with 951 training images and 50 validation images at each resolution. A set of 20 images was
kept aside for testing. The evaluation tasks were to measure
the PSNR and SSIM between the SR image and the ground
truth (a noisy HR thermal image downsampled by four), and
also to measure the PSNR and SSIM between the SR image
and the semi-registered HR image (acquired with another
camera). The results outperformed those from last year’s
challenge, improving both evaluation metrics. This year,
almost 100 teams registered for the challenge,
showing the community’s interest in this hot topic.
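The two evaluation metrics named in the abstract can be sketched in a few lines of NumPy. The definitions below are the textbook PSNR and a single-window (global) SSIM; they are only an illustration of the metrics, not the organizers' evaluation code, which would typically apply SSIM over sliding windows.

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim(ref, img, max_val=255.0):
    """Global (single-window) SSIM; challenge pipelines usually use a
    sliding-window variant, so treat this as a simplified sketch."""
    x = ref.astype(np.float64)
    y = img.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For identical inputs PSNR is infinite and SSIM is 1, which makes both easy to sanity-check before using them on real SR outputs.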
Low S., I. N., Nina O., Sappa A. and Blasch E. (2022). Multi-modal Aerial View Object Classification Challenge Results - PBVS 2022. In Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24 (Vol. 2022-June, pp. 417–425).
Abstract: This paper details the results and main findings of the
second iteration of the Multi-modal Aerial View Object
Classification (MAVOC) challenge. The primary goal of
both MAVOC challenges is to inspire research into methods for building recognition models that utilize both synthetic aperture radar (SAR) and electro-optical (EO) input
modalities. Teams are encouraged to develop
multi-modal approaches that incorporate complementary
information from both domains. While the 2021 challenge
showed a proof of concept that both modalities could be
used together, the 2022 challenge focuses on more detailed
multi-modal models, using the same UNIfied COincident
Optical and Radar for recognitioN (UNICORN) dataset and
competition format as in 2021. Specifically, the
challenge focuses on two techniques: (1) SAR classification
and (2) SAR + EO classification. The bulk of this document is dedicated to discussing the top performing methods
and describing their performance on our blind test set. Notably, all of the top ten teams outperform our baseline. For
SAR classification, the top team showed a 129% improvement over our baseline and an 8% average improvement
over the 2021 winner. The top team for SAR + EO classification showed a 165% improvement over the baseline and a
32% average improvement over 2021.
Angel D. Sappa, Cristhian A. Aguilera, Juan A. Carvajal Ayala, Miguel Oliveira, Dennis Romero, Boris X. Vintimilla, et al. (2016). Monocular visual odometry: a cross-spectral image fusion based approach. Robotics and Autonomous Systems, Vol. 86, pp. 26–36.
Abstract: This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with monocular visible/infrared spectra are also provided, showing the advantages of the proposed scheme.
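The general idea of DWT-based image fusion can be sketched as below, using a fixed one-level Haar basis with a simple rule (average the approximation band, keep the larger-magnitude detail coefficients). This is a generic illustration only: the paper instead selects the fusion setup empirically via a mutual-information metric, which is not reproduced here.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform of an even-sized single-channel image."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation band
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def dwt_fuse(visible, infrared):
    """Fuse two registered single-channel images in the wavelet domain:
    average the approximation bands, keep the larger-magnitude coefficient
    in each detail band (a common fixed rule, not the paper's tuned one)."""
    v = haar_dwt2(visible.astype(np.float64))
    r = haar_dwt2(infrared.astype(np.float64))
    ll = (v[0] + r[0]) / 2.0
    details = [np.where(np.abs(dv) >= np.abs(dr), dv, dr)
               for dv, dr in zip(v[1:], r[1:])]
    return haar_idwt2(ll, *details)
```

The fused image can then be fed to any monocular visual odometry pipeline in place of the raw visible or infrared frame.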