Spencer Low, Nathan Inkawhich, Oliver Nina, Angel D. Sappa & Erik Blasch. (2022). Multi-modal Aerial View Object Classification Challenge Results – PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24 (Vol. 2022-June, pp. 417–425).
Abstract: This paper details the results and main findings of the second iteration of the Multi-modal Aerial View Object Classification (MAVOC) challenge. The primary goal of both MAVOC challenges is to inspire research into methods for building recognition models that utilize both synthetic aperture radar (SAR) and electro-optical (EO) input modalities. Teams are challenged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge showed a proof of concept that both modalities could be used together, the 2022 challenge focuses on detailed multi-modal models, using the same UNIfied COincident Optical and Radar for recognitioN (UNICORN) dataset and competition format as in 2021. Specifically, the challenge comprises two tracks: (1) SAR classification and (2) SAR + EO classification. The bulk of this document is dedicated to discussing the top-performing methods and describing their performance on our blind test set. Notably, all of the top ten teams outperform our baseline. For SAR classification, the top team showed a 129% improvement over our baseline and an 8% average improvement over the 2021 winner. The top team for SAR + EO classification shows a 165% improvement over the baseline and a 32% average improvement over 2021.
|
Spencer Low, Oliver Nina, Angel D. Sappa, Erik Blasch & Nathan Inkawhich. (2023). Multi-modal Aerial View Object Classification Challenge Results – PBVS 2023. In 19th IEEE Workshop on Perception Beyond the Visible Spectrum at the Computer Vision & Pattern Recognition Conference (CVPR 2023), Vancouver, June 18-28, 2023 (Vol. 2023-June, pp. 412–421).
|
Steven Silva, D. P., David Soque, María Guerra & Jonathan Paillacho. (2021). Autonomous Intelligent Navigation for Mobile Robots in Closed Environments. In The 2nd International Conference on Applied Technologies (ICAT 2020), December 2-4. Communications in Computer and Information Science (Vol. 1388, pp. 391–402).
|
Rangnekar, A., Mulhollan, Z., Vodacek, A., Hoffman, M., Sappa, A. D., Yu, J., et al. (2022). Semi-Supervised Hyperspectral Object Detection Challenge Results – PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24 (Vol. 2022-June, pp. 389–397).
|
Patricia L. Suarez, Dario Carpio & Angel D. Sappa. (2023). A Deep Learning Based Approach for Synthesizing Realistic Depth Maps. In 22nd International Conference on Image Analysis and Processing (ICIAP 2023), Udine, September 11-15, 2023. Lecture Notes in Computer Science (Vol. 14234 LNCS, pp. 369–380).
|
Wilton Agila, G. R., Raul M. del Toro & Livington Miranda. (2023). Qualitative Model for an Oxygen Therapy System Based on Renewable Energy. In 12th International Conference on Renewable Energy Research and Applications (ICRERA 2023), Oshawa, August 29 – September 1, 2023 (pp. 365–371).
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Cross-spectral image dehaze through a dense stacked conditional GAN based approach. In 14th IEEE International Conference on Signal-Image Technology & Internet-Based Systems (SITIS 2018) (pp. 358–364).
Abstract: This paper proposes a novel approach to remove haze from RGB images using near infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). The deep network receives, in addition to the hazy image, its corresponding image in the near infrared spectrum, which serves to accelerate the learning of fine image details. The model uses a triplet layer that allows each channel of the visible spectrum image to be learned independently, so that haze is removed from each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results show that the proposed method effectively removes haze from the images. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Vegetation Index Estimation from Monospectral Images. In 15th International Conference on Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal. Lecture Notes in Computer Science (Vol. 10882, pp. 353–362).
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference of the red and infrared radiances over their sum. In other words, information from the red channel of an RGB image and the corresponding infrared spectral band are required for its computation. In the current work the NDVI index is estimated from the red channel alone by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single-level structure, which at the final layer combines the results of the convolutional operations with the given red channel plus Gaussian noise to enhance details, resulting in a sharp NDVI image. The discriminative model then estimates the probability that a given NDVI index came from the training dataset rather than from the generator. Experimental results on a large set of real images show that a single-level Conditional GAN model represents an acceptable approach to estimating the NDVI index.
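Note: the NDVI definition quoted in the abstract corresponds to the standard formula NDVI = (NIR − Red) / (NIR + Red), where NIR and Red are the near-infrared and red radiances; values range from −1 to 1, with dense vegetation near the upper end. Since an RGB image carries no NIR band, the NIR term must be inferred, which is what the generative network above is trained to do.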
|
Rafael E. Rivadeneira, Angel D. Sappa, Boris X. Vintimilla, Jin Kim, Dogun Kim, et al. (2022). Thermal Image Super-Resolution Challenge Results – PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24 (Vol. 2022-June, pp. 349–357).
Abstract: This paper presents results from the third Thermal Image Super-Resolution (TISR) challenge, organized in the Perception Beyond the Visible Spectrum (PBVS) 2022 workshop. The challenge uses the same thermal image dataset as the first two challenges, with 951 training images and 50 validation images at each resolution; a set of 20 images was kept aside for testing. The evaluation tasks were to measure the PSNR and SSIM between the SR image and the ground truth (the HR thermal noisy image downsampled by four), and also to measure the PSNR and SSIM between the SR image and the semi-registered HR image (acquired with another camera). The results outperformed those from last year's challenge, improving both evaluation metrics. This year, almost 100 teams registered for the challenge, showing the community's interest in this hot topic.
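Note: the two evaluation metrics named above are standard image-quality measures, not specific to this challenge. PSNR = 10 · log10(MAX² / MSE), in dB, where MAX is the peak pixel value and MSE is the mean squared error between the SR and reference images; SSIM compares local luminance, contrast, and structure between the two images, yielding a score in [−1, 1] with 1 for identical images.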
|
Patricia L. Suarez, Dario Carpio & Angel D. Sappa. (2023). Depth Map Estimation from a Single 2D Image. In 17th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS 2023), Bangkok, November 8-10, 2023 (pp. 347–353).
|