Silva, S., Verdezoto, N., Paillacho, D., Millan-Norman, S., & Hernandez, J. D. (2023). Online Social Robot Navigation in Indoor, Large and Crowded Environments. In IEEE International Conference on Robotics and Automation (ICRA 2023), London, May 29 – June 2, 2023 (Vol. 2023-May, pp. 9749–9756).
Low, S., Nina, O., Sappa, A. D., Blasch, E., & Inkawhich, N. (2023). Multi-modal Aerial View Image Challenge: Translation from Synthetic Aperture Radar to Electro-Optical Domain Results – PBVS 2023. In 19th IEEE Workshop on Perception Beyond the Visible Spectrum at the Computer Vision and Pattern Recognition Conference (CVPR 2023), Vancouver, June 18–28, 2023 (Vol. 2023-June, pp. 515–523).
Rivadeneira, R. E., Sappa, A. D., Vintimilla, B. X., Wang, C., Jiang, J., Liu, X., Zhong, Z., Dai, B., Li, R., & Li, S. (2023). Thermal Image Super-Resolution Challenge Results – PBVS 2023. In 19th IEEE Workshop on Perception Beyond the Visible Spectrum at the Computer Vision and Pattern Recognition Conference (CVPR 2023), Vancouver, June 18–28, 2023 (Vol. 2023-June, pp. 470–478).
Low, S., Nina, O., Sappa, A. D., Blasch, E., & Inkawhich, N. (2023). Multi-modal Aerial View Object Classification Challenge Results – PBVS 2023. In 19th IEEE Workshop on Perception Beyond the Visible Spectrum at the Computer Vision and Pattern Recognition Conference (CVPR 2023), Vancouver, June 18–28, 2023 (Vol. 2023-June, pp. 412–421).
Rivadeneira, R. E., Sappa, A. D., Vintimilla, B. X., Kim, J., Kim, D., et al. (2022). Thermal Image Super-Resolution Challenge Results – PBVS 2022. In Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19–24 (Vol. 2022-June, pp. 349–357).
Abstract: This paper presents results from the third Thermal Image Super-Resolution (TISR) challenge, organized in the Perception Beyond the Visible Spectrum (PBVS) 2022 workshop. The challenge uses the same thermal image dataset as the first two challenges, with 951 training images and 50 validation images at each resolution; a set of 20 images was kept aside for testing. The evaluation tasks were to measure the PSNR and SSIM between the SR image and the ground truth (the HR thermal noisy image downsampled by four), and also between the SR image and the semi-registered HR image (acquired with another camera). The results outperformed those from last year's challenge, improving both evaluation metrics. This year, almost 100 teams registered for the challenge, showing the community's interest in this hot topic.
Rangnekar, A., Mulhollan, Z., Vodacek, A., Hoffman, M., Sappa, A. D., Yu, J., et al. (2022). Semi-Supervised Hyperspectral Object Detection Challenge Results – PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19–24 (Vol. 2022-June, pp. 389–397).
Low, S., Inkawhich, N., Nina, O., Sappa, A. D., & Blasch, E. (2022). Multi-modal Aerial View Object Classification Challenge Results – PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19–24 (Vol. 2022-June, pp. 417–425).
Abstract: This paper details the results and main findings of the second iteration of the Multi-modal Aerial View Object Classification (MAVOC) challenge. The primary goal of both MAVOC challenges is to inspire research into methods for building recognition models that utilize both synthetic aperture radar (SAR) and electro-optical (EO) input modalities. Teams are encouraged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge showed a proof of concept that both modalities could be used together, the 2022 challenge focuses on more detailed multi-modal models, using the same UNIfied COincident Optical and Radar for recognitioN (UNICORN) dataset and competition format as in 2021. Specifically, the challenge covers two tracks: (1) SAR classification and (2) SAR + EO classification. The bulk of this document is dedicated to discussing the top-performing methods and describing their performance on our blind test set. Notably, all of the top ten teams outperform our baseline. For SAR classification, the top team showed a 129% improvement over our baseline and an 8% average improvement over the 2021 winner. The top team for SAR + EO classification shows a 165% improvement over our baseline and a 32% average improvement over 2021.
Silva, S., Paillacho, D., Verdezoto, N., & Hernandez, J. D. (2022). Towards Online Socially Acceptable Robot Navigation. In IEEE International Conference on Automation Science and Engineering (Vol. 2022-August, pp. 707–714).
Benítez-Quintero, J., Q.-P. O., & Calderon, F. (2022). Notes on Sulfur Fluxes in Urban Areas with Industrial Activity. In 20th LACCEI International Multi-Conference for Engineering, Education and Technology (LACCEI 2022) (Vol. 2022-July).
Suárez, P. L., Sappa, A. D., & Vintimilla, B. X. (2021). Cycle generative adversarial network: towards a low-cost vegetation index estimation. In IEEE International Conference on Image Processing (ICIP 2021) (Vol. 2021-September, pp. 2783–2787).
Abstract: This paper presents a novel unsupervised approach to estimate the Normalized Difference Vegetation Index (NDVI). The NDVI is obtained as the ratio between information from the visible and near-infrared spectral bands; in the current work, the NDVI is estimated just from an image of the visible spectrum through a Cyclic Generative Adversarial Network (CyclicGAN). This unsupervised architecture learns to estimate the NDVI by means of an image translation between the red channel of a given RGB image and the unpaired NDVI image. The translation is obtained by means of a ResNet architecture and a multiple-term loss function. Experimental results obtained with this unsupervised scheme show the validity of the implemented model. Additionally, comparisons with state-of-the-art approaches are provided, showing improvements with the proposed approach.