Xavier Soria, Angel D. Sappa, Patricio Humanante, & Arash Akbarinia. (2023). Dense extreme inception network for edge detection. Pattern Recognition, Vol. 139.
Emmanuel Moran Barreiro & Boris Vintimilla. (2023). Towards a Robust Solution for the Supermarket Shelf Audit Problem: Obsolete Price Tags in Shelves. In 26th Iberoamerican Congress on Pattern Recognition, Lecture Notes in Computer Science (Vol. 14469 LNCS, pp. 257–271).
Rafael E. Rivadeneira, Angel D. Sappa, Boris X. Vintimilla, Jin Kim, Dogun Kim, et al. (2022). Thermal Image Super-Resolution Challenge Results – PBVS 2022. In Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19–24 (Vol. 2022-June, pp. 349–357).
Abstract: This paper presents results from the third Thermal Image Super-Resolution (TISR) challenge, organized in the Perception Beyond the Visible Spectrum (PBVS) 2022 workshop. The challenge uses the same thermal image dataset as the first two challenges, with 951 training images and 50 validation images at each resolution; a set of 20 images was kept aside for testing. The evaluation tasks were to measure the PSNR and SSIM between the SR image and the ground truth (HR thermal noisy image downsampled by four), and also between the SR image and the semi-registered HR image (acquired with another camera). The results outperformed those from last year's challenge on both evaluation metrics. This year, almost 100 teams registered for the challenge, showing the community's interest in this hot topic.
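The PSNR half of the evaluation described above is the standard peak signal-to-noise ratio; a minimal numpy sketch (the `psnr` helper and the 8-bit `max_val` default are illustrative, not the challenge's official scoring code, and SSIM is omitted):

```python
import numpy as np

def psnr(sr: np.ndarray, hr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between a super-resolved image and a reference."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For instance, two 8-bit images differing by 1 at every pixel give a PSNR of about 48.13 dB.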
Low S., Inkawhich N., Nina O., Sappa A., & Blasch E. (2022). Multi-modal Aerial View Object Classification Challenge Results – PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19–24 (Vol. 2022-June, pp. 417–425).
Abstract: This paper details the results and main findings of the second iteration of the Multi-modal Aerial View Object Classification (MAVOC) challenge. The primary goal of both MAVOC challenges is to inspire research into methods for building recognition models that utilize both synthetic aperture radar (SAR) and electro-optical (EO) input modalities. Teams are challenged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge showed a proof of concept that both modalities could be used together, the 2022 challenge focuses on detailed multi-modal models, using the same UNIfied COincident Optical and Radar for recognitioN (UNICORN) dataset and competition format as in 2021. Specifically, the challenge focuses on two tracks: (1) SAR classification and (2) SAR + EO classification. The bulk of this document is dedicated to discussing the top-performing methods and describing their performance on our blind test set. Notably, all of the top ten teams outperform our baseline. For SAR classification, the top team showed a 129% improvement over our baseline and an 8% average improvement over the 2021 winner. The top team for SAR + EO classification shows a 165% improvement over the baseline, with a 32% average improvement over 2021.
Cristhian A. Aguilera, Francisco J. Aguilera, Angel D. Sappa, & Ricardo Toledo. (2016). Learning cross-spectral similarity measures with deep convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (pp. 267–275).
Abstract: The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene that cannot be obtained with images from just one spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
Juan A. Carvajal, Dennis G. Romero, & Angel D. Sappa. (2016). Fine-tuning based deep convolutional networks for lepidopterous genus recognition. In XXI IberoAmerican Congress on Pattern Recognition (pp. 1–9).
Abstract: This paper describes an image classification approach oriented to identify specimens of lepidopterous insects recognized at Ecuadorian ecological reserves. This work seeks to contribute to studies in the area of biology about genera of butterflies and also to facilitate the registration of unrecognized specimens. The proposed approach is based on the fine-tuning of three widely used pre-trained Convolutional Neural Networks (CNNs). This strategy is intended to overcome the reduced number of labeled images. Experimental results with a dataset labeled by expert biologists are presented; a recognition accuracy above 92% is reached.
Armin Mehri, & Angel D. Sappa. (2019). Colorizing Near Infrared Images through a Cyclic Adversarial Approach of Unpaired Samples. In Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States (pp. 971–979).
Abstract: This paper presents a novel approach for colorizing near-infrared (NIR) images. The approach is based on image-to-image translation using a Cycle-Consistent adversarial network to learn the color channels from an unpaired dataset. The approach uses tailored networks as generators that require less computation time, converge faster and generate high-quality samples. The obtained results have been evaluated quantitatively, using standard evaluation metrics, and qualitatively, showing considerable improvements with respect to the state of the art.
Patricia L. Suarez, Angel D. Sappa, Boris X. Vintimilla, & Riad I. Hammoud. (2019). Image Vegetation Index through a Cycle Generative Adversarial Network. In Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States (pp. 1014–1021).
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just an RGB image. The NDVI values are obtained by using images from the visible spectral band together with a synthetic near-infrared image obtained by a cycle GAN. The cycle GAN network is able to obtain a NIR image from a given grayscale image. It is trained on an unpaired set of grayscale and NIR images, using a U-net architecture and a multiple loss function (grayscale images are obtained from the provided RGB images). Then, the NIR image estimated with the proposed cycle generative adversarial network is used to compute the NDVI index. Experimental results are provided showing the validity of the proposed approach; comparisons with previous approaches are also provided.
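Once the NIR band has been synthesized as described above, the NDVI itself is the standard normalized band ratio between NIR and red reflectance; a minimal numpy sketch (the function name and the `eps` division guard are illustrative, assuming reflectance-like inputs in [0, 1]):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    # eps avoids division by zero where both bands are zero
    return (nir - red) / (nir + red + eps)
```

Healthy vegetation reflects strongly in NIR and absorbs red, so its NDVI approaches 1, while bare surfaces sit near 0.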
Mehri, A., Ardakani, P.B., & Sappa, A.D. (2021). LiNet: A Lightweight Network for Image Super Resolution. In 25th International Conference on Pattern Recognition (ICPR), January 10–15, 2021 (pp. 7196–7202).
Rivadeneira R.E., Sappa A.D., Vintimilla B.X., Nathan S., Kansal P., Mehri A., et al. (2021). Thermal Image Super-Resolution Challenge – PBVS 2021. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2021), June 19–25, 2021 (pp. 4354–4362).