N. Onkarappa, Cristhian A. Aguilera, B. X. Vintimilla, & Angel D. Sappa. (2014). Cross-spectral Stereo Correspondence using Dense Flow Fields. In International Conference on Computer Vision Theory and Applications (VISAPP 2014), Lisbon, Portugal (Vol. 3, pp. 613–617). IEEE.
Abstract: This manuscript addresses the cross-spectral stereo correspondence problem. It proposes using a dense flow field based representation instead of the original cross-spectral images, which have a low correlation. Working in the flow field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments show the validity of the proposed approach.
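The core idea, computing a classical similarity measure over dense flow fields rather than over the weakly correlated cross-spectral intensities, can be sketched as follows. The function names and the choice of NCC as the cost function are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def flow_field_cost(flow_left, flow_right, x, y, d, win=3):
    """Matching cost at pixel (x, y) and disparity d, computed on the
    dense flow fields (H x W x 2 arrays) of the left/right views
    instead of the raw cross-spectral intensities."""
    h = win // 2
    pl = flow_left[y - h:y + h + 1, x - h:x + h + 1]
    pr = flow_right[y - h:y + h + 1, x - d - h:x - d + h + 1]
    return 1.0 - ncc(pl, pr)  # lower cost = better match
```

A stereo matcher would then minimize this cost over candidate disparities exactly as with a classical intensity-based cost.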
|
Armin Mehri, & Angel D. Sappa. (2019). Colorizing Near Infrared Images through a Cyclic Adversarial Approach of Unpaired Samples. In Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States (pp. 971–979).
Abstract: This paper presents a novel approach for colorizing near infrared (NIR) images. The approach is based on image-to-image translation using a Cycle-Consistent adversarial network, which learns the color channels from unpaired datasets. The approach uses tailored networks as generators, which require less computation time, converge faster, and generate high-quality samples. The obtained results have been evaluated quantitatively (using standard evaluation metrics) and qualitatively, showing considerable improvements with respect to the state of the art.
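The cycle-consistency constraint that lets such a network train on unpaired data can be illustrated roughly as below. The generator names `G` (NIR to RGB) and `F` (RGB to NIR) and the plain L1 penalty are assumptions for illustration, not the paper's actual loss terms:

```python
import numpy as np

def cycle_consistency_loss(x_nir, x_rgb, G, F):
    """L1 cycle-consistency term of CycleGAN-style training:
    G maps NIR -> RGB, F maps RGB -> NIR; each cycle should
    reconstruct its input, so no paired samples are needed."""
    forward = np.abs(F(G(x_nir)) - x_nir).mean()   # NIR -> RGB -> NIR
    backward = np.abs(G(F(x_rgb)) - x_rgb).mean()  # RGB -> NIR -> RGB
    return forward + backward
```

In full training this term is added to the adversarial losses of the two discriminators; by itself it only enforces that the two translations are approximately inverse to each other.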
|
Patricia L. Suarez, Angel D. Sappa, Boris X. Vintimilla, & Riad I. Hammoud. (2019). Image Vegetation Index through a Cycle Generative Adversarial Network. In Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States (pp. 1014–1021).
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just an RGB image. The NDVI values are obtained by combining images from the visible spectral band with a synthetic near infrared (NIR) image produced by a cycle GAN. The cycle GAN network obtains an NIR image from a given gray-scale image; it is trained on an unpaired set of gray-scale and NIR images, using a U-net architecture and a multiple-term loss function (the gray-scale images are obtained from the provided RGB images). The NIR image estimated by the proposed cycle generative adversarial network is then used to compute the NDVI index. Experimental results show the validity of the proposed approach, and comparisons with previous approaches are also provided.
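The NDVI itself is a standard index, (NIR − Red) / (NIR + Red); under the paper's setup the NIR band would be the GAN-estimated one. A minimal sketch (the `eps` term for numerical safety is my addition, not part of the standard definition):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Standard NDVI: (NIR - Red) / (NIR + Red), in [-1, 1].
    Here `nir` would be the synthetic band estimated by the
    cycle GAN and `red` the red channel of the RGB image."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)
```

High values indicate dense, healthy vegetation; values near zero or below indicate bare soil, water, or built surfaces.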
|
Rangnekar, A., Mulhollan, Z., Vodacek, A., Hoffman, M., Sappa, A. D., Yu, J., et al. (2022). Semi-Supervised Hyperspectral Object Detection Challenge Results – PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19–24 (Vol. 2022-June, pp. 389–397).
|
Low, S., I. N., Nina, O., Sappa, A., & Blasch, E. (2022). Multi-modal Aerial View Object Classification Challenge Results – PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19–24 (Vol. 2022-June, pp. 417–425).
Abstract: This paper details the results and main findings of the second iteration of the Multi-modal Aerial View Object Classification (MAVOC) challenge. The primary goal of both MAVOC challenges is to inspire research into methods for building recognition models that use both synthetic aperture radar (SAR) and electro-optical (EO) input modalities. Teams are encouraged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge showed a proof of concept that both modalities could be used together, the 2022 challenge focuses on detailed multi-modal models, using the same UNIfied COincident Optical and Radar for recognitioN (UNICORN) dataset and competition format as in 2021. Specifically, the challenge comprises two tracks: (1) SAR classification and (2) SAR + EO classification. The bulk of this document is dedicated to discussing the top performing methods and describing their performance on our blind test set. Notably, all of the top ten teams outperform our baseline. For SAR classification, the top team showed a 129% improvement over our baseline and an 8% average improvement over the 2021 winner. The top team for SAR + EO classification shows a 165% improvement over the baseline, with a 32% average improvement over 2021.
|
Roberto Jacome Galarza. (2022). Multimodal deep learning for crop yield prediction. In Doctoral Symposium on Information and Communication Technologies (DSICT 2022), October 12–14 (Vol. 1647, pp. 106–117).
|
Stalin Francis Quinde. (2019). Un nuevo modelo BM3D-RNCA para mejorar la estimación de la imagen libre de ruido producida por el método BM3D [A new BM3D-RNCA model to improve the estimation of the noise-free image produced by the BM3D method]. (Ph.D. Angel Sappa, Director). M.Sc. thesis. Ediciones FIEC-ESPOL.
|
Shendry Rosero Vásquez. (2019). Reconocimiento facial: técnicas tradicionales y técnicas de aprendizaje profundo, un análisis [Facial recognition: traditional techniques and deep learning techniques, an analysis]. (Ph.D. Angel Sappa, Director & Ph.D. Boris Vintimilla, Codirector). M.Sc. thesis. Ediciones FIEC-ESPOL.
|
Patricia L. Suarez. (2020). Procesamiento y representación de imágenes multiespectrales usando técnicas de aprendizaje profundo [Processing and representation of multispectral images using deep learning techniques]. (Ph.D. Angel Sappa, Director & Ph.D. Boris Vintimilla, Codirector). Ph.D. thesis. Ediciones FIEC-ESPOL.
|
Morocho-Cayamcela, M. E. (2020). Increasing the Segmentation Accuracy of Aerial Images with Dilated Spatial Pyramid Pooling. Electronic Letters on Computer Vision and Image Analysis (ELCVIA), 19(2), 17–21.
|