|
Patricia Suarez, H. V., Dario Carpio, Angel Sappa, Patricia Urdiales, Francisca Burgos. (2022). Deep Learning based Shrimp Classification. In 17th International Symposium on Visual Computing, San Diego, USA, October 3-5. Lecture Notes in Computer Science (LNCS) (Vol. 13598, pp. 36–45).
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Vegetation Index Estimation from Monospectral Images. In 15th International Conference on Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal. Lecture Notes in Computer Science (Vol. 10882, pp. 353–362).
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference of the red and infrared radiances over their sum. In other words, information from the red channel of an RGB image and the corresponding infrared spectral band are required for its computation. In the current work the NDVI index is estimated just from the red channel by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single level structure, which at the final layer combines the results of the convolutional operations with the given red channel plus Gaussian noise to enhance details, resulting in a sharp NDVI image. Then, the discriminative model estimates the probability that the generated NDVI index came from the training dataset rather than being automatically generated. Experimental results with a large set of real images are provided, showing that a single level Conditional GAN model represents an acceptable approach to estimate the NDVI index.
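For reference, the NDVI definition summarized in this abstract corresponds to the standard formulation (stated here from common usage, not reproduced from the paper itself):

    NDVI = (NIR - Red) / (NIR + Red)

where NIR and Red are the near-infrared and red radiances (or reflectances), yielding values in the range [-1, 1].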
|
|
|
Rafael E. Rivadeneira, A. D. S., Boris X. Vintimilla, Jin Kim, Dogun Kim et al. (2022). Thermal Image Super-Resolution Challenge Results - PBVS 2022. In Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24. (Vol. 2022-June, pp. 349–357).
Abstract: This paper presents results from the third Thermal Image Super-Resolution (TISR) challenge organized in the Perception Beyond the Visible Spectrum (PBVS) 2022 workshop. The challenge uses the same thermal image dataset as the first two challenges, with 951 training images and 50 validation images at each resolution. A set of 20 images was kept aside for testing. The evaluation tasks were to measure the PSNR and SSIM between the SR image and the ground truth (the HR thermal noisy image downsampled by four), and also to measure the PSNR and SSIM between the SR image and the semi-registered HR image (acquired with another camera). The results outperformed those from last year's challenge, improving both evaluation metrics. This year, almost 100 participants registered for the challenge, showing the community's interest in this hot topic.
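As a reminder of the metrics named in this abstract (standard definitions, not taken from the challenge report): PSNR is computed as

    PSNR = 10 * log10(MAX^2 / MSE)

where MAX is the maximum representable pixel value and MSE is the mean squared error between the super-resolved image and the reference, while SSIM compares local luminance, contrast, and structure statistics between the two images.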
|
|
|
Rangnekar, A., Mulhollan, Z., Vodacek, A., Hoffman, M., Sappa, A. D., Yu, J., et al. (2022). Semi-Supervised Hyperspectral Object Detection Challenge Results - PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24. (Vol. 2022-June, pp. 389–397).
|
|
|
Low S., I. N., Nina O., Sappa A. and Blasch E. (2022). Multi-modal Aerial View Object Classification Challenge Results - PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24. (Vol. 2022-June, pp. 417–425).
Abstract: This paper details the results and main findings of the second iteration of the Multi-modal Aerial View Object Classification (MAVOC) challenge. The primary goal of both MAVOC challenges is to inspire research into methods for building recognition models that utilize both synthetic aperture radar (SAR) and electro-optical (EO) input modalities. Teams are encouraged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge showed a proof of concept that both modalities could be used together, the 2022 challenge focuses on detailed multi-modal models, using the same UNIfied COincident Optical and Radar for recognitioN (UNICORN) dataset and competition format that was used in 2021. Specifically, the challenge focuses on two techniques: (1) SAR classification and (2) SAR + EO classification. The bulk of this document is dedicated to discussing the top performing methods and describing their performance on our blind test set. Notably, all of the top ten teams outperform our baseline. For SAR classification, the top team showed a 129% improvement over our baseline and an 8% average improvement over the 2021 winner. The top team for SAR + EO classification shows a 165% improvement, with a 32% average improvement over 2021.
|
|
|
Silva Steven, P. D., Verdezoto Nervo, Hernandez Juan David. (2022). Towards Online Socially Acceptable Robot Navigation. In IEEE International Conference on Automation Science and Engineering (Vol. 2022-August, pp. 707–714).
|
|
|
Benítez-Quintero J., Q. - P. O., Calderon, Fernanda. (2022). Notes on Sulfur Fluxes in Urban Areas with Industrial Activity. In 20th LACCEI International Multi-Conference for Engineering, Education Caribbean Conference for Engineering and Technology, LACCEI 2022 (Vol. 2022-July).
|
|
|
Rafael E. Rivadeneira, Angel D. Sappa, Boris X. Vintimilla, Lin Guo, Jiankun Hou, Armin Mehri, et al. (2020). Thermal Image Super-Resolution Challenge – PBVS 2020. In The 16th IEEE Workshop on Perception Beyond the Visible Spectrum at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) (Vol. 2020-June, pp. 432–439).
Abstract: This paper summarizes the top contributions to the first challenge on thermal image super-resolution (TISR), which was organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2020 workshop. In this challenge, a novel thermal image dataset is considered together with state-of-the-art approaches evaluated under a common framework. The dataset used in the challenge consists of 1021 thermal images, obtained from three distinct thermal cameras at different resolutions (low-resolution, mid-resolution, and high-resolution), resulting in a total of 3063 thermal images. From each resolution, 951 images are used for training and 50 for testing, while the 20 remaining images are used for two proposed evaluations. The first evaluation consists of downsampling the low-resolution, mid-resolution, and high-resolution thermal images by x2, x3 and x4 respectively, and comparing their super-resolution results with the corresponding ground truth images. The second evaluation is comprised of obtaining the x2 super-resolution from a given mid-resolution thermal image and comparing it with the corresponding semi-registered high-resolution thermal image. Out of 51 registered participants, 6 teams reached the final validation phase.
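As an illustration of the first evaluation protocol described in this abstract, the sketch below downsamples an HR thermal image, super-resolves it with a user-supplied model, and scores the result with PSNR and SSIM. It is a minimal sketch only: the bicubic resampling, the 8-bit data range, and the helper names are assumptions, not the challenge's official evaluation code.

    # Minimal sketch of the downsample-then-compare evaluation (assumed: bicubic
    # resampling and 8-bit grayscale images; not the official challenge code).
    import cv2
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_sr(hr_image: np.ndarray, super_resolve, scale: int = 4):
        """Downsample an HR thermal image, super-resolve it, and score it against the HR ground truth."""
        h, w = hr_image.shape[:2]
        lr = cv2.resize(hr_image, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
        sr = super_resolve(lr)  # user-supplied model mapping the LR image back to HR size
        psnr = peak_signal_noise_ratio(hr_image, sr, data_range=255)
        ssim = structural_similarity(hr_image, sr, data_range=255)
        return psnr, ssim

    # Usage with a trivial stand-in "model" (plain bicubic upsampling):
    # hr = cv2.imread("thermal_hr.png", cv2.IMREAD_GRAYSCALE)
    # print(evaluate_sr(hr, lambda lr: cv2.resize(lr, hr.shape[::-1], interpolation=cv2.INTER_CUBIC)))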
|
|
|
Henry O. Velesaca, Raul A. Mira, Patricia L. Suarez, Christian X. Larrea, & Angel D. Sappa. (2020). Deep Learning based Corn Kernel Classification. In The 1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) (Vol. 2020-June, pp. 294–302).
Abstract: This paper presents a full pipeline to classify sample sets of corn kernels. The proposed approach follows a segmentation-classification scheme. The image segmentation is performed through a well-known deep learning based approach, the Mask R-CNN architecture, while the classification is performed by means of a novel lightweight network specially designed for this task; good corn kernel, defective corn kernel and impurity categories are considered. As a second contribution, a carefully annotated multi-touching corn kernel dataset has been generated. This dataset has been used for training the segmentation and classification modules. Quantitative evaluations have been performed and comparisons with other approaches are provided, showing the improvements obtained with the proposed pipeline.
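The segmentation-then-classification scheme described in this abstract can be outlined as in the sketch below. It is only an illustrative sketch: the COCO-pretrained Mask R-CNN is a placeholder for the model the paper fine-tunes on its corn kernel dataset, and the tiny classifier is a hypothetical stand-in for the paper's lightweight network.

    # Illustrative segmentation-then-classification pipeline (hypothetical stand-ins,
    # not the paper's trained models).
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor

    # Off-the-shelf instance segmenter; the paper fine-tunes Mask R-CNN on its own
    # multi-touching corn kernel dataset instead of using COCO weights directly.
    segmenter = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

    # Hypothetical lightweight classifier: good kernel / defective kernel / impurity.
    classifier = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(16, 3),
    ).eval()

    def classify_kernels(image):  # image: PIL.Image of a corn kernel sample
        x = to_tensor(image)
        with torch.no_grad():
            instances = segmenter([x])[0]      # boxes, masks and scores per detected kernel
            labels = []
            for box, score in zip(instances["boxes"], instances["scores"]):
                if score < 0.5:                # keep confident detections only
                    continue
                x0, y0, x1, y1 = box.int().tolist()
                crop = x[:, y0:y1, x0:x1]      # crop each segmented kernel
                crop = torch.nn.functional.interpolate(crop[None], size=(64, 64))
                labels.append(classifier(crop).argmax(1).item())
        return labels                          # one class index per detected kernel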
|
|
|
Henry O. Velesaca, S. A., Patricia L. Suarez, Ángel Sanchez & Angel D. Sappa. (2020). Off-the-Shelf Based System for Urban Environment Video Analytics. In The 27th International Conference on Systems, Signals and Image Processing (IWSSIP 2020) (Vol. 2020-July, pp. 459–464).
Abstract: This paper presents the design and implementation details of a system built by using off-the-shelf algorithms for urban video analytics. The system allows the connection to public video surveillance camera networks to obtain the necessary information to generate statistics from urban scenarios (e.g., number of vehicles, types of cars, directions, number of persons, etc.). The obtained information could be used not only for traffic management but also to estimate the carbon footprint of urban scenarios. As a case study, a university campus is selected to evaluate the performance of the proposed system. The system is implemented in a modular way so that it can be used as a testbed to evaluate different algorithms. Implementation results are provided, showing the validity and utility of the proposed approach.
|
|