Rafael E. Rivadeneira, Angel D. Sappa, Boris X. Vintimilla, Lin Guo, Jiankun Hou, Armin Mehri, et al. (2020). Thermal Image Super-Resolution Challenge – PBVS 2020. In The 16th IEEE Workshop on Perception Beyond the Visible Spectrum at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) (Vol. 2020-June, pp. 432–439).
Abstract: This paper summarizes the top contributions to the first challenge on thermal image super-resolution (TISR), which was organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2020 workshop. In this challenge, a novel thermal image dataset is considered together with state-of-the-art approaches evaluated under a common framework. The dataset used in the challenge consists of 1021 thermal images, obtained from three distinct thermal cameras at different resolutions (low-resolution, mid-resolution, and high-resolution), resulting in a total of 3063 thermal images. From each resolution, 951 images are used for training and 50 for testing, while the 20 remaining images are used for two proposed evaluations. The first evaluation consists of downsampling the low-resolution, mid-resolution, and high-resolution thermal images by x2, x3, and x4 respectively, and comparing their super-resolution results with the corresponding ground-truth images. The second evaluation consists of obtaining the x2 super-resolution from a given mid-resolution thermal image and comparing it with the corresponding semi-registered high-resolution thermal image. Out of 51 registered participants, 6 teams reached the final validation phase.
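The first evaluation protocol described above can be sketched in a few lines. The abstract does not specify the downsampling kernel or the fidelity metric, so the box filter and PSNR used here are assumptions, not the challenge's exact settings:

```python
import numpy as np

def downsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Box-filter downsampling by an integer factor (x2, x3, or x4 in the
    challenge). The actual challenge kernel is unspecified; bicubic is
    another common choice."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    img = img[:h, :w].astype(np.float64)
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def psnr(gt: np.ndarray, sr: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio, a standard super-resolution metric:
    compares a super-resolved image against its ground truth."""
    mse = np.mean((gt.astype(np.float64) - sr.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A participant's model would take `downsample(gt, factor)` as input, produce a super-resolved image of the original size, and be scored with `psnr(gt, sr)` against the held-out ground truth.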
Xavier Soria, Edgar Riba, & Angel D. Sappa. (2020). Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection. In 2020 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1912–1921).
Abstract: This paper proposes a Deep Learning based edge detector, which is inspired by both HED (Holistically-Nested Edge Detection) and Xception networks. The proposed approach generates thin edge-maps that are plausible to the human eye; it can be used in any edge detection task without a previous training or fine-tuning process. As a second contribution, a large dataset with carefully annotated edges has been generated. This dataset has been used for training the proposed approach as well as the state-of-the-art algorithms for comparison. Quantitative and qualitative evaluations have been performed on different benchmarks, showing improvements with the proposed method when the F-measure of ODS and OIS is considered.
Henry O. Velesaca, Raul A. Mira, Patricia L. Suarez, Christian X. Larrea, & Angel D. Sappa. (2020). Deep Learning based Corn Kernel Classification. In The 1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) (Vol. 2020-June, pp. 294–302).
Abstract: This paper presents a full pipeline to classify sample sets of corn kernels. The proposed approach follows a segmentation-classification scheme. The image segmentation is performed through a well-known deep learning based approach, the Mask R-CNN architecture, while the classification is performed by means of a novel lightweight network specially designed for this task: good corn kernel, defective corn kernel, and impurity categories are considered. As a second contribution, a carefully annotated multi-touching corn kernel dataset has been generated. This dataset has been used for training the segmentation and classification modules. Quantitative evaluations have been performed, and comparisons with other approaches are provided, showing improvements with the proposed pipeline.
Henry O. Velesaca, Patricia L. Suarez, Ángel Sanchez, & Angel D. Sappa. (2020). Off-the-Shelf Based System for Urban Environment Video Analytics. In The 27th International Conference on Systems, Signals and Image Processing (IWSSIP 2020) (Vol. 2020-July, pp. 459–464).
Abstract: This paper presents the design and implementation details of a system built by using off-the-shelf algorithms for urban video analytics. The system allows connection to public video surveillance camera networks to obtain the necessary information to generate statistics from urban scenarios (e.g., number of vehicles, types of cars, direction, number of persons, etc.). The obtained information could be used not only for traffic management but also to estimate the carbon footprint of urban scenarios. As a case study, a university campus is selected to evaluate the performance of the proposed system. The system is implemented in a modular way so that it can be used as a testbed to evaluate different algorithms. Implementation results are provided, showing the validity and utility of the proposed approach.
Cristhian A. Aguilera, Cristóbal A. Navarro, & Angel D. Sappa. (2020). Fast CNN Stereo Depth Estimation through Embedded GPU Devices. Sensors, Vol. 2020-June(11), pp. 1–13.
Abstract: Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphic processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
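The models above regress a disparity map from the cost volume; turning disparity into metric depth uses the standard pinhole stereo relation depth = f · B / d. This helper is a generic sketch of that final conversion step, not code from the paper:

```python
def disparity_to_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Standard stereo geometry: depth (m) = focal length (px) * baseline (m)
    / disparity (px). Valid only for positive disparities."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, a 10-pixel disparity with a 1000-pixel focal length and a 10 cm baseline corresponds to a point 10 m away.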
Ángel Morera, A. Belén Moreno, Angel D. Sappa, & José F. Vélez. (2020). SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities. Sensors, Vol. 2020-August(16), pp. 1–23.
Abstract: This work compares Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks for the outdoor advertisement panel detection problem, handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages both in the real world as well as in the virtual one. For example, applications like Google Street View can be used for Internet publicity, and when detecting these ad panels in images, it could be possible to replace the publicity appearing inside the panels with another from a funding company. In our experiments, both SSD and YOLO detectors have produced acceptable results under variable sizes of panels, illumination conditions, viewing perspectives, partial occlusion of panels, complex backgrounds, and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was the near elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside the panel is analyzed after detection. On the other hand, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks, using the same evaluation metrics, is also included.
Armin Mehri, & Angel D. Sappa. (2019). Colorizing Near Infrared Images through a Cyclic Adversarial Approach of Unpaired Samples. In Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States (pp. 971–979).
Abstract: This paper presents a novel approach for colorizing near infrared (NIR) images. The approach is based on image-to-image translation using a Cycle-Consistent adversarial network for learning the color channels, and is therefore able to handle unpaired datasets. The approach uses tailored networks as generators, which require less computation time, converge faster, and generate high-quality samples. The obtained results have been evaluated quantitatively, using standard evaluation metrics, and qualitatively, showing considerable improvements with respect to the state of the art.
Patricia L. Suarez, Angel D. Sappa, Boris X. Vintimilla, & Riad I. Hammoud. (2019). Image Vegetation Index through a Cycle Generative Adversarial Network. In Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States (pp. 1014–1021).
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) just from an RGB image. The NDVI values are obtained by using images from the visible spectral band together with a synthetic near infrared image obtained by a cycled GAN. The cycled GAN network is able to obtain a NIR image from a given grayscale image. It is trained on an unpaired set of grayscale and NIR images, using a U-Net architecture and a multiple loss function (the grayscale images are obtained from the provided RGB images). Then, the NIR image estimated with the proposed cycle generative adversarial network is used to compute the NDVI index. Experimental results are provided showing the validity of the proposed approach. Additionally, comparisons with previous approaches are also provided.
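Once the GAN has synthesized the NIR band, the final step applies the standard NDVI formula, (NIR − Red) / (NIR + Red). The sketch below shows only that closing computation; the GAN itself, which is the paper's actual contribution, is out of scope here, and the `eps` guard is an implementation assumption:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalized Difference Vegetation Index, per pixel, in [-1, 1].
    `nir` would be the GAN-synthesized band, `red` the red channel of the
    input RGB image; eps guards against division by zero on dark pixels."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)
```

High values (dense vegetation reflects NIR strongly while absorbing red) approach 1; bare soil and water sit near or below 0.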
Angel Morera, Angel Sánchez, Angel D. Sappa, & José F. Vélez. (2019). Robust Detection of Outdoor Urban Advertising Panels in Static Images. In 17th International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS 2019); Ávila, España. Communications in Computer and Information Science (Vol. 1047, pp. 246–256).
Abstract: One interesting publicity application for Smart City environments is recognizing brand information contained in urban advertising panels. For such a purpose, a previous stage is to accurately detect and locate the position of these panels in images. This work presents an effective solution to this problem using a Single Shot Detector (SSD) based on a deep neural network architecture that minimizes the number of false detections under multiple variable conditions regarding the panels and the scene. The experimental results, measured with the Intersection over Union (IoU) accuracy metric, make this proposal applicable to real complex urban images.
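The IoU metric used to score panel detections has a simple closed form: intersection area over union area of predicted and ground-truth boxes. A minimal sketch, assuming axis-aligned boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2).
    Returns a value in [0, 1]; a detection is typically counted as correct
    when IoU with the ground-truth box exceeds a threshold such as 0.5."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The 0.5 threshold mentioned in the docstring is the conventional PASCAL-style choice, not necessarily the one used in the paper.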
Rafael E. Rivadeneira, Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2019). Thermal Image Super-Resolution through Deep Convolutional Neural Network. In 16th International Conference on Image Analysis and Recognition (ICIAR 2019); Waterloo, Canada (pp. 417–426).
Abstract: Due to the lack of thermal image datasets, a new dataset has been acquired to propose a super-resolution approach using a Deep Convolutional Neural Network schema. Different experiments have been carried out: firstly, the proposed architecture has been trained using only images of the visible spectrum, and later it has been trained with images of the thermal spectrum. The results showed that, with the network trained with thermal images, better results are obtained in the process of enhancing the images, maintaining the image details and perspective. The thermal dataset is available at http://www.cidis.espol.edu.ec/es/dataset