|
Records |
Links |
|
Author |
P. Ricaurte; C. Chilán; Cristhian A. Aguilera; Boris X. Vintimilla; Angel D. Sappa |
|
|
Title |
Feature Point Descriptors: Infrared and Visible Spectra |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Sensors |
Abbreviated Journal |
|
|
|
Volume |
14 |
Issue |
|
Pages |
3690-3701 |
|
|
Keywords |
cross-spectral imaging; feature point descriptors |
|
|
Abstract |
This manuscript evaluates the behavior of classical feature point descriptors when they are used in images from the long-wave infrared spectral band and compares them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results using a cross-spectral outdoor image data set are presented and conclusions from these experiments are given. |
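Evaluations of this kind boil down to matching descriptors across spectra and counting how many matches survive each transformation. A minimal sketch of brute-force Hamming matching for binary (ORB-sized) descriptors, on synthetic data rather than the paper's actual framework:

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """Brute-force match binary descriptors (rows of uint8) by Hamming distance.
    Returns, for each row of desc_a, the index of its nearest row in desc_b."""
    # XOR every pair of descriptors, then count set bits -> Hamming distance matrix
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]
    dist = np.unpackbits(xor, axis=2).sum(axis=2)
    return dist.argmin(axis=1)

# Synthetic 32-byte descriptors: desc_b is a shuffled copy of desc_a,
# so every descriptor should recover its own copy exactly.
rng = np.random.default_rng(0)
desc_a = rng.integers(0, 256, size=(10, 32), dtype=np.uint8)
perm = rng.permutation(10)
desc_b = desc_a[perm]
matches = hamming_match(desc_a, desc_b)
assert all(perm[matches[i]] == i for i in range(10))
```

In a real cross-spectral test the two descriptor sets would come from a visible and an LWIR image of the same scene, and the match survival rate would be the evaluated metric.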
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
28 |
|
Permanent link to this record |
|
|
|
|
Author |
Cristhian A. Aguilera; Cristhian Aguilera; Cristóbal A. Navarro; Angel D. Sappa |
|
|
Title |
Fast CNN Stereo Depth Estimation through Embedded GPU Devices |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Sensors |
Abbreviated Journal |
|
|
|
Volume |
2020-June |
Issue |
11 |
Pages |
1-13 |
|
|
Keywords |
stereo matching; deep learning; embedded GPU |
|
|
Abstract |
Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphics processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost-volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices. |
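The cost-volume the abstract refers to is the standard stereo construct: for each candidate disparity, the right image is shifted and compared against the left, yielding a (D, H, W) tensor that some network then postprocesses. A toy numpy version with an absolute-difference cost and hypothetical image sizes (not the paper's CNN-based cost):

```python
import numpy as np

def build_cost_volume(left, right, max_disp):
    """Absolute-difference cost volume: cost[d, y, x] = |left[y, x] - right[y, x - d]|.
    Out-of-bounds pixels get an infinite cost so they are never chosen."""
    D, H, W = max_disp, *left.shape
    cost = np.full((D, H, W), np.inf)
    for d in range(D):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :W - d])
    return cost

# Toy pair: the right image is the left shifted 3 px, so the winning
# disparity (argmin over the cost axis) is 3 in the valid region.
rng = np.random.default_rng(1)
left = rng.random((8, 16))
right = np.zeros_like(left)
right[:, :-3] = left[:, 3:]
disparity = build_cost_volume(left, right, max_disp=6).argmin(axis=0)
assert (disparity[:, 3:13] == 3).all()
```

The paper's contribution concerns what processes this tensor (a U-Net-like network versus 3D convolutions), not how it is built.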
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1424-8220 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
132 |
|
Permanent link to this record |
|
|
|
|
Author |
Angel D. Sappa, Patricia L. Suárez, Henry O. Velesaca, Darío Carpio |
|
|
Title |
Domain adaptation in image dehazing: exploring the usage of images from virtual scenarios |
Type |
Conference Article |
|
Year |
2022 |
Publication |
16th International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing (CGVCVIP 2022), July 20-22 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
85-92 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
182 |
|
Permanent link to this record |
|
|
|
|
Author |
Xavier Soria; Edgar Riba; Angel D. Sappa |
|
|
Title |
Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection |
Type |
Conference Article |
|
Year |
2020 |
Publication |
2020 IEEE Winter Conference on Applications of Computer Vision (WACV) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
9093290 |
Pages |
1912-1921 |
|
|
Keywords |
|
|
|
Abstract |
This paper proposes a Deep Learning based edge detector, which is inspired by both HED (Holistically-Nested Edge Detection) and Xception networks. The proposed approach generates thin edge-maps that are plausible for human eyes; it can be used in any edge detection task without a previous training or fine-tuning process. As a second contribution, a large dataset with carefully annotated edges has been generated. This dataset has been used for training the proposed approach as well as the state-of-the-art algorithms used for comparison. Quantitative and qualitative evaluations have been performed on different benchmarks, showing improvements with the proposed method when the F-measure of ODS and OIS is considered. |
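The ODS and OIS scores mentioned are both instances of the boundary F-measure, the harmonic mean of precision and recall over matched edge pixels; ODS fixes one threshold for the whole dataset while OIS picks the best threshold per image. The underlying quantity, with hypothetical pixel counts:

```python
def f_measure(true_pos, false_pos, false_neg):
    """Harmonic mean of precision and recall over matched edge pixels."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

# e.g. 80 correctly matched edge pixels, 20 spurious, 20 missed
assert abs(f_measure(80, 20, 20) - 0.8) < 1e-12
```

In practice the pixel matching itself allows a small spatial tolerance between predicted and ground-truth edges, a detail omitted here.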
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-172816553-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
126 |
|
Permanent link to this record |
|
|
|
|
Author |
Patricia L. Suárez; Angel D. Sappa; Boris X. Vintimilla |
|
|
Title |
Deep learning-based vegetation index estimation |
Type |
Book Chapter |
|
Year |
2021 |
Publication |
Generative Adversarial Networks for Image-to-Image Translation |
Abbreviated Journal |
|
|
|
Volume |
Chapter 9 |
Issue |
Issue 2 |
Pages |
205-232 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
137 |
|
Permanent link to this record |
|
|
|
|
Author |
Patricia L. Suárez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud |
|
|
Title |
Deep Learning based Single Image Dehazing |
Type |
Conference Article |
|
Year |
2018 |
Publication |
14th IEEE Workshop on Perception Beyond the Visible Spectrum – In conjunction with CVPR 2018. Salt Lake City, Utah. USA |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
This paper proposes a novel approach to remove haze degradations in RGB images using a stacked conditional Generative Adversarial Network (GAN). It employs a triplet of GANs to remove the haze on each color channel independently. A multiple loss function scheme, applied over a conditional probabilistic model, is proposed. The proposed GAN architecture learns to remove the haze using, as conditional input, the hazy images from which the clear images will be obtained. Such a formulation ensures fast model training convergence and homogeneous model generalization. Experiments showed that the proposed method generates high-quality clear images. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
gtsi @ user @ |
Serial |
83 |
|
Permanent link to this record |
|
|
|
|
Author |
Henry O. Velesaca; Raul A. Mira; Patricia L. Suárez; Christian X. Larrea; Angel D. Sappa |
|
|
Title |
Deep Learning based Corn Kernel Classification |
Type |
Conference Article |
|
Year |
2020 |
Publication |
The 1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture, in conjunction with the Conference on Computer Vision and Pattern Recognition (CVPR 2020) |
Abbreviated Journal |
|
|
|
Volume |
2020-June |
Issue |
9150684 |
Pages |
294-302 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a full pipeline to classify sample sets of corn kernels. The proposed approach follows a segmentation-classification scheme. The image segmentation is performed through a well-known deep learning based approach, the Mask R-CNN architecture, while the classification is performed by means of a novel lightweight network specially designed for this task; good corn kernel, defective corn kernel, and impurity categories are considered. As a second contribution, a carefully annotated multi-touching corn kernel dataset has been generated. This dataset has been used for training the segmentation and classification modules. Quantitative evaluations have been performed, and comparisons with other approaches are provided, showing improvements with the proposed pipeline. |
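The segmentation-classification scheme described above amounts to: run instance segmentation, crop each detected kernel, and feed the crop to a per-kernel classifier. A minimal sketch of the cropping step, with synthetic boolean masks standing in for Mask R-CNN output:

```python
import numpy as np

def crop_instances(image, masks):
    """Given per-instance boolean masks (as an instance-segmentation model
    such as Mask R-CNN would produce), return one tight crop per instance,
    ready to be passed to a classifier."""
    crops = []
    for mask in masks:
        ys, xs = np.nonzero(mask)  # pixel coordinates belonging to the instance
        crops.append(image[ys.min():ys.max() + 1, xs.min():xs.max() + 1])
    return crops

# Two synthetic "kernels" in a 10x10 image
image = np.arange(100).reshape(10, 10)
m1 = np.zeros((10, 10), bool); m1[1:4, 1:3] = True   # 3x2 blob
m2 = np.zeros((10, 10), bool); m2[6:9, 5:9] = True   # 3x4 blob
crops = crop_instances(image, [m1, m2])
assert crops[0].shape == (3, 2) and crops[1].shape == (3, 4)
```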
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2160-7508 |
ISBN |
978-172819360-1 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
124 |
|
Permanent link to this record |
|
|
|
|
Author |
Jorge L. Charco; Boris X. Vintimilla; Angel D. Sappa |
|
|
Title |
Deep learning based camera pose estimation in multi-view environment |
Type |
Conference Article |
|
Year |
2018 |
Publication |
14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
224-228 |
|
|
Keywords |
|
|
|
Abstract |
This paper proposes the use of a deep learning network architecture for relative camera pose estimation in a multi-view environment. The proposed network is a variant of the AlexNet architecture, used as a regressor to predict the relative translation and rotation as output. The proposed approach is trained from scratch on a large data set, taking as input a pair of images from the same scene. This new architecture is compared with a previous approach using standard metrics, obtaining better results on the relative camera pose. |
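Relative pose, the quantity such a network regresses, is defined from the two absolute camera poses: with world-to-camera rotations R1, R2 and translations t1, t2, the relative rotation is R2 R1ᵀ and the relative translation is t2 − R_rel t1. A numpy sketch of that ground-truth computation, with synthetic poses rather than the paper's data:

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Relative pose mapping camera-1 coordinates to camera-2 coordinates,
    given two world-to-camera poses x_cam = R @ x_world + t."""
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel

def rot_z(a):
    """Rotation by angle a about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Sanity check: a world point lands in the same place whether mapped
# world -> cam2 directly, or world -> cam1 -> cam2 via the relative pose.
R1, t1 = rot_z(0.3), np.array([1.0, 0.0, 2.0])
R2, t2 = rot_z(1.1), np.array([0.0, -1.0, 2.5])
R_rel, t_rel = relative_pose(R1, t1, R2, t2)
x_world = np.array([0.5, -0.2, 3.0])
x_cam1 = R1 @ x_world + t1
assert np.allclose(R_rel @ x_cam1 + t_rel, R2 @ x_world + t2)
```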
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
gtsi @ user @ |
Serial |
93 |
|
Permanent link to this record |
|
|
|
|
Author |
Patricia L. Suárez; Angel D. Sappa; Boris X. Vintimilla |
|
|
Title |
Cycle generative adversarial network: towards a low-cost vegetation index estimation |
Type |
Conference Article |
|
Year |
2021 |
Publication |
IEEE International Conference on Image Processing (ICIP 2021) |
Abbreviated Journal |
|
|
|
Volume |
2021-September |
Issue |
|
Pages |
2783-2787 |
|
|
Keywords |
CyclicGAN; NDVI; near infrared spectra; instance normalization |
|
|
Abstract |
This paper presents a novel unsupervised approach to estimate the Normalized Difference Vegetation Index (NDVI). The NDVI is obtained as the ratio between information from the visible and near infrared spectral bands; in the current work, the NDVI is estimated just from an image of the visible spectrum through a Cyclic Generative Adversarial Network (CyclicGAN). This unsupervised architecture learns to estimate the NDVI by means of an image translation between the red channel of a given RGB image and the unpaired NDVI image. The translation is obtained by means of a ResNET architecture and a multiple loss function. Experimental results obtained with this unsupervised scheme show the validity of the implemented model. Additionally, comparisons with state-of-the-art approaches are provided, showing improvements with the proposed approach. |
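The index the paper learns to approximate from the red channel alone has a simple closed form, NDVI = (NIR − Red)/(NIR + Red), which is also how its supervised target would be computed when both bands are available:

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index, in [-1, 1];
    eps guards against division by zero on dark pixels."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Vegetation reflects strongly in NIR and absorbs red -> NDVI close to 1;
# bare soil reflects both bands similarly -> NDVI close to 0.
veg = ndvi(np.array([200.0]), np.array([30.0]))
bare = ndvi(np.array([100.0]), np.array([90.0]))
assert veg[0] > 0.7 and abs(bare[0]) < 0.1
```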
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
164 |
|
Permanent link to this record |
|
|
|
|
Author |
N. Onkarappa; Cristhian A. Aguilera; Boris X. Vintimilla; Angel D. Sappa |
|
|
Title |
Cross-spectral Stereo Correspondence using Dense Flow Fields |
Type |
Conference Article |
|
Year |
2014 |
Publication |
2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal |
Abbreviated Journal |
|
|
|
Volume |
3 |
Issue |
|
Pages |
613-617 |
|
|
Keywords |
Cross-spectral Stereo Correspondence, Dense Optical Flow, Infrared and Visible Spectrum |
|
|
Abstract |
This manuscript addresses the cross-spectral stereo correspondence problem. It proposes the use of a dense flow-field-based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments have been obtained, showing the validity of the proposed approach. |
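The key idea above: raw cross-spectral patches correlate poorly, but dense optical-flow vectors computed independently in each spectrum do correlate, so a classical cost such as the sum of absolute differences (SAD) can be applied to the flow fields instead of the intensities. A toy SAD over flow windows, with synthetic flow and a hypothetical window size:

```python
import numpy as np

def sad_cost(flow_a, flow_b):
    """Sum of absolute differences between two flow-field windows
    of shape (h, w, 2) holding (dx, dy) vectors."""
    return np.abs(flow_a - flow_b).sum()

# Two spectra observing the same scene motion yield near-identical flow
# windows, so the SAD cost at the true correspondence is lower than at
# a wrong one.
rng = np.random.default_rng(2)
true_flow = rng.random((5, 5, 2))
visible = true_flow + 0.01 * rng.standard_normal((5, 5, 2))
infrared_match = true_flow + 0.01 * rng.standard_normal((5, 5, 2))
infrared_wrong = rng.random((5, 5, 2))
assert sad_cost(visible, infrared_match) < sad_cost(visible, infrared_wrong)
```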
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IEEE |
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
2014 International Conference on Computer Vision Theory and Applications (VISAPP) |
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
27 |
|
Permanent link to this record |