Records | |||||
---|---|---|---|---|---|
Author | Santos V.; Angel D. Sappa; Oliveira M.; de la Escalera A. | ||||
Title | Special Issue on Autonomous Driving and Driver Assistance Systems | Type | Journal Article | ||
Year | 2019 | Publication | Robotics and Autonomous Systems | Abbreviated Journal |
Volume | 121 | Issue | Pages | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 119 | ||
Permanent link to this record | |||||
Author | Cristhian A. Aguilera; Cristhian Aguilera; Cristóbal A. Navarro; Angel D. Sappa | ||||
Title | Fast CNN Stereo Depth Estimation through Embedded GPU Devices | Type | Journal Article | ||
Year | 2020 | Publication | Sensors | Abbreviated Journal |
Volume | 2020-June | Issue | 11 | Pages | 1-13
Keywords | stereo matching; deep learning; embedded GPU | ||||
Abstract | Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphics processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216×368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 14248220 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 132 | ||
Permanent link to this record | |||||
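The cost-volume idea in the abstract above can be illustrated with a classical, non-CNN baseline: build a per-disparity matching-cost volume and take the cheapest disparity per pixel (winner-take-all). This is a minimal intuition-building sketch, not the paper's learned model; the function name and absolute-difference cost are illustrative choices.

```python
import numpy as np

def stereo_wta(left, right, max_disp):
    """Classical winner-take-all stereo matching: build an absolute-
    difference cost volume over candidate disparities, then pick the
    cheapest disparity per pixel. CNN stereo models replace this
    hand-crafted cost and its postprocessing with learned layers."""
    h, w = left.shape
    # cost[d, y, x] = cost of assigning disparity d to left pixel (y, x);
    # disparities that fall outside the right image stay at +inf.
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return cost.argmin(axis=0)  # winner-take-all over the disparity axis
```

The U-Net-like postprocessing proposed in the paper replaces exactly this argmin-over-cost step (and the 3D convolutions that usually refine the volume) with a cheaper 2D encoder-decoder.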
Author | Ángel Morera; Ángel Sánchez; A. Belén Moreno; Angel D. Sappa; José F. Vélez | ||||
Title | SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities | Type | Journal Article | ||
Year | 2020 | Publication | Sensors | Abbreviated Journal | |
Volume | 2020-August | Issue | 16 | Pages | 1-23
Keywords | object detection; urban outdoor panels; one-stage detectors; Single Shot MultiBox Detector (SSD); You Only Look Once (YOLO); detection metrics; object and scene imaging variabilities | ||||
Abstract | This work compares the Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks for the outdoor advertisement panel detection problem, handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages both in the real world and in the virtual one. For example, applications like Google Street View can be used for Internet publicity; when these ad panels are detected in images, the publicity appearing inside the panels could be replaced by that of a funding company. In our experiments, both the SSD and YOLO detectors produced acceptable results under variable sizes of panels, illumination conditions, viewing perspectives, partial occlusion of panels, complex backgrounds, and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was the near elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside a panel is analyzed after detection. On the other hand, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks, using the same evaluation metrics, is also included. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 14248220 | ISBN | Medium ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 133 | ||
Permanent link to this record | |||||
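The TP/FP trade-off discussed in the abstract above comes from IoU-based matching of detections to ground-truth panels. A minimal sketch of the standard evaluation procedure (greedy one-to-one matching at a 0.5 IoU threshold); function names are illustrative, not taken from the paper's evaluation code.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(dets, gts, thr=0.5):
    """Greedy one-to-one matching: a detection is a True Positive if it
    overlaps a not-yet-matched ground-truth box with IoU >= thr,
    otherwise a False Positive; unmatched ground truths are misses."""
    matched, tp = set(), 0
    for d in dets:
        for i, g in enumerate(gts):
            if i not in matched and iou(d, g) >= thr:
                matched.add(i)
                tp += 1
                break
    fp, fn = len(dets) - tp, len(gts) - tp
    precision = tp / (tp + fp) if dets else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return precision, recall
```

Under these metrics, SSD's near-zero FP count drives its precision up, while YOLO's higher TP count drives its recall up.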
Author | Armin Mehri; Parichehr Behjati; Dario Carpio; Angel D. Sappa | ||||
Title | SRFormer: Efficient Yet Powerful Transformer Network For Single Image Super Resolution | Type | Journal Article | ||
Year | 2023 | Publication | IEEE Access | Abbreviated Journal |
Volume | 11 | Issue | Pages | 121457-121469 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 21693536 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 227 | ||
Permanent link to this record | |||||
Author | Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla | ||||
Title | Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture | Type | Conference Article | ||
Year | 2017 | Publication | 19th International Conference on Image Analysis and Processing | Abbreviated Journal |
Volume | Issue | Pages | 287-297 | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 66 | ||
Permanent link to this record | |||||
Author | Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla | ||||
Title | Learning Image Vegetation Index through a Conditional Generative Adversarial Network | Type | Conference Article | ||
Year | 2017 | Publication | 2nd IEEE Ecuador Technical Chapters Meeting (ETCM) | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 70 | ||
Permanent link to this record | |||||
Author | Xavier Soria; Angel D. Sappa; Arash Akbarinia | ||||
Title | Multispectral Single-Sensor RGB-NIR Imaging: New Challenges and Opportunities | Type | Conference Article | ||
Year | 2017 | Publication | The 7th International Conference on Image Processing Theory, Tools and Applications | Abbreviated Journal |
Volume | Issue | Pages | 1-6 | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 72 | ||
Permanent link to this record | |||||
Author | Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud | ||||
Title | Deep Learning based Single Image Dehazing | Type | Conference Article | ||
Year | 2018 | Publication | 14th IEEE Workshop on Perception Beyond the Visible Spectrum, in conjunction with CVPR 2018, Salt Lake City, Utah, USA | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper proposes a novel approach to remove haze degradations in RGB images using a stacked conditional Generative Adversarial Network (GAN). It employs a triplet of GANs to remove the haze on each color channel independently. A multiple-loss-function scheme, applied over a conditional probabilistic model, is proposed. The proposed GAN architecture learns to remove the haze using as conditional input the hazy images from which the clear images are to be obtained. Such a formulation ensures fast model training convergence and homogeneous model generalization. Experiments showed that the proposed method generates high-quality clear images. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 83 | ||
Permanent link to this record | |||||
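The channel-wise design in the abstract above (one GAN per color channel) amounts to split-and-merge plumbing around three per-channel models. A minimal sketch under that assumption; `channel_models` is a hypothetical stand-in for the three trained generators, not the paper's actual API.

```python
import numpy as np

def dehaze_per_channel(image, channel_models):
    """Split an RGB image into its three channels, restore each with its
    own model, and re-stack the results into an RGB image.
    `channel_models` is a list of three callables, each mapping a 2D
    hazy channel to its restored counterpart."""
    restored = [model(image[..., c]) for c, model in enumerate(channel_models)]
    return np.stack(restored, axis=-1)
```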
Author | Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla | ||||
Title | Vegetation Index Estimation from Monospectral Images | Type | Conference Article | ||
Year | 2018 | Publication | 15th International Conference on Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal; Lecture Notes in Computer Science | Abbreviated Journal |
Volume | 10882 | Issue | Pages | 353-362 | |
Keywords | |||||
Abstract | This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference between the near-infrared and red radiances over their sum. In other words, information from the red channel of an RGB image and the corresponding infrared spectral band is required for its computation. In the current work the NDVI index is estimated just from the red channel by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single-level structure, which combines at the final layer the results of convolutional operations with the given red channel plus Gaussian noise to enhance details, resulting in a sharp NDVI image. The discriminative model then estimates the probability that the generated NDVI index came from the training dataset rather than from the generator. Experimental results with a large set of real images are provided, showing that a single-level conditional GAN model is an acceptable approach to estimating the NDVI index. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 82 | ||
Permanent link to this record | |||||
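The NDVI definition quoted in the abstract above is a simple per-pixel formula; the sketch below computes it directly from red and near-infrared bands. This is the ground-truth target the paper's CGAN learns to predict from the red channel alone; the `eps` guard is an illustrative detail, not from the paper.

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red), valued in [-1, 1].
    `eps` guards against division by zero on dark pixels."""
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```

Healthy vegetation reflects strongly in NIR and absorbs red, so it scores near +1, while bare soil and water score near zero or below.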
Author | Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla; Riad I. Hammoud | ||||
Title | Near InfraRed Imagery Colorization | Type | Conference Article | ||
Year | 2018 | Publication | 25th IEEE International Conference on Image Processing (ICIP 2018) | Abbreviated Journal |
Volume | Issue | Pages | 2237-2241 | ||
Keywords | Convolutional Neural Networks (CNN); Generative Adversarial Network (GAN); infrared imagery colorization | ||||
Abstract | This paper proposes a stacked conditional Generative Adversarial Network-based method for Near InfraRed (NIR) imagery colorization. We propose a variant architecture of the Generative Adversarial Network (GAN) that uses multiple loss functions over a conditional probabilistic generative model. We show that this new architecture/loss function yields better generalization and representation of the generated colored IR images. The proposed approach is evaluated on a large test dataset and compared to recent state-of-the-art methods using standard metrics. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 81 | ||
Permanent link to this record |