Records
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias
Title Scene representations for autonomous driving: an approach based on polygonal primitives Type Conference Article
Year 2015 Publication Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 Abbreviated Journal
Volume 417 Issue Pages 503-515
Keywords Scene reconstruction, Point cloud, Autonomous vehicles
Abstract In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.
Address
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference Second Iberian Robotics Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 45
Permanent link to this record
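The record above models a scene as a list of macro-scale polygons. As a rough illustration only (not the authors' implementation), one common way to obtain such a planar polygonal primitive from a point cloud is RANSAC plane fitting followed by a convex hull of the inliers; the sketch below assumes Open3D and SciPy are available, and the input file name is a placeholder.

import numpy as np
import open3d as o3d
from scipy.spatial import ConvexHull

# Hypothetical sketch: fit a dominant plane with RANSAC and describe it by the
# convex hull of its inliers, i.e. one macro-scale polygonal primitive.
pcd = o3d.io.read_point_cloud("scan.pcd")  # placeholder input file

plane_model, inliers = pcd.segment_plane(distance_threshold=0.05,
                                         ransac_n=3,
                                         num_iterations=1000)
a, b, c, d = plane_model
normal = np.array([a, b, c])
norm = np.linalg.norm(normal)
normal, d = normal / norm, d / norm

# Orthogonally project the inlier points onto the fitted plane.
pts = np.asarray(pcd.select_by_index(inliers).points)
pts_on_plane = pts - np.outer(pts @ normal + d, normal)

# Build an in-plane 2D basis and keep only the convex hull of the projections:
# the ordered hull vertices form the polygon describing this planar patch.
u = np.cross(normal, [0.0, 0.0, 1.0])
if np.linalg.norm(u) < 1e-6:            # nearly horizontal plane: pick another axis
    u = np.cross(normal, [0.0, 1.0, 0.0])
u /= np.linalg.norm(u)
v = np.cross(normal, u)
coords_2d = np.stack([pts_on_plane @ u, pts_on_plane @ v], axis=1)
hull = ConvexHull(coords_2d)
polygon_3d = pts_on_plane[hull.vertices]
print(f"polygonal primitive with {len(polygon_3d)} vertices")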
 

 
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume 83 Issue Pages 312-325
Keywords Incremental scene reconstruction, Point clouds, Autonomous vehicles, Polygonal primitives
Abstract When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 49
Permanent link to this record
 

 
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title Incremental Texture Mapping for Autonomous Driving Type Journal Article
Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal
Volume 84 Issue Pages 113-128
Keywords Scene reconstruction, Autonomous driving, Texture mapping
Abstract Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 50
Permanent link to this record
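The record above builds its mesh with a constrained Delaunay triangulation. The toy sketch below, which is not the paper's code, shows what "constrained" means: a chosen edge (here a diagonal) is forced to appear in the triangulation. It assumes the third-party Python "triangle" package (bindings to Shewchuk's Triangle); the vertices and segments are made-up placeholders.

import numpy as np
import triangle  # assumed available: pip install triangle

# A unit square whose boundary and one diagonal must appear as edges.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
segments = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])  # constrained edges

# The 'p' switch requests a constrained Delaunay triangulation of this
# planar straight-line graph.
cdt = triangle.triangulate({'vertices': vertices, 'segments': segments}, 'p')

for tri in cdt['triangles']:
    print("triangle:", cdt['vertices'][tri].tolist())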
 

 
Author N. Onkarappa; Cristhian A. Aguilera; B. X. Vintimilla; Angel D. Sappa
Title Cross-spectral Stereo Correspondence using Dense Flow Fields Type Conference Article
Year 2014 Publication 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal Abbreviated Journal
Volume 3 Issue Pages 613-617
Keywords Cross-spectral Stereo Correspondence, Dense Optical Flow, Infrared and Visible Spectrum
Abstract This manuscript addresses the cross-spectral stereo correspondence problem. It proposes the usage of a dense flow field based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments have been obtained showing the validity of the proposed approach.
Address
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference 2014 International Conference on Computer Vision Theory and Applications (VISAPP)
Notes Approved no
Call Number cidis @ cidis @ Serial 27
Permanent link to this record
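To make the idea in the abstract above concrete, the hedged sketch below computes a dense optical flow field independently in each spectrum and then evaluates a classical SAD cost on the flow fields instead of on the poorly correlated cross-spectral intensities. File names, window size and disparity range are placeholders, and a rectified stereo setup is assumed; this is an illustration, not the paper's implementation.

import cv2
import numpy as np

# Two consecutive frames per spectrum (grayscale): visible and long-wave infrared.
vs_t0 = cv2.imread("vs_t0.png", cv2.IMREAD_GRAYSCALE)
vs_t1 = cv2.imread("vs_t1.png", cv2.IMREAD_GRAYSCALE)
ir_t0 = cv2.imread("ir_t0.png", cv2.IMREAD_GRAYSCALE)
ir_t1 = cv2.imread("ir_t1.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow (Farneback) computed independently in each spectrum.
flow_vs = cv2.calcOpticalFlowFarneback(vs_t0, vs_t1, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
flow_ir = cv2.calcOpticalFlowFarneback(ir_t0, ir_t1, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)

def sad_cost(row, col, disparity, win=7):
    """Classical SAD cost, but evaluated on the flow fields rather than on the
    raw cross-spectral intensities. Rectified images assumed."""
    h = win // 2
    ref = flow_vs[row - h:row + h + 1, col - h:col + h + 1]
    tgt = flow_ir[row - h:row + h + 1, col - disparity - h:col - disparity + h + 1]
    return np.abs(ref - tgt).sum()

# Winner-takes-all disparity for one pixel over a placeholder search range.
row, col = 120, 200
costs = [sad_cost(row, col, d) for d in range(0, 64)]
print("best disparity:", int(np.argmin(costs)))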
 

 
Author P. Ricaurte; C. Chilán; C. A. Aguilera-Carrasco; B. X. Vintimilla; Angel D. Sappa
Title Performance Evaluation of Feature Point Descriptors in the Infrared Domain Type Conference Article
Year 2014 Publication 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal Abbreviated Journal
Volume 1 Issue Pages 545-550
Keywords Infrared Imaging, Feature Point Descriptors
Abstract This paper presents a comparative evaluation of classical feature point descriptors when they are used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results using an outdoor image data set are presented together with a discussion about the differences with respect to the results obtained when images from the visible spectrum are considered.
Address
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference 2014 International Conference on Computer Vision Theory and Applications (VISAPP)
Notes Approved no
Call Number cidis @ cidis @ Serial 26
Permanent link to this record
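As a hedged illustration of the kind of evaluation described above (not the paper's exact protocol or data), the sketch below measures how well a binary descriptor survives a synthetic rotation of a long-wave infrared image; the file name, rotation angle and pixel threshold are placeholders.

import cv2
import numpy as np

img = cv2.imread("lwir.png", cv2.IMREAD_GRAYSCALE)  # placeholder LWIR image
h, w = img.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1.0)  # 30-degree rotation
rot = cv2.warpAffine(img, M, (w, h))

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img, None)
kp2, des2 = orb.detectAndCompute(rot, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# A match counts as correct if the ground-truth rotated location of the
# keypoint lands close (< 3 px) to the matched keypoint in the rotated image.
correct = 0
for m in matches:
    x, y = kp1[m.queryIdx].pt
    gt = M @ np.array([x, y, 1.0])
    if np.linalg.norm(gt - np.array(kp2[m.trainIdx].pt)) < 3.0:
        correct += 1
print(f"correct match ratio: {correct / max(len(matches), 1):.2f}")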
 

 
Author Patricia L. Suárez; Angel D. Sappa; Boris X. Vintimilla
Title Deep learning-based vegetation index estimation Type Book Chapter
Year 2021 Publication Generative Adversarial Networks for Image-to-Image Translation Abbreviated Journal
Volume Chapter 9 Issue 2 Pages 205-232
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 137
Permanent link to this record
 

 
Author Patricia L. Suárez; Angel D. Sappa; Boris X. Vintimilla
Title Cycle generative adversarial network: towards a low-cost vegetation index estimation Type Conference Article
Year 2021 Publication IEEE International Conference on Image Processing (ICIP 2021) Abbreviated Journal
Volume 2021-September Issue Pages 2783-2787
Keywords CyclicGAN, NDVI, near infrared spectra, instance normalization.
Abstract This paper presents a novel unsupervised approach to estimate the Normalized Difference Vegetation Index (NDVI). The NDVI is obtained as the ratio between information from the visible and near infrared spectral bands; in the current work, the NDVI is estimated just from an image of the visible spectrum through a Cyclic Generative Adversarial Network (CyclicGAN). This unsupervised architecture learns to estimate the NDVI by means of an image translation between the red channel of a given RGB image and the unpaired NDVI image. The translation is obtained by means of a ResNet architecture and a multiple loss function. Experimental results obtained with this unsupervised scheme show the validity of the implemented model. Additionally, comparisons with state-of-the-art approaches are provided, showing improvements with the proposed approach.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 164
Permanent link to this record
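For reference, the index that the network described above learns to estimate from the visible spectrum alone is, when a registered near infrared band is available, computed directly per pixel as NDVI = (NIR - Red) / (NIR + Red). A minimal sketch follows; the array contents are placeholders.

import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; eps avoids division by zero."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

# Random 8-bit bands standing in for registered NIR and red images.
nir_band = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
red_band = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print("mean NDVI:", float(ndvi(nir_band, red_band).mean()))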
 

 
Author Patricia L. Suarez; Dario Carpio; Angel D. Sappa; Henry O. Velesaca
Title Transformer based Image Dehazing Type Conference Article
Year 2022 Publication 16th International Conference on Signal Image Technology & Internet Based Systems (SITIS 2022) Abbreviated Journal
Volume Issue Pages 148-154
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number cidis @ cidis @ Serial 195
Permanent link to this record
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
Title Image patch similarity through a meta-learning metric based approach Type Conference Article
Year 2019 Publication 15th International Conference on Signal Image Technology & Internet based Systems (SITIS 2019), Sorrento, Italy Abbreviated Journal
Volume Issue Pages 511-517
Keywords
Abstract Comparing image regions is one of the core methods used in computer vision for tasks such as image classification, scene understanding, object detection and recognition. Hence, this paper proposes a novel approach to determine the similarity of image regions (patches), in order to obtain the best representation of image patches. This problem has been studied by many researchers presenting different approaches; however, finding the best criteria to measure similarity between image regions is still a challenge. The present work tackles this problem using a few-shot, metric-based meta-learning framework able to compare image regions and determine a similarity measure deciding whether the compared patches are similar. Our model is trained end-to-end from scratch. Experimental results have shown that the proposed approach effectively estimates the similarity of the patches and, compared with state-of-the-art approaches, shows better results.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 115
Permanent link to this record
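The abstract above describes a metric-based comparator for image patches. The sketch below shows only the generic shape of such a model: a shared embedding network and a cosine-similarity head. It is not the paper's architecture or training procedure, all layer sizes are arbitrary, and PyTorch is assumed.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbedder(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-norm embeddings

def similarity(embedder: PatchEmbedder, a: torch.Tensor, b: torch.Tensor):
    """Cosine similarity in [-1, 1] between two batches of grayscale patches."""
    return (embedder(a) * embedder(b)).sum(dim=1)

# Two batches of 32x32 grayscale patches (random placeholders).
embedder = PatchEmbedder()
p1 = torch.rand(8, 1, 32, 32)
p2 = torch.rand(8, 1, 32, 32)
print(similarity(embedder, p1, p2))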
 

 
Author Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
Title Cross-spectral image dehaze through a dense stacked conditional GAN based approach Type Conference Article
Year 2018 Publication 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) Abbreviated Journal
Volume Issue Pages 358-364
Keywords
Abstract This paper proposes a novel approach to remove haze from RGB images using near infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). The implemented deep network receives, besides the hazy image, its corresponding image in the near infrared spectrum, which serves to accelerate the learning of the details and characteristics of the images. The model uses a triplet layer that allows independent learning of each channel of the visible spectrum image, removing the haze on each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results have shown that the proposed method effectively removes the haze from the images. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number gtsi @ user @ Serial 92
Permanent link to this record
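As a rough sketch of the multi-term loss idea mentioned in the abstract above (a reconstruction term per RGB channel plus an adversarial term), with weights, tensors and the discriminator output all being placeholders rather than the paper's values:

import torch
import torch.nn.functional as F

def generator_loss(dehazed, target, disc_fake_logits,
                   channel_weights=(1.0, 1.0, 1.0), adv_weight=0.1):
    """dehazed/target: (N, 3, H, W); disc_fake_logits: discriminator output on
    the generated image, pushed toward 'real' during the generator update."""
    recon = sum(w * F.l1_loss(dehazed[:, c], target[:, c])
                for c, w in enumerate(channel_weights))
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return recon + adv_weight * adv

# Toy tensors standing in for a batch of generated and ground-truth images.
fake = torch.rand(4, 3, 64, 64)
real = torch.rand(4, 3, 64, 64)
logits = torch.randn(4, 1)
print(generator_loss(fake, real, logits).item())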