Author: Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
Title: Vegetation Index Estimation from Monospectral Images
Type: Conference Article
Year: 2018
Publication: 15th International Conference on Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal. Lecture Notes in Computer Science
Volume: 10882
Pages: 353-362
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference of the red and infrared radiances over their sum; in other words, information from the red channel of an RGB image and the corresponding infrared spectral band are required for its computation. In the current work the NDVI index is estimated from the red channel alone by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single-level structure, which combines at the final layer the results of convolutional operations with the given red channel plus Gaussian noise to enhance details, resulting in a sharp NDVI image. The discriminative model then estimates the probability that a generated NDVI index came from the training dataset rather than from the generator. Experimental results with a large set of real images show that a single-level Conditional GAN model is an acceptable approach to estimating the NDVI index.
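For reference, the NDVI definition paraphrased in the abstract is the standard one; written out as an equation, with rho_NIR and rho_R the near-infrared and red radiances:

\[
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{R}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{R}}}
\]

The index therefore lies in [-1, 1]; healthy vegetation reflects strongly in the near infrared while absorbing red, pushing the value toward 1. The generator described above must infer the missing rho_NIR contribution from the red channel alone.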
 
Call Number: gtsi @ user @ Serial 82
 

 
Author: Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla
Title: Adaptive Harris Corners Detector Evaluated with Cross-Spectral Images
Type: Conference Article
Year: 2018
Publication: International Conference on Information Technology & Systems (ICITS 2018). Advances in Intelligent Systems and Computing
Volume: 721
Abstract: This paper proposes a novel approach that uses cross-spectral images to achieve better performance with the proposed Adaptive Harris corner detector, comparing the results obtained with those achieved on images from the visible spectrum. Images of the urban, field, old-building and country categories were used for the experiments, given the variety of textures present in these images, which makes verification of the proposal considerably more challenging. The idea is to improve the detection of characteristic points by using cross-spectral images (NIR, G, B) and applying pruning techniques; the combination of channels chosen for this fusion is the one that generates the largest variance based on the intensity of the merged pixels, and is therefore the one that maximizes the entropy of the resulting cross-spectral images. Harris is one of the most widely used corner detection algorithms, so any improvement in its efficiency is an important contribution to the field of computer vision. The experiments conclude that including a NIR channel in the image, as a result of the combination of the spectra, greatly improves corner detection owing to the better entropy of the fused image. The fusion process applied to the images therefore improves the results obtained in subsequent processes such as identification of objects or patterns, classification and/or segmentation.
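The channel-selection criterion described above (keep the fused combination with the largest entropy) can be illustrated with a short sketch. This is not the authors' code: the file names, the use of OpenCV, and the Shannon-entropy score over merged pixel intensities are assumptions made for illustration.

import itertools
import numpy as np
import cv2

def shannon_entropy(img):
    # Shannon entropy of the 8-bit intensity distribution over all channels.
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Assumed inputs: a registered visible image and a NIR image of equal size.
rgb = cv2.imread("visible.png")                      # OpenCV loads as B, G, R
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE)
b, g, r = cv2.split(rgb)

# Score every 3-channel combination of visible and NIR bands and keep the
# one with maximal entropy, mirroring the selection criterion above.
channels = {"R": r, "G": g, "B": b, "NIR": nir}
best = max(itertools.combinations(channels, 3),
           key=lambda combo: shannon_entropy(
               cv2.merge([channels[c] for c in combo])))
print("highest-entropy combination:", best)          # e.g. ('NIR', 'G', 'B')

cv2.cornerHarris could then be run on the winning fused image; the paper's adaptive variant of the Harris detector itself is not reproduced here.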
 
Call Number: gtsi @ user @ Serial 84
 

 
Author: Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title: Incremental Texture Mapping for Autonomous Driving
Type: Journal Article
Year: 2016
Publication: Robotics and Autonomous Systems
Volume: 84
Pages: 113-128
Keywords: Scene reconstruction; Autonomous driving; Texture mapping
Abstract: Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine-quality textures.
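A loose illustration of the projection-plus-triangulation step, assuming NumPy/SciPy and stand-in data: scene geometry is projected into the texture camera and triangulated in the image plane. Note that SciPy only offers an unconstrained Delaunay triangulation, whereas the paper relies on a constrained one plus a sequence of mesh-update operations not reproduced here.

import numpy as np
from scipy.spatial import Delaunay

# Assumed inputs: Nx3 scene points from a 3D sensor and a 3x4 camera
# projection matrix P for the vision sensor supplying the texture.
points_3d = np.random.rand(200, 3) * 10.0 + np.array([0.0, 0.0, 1.0])
P = np.hstack([np.eye(3), np.zeros((3, 1))])   # stand-in projection matrix

# Project the geometry into the image to obtain per-vertex texture coords.
homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])
uvw = homog @ P.T
uv = uvw[:, :2] / uvw[:, 2:3]                  # pixel (texture) coordinates

# Plain Delaunay triangulation of the projected vertices.
mesh = Delaunay(uv)
print(mesh.simplices.shape)                    # (n_triangles, 3) vertex indices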
Language: English
Call Number: cidis @ cidis @ Serial 50
 

 
Author: Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title: Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives
Type: Journal Article
Year: 2016
Publication: Robotics and Autonomous Systems
Volume: 83
Pages: 312-325
Keywords: Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives
Abstract: When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
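As a rough, hedged sketch of how large-scale planar primitives are commonly extracted from a point cloud, a plain RANSAC plane fit in NumPy; the paper's own primitive-extraction and incremental-update machinery is more elaborate and is not reproduced here.

import numpy as np

def ransac_plane(points, n_iters=500, tol=0.05, rng=np.random.default_rng(0)):
    # Fit one dominant plane (unit normal n, offset d with n.p + d = 0)
    # to an Nx3 point cloud, returning the inlier mask.
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

The polygonal primitive itself would then be obtained from the boundary of the inlier set projected onto the fitted plane.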
Language: English
Call Number: cidis @ cidis @ Serial 49
 

 
Author: Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla
Title: Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study
Type: Journal Article
Year: 2016
Publication: Sensors
Volume: 16
Pages: 1-15
Keywords: image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near-InfraRed (NIR) and Long-Wave InfraRed (LWIR).
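As a hedged sketch of one common wavelet fusion rule from the family of strategies compared above (average the approximation band, keep the larger-magnitude detail coefficients), assuming PyWavelets; this is not necessarily one of the paper's evaluated configurations, and the wavelet choice is illustrative.

import numpy as np
import pywt  # PyWavelets

def dwt_fuse(visible, infrared, wavelet="db2"):
    # One-level 2D DWT of each registered, equally sized input.
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visible.astype(np.float64), wavelet)
    cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(infrared.astype(np.float64), wavelet)

    cA = 0.5 * (cA_v + cA_i)              # low-frequency content: average
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    details = (pick(cH_v, cH_i), pick(cV_v, cV_i), pick(cD_v, cD_i))

    # Inverse transform of the merged coefficients gives the fused image.
    return pywt.idwt2((cA, details), wavelet)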
Language: English
Call Number: cidis @ cidis @ Serial 47
 

 
Author: P. Ricaurte; C. Chilán; Cristhian A. Aguilera; Boris X. Vintimilla; Angel D. Sappa
Title: Feature Point Descriptors: Infrared and Visible Spectra
Type: Journal Article
Year: 2014
Publication: Sensors
Volume: 14
Pages: 3690-3701
Keywords: cross-spectral imaging; feature point descriptors
Abstract: This manuscript evaluates the behavior of classical feature point descriptors when they are used in images from the long-wave infrared spectral band and compares them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results on a cross-spectral outdoor image data set are presented and conclusions from these experiments are given.
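A minimal stand-in for one of the robustness tests described above, assuming OpenCV and a grayscale 8-bit input; ORB is chosen here for convenience and is not necessarily among the descriptors the paper evaluates. The score is the fraction of cross-checked matches that survive a Gaussian blur of the input.

import cv2

def blur_robustness(img, ksize=9):
    # Detect and describe on the original and on a blurred copy, then
    # count how many cross-checked Hamming matches survive.
    orb = cv2.ORB_create()
    degraded = cv2.GaussianBlur(img, (ksize, ksize), 0)
    kp1, des1 = orb.detectAndCompute(img, None)
    kp2, des2 = orb.detectAndCompute(degraded, None)
    if des1 is None or des2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    return len(matches) / max(len(kp1), 1)

The same harness generalizes to the other degradations studied (rotation, scaling, additive noise) by swapping the image transform.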
Language: English
Call Number: cidis @ cidis @ Serial 28
 

 
Author: M. Oliveira; L. Seabra Lopes; G. Hyun Lim; S. Hamidreza Kasaei; Angel D. Sappa; A. Tomé
Title: Concurrent Learning of Visual Codebooks and Object Categories in Open-ended Domains
Type: Conference Article
Year: 2015
Publication: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany
Pages: 2488-2495
Keywords: Birds; Training; Legged locomotion; Visualization; Histograms; Object recognition; Gaussian mixture model
Abstract: In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are usually constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns in an incremental and online fashion both the visual object category representations as well as the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach contains similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring less examples, and with similar accuracies, when compared to the classical Bag of Words approach using codebooks constructed offline.
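A hedged sketch of the GMM-codebook idea described above, using scikit-learn; the 64-component diagonal mixture and the random stand-in descriptors are illustrative assumptions, and the paper's incremental per-view update of the mixture is not reproduced.

import numpy as np
from sklearn.mixture import GaussianMixture

# Assumed input: local feature descriptors (e.g. 128-D) pooled from training
# object views; a real system would extract these with a feature detector.
descriptors = np.random.rand(5000, 128)

# The codebook is a Gaussian Mixture Model: each component is a visual word.
codebook = GaussianMixture(n_components=64, covariance_type="diag",
                           random_state=0).fit(descriptors)

def encode(view_descriptors):
    # Bag-of-words histogram via soft assignment of descriptors to words.
    resp = codebook.predict_proba(view_descriptors)   # shape (n, 64)
    return resp.sum(axis=0) / len(view_descriptors)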
Publisher: IEEE
Place of Publication: Hamburg, Germany
Language: English
Conference: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Call Number: cidis @ cidis @ Serial 41
 

 
Author: Cristhian A. Aguilera; Angel D. Sappa; R. Toledo
Title: LGHD: A feature descriptor for matching across non-linear intensity variations
Type: Conference Article
Year: 2015
Publication: IEEE International Conference on Image Processing (ICIP), Quebec City, QC, 2015
Pages: 178-181
Keywords: Feature descriptor; multi-modal; multispectral; NIR; LWIR
Abstract: This paper presents a new feature descriptor suited to the task of matching feature points between images with non-linear intensity variations. This includes image pairs with significant illumination changes, multi-modal image pairs and multi-spectral image pairs. The proposed method describes the neighbourhood of feature points by combining frequency and spatial information using multi-scale and multi-oriented Log-Gabor filters. Experimental results show the validity of the proposed approach as well as improvements with respect to the state of the art.
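A small sketch of the building block named above, assuming NumPy: the radial (frequency-domain) component of a Log-Gabor filter, banks of which at several scales and orientations underlie descriptors like LGHD. The parameter values are illustrative, not those of the paper.

import numpy as np

def log_gabor_radial(shape, f0, sigma_ratio=0.65):
    # G(f) = exp(-(log(f/f0))^2 / (2 * log(sigma_ratio)^2)), defined on the
    # normalized frequency grid of an image of the given shape.
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                       # avoid log(0) at the DC component
    g = np.exp(-(np.log(f / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0                       # Log-Gabor filters have no DC response
    return g

# Example: filter an image at one scale (f0 = 1/6 cycles per pixel):
# response = np.fft.ifft2(np.fft.fft2(img) * log_gabor_radial(img.shape, 1/6))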
Publisher: IEEE
Place of Publication: Quebec City, QC, Canada
Language: English
Conference: 2015 IEEE International Conference on Image Processing (ICIP)
Call Number: cidis @ cidis @ Serial 40
 

 
Author: N. Onkarappa; Cristhian A. Aguilera; B. X. Vintimilla; Angel D. Sappa
Title: Cross-spectral Stereo Correspondence using Dense Flow Fields
Type: Conference Article
Year: 2014
Publication: International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 2014
Volume: 3
Pages: 613-617
Keywords: Cross-spectral Stereo Correspondence; Dense Optical Flow; Infrared and Visible Spectrum
Abstract: This manuscript addresses the cross-spectral stereo correspondence problem. It proposes the use of a dense flow-field-based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow-field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments have been obtained, showing the validity of the proposed approach.
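A hedged sketch of the representation swap described above, assuming OpenCV and placeholder file names: dense optical flow is computed within each spectrum over consecutive frames, and a classical SAD cost is then evaluated on the flow vectors rather than on the weakly correlated raw intensities.

import cv2
import numpy as np

# Assumed inputs: consecutive frames from each spectrum of a rectified rig.
vis_t0 = cv2.imread("vis_t0.png", cv2.IMREAD_GRAYSCALE)
vis_t1 = cv2.imread("vis_t1.png", cv2.IMREAD_GRAYSCALE)
nir_t0 = cv2.imread("nir_t0.png", cv2.IMREAD_GRAYSCALE)
nir_t1 = cv2.imread("nir_t1.png", cv2.IMREAD_GRAYSCALE)

# Farneback dense flow per spectrum (pyr_scale, levels, winsize, iterations,
# poly_n, poly_sigma, flags); parameter values are illustrative.
flow_vis = cv2.calcOpticalFlowFarneback(vis_t0, vis_t1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
flow_nir = cv2.calcOpticalFlowFarneback(nir_t0, nir_t1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def sad_cost(y, x, d, win=5):
    # Classical sum-of-absolute-differences matching cost for disparity d,
    # evaluated on flow vectors instead of raw cross-spectral intensities.
    a = flow_vis[y - win:y + win + 1, x - win:x + win + 1]
    b = flow_nir[y - win:y + win + 1, x - d - win:x - d + win + 1]
    return float(np.abs(a - b).sum())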
Publisher: IEEE
Language: English
Conference: 2014 International Conference on Computer Vision Theory and Applications (VISAPP)
Call Number: cidis @ cidis @ Serial 27
 

 
Author: A. Amato; F. Lumbreras; Angel D. Sappa
Title: A general-purpose crowdsourcing platform for mobile devices
Type: Conference Article
Year: 2014
Publication: International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 2014
Volume: 3
Pages: 211-215
Keywords: Crowdsourcing Platform; Mobile Crowdsourcing
Abstract: This paper presents details of a general-purpose micro-task-on-demand platform based on the crowdsourcing philosophy. This platform was specifically developed for mobile devices in order to exploit the strengths of such devices, namely: i) massivity, ii) ubiquity and iii) embedded sensors. The combined use of mobile platforms and the crowdsourcing model makes it possible to tackle everything from the simplest to the most complex tasks. User experience is the highlighted feature of this platform (this extends to both task-proposer and task-solver). Tools appropriate to a specific task are provided to a task-solver in order to perform his/her job in a simpler, faster and more appealing way. Moreover, a task can be easily submitted by just selecting predefined templates, which cover a wide range of possible applications. Examples of its usage in computer vision and computer games are provided, illustrating the potential of the platform.
Publisher: IEEE
Place of Publication: Lisbon, Portugal
Language: English
Conference: 2014 International Conference on Computer Vision Theory and Applications (VISAPP)
Call Number: cidis @ cidis @ Serial 25