Author A. Amato; F. Lumbreras; Angel D. Sappa
  Title A general-purpose crowdsourcing platform for mobile devices Type Conference Article
  Year 2014 Publication International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 2014 Abbreviated Journal
  Volume 3 Issue Pages 211-215  
  Keywords Crowdsourcing Platform, Mobile Crowdsourcing  
  Abstract This paper presents details of a general-purpose micro-task-on-demand platform based on the crowdsourcing philosophy. The platform was specifically developed for mobile devices in order to exploit the strengths of such devices, namely: i) massivity, ii) ubiquity and iii) embedded sensors. The combined use of mobile platforms and the crowdsourcing model makes it possible to tackle tasks ranging from the simplest to the most complex. User experience is the highlighted feature of this platform, for both the task-proposer and the task-solver. Tools appropriate to a specific task are provided to the task-solver so that he/she can perform the job in a simpler, faster and more appealing way. Moreover, a task can be easily submitted by just selecting predefined templates, which cover a wide range of possible applications. Examples of its usage in computer vision and computer games are provided, illustrating the potential of the platform.
  Address  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Lisbon, Portugal Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference 2014 International Conference on Computer Vision Theory and Applications (VISAPP)
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 25  
Permanent link to this record
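
The template-based task submission described in the abstract above can be pictured with a minimal sketch. The field names and values below are hypothetical, since the paper does not publish a schema; they only illustrate what an instance of a predefined template might carry.

import json

# A minimal, hypothetical task-template instance (the paper does not publish a schema).
image_label_template = {
    "template_id": "image-annotation",           # predefined template selected by the task-proposer
    "title": "Label objects in a street image",
    "instructions": "Draw a bounding box around every pedestrian.",
    "required_sensors": ["camera"],               # exploits the embedded sensors of the device
    "reward_per_answer": 0.05,
    "answers_per_task": 3,                        # redundancy for quality control
    "payload": {"image_url": "https://example.com/task/123.jpg"},
}

# A proposer would submit an instance like this; the platform then dispatches it
# to mobile task-solvers together with the tools suited to the task type.
print(json.dumps(image_label_template, indent=2))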
 

 
Author P. Ricaurte; C. Chilán; C. A. Aguilera-Carrasco; B. X. Vintimilla; Angel D. Sappa
  Title Performance Evaluation of Feature Point Descriptors in the Infrared Domain Type Conference Article
  Year 2014 Publication International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 2014 Abbreviated Journal
  Volume 1 Issue Pages 545-550
  Keywords Infrared Imaging, Feature Point Descriptors  
  Abstract This paper presents a comparative evaluation of classical feature point descriptors when they are used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results using an outdoor image data set are presented together with a discussion about the differences with respect to the results obtained when images from the visible spectrum are considered.
  Address  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference 2014 International Conference on Computer Vision Theory and Applications (VISAPP)  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 26  
Permanent link to this record
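
The kind of robustness test described in the abstract above can be sketched as follows, assuming an OpenCV pipeline with ORB as a stand-in for the classical descriptors compared in the paper; the image file name, blur levels and pixel tolerance are hypothetical.

import cv2
import numpy as np

# Match a descriptor between an (infrared) image and increasingly blurred copies
# and report the ratio of correct matches. "lwir.png" is a hypothetical file name.
img = cv2.imread("lwir.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500)          # stand-in for the descriptors under evaluation

kp1, des1 = orb.detectAndCompute(img, None)

for sigma in (1, 2, 3):                      # increasing blur levels
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    kp2, des2 = orb.detectAndCompute(blurred, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    # A match counts as correct if the keypoint barely moved (blur does not displace pixels).
    correct = sum(
        1 for m in matches
        if np.hypot(*(np.array(kp1[m.queryIdx].pt) - np.array(kp2[m.trainIdx].pt))) < 2.0
    )
    print(f"sigma={sigma}: {correct}/{len(matches)} correct matches")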
 

 
Author N. Onkarappa; Cristhian A. Aguilera; B. X. Vintimilla; Angel D. Sappa
  Title Cross-spectral Stereo Correspondence using Dense Flow Fields Type Conference Article
  Year 2014 Publication International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 2014 Abbreviated Journal
  Volume 3 Issue Pages 613-617
  Keywords Cross-spectral Stereo Correspondence, Dense Optical Flow, Infrared and Visible Spectrum  
  Abstract This manuscript addresses the cross-spectral stereo correspondence problem. It proposes the use of a dense flow field based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments have been obtained, showing the validity of the proposed approach.
  Address  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference 2014 International Conference on Computer Vision Theory and Applications (VISAPP)  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 27  
Permanent link to this record
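
A minimal sketch of the idea in the abstract above: compute a dense optical flow field for each modality from two consecutive frames, then compare windows of flow magnitude, rather than raw intensities, with a classical SAD cost along the same rectified scan line. The file names, Farneback parameters, window size and disparity range are hypothetical, and the authors' actual cost functions may differ.

import cv2
import numpy as np

def flow_magnitude(prev_path, curr_path):
    """Dense flow magnitude between two consecutive frames of one modality."""
    prev = cv2.imread(prev_path, cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread(curr_path, cv2.IMREAD_GRAYSCALE)
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.linalg.norm(flow, axis=2)      # per-pixel flow magnitude

vis = flow_magnitude("visible_t0.png", "visible_t1.png")
ir  = flow_magnitude("lwir_t0.png", "lwir_t1.png")

def sad_disparity(row, col, win=7, max_disp=64):
    """Best disparity for one rectified pixel, comparing flow-magnitude windows."""
    h = win // 2
    ref = vis[row - h:row + h + 1, col - h:col + h + 1]
    costs = [np.abs(ref - ir[row - h:row + h + 1,
                             col - d - h:col - d + h + 1]).sum()
             for d in range(max_disp) if col - d - h >= 0]
    return int(np.argmin(costs)) if costs else None

print(sad_disparity(120, 300))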
 

 
Author P. Ricaurte; C. Chilán; Cristhian A. Aguilera; Boris X. Vintimilla; Angel D. Sappa
  Title Feature Point Descriptors: Infrared and Visible Spectra Type Journal Article
  Year 2014 Publication Sensors Abbreviated Journal
  Volume 14 Issue Pages 3690-3701
  Keywords cross-spectral imaging; feature point descriptors  
  Abstract This manuscript evaluates the behavior of classical feature point descriptors when they are used in images from the long-wave infrared spectral band and compares them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results using a cross-spectral outdoor image data set are presented and conclusions from these experiments are given.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 28  
Permanent link to this record
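
The cross-spectral evaluation above can be illustrated with a small sketch that detects and matches keypoints between a registered visible/LWIR pair and counts how many matches land on (nearly) the same pixel; registration means corresponding points share coordinates. ORB is a stand-in for the descriptors evaluated in the paper, and the file names and pixel tolerance are hypothetical.

import cv2
import numpy as np

vis  = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
lwir = cv2.imread("lwir.png", cv2.IMREAD_GRAYSCALE)     # registered with the visible image

orb = cv2.ORB_create(nfeatures=1000)                     # stand-in for the descriptors compared
kp_v, des_v = orb.detectAndCompute(vis, None)
kp_t, des_t = orb.detectAndCompute(lwir, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_v, des_t)
# A match is correct when both keypoints fall on (almost) the same coordinates.
correct = sum(
    1 for m in matches
    if np.hypot(*(np.array(kp_v[m.queryIdx].pt) - np.array(kp_t[m.trainIdx].pt))) < 3.0
)
print(f"cross-spectral match rate: {correct}/{len(matches)}")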
 

 
Author Cristhian A. Aguilera; Angel D. Sappa; R. Toledo
  Title LGHD: A feature descriptor for matching across non-linear intensity variations Type Conference Article
  Year 2015 Publication IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 2015 Abbreviated Journal
  Volume Issue Pages 178 - 181  
  Keywords Feature descriptor, multi-modal, multispectral, NIR, LWIR  
  Abstract This paper presents a new feature descriptor suited to the task of matching feature points between images with non-linear intensity variations. This includes image pairs with significant illumination changes, multi-modal image pairs and multi-spectral image pairs. The proposed method describes the neighbourhood of feature points by combining frequency and spatial information using multi-scale and multi-oriented Log-Gabor filters. Experimental results show the validity of the proposed approach as well as improvements with respect to the state of the art.
  Address  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Quebec City, QC, Canada Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference 2015 IEEE International Conference on Image Processing (ICIP)  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 40  
Permanent link to this record
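
A simplified, hedged sketch of the descriptor idea described above: split the patch around a feature point into 4x4 subregions, filter it with a small multi-oriented filter bank, and histogram which filter responds most strongly in each subregion. OpenCV's plain Gabor kernels are used here as a stand-in for the multi-scale Log-Gabor bank of LGHD, and the patch size and bank parameters are hypothetical.

import cv2
import numpy as np

ORIENTATIONS = [i * np.pi / 4 for i in range(4)]   # 4 orientations (hypothetical bank)
BANK = [cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5) for theta in ORIENTATIONS]

def lghd_like(patch, grid=4):
    """Return a (grid*grid*len(BANK)) histogram-style descriptor for a square patch."""
    responses = [cv2.filter2D(patch.astype(np.float32), -1, k) for k in BANK]
    responses = np.abs(np.stack(responses, axis=0))           # |filter response| per orientation
    dominant = np.argmax(responses, axis=0)                    # winning filter per pixel
    h, w = patch.shape
    desc = []
    for i in range(grid):
        for j in range(grid):
            cell = dominant[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            hist = np.bincount(cell.ravel(), minlength=len(BANK)).astype(np.float32)
            desc.append(hist / max(hist.sum(), 1.0))           # normalise each cell histogram
    return np.concatenate(desc)

patch = cv2.imread("patch.png", cv2.IMREAD_GRAYSCALE)          # hypothetical patch around a keypoint
print(lghd_like(patch).shape)                                   # -> (64,)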
 

 
Author M. Oliveira; L. Seabra Lopes; G. Hyun Lim; S. Hamidreza Kasaei; Angel D. Sappa; A. Tomé
  Title Concurrent Learning of Visual Codebooks and Object Categories in Open-ended Domains Type Conference Article
  Year 2015 Publication IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015 Abbreviated Journal
  Volume Issue Pages 2488 - 2495  
  Keywords Birds, Training, Legged locomotion, Visualization, Histograms, Object recognition, Gaussian mixture model  
  Abstract In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are usually constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns, in an incremental and online fashion, both the visual object category representations and the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach shares similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring fewer examples and achieving similar accuracies, when compared to the classical Bag of Words approach using codebooks constructed offline.
  Address  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Hamburg, Germany Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 41  
Permanent link to this record
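
The concurrent codebook/category learning described above can be sketched roughly as follows: local features of every incoming object view both refine a GMM codebook (via warm-started re-fitting) and are encoded against it as a Bag-of-Words histogram for the corresponding category. This is only an illustration of the idea with simulated descriptors, not the authors' exact update rule.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
codebook = GaussianMixture(n_components=20, covariance_type="diag",
                           warm_start=True, max_iter=20, random_state=0)
category_models = {}            # category name -> list of BoW histograms

def learn_view(category, features):
    """features: (N, D) local descriptors extracted from one object view."""
    codebook.fit(features)                      # incremental refinement of the codebook
    words = codebook.predict(features)          # assign each descriptor to a visual word
    hist = np.bincount(words, minlength=codebook.n_components).astype(float)
    category_models.setdefault(category, []).append(hist / hist.sum())

# Simulated stream of object views with 64-D descriptors (stand-in for real data).
for cat in ("mug", "book"):
    for _ in range(3):
        learn_view(cat, rng.normal(size=(200, 64)))

print({c: len(v) for c, v in category_models.items()})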
 

 
Author Dennis G. Romero; A. Frizera; Angel D. Sappa; Boris X. Vintimilla; T.F. Bastos
  Title A predictive model for human activity recognition by observing actions and context Type Conference Article
  Year 2015 Publication International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS 2015), Catania, Italy, 2015 Abbreviated Journal
  Volume Issue Pages 323 - 333  
  Keywords Edge width, Image blur, Defocus map, Edge model
  Abstract This paper presents a novel model to estimate human activities – a human activity is defined by a set of human actions. The proposed approach is based on the use of Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and the surrounding environment. In the current work human activities are inferred considering not only visual analysis but also additional resources; external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can be later associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions, plus contextual information or other kinds of information that could be relevant to describe the activity. Experimental results with real data are provided, showing the validity of the proposed approach.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 43  
Permanent link to this record
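
The inference idea in the abstract above can be sketched as a sequence of Bayesian updates: a posterior over activities is revised each time an action (e.g., from a visual recogniser) or a piece of context is observed. The activities, observations and likelihood tables below are made up for illustration; the RNN-based action recogniser used in the paper is not reproduced here.

import numpy as np

ACTIVITIES = ["cooking", "cleaning", "resting"]
prior = np.full(len(ACTIVITIES), 1.0 / len(ACTIVITIES))

# P(observation | activity), one value per activity (made-up numbers).
likelihood = {
    "action:open_fridge": np.array([0.70, 0.20, 0.10]),
    "action:sit_down":    np.array([0.10, 0.10, 0.80]),
    "context:kitchen":    np.array([0.60, 0.30, 0.10]),
}

def update(posterior, observation):
    """One Bayesian update step: posterior is proportional to likelihood * prior."""
    p = posterior * likelihood[observation]
    return p / p.sum()

posterior = prior
for obs in ("context:kitchen", "action:open_fridge"):
    posterior = update(posterior, obs)
    print(obs, dict(zip(ACTIVITIES, posterior.round(2))))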
 

 
Author Julien Poujol; Cristhian A. Aguilera; Etienne Danos; Boris X. Vintimilla; Ricardo Toledo; Angel D. Sappa
  Title A Visible-Thermal Fusion based Monocular Visual Odometry Type Conference Article
  Year 2015 Publication Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 Abbreviated Journal
  Volume 417 Issue Pages 517-528  
  Keywords Monocular Visual Odometry; LWIR-RGB cross-spectral Imaging; Image Fusion  
  Abstract The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which are from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In particular, two different image fusion strategies are considered in the current work. Firstly, images from the visible and thermal spectrum are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both the monocular visible spectrum and the monocular infrared spectrum are also provided, showing the validity of the proposed approach.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 44  
Permanent link to this record
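
The DWT fusion strategy mentioned in the abstract above can be sketched with a single-level transform: average the approximation bands of the registered visible and thermal images, keep the stronger detail coefficients, and invert. The file names, the Haar wavelet and the one-level decomposition are simplifying assumptions.

import cv2
import numpy as np
import pywt

vis  = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
lwir = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # registered pair

cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(vis, "haar")
cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(lwir, "haar")

def pick_stronger(a, b):
    """Keep, per coefficient, whichever detail has the larger magnitude."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

fused_cA = (cA_v + cA_t) / 2.0                     # average the approximation bands
fused_details = (pick_stronger(cH_v, cH_t),
                 pick_stronger(cV_v, cV_t),
                 pick_stronger(cD_v, cD_t))
fused = pywt.idwt2((fused_cA, fused_details), "haar")

cv2.imwrite("fused.png", np.clip(fused, 0, 255).astype(np.uint8))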
 

 
Author Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias
  Title Scene representations for autonomous driving: an approach based on polygonal primitives Type Conference Article
  Year 2015 Publication Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 Abbreviated Journal
  Volume 417 Issue Pages 503-515  
  Keywords Scene reconstruction, Point cloud, Autonomous vehicles  
  Abstract In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.  
  Address  
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference Second Iberian Robotics Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 45  
Permanent link to this record
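
A hedged sketch of the macro-scale polygonal primitive idea: fit a dominant plane to a point cloud with RANSAC and represent it by the convex-hull polygon of its inliers expressed in plane coordinates. The synthetic cloud and thresholds below are placeholders, and the authors' full pipeline (many planes, refinement, merging) is not reproduced.

import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

def ransac_plane(points, iters=200, thresh=0.05):
    """Return (normal, d, inlier_mask) for the best plane n.x + d = 0."""
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

def plane_polygon(points, n):
    """Convex-hull polygon of points expressed in 2-D plane coordinates."""
    u = np.cross(n, [0.0, 0.0, 1.0] if abs(n[2]) < 0.9 else [1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    uv = np.column_stack((points @ u, points @ v))
    return uv[ConvexHull(uv).vertices]          # polygon vertices, counter-clockwise

# Synthetic cloud: a noisy ground plane plus clutter (stand-in for real scan data).
ground = np.column_stack((rng.uniform(-5, 5, 2000), rng.uniform(-5, 5, 2000),
                          rng.normal(0, 0.02, 2000)))
clutter = rng.uniform(-5, 5, (300, 3))
cloud = np.vstack((ground, clutter))

n, d, inliers = ransac_plane(cloud)
print("plane inliers:", inliers.sum(),
      "polygon vertices:", len(plane_polygon(cloud[inliers], n)))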
 

 
Author Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla
  Title Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study Type Journal Article
  Year 2016 Publication Sensors Abbreviated Journal
  Volume 16 Issue Pages 1-15
  Keywords image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform  
  Abstract This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most of the approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR).
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 47  
Permanent link to this record
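
The comparison methodology above can be pictured with a small sketch: fuse a registered visible/infrared pair with a few simple strategies and score each result with quantitative metrics (here, entropy of the fused image plus its mutual information with each source), then keep the best-scoring setup. The strategies and the combined score are simplified stand-ins for the wavelet-based setups and metrics compared in the paper.

import cv2
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b):
    """Mutual information between two images from a joint 64-bin histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=64)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)   # hypothetical registered pair
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE)

strategies = {
    "average":  ((vis.astype(float) + nir.astype(float)) / 2).astype(np.uint8),
    "max":      np.maximum(vis, nir),
    "weighted": (0.7 * vis + 0.3 * nir).astype(np.uint8),
}

for name, fused in strategies.items():
    score = entropy(fused) + mutual_information(fused, vis) + mutual_information(fused, nir)
    print(f"{name:9s} entropy+MI score = {score:.3f}")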