Author: Low S., Inkawhich N., Nina O., Sappa A. and Blasch E.
Title: Multi-modal Aerial View Object Classification Challenge Results - PBVS 2022
Type: Conference Article
Year: 2022
Publication: Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24
Volume: 2022-June
Pages: 417-425
Abstract: This paper details the results and main findings of the second iteration of the Multi-modal Aerial View Object Classification (MAVOC) challenge. The primary goal of both MAVOC challenges is to inspire research into methods for building recognition models that use both synthetic aperture radar (SAR) and electro-optical (EO) input modalities. Teams are encouraged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge showed a proof of concept that both modalities could be used together, the 2022 challenge focuses on detailed multi-modal models, using the same UNIfied COincident Optical and Radar for recognitioN (UNICORN) dataset and competition format as in 2021. Specifically, the challenge comprises two tracks: (1) SAR classification and (2) SAR + EO classification. The bulk of this document is dedicated to discussing the top-performing methods and describing their performance on our blind test set. Notably, all of the top ten teams outperform our baseline. For SAR classification, the top team showed a 129% improvement over our baseline and an 8% average improvement over the 2021 winner. The top team for SAR + EO classification shows a 165% improvement over the baseline and a 32% average improvement over 2021.
 
Call Number: cidis @ cidis   Serial: 177
 

 
Author: Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla
Title: Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study
Type: Journal Article
Year: 2016
Publication: Sensors Journal
Volume: 16
Pages: 1-15
Keywords: image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR).
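As a concrete illustration of the kind of strategy the study compares, the sketch below fuses two registered images with a single-level Haar decomposition, averaging the approximation band and keeping the maximum-magnitude detail coefficients. This is a minimal NumPy sketch of one generic setup, not the specific configurations evaluated in the paper.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar decomposition (image dimensions must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal details
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical details
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal details
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    a = np.zeros((ll.shape[0], ll.shape[1] * 2)); d = np.zeros_like(a)
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2, :] = a + d; out[1::2, :] = a - d
    return out

def fuse(vis, ir):
    """Average the approximation bands; keep max-magnitude detail coefficients."""
    cv, ci = haar_dwt2(vis), haar_dwt2(ir)
    ll = (cv[0] + ci[0]) / 2.0
    details = [np.where(np.abs(v) >= np.abs(i), v, i)
               for v, i in zip(cv[1:], ci[1:])]
    return haar_idwt2(ll, *details)
```

Fusing an image with itself reproduces the image, since the Haar pair above is a perfect-reconstruction transform.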
Language: English
Call Number: cidis @ cidis   Serial: 47
 

 
Author: Nayeth I. Solorzano Alcivar; Robert Loor; Stalyn Gonzabay Yagual; Boris X. Vintimilla
Title: Statistical Representations of a Dashboard to Monitor Educational Videogames in Natural Language
Type: Conference Article
Year: 2020
Publication: ETLTC - ACM Chapter: International Conference on Educational Technology, Language and Technical Communication; Fukushima, Japan, January 27-31, 2020
Volume: 77
Abstract: This paper explains how Natural Language (NL) processing by computers, through smart programs as a form of Machine Learning (ML), can represent large sets of quantitative data as written statements. The study recognized the need to improve the implemented web platform using a dashboard in which we collected an extensive data set to measure assessment factors of using children's educational games. In this case, applying NL is a strategy to give assessments and to build and display more precise written statements to enhance the understanding of children's gaming behavior. We propose the development of a new tool to assess the use of written explanations rather than a statistical representation of feedback information, for the comprehension of parents and teachers who lack primary-level knowledge of statistics. Applying fuzzy logic theory, we present verbatim explanations of children's behavior playing educational videogames as NL interpretations instead of statistical representations. An educational series of digital game applications for mobile devices, identified as MIDI (Spanish acronym of “Interactive Didactic Multimedia for Children”) and linked to a dashboard in the cloud, is evaluated using the dashboard metrics. MIDI games tested in local primary schools help to evaluate the results of using the proposed tool. The guiding results allow analyzing the degrees of playability and usability obtained from the data produced when children play a MIDI game. The results are presented in a comprehensive guiding evaluation report applying NL for parents and teachers. These guiding evaluations are useful to enhance the understanding of children's learning related to the school curricula applied to ludic digital games.
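The step from a numeric dashboard metric to a verbal statement can be sketched with simple triangular fuzzy memberships. The score ranges, labels and phrasing below are illustrative assumptions, not the memberships used by the MIDI dashboard.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def describe_playability(score):
    """Map a 0-100 playability score to a short natural-language statement."""
    labels = {
        "low":    tri(score, -1, 0, 50),
        "medium": tri(score, 25, 50, 75),
        "high":   tri(score, 50, 100, 101),
    }
    best = max(labels, key=labels.get)   # label with strongest membership
    return f"The game shows {best} playability ({score}/100)."
```

A full fuzzy system would aggregate several metrics (playability, usability, session length) before defuzzifying into one sentence; this sketch shows only the single-metric case.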
 
Call Number: cidis @ cidis   Serial: 131
 

 
Author: P. Ricaurte; C. Chilán; C. A. Aguilera-Carrasco; B. X. Vintimilla; Angel D. Sappa
Title: Performance Evaluation of Feature Point Descriptors in the Infrared Domain
Type: Conference Article
Year: 2014
Publication: 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal
Volume: 1
Pages: 545-550
Keywords: Infrared Imaging; Feature Point Descriptors
Abstract: This paper presents a comparative evaluation of classical feature point descriptors when they are used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results using an outdoor image data set are presented, together with a discussion of the differences with respect to the results obtained when images from the visible spectrum are considered.
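An evaluation of this kind reduces to measuring how often each descriptor's nearest neighbour survives a perturbation. The sketch below measures this for a trivial normalized-patch descriptor under additive noise; the descriptor, keypoints and noise level are placeholders, not the classical descriptors or the framework evaluated in the paper.

```python
import numpy as np

def patch_descriptors(img, kps, r=2):
    """Trivial descriptor: the mean/std-normalized raw patch around each keypoint."""
    descs = []
    for y, x in kps:
        p = img[y - r:y + r + 1, x - r:x + r + 1].astype(float).ravel()
        descs.append((p - p.mean()) / (p.std() + 1e-8))
    return np.array(descs)

def matching_rate(d_ref, d_test):
    """Fraction of descriptors whose nearest neighbour is their own counterpart."""
    dist = ((d_ref[:, None, :] - d_test[None, :, :]) ** 2).sum(-1)
    return float((dist.argmin(axis=1) == np.arange(len(d_ref))).mean())

# Illustrative robustness test: same keypoints, noisy copy of the image.
rng = np.random.default_rng(1)
img = rng.random((32, 32))
kps = [(8, 8), (16, 16), (24, 24), (8, 24)]
d0 = patch_descriptors(img, kps)
noisy = img + rng.normal(0, 0.01, img.shape)
rate = matching_rate(d0, patch_descriptors(noisy, kps))
```

Sweeping the noise level (or rotation angle, blur radius, etc.) and plotting the matching rate per descriptor reproduces the shape of such an evaluation.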
Publisher: IEEE
Language: English
Conference: 2014 International Conference on Computer Vision Theory and Applications (VISAPP)
Call Number: cidis @ cidis   Serial: 26
 

 
Author: Charco, J.L.; Sappa, A.D.; Vintimilla, B.X.; Velesaca, H.O.
Title: Camera pose estimation in multi-view environments: from virtual scenarios to the real world
Type: Journal Article
Year: 2021
Publication: Image and Vision Computing Journal (Article number 104182)
Volume: 110
Keywords: Relative camera pose estimation; Domain adaptation; Siamese architecture; Synthetic data; Multi-view environments
Abstract: This paper presents a domain adaptation strategy to efficiently train network architectures for estimating the relative camera pose in multi-view scenarios. The network architectures are fed by a pair of simultaneously acquired images; hence, in order to improve the accuracy of the solutions, and due to the lack of large datasets with pairs of overlapped images, a domain adaptation strategy is proposed. The strategy consists of transferring the knowledge learned from synthetic images to real-world scenarios. For this, the networks are first trained using pairs of synthetic images, which are captured at the same time by a pair of cameras in a virtual environment; then, the learned weights of the networks are transferred to the real-world case, where the networks are retrained with a few real images. Different virtual 3D scenarios are generated to evaluate the relationship between the accuracy of the results and the similarity between virtual and real scenarios, both in the geometry of the objects contained in the scene and in the relative pose between camera and objects. Experimental results and comparisons are provided, showing that the accuracy of all the evaluated networks for estimating the camera pose improves when the proposed domain adaptation strategy is used, highlighting the importance of the similarity between virtual and real scenarios.
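The quantity the networks estimate, the relative pose between two cameras, follows directly from their absolute world-to-camera poses. A minimal NumPy sketch of that standard relation (textbook geometry, not the paper's network):

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Given world->camera poses (R, t) of two cameras, return the pose of
    camera 2 relative to camera 1, i.e. x_c2 = R_rel @ x_c1 + t_rel."""
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel

# Usage: camera 1 at the world origin, camera 2 rotated 90 deg about Z
# and shifted along X; the relative pose then equals camera 2's pose.
R1, t1 = np.eye(3), np.zeros(3)
R2 = np.array([[0., -1., 0.],
               [1.,  0., 0.],
               [0.,  0., 1.]])
t2 = np.array([1., 0., 0.])
R_rel, t_rel = relative_pose(R1, t1, R2, t2)
```

The derivation: x_c1 = R1 x + t1 gives x = R1.T (x_c1 - t1); substituting into x_c2 = R2 x + t2 yields R_rel = R2 R1.T and t_rel = t2 - R_rel t1.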
 
Language: English
Call Number: cidis @ cidis   Serial: 147
 

 
Author: Henry O. Velesaca; Raul A. Mira; Patricia L. Suarez; Christian X. Larrea; Angel D. Sappa
Title: Deep Learning based Corn Kernel Classification
Type: Conference Article
Year: 2020
Publication: The 1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture, at the Conference on Computer Vision and Pattern Recognition (CVPR 2020)
Volume: 2020-June
Issue: 9150684
Pages: 294-302
Abstract: This paper presents a full pipeline to classify sample sets of corn kernels. The proposed approach follows a segmentation-classification scheme. The image segmentation is performed through a well-known deep learning based approach, the Mask R-CNN architecture, while the classification is performed by means of a novel lightweight network specially designed for this task; good corn kernel, defective corn kernel and impurity categories are considered. As a second contribution, a carefully annotated multi-touching corn kernel dataset has been generated. This dataset has been used for training the segmentation and the classification modules. Quantitative evaluations have been performed and comparisons with other approaches are provided, showing improvements with the proposed pipeline.
 
Language: English
ISSN: 21607508   ISBN: 978-172819360-1
Call Number: cidis @ cidis   Serial: 124
 

 
Author: Dennis G. Romero; A. F. Neto; T. F. Bastos; Boris X. Vintimilla
Title: RWE patterns extraction for on-line human action recognition through window-based analysis of invariant moments
Type: Conference Article
Year: 2012
Publication: 5th Workshop in Applied Robotics and Automation (RoboControl)
Keywords: Human action recognition; Relative Wavelet Energy; Window-based temporal analysis
Abstract: This paper presents a method for on-line human action recognition in video sequences. An analysis based on the Mahalanobis distance is performed to identify the “idle” state, which defines the beginning and end of the person's movement, followed by pattern extraction based on Relative Wavelet Energy from sequences of invariant moments.
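The idle-state test can be sketched as a Mahalanobis-distance threshold against a model of the "idle" feature distribution: frames close to the model are idle, and the first non-idle frame marks the start of a movement. The feature dimensionality and threshold below are assumptions for illustration.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of feature vector x from a Gaussian model."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def is_idle(frame_features, idle_mean, idle_cov, threshold=3.0):
    """A frame is 'idle' when its features are close to the idle-state model."""
    cov_inv = np.linalg.inv(idle_cov)
    return mahalanobis(frame_features, idle_mean, cov_inv) < threshold

# Usage: model fitted on known idle frames; here a unit Gaussian for brevity.
idle_mean, idle_cov = np.zeros(2), np.eye(2)
```

Scanning a sequence frame by frame, the transitions idle -> non-idle and non-idle -> idle delimit the window from which the RWE patterns are extracted.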
Language: English
Call Number: cidis @ cidis   Serial: 23
 

 
Author: Jorge L. Charco; Angel D. Sappa; Boris X. Vintimilla
Title: Human Pose Estimation through a Novel Multi-View Scheme
Type: Conference Article
Year: 2022
Publication: Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022)
Volume: 5
Pages: 855-862
Keywords: Multi-View Scheme; Human Pose Estimation; Relative Camera Pose; Monocular Approach
Abstract: This paper presents a multi-view scheme to tackle the challenging problem of self-occlusion in human pose estimation. The proposed approach first obtains the human body joints of a set of images captured from different views at the same time. Then, it enhances the obtained joints by using a multi-view scheme: the joints from a given view are used to enhance poorly estimated joints from another view, especially intended to tackle self-occlusion cases. A network architecture initially proposed for the monocular case is adapted to be used in the proposed multi-view scheme. Experimental results and comparisons with state-of-the-art approaches on the Human3.6M dataset are presented, showing improvements in the accuracy of body joint estimations.
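The view-to-view enhancement step can be sketched as confidence-gated replacement: joints estimated with low confidence in one view are replaced by the corresponding joints from another view, here assumed to have already been mapped into the first view's frame (which a real system would do via the relative camera pose). The confidence threshold is an illustrative assumption.

```python
import numpy as np

def fuse_joints(joints_a, conf_a, joints_b, conf_b, min_conf=0.5):
    """Replace low-confidence joints of view A with the corresponding joints
    from view B, but only where view B itself is confident.

    joints_*: (J, 2) arrays of joint coordinates in a common frame.
    conf_*:   (J,) per-joint confidence scores in [0, 1].
    """
    fused = joints_a.copy()
    weak = (conf_a < min_conf) & (conf_b >= min_conf)  # A weak, B reliable
    fused[weak] = joints_b[weak]
    return fused
```

Self-occluded joints typically come with low detector confidence in one view but not in another, which is exactly the case this gate targets.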
 
Call Number: cidis @ cidis   Serial: 169
 

 
Author: Cristhian A. Aguilera; Angel D. Sappa; R. Toledo
Title: LGHD: A Feature Descriptor for Matching across Non-linear Intensity Variations
Type: Conference Article
Year: 2015
Publication: 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada
Pages: 178-181
Keywords: Feature descriptor; multi-modal; multispectral; NIR; LWIR
Abstract: This paper presents a new feature descriptor suitable for the task of matching feature points between images with non-linear intensity variations. This includes image pairs with significant illumination changes, multi-modal image pairs and multi-spectral image pairs. The proposed method describes the neighbourhood of feature points by combining frequency and spatial information using multi-scale and multi-oriented Log-Gabor filters. Experimental results show the validity of the proposed approach and the improvements with respect to the state of the art.
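The building block of such a filter bank, the radial component of a Log-Gabor filter, has the standard frequency-domain form exp(-(log(f/f0))^2 / (2 (log k)^2)), where f0 is the center frequency and k the bandwidth ratio. The sketch below builds one radial filter in NumPy; the parameter values are illustrative, and the multi-scale, multi-orientation bank and the LGHD pooling itself are omitted.

```python
import numpy as np

def log_gabor_radial(size, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor transfer function on a size x size frequency grid."""
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size),
                         indexing="ij")
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0                      # avoid log(0) at DC; zeroed below
    g = np.exp(-(np.log(f / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0                      # log-Gabor filters have no DC response
    return g

def filter_image(img, g):
    """Apply the frequency-domain filter; return the magnitude response."""
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * g))
```

The zero DC response is what makes the response invariant to additive intensity offsets, one reason Log-Gabor filters suit cross-spectral matching.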
Publisher: IEEE
Place of Publication: Quebec City, QC, Canada
Language: English
Conference: 2015 IEEE International Conference on Image Processing (ICIP)
Call Number: cidis @ cidis   Serial: 40
 

 
Author: Armin Mehri; Angel D. Sappa
Title: Colorizing Near Infrared Images through a Cyclic Adversarial Approach of Unpaired Samples
Type: Conference Article
Year: 2019
Publication: Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019); Long Beach, California, United States
Pages: 971-979
Abstract: This paper presents a novel approach for colorizing near infrared (NIR) images. The approach is based on image-to-image translation using a Cycle-Consistent adversarial network to learn the color channels from unpaired datasets, so the architecture does not require paired samples. The approach uses as generators tailored networks that require less computation time, converge faster and generate high-quality samples. The obtained results have been evaluated quantitatively, using standard evaluation metrics, and qualitatively, showing considerable improvements with respect to the state of the art.
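The cycle-consistency constraint that makes unpaired training possible penalizes the two generators, G (NIR to RGB) and F (RGB to NIR), for failing to invert each other. A minimal NumPy sketch of the L1 cycle loss (the adversarial terms and the actual generator networks are omitted; the toy generators below are illustrative stand-ins):

```python
import numpy as np

def cycle_consistency_loss(G, F, x_nir, y_rgb):
    """L1 cycle loss: F(G(x)) should recover x, and G(F(y)) should recover y."""
    loss_x = np.abs(F(G(x_nir)) - x_nir).mean()   # forward cycle
    loss_y = np.abs(G(F(y_rgb)) - y_rgb).mean()   # backward cycle
    return loss_x + loss_y

# Toy linear "generators" that are exact inverses, so the loss vanishes.
G = lambda a: 2 * a
F = lambda a: a / 2
```

During training this loss is weighted and added to the adversarial losses of the two discriminators; it is the term that pins the translation to the input content even without paired supervision.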
 
Call Number: gtsi @ user   Serial: 105