Author Henry O. Velesaca; Raul A. Mira; Patricia L. Suarez; Christian X. Larrea; Angel D. Sappa
  Title Deep Learning based Corn Kernel Classification. Type Conference Article
  Year 2020 Publication The 1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture, at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) Abbreviated Journal  
  Volume 2020-June Issue 9150684 Pages 294-302  
  Keywords  
  Abstract This paper presents a full pipeline to classify sample sets of corn kernels. The proposed approach follows a segmentation-classification scheme. The image segmentation is performed through a well-known deep learning based approach, the Mask R-CNN architecture, while the classification is performed by means of a novel lightweight network specially designed for this task; good corn kernel, defective corn kernel and impurity categories are considered. As a second contribution, a carefully annotated multi-touching corn kernel dataset has been generated. This dataset has been used for training the segmentation and the classification modules. Quantitative evaluations have been performed and comparisons with other approaches are provided, showing improvements with the proposed pipeline. (An illustrative code sketch of this scheme follows the record.)
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 21607508 ISBN 978-172819360-1 Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 124  
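The following sketch illustrates, in broad strokes, the segmentation-classification scheme described above: a Mask R-CNN model isolates individual kernels and a small CNN assigns each crop to one of the three categories. It uses torchvision's generic Mask R-CNN as a stand-in for the model trained on the corn kernel dataset; the classifier layer sizes, input resolution, and class ordering are assumptions for illustration, not the paper's actual network.

import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Segmentation stage: a generic pretrained Mask R-CNN (stand-in for the
# model trained on the multi-touching corn kernel dataset).
segmenter = maskrcnn_resnet50_fpn(pretrained=True).eval()

# Classification stage: a small CNN with three outputs
# (good kernel, defective kernel, impurity). Layer sizes are hypothetical.
classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 3),
)

def classify_kernels(image, score_thr=0.5):
    """image: float tensor (3, H, W) in [0, 1]. Returns one label per detected kernel."""
    with torch.no_grad():
        detections = segmenter([image])[0]  # dict with boxes, masks, scores
        labels = []
        for box, score in zip(detections["boxes"], detections["scores"]):
            if score < score_thr:
                continue
            x0, y0, x1, y1 = box.int().tolist()
            crop = image[:, y0:y1, x0:x1].unsqueeze(0)
            crop = nn.functional.interpolate(crop, size=(64, 64))
            labels.append(classifier(crop).argmax(dim=1).item())
    return labels  # 0: good, 1: defective, 2: impurity (assumed ordering)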
 

 
Author Charco, J.L.; Sappa, A.D.; Vintimilla, B.X.; Velesaca, H.O.
  Title Camera pose estimation in multi-view environments: from virtual scenarios to the real world Type Journal Article
  Year 2021 Publication Image and Vision Computing Journal (Article number 104182) Abbreviated Journal  
  Volume Vol. 110 Issue Pages  
  Keywords Relative camera pose estimation, Domain adaptation, Siamese architecture, Synthetic data, Multi-view environments  
  Abstract This paper presents a domain adaptation strategy to efficiently train network architectures for estimating the relative camera pose in multi-view scenarios. The network architectures are fed by a pair of simultaneously acquired images; hence, in order to improve the accuracy of the solutions, and due to the lack of large datasets with pairs of overlapping images, a domain adaptation strategy is proposed. The domain adaptation strategy consists of transferring the knowledge learned from synthetic images to real-world scenarios. For this, the networks are first trained using pairs of synthetic images, which are captured at the same time by a pair of cameras in a virtual environment; then, the learned weights of the networks are transferred to the real-world case, where the networks are retrained with a few real images. Different virtual 3D scenarios are generated to evaluate the relationship between the accuracy of the results and the similarity between virtual and real scenarios, considering both the geometry of the objects contained in the scene and the relative pose between camera and objects in the scene. Experimental results and comparisons are provided showing that the accuracy of all the evaluated networks for estimating the camera pose improves when the proposed domain adaptation strategy is used, highlighting the importance of the similarity between virtual and real scenarios. (An illustrative code sketch of the two-stage training follows the record.)
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 147  
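A minimal sketch of the two-stage training outlined in the abstract: a Siamese network with a shared encoder regresses the relative pose from an image pair; it is first trained on synthetic pairs and its weights are then transferred and fine-tuned with a few real pairs. The backbone choice (ResNet-18), head sizes, pose parameterization (translation + quaternion), and the decision to freeze the encoder are illustrative assumptions, not the architecture reported in the paper.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class RelativePoseNet(nn.Module):
    """Siamese encoder with shared weights; regresses relative translation (3)
    and rotation as a quaternion (4) from a pair of simultaneously acquired images."""
    def __init__(self):
        super().__init__()
        backbone = resnet18()
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # global features
        self.head = nn.Sequential(nn.Linear(512 * 2, 256), nn.ReLU(), nn.Linear(256, 7))

    def forward(self, img_a, img_b):
        fa = self.encoder(img_a).flatten(1)   # shared weights: same encoder for both views
        fb = self.encoder(img_b).flatten(1)
        return self.head(torch.cat([fa, fb], dim=1))

model = RelativePoseNet()
# Stage 1: train on pairs of synthetic images rendered from the virtual scenarios.
# ... training loop over synthetic pairs ...
torch.save(model.state_dict(), "synthetic_pretrained.pth")  # hypothetical file name

# Stage 2: transfer the learned weights and retrain with a few real image pairs,
# here freezing the shared encoder so only the regression head adapts.
model.load_state_dict(torch.load("synthetic_pretrained.pth"))
for p in model.encoder.parameters():
    p.requires_grad = False
# ... short fine-tuning loop over real pairs ...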
 

 
Author Nayeth I. Solorzano Alcivar; Robert Loor; Stalyn Gonzabay Yagual; Boris X. Vintimilla
  Title Statistical Representations of a Dashboard to Monitor Educational Videogames in Natural Language Type Conference Article
  Year 2020 Publication ETLTC – ACM Chapter: International Conference on Educational Technology, Language and Technical Communication; Fukushima, Japan, 27-31 January 2020 Abbreviated Journal  
  Volume 77 Issue Pages  
  Keywords  
  Abstract This paper explains how Natural Language (NL) processing by computers, through smart programs as a form of Machine Learning (ML), can represent large sets of quantitative data as written statements. The study recognized the need to improve the implemented web platform using a dashboard in which we collected a set of extensive data to measure assessment factors of using children's educational games. In this case, applying NL is a strategy to give assessments and to build and display more precise written statements to enhance the understanding of children's gaming behavior. We propose the development of a new tool to assess the use of written explanations rather than a statistical representation of feedback information, for the comprehension of parents and teachers who lack primary-level knowledge of statistics. Applying fuzzy logic theory, we present verbatim explanations of children's behavior playing educational videogames as NL interpretations instead of statistical representations. An educational series of digital game applications for mobile devices, identified as MIDI (Spanish acronym of “Interactive Didactic Multimedia for Children”) and linked to a dashboard in the cloud, is evaluated using the dashboard metrics. MIDI games tested in local primary schools help to evaluate the results of using the proposed tool. The guiding results allow analyzing the degrees of playability and usability factors obtained from the data produced when children play a MIDI game. The results obtained are presented in a comprehensive guiding evaluation report applying NL for parents and teachers. These guiding evaluations are useful to enhance the understanding of children's learning related to the school curricula applied to ludic digital games. (A minimal fuzzy-logic sketch follows the record.)
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 131  
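A minimal sketch of the kind of fuzzy-logic mapping the abstract describes: a numeric playability score from the dashboard is matched against fuzzy membership functions and rendered as a written statement for parents and teachers. The membership breakpoints, labels, and sentence templates below are invented for illustration and do not come from the paper.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def describe_playability(score):
    """Turn a 0-100 playability score into a natural-language statement."""
    memberships = {
        "low":    triangular(score, -1, 0, 50),
        "medium": triangular(score, 25, 50, 75),
        "high":   triangular(score, 50, 100, 101),
    }
    level = max(memberships, key=memberships.get)   # strongest membership wins
    templates = {
        "low":    "The child showed difficulty progressing through the game.",
        "medium": "The child played the game with moderate ease.",
        "high":   "The child played the game fluently and completed most levels.",
    }
    return templates[level]

print(describe_playability(82))  # -> "The child played the game fluently and completed most levels."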
 

 
Author Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla
  Title Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study Type Journal Article
  Year 2016 Publication Sensors Journal Abbreviated Journal  
  Volume Vol. 16 Issue Pages pp. 1-15  
  Keywords image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform  
  Abstract This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most of the approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR). (An illustrative fusion sketch follows the record.)  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 47  
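As a concrete illustration of one possible setup among those the paper compares, the sketch below fuses a registered visible/infrared pair with a 2-level Daubechies decomposition, averaging the approximation band and keeping the larger-magnitude detail coefficients. The wavelet, decomposition level, and fusion rules are assumptions for illustration; the paper's point is precisely that the best combination is selected from quantitative correlations rather than fixed in advance.

import numpy as np
import pywt

def dwt_fuse(visible, infrared, wavelet="db4", level=2):
    """Fuse two registered grayscale images of equal size: average the
    approximation coefficients, keep the detail coefficient with the
    larger absolute value at each position."""
    cv = pywt.wavedec2(visible.astype(float), wavelet, level=level)
    ci = pywt.wavedec2(infrared.astype(float), wavelet, level=level)
    fused = [(cv[0] + ci[0]) / 2.0]                       # approximation band
    for dv, di in zip(cv[1:], ci[1:]):                    # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dv, di)))
    return pywt.waverec2(fused, wavelet)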
 

 
Author Low S.; Inkawhich N.; Nina O.; Sappa A.; Blasch E.
  Title Multi-modal Aerial View Object Classification Challenge Results-PBVS 2022. Type Conference Article
  Year 2022 Publication Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24. Abbreviated Journal CONFERENCE  
  Volume 2022-June Issue Pages 417-425  
  Keywords  
  Abstract This paper details the results and main findings of the second iteration of the Multi-modal Aerial View Object Classification (MAVOC) challenge. The primary goal of both MAVOC challenges is to inspire research into methods for building recognition models that utilize both synthetic aperture radar (SAR) and electro-optical (EO) input modalities. Teams are encouraged/challenged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge showed a proof of concept that both modalities could be used together, the 2022 challenge focuses on detailed multi-modal models, using the same UNIfied COincident Optical and Radar for recognitioN (UNICORN) dataset and competition format that was used in 2021. Specifically, the challenge focuses on two techniques: (1) SAR classification and (2) SAR + EO classification. The bulk of this document is dedicated to discussing the top performing methods and describing their performance on our blind test set. Notably, all of the top ten teams outperform our baseline. For SAR classification, the top team showed a 129% improvement over our baseline and an 8% average improvement over the 2021 winner. The top team for SAR + EO classification shows a 165% improvement with a 32% average improvement over 2021.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 177  
 

 
Author Marta Diaz; Dennys Paillacho; Cecilio Angulo
  Title Evaluating Group-Robot Interaction in Crowded Public Spaces: A Week-Long Exploratory Study in the Wild with a Humanoid Robot Guiding Visitors Through a Science Museum. Type Journal Article
  Year 2015 Publication International Journal of Humanoid Robotics Abbreviated Journal  
  Volume Vol. 12 Issue Pages  
  Keywords Group-robot interaction; robotic-guide; social navigation; space management; spatial formations; group walking behavior; crowd behavior  
  Abstract This paper describes an exploratory study on group interaction with a robot guide in an open, large-scale, busy environment. For an entire week a humanoid robot was deployed in the popular Cosmocaixa Science Museum in Barcelona and guided hundreds of people through the museum facilities. The main goal of this experience is to study in the wild the episodes of the robot guiding visitors to a requested destination, focusing on the group behavior during displacement. The follow-me walking behavior and the face-to-face communication in a populated environment are analyzed in terms of guide-visitors interaction, grouping patterns and spatial formations. Results from observational data show that the space configurations spontaneously formed by the robot guide and visitors walking together did not always meet the robot's communicative and navigational requirements for successful guidance. Therefore, additional verbal and nonverbal prompts must be considered to effectively regulate the walking-together and follow-me behaviors. Finally, we discuss lessons learned and recommendations for the robot's spatial behavior in dense, crowded scenarios.  
  Address  
  Corporate Author Thesis  
  Publisher International Journal of Humanoid Robotics Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 34  
 

 
Author Dennis G. Romero; A. F. Neto; T. F. Bastos; Boris X. Vintimilla
  Title An approach to automatic assistance in physiotherapy based on on-line movement identification. Type Conference Article
  Year 2012 Publication VI Andean Region International Conference – ANDESCON 2012 Abbreviated Journal  
  Volume Issue Pages  
  Keywords patient rehabilitation, patient treatment, statistical analysis  
  Abstract This paper describes a method for on-line movement identification, oriented to the evaluation of the patient's movements during physiotherapy. An analysis based on the Mahalanobis distance between temporal windows is performed to identify the “idle/motion” state, which defines the beginning and end of the patient's movement, for posterior pattern extraction based on Relative Wavelet Energy from sequences of invariant moments. (An illustrative idle/motion detection sketch follows the record.)  
  Address  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Andean Region International Conference (ANDESCON), 2012 VI Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 24  
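A minimal sketch of the idle/motion decision described in the abstract: temporal windows of the feature sequence are compared, via Mahalanobis distance, against a reference window, and a threshold splits idle from motion segments. The choice of the first window as reference, the window length, and the threshold value are assumptions for illustration; the subsequent Relative Wavelet Energy pattern extraction is not shown.

import numpy as np
from scipy.spatial import distance

def idle_motion_states(features, window=10, threshold=3.0):
    """Label consecutive temporal windows as 'idle' or 'motion'.
    features: (T, D) array of per-frame feature vectors (e.g. invariant moments)."""
    reference = features[:window]                       # assumed resting segment
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(reference, rowvar=False))
    states = []
    for start in range(0, len(features) - window + 1, window):
        center = features[start:start + window].mean(axis=0)
        d = distance.mahalanobis(center, mu, cov_inv)   # distance to the reference window
        states.append("motion" if d > threshold else "idle")
    return states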
 

 
Author Angel D. Sappa; Cristhian A. Aguilera; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla; Ricardo Toledo
  Title Monocular visual odometry: a cross-spectral image fusion based approach Type Journal Article
  Year 2016 Publication Robotics and Autonomous Systems Journal Abbreviated Journal  
  Volume Vol. 86 Issue Pages pp. 26-36  
  Keywords Monocular visual odometry; LWIR-RGB cross-spectral imaging; Image fusion  
  Abstract This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with monocular-visible/infrared spectra are also provided, showing the advantages of the proposed scheme. (A sketch of a mutual information based evaluation follows the record.)  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 54  
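The sketch below shows one way a mutual information based evaluation metric, as mentioned in the abstract, can be computed to rank DWT fusion setups: the fused image that shares the most information with both source spectra scores highest. The histogram bin count and the simple sum of the two terms are assumptions for illustration; the exact metric used in the paper may differ.

import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two grayscale images, estimated from a joint histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def fusion_score(fused, visible, infrared):
    """Higher is better: information the fused image preserves from both sources."""
    return mutual_information(fused, visible) + mutual_information(fused, infrared)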
 

 
Author Ricaurte P.; Chilán C.; Cristhian A. Aguilera; Boris X. Vintimilla; Angel D. Sappa
  Title Feature Point Descriptors: Infrared and Visible Spectra Type Journal Article
  Year 2014 Publication Sensors Journal Abbreviated Journal  
  Volume Vol. 14 Issue Pages pp. 3690-3701  
  Keywords cross-spectral imaging; feature point descriptors  
  Abstract This manuscript evaluates the behavior of classical feature point descriptors when they are used on images from the long-wave infrared spectral band and compares them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results using a cross-spectral outdoor image dataset are presented and conclusions from these experiments are given. (An illustrative descriptor-robustness sketch follows the record.)  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 28  
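A small sketch of the kind of robustness test the abstract refers to, applied here to rotation only: detect and describe keypoints on an image and on a rotated copy, then measure how many descriptors still match. ORB is used as the example descriptor, and the rotation angle and score definition are illustrative choices; the paper evaluates several classical descriptors under rotation, scaling, blur, and noise with a state-of-the-art framework.

import cv2

def rotation_matching_score(image, angle=30):
    """Fraction of ORB keypoints whose descriptors still cross-check match
    after an in-plane rotation of the same grayscale image."""
    orb = cv2.ORB_create()
    h, w = image.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image, rot, (w, h))
    kp1, des1 = orb.detectAndCompute(image, None)
    kp2, des2 = orb.detectAndCompute(rotated, None)
    if des1 is None or des2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    return len(matches) / max(len(kp1), 1)

# Run the same test on a registered visible / LWIR pair to compare the
# descriptor's behavior in each spectral band (hypothetical file names):
# visible = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
# lwir = cv2.imread("lwir.png", cv2.IMREAD_GRAYSCALE)
# print(rotation_matching_score(visible), rotation_matching_score(lwir))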
 

 
Author Cristhian A. Aguilera; Francisco J. Aguilera; Angel D. Sappa; Ricardo Toledo
  Title Learning cross-spectral similarity measures with deep convolutional neural networks Type Conference Article
  Year 2016 Publication IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) Workshops Abbreviated Journal  
  Volume Issue Pages 267-275  
  Keywords  
  Abstract The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene that cannot be obtained with just images from one spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains. (An illustrative patch-similarity network sketch follows the record.)  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 48  
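One of the architecture families the abstract alludes to can be sketched as a 2-channel network: the visible and near-infrared patches are stacked as input channels and a single similarity score is regressed. The layer sizes, patch size, and the choice of the 2-channel variant (rather than the siamese or pseudo-siamese ones also explored in the paper) are illustrative assumptions.

import torch
import torch.nn as nn

class TwoChannelNet(nn.Module):
    """2-channel cross-spectral patch similarity network (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3), nn.ReLU(),
        )
        self.score = nn.Sequential(nn.Flatten(), nn.LazyLinear(256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, visible_patch, nir_patch):
        x = torch.cat([visible_patch, nir_patch], dim=1)  # (N, 2, 64, 64)
        return self.score(self.features(x))

net = TwoChannelNet()
scores = net(torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64))  # (8, 1) similarity scores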