|
Records |
Links |
|
Author |
J.L. Charco; A.D. Sappa; B.X. Vintimilla; H.O. Velesaca |
|
|
Title |
Camera pose estimation in multi-view environments: from virtual scenarios to the real world |
Type |
Journal Article |
|
Year |
2021 |
Publication |
Image and Vision Computing (Article number 104182) |
Abbreviated Journal |
|
|
|
Volume |
110 |
Issue |
|
Pages |
|
|
|
Keywords |
Relative camera pose estimation, Domain adaptation, Siamese architecture, Synthetic data, Multi-view environments |
|
|
Abstract |
This paper presents a domain adaptation strategy to efficiently train network architectures for estimating the relative camera pose in multi-view scenarios. The network architectures are fed with pairs of simultaneously acquired images; hence, to improve the accuracy of the solutions, and given the lack of large datasets with pairs of overlapped images, a domain adaptation strategy is proposed. The strategy consists of transferring the knowledge learned from synthetic images to real-world scenarios. For this, the networks are first trained using pairs of synthetic images, captured at the same time by a pair of cameras in a virtual environment; then, the learned weights of the networks are transferred to the real-world case, where the networks are retrained with a few real images. Different virtual 3D scenarios are generated to evaluate the relationship between the accuracy of the results and the similarity between virtual and real scenarios, considering both the geometry of the objects in the scene and the relative pose between cameras and objects. Experimental results and comparisons show that the accuracy of all the evaluated networks for estimating the camera pose improves when the proposed domain adaptation strategy is used, highlighting the importance of the similarity between virtual and real scenarios. |
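As an illustrative aside, the two-stage idea this abstract describes (pre-train on plentiful synthetic data, transfer the weights, retrain on a few real samples) can be sketched with a toy linear model in place of the paper's networks; all data, dimensions, and the regressor itself are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's networks: a linear regressor mapping image
# features to a pose parameter. The "synthetic" and "real" domains share
# structure but differ slightly (a small domain gap).
true_w_syn = np.array([1.0, -2.0, 0.5])
true_w_real = true_w_syn + np.array([0.1, -0.1, 0.05])

def train(X, y, w_init, lr=0.1, steps=500):
    """Plain gradient descent on mean squared error."""
    w = w_init.copy()
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Stage 1: train from scratch on plentiful synthetic pairs.
X_syn = rng.normal(size=(1000, 3))
y_syn = X_syn @ true_w_syn + rng.normal(scale=0.01, size=1000)
w_pre = train(X_syn, y_syn, np.zeros(3))

# Stage 2: transfer the learned weights and retrain with only two real
# samples (deterministic inputs, so the sketch is reproducible).
X_real = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
y_real = X_real @ true_w_real + rng.normal(scale=0.01, size=2)
w_ft = train(X_real, y_real, w_pre, steps=100)

# Baseline: train on the same two real samples from scratch.
w_scratch = train(X_real, y_real, np.zeros(3), steps=100)

err_ft = np.linalg.norm(w_ft - true_w_real)
err_scratch = np.linalg.norm(w_scratch - true_w_real)
print(f"fine-tuned error: {err_ft:.3f}, from-scratch error: {err_scratch:.3f}")
```

With too few real samples to pin down all parameters, the transferred weights supply the missing information, which is the intuition behind the reported accuracy gains.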
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
147 |
|
Permanent link to this record |
|
|
|
|
Author |
P. Ricaurte; C. Chilán; C. A. Aguilera-Carrasco; B. X. Vintimilla; Angel D. Sappa |
|
|
Title |
Performance Evaluation of Feature Point Descriptors in the Infrared Domain |
Type |
Conference Article |
|
Year |
2014 |
Publication |
2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 2014 |
Abbreviated Journal |
|
|
|
Volume |
1 |
Issue |
|
Pages |
545-550 |
|
|
Keywords |
Infrared Imaging, Feature Point Descriptors |
|
|
Abstract |
This paper presents a comparative evaluation of classical feature point descriptors when they are used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results using an outdoor image dataset are presented, together with a discussion of the differences with respect to the results obtained when images from the visible spectrum are considered. |
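For illustration, the kind of robustness protocol this abstract describes (describe fixed points, perturb the image, measure how often the nearest-neighbour match is still correct) can be sketched with a toy patch descriptor; the descriptor, image, and noise levels here are invented stand-ins, not the paper's classical descriptors or dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_descriptor(img, pts, size=8):
    """Toy patch descriptor (mean/std-normalized raw pixels); the paper
    evaluates classical descriptors instead."""
    h = size // 2
    descs = []
    for y, x in pts:
        patch = img[y - h:y + h, x - h:x + h].astype(float).ravel()
        descs.append((patch - patch.mean()) / (patch.std() + 1e-9))
    return np.array(descs)

def matching_rate(d_ref, d_tr):
    """Fraction of reference descriptors whose nearest neighbour among the
    transformed descriptors is the correct (same-index) point."""
    dists = np.linalg.norm(d_ref[:, None, :] - d_tr[None, :, :], axis=2)
    return float(np.mean(np.argmin(dists, axis=1) == np.arange(len(d_ref))))

# Synthetic image and a fixed grid of interest points.
img = rng.normal(size=(64, 64))
pts = [(y, x) for y in (16, 32, 48) for x in (16, 32, 48)]
d_ref = toy_descriptor(img, pts)

# Robustness-to-additive-noise curve: re-describe the same points under
# increasing noise and record how often matching still succeeds.
rates = []
for sigma in (0.1, 0.5, 1.0):
    noisy = img + rng.normal(scale=sigma, size=img.shape)
    rates.append(matching_rate(d_ref, toy_descriptor(noisy, pts)))
    print(f"noise sigma={sigma}: matching rate={rates[-1]:.2f}")
```

The same loop, repeated over rotation, scaling, and blur, yields the per-transformation robustness curves this kind of evaluation reports.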
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IEEE |
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
2014 International Conference on Computer Vision Theory and Applications (VISAPP) |
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
26 |
|
Permanent link to this record |
|
|
|
|
Author |
Nayeth I. Solorzano Alcivar; Robert Loor; Stalyn Gonzabay Yagual; Boris X. Vintimilla |
|
|
Title |
Statistical Representations of a Dashboard to Monitor Educational Videogames in Natural Language |
Type |
Conference Article |
|
Year |
2020 |
Publication |
ETLTC – ACM Chapter: International Conference on Educational Technology, Language and Technical Communication; Fukushima, Japan, 27-31 January 2020 |
Abbreviated Journal |
|
|
|
Volume |
77 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
This paper explains how Natural Language (NL) processing by computers, through smart programs as a form of Machine Learning (ML), can represent large sets of quantitative data as written statements. The study recognized the need to improve the implemented web platform using a dashboard in which we collected an extensive dataset to measure assessment factors in the use of children's educational games. In this case, applying NL is a strategy to build and display more precise written assessments that enhance the understanding of children's gaming behavior. We propose the development of a new tool that uses written explanations rather than statistical representations of feedback information, for the benefit of parents and teachers who lack primary-level knowledge of statistics. Applying fuzzy logic theory, we present verbatim explanations of children's behavior while playing educational videogames as NL interpretations instead of statistical representations. An educational series of digital game applications for mobile devices, identified as MIDI (Spanish acronym for “Interactive Didactic Multimedia for Children”) and linked to a dashboard in the cloud, is evaluated using the dashboard metrics. MIDI games tested in local primary schools help to evaluate the results of using the proposed tool. The guiding results allow analyzing the degrees of playability and usability obtained from the data produced when children play a MIDI game. The results are presented in a comprehensive guiding evaluation report applying NL for parents and teachers. These guiding evaluations are useful to enhance the understanding of children's learning related to the school curricula applied to ludic digital games. |
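As a loose illustration of the fuzzy-logic idea in this abstract, a numeric metric can be mapped to a verbal statement through membership functions; the label set, score range, and sentence template below are hypothetical, not the MIDI dashboard's actual rules:

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets over a 0-100 playability score.
LABELS = {
    "low":      lambda s: triangular(s, -1, 0, 50),
    "moderate": lambda s: triangular(s, 25, 50, 75),
    "high":     lambda s: triangular(s, 50, 100, 101),
}

def verbalize(score):
    """Pick the label with the highest membership and phrase it as a sentence."""
    label = max(LABELS, key=lambda k: LABELS[k](score))
    return f"The game shows {label} playability (score {score})."

print(verbalize(82))
```

Replacing a bar chart with such sentences is the kind of NL report the paper proposes for readers without a statistics background.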
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
131 |
|
Permanent link to this record |
|
|
|
|
Author |
Angel D. Sappa; Juan A. Carvajal; Cristhian A. Aguilera; Miguel Oliveira; Dennis G. Romero; Boris X. Vintimilla |
|
|
Title |
Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Sensors |
Abbreviated Journal |
|
|
|
Volume |
16 |
Issue |
|
Pages |
1-15 |
|
|
Keywords |
image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform |
|
|
Abstract |
This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from the combination of different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near-InfraRed (NIR) and Long-Wave InfraRed (LWIR). |
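For illustration, a minimal single-level Haar instance of the wavelet fusion family this abstract studies can be sketched as follows (average the approximation bands, keep the larger-magnitude detail coefficients); the fusion rule and the random images are illustrative choices, not the paper's evaluated setups:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition into approximation and detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.zeros((h, 2 * w))
    d = np.zeros((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.zeros((2 * h, 2 * w))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(visible, infrared):
    """Average the approximation bands; keep the strongest detail coefficients."""
    bands_v, bands_i = haar2d(visible), haar2d(infrared)
    ll = (bands_v[0] + bands_i[0]) / 2
    details = [np.where(np.abs(dv) >= np.abs(di), dv, di)
               for dv, di in zip(bands_v[1:], bands_i[1:])]
    return ihaar2d(ll, *details)

rng = np.random.default_rng(2)
vis = rng.random((8, 8))
ir = rng.random((8, 8))
fused = fuse(vis, ir)
print(fused.shape)
```

Swapping the wavelet family, the decomposition depth, or the per-band merging rule generates exactly the grid of setups such a comparative study ranks.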
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
47 |
|
Permanent link to this record |
|
|
|
|
Author |
S. Low; N. Inkawhich; O. Nina; A. Sappa; E. Blasch |
|
|
Title |
Multi-modal Aerial View Object Classification Challenge Results – PBVS 2022 |
Type |
Conference Article |
|
Year |
2022 |
Publication |
Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24 |
Abbreviated Journal |
|
|
Volume |
2022-June |
Issue |
|
Pages |
417-425 |
|
|
Keywords |
|
|
|
Abstract |
This paper details the results and main findings of the second iteration of the Multi-modal Aerial View Object Classification (MAVOC) challenge. The primary goal of both MAVOC challenges is to inspire research into methods for building recognition models that use both synthetic aperture radar (SAR) and electro-optical (EO) input modalities. Teams are challenged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge showed a proof of concept that both modalities could be used together, the 2022 challenge focuses on detailed multi-modal models, using the same UNIfied COincident Optical and Radar for recognitioN (UNICORN) dataset and competition format as in 2021. Specifically, the challenge focuses on two tracks: (1) SAR classification and (2) SAR + EO classification. The bulk of this document is dedicated to discussing the top-performing methods and describing their performance on our blind test set. Notably, all of the top ten teams outperform our baseline. For SAR classification, the top team showed a 129% improvement over our baseline and an 8% average improvement over the 2021 winner. The top team for SAR + EO classification shows a 165% improvement, with a 32% average improvement over 2021. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
177 |
|
Permanent link to this record |
|
|
|
|
Author |
Juan A. Carvajal; Dennis G. Romero; Angel D. Sappa |
|
|
Title |
Fine-tuning based deep convolutional networks for lepidopterous genus recognition |
Type |
Conference Article |
|
Year |
2016 |
Publication |
XXI IberoAmerican Congress on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1-9 |
|
|
Keywords |
|
|
|
Abstract |
This paper describes an image classification approach oriented to identify specimens of lepidopterous insects recognized at Ecuadorian ecological reserves. This work seeks to contribute to studies in the area of biology about genera of butterflies and also to facilitate the registration of unrecognized specimens. The proposed approach is based on the fine-tuning of three widely used pre-trained Convolutional Neural Networks (CNNs). This strategy is intended to overcome the reduced number of labeled images. Experimental results with a dataset labeled by expert biologists are presented; a recognition accuracy above 92% is reached. |
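As an illustrative aside, fine-tuning in the reduced-data regime this abstract mentions amounts to keeping a pre-trained feature extractor frozen and training only a new classification head; the sketch below substitutes a fixed random projection for the pre-trained CNN and uses invented data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Frozen "pre-trained" extractor: a fixed random projection standing in for
# the convolutional layers of a pre-trained CNN.
W_frozen = rng.normal(size=(64, 16))

def features(x):
    return np.tanh(x @ W_frozen)

def train_head(X, y, n_classes, lr=0.5, steps=300):
    """Fine-tuning step: train only the new softmax classification head."""
    F = features(X)
    W = np.zeros((F.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(steps):
        logits = F @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * F.T @ (p - onehot) / len(y)
    return W

# Invented small labeled set: 3 "genera", 20 samples each (the reduced-data
# regime the abstract mentions).
centers = 2.0 * rng.normal(size=(3, 64))
X = np.vstack([c + 0.1 * rng.normal(size=(20, 64)) for c in centers])
y = np.repeat(np.arange(3), 20)

W_head = train_head(X, y, 3)
acc = float(np.mean(np.argmax(features(X) @ W_head, axis=1) == y))
print(f"training accuracy: {acc:.2f}")
```

Because only the small head is trained, a few dozen labeled images per class can suffice, which is why the approach suits scarce expert-labeled data.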
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
53 |
|
Permanent link to this record |
|
|
|
|
Author |
Marta Diaz; Dennys Paillacho; Cecilio Angulo |
|
|
Title |
Evaluating Group-Robot Interaction in Crowded Public Spaces: A Week-Long Exploratory Study in the Wild with a Humanoid Robot Guiding Visitors Through a Science Museum. |
Type |
Journal Article |
|
Year |
2015 |
Publication |
International Journal of Humanoid Robotics |
Abbreviated Journal |
|
|
|
Volume |
12 |
Issue |
|
Pages |
|
|
|
Keywords |
Group-robot interaction; robotic-guide; social navigation; space management; spatial formations; group walking behavior; crowd behavior |
|
|
Abstract |
This paper describes an exploratory study on group interaction with a robot guide in an open, large-scale, busy environment. For an entire week, a humanoid robot was deployed in the popular Cosmocaixa Science Museum in Barcelona and guided hundreds of people through the museum facilities. The main goal of this experience is to study in the wild the episodes of the robot guiding visitors to a requested destination, focusing on group behavior during displacement. The follow-me walking behavior and face-to-face communication in a populated environment are analyzed in terms of guide-visitor interaction, grouping patterns and spatial formations. Results from observational data show that the space configurations spontaneously formed by the robot guide and visitors walking together did not always meet the robot's communicative and navigational requirements for successful guidance. Therefore, additional verbal and nonverbal prompts must be considered to effectively regulate the walking-together and follow-me behaviors. Finally, we discuss lessons learned and recommendations for the robot's spatial behavior in densely crowded scenarios. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
International Journal of Humanoid Robotics |
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
34 |
|
Permanent link to this record |
|
|
|
|
Author |
Dennis G. Romero; A. F. Neto; T. F. Bastos; Boris X. Vintimilla |
|
|
Title |
An approach to automatic assistance in physiotherapy based on on-line movement identification. |
Type |
Conference Article |
|
Year |
2012 |
Publication |
VI Andean Region International Conference – ANDESCON 2012 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
patient rehabilitation, patient treatment, statistical analysis |
|
|
Abstract |
This paper describes a method for on-line movement identification, oriented to the evaluation of patients' movements during physiotherapy. An analysis based on the Mahalanobis distance between temporal windows is performed to identify the “idle/motion” state, which defines the beginning and end of the patient's movement, for posterior pattern extraction based on Relative Wavelet Energy from sequences of invariant moments. |
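For illustration, the idle/motion segmentation step this abstract describes can be sketched as thresholding the Mahalanobis distance of each temporal window against a reference idle model; the feature stream, window size, and threshold here are invented, not the paper's physiotherapy data:

```python
import numpy as np

rng = np.random.default_rng(4)

def mahalanobis(window, ref_mean, ref_cov_inv):
    """Mahalanobis distance of a temporal window's mean from the idle model."""
    diff = window.mean(axis=0) - ref_mean
    return float(np.sqrt(diff @ ref_cov_inv @ diff))

# Invented 2-D feature stream: an idle phase followed by a movement phase.
idle = rng.normal(0.0, 0.1, size=(100, 2))
motion = rng.normal(2.0, 0.1, size=(50, 2))
stream = np.vstack([idle, motion])

# Reference "idle" model estimated from an initial calibration segment.
calib = stream[:30]
ref_mean = calib.mean(axis=0)
ref_cov_inv = np.linalg.inv(np.cov(calib.T) + 1e-6 * np.eye(2))

# Slide a temporal window; a distance above the threshold marks "motion",
# so the first and last crossings delimit the movement segment to analyze.
win, thresh = 10, 3.0
states = []
for t in range(0, len(stream) - win, win):
    d = mahalanobis(stream[t:t + win], ref_mean, ref_cov_inv)
    states.append("motion" if d > thresh else "idle")
print(states)
```

The detected start/end boundaries are what a downstream stage (such as the paper's Relative Wavelet Energy patterns) would consume.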
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IEEE |
Place of Publication |
Andean Region International Conference (ANDESCON), 2012 VI |
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
24 |
|
Permanent link to this record |
|
|
|
|
Author |
Angel D. Sappa; Cristhian A. Aguilera; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla; Ricardo Toledo |
|
|
Title |
Monocular visual odometry: a cross-spectral image fusion based approach |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Robotics and Autonomous Systems |
Abbreviated Journal |
|
|
|
Volume |
86 |
Issue |
|
Pages |
26-36 |
|
|
Keywords |
Monocular visual odometry; LWIR-RGB cross-spectral imaging; image fusion |
|
|
Abstract |
This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual-information-based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using datasets obtained with two different platforms are presented. Additionally, comparisons with a previous approach, as well as with the monocular visible and infrared spectra, are provided, showing the advantages of the proposed scheme. |
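As an illustrative aside, the mutual-information-based evaluation this abstract mentions can be sketched as follows; estimating MI from a joint intensity histogram is a standard formulation, while the images and bin count here are invented:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information estimated from the joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0  # restrict to occupied cells so the log is defined
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(5)
base = rng.random((32, 32))
related = 0.7 * base + 0.3 * rng.random((32, 32))   # shares structure with base
unrelated = rng.random((32, 32))

# A candidate fused image that preserves more of a source's structure scores
# higher, which is how an MI metric can rank candidate fusion setups.
print(mutual_information(base, related), mutual_information(base, unrelated))
```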
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
54 |
|
Permanent link to this record |
|
|
|
|
Author |
P. Ricaurte; C. Chilán; Cristhian A. Aguilera; Boris X. Vintimilla; Angel D. Sappa |
|
|
Title |
Feature Point Descriptors: Infrared and Visible Spectra |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Sensors |
Abbreviated Journal |
|
|
|
Volume |
14 |
Issue |
|
Pages |
3690-3701 |
|
|
Keywords |
cross-spectral imaging; feature point descriptors |
|
|
Abstract |
This manuscript evaluates the behavior of classical feature point descriptors when they are used in images from the long-wave infrared spectral band and compares them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results using a cross-spectral outdoor image dataset are presented and conclusions from these experiments are given. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
English |
Summary Language |
English |
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
cidis @ cidis @ |
Serial |
28 |
|
Permanent link to this record |