Angel D. Sappa, P. L. S., Henry O. Velesaca, Darío Carpio. (2022). Domain adaptation in image dehazing: exploring the usage of images from virtual scenarios. In 16th International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing (CGVCVIP 2022), July 20-22 (pp. 85–92).
Angel D. Sappa. (2022). ICT Applications for Smart Cities. In Intelligent Systems Reference Library (Vol. 224).
Benítez-Quintero J., Q. - P. O., Calderon, Fernanda. (2022). Notes on Sulfur Fluxes in Urban Areas with Industrial Activity. In 20th LACCEI International Multi-Conference for Engineering, Education Caribbean Conference for Engineering and Technology, LACCEI 2022 (Vol. 2022-July).
Daniela Rato, M. O., Victor Santos, Manuel Gomes & Angel Sappa. (2022). A Sensor-to-Pattern Calibration Framework for Multi-Modal Industrial Collaborative Cells. Journal of Manufacturing Systems, Vol. 64, pp. 497–507.
Henry O. Velesaca, P. L. S., Dario Carpio, Rafael E. Rivadeneira, Ángel Sánchez, Angel D. Sappa. (2022). Video Analytics in Urban Environments: Challenges and Approaches. In ICT Applications for Smart Cities, Intelligent Systems Reference Library (Vol. 224, pp. 101–122).
Jacome-Galarza L.-R., R. R. M. - A., Paillacho Corredores J., Benavides Maldonado J.-L. (2022). Time series in sensor data using state of the art deep learning approaches: A systematic literature review. In VII International Conference on Science, Technology and Innovation for Society (CITIS 2021), May 26-28. Smart Innovation, Systems and Technologies (Vol. 252, pp. 503–514).
Abstract: IoT (Internet of Things) and AI (Artificial Intelligence) have become supporting tools for many current technological solutions due to significant advances in both areas. The development of the IoT in various technological fields has made it possible to predict the behavior of mechanical, electronic, and control systems using sensor networks. Deep learning architectures, in turn, have achieved excellent results in complex tasks where patterns must be extracted from time series. This study reviews the most effective deep learning architectures for forecasting and trend extraction from data produced by IoT sensors, with the aim of contributing to application fields in which the IoT is driving technological advances, such as smart cities, Industry 4.0, sustainable agriculture, and robotics. The architectures studied for processing time-series data include: LSTM (Long Short-Term Memory), for its high prediction accuracy and its ability to automatically process input sequences; CNNs (Convolutional Neural Networks), mainly for human activity recognition; hybrid architectures, in which a convolutional layer pre-processes the data and an RNN (Recurrent Neural Network) fuses and classifies data from different sensors; and stacked LSTM autoencoders, which extract variables from time series in an unsupervised way without manual data pre-processing. Finally, technologies well known in natural language processing, such as the attention mechanism and embeddings, are also applied to time-series prediction, with promising results.
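As an illustration of the LSTM forecasting pattern surveyed above, the following is a minimal single-layer LSTM forward pass over a toy univariate sensor series, written in plain NumPy. The gate layout, dimensions, toy signal, and random weights are illustrative assumptions, not taken from any of the reviewed works.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, h, c, W, U, b):
    """Run a one-layer LSTM over a sequence xs of shape (T, input_dim).

    W: (4*H, input_dim), U: (4*H, H), b: (4*H,).
    Gate order in the stacked weights: input, forget, cell, output.
    Returns the final hidden and cell states.
    """
    H = h.shape[0]
    for x in xs:
        z = W @ x + U @ h + b
        i = sigmoid(z[0:H])        # input gate
        f = sigmoid(z[H:2*H])      # forget gate
        g = np.tanh(z[2*H:3*H])    # candidate cell state
        o = sigmoid(z[3*H:4*H])    # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
T, D, H = 24, 1, 8                                   # 24 steps of a univariate reading
xs = np.sin(np.linspace(0.0, 3.0, T)).reshape(T, D)  # toy IoT sensor signal
W = rng.normal(0.0, 0.1, (4 * H, D))
U = rng.normal(0.0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
w_out = rng.normal(0.0, 0.1, H)                      # linear read-out head

h, c = lstm_forward(xs, np.zeros(H), np.zeros(H), W, U, b)
forecast = w_out @ h                                 # one-step-ahead prediction
```

In a trained forecaster the weights would of course be fitted (e.g. by backpropagation through time); here they are random, so only the mechanics of the recurrence are shown.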
Jorge L. Charco, A. D. S., Boris X. Vintimilla. (2022). Human Pose Estimation through A Novel Multi-View Scheme. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications VISIGRAPP 2022 (Vol. 5, pp. 855–862).
Abstract: This paper presents a multi-view scheme to tackle the challenging problem of self-occlusion in human pose estimation. The proposed approach first obtains the human body joints from a set of images captured simultaneously from different views. It then enhances the obtained joints using a multi-view scheme: the joints from a given view are used to improve poorly estimated joints from another view, especially in self-occlusion cases. A network architecture initially proposed for the monocular case is adapted to the proposed multi-view scheme. Experimental results and comparisons with state-of-the-art approaches on the Human3.6M dataset show improvements in the accuracy of body-joint estimation.
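The paper's network-based scheme is not reproduced here; as a self-contained sketch of the multi-view geometry that motivates it (a joint visible in two calibrated views constrains its 3-D position), the following shows classical DLT triangulation in NumPy. The camera matrices and joint position are made-up toy values for the example.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one joint seen in two calibrated views.

    P1, P2: (3, 4) camera projection matrices; x1, x2: (2,) pixel coordinates.
    Solves A X = 0 for the homogeneous 3-D point via SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null-space vector = homogeneous point
    return X[:3] / X[3]

def project(P, X):
    """Project a 3-D point X through camera P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy setup: an identity view and a second view translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])       # a "joint" in 3-D space

x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate_dlt(P1, P2, x1, x2)  # recovers X_true in the noise-free case
```

With noise-free observations the DLT solution recovers the joint exactly; a learned multi-view scheme like the paper's is aimed at the noisy, self-occluded cases where per-view detections are unreliable.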
Jorge L. Charco, A. D. S., Boris X. Vintimilla, Henry O. Velesaca. (2022). Human Body Pose Estimation in Multi-view Environments. In ICT Applications for Smart Cities, Intelligent Systems Reference Library (Vol. 224, pp. 79–99).
Low S., I. N., Nina O., Sappa A. and Blasch E. (2022). Multi-modal Aerial View Object Classification Challenge Results - PBVS 2022. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24 (Vol. 2022-June, pp. 417–425).
Abstract: This paper details the results and main findings of the second iteration of the Multi-modal Aerial View Object Classification (MAVOC) challenge. The primary goal of both MAVOC challenges is to inspire research into recognition models that use both synthetic aperture radar (SAR) and electro-optical (EO) input modalities; teams are encouraged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge showed a proof of concept that the two modalities could be used together, the 2022 challenge focuses on more detailed multi-modal models, using the same UNIfied COincident Optical and Radar for recognitioN (UNICORN) dataset and competition format as in 2021. Specifically, the challenge comprises two tracks: (1) SAR classification and (2) SAR + EO classification. The bulk of this document discusses the top-performing methods and describes their performance on our blind test set. Notably, all of the top ten teams outperform our baseline. For SAR classification, the top team showed a 129% improvement over our baseline and an 8% average improvement over the 2021 winner. The top team for SAR + EO classification shows a 165% improvement over the baseline and a 32% average improvement over 2021.
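A minimal sketch of the late-fusion idea behind the SAR + EO track: features are extracted from each modality independently and concatenated before classification. Random linear maps stand in for the per-modality CNN backbones that actual challenge entries use, and the chip size and class count here are placeholder assumptions, not the UNICORN specifics.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(42)
n_classes = 10                                  # placeholder class count

sar = rng.normal(size=(56, 56)).ravel()         # stand-in SAR chip
eo = rng.normal(size=(56, 56)).ravel()          # coincident stand-in EO chip

# Independent per-modality "feature extractors" (random linear maps plus
# ReLU, as placeholders for trained CNN backbones).
W_sar = rng.normal(0.0, 0.01, (32, sar.size))
W_eo = rng.normal(0.0, 0.01, (32, eo.size))
f = np.concatenate([np.maximum(W_sar @ sar, 0.0),
                    np.maximum(W_eo @ eo, 0.0)])  # late fusion by concatenation

W_cls = rng.normal(0.0, 0.01, (n_classes, f.size))
probs = softmax(W_cls @ f)                      # class probabilities
pred = int(np.argmax(probs))
```

The design point is that each branch can exploit modality-specific structure (speckle statistics in SAR, texture and color in EO) before the fused representation is classified.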
Nayeth I. Solorzano, L. C. H., Leslie del R. Lima, Dennys F. Paillacho & Jonathan S. Paillacho. (2022). Visual Metrics for Educational Videogames Linked to Socially Assistive Robots in an Inclusive Education Framework. In Smart Innovation, Systems and Technologies. International Conference in Information Technology & Education (ICITED 21), July 15-17 (Vol. 256, pp. 119–132).
Abstract: In gamification, the development of "visual metrics for educational video games linked to social assistance robots in the framework of inclusive education" seeks to support not only typically developing children but also children with specific psychosocial disabilities, such as those diagnosed with autism spectrum disorder (ASD). However, personalizing each child's experience is a limitation, especially for children with atypical behaviors. 'LOLY,' a social assistance robot, works together with the mobile applications of the 'MIDI-AM' family of educational video games, forming a social robotic platform. This platform offers the user curricular digital content to reinforce teaching-learning processes and to motivate both typically developing children and those with ASD. In the present study, technical and programmatic experiments and focus groups were carried out, using open-source facial recognition algorithms to monitor and evaluate the user's degree of attention throughout the interaction. The objective is to evaluate, through established metrics, the management of a social robot linked to educational video games, monitoring the user's facial expressions during use, and to define a scenario that ensures consistent results for applicability in therapy and teaching reinforcement, particularly in inclusive early childhood education.