Luis C. Herrera, L. del R. L., Nayeth I. Solorzano, Jonathan S. Paillacho & Dennys Paillacho. (2021). Metrics Design of Usability and Behavior Analysis of a Human-Robot-Game Platform. In The 2nd International Conference on Applied Technologies (ICAT 2020), December 2-4. Communications in Computer and Information Science (Vol. 1388, pp. 164–178).
|
Luis Chuquimarca, Boris Vintimilla & Sergio Velastin. (2023). Banana Ripeness Level Classification using a Simple CNN Model Trained with Real and Synthetic Datasets. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) (pp. 536–543).
|
Luis Chuquimarca, Boris X. Vintimilla & Sergio Velastin. (2024). Classifying Healthy and Defective Fruits with a Siamese Architecture and CNN Models. Accepted at the 14th International Conference on Pattern Recognition Systems (ICPRS).
|
Luis Chuquimarca, R. P., Paula Gonzalez, Boris Vintimilla & Sergio Velastin. (2023). Fruit Defect Detection using CNN Models with Real and Virtual Data. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) (pp. 272–279).
|
Lukas Danev, Marten Hamann, Nicolas Fricke, Tobias Hollarek, & Dennys Paillacho. (2017). Development of animated facial expression to express emotions in a robot: RobotIcon. In IEEE Ecuador Technical Chapter Meeting (ETCM) (Vol. 2017-January, pp. 1–6).
|
Ma. Paz Velarde, Erika Perugachi, Dennis G. Romero, Ángel D. Sappa, & Boris X. Vintimilla. (2015). Análisis del movimiento de las extremidades superiores aplicado a la rehabilitación física de una persona usando técnicas de visión artificial. Revista Tecnológica ESPOL-RTE, Vol. 28, pp. 1–7.
Abstract: During physical rehabilitation, the specialist's diagnosis is commonly based on qualitative observations that, in some cases, lead to subjective conclusions. This work proposes a quantitative approach, intended to assist physiotherapists, through an interactive, low-cost tool that measures upper-limb movements. These movements are captured by an RGB-D sensor and processed with the proposed methodology, resulting in an efficient representation of movements and enabling the quantitative evaluation of upper-limb movements.
|
Marjorie Chalen & Boris X. Vintimilla. (2019). Towards Action Prediction Applying Deep Learning. Latin American Conference on Computational Intelligence (LA-CCI), Guayaquil, Ecuador, November 11-15, 2019, pp. 1–3.
Abstract: Future action prediction from video analysis is a growing task in computer vision, in which predictions are made from incomplete action executions. Deep learning plays an important role in this framework. This paper reviews recent techniques and relevant datasets used for the human action prediction task.
|
Marta Diaz, Dennys Paillacho, & Cecilio Angulo. (2015). Evaluating Group-Robot Interaction in Crowded Public Spaces: A Week-Long Exploratory Study in the Wild with a Humanoid Robot Guiding Visitors Through a Science Museum. International Journal of Humanoid Robotics, Vol. 12.
Abstract: This paper describes an exploratory study on group interaction with a robot guide in an open, large-scale, busy environment. For an entire week, a humanoid robot was deployed in the popular Cosmocaixa Science Museum in Barcelona and guided hundreds of people through the museum facilities. The main goal of this experience is to study, in the wild, the episodes of the robot guiding visitors to a requested destination, focusing on group behavior during displacement. The follow-me walking behavior and face-to-face communication in a populated environment are analyzed in terms of guide-visitor interaction, grouping patterns, and spatial formations. Results from observational data show that the space configurations spontaneously formed by the robot guide and visitors walking together did not always meet the robot's communicative and navigational requirements for successful guidance. Therefore, additional verbal and nonverbal prompts must be considered to regulate effectively the walking-together and follow-me behaviors. Finally, we discuss lessons learned and recommendations for a robot's spatial behavior in dense, crowded scenarios.
|
Mehri, A., Ardakani, P.B., Sappa, A.D. (2021). LiNet: A Lightweight Network for Image Super Resolution. In 25th International Conference on Pattern Recognition (ICPR), January 10-15, 2021 (pp. 7196–7202).
|
Mehri, A., Ardakani, P.B., Sappa, A.D. (2021). MPRNet: Multi-Path Residual Network for Lightweight Image Super Resolution. In IEEE Winter Conference on Applications of Computer Vision (WACV 2021), January 5-9, 2021 (pp. 2703–2712).
|