Luis Chuquimarca, Boris Vintimilla, & Sergio Velastin. (2024). A Review of External Quality Inspection for Fruit Grading using CNN Models (Vol. 14).
|
Luis Chuquimarca, Boris X. Vintimilla, & Sergio Velastin. (2024). Classifying Healthy and Defective Fruits with a Multi-Input Architecture and CNN Models. In 14th International Conference on Pattern Recognition Systems (ICPRS), London, 15–18 July 2024.
|
Luis Chuquimarca, R. P., Paula Gonzalez, Boris Vintimilla, & Sergio Velastin. (2023). Fruit defect detection using CNN models with real and virtual data. In 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023), Lisbon, 19–21 February 2023 (Vol. 4, pp. 272–279).
|
Luis Jacome-Galarza, M. V.-C., Miguel Realpe-Robalino, & Jose Benavides-Maldonado. (2021). Software Engineering and Distributed Computing in image processing intelligent systems: a systematic literature review. In 19th LACCEI International Multi-Conference for Engineering, Education, and Technology.
Abstract: Deep learning is experiencing an upward technology trend that is revolutionizing intelligent systems in several domains, such as image and speech recognition, machine translation, social network filtering, and the like. By reviewing a total of 80 studies reported from 2016 to 2020, the present article evaluates the application of software engineering to the field of intelligent image processing systems; it also offers insights into aspects of distributed computing for this type of system. Results indicate that several topics of software engineering are mostly applied when academics are involved in developing projects associated with this kind of intelligent system. The findings provide evidence that Apache Spark is the most utilized distributed computing framework for image processing. In addition, TensorFlow is a popular framework used to build convolutional neural networks, which are the prevailing deep learning algorithms used in intelligent image processing systems. Also, among big cloud providers, Amazon Web Services is the preferred computing platform across industry sectors, followed by Google Cloud.
|
Lukas Danev, Marten Hamann, Nicolas Fricke, Tobias Hollarek, & Dennys Paillacho. (2017). Development of animated facial expression to express emotions in a robot: RobotIcon. In IEEE Ecuador Technical Chapter Meeting (ETCM) (Vol. 2017-January, pp. 1–6).
|
M. Diaz, Dennys Paillacho, C. Angulo, O. Torres, J. González, & J. Albo-Canals. (2014). A Week-long Study on Robot-Visitors Spatial Relationships during Guidance in a Sciences Museum. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 152–153).
Abstract: In order to observe spatial relationships in social human-robot interactions, a field trial was carried out within the CosmoCaixa Science Museum in Barcelona. The follow-me episodes studied showed that the space configurations formed by guide and visitors walking together did not always fit the robot social affordances and navigation requirements to perform the guidance successfully; thus, additional communication prompts are considered to regulate effectively the walking-together and follow-me behaviors.
|
M. Oliveira, L. Seabra Lopes, G. Hyun Lim, S. Hamidreza Kasaei, Angel D. Sappa, & A. Tomé. (2015). Concurrent Learning of Visual Codebooks and Object Categories in Open-ended Domains. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany (pp. 2488–2495). IEEE.
Abstract: In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are usually constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns in an incremental and online fashion both the visual object category representations as well as the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach contains similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring less examples, and with similar accuracies, when compared to the classical Bag of Words approach using codebooks constructed offline.
|
Ma. Paz Velarde, Erika Perugachi, Dennis G. Romero, Ángel D. Sappa, & Boris X. Vintimilla. (2015). Análisis del movimiento de las extremidades superiores aplicado a la rehabilitación física de una persona usando técnicas de visión artificial [Analysis of upper-limb movement applied to the physical rehabilitation of a person using computer vision techniques]. Revista Tecnológica ESPOL-RTE, Vol. 28, pp. 1–7.
Abstract: During physical rehabilitation, the diagnosis given by the specialist is commonly based on qualitative observations that, in some cases, lead to subjective conclusions. This work proposes a quantitative approach, intended to assist physiotherapists, through an interactive, low-cost tool that measures upper-limb movements. These movements are captured by an RGB-D sensor and processed with the proposed methodology, resulting in an efficient representation of the movements and enabling their quantitative evaluation.
|
Marjorie Chalen, & Boris X. Vintimilla. (2019). Towards Action Prediction Applying Deep Learning. Latin American Conference on Computational Intelligence (LA-CCI), Guayaquil, Ecuador, 11–15 November 2019, pp. 1–3.
Abstract: Future action prediction from video analysis is a computer vision task of growing interest, in which predictions must be made from incomplete action executions. Deep learning plays an important role in this task. Accordingly, this paper reviews recent techniques and the pertinent datasets used for human action prediction.
|
Marta Diaz, Dennys Paillacho, & Cecilio Angulo. (2015). Evaluating Group-Robot Interaction in Crowded Public Spaces: A Week-Long Exploratory Study in the Wild with a Humanoid Robot Guiding Visitors Through a Science Museum. International Journal of Humanoid Robotics, Vol. 12.
Abstract: This paper describes an exploratory study on group interaction with a robot-guide in an open large-scale busy environment. For an entire week a humanoid robot was deployed in the popular Cosmocaixa Science Museum in Barcelona and guided hundreds of people through the museum facilities. The main goal of this experience is to study in the wild the episodes of the robot guiding visitors to a requested destination, focusing on the group behavior during displacement. The walking behavior follow-me and the face-to-face communication in a populated environment are analyzed in terms of guide-visitors interaction, grouping patterns and spatial formations. Results from observational data show that the space configurations spontaneously formed by the robot guide and visitors walking together did not always meet the robot communicative and navigational requirements for successful guidance. Therefore, additional verbal and nonverbal prompts must be considered to regulate effectively the walking-together and follow-me behaviors. Finally, we discuss lessons learned and recommendations for robot’s spatial behavior in dense crowded scenarios.
|