Gisel Bastidas G., P. M. V., Boris Vintimilla, & Angel D. Sappa. (2025). Application-Guided Image Fusion: A Path to Improve Results in High-Level Vision Tasks. 20th International Conference on Computer Vision Theory and Applications (VISIGRAPP 2025), Porto, 26–28 February 2025, Vol. 3, pp. 178–187.
Henry O. Velesaca, A. D. S., & J. A. H. (2025). A Case Study of Anomaly Detection in Tinplate Lids: Supervised vs. Unsupervised Approaches. 11th International Conference on Automation, Robotics, and Applications (ICARA 2025).
Henry O. Velesaca & Angel D. Sappa. (2025). Seeing the Unseen: AI-Powered Camouflaged Pest Detection. 9th International Conference on Machine Vision and Information Technology (CMVIT 2025).
Constantine Macías A., T. P. A., Miguel Realpe, Jenifer Suárez Moncada, Diego Páez Rosas, & Enrique Peláez Jarrín. (2024). Leveraging Deep Learning Techniques for Marine and Coastal Wildlife Using Instance Segmentation: A Study on Galápagos Sea Lions. In 8th Ecuador Technical Chapters Meeting (ETCM 2024), Cuenca, 15–18 October 2024.
Leo Thomas Ramos & Angel D. Sappa. (2025). Leveraging U-Net and selective feature extraction for land cover classification using remote sensing imagery. Scientific Reports, Vol. 15.
Henry O. Velesaca & Juan A. Holgado-Terriza. (2025). OPC-UA in artificial intelligence: a systematic review of the integration of data mining and NLP in industrial processes. Manufacturing Review, Vol. 12.
Nathan Inkawhich, C. T., Justice Wheelwright, Oliver Nina, Dylan Bowald, Angel Sappa, & Erik Blasch. (2025). 4th Multi-modal Aerial View Image Challenge: SAR Classification – PBVS 2025. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2025).
Miguel Oliveira, Vítor Santos, Angel D. Sappa, Paulo Dias, & A. Paulo Moreira. (2016). Incremental Texture Mapping for Autonomous Driving. Robotics and Autonomous Systems Journal, Vol. 84, pp. 113–128.
Abstract: Autonomous vehicles carry a large number of on-board sensors, not only to provide coverage all around the vehicle but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to build a single, unified representation fed by the data from all these sensors. We propose an algorithm capable of mapping texture collected from vision-based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh, which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids poor-quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine-quality textures.
Ma. Paz Velarde, Erika Perugachi, Dennis G. Romero, Ángel D. Sappa, & Boris X. Vintimilla. (2015). Analysis of upper-limb movement applied to the physical rehabilitation of a person using computer vision techniques [Análisis del movimiento de las extremidades superiores aplicado a la rehabilitación física de una persona usando técnicas de visión artificial]. Revista Tecnológica ESPOL-RTE, Vol. 28, pp. 1–7.
Abstract: During physical rehabilitation, the diagnosis given by the specialist is commonly based on qualitative observations that, in some cases, lead to subjective conclusions. This work proposes a quantitative approach, intended to assist physiotherapists, through an interactive, low-cost tool that measures upper-limb movements. These movements are captured by an RGB-D sensor and processed with the proposed methodology, yielding an efficient representation of movements and enabling the quantitative evaluation of upper-limb motion.
Marjorie Chalen & Boris X. Vintimilla. (2019). Towards Action Prediction Applying Deep Learning. Latin American Conference on Computational Intelligence (LA-CCI), Guayaquil, Ecuador, 11–15 November 2019, pp. 1–3.
Abstract: Future action prediction from video is a growing computer vision task in which predictions must be made from incomplete action executions. Deep learning plays an important role in this task framework. This paper reviews recent techniques and the pertinent datasets used in the human action prediction task.