Luis Chuquimarca, B. V. & S. V. (2023). Banana Ripeness Level Classification using a Simple CNN Model Trained with Real and Synthetic Datasets. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023), Lisbon, 19-21 February 2023 (pp. 536–543).
|
Miguel A. Murillo, J. E. A., & Miguel Realpe. (2021). Beyond visual and radio line of sight UAVs monitoring system through open software in a simulated environment. In The 2nd International Conference on Applied Technologies (ICAT 2020), December 2-4. Communications in Computer and Information Science (Vol. 1388, pp. 629–642).
Abstract: The problem of loss of line of sight when operating drones has become a reality with adverse effects for professional and amateur drone operators, since it brings technical problems such as loss of data collected by the device at one or more instants of time during the flight, and even misunderstandings of a legal nature when the drone flies over prohibited or private places. This paper describes the implementation of a drone monitoring system that uses the Internet as a long-range communication network in order to avoid the problem of loss of communication between the ground station and the device. For this, a simulated environment is used through an appropriate open-software tool. The operation of the system is based on a client that makes requests to a main server, which in turn communicates with several servers, each of which has a drone connected to it. In the proposed system, when a drone is ready to start a flight, its server informs the main server of the system, which in turn gives feedback to the client informing it that the device is ready to carry out the flight; this way, clients can send a mission to the device and track its progress in real time on the screen of their web application.
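The relay pattern this abstract describes (client → main server → per-drone servers) can be sketched in a few lines. This is an illustrative, in-memory reduction of the idea only; the class and method names are invented here and are not from the paper, which uses networked servers and a web client.

```python
class DroneServer:
    """Stands in for the server attached to a single drone."""
    def __init__(self, drone_id):
        self.drone_id = drone_id
        self.ready = False
        self.missions = []

    def report_ready(self, main_server):
        # The drone's server informs the main server the device can fly.
        self.ready = True
        main_server.register(self)

    def receive_mission(self, mission):
        self.missions.append(mission)
        return {"drone": self.drone_id, "status": "mission accepted"}


class MainServer:
    """Central server the client talks to; relays to drone servers."""
    def __init__(self):
        self.drones = {}

    def register(self, drone_server):
        self.drones[drone_server.drone_id] = drone_server

    def ready_drones(self):
        # Feedback to the client: which devices are ready for a flight.
        return sorted(self.drones)

    def send_mission(self, drone_id, mission):
        drone = self.drones.get(drone_id)
        if drone is None or not drone.ready:
            return {"drone": drone_id, "status": "unavailable"}
        return drone.receive_mission(mission)


main = MainServer()
uav = DroneServer("uav-01")
uav.report_ready(main)
print(main.ready_drones())                         # ['uav-01']
print(main.send_mission("uav-01", {"waypoints": 3}))
```

In the paper's setting each of these objects would be a separate process reachable over the Internet; the control flow, however, is the same relay.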
|
Patricia L. Suarez, D. C., Angel Sappa. (2023). Boosting Guided Super-Resolution Performance with Synthesized Images. In 17th International Conference on Signal Image Technology & Internet Based Systems, Bangkok, 8-10 November 2023 (pp. 189–195).
|
Carlos Monsalve, Alain April, & Alain Abran. (2011). BPM and requirements elicitation at multiple levels of abstraction: A review. In IADIS International Conference on Information Systems 2011 (pp. 237–242).
Abstract: Business process models can be useful for requirements elicitation, among other things. Software development depends on the quality of the requirements elicitation activities, and so adequately modeling business processes (BPs) is critical. A key factor in achieving this is the active participation of all the stakeholders in the development of a shared vision of BPs. Unfortunately, organizations often find themselves left with inconsistent BPs that do not cover all the stakeholders’ needs and constraints. However, consolidation of the various stakeholder requirements may be facilitated through the use of multiple levels of abstraction (MLA). This article contributes to the research into MLA use in business process modeling (BPM) for software requirements by reviewing the theoretical foundations of MLA and their use in various BP-oriented approaches.
|
Charco, J. L., Sappa, A. D., Vintimilla, B. X., & Velesaca, H. O. (2021). Camera pose estimation in multi-view environments: from virtual scenarios to the real world. In Image and Vision Computing, Vol. 110, Article 104182.
Abstract: This paper presents a domain adaptation strategy to efficiently train network architectures for estimating the relative camera pose in multi-view scenarios. The network architectures are fed by a pair of simultaneously acquired images; hence, in order to improve the accuracy of the solutions, and due to the lack of large datasets with pairs of overlapped images, a domain adaptation strategy is proposed. The domain adaptation strategy consists of transferring the knowledge learned from synthetic images to real-world scenarios. For this, the networks are first trained using pairs of synthetic images, which are captured at the same time by a pair of cameras in a virtual environment; then, the learned weights of the networks are transferred to the real-world case, where the networks are retrained with a few real images. Different virtual 3D scenarios are generated to evaluate the relationship between the accuracy of the result and the similarity between virtual and real scenarios, considering both the geometry of the objects contained in the scene and the relative pose between camera and objects in the scene. Experimental results and comparisons are provided, showing that the accuracy of all the evaluated networks for estimating the camera pose improves when the proposed domain adaptation strategy is used, highlighting the importance of the similarity between virtual and real scenarios.
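The two-stage strategy in this abstract (pretrain on plentiful synthetic data, then transfer the learned weights and retrain on a few real samples) can be illustrated with a deliberately tiny stand-in model. A linear regressor trained by SGD replaces the pose-estimation networks here; the data, model, and hyperparameters are all invented for illustration and are not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_fit(X, y, w, lr=0.01, epochs=5):
    """Plain least-squares SGD; returns the updated weight vector."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = w - lr * (xi @ w - yi) * xi
    return w

true_w = np.array([1.0, -2.0, 0.5])   # the "real-world" mapping

# Stage 1: large synthetic set; the virtual scenario is a slightly
# biased rendering of the real one (true_w + 0.3).
X_syn = rng.normal(size=(500, 3))
y_syn = X_syn @ (true_w + 0.3) + rng.normal(scale=0.05, size=500)
w_pre = sgd_fit(X_syn, y_syn, np.zeros(3))

# Stage 2: transfer the pretrained weights, retrain on a few real pairs.
X_real = rng.normal(size=(20, 3))
y_real = X_real @ true_w + rng.normal(scale=0.05, size=20)
w_adapted = sgd_fit(X_real, y_real, w_pre.copy(), epochs=20)

err_adapted = np.linalg.norm(w_adapted - true_w)
print(err_adapted < 1.0)   # the transferred model lands near true_w
```

The closer the synthetic bias is to zero (i.e., the more similar the virtual scenario is to the real one), the better the starting point for the fine-tuning stage, which mirrors the similarity finding reported in the paper.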
|
Luis Chuquimarca, B. X. V. & S. V. (2024). Classifying Healthy and Defective Fruits with a Siamese Architecture and CNN Models. In 14th International Conference on Pattern Recognition Systems (ICPRS).
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture. In 19th International Conference on Image Analysis and Processing (pp. 287–297).
|
Armin Mehri, & Angel D. Sappa. (2019). Colorizing Near Infrared Images through a Cyclic Adversarial Approach of Unpaired Samples. In Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2019), Long Beach, California, United States (pp. 971–979).
Abstract: This paper presents a novel approach for colorizing near infrared (NIR) images. The approach is based on image-to-image translation using a cycle-consistent adversarial network to learn the color channels from unpaired datasets. The approach uses as generators tailored networks that require less computation time, converge faster, and generate high-quality samples. The obtained results have been evaluated quantitatively, using standard evaluation metrics, and qualitatively, showing considerable improvements with respect to the state of the art.
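The reason a cycle-consistent network can learn from unpaired NIR and color images is its cycle loss: mapping NIR → color → NIR (and color → NIR → color) must reproduce the input, so no pixel-aligned pairs are needed. A toy sketch of that objective, with trivial affine functions standing in for the paper's CNN generators (everything here is illustrative, not the authors' architecture):

```python
import numpy as np

def G(nir):
    """Toy forward generator: NIR -> color."""
    return 2.0 * nir + 1.0

def F(rgb):
    """Toy backward generator: color -> NIR."""
    return (rgb - 1.0) / 2.0

def cycle_loss(nir_batch, rgb_batch):
    """L1 cycle-consistency: x -> G -> F -> x and y -> F -> G -> y.
    Only needs a batch from each domain; the batches are unpaired."""
    forward = np.mean(np.abs(F(G(nir_batch)) - nir_batch))
    backward = np.mean(np.abs(G(F(rgb_batch)) - rgb_batch))
    return forward + backward

nir = np.linspace(0.0, 1.0, 8)   # unpaired NIR intensities
rgb = np.linspace(0.0, 3.0, 8)   # unpaired color intensities
print(cycle_loss(nir, rgb))      # ~0: F inverts G, so the cycle closes
```

In the actual method, G and F are trained CNNs and this term is minimized jointly with the adversarial losses of the two discriminators.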
|
Velesaca, H. O., Suárez, P. L., Mira, R., & Sappa, A. D. (2021). Computer Vision based Food Grain Classification: a Comprehensive Survey. In Computers and Electronics in Agriculture, Vol. 187, Article 106287.
|
Roberto Jacome Galarza, Miguel-Andrés Realpe-Robalino, Luis-Antonio Chamba-Eras, Marlon-Santiago Viñán-Ludeña, & Javier-Francisco Sinche-Freire. (2019). Computer vision for image understanding: A comprehensive review. In International Conference on Advances in Emerging Trends and Technologies (ICAETT 2019), Quito, Ecuador (pp. 248–259).
Abstract: Computer Vision has its own Turing test: can a machine describe the contents of an image or a video the way a human being would? In this paper, the progress of Deep Learning for image recognition is analyzed in order to answer this question. In recent years, Deep Learning has considerably increased the precision of many tasks related to computer vision. Many datasets of labeled images are now available online, which leads to pre-trained models for many computer vision applications. In this work, we gather information on the latest techniques for image understanding and description. We conclude that combining Natural Language Processing (using Recurrent Neural Networks and Long Short-Term Memory) with Image Understanding (using Convolutional Neural Networks) could bring new types of powerful and useful applications in which the computer is able to answer questions about the content of images and videos. Building datasets of labeled images requires a great deal of work, and most such datasets are built using crowd work. These new applications have the potential to raise human-machine interaction to new levels of usability and user satisfaction.
|