|
Dennis G. Romero, A. Frizera, Angel D. Sappa, Boris X. Vintimilla, & T.F. Bastos. (2015). A predictive model for human activity recognition by observing actions and context. In International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS 2015), Catania, Italy, 2015 (pp. 323–333).
Abstract: This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on the usage of Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work human activities are inferred considering not only visual analysis but also additional resources; external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can be later associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions, plus contextual information or other kinds of information that could be relevant to describe the activity. Experimental results with real data are provided showing the validity of the proposed approach.
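The Bayesian side of this approach can be illustrated with a minimal sketch: activities are predefined semantic patterns (sets of actions plus context cues), and each observed action/context pair updates a belief over activities. All activity names, actions and probabilities below are hypothetical, and the RNN component of the paper is omitted.

```python
# Minimal sketch, not the authors' implementation: Bayesian update of a belief
# over activities from a stream of (action, context) observations.
ACTIVITIES = {
    # hypothetical semantic patterns: a set of actions plus contextual cues
    "prepare_drink": {"actions": {"grasp_cup", "pour", "stir"}, "context": {"kitchen"}},
    "clean_table":   {"actions": {"grasp_cloth", "wipe"},       "context": {"kitchen", "dining"}},
}

def likelihood(observation, pattern):
    """P(observation | activity): higher when the observed action and context
    belong to the activity's predefined pattern (probabilities are made up)."""
    action, context = observation
    p = 0.8 if action in pattern["actions"] else 0.1
    p *= 0.9 if context in pattern["context"] else 0.2
    return p

def bayes_update(belief, observation):
    """One inference step over the continuous stream of observations."""
    posterior = {a: belief[a] * likelihood(observation, ACTIVITIES[a]) for a in belief}
    norm = sum(posterior.values())
    return {a: p / norm for a, p in posterior.items()}

belief = {a: 1.0 / len(ACTIVITIES) for a in ACTIVITIES}  # uniform prior
for obs in [("grasp_cup", "kitchen"), ("pour", "kitchen"), ("stir", "kitchen")]:
    belief = bayes_update(belief, obs)
print(belief)  # probability mass concentrates on "prepare_drink"
```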
|
|
|
Velez R., P. A., Silva S., Paillacho D., & Paillacho J. (2022). Implementation of a UVC lights disinfection system for a differential robot applying security methods in indoor. In Communications in Computer and Information Science, International Conference on Applied Technologies (ICAT 2021), October 27-29 (Vol. 1535, pp. 319–331).
|
|
|
Miguel Oliveira, Vítor Santos, Angel D. Sappa, Paulo Dias, & A. Paulo Moreira. (2016). Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives. Robotics and Autonomous Systems, Vol. 83, pp. 312–325.
Abstract: When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
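As a rough illustration of what a planar polygonal primitive and its incremental update could look like, the sketch below keeps a primitive as a centroid and normal fitted by SVD and fuses new range measurements into it when they are coplanar. The thresholds, data and class interface are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch, assuming planar macro-scale primitives kept as (centroid, normal).
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]            # direction of least variance = plane normal

def is_coplanar(points, centroid, normal, tol=0.05):
    """Do the new points lie (within tol, in metres) on the existing primitive?"""
    distances = np.abs((points - centroid) @ normal)
    return bool(np.all(distances < tol))

class PlanarPrimitive:
    def __init__(self, points):
        self.points = np.asarray(points, float)
        self.centroid, self.normal = fit_plane(self.points)

    def try_update(self, new_points):
        """Fuse fresh sensor data into the primitive if it is coplanar."""
        new_points = np.asarray(new_points, float)
        if not is_coplanar(new_points, self.centroid, self.normal):
            return False
        self.points = np.vstack([self.points, new_points])
        self.centroid, self.normal = fit_plane(self.points)
        return True

# Example: a ground patch, then an overlapping scan of the same ground plane.
ground = PlanarPrimitive(np.random.rand(100, 3) * [10, 10, 0.01])
print(ground.try_update(np.random.rand(50, 3) * [10, 10, 0.01] + [5, 0, 0]))  # True
```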
|
|
|
Carlos Monsalve, Alain April, & Alain Abran. (2011). Measuring software functional size from business process models. International Journal of Software Engineering and Knowledge Engineering, Vol. 21, pp. 311–338.
Abstract: ISO 14143-1 specifies that a functional size measurement (FSM) method must provide measurement procedures to quantify the functional user requirements (FURs) of software. Such quantitative information, functional size, is typically used, for instance, in software estimation. One of the international standards for FSM is the COSMIC FSM method — ISO 19761 — which was designed to be applied both to the business application (BA) software domain and to the real-time software domain. A recurrent problem in FSM is the availability and quality of the inputs required for measurement purposes; that is, well documented FURs. Business process (BP) models, as they are commonly used to gather requirements from the early stages of a project, could be a valuable source of information for FSM. In a previous article, the feasibility of such an approach for the BA domain was analyzed using the Qualigram BP modeling notation. This paper complements that work by: (1) analyzing the use of BPMN for FSM in the BA domain; (2) presenting notation-independent guidelines for the BA domain; and (3) analyzing the possibility of using BP models to perform FSM in the real-time domain. The measurement results obtained from BP models are compared with those of previous FSM case studies.
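For readers unfamiliar with COSMIC, the measurement rule itself is simple: each data movement (Entry, Exit, Read or Write) identified in a functional process contributes one COSMIC Function Point (CFP). The sketch below shows only that counting step on invented functional processes; it does not reproduce the paper's case studies or its mapping from BP model elements to data movements.

```python
# Illustrative COSMIC (ISO 19761) counting sketch: 1 CFP per data movement.
# The functional processes below are hypothetical, not from the paper.
FUNCTIONAL_PROCESSES = {
    # process name: data movements identified from the BP model
    "register_order": ["Entry", "Read", "Write", "Exit"],
    "cancel_order":   ["Entry", "Read", "Write", "Exit", "Exit"],
    "list_orders":    ["Entry", "Read", "Exit"],
}

def cosmic_size(processes):
    """Functional size in CFP: one point per data movement."""
    return {name: len(movements) for name, movements in processes.items()}

sizes = cosmic_size(FUNCTIONAL_PROCESSES)
print(sizes)                             # {'register_order': 4, 'cancel_order': 5, 'list_orders': 3}
print(sum(sizes.values()), "CFP total")  # 12 CFP total
```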
|
|
|
Rafael E. Rivadeneira, Henry O. Velesaca, & Angel D. Sappa. (2023). Object Detection in Very Low-Resolution Thermal Images through a Guided-Based Super-Resolution Approach. In 17th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Bangkok, 8-10 November 2023 (pp. 311–318).
|
|
|
Henry O. Velesaca, Raul A. Mira, Patricia L. Suarez, Christian X. Larrea, & Angel D. Sappa. (2020). Deep Learning based Corn Kernel Classification. In The 1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) (Vol. 2020-June, pp. 294–302).
Abstract: This paper presents a full pipeline to classify sample sets of corn kernels. The proposed approach follows a segmentation-classification scheme. The image segmentation is performed through a well-known deep learning based approach, the Mask R-CNN architecture, while the classification is performed by means of a novel lightweight network specially designed for this task; good corn kernel, defective corn kernel and impurity categories are considered. As a second contribution, a carefully annotated multi-touching corn kernel dataset has been generated. This dataset has been used for training the segmentation and the classification modules. Quantitative evaluations have been performed and comparisons with other approaches are provided, showing improvements with the proposed pipeline.
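A schematic of this segmentation-then-classification flow is sketched below; the segmenter is a placeholder standing in for an instance-segmentation model such as Mask R-CNN, and the lightweight classifier's layers, the 64x64 crop size and the function names are assumptions rather than the paper's network.

```python
# Illustrative sketch only: placeholder segmenter + small per-kernel classifier.
import torch
import torch.nn as nn

CLASSES = ["good_kernel", "defective_kernel", "impurity"]

class LightweightKernelNet(nn.Module):
    """Small CNN that labels one cropped kernel (3x64x64) into the three classes."""
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def classify_kernels(image, segment_fn, classifier):
    """segment_fn(image) -> list of (3, 64, 64) crops, one per detected kernel."""
    crops = segment_fn(image)
    with torch.no_grad():
        logits = classifier(torch.stack(crops))
    return [CLASSES[i] for i in logits.argmax(dim=1).tolist()]

# Dummy segmenter producing two fake crops, just to show the data flow.
fake_segment = lambda img: [torch.rand(3, 64, 64), torch.rand(3, 64, 64)]
print(classify_kernels(torch.rand(3, 512, 512), fake_segment, LightweightKernelNet()))
```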
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture. In 19th International Conference on Image Analysis and Processing (pp. 287–297).
|
|
|
Abel Rubio, Wilton Agila, Leandro González, & Jonathan Aviles. (2023). A Numerical Model for the Transport of Reactants in Proton Exchange Fuel Cells. In 12th IEEE International Conference on Renewable Energy Research and Applications (ICRERA 2023), Oshawa, 29 August – 1 September 2023 (pp. 273–278).
|
|
|
Luis Chuquimarca, R. P., Paula Gonzalez, Boris Vintimilla, & Sergio Velastin. (2023). Fruit defect detection using CNN models with real and virtual data. In 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023), Lisbon, 19-21 February 2024 (Vol. 4, pp. 272–279).
|
|
|
Cristhian A. Aguilera, Francisco J. Aguilera, Angel D. Sappa, & Ricardo Toledo. (2016). Learning cross-spectral similarity measures with deep convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (pp. 267–275).
Abstract: The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene, which cannot be obtained with just images from one spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state-of-the-art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
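One of the simplest architectures for this kind of patch comparison is a 2-channel network that stacks the visible and near-infrared patches and regresses a similarity score; the sketch below follows that idea, but the layer sizes, the 64x64 patch size and the scoring head are assumptions, not the configuration reported in the paper.

```python
# Hedged sketch of a 2-channel cross-spectral patch-similarity network.
import torch
import torch.nn as nn

class TwoChannelSimilarityNet(nn.Module):
    """Takes (N, 2, 64, 64): channel 0 = visible patch, channel 1 = NIR patch."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.score = nn.Linear(64, 1)   # higher output = more similar patch pair

    def forward(self, pair):
        return self.score(self.body(pair))

# Usage: compare a batch of visible/NIR patch pairs.
vis = torch.rand(8, 1, 64, 64)
nir = torch.rand(8, 1, 64, 64)
net = TwoChannelSimilarityNet()
print(net(torch.cat([vis, nir], dim=1)).shape)   # torch.Size([8, 1])
```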
|
|