Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture. In 19th International Conference on Image Analysis and Processing (ICIAP) (pp. 287–297).

Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Cross-spectral Image Patch Similarity using Convolutional Neural Network. In 2017 IEEE International Workshop of Electronics, Control, Measurement, Signals and their Application to Mechatronics (ECMSM) (pp. 1–5).

Angel J. Valencia, Roger M. Idrovo, Angel D. Sappa, Douglas Plaza G., & Daniel Ochoa. (2017). A 3D Vision Based Approach for Optimal Grasp of Vacuum Grippers. In 2017 IEEE International Workshop of Electronics, Control, Measurement, Signals and their Application to Mechatronics (ECMSM) (pp. 1–6).

Xavier Soria, Edgar Riba, & Angel D. Sappa. (2020). Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection. In 2020 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1912–1921).
Abstract: This paper proposes a Deep Learning based edge detector, which is inspired by both HED (Holistically-Nested Edge Detection) and Xception networks. The proposed approach generates thin edge-maps that are plausible to the human eye; it can be used in any edge detection task without a previous training or fine-tuning process. As a second contribution, a large dataset with carefully annotated edges has been generated. This dataset has been used for training the proposed approach as well as the state-of-the-art algorithms used for comparison. Quantitative and qualitative evaluations have been performed on different benchmarks, showing improvements with the proposed method when the F-measure of ODS and OIS is considered.

Patricia L. Suarez, Angel D. Sappa, Boris X. Vintimilla, & Riad I. Hammoud. (2018). Near InfraRed Imagery Colorization. In 25th IEEE International Conference on Image Processing, ICIP 2018 (pp. 2237–2241).
Abstract: This paper proposes a stacked conditional Generative Adversarial Network-based method for Near InfraRed (NIR) imagery colorization. We propose a variant architecture of Generative Adversarial Network (GAN) that uses multiple loss functions over a conditional probabilistic generative model. We show that this new architecture/loss-function yields better generalization and representation of the generated colored IR images. The proposed approach is evaluated on a large test dataset and compared to recent state-of-the-art methods using standard metrics.
Index Terms: Convolutional Neural Networks (CNN), Generative Adversarial Network (GAN), Infrared Imagery colorization.

Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Learning Image Vegetation Index through a Conditional Generative Adversarial Network. In 2nd IEEE Ecuador Technical Chapters Meeting (ETCM).

Angel D. Sappa, S. L., Oliver Nina, Erik Blasch, Dylan Bowald, & Nathan Inkawhich. (2024). Multi-modal Aerial View Image Challenge: SAR Classification. In 20th IEEE Workshop on Perception Beyond the Visible Spectrum of the 2024 Conference on Computer Vision and Pattern Recognition (accepted).

Angel D. Sappa, S. L., Oliver Nina, Erik Blasch, Dylan Bowald, & Nathan Inkawhich. (2024). Multi-modal Aerial View Image Challenge: Sensor Domain Translation. In 20th IEEE Workshop on Perception Beyond the Visible Spectrum of the 2024 Conference on Computer Vision and Pattern Recognition (accepted).

Rafael E. Rivadeneira, A. D. S., Chenyang Wang, Junjun Jiang, Zhiwei Zhong, Peilin Chen, & Shiqi Wang. (2024). Thermal Image Super Resolution Challenge Results – PBVS 2024. In 20th IEEE Workshop on Perception Beyond the Visible Spectrum of the 2024 Conference on Computer Vision and Pattern Recognition (accepted).

Dennis G. Romero, A. Frizera, Angel D. Sappa, Boris X. Vintimilla, & T. F. Bastos. (2015). A predictive model for human activity recognition by observing actions and context. In ACIVS 2015 (Advanced Concepts for Intelligent Vision Systems), International Conference on, Catania, Italy (pp. 323–333).
Abstract: This paper presents a novel model to estimate human activities – a human activity is defined by a set of human actions. The proposed approach is based on the use of Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work, human activities are inferred considering not only visual analysis but also additional resources; external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can later be associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions, plus contextual information or any other kind of information that could be relevant to describe the activity. Experimental results with real data are provided, showing the validity of the proposed approach.