|
Rafael E. Rivadeneira, Angel D. Sappa, & Boris X. Vintimilla. (2022). Multi-Image Super-Resolution for Thermal Images. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) (Vol. 4, pp. 635–642).
|
|
|
A. Amato, F. Lumbreras, & Angel D. Sappa. (2014). A General-Purpose Crowdsourcing Platform for Mobile Devices. In Computer Vision Theory and Applications (VISAPP), 2014 International Conference on (Vol. 3, pp. 211–215). Lisbon, Portugal: IEEE.
Abstract: This paper presents details of a general-purpose micro-task on-demand platform based on the crowdsourcing philosophy. This platform was specifically developed for mobile devices in order to exploit the strengths of such devices, namely: i) massivity, ii) ubiquity and iii) embedded sensors. The combined use of mobile platforms and the crowdsourcing model makes it possible to tackle tasks ranging from the simplest to the most complex. User experience is the highlighted feature of this platform (for both the task-proposer and the task-solver). Tools appropriate to a specific task are provided to the task-solver so that he/she can perform the job in a simpler, faster and more appealing way. Moreover, a task can be easily submitted by just selecting predefined templates, which cover a wide range of possible applications. Examples of its usage in computer vision and computer games are provided, illustrating the potential of the platform.
|
|
|
N. Onkarappa, Cristhian A. Aguilera, Boris X. Vintimilla, & Angel D. Sappa. (2014). Cross-spectral Stereo Correspondence using Dense Flow Fields. In Computer Vision Theory and Applications (VISAPP), 2014 International Conference on (Vol. 3, pp. 613–617). Lisbon, Portugal: IEEE.
Abstract: This manuscript addresses the cross-spectral stereo correspondence problem. It proposes the use of a dense flow field based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments have been obtained, showing the validity of the proposed approach.
|
|
|
Patricia L. Suarez, & Angel D. Sappa. (2024). A Generative Model for Guided Thermal Image Super-Resolution. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2024), Rome, 27–29 February 2024 (Vol. 3: VISAPP, pp. 765–771).
|
|
|
P. Ricaurte, C. Chilán, C. A. Aguilera-Carrasco, B. X. Vintimilla, & Angel D. Sappa. (2014). Performance Evaluation of Feature Point Descriptors in the Infrared Domain. In Computer Vision Theory and Applications (VISAPP), 2014 International Conference on (Vol. 1, pp. 545–550). Lisbon, Portugal: IEEE.
Abstract: This paper presents a comparative evaluation of classical feature point descriptors when they are used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results using an outdoor image data set are presented, together with a discussion of the differences with respect to the results obtained when images from the visible spectrum are considered.
|
|
|
Roberto Jacome Galarza, Miguel-Andrés Realpe-Robalino, Luis Antonio Chamba-Eras, Marlon Santiago Viñán-Ludeña, & Javier-Francisco Sinche-Freire. (2019). Computer Vision for Image Understanding: A Comprehensive Review. In International Conference on Advances in Emerging Trends and Technologies (ICAETT 2019), Quito, Ecuador (pp. 248–259).
Abstract: Computer Vision has its own Turing test: can a machine describe the contents of an image or a video the way a human being would? In this paper, the progress of Deep Learning for image recognition is analyzed in order to answer this question. In recent years, Deep Learning has considerably increased the precision of many tasks related to computer vision. Many datasets of labeled images are now available online, which leads to pre-trained models for many computer vision applications. In this work, we gather information on the latest techniques for image understanding and description. We conclude that combining Natural Language Processing (using Recurrent Neural Networks and Long Short-Term Memory) with Image Understanding (using Convolutional Neural Networks) could bring new types of powerful and useful applications in which the computer will be able to answer questions about the content of images and videos. Building datasets of labeled images requires a lot of work, and most of these datasets are built using crowd work. These new applications have the potential to raise human-machine interaction to new levels of usability and user satisfaction.
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture. In 19th International Conference on Image Analysis and Processing (pp. 287–297).
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Learning Image Vegetation Index through a Conditional Generative Adversarial Network. In 2nd IEEE Ecuador Technical Chapters Meeting (ETCM).
|
|
|
Xavier Soria, Angel D. Sappa, & Arash Akbarinia. (2017). Multispectral Single-Sensor RGB-NIR Imaging: New Challenges and Opportunities. In The 7th International Conference on Image Processing Theory, Tools and Applications (pp. 1–6).
|
|
|
Milton Mendieta, F. Panchana, B. Andrade, B. Bayot, C. Vaca, Boris X. Vintimilla, et al. (2018). Organ Identification on Shrimp Histological Images: A Comparative Study Considering CNN and Feature Engineering. In IEEE Ecuador Technical Chapters Meeting (ETCM 2018), Cuenca, Ecuador (pp. 1–6).
Abstract: The identification of shrimp organs in biology using histological images is a complex task. Shrimp histological images pose a big challenge due to their texture and the similarity among classes. Image classification, whether based on feature engineering or on convolutional neural networks (CNNs), is a suitable way to assist biologists when performing organ detection. This work evaluates the Bag-of-Visual-Words (BOVW) and Pyramid Bag-of-Words (PBOW) models for image classification leveraging big data techniques, and transfer learning for the same classification task using a pre-trained CNN. A comparative analysis of these two different techniques is performed, highlighting the characteristics of both approaches on the shrimp organ identification problem.
|
|