Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Adaptive Harris Corners Detector Evaluated with Cross-Spectral Images. In International Conference on Information Technology & Systems (ICITS 2018). Advances in Intelligent Systems and Computing (Vol. 721).
Abstract: This paper proposes a novel approach that uses cross-spectral images to improve the performance of the proposed Adaptive Harris corner detector, comparing the results obtained with those achieved on visible-spectrum images. Images from the urban, field, old-building and country categories were used for the experiments; the variety of textures present in these images makes the verification of the proposal considerably more challenging. The approach improves the detection of characteristic points by using cross-spectral images (NIR, G, B) and applying pruning techniques; the channel combination used for the fusion is the one that generates the largest variance of the merged pixel intensities and therefore maximizes the entropy of the resulting cross-spectral images. Harris is one of the most widely used corner detection algorithms, so any improvement in its efficiency is an important contribution to the field of computer vision. The experiments conclude that including a NIR channel in the image resulting from the combination of the spectra greatly improves corner detection, owing to the higher entropy of the fused image; consequently, the fusion process also improves the results of subsequent tasks such as object or pattern identification, classification and/or segmentation.
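Note: as background only, the corner response in the standard Harris formulation that such detectors build on is R = det(M) − k·(trace M)², where M is the local second-moment matrix of image gradients and k is an empirical constant (typically 0.04–0.06); the abstract does not specify how the adaptive variant thresholds or prunes this response, so this formula is provided as context rather than as the paper's method.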
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Cross-spectral image dehaze through a dense stacked conditional GAN based approach. In 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) (pp. 358–364).
Abstract: This paper proposes a novel approach to remove haze from RGB images using near infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). The implemented deep network receives, besides the hazy images, their corresponding images in the near infrared spectrum, which serve to accelerate the learning of the detailed characteristics of the images. The model uses a triplet layer that allows independent learning for each channel of the visible-spectrum image, removing the haze on each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results have shown that the proposed method effectively removes the haze from the images. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Cross-spectral Image Patch Similarity using Convolutional Neural Network. In 2017 IEEE International Workshop of Electronics, Control, Measurement, Signals and their application to Mechatronics (ECMSM) (pp. 1–5).
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Learning to Colorize Infrared Images. In 15th International Conference on Practical Applications of Agents and Multi-Agent Systems.
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Infrared Image Colorization based on a Triplet DCGAN Architecture. In 13th IEEE Workshop on Perception Beyond the Visible Spectrum – In conjunction with CVPR 2017. (This paper was selected for the “Best Paper Award”) (Vol. 2017-July, pp. 212–217).
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2019). Image patch similarity through a meta-learning metric based approach. In 15th International Conference on Signal Image Technology & Internet based Systems (SITIS 2019); Sorrento, Italy (pp. 511–517).
Abstract: Comparing image regions is one of the core methods used in computer vision for tasks like image classification, scene understanding, object detection and recognition. Hence, this paper proposes a novel approach to determine the similarity of image regions (patches), in order to obtain the best representation of image patches. This problem has been studied by many researchers with different approaches; however, finding better criteria to measure the similarity of image regions is still a challenge. The present work tackles this problem using a few-shot metric-based meta-learning framework able to compare image regions and determine a similarity measure that decides whether the compared patches are similar. Our model is trained end-to-end from scratch. Experimental results have shown that the proposed approach effectively estimates the similarity of the patches and, compared with state-of-the-art approaches, shows better results.
|
Patricia L. Suarez, D. C., Angel D. Sappa and Henry O. Velesaca. (2022). Transformer based Image Dehazing. In 16th International Conference on Signal Image Technology & Internet Based Systems (SITIS 2022) (pp. 148–154).
|
Patricia L. Suárez, D. C., and Angel Sappa. (2021). Non-Homogeneous Haze Removal through a Multiple Attention Module Architecture. In 16th International Symposium on Visual Computing, October 4-6, 2021. Lecture Notes in Computer Science (Vol. 13018, pp. 178–190).
|
Patricia L. Suárez, Angel D. Sappa, & Boris X. Vintimilla. (2021). Cycle generative adversarial network: towards a low-cost vegetation index estimation. In IEEE International Conference on Image Processing (ICIP 2021) (Vol. 2021-September, pp. 2783–2787).
Abstract: This paper presents a novel unsupervised approach to estimate the Normalized Difference Vegetation Index (NDVI). The NDVI is obtained as the ratio between information from the visible and near infrared spectral bands; in the current work, the NDVI is estimated just from an image of the visible spectrum through a Cyclic Generative Adversarial Network (CyclicGAN). This unsupervised architecture learns to estimate the NDVI index by means of an image translation between the red channel of a given RGB image and the unpaired NDVI index image. The translation is obtained by means of a ResNet architecture and a multiple loss function. Experimental results obtained with this unsupervised scheme show the validity of the implemented model. Additionally, comparisons with state-of-the-art approaches are provided, showing improvements with the proposed approach.
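Note: the index referred to above follows the standard per-pixel definition NDVI = (NIR − Red) / (NIR + Red), computed from the near-infrared and red reflectance bands; the contribution of this work is to approximate that value from the visible spectrum alone, without a NIR channel.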
|
Patricia L. Suárez, Angel D. Sappa, & Boris X. Vintimilla. (2021). Deep learning-based vegetation index estimation. In Generative Adversarial Networks for Image-to-Image Translation (Chapter 9, pp. 205–232).
|