|
Alex Ferrin, Julio Larrea, Miguel Realpe, & Daniel Ochoa. (2018). Detection of utility poles from noisy Point Cloud Data in Urban environments. In Artificial Intelligence and Cloud Computing Conference (AICCC 2018) (pp. 53–57).
Abstract: In recent years 3D urban maps have become more common, providing complex point clouds that include diverse urban furniture such as pole-like objects. Utility pole detection in urban environments is of particular interest for electric utility companies in order to maintain an updated inventory for better planning and management. The present study develops an automatic method for the detection of utility poles from noisy point cloud data of Guayaquil, Ecuador, where many poles are located next to buildings, or houses are built right up to the edge of the sidewalk, very close to the poles, which increases the difficulty of discriminating poles from walls, columns, fences and building corners.
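
The abstract above describes the problem rather than the processing pipeline, so the snippet below is only a generic, hypothetical sketch of how tall and thin point clusters can be flagged as pole candidates in a point cloud; the DBSCAN clustering step, the height and radius thresholds, and the function name are illustrative assumptions and not the authors' method.

# Hypothetical sketch: flag tall, thin clusters of an (N, 3) point cloud as pole candidates.
import numpy as np
from sklearn.cluster import DBSCAN

def pole_like_clusters(points, min_height=4.0, max_radius=0.4):
    """points: (N, 3) array of x, y, z coordinates; returns candidate pole clusters."""
    labels = DBSCAN(eps=0.3, min_samples=20).fit_predict(points[:, :2])  # cluster on the XY footprint
    candidates = []
    for label in set(labels) - {-1}:  # label -1 marks DBSCAN noise
        cluster = points[labels == label]
        height = cluster[:, 2].max() - cluster[:, 2].min()
        radius = np.linalg.norm(cluster[:, :2] - cluster[:, :2].mean(axis=0), axis=1).max()
        if height >= min_height and radius <= max_radius:  # tall and thin -> pole-like
            candidates.append(cluster)
    return candidates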
|
|
|
Cristhian A. Aguilera, Cristhian Aguilera, & Angel D. Sappa. (2018). Melamine faced panels defect classification beyond the visible spectrum. In Sensors 2018, Vol. 18(Issue 11).
Abstract: In this work, we explore the use of images from different spectral bands to classify defects in melamine faced panels, which can appear during the production process. Through experimental evaluation, we assess the use of images from the visible (VS), near-infrared (NIR), and long wavelength infrared (LWIR) spectra to classify the defects using a feature descriptor learning approach together with a support vector machine classifier. Two descriptors were evaluated, Extended Local Binary Patterns (E-LBP) and SURF using a Bag of Words (BoW) representation. The evaluation was carried out with an image set obtained during this work, which contains five different defect categories that currently occur in the industry. Results show that using images from beyond the visible spectrum helps to improve classification performance in contrast with a single visible spectrum solution.
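
As a rough illustration of the descriptor-plus-classifier pipeline described above, the sketch below uses plain uniform LBP histograms fed to a support vector machine; it is a simplified stand-in, not the paper's E-LBP or SURF/BoW setup, and the parameter values are assumptions.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, points=8, radius=1):
    # Uniform LBP codes fall in [0, points + 1]; summarize them as a normalized histogram.
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def train_defect_classifier(images, labels):
    """images: list of 2-D grayscale arrays; labels: defect category per image."""
    features = np.stack([lbp_histogram(img) for img in images])
    classifier = SVC(kernel="rbf", C=10.0)
    classifier.fit(features, labels)
    return classifier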
|
|
|
Gomer Rubio, & Wilton Agila. (2018). Dynamic Modeling of Fuel Cells in a Strategic Context. In 7th International Conference on Renewable Energy Research and Applications (ICRERA 2018). Paris, France.
|
|
|
Jorge L. Charco, Boris X. Vintimilla, & Angel D. Sappa. (2018). Deep learning based camera pose estimation in multi-view environment. In 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) (pp. 224–228).
Abstract: This paper proposes a deep learning network architecture for relative camera pose estimation in a multi-view environment. The proposed network is a variant of the AlexNet architecture used as a regressor that predicts the relative translation and rotation as output. The proposed approach is trained from scratch on a large dataset, taking as input a pair of images from the same scene. This new architecture is compared with a previous approach using standard metrics, obtaining better results on relative camera pose estimation.
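
To make the regression setup concrete, the following is a minimal PyTorch sketch of an AlexNet-based regressor that takes an image pair and outputs a 7-dimensional pose (3-D translation plus a rotation quaternion); the layer sizes, the 6-channel input stacking and the output parameterization are assumptions, not the exact architecture reported in the paper.

import torch
import torch.nn as nn
from torchvision.models import alexnet

class RelativePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = alexnet(weights=None)  # trained from scratch, as in the abstract
        # Accept two RGB images stacked along the channel axis (6 input channels).
        backbone.features[0] = nn.Conv2d(6, 64, kernel_size=11, stride=4, padding=2)
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d((6, 6))
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 7),  # 3 values for translation, 4 for a rotation quaternion
        )

    def forward(self, image_a, image_b):
        x = torch.cat([image_a, image_b], dim=1)
        return self.regressor(self.pool(self.features(x)))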
|
|
|
Milton Mendieta, F. Panchana, B. Andrade, B. Bayot, C. Vaca, Boris X. Vintimilla, et al. (2018). Organ identification on shrimp histological images: A comparative study considering CNN and feature engineering. In IEEE Ecuador Technical Chapters Meeting ETCM 2018. Cuenca, Ecuador (pp. 1–6).
Abstract: The identification of shrimp organs in biology using histological images is a complex task. Shrimp histological images pose a big challenge due to their texture and the similarity among classes. Image classification using feature engineering and convolutional neural networks (CNN) provides suitable methods to assist biologists when performing organ detection. This work evaluates the Bag-of-Visual-Words (BOVW) and Pyramid Bag-of-Words (PBOW) models for image classification leveraging big data techniques, and transfer learning for the same classification task by using a pre-trained CNN. A comparative analysis of these two different techniques is performed, highlighting the characteristics of both approaches on the shrimp organ identification problem.
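
The transfer-learning side of the comparison can be sketched as reusing a pre-trained CNN as a fixed feature extractor with a new classification head; ResNet-18 is used here purely as a placeholder, since the abstract does not name the pre-trained network.

import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

def build_organ_classifier(num_organs):
    model = resnet18(weights=ResNet18_Weights.DEFAULT)  # ImageNet pre-training
    for param in model.parameters():
        param.requires_grad = False  # freeze the backbone features
    model.fc = nn.Linear(model.fc.in_features, num_organs)  # trainable classification head
    return model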
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Cross-spectral image dehaze through a dense stacked conditional GAN based approach. In 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) (pp. 358–364).
Abstract: This paper proposes a novel approach to remove haze from RGB images using near infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). The implemented deep network receives, besides the hazy image, its corresponding image in the near infrared spectrum, which serves to accelerate the learning of the details and characteristics of the images. The model uses a triplet layer that allows independent learning of each channel of the visible spectrum image, removing the haze on each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results have shown that the proposed method effectively removes the haze from the images. Additionally, the proposed approach is compared with a state of the art approach, showing better results.
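
The conditioning idea, i.e. feeding the generator the hazy RGB image together with its NIR counterpart, can be sketched as a 4-channel image-to-image network; the small encoder below is a hedged placeholder, not the paper's dense stacked CGAN or its triplet layer.

import torch
import torch.nn as nn

class HazeGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),  # dehazed RGB estimate in [0, 1]
        )

    def forward(self, hazy_rgb, nir):
        # Stack the hazy RGB image (3 channels) with the NIR image (1 channel).
        return self.net(torch.cat([hazy_rgb, nir], dim=1))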
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Vegetation Index Estimation from Monospectral Images. In 15th International Conference on Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal. Lecture Notes in Computer Science (Vol. 10882, pp. 353–362).
Abstract: This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference of the red and infrared radiances over their sum. In other words, information from the red channel of an RGB image and the corresponding infrared spectral band are required for its computation. In the current work the NDVI index is estimated just from the red channel by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single level structure, which at the final layer combines the results of the convolutional operations with the given red channel plus Gaussian noise to enhance details, resulting in a sharp NDVI image. Then, the discriminative model estimates the probability that the generated NDVI index came from the training dataset, rather than being automatically generated. Experimental results with a large set of real images are provided, showing that a single level Conditional GAN model represents an acceptable approach to estimate the NDVI index.
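
For reference, the standard NDVI definition the abstract alludes to can be written out directly; the epsilon term below is only a numerical safeguard added for this sketch.

import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Standard form: NDVI = (NIR - Red) / (NIR + Red), valued in [-1, 1]."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero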
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2018). Adaptive Harris Corners Detector Evaluated with Cross-Spectral Images. In International Conference on Information Technology & Systems (ICITS 2018). Advances in Intelligent Systems and Computing (Vol. 721).
Abstract: This paper proposes a novel approach that uses cross-spectral images to achieve better performance with the proposed Adaptive Harris corner detector, comparing the obtained results with those achieved on visible spectrum images. Images of the urban, field, old-building and country categories were used for the experiments, given the variety of textures present in these images, which makes verification of the proposal considerably more challenging. This is a new scope: improving the detection of characteristic points by using cross-spectral images (NIR, G, B) and applying pruning techniques. The channel combination chosen for this fusion is the one that generates the largest variance of the merged pixel intensities, and therefore the one that maximizes the entropy of the resulting cross-spectral images. Harris is one of the most widely used corner detection algorithms, so any improvement in its efficiency is an important contribution in the field of computer vision. The experiments conclude that including a NIR channel in the image, as a result of combining the spectra, greatly improves corner detection due to the better entropy of the fused image. Therefore, the fusion process applied to the images improves the results obtained in subsequent processes such as identification of objects or patterns, classification and/or segmentation.
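
As a baseline illustration only, the snippet below runs a standard (non-adaptive) Harris detector on a fused (NIR, G, B) image and keeps the strongest responses; the simple channel averaging used as fusion and the response threshold are assumptions, and the paper's adaptive detector and entropy-based channel selection are not reproduced here.

import cv2
import numpy as np

def harris_corners(fused_nir_g_b, k=0.04, rel_threshold=0.01):
    """fused_nir_g_b: HxWx3 array whose first channel holds the NIR band."""
    gray = fused_nir_g_b.mean(axis=2).astype(np.float32)  # naive fusion of the three bands
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=k)
    ys, xs = np.where(response > rel_threshold * response.max())
    return np.stack([xs, ys], axis=1)  # (x, y) corner candidates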
|
|
|
Patricia L. Suarez, Angel D. Sappa, Boris X. Vintimilla, & Riad I. Hammoud. (2018). Deep Learning based Single Image Dehazing. In 14th IEEE Workshop on Perception Beyond the Visible Spectrum, in conjunction with CVPR 2018. Salt Lake City, Utah, USA.
Abstract: This paper proposes a novel approach to remove haze degradations in RGB images using a stacked conditional Generative Adversarial Network (GAN). It employs a triplet of GANs to remove the haze on each color channel independently. A multiple loss function scheme, applied over a conditional probabilistic model, is proposed. The proposed GAN architecture learns to remove the haze using as conditioned input the hazy images from which the clear images will be obtained. Such a formulation ensures fast model training convergence and homogeneous model generalization. Experiments showed that the proposed method generates high-quality clear images.
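
The multiple loss function scheme can be illustrated, under assumptions, as an adversarial term plus a per-channel L1 reconstruction term for the generator; the BCE adversarial loss and the lambda_l1 weight below are illustrative choices, not the exact losses or weights used in the paper.

import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, generated, target, lambda_l1=100.0):
    # Adversarial term: push the discriminator to label generated images as real.
    adversarial = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    # Reconstruction term: L1 distance accumulated per color channel.
    reconstruction = sum(
        F.l1_loss(generated[:, c], target[:, c]) for c in range(generated.shape[1]))
    return adversarial + lambda_l1 * reconstruction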
|
|
|
Patricia L. Suarez, Angel D. Sappa, Boris X. Vintimilla, & Riad I. Hammoud. (2018). Near InfraRed Imagery Colorization. In 25th IEEE International Conference on Image Processing (ICIP 2018) (pp. 2237–2241).
Abstract: This paper proposes a stacked conditional Generative Adversarial Network-based method for Near InfraRed (NIR) imagery colorization. We propose a variant architecture of Generative Adversarial Network (GAN) that uses multiple loss functions over a conditional probabilistic generative model. We show that this new architecture/loss-function yields better generalization and representation of the generated colored IR images. The proposed approach is evaluated on a large test dataset and compared to recent state of the art methods using standard metrics.
Index Terms: Convolutional Neural Networks (CNN), Generative Adversarial Network (GAN), Infrared Imagery Colorization.
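
The abstract mentions "standard metrics" without naming them; PSNR and SSIM are assumed here as typical choices for scoring colorized output against a reference RGB image.

from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def colorization_scores(predicted_rgb, reference_rgb):
    """Both inputs: HxWx3 uint8 arrays of the same size."""
    psnr = peak_signal_noise_ratio(reference_rgb, predicted_rgb)
    ssim = structural_similarity(reference_rgb, predicted_rgb, channel_axis=-1)
    return psnr, ssim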
|
|