Cristhian A. Aguilera, Cristhian Aguilera, & Angel D. Sappa. (2018). Melamine faced panels defect classification beyond the visible spectrum. Sensors, Vol. 18(Issue 11).
Abstract: In this work, we explore the use of images from different spectral bands to classify defects in melamine faced panels that can appear during the production process. Through an experimental evaluation, we assess the use of images from the visible (VS), near-infrared (NIR), and long wavelength infrared (LWIR) spectra to classify the defects using a feature descriptor learning approach together with a support vector machine classifier. Two descriptors were evaluated: Extended Local Binary Patterns (E-LBP) and SURF using a Bag of Words (BoW) representation. The evaluation was carried out on an image set obtained during this work, which contains five different defect categories that currently occur in the industry. Results show that using images from beyond the visible spectrum helps to improve classification performance in contrast with a single visible spectrum solution.
|
Cristhian A. Aguilera, Angel D. Sappa, & Ricardo Toledo. (2017). Cross-Spectral Local Descriptors via Quadruplet Network. Sensors, Vol. 17, p. 873.
|
Cristhian A. Aguilera, Xavier Soria, Angel D. Sappa, & Ricardo Toledo. (2017). RGBN Multispectral Images: A Novel Color Restoration Approach. In 15th International Conference on Practical Applications of Agents and Multi-Agent Systems (Vol. 619, pp. 155–163).
|
N. Onkarappa, Cristhian A. Aguilera, Boris X. Vintimilla, & Angel D. Sappa. (2014). Cross-spectral Stereo Correspondence using Dense Flow Fields. In International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 2014 (Vol. 3, pp. 613–617). IEEE.
Abstract: This manuscript addresses the cross-spectral stereo correspondence problem. It proposes the usage of a dense flow field based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments have been obtained showing the validity of the proposed approach.
|
Pablo Ricaurte, Carmen Chilán, Cristhian A. Aguilera, Boris X. Vintimilla, & Angel D. Sappa. (2014). Feature Point Descriptors: Infrared and Visible Spectra. Sensors, Vol. 14, pp. 3690–3701.
Abstract: This manuscript evaluates the behavior of classical feature point descriptors when they are used on images from the long-wave infrared spectral band and compares them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results using a cross-spectral outdoor image data set are presented, and conclusions from these experiments are given.
|
Mildred Cruz, Cristhian A. Aguilera, Boris X. Vintimilla, Ricardo Toledo, & Ángel D. Sappa. (2015). Cross-spectral image registration and fusion: an evaluation study. In 2nd International Conference on Machine Vision and Machine Learning (Vol. 331). Barcelona, Spain: Computer Vision Center.
Abstract: This paper presents a preliminary study on the registration and fusion of cross-spectral images. The objective is to evaluate the validity of widely used computer vision approaches when they are applied at different spectral bands. In particular, we are interested in merging images from the infrared (both long wave infrared: LWIR and near infrared: NIR) and visible spectrum (VS). Experimental results with different data sets are presented.
|
Cristhian A. Aguilera, Angel D. Sappa, & Ricardo Toledo. (2015). LGHD: A feature descriptor for matching across non-linear intensity variations. In IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 2015 (pp. 178–181). IEEE.
Abstract: This paper presents a new feature descriptor suitable for the task of matching feature points between images with non-linear intensity variations. This includes image pairs with significant illumination changes, multi-modal image pairs, and multi-spectral image pairs. The proposed method describes the neighbourhood of feature points by combining frequency and spatial information using multi-scale and multi-oriented Log-Gabor filters. Experimental results show the validity of the proposed approach as well as the improvements with respect to the state of the art.
|
Julien Poujol, Cristhian A. Aguilera, Etienne Danos, Boris X. Vintimilla, Ricardo Toledo, & Angel D. Sappa. (2015). A Visible-Thermal Fusion based Monocular Visual Odometry. In Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 (Vol. 417, pp. 517–528).
Abstract: The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which are from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In particular, in the current work two different image fusion strategies are considered. Firstly, images from the visible and thermal spectrum are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both the monocular-visible spectrum and the monocular-infrared spectrum are also provided, showing the validity of the proposed approach.
|
Angel D. Sappa, Juan A. Carvajal, Cristhian A. Aguilera, Miguel Oliveira, Dennis G. Romero, & Boris X. Vintimilla. (2016). Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study. Sensors, Vol. 16, pp. 1–15.
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and LongWave InfraRed (LWIR).
|
Cristhian A. Aguilera, Francisco J. Aguilera, Angel D. Sappa, & Ricardo Toledo. (2016). Learning cross-spectral similarity measures with deep convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (pp. 267–275).
Abstract: The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of cross-spectral approaches is to take advantage of the strengths of each spectral band, providing a richer representation of a scene that cannot be obtained with images from a single spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state of the art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different cross-spectral domains.
|