Records | |||||
---|---|---|---|---|---|
Author | Juan A. Carvajal; Dennis G. Romero; Angel D. Sappa | ||||
Title | Fine-tuning deep convolutional networks for lepidopterous genus recognition | Type | Journal Article | ||
Year | 2017 | Publication | Lecture Notes in Computer Science | Abbreviated Journal | |
Volume | 10125 LNCS | Issue | Pages | 467-475 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 63 | ||
Permanent link to this record | |||||
Author | Henry O. Velesaca; Steven Araujo; Patricia L. Suarez; Ángel Sanchez; Angel D. Sappa | ||||
Title | Off-the-Shelf Based System for Urban Environment Video Analytics. | Type | Conference Article | ||
Year | 2020 | Publication | The 27th International Conference on Systems, Signals and Image Processing (IWSSIP 2020) | Abbreviated Journal | |
Volume | 2020-July | Issue | 9145121 | Pages | 459-464 |
Keywords | Greenhouse gases, carbon footprint, object detection, object tracking, website framework, off-the-shelf video analytics. | ||||
Abstract | This paper presents the design and implementation details of a system built up by using off-the-shelf algorithms for urban video analytics. The system allows the connection to public video surveillance camera networks to obtain the necessary information to generate statistics from urban scenarios (e.g., number of vehicles, type of cars, direction, number of persons, etc.). The obtained information could be used not only for traffic management but also to estimate the carbon footprint of urban scenarios. As a case study, a university campus is selected to evaluate the performance of the proposed system. The system is implemented in a modular way so that it can be used as a testbed to evaluate different algorithms. Implementation results are provided showing the validity and utility of the proposed approach. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | Original Title | ||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 21578672 | ISBN | 978-172817539-3 | Medium | |
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 125 | ||
Permanent link to this record | |||||
Author | Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla; Lin Guo; Jiankun Hou; Armin Mehri; Parichehr Behjati Ardakani; Heena Patel; Vishal Chudasama; Kalpesh Prajapati; Kishor P. Upla; Raghavendra Ramachandra; Kiran Raja; Christoph Busch; Feras Almasri; Olivier Debeir; Sabari Nathan; Priya Kansal; Nolan Gutierrez; Bardia Mojra; William J. Beksi | ||||
Title | Thermal Image Super-Resolution Challenge – PBVS 2020 | Type | Conference Article | ||
Year | 2020 | Publication | The 16th IEEE Workshop on Perception Beyond the Visible Spectrum at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) | Abbreviated Journal |
Volume | 2020-June | Issue | 9151059 | Pages | 432-439 |
Keywords | |||||
Abstract | This paper summarizes the top contributions to the first challenge on thermal image super-resolution (TISR), which was organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2020 workshop. In this challenge, a novel thermal image dataset is considered together with state-of-the-art approaches evaluated under a common framework. The dataset used in the challenge consists of 1021 thermal images, obtained from three distinct thermal cameras at different resolutions (low-resolution, mid-resolution, and high-resolution), resulting in a total of 3063 thermal images. From each resolution, 951 images are used for training and 50 for testing, while the 20 remaining images are used for two proposed evaluations. The first evaluation consists of downsampling the low-resolution, mid-resolution, and high-resolution thermal images by x2, x3 and x4 respectively, and comparing their super-resolution results with the corresponding ground truth images. The second evaluation is comprised of obtaining the x2 super-resolution from a given mid-resolution thermal image and comparing it with the corresponding semi-registered high-resolution thermal image. Out of 51 registered participants, 6 teams reached the final validation phase. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | Original Title | ||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 21607508 | ISBN | 978-172819360-1 | Medium | |
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 123 | ||
Permanent link to this record | |||||
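The TISR challenge record above evaluates super-resolved images against ground-truth images using PSNR and SSIM. As a minimal illustrative sketch of the first of those metrics (this is not the official challenge evaluation code; the function name and toy data are made up for demonstration), PSNR can be computed as:

```python
import numpy as np

def psnr(sr: np.ndarray, hr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between a super-resolved image and its ground truth."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a "super-resolved" image off by 1 gray level everywhere.
hr = np.full((8, 8), 100.0)
sr = hr + 1.0
print(round(psnr(sr, hr), 2))  # MSE = 1, so PSNR = 20*log10(255) ≈ 48.13
```

SSIM, the second metric, compares local structure rather than pixel-wise error and is usually taken from an existing library implementation rather than written by hand.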
Author | Rafael E. Rivadeneira; Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla | ||||
Title | Thermal Image SuperResolution through Deep Convolutional Neural Network. | Type | Conference Article | ||
Year | 2019 | Publication | 16th International Conference on Image Analysis and Recognition (ICIAR 2019); Waterloo, Canada | Abbreviated Journal |
Volume | Issue | Pages | 417-426 | ||
Keywords | |||||
Abstract | Due to the lack of thermal image datasets, a new dataset has been acquired in order to propose a super-resolution approach based on a Deep Convolutional Neural Network scheme. Different experiments have been carried out: firstly, the proposed architecture has been trained using only images of the visible spectrum, and later it has been trained with images of the thermal spectrum. The results showed that with the network trained with thermal images, better results are obtained in the process of enhancing the images, maintaining the image details and perspective. The thermal dataset is available at http://www.cidis.espol.edu.ec/es/dataset | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 103 | ||
Permanent link to this record | |||||
Author | Spencer Low; Oliver Nina; Angel D. Sappa; Erik Blasch; Nathan Inkawhich | ||||
Title | Multi-modal Aerial View Object Classification Challenge Results – PBVS 2023 | Type | Conference Article | ||
Year | 2023 | Publication | 19th IEEE Workshop on Perception Beyond the Visible Spectrum at the Computer Vision & Pattern Recognition Conference (CVPR 2023), June 18-28 | Abbreviated Journal |
Volume | 2023-June | Issue | Pages | 412-421 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 21607508 | ISBN | 979-835030249-3 | Medium | |
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 212 | ||
Permanent link to this record | |||||
Author | Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla | ||||
Title | Cross-spectral image dehaze through a dense stacked conditional GAN based approach. | Type | Conference Article | ||
Year | 2018 | Publication | 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) | Abbreviated Journal | |
Volume | Issue | Pages | 358-364 | ||
Keywords | |||||
Abstract | This paper proposes a novel approach to remove haze from RGB images using near infrared images, based on a dense stacked conditional Generative Adversarial Network (CGAN). The architecture of the deep network implemented receives, besides the images with haze, their corresponding images in the near infrared spectrum, which serve to accelerate the learning of the details and characteristics of the images. The model uses a triplet layer that allows the independent learning of each channel of the visible spectrum image to remove the haze on each color channel separately. A multiple loss function scheme is proposed, which ensures balanced learning between the colors and the structure of the images. Experimental results have shown that the proposed method effectively removes the haze from the images. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 92 | ||
Permanent link to this record | |||||
Author | Patricia L. Suarez; Angel D. Sappa; Boris X. Vintimilla | ||||
Title | Vegetation Index Estimation from Monospectral Images | Type | Conference Article | ||
Year | 2018 | Publication | 15th International Conference on Image Analysis and Recognition (ICIAR 2018), Póvoa de Varzim, Portugal. Lecture Notes in Computer Science | Abbreviated Journal |
Volume | 10882 | Issue | Pages | 353-362 | |
Keywords | |||||
Abstract | This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from just the red channel of an RGB image. The NDVI index is defined as the ratio of the difference of the red and infrared radiances over their sum. In other words, information from the red channel of an RGB image and the corresponding infrared spectral band are required for its computation. In the current work the NDVI index is estimated just from the red channel by training a Conditional Generative Adversarial Network (CGAN). The architecture proposed for the generative network consists of a single level structure, which combines at the final layer results from convolutional operations together with the given red channel with Gaussian noise to enhance details, resulting in a sharp NDVI image. Then, the discriminative model estimates the probability that the generated NDVI index came from the training dataset, rather than having been automatically generated. Experimental results with a large set of real images are provided showing that a Conditional GAN single level model represents an acceptable approach to estimate the NDVI index. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 82 | ||
Permanent link to this record | |||||
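The NDVI record above defines the index as the ratio of the difference of the red and infrared radiances over their sum. A minimal sketch of that formula follows (illustrative values only; this is the classical two-band computation, not the paper's CGAN estimator, which predicts NDVI without access to the infrared band):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel.

    eps avoids division by zero where both bands are near zero.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy reflectance values: healthy vegetation is bright in NIR and dark
# in red, so it yields an NDVI close to +1; equal bands yield 0.
print(round(float(ndvi(np.array([[0.8]]), np.array([[0.1]]))[0, 0]), 3))  # 0.778
```

Values range from -1 to +1, with higher values indicating denser vegetation, which is why the index is a common target for estimation from incomplete spectral data.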
Author | Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla; Jin Kim; Dogun Kim et al. | ||||
Title | Thermal Image Super-Resolution Challenge Results- PBVS 2022. | Type | Conference Article | ||
Year | 2022 | Publication | Computer Vision and Pattern Recognition Workshops (CVPRW 2022), June 19-24 | Abbreviated Journal | CONFERENCE
Volume | 2022-June | Issue | Pages | 349-357 | |
Keywords | |||||
Abstract | This paper presents results from the third Thermal Image Super-Resolution (TISR) challenge, organized in the Perception Beyond the Visible Spectrum (PBVS) 2022 workshop. The challenge uses the same thermal image dataset as the first two challenges, with 951 training images and 50 validation images at each resolution. A set of 20 images was kept aside for testing. The evaluation tasks were to measure the PSNR and SSIM between the SR image and the ground truth (HR thermal noisy image downsampled by four), and also to measure the PSNR and SSIM between the SR image and the semi-registered HR image (acquired with another camera). The results outperformed those from last year's challenge, improving both evaluation metrics. This year, almost 100 teams registered for the challenge, showing the community's interest in this hot topic. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 175 | ||
Permanent link to this record | |||||
Author | Dennis G. Romero; A. Frizera; Angel D. Sappa; Boris X. Vintimilla; T.F. Bastos | ||||
Title | A predictive model for human activity recognition by observing actions and context | Type | Conference Article | ||
Year | 2015 | Publication | Advanced Concepts for Intelligent Vision Systems (ACIVS 2015), International Conference, Catania, Italy, 2015 | Abbreviated Journal |
Volume | Issue | Pages | 323 - 333 | ||
Keywords | Edge width, Image blur, Defocus map, Edge model | ||||
Abstract | This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on the usage of Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work human activities are inferred considering not only visual analysis but also additional resources; external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can be later associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions plus contextual information or other kinds of information that could be relevant to describe the activity. Experimental results with real data are provided showing the validity of the proposed approach. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 43 | ||
Permanent link to this record | |||||
Author | Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira | ||||
Title | Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives | Type | Journal Article | ||
Year | 2016 | Publication | Robotics and Autonomous Systems Journal | Abbreviated Journal | |
Volume | 83 | Issue | Pages | 312-325 |
Keywords | Incremental scene reconstruction, Point clouds, Autonomous vehicles, Polygonal primitives | ||||
Abstract | When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, creating and updating over time a representation of the environment observed by the vehicle is not trivial. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 49 | ||
Permanent link to this record |