Records | |||||
---|---|---|---|---|---|
Author | Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira | ||||
Title | Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives | Type | Journal Article | ||
Year | 2016 | Publication | Robotics and Autonomous Systems Journal | Abbreviated Journal | |
Volume | Vol. 83 | Issue | Pages | pp. 312-325 | |
Keywords | Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives | ||||
Abstract | When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 49 | ||
Permanent link to this record | |||||
Author | Monica Villavicencio; Alain Abran | ||||
Title | Educational Issues in the Teaching of Software Measurement in Software Engineering Undergraduate Programs | Type | Conference Article | ||
Year | 2011 | Publication | Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement | Abbreviated Journal | |
Volume | Issue | Pages | 239-244 | ||
Keywords | measurement; software engineering; higher education | ||||
Abstract | In mature engineering disciplines and science, mathematics and measurement are considered important subjects to be taught in university programs. This paper discusses these subjects in terms of their respective meanings and complementarities. It also presents a discussion regarding their maturity, relevance and innovations in their teaching in engineering programs. This paper pays special attention to the teaching of software measurement in higher education, in particular with respect to mathematics and measurement in engineering in general. The findings from this analysis will be useful for researchers and educators interested in the enhancement of educational issues related to software measurement. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | IEEE | Place of Publication | Editor | ||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | gtsi @ user @ | Serial | 68 | ||
Permanent link to this record | |||||
Author | Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira | ||||
Title | Incremental Texture Mapping for Autonomous Driving | Type | Journal Article | ||
Year | 2016 | Publication | Robotics and Autonomous Systems Journal | Abbreviated Journal | |
Volume | Vol. 84 | Issue | Pages | pp. 113-128 | |
Keywords | Scene reconstruction, Autonomous driving, Texture mapping | ||||
Abstract | Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 50 | ||
Permanent link to this record | |||||
Author | Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic | ||||
Title | Multi-sensor Fusion Module in a Fault Tolerant Perception System for Autonomous Vehicles | Type | Journal Article | ||
Year | 2016 | Publication | Journal of Automation and Control Engineering (JOACE) | Abbreviated Journal | |
Volume | Vol. 4 | Issue | Pages | pp. 430-436 | |
Keywords | Fault Tolerance, Data Fusion, Multi-sensor Fusion, Autonomous Vehicles, Perception System | ||||
Abstract | Driverless vehicles are currently being tested on public roads in order to examine their ability to perform in a safe and reliable way in real world situations. However, the long-term reliable operation of a vehicle’s diverse sensors and the effects of potential sensor faults in the vehicle system have not been tested yet. This paper proposes a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented simulating faults by introducing displacements in the sensor information from the KITTI dataset. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 51 | ||
Permanent link to this record | |||||
Author | Miguel Realpe; Boris X. Vintimilla; Ljubo Vlacic | ||||
Title | A Fault Tolerant Perception system for autonomous vehicles | Type | Conference Article | ||
Year | 2016 | Publication | 35th Chinese Control Conference (CCC 2016), Chengdu | Abbreviated Journal | |
Volume | Issue | Pages | 1-6 | ||
Keywords | Fault Tolerant Perception, Sensor Data Fusion, Fault Tolerance, Autonomous Vehicles, Federated Architecture | ||||
Abstract | Driverless vehicles are currently being tested on public roads in order to examine their ability to perform in a safe and reliable way in real world situations. However, the long-term reliable operation of a vehicle’s diverse sensors and the effects of potential sensor faults in the vehicle system have not been tested yet. This paper proposes a sensor fusion architecture that minimizes the influence of a sensor fault. Experimental results are presented simulating faults by introducing displacements in the sensor information from the KITTI dataset. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 52 | ||
Permanent link to this record | |||||
Author | Rafael E. Rivadeneira; Angel D. Sappa; Boris X. Vintimilla; Lin Guo; Jiankun Hou; Armin Mehri; Parichehr Behjati Ardakani; Heena Patel; Vishal Chudasama; Kalpesh Prajapati; Kishor P. Upla; Raghavendra Ramachandra; Kiran Raja; Christoph Busch; Feras Almasri; Olivier Debeir; Sabari Nathan; Priya Kansal; Nolan Gutierrez; Bardia Mojra; William J. Beksi | ||||
Title | Thermal Image Super-Resolution Challenge – PBVS 2020 | Type | Conference Article | ||
Year | 2020 | Publication | The 16th IEEE Workshop on Perception Beyond the Visible Spectrum at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) | Abbreviated Journal | |
Volume | 2020-June | Issue | 9151059 | Pages | 432-439 |
Keywords | |||||
Abstract | This paper summarizes the top contributions to the first challenge on thermal image super-resolution (TISR), which was organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2020 workshop. In this challenge, a novel thermal image dataset is considered together with state-of-the-art approaches evaluated under a common framework. The dataset used in the challenge consists of 1021 thermal images, obtained from three distinct thermal cameras at different resolutions (low-resolution, mid-resolution, and high-resolution), resulting in a total of 3063 thermal images. From each resolution, 951 images are used for training and 50 for testing, while the 20 remaining images are used for two proposed evaluations. The first evaluation consists of downsampling the low-resolution, mid-resolution, and high-resolution thermal images by x2, x3 and x4 respectively, and comparing their super-resolution results with the corresponding ground truth images. The second evaluation is comprised of obtaining the x2 super-resolution from a given mid-resolution thermal image and comparing it with the corresponding semi-registered high-resolution thermal image. Out of 51 registered participants, 6 teams reached the final validation phase. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 21607508 | ISBN | 978-172819360-1 | Medium | |
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 123 | ||
Permanent link to this record | |||||
Author | Henry O. Velesaca; Raul A. Mira; Patricia L. Suarez; Christian X. Larrea; Angel D. Sappa | ||||
Title | Deep Learning based Corn Kernel Classification | Type | Conference Article | ||
Year | 2020 | Publication | The 1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) | Abbreviated Journal | |
Volume | 2020-June | Issue | 9150684 | Pages | 294-302 |
Keywords | |||||
Abstract | This paper presents a full pipeline to classify sample sets of corn kernels. The proposed approach follows a segmentation-classification scheme. The image segmentation is performed through a well-known deep learning based approach, the Mask R-CNN architecture, while the classification is performed by means of a novel lightweight network specially designed for this task: good corn kernel, defective corn kernel and impurity categories are considered. As a second contribution, a carefully annotated multi-touching corn kernel dataset has been generated. This dataset has been used for training the segmentation and the classification modules. Quantitative evaluations have been performed and comparisons with other approaches are provided, showing improvements with the proposed pipeline. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 21607508 | ISBN | 978-172819360-1 | Medium | |
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 124 | ||
Permanent link to this record | |||||
Author | Henry O. Velesaca; Steven Araujo; Patricia L. Suarez; Ángel Sánchez; Angel D. Sappa | ||||
Title | Off-the-Shelf Based System for Urban Environment Video Analytics | Type | Conference Article | ||
Year | 2020 | Publication | The 27th International Conference on Systems, Signals and Image Processing (IWSSIP 2020) | Abbreviated Journal | |
Volume | 2020-July | Issue | 9145121 | Pages | 459-464 |
Keywords | Greenhouse gases, carbon footprint, object detection, object tracking, website framework, off-the-shelf video analytics. | ||||
Abstract | This paper presents the design and implementation details of a system built by using off-the-shelf algorithms for urban video analytics. The system allows the connection to public video surveillance camera networks to obtain the necessary information to generate statistics from urban scenarios (e.g., number of vehicles, type of cars, direction, number of persons, etc.). The obtained information could be used not only for traffic management but also to estimate the carbon footprint of urban scenarios. As a case study, a university campus is selected to evaluate the performance of the proposed system. The system is implemented in a modular way so that it can be used as a testbed to evaluate different algorithms. Implementation results are provided showing the validity and utility of the proposed approach. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 21578672 | ISBN | 978-172817539-3 | Medium | |
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 125 | ||
Permanent link to this record | |||||
Author | Cristhian A. Aguilera; Cristhian Aguilera; Cristóbal A. Navarro; Angel D. Sappa | ||||
Title | Fast CNN Stereo Depth Estimation through Embedded GPU Devices | Type | Journal Article | ||
Year | 2020 | Publication | Sensors | Abbreviated Journal | |
Volume | Vol. 2020-June | Issue | 11 | Pages | pp. 1-13 |
Keywords | stereo matching; deep learning; embedded GPU | ||||
Abstract | Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphic processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so it is unknown what the current potential on embedded GPU devices is. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net like architecture for postprocessing the cost-volume, instead of a typical sequence of 3D convolutions, drastically augmenting the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216×368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 14248220 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 132 | ||
Permanent link to this record | |||||
Author | Ángel Morera; Ángel Sánchez; A. Belén Moreno; Angel D. Sappa; José F. Vélez | ||||
Title | SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities | Type | Journal Article | ||
Year | 2020 | Publication | Sensors | Abbreviated Journal | |
Volume | Vol. 2020-August | Issue | 16 | Pages | pp. 1-23 |
Keywords | object detection; urban outdoor panels; one-stage detectors; Single Shot MultiBox Detector (SSD); You Only Look Once (YOLO); detection metrics; object and scene imaging variabilities | ||||
Abstract | This work compares Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks for the outdoor advertisement panel detection problem by handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages both in the real world as well as in the virtual one. For example, applications like Google Street View can be used for Internet publicity and when detecting these ads panels in images, it could be possible to replace the publicity appearing inside the panels by another from a funding company. In our experiments, both SSD and YOLO detectors have produced acceptable results under variable sizes of panels, illumination conditions, viewing perspectives, partial occlusion of panels, complex background and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was the almost complete elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside the panel is analyzed after detecting them. On the other side, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks and using the same evaluation metrics is also included. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 14248220 | ISBN | Medium | |
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | cidis @ cidis @ | Serial | 133 | ||
Permanent link to this record |