Author: Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias
Title: Scene representations for autonomous driving: an approach based on polygonal primitives
Type: Conference Article
Year: 2015
Publication: Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015
Volume: 417
Pages: 503-515
Keywords: Scene reconstruction; Point cloud; Autonomous vehicles
Abstract: In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.
Publisher: Springer International Publishing Switzerland, 2016
Language: English; Summary Language: English
Conference: Second Iberian Robotics Conference
Approved: no
Call Number: cidis @ cidis @
Serial: 45
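The abstract above outlines a pipeline that models a scene as a small set of large, planar polygons extracted from 3D range data. The Python sketch below is only a hedged illustration of that general idea, not the paper's implementation: it fits a single plane to a point cloud with RANSAC and bounds the inliers with a convex hull to obtain one polygonal primitive. All function names, thresholds, and the synthetic data are assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): extract one macro-scale
# polygonal primitive from a point cloud via RANSAC plane fitting plus a
# 2D convex hull of the inliers in the plane.
import numpy as np
from scipy.spatial import ConvexHull

def fit_plane_ransac(points, n_iters=200, inlier_thresh=0.05, seed=None):
    """Return (normal, d, inlier_mask) for the best plane n.x + d = 0."""
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate sample, skip it
            continue
        n = n / norm
        d = -np.dot(n, p0)
        mask = np.abs(points @ n + d) < inlier_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask

def polygon_from_inliers(points, normal):
    """Project inliers onto the plane and return the hull polygon (3D corners)."""
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:             # plane is roughly horizontal
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    centroid = points.mean(axis=0)
    uv = np.column_stack([(points - centroid) @ u, (points - centroid) @ v])
    hull = ConvexHull(uv)
    # Lift the 2D hull vertices back to 3D: these are the polygon's corners.
    return centroid + uv[hull.vertices, 0:1] * u + uv[hull.vertices, 1:2] * v

if __name__ == "__main__":
    # Synthetic "ground plane" scan with a little vertical noise.
    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.uniform(-10, 10, 2000),
                           rng.uniform(-10, 10, 2000),
                           rng.normal(0.0, 0.02, 2000)])
    n, d, mask = fit_plane_ransac(pts, seed=0)
    polygon = polygon_from_inliers(pts[mask], n)
    print("plane normal:", np.round(n, 2), "| polygon corners:", len(polygon))
```

In a full scene description one would presumably iterate: remove the inliers of the detected plane, fit the next plane, and stop once no sufficiently large planar region remains.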
 

 
Author: Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title: Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives
Type: Journal Article
Year: 2016
Publication: Robotics and Autonomous Systems
Volume: 83
Pages: 312-325
Keywords: Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives
Abstract: When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
Language: English; Summary Language: English
Approved: no
Call Number: cidis @ cidis @
Serial: 49
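This second abstract is about keeping such a polygonal representation up to date as asynchronous, overlapping sensor data keeps arriving. The sketch below is a hedged illustration of one plausible incremental scheme (point-to-plane association followed by a least-squares plane refresh); it is not the mechanism proposed in the paper, and the class, function, and threshold names are invented for the example.

```python
# Illustrative sketch (not the authors' implementation): incrementally maintain
# a set of planar polygonal primitives as new range scans arrive. New points are
# associated with an existing primitive when they lie close to its supporting
# plane; the rest are returned as candidates for seeding new primitives.
import numpy as np

class PlanarPrimitive:
    def __init__(self, normal, d, points):
        self.normal = np.asarray(normal, float)   # unit plane normal
        self.d = float(d)                          # plane offset: n.x + d = 0
        self.points = np.asarray(points, float)    # supporting points so far

    def distance(self, pts):
        """Unsigned point-to-plane distances."""
        return np.abs(pts @ self.normal + self.d)

    def update(self, new_pts):
        """Fuse fresh supporting points and refresh the plane by least squares."""
        self.points = np.vstack([self.points, new_pts])
        centroid = self.points.mean(axis=0)
        # Direction of least variance of the centered points = refined normal.
        _, _, vt = np.linalg.svd(self.points - centroid, full_matrices=False)
        self.normal = vt[-1]
        self.d = -self.normal @ centroid

def integrate_scan(primitives, scan, assoc_thresh=0.05):
    """Assign each scan point to an existing primitive if it is close enough."""
    unexplained = np.ones(len(scan), dtype=bool)
    for prim in primitives:
        take = (prim.distance(scan) < assoc_thresh) & unexplained
        if take.any():
            prim.update(scan[take])
            unexplained &= ~take
    return scan[unexplained]   # left-overs: candidates for new primitives
```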
 

 
Author: Miguel Oliveira; Vítor Santos; Angel D. Sappa; Paulo Dias; A. Paulo Moreira
Title: Incremental Texture Mapping for Autonomous Driving
Type: Journal Article
Year: 2016
Publication: Robotics and Autonomous Systems
Volume: 84
Pages: 113-128
Keywords: Scene reconstruction; Autonomous driving; Texture mapping
Abstract: Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures.
Language: English; Summary Language: English
Approved: no
Call Number: cidis @ cidis @
Serial: 50
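This third abstract concerns mapping texture from camera images onto the geometric mesh built from 3D sensor data. The fragment below illustrates only the basic projection step that such a texture-mapping stage relies on: computing per-vertex image (UV) coordinates with a pinhole camera model. The constrained Delaunay triangulation and the mesh-update operations described in the abstract are not reproduced, and the intrinsics, pose, and function names are placeholder assumptions for the example.

```python
# Illustrative sketch (not the paper's algorithm): project mesh vertices into a
# calibrated camera to obtain normalised texture (UV) coordinates per vertex.
import numpy as np

def project_vertices(vertices, K, T_cam_world, image_size):
    """Project Nx3 world-frame vertices into the image.

    K           : 3x3 camera intrinsics.
    T_cam_world : 4x4 transform taking world points into the camera frame.
    Returns (uv, valid): uv is Nx2 texture coordinates in [0, 1]; valid marks
    vertices that project inside the image and lie in front of the camera.
    """
    w, h = image_size
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    cam = (T_cam_world @ homo.T).T[:, :3]              # points in camera frame
    in_front = cam[:, 2] > 1e-6
    pix = (K @ cam.T).T
    pix = pix[:, :2] / np.maximum(pix[:, 2:3], 1e-6)   # perspective divide
    inside = (pix[:, 0] >= 0) & (pix[:, 0] < w) & (pix[:, 1] >= 0) & (pix[:, 1] < h)
    uv = pix / np.array([w, h], dtype=float)           # normalise to [0, 1]
    return uv, in_front & inside

if __name__ == "__main__":
    # Hypothetical intrinsics and pose, only to exercise the function.
    K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
    T = np.eye(4)                                       # camera at the world origin
    tri = np.array([[0., 0., 5.], [1., 0., 5.], [0., 1., 5.]])
    uv, ok = project_vertices(tri, K, T, (640, 480))
    print(np.round(uv, 3), ok)
```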