Author Henry O. Velesaca; Steven Araujo; Patricia L. Suarez; Ángel Sanchez; Angel D. Sappa
  Title Off-the-Shelf Based System for Urban Environment Video Analytics. Type Conference Article
  Year 2020 Publication The 27th International Conference on Systems, Signals and Image Processing (IWSSIP 2020) Abbreviated Journal  
  Volume 2020-July Issue 9145121 Pages 459-464  
  Keywords Greenhouse gases, carbon footprint, object detection, object tracking, website framework, off-the-shelf video analytics.  
  Abstract This paper presents the design and implementation details of a system built using off-the-shelf algorithms for urban video analytics. The system connects to public video surveillance camera networks to obtain the information needed to generate statistics on urban scenarios (e.g., number of vehicles, type of cars, direction, number of persons, etc.). The obtained information can be used not only for traffic management but also to estimate the carbon footprint of urban scenarios. As a case study, a university campus is selected to evaluate the performance of the proposed system. The system is implemented in a modular way so that it can be used as a testbed to evaluate different algorithms. Implementation results are provided, showing the validity and utility of the proposed approach.  
  Language English  
  ISSN 21578672 ISBN 978-172817539-3  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 125  
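The off-the-shelf pipeline described in the record above can be approximated with publicly available components. The following minimal sketch is an illustration, not the authors' implementation: it uses OpenCV for frame capture and a pretrained torchvision Faster R-CNN detector to count persons and vehicles per frame; the stream URL, class subset, and confidence threshold are assumptions made for the example.

```python
import cv2
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained COCO detector used as an "off-the-shelf" component.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

COCO_IDS = {1: "person", 3: "car", 6: "bus", 8: "truck"}  # subset of COCO classes
STREAM_URL = "http://example.org/campus_camera.mjpg"      # hypothetical public camera

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = model([to_tensor(rgb)])[0]
    counts = {}
    for label, score in zip(pred["labels"].tolist(), pred["scores"].tolist()):
        if score > 0.6 and label in COCO_IDS:
            counts[COCO_IDS[label]] = counts.get(COCO_IDS[label], 0) + 1
    print(counts)  # per-frame statistics (e.g., number of vehicles and persons)
cap.release()
```

A tracker would normally be added on top of the detections to avoid double counting across frames and to estimate direction of motion.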
 

 
Author Henry O. Velesaca; Raul A. Mira; Patricia L. Suarez; Christian X. Larrea; Angel D. Sappa
  Title Deep Learning based Corn Kernel Classification. Type Conference Article
  Year 2020 Publication The 1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture, at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) Abbreviated Journal  
  Volume 2020-June Issue 9150684 Pages 294-302  
  Keywords  
  Abstract This paper presents a full pipeline to classify sample sets of corn kernels. The proposed approach follows a segmentation-classification scheme. The image segmentation is performed through a well-known deep learning based approach, the Mask R-CNN architecture, while the classification is performed by means of a novel lightweight network specially designed for this task; good corn kernel, defective corn kernel and impurity categories are considered. As a second contribution, a carefully annotated multi-touching corn kernel dataset has been generated. This dataset has been used for training the segmentation and the classification modules. Quantitative evaluations have been performed and comparisons with other approaches are provided, showing improvements with the proposed pipeline.  
  Language English  
  ISSN 21607508 ISBN 978-172819360-1  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 124  
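The segmentation-classification scheme in the record above can be illustrated with a short sketch. This is a hypothetical approximation, not the authors' code: a pretrained torchvision Mask R-CNN produces instance boxes, each detected kernel is cropped and passed to a small three-class classifier (good, defective, impurity); the classifier architecture, crop size, and thresholds are assumptions.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision.transforms.functional import to_tensor, resize

# Stage 1: instance segmentation with an off-the-shelf Mask R-CNN.
segmenter = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
segmenter.eval()

# Stage 2: a small (hypothetical) lightweight classifier with 3 output classes.
classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 3),   # good / defective / impurity
)
classifier.eval()
CLASSES = ["good", "defective", "impurity"]

def classify_kernels(image):
    """image: HxWx3 uint8 RGB array of (possibly touching) corn kernels."""
    tensor = to_tensor(image)
    with torch.no_grad():
        det = segmenter([tensor])[0]
    results = []
    for box, score in zip(det["boxes"], det["scores"]):
        if score < 0.5:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        if x2 <= x1 or y2 <= y1:
            continue
        crop = resize(tensor[:, y1:y2, x1:x2], [64, 64])   # normalize crop size
        with torch.no_grad():
            cls = classifier(crop.unsqueeze(0)).argmax(1).item()
        results.append((CLASSES[cls], (x1, y1, x2, y2)))
    return results
```

In practice both stages would be trained on the annotated corn kernel dataset before use; the sketch only shows how the two modules are chained.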
 

 
Author Jorge L. Charco; Angel D. Sappa; Boris X. Vintimilla
  Title Human Pose Estimation through A Novel Multi-View Scheme Type Conference Article
  Year 2022 Publication Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications VISIGRAPP 2022 Abbreviated Journal  
  Volume 5 Issue Pages 855-862  
  Keywords Multi-View Scheme, Human Pose Estimation, Relative Camera Pose, Monocular Approach  
  Abstract This paper presents a multi-view scheme to tackle the challenging problem of self-occlusion in human pose estimation. The proposed approach first obtains the human body joints from a set of images captured from different views at the same time. Then, it enhances the obtained joints by using a multi-view scheme: the joints from a given view are used to enhance poorly estimated joints from another view, especially intended to tackle self-occlusion cases. A network architecture initially proposed for the monocular case is adapted to be used in the proposed multi-view scheme. Experimental results and comparisons with state-of-the-art approaches on the Human3.6M dataset are presented, showing improvements in the accuracy of body joint estimations.  
  Notes Approved yes  
  Call Number cidis @ cidis @ Serial 169  
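The record above describes using joints from one view to correct poorly estimated joints in another. A minimal geometric sketch of that idea follows; it is an assumption-laden illustration, not the paper's network-based method: a low-confidence 2D joint in one view is snapped onto the epipolar line induced by the same joint seen reliably in another, synchronized, calibrated view. The fundamental matrix, confidences, and threshold are placeholders.

```python
import numpy as np

def refine_joint(F21, joint_view1, joint_view2, conf1, conf_thresh=0.3):
    """Snap a low-confidence joint in view 1 onto the epipolar line induced by view 2.

    F21          : 3x3 fundamental matrix mapping points of view 2 to epipolar lines in
                   view 1 (i.e., x1^T @ F21 @ x2 = 0 for corresponding points).
    joint_view1  : (x, y) low-confidence joint estimate in view 1.
    joint_view2  : (x, y) reliable estimate of the same joint in view 2.
    conf1        : detector confidence of the view-1 estimate.
    """
    if conf1 >= conf_thresh:
        return joint_view1                        # keep reliable estimates untouched
    x2 = np.array([joint_view2[0], joint_view2[1], 1.0])
    a, b, c = F21 @ x2                            # epipolar line a*x + b*y + c = 0 in view 1
    x, y = joint_view1
    d = (a * x + b * y + c) / (a * a + b * b)
    return (x - a * d, y - b * d)                 # orthogonal projection onto the line
```

The published approach learns this cross-view refinement with a network adapted from a monocular architecture; the epipolar projection above only conveys the underlying multi-view constraint.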
 

 
Author Jorge L. Charco; Angel D. Sappa; Boris X. Vintimilla; Henry O. Velesaca
  Title Human Body Pose Estimation in Multi-view Environments. Type Book Chapter
  Year 2022 Publication ICT Applications for Smart Cities, part of the Intelligent Systems Reference Library book series Abbreviated Journal BOOK  
  Volume 224 Issue Pages 79-99  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 197  
 

 
Author Jorge L. Charco; Angel D. Sappa; Boris X. Vintimilla; Henry O. Velesaca
  Title Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem Type Conference Article
  Year 2020 Publication The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020); Valletta, Malta; 27-29 February 2020 Abbreviated Journal  
  Volume 4 Issue Pages 498-505  
  Keywords Relative Camera Pose Estimation, Siamese Architecture, Synthetic Data, Deep Learning, Multi-View Environments, Extrinsic Camera Parameters.  
  Abstract This paper presents a novel Siamese network architecture, as a variant of ResNet-50, to estimate the relative camera pose in multi-view environments. In order to improve the performance of the proposed model, a transfer learning strategy based on synthetic images obtained from a virtual world is considered. The transfer learning consists of first training the network using pairs of images from the virtual-world scenario under different conditions (i.e., weather, illumination, objects, buildings, etc.); then, the learned weights of the network are transferred to the real case, where images from real-world scenarios are considered. Experimental results and comparisons with the state of the art show both improvements in relative pose estimation accuracy using the proposed model, as well as further improvements when the transfer learning strategy (synthetic-world data, transfer learning, real-world data) is considered to tackle the limitation in training due to the reduced number of pairs of real images in most public datasets.  
  ISBN 978-989758402-2  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 120  
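As an illustration of the Siamese scheme described in the record above, the sketch below shares a ResNet-50 backbone between the two images of a pair and regresses a relative translation plus a quaternion rotation. It is a sketch under assumptions, not the published architecture: the head sizes, loss weighting, and the synthetic-then-real training order are assumed details.

```python
import torch
import torch.nn as nn
import torchvision

class SiameseRelativePose(nn.Module):
    """Shared ResNet-50 backbone; concatenated features regress the relative pose."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="DEFAULT")
        backbone.fc = nn.Identity()               # keep the 2048-d global feature
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2 * 2048, 512), nn.ReLU(),
            nn.Linear(512, 7),                    # 3 translation + 4 quaternion values
        )

    def forward(self, img_a, img_b):
        feat = torch.cat([self.backbone(img_a), self.backbone(img_b)], dim=1)
        out = self.head(feat)
        t, q = out[:, :3], out[:, 3:]
        return t, q / q.norm(dim=1, keepdim=True)  # normalized quaternion

def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=10.0):
    # Weighted translation + rotation regression loss (beta is an assumption).
    return nn.functional.mse_loss(t_pred, t_gt) + beta * nn.functional.mse_loss(q_pred, q_gt)

# Transfer learning idea: first optimize on synthetic (virtual-world) image pairs,
# then continue training the same weights on the smaller set of real-world pairs.
model = SiameseRelativePose()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```

The two-stage training simply reuses the optimizer and weights across the synthetic and real datasets, which is what allows the small real set to benefit from the large synthetic one.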
 

 
Author Jorge L. Charco; Boris X. Vintimilla; Angel D. Sappa
  Title Deep learning based camera pose estimation in multi-view environment. Type Conference Article
  Year 2018 Publication 14th IEEE International Conference on Signal Image Technology & Internet based Systems (SITIS 2018) Abbreviated Journal  
  Volume Issue Pages 224-228  
  Keywords  
  Abstract This paper proposes to use a deep learning network architecture for relative camera pose estimation in a multi-view environment. The proposed network is a variant of the AlexNet architecture used as a regressor to predict the relative translation and rotation as output. The proposed approach is trained from scratch on a large dataset that takes as input pairs of images from the same scene. This new architecture is compared with a previous approach using standard metrics, obtaining better results on the relative camera pose.  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 93  
 

 
Author Juan A. Carvajal; Dennis G. Romero; Angel D. Sappa
  Title Fine-tuning deep convolutional networks for lepidopterous genus recognition Type Journal Article
  Year 2017 Publication Lecture Notes in Computer Science Abbreviated Journal  
  Volume 10125 LNCS Issue Pages 467-475  
  Notes Approved no  
  Call Number gtsi @ user @ Serial 63  
 

 
Author Juan A. Carvajal; Dennis G. Romero; Angel D. Sappa
  Title Fine-tuning based deep convolutional networks for lepidopterous genus recognition Type Conference Article
  Year 2016 Publication XXI IberoAmerican Congress on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 1-9  
  Keywords  
  Abstract This paper describes an image classification approach oriented to identify specimens of lepidopterous insects recognized at Ecuadorian ecological reserves. This work seeks to contribute to studies in the area of biology about genus of butterflies and also to facilitate the registration of unrecognized specimens. The proposed approach is based on the fine-tuning of three widely used pre-trained Convolutional Neural Networks (CNNs). This strategy is intended to overcome the reduced number of labeled images. Experimental results with a dataset labeled by expert biologists are presented; a recognition accuracy above 92% is reached.  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 53  
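The fine-tuning strategy summarized in the record above can be sketched as follows. This is a generic illustration with assumed hyperparameters, backbone choice, and class count, not the authors' exact setup: a pre-trained torchvision CNN has its final layer replaced for the butterfly genus classes, earlier layers are optionally frozen, and training proceeds with a small learning rate.

```python
import torch
import torch.nn as nn
import torchvision

NUM_GENERA = 15          # assumed number of lepidopterous genera in the dataset

# Start from an ImageNet pre-trained network to compensate for few labeled images.
model = torchvision.models.resnet50(weights="DEFAULT")

# Optionally freeze early layers so only higher-level features adapt.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification layer with one sized for the target genera.
model.fc = nn.Linear(model.fc.in_features, NUM_GENERA)

# Fine-tune: small learning rate on the new (and any unfrozen) parameters.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step over a mini-batch of labeled butterfly images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same recipe would be repeated for each of the three pre-trained networks evaluated in the paper, keeping the data loader and training loop unchanged.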
 

 
Author Julien Poujol; Cristhian A. Aguilera; Etienne Danos; Boris X. Vintimilla; Ricardo Toledo; Angel D. Sappa
  Title A Visible-Thermal Fusion based Monocular Visual Odometry Type Conference Article
  Year 2015 Publication Iberian Robotics Conference (ROBOT 2015), International Conference on, Lisbon, Portugal, 2015 Abbreviated Journal  
  Volume 417 Issue Pages 517-528  
  Keywords Monocular Visual Odometry; LWIR-RGB cross-spectral Imaging; Image Fusion  
  Abstract The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which are from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) be more robust to noisy data; ii) be less sensitive to changes (e.g., lighting); iii) be richer in descriptive information, among others. In particular, in the current work two different image fusion strategies are considered. Firstly, images from the visible and thermal spectra are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both the monocular visible spectrum and the monocular infrared spectrum are also provided, showing the validity of the proposed approach.  
  Language English Summary Language English  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 44  
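The DWT-based fusion evaluated in the record above can be sketched with PyWavelets. This is only a generic illustration of wavelet image fusion; the wavelet family, fusion rules, and single-level decomposition are assumptions, not the paper's exact configuration. Approximation coefficients are averaged and detail coefficients are taken by maximum absolute value, after which the fused image is reconstructed.

```python
import numpy as np
import pywt

def dwt_fuse(visible, thermal, wavelet="haar"):
    """Fuse two registered grayscale images (same size, float arrays in [0, 1])."""
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visible, wavelet)
    cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(thermal, wavelet)

    # Fusion rules: average the low-frequency bands, keep the strongest details.
    cA = 0.5 * (cA_v + cA_t)
    fuse_detail = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    details = (fuse_detail(cH_v, cH_t),
               fuse_detail(cV_v, cV_t),
               fuse_detail(cD_v, cD_t))

    fused = pywt.idwt2((cA, details), wavelet)
    return np.clip(fused, 0.0, 1.0)

# Hypothetical usage with random stand-ins for registered visible/thermal frames.
vis = np.random.rand(240, 320)
lwir = np.random.rand(240, 320)
fused_frame = dwt_fuse(vis, lwir)   # this frame would then feed a monocular VO pipeline
```

The fused frames replace the single-spectrum input of the visual odometry framework, which is how the paper compares the fused representations against visible-only and infrared-only baselines.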
 

 
Author M. Oliveira; L. Seabra Lopes; G. Hyun Lim; S. Hamidreza Kasaei; Angel D. Sappa; A. Tomé
  Title Concurrent Learning of Visual Codebooks and Object Categories in Open-ended Domains Type Conference Article
  Year 2015 Publication Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, Hamburg, Germany, 2015 Abbreviated Journal  
  Volume Issue Pages 2488 - 2495  
  Keywords Birds, Training, Legged locomotion, Visualization, Histograms, Object recognition, Gaussian mixture model  
  Abstract In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are usually constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns in an incremental and online fashion both the visual object category representations as well as the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach contains similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring less examples, and with similar accuracies, when compared to the classical Bag of Words approach using codebooks constructed offline.  
  Publisher IEEE Place of Publication Hamburg, Germany  
  Language English Summary Language English  
  Conference 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  
  Notes Approved no  
  Call Number cidis @ cidis @ Serial 41  
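The Bag of Words model with a GMM codebook described in the record above can be illustrated with scikit-learn. This sketch is an offline approximation: the paper's codebook is updated incrementally and online from new object views, which is not shown here, and the descriptor extraction, component count, and histogram encoding are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

N_WORDS = 32   # assumed number of visual words (GMM components)

# Stand-in local descriptors (e.g., what SIFT/ORB would produce over many object views).
rng = np.random.default_rng(0)
training_descriptors = rng.normal(size=(5000, 64))

# Codebook: each Gaussian component plays the role of one visual word.
codebook = GaussianMixture(n_components=N_WORDS, covariance_type="diag", random_state=0)
codebook.fit(training_descriptors)

def encode_view(descriptors):
    """Encode an object view as a normalized histogram over the visual words."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=N_WORDS).astype(float)
    return hist / max(hist.sum(), 1.0)

# A new object view is represented by its word histogram and could then be
# compared against stored category representations (e.g., nearest neighbour).
new_view = rng.normal(size=(300, 64))
print(encode_view(new_view))
```

In the open-ended setting of the paper, both the Gaussian components and the category representations would be refined each time a new labeled view arrives, rather than being fit once on a fixed training set as above.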