Rafael E. Rivadeneira, Angel D. Sappa, Boris X. Vintimilla, Chenyang Wang, Junjun Jiang, Xianming Liu, Zhiwei Zhong, Dai Bin, Li Ruodi, & Li Shengye. (2023). Thermal Image Super-Resolution Challenge Results – PBVS 2023. In 19th IEEE Workshop on Perception Beyond the Visible Spectrum at the Conference on Computer Vision & Pattern Recognition (CVPR 2023), June 18-28 (Vol. 2023-June, pp. 470–478).
Rafael E. Rivadeneira, Angel D. Sappa, Boris X. Vintimilla, Lin Guo, Jiankun Hou, Armin Mehri, et al. (2020). Thermal Image Super-Resolution Challenge – PBVS 2020. In 16th IEEE Workshop on Perception Beyond the Visible Spectrum at the Conference on Computer Vision and Pattern Recognition (CVPR 2020) (Vol. 2020-June, pp. 432–439).
Abstract: This paper summarizes the top contributions to the first challenge on thermal image super-resolution (TISR), which was organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2020 workshop. In this challenge, a novel thermal image dataset is considered together with state-of-the-art approaches evaluated under a common framework.
The dataset used in the challenge consists of 1021 thermal images, obtained from three distinct thermal cameras at different resolutions (low-resolution, mid-resolution, and high-resolution), resulting in a total of 3063 thermal images. From each resolution, 951 images are used for training and 50 for testing, while the 20 remaining images are used for two proposed evaluations. The first evaluation consists of downsampling the low-resolution, mid-resolution, and high-resolution thermal images by x2, x3 and x4 respectively, and comparing their super-resolution results with the corresponding ground truth images. The second evaluation consists of obtaining the x2 super-resolution from a given mid-resolution thermal image and comparing it with the corresponding semi-registered high-resolution thermal image. Out of 51 registered participants, 6 teams reached the final validation phase.
Rafael E. Rivadeneira, Angel D. Sappa, Chenyang Wang, Junjun Jiang, Zhiwei Zhong, Peilin Chen, & Shiqi Wang. (2024). Thermal Image Super-Resolution Challenge Results – PBVS 2024. In 20th IEEE Workshop on Perception Beyond the Visible Spectrum at the 2024 Conference on Computer Vision and Pattern Recognition (accepted).
Henry O. Velesaca, P. L. S., Dario Carpio, & Angel D. Sappa. (2021). Synthesized Image Datasets: Towards an Annotation-Free Instance Segmentation Strategy. In 16th International Symposium on Visual Computing, October 4-6, 2021. Lecture Notes in Computer Science (Vol. 13017, pp. 131–143).
Ángel Morera, Ángel Sánchez, A. Belén Moreno, Angel D. Sappa, & José F. Vélez. (2020). SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities. Sensors, Vol. 2020-August(16), pp. 1–23.
Abstract: This work compares Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks for the outdoor advertisement panel detection problem by handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages both in the real world as well as in the virtual one. For example, applications like Google Street View can be used for Internet publicity and, when detecting these ad panels in images, it could be possible to replace the publicity appearing inside the panels by another from a funding company. In our experiments, both SSD and YOLO detectors have produced acceptable results under variable sizes of panels, illumination conditions, viewing perspectives, partial occlusion of panels, complex background, and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was the almost complete elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside the panel is analyzed after detection. On the other hand, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks, using the same evaluation metrics, is also included.
Armin Mehri, P. B., Dario Carpio, & Angel D. Sappa. (2023). SRFormer: Efficient Yet Powerful Transformer Network for Single Image Super Resolution. IEEE Access, Vol. 11, pp. 121457–121469.
Vítor Santos, Angel D. Sappa, Miguel Oliveira, & A. de la Escalera. (2019). Special Issue on Autonomous Driving and Driver Assistance Systems. Robotics and Autonomous Systems, Vol. 121.
Victor Santos, Angel D. Sappa, & Miguel Oliveira. (2017). Special Issue on Autonomous Driving and Driver Assistance Systems. Robotics and Autonomous Systems, Vol. 91, pp. 208–209.
Miguel Oliveira, Vítor Santos, Angel D. Sappa, & Paulo Dias. (2015). Scene representations for autonomous driving: an approach based on polygonal primitives. In Iberian Robotics Conference (ROBOT 2015), Lisbon, Portugal, 2015 (Vol. 417, pp. 503–515). Springer International Publishing, 2016.
Abstract: In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.
Angel Morera, Angel Sánchez, Angel D. Sappa, & José F. Vélez. (2019). Robust Detection of Outdoor Urban Advertising Panels in Static Images. In 17th International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS 2019), Ávila, Spain. Communications in Computer and Information Science (Vol. 1047, pp. 246–256).
Abstract: One interesting publicity application for Smart City environments is recognizing brand information contained in urban advertising panels. For such a purpose, a previous stage is to accurately detect and locate the position of these panels in images. This work presents an effective solution to this problem using a Single Shot Detector (SSD) based on a deep neural network architecture that minimizes the number of false detections under multiple variable conditions regarding the panels and the scene. The experimental results achieved with the Intersection over Union (IoU) accuracy metric make this proposal applicable to real, complex urban images.