Jácome-Galarza, L.-R., Realpe-Robalino, M.-A., Paillacho-Corredores, J., & Benavides-Maldonado, J.-L. (2022). Time series in sensor data using state-of-the-art deep learning approaches: A systematic literature review. In VII International Conference on Science, Technology and Innovation for Society (CITIS 2021), May 26–28. Smart Innovation, Systems and Technologies (Vol. 252, pp. 503–514).
Abstract: IoT (Internet of Things) and AI (Artificial Intelligence) are becoming support tools for many current technological solutions due to significant advances in these areas. The development of IoT in various technological fields has contributed to predicting the behavior of systems such as mechanical, electronic, and control systems using sensor networks. Deep learning architectures, in turn, have achieved excellent results in complex tasks where patterns are extracted from time series. This study reviews the most efficient deep learning architectures for forecasting and obtaining trends over time from data produced by IoT sensors, aiming to contribute to applications in fields where IoT is driving technological advances, such as smart cities, Industry 4.0, sustainable agriculture, and robotics. The architectures studied for processing time-series data include: LSTM (Long Short-Term Memory), for its high prediction accuracy and its ability to automatically process input sequences; CNN (Convolutional Neural Networks), mainly for human activity recognition; hybrid architectures, in which a convolutional layer pre-processes the data and an RNN (Recurrent Neural Network) fuses and classifies data from different sensors; and stacked LSTM autoencoders, which extract variables from time series in an unsupervised way, without the need for manual data pre-processing. Finally, well-known techniques from natural language processing, such as the attention mechanism and embeddings, are also used in time-series prediction, obtaining promising results.
|
Velesaca, H. O., Suárez, P. L., Carpio, D., Rivadeneira, R. E., Sánchez, Á., & Sappa, A. D. (2022). Video Analytics in Urban Environments: Challenges and Approaches. In ICT Applications for Smart Cities, Intelligent Systems Reference Library (Vol. 224, pp. 101–122).
|
Charco, J. L., Sappa, A. D., Vintimilla, B. X., & Velesaca, H. O. (2022). Human Body Pose Estimation in Multi-view Environments. In ICT Applications for Smart Cities, Intelligent Systems Reference Library (Vol. 224, pp. 79–99).
|
Sappa, A. D. (Ed.). (2022). ICT Applications for Smart Cities. Intelligent Systems Reference Library (Vol. 224).
|
Velesaca, H. O., Suárez, P. L., Mira, R., & Sappa, A. D. (2021). Computer Vision based Food Grain Classification: A Comprehensive Survey. Computers and Electronics in Agriculture, Vol. 187, Article 106287.
|
Viñán-Ludeña, M. S., de Campos, L. M., Jacome-Galarza, R., & Sinche-Freire, J. (2020). Social media influence: a comprehensive review in general and in the tourism domain. Smart Innovation, Systems and Technologies, Vol. 171, pp. 25–35.
|
Santos, V., Sappa, A. D., Oliveira, M., & de la Escalera, A. (2021). Editorial: Special Issue on Autonomous Driving and Driver Assistance Systems – Some Main Trends. Robotics and Autonomous Systems, Vol. 144, Article 103832.
|
Soria, X., Sappa, A., Humanante, P., & Akbarinia, A. (2023). Dense extreme inception network for edge detection. Pattern Recognition, Vol. 139.
|
Santos, V., Sappa, A. D., Oliveira, M., & de la Escalera, A. (2019). Special Issue on Autonomous Driving and Driver Assistance Systems. Robotics and Autonomous Systems, Vol. 121.
|
Charco, J. L., Sappa, A. D., Vintimilla, B. X., & Velesaca, H. O. (2021). Camera pose estimation in multi-view environments: from virtual scenarios to the real world. Image and Vision Computing, Vol. 110, Article 104182.
Abstract: This paper presents a domain adaptation strategy to efficiently train network architectures for estimating the relative camera pose in multi-view scenarios. The networks are fed with pairs of simultaneously acquired images; hence, in order to improve the accuracy of the solutions, and due to the lack of large datasets with pairs of overlapping images, a domain adaptation strategy is proposed. It consists of transferring the knowledge learned from synthetic images to real-world scenarios. The networks are first trained on pairs of synthetic images, captured at the same time by a pair of cameras in a virtual environment; the learned weights are then transferred to the real-world case, where the networks are retrained with a few real images. Different virtual 3D scenarios are generated to evaluate the relationship between the accuracy of the results and the similarity between virtual and real scenarios: similarity in both the geometry of the objects in the scene and the relative pose between camera and objects. Experimental results and comparisons show that the accuracy of all the evaluated networks for estimating the camera pose improves when the proposed domain adaptation strategy is used, highlighting the importance of the similarity between virtual and real scenarios.
|