|
Roberto Jacome Galarza, Miguel-Andrés Realpe-Robalino, Luis-Antonio Chamba-Eras, Marlon-Santiago Viñán-Ludeña, & Javier-Francisco Sinche-Freire. (2019). Computer vision for image understanding: A comprehensive review. In International Conference on Advances in Emerging Trends and Technologies (ICAETT 2019); Quito, Ecuador (pp. 248–259).
Abstract: Computer Vision has its own Turing test: can a machine describe the contents of an image or a video the way a human being would? In this paper, the progress of Deep Learning for image recognition is analyzed in order to answer this question. In recent years, Deep Learning has considerably increased the accuracy of many tasks related to computer vision. Many datasets of labeled images are now available online, which leads to pre-trained models for many computer vision applications. In this work, we gather information about the latest techniques for image understanding and description. We conclude that combining Natural Language Processing (using Recurrent Neural Networks and Long Short-Term Memory) with Image Understanding (using Convolutional Neural Networks) could enable new types of powerful and useful applications in which the computer is able to answer questions about the content of images and videos. Building datasets of labeled images requires a great deal of effort, and most of these datasets are built using crowd work. These new applications have the potential to raise human-machine interaction to new levels of usability and user satisfaction.
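As an illustration of the CNN-plus-LSTM combination the abstract points to, the following is a minimal image-captioning-style sketch; PyTorch, the ResNet-18 backbone, and the layer sizes are assumptions for illustration, not details from the paper.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class CaptionModel(nn.Module):
        """Minimal CNN encoder + LSTM decoder in the spirit of image-captioning pipelines."""
        def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
            super().__init__()
            cnn = models.resnet18(weights=None)                        # CNN image encoder (pretrained weights optional)
            self.encoder = nn.Sequential(*list(cnn.children())[:-1])   # drop the classifier head
            self.img_proj = nn.Linear(512, embed_dim)                  # map image features to the embedding size
            self.embed = nn.Embedding(vocab_size, embed_dim)           # word embeddings
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)               # next-word scores at each step

        def forward(self, images, captions):
            feats = self.encoder(images).flatten(1)                    # (B, 512) image features
            seq = torch.cat([self.img_proj(feats).unsqueeze(1),        # image feature as the first "token"
                             self.embed(captions)], dim=1)             # followed by the caption words
            hidden, _ = self.lstm(seq)
            return self.out(hidden)                                    # (B, T+1, vocab_size) word scores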
|
|
|
Juan A. Carvajal, Dennis G. Romero, & Angel D. Sappa. (2017). Fine-tuning deep convolutional networks for lepidopterous genus recognition. Lecture Notes in Computer Science, Vol. 10125 LNCS, pp. 467–475.
|
|
|
Cristhian A. Aguilera, Angel D. Sappa, & Ricardo Toledo. (2017). Cross-Spectral Local Descriptors via Quadruplet Network. In Sensors Journal, Vol. 17, p. 873.
|
|
|
Victor Santos, Angel D. Sappa, & Miguel Oliveira. (2017). Special Issue on Autonomous Driving and Driver Assistance Systems. In Robotics and Autonomous Systems Journal, Vol. 91, pp. 208–209.
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture. In 19th International Conference on Image Analysis and Processing (pp. 287–297).
|
|
|
Byron Lima, Ricardo Cajo, Victor Huilcapi, & Wilton Agila. (2017). Modeling and comparative study of linear and nonlinear controllers for rotary inverted pendulum. In Journal of Physics: Conference Series (Vol. 783).
Abstract: The rotary inverted pendulum (RIP) is a system that is difficult to control, and several studies have applied different control techniques to it. The literature reports that, although the problem is nonlinear, classical PID controllers present appropriate performance when applied to the system. In this paper, a comparative study of the performance of linear and nonlinear PID structures is carried out. The control algorithms are evaluated on the RIP system using indices of performance and power consumption, which allow the categorization of control strategies according to their performance. This article also presents the system model, for which some of the parameters involved in the RIP system were estimated using computer-aided design (CAD) tools and experimental methods proposed by several authors. The results indicate a better performance of the nonlinear controller, with increased robustness and a faster response than the linear controller.
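For reference, a minimal discrete PID loop of the kind compared in the paper is sketched below; the gains, sampling time, and upright setpoint are illustrative assumptions, not values taken from the study.

    class PID:
        """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt                   # accumulate the integral term
            derivative = (error - self.prev_error) / self.dt   # finite-difference derivative
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Illustrative use: drive the pendulum angle toward the upright position (0 rad).
    controller = PID(kp=25.0, ki=1.0, kd=3.0, dt=0.01)         # assumed gains and sampling time
    u = controller.update(setpoint=0.0, measurement=0.05)      # control action for a 0.05 rad error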
|
|
|
Patricia L. Suarez, Angel D. Sappa, & Boris X. Vintimilla. (2017). Learning Image Vegetation Index through a Conditional Generative Adversarial Network. In 2nd IEEE Ecuador Technical Chapters Meeting (ETCM).
|
|
|
Lukas Danev, Marten Hamann, Nicolas Fricke, Tobias Hollarek, & Dennys Paillacho. (2017). Development of animated facial expression to express emotions in a robot: RobotIcon. In IEEE Ecuador Technical Chapters Meeting (ETCM) (Vol. 2017-January, pp. 1–6).
|
|
|
Xavier Soria, Angel D. Sappa, & Arash Akbarinia. (2017). Multispectral Single-Sensor RGB-NIR Imaging: New Challenges and Opportunities. In The 7th International Conference on Image Processing Theory, Tools and Applications (pp. 1–6).
|
|
|
Milton Mendieta, F. Panchana, B. Andrade, B. Bayot, C. Vaca, Boris X. Vintimilla, et al. (2018). Organ identification on shrimp histological images: A comparative study considering CNN and feature engineering. In IEEE Ecuador Technical Chapters Meeting (ETCM 2018); Cuenca, Ecuador (pp. 1–6).
Abstract: The identification of shrimp organs in biology using histological images is a complex task. Shrimp histological images pose a big challenge due to their texture and the similarity among classes. Image classification using feature engineering and convolutional neural networks (CNN) are suitable methods to assist biologists when performing organ detection. This work evaluates the Bag-of-Visual-Words (BOVW) and Pyramid Bag-of-Words (PBOW) models for image classification leveraging big data techniques, and transfer learning for the same classification task by using a pre-trained CNN. A comparative analysis of these two different techniques is performed, highlighting the characteristics of both approaches on the shrimp organ identification problem.
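As a sketch of the transfer-learning route the abstract mentions (a pre-trained CNN reused for the organ classification task), the snippet below freezes an ImageNet-pretrained backbone and trains only a new classifier head; the ResNet-50 choice and the number of organ classes are assumptions, not details from the paper.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    NUM_ORGAN_CLASSES = 6                                    # assumed number of organ classes
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for param in backbone.parameters():
        param.requires_grad = False                          # freeze the pre-trained convolutional features
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_ORGAN_CLASSES)  # new trainable head

    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    # Training would iterate over labeled histological image batches, e.g.:
    #   logits = backbone(images); loss = criterion(logits, labels); loss.backward(); optimizer.step()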
|
|