Credit: Gwangju Institute of Science and Technology
Computer vision has come a long way over the past decade and has found its way into all kinds of relevant applications, both in academia and in our daily lives. However, some tasks in this area are still extremely difficult for computers to perform with acceptable accuracy and speed. One example is object tracking, which involves continuously identifying moving objects in video footage and following them across frames. While computers can simultaneously track more objects than humans, they generally struggle to distinguish the appearance of different objects. This, in turn, can cause the algorithm to mix up the objects in a scene and ultimately produce incorrect tracking results.
At the Gwangju Institute of Science and Technology (GIST) in Korea, a team of researchers led by Prof. Moongu Jeon is addressing these issues by incorporating deep learning techniques into a multi-object tracking framework. In a recent study published in Information Sciences, they present a new tracking model based on a technique they call “deep temporal appearance matching association (Deep-TAMA),” which promises innovative solutions to some of the most prevalent problems in multi-object tracking. The article was made available online in October 2020 and was published in volume 561 of the journal in June 2021.
Conventional tracking approaches determine object trajectories by associating a bounding box with each detected object and establishing geometric constraints. The difficulty inherent in this approach lies in accurately matching previously tracked objects with the objects detected in the current frame. Differentiating detected objects based on handcrafted features such as color usually fails because of changes in lighting conditions and occlusions. The researchers therefore focused on equipping the tracking model with the ability to accurately extract the appearance features of detected objects and compare them not only with those of other objects in the frame, but also with a history of previously recorded appearance features. To this end, they combined joint-inference neural networks (JI-Nets) with long short-term memory networks (LSTMs).
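To make the conventional, geometry-driven association step concrete, here is a minimal sketch of IoU-based matching between existing tracks and new detections using the Hungarian algorithm; the function names and the 0.3 overlap threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of conventional geometric (IoU-based) data association.
# Function names and the 0.3 IoU threshold are illustrative, not from the paper.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(track_boxes, det_boxes, iou_threshold=0.3):
    """Match current detections to existing tracks by maximizing total IoU."""
    cost = np.zeros((len(track_boxes), len(det_boxes)))
    for t, tb in enumerate(track_boxes):
        for d, db in enumerate(det_boxes):
            cost[t, d] = -iou(tb, db)          # Hungarian minimizes cost
    rows, cols = linear_sum_assignment(cost)
    # Keep only pairs whose overlap clears the threshold; unmatched tracks and
    # detections would be handled elsewhere (track termination / creation).
    return [(t, d) for t, d in zip(rows, cols) if -cost[t, d] >= iou_threshold]

# Example: two tracks, two new detections
tracks = [[10, 10, 50, 80], [200, 40, 240, 120]]
dets = [[202, 42, 243, 118], [12, 11, 51, 82]]
print(associate(tracks, dets))                  # -> [(0, 1), (1, 0)]
```

It is precisely this kind of purely geometric matching that breaks down under occlusion and crowding, which is what motivates the appearance-based components described next.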
LSTMs help match stored appearances with those in the current frame, while JI-Nets allow the appearances of two detected objects to be compared jointly from scratch, one of the most distinctive aspects of this new approach. Using historical appearances in this way allowed the algorithm to overcome short-term occlusions of tracked objects. “Compared to conventional methods, which pre-extract the features of each object independently, the proposed joint-inference method exhibited better accuracy in public surveillance tasks, namely pedestrian tracking,” says Dr. Jeon. Moreover, the researchers compensated for a main drawback of deep learning, its low speed, by adopting indexing-based GPU parallelization to reduce computing times. Tests on public surveillance datasets confirmed that the proposed tracking framework offers state-of-the-art accuracy and is therefore ready for deployment.
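As a rough illustration of the idea, the sketch below pairs a small joint-inference-style comparator, which encodes a track crop and a detection crop stacked together so their features are inferred jointly, with an LSTM that aggregates comparisons against a track's stored appearance history into a single match score. The layer sizes, module names, and scoring head are assumptions made for illustration and do not reproduce the architecture published in the paper.

```python
# Schematic sketch (not the published architecture): a JI-Net-style pairwise
# comparator plus an LSTM over a track's appearance history. Layer sizes and
# module names are illustrative assumptions.
import torch
import torch.nn as nn

class PairwiseComparator(nn.Module):
    """Compares a detection crop with a track crop by stacking them channel-wise,
    so the matching features are inferred jointly rather than pre-extracted."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),   # 6 = two stacked RGB crops
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, crop_a, crop_b):
        return self.encoder(torch.cat([crop_a, crop_b], dim=1))    # (B, feat_dim)

class HistoryMatcher(nn.Module):
    """Runs pairwise comparison features over the stored appearance history
    with an LSTM and outputs a single match score."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.compare = PairwiseComparator(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, history_crops, det_crop):
        # history_crops: (B, T, 3, H, W); det_crop: (B, 3, H, W)
        B, T = history_crops.shape[:2]
        det = det_crop.unsqueeze(1).expand(-1, T, -1, -1, -1)
        pair_feats = self.compare(
            history_crops.reshape(B * T, *history_crops.shape[2:]),
            det.reshape(B * T, *det.shape[2:]),
        ).reshape(B, T, -1)
        _, (h_n, _) = self.lstm(pair_feats)
        return torch.sigmoid(self.score(h_n[-1]))                  # (B, 1) match probability

# Example: score one track (history of 5 stored crops) against a new detection
matcher = HistoryMatcher()
history = torch.randn(1, 5, 3, 64, 32)   # five stored 64x32 RGB crops
detection = torch.randn(1, 3, 64, 32)
print(matcher(history, detection).item())
```

In a full tracker, a score of this kind would be combined with geometric cues such as the IoU matching sketched earlier when associating new detections with existing tracks.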
Multi-object tracking opens up a plethora of applications ranging from autonomous driving to public surveillance, which can help fight crime and reduce the frequency of accidents. “We believe that our methods can inspire other researchers to develop new deep-learning-based approaches to ultimately improve public safety,” concludes Dr. Jeon. For the good of all, let’s hope that their vision soon becomes a reality!
###
Reference
Authors: Young-Chul Yoon (1), Du Yong Kim (2), Young-Min Song (4), Kwangjin Yoon (3) and Moongu Jeon (4)
Original article title: Online multiple pedestrian tracking using deep temporal appearance matching association
Journal: Information Sciences
DOI: https: /
Affiliations:
(1) Robotics laboratory, Hyundai Motor Company
(2) School of Engineering, RMIT University
(3) SI Analytics Co., Ltd.
(4) School of Electrical Engineering and Computer Science, GIST
About Gwangju Institute of Science and Technology (GIST)
Gwangju Institute of Science and Technology (GIST) is a research-oriented university located in Gwangju, South Korea. Founded in 1993, it is one of the most prestigious schools in South Korea and aims to create a strong research environment that stimulates scientific and technological advances and promotes collaboration between domestic and international research programs. With the motto “A Proud Creator of Future Science and Technology,” GIST has consistently received one of the highest university rankings in Korea.
Website: http: // www.
About the authors
The first author, Young-Chul Yoon, is a researcher at Hyundai Motor Company’s Robotics Lab. This research was carried out while he was pursuing a master’s degree at the GIST School of Electrical Engineering and Computer Science, working on multi-object tracking under the supervision of Dr. Moongu Jeon. His work won third prize among 36 competitors in the CVPR 2019 multi-object tracking challenge.
The corresponding author, Dr. Moongu Jeon, is a full professor at GIST. His main research interests are artificial intelligence, machine learning, visual surveillance, and autonomous driving. He has published over 200 technical papers in these research areas.
Warning: AAAS and EurekAlert! are not responsible for the accuracy of any press releases posted on EurekAlert! by contributing institutions or for the use of any information via the EurekAlert system.