Qatar University Develops System To Manage Crowd During FIFA World Cup


(MENAFN - The Peninsula)

Doha: In collaboration with the Supreme Committee for Delivery and Legacy (SC), Qatar University (QU) College of Engineering has developed an intelligent crowd management and control system comprising multiple components for crowd counting, face recognition, and abnormal event detection (AED).

The QU research team, led by Prof. Sumaya Al Maadeed as lead principal investigator, includes Dr. Noor Al Maadeed, Associate Dean of Graduate Studies for Academic Affairs and Associate Professor of Computer Engineering; Dr. Khalid Abualsaud, Lecturer of Computer Engineering; Prof. Amr Mohamed, Professor of Computer Engineering; Prof. Tamer Khattab, Professor of Electrical Engineering and Acting Director of the Excellence in Teaching and Learning Center; Dr. Yassine Himeur, Post-doctoral Researcher; Dr. Omar Elharrouss, Post-doctoral Researcher; and Najmath Ottakath, Master's student.

The security and safety of players, spectators, and others associated with the FIFA World Cup Qatar 2022 is a central concern for the organising committee. Security risks typically multiply with the large scale of the event and the significant number of fans expected to attend (more than 1.5 million). Securing the FIFA World Cup Qatar 2022 is therefore challenging, given the growing number of possible threats and the widespread use of technology.

Crowd management at the World Cup stadiums and their perimeters is crucial to the safety and smooth running of World Cup events, particularly given the inherent occlusion and density of crowds inside and outside the stadiums. Qatar 2022 will rely on the deployment of cutting-edge technologies, such as surveillance drones, ICT, and AI, to optimise crowd management.

In this respect, the QU research team first developed a crowd counting system for drone imagery, which uses dilated and scaled neural networks to extract pertinent features and estimate crowd density.
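The two building blocks named above can be illustrated with a minimal sketch: a dilated convolution enlarges a filter's receptive field without adding parameters, and integrating the resulting density map yields the crowd count. This is a generic NumPy illustration of those ideas, not the team's implementation; the function and variable names are assumptions.

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=2):
    """2-D convolution with a dilated kernel (stride 1, zero padding).

    Dilation inserts gaps between kernel taps, enlarging the receptive
    field without extra parameters -- useful for covering large, dense
    crowd scenes.
    """
    kh, kw = kernel.shape
    # Effective kernel extent after dilation.
    eh, ew = dilation * (kh - 1) + 1, dilation * (kw - 1) + 1
    padded = np.pad(image, ((eh // 2, eh // 2), (ew // 2, ew // 2)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

# In density-based counting, the integral of the predicted density map
# is the crowd count estimate.
density_map = np.full((10, 10), 0.5)  # toy map: 0.5 person per cell
estimated_count = density_map.sum()   # -> 50.0
```

In a real counting network, the density map is the output of a stack of such dilated layers trained against ground-truth maps; only the final summation step shown here carries over unchanged.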

Additionally, a new dataset for crowd counting in sports facilities, named the Football Supporters Crowd Dataset (FSC-Set), is introduced. It includes 6,000 manually labelled images representing various types of scenes, containing thousands of people gathered in or around stadiums.

The research team's effort has also focused on developing a face recognition system, which considers faces under pose variations using a multitask convolutional neural network (CNN). Specifically, a cascade structure was employed to combine a pose estimation approach and a face identification module. The CNN-based pose estimation approach has been trained on three categories of face images, including left side, frontal, and right side captures. 

Next, three CNN architectures, namely VGG-16+PReLU left, VGG-16+PReLU front, and VGG-16+PReLU right, have been deployed to identify faces based on the estimated pose. Additionally, a skin-based face segmentation scheme, based on structure-texture decomposition and a colour-invariant description, has been introduced to discard irrelevant face information (e.g., background content). Empirical evaluations have been conducted on four popular face recognition datasets, where the proposed system outperformed related state-of-the-art schemes.
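The cascade described above can be sketched as a routing step: a pose estimator assigns each face to left, frontal, or right, and the matching pose-specific recogniser then identifies it. The stand-in functions below are illustrative assumptions, not the team's code; real pose estimation and identification would each be CNN forward passes.

```python
# Hypothetical sketch of the pose-routed cascade. The toy recognisers
# stand in for the VGG-16+PReLU left/front/right networks.

POSES = ("left", "frontal", "right")

def estimate_pose(pose_scores):
    """Stand-in for the CNN pose estimator: pick the highest-scoring
    of the three trained pose categories (left, frontal, right)."""
    return POSES[max(range(len(POSES)), key=lambda i: pose_scores[i])]

def identify(pose_scores, recognisers):
    """Route the face to the recogniser matching its estimated pose."""
    pose = estimate_pose(pose_scores)
    return pose, recognisers[pose](pose_scores)

# Toy recognisers keyed by pose; each would be a separate trained CNN.
recognisers = {p: (lambda scores, p=p: f"id-from-{p}-model") for p in POSES}
pose, identity = identify((0.1, 0.7, 0.2), recognisers)  # frontal wins
```

The design choice this illustrates: training one specialised recogniser per pose category sidesteps the difficulty of making a single model invariant to large pose changes.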

Abnormal event detection (AED) from drone-based video surveillance has recently been receiving increasing attention due to its reliability and cost-effectiveness. Drones equipped with cameras can spot violent behaviour in crowds during sports events, and can monitor crowds around the perimeter of stadiums and other public venues during the World Cup.

To that end, the research team, led by Prof. Al Maadeed, has developed a novel AED system that learns abnormal actions from both normal and abnormal segments. This avoids annotating individual anomalous events within training video sequences, reducing the computational cost so the system can more easily run on drones. Instead, abnormal events are learned using a deep multiple instance ranking scheme, which leverages weakly annotated training video sequences. Put simply, training annotations are applied to whole videos instead of specific clips.
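The weakly supervised ranking idea can be shown with a minimal sketch. Each video is a "bag" of segment anomaly scores with only a video-level label; a hinge ranking loss pushes the highest-scoring segment of an abnormal video above the highest-scoring segment of a normal one. This follows the standard multiple-instance ranking formulation and is an assumption about the loss form, not the team's exact objective.

```python
def mil_ranking_loss(abnormal_scores, normal_scores, margin=1.0):
    """Hinge ranking loss over two video 'bags' of segment scores.

    With only video-level labels, the most anomalous segment of an
    abnormal video should outscore the most anomalous segment of a
    normal video by at least `margin` -- the core of weakly supervised
    multiple-instance ranking.
    """
    return max(0.0, margin - max(abnormal_scores) + max(normal_scores))

# Toy example: per-segment anomaly scores predicted for two videos.
abnormal = [0.2, 0.9, 0.4]  # video labelled abnormal (somewhere in it)
normal = [0.1, 0.3, 0.2]    # video labelled normal throughout
loss = mil_ranking_loss(abnormal, normal)  # -> 1.0 - 0.9 + 0.3 = 0.4
```

Because only the per-video maxima enter the loss, no one needs to mark which clip contains the anomaly, which is exactly the annotation saving the paragraph above describes.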

 


