Hard Hat Detection: Training YOLO on custom data to ensure safety in construction environments
In 2019, the number of preventable workplace deaths in the United States rose by 2% to 4,572 [Source]. Recent data from the US National Safety Council shows that such injuries are rising fastest in construction, transportation, warehousing, and agriculture, and that in 2019 construction recorded the largest number of preventable fatal injuries of any industry [Source].
Workers not wearing safety gear is therefore a serious concern, and one that exposes companies to lawsuits, insurance payouts, and other substantial costs.
The need for an AI-powered solution
Hard hats save lives, but enforcing their use can be a tedious task. Workers often forget to wear them, so someone has to keep an eye on every worker on a construction site, in a warehouse, or at a transportation hub. Manual monitoring has traditionally filled this role, but it is unreliable: humans are easily distracted, environments are noisy, and people and materials are constantly moving around.
Hard hats are one of the most effective protective tools and are used at all kinds of production sites, transportation sites, mining sites, and warehouses. Since constant human supervision across all these sites is not feasible, a computer vision-based AI approach is used to detect and monitor workers.
An AI-powered solution
Computer vision can be used to detect and identify whether a worker is wearing protective gear such as a safety helmet. In this blog, we will use YOLO (You Only Look Once) v5 to detect hard hats worn by workers at construction sites, warehouses, transportation hubs, and similar workplaces.
The trained model can then be deployed to an IoT device for effective real-time detection of hard hats.
How does YOLO perform object detection?
YOLO is one of the most widely used object detection algorithms. It divides the input image into an MxM grid, and each grid cell is responsible for detecting objects whose centres fall inside it. For every predicted bounding box, the model outputs five main attributes: ‘x’ and ‘y’ for the box centre coordinates, ‘w’ and ‘h’ for its width and height, and a confidence score denoting the probability that the box contains an object.
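The grid-and-attributes scheme above can be sketched in a few lines of plain Python. The grid size, image size, and box numbers below are illustrative choices, not YOLOv5's actual values:

```python
# Minimal sketch of the YOLO-style prediction attributes described above.
# All names and numbers here are illustrative, not taken from YOLOv5 itself.

GRID = 13        # an M x M grid over the image (M = 13 is a common choice)
IMG_SIZE = 416   # square input image, in pixels

def encode_box(cx, cy, w, h, confidence, grid=GRID, img_size=IMG_SIZE):
    """Map a box centred at (cx, cy) in pixels to the grid cell responsible
    for it, plus the five attributes: x, y (offsets within the cell, 0-1),
    w, h (box size relative to the image, 0-1), and a confidence score."""
    cell_size = img_size / grid
    col, row = int(cx // cell_size), int(cy // cell_size)  # responsible cell
    x = (cx % cell_size) / cell_size                       # offset in cell
    y = (cy % cell_size) / cell_size
    return (row, col), (x, y, w / img_size, h / img_size, confidence)

cell, attrs = encode_box(cx=208, cy=104, w=120, h=160, confidence=0.9)
print(cell, attrs)   # → (3, 6) (0.5, 0.25, ..., ..., 0.9)
```

The real network predicts these five numbers per box directly from the image; this sketch only shows how the grid decomposition assigns responsibility for a box to one cell.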
Implementation of the AI-powered solution
Introduction to the dataset:
The Hard Hat dataset is an object detection dataset of workers in workplace settings that require a hard hat. The annotations also include “person” and “head” examples, for cases where an individual is present without a hard hat. The dataset contains 7,041 images.
Here, we provide you with an already annotated dataset, but in real-life scenarios the data must be annotated for the specific task at hand. For this purpose, you can go through the following video, which walks through the task of data labeling. You just need to stay focused on your development and model-building work and let Labellerr take up your data challenge and solve it. Labellerr is an automated, AI-enabled SaaS platform that lets users efficiently annotate data for their deep learning models. Here is a short video elaborating on the same:
Getting started with the implementation
1. Create a new Google Colab notebook, then go to Runtime and change the hardware accelerator to GPU.
2. Clone the YOLOv5 GitHub repository.
3. Install the dependencies required by the YOLOv5 model.
4. Import PyTorch, the IPython utilities, and the GPU information.
Here, we have an Nvidia Tesla K80 GPU with 11,441 MB of memory.
5. Download the dataset from this link and upload it to the /content directory of the Google Colab notebook.
6. Unzip the dataset.
7. The dataset ships with a specific YAML file; load this pre-written YAML file.
We can clearly observe that there are two folders, train and val, which contain the training and validation images respectively. The dataset has 3 classes: ‘head’, ‘helmet’, and ‘person’.
8. Load the YAML file and the model configuration that we will use here.
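Loading the YAML file boils down to a yaml.safe_load call. The contents below are illustrative, mirroring the folder names and three classes described above rather than the exact file shipped with the dataset; PyYAML is assumed to be installed (it is pulled in by YOLOv5's requirements):

```python
import yaml  # PyYAML; already installed by YOLOv5's requirements

# Illustrative contents matching the structure described above; the real
# data.yaml bundled with the dataset may spell its paths differently.
DATA_YAML = """
train: ../train/images
val: ../val/images
nc: 3
names: ['head', 'helmet', 'person']
"""

data = yaml.safe_load(DATA_YAML)
print(data["nc"], data["names"])   # → 3 ['head', 'helmet', 'person']
```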
9. Customize IPython’s %%writefile magic so that we can substitute Python variables into the files we write without any problem.
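The idea behind step 9 is a template-writing helper: like %%writefile, but with Python variables expanded into the cell body first. Outside a notebook, the same effect is a couple of lines of plain Python; the file name and the nc variable below are illustrative:

```python
def writetemplate(path, template, **variables):
    """Expand {name} placeholders with the supplied variables, then write
    the result to a file -- a plain-Python stand-in for the customized
    %%writefile cell magic used in the notebook."""
    with open(path, "w") as f:
        f.write(template.format(**variables))

num_classes = 3  # from the dataset's data.yaml
writetemplate("custom_yolov5s.yaml", "nc: {nc}\n", nc=num_classes)
print(open("custom_yolov5s.yaml").read())   # → nc: 3
```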
10. Start the model training. This is a time-consuming step and takes about 40 minutes to execute. Since we had time constraints, we trained for only 16 epochs. To improve the model’s accuracy, you can increase the number of epochs (don’t increase the number of epochs beyond 50).
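For reference, the training cell boils down to a single train.py invocation. The flags below follow YOLOv5's train.py interface; the image size and the from-scratch weights choice are assumptions you should adapt to your run:

```python
# Assemble the YOLOv5 training command (step 10); in Colab you would run it
# as a "!"-prefixed shell cell from inside the cloned yolov5/ directory.
train_cmd = [
    "python", "train.py",
    "--img", "416",                  # input image size (assumption)
    "--batch", "16",
    "--epochs", "16",                # matches the 16-epoch run described above
    "--data", "data.yaml",           # the dataset's YAML file
    "--cfg", "models/yolov5s.yaml",  # small model; larger variants train slower
    "--weights", "''",               # train from scratch; use yolov5s.pt to fine-tune
]
print(" ".join(train_cmd))
```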
11. Display the ground truth data.
12. Print an augmented training example.
Here is the link to the Google Colab notebook where we have trained the YOLOv5 model for object detection.
How is the solution being used in the industry?
Many organizations are implementing AI-powered solutions to monitor workplaces and help enforce safety rules. Artificial intelligence is being used in day-to-day operations to make workplaces safer, which raises safety standards, helps workplaces operate more efficiently, and ultimately leads to higher productivity.
The YOLO model is being deployed on drones and similar devices to keep track of workers, constantly detecting whether they are wearing safety gear such as hard hats.
About the writer
Hey guys, congratulations on making it to the end! I am Akshat Dubey and I work as a Data Science Intern at Labellerr. I am a fourth-year student pursuing an Integrated Master of Science in Mathematics and Computing at Birla Institute of Technology – Mesra, Ranchi. My course focuses on the application of mathematics in the field of artificial intelligence. I am a Kaggle Master well versed in Machine Learning, Deep Learning, Computer Vision, and Natural Language Processing. My primary interests involve the application of artificial intelligence in healthcare and retail. To connect with me, you can click on the following links:
Connect with the Labellerr Team: