This article shows how to use Python face recognition to detect and recognize a person in real-time video.
In this deep learning project, we will learn how to apply Python face recognition to real-time video. We will build the facial recognition network for this project with the Python library dlib. Dlib is a general-purpose software library, and with the dlib toolkit we can build real-world machine learning applications.
In this project, we will first look at how a face recognizer works, and then build face recognition with Python.
About dlib face recognition:
Python provides the face_recognition API, which is built on dlib's face recognition algorithms. The face_recognition API lets us implement face detection, real-time face tracking, and face recognition applications.
Project preparation:
First, install the dlib library and the face_recognition API from PyPI:
pip3 install dlib
pip3 install face_recognition
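As a quick check that the installation works, we can run a minimal sketch like the one below. It is not part of the project itself, and sample.jpg is a hypothetical test photo containing at least one face:

import face_recognition

# sample.jpg is a hypothetical test photo containing at least one face.
image = face_recognition.load_image_file("sample.jpg")

# Detect face bounding boxes as (top, right, bottom, left) tuples.
face_locations = face_recognition.face_locations(image)
print("Found", len(face_locations), "face(s):", face_locations)

# Compute a 128-dimensional embedding for each detected face.
face_encodings = face_recognition.face_encodings(image, face_locations)
if face_encodings:
    print("First embedding has", len(face_encodings[0]), "dimensions")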
Download the source code:
Face recognition source code
Steps to implement face recognition with Python:
We will build this Python project in two parts, creating a separate Python file for each part:
First, create a file named embedding.py in your working directory. In this file, we will create face embeddings for specific faces using the face_recognition.face_encodings method. Each face embedding is a 128-dimensional vector, and in this vector space different embeddings of the same person's images lie close to each other. After computing the face embeddings, we store them in a pickle file.
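Before building embedding.py, a small standalone sketch can illustrate this idea. It is not part of the project; alice.jpg is a hypothetical enrolment photo and demo_embed.pkl is a throwaway file name:

import pickle

import face_recognition

# Compute one 128-dimensional embedding from a single photo.
image = face_recognition.load_image_file("alice.jpg")   # hypothetical enrolment photo
encodings = face_recognition.face_encodings(image)      # one vector per detected face

if encodings:
    embedding = encodings[0]
    print("Embedding length:", len(embedding))          # 128

    # Store it in a dictionary keyed by a ref_id, mirroring what embedding.py builds.
    with open("demo_embed.pkl", "wb") as f:
        pickle.dump({"1": [embedding]}, f)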
Paste the following code into the embedding.py file.
import sys
import cv2
import face_recognition
import pickle

name = input("enter name")
ref_id = input("enter id")

# Load the existing ref_id -> name mapping, or start a new one.
try:
    f = open("ref_name.pkl", "rb")
    ref_dictt = pickle.load(f)
    f.close()
except:
    ref_dictt = {}

ref_dictt[ref_id] = name
f = open("ref_name.pkl", "wb")
pickle.dump(ref_dictt, f)
f.close()

# Load the existing embeddings dictionary, or start a new one.
try:
    f = open("ref_embed.pkl", "rb")
    embed_dictt = pickle.load(f)
    f.close()
except:
    embed_dictt = {}
Here, we store a specific person's embeddings in the embed_dictt dictionary, which we created above. In this dictionary, that person's ref_id is used as the key.
To capture an image, press "s" five times. To stop the camera, press "q":
for i in range(5):
    key = cv2.waitKey(1)
    webcam = cv2.VideoCapture(0)
    while True:
        check, frame = webcam.read()
        cv2.imshow("Capturing", frame)
        small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
        rgb_small_frame = small_frame[:, :, ::-1]
        key = cv2.waitKey(1)

        # Press 's' to save an embedding of the current frame.
        if key == ord('s'):
            face_locations = face_recognition.face_locations(rgb_small_frame)
            if face_locations != []:
                face_encoding = face_recognition.face_encodings(frame)[0]
                if ref_id in embed_dictt:
                    embed_dictt[ref_id] += [face_encoding]
                else:
                    embed_dictt[ref_id] = [face_encoding]
                webcam.release()
                cv2.waitKey(1)
                cv2.destroyAllWindows()
                break

        # Press 'q' to stop the camera.
        elif key == ord('q'):
            print("Turning off camera.")
            webcam.release()
            print("Camera off.")
            print("Program ended.")
            cv2.destroyAllWindows()
            break
Here, we store embed_dictt in a pickle file, so that when we want to recognize the person later we can load the stored embeddings directly from this file:
f=open("ref_embed.pkl","wb")
pickle.dump(embed_dictt,f)
f.close()function python File and use person names and their ref_id Get five image inputs :
python3 embedding.py
Here, we create face embeddings from the camera frames again. Then we match each new embedding against the embeddings stored in the pickle file. A new embedding of the same person will be close to their stored embeddings in vector space, so we will be able to recognize that person.
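To make this matching step concrete before looking at the full script, here is a small standalone sketch (not part of the project) that loads the embeddings saved by embedding.py and compares them against a hypothetical new photo new_photo.jpg; the 0.6 threshold matches the default tolerance of face_recognition.compare_faces:

import pickle

import face_recognition
import numpy as np

# Load the embeddings that embedding.py saved earlier.
with open("ref_embed.pkl", "rb") as f:
    embed_dictt = pickle.load(f)

# Flatten the stored embeddings into parallel lists of encodings and ref_ids.
known_encodings, known_ids = [], []
for ref_id, embed_list in embed_dictt.items():
    known_encodings.extend(embed_list)
    known_ids.extend([ref_id] * len(embed_list))

# new_photo.jpg is a hypothetical image of the person to identify.
new_image = face_recognition.load_image_file("new_photo.jpg")
new_encoding = face_recognition.face_encodings(new_image)[0]

# Euclidean distance to every stored embedding; smaller means more similar.
distances = face_recognition.face_distance(known_encodings, new_encoding)
best = np.argmin(distances)

# 0.6 is the default tolerance used by face_recognition.compare_faces.
if distances[best] < 0.6:
    print("Matched ref_id:", known_ids[best])
else:
    print("Unknown person")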
Now, create a new Python file recognise.py and paste the following code:
import face_recognition
import cv2
import numpy as np
import glob
import pickle
f=open("ref_name.pkl","rb")
ref_dictt=pickle.load(f)
f.close()
f=open("ref_embed.pkl","rb")
embed_dictt=pickle.load(f)
f.close()

# Flatten the stored embeddings into parallel lists of known encodings and ref_ids.
known_face_encodings = []
known_face_names = []

for ref_id, embed_list in embed_dictt.items():
    for my_embed in embed_list:
        known_face_encodings += [my_embed]
        known_face_names += [ref_id]
video_capture = cv2.VideoCapture(0)

face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

while True:
    ret, frame = video_capture.read()
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    rgb_small_frame = small_frame[:, :, ::-1]

    # Only process every other frame to save time.
    if process_this_frame:
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]
            face_names.append(name)

    process_this_frame = not process_this_frame

    for (top_s, right, bottom, left), name in zip(face_locations, face_names):
        # Scale the face locations back up, since the frame was resized to 1/4 size.
        top_s *= 4
        right *= 4
        bottom *= 4
        left *= 4

        cv2.rectangle(frame, (left, top_s), (right, bottom), (0, 0, 255), 2)
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        # Look up the person's name from their ref_id; fall back to the raw label for unknown faces.
        cv2.putText(frame, ref_dictt.get(name, name), (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
Now run the second part of the project to recognize the person:
python3 recognise.py
This deep learning project showed how to develop a face recognition project with the Python library dlib, the face_recognition API, and OpenCV. We implemented the project in two parts: creating face embeddings with embedding.py and recognizing people in real-time video with recognise.py.
At present, the application of face recognition technology in China is mainly concentrated in three areas: access control, security, and finance. Typical applications include security monitoring, face detection in video, face recognition, and traffic statistics, which are widely used for intelligent access control in residential areas and buildings, detecting suspicious persons around a perimeter, and counting visitor traffic in scenic spots.
Drawing on years of technical experience in the video field, TSINGSEE Qingxi Video integrates AI detection and intelligent recognition technology into various application scenarios. A typical example is the EasyCVR video fusion cloud service, which provides AI face recognition, license plate recognition, voice intercom, PTZ control, audible and visual alarms, and capabilities for monitoring video analysis and data collection.