Project At A Glance

Objective:

  • Successfully track people in motion by rendering keypoints and connecting edges on the active video capture source, i.e. support both the webcam and external video files.
  • Extend this setup to multi-pose detection by looping through multiple people in the same frame.

Setup: OpenCV, TensorFlow, MoveNet MultiPose Lightning Pre-Trained Model [Download]

Implementation:

  • Captures video input and resizes it (with padding) to dimensions 32m x 32n, where m and n are chosen to closely match the original resolution.
  • Renders 17 keypoints, interconnected with edges, to draw a skeletal blueprint on every person in the frame.
  • Assigns a confidence score to each keypoint; points and edges are rendered only when the confidence exceeds 0.3.

Results:

  • The setup successfully detects multiple people per frame and performs well in testing at resolutions up to 4K.

Deployment: View this project on GitHub.

Dependencies

!pip install tensorflow==2.8.0 tensorflow-gpu==2.8.0 tensorflow-hub opencv-python matplotlib

Successfully installed absl-py-1.0.0 cachetools-5.0.0 certifi-2021.10.8 charset-normalizer-2.0.12 cycler-0.11.0 fonttools-4.33.3 google-auth-2.6.6 google-auth-oauthlib-0.4.6 grpcio-1.46.1 idna-3.3 keras-preprocessing-1.1.2 kiwisolver-1.4.2 libclang-14.0.1 markdown-3.3.7 matplotlib-3.5.2 numpy-1.22.3 oauthlib-3.2.0 opencv-python-4.5.5.64 opt-einsum-3.3.0 pillow-9.1.0 protobuf-3.20.1 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-2.27.1 requests-oauthlib-1.3.1 rsa-4.8 tensorboard-2.8.0 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorflow-gpu-2.8.0 tensorflow-hub-0.12.0 tensorflow-io-gcs-filesystem-0.25.0 termcolor-1.1.0 tf-estimator-nightly-2.8.0.dev2021122109 typing-extensions-4.2.0 urllib3-1.26.9 werkzeug-2.1.2
import tensorflow as tf
import tensorflow_hub as hub
import cv2
from matplotlib import pyplot as plt
import numpy as np

Load the MoveNet MultiPose Lightning Model

model = hub.load('https://tfhub.dev/google/movenet/multipose/lightning/1')
movenet = model.signatures['serving_default']
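The multipose signature expects a batched int32 image whose height and width are multiples of 32 and returns a [1, 6, 56] tensor: up to 6 people, each described by 17 keypoints as (y, x, score) triplets followed by 5 bounding-box values. A quick sanity check on a placeholder input (the dummy tensor below is not part of the original notebook):

dummy = tf.zeros((1, 256, 256, 3), dtype=tf.int32)   # placeholder input; any size with both sides divisible by 32
movenet(dummy)['output_0'].shape
TensorShape([1, 6, 56])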

VideoCapture

Master Function

cap = cv2.VideoCapture('football.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:   # stop cleanly when the video ends or a frame cannot be read
        break

    # Frames should be resized (with padding) to 32m x 32n, as close as possible to the source resolution.
    img = frame.copy()
    img = tf.image.resize_with_pad(tf.expand_dims(img, axis=0), 544, 1024)
    input_img = tf.cast(img, dtype=tf.int32)

    # output_0 has shape (1, 6, 56): for each of up to 6 people, 17 keypoints as
    # (y, x, score) triplets, followed by 5 bounding-box values that are dropped here.
    results = movenet(input_img)
    keypoints_and_scores = results['output_0'].numpy()[:, :, :51].reshape((6, 17, 3))

    # Rendering helpers (EDGES, loop_through_people) are defined further below.
    loop_through_people(frame, keypoints_and_scores, EDGES, 0.3)

    cv2.imshow('Movenet Multipose Window', frame)

    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
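
Per the first objective, the same loop also works on a live webcam feed; a minimal variation (assuming the default camera at index 0):

cap = cv2.VideoCapture(0)   # 0 = default webcam, in place of 'football.mp4'
# The rest of the loop is unchanged; the 544 x 1024 padded size can be swapped for one
# that matches the webcam's aspect ratio (e.g. 352 x 640 for a 16:9 feed).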

Scaling (Height/Width)

1080/2048
0.52734375
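
This ratio (height/width of the presumably 2048 x 1080 source) motivates the 544 x 1024 padded size used in the capture loop above: with the width fixed at 1024, the matching height rounded to the nearest multiple of 32 is 544. A quick check of that arithmetic (not part of the original notebook):

1024 * (1080 / 2048)   # ideal height for a 1024-pixel-wide padded frame
540.0
round(540 / 32) * 32   # nearest multiple of 32
544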

Vectorized Frame

frame
array([[[217, 211, 219],
        [217, 211, 219],
        [217, 210, 221],
        ...,
        [199, 190, 194],
        [199, 190, 194],
        [199, 190, 194]],

       [[210, 204, 212],
        [210, 204, 212],
        [210, 203, 214],
        ...,
        [199, 190, 194],
        [199, 190, 194],
        [199, 190, 194]],

       [[209, 203, 211],
        [209, 203, 211],
        [209, 202, 213],
        ...,
        [196, 187, 191],
        [196, 187, 191],
        [196, 187, 191]],

       ...,

       [[ 47,  70,  69],
        [ 47,  70,  69],
        [ 47,  70,  69],
        ...,
        [ 41,  67,  70],
        [ 41,  67,  70],
        [ 41,  67,  70]],

       [[ 50,  73,  72],
        [ 50,  73,  72],
        [ 48,  71,  70],
        ...,
        [ 41,  67,  70],
        [ 41,  67,  70],
        [ 41,  67,  70]],

       [[ 51,  74,  73],
        [ 51,  74,  73],
        [ 50,  73,  72],
        ...,
        [ 40,  66,  69],
        [ 40,  66,  69],
        [ 40,  66,  69]]], dtype=uint8)
plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))   # OpenCV frames are BGR; convert to RGB for matplotlib
<matplotlib.image.AxesImage at 0x19252116880>

Keypoint Matrix and Confidence Scores

keypoints_and_scores[0]
array([[0.48517755, 0.10898407, 0.35875612],
       [0.47976708, 0.11077929, 0.46474373],
       [0.47994435, 0.1071113 , 0.4717213 ],
       [0.4723238 , 0.11269185, 0.4148368 ],
       [0.47123876, 0.10366924, 0.4909297 ],
       [0.4823754 , 0.11886524, 0.79927677],
       [0.48038223, 0.0954906 , 0.78360116],
       [0.51465076, 0.12093391, 0.5353545 ],
       [0.51162773, 0.09023584, 0.739223  ],
       [0.5279961 , 0.11441359, 0.5836447 ],
       [0.5291515 , 0.10141041, 0.48077533],
       [0.52872527, 0.11163795, 0.7669046 ],
       [0.5289806 , 0.09788188, 0.689511  ],
       [0.5415193 , 0.12559794, 0.8202237 ],
       [0.53999746, 0.08833516, 0.6755993 ],
       [0.5804207 , 0.11740522, 0.6815445 ],
       [0.58320117, 0.09349357, 0.8147298 ]], dtype=float32)
scores = keypoints_and_scores[0][:, 2]
scores
array([0.35875612, 0.46474373, 0.4717213 , 0.4148368 , 0.4909297 ,
       0.79927677, 0.78360116, 0.5353545 , 0.739223  , 0.5836447 ,
       0.48077533, 0.7669046 , 0.689511  , 0.8202237 , 0.6755993 ,
       0.6815445 , 0.8147298 ], dtype=float32)
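
A small illustration of the 0.3 threshold on these scores (not part of the original notebook): every keypoint of this person clears the cutoff, so all 17 would be rendered.

(scores > 0.3).sum()   # number of keypoints that pass the rendering threshold
17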

Logistics for Multi-Body Detection

Draw Keypoints

def draw_keypoints(frame, keypoints, confidence_threshold):
    y, x, c = frame.shape
    # Keypoints arrive as normalized (y, x, score); scale y and x back to pixel coordinates.
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))

    for ky, kx, kp_conf in shaped:
        if kp_conf > confidence_threshold:
            cv2.circle(frame, (int(kx), int(ky)), 6, (0, 255, 0), -1)
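
A quick usage sketch for a single detection (drawing on a copy of the frame; not part of the original notebook):

demo = frame.copy()
draw_keypoints(demo, keypoints_and_scores[0], 0.3)   # green dots for person 0's confident keypoints
plt.imshow(cv2.cvtColor(demo, cv2.COLOR_BGR2RGB))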

Define Edges

# Each key is a pair of keypoint indices in MoveNet's COCO-style ordering:
# 0 nose, 1-2 eyes, 3-4 ears, 5-6 shoulders, 7-8 elbows, 9-10 wrists,
# 11-12 hips, 13-14 knees, 15-16 ankles.
EDGES = {
    (0, 1): 'm',
    (0, 2): 'c',
    (1, 3): 'm',
    (2, 4): 'c',
    (0, 5): 'm',
    (0, 6): 'c',
    (5, 7): 'm',
    (7, 9): 'm',
    (6, 8): 'c',
    (8, 10): 'c',
    (5, 6): 'y',
    (5, 11): 'm',
    (6, 12): 'c',
    (11, 12): 'y',
    (11, 13): 'm',
    (13, 15): 'm',
    (12, 14): 'c',
    (14, 16): 'c'
}
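
The colour letters are matplotlib-style codes ('m' magenta, 'c' cyan, 'y' yellow). They are not used by draw_connections below, which draws every edge in a single colour; if per-edge colours are wanted, a small lookup (a hypothetical addition, not in the original code) could translate them into BGR tuples for OpenCV:

COLOR_MAP = {'m': (255, 0, 255), 'c': (255, 255, 0), 'y': (0, 255, 255)}   # BGR magenta, cyan, yellow
# Inside draw_connections, cv2.line(frame, ..., COLOR_MAP[color], 4) would then colour each limb by its code.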

Draw Connections

def draw_connections(frame, keypoints, edges, confidence_threshold):
    y, x, c = frame.shape
    # Scale the normalized keypoints back to pixel coordinates.
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))

    for edge, color in edges.items():
        p1, p2 = edge
        y1, x1, c1 = shaped[p1]
        y2, x2, c2 = shaped[p2]

        # Draw an edge only when both endpoints are confidently detected.
        if c1 > confidence_threshold and c2 > confidence_threshold:
            cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 4)

Looping Through Multiple People

def loop_through_people(frame, keypoints_and_scores, edges, confidence_threshold):
    # The multipose model always returns 6 candidate detections per frame;
    # candidates with no real person behind them are filtered out by the keypoint threshold.
    for person in keypoints_and_scores:
        draw_connections(frame, person, edges, confidence_threshold)
        draw_keypoints(frame, person, confidence_threshold)