while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame_mask = subtractor.apply(frame)
execution_time = time.time() - start_time
print("Execution time: " + str(execution_time))
cap.release()
You can find the benchmark data for the background subtractor below:
https://docs.nvidia.com/vpi/algo_background_subtractor.html
For TX2 with a 1920x1080 RGB8 input, the expected time is 35.0±0.7 ms on the CUDA backend.
Could you check the sample in the documentation to see whether anything differs from your implementation?
Also, please remember to maximize performance with the VPI clock script first:
https://docs.nvidia.com/vpi/algo_performance.html#maxout_clocks
Thanks.
Is there any way to obtain the benchmark source code, to see whether there were any optimizations?
Also, regarding the VPI clock script, is there any reason why I can't just leave the TX2 in the maxed-out state all the time, if power consumption is not an issue?
Sorry, I just realized the timing you mentioned is for the whole video.
Based on that, it works out to 5.7 s / 170 frames ≈ 33 ms per frame, which is close to the benchmark score.
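For reference, the per-frame figure above just divides the measured wall-clock time by the frame count; both numbers (5.7 s total, 170 frames) come from this thread:

```python
# Per-frame latency from total wall-clock time (values taken from this thread).
total_time_s = 5.7   # measured execution time for the whole video
frame_count = 170    # estimated number of frames in the video

per_frame_ms = total_time_s / frame_count * 1000
print(round(per_frame_ms, 1))  # ≈ 33.5 ms per frame
```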
It seems that we don’t have a comparison between OpenCV and VPI for the background subtraction algorithm.
We are going to reproduce this internally to see the behavior in our environment.
Will share more information with you later.
Thanks.
Confirmed that we can reproduce the performance difference.
We are checking this with our internal team.
Will share more information later.
Thanks.
I made a mistake: the 170-frame count was an estimate. The actual video is 331 frames at 800x600 pixels. The benchmark, however, was measured on 1920x1080 frames. Since mine are much smaller (480K pixels vs. over 2,073K pixels for the benchmark), should there be a corresponding increase in performance, from 35 ms per frame to much less?
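As a rough sanity check, here is what the benchmark number would become if per-frame time scaled linearly with pixel count. This is a naive assumption; fixed per-frame overheads (launch latency, memory transfers) mean the real number is likely to be higher than this estimate:

```python
# Naive linear-in-pixels scaling of the published benchmark time.
bench_ms = 35.0                 # TX2 CUDA backend, 1920x1080 RGB8 (from the VPI docs)
bench_pixels = 1920 * 1080      # 2,073,600 pixels
my_pixels = 800 * 600           # 480,000 pixels

scaled_ms = bench_ms * my_pixels / bench_pixels
print(round(scaled_ms, 1))  # ≈ 8.1 ms per frame, if scaling were perfectly linear
```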
Thanks for the update.
We can also reproduce the performance issue in our environment.
To give more suggestions, we need to check more details with our internal team.
Will share more information with you once we get feedback.
Thanks.