
I am building a home surveillance system using a Raspberry Pi and OpenCV. My setup will consist of two devices: the first will be the security camera, a Raspberry Pi Zero with a Pi Camera. The other device will be a main hub (a Raspberry Pi 3) which will do all of the heavy lifting such as facial recognition, speech recognition and other operations.

What I want to do is stream the footage from the security camera to the main hub so that it can process the images. So essentially I want to capture the frame from the Pi Camera, convert it to a numpy array (if that isn't done by default) and send that data to the main hub, where it will be converted back to an image frame to be analysed by OpenCV.

I am separating the operations like this because my security camera runs on a Raspberry Pi Zero, which is not very fast and can't handle heavy lifting. It is also because my security camera is hooked up to a battery and I am trying to lower the Pi's power usage, hence why I am dedicating a main hub to the heavy operations.

I am using a Python 3 environment on both devices. I am well aware of IoT communication technologies such as MQTT, TCP and so on, but I would like help with actually implementing such technologies in a Python script in order to accomplish my needs.
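To make the goal concrete, here is roughly what I picture the hub side looking like (just a sketch of the idea, assuming paho-mqtt; the broker address and topic name below are placeholders I made up):

import cv2
import numpy as np
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"   # placeholder: wherever the MQTT broker runs
TOPIC = "camera/frames"   # placeholder topic name

def on_message(client, userdata, msg):
    # msg.payload holds the raw JPEG bytes published by the camera
    data = np.frombuffer(msg.payload, dtype=np.uint8)
    frame = cv2.imdecode(data, cv2.IMREAD_COLOR)
    if frame is not None:
        # frame is now an ordinary numpy array that OpenCV can analyse
        print("Received frame:", frame.shape)

client = mqtt.Client()          # paho-mqtt 1.x style client
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()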

  • Well, you need to think about the dimensions of the images (height and width in pixels), whether colour or greyscale, and how often you need to send them. Then try and convert that to a data rate in bytes/s and work out what bandwidth you can achieve across your wired/wifi network. Then think about whether you need to compress them first, or work in YUV or MJPEG. Then think about packet loss/restart mechanisms and buffering. – Mark Setchell Nov 17 '18 at 14:36
  • Well, for now these things are not so important since they are easily configured; I am just after the technique that will allow me to send the captured image numpy array data to the main Pi. But to answer your points: the dimensions are 1080x1920, colour, and a frame will be sent every time motion is detected. I also already tried byte streaming over MQTT but my code didn't end up working. – Noor Sabbagh Nov 17 '18 at 23:31

1 Answer


I think it will be better to break down your task: 1. Capture the image stream on the Pi Zero and stream it. 2. Read that stream on the Pi 3 and process it there.

Sample code to get you started with image capture, which you can find here:

import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

You need to work this part out yourself: stream the video to a URL such as IP.Add.ress.OF_pi0/cam_read

Live Video Streaming Python Flask
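A minimal sketch of what that Flask endpoint on the Pi Zero could look like (this assumes Flask and the picamera module; the /cam_read route and Flask's default port 5000 are assumptions chosen to match the URL above):

import io
import time
from flask import Flask, Response
import picamera

app = Flask(__name__)

def generate_frames():
    # Single-client sketch: the camera is opened for the duration of the request
    with picamera.PiCamera(resolution=(1920, 1080), framerate=10) as camera:
        time.sleep(2)  # camera warm-up
        stream = io.BytesIO()
        # capture_continuous yields one JPEG per iteration without re-initialising the camera
        for _ in camera.capture_continuous(stream, format='jpeg', use_video_port=True):
            stream.seek(0)
            jpeg = stream.read()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + jpeg + b'\r\n')
            stream.seek(0)
            stream.truncate()

@app.route('/cam_read')
def cam_read():
    # multipart/x-mixed-replace makes the response behave like a continuous MJPEG stream
    return Response(generate_frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

With something like that running, the Pi 3 should be able to open http://IP.Add.ress.OF_pi0:5000/cam_read with cv2.VideoCapture, as in the code below.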

Then use this URL to process the video on the Pi 3. Sample code from here:

import numpy as np
import cv2

# Open the video stream coming from the Pi Zero
vcap = cv2.VideoCapture('IP.Add.ress.OF_pi0/cam_read')
#if not vcap.isOpened():
#    print("File Cannot be Opened")

while(True):
    # Capture frame-by-frame
    ret, frame = vcap.read()
    #print(vcap.isOpened(), ret)
    if frame is not None:
        # Display the resulting frame
        cv2.imshow('frame',frame)
        # use other methods for object face or motion detection 
        # OpenCV Haarcascade face detection 
        # Press q to close the video windows before it ends if you want
        if cv2.waitKey(22) & 0xFF == ord('q'):
            break
    else:
        print "Frame is None"
        break

# When everything done, release the capture
vcap.release()
cv2.destroyAllWindows()
print "Video stop"

This answer isn't a direct solution to your question; instead, it's a skeleton to get you started. Face detection can be found here
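For the detection step, a minimal Haar cascade sketch (this assumes the cascade files bundled with opencv-python, exposed via cv2.data.haarcascades):

import cv2

# Load the frontal face cascade that ships with opencv-python
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def detect_faces(frame):
    # Returns the (x, y, w, h) rectangles of faces found in a BGR frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw the detections for display/debugging
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return faces

You could call detect_faces(frame) inside the loop above, just before the imshow call.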

amran hossen
  • Thank you for your reply, although it still doesn't really help because I am pushing for the most efficient and lightweight solution. If I use OpenCV on the Pi Zero then I will not be able to lower its battery consumption and it will slow it down. Instead I was thinking of just using the picamera module: capture the frame in HD and colour, convert the image numpy array to bytes and stream that to the Pi 3. But my problem is the streaming part, not how to do OpenCV or facial recognition. I have tried MQTT but its publish method only streams bytearrays. – Noor Sabbagh Nov 17 '18 at 23:40
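A rough sketch of the approach described in that last comment (a picamera capture published over MQTT as a bytearray; paho-mqtt is assumed, and the broker address and topic name are placeholders):

import io
import time
import picamera
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"   # placeholder: the main hub / broker IP
TOPIC = "camera/frames"   # placeholder: must match whatever the hub subscribes to

client = mqtt.Client()    # paho-mqtt 1.x style client
client.connect(BROKER, 1883)
client.loop_start()

with picamera.PiCamera(resolution=(1920, 1080)) as camera:
    time.sleep(2)  # camera warm-up
    stream = io.BytesIO()
    # Capturing straight to JPEG keeps OpenCV off the Pi Zero entirely
    camera.capture(stream, format='jpeg', use_video_port=True)
    # publish() accepts a bytes/bytearray payload
    client.publish(TOPIC, bytearray(stream.getvalue()))

client.loop_stop()
client.disconnect()

On the hub, the payload can be turned back into a frame with numpy.frombuffer and cv2.imdecode, as in the receiver sketch near the top of the question.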