I am currently trying to use Nvidia DIGITS to train a CNN for object detection on a custom dataset, with the goal of eventually running that network on an Nvidia Jetson TX2. I followed the recommended instructions to pull the DIGITS image from Docker, and I am able to successfully train a network with reasonable accuracy. But when I try to run my network in Python using OpenCV, I get this error:
"error: (-215) pbBlob.raw_data_type() == caffe::FLOAT16 in function blobFromProto"
I have read in a few other threads that this happens because DIGITS stores its networks in a form that is incompatible with OpenCV's DNN module.
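From what I understand, this can be confirmed by parsing the .caffemodel directly with the Caffe protobuf bindings and inspecting how each blob is stored. Below is a small inspection sketch of that idea; it assumes Nvidia Caffe's caffe_pb2 is importable (the stock BVLC proto does not define raw_data_type), and the snapshot filename is a placeholder:

from caffe.proto import caffe_pb2

# load the serialized network weights into a NetParameter message
net_param = caffe_pb2.NetParameter()
with open("snapshot_iter_1000.caffemodel", "rb") as f:  # placeholder path
    net_param.ParseFromString(f.read())

# print how each layer's weights are stored; raw_data_type is an
# Nvidia Caffe extension, and it is the field OpenCV's importer checks
for layer in net_param.layer:
    for blob in layer.blobs:
        print(layer.name, blob.raw_data_type, len(blob.raw_data))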
Before training my network, I tried selecting the option in DIGITS that is supposed to make the network compatible with other software, but that doesn't seem to change the network at all, and I get the same error when running my Python script. This is the script that produces the error (it comes from this tutorial: https://www.pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/):
# import the necessary packages
import numpy as np
import argparse
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())
# initialize the list of class labels the network was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["dontcare", "HatchPanel"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))
# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
# load the input image and construct an input blob for the image
# by resizing to a fixed 300x300 pixels and then normalizing it
# (note: the normalization constants come from the authors of the
# MobileNet SSD implementation)
image = cv2.imread(args["image"])
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843,
    (300, 300), 127.5)
# pass the blob through the network and obtain the detections and
# predictions
print("[INFO] computing object detections...")
net.setInput(blob)
detections = net.forward()
# loop over the detections
for i in np.arange(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the
    # prediction
    confidence = detections[0, 0, i, 2]
    # filter out weak detections by ensuring the `confidence` is
    # greater than the minimum confidence
    if confidence > args["confidence"]:
        # extract the index of the class label from the `detections`,
        # then compute the (x, y)-coordinates of the bounding box for
        # the object
        idx = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")
        # display the prediction
        label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
        print("[INFO] {}".format(label))
        cv2.rectangle(image, (startX, startY), (endX, endY),
            COLORS[idx], 2)
        y = startY - 15 if startY - 15 > 15 else startY + 15
        cv2.putText(image, label, (startX, y),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)
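I invoke the script like this (the script name and file paths are placeholders for my actual files):

python deep_learning_object_detection.py --prototxt deploy.prototxt \
    --model snapshot_iter_1000.caffemodel --image example.jpg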
This should display the image specified on the command line, with the network's detections drawn over it. Instead, the script crashes with the aforementioned error. I have seen other threads from people hitting this same error, but so far none of them have arrived at a solution that works with the current version of DIGITS.
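Based on those threads, my understanding is that a workaround would be to rewrite each blob's raw_data bytes back into the plain FP32 data field that OpenCV's Caffe importer accepts. Here is an untested sketch of that idea; it again assumes Nvidia Caffe's caffe_pb2 (for the raw_data, raw_data_type, and FLOAT16 definitions), and the filenames are placeholders:

import numpy as np
from caffe.proto import caffe_pb2

def raw_blob_to_data(blob):
    # move Nvidia Caffe's raw byte storage into the standard repeated
    # float 'data' field, converting FP16 weights up to FP32
    if not blob.raw_data:
        return
    if blob.raw_data_type == caffe_pb2.FLOAT16:
        values = np.frombuffer(blob.raw_data, dtype=np.float16)
    elif blob.raw_data_type == caffe_pb2.FLOAT:
        values = np.frombuffer(blob.raw_data, dtype=np.float32)
    else:
        raise ValueError("unhandled raw_data_type: %d" % blob.raw_data_type)
    del blob.data[:]
    blob.data.extend(values.astype(np.float32).tolist())
    blob.ClearField("raw_data")
    blob.ClearField("raw_data_type")

net_param = caffe_pb2.NetParameter()
with open("snapshot_iter_1000.caffemodel", "rb") as f:  # placeholder path
    net_param.ParseFromString(f.read())
for layer in net_param.layer:
    for blob in layer.blobs:
        raw_blob_to_data(blob)
with open("snapshot_fp32.caffemodel", "wb") as f:  # placeholder path
    f.write(net_param.SerializeToString())

I have not been able to verify that this is the right approach, which is part of why I am asking here.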
My full setup is as follows:
OS: Ubuntu 16.04
Nvidia DIGITS Docker Image Version: 19.01-caffe
DIGITS Version: 6.1.1
Caffe Version: 0.17.2
Caffe Flavor: Nvidia
OpenCV Version: 4.0.0
Python Version: 3.5
Any help is much appreciated.