
First things first, the documentation here says "JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size is from 1KB to 6MB."

I am sending a .jpg that is ~1.4 MB. In my search, others who had this issue were custom-forming packets and ran into problems chunk-transferring images; however, unlike them, I am not forming my own API call, just passing a .jpg to the Python SDK. What is going wrong / what am I missing?

The error is:

getting image, start time
opening image:  2019_11_30_18_40_21.jpg
time elapsed for capturing image: 8.007975816726685
time elapsed for detecting image: 0.0017137527465820312
appending face found in image
identifying face
time elapsed for identifying image: 0.8008027076721191
Person for face ID e7b2c3fe-6a62-471f-8371-8c1e96608362 is identified in 2019_11_30_18_40_21.jpg with a confidence of 0.68515.
Traceback (most recent call last):
File "./GreeterCam_V0.1 - testing.py", line 116, in <module>
face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, face.candidates[0].person_id, image)
File "/home/pi/.local/lib/python3.7/site-packages/azure/cognitiveservices/vision/face/operations/_person_group_person_operations.py", line 785, in add_face_from_stream
raise models.APIErrorException(self._deserialize, response)
azure.cognitiveservices.vision.face.models._models_py3.APIErrorException: (InvalidImageSize) Image size is too small.  

My source code is:

import glob
import os
import time

from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

if __name__ == '__main__':
    FRAMES_PER_SECOND = 0.13
    ENDPOINT = os.environ['COGNITIVE_SERVICE_ENDPOINT']
    KEY = os.environ['COGNITIVE_SERVICE_KEY']
    face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
    PERSON_GROUP_ID = 'my-unique-person-group'
    #IMAGES_FOLDER = os.path.join(os.path.dirname(os.path.realpath(__file__)))
    #camera = PiCamera()
    #camera.start_preview()
    test_images = [file for file in glob.glob('*.jpg')]
    #webcam = cv2.VideoCapture(0)
    while(True):
        start_time = time.time()
        print('getting image, start time')
        for image_name in test_images:
            image = open(image_name, 'r+b')
            print("opening image: ", image_name)
            time.sleep(5)
            faces = face_client.face.detect_with_stream(image)     
            #image = open(os.path.join(IMAGES_FOLDER, imageName), 'r+b')
            face_ids = []
            time1 = time.time()
            print('time elapsed for capturing image: ' + str(time1-start_time))
            # detect faces in image

            time2 = time.time()
            print('time elapsed for detecting image: ' + str(time2-time1))
            for face in faces:
                print('appending face found in image')
                face_ids.append(face.face_id)
            if face_ids:
                print('identifying face')
                # if there are faces, identify person matching face
                results = face_client.face.identify(face_ids, PERSON_GROUP_ID)
                time3 = time.time()
                print('time elapsed for identifying image: ' + str(time3-time2))
                name = 'person-created-' + str(time.strftime("%Y_%m_%d_%H_%M_%S"))
                if not results:
                    #if there are no matching persons, make a new person and add face
                    print('No person in the person group for faces from {}.'.format(image_name))
                    new_person = face_client.person_group_person.create(PERSON_GROUP_ID, name)
                    face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, new_person.person_id, image)
                    time4 = time.time()
                    print('time elapsed for creating new person: ' + str(time4-time3))
                    print('New Person Created: {}'.format(new_person.person_id))
                for face in results:
                    if not face.candidates:
                        new_person = face_client.person_group_person.create(PERSON_GROUP_ID, name)
                        face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, new_person.person_id, image)
                    else:
                        #add face to person if match was found
                        print('Person for face ID {} is identified in {} with a confidence of {}.'.format(face.face_id, os.path.basename(image.name), face.candidates[0].confidence)) # Get topmost confidence score
                        face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, face.candidates[0].person_id, image)
                        time4 = time.time()
                        print('time elapsed for creating new person: ' + str(time4-time3))   

Also, this is running on Raspbian on a Pi 3B(+?).

davidt
  • maybe it is too small in `WIDTH x HEIGHT`, not in `MB`. – furas Dec 16 '19 at 05:06
  • @furas maybe theoretically, but I doubt this is the issue here since I am using a normal sized image with a normal aspect ratio – davidt Dec 16 '19 at 16:14
  • It is possible. In another question, about recognition with PyTesseract, the solution was to resize the image to 120%. But I would print the filename to check which image causes the problem; maybe this file is accidentally different from the others (a quick dimension check is sketched after these comments). – furas Dec 16 '19 at 16:55
  • @davidt is this code sample complete? I just worked with another user getting the "image too small" error, and it turned out the steps for creating a person group were incomplete. I am trying your sample and I don't see that you created a person group, or maybe that person group already exists? I wanted to address this because code implemented in the right order would not need the file-open call as a parameter, which is the workaround the answer shows. I also have working samples of the Identify API call, if that is what you were after. – Azurespot Feb 22 '20 at 03:14
  • @Azurespot Yes, I had already created a person group, and the accepted solution fixed my problem. Are you saying that I should not have to provide an image from file at all when using the person_group_person.add_face_from_stream(...) function? Or is there a better way to handle the creation of a new person if face.identify(...) fails? I would like to see your working samples. – davidt Feb 24 '20 at 03:34
  • @davidt, I modified your sample a bit to separate the API calls out for clarity. In this example, you'll see that no modifications are necessary for the `add_face_from_stream` function. I found that in your sample it was a scope issue that was causing that error. I hope this helps. I don't have a place to upload my images right now, but I will try to do that soon if you want to download and test. https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/Face/DetectIdentifyFace.py – Azurespot Feb 27 '20 at 04:27
  • @davidt here are the images. https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images – Azurespot Feb 27 '20 at 21:05
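
A minimal sketch of the dimension check furas suggests above, assuming Pillow is installed (`pip install Pillow`) and the .jpg files sit in the working directory as in the question:

    # Print each image's pixel dimensions to rule out a width/height problem.
    import glob
    from PIL import Image

    for image_name in glob.glob('*.jpg'):
        with Image.open(image_name) as img:
            width, height = img.size
            print(image_name, width, 'x', height)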

3 Answers


I ran your code on my side and got the same error. It seems there is something wrong with the `image` param in this line:

face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, face.candidates[0].person_id, image)

at the step marked by this comment:

#add face to person if match was found

When I changed this line of code to:

face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, face.candidates[0].person_id, open(image_name,"r+b"))

The issue was solved and the face was added to the person successfully (this person had one face before).
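
For reference, the same fix written with context managers so both file handles get closed afterwards — a minimal sketch that assumes the variables from the question's loop (`image_name`, `face_client`, `PERSON_GROUP_ID`, `face`):

    # Reopen the file for the add_face call so its stream starts at byte 0;
    # the `with` blocks close the handles when each call is done.
    with open(image_name, 'rb') as detect_stream:
        faces = face_client.face.detect_with_stream(detect_stream)

    # ... identify as in the question ...

    with open(image_name, 'rb') as add_stream:
        face_client.person_group_person.add_face_from_stream(
            PERSON_GROUP_ID, face.candidates[0].person_id, add_stream)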


Hope it helps.

Stanley Gong
  • Thank you for your help! I will try this fix this evening. – davidt Dec 17 '19 at 18:18
  • Hi @davidt, how's it going? Has your issue been solved? – Stanley Gong Dec 18 '19 at 03:39
  • Solved! Very odd that the image has to be opened in the argument; I would have thought my code was equivalent. Thank you for your help! – davidt Dec 18 '19 at 05:32
  • This worked because the image variable was lost in scope, that's why opening it inside the parameter eliminated the scope issue. But the downside is that image is not available anywhere else now. I added a sample link in the comments of the question for an alternative code layout that keeps the image both in scope and available as a variable for other processes. – Azurespot Feb 27 '20 at 21:08
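
One way to keep the image data both in scope and reusable across calls, as Azurespot describes — a minimal sketch (not the linked sample) that assumes the client and IDs from the question:

    # Read the bytes once, then give each SDK call its own fresh in-memory stream.
    import io

    with open(image_name, 'rb') as f:
        image_bytes = f.read()

    faces = face_client.face.detect_with_stream(io.BytesIO(image_bytes))
    # ... identify as in the question ...
    face_client.person_group_person.add_face_from_stream(
        PERSON_GROUP_ID, face.candidates[0].person_id, io.BytesIO(image_bytes))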

I ran into this as well. This happens because the stream has already been read to the end when you use it in `detect_with_stream`.

You can call `image.seek(0)`, or close the image and reopen it, but seeking is the better solution.
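
A minimal sketch of the seek approach, assuming the same variables as in the question:

    image = open(image_name, 'rb')
    faces = face_client.face.detect_with_stream(image)  # reads the stream to the end

    # ... identify as in the question ...

    image.seek(0)  # rewind so the next call reads from the start of the file
    face_client.person_group_person.add_face_from_stream(
        PERSON_GROUP_ID, face.candidates[0].person_id, image)
    image.close()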

JDHannan

I was getting the same error because I was opening the photo before the recognition. So I removed that `open` and the code worked.
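
My reading of this fix, as a sketch (assuming a `face_client` set up as in the question): open the file only where the SDK call actually consumes it, rather than earlier in the script, so nothing reads the stream first.

    # Open the stream only at the call site so nothing else consumes it first.
    # The filename is taken from the question's log for illustration.
    with open('2019_11_30_18_40_21.jpg', 'rb') as stream:
        faces = face_client.face.detect_with_stream(stream)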