Google Cloud Video Intelligence provides the following code for parsing annotation results with object tracking:

from google.cloud import videointelligence

# gs_video_path and output_uri are defined elsewhere
video_client = videointelligence.VideoIntelligenceServiceClient()

features = [videointelligence.Feature.OBJECT_TRACKING]
context = videointelligence.VideoContext(segments=None)
request = videointelligence.AnnotateVideoRequest(
    input_uri=gs_video_path, features=features,
    video_context=context, output_uri=output_uri)

operation = video_client.annotate_video(request)
result = operation.result(timeout=3600)
object_annotations = result.annotation_results[0].object_annotations

for object_annotation in object_annotations:
    print('Entity description: {}'.format(object_annotation.entity.description))
    print('Segment: {}s to {}s'.format(
        object_annotation.segment.start_time_offset.total_seconds(),
        object_annotation.segment.end_time_offset.total_seconds()))

    print('Confidence: {}'.format(object_annotation.confidence))

    # Here we print only the bounding box of the first frame_annotation in the segment
    frame_annotation = object_annotation.frames[0]
    box = frame_annotation.normalized_bounding_box
    timestamp = frame_annotation.time_offset.total_seconds()
    timestamp_end = object_annotation.segment.end_time_offset.total_seconds()

    print('Time offset of the first frame_annotation: {}s'.format(timestamp))
    print('Bounding box position:')
    print('\tleft  : {}'.format(box.left))
    print('\ttop   : {}'.format(box.top))
    print('\tright : {}'.format(box.right))
    print('\tbottom: {}'.format(box.bottom))
    print('\n')

However, I want to parse the JSON file that is generated via output_uri. The format of the JSON file is as follows:

{
  "annotation_results": [ {
    "input_uri": "/production.supereye.co.uk/video/54V5x8q0CRU/videofile.mp4",
    "segment": {
      "start_time_offset": {
      
      },
      "end_time_offset": {
        "seconds": 22,
        "nanos": 966666000
      }
    },
    "object_annotations": [ {
      "entity": {
        "entity_id": "/m/01yrx",
        "description": "cat",
        "language_code": "en-US"
      },
      "confidence": 0.91939145,
      "frames": [ {
        "normalized_bounding_box": {
          "left": 0.17845993,
          "top": 0.44048917,
          "right": 0.5315634,
          "bottom": 0.7752136
        },
        "time_offset": {
        
        }
      }, {

How can I use the example code to parse the JSON that is provided with output_uri? What kind of conversion is needed for this?

london_utku
  • Content of `object_annotations` is the same as the content in `output_uri`, and it is already being parsed in the loop `for object_annotation in object_annotations:`. Do you still need to parse `output_uri`? – Ricco D Jun 21 '21 at 02:50
  • For the time being I only use some parts of the data. When I need more information in the future, I will parse the output_uri again and retrieve, for instance, frame-precise information. That is why I intend to gather the information directly from output_uri. It shouldn't be that difficult, but how? – london_utku Jun 21 '21 at 07:10

2 Answers


Using the file from output_uri, you can parse the JSON with the code below. I saved the file locally as response.json and will use it for parsing.

This is similar to your code above in that it parses data from the first frame_annotation, but it lacks the conversion of the time offsets: the total_seconds() function used in your code belongs to a time object, while the JSON only gives you plain dicts.

I commented out start_time_offset and end_time_offset since each of them has two keys, seconds and nanos. It's up to you which one you would like to use; just uncomment the lines and adjust accordingly.

import json

# Load the JSON file that the API wrote to output_uri
with open('response.json', 'r') as f:
    data = json.load(f)

for results in data["annotation_results"]:
    for obj_ann in results["object_annotations"]:
        #start_time_offset = obj_ann["segment"]["start_time_offset"]["seconds"]
        #end_time_offset = obj_ann["segment"]["end_time_offset"]["seconds"]
        frame_annotation = obj_ann["frames"][0]
        entity = obj_ann["entity"]["description"]
        confidence = obj_ann["confidence"]
        box = frame_annotation["normalized_bounding_box"]
        # time_offset can also have two keys; look out for the other key, `seconds`
        time_offset = frame_annotation["time_offset"]

        print('Entity description: {}'.format(entity))
        #print('Segment: {}s to {}s'.format(start_time_offset, end_time_offset))
        print('Confidence: {}'.format(confidence))

        # You can modify the code here if you encounter the `seconds` key;
        # an empty time_offset dict means the offset is zero.
        if 'nanos' not in time_offset:
            print('No time offset in frame')
        else:
            print('Time offset of the first frame_annotation: {}'.format(time_offset["nanos"]))

        print('Bounding box position:')
        print('\tleft  : {}'.format(box["left"]))
        print('\ttop   : {}'.format(box["top"]))
        print('\tright : {}'.format(box["right"]))
        print('\tbottom: {}'.format(box["bottom"]))
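
If you later want an offset as a single value in seconds, a minimal helper over the raw dicts could look like this (my own sketch, not part of the code above; it assumes that, as in the sample response, a key is simply omitted when its value is zero):

def offset_to_seconds(offset):
    # `seconds` and `nanos` are left out of the JSON when they are zero
    return offset.get("seconds", 0) + offset.get("nanos", 0) / 1e9

# e.g. offset_to_seconds({"seconds": 22, "nanos": 966666000}) -> 22.966666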

For testing I used gs://cloud-samples-data/video/cat.mp4 and parsed its response.

Ricco D
  • Thanks for the answer. Yes, you are right, this will parse the content using the JSON notation. But is there any way we can translate the output_uri JSON file and still rely on the Google Cloud Video Intelligence code with simple dot notation? I have investigated Python packages called Bunch, python-box, and dotmap. Any suggestions on how I may use them for simple dotted JSON access, so that I can rely on the Google Cloud Video Intelligence reference code? – london_utku Jun 21 '21 at 10:14

Using the dotmap package and implementing a simple total_seconds function for the timestamps, things stay pretty close to the original example code:

import json
from dotmap import DotMap

def total_seconds(time_offset):
    # The API leaves `seconds`/`nanos` out of the JSON when the value is zero,
    # and DotMap returns an empty DotMap for a missing key, so fall back to 0.
    seconds = 0 if type(time_offset.seconds) is DotMap else time_offset.seconds
    nanos = 0 if type(time_offset.nanos) is DotMap else time_offset.nanos
    return seconds + nanos / 1e9

with open("./visual.json") as f:
    result = DotMap(json.load(f))

print(result)  # sanity check: dump the whole parsed response

object_annotations = result.annotation_results[0].object_annotations


for object_annotation in object_annotations:
    print('Entity description: {}'.format(object_annotation.entity.description))
    frame_annotation = object_annotation.frames[0]
    box = frame_annotation.normalized_bounding_box
    timestamp = total_seconds(frame_annotation.time_offset)
    timestamp_end = total_seconds(object_annotation.segment.end_time_offset)
    print("Timestamps : {0} - {1}".format(timestamp, timestamp_end))

    print('Bounding box position:')
    print('\tleft  : {}'.format(box.left))
    print('\ttop   : {}'.format(box.top))
    print('\tright : {}'.format(box.right))
    print('\tbottom: {}'.format(box.bottom))
    print('\n')
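
As a side note, a minimal sketch of an alternative that avoids third-party packages (my own assumption, not something from the answers above): with google-cloud-videointelligence v2 you can parse the JSON back into the protobuf response type via google.protobuf.json_format, after which the original reference code, including total_seconds() on the duration fields, should work unchanged:

from google.cloud import videointelligence
from google.protobuf import json_format

with open("./visual.json") as f:
    json_str = f.read()

# json_format.Parse accepts both snake_case and lowerCamelCase field names,
# so the output_uri file can be loaded straight into the response message.
response = videointelligence.AnnotateVideoResponse()
json_format.Parse(json_str, response._pb, ignore_unknown_fields=True)

object_annotations = response.annotation_results[0].object_annotations
for object_annotation in object_annotations:
    print('Entity description: {}'.format(object_annotation.entity.description))
    # duration fields come back as datetime.timedelta, as in the original code
    print('Segment end: {}s'.format(
        object_annotation.segment.end_time_offset.total_seconds()))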
london_utku