
I am building an input pipeline in TensorFlow for a task I'm working on. I have created a TFRecord dataset and saved it to a file on disk.

I am trying to load in the dataset (to be batched and sent to the actual ML algorithm) using the following code:

dataset = tf.data.TFRecordDataset(filename)

print("Starting mapping...")

dataset = dataset.map(map_func = read_single_record)
print("Mapping complete")

buffer = 500 # How large of a buffer will we sample from?
batch_size = 125
capacity = buffer + 2 * batch_size

print("Shuffling dataset...")
dataset = dataset.shuffle(buffer_size = buffer)
print("Batching dataset...")
dataset = dataset.batch(batch_size)
dataset = dataset.repeat()

print("Creating iterator...")
iterator = dataset.make_one_shot_iterator()
examples_batch, labels_batch = iterator.get_next()

However, I get an error from the dataset.map() line. The error looks like this: TypeError: Expected int64, got <tensorflow.python.framework.sparse_tensor.SparseTensor object at 0x00000000085F74A8> of type 'SparseTensor' instead.

The read_single_record() function looks like this:

def read_single_record(record):
    keys_to_features = {
        "image/pixels": tf.FixedLenFeature([], tf.string, default_value = ""),
        "image/label/class": tf.FixedLenFeature([], tf.int64, default_value = 0),
        "image/label/numbb": tf.FixedLenFeature([], tf.int64, default_value = 0),
        "image/label/by": tf.VarLenFeature(tf.float32),
        "image/label/bx": tf.VarLenFeature(tf.float32),
        "image/label/bh": tf.VarLenFeature(tf.float32),
        "image/label/bw": tf.VarLenFeature(tf.float32)
    }

    features = tf.parse_single_example(record, keys_to_features)

    image_pixels = tf.image.decode_image(features["image/pixels"])
    print("Features: {0}".format(features))

    example = image_pixels  # May want to do some processing on this at some point

    label = [features["image/label/class"],
             features["image/label/numbb"],
             features["image/label/by"],
             features["image/label/bx"],
             features["image/label/bh"],
             features["image/label/bw"]]

    return example, label

I'm not sure where the issue lies. I adapted this code from the TensorFlow API documentation, slightly modified for my purposes, and I really have no idea where to start fixing it.

For reference, here is the code I have for generating the TFRecord file:

def parse_annotations(in_file, img_filename, cell_width, cell_height):
    """ Parses the annotations file to obtain the bounding boxes for a single image
    """
    y_mins = []
    x_mins = []
    heights = []
    widths = []
    grids_x = []
    grids_y = []
    classes = [0]

    num_faces = int(in_file.readline().rstrip())

    img_width, img_height = get_image_dims(img_filename)

    for i in range(num_faces):
        clss,  x, y, width, height = in_file.readline().rstrip().split(',')

        x = float(x)
        y = float(y)
        width = float(width)
        height = float(height)

        x = x - (width / 2.0)
        y = y - (height / 2.0)

        y_mins.append(y)
        x_mins.append(x)
        heights.append(height)
        widths.append(width)

        grid_x, grid_y = get_grid_loc(x, y, width, height, img_width, img_height, cell_width, cell_height)

    pixels = get_image_pixels(img_filename)

    example = tf.train.Example(features = tf.train.Features(feature = {
        "image/pixels": bytes_feature(pixels),
        "image/label/class": int_list_feature(classes),
        "image/label/numbb": int_list_feature([num_faces]),
        "image/label/by": float_list_feature(y_mins), 
        "image/label/bx": float_list_feature(x_mins), 
        "image/label/bh": float_list_feature(heights), 
        "image/label/bw": float_list_feature(widths)
    }))

    return example, num_faces
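The helpers bytes_feature, int_list_feature, and float_list_feature used above aren't shown in the question; these bodies are my assumption, following the usual pattern of wrapping values in the matching tf.train.Feature protobuf types:

```python
import tensorflow as tf

# Assumed implementations of the helpers used in parse_annotations: each
# wraps a Python value (or list of values) in a tf.train.Feature proto.
def bytes_feature(value):
    return tf.train.Feature(bytes_list = tf.train.BytesList(value = [value]))

def int_list_feature(values):
    return tf.train.Feature(int64_list = tf.train.Int64List(value = values))

def float_list_feature(values):
    return tf.train.Feature(float_list = tf.train.FloatList(value = values))
```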

if len(sys.argv) < 5:
    print("Usage: python convert_to_tfrecord.py [path to processed annotations file] [path to training output file] [path to validation output file] [training fraction]")

else:
    processed_fn = sys.argv[1]
    train_fn = sys.argv[2]
    valid_fn = sys.argv[3]
    train_frac = float(sys.argv[4])

    if(train_frac > 1.0 or train_frac < 0.0):
        print("Training fraction (f) must be 0 <= f <= 1")

    else:
        with tf.python_io.TFRecordWriter(train_fn) as writer:
            with tf.python_io.TFRecordWriter(valid_fn) as valid_writer:
                with open(processed_fn) as f:
                    for line in f:
                        ex, n_faces = parse_annotations(f, line.rstrip(), 30, 30)

                        randVal = rand.random()

                        if(randVal < train_frac):
                            writer.write(ex.SerializeToString())

                        else:
                            valid_writer.write(ex.SerializeToString())

Note that I've removed some code that isn't to do with the actual serialisation/creation of the TFRecords file.
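As a sanity check on the serialisation side, a written record can be read straight back with tf.data.TFRecordDataset. A minimal round-trip sketch (this uses the newer tf.io.TFRecordWriter alias of tf.python_io.TFRecordWriter, and a made-up single-field Example):

```python
import os
import tempfile

import tensorflow as tf

# Write one minimal Example, then read it back to confirm the round trip
# works before wiring up the full pipeline.
path = os.path.join(tempfile.mkdtemp(), "sample.tfrecord")

example = tf.train.Example(features = tf.train.Features(feature = {
    "image/label/numbb": tf.train.Feature(
        int64_list = tf.train.Int64List(value = [7]))
}))

with tf.io.TFRecordWriter(path) as writer:
    writer.write(example.SerializeToString())

# Each element of the dataset is one serialized record (a scalar string).
records = list(tf.data.TFRecordDataset(path))
print(len(records))  # 1
```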

Makcheese

1 Answer


Not tested, but it seems the mapping function cannot return a list that mixes SparseTensor and Tensor objects.

tf.VarLenFeature(tf.float32) returns a SparseTensor, while tf.FixedLenFeature([], tf.int64) returns a Tensor.
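This difference is easy to confirm in isolation. A small sketch (using the newer tf.io.* aliases of the same parsing ops, and a made-up two-field Example):

```python
import tensorflow as tf

# A tiny serialized Example with one fixed-length and one variable-length field.
example = tf.train.Example(features = tf.train.Features(feature = {
    "count": tf.train.Feature(int64_list = tf.train.Int64List(value = [2])),
    "boxes": tf.train.Feature(float_list = tf.train.FloatList(value = [0.1, 0.4])),
}))

parsed = tf.io.parse_single_example(example.SerializeToString(), {
    "count": tf.io.FixedLenFeature([], tf.int64),
    "boxes": tf.io.VarLenFeature(tf.float32),
})

# VarLenFeature fields come back sparse; FixedLenFeature fields come back dense.
print(isinstance(parsed["boxes"], tf.SparseTensor))  # True
print(isinstance(parsed["count"], tf.SparseTensor))  # False
```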

For batching to work well, I would suggest working only with Tensor objects.

Here is a suggestion for how you could build label:

label = {
    "image/label/class" : features["image/label/class"],
    "image/label/numbb" : features["image/label/numbb"],
    "image/label/by" : tf.sparse_tensor_to_dense(features["image/label/by"], default_value=-1),
    "image/label/bx" : tf.sparse_tensor_to_dense(features["image/label/bx"], default_value=-1),
    "image/label/bh" : tf.sparse_tensor_to_dense(features["image/label/bh"], default_value=-1),
    "image/label/bw" : tf.sparse_tensor_to_dense(features["image/label/bw"], default_value=-1)
}

For inspiration on how you can treat the output of this mapping, I suggest this thread.
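Since the densified box lists still vary in length across examples, batching them typically needs padded_batch rather than plain batch. A minimal sketch with made-up data (the -1.0 filler mirrors the default_value above):

```python
import tensorflow as tf

# Per-example box lists of different lengths, standing in for the
# densified VarLenFeature output.
boxes = [[0.1], [0.2, 0.3, 0.4], [0.5, 0.6]]

dataset = tf.data.Dataset.from_generator(
    lambda: iter(boxes),
    output_signature = tf.TensorSpec(shape = [None], dtype = tf.float32))

# Pad each batch to the length of its longest element, filling with -1.
dataset = dataset.padded_batch(2, padded_shapes = [None], padding_values = -1.0)

batch = next(iter(dataset))
print(batch.shape)  # first batch padded to (2, 3)
```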

syltruong