
I am currently working on an optical flow project and I have come across a strange error.

I have uint16 images stored as bytes in my TFRecords. When I read the TFRecords on my local machine I get uint16 values, but when I deploy the same code and read them inside Docker I get uint8 values, even though my dtype is uint16. The uint16 values are being reduced to uint8, e.g. 32768 -> 128.

What is causing this error?

My local machine has: TensorFlow 1.10.1 and Python 3.6.
My Docker image has: TensorFlow 1.12.0 and Python 3.5.

I am working with the TensorFlow Object Detection API. While creating the TFRecords I use:

with tf.gfile.GFile(flows, 'rb') as fid:
    flow_images = fid.read()

While reading it back I am using `tf.image.decode_raw`.

Dataset: KITTI FLOW 2015
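The 32768 -> 128 collapse is consistent with the raw uint16 bytes being *reinterpreted* as uint8 rather than numerically converted. A minimal NumPy sketch of that reinterpretation (hypothetical single-pixel value, outside TensorFlow):

```python
import numpy as np

# A uint16 pixel as it sits in the TFRecord's raw bytes (little-endian).
value = np.array([32768], dtype=np.uint16)
raw = value.tobytes()  # b'\x00\x80' -- two bytes per pixel

# Decoding those bytes as uint8 (what decode_raw does when out_type is
# uint8) splits each 2-byte pixel into two 1-byte samples:
as_uint8 = np.frombuffer(raw, dtype=np.uint8)    # [0, 128]

# Decoding with the stored dtype recovers the original pixel:
as_uint16 = np.frombuffer(raw, dtype=np.uint16)  # [32768]
```

So 32768 shows up as 128 (its high byte), and the decoded tensor also has twice as many elements as expected; the likely fix is to pass `out_type=tf.uint16` when calling `tf.image.decode_raw` in the Docker code path.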

  • How do you read the images? Give us some code, please – Sharky Mar 22 '19 at 09:40
  • I am using the TensorFlow Object Detection API. While creating the TFRecords I use: with tf.gfile.GFile(flows, 'rb') as fid: flow_images = fid.read() While reading it back I am using tf.image.decode_raw. I am working on the KITTI dataset for flow 2015 – vinay s Mar 22 '19 at 09:46
  • I would assume that if the image was saved as a `_bytes_feature`, as is usual, it will be decoded as uint8. Why do you specifically need uint16, considering it's an image? – Sharky Mar 22 '19 at 10:36
  • Because these images are my optical flow ground truth, provided in lossless uint16 format. – vinay s Mar 22 '19 at 11:34
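Following up on the comment thread: storing the image as a `_bytes_feature` is fine for uint16 data, because the bytes feature just carries an opaque byte string; the dtype only matters at decode time. A hedged sketch of the read path, using NumPy to stand in for `tf.image.decode_raw(serialized, out_type=tf.uint16)` (the 2x3 flow image, its values, and the variable names are assumptions for illustration):

```python
import numpy as np

# Hypothetical 2x3 uint16 flow ground-truth image, serialized to the
# raw byte string a _bytes_feature payload would carry.
flow = np.arange(6, dtype=np.uint16).reshape(2, 3) * 10000
payload = flow.tobytes()

# Read path: decode with the *stored* dtype, then reshape. With
# tf.image.decode_raw this corresponds to out_type=tf.uint16; decoding
# as uint8 instead would double the element count and split each pixel
# into its two component bytes.
decoded = np.frombuffer(payload, dtype=np.uint16).reshape(2, 3)

assert np.array_equal(decoded, flow)
```

The reshape is the other place this commonly breaks: with the correct uint16 decode, the flat tensor has half as many elements as the uint8 decode, so the target shape must be the true height x width, not the byte count.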

0 Answers