
I have implemented a VGG network in TensorFlow, and it is already trained. I also have an Amazon Web Services p2.xlarge instance (NVIDIA Tesla K80, 12 GB) with the Deep Learning AMI for Amazon Linux from the AWS Marketplace installed.

When I use the network, processing one image takes about 30 seconds, which is far too long; the same network on a TITAN X takes 1-2 seconds.

Does anyone have experience with this, or any suggestions on how to fix this issue?
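One thing worth ruling out before blaming the K80 itself: if the graph and session are rebuilt for every image, the 30 seconds may be dominated by one-time setup (graph construction, CUDA context initialization) rather than the forward pass. A minimal timing sketch like the one below separates warm-up cost from steady-state inference; `run_once` here is a hypothetical stand-in for a single forward pass of your VGG network, not an actual TensorFlow call.

```python
import time

def time_inference(run_once, warmup=3, iters=10):
    """Average the steady-state time of `run_once`, excluding warm-up calls.

    The first few calls often carry one-time costs (graph construction,
    session creation, CUDA context init), which can dominate on a fresh
    EC2 instance. `run_once` is a hypothetical stand-in for one forward
    pass; substitute your own sess.run(...) call.
    """
    for _ in range(warmup):
        run_once()
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    return (time.perf_counter() - start) / iters
```

If the steady-state average is seconds per image while the first call took 30 seconds, the fix is to build the graph and create the session once and reuse them across images, rather than per request.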

plamba95

1 Answer


How about the "NVIDIA Volta Deep Learning AMI" on a p3.2xlarge (Tesla V100 GPU) instance? The spot price was 50 cents/hour when I tried it on Oct. 27, 2017.

Sign up at https://www.nvidia.com/en-us/gpu-cloud/?ncid=van-gpu-cloud and get your "API Key" to use the AMI free of charge.

EC2/GPU config info: https://aws.amazon.com/blogs/aws/new-amazon-ec2-instances-with-up-to-8-nvidia-tesla-v100-gpus-p3/

Setogit
  • But then again, this instance is supposed to be better than the machine with the Titan X, so I think it will be the same with another instance; I can't figure out where the problem is. – plamba95 Oct 30 '17 at 11:45
  • In my app, I repurposed a pre-trained vgg19 model. Inference time for one 256x256 color JPEG on p3.2xlarge with the Volta AMI was around 100 milliseconds or less. – Setogit Oct 30 '17 at 19:19