I'm building a system that handles five video streams concurrently. The problem is that when I run multiple deep learning models to process those streams, throughput drops sharply. If you have experience designing this kind of system, please share some suggestions.
LUân Đào Duy
- What GPU are you using? Some graphics cards have two GPU chips, for instance the Tesla K80; you can set up a VM on each chip, giving you two machines that won't impact each other. Or you can add more GPUs via SLI and set up a VM for each GPU. – George Yu Jun 21 '19 at 20:54
- Thanks for your suggestion. I am using a single RTX 2080 Ti, so is there any way to optimize performance on just one GPU? – LUân Đào Duy Jun 23 '19 at 02:08
- Then it all depends on your particular project: which deep learning library you are using, and what architecture your model has. For instance, if you are modeling an LSTM in Keras, there is CuDNNLSTM, which is optimized with cuDNN kernels. Meanwhile, you can consider architectures with less computation and memory, and drop unimportant features. There are many methods, from many angles, that can help more or less. – George Yu Jun 24 '19 at 03:14
- Yup, I am using YOLOv3 on the MXNet framework. If you have much experience with it, please suggest something. Thank you – LUân Đào Duy Jun 24 '19 at 15:00
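One optimization that often helps in a multi-stream detector setup (regardless of framework) is batching: instead of running five separate forward passes per time step, gather one frame from each stream into a single batch and run one pass, which uses the GPU far more efficiently. A minimal sketch of the pattern follows; `dummy_detector` is a hypothetical stand-in for the real model (e.g. a GluonCV YOLOv3 net) so the example runs without a GPU:

```python
import numpy as np

def batched_detect(frames, detector):
    """Gather one frame per stream into a single NCHW batch and run one
    forward pass instead of one pass per stream."""
    batch = np.stack([np.transpose(f, (2, 0, 1)) for f in frames])  # NHWC -> NCHW
    return detector(batch)

# Hypothetical stand-in for the real detector: returns one value per image.
def dummy_detector(batch):
    return [float(img.mean()) for img in batch]

# Five 416x416 RGB frames, one from each stream.
frames = [np.random.rand(416, 416, 3).astype("float32") for _ in range(5)]
results = batched_detect(frames, dummy_detector)
print(len(results))  # one result per stream
```

The trade-off is a small amount of added latency (you wait until a frame from each stream is available), in exchange for much higher GPU utilization.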
1 Answer
I would consider running each deep learning model on a separate machine. Otherwise they will always be fighting over shared resources such as RAM, CPU time, disk, etc., and you won't achieve optimal performance.
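With only one 2080 Ti, separate machines aren't an option, but you can still isolate the CPU-side work (decoding, pre/post-processing) in a dedicated worker per stream so it overlaps with GPU inference; frameworks like MXNet release the GIL during GPU compute, so threads can run concurrently. A minimal sketch, assuming a hypothetical `serve_stream` worker in place of real decode-and-infer logic:

```python
from concurrent.futures import ThreadPoolExecutor

def serve_stream(stream_id):
    """Hypothetical per-stream worker: in a real system this would decode
    frames and feed them to the shared detector."""
    return f"stream-{stream_id}: done"

# One worker per stream. All five still share the single GPU, so this
# overlaps CPU-side work with inference rather than multiplying GPU
# throughput.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(serve_stream, range(5)))

print(results)
```

Note that the GPU itself remains a shared resource: this pipelines work around it but does not add compute capacity.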

Sergei
- Thanks for your comment. I only have one GPU, a 2080 Ti, so I wonder whether there is any way to get the best performance out of it. – LUân Đào Duy Jun 23 '19 at 02:11