
I am working on a video analytics project where I have to detect 5 kinds of objects in the streams from 10 CCTV cameras. The customer provided only one Ubuntu PC on which to deploy my video analytics engine.

Now, I have to install all of my application modules on a single PC.

So I chose the big-data tech stack below to implement this.

Since I have only a single PC with a single SSD drive:

Is it OK if I use the SINGLE BROKER + MULTIPLE PRODUCERS + SINGLE PARTITION PER TOPIC + SINGLE CONSUMER approach?

Or should I follow some other approach in order to get higher throughput?

I would be grateful for any advice on this.

Note: I know this will have a SINGLE POINT OF FAILURE (using a single PC), but the customer agreed to proceed.
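To make the trade-off concrete, here is a minimal sketch of how keyed messages map to partitions. This is a simplified stand-in for Kafka's default partitioner (Kafka actually uses murmur2 hashing, not Python's `hash`), and the camera IDs are hypothetical:

```python
def assign_partition(key: str, num_partitions: int) -> int:
    # Simplified key-based partitioning: with num_partitions == 1,
    # every message lands in partition 0, so a single consumer sees
    # all camera streams in one globally ordered queue, but total
    # throughput is capped at what that one consumer can process.
    return hash(key) % num_partitions

cameras = [f"camera-{i}" for i in range(10)]

# Single-partition topic: all 10 camera streams funnel into partition 0.
single = {cam: assign_partition(cam, 1) for cam in cameras}
assert set(single.values()) == {0}

# A 10-partition topic would instead spread the keys across partitions,
# allowing up to 10 consumers in one consumer group to read in parallel.
multi = {cam: assign_partition(cam, 10) for cam in cameras}
assert all(0 <= p < 10 for p in multi.values())
```

So one partition + one consumer is the simplest correct setup, and adding partitions only helps if you also add consumers (and have the CPU/disk headroom on the single PC to run them).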

SaddamBinSyed
    _What will happen...?_ - You tell us. Are you getting a specific error? – OneCricketeer Jul 13 '21 at 18:22
  • @OneCricketeer, actually my question is: is it OK if I use the **SINGLE BROKER + MULTIPLE PRODUCERS + SINGLE PARTITION PER TOPIC + SINGLE CONSUMER approach**, or should I follow some other approach to get higher throughput? So far I created multiple brokers and multiple partitions, but I can sense there is a slowdown, so I am asking the community in order to learn the better approach. Thanks – SaddamBinSyed Jul 14 '21 at 06:06
  • @SaddamBinSyed, in my view, if you have 1 consumer then 1 partition per topic is fine. What I would suggest is: try the above approach and see the performance. – Abu Muhammad Jul 14 '21 at 07:07
  • Multiple Kafka brokers on one machine do not give you any performance improvement, since you have the same memory, disk, network, etc. It will work fine... people write integration tests with all components on one machine all the time. – OneCricketeer Jul 14 '21 at 14:57

0 Answers