
I need to transfer several terabytes per day between three Windows computers. Two of them acquire medical imaging data (500 GB per sample, ca. 6-7 samples per day), and the third is dedicated to data analysis (browsing through 3D stacks, etc.).

So far, the computers are connected through Gigabit Ethernet. That works, but it is slow enough to make the entire workflow inefficient.

My question is: what is the current best practice for linking a small number of PCs into the fastest possible network? Should I deploy a small fiber LAN? Should I forgo Ethernet altogether in favor of Thunderbolt, USB-C, or some proprietary hardware? Or InfiniBand hardware?

aag
  • 10GBase-T is a thing. – Michael Hampton Oct 30 '18 at 13:45
  • 10GBase-T would be a thing, but is it the best thing? I am reading about 100GbE (https://www.storagereview.com/mellanox_introduces_connectx4_adapter), might that be a better option? Or would other bottlenecks limit transfer speed? Price is not the first concern, I can sink a few K$ into adapters. – aag Oct 30 '18 at 14:11
  • At that point you have to ask whether your workstations can even keep up! – Michael Hampton Oct 30 '18 at 14:12
  • 10GbE should transfer 500GB in about 7m20s (as opposed to the 1h15m of 1GbE), and has the benefit you can still use ethernet and the switches are only hundreds of dollars. There's also 40GbE gear (1m50s transfer for 500GB), though now you're into fiber and the switches are thousands of dollars. – gregmac Oct 30 '18 at 14:29

2 Answers


"Best" is a matter of requirement and budget.

100GbE switches are readily available but pricey - and as Michael has hinted, your data sources and sinks likely won't be able to keep up with 12.5 GB/s.

Beyond 100GbE there's also 200G and even 400G Ethernet but that's pretty much overkill for just a few TB/day.

10GbE is probably sufficient with 1.2 GB/s or 4.3 TB/h and also affordable - it's already a challenge to get your storage up to that speed. In comparison, 400G could transport up to 3 TB/min which requires serious horsepower at both ends.
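As a sanity check on these figures, here is a quick back-of-the-envelope calculation in Python. It assumes the ideal line rate and decimal units (500 GB = 500 × 10⁹ bytes), so real transfers will be somewhat slower:

```python
# Transfer time for one 500 GB sample at various Ethernet line rates.
# Assumes ideal, protocol-overhead-free throughput (an upper bound).

SAMPLE_BYTES = 500e9  # one imaging sample, decimal gigabytes

def transfer_seconds(gbit_per_s: float) -> float:
    """Seconds to move one sample at the given line rate."""
    return SAMPLE_BYTES / (gbit_per_s * 1e9 / 8)

for rate in (1, 10, 40, 100):
    s = transfer_seconds(rate)
    print(f"{rate:>3} GbE: {s / 60:6.1f} min per 500 GB sample")
```

At the stated 6-7 samples per day, 1GbE keeps a link busy for 7+ hours daily, while 10GbE cuts that to well under an hour - which is why it is usually the sweet spot for a setup like this.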

10GbE offers a variety of physical layers, most prominently

  • 10GBASE-T over Cat-6A (cheapest, 100 m reach)
  • SFP+ DAC (15 m reach)
  • 10GBASE-SR over MMF (OM4 for max. reach of 400 m)
  • 10GBASE-LR over SMF (OS2 for 10 km reach)

If you're still on Cat-5 cabling you'll need to rewire for full reach (Cat-6 should be good for 55 m, Cat-5e for maybe 30 m).

If you do need 40GbE you can pretty much forget about copper - there's 40GBASE-T, but it requires Cat-8 cabling, which is somewhat exotic and only reaches 30 m. 100G+ is all fiber or very short-range DAC.

Zac67

As already stated, a small 10 GbE LAN should be sufficient for your needs.

If you are on a budget, I would like to point out two other options:

  • if no serious data transfer happens between the two imaging machines, use one single-port 10 GbE interface in each imaging computer and one dual-port 10 GbE card in the analysis machine, and connect the former directly to the latter. In this manner you avoid the cost of a 10 GbE switch;
  • use cheap, second-hand InfiniBand DDR, QDR or FDR hardware.
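Whichever option you pick, it is worth verifying that the hosts themselves can actually push data that fast before blaming the network. A minimal Python sketch of a TCP throughput test (shown here over loopback, which only exercises the TCP stack; bind to the peer's address on the direct link to test the real path - the transfer size and chunk size are arbitrary choices):

```python
import socket
import threading
import time

def measure_throughput(total_bytes=256 * 1024 * 1024, chunk=1 << 20):
    """Send total_bytes over a local TCP connection, return GB/s observed."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))            # any free port on loopback
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink():
        # Receiver side: drain bytes until the full payload has arrived.
        conn, _ = srv.accept()
        received = 0
        while received < total_bytes:
            data = conn.recv(chunk)
            if not data:
                break
            received += len(data)
        conn.close()

    t = threading.Thread(target=sink)
    t.start()

    cli = socket.create_connection(("127.0.0.1", port))
    payload = b"\0" * chunk
    start = time.perf_counter()
    sent = 0
    while sent < total_bytes:
        cli.sendall(payload)
        sent += chunk
    cli.close()
    t.join()
    srv.close()
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e9    # decimal GB/s

if __name__ == "__main__":
    print(f"{measure_throughput():.2f} GB/s")
```

On a 10 GbE link you would hope to see something close to 1.2 GB/s; if the number comes out far lower, the bottleneck is the host (disk, CPU, TCP tuning), not the cabling.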
shodanshok