First of all, apologies if this is the wrong Stack Exchange site for this question - it's about Ethernet network capacity, but not specifically about servers etc.
I am designing a network for a data acquisition system that will be outputting data via TCP at just over 30 Mbps. (To ward off the obvious first comment: that is definitely megabits per second, not megabytes.)
I recall hearing somewhere that one should aim to keep a network's normal utilisation under 10% of its capacity, but I can't find any proper research to that effect. Is the 10% figure reasonable? And if so, is it appropriate for my data acquisition system, or is it intended for e.g. corporate networks whose traffic is much 'burstier' than my constant 30 Mbps stream?
Would it be better to use Gigabit Ethernet, which would be running at about 3% of capacity, rather than 100BASE-T devices that would be running at about 30% of capacity?
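For reference, the percentages above are just the steady application data rate divided by the nominal Ethernet line rate; a minimal sketch of that arithmetic (ignoring TCP/IP and Ethernet framing overhead, which would push the on-wire figure somewhat higher):

```python
# Utilisation = application data rate / nominal link rate.
# Assumes the steady ~30 Mbit/s output quoted above; no allowance
# is made here for protocol overhead.

DATA_RATE_MBPS = 30  # steady data-acquisition output, megabits per second

for link_name, link_mbps in [("100BASE-T", 100), ("Gigabit Ethernet", 1000)]:
    utilisation = DATA_RATE_MBPS / link_mbps * 100
    print(f"{link_name}: {utilisation:.0f}% of nominal capacity")

# 100BASE-T: 30% of nominal capacity
# Gigabit Ethernet: 3% of nominal capacity
```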