It's a dangerous metric for a real network because it applies a narrow measure where you need to take a holistic approach.
In graph theory, reliability is best when you can remove the most paths from the graph and still retain a valid path between all vertices. Hence, a network where each vertex connects to all the others with a dedicated path is the most reliable.
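Just to make it concrete what that metric actually measures, here's a minimal sketch using Python's networkx (the topologies are made up for the illustration, not taken from any real network):

```python
# Illustrative sketch: reliability measured purely as k-connectivity,
# i.e. how many vertices you can lose and still reach everything.
import networkx as nx

# A ring: every node has exactly 2 links.
ring = nx.cycle_graph(6)

# A full mesh: every node has a dedicated link to every other node.
mesh = nx.complete_graph(6)

print(nx.node_connectivity(ring))  # 2 -> survives the loss of any single node
print(nx.node_connectivity(mesh))  # 5 -> maximal for 6 nodes, "most reliable" by this metric
```

By this measure the full mesh always wins, and that's the whole problem: the number says nothing about what the links are made of or what's going on around them.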
In real networks, that's just one of many factors. Reliability is best when your servers and workstations can happily communicate at acceptable speeds come rain or shine, and that's determined by a lot more than simply 'how many links to other things do I have?'
In real networks, you must take into account:
- Protocols
- Geographical Location
- Finance
- Human Fallibility
- Compatibility
- Interference
- Business/User/Vendor/Stakeholder expectations
Any one of the factors above (and probably a whole bunch more) can impact the reliability of a real network, so you can see that a holistic view must be employed to gauge it properly.
Applying computer science directly to IT scenarios tends to be problematic because the theory rarely considers real-world factors in any detail.
Here are a few real-world examples I've encountered where the reliability of a network couldn't be measured by k-connectivity:
- The computer infrastructure supporting a 24/7 steel mill operation. We'd regularly encounter reliability problems due to electrical/magnetic interference on the network links, so many Cat5 runs had to be converted to optical links. It wouldn't have mattered if we had 1 or 100 Cat5 cables for the runs near active furnaces; you'd still see regular network drop-outs due to unpredictable EMF.
- A network mix of IPX and IP running over Netgear, Cisco, HP, 3COM and unbranded network equipment. Connections would drop, and IPX traffic would mess with the measurement and flow of IP traffic. Systems couldn't be universally monitored, so we had interruptions in service simply because we couldn't detect and repair problems quickly enough.