I have a Python script running on Linux that needs to repeatedly and reliably read the ARP table, in order to check some IP-MAC associations. I'm evaluating the possible options, and wondering which one of these is leanest/cleanest - primarily in terms of performance and overhead.
The following are currently on my radar:
1. Run `arp -an` using `subprocess` and parse the output
2. Read `/proc/net/arp` and parse the output (similar to #1, but a different way to get the underlying data; libraries like `python_arptable` do exactly that)
3. Open a socket and use the `SIOCGARP` ioctl system call to poll the ARP table (per http://man7.org/linux/man-pages/man7/arp.7.html). I don't yet have a Python implementation for this, but I believe it would be feasible.
4. Run `ip neighbor list` using `subprocess` and parse the output
5. Use a Python netlink library like pyroute2 to read the table
If I'm not mistaken, options #1 and #2 read the table directly from the kernel cache, and #4 and #5 use netlink calls; I'm not too sure where #3 falls.
For some time I was doing #5 - specifically using pyroute2 - but experienced some reliability issues; e.g. ARP entries that definitely existed were sometimes not listed. I was left with the sense that pyroute2 is a bit more powerful than the simple task at hand requires. I'm currently doing #2 and it seems to be working fine for now - but I'm wondering if there's a better way.
Although my implementation language is Python, I believe the question is not strongly language-specific. Would love some pointers on the pros and cons of the above. Specifically:
- Better to run a command and read its output, or to read a file from `/proc`? (#1 vs #2)
- Old-school kernel interfaces vs new-age netlink? (#1/#2 vs #4/#5)
- How does the ioctl approach (#3) compare to the others?
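To make #3 concrete, here's the rough shape I imagine it taking. This is an untested sketch, assuming Linux's `struct arpreq` layout from `man 7 arp` (two 16-byte sockaddrs, a 4-byte flags int, a 16-byte netmask sockaddr, and an `IFNAMSIZ`=16 device name) and `SIOCGARP` = 0x8954; note it queries one entry at a time rather than listing the table:

```python
import fcntl
import socket
import struct

SIOCGARP = 0x8954  # Linux ioctl number, from <bits/ioctls.h>

def pack_arpreq(ip, dev):
    """Build a 68-byte struct arpreq: arp_pa (sockaddr_in, 16 bytes),
    arp_ha (16), arp_flags (int, 4), arp_netmask (16), arp_dev (16)."""
    arp_pa = struct.pack("hH4s8x", socket.AF_INET, 0, socket.inet_aton(ip))
    return (arp_pa                       # protocol address to look up
            + bytes(16)                  # arp_ha, filled in by the kernel
            + struct.pack("i", 0)        # arp_flags
            + bytes(16)                  # arp_netmask
            + dev.encode().ljust(16, b"\0"))  # arp_dev

def get_mac(ip, dev):
    """Query a single ARP entry via the SIOCGARP ioctl (untested sketch)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        req = fcntl.ioctl(s.fileno(), SIOCGARP, pack_arpreq(ip, dev))
    # arp_ha starts at byte 16; the MAC bytes follow its 2-byte sa_family
    return ":".join("%02x" % b for b in req[18:24])
```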