I suspect it's less a Java programming issue than an OS networking-stack / network-configuration issue:
http://coding.derkeiler.com/Archive/Java/comp.lang.java.help/2009-09/msg00087.html
On some OSes, a single native TCP socket can listen on a port for both
IPv4 and IPv6 simultaneously, accepting connections from both remote
IPv4 and remote IPv6 clients. On other OSes (such as WinXP), an OS
native socket CANNOT do that; it can only accept from IPv4 or IPv6,
not both. On those OSes, you need two listen sockets to accept
connections from both remote IPv4 and IPv6 clients: one socket to
listen for IPv4 connections and one for IPv6.
Windows 7 and Windows Server 2008 handle dual stacks just fine; Windows XP not so much :)
You seem to be on Linux; most modern Linux desktops and servers also handle dual IPv4/IPv6 with no problem.
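To make the dual-stack behavior concrete, here is a minimal sketch (class name and ephemeral port are my own choices, not from the original answer): a single ServerSocket bound to the wildcard address, which on a dual-stack OS (modern Linux, Windows 7+) accepts both IPv4 and IPv6 clients, while on a WinXP-era stack you would need a second socket for the other family.

```java
import java.net.ServerSocket;

// Sketch: one wildcard-bound listen socket. On dual-stack OSes this
// single socket accepts both IPv4 and IPv6 clients; on older stacks
// (e.g. WinXP) a second ServerSocket per family would be required.
public class DualStackServer {
    public static void main(String[] args) throws Exception {
        // Port 0 picks a free ephemeral port, convenient for a demo
        try (ServerSocket server = new ServerSocket(0)) {
            System.out.println("Listening on " + server.getLocalSocketAddress());
            // server.accept() would serve v4 and v6 clients alike here
        }
    }
}
```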
Here's a good article on interoperability:
Note that you can "turn off" IPv6 for your Java application with: -Djava.net.preferIPv4Stack=true
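The flag is normally passed on the command line (java -Djava.net.preferIPv4Stack=true MyServer), but as a sketch it can also be set programmatically, provided it runs before any networking classes are loaded (class name here is hypothetical):

```java
// Sketch: set java.net.preferIPv4Stack before any java.net classes
// initialize; setting it later has no effect on the already-chosen stack.
public class PreferV4Demo {
    public static void main(String[] args) {
        System.setProperty("java.net.preferIPv4Stack", "true");
        System.out.println(System.getProperty("java.net.preferIPv4Stack"));
    }
}
```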
You can also make IPv6 sockets on Linux accept IPv4 connections as well (dual-stack binding) by disabling v6-only binding: echo 0 > /proc/sys/net/ipv6/bindv6only
This is arguably your best source:
You should absolutely be able to accomplish what you want (at least at the Java programming level), unless you're limited by external network issues:
Nodes     V4 Only   V4/V6   V6 Only
-------   -------   -----   -------
V4 Only      x        x
V4/V6        x        x        x
V6 Only               x        x
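You can probe which row of that matrix a given host falls into from Java, since a dual-stack node typically resolves to both an IPv4 and an IPv6 address. A small sketch (the families returned depend on your OS and DNS configuration):

```java
import java.net.Inet4Address;
import java.net.Inet6Address;
import java.net.InetAddress;

// Sketch: list which address families a hostname resolves to.
// A V4/V6 node usually yields both an Inet4Address (e.g. 127.0.0.1)
// and an Inet6Address (e.g. ::1) for "localhost".
public class FamilyProbe {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost";
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            String family = addr instanceof Inet6Address ? "IPv6"
                          : addr instanceof Inet4Address ? "IPv4" : "?";
            System.out.println(family + " -> " + addr.getHostAddress());
        }
    }
}
```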
PS:
Here's one more good link, which explains what's happening at the socket level. It's not Java (it's C), but exactly the same principles apply: