
Disclaimer: I'm not a network guru but I do learn quickly! Bear with me.

Situation: I have a colo setup consisting of several servers and an iSCSI SAN, connected with a Cisco 3560G switch and protected by a Cisco ASA appliance. The switch (and servers) are configured with VLANs such that iSCSI traffic is on a dedicated VLAN (actually two, for redundancy and throughput) and all other network traffic is on another VLAN.

I also have a basic SOHO setup, for which I have another Cisco 3560G and a ridiculous router provided by my ISP (it's a Cisco router that doesn't actually allow you to route more than one subnet). My current configuration is what I believe is called 'router on a stick'. My local SOHO network is a standard 192.168.10.0/24, whereas the colo is 10.0.0.0/8.

I've managed to configure the local 3560G to handle all my local machines, and I've also configured my router with a persistent IPsec VPN connection to the ASA, which is great. I can connect from any SOHO client to any of my colo servers (to be clear, I can access any device on VLAN 1).

Goal: I want some of my local SOHO hosts to be able to access the iSCSI SAN at the colo that resides on VLAN 2...

The difference from the client's perspective is pinging 10.10.10.x (VLAN 1) vs. 10.10.0.x (VLAN 2)... I cannot figure out what I need to do to get this to work. What I have found, and I think this makes sense, is that the iSCSI VLAN (VLAN 2) is not connected directly to the ASA and thus is not available to be NATed by the ASA. The VLAN 2 devices are cabled directly between the iSCSI target and the hosts, and the traffic is handled entirely by the switch. The ASA has no idea about the VLAN 2 traffic.

Possible solution: Is it possible to 'trunk' my SOHO switch with my colo switch (they're exactly the same model and specs) such that the VLAN information is shared and they can 'talk'?

I'm not sure what more specific information I need to post, but if anyone can lend some assistance I'd really appreciate it. Sorry if I'm not absolutely clear; networking isn't my forte.

Skyhawk
Mark

1 Answer


Interesting question with a few possible answers, but first an important question:

Why do you want to do this? iSCSI performance will be terrible: the latency will likely cause timeouts on the SCSI bus, and transfer speed will be low.

The easiest way to do this would be to configure the ASA with a subinterface on VLAN 2 and simply route the traffic. Normally routing and SANs don't go together, but in this case I don't see it being the main performance constraint.
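As a rough sketch only, a VLAN 2 subinterface on the ASA might look like the following. The interface names, VLAN ID, and the 10.10.0.1 address are assumptions based on the 10.10.0.x addressing in the question; you'd also need to add 10.10.0.0/24 to the crypto ACL so it counts as interesting traffic for the VPN:

```
! ASA side — hypothetical interface name and address
interface GigabitEthernet0/1.2
 vlan 2
 nameif iscsi
 security-level 100
 ip address 10.10.0.1 255.255.255.0

! 3560G side — the port facing the ASA must carry VLAN 2 tagged
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 1,2
```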

You could look into using L2TP or GRE tunnelling, which would allow you to transport the VLANs over the VPN. There's a good explanation of these techniques here: http://www.openflow.org/wk/index.php/Tunneling_-_GRE/L2TP
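Note that a plain GRE tunnel carries routed (layer-3) traffic between the sites; actually stretching a VLAN at layer 2 would need something like L2TPv3. As a sketch with entirely hypothetical addressing (the tunnel /30 and the public endpoint IPs are placeholders), a GRE tunnel on an IOS router at each end might look like this — as far as I know the 3560G doesn't support GRE tunnel interfaces, so this would need a capable router at each site:

```
! SOHO-side router — placeholder addresses throughout
interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source 203.0.113.1
 tunnel destination 198.51.100.1
 tunnel mode gre ip

! Route the colo iSCSI subnet via the far end of the tunnel
ip route 10.10.0.0 255.255.255.0 172.16.0.2
```

The colo side would mirror this with the addresses swapped, and the GRE traffic would ride inside the existing IPsec tunnel.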

However, I would recommend you find another approach: NFS, SCP, FTP, or some other protocol that performs reasonably over this sort of link.

  • iSCSI and kernel SCSI timeouts should be configurable, but of course the performance concern is valid nonetheless. Routing and SANs *do* go together when done right (that's one of the reasons iSCSI was specified, after all), but that seems unlikely to be the case here. – the-wabbit Jan 27 '12 at 11:51
  • Hi Mark, thanks for your response. The reason I want to do this is that the SAN contains some information that my SOHO servers need to access on a temporary basis. I have several Dell 2950s in my SOHO that depend on the SAN. One is an MS Exchange box whose storage is configured on that SAN and that I need to access for historical purposes. The several other servers are Xen hosts whose guest images currently reside on the SAN. In terms of latency and speed, if it helps I do have a dedicated 100 Mbps connection at both ends. It's 100 Mbps up/down at the colo and 100 Mbps down, 5 up at my SOHO. – Mark Jan 27 '12 at 22:40