
I'm new to Fibre Channel, so please take that into consideration. We have 3 servers: 2 are running Hyper-V, and the 3rd was bought to act as a file/backup server for those 2. All servers are running Windows Server 2012 Standard.

Steps I've taken:

  • I've connected Server A and Server B to one dual-port card on Server C.
  • I've installed the Windows Standards-Based Storage Management feature on all servers, but I don't see any changes.
  • I've created a storage pool and virtual disks on Server C, and I would like to share them with Server A and Server B, but whatever I do I can't see them on those servers (a sketch of roughly what I ran is below).
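
For reference, this is roughly what I ran on Server C (the pool and disk names and the 2 TB size are just placeholders):

    # Pool every disk that is eligible for pooling
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool1" `
        -StorageSubsystemFriendlyName "Storage Spaces*" `
        -PhysicalDisks $disks

    # Carve a fixed, mirrored virtual disk out of the pool
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" `
        -Size 2TB -ResiliencySettingName Mirror -ProvisioningType Fixed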

Is there something like the iSCSI Initiator, but for Fibre Channel?

MadBoy
  • What are you actually trying to achieve here? It's really not clear. Talk us through the protocol stack you're expecting, and give us much more detail: the hardware involved, etc. – Chopper3 Mar 05 '13 at 14:33
  • We want to share the storage from Server C to Server A and Server B. We want the drives from Server C to be visible on Server A and Server B as normal drives, and we expect it to act like an iSCSI connection (more or less) but with greater speed. The drives on Server C can do over 500 MB/s read/write, so 1 Gbit iSCSI is not enough to get that throughput into the servers. Hope that's clearer now. – MadBoy Mar 05 '13 at 14:50
  • Yeah, that's not going to happen though, sorry; it's just not supported. You CAN do IP-over-FC, but it's a very old and unsupported spec that almost nobody (and I mean nobody) uses; it certainly won't work in this way. Also, you don't mention what FC HBAs and switches you have. Do you have these, and if so, what are they? Either way, I'm pretty sure it'd be wasted hardware anyway. You need to do what you're doing via regular IP-over-Ethernet sharing, whether that's via iSCSI or a NAS protocol such as SMB/CIFS. I think you've gone down a blind alley with the whole FC thing here. – Chopper3 Mar 05 '13 at 14:53
  • We have 3 x Emulex LPe11002-E PCI-E x4 dual-port 4 Gb/s Fibre Channel (LC) cards. No switches; direct connections between the 3 servers. As for the blind alley, the customer just buys stuff and tells me about it later, expecting me to fix it :-) – MadBoy Mar 05 '13 at 14:56
  • Erm... how do you mean 'direct connection'? As in, the supposed server has two cards, one going to each client? And I feel for you with this situation; it's not your fault, it's just never going to work. – Chopper3 Mar 05 '13 at 15:00
  • Well, there are 2 ports on each card. The client assumed that one card on Server C could be connected to both Server A and Server B. I even tried connecting 2 cables between A and C, and it just doesn't work. Anyway, seeing that this doesn't work, is there any other way to utilize FC for this, or is iSCSI the only way? – MadBoy Mar 05 '13 at 15:19
  • Technically, if you REALLY wanted to, you COULD connect two servers like that, but literally nobody in the world would do that. They're clearly massively out of their depth when it comes to storage. FC is used by millions to connect their servers to FC-based SAN arrays (think EMC/HP EVA/NetApp etc.), but NOT for simple peer networking scenarios like you've suggested; it's massive overkill for that anyway. I'm not even sure iSCSI is what they really want. Get them to describe what they want, and get a storage expert in to do it. – Chopper3 Mar 05 '13 at 15:50
  • Mellanox makes dual-port 10 Gb cards that can be used to do what you are asking, but you'd probably have to use iSCSI to accomplish it. I am using HP-branded cards (part # 581199-001) that have dual SFP+ ports, with copper SFP+ cables made by Proline to interconnect them. Because these are Ethernet NICs, they won't work over FC. – MikeAWood Mar 11 '13 at 22:46

2 Answers


Fibre Channel is generally done through a switch. That said, direct connect is supported, but usually between targets and initiators: initiators are server HBAs, and targets are disk drives (or tapes). I don't know of any software you could run on Windows that would allow a server to present its HBA as a target.

What you're trying to do is definitely supported and well documented over Ethernet, but according to a quick Google on the subject, the Windows target software doesn't support FC.
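
For instance, with the Ethernet NICs you already have, Server 2012 can present block storage via its built-in iSCSI Target feature, or you can simply share folders over SMB. A rough sketch; every name, path, size, and address below is made up:

    # --- On Server C: install the iSCSI Target feature and publish a LUN ---
    Add-WindowsFeature FS-iSCSITarget-Server
    New-IscsiServerTarget -TargetName "HyperVHosts" `
        -InitiatorIds "IQN:iqn.1991-05.com.microsoft:servera.example.local"
    New-IscsiVirtualDisk -Path "E:\iSCSI\lun1.vhd" -Size 500GB
    Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVHosts" -Path "E:\iSCSI\lun1.vhd"

    # --- On Server A / Server B: connect with the built-in initiator ---
    Start-Service msiscsi
    New-IscsiTargetPortal -TargetPortalAddress 192.168.10.3
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

    # --- Or skip block storage entirely and just share over SMB ---
    # (assumes the E:\VMStore folder already exists)
    New-SmbShare -Name "VMStore" -Path "E:\VMStore" -FullAccess "DOMAIN\HyperVAdmins"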

Basil
  • What are my options for sharing the drives from C to B and A without the speed limit that iSCSI through a single 1 Gbit NIC imposes? – MadBoy Mar 05 '13 at 14:51
  • Buy 10 Gbps NICs; they're quite cheap these days, and lots of servers come with them already. It's easier and more effective than trying to bond multiple 1 Gbps NICs anyway. – Chopper3 Mar 05 '13 at 15:05
  • 10 Gbps NICs need a 10 Gbps switch (unless you get 2-port NICs). – MadBoy Mar 05 '13 at 15:20
  • Yep, and their prices have come down a lot recently; it's the best way to get >1 Gbps for Ethernet traffic. There was a time when bonding multiple 1 Gbps NICs made sense financially, but that's gone. – Chopper3 Mar 05 '13 at 15:51

I think your best solution is, as others have suggested, to buy some 10GbE gear.

However, it is actually possible to do what you were trying to do, with a few caveats, the first being that you don't run Windows on the FC target machine: the Linux drivers for both QLogic and Emulex HBAs support FC target mode.

That, of course, requires that you are comfortable setting up a storage server under Linux. There are some Linux-based NAS/SAN appliance stacks that add a configuration layer for you, and Openfiler in particular supports FC target mode when using QLogic HBAs. I haven't used Openfiler, so I can't comment on it other than to say it exists.

I have, however, used target mode with both Emulex and QLogic HBAs under Linux, and can verify that it works well.
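
For the QLogic case on a reasonably recent kernel, the in-tree LIO target has a qla2xxx fabric module that targetcli can drive. A minimal sketch; the WWPNs and backing device below are made up, and on some kernels you also have to disable initiator mode on the HBA (the qla2xxx "qlini_mode" module option):

    /> backstores/block create name=vdisk0 dev=/dev/sdb    # backing store (older targetcli calls this "iblock")
    /> qla2xxx/ create naa.21000024ff314c48                # WWPN of the local HBA port
    /> qla2xxx/naa.21000024ff314c48/luns create /backstores/block/vdisk0
    /> qla2xxx/naa.21000024ff314c48/acls create naa.21000024ff314c49   # the initiator's WWPN
    /> saveconfig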

None of this speaks to whether doing this is a good idea or not, although if Openfiler is willing to charge for commercial support of the QLogic FC target mode, they must be willing to stand behind it.

Daniel Lawson
  • Thanks. I just shared directories. It's better on Windows since I have Hyper-V there as well. – MadBoy Mar 12 '13 at 07:27