
This question is similar to: Blade Enclosure, Multiple Blade Servers, What's the closest approximation to a DMZ?

In my case, I don't have virtualization, so I cannot use VLANs as suggested in the answers to the question above.

I have several blades in a single chassis. Some of the blades should be part of the DMZ and some should be on the internal network (behind the DMZ).

Is there a security issue, given that all blades are interconnected via the chassis? Should I run a firewall on each blade to limit access to the internal chassis network?

I will use HP blades running Linux.

2 Answers


Virtualization isn't a prerequisite for using VLANs; you just need switch support for VLANs, which your HP blade system will almost certainly have.

In most cases you can configure the internal switch that each blade's network interfaces connect to on a port-by-port basis, applying VLANs at that point to give you the separation you're after.
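For example, on a Cisco IOS-based switch module (one of the interconnect options for HP c-Class enclosures; other modules have different CLIs), the per-port setup would look roughly like this, with the VLAN IDs and port numbers made up for illustration:

    ! Define one VLAN per security zone
    vlan 10
     name DMZ
    vlan 20
     name INTERNAL
    !
    ! Downlink port to a DMZ blade
    interface GigabitEthernet0/1
     switchport mode access
     switchport access vlan 10
    !
    ! Downlink port to an internal blade
    interface GigabitEthernet0/2
     switchport mode access
     switchport access vlan 20

Then you carry both VLANs up to your firewall (or carry each VLAN out on its own untagged uplink), and the DMZ and internal blades never share a broadcast domain inside the chassis.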

James Yale

I have not used HP's blade solution, but I had to solve a similar problem on an IBM BladeCenter S chassis with several DMZ blades in a non-virtualized setup.

Normally, you'd set up VLANs with tagging/trunking and (likely) LACP for redundancy on the uplinks between your chassis switch module and the real switches on your network. But with the few blades I had (three) and the simple network I was building, I skipped VLAN trunking and instead treated each blade/switch-module port pairing as a host/server interface (see below), plugging each one into an untagged native VLAN -- either the "trusted" or the "DMZ" VLAN, alongside the appropriate firewall interface -- on the one managed switch I had for the project.
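On the blade side nothing special was needed: because each port was untagged, the OS just sees a plain interface on its VLAN's subnet. A minimal sketch (the interface name and addresses are examples, using documentation address space):

    # Plain untagged interface on a DMZ blade; no VLAN tagging in the OS.
    ip addr add 192.0.2.10/24 dev eth0
    ip link set eth0 up
    # Default route via the firewall's DMZ-side interface.
    ip route add default via 192.0.2.1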

Packet-sniffing on the Advanced Management Module interface (the chassis's main "command and control" unit) showed a lot of (proxied) ARP chatter on the internal chassis network coming from the various I/O modules. But from within the operating system/network stack (Linux) installed on the bare metal of each blade, the DMZ hosts couldn't "see" any of the non-DMZ hosts. That's because I had assigned each physical port on the blade switch module to an isolated group, which was in turn assigned to a single blade, giving a 1:1 mapping of switch-module ports to logical interfaces on the blades: port 1 is assigned to group 1, which is assigned to blade 1; port 2 to group 2, which is assigned to blade 2; and so on.
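On the firewall part of your question: the port isolation above did the real separation, but running a host firewall on each blade as well is cheap defense in depth. A rough iptables sketch (the subnets, ports, and allowed services are placeholders for your own policy):

    # Default-deny inbound; this blade forwards nothing.
    iptables -P INPUT DROP
    iptables -P FORWARD DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # On a DMZ blade: expose only the published service (HTTP here)...
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    # ...and allow SSH management from the internal admin subnet only.
    iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/24 -j ACCEPT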

As for the shared storage, I did something similar again, with a combination of zoning (the mapping of I/O ports between blades and storage, which determines which hard drive(s) each blade has access to), RAID pools, and volumes (the block-level devices you partition/format/mount in your OS).
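From the OS you can verify that the zoning did what you expect: each blade should only discover the volumes actually mapped to it. A few standard Linux commands for checking (output will vary with your HBA and driver):

    lsscsi           # SCSI devices the kernel has discovered
    lsblk            # block devices and their partitions
    multipath -ll    # multipath topology, if dm-multipath is in use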

Once you get your head around the fact that the blade chassis is just a tidier/more efficient hardware I/O architecture, what you're doing logically within the storage or network configuration is essentially the same as the physical equivalent.

gravyface