3

I am somewhat comfortable with the Linux / OpenSolaris command line, but not very good at it.

I am trying to build a Storage based on ZFS, the specs are:

  • Dell 670 Machine with 4 Hard Drives (SATA and IDE)
  • I have downloaded and created Open Indiana USB and can boot live and install
  • I have also seen napp-it (http://www.napp-it.org/index_en.html) and have installed it on Oracle Solaris Express (Personal Home PC)

I am a bit confused and have a few questions:

  1. The boot/install disk is separate, and I have 4 extra internal drives attached. When I do the installation, how do I form a pool including all 4 drives?
  2. If I go the napp-it way (which seemed easier to me) and form a pool of all 4 drives via the web GUI, how can I share it with my Windows machines over the network? I have two iMacs (Apple OS X) as well.
  3. Is there a document I can read about ZFS and how to handle / maintain it?
  4. Any other recommendations you think will help?

I am really tempted to build a ZFS NAS on OpenIndiana, and I will be grateful if an expert can provide step-by-step instructions, either command line or napp-it web GUI, including setting up sharing with Windows and OS X clients.

Caleb
Mutahir
  • Just as a sidenote, have you tried NexentaStor? If this is mainly meant to be a storage appliance, Nexenta may be an easier path. – ewwhite May 18 '11 at 17:53
  • Hi ewwhite, thanks for your input. I have heard of it, but I want to try OpenIndiana or Oracle Solaris Express - will definitely check NexentaStor too, but for now it's OpenIndiana or Oracle Solaris Express :-) Thanks! – Mutahir May 18 '11 at 18:05
  • Almost all of this information is easily discoverable with a few common-sense google queries. Like "ZFS create a pool" or "ZFS administration". – Brett Dikeman Jan 10 '12 at 18:45

4 Answers

3

I can tell you the CLI commands for creating the pool. I don't know the GUI or WebUI though.

# replace drive0... with your drives

zpool create tank drive0 drive1
# This creates a striped set: no redundancy, a single failure = dead pool

zpool create tank mirror drive0 drive1 mirror drive2 drive3
# This creates a stripe of mirrors, basically RAID10

zpool create tank raidz drive0 drive1 drive2 drive3
# This creates a stripe set with single parity, like RAID5

zpool create tank raidz2 drive0 drive1 drive2 drive3
# This creates a stripe set with double parity, like RAID6

Run zpool status to make sure the command you ran was successful.

You can change the mountpoint of the pool (by default /tank, because that's the name picked above) with a command like zfs set mountpoint=/another_location tank

It would not apply directly to your situation, but it may be of help: I have a short intro to ZFS for FreeBSD on my blog.

Chris S
  • Thanks for the blog link -- currently have a few FreeNAS boxes and have been wanting to experiment with ZFS on plain-old FreeBSD. This looks like a good resource. – nedm May 18 '11 at 19:30
2

On question #1: ZFS boot pools can only be single disks or simple mirrors. You cannot use multiple mirrors or RAID-Z pools for booting at this time, due to limitations in boot loaders (GRUB on x86, OpenBoot on SPARC). IIRC, there is a Google Summer of Code proposal to get GRUB 2.0 support into OI, which might allow for booting from more complex pools. For now you'll need additional disks for a root/boot pool if you want to use your four current disks in a single pool.

On question #2: See the "sharesmb" property in the zfs(1M) man page for enabling CIFS sharing for Windows clients. Mac OS X should also be able to connect to CIFS shares, though NFS can probably be used there as well.
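As a minimal command-line sketch of CIFS sharing (assumptions: OpenIndiana's in-kernel SMB service is installed, the pool is named tank, and the dataset name is illustrative):

```shell
# enable the in-kernel SMB/CIFS service and its dependencies
svcadm enable -r smb/server

# create a dataset and share it over SMB in one step
zfs create -o sharesmb=on tank/data

# optionally give the share a friendlier name than the auto-generated one
zfs set sharesmb=name=data tank/data

# CIFS keeps its own password hashes, so after enabling pam_smb_passwd
# in /etc/pam.conf, re-set each user's password to generate them
passwd asmith
```

Windows clients can then browse to \\servername\data; on the Macs, Finder's "Connect to Server" with smb://servername/data should work as well.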

On question #3: The ZFS Best Practices Guide has a lot of information that will help you: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

On question #4: I concur with ewwhite above: think about NexentaStor if all you want to do is build a NAS.

eirescot
2

1. Install OpenIndiana on a single disk (mirroring is not supported by the base installer; if you need it, you have to set it up later).

2. Create your pool once OpenIndiana is running.

3. If you use napp-it, you can create pools via the pool menu and share folders via the share menu (just click on a ZFS folder - share smb column), among other management tasks like disk, pool, share, ACL, job, snap, and replication management.

4. If you want to share your folders via AFP with napp-it (fully supported by the Web-UI), you need to run the napp-it online AFP installer first. See this PDF.

Caleb
Gea
2

I wrote an OpenIndiana tutorial a while back.

It also has a simple explanation of how to lock the server down somewhat. I have some friends and family on the server, and where my system is concerned I have barely more trust in them than in strangers.

They are still 'users' after all. You can never trust 'users'.

ZFS File Server Walkthrough

-- EDIT --

Upon suggestion, the text of the post has been copied and pasted here.

It is formatted better on my blog, though, so read it here or there, wherever you prefer.

Since writing this I have done a few different things like removing permissions from users and symlinking only the tools necessary for SFTP in an accessible directory. I feel it has made the machine more secure.

I will be rewriting this when I upgrade to a 16-20 bay server.


Use Open Indiana with ZFS to Create a Somewhat Locked Down File Server

Install OpenIndiana v148 with SSH

You will need a system with at least four(4) disks for this example

The system disk
    This disk is to put the operating system on.
    I recommend at least 30GB
    The faster the better
The first data disk
    This is the first disk of a pair.
    Reliability is paramount
    Buy as big as you can afford
The second data disk
    This is the second disk of a pair.
    Reliability, again, is paramount
    And buy as big as you can afford
AT LEAST ONE BACKUP DISK
    RAID, ZFS, OTHER... their purpose is to help with uptime
    ZFS also assists in somewhat painlessly growing your storage capacity
    Backup is backup, redundant disk strategies are for use and failure
    Buy as big as you can afford

Follow the prompts, turn on SSH, use the whole system disk.

Update the system via CLI

pkg image-update --require-new-be
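Because --require-new-be puts the upgrade into a fresh boot environment, you can inspect the result and roll back with beadm; a sketch (the BE name is illustrative):

```shell
beadm list                          # list boot environments; note which is active now (N/R flags)
pkg image-update --require-new-be   # upgrade into a new BE rather than the live one
beadm list                          # the new BE should be marked to become active on reboot

# if the new BE misbehaves, fall back to the previous one:
# beadm activate openindiana-old    # illustrative BE name; use one from 'beadm list'
# init 6                            # reboot into it
```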

The GUI update tools do not work in a fresh release 148 installation.

Find the disk names

format

Use [CTRL + C] to exit the format command

Create the mirrored zpool

zpool create newpool mirror c2t2d0 c2t3d0

Check out your handiwork

zpool status
df -h

Create a base directory structure

newpool
|-business
|-hobby
|-books
|-users
| |-admin01
| |-asmith
| | |-shared
| |-lsmith
| | |-shared
|-misc

mkdir /newpool/business/
mkdir /newpool/hobby/
mkdir /newpool/books/
mkdir /newpool/users/
mkdir /newpool/users/admin01/
mkdir /newpool/users/asmith/
mkdir /newpool/users/asmith/shared/
mkdir /newpool/users/lsmith/
mkdir /newpool/users/lsmith/shared/
mkdir /newpool/misc/
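The same tree can be created with a short loop using mkdir -p (a sketch; newpool_demo is a stand-in scratch directory so you can try it anywhere, substitute your pool's real mountpoint /newpool):

```shell
ROOT=newpool_demo   # stand-in for /newpool; substitute your pool's mountpoint
for d in business hobby books misc \
         users/admin01 users/asmith/shared users/lsmith/shared; do
  mkdir -p "$ROOT/$d"   # -p creates missing parents such as users/ automatically
done
ls "$ROOT"
```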

Create any groups if necessary

groupadd admin01
groupadd internal
groupadd external
groupadd common

Add any non-existing initial users

Please note that these are two separate useradd commands, one per user.

useradd -d /newpool/users/asmith/ -c "Adam Smith" -G internal,common -s /usr/lib/rsh asmith
useradd -d /newpool/users/lsmith/ -c "Luanne Smith" -G external,common -s /usr/lib/rsh lsmith

The options are as follows:

-d is the home directory /newpool/users/username/ in this example.
-c is the real name; it can be anything, but if you want it to contain a space, enclose the value in double quotes.
-G list all the groups of the directories you want the people to have access to separated by commas.
    At the very least I give membership to the common group -G common .
    But maybe I want to give access to the external directory as well -G external,common .
-s /usr/lib/rsh is the 'restricted shell' to prevent a lot of funny business.

Set passwords for any non-existing initial users

passwd lsmith
passwd asmith

passwd username

(Enter password twice-- tada!)

(passwd: password successfully changed for username)

Modify existing users

usermod -G admin01,internal,common admin01

(UX: usermod: admin01 is currently logged in, some changes may not take effect until next login.)

You can verify user information in the plaintext /etc/passwd file

You can verify group creation in the plaintext /etc/group file

Apply proper owner:group properties

chown admin01:admin01 /newpool/business/
chown admin01:peers /newpool/hobby/
chown admin01:peers /newpool/books/
chown admin01:admin01 /newpool/users/
chown admin01:admin01 /newpool/users/admin01/
chown asmith:admin01 /newpool/users/asmith/
chown asmith:admin01 /newpool/users/asmith/shared/
chown lsmith:admin01 /newpool/users/lsmith/
chown lsmith:admin01 /newpool/users/lsmith/shared/
chown admin01:common /newpool/misc/

Apply proper permissions

(4 read 2 write 1 execute)

(! execute required for non-owner:group on directory to traverse file system)

chmod 700 /newpool/business/
chmod 750 /newpool/hobby/
chmod 750 /newpool/books/
chmod 711 /newpool/users/
chmod 770 /newpool/users/admin01/
chmod 770 /newpool/users/asmith/
chmod 770 /newpool/users/asmith/shared/
chmod 770 /newpool/users/lsmith/
chmod 770 /newpool/users/lsmith/shared/
chmod 750 /newpool/misc/

770 gives writability, readability, traversing to owners and group members, and nothing to others - for regular user directories

750 gives writing to the owner, reading and traversing to the owner and group members, and nothing to others - for read only access to regular users

711 gives all access to the owner, and being able to traverse the directory to everyone - allows regular users to descend deeper into the directory tree where they may have access

700 gives no access to anyone but the owner, can't even open the directory - revoke access to regular users entirely
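To see these modes in action without touching the real tree, here is a small demonstration in a scratch directory (permdemo is illustrative):

```shell
mkdir -p permdemo/private permdemo/shared permdemo/readonly
chmod 700 permdemo/private    # owner only; others cannot even open it
chmod 770 permdemo/shared     # owner and group get everything, others nothing
chmod 750 permdemo/readonly   # group may read and traverse but not write
ls -ld permdemo/private permdemo/shared permdemo/readonly
```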

NFS & Samba

Currently, I don't have any NFS or Samba shares set up for this server.

I will update the instructions should that change.

Set quotas

On my file server I don't plan on having many users and even fewer user groups. So far I have no plans for any quotas.

If I did set a quota, I would likely do it on a user by user basis.

zfs set userquota@username=100G newpool/users/username

However, as of ZFS version 15, group quotas are available as well.

zfs set groupquota@common=250GB newpool/misc
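You can check what is set, and how much space is actually consumed, with zfs get and the userspace/groupspace subcommands (a sketch, assuming the dataset names above):

```shell
zfs get userquota@username newpool/users/username   # show the quota property
zfs userspace newpool/users/username                # per-user space consumption on the dataset
zfs groupspace newpool/misc                         # per-group space consumption
```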

More users?

Add new directories

Create /newpool/users/username/ and /newpool/users/username/shared/ with mkdir.

Add new users

useradd -d /newpool/users/username/ -c "Fname Lname" -G [comma separated list,]common -s /usr/lib/rsh username

Change owner:group properties to new users directories

Same as above

Apply proper permissions to new directories

Same as above

Set new user password

Same as above

BradChesney79
    As you're the author (and the article is quite small) you could easily post your tutorial here and leave the link for reference which is our preferred method, as it shields us from dead links. – user9517 Jan 10 '12 at 22:40
  • I would be happy to do that, but it seems I am over 5,000 characters too long for these infernal boxes. I'm a tad new to this stuff so please provide a pointer to where I should put it. – BradChesney79 Jan 28 '12 at 04:45
  • That would be great. You can just [Edit](http://serverfault.com/posts/348574/edit) your own answer, the character limit for answers is about 30k. – user9517 Jan 28 '12 at 08:10