Redgwell, I presume?

The life and times of a kiwi geek


Mirroring my Solaris OS partition


My file server is all up and running with 3 TB of redundant storage space. But it’s still vulnerable to hard-drive failure.

If the OS drive goes down, then it’s going to be difficult (impossible?) to get things back up and running. So I decided to mirror the OS drive.

The OS drive is 320 GB, and I have two spare 500 GB drives lying around just waiting to serve. So what I need to do is simple (in theory): attach one of the 500 GB drives to the root pool, detach the 320 GB drive, and finally attach the other 500 GB drive.
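Sketched as commands, the plan looks like this (the device names are the ones on my system; yours will differ):

```shell
# Step 1: mirror the OS onto the first 500 GB drive
zpool attach -f rpool c8d0s0 c0t0d0s0
# ...wait for the resilver to finish and make the new drive bootable...

# Step 2: drop the old 320 GB drive from the mirror
zpool detach rpool c8d0s0

# Step 3: add the second 500 GB drive to complete the new mirror
zpool attach -f rpool c0t0d0s0 c0t1d0s0
```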

In practice, I ran across a few issues. Both of the 500 GB drives had NTFS partitions, and so my first problem occurred with the command:

zpool attach -f rpool c8d0s0 c0t0d0s0

cannot open '/dev/dsk/c0t0d0s0': I/O error

So it seemed there was no slice 0 on the new drive to attach to the root pool.

A bit of mucking about with format:

format -e c0t0d0
selecting c0t0d0
[disk formatted]
.
.
.
fdisk -> create partition, Solaris2:

                                      Cylinders
  Partition   Status    Type          Start   End     Length    %
  =========   ======    ============  =====   =====   ======   ===
      1       Active    Solaris2          1   60800    60800   100

Now give it a label:

format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0

Ready to label disk, continue? y

Should be good to go!

zpool attach -f rpool c8d0s0 c0t0d0s0
cannot open '/dev/dsk/c0t0d0s0': I/O error

Damn! It still couldn’t find slice 0.

A look at the partition table revealed why:

partition> print
Current partition table (default):
Total disk cylinders available: 60798 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0  unassigned  wm       0                0              (0/0/0)             0
  1  unassigned  wm       0                0              (0/0/0)             0
  2  backup      wu       0 - 60797      465.74GB         (60798/0/0) 976719870
  3  unassigned  wm       0                0              (0/0/0)             0
  4  unassigned  wm       0                0              (0/0/0)             0
  5  unassigned  wm       0                0              (0/0/0)             0
  6  unassigned  wm       0                0              (0/0/0)             0
  7  unassigned  wm       0                0              (0/0/0)             0
  8  boot        wu       0 -     0        7.84MB         (1/0/0)         16065
  9  unassigned  wm       0                0              (0/0/0)             0

It looks like there is no space at slice 0 to add to the root pool.
After a quick Google search, I found a post suggesting that I make my partition table look a bit more like this:

Part      Tag    Flag     Cylinders        Size            Blocks
  0  root        wm       1 - 60797      465.73GB         (60797/0/0) 976703805
  1  unassigned  wm       0                0              (0/0/0)             0
  2  backup      wm       0 - 60797      465.74GB         (60798/0/0) 976719870

Right – now let’s give it a go:

zpool attach -f rpool c8d0s0 c0t0d0s0
...(after some loading)...
Please be sure to invoke installgrub(1M) to make 'c0t0d0s0' bootable.

Great – that sounds a bit better. Let’s check on the resilvering:

zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 5.93% done, 0h4m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c8d0s0    ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0  745M resilvered

errors: No known data errors

Fantastic. After waiting for the resilvering process to complete, all that was required was a quick command to write the bootloader:

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 271 sectors starting at 50 (abs 16115)

I then repeated this process for the second disk. Once the second disk was ready to go into the root pool, I removed the 320 GB drive:

zpool detach rpool c8d0s0
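Attaching the second drive followed the same pattern as the first (a sketch, assuming the same fdisk/label preparation has already been done on it):

```shell
# Attach the second 500 GB drive to what is now a single-disk rpool
zpool attach -f rpool c0t0d0s0 c0t1d0s0

# Once resilvering completes, write the bootloader to it as well
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```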

and after adding the second 500 GB drive, I ended up with:

zpool status
  pool: rpool
 state: ONLINE
 scrub: resilver completed after 0h4m with 0 errors on Sat Jan  2 20:07:57 2010
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0
            c0t1d0s0  ONLINE       0     0     0  12.3G resilvered

errors: No known data errors

Now – all I need to do is reboot to see if this was all a terrible mistake…


Written by redgwell

January 2, 2010 at 7:30 pm

Posted in File Server

Building a home file server using Solaris/ZFS


I have a file server at home that’s been long due for replacement.

It’s currently running Windows XP, and has a 320 GB drive, two 500 GB drives, and a 1 TB drive. If any drive fails, I stand to lose a bunch of stuff that I don’t really want to lose – so today I decided to buy three 1.5 TB drives and install OpenSolaris.

Installation

I’ll be installing the Solaris OS itself on the 320 GB drive, and using the three 1.5 TB drives for a raidz1 redundant file system. If any one of those three drives fails, my data will remain intact. Ideally, I’d like to mirror the OS drive too so that it isn’t a single point of failure, but for now I’ll just back up the OS to the redundant drives.

The first thing I needed to do was install Solaris. This was pretty simple: burn the image to disc, reboot the server, and configure the BIOS to boot from the CD-ROM drive. Following the on-screen instructions was simple enough. I did, however, notice a 30-60 second pause at a login prompt while the window system (GNOME) loaded. While I scrambled around looking for the root password, the live CD eventually sprang into life on its own and I ended up at the Solaris desktop.

Once there, simply double clicking on the Install OpenSolaris icon set the installation wizard off, and within minutes, the installation was proceeding. After 15 minutes, the wizard was complete, and I rebooted.

The next step was to set up the redundant file storage. I did this using a number of online resources – the most useful being this blog. After following the basic instructions on that page, I had 3.0 TB of redundant storage to start copying my old data to. While doing this, I found it really easy to create a new group for my non-root user, and to create a ‘home’ folder for this user. However, I got stuck on creating SMB shares.
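For reference, the basic setup boils down to something like this (a sketch; the pool name tank, the device names, and the user/group names are assumptions for illustration):

```shell
# Single-parity raidz pool across the three 1.5 TB drives;
# any one drive can fail without losing data
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0

# Datasets for user home directories
zfs create tank/home
zfs create tank/home/rob

# A group and a non-root user to own the home folder (placeholder names)
groupadd media
useradd -g media -d /tank/home/rob -s /bin/bash rob
chown -R rob:media /tank/home/rob
```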

SMB Shares

The instructions I followed said to just enter:

zfs set sharesmb=on tank/home

For me, this yielded an error:

cannot share 'tank/home': smb add share failed

After a bit of research, I found that I needed to install some SMB related services, and recreate my file system:

pkg install SUNWsmbs

pkg install SUNWsmbskr

reboot

svcadm enable -r smb/server

zfs create -o casesensitivity=mixed -o nbmand=on tank/home

zfs set sharesmb=on tank/home

echo 'other password required pam_smb_passwd.so.1 nowarn' >> /etc/pam.conf

After doing that, I needed to reset my user’s password:

passwd Rob

DNS Issues

I have my server on a static IP, so I changed the network interface settings from DHCP to static IP.

After doing this, I found that DNS was no longer working from Firefox. I could still ping IP addresses, and successful results from “nslookup” confirmed that DNS itself was still resolving. The solution to this problem was to update the ‘/etc/nsswitch.conf’ file:

cp /etc/nsswitch.dns /etc/nsswitch.conf

Copying from NTFS drives

I didn’t want to copy my existing data from the old drives over the network. Instead, I opted to plug the drives into the spare SATA ports on the server and copy the data directly. This of course meant that I had to mount NTFS partitions.

I found this blog to be quite useful, although I noticed that he didn’t mention how to get the name of the drive that you need to mount.

To do this, run the “format” command and, as soon as it prints its list of disks, hit Ctrl+C to stop it going any further. You should be able to identify each drive’s device name from that output. Once you have the device name, run the “prtpart” command, which lists the paths to all of your partitions. From there, you can match the device name from the first command to the path of the partition you want to mount.
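Put together, finding and mounting an old drive looked roughly like this (a sketch; prtpart comes from the FSWpart package, the NTFS mount support from FSWfsmisc, and the device paths and destination folder are placeholders):

```shell
# Identify the disk: run format, note the device name, then Ctrl+C out
format

# List the partitions on that disk
prtpart /dev/rdsk/c2d0p0

# Mount the NTFS partition and copy the data onto the ZFS pool
mkdir -p /mnt/ntfs
mount -F ntfs /dev/dsk/c2d0p1 /mnt/ntfs
cp -r /mnt/ntfs/* /tank/home/rob/
```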

Final thoughts

It wasn’t too bad getting all of this going. It’s certainly not as easy as a Windows 7 install, but then it’s a different kind of user that is doing this sort of thing. The entire process was very reminiscent of Linux installations I’ve done in the past – for the most part, things work out of the box. But there are always a handful of hiccups along the way to keep things interesting.

Written by redgwell

December 29, 2009 at 7:17 pm

Posted in File Server