Wednesday, May 4, 2016

ReadyNAS - Backup

Backup

Redundancy in the form of RAID 6 and snapshots doesn't reduce the need for good backups. I chose to back up the RN204 to a set of external 4 TB 2.5" USB 3 drives, one for each day. Last night's goes offsite at the end of the day. These drives are all ext4 formatted so that the files retain their permissions. If the NAS goes down, I can plug an external drive into a Windows box and read it with https://www.diskinternals.com/linux-reader/

The only way to get ReadyNAS OS to copy files across and also delete files on the backup drive that have been deleted on the source drive is to use rsync. The other backup methods offered do not delete files on the backup drive; they only add new files and modify changed ones. That is useless for a backup in my opinion. I want my backup to match the source.

I set up a directory on the USB drive called BACKUP. The backup for each share will go under this directory.

There is a trick to setting up rsync as a backup method. You have to set the left side of the backup dialog to "remote". This really means "use the rsync process on the RN204 itself": the IP of the "remote" is 127.0.0.1, the loopback address.


Once you have the left side set up, hit "Test Connection" and you should get:



The destination of the backup is one of the USB 3 ports. In my case I chose the upper USB 3 port. That is always the destination of my backup, no matter what drive is attached. To select the USB port hit "Browse" on the right side of the dialog and navigate to the USB port you desire using the navigator on the left side of the dialog.


The path to the backup drive folder needs to be filled in. Since I use a BACKUP directory to hold each share's backup mine look like this:


Hit Next and you will be able to schedule the backup. If you have multiple backups scheduled at the same time, they run in succession from the top of the list of scheduled backups to the bottom, so there is no need to guess how long each one takes and schedule around it. Since I want my backups to run at midnight, I set all of them to start at 00:05. With rsync, only the first run is a full backup. I have an email sent upon completion of the backup whether or not there were errors.


Hit Finish.

I need to change one thing in the backup setup. The backup should remove any files on the destination (backup) drive that have been removed from the NAS. Select the backup job, pick Settings, then Advanced. Check "Remove deleted files on target" and click OK.


Running through each tab in the settings for the new backup job you should see this:







rsync is amazingly slow at the initial copy. I had 130 GB of files to copy; a Linux cp moved them in a couple of hours, while the same copy with rsync took almost a day. On the other hand, rsync is incredibly fast at incremental backups since it only has to copy the differences.

The backup setup forces you to schedule a complete backup the first time, so it seems you are doomed to an excessively long first backup given rsync's poor copy speed. But there is a trick if you have a fresh share that has not been populated with data yet. Select and run the rsync backup before you populate the share with any files; since there is nothing to copy, rsync completes in a couple of seconds. Select and disable the backup from the Schedule tab in the backup properties. Now populate your share with files. Next, copy the files to the backup drive, preferably with a Linux cp -R --preserve so that all of the file attributes on the backup drive are identical to the source (though a Windows drag-and-drop copy has also worked for me). Finally, select and re-enable the backup. Since the files are already on the backup drive, rsync only has to deal with the differences and runs quickly.
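The trick works because rsync skips files that already match on the destination. A small sketch with temporary directories standing in for the share and the USB drive (GNU cp and rsync assumed):

```shell
set -e
share=$(mktemp -d); backup=$(mktemp -d)   # stand-ins for the share and the USB drive
echo data > "$share/file.txt"
# Pre-seed the backup with cp, preserving mode, ownership, and timestamps
cp -R --preserve "$share/." "$backup/"
# Itemized rsync now reports zero file transfers: everything already matches
transferred=$(rsync -a -i "$share/" "$backup/" | grep -c '^>f' || true)
echo "files transferred: $transferred"
```

If the attributes had not been preserved (a different mtime, say), rsync would re-examine or re-send those files, which is why cp -R --preserve is preferable to a copy method that resets timestamps.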

A couple of things to note:
SSH into the NAS uses root as the master login name, not admin like the dashboard does. But the passwords are the same.
USB drives are mounted in the /media directory of the NAS.

To populate the set of external drives, I plugged one good backup drive into a USB port and the blank drive into another. I used a Linux "cp -R --preserve /media/<USB source drive> /media/<USB destination drive>" on the RN204 to copy the files from the good drive to the blank one. Then I unmounted both USB drives (Eject from the dashboard) and plugged the destination drive into the USB port designated for the nightly backups. rsync had no clue the drives had been swapped out and dutifully performed its backup.
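As a sanity check that cp -R --preserve really carries attributes across, here is a minimal sketch using temporary directories as stand-ins for the two drives (GNU cp, touch, and stat assumed):

```shell
set -e
srcdir=$(mktemp -d); dstdir=$(mktemp -d)   # stand-ins for the two USB drives
echo report > "$srcdir/report.txt"
chmod 640 "$srcdir/report.txt"
touch -d '2016-05-04 00:00' "$srcdir/report.txt"
cp -R --preserve "$srcdir/." "$dstdir/"
# Mode and modification time on the copy match the original exactly
stat -c '%a %y' "$srcdir/report.txt"
stat -c '%a %y' "$dstdir/report.txt"
```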

Thursday, March 31, 2016

ReadyNAS - Setting up a share


The first task after initial configuration was to create users and groups. To my way of thinking, these are Linux kinds of things. Since we use both Linux and Windows in our environment, and our underlying permission model is already based on Linux, this was straightforward. I set up the users and groups as we have them now, making sure that the UID for each user on the NAS matched the UID in our existing network, and likewise the GIDs for the groups. Root's UID and GID are 0:0 by default on any Linux system, so they match already.
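A quick way to compare numeric IDs between machines is the id command: run it on an existing box and over SSH on the NAS and check the numbers line up. For root, which as noted is 0:0 everywhere:

```shell
# root's UID and GID are 0:0 on any Linux system, so they need no remapping.
# For other accounts, compare `id <user>` output on both machines.
uid=$(id -u root)
gid=$(id -g root)
echo "root is ${uid}:${gid}"   # prints: root is 0:0
```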

On a new NAS, enable NFSv4 from the web interface by picking System -> Services -> NFS -> Enable NFSv4.


From the web interface, picking shares and then new folder will initiate a set of dialogs to setup a new share:

  • I enabled Bit-rot protection. More about it is here. Since this is an RN204 I got a warning that it would impact performance but I don't care for my testing. 
  • I enabled snapshots on an hourly basis since that is what we need in our environment. The ReadyNAS does automatic pruning, keeping snapshots as detailed here:
    48 hours of hourlies
    4 weeks of dailies
    8 weeks of weeklies
  • I enabled SMB for Windows access, NFS for Linux/Unix access, and rsync for backup access (see later post).
To complete the setup I used the gear to access the settings:



I checked "Allow Snapshot Access" so that users can see the snapshots in the share and restore files themselves if needed. Otherwise the snapshots are hidden and only accessible from the NAS interface. 

SMB permissions


Setting up permissions is next. By default the share is wide open, which is not good for our environment. Starting with the SMB protocol, uncheck "Allow anonymous access" and uncheck read/write for Everyone. I enabled read/write for each group as appropriate for our environment. We don't control access on a per-user basis but use group controls, so I left permissions for each user unchecked. Note that the admin user and the cloud username are read/write by default and I don't know how to change that.


I did not do anything with HOSTS, DFS, and ADVANCED. 

With this setup I can map a drive from a Windows 7 PC. I can create a folder and file within the share. Using ssh (root user) to log into the RN204, I see these permissions from ls -al:


My username and default group were on the file.docx and folder. Note that the share lives in the /<volume name> directory. I'm not wild about the wide-open permissions on the share and the objects created in it.

I'll skip the NFS and rsync setups for now. Moving to the tab labeled "File Access", I can see why guest was the owner and group for the share. I also see why the Linux world permissions are rwx: Everyone is set to read/write.


  • I changed the folder owner to root and group to our default group. 
  • The "Grant rename ..." checkbox controls the Linux sticky bit permission; unchecked results in a t in the permissions. I left it checked.
  • I changed Everyone to read only. 
  • I disabled guest group access (uncheck both read only and read/write).
  • I disabled guest user access (uncheck both read only and read/write).
  • I enabled read/write access for our default group. 
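The sticky bit behind the "Grant rename ..." checkbox above is easy to see from a shell; a minimal sketch on a throwaway directory:

```shell
set -e
d=$(mktemp -d)
chmod 1777 "$d"   # world-writable with the sticky bit set, like /tmp
ls -ld "$d"       # the permission string ends in 't' instead of 'x'
```

With the sticky bit set, only a file's owner (or root) can rename or delete it, even in a world-writable directory.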



The result from ls -al after creating a folder and file with the new setup:

 

I can see the permissions for the share, new file, and new folder follow the dialog's "folder owner" and "folder group". World permissions follow the "Everyone" setting in the dialog. The NAS is using ACLs too, denoted by the + in the permissions, so let's see what those are.


The whited-out entries are our standard group. This seems reasonable. I'm not sure if the admin user and group are needed for proper operation of some service on the NAS. I'll have to test that.

For the new folder:


The whited-out entries are our standard group and my username. Seems reasonable.

For the new file:


The whited-out entries are our standard group and my username. Seems reasonable.

The dialogs for the share setup in ReadyNAS assume that all files and folders under the share will have the same permissions. Our permission schemes are more complicated than that: we use different permissions on folders within a single share. I'm assuming I will have to set those up using SSH into the NAS or from a Linux/Unix mount of the share.
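That kind of per-folder scheme is ordinary chgrp/chmod work once you have a shell or an NFS mount. A hypothetical sketch (folder names and modes are illustrative only, with the current user's group standing in for a real group):

```shell
set -e
share=$(mktemp -d)                          # stand-in for the share's root on the NAS
mkdir "$share/engineering" "$share/accounting"
grp=$(id -gn)                               # stand-in for a real group name
chgrp "$grp" "$share/engineering" "$share/accounting"
chmod 2770 "$share/engineering"             # group read/write; setgid so new files inherit the group
chmod 2750 "$share/accounting"              # group read-only
ls -ld "$share/engineering" "$share/accounting"
```

The setgid bit (the leading 2) keeps new files and subfolders owned by the folder's group, which matters when several users write into the same tree.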

NFS Permissions


The default permissions for NFS access to the share are set as:


I checked read/write for AnyHost.

Information about mounting a share on a Linux/Unix box can be found at Netgear or on many pages on the web, but I like this one.

Testing an NFS mount of a NAS share from CentOS


For my testing I mounted from a CentOS machine at /net/rn204 as root (via su):
cd /net/rn204
mkdir test2
chown root:XXXX test2 (where XXXX is our standard group)
mount -t nfs 172.21.1.21:/RN204/test2 test2

In the dialog box above I had to set AnyHost to allow read/write in order to get it mounted.
As root, I could cd into the mounted share, but I could not create a file or folder in it; I got permission denied. That makes sense: with root access not enabled for the CentOS machine's IP, NFS squashes root to an unprivileged user.

I could create a folder and file when I was my own user.

I added the IP of my CentOS machine to the list of hosts in the NFS setup on the NAS and set allow root access. Now as soon as I tried to cd into the share or ls it, my CentOS machine hung at the command line, whether logged in as me or as root.

I disabled root access but left the IP of the CentOS machine in the list of hosts. Again, I could not cd into or ls the root of the share without hanging the CentOS shell.

It turns out I uncovered a bug. If snapshots are enabled and allow snapshot access is turned on, then any operation in the root of the share will hang a shell on a Linux machine that has the share mounted. Operations in a subdirectory of the share's root work fine. Turning off allow snapshot access is a workaround for this problem. See this post in the NetGear community; NetGear confirmed this is a bug in 6.4.2.

Testing an NFS mount of a NAS share from Ubuntu


I decided to install Ubuntu on a USB memory stick. See http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-windows and make sure the drive is formatted as FAT32 by checking the format box in the dialog shown in step 3.

Once Ubuntu booted, I added my user, changed my UID and GID to match our network and the NAS, and cd'd to /mnt. Using sudo, I created an rn204 directory and changed it to 770 root:<default group>. sudo su made me the root user. I created a subdirectory, rn204/temp, and tried to mount the NAS share: mount -t nfs <ip of nas>:/RN204/temp temp. I got an error message; the fix is to install nfs-common. The problem and fix are here. Once that was done I was able to mount the share.

I switched to my own login with su <username>. I was able to list the share and create a file on it. I became root again (sudo su). I could not cd into temp or list it: permission denied. I then added the IP of the Ubuntu machine to the list of hosts and checked allow root access. I could cd into temp with no problem, but an ls -al never returned. This is the same bug described for CentOS, so it is not a problem due to different Linux flavors. I also tested with NFSv4, and the same bug showed up.


Summary of setting up a share


Let's assume that UIDs and GIDs are set up as described at the beginning of this post.

From the web interface, picking shares and then new folder will initiate a set of dialogs to setup a new share:

  • I enabled Bit-rot protection. More about it is here. Since this is an RN204 I got a warning that it would impact performance but I don't care for my testing. 
  • I enabled snapshots on an hourly basis since that is what we need in our environment. The ReadyNAS does automatic pruning, keeping snapshots as detailed here:
    48 hours of hourlies
    4 weeks of dailies
    8 weeks of weeklies
  • I enabled SMB for Windows access, NFS for Linux/Unix access, and rsync for backup access (see later post).
Hit Create. 

Once the share is created, pick the Shares tab on the web interface, select the share just created, right-click (RMB) on the gear, and pick Settings.

DO NOT ENABLE "ALLOW SNAPSHOT ACCESS"  at this time. This will allow you to operate in the root of the new share without encountering the bug described above. 

Set up the SMB permissions for this share: pick the "Network Access" tab, then "SMB". Set the permissions you want for the share; typically I disable all access for Everyone. You are effectively setting group permissions in this dialog. Once set, click Apply.


Set up the folder owner and group for this share: pick the "File Access" tab. Select the folder owner and folder group from the drop-downs. Set up permissions to allow them access and disable everyone else except Admin. Once set, click Apply.


Set up the NFS permissions for this share: pick the "Network Access" tab, then NFS. I disable AnyHost and add specific IP addresses for all of our Linux/Unix machines, allowing Read/Write and Root Access for each. Once set, click Apply.

Now enable async for NFS. Pick Advanced and check "Enable async".



Hit OK.

Since I use rsync to back up the share and rsync is much slower than cp, I first run an rsync backup with no files in the share, then copy the files onto the backup drive using cp. See the backup post for more details.

Copy files and directories into the share. I prefer to use NFS to do this since I can preserve date, time and permissions when I do it. From a Linux machine with the share mounted:
cd <root_of_share>
cp -R --preserve <path_to_source> ./

Once this has completed double check the permissions from the Linux machine or from SSH on the NetGear NAS. Set them if you don't have what you want.

Now go back and check "Allow Snapshot Access" if you want users to be able to get at the snapshots for this share.



You can now use the share.

Friday, March 25, 2016

ReadyNAS - Overview

My search for a storage system replacement

Background

I currently use a NetApp Storevault S500 in an SMB environment. That NAS has been great for us. It runs a version of NetApp's Data ONTAP. Like all NetApp filers, snapshots are instantaneous. We run nightly backups off a snapshot to get a consistent dataset. We also run hourly snapshots extending back 24 hours, nightly snapshots extending back 2 nights and weekly snapshots extending back 2 weeks. Someone in our office is pulling a file or directory from those snapshots nearly every day. They have been life savers.

We run the Storevault in what NetApp calls RAID-DP, which is similar to RAID 6. That allows for 2 drives to fail before we lose data.

We don't use all that much storage compared to most businesses, about 600 GB, so our needs are modest. The problem is that the Storevault has been discontinued for years. We have no hardware support, so if there is a hardware failure, that is all she wrote for the Storevault. We used to have SDLT tape backups in a grandfather-father-son scheme using Arkeia as the software. Arkeia is being discontinued by Western Digital, the SDLT tapes were pretty expensive, and their life span was limited too. Recovery of all that data from tape would have been a long process. So we switched to a round robin of Western Digital My Book external drives with Ethernet ports. Plug one in, and at midnight a GoodSync job fires up on one of our workstations and updates the My Book from the nightly snapshot on the Storevault. Shut the My Book down the next morning, plug a new one in, and take last night's drive offsite. The advantage of this setup is that if the Storevault ever went down, the My Book could be put online and we were back up and running instantly. Maybe not fast when serving our users, but at least we could limp along until a real NAS was put into place.

Given the risk of the Storevault failing, and our storage need slowly growing towards exceeding its capacity, I decided to begin a search for a replacement.

NetApp


Since we love the feature set of the NetApp OS, NetApp was a natural to look at. I got a quote for a FAS2520 with 6x2TB SATA drives and 3 years of support for $8,950. That would give us 4.5 TB of usable storage when configured to run RAID-DP. Not bad. After seeing comments on the web that they would nail us for support in years 4 and 5, I got a re-quote for 5 years of support at $9,850. That seemed pretty reasonable for what we wanted, but before pulling the trigger I decided to look around.

Alternatives


Since instant snapshots were a big driver for us, I started with that. I quickly came across a file system that came out several years ago called Btrfs. It was being developed for Linux and featured the same pointer-based technique NetApp uses for snapshots. That sounded interesting, but I am not about to roll my own storage solution; been there many years ago, and it was a royal pain that sucked up a lot of time in maintenance. So what products use this file system? It turns out Netgear rolled out a line of NAS boxes using Btrfs in its ReadyNAS systems in 2013. Synology was just starting to use it in 2015 in a subset of its NAS line. I decided to look at ReadyNAS since it had been around for a while. The reviews, mostly targeted at home users, looked good. Our environment is SMB, so I was a little leery, but the price is attractive: an SMB-oriented ReadyNAS 516 6-drive-bay system was just under $1,200 without disks at Amazon.

ReadyNAS


When you buy a unit from the top-tier storage vendors, you have to use their disks. The drives have secret sauce on them, and without the sauce a drive does not work in the unit. NetApp charges a huge amount for their drives; I seem to remember $1,100 for a 2 TB drive, 10x more than a consumer-grade drive. Netgear, on the other hand, allows you to populate the unit with commercially available drives, though if you want support you have to use a drive from their approved list. I have had good success with Western Digital, so I found a 3 TB drive in their Red line, designed specifically for NAS applications, for $106 from Amazon. In total, a 516 with 6x3TB drives would run about $1,825 including 5 years of hardware support; adding 5 years of software support brings the total to about $2,500. Using Netgear's RAID calculator, 6x3TB disks in a RAID 6 configuration gives us 10.9 TB of usable storage, over 10 times what we need now. The difference between the NetApp quote and a 516 would be $7,350. With that savings we could get a second unit, replicate the data to an offsite location, and still have money left over.
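The 10.9 figure follows from RAID 6 giving up two drives' worth of capacity to parity; the number matches exactly if the calculator is reporting binary units (TiB), which seems likely:

```shell
# 6 x 3 TB drives in RAID 6: two drives hold parity, leaving 4 x 3 TB = 12 TB raw.
# Expressed in binary units, 12e12 bytes / 1024^4 is the ~10.9 the calculator reports.
awk 'BEGIN { n = 6; size_bytes = 3e12;
             printf "%.1f TiB usable\n", (n - 2) * size_bytes / 1024^4 }'
```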

But would a ReadyNAS do what we needed it to do? In my experience the only way to be sure is to try it, so I decided to get an RN204 4-bay unit targeted at home users and try it out. The OS is the same, but performance in an SMB environment would be less than ideal due to the lower-end processor and memory in the unit. The unit populated with 4 WD Red 3 TB drives in a RAID 6 configuration would set us back $725. A cheap test: if it did not work out I could eBay it or find another use for it.

ReadyNAS RN204 testing

Setup


Install and setup were easy. I was a little concerned that configuration of the unit happens through a website that Netgear hosts. I'll have to look into that later, since login info was set via the website; I'm concerned that if the website got hacked, our data would be exposed. The website got a C grade on security here.

There was some monkeying around with setting up RAID 6 in what Netgear calls a "traditional" configuration. In that configuration you cannot expand the volume, but you can have a hot spare drive, which we want; you can't have a hot spare with their X-RAID configuration. I don't recall exactly what the monkeying around entailed. It had to do with the fact that the unit wanted to initialize in RAID 5 and I wanted RAID 6. I found on the web that migrating from RAID 5 to 6 took someone 2 weeks, so I flushed the drives and started from scratch with RAID 6. The initialization of the array took 27 hours. Be patient, I guess.

Further posts will deal with specific testing I did.