FAQ - Table of Contents

1.1 What is the licensing concern?
1.2 How do I install it?
1.3 Why doesn't it build?
1.4 How do I mount the file system?
1.5 Why should I use a 64-bit system?
1.6 What kernel versions are supported?
1.7 What /dev/ names should I use when creating my pool?
1.8 How do I change the /dev/ names on an existing pool?
1.9 How do I setup the /etc/zfs/vdev_id.conf file?
1.10 What's going on with performance?
1.11 What does the /etc/zfs/zpool.cache file do?
1.12 How do I setup an NFS or SMB share?
1.13 Can I boot from ZFS?
1.14 How do I automatically mount ZFS file systems during startup?
1.15 How does ZFS on Linux handle Advanced Format disks?
1.16 Do I have to use ECC memory for ZFS?
1.17 Can I use a ZVOL for swap?
1.18 How do I generate the /etc/zfs/zpool.cache file?
2.1 How can I help?
1.1 What is the licensing concern?
ZFS is licensed under the Common Development and Distribution License (CDDL), and the Linux kernel is licensed under the GNU General Public License Version 2 (GPLv2). While both are free open source licenses, they are restrictive licenses. The combination of them causes problems because it prevents using pieces of code exclusively available under one license with pieces of code exclusively available under the other in the same binary. In the case of the kernel, this prevents us from distributing ZFS as part of the kernel binary. However, there is nothing in either license that prevents distributing it in the form of a binary module or in the form of source code. For further reading on this issue see the excellent article regarding non-GPL licensed kernel modules.
1.2 How do I install it?
ZFS on Linux is available for numerous distributions, and the installation process largely depends on the package manager. Several distributions have support for ZFS and documentation on how it can be installed. If your distribution isn't among them, you can build ZFS using the officially released tarballs, as sketched below.
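As a rough illustration, and assuming you have already downloaded the spl and zfs release tarballs and installed your kernel headers, a build from source generally looks like the following. The version numbers are placeholders; build and install spl before zfs.

$ tar -xzf spl-x.y.z.tar.gz && cd spl-x.y.z
$ ./configure && make && sudo make install      # build and install the SPL layer first
$ cd .. && tar -xzf zfs-x.y.z.tar.gz && cd zfs-x.y.z
$ ./configure && make && sudo make install      # then build ZFS against it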
1.3 Why doesn't it build?
Building a kernel module against an arbitrary kernel version is a complicated thing to do. Every Linux distribution has its own idea of how this should be done. It depends on the base kernel version, any distribution specific patches, and exactly how the kernel was configured. If you run into problems, there are a few things to check; one common issue, pointing the build at the wrong kernel sources, is sketched below. If none of these things explains your problem, then please open a new issue which fully describes the problem.
1.4 How do I mount the file system?
A mountable dataset will be created and automatically mounted when you first create the pool with zpool create. Additional datasets can be created with zfs create and they will be automatically mounted.
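As a quick illustration, with a hypothetical pool named tank built from two disks, the root dataset is mounted at /tank and child datasets are mounted beneath it:

$ sudo zpool create tank mirror sda sdb    # root dataset automatically mounted at /tank
$ sudo zfs create tank/home                # child dataset automatically mounted at /tank/home
$ df -h /tank /tank/home                   # confirm both file systems are mounted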
1.5 Why should I use a 64-bit system?
You are strongly encouraged to use a 64-bit kernel. At the moment zfs will build in a 32-bit environment but will not run stably. In the Solaris kernel it is common practice to make heavy use of the virtual address space because it is designed to work well with that usage. However, in the Linux kernel most memory is addressed with a physical address, and use of the virtual address space is strongly discouraged. This is particularly true on 32-bit arches, where the virtual address space is limited to roughly 100MiB by default. Using the virtual address space on 64-bit Linux kernels is also discouraged, but in this case the address space is so much larger than physical memory that it is not as much of an issue. If you are bumping up against the virtual memory limit you will see virtual memory allocation failures in your system logs. You can increase the virtual address space size with the boot option vmalloc=512M; a quick way to check current usage is sketched below.
However, even after making this change your system will likely not be entirely stable. Proper support for 32-bit systems is contingent upon the zfs code being weaned off its dependence on virtual memory. This will take some time to do correctly, but it is planned for the Linux port. This change is also expected to improve how efficiently zfs utilizes the system's memory, and it can be further leveraged to allow tighter integration with the standard Linux VM mechanisms.
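If you suspect you are hitting this limit, the kernel reports the size and current usage of the vmalloc region in /proc/meminfo (the numbers will vary from system to system):

$ grep Vmalloc /proc/meminfo    # VmallocTotal is the size of the region, VmallocUsed the current usage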
1.6 What kernel versions are supported?
The current spl/zfs-0.6.1 release supports Linux 2.6.26 - 3.9 kernels. This covers most of the kernels used in the major Linux distributions, which are regularly tested at LLNL using a buildbot based continuous integration development model. If you need support for a newer kernel you may find it in the latest GitHub sources.
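If you want to try the development sources, at the time of writing they are hosted on GitHub; a typical checkout looks something like this (build and install spl before zfs, as described in the install section above):

$ git clone https://github.com/zfsonlinux/spl.git
$ git clone https://github.com/zfsonlinux/zfs.git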
1.7 What /dev/ names should I use when creating my pool?
There are different /dev/ names that can be used when creating a ZFS pool. Each option has advantages and drawbacks, and the right choice for your ZFS pool really depends on your requirements. For development and testing, /dev/sdX naming is quick and easy. A typical home server might prefer /dev/disk/by-id/ naming for simplicity and readability, while very large configurations with multiple controllers, enclosures, and switches will likely prefer /dev/disk/by-vdev naming for maximum control. But in the end, how you choose to identify your disks is up to you.
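For example, a small mirror could be created with persistent by-id names instead of sdX names. The device identifiers below are hypothetical; substitute the entries that appear under /dev/disk/by-id/ on your own system.

$ ls /dev/disk/by-id/              # list the persistent names for your disks
$ sudo zpool create tank mirror \
    /dev/disk/by-id/ata-EXAMPLE_DISK_A /dev/disk/by-id/ata-EXAMPLE_DISK_B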
1.8 How do I change the /dev/ names on an existing pool?
Changing the /dev/ names on an existing pool can be done by simply exporting the pool and re-importing it with the -d option to specify which new names should be used. For example, to use the custom names in /dev/disk/by-vdev:

$ sudo zpool export tank
$ sudo zpool import -d /dev/disk/by-vdev tank
1.9 How do I setup the /etc/zfs/vdev_id.conf file?
In order to use /dev/disk/by-vdev/ naming, the /etc/zfs/vdev_id.conf file must be configured. The format of this file is described in the vdev_id.conf man page. An example follows.
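The sketch below uses the channel-based syntax from the vdev_id.conf man page to define two channels, A and B, on a single directly attached SAS HBA; the PCI slot address 85:00.0 is a placeholder for whatever lspci reports for your controller.

$ cat /etc/zfs/vdev_id.conf
multipath     no
topology      sas_direct
phys_per_port 4

#       PCI_SLOT HBA PORT  CHANNEL NAME
channel 85:00.0  1         A
channel 85:00.0  0         B

With this configuration, vdev_id assigns each disk a name made up of its channel letter and slot number (A0, A1, ..., B0, B1, ...), which are the names used in the zpool create command below.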
After defining the new disk names, run udevadm trigger to prompt udev to parse the configuration file. This will result in a new /dev/disk/by-vdev directory which is populated with symlinks to the /dev/sdX devices. Using the example configuration above, you could then create the new pool of mirrors with the following command:
$ sudo zpool create tank \
mirror A0 B0 mirror A1 B1 mirror A2 B2 mirror A3 B3 \
mirror A4 B4 mirror A5 B5 mirror A6 B6 mirror A7 B7
$ sudo zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            A0      ONLINE       0     0     0
            B0      ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            A1      ONLINE       0     0     0
            B1      ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            A2      ONLINE       0     0     0
            B2      ONLINE       0     0     0
          mirror-3  ONLINE       0     0     0
            A3      ONLINE       0     0     0
            B3      ONLINE       0     0     0
          mirror-4  ONLINE       0     0     0
            A4      ONLINE       0     0     0
            B4      ONLINE       0     0     0
          mirror-5  ONLINE       0     0     0
            A5      ONLINE       0     0     0
            B5      ONLINE       0     0     0
          mirror-6  ONLINE       0     0     0
            A6      ONLINE       0     0     0
            B6      ONLINE       0     0     0
          mirror-7  ONLINE       0     0     0
            A7      ONLINE       0     0     0
            B7      ONLINE       0     0     0

errors: No known data errors
1.10 What's going on with performance?
To achieve good performance with your pool there are some easy best practices you should follow. Additionally, it should be made clear that the ZFS on Linux implementation has not yet been optimized for performance. As the project matures we can expect performance to improve.
1.11 What does the /etc/zfs/zpool.cache file do?
Whenever a pool is imported on the system it will be added to the /etc/zfs/zpool.cache file. This file stores pool configuration information, such as the vdev device names and the active pool state. If this file exists when the ZFS modules are loaded, then any pool listed in the cache file will be automatically imported. When a pool is not listed in the cache file it will need to be explicitly imported.
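For example, a pool named tank (the name here is hypothetical) that is not in the cache file can be imported explicitly by pointing zpool import at the directory containing its devices:

$ sudo zpool import -d /dev/disk/by-id tank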
1.12 How do I setup an NFS or SMB share?
ZFS has been integrated with the Linux NFS and SMB servers. You can share a ZFS file system by setting the sharenfs or sharesmb file system property. For example, to share the file system tank/home via NFS and SMB with the default options:

$ sudo zfs set sharenfs=on tank/home
$ sudo zfs set sharesmb=on tank/home

Note that you must still manually configure your network to allow NFS or SMB traffic. You will also need to make sure that the NFS and SMB packages for your distribution are installed.
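Once the property is set, you can verify that the NFS export is active on the server and then mount it from a client; the host name server.example.com below is a placeholder for your server.

$ sudo exportfs -v                                        # list the active NFS exports on the server
$ sudo mount -t nfs server.example.com:/tank/home /mnt    # mount the share from a client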
1.13 Can I boot from ZFS?
Yes, numerous people have had success with this. However, because it still requires the latest versions of grub and is distribution specific, we don't recommend it. Instead we suggest keeping /boot on a conventional partition and using ZFS for your root file system. There are excellent walkthroughs available for both Ubuntu and Gentoo.
1.14 How do I automatically mount ZFS file systems during startup?
On most platforms this is handled by the init scripts provided with the ZFS packages, which import the pools listed in /etc/zfs/zpool.cache and mount their file systems at boot. Note that the SELinux policy for ZFS on Linux is not yet implemented. This can lead to issues such as the init script failing to auto-mount the file systems when SELinux is set to enforcing. The long term solution is to add ZFS as a known file system type which supports xattrs to the default SELinux policy. This is something which must be done by the upstream Linux distribution. In the meantime, you can work around this by setting SELinux to permissive or disabled.

$ cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
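Editing /etc/selinux/config only takes effect on the next boot; if you want to switch to permissive mode immediately, you can also do it at runtime:

$ sudo setenforce 0    # put SELinux into permissive mode until the next reboot
$ getenforce           # confirm the current mode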
1.15 How does ZFS on Linux handle Advanced Format disks?
Advanced Format (AF) is a new disk format which natively uses a 4,096 byte sector size instead of a 512 byte sector size. To maintain compatibility with legacy systems, AF disks emulate a sector size of 512 bytes. By default, ZFS will automatically detect the sector size of the drive, but because AF disks report the emulated 512 byte size, ZFS will treat them as 512 byte sector drives. This combination will result in poorly aligned disk access which will greatly degrade the pool performance. Therefore the ability to set the ashift property has been added to the zpool command. This allows users to explicitly assign the sector size at pool creation time. The ashift values range from 9 to 16, with the default value 0 meaning auto-detect the sector size. This value is actually a bit shift value, so the ashift value for 512 bytes is 9 (2^9 = 512) while the ashift value for 4,096 bytes is 12 (2^12 = 4,096). To force the pool to use 4,096 byte sectors we must specify this at pool creation time:

$ sudo zpool create -o ashift=12 tank mirror sda sdb
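If you want to confirm the value that was actually used, the ashift is stored in the pool configuration; for pools listed in /etc/zfs/zpool.cache it can be read back with zdb (the exact output format may vary between releases):

$ sudo zdb | grep ashift    # reads the cached pool configuration and prints the recorded ashift values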
1.16 Do I have to use ECC memory for ZFS?
Using ECC memory for ZFS is strongly recommended for enterprise environments where the strongest data integrity guarantees are required. Without ECC memory, rare random bit flips caused by cosmic rays or by faulty memory can go undetected. If this were to occur, ZFS (or any other file system) would write the damaged data to disk and be unable to automatically detect the corruption. Unfortunately, ECC memory is not always supported by consumer grade hardware, and even when it is, ECC memory will be more expensive. For home users the additional safety brought by ECC memory might not justify the cost. It's up to you to determine what level of protection your data requires.
1.17 Can I use a ZVOL for swap?
Yes. Just make sure you set the ZVOL block size to match your system's page size; for x86_64 systems that is 4 KiB. This tuning prevents ZFS from having to perform read-modify-write operations on a larger block while the system is already low on memory.

$ sudo zfs create tank/swap -V 2G -b 4K
$ sudo mkswap -f /dev/tank/swap
$ sudo swapon /dev/tank/swap
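If you are unsure of your page size, it can be queried directly; the value is reported in bytes (4096 on typical x86_64 systems):

$ getconf PAGESIZE    # returns the system page size in bytes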
1.18 How do I generate the /etc/zfs/zpool.cache file?
The /etc/zfs/zpool.cache file will be automatically updated when your pool configuration is changed. However, if for some reason it becomes stale you can force the generation of a new /etc/zfs/zpool.cache file by setting the cachefile property on the pool:

$ sudo zpool set cachefile=/etc/zfs/zpool.cache tank
2.1 How can I help?
The most helpful thing you can do is to try ZFS on your Linux system and report any issues. If you like what you see and would like to contribute to the project, please send me an email. There are quite a few open issues on the issue tracker which need attention, or if you have an idea of your own, that is fine too.