ZFS compilation
㈠ How to install ZFS on Ubuntu
The following walks you through installing the native ZFS file system on Ubuntu/Linux.
Test environment: Linux 2.6.35-24-generic #42-Ubuntu SMP x86_64 GNU/Linux, Ubuntu 10.10; the steps also apply to Ubuntu 10.04.
Make sure the following packages are installed:
build-essential
gawk
zlib1g-dev
uuid-dev
If they are not installed yet, install them with:
sudo apt-get install build-essential gawk zlib1g-dev uuid-dev
Now prepare to install SPL and ZFS from http://zfsonlinux.org/:
cd /usr/src
Download the latest release:
sudo wget http://github.com/downloads/behlendorf/spl/spl-0.5.2.tar.gz
sudo wget http://github.com/downloads/behlendorf/zfs/zfs-0.5.2.tar.gz
Build SPL (needed when compiling ZFS):
sudo tar -xvzf spl-0.5.2.tar.gz
cd spl-0.5.2/
sudo ./configure
sudo make
sudo make install
Build ZFS:
cd ..
sudo tar -xvzf zfs-0.5.2.tar.gz
cd zfs-0.5.2/
sudo ./configure
sudo make
sudo make install
Check that splat works and that the ZFS module loads:
sudo modprobe splat
sudo splat -a
sudo modprobe zfs
lsmod |grep zfs
OK~~
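As an optional smoke test (not part of the original guide), you can create a throwaway pool backed by a plain file to confirm that the userland tools and the kernel module work together; the file path and size below are arbitrary:
dd if=/dev/zero of=/tmp/zfs-test.img bs=1M count=128
sudo zpool create testpool /tmp/zfs-test.img
sudo zpool status testpool
sudo zpool destroy testpool
rm /tmp/zfs-test.img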
If /usr/local/lib is missing from LD_LIBRARY_PATH, every ZFS command will fail with an error like:
zfs: error while loading shared libraries: libspl.so.0: cannot open shared object file: No such file or directory
You can fix this by extending the environment variable:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
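Alternatively (a general Linux approach, not part of the original instructions), you can register /usr/local/lib with the dynamic linker so the fix survives new shells; the file name zfs.conf is an arbitrary choice:
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/zfs.conf
sudo ldconfig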
Done!
㈡ How to install Oracle ZFS Storage in a VM
1. Download and install Oracle VM VirtualBox.
6. Click Next, and the virtual machine is created.
㈢ What does LFS mean?
LFS, short for Linux From Scratch, is a way of installing Linux by downloading source code directly from the net and compiling everything from the ground up. It is not a distribution, only a recipe: it tells you where to buy the ingredients (download the source) and how to cook that raw code into dishes that suit your own taste, that is, a personalized Linux rather than just a personalized desktop.
Log-structured file system (LFS) is a design that has had a profound influence on modern high-performance file systems; many performance-oriented special-purpose file systems, such as WAFL, Sprite LFS and ZFS, are built on LFS concepts. An ordinary file system can typically use only 10-15% of the disk bandwidth, whereas LFS can raise this to about 80%.
The reason is that modern disks have fast I/O bandwidth, but head seeks are limited by mechanical acceleration and the platter still has to rotate to the required sector, so sequential reads that need little mechanical movement are far faster than random reads that need a lot of it. Studies of UNIX file systems, however, show that roughly 80% of files are small files under 8 KB; small files scattered across the disk cause massive random access and noticeably slow down disk I/O. LFS was designed to solve exactly this problem.
To make full use of the disk I/O bandwidth and reduce random access, LFS defines the segment as its basic unit of disk access. A segment consists of contiguous sectors and is 512 KB in size (1024 sectors). LFS assumes the system has a large enough cache that disk activity is dominated by writes (most reads are served from the cache), so by gathering small files into segment-sized batches and writing them out together it can, in the ideal case, use 100% of the disk bandwidth.
As the name suggests, LFS treats the whole file system as one huge log. The benefit is that problems caused by an unclean shutdown are easy to handle: only the most recently written sectors need to be checked. The corresponding drawback is that enough free space must be kept at the tail of the log to append new or modified files. Once the log fills up, the space held by deleted files has to be reclaimed and moved to the tail for reuse; this operation is called segment cleaning.
Segment cleaning is very heavy work that can consume most of the disk bandwidth and drag down system performance, which makes it a major problem for LFS implementations.
In addition, although LFS clusters small files into segments and thereby greatly increases write performance, reads may still have to fetch small files from many different segments; because a segment is a large I/O unit, this creates an I/O bottleneck, and when the files are not in the cache, read performance drops.
㈣ Proxmox VE -- ZFS on Linux (2019-08-30)
ZFS is a combined file system and logical volume manager designed by Sun Microsystems. Starting with Proxmox VE 3.4, the native Linux kernel port of the ZFS file system is introduced as an optional file system and as an additional choice for the root file system. There is no need to compile ZFS modules manually; all packages are included.
By using ZFS it is possible to achieve maximum enterprise features with a low hardware budget, and also to build high-performance systems by leveraging SSD caching or an all-SSD setup. ZFS can replace costly hardware RAID cards at the price of moderate CPU and memory load, combined with easy management.
General ZFS advantages
ZFS depends heavily on memory, so you need at least 8 GB to start. In practice, use as much as your hardware allows. To prevent data corruption, we recommend using high-quality ECC RAM.
If you use dedicated cache and/or log disks, you should use enterprise-class SSDs (e.g. the Intel SSD DC S3700 Series). This can significantly increase overall performance.
If you are experimenting with an installation of Proxmox VE inside a VM (Nested Virtualization), don’t use <tt>virtio</tt> for disks of that VM, since they are not supported by ZFS. Use IDE or SCSI instead (works also with <tt>virtio</tt> SCSI controller type).
When you install using the Proxmox VE installer, you can choose ZFS for the root file system. You need to select the RAID type at installation time:
| RAID0 | Also called "striping". The capacity of such a volume is the sum of the capacities of all disks. But RAID0 does not add any redundancy, so the failure of a single drive makes the volume unusable. |
| RAID1 | Also called "mirroring". Data is written identically to all disks. This mode requires at least 2 disks of the same size. The resulting capacity is that of a single disk. |
| RAID10 | A combination of RAID0 and RAID1. Requires at least 4 disks. |
| RAIDZ-1 | A variation on RAID-5, single parity. Requires at least 3 disks. |
| RAIDZ-2 | A variation on RAID-5, double parity. Requires at least 4 disks. |
| RAIDZ-3 | A variation on RAID-5, triple parity. Requires at least 5 disks. |
The installer automatically partitions the disks, creates a ZFS pool called <tt>rpool</tt>, and installs the root file system on the ZFS subvolume <tt>rpool/ROOT/pve-1</tt>.
Another subvolume called <tt>rpool/data</tt> is created to store VM images. In order to use that with the Proxmox VE tools, the installer creates the following configuration entry in <tt>/etc/pve/storage.cfg</tt>:
<pre><tt>zfspool: local-zfs
pool rpool/data
sparse
content images,rootdir</tt></pre>
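If you later want to register an additional ZFS dataset as storage, you can append a similar entry to <tt>/etc/pve/storage.cfg</tt>, or let <tt>pvesm</tt> create it. A sketch, assuming a hypothetical dataset <tt>rpool/backup-data</tt> already exists:
<pre><tt># pvesm add zfspool local-zfs-backup --pool rpool/backup-data --content images,rootdir</tt></pre>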
After installation, you can view your ZFS pool status using the <tt>zpool</tt> command:
<pre><tt># zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
errors: No known data errors</tt></pre>
The <tt>zfs</tt> command is used to configure and manage your ZFS file systems. The following command lists all file systems after installation:
<pre><tt># zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 4.94G 7.68T 96K /rpool
rpool/ROOT 702M 7.68T 96K /rpool/ROOT
rpool/ROOT/pve-1 702M 7.68T 702M /
rpool/data 96K 7.68T 96K /rpool/data
rpool/swap 4.25G 7.69T 64K -</tt></pre>
Depending on whether the system is booted in EFI or legacy BIOS mode, the Proxmox VE installer sets up either <tt>grub</tt> or <tt>systemd-boot</tt> as the main bootloader. See the chapter on Proxmox VE host bootloaders for details.
This section gives you some usage examples for common tasks. ZFS itself is really powerful and provides many options. The main commands to manage ZFS are <tt>zfs</tt> and <tt>zpool</tt>. Both commands come with great manual pages, which can be read with:
<pre><tt># man zpool
# man zfs</tt></pre>
To create a new pool, at least one disk is needed. The <tt>ashift</tt> value should match the sector size of the underlying disk (the sector size is 2 to the power of <tt>ashift</tt>) or be larger; for example, <tt>ashift=9</tt> corresponds to 512-byte sectors and <tt>ashift=12</tt> to 4 KiB sectors.
<pre><tt>zpool create -f -o ashift=12 <pool> <device></tt></pre>
To activate compression
<pre><tt>zfs set compression=lz4 <pool></tt></pre>
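To verify that compression is active and see how effective it is, you can query the <tt>compression</tt> and <tt>compressratio</tt> properties (a minimal sketch, using the same pool placeholder as above):
<pre><tt>zfs get compression,compressratio <pool></tt></pre>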
Create a new pool with RAID-0 (minimum 1 disk)
<pre><tt>zpool create -f -o ashift=12 <pool> <device1> <device2></tt></pre>
Create a new pool with RAID-1 (minimum 2 disks)
<pre><tt>zpool create -f -o ashift=12 <pool> mirror <device1> <device2></tt></pre>
Create a new pool with RAID-10 (minimum 4 disks)
<pre><tt>zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4></tt></pre>
Create a new pool with RAIDZ-1 (minimum 3 disks)
<pre><tt>zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3></tt></pre>
Create a new pool with RAIDZ-2 (minimum 4 disks)
<pre><tt>zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4></tt></pre>
Create a new pool with cache (L2ARC): it is possible to use a dedicated cache drive partition to increase performance (use an SSD).
For <tt><device></tt> it is possible to use multiple devices, as shown in "Create a new pool with RAID*".
<pre><tt>zpool create -f -o ashift=12 <pool> <device> cache <cache_device></tt></pre>
Create a new pool with log (ZIL): it is possible to use a dedicated log drive partition to increase performance (use an SSD).
For <tt><device></tt> it is possible to use multiple devices, as shown in "Create a new pool with RAID*".
<pre><tt>zpool create -f -o ashift=12 <pool> <device> log <log_device></tt></pre>
Add cache and log to an existing pool: if you have a pool without cache and log, first partition the SSD into two partitions with <tt>parted</tt> or <tt>gdisk</tt>.
| Always use GPT partition tables. |
The maximum size of a log device should be about half the size of physical memory, so this is usually quite small. The rest of the SSD can be used as cache.
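A possible partitioning sketch with <tt>sgdisk</tt> (the device name <tt>/dev/sdf</tt> and the 16G log size are placeholders; pick roughly half of your physical memory for the log and leave the rest for the cache):
<pre><tt>sgdisk --zap-all /dev/sdf
sgdisk -n1:0:+16G /dev/sdf   # partition 1: log (ZIL)
sgdisk -n2:0:0 /dev/sdf      # partition 2: remainder as cache (L2ARC)</tt></pre>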
<pre><tt>zpool add -f <pool> log <device-part1> cache <device-part2></tt></pre>
Changing a failed device
<pre><tt>zpool replace -f <pool> <old device> <new device></tt></pre>
Changing a failed bootable device when using systemd-boot
<pre><tt>sgdisk <healthy bootable device> -R <new device>
sgdisk -G <new device>
zpool replace -f <pool> <old zfs partition> <new zfs partition>
pve-efiboot-tool format <new disk's ESP>
pve-efiboot-tool init <new disk's ESP></tt></pre>
| <tt>ESP</tt> stands for EFI System Partition, which is set up as partition #2 on bootable disks set up by the Proxmox VE installer since version 5.4. For details, see Setting up a new partition for use as synced ESP. |
ZFS comes with an event daemon, which monitors events generated by the ZFS kernel module. The daemon can also send emails on ZFS events like pool errors. Newer ZFS packages ship the daemon in a separate package, and you can install it using <tt>apt-get</tt>:
<pre><tt># apt-get install zfs-zed</tt></pre>
To activate the daemon it is necessary to edit <tt>/etc/zfs/zed.d/zed.rc</tt> with your favourite editor, and uncomment the <tt>ZED_EMAIL_ADDR</tt> setting:
<pre><tt>ZED_EMAIL_ADDR="root"</tt></pre>
Please note that Proxmox VE forwards mail addressed to <tt>root</tt> to the email address configured for the root user.
| The only setting that is required is <tt>ZED_EMAIL_ADDR</tt>. All other settings are optional. |
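To see ZED react to events, you can for example trigger a scrub (a sketch assuming the <tt>rpool</tt> created by the installer; whether a mail is sent for an uneventful scrub depends on further <tt>zed.rc</tt> settings such as <tt>ZED_NOTIFY_VERBOSE</tt>):
<pre><tt># zpool scrub rpool</tt></pre>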
It is good practice to use at most 50 percent (which is the default) of the system memory for the ZFS ARC, to prevent performance degradation on the host. Use your preferred editor to change the configuration in <tt>/etc/modprobe.d/zfs.conf</tt> and insert:
<pre><tt>options zfs zfs_arc_max=8589934592</tt></pre>
This example setting limits the usage to 8GB.
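As a sketch, the byte value can be computed in the shell, and on a running system the limit can also be applied immediately through the module parameter (assuming the <tt>zfs</tt> module is loaded and exposes <tt>zfs_arc_max</tt> under <tt>/sys/module</tt>):
<pre><tt># 8 GiB in bytes
echo $((8 * 1024 * 1024 * 1024))
# apply at runtime without a reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max</tt></pre>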
| If your root file system is ZFS, you must update your initramfs every time this value changes: |
<pre><tt>update-initramfs -u</tt></pre>
Swap space created on a zvol may cause problems, such as blocking the server or generating a high IO load, often seen when starting a backup to an external storage.
We strongly recommend using enough memory so that you normally do not run into low-memory situations. Should you need or want to add swap, it is preferable to create a partition on a physical disk and use it as a swap device. You can leave some space free for this purpose in the advanced options of the installer. Additionally, you can lower the "swappiness" value. A good value for servers is 10:
<pre><tt>sysctl -w vm.swappiness=10</tt></pre>
To make the swappiness persistent, open <tt>/etc/sysctl.conf</tt> with an editor of your choice and add the following line:
<pre><tt>vm.swappiness = 10</tt></pre>
Table 1. Linux kernel <tt>swappiness</tt> parameter values
| <tt>vm.swappiness = 0</tt> | The kernel will swap only to avoid an out-of-memory condition. |
| <tt>vm.swappiness = 1</tt> | Minimum amount of swapping without disabling it entirely. |
| <tt>vm.swappiness = 10</tt> | This value is sometimes recommended to improve performance when sufficient memory exists in a system. |
| <tt>vm.swappiness = 60</tt> | The default value. |
| <tt>vm.swappiness = 100</tt> | The kernel will swap aggressively. |
ZFS on Linux version 0.8.0 introduced support for native encryption of datasets. After an upgrade from previous ZFS on Linux versions, the encryption feature can be enabled per pool:
<pre><tt># zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  disabled  local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  enabled   local</tt></pre>
| There is currently no support for booting from pools with encrypted datasets using Grub, and only limited support for automatically unlocking encrypted datasets on boot. Older versions of ZFS without encryption support will not be able to decrypt stored data. |
| It is recommended to either unlock storage datasets manually after booting, or to write a custom unit to pass the key material needed for unlocking on boot to <tt>zfs load-key</tt>. |
| Establish and test a backup procedure before enabling encryption of production data. If the associated key material/passphrase/keyfile is lost, accessing the encrypted data is no longer possible. |
Encryption needs to be set up when creating datasets/zvols, and is inherited by child datasets by default. For example, to create an encrypted dataset <tt>tank/encrypted_data</tt> and configure it as storage in Proxmox VE, run the following commands:
<pre><tt># zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:</tt></pre>
All guest volumes/disks created on this storage will be encrypted with the shared key material of the parent dataset.
To actually use the storage, the associated key material needs to be loaded with <tt>zfs load-key</tt>:
<pre><tt># zfs load-key tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':</tt></pre>
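To check whether the key is currently loaded, or to unload it again, the <tt>keystatus</tt> property and <tt>zfs unload-key</tt> can be used (a minimal sketch using the dataset from above):
<pre><tt># zfs get keystatus tank/encrypted_data
# zfs unload-key tank/encrypted_data</tt></pre>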
It is also possible to use a (random) keyfile instead of prompting for a passphrase by setting the <tt>keylocation</tt> and <tt>keyformat</tt> properties, either at creation time or with <tt>zfs change-key</tt> on existing datasets:
<pre><tt># dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1
# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data</tt></pre>
| When using a keyfile, special care needs to be taken to secure the keyfile against unauthorized access or accidental loss. Without the keyfile, it is not possible to access the plaintext data! |
A guest volume created underneath an encrypted dataset will have its <tt>encryptionroot</tt> property set accordingly. The key material only needs to be loaded once per encryptionroot to be available to all encrypted datasets underneath it.
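For illustration (a sketch building on the <tt>tank/encrypted_data</tt> example above, with a hypothetical child dataset name), a child created below an encrypted dataset inherits the encryption and reports the parent as its <tt>encryptionroot</tt>, so loading the key once on the parent makes the child accessible as well:
<pre><tt># zfs create tank/encrypted_data/child
# zfs get encryptionroot,keystatus tank/encrypted_data/child</tt></pre>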
See the <tt>encryptionroot</tt>, <tt>encryption</tt>, <tt>keylocation</tt>, <tt>keyformat</tt> and <tt>keystatus</tt> properties, the <tt>zfs load-key</tt>, <tt>zfs unload-key</tt> and <tt>zfs change-key</tt> commands and the <tt>Encryption</tt> section from <tt>man zfs</tt> for more details and advanced usage.