About storage for a FOG server on Ubuntu
-
Hi team, I have an old machine with four hard drives. I want to install Ubuntu on it and run a FOG server, and I want to use all four drives for storage. I am installing Ubuntu on one of the drives and trying to create a storage pool for the FOG server out of the other three. Do I need to create a storage pool for the FOG server? In other words, if I don't create a storage pool, will the FOG server still use the other three drives for storage? What I learned from the Internet is to create a RAIDZ ZFS storage pool, but I am not sure whether that will work.
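For reference, the kind of command those guides show boils down to something like this (just my understanding; the device names are guesses for my three spare drives):
```
# Install the ZFS tools on Ubuntu, then build one RAIDZ pool across the three spare drives
sudo apt install zfsutils-linux
sudo zpool create storage-pool raidz /dev/sdb /dev/sdc /dev/sdd
```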
-
If your old computer is a server with a physical RAID card, that is your easiest solution.
If you don't have a hardware RAID card then it gets a little more complicated. You can set up a software RAID using mdadm in Ubuntu. If you don't care about data loss you can set up a RAID-0 configuration to stripe all of the drives. If you do care about data loss then use RAID-1 (2 drives) or RAID-5 (3 drives, giving you the usable capacity of 2 data drives with one drive's worth of parity spread across them). Then mount that RAID volume at /images (before FOG is installed); see the sketch just below.
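A minimal sketch of that mdadm route, assuming the three spare disks show up as /dev/sdb, /dev/sdc and /dev/sdd (adjust the device names to your system, and note this wipes whatever is on those disks):
```
# Build a RAID-5 array from the three spare disks (destroys any existing data on them)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Put a filesystem on the array and mount it where FOG will keep its images
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /images
sudo mount /dev/md0 /images

# Make the array and the mount survive reboots
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
echo '/dev/md0 /images ext4 defaults 0 2' | sudo tee -a /etc/fstab
```
Do all of this before running the FOG installer so the installer writes straight into the mounted array.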
You can also use LVM and add the 3 drives to the LVM pool where the root directory is mounted. Again, this approach doesn't protect against the loss of a single disk. There are a number of ways to go about it, some easier than others.
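And a comparable LVM sketch under the same disk-name assumptions, using a dedicated volume mounted at /images rather than growing the root volume group as described above (the volume group name fogvg is just a placeholder):
```
# Turn the spare disks into physical volumes and group them into one volume group
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd
sudo vgcreate fogvg /dev/sdb /dev/sdc /dev/sdd

# One logical volume spanning the whole group, with an ext4 filesystem on top
sudo lvcreate -l 100%FREE -n images fogvg
sudo mkfs.ext4 /dev/fogvg/images

# Mount it at /images (before the FOG installer runs) and persist the mount
sudo mkdir -p /images
sudo mount /dev/fogvg/images /images
echo '/dev/fogvg/images /images ext4 defaults 0 2' | sudo tee -a /etc/fstab
```
Keep in mind this just concatenates the disks with no redundancy, so losing one disk loses the whole volume.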
-
@weidongyan As mentioned, there are different ways of doing this. I myself would tend to use a proper hardware RAID controller or use LVM. The latter doesn't cost any money, is pretty flexible and, from my point of view, easier to use than software RAID (a.k.a. mdadm in Linux). But that's just what I am used to.
I just want to emphasize what George already mentioned. Make sure you simply mount the storage you create from those disks at the path
/images
on your FOG server and life will be pretty easy from the FOG side. -
@Sebastian-Roth Hi Sebastian, I just created a storage pool and it is mounted at /storage-pool. So now I need to install the FOG server and move the storage pool to /images, is that right? How can I move the storage? I am new to this.
-
@weidongyan What does your
/etc/fstab
look like? (Post it here.) Really, before you install FOG you want to mount your storage pool over the /images directory so the storage pool looks transparent to FOG. The FOG installer script does a lot of things to make sure your FOG image repository is set up correctly; that is why it's important to have the /images directory created and your storage pool mounted on /images before you run the installer.
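For reference, a non-ZFS data volume mounted at /images normally shows up in /etc/fstab with a line roughly like this (the UUID is a placeholder; a ZFS pool is the exception, since it mounts itself through its mountpoint property instead of fstab):
```
# Example fstab entry (placeholder UUID) for a data volume mounted at /images
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /images  ext4  defaults  0  2
```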
-
@george1421
```
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=007d0276-b64f-4252-95bc-ca3867b64211 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
```
-
@george1421 When I created the storage pool, it was automatically mounted at /storage-pool/. How can I change its path to /images? Should I just copy the folder to /images?
-
@george1421 https://drive.google.com/open?id=1vDs7csLf08NkDHtVM_TXx--ujKXipVOc
You can see that in the root there is a folder called storage-pool.
-
@weidongyan Well, from the fstab you posted it doesn't look like your storage pool is going to be mounted at all. Run the following commands and post the output you get here:
lsblk
mount
df -h
-
~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 91M 1 loop /snap/core/6350
loop1 7:1 0 34.6M 1 loop /snap/gtk-common-themes/818
loop2 7:2 0 140.7M 1 loop /snap/gnome-3-26-1604/74
loop3 7:3 0 2.3M 1 loop /snap/gnome-calculator/260
loop4 7:4 0 13M 1 loop /snap/gnome-characters/139
loop5 7:5 0 14.5M 1 loop /snap/gnome-logs/45
loop6 7:6 0 3.7M 1 loop /snap/gnome-system-monitor/57
sda 8:0 0 149G 0 disk
└─sda1 8:1 0 149G 0 part /
sdb 8:16 0 232.9G 0 disk
├─sdb1 8:17 0 232.9G 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 232.8G 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 232.9G 0 disk
├─sdd1 8:49 0 232.8G 0 part
└─sdd9 8:57 0 8M 0 part
~$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=3020640k,nr_inodes=755160,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=610372k,mode=755)
/dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14145)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,relatime)
tmpfs on /run/user/121 type tmpfs (rw,nosuid,nodev,relatime,size=610368k,mode=700,uid=121,gid=125)
/var/lib/snapd/snaps/core_6350.snap on /snap/core/6350 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gtk-common-themes_818.snap on /snap/gtk-common-themes/818 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gnome-3-26-1604_74.snap on /snap/gnome-3-26-1604/74 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gnome-calculator_260.snap on /snap/gnome-calculator/260 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gnome-characters_139.snap on /snap/gnome-characters/139 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/gnome-logs_45.snap on /snap/gnome-logs/45 type squashfs (ro,nodev,relatime,x-gdu.hide)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=610368k,mode=700,uid=1000,gid=1000)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
/var/lib/snapd/snaps/gnome-system-monitor_57.snap on /snap/gnome-system-monitor/57 type squashfs (ro,nodev,relatime,x-gdu.hide)
storage-pool on /storage-pool type zfs (rw,xattr,noacl)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.9G 0 2.9G 0% /dev
tmpfs 597M 1.9M 595M 1% /run
/dev/sda1 146G 7.5G 131G 6% /
tmpfs 3.0G 97M 2.9G 4% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
tmpfs 597M 28K 597M 1% /run/user/121
/dev/loop0 91M 91M 0 100% /snap/core/6350
/dev/loop1 35M 35M 0 100% /snap/gtk-common-themes/818
/dev/loop2 141M 141M 0 100% /snap/gnome-3-26-1604/74
/dev/loop3 2.3M 2.3M 0 100% /snap/gnome-calculator/260
/dev/loop4 13M 13M 0 100% /snap/gnome-characters/139
/dev/loop5 15M 15M 0 100% /snap/gnome-logs/45
tmpfs 597M 72K 596M 1% /run/user/1000
/dev/loop6 3.8M 3.8M 0 100% /snap/gnome-system-monitor/57
storage-pool 450G 0 450G 0% /storage-pool
-
@weidongyan Ok, I see, you decided to use ZFS. You should be able to adjust the mount point of that ZFS storage pool following the instructions here: https://docs.oracle.com/cd/E19253-01/819-5461/gaztn/index.html
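In short, something like this should do it, given that your mount output shows the pool is called storage-pool (ZFS remounts the dataset at the new mountpoint on its own):
```
# Point the pool's top-level dataset at /images and check the result
sudo zfs set mountpoint=/images storage-pool
zfs get mountpoint storage-pool
df -h /images
```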
Also, please run
zpool status
and post the output here. -
~$ zfs get mountpoint storage-pool
NAME          PROPERTY    VALUE    SOURCE
storage-pool  mountpoint  /images  local
~$ zpool status
  pool: storage-pool
 state: ONLINE
  scan: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        storage-pool  ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            sdb       ONLINE       0     0     0
            sdc       ONLINE       0     0     0
            sdd       ONLINE       0     0     0
errors: No known data errors
Did I succeed in mounting the pool?
-
@george1421 Hi George. I just found that the /images directory had already been created and the FOG server has already been installed. Is there any way to fix this so that it uses the storage pool I created? I did not mount the volume before installing the FOG server.
-
@weidongyan So what does
df -h
show at the moment? -
Filesystem Size Used Avail Use% Mounted on
udev 2.9G 0 2.9G 0% /dev
tmpfs 597M 2.0M 595M 1% /run
/dev/sda1 146G 8.5G 130G 7% /
tmpfs 3.0G 19M 2.9G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
/dev/loop8 3.8M 3.8M 0 100% /snap/gnome-system-monitor/57
/dev/loop0 2.3M 2.3M 0 100% /snap/gnome-calculator/260
/dev/loop5 3.8M 3.8M 0 100% /snap/gnome-system-monitor/100
/dev/loop6 35M 35M 0 100% /snap/gtk-common-themes/818
/dev/loop1 91M 91M 0 100% /snap/core/6350
/dev/loop9 4.2M 4.2M 0 100% /snap/gnome-calculator/406
/dev/loop10 15M 15M 0 100% /snap/gnome-characters/296
/dev/loop11 1.0M 1.0M 0 100% /snap/gnome-logs/61
/dev/loop12 13M 13M 0 100% /snap/gnome-characters/139
/dev/loop15 15M 15M 0 100% /snap/gnome-logs/45
/dev/loop13 89M 89M 0 100% /snap/core/7270
/dev/loop14 43M 43M 0 100% /snap/gtk-common-themes/1313
/dev/loop7 141M 141M 0 100% /snap/gnome-3-26-1604/74
/dev/loop2 150M 150M 0 100% /snap/gnome-3-28-1804/67
/dev/loop3 55M 55M 0 100% /snap/core18/1066
/dev/loop4 141M 141M 0 100% /snap/gnome-3-26-1604/90
tmpfs 597M 32K 597M 1% /run/user/121
tmpfs 597M 44K 597M 1% /run/user/1000
storage-pool 450G 0 450G 0% /image
-
@george1421 I realized that I mounted the storage-pool after installing the FOG server, and /images already had some folders in it before I mounted it. Should I uninstall the FOG server and start again?
-
@weidongyan Well I don’t see your storage pool mounted anywhere.
What does lsblk show?
Wait, you changed the post; I see /images mounted now. Rerun the FOG installer with /images mounted. That should fix the /images directory.
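Roughly like this, assuming you installed from a fogproject checkout in your home directory (the path is an assumption; use wherever your copy of the installer actually lives):
```
# Confirm the pool is mounted over /images before rerunning the installer
df -h /images

# Rerun the FOG installer from your original download location
cd ~/fogproject/bin    # assumed path; adjust to your actual installer directory
sudo ./installfog.sh
```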
-
After the FOG installer is complete, run this command
ls -la /images
and post the results here. -
total 16
drwxrwxrwx  4 fogproject root 4096 Aug  1 14:21 .
drwxr-xr-x 30 root       root 4096 Aug  2 09:38 ..
drwxrwxrwx  4 fogproject root 4096 Aug  1 16:37 dev
-rwxrwxrwx  1 fogproject root    0 Aug  1 14:21 .mntcheck
drwxrwxrwx  2 fogproject root 4096 Aug  1 14:21 postdownloadscripts
-
@weidongyan OK, that's good. The FOG images directory is set up correctly. Now, as long as the /images directory is mounted after a reboot, you should be good to go.
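One quick sanity check after the next reboot (standard ZFS and coreutils commands, nothing FOG-specific):
```
# After rebooting, confirm the pool imported and is mounted over /images again
zpool status storage-pool
df -h /images
ls -la /images    # should still show .mntcheck, dev and postdownloadscripts
```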