@Jim-Holcomb I still don’t fully understand. The database is just a point to store and pull the data from. Nothing more.
If you manually rename the folder in Linux, would you expect the database to automatically update when it now has no reference point? It’s the same reasoning in reverse. Why allow the folder to be renamed in the terminal if it won’t update the database that may be referencing it?
The folder doesn’t get created automatically when you define a new image. That only happens after the image is defined and captured. The pieces handling it all run after the database information is pulled. The GUI is not creating the folder. FTP is.
If you manually change the folder name in the terminal, you will still need to change the folder in the GUI, and the same goes for updating the path in the GUI. You could temporarily store the change, but what happens if you change the path before the image has even been captured?
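To illustrate the point: renaming on disk and updating the database are two separate operations that have to be kept in sync by hand. A minimal sketch of what that looks like (the `fog` database name, `images` table, and `imagePath` column are assumptions for illustration, not verified schema):
# Hypothetical example: rename the image folder on disk...
mv /images/winX /images/win10-2021
# ...then point the database entry at the new name by hand
# (table/column names assumed for illustration)
mysql -u root fog -e "UPDATE images SET imagePath = 'win10-2021' WHERE imagePath = 'winX';"
Neither step triggers the other; that is exactly why a rename in the terminal leaves the GUI pointing at a folder that no longer exists.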
I’ve seen the OP’s issue for several years on ~20 fog servers on various hardware platforms (virtual and bare metal), using both resized and non-resized images. I can confirm that this was still an issue on 1.5.8 but today I upgraded to 1.5.9 and it seems to be resolved. Old images still show the incorrect size, but recapturing them updates the image size on client to the correct value, which is approximately the minimum required hard drive capacity on the client when deploying the image.
Testing systems: Dell OptiPlex 7010 for both the FOG server and the client computer. Both systems have local SATA SSDs. The target computer is running a customized Linux kernel 5.6.18 and a customized init, but both are based on FOG 1.5.9. The customization was done to aid in debugging and benchmarking the systems.
# Mount the local disk and the FOG server's NFS image share
mount /dev/sda1 /mnt/locdsk
mount -o nolock,proto=tcp,rsize=32768,wsize=32768,intr,noatime "192.168.10.1:/images/dev" /images
#Test 1 creation of local and remote file by target computer
time dd if=/dev/zero of=/mnt/locdsk/L10gb.img count=1024 bs=10485760
time dd if=/dev/zero of=/images/R10gb.img count=1024 bs=10485760
#Test 2 cp files to and from server
time cp /mnt/locdsk/L10gb.img /images
time cp /mnt/locdsk/L10gb.img /images/L10gb-1.img
time cp /images/R10gb.img /mnt/locdsk
time cp /images/R10gb.img /mnt/locdsk/R10gb-1.img
#Test 3 scp files to and from server
time scp /mnt/locdsk/L10gb.img firstname.lastname@example.org:/images/L10gb-2.img
time scp /mnt/locdsk/L10gb.img email@example.com:/images/L10gb-3.img
time scp firstname.lastname@example.org:/images/dev/R10gb.img /mnt/locdsk/R10gb-2.img
time scp email@example.com:/images/dev/R10gb.img /mnt/locdsk/R10gb-3.img
#Test 4 ssh pipeline to and from server
time cat /mnt/locdsk/L10gb.img | ssh firstname.lastname@example.org "cat > /images/L10gb-4.img"
time cat /mnt/locdsk/L10gb.img | ssh email@example.com "cat > /images/L10gb-5.img"
time ssh firstname.lastname@example.org "cat /images/dev/R10gb.img" | cat > /mnt/locdsk/L10gb-6.img
time ssh email@example.com "cat /images/dev/R10gb.img" | cat > /mnt/locdsk/L10gb-7.img
Testing results as captured.
## Building the test files both local and remote
# time dd if=/dev/zero of=/mnt/locdsk/L10gb.img count=1024 bs=10485760
10737418240 bytes (11 GB, 10 GiB) copied, 20.2216 s, 531 MB/s
real 0m20.223s  user 0m0.001s  sys 0m6.460s
# time dd if=/dev/zero of=/images/R10gb.img count=1024 bs=10485760
10737418240 bytes (11 GB, 10 GiB) copied, 93.3867 s, 115 MB/s
real 1m33.390s  user 0m0.003s  sys 0m5.369s
## Confirm that files exist and are properly sized
# ls -la /mnt/locdsk/
drwxr-xr-x 3 root root 4096 Oct 9 08:25 .
drwxr-xr-x 3 root root 1024 Oct 9 08:23 ..
-rw-r--r-- 1 root root 10737418240 Oct 9 08:26 L10gb.img
drwx------ 2 root root 16384 Jan 10 2013 lost+found
# ls -la /images/
drwxrwxrwx 3 sshd root 63 Oct 9 2020 .
drwxr-xr-x 19 root root 1024 Oct 9 08:23 ..
-rwxrwxrwx 1 sshd root 0 Sep 28 13:36 .mntcheck
-rw-r--r-- 1 root root 10737418240 Oct 9 2020 R10gb.img
drwxrwxrwx 2 sshd root 26 Sep 28 13:36 postinitscripts
### Copy Local to Remote ###
# time cp /mnt/locdsk/L10gb.img /images
real 1m34.821s  user 0m0.083s  sys 0m7.314s
# time cp /mnt/locdsk/L10gb.img /images/L10gb-1.img
real 1m34.759s  user 0m0.046s  sys 0m6.801s
### Copy Remote to Local ###
# time cp /images/R10gb.img /mnt/locdsk
real 1m41.710s  user 0m0.084s  sys 0m11.327s
# time cp /images/R10gb.img /mnt/locdsk/R10gb-1.img
real 1m41.520s  user 0m0.095s  sys 0m11.392s
### SCP Local to Remote ###
# time scp /mnt/locdsk/L10gb.img firstname.lastname@example.org:/images/L10gb-2.img
The authenticity of host '192.168.10.1 (192.168.10.1)' can't be established.
ECDSA key fingerprint is SHA256:OpIsFYWVDCr/ovMlmPPSl46jpT332P3+BHnchdxzTCI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.10.1' (ECDSA) to the list of known hosts.
L10gb.img 100% 10GB 110.0MB/s 01:33
real 1m40.007s  user 0m44.460s  sys 0m13.378s
# time scp /mnt/locdsk/L10gb.img email@example.com:/images/L10gb-3.img
L10gb.img 100% 10GB 109.5MB/s 01:33
real 1m37.404s  user 0m44.420s  sys 0m13.068s
### SCP Remote to Local ###
# time scp firstname.lastname@example.org:/images/dev/R10gb.img /mnt/locdsk/R10gb-2.img
R10gb.img 100% 10GB 101.9MB/s 01:40
real 1m44.166s  user 0m43.986s  sys 0m22.887s
# time scp email@example.com:/images/dev/R10gb.img /mnt/locdsk/R10gb-3.img
R10gb.img 100% 10GB 102.0MB/s 01:40
real 1m44.620s  user 0m43.437s  sys 0m23.061s
### SSH Pipeline Local to Remote ###
# time cat /mnt/locdsk/L10gb.img | ssh firstname.lastname@example.org "cat > /images/L10gb-4.img"
real 1m35.562s  user 0m42.701s  sys 0m12.975s
# time cat /mnt/locdsk/L10gb.img | ssh email@example.com "cat > /images/L10gb-5.img"
real 1m35.749s  user 0m43.478s  sys 0m11.166s
### SSH Pipeline Remote to Local ###
# time ssh firstname.lastname@example.org "cat /images/dev/R10gb.img" | cat > /mnt/locdsk/L10gb-6.img
real 1m43.745s  user 0m44.738s  sys 0m20.828s
# time ssh email@example.com "cat /images/dev/R10gb.img" | cat > /mnt/locdsk/L10gb-7.img
real 1m43.564s  user 0m43.976s  sys 0m21.966s
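For a rough cross-check of the captures above, throughput can be derived from the 10 GiB file size and the wall-clock times. A quick sketch (decimal MB to match dd's reporting; the seconds below are representative "real" values copied from the runs above, so they include connection setup overhead):
# 10737418240 bytes = 10737.4 MB (decimal); divide by elapsed seconds
awk 'BEGIN {
    n = split("dd-nfs:93.4 cp-up:94.8 cp-down:101.7 scp-up:97.4 scp-down:104.4 ssh-up:95.6 ssh-down:103.7", m, " ")
    for (i = 1; i <= n; i++) {
        split(m[i], kv, ":")
        printf "%-9s %4.0f MB/s\n", kv[1], 10737.41824 / kv[2]
    }
}'
All transfer methods land in the 100-115 MB/s range, i.e. right at the practical ceiling of a 1 GbE link, while the local SSD write ran at 531 MB/s.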
I discarded the VLAN idea because it’s too late for me to implement it safely now.
You’re right - there is one L3 10-gigabit switch and a lot of L2 1-gigabit switches.
My question was, is a network as described with the network plan I provided realistic?
I worried that, because everything is in one LAN (192.168.5.0/24) and the ISP router is effectively the DHCP server, this may lead to broadcast storms or other fatal performance loss in the network, since every client has a dynamic IP.
No worries about the number of hosts. With TCP/IP there is not really an issue with broadcast storms. If you were using an old LAN technology like NetBEUI, SPX, or Banyan VINES, then broadcasts would be a concern. With TCP/IP the main type of broadcast is ARP messaging (in general).
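If you want to see this for yourself, you can watch the broadcast traffic on the wire. A quick sketch (the interface name is an assumption; substitute yours):
# Show broadcast frames on the LAN; on a healthy TCP/IP network this is
# mostly ARP plus the occasional DHCP exchange
tcpdump -n -i eth0 broadcast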
Regarding the HDD - it’s supposed to be 2 SAS HDDs in RAID 1, because these are the only hard drives in the planned server. So effectively 1 HDD. I know 200 Mbit/s is a lot; I’m still debating changing it to 2 SSDs. I was just worried they would break faster.
One HDD or 2 in RAID 1 is the same difference, since only one is the leader disk and the other is the mirror or follower disk. If you are using a traditional RAID controller, the onboard cache memory will help a bit with performance. But remember you are dealing with multi-GB files for imaging, so the cache will only help so much. In regard to SSDs: for FOG imaging they will not break faster than HDDs. What breaks SSDs is many writes to the drive. In the case of standard FOG imaging it’s write once, deploy (read) many times. SSDs are ideally suited for FOG imaging. I would say the HDD would have a shorter life because of the heads thrashing about the disk when you have multiple imaging sessions going on at the same time.
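If SSD wear is still a concern, you can track how much has actually been written over the drive’s life. A rough sketch (wear attribute names vary by vendor, so the grep patterns below are assumptions, not guaranteed output):
# Print vendor SMART attributes and pick out the usual wear indicators
smartctl -A /dev/sda | grep -i -E 'Total_LBAs_Written|Wear_Leveling|Percentage'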
Last thing regarding the bottleneck … So, the image server cannot deploy faster than its own read speed and the write speed of the client, right?
Here are the actual bottlenecks in imaging. Let’s assume a deployment here, server->client:
FOG Server disk to network
Network to fog imaging
Fog imaging to disk
In the case of a FOG deployment, the FOG server does very minimal work. The FOG server only moves data from the disk storage to the network adapter and then manages the overall progress of imaging. If you wanted to, you could run the FOG server on a Raspberry Pi 4. The key is getting a fast data path from disk to the network.
For FOG imaging the target computer does all of the work. The target computer takes in the image from the network, decompresses the image dynamically, and then writes it to the local hard drive. So the impacts on deployment speed are the network, the CPU (GHz and number of cores), memory speed, and the local storage drive.
So if you were to set up FOG and deploy to a computer, the program that writes the image to disk is called Partclone. Partclone reports a performance number, usually in GB/min. This number is actually a composite that indicates how fast Partclone can write the image to disk, but behind it sit all of the bottlenecks defined above. Let’s say you take two computers: one is a 2010 Core2 Duo with an HDD and the second is a 2019 quad core with an NVMe drive. Using the same FOG server, the Core2 computer will probably deploy in the 4 GB/min range (bottleneck is the CPU or the local HDD), whereas the quad core with the NVMe drive will deploy in the 6.5 GB/min range (bottleneck is the 1 GbE network).
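As a rough sanity check on those numbers (back-of-the-envelope only; it ignores protocol overhead and the effect of image compression):
# 1 GbE tops out around 125 MB/s raw; convert that to Partclone's GB/min
echo "scale=1; 125 * 60 / 1000" | bc    # -> 7.5 GB/min theoretical ceiling
# so ~6.5 GB/min observed on the quad core is right at the wire limit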
@MikeBC I agree. In our imaging lab we are only using WD15 docks; the WD19s, even after the dock firmware upgrade, were just too unreliable at reporting to the firmware as PXE compliant. So in the end we moved the WD19 docks to the users’ desks and acquired their WD15 docks for the imaging labs.
@Sebastian-Roth This is not a new branch, just an upgrade and a move from Ubuntu to CentOS 7, as it is more stable when upgrading. The only thing I can do as far as getting closer is to move this computer into the server room, but since it is a desktop that would be a little cumbersome. These are Juniper switches.
For creating the schools’ base image, I use this same machine to do the work for all of them since it is pretty close to the “golden image” for each one. I am using this Dell OptiPlex 7040. I create a legacy image (I have some more impoverished districts with old machines) and a UEFI image.
I have not tried different iPXE binaries, and I wasn’t aware that you guys wanted me to do a mirror port. I will try to work on this today if I get some time.