Best posts made by DevinR
-
RE: Ads
@admin Happily whitelisted. It’s not slowing down my browser and it’s not intrusive; just the way I like it.
-
More compression testing!
This isn’t strictly a tutorial, but I am sharing info, so here we go! I loved the work by Ch3i here: https://forums.fogproject.org/topic/4948/compression-tests, but I wanted to know a bit more. Yesterday I spent more time than I should admit to my boss just uploading the same image at different compression levels. Here’s a brief table of contents:
- Questions
- Data
- Methodology\Specs
- Conclusions
- Future work
So without further delay, let’s go!
Questions-
Here are some questions that I plan to answer:
- How much extra space does each compression level take (compared to the max, 9)?
- How long does each compression level take to upload?
- How long does each compression level take to download?
- What is best for my environment?
Data-
- Picture of my data, yay! Fog Compression.PNG
- Full Excel 2007 file for your own manipulation, boo. Fog.xlsx
Methodology\Specs-
The FOG server (1.2.0) is a dual-core 2.8GHz Dell OptiPlex 380 (2008-ish) running Debian 7 (Wheezy). It has one 7200RPM HDD (80GB) as the Linux root filesystem, plus a pair of WD “Black” 1TB 7200RPM HDDs in a Linux software RAID mirror mounted on /images. It also acts as our DHCP server for known computers, unknown devices like phones or tablets, and VoIP phones (between 350 and 400 devices during the business day). It is connected to a Netgear unmanaged gigabit switch with a single link.
The FOG client is a new Dell OptiPlex 3020 with the Dell image (with the first 2 partitions removed). Only a single partition (sda3) is being uploaded\downloaded. This client has something weird going on with the FOG server, so every upload and download makes the client sit for about 80 seconds with a blinking cursor before it fully loads and continues. This makes all of my times longer and all of my speeds lower than what was actually observed. It is connected to the same Netgear unmanaged gigabit switch with a single link.
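On the server side, for anyone curious, building a mirror like that looks roughly like this. The device names and /dev/md0 are assumptions on my part, not my exact setup, and mdadm --create wipes the member disks, so treat it purely as illustration:
```
# Hypothetical two-disk RAID1 mirror for /images (device names assumed).
# WARNING: --create destroys any existing data on the member disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0           # format the new array
mount /dev/md0 /images       # mount it where FOG keeps images
cat /proc/mdstat             # check the mirror's sync status
```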
All images were sent with unicast download tasks, and uploads were done with the single-disk, multiple-partition, fixed-size image option set.
A few times I had to run the upload twice because I didn’t save the image the first time I uploaded; this is why there are two entries for some uploads. All uploads were completed before any downloads, to make sure we didn’t get a chain effect of errors from one compression level to another.
The RAID utilization was determined by polling the RAID hard drives’ active time every second for 100 seconds during the download. This was accomplished with the command “iostat -dx /dev/sdb /dev/sdc 1 100 > Compression5.log”. I then took these 100 entries and averaged them to produce a snapshot of about a quarter of the total download time for the image.
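If you want to reproduce the averaging step, a one-liner over the log does it. I’m assuming here that %util is the last column of the iostat -dx output (it is in the sysstat builds I’ve used) and that the log name matches the command above:
```
# Average the %util column for sdb/sdc across every sample in the log.
# Assumes %util is the last field on each device line (typical sysstat layout).
awk '$1 ~ /^sd[bc]$/ { sum += $NF; n++ }
     END { if (n) printf "average %%util across %d device lines: %.1f%%\n", n, sum / n }' Compression5.log
```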
Conclusions-
- There are bigger factors for download time than compression. I was concerned about why compression 5 had such bad download performance, so I ran the test again and got much better results. It seems the FOG server was busier, or our network was under heavy load causing many packet failures, or something weird; I don’t know what. This makes me believe that while greener is better for download duration, at any given time a higher compression image can still have a worse download time than a lower compression one.
- Upload time can safely be reduced with minimal impact on image size. Even just dropping to compression 8 cuts the time to 75% of compression 9’s. Going to compression 7 further lowers it to about 60% of the time, with only a 0.2% increase in stored image size.
- I care about deployment time, so any compression level above 1 will give me a good deployment time. I also like my uploads to not take forever, so going forward my compression level is getting set to 7. (If you want a rough feel for the trade-off on your own hardware, see the quick test below.)
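FOG compresses images with pigz under the hood, which is what the compression level setting maps to as far as I can tell, so you can approximate the trade-off locally without running full upload tasks. This is just a sketch: sample.img is a placeholder for any large test file, and the levels mirror the ones I tested:
```
#!/bin/bash
# Rough local comparison of pigz compression levels on a large test file.
# "sample.img" is a placeholder; point it at any big file. Requires pigz.
for lvl in 1 5 7 9; do
    echo "=== pigz -$lvl ==="
    time pigz -"$lvl" -c sample.img > "sample.$lvl.gz"
    ls -lh "sample.$lvl.gz"
done
```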
Future work-
- Same download tests but multicasting
- Same download tests but to multiple hosts at once (4 or 5)
- Other?
-
RE: Bit Torrent
You go, /u/Junkhacker! I would love to have BitTorrent-style syncing, so long as it lets me lower my overall deployment time when doing 6+ clients.
I can’t use multicast because that hurts our network in a VERY bad way (all network traffic basically halts), so I use 5 concurrent unicasts, typically to about 30 computers in a batch.
Thus, no matter what I do, deploying to 6-10 clients takes about 2x as long as 1 client does. If the torrent send can reduce that number for 6+ clients to 1.5x of 1-5 clients, then I would be extremely happy and my company would see great benefit.
If it works as well as possible, I would expect total deployment time for 30 clients to be like 15-20% of what it currently is: right now 30 clients means six sequential batches of 5, roughly 6x a single deployment, so a swarm pushing all 30 at about the single-client rate would land around 1/6 of the current time.
-
RE: Compression tests
So I did my own testing, and I don’t find there to be much difference for downloading, but compression level does seem to increase uploading time significantly. If you’re interested, my post is titled “More compression testing!”
Latest posts made by DevinR
-
RE: More compression testing!
@Wayne-Workman These days, do I still need to go to a special forum section and ask for a user account? Or how does it work? I guess I can go look around; I’m sure it’s somewhere.
-
RE: More compression testing!
@Wayne-Workman Thanks! That looks amazing. I guess I need to get set up with wiki editing access to be able to add a few other things in there…
-
RE: Ideal FOG Setup
I am so excited for 1.3.0! Having said that, all of my real work has to be done on 1.2.0. When you run the latest, you are subject to the latest bugs (even undiscovered ones), like the issue Tom recently mentioned that broke several images for a few revisions.
I can’t afford to have production problems on my production FOG server, but I happily have a node set up with the latest SVN for my own pleasure and enjoyment (I’m looking at you, web-based boot menu!). Good luck!
-
RE: Compression tests
I know that I speak for EVERYONE when I say yes, let’s get it down to at least 6 (maybe 5?).