A lot of work has been done on this and so far I think we’re on the right path.
The latest inits are up and ready for more testing; we just need @Doctrg to run the test.
He’s already tested simply deploying back, and it’s giving him some issues with the first partition (it says the partition is smaller than the source). I think this is because of the “math” that was being done originally, which I corrected last night.
With any luck, however, a fresh upload will enable this image to be deployed to smaller/larger systems.
So the reason the date selector was not being seen is that I grab the minimal date.
Some entries for this field may have ‘0000-00-00 00:00:00’ as the timestamp, which is invalid.
In the original implementation, if the date was invalid, the selector would simply present “invalid” (a valid start time will always be earlier than the end time).
The updated method should work for all of you having this problem. I’m correcting this behaviour by adjusting the start date to two years before the current date.
Hopefully that’s sufficient. The reason I don’t want to go back further is the amount of time it would take to generate the selector. (I’m not guessing the selector based on the max end date any more; I’m just giving a date range from the earliest time found forward, as it’s not really a selector otherwise, right?)
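To make the fallback concrete, here is a minimal sketch of the rule described above: treat the MySQL “zero date” as invalid and fall back to two years before the current date. This is illustrative only — FOG’s actual code is PHP and the function name here is hypothetical.

```python
from datetime import datetime, timedelta
from typing import Optional

# MySQL can store this "zero date" placeholder, which is not a real datetime.
ZERO_DATE = "0000-00-00 00:00:00"

def selector_start_date(earliest: str, now: Optional[datetime] = None) -> datetime:
    """Return the start date for the log date selector.

    If the earliest recorded timestamp is invalid (e.g. the zero date),
    fall back to two years before the current date so the selector
    stays a manageable size.
    """
    now = now or datetime.now()
    try:
        if earliest == ZERO_DATE:
            raise ValueError("zero date")
        return datetime.strptime(earliest, "%Y-%m-%d %H:%M:%S")
    except ValueError:
        # Fallback: roughly two years back (365-day years for simplicity).
        return now - timedelta(days=2 * 365)
```

For example, `selector_start_date("0000-00-00 00:00:00")` would return a date about two years ago instead of surfacing “invalid” in the selector.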
I’m also changing the way things are displayed a little bit.
Instead of hiding log entries for tasks that have either an invalid start OR end date, both start AND end must now be invalid for an entry to stop showing. It may make some readings look a little strange, but shouldn’t hurt anything otherwise.
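The OR-to-AND change above can be sketched as follows (the function names are illustrative, not FOG’s actual PHP code):

```python
# MySQL "zero date" placeholder used for missing timestamps.
ZERO_DATE = "0000-00-00 00:00:00"

def is_invalid(ts: str) -> bool:
    """A timestamp is considered invalid when it is the zero date."""
    return ts == ZERO_DATE

def should_show(start: str, end: str) -> bool:
    """Updated rule: hide an entry only when BOTH timestamps are invalid.

    The old rule hid the entry when EITHER one was invalid:
        return not (is_invalid(start) or is_invalid(end))
    """
    return not (is_invalid(start) and is_invalid(end))
```

Under the new rule, a task with a valid end time but a zero-date start time is still shown, which is what produces the occasional “strange reading.”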
@Wayne-Workman I find that a bit overzealous. My FOG server itself is a crappy *buntu VM with 2 GB of RAM and 2 cores, hooked into our vSwitch of 8 ports, and my clients are all i5 laptops with 8 GB of RAM. I routinely image 10 at a time in unicast, 40+ in multicast. When I have all 10 imaging, yes, imaging is around 50% slower than doing just 1, but the time savings are still huge.
@MikeoftheLibrary After reading that, there’s another thing I forgot to mention. Each multicast session only goes as fast as the slowest host. So if one of them has bad RAM, a failing disk, or a bad patch cable, it will slow down every host in that multicast task. This could be your issue, and most likely is, since you’re using business-grade network equipment.
In reply to @george1421’s post: because of the possible incoming space issue, I have the main FOG server on a VM with only 50 GB of storage, plus an additional physical storage node with 4 TB of RAID storage that also goes to backup tape each week. Our VMs on ESX are also backed up each week.