You could set up BitTorrent Sync to distribute your images to your storage nodes…
but writing an image entirely over BitTorrent would probably be a big rewrite.
BTSync always keeps you on the latest version, distributed automatically, if you want that for testing.
I would say Debian 7 as well. It hasn’t given me any problems.
Ages ago I did some testing with BTSync for this. Might be worth investigating… unless rsync suits you better.
The image folder would apply to one image; the images folder (note the ‘s’) would be all images.
If you wanted to run the beta, you could also use BitTorrent Sync to get the latest changes, as well as SVN.
I have seen it on VirtualBox VMs, but today it was on new i5 systems. The same model system works perfectly on other PCs, but some seem to boot loop.
I wasn’t doing the new PC installs, but the guy who was wouldn’t know how to muck things up too much, so for the moment we’ve switched back while I troubleshoot.
Have you had any reports of looping boots after tasks? I’ve seen this a couple of times today on new PCs we’ve switched to our new FOG server.
I will try updating to the latest and test again, but I’ve got a ton of other things to do since it’s nearly the end of the school year.
The installer uses APT to install packages; I run on Debian and I’ve not had any major problems.
Do you have volume licenses for Office or Windows?
Windows licensing can be a real pain, no matter what you’re doing.
You could use something to install Office post-imaging, but it would likely be complex with all the different keys.
There is a NAS tutorial somewhere… it could probably be adapted to make FreeNAS a storage node.
Are you using a multi-partition single image? It shouldn’t expand on its own.
Have you tried including the whole path to your unattend? (I may have missed a bit since there have been a load of posts…)
I think that rather than building solutions into FOG for this, it’s something users need to be aware of and manage themselves.
If their environments have smaller HDDs/SSDs, then the sysadmins need to know that beforehand. I deliberately build my images smaller than the smallest HDD in my environment and expand them on the client.
Or you could use the resizable image type.
Dropping an error back to the server is a good idea… but unfortunately, knowing your environment is probably the best approach, as it will save you the time of shrinking the image and re-uploading it.
Also, not all HDDs of the same nominal size are equal… I’ve had a whole room of 80GB HDDs and one system failed because its 80GB HDD wasn’t quite the same size as the others.
I run all my FOG servers as VMs, mainly on VMware ESXi or XenServer 6.2.
For 0.32 I was all Ubuntu, but for 1.x.x I’m now using Debian 7 and it hasn’t been a problem at all. My VirtualBox test clients boot loop, but none of the physical machines do the same…
You may be limited by your storage if you have lots of active VMs on the same SR/datastore, since disks only go so fast and aren’t great for random IO. You may also hit limits on your network if you have active VMs on the same physical NIC… a 1Gb link can be swallowed quickly while imaging.
Everything depends on how much performance you need, what hardware you have, and how often you are going to be imaging.
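To put rough numbers on how quickly a 1Gb link gets swallowed: the image size and client count below are example figures I’ve picked, not from the thread.

```shell
# Back-of-envelope: time per client for a 25 GB image when 5 clients
# share one 1 Gb/s uplink (unicast, ignoring protocol overhead).
awk 'BEGIN {
    link_mbps = 1000        # 1 Gb/s link
    clients   = 5           # simultaneous imaging tasks
    image_gb  = 25          # example image size
    per_client_mbps = link_mbps / clients
    seconds = (image_gb * 8 * 1000) / per_client_mbps
    printf "~%.0f minutes per client\n", seconds / 60
}'
```

With those numbers each client gets ~200 Mb/s, so a 25 GB image takes roughly 17 minutes — multicast changes the picture entirely, since one stream serves every client.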
You could try having a pfSense VM do your routing.
A pfSense VM with 2 NICs: one NAT, one internal.
The other VMs get internal NICs; pfSense can handle your DHCP easily enough.
I do this with a bridged NIC instead of NAT… but it should work.
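On VirtualBox, the two-NIC layout above might be wired up like this — the VM names and the internal network name are placeholders, not from the thread:

```shell
# Sketch of the pfSense two-NIC layout on VirtualBox.
# "pfsense", "fog-client", and "fognet" are placeholder names.
VBoxManage modifyvm "pfsense" --nic1 nat                        # WAN side, NATed out
VBoxManage modifyvm "pfsense" --nic2 intnet --intnet2 "fognet"  # LAN side

# Point a client VM at the same internal network so pfSense
# serves it DHCP and routes its traffic:
VBoxManage modifyvm "fog-client" --nic1 intnet --intnet1 "fognet"
```

For the bridged variant mentioned above, swap `--nic1 nat` for `--nic1 bridged --bridgeadapter1 <host-NIC>`.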
If you’re serious, drop the wireless and get Ethernet.
There is the Location plugin; it might be of use to you.
IIRC there is a guide to making a storage server out of a NAS…
I use VMs, so storage nodes are easy enough to set up for me.