You are correct, I tried it on a fresh Ubuntu 16.04 and it worked.
I didn’t spend the time to pinpoint the exact error, but removing the old config (getting rid of /opt/fog/.fogsettings) lets the installer run, so this particular issue is fixed.
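For anyone else hitting this, the concrete step is just something like:
sudo mv /opt/fog/.fogsettings /opt/fog/.fogsettings.bak
(or rm it outright; moving it aside keeps the old settings around in case you want to refer back to them later).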
Following instructions on https://fogproject.org/download
Steps:
Download https://github.com/FOGProject/fogproject/archive/1.5.4.tar.gz
Extract, cd to bin
Run installfog.sh
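In command form, roughly the following (fogproject-1.5.4 is just the directory name the GitHub tarball extracts to; in my case the tree I was running from was under /var/www/fog, hence the error below):
wget https://github.com/FOGProject/fogproject/archive/1.5.4.tar.gz
tar xzf 1.5.4.tar.gz
cd fogproject-1.5.4/bin
sudo bash installfog.sh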
Error:
Please change installation directory.
Running from here will fail.
You are in /var/www/fog/bin which is a folder that will
be moved during installation.
If I instead run bin/installfog.sh from the base directory, I get many errors about missing files or scripts, so it seems the script has to be run from inside bin.
Note that this is more or less a reinstall attempt, not a completely fresh system. Ubuntu 16.04.
Am I missing something?
@sebastian-roth I was referring to my intended use case. I want to capture an image from a computer and automatically copy the individual files from the image onto a file server. To do that, I need a script that runs on the server, not on the host. I could still find a way to trigger it (some primitive form of RPC, maybe a quick HTTP request with wget), but it wouldn’t be quite as convenient.
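Something like this is what I meant by a primitive RPC (the URL, script name, and parameter are made up, just to illustrate the idea):
wget -q -O /dev/null "http://my-fog-server/trigger-copy.php?image=myimage"
i.e. the host makes one HTTP request when imaging finishes, and a script on the server does the actual copying.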
It seems the same solution worked on a different partition file. Either partclone.ntfs --restore isn’t quite the same as partclone.restore, or Windows system partitions aren’t NTFS…
The other component of this is automatically running the script after imaging is complete. I see there are both postdownload and postinit scripts, but both of them run on the host. Is there a postdownload-style script that runs on the server? I didn’t find a forum post describing more than the first two cases.
Here is my intended use case:
I want to image a computer, automatically mount the image file, and copy all of the individual files to a file server, so that I can access the individual files directly.
My problem so far has been figuring out what format the image files are in and what tools are needed to mount them.
So far I have discovered that they are in whatever image format partclone uses, and then (by default) gzip-compressed. So I gunzipped them and am left with the actual partclone files.
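Concretely, the decompression step was just something like:
gzip -dc /images/myimage/d1p1.img > unzipped_image_file
where the path is only an example of where FOG keeps the captured partition files on the server.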
As best as I can tell, the answer is to run something like this:
partclone.restore -s unzipped_image_file -o raw_image_file.img --restore_raw_file
In theory, raw_image_file.img would then be like an image created by dd: actually raw, sector by sector. I could then mount it as if it were a device:
mount -t ntfs raw_image_file.img /some/mount/path
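Or, more explicitly as a read-only loop mount, something like:
sudo mount -o ro,loop -t ntfs-3g raw_image_file.img /some/mount/path
which should be equivalent; as far as I know, -t ntfs ends up calling ntfs-3g on Ubuntu anyway.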
So far, running the partclone restore is successful, but I cannot mount it. Here is the error message I get:
$ sudo mount test.img ./test
Failed to read last sector (204798): Invalid argument
HINTS: Either the volume is a RAID/LDM but it wasn’t setup yet,
or it was not setup correctly (e.g. by not using mdadm --build …),
or a wrong device is tried to be mounted,
or the partition table is corrupt (partition is smaller than NTFS),
or the NTFS boot sector is corrupt (NTFS size is not valid).
Failed to mount ‘/dev/loop0’: Invalid argument
The device ‘/dev/loop0’ doesn’t seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
Any idea how to mount the image, or otherwise accomplish my use case?