Posts made by Junkhacker
-
RE: rcu_sched stall OR kernel panic on PowerEdge R640
I was googling the problem a bit and I was curious: will it boot if you remove the RAID card? Just trying to understand the source of the panic.
-
RE: The future of partclone and therefore FOG as it is
@Sebastian-Roth I understand the reasoning for not updating it. I'm hoping to have a chance to help with testing by the end of the week.
-
RE: The future of partclone and therefore FOG as it is
@george1421 I only brought it up because we're updating zstd for this build anyway, so I figured "why not have the latest one?"
-
RE: The future of partclone and therefore FOG as it is
@Sebastian-Roth FYI, zstd 1.4.3 is out. It's just a bug-fix release, and the bug it fixes is probably not relevant to our use case, but we should probably upgrade anyway.
-
RE: Very slow cloning speed on specific model
@jondeong Just an FYI: the update to dev-branch would have updated the KERNEL RAMDISK SIZE setting for you to what is going to be the new size requirement. The old, too-small value was almost certainly the cause of the kernel panic.
-
RE: NTFS partitions corrupt after capturing resizable image
@dolf FOG is image deployment software, not backup software. Resizable is the default because that's the most useful and common type of image capture for FOG's intended purpose. While FOG can be used to back up systems, that is not its intended purpose, and we have encouraged people for years not to use it as such.
-
RE: [Seeking Volunteers] Bench Testing! Our trip to the best results!
@Mokerhamer if you're really wanting to push things to the limit, you might be interested in helping out with testing/development here: https://forums.fogproject.org/topic/13206/the-future-of-partclone-and-therefore-fog-as-it-is/105
The newest version of partclone will allow us to save images without checksums, decreasing the captured data slightly and increasing compressibility. My initial testing says it will be about a 10% improvement in compression.
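For anyone who wants to experiment with this outside of FOG, the capture pipeline boils down to something like the sketch below. The partition path, the zstd level, and especially the -a0 "no checksum" option are assumptions on my part about the newer partclone builds, so verify them against partclone.ntfs --help on the version you actually have before relying on them:

    # capture a hypothetical NTFS partition with checksums disabled, piped straight into zstd
    partclone.ntfs -c -s /dev/sda2 -a0 -o - | zstd -11 -T0 > sda2-nochecksum.img.zst

    # the same capture with partclone's default checksumming, for a size comparison
    partclone.ntfs -c -s /dev/sda2 -o - | zstd -11 -T0 > sda2-checksum.img.zst

Comparing the two file sizes on your own data is the quickest way to see whether the ~10% figure holds for you.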
-
RE: [Seeking Volunteers] Bench Testing! Our trip to the best results!
@george1421 I just want to chime in that those speeds seem completely normal to me. That's what I was getting on a regular basis before I switched the VM host for my FOG server.
-
RE: [Seeking Volunteers] Bench Testing! Our trip to the best results!
@george1421 all of what you said is true, but it just emphasizes the importance of benchmarking compression. With all of the variables that can come into play, the one thing you can usually rely on being consistent among people's setups is 1GbE to the end client.
The maximum transfer rate on gigabit is well established, but what matters is the end-result speed of writing to disk, and that is what we can affect with compression methods. A rough back-of-the-envelope calculation is below.
BTW, I wasn't being grumpy. I just like to highlight how fast FOG can be. It's one of FOG's killer features that other methods can't beat. (If anyone has seen a faster deployment method than FOG, please let me know.)
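To put rough numbers on that (the 2:1 compression ratio here is just an illustrative assumption, not a measured figure):

    1 Gb/s link          = 125 MB/s raw, roughly 110-118 MB/s of usable payload after protocol overhead
    link-limited stream  = ~115 MB/s of compressed data arriving at the client
    at a 2:1 ratio       = ~230 MB/s of decompressed data being written to the client's disk

The wire speed is fixed, so a better compression ratio is what multiplies the effective write rate, right up until the client's disk or CPU becomes the new bottleneck.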
-
RE: [Seeking Volunteers] Bench Testing! Our trip to the best results!
Oh, also, I disagree with george about how fast an "ideal setup" can be with a single GbE network and one unicast:
https://youtu.be/gHNPTmlrccM
-
RE: [Seeking Volunteers] Bench Testing! Our trip to the best results!
@george1421 we did a bunch of testing to compare pigz to zstd back when we decided to include zstd compression in FOG.
I had found that the optimal setting for pigz in my environment was compression level 6, and the optimal for zstd was level 11.
Comparing those two optimal settings against each other, zstd gave:
10% faster capture speed
26% smaller captured files
36% faster deployment
zstd was early in its development and adoption back then, and it has had changes since those tests that improve both its compression and its speed, but we don't know exactly by how much.
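If anyone wants to rerun that comparison on their own data, the core of the benchmark is just a couple of timed runs like the sketch below. sample.img is a placeholder for a real uncompressed image file from your environment, and the levels are the ones from my old notes:

    # compression: time and resulting size at each tool's "optimal" level
    time pigz -6 -k -p "$(nproc)" sample.img          # writes sample.img.gz
    time zstd -11 -T0 sample.img -o sample.img.zst
    ls -l sample.img.gz sample.img.zst

    # decompression speed matters most for deployment, so time that as well
    time pigz -d -k -c sample.img.gz > /dev/null
    time zstd -d -c sample.img.zst > /dev/null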
-
RE: FOG : Main sites and Branches organisation
@processor the first two can be done by just setting up storage nodes at each site. The third one is what makes the whole thing complicated, though.
-
RE: Init.xz issue
@bigjim I'm betting that the init file didn't download correctly. You can update it manually, or the easiest fix would be to try running the installer again.
-
RE: Unable to use an NFS share to store images
@vpt I know there are also a few forum posts on how to do it, if you haven't searched yet. This one, for example: https://forums.fogproject.org/topic/8668/qnap-nas-storage/18
-
RE: Unable to use an NFS share to store images
@vpt pretty much. Although, if the QNAP device supports both NFS and FTP, it can be set up as its own storage node instead of mounting it on the FOG server.
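For what it's worth, making the NAS act like a storage node mostly comes down to exposing an images directory over NFS (plus FTP credentials FOG can use to manage files). A generic export line looks like the one below; the path and options are illustrative, not FOG's exact installer-generated settings, and on a QNAP you would normally configure the equivalent through its own NFS sharing UI:

    # /etc/exports style example (illustrative only)
    /share/images *(rw,sync,no_subtree_check,no_root_squash)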
-
RE: Unable to use an NFS share to store images
@vpt FOG shares out /images over NFS itself; re-exporting an NFS share that is already a mounted NFS share is not supported.
-
RE: Ipxe issue
@cmurray139 the default FOG iPXE boot files load /tftpboot/default.ipxe on your server. This is an iPXE boot script that has the address of your FOG server in it. The file can be customized if you like (I use a customized one to direct certain computers to my dev FOG server instead of the production one, for example). Make backups of your changes; this file is overwritten whenever you perform an upgrade of FOG.
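To illustrate the kind of customization I mean, a trimmed-down script could branch on MAC address like the sketch below. The MAC and both server addresses are made up, and the stock default.ipxe passes more parameters to boot.php than this, so base any real edits on the copy the installer put in /tftpboot:

    #!ipxe
    # hypothetical: 10.0.0.20 is my dev FOG server, 10.0.0.10 is production
    iseq ${net0/mac} 00:11:22:33:44:55 && set fogserver 10.0.0.20 || set fogserver 10.0.0.10
    params
    param mac0 ${net0/mac}
    chain http://${fogserver}/fog/service/ipxe/boot.php##params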
-
RE: VM settings requirements
@Spark for the Windows VM (someone feel free to correct me if I'm wrong on these specs):
80GB HD, 1 CPU, 4GB RAM
For the FOG server, you'll probably want at least enough room to store 3 images (remember that when you replace an existing image, both the old and the new one exist on the server at the same time for a short while), so:
200GB HD, 1 CPU, and at least 512MB RAM
-
RE: fogproject user account ?
@Qweeqweg the change from using an account named "fog" to an account named "fogproject" is recent. In spite of the directions, people kept creating and using an account named "fog" as their own user account when setting up the server.
-
RE: fogproject user account ?
@Qweeqweg that account is used by FOG for FTP access. It was created during the install process.