• Capture an image of 2 disks

    6
    0 Votes
    6 Posts
    1k Views
    S

    @Pilar Your upload of the image didn’t go all the way through to the end. If I had to guess, I’d think you received this kind of error: https://forums.fogproject.org/topic/13628/ftp-issue-when-pulling-an-image

    Just follow George’s tutorial on how to fix this.

    By the way: Here in the forums we try to keep issues separated to help other people follow and better understand it. So please don’t jump from one issue to the next in this one topic. Open a fresh new topic for each and every issue you have and we’ll answer every single one of them.

  • Error on Start from hard disk (HD)

    6
    0 Votes
    6 Posts
    923 Views
    S

    @george1421 said to @eliaspereira in Error on Start from hard disk (HD):

    I see you change the exit mode to Grub. That will work too, but I’m surprised that grub works when SANBOOT didn’t

    I find this a bit strange too, but well, you never know. Hardware can be a real pain and does weird things across the board. So I guess there is some hardware out there that is not able to chainload the OS via EXIT or SANBOOT but is happy with the GRUB method…

  • Move images between storage groups

    3
    0 Votes
    3 Posts
    527 Views
    S

    @Gameman @george1421 The wording (“bug”) kept me from looking at this for a while. I found a bit of time to look into it and I think it’s not really a bug. When editing an image definition there is no drop-down for “Storage Group” but a whole separate settings tab for this (right of “General”). There you should be able to assign images to different storage groups as you wish.

    Though I have to admit that it might be a bit of a challenge to find this. Not sure why it was designed like this in 1.5.x.

  • API - Create Host "error": "Required database field is empty"

    4
    0 Votes
    4 Posts
    580 Views
    S

    @maddyred You might want to take a look at this topic: https://forums.fogproject.org/topic/15848/api-creating-host

  • Fog api image create 417 error

    4
    0 Votes
    4 Posts
    615 Views
    S

    @maddyred Finally found some time to look into this. Not sure where you got some of the data field names from. You’d better take a look at the code. Here you find the reference to the fields used for images: https://github.com/FOGProject/fogproject/blob/dev-branch/packages/web/lib/fog/image.class.php#L35

    Just below that you’ll also find the definition of databaseFieldsRequired (for images). You need at least those four to be able to create an image definition: name, path, imageTypeID, osID

    Here is an example with some more fields that worked on my tests:

    ...
    data = {
        "name": "fogsvos",
        "description": "API created image",
        "path": "/images/fogsvos",
        "imageTypeID": 1,           # Single Disk - Resizable
        "imagePartitionTypeID": 1,  # Everything
        "osID": 50,                 # Linux
        "format": "5",              # Partclone Zstd
        "protected": 0,
        "compress": 9,
        "isEnabled": 1,
        "toReplicate": 1,
    }
    ...
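
    Before submitting a payload, the required-field rule above can be checked programmatically. This is a hypothetical helper, not part of FOG itself; the endpoint URL and token names in the trailing comment are placeholders to adapt to your server.

```python
# Hypothetical pre-flight check: make sure an image payload carries the four
# databaseFieldsRequired entries (name, path, imageTypeID, osID) before it
# is POSTed to the FOG API. Field names follow image.class.php linked above.
REQUIRED_FIELDS = {"name", "path", "imageTypeID", "osID"}

def missing_required(data):
    """Return the set of required image fields absent from the payload."""
    return REQUIRED_FIELDS - data.keys()

data = {
    "name": "fogsvos",
    "path": "/images/fogsvos",
    "imageTypeID": 1,  # Single Disk - Resizable
    "osID": 50,        # Linux
}
assert not missing_required(data)
# A POST might then look roughly like this (placeholder URL and tokens):
# requests.post("http://fog-server/fog/image", json=data,
#               headers={"fog-api-token": API_TOKEN,
#                        "fog-user-token": USER_TOKEN})
```
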
  • Error Updating database

    2
    0 Votes
    2 Posts
    357 Views
    george1421G

    @hernani First I would recommend that you update your version of FOG. You are on 1.5.9RC2. I would at least go to 1.5.9, or even better, once at 1.5.9, switch to the dev-branch and move up to 1.5.9.111.

    If you use the git method to install FOG, then just switch to the /root/fogproject directory, run git pull, then switch to the bin directory and rerun the installer script ./installfog.sh.

    After that if you want to upgrade to the dev-branch you can issue these commands.

    cd /root/fogproject
    git checkout dev-branch
    git pull
    cd bin
    ./installfog.sh

    That might not take care of this issue, but at least if you are on the latest release the devs can help you.

    Typically when we see this problem at the end of capture with an FTP error message, it’s because someone changed the fogproject Linux user account or password. The memory exhaustion issue might be addressed in the update. I know I’ve seen this error before quite a while ago.

  • Move images to Archive

    1
    0 Votes
    1 Post
    388 Views
    No one has replied
  • Interface not ready, waiting for it to come up

    Solved
    19
    0 Votes
    19 Posts
    5k Views
    R

    Resolved by running the udp-sender start command, then I restarted the FOG machine (Ubuntu).

  • Web GUI error after clicking List Hosts

    25
    0 Votes
    25 Posts
    9k Views
    george1421G

    @tesparza It’s good we can point our finger at low memory as the problem. With only 4GB of RAM there was just 800MB of room left for cache, so MySQL had to commit everything to disk instead of caching it for later (speculation). I did find it surprising that in the previous top screenshot you had 4GB of swap space but none was used, even when the system was low on RAM. It wouldn’t have really helped in your case, but I find it strange that no swap space was used.

    Now that you have found an acceptable level you can start decreasing your FOG client check-in time until you find a happy balance between speed and frequency. The only issue I can think of with keeping the check-in time at 5 minutes would be during post imaging, if the fog client was used to rename the system, connect to AD, or do a scheduled reboot. It’s possible the client computer wouldn’t react to the command for about 5 minutes.

  • import /images

    3
    0 Votes
    3 Posts
    656 Views
    george1421G

    @electronico_nc Well, let’s start off by saying this: FOG images have 2 parts. There are the raw files stored in the /images directory and the metadata stored in the database. You have the raw files, so rebuilding the metadata will take some work, good guessing, and a bit of luck.

    Just recreate your image definitions by hand as you did in the beginning. Be sure to name them exactly what they were before. In the example above the image name and directory name are almost always the same, so name this image win7_64 (watch your case). Fill out the rest of the settings as you would normally do. One could guess that this image is Windows 7, so you will need to select that in the OS field. If you did not name all of them as clearly, be a good guesser about which OS each one is. The compression values are only used on image capture, so just guess what you might have used. Do this for each directory in the /images folder.

    The only thing (not a gotcha, but something to be aware of) is that in the image list view the image size will be zero. This is because the image size is set at capture time only. Since you are side-loading these images, that field will be zero until you recapture the image.

  • FOG deploy: partitions 4 and 5 too big for disk

    24
    0 Votes
    24 Posts
    8k Views
    S

    @madeyem Thanks heaps for testing again and letting me know! As well thanks for your patience.

  • Fog client does not add printers

    11
    0 Votes
    11 Posts
    1k Views
    S

    @btoffolon The fog-client (FOGService) runs as the local SYSTEM account.

  • Dell Optiplex 7090 Capture issue

    2
    0 Votes
    2 Posts
    420 Views
    R

    Sorry guys, the issue was that this particular unit had 2 hard drives, so selecting single disk was the issue.

    All good now.

  • Issues with USB Type C NIC Adaptors

    22
    0 Votes
    22 Posts
    7k Views
    M

    @michaeloberg said in Issues with USB Type C NIC Adaptors:

    @kghli @george1421 @Sebastian-Roth

    This could be a breakthrough. I just noticed that the system is passing the address through iPXE correctly and that it is FOG that is picking up the USB Type-C dongle’s MAC address - the exact scenario that @kghli is experiencing. I took a screen shot of the issue and here is the iPXE address (which is the correct system address):

    3e42e174-18f4-40d4-80a0-22714580ef00-image.png

    Then when I boot to FOG (now running Debian 10.11 and FOG 1.5.9) and choose “Client System Information”, then choose “Display MAC Address”, it shows the USB dongle’s MAC:

    e70169a5-6a1d-4983-be3c-9d9a7dea4c9e-image.png

    Hopefully this is going to help troubleshoot our issues, as we have narrowed it down to FOG alone, not the manufacturer of the system, the BIOS configuration, or the version of FOG.

    Thanks in advance!

    Mike

    I also recompiled iPXE from (g4bd0) to (g1844a) and verified the date (ls -la /tftpboot/*.efi) was today and it still is not working.

  • ipxe can't log in

    16
    0 Votes
    16 Posts
    1k Views
    S

    @george1421 I pulled the latest build of iPXE and the build number did update, but it still did the same thing.

    I’ve reimaged using deploy image about 5 times now and it consistently works, go figure!

    Luckily we don’t have any old computers in my environment; going forward this will be the lowest-spec machine we have.

    One other thing that was strange: when doing a full registration of the host I was asked whether I would like to deploy an image, and if I said yes it accepted my username and password in iPXE.

    Thanks for explaining why iPXE was picked as a default EFI image too. I just wonder how many others will be affected by this change in firmware by HP and what it is changing in the NIC!

    David

  • FOG Server Deployment Architecture & Stress Test Tools

    4
    0 Votes
    4 Posts
    567 Views
    george1421G

    @wt_101 When I say the heavy lifting is done by the client computer, I mean all of the work and the actual performance of FOG imaging is directly impacted by the target computer’s capabilities and components. While I understand this is technically impossible, if you had 2 computers that were exactly matched except that one has DDR3 1600 and the other has DDR4 2133 RAM, the second computer with the faster RAM would deploy the image faster, because the transferred image is decompressed in RAM on the target computer (more on that in Q2).

    Q1 To be honest I never paid attention to what the web UI says for size vs what is on the disk. Off the top of my head, a 3:1 compression ratio seems a bit high in my estimation. Is it possible? Yes. What really matters as a metric is the size of the actual data on the target computer vs the size of the image files. It’s possible that the web UI is recording something different than raw source disk vs compressed image file. There is a compression slider in the image definition. This tells the compressor what compression level to use; the higher the number, the harder it works to compress the data. I think the slider defaults to 4 or 6; for gzip that value is a good balance between compression size and speed. For zstd the Goldilocks number is 11. Where the gzip compressor has a range of 0 to 9, zstd has a range of 0 to 22. I don’t think anyone has done any testing to find the actual Goldilocks number in a quantitative way, though. I suspect they found a number that worked well for them and called it good.
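
    The level trade-off described above can be demonstrated with Python’s built-in zlib (the gzip algorithm, levels 0-9); zstd (levels 0-22) behaves analogously but is not in the standard library. This is just an illustration on sample data, not FOG’s actual capture path.

```python
import zlib

# Repetitive sample data stands in for a disk image; real images vary.
payload = b"FOG image data block " * 10_000

sizes = {level: len(zlib.compress(payload, level)) for level in (1, 6, 9)}

# Higher levels spend more CPU searching for longer matches. On many inputs
# the gain from 6 -> 9 is small compared to 1 -> 6, which is why mid-range
# defaults are a reasonable balance between size and speed.
assert sizes[9] <= sizes[6] <= sizes[1]
assert sizes[6] < len(payload)  # repetitive data compresses well
```
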

    Q2 Option A is correct. The image is compressed/decompressed on the client, so only a compressed image is ever sent to or from the client. This saves storage space on the storage node as well as transfer bandwidth. From a metric standpoint I know that a 25GB target image can be transferred in about 4 minutes. The only way that’s possible on a 1 GbE network is to transfer a compressed image.
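
    The arithmetic behind that claim can be sketched as follows; the 2.5:1 compression ratio is an assumed figure for illustration only.

```python
# Back-of-the-envelope check of the 25 GB in ~4 minutes figure.
image_mb = 25 * 1000          # decompressed data on the target disk, MB
seconds = 4 * 60

effective_rate = image_mb / seconds   # MB/s of decompressed data delivered
line_rate = 125                       # 1 GbE theoretical maximum, MB/s

# With an assumed 2.5:1 compression ratio the wire only has to carry:
wire_rate = effective_rate / 2.5

assert round(effective_rate, 1) == 104.2   # right at the 1 GbE ceiling
assert wire_rate < line_rate / 2           # compressed stream fits easily
```
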

    Q3 See, that is where the magic of FOG is. The developers created a custom version of Linux called FOS (FOG Operating System). That OS has all of the tools built in that FOG uses to image a target computer. Yes, FOS has the zstd and gzip compressors built in. When you PXE boot a computer for imaging, the iPXE boot loader is transferred to the target computer first. iPXE is responsible for the FOG iPXE menu. Once you make a menu selection (like registration), if you have a fast eye you will see two files transferred to the target computer: bzImage (the kernel) and init.xz (the virtual hard drive). That IS FOS Linux being sent over. The OS is very small and very fast.

    For Point 4, that is more of a question for the developers. I don’t look under the hood for statistics settings. I just know what that speed number on the Partclone screen means. I don’t know if FOG has a way to record that speed or not. As for task elapsed time, I think that means something else. As I mentioned above, on a 1 GbE network a 25GB image should take about 4 minutes of transfer time; 16 seconds seems a bit quick.

    For Point 6, the FOG client is used for more than just renaming the client and connecting the target computer to AD. It’s also used for application deployment and some rudimentary system management. You do not need to run the FOG client if you don’t want to manage the target computer after image deployment.

    Q1 Yes, there is a way. On my campus, which is mostly MS Windows based, I don’t use the fog client at all, yet I still have a touchless deployment. I leverage a feature in FOG called a post install script to make changes to the MS Windows unattend.xml file just after the image is pushed to the target computer. For a Linux client it is just as easy: most of the things that configure Linux are just text files, and FOS Linux is… wait for it… Linux, so the possibilities are endless. The concept of a post install script is that you create a bash script on the FOG server that is executed by FOS Linux. That bash script mounts the target computer’s hard drive (post image deployment) and makes the necessary adjustments to the hostname and any other deployment-specific settings. The post install script has access to the FOG host definition variables, so you can leverage some of the extra fields in the host definition for specific uses (like the other1 and other2 fields).
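
    As an illustration of the unattend.xml step, here is a minimal sketch in Python. FOG’s real post install scripts are bash running inside FOS Linux with the target disk mounted, and a real unattend.xml uses XML namespaces; both are simplified away here.

```python
import xml.etree.ElementTree as ET

def set_computer_name(unattend_path, hostname):
    """Rewrite every <ComputerName> element in a (namespace-free) unattend.xml."""
    tree = ET.parse(unattend_path)
    for node in tree.getroot().iter("ComputerName"):
        node.text = hostname
    tree.write(unattend_path)
```

    A bash post install script would achieve the same with sed against the file on the mounted Windows partition, pulling the hostname from the FOG host definition variables.
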

  • Join to domain doesn’t work :(

    4
    0 Votes
    4 Posts
    670 Views
    L

    @lerne-nie-aus If I give a static IP, then I can ping 8.8.8.8 and surf the internet.

    I don’t know why :(

  • User defaults to root on image capture

    5
    0 Votes
    5 Posts
    781 Views
    S

    @george1421 said in User defaults to root on image capture:

    The root of the issue is that when FOS Linux (which runs on the target computer) clones an image it runs as root (to have full control of the target computer during imaging), that is why they are being created on the FOG server as the root user.

    That’s correct, though not the whole truth. FTP is used internally within the FOG server to move the fully captured image files from /images/dev/… to their final destination /images/IMAGENAME/. It’s been a long time since I last looked at that part, but I think it was actually meant to do a chown operation alongside those FTP move and rename operations. I can’t remember off the top of my head why this wasn’t actually working. It’s probably still the case because it has worked in pretty much all cases since then.

    I guess it can be changed to chown to the fogproject user if you can explain why it causes issues on deploy in your case. What’s special about your setup? Modifications?

  • HTTP Error 5xx - Chainloading

    4
    0 Votes
    4 Posts
    289 Views
    J

    @sebastian-roth May have just fixed it - had to set the server I restored from snapshot to the correct location in “Location Management” on the master server. Will update thread once I get an opportunity to test.

  • API - Creating Host

    2
    0 Votes
    2 Posts
    303 Views
    M

    Sorry, I only noticed after posting that I accidentally removed a single ’ character when I edited my tokens for the post.
