
    Topics created by AndrewG78

    • Solved vsftpd

      FOG Problems · AndrewG78 · 0 Votes · 8 Posts · 738 Views

      S

      @AndrewG78 I’ll try to answer all the things you brought up. But first let me state that so far you haven’t been clear (from my point of view) about what happened on which FOG server. Replication involves at least two parties (servers), and it’s important for me to understand which one showed the issue. I will come back to that point later on.

      Although replication services are disabled, there is still some replication done between storage groups.

      Disabled on which server? All FOG servers?

      1 a) The question is, was this proper behaviour?
      I thought replication is done only between the members (nodes) of a storage group.

      As I didn’t invent the replication algorithm, I don’t know it as well as Tom does. But reading the docs, I get the impression that this is expected to happen: https://wiki.fogproject.org/wiki/index.php?title=Replication
      6. If the node currently checking is the "primary master group" for the data it's working, it will attempt replicating its data to the master of each of the other groups the data is assigned under.
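      The rule quoted from the wiki can be sketched roughly as follows. This is an illustration only, not FOG's actual code; all names (`replication_targets`, the group/image dictionaries, the node names) are made up for the demo.

```python
# Hypothetical sketch of the replication rule quoted above: a node pushes
# an image only if it is the master of that image's "primary" storage
# group, and then only to the masters of the other groups the image is
# assigned to. Not FOG's real data model -- names are illustrative.

def replication_targets(node, image):
    """Return the nodes `node` should push `image` to, or [] if none."""
    primary = image["primary_group"]
    # Only the master of the primary group initiates replication.
    if node != primary["master"]:
        return []
    # Push to the master of every *other* group the image is assigned to.
    return [g["master"] for g in image["groups"] if g is not primary]

# Toy setup: two storage groups, image assigned to both, group A primary.
a_master, a_node2 = "fog-a1", "fog-a2"
b_master = "fog-b1"
group_a = {"name": "A", "master": a_master}
group_b = {"name": "B", "master": b_master}
image = {"primary_group": group_a, "groups": [group_a, group_b]}

print(replication_targets(a_master, image))  # ['fog-b1']
print(replication_targets(b_master, image))  # []
```

      Note that under this rule a non-master node never initiates replication at all, which is why it matters which of your two servers showed the traffic.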

      1 b) Are there any other services that could do this replication?

      You have two nodes and both have replication services running on them!

      The high CPU load (kworker and vsftpd) was related to replication and lack of disk space. The replication processes did not stop even when there was 0% free space.
      I think this is a bug.

      The vsftpd part is what I would call the receiving node in this setup. This might give you an idea which node was causing this. Disks can run out of space for many different reasons, and I don’t see why our replication service should constantly check for, and stop replication because of, low disk space. Every server needs good working disk space monitoring to warn the sysadmin to take care of it. See it from this side: if we added such a check and simply stopped replicating because of a lack of disk space, people who don’t monitor their disks might not notice for months and might blame us for replication not working. Although hitting a full disk is not nice, it will eventually cause trouble and make the sleeping sysadmin aware.

      3 a) Should there be some smarter log rotation?

      This is also something a sysadmin should be able to handle. Linux has logrotate, and I don’t see why we should reinvent it.
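      For example, a minimal logrotate drop-in could look like this. The log path is an assumption here; adjust it to wherever your FOG services actually write their logs.

```conf
# Hypothetical /etc/logrotate.d/fog entry -- the log path is an assumption.
/opt/fog/log/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
```

      `copytruncate` is used so the running services keep writing to the same file handle without needing a restart.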

      3 b) "No new tasks found" is logged every 10s - can we change this interval somehow?

      Yes, web UI -> FOG Configuration -> FOG Settings -> FOG Linux Service Sleep Times -> MULTICASTSLEEPTIME

      Sorry if my answers sound a bit impolite. I don’t mean it that way! Just wanted to show you that things can be seen from the other side as well.

    • legal issue

      General · AndrewG78 · 0 Votes · 4 Posts · 647 Views

      Tom Elliott

      @AndrewG78 While it is true that you can incorporate our product with yours, the only request (and I believe it is covered by the GPL as well) is that you credit our software, with links back to it, as being included.

      This way, people know to come to us for support needs on anything related to FOG directly.

    • Solved Deploy - No Snapins for Group

      FOG Problems · AndrewG78 · 0 Votes · 5 Posts · 608 Views

      S

      @AndrewG78 Ok, fixed in the working branch (ref). It will be in the next release. Thanks again for reporting.

    • Multiple FOG servers in one network

      General · AndrewG78 · 0 Votes · 18 Posts · 2.3k Views

      S

      @AndrewG78 said:

      There are several identical broadcast responses.

      Can’t explain that without a full wireshark/tcpdump pcap file. Way too much information is missing to get even a glimpse of why this might happen.

      There is tftpd error - Error code 8: User aborted the transfer

      It’s kind of a known thing. Before loading the boot file via TFTP, the client first requests the file size (an RRQ carrying the tsize option). The server answers the size query, and for some weird reason the client sends back a “User aborted the transfer” error and then sends a new request to actually download the file.
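      The three packets of that exchange can be sketched like this, following the RFC 1350 read-request layout and the RFC 2349 tsize option. This builds the packets only (no network I/O); the boot filename is an arbitrary example, and the error message text is whatever the client chooses to send.

```python
# Sketch of the TFTP exchange described above: an RRQ with tsize=0 to
# ask for the file size, the client's error-8 abort (RFC 2347: transfer
# terminated during option negotiation), then a fresh RRQ for the file.
import struct

OP_RRQ, OP_ERROR = 1, 5

def rrq(filename, mode="octet", **options):
    """Build a TFTP read-request packet, optionally with RFC 2349 options."""
    fields = [filename, mode]
    for name, value in options.items():
        fields += [name, str(value)]
    return struct.pack("!H", OP_RRQ) + b"".join(
        f.encode("ascii") + b"\x00" for f in fields)

def error(code, message):
    """Build a TFTP ERROR packet; the human-readable text is sender-chosen."""
    return struct.pack("!HH", OP_ERROR, code) + message.encode("ascii") + b"\x00"

size_probe   = rrq("undionly.kpxe", tsize=0)          # 1: ask for the size
abort        = error(8, "User aborted the transfer")  # 2: cancel that transfer
real_request = rrq("undionly.kpxe")                   # 3: re-request for real
```

      So the “Error code 8” line in the tftpd log is the client’s own abort message, not a server-side failure.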

    • Tablet with WINDOWS 10 and USB-LAN SMC 7500 adapter

      Hardware Compatibility · AndrewG78 · 0 Votes · 53 Posts · 10.4k Views

      A

      @Sebastian-Roth
      Great news. Thanks a lot!

    • Solved Allowed memory size of 536870912 bytes exhausted

      FOG Problems · AndrewG78 · 0 Votes · 9 Posts · 1.4k Views

      Tom Elliott

      @andrewg78 It’s because of the huge history. There is no fix that can cure it beyond cleaning it out. With 1.6, however, it will be somewhat alleviated by the use of proper SQL pagination. This isn’t a leak per se; it simply can’t fit that amount of information in the memory space.
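      The pagination idea is simply to fetch the history in fixed-size pages instead of loading the whole table into memory at once. A minimal sketch, using sqlite3 and made-up table/column names (not FOG's actual schema):

```python
# Demo of LIMIT/OFFSET pagination: stream a big table in pages rather
# than materializing all rows at once. Table name and columns are
# invented for the illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE history (id INTEGER PRIMARY KEY, info TEXT)")
db.executemany("INSERT INTO history (info) VALUES (?)",
               [(f"event {i}",) for i in range(250)])

def pages(conn, page_size=100):
    """Yield the history table in fixed-size pages."""
    offset = 0
    while True:
        rows = conn.execute(
            "SELECT id, info FROM history ORDER BY id LIMIT ? OFFSET ?",
            (page_size, offset)).fetchall()
        if not rows:
            return
        yield rows
        offset += page_size

sizes = [len(p) for p in pages(db)]
print(sizes)  # [100, 100, 50]
```

      Peak memory then scales with the page size rather than with the size of the history.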

    • Solved Windows 10 version 1607 resize issue

      Windows Problems · AndrewG78 · 0 Votes · 11 Posts · 1.4k Views

      george1421

      @andrewg78 I’m still thinking it’s BitLocker that is causing your issue. When you use “single disk, not resizable” and then clone the image, you are not changing the geometry of the disk. Please read through this post and see if running the commands to disable BitLocker makes “single disk, resizable” work better. https://forums.fogproject.org/topic/10824/image-upload-deploy-taking-a-long-time/43

      It’s worth looking over the entire thread, just for your info.
