    • Profile
    • Following 0
    • Followers 0
    • Topics 22
    • Posts 158
    • Groups 0

    Posts

    • RE: Building a test environment

      @Sebastian-Roth

      Thank you a lot for the description, it is clear and I think it was exactly what was needed to understand this.

      Here is the point where it fails:
      0_1514978609406_dhcpfail.jpg

      It fails with no DHCP response, but before that point DHCP did respond, a lease time was even received, and of course the init files were loaded properly before this part. What is it then?
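      A packet capture on the server would show whether the second DHCP request (the one iPXE itself sends after it loads) goes unanswered. A minimal sketch, assuming the server's interface is eth0 (adjust to yours) and tcpdump is installed:

```shell
# Run on the FOG/DHCP server while the client PXE boots.
# Prints every DHCP packet (ports 67/68), so you can see whether
# iPXE's second DISCOVER ever gets an OFFER back.
tcpdump -i eth0 -n port 67 or port 68
```

      If iPXE's DISCOVER shows up but no OFFER follows, the DHCP server is ignoring that request; if no DISCOVER shows up at all, the NIC driver inside iPXE is the likelier suspect.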

      posted in General
      Foglalt
    • RE: Building a test environment

      Actually the whole setup consists of 1 switch (a dumb one, not managed) and 2 machines (a server and 1 client), to keep the environment as pure as possible.

      I am a bit confused about what is written here about the default.ipxe file and its chainloading. Can you please go into a bit more detail? I am not an iPXE expert, sorry 🙂

      When I am in the office and have a bit of time, I will take some pictures to be more specific.

      (Just for the record, the unpopulated TFTP folder had a good reason: upon failing to set up DHCP (well, it may be a Debian-specific issue on 9.3), the later part of the script fails too, the part which sets up the new TFTP location and puts the files there as usual. I was not aware of that at first, my bad, sorry.)
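      A quick way to see what the TFTP side actually serves, assuming the standard tftp-hpa client is installed and using the server address from this thread (10.0.0.1):

```shell
# From any machine on the test network: try to fetch the two files
# the client asks for, to see which one is really missing.
tftp 10.0.0.1 -c get undionly.kkpxe
tftp 10.0.0.1 -c get default.ipxe
ls -l undionly.kkpxe default.ipxe
```

      Whichever `get` fails is the file the boot is stumbling on.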

      posted in General
      Foglalt
    • RE: Building a test environment

      Maybe I was wrong somewhere, either in describing it or in doing it. After another retry (fully killing and redoing FOG in the test environment), DHCP boot worked with legacy hardware. And, as my luck goes, it got stuck at another point:

      After initializing the random number generator, it starts configuring the interface (on the client computer). Then it prints things about the IP it got and the lease time, and then “failed to get ip via dhcp” for the eth0 interface. It is strange, as it was starting via DHCP, so it must have already gotten an address once in the process.

      I will also check whether it still asks for the default.ipxe file, which was never in the DHCP config but was requested by the client.

      posted in General
      Foglalt
    • RE: Building a test environment

      I found that post a bit earlier, and yes, Debian 9 has issues with the DHCP config (I actually don't get why it can't kill the process with the service stop commands…). Manually it is solved. BUT, as there is always a but… if I rerun the setup, why is the tftpboot directory not populated? And if I populate it with files myself and the client (seemingly) finds them, why does it look for a nonexistent file? (Meaning default.ipxe, as it was not mentioned in the config file at all.)
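      For the “service stop does not kill it” part, a sketch of what I would try on Debian 9 (the unit name isc-dhcp-server is an assumption based on the stock Debian package):

```shell
# Stop the unit, then make sure no stray dhcpd process survived;
# a rerun of the installer can fail if an old dhcpd still holds UDP port 67.
systemctl stop isc-dhcp-server
pkill dhcpd || true
ss -lunp | grep ':67 ' || echo "port 67 is free"
```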

      posted in General
      Foglalt
    • Building a test environment

      Hello!

      As the new year came, I got my hands on a few machines for a test project (the goal is to get both the BIOS and the UEFI stuff working, too). For this, I have the following:

      network: 1 switch, and behind it 1 server and some clients, on the 10.0.0.x network

      server: 10.0.0.1, I let FOG set up everything needed on this machine (DNS, DHCP, I left it all on)
      client: I think it is not relevant for a start, but it is a little HP machine which can boot in both legacy and UEFI mode

      fog version: 1.4.4

      Issue 1: TFTP was not set up properly (I was trying to test if it works at all, and on the client I got file-not-found errors).
      Attempted fix: I opened the setup folder and put the TFTP files into the server's /srv/tftp folder.
      Outcome 1: now it connects, but it says it can't find the default.ipxe file. Well, it was not there; should it be? As it was a legacy box, I gave undionly.kkpxe a try to see if it loads anything at all. Now it does, but it got into a load loop: after the iPXE initialisation part (status OK written), it restarts the iPXE configuration part.

      My question (not even mentioning the really strange issue with the unpopulated TFTP directory): why does it want default.ipxe at all? In the DHCP config it is not even mentioned (stock ISC DHCP conf, made by the FOG setup script). The next question would be: why is there that loop? 😞
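      For what it's worth, the loop described here looks like the classic iPXE chainloading trap: DHCP hands undionly.kkpxe to every client, so when iPXE (which was itself loaded from undionly.kkpxe) does its own DHCP round, it is handed undionly.kkpxe again and reloads itself forever. The usual cure is to answer iPXE differently, keyed on its user-class. A sketch for a stock ISC dhcpd.conf, meant as an illustration rather than FOG's exact generated config:

```
# /etc/dhcp/dhcpd.conf fragment
next-server 10.0.0.1;                      # the TFTP/FOG server

if exists user-class and option user-class = "iPXE" {
    # second DHCP round: iPXE itself is asking; hand it the boot script
    filename "default.ipxe";
} else {
    # first round: the NIC's PXE ROM is asking; hand it the iPXE binary
    filename "undionly.kkpxe";
}
```

      default.ipxe would then be the small script that chains further into FOG's boot menu, which would also explain how a client can request a file the DHCP config never mentions.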

      I just wanted to make the simplest possible setup, and in this state I can't even start testing what I wanted 🙂 Is it the result of the Christmas food, or what the hell did I do wrong? 🙂

      posted in General
      Foglalt
    • RE: Partclone or Partimage

      @quazz Oh, you meant those? Well, I never really used them; sorry for not recognising them by name 😞

      posted in General
      Foglalt
    • RE: Partclone or Partimage

      I have things in the advanced menu of FOG, but maybe I misunderstood why you mention this. I will test with many browsers, maybe that helps, and will post it tomorrow (atm I am fighting with the DHCP/FOG/PXE/UEFI/BIOS combo as the main target, which kills my brain cells; this is just a side thing I noticed).

      Anyway, the setting defaults to Partclone, yet a new image turns out to be set to Partimage, but in the actual task Partclone starts (not to mention I don't know which Partclone option, as I saw more than one in the drop-down list).
      @Quazz can you please tell me what you mean by mentioning the Apache log? It is a form; how would anything be logged without posting the form content? (And what is the browser console log?)
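      In case it helps: the two logs presumably meant here can be looked at like this (the Debian/Apache log path is an assumption; the browser console is usually opened with F12, under the Console and Network tabs):

```shell
# Watch the web server's error log while submitting the form;
# PHP errors from the FOG web UI land here on a stock Debian/Apache setup.
tail -f /var/log/apache2/error.log
```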

      posted in General
      Foglalt
    • RE: Partclone or Partimage

      Atm in 1.4.4 there is an option to set the default. But even though it is set (well, actually on a clean setup it was already on Partclone), it still shows Partimage upon creating a new image (and it of course starts Partclone if the image is actually being created).

      Oh, and it is Debian 9.3.

      posted in General
      Foglalt
    • RE: Partclone or Partimage

      @quazz That is where I made my post. There it says Partclone gzip. So the default is set to Partclone, yet a new image is “set” to Partimage (which is the older one, if I remember well). At that point I thought it was some kind of bug. Is it? 🙂

      posted in General
      Foglalt
    • Partclone or Partimage

      Hey guys!

      I have a totally clean install of the current stable (1.4.4) on a Debian machine as the server. As it is being populated, I noticed a strange (for me at least) phenomenon. Images I relocated from the old machine (FOG v1.3.x) used Partclone as the cloning method. On this new setup, if I create a new image, the default settings (remember, a “next-next-finish” fully clean install) set the cloning method to Partimage. And when an image is actually being created, of course Partclone is running (as I was expecting). Is it a bug of some kind, or a simple setting that I should change to Partclone somewhere as the default?

      posted in General
      Foglalt
    • RE: FOG 1.5.0 RC 10

      I am planning to rebuild the FOG machine in many ways. Is there a chance of having 1.5 stable soon? 🙂 I have images made in v1.3.x; will I have to remake the images, or will I be able to use them with the new version, too? Ah, and the OS will change under the server, too, from Ubuntu to Debian finally… Now we have a short non-imaging period at work, so I can do the replacement work 🙂

      posted in Announcements
      Foglalt
    • RE: Deploying method you guys use vs disappearing hosts

      @sebastian-roth I have absolutely no problem with what you said, and I agree on upgrading. I only made the post to see if we might have a “bad habit situation” in how we update the host information. So, I will move onto stable (well, I don't like to risk productivity, so in the production environment I will stick to the stable release).

      About pending MACs: we don't use the FOG client at all, but sometimes on the FOG pages I see we have pending MACs, and it wants me to decide about them. In my thoughts I somehow linked this to the issue; this is why I asked. (I will do the upgrade as soon as possible with a junk-removed state of the DB, then see what happens.)

      And the most important thing: BIG thanks to you folks who voluntarily do such a huge and very, very good job! We can't be grateful enough for your work!

      posted in General Problems
      Foglalt
    • RE: Deploying method you guys use vs disappearing hosts

      @sebastian-roth

      Atm we use FOG 1.3.4. We first noticed the problem more than one update ago; however, I don't know how many patches it has survived.

      The update method for a MAC address is this:

      • open the dummy host from the host listing
      • select the MAC on the data page of the host
      • overwrite the MAC with the one copied from a document elsewhere (a.k.a. copy-paste)
      • press the button to update the host data

      It is nothing special, I guess; that is why I don't get the reason for the increasing number of “broken hosts” in the DB. Atm I see a missing host that was only used once, for some testing purpose (that would suggest to me that its MAC was not even updated at all, though the MAC may have been in use for a short time). I don't know if this helps, but I do hope so.

      Btw, I am trying to clear all obstacles from the path of upgrading to the up-to-date version. At a previous attempt to eliminate the issue I was advised to update. Unfortunately, at that point it was not the solution.

      Does it seem to be connected to the method of updating MAC addresses, or to the pending host thing? (Which I still don't understand the meaning of, sorry.)

      posted in General Problems
      Foglalt
    • Deploying method you guys use vs disappearing hosts

      Hello guys!

      I want to ask a “survey-like question” about the how-to of deploying. Let me explain what I mean by that. We at the company have thousands of computers, but we use only a few working images for them. When a computer comes in to be deployed, we use “imaging” or so-called “mule” hosts. That means we don't register all computers one by one in the host database for all future usage. We use hosts like dummy1, dummy2, etc. and use them for cloning. Only the description, the MAC and the image change for the process.

      As some PCs come in and some go out of our PC pools (going out here means forever), it would be hard to follow up on every computer's status, and the host table would soon be crowded (especially when it comes to a mass decommissioning). This is why only some of the computers are in the host list forever (servers, special PCs, etc.).

      This method was used for ages, since like the 0.2x versions of FOG, but some time after 1.1 (maybe 1.2) it began to fail us in a strange way. During the MAC update (old dummy goes out, new dummy comes in, so the MAC needs to change for the process), the host lost its primary MAC. It HAS a MAC registered, but in the database the primary one was lost. As a result, the host was not visible anymore, yet it was still there and could not be deleted, updated, etc. It needed manual DB garbage collection. (For now I am making semi-automatic garbage-collecting scripts for the crew to use.)
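      The garbage-collection check can be sketched as a single query. The table and column names (hosts, hostMAC, hmHostID, hmPrimary) are my assumption about the FOG 1.x schema; verify them against your own DB before relying on this:

```sql
-- Hosts that have MACs registered but no primary MAC:
-- these are the "disappeared" hosts that the web UI no longer shows.
SELECT h.hostID, h.hostName
FROM hosts h
JOIN hostMAC m ON m.hmHostID = h.hostID
GROUP BY h.hostID, h.hostName
HAVING SUM(m.hmPrimary = '1') = 0;
```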

      Do any of you use the same method? If so, do you have the same “disappearing host” issue? Or, even better, do any of you maybe know the cause of this thing? Call me stupid, but I don't even know or understand the “pending MAC” thing; can anyone explain it? Maybe that is the clue, idk.

      So, how do you guys do it?

      (Secretly I am also asking the developers here how many host registrations --a.k.a. the number of hosts in the DB-- can become a bottleneck for usability 🙂 )

      posted in General Problems
      Foglalt
    • RE: Images not deploying after update to 1.3.5

      I think I will check this topic often, as it is a really disturbing thing and we need to find a good way to solve it. Especially since we have many images and recapturing them on version changes is close to impossible (it actually takes hours), so if this thread arrives at a problem-solving trick, that would be good. Btw, the least problematic outcome would be if the images can be converted somehow.

      I think we can even accept tricks for making these steps work, rather than having two working sets of FOG to retake images before and after such changes 🙂

      posted in FOG Problems
      Foglalt
    • RE: host MAC update problem

      Take your time; as I have learned to work around it, for me it is not urgent. You told me that if I think it needs further investigation, it should be posted in a separate thread, so I did 🙂

      posted in FOG Problems
      Foglalt
    • host MAC update problem

      We had an issue with a previously missing and disappeared host. With your kind and fast help the problem was solved. As it has happened again, and the reason seems to be the same, I thought it would help future investigation if I put down here what we found:

      • the host is changed, so without creating a new host, the MAC and the image are changed (a dummy host used for cloning, registered for this purpose only and for a limited time)
      • after the MAC change (overwritten with new data by simple copy-paste), the update button is pressed
      • it says the host is updated, and right after that the host is not visible via host lookup. Disappeared.
      • in the database the host has one MAC address only: not a pending MAC, and no primary MAC

      So it seems that the MAC update procedure has some issue that can make a host disappear under some condition. Actually idk what condition, because as it was reported, nothing happened between the MAC update and the save; 3 seconds passed until the lookup and voila, no host appears.

      Can we do any other test to help you guys investigate the problem? (Well, if I simply set the primary flag on the MAC, the host reappears.)
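      The manual fix of setting the primary flag back can be expressed as a query. The names hostMAC, hmPrimary and hmHostID are my assumptions about the FOG schema, and 42 is a placeholder host ID; check both against your own DB first:

```sql
-- Re-flag the host's single remaining MAC as primary so the host
-- shows up in lookups again (assumes the host has exactly one MAC).
UPDATE hostMAC
SET hmPrimary = '1'
WHERE hmHostID = 42;
```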

      Fun fact: up to now one particular host has done this more than once 🙂 Maybe it has bad juju? 😄

      posted in FOG Problems
      Foglalt
    • RE: Log on fog system

      @george1421 Thx, that is why I was sitting there blinking.

      posted in Feature Request
      Foglalt
    • RE: Log on fog system

      @Wayne-Workman i am confused:

      mysql> select * FROM  history where hText like '%dest%';
      Empty set (0.00 sec)
      
      mysql>
      
      

      I am pretty sure we had deletes before 🙂
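      Before concluding that nothing was logged, it may be worth looking at what wording the history table actually uses, instead of guessing the LIKE pattern. A sketch:

```sql
-- List a sample of distinct entry texts, so the right keyword
-- ('dest', 'delete', ...) can be picked for the LIKE filter.
SELECT DISTINCT SUBSTRING(hText, 1, 40) AS sample
FROM history
ORDER BY sample
LIMIT 50;
```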

      posted in Feature Request
      Foglalt
    • RE: Log on fog system

      @Tom-Elliott I checked that table and I see many things, but I can't see image uploads, and zero deletes at all. I created an imaging report to see when my missing image was last used, and paired it with the history table to see if imaging is maybe logged in a different way, but I can't find a match.

      Is imaging not recorded in the history table? Actually I see only “updates” like MAC, task and so on. No image uploads, etc. We are investigating a disturbing situation (a deleted or missing image), and it is a pain in the ass not to see where and what happened to that image. (I even considered building an older version of the FOG server to see the pre-v1.3 images and at least use them to recreate an at-least-few-months-old version.)

      posted in Feature Request
      Foglalt