Uploading to wrong storage group
-
Also, temporarily disable the Koei (root) storage node. It’s a checkbox in its settings. Then see what happens.
-
No change when disabling koei. I also disabled olifant and tried again, re-enabled one and tried again, then both, and tried again. No change.
-
@dolf Why does the olifant node have the directory /mnt/...? In its description, you say it’s in lab2-server. Is that the fog server or another server? Have you mounted a remote directory to the fog server?
-
I have only one physical Ubuntu machine running FOG, with two hard drives. There is a storage node on each, belonging to different storage groups, so that I don’t have to keep all of the images on one disk (not enough space). /mnt/olifant is simply the mount point for a physical 1.5TB internal SATA drive with one ext4 partition.
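A minimal sketch of that setup; the device name /dev/sdb1 is an assumption, only the mount point comes from this thread:

```sh
# Mount the 1.5TB ext4 partition at /mnt/olifant (device name assumed)
sudo mkdir -p /mnt/olifant
sudo mount /dev/sdb1 /mnt/olifant

# Persist the mount across reboots
echo '/dev/sdb1 /mnt/olifant ext4 defaults 0 2' | sudo tee -a /etc/fstab
```
-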
@dolf Ok. I’m about out of ideas then. I’m going to need to ask @Tom-Elliott, @Moderators and @Testers for ideas.
-
@Wayne-Workman TBH I did not read the entire thread, so my apologies if this has already been asked: what are the results of showmount -e 127.0.0.1?
-
OP: If I understand this correctly, you only have one FOG server, but you have two hard drives in this physical server, each without enough space for all of your images. Do I understand this right? If so, I think I would take a slightly different and less complex approach. There are two routes I can think of (sketches of both follow the list below):
- Set up LVM: add those disks to an LVM volume group, then create an LVM logical volume and let LVM decide where to span the files across the two physical disks. Temporarily mount that LVM volume over /mnt and move the content of /images onto it. Then unmount /mnt, remount the new LVM volume over /images, and be done with it (make sure you update fstab). That way you are running a standard FOG configuration without having to deviate from the norm.
- On your existing FOG server, create two directories under /images, like olifant and koei, and mount those two drives over those directories. Then when you create your image definitions, make sure you pick one of the two drives. So if you create an image named WIN7X64, make sure the destination path isn’t /images/WIN7X64 but /images/koei/WIN7X64 instead. It’s not as clean as doing it at the OS level with LVM, but it should work.
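A minimal sketch of the LVM route, assuming the two data drives are /dev/sdb and /dev/sdc (device names hypothetical) and that both can be wiped:

```sh
# Create physical volumes, a volume group, and one logical volume
# spanning both disks (WARNING: destroys existing data on them)
sudo pvcreate /dev/sdb /dev/sdc
sudo vgcreate fogvg /dev/sdb /dev/sdc
sudo lvcreate -l 100%FREE -n images fogvg
sudo mkfs.ext4 /dev/fogvg/images

# Temporarily mount the volume and move the existing images onto it
sudo mount /dev/fogvg/images /mnt
sudo sh -c 'mv /images/* /mnt/'
sudo umount /mnt

# Remount the volume over /images and persist it in fstab
sudo mount /dev/fogvg/images /images
echo '/dev/fogvg/images /images ext4 defaults 0 2' | sudo tee -a /etc/fstab
```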
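And a sketch of the directories-under-/images route, with the same assumed device names:

```sh
# Create per-drive directories under /images and mount the drives there
sudo mkdir -p /images/koei /images/olifant
sudo mount /dev/sdb1 /images/koei
sudo mount /dev/sdc1 /images/olifant

# Persist both mounts across reboots
echo '/dev/sdb1 /images/koei ext4 defaults 0 2' | sudo tee -a /etc/fstab
echo '/dev/sdc1 /images/olifant ext4 defaults 0 2' | sudo tee -a /etc/fstab

# Then point each image definition at e.g. /images/koei/WIN7X64
# instead of /images/WIN7X64 in the FOG web UI.
```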
-
# showmount -e 127.0.0.1
Export list for 127.0.0.1:
/mnt/olifant/images/dev *
/mnt/olifant/images *
/images/dev *
/images *
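For context, those exports normally come from /etc/exports on the FOG server. A sketch of what the matching entries might look like, using FOG’s usual default export options (an assumption; not copied from this machine):

```
/images                 *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
/images/dev             *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
/mnt/olifant/images     *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=2)
/mnt/olifant/images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=3)
```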
Thanks for the workaround ideas, although I think we still have to solve the bug… If I apply a workaround, I won’t be able to help you find it.
Are there any success stories of having two nodes on one machine? Is it even supported?
-
If I add the image to the koei storage group as well, will it be copied (replicated) from /mnt/olifant/images/ to /images/? I could try that, then remove it from the olifant group and add it again?
-
@dolf I think you’re on to something there. The problem might be caused by the fact that the nodes are using the same interface.
To be honest, I’ve not seen this specific use case before, so I’m just guessing.
-
I’m curious if that 1.5TB drive can hold everything… if so, do away with the other node.
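A quick way to check, sketched below (paths taken from the thread):

```sh
# Total size of the current image store
sudo du -sh /images

# Free space on the 1.5TB drive
df -h /mnt/olifant
```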
-
/images can hold everything, for now. I removed the second storage node and storage group. I’ll just move the older images to the extra drive and symlink them to /images/… So according to @Quazz, this is not supported. Fine by me. It seemed like a logical use case to me, so maybe we should note somewhere in the Wiki that this is not supported?
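A minimal sketch of that move-and-symlink approach; the image name OLDIMAGE is hypothetical:

```sh
# Move an older image to the extra drive (image name hypothetical)
sudo mv /images/OLDIMAGE /mnt/olifant/images/OLDIMAGE

# Symlink it back so FOG still finds it under /images
sudo ln -s /mnt/olifant/images/OLDIMAGE /images/OLDIMAGE
```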
-
@dolf I’ve successfully run two storage nodes on one interface before just fine. In fact, both storage nodes were using the exact same directory as well. I’m sure this isn’t the problem. There must be something else going on.
-
Is there any way we can get more information?
I know for a fact that FOG can run two nodes (hell, two groups, each with a node that points at the same place) without issue. It’s how I test different things. As for “bandwidth” usage, I’d imagine the items would look more or less the same, minus the slight delay you might have when reading the “individual” nodes’ charts.
-
Sorry, since I did away with the second storage group, it’s hard to test this. I’ll create a second storage group and test this when I have more time. For now, I’m pushing a deadline to get the lab deployed before Monday.
-
I’m marking this solved for now, as I’m fairly sure this is fixed. Of course, feel free to test and update this thread as any other information comes forward. I’ve tried to test, but there’s too much else going on at the moment.