@Keating178 The Invalid Storage Group error you were seeing should be fixed if you can upgrade to the latest within the dev-branch from GIT.
Hopefully this will fix the issue you were seeing too.
I have identified a few more issues and been able to correct them.
When clicking pages from the menu (as opposed to doing a full refresh on the same page), some elements broke. For example, in list views, selecting items to delete and clicking delete would not bring up the modal requesting your password to confirm the action. On a full refresh of the link it worked; when clicking from the menu it didn’t. Fixed by forcing reinitialization.
Import elements didn’t return proper notification items, and warning and info notifications always assumed an error state. This has been fixed.
Added rudimentary file upload progress system. Will be working to implement a progress indicator.
When creating images, entering the name didn’t auto-populate the path. This has been re-implemented.
Snapin file uploads were already meant to be operational, but did not work due to a typo in the POST field used to send up the files. Renamed and working again.
Host export didn’t properly include the primac field. It didn’t pull in the MAC at all, and the values were misplaced: the host name appeared in the description column, and the name field was where the MAC was trying to populate. Added the primac field, and the data is now pulled correctly.
Along with the reinitialization for the password modal, that same reinitialization was causing the data tables to be initialized a second time and throw an error because they already existed. Set to retrieve the existing instance instead, so the error is no longer thrown.
@Sebastian-Roth the Macs are being captured as raw, which captures the entire disk, not the partitions.
Would you mind trying on working-1.6 (preferably with the same database, but on a test instance) to see if performance is improved at all?
@maverick2041 I found the issue and was able to add a fix for it.
Please run a git pull and reinstall.
Along with the fix, two little features come in:
A refresh button for the data tables, so you can force a refresh of the data within the table being viewed.
Added a whoami route to the API. This returns a JSON formatted string with the following information:
{"ipaddress": "275.275.275.275", "hostname": "fogserver", "osid": "1", "osname": "Redhat", "installtype": "N"}
The 275 is meant to ensure nobody guesses a proper IP.
Hopefully this will allow snapins to work properly in 1.6
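For anyone wanting to consume the new route, here’s a minimal sketch. The URL path is an assumption (check your install for where the whoami route is actually exposed); the sample response is the one shown above, and the sed extraction avoids needing jq:

```shell
# Fetching would look something like this (path assumed, adjust for your setup):
#   response=$(curl -s http://<fogserver>/fog/system/whoami)
# For illustration, use the sample response from the post above:
response='{"ipaddress": "275.275.275.275", "hostname": "fogserver", "osid": "1", "osname": "Redhat", "installtype": "N"}'

# Pull a single field out of the JSON without needing jq installed
hostname=$(printf '%s' "$response" | sed -n 's/.*"hostname": "\([^"]*\)".*/\1/p')
echo "$hostname"
```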
@jeremyvdv Spanning tree makes a port wait for a period of time between link up/link down before the switch will pass traffic to the machine, in case there’s a link on the switch looped back to itself.
This up/down time can be around 27 seconds in normal spanning tree mode. This would explain why the device cannot get a link or an IP during the automated portion, but picks one up once you’re at the terminal and restart the networking.
If your network is using spanning tree and you can do so, disable it; if not, it’s best to use one of the Fast/Rapid STP variants, as their up/down time is significantly reduced.
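Another common option, if your switches support it, is to enable the edge-port behavior only on the access ports facing the imaged machines. On a Cisco IOS switch, for example, that looks like this (the interface name is just illustrative):

```
interface GigabitEthernet0/1
 spanning-tree portfast
```

This skips the listening/learning delay on that port while leaving spanning tree active for the rest of the network.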
@ty900000 I just pushed another update, though it may be a little while before the artifacts are ready for testing.
I’m fairly sure the issue here has to be the FIFO. I’ve also gotten rid of the “Maybe check the fog server to ensure disk space is good to go” by providing the available disk space. It also adds the exact command that partclone is trying to use so we can see what’s going on.
2060 is just the case statement, so I don’t think it’s failing because of the case. I think it’s failing because the FIFO was still open. To combat this, I’ve added a 5 second wait to let the disk settle and release the information for the FIFO so we can remove it to recreate it later on.
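As a sketch of that settle-then-recreate sequence (the FIFO path and layout here are illustrative, not necessarily exactly what the FOS scripts use):

```shell
# Illustrative stand-in for the FIFO partclone writes through
workdir=$(mktemp -d)
fifo="$workdir/imgfifo"

mkfifo "$fifo"   # the FIFO left over from the earlier capture step
sleep 5          # give the disk time to settle and release its hold on the FIFO
rm -f "$fifo"    # now it can be removed safely
mkfifo "$fifo"   # and recreated fresh for the next partclone invocation
```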
@TOXYKILLER Here’s what I find for one of your IPs (if this is you then cool, but still a bit on the “whoa, wait a minute” side):
Source: whois.arin.net
IP Address: 11.101.149.115
Name: DODIIS
Handle: NET-11-0-0-0-1
Registration Date: 1/19/84
Range: 11.0.0.0-11.255.255.255
Org: DoD Network Information Center
Org Handle: DNIC
Address: 3990 E. Broad Street
City: Columbus
State/Province: OH
Postal Code: 43218
Country: United States
What does this mean?
Well, the DoD owns the 11.X.X.X IP space in its entirety (including 11.101.X.X).
@ty900000 The problem with using 114 is that it’s missing an argument. So the segfault is doubled due to -c/-a0, but it’s likely not due to either argument; rather, partclone segfaults because it cannot determine the file size, as that code is literally commented out. This means a raw (imager) image file will not be generated at all (/images/<imagename>/d1p<#ofpartition_to_be_cloned_in_imager_format>.img will be missing completely).
In my case the partition number was 2, and in yours it appears the partition number is 3.
I have created a fork of the partclone repository and implemented what I would think should address the issue by reimplementing the filesize display capabilities. As @Sebastian-Roth learned, simply uncommenting the line allowed partclone to work properly (as far as we could tell).
Until we patch the issue on our side (either by using our forked copy of the repository or directly implementing it as a part of the build process), this is still going to be a problem. I think we have a way to fix it, just haven’t had time to implement and have you (or anybody else who’d be willing) to test it.
@lkisser It almost sounds like there could be two hard drives in the 3620. If so, chances are the drive that is set to boot still has Windows 7 installed, and the other drive actually has the Windows 10 image on it. This is possible, but again we’re left with too few details.
@Sebastian-Roth Yeah, this isn’t something readily available in the code.
@endia If you’re at least semi familiar with code, you might look into /var/www/fog/lib/fog/bootmenu.class.php
This isn’t currently coded for, but you might even be able to do what you want using a simple (relatively speaking) hook to alter the boot menu system.
There’s a template hook already created that should help get you started. By creating a hook, you aren’t altering the core code within FOG. Hooks are meant as a means to inject custom data specifically around what you’re requesting to happen.
The hook that’s setup is located in /var/www/fog/lib/hooks/bootitem.hook.php
Ultimately what you would do is make your changes and set the active flag to true. (it’s a variable named active)
@rogalskij While I understand the slowdown is problematic, some of it seems bound to network speeds. For your tests, are you imaging a single machine or multiple at the same time? Are they on a separate network or a congested one? No matter how fast an SSD you have, these things need to be considered as well. What are the speeds of the network involved?
Please don’t take this as the only word. I just want to best understand the whole.
@george1421 Logging has not been removed, so you can easily see if an account is failing to log in for some reason (though not necessarily why it isn’t logging in).
If it’s making it to the LDAP plugin to check authentication, the request will be logged if it fails for any reason.
As it’s not making it that far, I’m more inclined to think there’s an issue before this point (at which logging likely isn’t tracking).
I think this regex would work best:
(?=^.{3,40}$)^[\w][\w0-9]*[._-]?[\w0-9]*[._-]?[\w0-9]*[._-]?[\w0-9]*[@]?[\w0-9]*([.]?[\w0-9])+$
This isn’t perfect, but should be closer to what’s needed.
Essentially, it’s looking at the final [.]?[\w0-9] and grouping it, allowing us to add as many of those as wanted.
Here’s a slightly better version, I think, as it will allow normal usernames or email addresses:
^(?:[\w\d][\w\d _\-]{3,40}|[\w\d.%+\-]+@[\w\d.\-]+\.[\w]{2,4})$
In the above, the 3-40 character limit only applies to the non-email form. With email, the sky’s the limit. However, as @george1421 stated, the user field is only 40 characters, so anything larger than the field could accept would fail.
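To sanity-check that second pattern, GNU grep’s -P (PCRE) mode can exercise it directly; the sample inputs below are mine, not from the thread:

```shell
pattern='^(?:[\w\d][\w\d _\-]{3,40}|[\w\d.%+\-]+@[\w\d.\-]+\.[\w]{2,4})$'

# Requires GNU grep (-P is the PCRE mode, not available in all greps)
check() { printf '%s' "$1" | grep -qP "$pattern" && echo match || echo no-match; }

check 'jsmith'               # plain username: match
check 'j.smith@example.com'  # email address: match
check 'ab'                   # too short and not an email: no-match
```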
Master branch is on 1.5.9-RC2
To get specifically 1.5.8 you would run:
cd /root/fogproject
git checkout -b 1.5.8 tags/1.5.8
cd bin
./installfog.sh -y
I would highly recommend not doing this in your main branch though.
Example:
git clone https://github.com/fogproject/fogproject /root/fog-1.5.8
cd /root/fog-1.5.8
git checkout -b 1.5.8 tags/1.5.8
cd bin
./installfog.sh
This way your /root/fogproject folder can be updated easily.
Thank you,
Logging was configured to try to make things easier to track.
I’ve just added the ability to select whether or not to actually write the information to disk.
They’re labelled appropriately, under FOG Configuration Page -> FOG Settings -> Logging Settings.
Thank you,
I typically see this with permissions issues.
Please try running:
sudo chown -R fog:root /images && sudo chmod -R 777 /images
I suspect your capture will work a lot better after this is performed.
Due to the security controls of the fog client, I understand what you’re wanting, and I could probably code this relatively easily. However, I think it would work better if you wrote an API script that ran on a schedule.
Essentially the request would be:
put <url>/fog/host
data:
{
    "hostIDs": <the IDs returned by: get <url>/fog/host data: {"pending": 1}>,
    "pending": 0
}
Of course a bit more output handling may be needed, but the principle is the same and could be scripted from your fog server.
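As a rough shell sketch of that request (the token header names and URL are assumptions based on common FOG API usage; the IDs are placeholders for what the GET would actually return):

```shell
# Placeholder for the IDs that the GET on <url>/fog/host with {"pending": 1}
# would have returned
ids='[1,2,3]'

# Build the PUT body that flips those hosts from pending to approved
payload=$(printf '{"hostIDs": %s, "pending": 0}' "$ids")
echo "$payload"

# Then something along these lines (tokens and URL assumed, adjust for your setup):
# curl -s -X PUT \
#   -H 'fog-api-token: <api token>' \
#   -H 'fog-user-token: <user token>' \
#   -d "$payload" http://<fogserver>/fog/host
```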
Simple.
You cannot NFS-share an NFS share.
What does this mean:
You have mounted the NFS share from the NAS onto your FOG server, and then you’re trying to export that mount over NFS again from the FOG server. So you’re NFS-sharing an NFS share, which is simply not allowed.
If you want the NFS share to work with minimal issue, you would need to setup a new Storage node, that just points to the relevant information for the NAS itself.
So, for example, you would create a new node within the group called:
NASNode
ipofnas
/volume1/images
etc…
There are likely many posts that explain setting this kind of thing up more directly.
https://forums.fogproject.org/topic/9430/synology-nas-as-fog-storage-node
Essentially that will do what you’re looking for.
You do not install the FOG software on the NAS directly; you just need to configure the node to have FTP and NFS.
You will miss a few little details, such as disk size and whatnot.
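For reference, the NFS exports FOG expects look roughly like the following (this is the general shape the FOG installer generates on a normal server; the /volume1 paths are just the Synology example from above, so adjust to your NAS):

```
/volume1/images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
/volume1/images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
```

The read-only export is used for deploys and the read-write /dev export is where captures land before being moved into place.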