Snapin script interrupted by Hostname Changer before completion (I think)
-
@fry_p said in Snapin script interrupted by Hostname Changer before completion (I think):
it says it completed in 30 seconds and returned code 0
That means your batch script is exiting before its sub-process (the setup.exe) has finished. This is not a client issue but rather an issue with your snapin. Even if you did what @michael_f suggested, you would have the exact same issue. The driver installation would claim it exited before it actually did, and the service would start before the installation had actually completed.
The client is performing exactly as it should. It runs the script and waits for that script's process ID to terminate before doing anything else. This is evidenced by the return code being 0. If the snapin did not exit properly (or did not run at all), the return code would be -1. My "guess" is that the setup.exe is actually just a poorly constructed wrapper: it does a little prep work (that takes 30 seconds) and then spawns a child process to do the actual work. The problem is that the wrapper doesn't wait for the child process to finish; it just exits as soon as the child is spawned. This is something you'd need to investigate. Perhaps try running the batch script and monitoring the processes that appear. The other option (and probably more likely) is that the network path cannot be reached (your \homead) and that takes 30 seconds or so to "attempt". It fails, runs the rest of the script (which of course can't run a setup.exe it can't find), and then exits.
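In a batch script the usual fix is `start /wait`, but the failure mode described here can be sketched in Python. This is a minimal illustration, not FOG code; the child command is a hypothetical stand-in for the real installer:

```python
import subprocess
import sys

# Hypothetical stand-in for a long-running child installer:
# a Python one-liner that sleeps for a second.
child_cmd = [sys.executable, "-c", "import time; time.sleep(1)"]

def bad_wrapper():
    """Spawn the child and return immediately, like a poorly built setup.exe.
    The caller sees exit code 0 while the child is still running."""
    return subprocess.Popen(child_cmd)

def good_wrapper():
    """Spawn the child and block until it terminates."""
    proc = subprocess.Popen(child_cmd)
    proc.wait()  # this is the step the bad wrapper skips
    return proc

p = bad_wrapper()
still_running = p.poll() is None   # True: the "wrapper" already returned, child has not
p.wait()                           # clean up the example child

q = good_wrapper()
finished = q.poll() is not None    # True: child already done when the wrapper returned
```

The snapin's return code 0 corresponds to the `bad_wrapper` case: the process the client waited on exited cleanly, while the real work was still happening in a child it never waited for.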
-
@Jbob I think I get it. On Monday, I will try to find the sub-process that actually installs the driver and not just "setup.exe". I know for a fact it has a number of sub-processes to install superfluous garbage bloatware as well, so finding the plain driver install has a silver lining.
The one thing that I don’t understand is why it works as a manual snapin deployment. I initiate the task from the GUI, reboot and it takes it from there.
-
@fry_p In this case you could try Double Driver: http://boozet.org/dd.htm
You install the driver on a computer (not your master) and save the driver with DD to a USB drive.
Afterwards you restore the driver on your master.
DD saves the driver in a way that Windows can install it via an .inf file.
When restoring the driver you define a destination where the driver files are stored. DD adds the path to the registry so that Windows can find it.
-
@fry_p Perhaps it is the network path option I suggested? Is your network share set up for public anonymous read access?
-
This sounds to me like it is a self-extracting exe that kicks off an MSI after the exe finishes extracting. So the script waits, but when the exe is done it continues; the problem is that the MSI, the actual driver install, is still running. You may be able to use 7-Zip to extract the exe, find the MSI installer inside, and run it silently instead. I have had to do that for many driver installs.
@michael_f gave a good suggestion if you don't need anything more than the driver itself, but if you need the Nvidia software as well then my suggestion may be a better option. I know some video cards are picky if they don't have their entire package installed.
-
@Jbob @ITSolutions I figured out what it was. Jbob was right on the money. I didn't have anonymous authentication permissions on the share it was on. I ran the script manually and it threw an authentication (bad UN/PW) error. Then I had a thought: why use a share in the first place? I can just put the driver installation files in the image and use the batch file to point at them after imaging! I was over-complicating it, really. I am trying this method now. Will keep you posted.
-
@fry_p Putting the files on the image is one solution - but for people making universal images, shares are a good solution.
-
@fry_p I also like to use shares because of the updating issue. If that driver ever needs updating, you have to update the image; on a share you just replace the file. I have a small Samba share on my FOG server that has installers and scripts that I use just for deployment. Just need to make sure there is no sensitive info on that share. I have also set up a site where every script pushed through FOG maps a drive with a username/password in the script before executing the installer. Not the best, but safer than an open share.
-
@ITSolutions I suppose my method is a band-aid fix of sorts. At my place of work, we are moving to standardize the models of PCs we are putting in. By the time next school year starts, about 75% of all of the PCs on the network will be Dell Optiplex 5040s. Ultimately, we are going the model-specific route for images since we only use a handful of models. I agree with what you are saying though.
-
@fry_p There are reasons for every way of doing things. Having an open share is a big security risk, and my other example of passing credentials over a snapin is not the best security practice by any means. I just wanted to give other options that some may find useful. So in terms of security, your solution is the best choice.
Model-specific is great: quicker deployments since sysprep is not needed, and easier to manage in many ways. When I worked in the schools I wish we could have gotten to that point, but part of our issue was that I was at an ISD and we supported the technology that each school chose. With 18 districts you ended up with everything under the sun. Now I am in a small community health care org and we have model-specific images for the 10 models we currently have, and are narrowing down to even less of a mix.
-
@ITSolutions said in Snapin script interrupted by Hostname Changer before completion (I think):
… my other example of passing credentials over a snap in is not the best security practice by any means.
@ITSolutions That's interesting, could you please elaborate on this a bit? Does the "net use" command send passwords in plain text?
-
@michael_f Yeah, I would use net use in the script. With the new FOG client it is a little more secure since it uses certs, but the script does still have the credentials in it. When I used it I would create a very restricted account to access the share, then remove the mapping at the end of the script. This was basically security by obscurity, but I felt it was better than a fully open share that a simple scan could find and have access to.
An idea I had but have never had time to explore is creating a bash script that updates the password for a share at set or random intervals:
On the FOG server, install Samba.
Create a user "snapin user" with whatever rights you need (probably read-only, but could be write if needed).
Create a bash script that randomly generates a password, sets it for "snapin user", and also updates a net use script in the snapin dir with the new password. Then you would push that net use script out as a snapin and just pass it the file to run on the share. So it would be:
Run with: CMD.exe
Snapin: Net Use script
Arguments: "mapped drive"\Install_program
It is complicated, but if you really need the security it can work to avoid having unchanging credentials passed to your PCs.
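The rotation idea above could be sketched roughly like this (Python rather than bash, and purely illustrative: the share path, account name, and script layout are assumptions, and the actual `smbpasswd` call is only indicated in a comment):

```python
import secrets
import string
import tempfile
from pathlib import Path

def generate_password(length: int = 20) -> str:
    """Build a random password from letters and digits."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def update_net_use_script(script_path: Path, user: str, password: str) -> None:
    """Rewrite the snapin's net use script with the new credentials."""
    script_path.write_text(
        "@echo off\r\n"
        f'net use Z: \\\\fogserver\\snapins /user:"{user}" {password}\r\n'
        "Z:\\%1\r\n"                 # run the installer passed as the snapin argument
        "net use Z: /delete /y\r\n"  # drop the mapping afterwards
    )

# In the real cron job you would also reset the Samba password here, e.g.
# by piping the new password to `smbpasswd -s "snapin user"`.
script = Path(tempfile.mkdtemp()) / "netuse.cmd"
pw = generate_password()
update_net_use_script(script, "snapin user", pw)
```

Run on a schedule, this keeps the credentials in the snapin script from ever being valid for long, which is the point of the rotation.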
-
@ITSolutions I am using network shares too, and agree with you that deleting the connection at the end and using a restricted account is quite a good idea.
But I still can’t see the security risk in putting the credentials in the script:
The script is stored on the FOG server, which should be secure.
It is, afaik, sent to the client encrypted if using the new FOG client.
On the client side the script is executed with system privileges, so a user who happens to be logged in can’t access the script? Or is the script saved in a readable temporary file?
If the credentials are sent encrypted to the server by the “net use” command (which I assume), how can somebody get access to the credentials?
Maybe I got something wrong? I would really appreciate it if someone could clarify this.
-
@michael_f With the new client you are right about it being encrypted in transit. But the snapins are transferred to the client and stored in a temp folder under the FOG install location (C:\Program Files (x86)\FOG\tmp) before being executed, and this includes the script with the credentials. The file with the credentials could remain on the client for various reasons, such as if the client gets the snapin and is immediately shut down, or the service is stopped and the temp files are not removed. Also, given that it is saved on the client, it is possible to recover the data using file recovery tools even after deletion.
I had come up with the idea of the random password with the old client, since it sent in plain text over the network. But as I didn’t have time, I never got around to working on it. Now with the new client it is less likely that someone can get the snapin file, but still very possible. It all depends on the level of security you need/want for the distribution share. If you keep the share locked down, with little or no information other than program files that can be installed, the risk is very minimal. That is where I am now; it doesn’t seem to be worth the effort. Security vs. convenience: there will always be a trade-off.
I know I am going way off into extreme-caution, almost tin-foil-hat security concerns, but these are just thoughts for if you really want to ensure that passwords are truly secure.
-
@michael_f said in Snapin script interrupted by Hostname Changer before completion (I think):
@ITSolutions I am using network shares too, and agree with you that deleting the connection at the end and using a restricted account is quite a good idea.
But I still can’t see the security risk in putting the credentials in the script:
The script is stored on the FOG server, which should be secure.
It is, afaik, sent to the client encrypted if using the new FOG client.
On the client side the script is executed with system privileges, so a user who happens to be logged in can’t access the script? Or is the script saved in a readable temporary file?
If the credentials are sent encrypted to the server by the “net use” command (which I assume), how can somebody get access to the credentials?
Maybe I got something wrong? I would really appreciate it if someone could clarify this.
It wouldn’t be difficult to snatch the script with the clear-text passwords in it from the server. It would be a matter of having a MAC (or a list of MACs) from your environment (easy), finding the FOG server’s IP (easy), and then writing a basic script to guess at the task ID. Then it’s just a matter of iterating through a most-probable range of task IDs and through a single or large list of MACs with GET requests, like this one:
http://10.2.1.11/fog/service/snapins.file.php?mac=90:B1:1C:98:03:8C||00:00:00:00:00:00:00:E0&taskid=3274
The most-probable range of task IDs is easy to determine by looking at any c:\fog.log file. Even if you couldn’t access the log to determine a most-probable range, you could just iterate through 5,000 of them and likely hit a valid one. Once a valid one is hit, the snapin download starts. An attacker wouldn’t even need to know the task’s name or any associated file names.
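The enumeration described above is trivial to script. This sketch only builds the candidate URLs (the server IP and MAC are taken from the example link above and are placeholders, not real targets); an attacker would simply issue a GET request for each one:

```python
# Placeholder values taken from the example URL above; purely illustrative.
FOG_SERVER = "10.2.1.11"

def snapin_urls(macs, taskid_start, taskid_count):
    """Yield candidate snapin-download URLs covering a range of task IDs
    for each known MAC address."""
    for taskid in range(taskid_start, taskid_start + taskid_count):
        for mac in macs:
            yield (f"http://{FOG_SERVER}/fog/service/snapins.file.php"
                   f"?mac={mac}&taskid={taskid}")

# 5,000 guesses for a single MAC, as described above.
urls = list(snapin_urls(["90:B1:1C:98:03:8C"], 3000, 5000))
```

Each URL is a single cheap request, which is why guessing through even a few thousand task IDs takes seconds.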
Plain-text passwords are never secure.
-
@ITSolutions Just throwing my 2 cents in here: do not use a plain-text password hard-coded into a file. In the next version of the client we could easily add a checkbox to snapins, “Hide snapin details in fog.log”. With that you could make the password a parameter to the batch script. For example:
RunWith: cmd.exe
RunWithArgs: /c
File: MyBatchScript.cmd
Args: MyPassword
Snapin configuration is transmitted over a secure medium, whereas the file itself is obtained via a simple HTTP download; a SHA-512 is then generated and compared against a securely transmitted checksum to ensure integrity. Now if you really wanted to be secure you’d also need to disable legacy client support, as a potential attack vector would be to make the legacy client API calls before the new client has a chance to grab the snapin information, thereby obtaining it in plain text. I’m not sure if we have such a checkbox to disable the legacy client yet, so pinging @Tom-Elliott.
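The integrity check described above amounts to something like the following sketch (the function name and sample payload are illustrative, not the client's actual code):

```python
import hashlib
import hmac

def verify_snapin(file_bytes: bytes, expected_sha512_hex: str) -> bool:
    """Hash the downloaded snapin file and compare against the checksum
    that was received over the secure channel."""
    actual = hashlib.sha512(file_bytes).hexdigest()
    # constant-time comparison avoids leaking match position via timing
    return hmac.compare_digest(actual, expected_sha512_hex.lower())

# Demo: a payload verifies against its own checksum; a tampered one does not.
payload = b"driver installer bytes"
checksum = hashlib.sha512(payload).hexdigest()
ok = verify_snapin(payload, checksum)
bad = verify_snapin(b"tampered payload", checksum)
```

Because the checksum travels over the authenticated channel while the file travels over plain HTTP, a tampered download fails the comparison and can be discarded.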
Another option:
Bake in your user share password into some SYSTEM read-only file on your image and make your batch script read it for the credentials.
-
@Jbob said in Snapin script interrupted by Hostname Changer before completion (I think):
Another option:
Bake in your user share password into some SYSTEM read-only file on your image and make your batch script read it for the credentials.
Even then, the NFS shares are readable by anyone, and one could restore the image to a Linux directory via CLI and then browse to the files.
I would urge everyone to stay away from clear-text passwords.
-
@Jbob That would mean I could securely put my arguments here:
and in the script I connect my share like
net use \\share\folder /user:%1 %2
That would be a great feature!
-
@michael_f That’s the idea. And if this can be removed from the fog logs as @Jbob said, it’d be pretty legit.
-
@Wayne-Workman Thank you for clarifying this for me!