http://bobhenderson.org/pdq-deploy-fog-imaging-happiness-take-2/
Create that in inventory, and after the machines join the domain via the FOG Client, it’ll find them in the collection and deploy! It’s that easy!
@x23piracy Usually you’d need to have PDQ fire off a PowerShell script that then calls the installer, and use the run-as option there. PDQ runs as the deployment account, to my knowledge.
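Something along these lines is what I have in mind; the account name, password handling, and installer path below are placeholders only, not anything PDQ hands you automatically:

```powershell
# Rough sketch of a wrapper script PDQ could deploy. Everything here is illustrative:
# don't hard-code real credentials; PDQ's own deploy-user settings are usually the better route.
$user = "DOMAIN\InstallAccount"                                    # assumed alternate account
$pass = ConvertTo-SecureString "NotARealPassword" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential($user, $pass)

# Launch the vendor installer under that account instead of the account PDQ itself runs as.
Start-Process -FilePath "C:\Temp\setup.exe" `
              -ArgumentList "/quiet /norestart" `
              -Credential $cred -Wait
```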
@trialanderror Incorrect.
The last RC was released in early November. November, December, and January are usually development wastelands for most products, due to the large number of holidays, outside-of-work obligations, and then the inside-of-work end-of-year stuff that needs to get done.
Developers here are working, as always, as much as their time allows. I’d expect some more news in the coming weeks, but that’s nothing more than a gut feeling on my part rather than anything concrete.
As this is an open-source project, if you’d like to see more development, you are more than welcome to jump in and try to fix open issues!
@uwpviolator Yep, I sure am calling the unattend. I’m imaging the same way I have been for a while.
Would you be able to sanitize your unattend and post it, and I’ll do a line-by-line comparison? As stated, my unattend hasn’t really been updated since the Win7 days, so I’d love to see a modern one that is known to work…
@uwpviolator Actually, I think George is right. To test this and make sure it’s none of my scripts that are acting up, I dumped the drivers into the C:\Drivers folder prior to capture, and then sysprepped with that in place. So even with the files there 100%, Windows doesn’t see them and goes on its own merry way.
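For what it’s worth, this is roughly how I’ve been checking whether anything actually got injected after a deploy; by default Get-WindowsDriver only lists third-party (out-of-box) drivers, so when the C:\Drivers stuff gets skipped the list comes back basically empty:

```powershell
# Run from an elevated prompt on a freshly deployed test machine.
# Get-WindowsDriver -Online lists only third-party drivers unless -All is given,
# so drivers injected from C:\Drivers should show up here if the unattend worked.
Get-WindowsDriver -Online |
    Select-Object ProviderName, ClassName, Date, Version, OriginalFileName |
    Sort-Object ProviderName |
    Format-Table -AutoSize
```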
I’m with @george1421 on this one, 1709 is doing something dumb again. Honestly, I think they are trying to push us more and more to MDT/WDS…or for them, SCCM.
@george1421 That’s probably it. It’s a 1709 image, as we’re always pushing the latest and greatest for whatever reason. I’m sure they’ve made it more difficult to get your drivers from anywhere but Windows Update…
@george1421 See, that’s the issue. I have that right after OOBE, as you can see below, and it still doesn’t pull. Weird, huh?
@george1421 Yep, I’ve found that before numerous times. My problem is mostly not knowing how the Unattend.xml is built, so I don’t know where to put that. Hence asking if someone had a sanitized one they’d be willing to share, so I can see the layout and go from there.
I’ve used the generators, as well as my existing Win10/Win7 ones, but still don’t really know where to put it.
Thanks!
@uwpviolator Could you share how you got OOBE to point to the drivers folder? I’m in the same boat as you, and have yet to figure out how to edit my unattend to look in C:\Drivers for Win10.
Even better, if you could share your unattend with the critical data ripped out, I’d love to see it.
@george1421 Not worth it. Security through obscurity is a joke and just a portscan away.
@george1421
There is already a FOG server at each site (a storage node), with the HQ server being the master for inventory-management purposes. That part is working great.
The issue is the number of clients that leave the site for the great unknown. The users are remote in many cases, and come into their local office or HQ maybe once a month, at best.
I’m looking to manage application versions and such remotely, similar to an MDM. This isn’t what FOG was built for, but I’m hoping to use the pre-existing tool the techs already know to do it. The idea is to have the clients report in to the HQ FOG machine via the client, see any new snapins assigned to them, and grab and go from there.
A real MDM like FileWave or Intune would be great for this, except for the cost in this case. We use PDQ on site for everything, and it works great, so we’re now looking at covering things outside of the buildings.
@george1421
All good points. 300-some devices per location, with some sites going up to 700-ish.
Now knowing that the system is talking HTTP, my plan is to handle it just like any other web server and only expose the needed ports via the DMZ firewall. The majority of our snapins will simply be scripts delivered via the snapin framework that tell the clients to download the software directly from its public source (rough sketch after this post), so I’m not overly worried about slow server-to-client communication over the WAN.
We’re not going to be using a VPN or some other solution for this. There are numerous reasons for it, but in this case we’re setting up a proof of concept to allow cloud hosting, where a VPN becomes more of an issue.
When I have a chance to set this up, I’ll report back. I think it’ll work, if I can get it all worked out.
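To make it concrete, a snapin in this model is basically just a small wrapper like the one below; the URL, silent switch, and paths are placeholders, not our actual packages:

```powershell
# Hypothetical snapin body: fetch the installer straight from a public source so the
# payload never has to cross the WAN link from the FOG server itself.
$url  = "https://downloads.example.com/app/setup.exe"   # placeholder public URL
$dest = Join-Path $env:TEMP "setup.exe"

Invoke-WebRequest -Uri $url -OutFile $dest

# Silent switches vary per installer; /S is just an example.
Start-Process -FilePath $dest -ArgumentList "/S" -Wait

Remove-Item $dest -Force
```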
Fantastic. I’m not seeing any issues with it being able to push Snapins over the public internet then, as long as the snapins are built to not depend on any internal sources. Am I missing something?
This opens up a ton of new things for us, by the way. Combining snapins with Chocolatey/OneGet makes for a great alternative at our sites that can’t do SCCM for reasons.
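As a rough example of what I mean by combining the two (assuming Chocolatey is already on the client; the package name is just an example):

```powershell
# Hypothetical snapin script: let Chocolatey pull the package from its public feed,
# so nothing has to be staged on the FOG server or an internal share.
if (-not (Get-Command choco -ErrorAction SilentlyContinue)) {
    Write-Output "Chocolatey is not installed on this client."
    exit 1
}

choco install googlechrome -y --no-progress

# Pass Chocolatey's exit code back so the snapin result reflects the install outcome.
exit $LASTEXITCODE
```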
@wayne-workman Alright, that’s well and good, but it doesn’t answer the question in the slightest. I’m wondering, again, what protocol is used to communicate from server to client, whether it’s encrypted, and whether having it communicate over the public internet is a feasible solution.
I’m hoping to be able to use a FOG server in the DMZ to deploy snapins to remote clients that do not have a consistent connection to the local LAN, and neither VPN nor DirectAccess is an option in this case.
Random question, and I’m not sure of the answer. Can the FOG Client be used to deploy snapins over the public internet, if I give the FOG server a public IP and forward some ports?
Since I’m not sure what tools are used for the client to download and sync snapins, I’m worried about security. But if it’s HTTPS or the like, I could see it working well.
Note: AD join and the like would still happen on the local LAN; this would be for management after imaging. Almost like an MDM.
This will come off as dickish, but I don’t care. It needs to be said.
A: FOG is NOT a VDI/remote desktop solution. It is an imaging solution, designed for use on the local LAN or a distributed networking environment, not over the public internet. It will not work the way you are asking it to, nor is it designed to. You’re being given a screwdriver and trying to use it to hammer nails.
B: Streaming a VHD over HTTP sounds like an ultimate fail to me, due to the latency and bandwidth involved. If you’re dealing with cheap end-user devices already, this is gonna be a nightmare. Take a look at things like LTSP, see how they get around this and the limitations they have even on a LOCAL network, and go from there.
Look at systems like Apache Guacamole to figure out how they’re doing HTML5 GUI streaming via VNC/RDP/etc. and see if you can build on that. Note, the remote session there is simply being displayed in the browser, not the OS being streamed over the internet, and there is still latency involved.
Putting out a request for someone to basically help you develop the entire project you’re trying to build an ‘eco-friendly startup’ on rubs me the wrong way, since so far you’ve offered nothing back to the group nor seem to be in a place, financially or technically, to do so.
It’s possible with many different VDI tools. Fog is not a VDI tool. CCBoot is.
For Linux, using LTSP would work easily to stream the OS to the users, and have their data on a central NFS setup or the like.
For Windows, using something like RDP is the easiest, but you have to deal with user CALs and the like because of this. There are third parties as well, like Citrix and such, but you still need to deal with licenses.
Maybe you should start with what your goal actually is. Stream the OS from a boot environment? To cheap Chromebooks? What?
@george1421 Thanks George. I understand about the IP reasons.
That site makes sense, now that I can see where the offline servicing bit had to be put. I had it in the wrong spot in mine, which meant drivers weren’t being picked up. Moved it, and it’s working great.
Thanks!
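For anyone who finds this later, the relevant chunk of the unattend ends up looking roughly like this; the architecture, key-token, and path values here are illustrative rather than copied from my production file:

```xml
<!-- Sketch of the offlineServicing pass pointing driver injection at C:\Drivers -->
<settings pass="offlineServicing">
    <component name="Microsoft-Windows-PnpCustomizationsNonWinPE"
               processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35"
               language="neutral"
               versionScope="nonSxS"
               xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">
        <DriverPaths>
            <!-- Folder dropped into the image before capture; setup scans it for .inf files -->
            <PathAndCredentials wcm:action="add" wcm:keyValue="1">
                <Path>C:\Drivers</Path>
            </PathAndCredentials>
        </DriverPaths>
    </component>
</settings>
```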
Where exactly does one put the offlineServicing pass in the unattend to make sure it gets processed? I posted before asking for someone’s sanitized one to use as an example, but no luck…
Could anyone share a sanitized version of a working Win10 Unattend.xml file that will pull from a specific folder for drivers?
I’ve been reusing my same one from Win7, and thought I had changed the offline servicing pass appropriately to find the drivers, but I’m wrong. So would anyone be able to post one I can use as a comparison to fix mine?
Thanks!