Active Directory join issue
-
@anthonyglamis No, you don’t need to create a new image for a new revision. That’s only needed if you reinstalled FOG completely, regenerating the CA certs.
-
@anthonyglamis said:
The only thing I am confused about is: every time I update my revision, will I have to create another image?
This should not be the case, unless you’re just doing it wrong - which is possible.
The new client is built on a cryptographically secure trust model. Details about it are in the wiki. If you blast your SSL certificates and CA on the server, then that trust is blasted as well.
And the new client will not accept communications from an un-trusted source. This is by design.
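For context, one way to see whether the server’s CA actually changed across a reinstall is to fingerprint it before and after. The paths below are assumptions based on a typical FOG trunk install (the on-disk CA and the DER copy the web server hands out); adjust them if your layout differs:
# Fingerprint the CA the installer keeps on disk (assumed path)
openssl x509 -in /opt/fog/snapins/ssl/CA/.fogCA.pem -noout -fingerprint -dates
# Fingerprint the copy served to clients (assumed path); the two should match
openssl x509 -inform DER -in /var/www/html/fog/management/other/ca.cert.der -noout -fingerprint
If the fingerprint changes between installs, every client that trusted the old CA will refuse communication until it trusts the new one.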
-
@Arrowhead-IT Well, I’m an idiot then, because after every revision I was reinstalling FOG. I tested 2 machines. The images were a success and the auto join to AD worked perfectly! This is going to make my life so much easier. Thanks, guys, for all the help and your time. Now I can at least help anyone else that might have AD issues. Also, for someone like me who is a newbie to Linux, I might compile a write-up to help anyone in the future.
Now on to figuring out how to store printers and have them map automatically, and I will be in serious business! Once again, thank you to everyone that replied to this thread!
-
@anthonyglamis said:
Also, for someone like me who is a newbie to Linux, I might compile a write-up to help anyone in the future.
Please do. Post it in our Tutorials section.
-
@anthonyglamis Just so I’m understanding, there wasn’t a problem at all? I know you were having a problem, but this wasn’t something FOG was doing necessarily?
-
@Tom-Elliott I’m not going to say there weren’t any small bugs that were fixed via the latest revisions. There were times where images would not even capture (or deploy, for that matter), but on a second try they would. I’m not sure about the certificates issue either; logically it makes sense to build an image that is not on the domain, install the latest Client Service, and then capture that image. Then deploy to your clients. I wasn’t always reinstalling FOG after revisions, so in theory I should have been successful once or twice.
I have successfully deployed an image to 2 laptops today, but here is what I ended up doing. I had an image I wanted to capture on a computer that was still on my domain. I uninstalled the Client Service, restarted, reinstalled the Client Service, ensured the client and server were talking (I didn’t have to check the log, as it auto joined to AD, so obviously it was working), and captured that image. It worked. I figured, who cares if I capture an image of a computer already joined to my domain, as the Client Service would rename it to a unique identifier as well as the hostname of my choice.
-
@anthonyglamis said:
I figured, who cares if I capture an image of a computer already joined to my domain, as the Client Service would rename it to a unique identifier as well as the hostname of my choice.
I’d very strongly recommend against that.
Not because what the fog client does isn’t sound or anything… but because…
Your image is now dirty. Any custom settings that were applied to that computer for that particular OU, and any particular user that logged onto that computer… those are ON your image. If that image is later renamed/rejoined on another piece of hardware, those settings float to that next system if those specific settings are not explicitly undone by the next set of policies… and then the next, and the next.
And - maybe your Active Directory setup doesn’t set any settings on clients… but my setup does. A lot of settings, in fact. Settings that are specific and unique to the individual OUs that computers are placed in. Specific policies, specific pieces of deployed software. I look after 500+ computers and I rely on AD to work, on policy to work as expected, and I cannot go around doing these things by hand on all the hosts.
The images I upload have never been on any domain, and they have never visited www.google.com or www.microsoft.com. I bring all software in via LAN or flash drive, and all updates come from our WSUS service. Chrome has never been signed into, nor has Firefox. My images are 100% built from scratch, nothing but vanilla Windows. I never install the bloatware that comes with driver downloads; I extract the driver files themselves and install them manually using the integrated Windows method. My images are absolutely 100% pure. Because of this, I can download my image at any point and just update it, and I still have a very pure image.
If your system has already been on a domain and has been dirtied, it’ll always be dirty. It’ll always have settings and behaviors that you might not expect, and you’re just asking for complications down the road regarding these things.
The new FOG Client feature to unjoin/rename/rejoin is intended for host renaming to be super smooth. Even if that permits an image that is already domain-joined to be uploaded and deployed without issue - it’s just really bad practice to do this.
-
@Wayne-Workman Honestly, you are absolutely right, but I was so happy to get an image to work while also auto joining that I was beside myself. I guess it’s back to the drawing board. I’ll create a baseline tomorrow, load a new client into it, and see if I have any luck.
-
@Wayne-Workman Glad to say I created a default master image and have successfully deployed it 3 times with auto join working flawlessly! This was a test FOG server. I am going to start to build out a master for my corporate office, and will use this template for my 17 remote locations. I plan to have a server in each communication room, as I want the image to pull from the local LAN rather than over the net. Some of our sites are in dire need of infrastructure upgrades, so I would kill the circuit deploying from a master node.
I’ll start to research this a little more, and will also work on the write up of my experience with the help of everyone that helped on this thread.
Also, when I started this project I went with Ubuntu 14.04 LTS as a platform. Would anyone suggest migrating to Ubuntu 15.10? Technically I have a stable platform, but I’m wondering how long the one I am on now will be supported.
Thanks again guys!
-
@anthonyglamis I’d actually suggest CentOS 7, or Debian.
-
@Wayne-Workman OK I can give those platforms a try. Would you suggest desktop or server?
Can I get a little clarity? I ran svn up from the trunk directory. I restarted my Linux box, as it also needed software updates. Logged back in and I am on the same revision. I thought a reboot would stop and start the FOG service, thus updating it to the latest revision. Did I miss a command?
-
@anthonyglamis After running svn up from the trunk directory, you would go into the bin directory and reinstall.
The fast way is
./installfog.sh -y
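Put together, a typical trunk update looks something like the sketch below. The /opt/fog_trunk checkout path is only an assumption for illustration; substitute wherever your svn working copy actually lives:
# Update a FOG trunk install in place (checkout path is an assumption)
cd /opt/fog_trunk   # your svn working copy
svn up              # pull the latest revision
cd bin
sudo ./installfog.sh -y   # re-run the installer non-interactively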
-
@Wayne-Workman Correct, but won’t that kill my certificate chain? I thought that was my problem all along. So let me try to wrap my head around this: update the revision, reinstall FOG, register a new client, and deploy the same image I had stored? I’ll test this now.
-
@anthonyglamis Yes, pull new revision, run installer.
The certs and CA carry over.
-
@Wayne-Workman I just updated my revision and reinstalled FOG. I tried to deploy the same image that was successful 3 times today and I received an error: “no disk passed (runPartprobe)”.
Thoughts? I checked out this thread, however I am not capturing an image, I am attempting to deploy a known good image (at least it was good before the revision upgrade and reinstall of FOG): https://forums.fogproject.org/topic/6535/windows-10-capture-deploy-woes/2
-
@anthonyglamis I think this is a bug. Tom’s doing a whole lot of work/improvements on the upload/download scripts at the moment.
-
Update. Earlier I was successful deploying an image to 3 different laptops. These were for my Austin site. I just tried to deploy the same image to another laptop for my Austin site, and the authentication errors have returned. This is kind of blowing my mind. I am on revision 6124. I’m not really sure why I was successful 3 times and now the CA chain is broken. This is interesting.
More updates. I have 2 images, both for my Austin site. One is a baseline, the other has printers already set up as TCP/IP ports. The image with the printers is failing. The log is returning authentication errors as stated above, and the hostname changer did not work either.
I decided to try the baseline image. The hostname changer worked. I have a “switch user” option and my domain is showing up as an option to log into. I try to log in and it says “The security database on the server does not have a computer account for this workstation trust relationship”. I did stage the computer in my default directory OU before deploying the image. The log is still stating that the CA cert validation failed. Could not authenticate.
-
@Wayne-Workman Is there any way to get this post categorized as “unsolved”? I am still having issues.
-
@anthonyglamis said:
“The security database on the server does not have a computer account for this workstation trust relationship”.
Check to see if your image is already bound to the domain.
Also, inside of your /opt/fog/.fogsettings file, make sure there are not two fields for
caCreated=
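As a quick sketch, you can count how many times that field appears (the file path is the one mentioned above); anything other than 1 is suspect:
grep -c 'caCreated=' /opt/fog/.fogsettings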
-
@Wayne-Workman Wayne, thank you for the reply. I apologize, but what do you mean by “bound to the domain”? This is what my .fogsettings file looks like:
# Created by the FOG Installer
# Version: 6124
# Install time: Thu 14 Jan 2016 04:05:49 PM CST
ipaddress="192.168.1.243";
interface="eth0";
routeraddress=" option routers 192.168.1.1;";
plainrouter="192.168.1.1";
dnsaddress=" option domain-name-servers 192.168.20.5; ";
dnsbootimage="192.168.20.5";
password="0ea409";
osid="2";
osname="Debian";
dodhcp="n";
bldhcp="0";
installtype="N";
snmysqluser=""
snmysqlpass="";
snmysqlhost="";
installlang="0";
donate="0";
fogupdateloaded="1"
submask=''
blexports='1'
storageLocation='/images'
storageftpuser=''
storageftppass=''
docroot='/var/www/html/'
webroot='fog/'
caCreated=''
startrange=''
endrange=''
bootfilename=''
packages='apache2 php5 php5-json php5-gd php5-cli php5-curl mysql-server mysql-client tftpd-hpa tftp-hpa nfs-kernel-server vsftpd net-tools wget xinetd sysv-rc-conf tar gzip build-essential cpp gcc g++ m4$
noTftpBuild=''
notpxedefaultfile=''