Best posts made by JJ Fullmer
-
RE: Cortana/Windows Search breaks in default profile
@csuther3 Are you copying from a profile that already has Cortana broken? Copying won’t fix it in that manner. You have to start with a fresh standard Windows profile, then customize it.
-
RE: Unable to access /fog/token.dat file
@rmishra1004 is this happening on all hosts or just this one? Have you tried hitting the ‘reset host encryption keys’ on the host in the webgui?
If you copy and paste the URL that fails into a browser, what happens? I believe you should see an error message like this: {"error":"im"}
-
RE: How does FOG select the HDD on a system for Imaging, in a multi disk system.
@FRG You can specify the disk to image to in the host settings screen with the host primary disk field
By default FOG will use the first disk it finds (I believe), or you can specify one per host.
i.e. if you wanted it to image to a second disk you could specify /dev/sdb,
or an NVMe disk like /dev/nvme0n1
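Since the difference between a whole-disk path and a partition path trips people up here, a quick illustrative sketch of Linux block-device naming (the device names below are printed as plain strings, not probed from real hardware):

```shell
# Whole-disk vs partition device naming on Linux (illustrative):
#   SATA/SAS: /dev/sda has partitions /dev/sda1, /dev/sda2, ...
#   NVMe:     /dev/nvme0n1 has partitions /dev/nvme0n1p1, /dev/nvme0n1p2, ...
# A second SATA disk is /dev/sdb; a second NVMe disk is /dev/nvme1n1.
# The Host Primary Disk field expects a whole-disk path like these:
for disk in sda sdb nvme0n1 nvme1n1; do
  echo "/dev/$disk"
done
```

The Host Primary Disk field wants the whole-disk form; partition suffixes are what you see inside tools like partclone.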
-
RE: UEFI-PXE-Boot (Asus t100 Tablet)
Ahem.
There is a way to image these with FOG.
Supposedly you should be able to just enable the network stack in the BIOS; then, if the Ethernet adapter is recognized, you can set it as a boot option or select it by hitting Esc to bring up the boot menu.
However, since that didn’t work for me, I found a different method.
It’s a little bit abstract, but not too hard, I promise, so give it a chance.

What I used (I did 64-bit EFI; substitute 32-bit versions of the .efi files if you want to do 32-bit):
- A USB hub; any hub with 3 or more ports should do. I was using a powered 4-port USB 3 hub.
- I used the StarTech USB210000S2 USB Ethernet adapter. It has the SMC LAN7500 chipset, which is the important part.
- 2 USB drives; no substantial size needed. (You may get away with one, but I used 2.)
- On the first FAT32-formatted USB drive you just need a couple of files in the root of the drive:
- The EFI driver for the USB adapter (found at http://ww1.microchip.com/downloads/en//softwarelibrary/obj-lan95xx-uefi/lan95xx_7500_uefi_driver_0.5.zip, also attached as 0_1479851633433_SmscUsbNetDriver.efi; credit where credit is due, I discovered this file via this blog post: http://www.johnwillis.com/2014/03/pxe-booting-using-usb-to-ethernet-dongle.html). I renamed it to usb.efi for simplicity later.
- ipxe.efi from the /tftpboot folder on your FOG server; copy it off with your favorite ftp/scp client (or just download the latest one straight from the FOG Project GitHub: https://github.com/FOGProject/fogproject/raw/dev-branch/packages/tftp/ipxe.efi).
- You can also put these files on the root of the C drive.
- On the second flash drive:
- Create a rEFInd EFI bootable flash drive: get the USB flash drive image from http://www.rodsbooks.com/refind/getting.html and write it to the drive via dd using a tool like Rufus (https://rufus.akeo.ie/downloads/).
- It makes a ~6MB partition that I’m not sure can be extended to fit the other files.
Now plug the USB Ethernet adapter and the flash drives into the USB hub, and plug the hub into the Asus T100’s USB port (well, technically I have a T100H, but this method also worked on a Fusion5 Chinese tablet, an RCA Cambio tablet, and the Atom and Core M versions of the Intel Compute Stick).
Now boot to the BIOS to make sure secure boot is off and the network stack is enabled. It will probably work regardless of the network stack setting, but better safe than sorry. (Note: I always seem to have to hold Shift and hit Restart from Windows to force it to boot to the UEFI firmware.)
Save changes, exit, and start tapping Esc; tapping Esc on boot should bring up a boot menu. Select the rEFInd EFI USB drive.
On the rEFInd GUI boot screen, select one of the EFI shell options. At the EFI shell, find which fs (file system) your EFI files are on (the ones put on either the second flash drive or the C drive) by running these commands:
fs0:
ls
Keep incrementing fs# (fs0:, fs1:, fs2:, etc.) until you see your ipxe.efi and USB driver files.
When you find them, run these two commands to start the PXE boot (replace usb.efi with whatever you named the driver file):
load usb.efi
ipxe.efi
iPXE should then start initializing devices.
If you use the 32-bit versions, don’t forget to set your kernel and init in the FOG GUI for that host.
Another caveat to this method is that you have to remember to change the MAC address in the FOG GUI from the USB Ethernet adapter’s MAC to the device’s Wi-Fi MAC. Sure, it’s not as smooth a system as wake-on-LAN to network boot, but as daunting as it looks, it all takes less than a minute to get booted to PXE.
If you have problems with this, you may try setting a static IP address for your adapter in DHCP and making sure it’s pointed to FOG. I have the UEFI/BIOS coexistence set up in my Windows DHCP with the policies found in the FOG wiki, and it works perfectly.
If you read all that and think that’s too much work for so many devices, well then, get a few of these USB setups. I used this method on about 30 Intel Compute Sticks (they didn’t require the rEFInd USB; they have a built-in EFI shell) and it didn’t take all that long.
In theory, I imagine it’s possible to image these with wifi, but that’s a challenge for another day.
-
RE: Cortana/Windows Search breaks in default profile
@Wayne-Workman said in Cortana/Windows Search breaks in default profile:
@Arrowhead-IT can you make a git repo with your two scripts in it, with a read me and a GNU GPLv3 license?
I made the repo where they will go.
I was trying to decide whether I wanted to make one repo or a bunch for a few other fog-snapin/image prep type scripts.
For now it’s just the one repo. I’d prefer it to be in the FOG Project group of repos, but it doesn’t have to be. I’ll link to the fog-project repos for sure though.
https://github.com/darksidemilk/Create-and-Deploy-Windows-Default-Profiles
-
RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1
@Ceregon I’ve never messed with cloning a raid array. Anything can be done, but whether or not it’s going to work with built-in stuff is a different question.
I imagine you already have VROC/VMD enabled in the BIOS on the machine where you’re deploying. I’ve never gotten to play with VROC, but I’m familiar with it; I just wasn’t able to convince management to buy me the hardware to try it a few years back.
My first guess is that /dev/md124 doesn’t exist because the RAID volume doesn’t exist yet, but it sounds like you found that in a debug session on a host you’re trying to deploy to, so that’s probably out. Still, I wonder if the VROC volume needs to be created beforehand to be deployed to; I don’t have a full understanding of when that volume is made.

My next guess would be that a RAID array is a multiple-disk system, so the image needs to be captured in multiple disk mode.
Are you having different disk sizes for these RAID volumes? Would capturing with multiple disk mode or dd be an option?
In theory a RAID is a single volume, so you may be able to capture it correctly, and it sounds like you’ve found others in the forum who have done that? The other possibility is the need for different VROC drivers in the bzImage kernel, but I feel like if that were the case, you wouldn’t be able to see the disk at all when capturing.
You could also capture in debug mode and mount the windows drive before starting the capture to see if you can read stuff?
This is from part of a postdownload script that will mount the Windows disk to the path /ntfs:
. /usr/share/fog/lib/funcs.sh
mkdir -p /ntfs
getHardDisk
getPartitions $hd
for part in $parts; do
    umount /ntfs >/dev/null 2>&1
    fsTypeSetting "$part"
    case $fstype in
        ntfs)
            dots "Testing partition $part"
            ntfs-3g -o force,rw $part /ntfs
            ntfsstatus="$?"
            if [[ ! $ntfsstatus -eq 0 ]]; then
                echo "Skipped"
                continue
            fi
            if [[ ! -d /ntfs/windows && ! -d /ntfs/Windows && ! -d /ntfs/WINDOWS ]]; then
                echo "Not found"
                umount /ntfs >/dev/null 2>&1
                continue
            fi
            echo "Success"
            break
            ;;
        *)
            echo " * Partition $part not NTFS filesystem"
            ;;
    esac
done
if [[ ! $ntfsstatus -eq 0 ]]; then
    echo "Failed"
    debugPause
    handleError "Failed to mount $part ($0)\n Args: $*"
fi
echo "Done"
Also, hot tip, once you’re in debug mode, you can run
passwd
and set a root password for that debug session. Then run
ifconfig
to get the IP. Then you can SSH into your debug session with
ssh root@<the-ip>
and put in the password you set when prompted. Then you can copy and paste this stuff, and it’s a lot easier to copy the output or take screenshots.

Another possibility could be using pre and post download scripts to fix the RAID volume on the Linux side. I found this information, https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/linux-intel-vroc-userguide-333915.pdf, but I didn’t dig into that too much.
-
RE: snappin doesn't work
Also, for debugging.
If you put the script into a function and add [CmdletBinding()], you can run it with -Debug,
which means you can add lines like Write-Debug "variable is $variable"; those will only show up when -Debug is specified and will pause for you. You can also add Write-Verbose "message"; lines, which only show up when -Verbose is specified.
i.e. to do it while still running it as a .ps1 script with arguments, you have to be a little tricky. If you made the script a .psm1, added an
export-modulemember -function funcName;
line at the end, and then ran this in a PS console:
ipmo -Force -Global \path\to\psm1;
you could then run your script as a function that you can import into any script or console and add -Debug and/or -Verbose directly when running the function:

param (
    [String]$programme,
    [switch]$debug,
    [switch]$verbose
)
function installProgramme {
    [CmdletBinding()]
    param (
        [String] $programme
    )
    $user = "install"
    $pwd = "1234500000000000000000000000000000000000000000000000000000AAAA="
    $serveur = "\\fileserver.istic.univ-rennes1.fr\partage"
    $cert = $(Get-ChildItem cert:\CurrentUser\TrustedPublisher | where {$_.Subject -eq "CN=ISTIC/ESIR Signature"})
    $tab_key = @()
    foreach ($i in $cert.SerialNumber.ToCharArray()){ $tab_key += [convert]::ToInt16($i,16) }
    $password = ConvertTo-SecureString -key $tab_key -string $pwd
    $credential = New-Object -TypeName system.management.Automation.PSCredential -ArgumentList $user, $password
    Write-Verbose "Attempting to mount $serveur...";
    #net use p: $dossier_partage /p:n /u:$($credential.GetNetworkCredential().username) $($credential.GetNetworkCredential().password)
    if (!(Test-Path -Path p:)){
        $net = new-object -ComObject WScript.Network
        $net.MapNetworkDrive("p:", $serveur, $false, $credential.GetNetworkCredential().UserName, $credential.GetNetworkCredential().password)
    }
    Write-Debug "Check if P is mounted...";
    # When launching a PowerShell script, if there were spaces in the name, it failed
    # when passing that name as an argument to Start-Process.
    # So we use the short name instead.
    $prog_court = (New-Object -ComObject Scripting.FileSystemObject).GetFile($programme).ShortPath
    $dossier_installer = $((get-item -path $programme).DirectoryName)
    write-host "$(hostname):Installer folder $($dossier_installer)"
    write-host ""
    write-host "$(hostname):launching $($programme)"
    write-host "$(hostname):launching $($prog_court)"
    #start-process -FilePath $programme -wait -NoNewWindow
    if (!(Test-Path -Path "$dossier_installer\logs_fog_install")){ New-Item -ItemType directory -Path "$dossier_installer\logs_fog_install" }
    $extension = (get-item -path $programme).Extension
    if ($extension -eq ".bat" -or $extension -eq ".cmd") {
        #write-host "$env:COMPUTERNAME: It's a bat script"
        start-process -FilePath $prog_court -wait -NoNewWindow -RedirectStandardOutput ${dossier_installer}\logs_fog_install\${env:COMPUTERNAME}_log.txt -RedirectStandardError ${dossier_installer}\logs_fog_install\${env:COMPUTERNAME}_error.txt
    }
    if ($extension -eq ".ps1") {
        #write-host "$env:COMPUTERNAME: It's a PowerShell script"
        $policy = Get-ExecutionPolicy
        Set-ExecutionPolicy AllSigned
        start-process -FilePath PowerShell -Arg $prog_court -wait -NoNewWindow -RedirectStandardOutput ${dossier_installer}\logs_fog_install\${env:COMPUTERNAME}_log.txt -RedirectStandardError ${dossier_installer}\logs_fog_install\${env:COMPUTERNAME}_error.txt
        Set-ExecutionPolicy $policy
    }
    #net use p: /delete
    $net.RemoveNetworkDrive("p:")
}
# create string to run function
$runFunc = "installProgramme $programme";
if($debug){ $runFunc += " -Debug"; }
if($verbose) { $runFunc += " -Verbose"; }
$runFunc += ";";
Invoke-Expression $runFunc;
-
RE: Cortana/Windows Search breaks in default profile
@Wayne-Workman said in Cortana/Windows Search breaks in default profile:
@Quazz That definitely needs integrated into the script.
I believe it already is integrated in the script. The only .dat files it should copy are the ntuser.dat files. But I could be wrong on that one.
I’m still planning on some serious work on this to make it only edit the ntuser.dat hive for each setting stored there instead of copying the whole thing, but I have a lot of other projects that have to take priority at the moment. I will make a note to add an explicit exclude in the copy for UsrClass.dat just to be safe.
-
RE: FOG 1.5.10 officially released
@Sebastian-Roth I have a work in progress concept for permanent links. But things have gotten busy and I haven’t had much free time to get it working in full.
Short version is that we’ll be able to create UUIDs for all pages, and those can be used as permanent links even if a page moves elsewhere in the doc.
-
RE: FOG/Powershell not copying to Win32/GroupPolicy/Adm
So this is an idea unrelated to your script syntax:
Do you have access to your Active Directory central store, and are all the computers involved in AD?
Based on what you have written, you’re using the ADM policy templates for applying Chrome policies via group policy that you had set at the AD level. If you copy the ADMX files to the central store, they’ll get copied down to each AD-joined computer automatically. Is that an option for you?
I believe that the folder you’re copying to has some extra security built into it or something; I remember reading that once upon a time. You can take the chrome.admx and GoogleUpdate.admx files from where you got the ADM files and copy them to C:\Windows\PolicyDefinitions, and that will work fine. I used to do it that way before I started just including them in the domain central store, which can be accessed (read/write) remotely via
\\domainControllerHostname\C$\Windows\SYSVOL\sysvol\domainFqdn\Policies\PolicyDefinitions
You may have to login to the domain controller and find the local folder of that share.
I think you can copy ADM files to either the central store or local store too, but I’ve read that ADMX files are the better option. I can’t remember why, but I recall it being convincing.
I hope that helps.
-
RE: Cortana/Windows Search breaks in default profile
@Arrowhead-IT Of course since it is in github now, anyone is welcome to help with commits and contributions
-
HP Z640 - NVME PCI-E Drive
Hi Friends,
We just got these new Z640 workstations and they come with these NVME 256GB SSDs plugged into a pci-e slot.
They show up in the BIOS, and when I boot to FOG and check compatibility and partition information, it says it’s compatible and all the partition info pops up, no problem. One possible issue is that it assigns it to something like /dev/nvme instead of the standard /dev/sda.
But then when I try to image or even just do a hardware inventory, FOG gives me a “HDD not found on system” error and then reboots after 1 minute.
Is this something that just isn’t supported? Do I need to enable something in a custom kernel? What am I missing here?
Thanks,
-JJ
P.S.
FOG info:
Version: svn 5676
bzImage Version: 4.3.0
bzImage32 Version: 4.3.0
-
RE: FOG/Powershell not copying to Win32/GroupPolicy/Adm
@victorkrazan6267 I just read this part after making my domain central store recommendations.
We have some non-domain computers, and I copy the ADMX files to C:\Windows\PolicyDefinitions to set the policies in local group policy. You could theoretically embed them in your image as well. You can also use the PolicyFileEditor module (https://www.powershellgallery.com/packages/PolicyFileEditor/3.0.1)
to edit the local group policy as part of that script. i.e. to set Chrome to always open PDFs externally you could do:
$machinePol = "C:\WINDOWS\system32\grouppolicy\machine\Registry.pol";
$chromeKey = "Software\policies\google\chrome";
Set-PolicyFileEntry $machinePol -key $chromeKey -ValueName "AlwaysOpenPdfExternally" -Data 1 -Type DWord;
It takes a little time to learn that module. But it’s pretty useful to have a way to script changes to local group policies.
-
RE: Troubleshooting snapins not deploying. Error code 255
I am trying to use the mapped letters from when I mount the shares.
I also just discovered how to run a command prompt as the SYSTEM user via PsExec (http://stackoverflow.com/questions/77528/how-do-you-run-cmd-exe-under-the-local-system-account),
so I’m using that method to test whether or not it should work via the FOG service.
-
RE: HP Z640 - NVME PCI-E Drive
@george1421
Yes, that’s how I got the hardware inventory to run without error.
But that doesn’t work for imaging. It acts like it’s going to work, filling partitions and whatnot, then it just says database updated and reboots like it was finished, instead of launching into partclone.
-
RE: Tablet PC hangs on bzImage
@Zerpie You can also try booting from the iPXE shell. If it isn’t built in to the tablet as a boot option (sometimes it is, sometimes it isn’t), you can make a rEFInd USB disk and add all the iPXE EFI boot options. Then you can create a startup.nsh script that will switch to the fs#: of the USB drive and boot to whichever 32-bit iPXE file ends up working. It would be tricky and still involve USB drives, but you could in theory make it work.
Another possibility would be to customize FOG’s built-in rEFInd for those tablets, if that happens to boot successfully (i.e. if boot to hard drive from the FOG menu is working). You could change the default boot settings; I believe you can add some conditions to it, and I know you can have different times of day use different default boot options. So one possibility would be to add the rEFInd EFI shell to FOG’s refind.conf boot options, make it the default during some time slot when you’re going to image, and also find a way to link a startup.nsh script. I haven’t actually tested this idea; it’s just another possibility if you want network boot to work. But all of that is moot if none of the iPXE EFI boot files get you through the bzImage32 boot.
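To make the startup.nsh idea concrete, here is a minimal hypothetical sketch; fs1: and the file name are assumptions, so substitute the fs# where your files actually live and whichever iPXE EFI build worked for you:

```
# startup.nsh -- hypothetical sketch; the EFI shell runs this automatically
# fs1: is an assumption; replace with the fs# that holds your files
fs1:
ipxe.efi
```

The EFI shell looks for startup.nsh on its search path at launch, so dropping this in the root of the volume makes the boot hands-off once the shell starts.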
-
RE: How to automatically run several .bat scripts after image deployment
@mecsr Use PowerShell instead of bat scripts; you’ll have more power. Pun intended, but it’s also just true, especially when it comes to licensing tools. Sometimes the tools just don’t have command-line options, and PowerShell has more user-friendly tools for editing registry keys and environment variables, which licenses sometimes require.
Also use the Windows System Image Manager, included in the Windows ADK, for creating an unattend.xml file.
Snapins are an excellent method.
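As a hedged illustration of the kind of thing Windows System Image Manager produces, here is a hypothetical unattend.xml fragment from the oobeSystem pass that runs one script at first logon; the path C:\setup\postimage.ps1 is an invented placeholder, not something from this thread:

```xml
<!-- Hypothetical fragment (oobeSystem pass, Microsoft-Windows-Shell-Setup
     component): runs a single PowerShell script once at first logon. -->
<FirstLogonCommands>
  <SynchronousCommand wcm:action="add">
    <Order>1</Order>
    <CommandLine>powershell.exe -ExecutionPolicy Bypass -File C:\setup\postimage.ps1</CommandLine>
    <Description>Post-image setup (hypothetical)</Description>
  </SynchronousCommand>
</FirstLogonCommands>
```

WSIM will generate and validate the surrounding component and settings elements for you, which is why it beats hand-writing the whole file.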
-
RE: HP Z640 - NVME PCI-E Drive
I am also trying the Linux live CD idea to see if it reports anything different, and whether there’s a way to change the device naming to something more standard.
-
RE: Disable snapin hashing
Here is the code I use to create a snapin after publishing a chocolatey package to my repo.
I added the hashes after the problem started, and it sometimes helps, but the behavior seems slightly unpredictable and the hash record on FOG still changes somehow.

Write-Verbose "making sure package $global:packageName exists as fog snapin";
if ( (Invoke-FogApi -uriPath "snapin/search/$global:packageName" -Method GET).count -eq 0) {
    Write-Verbose "snapin does not exist, creating new snapin";
    $snapinScript = Get-Item "path\to\chocoPkgSnapin.ps1";
    $hash = ($snapinScript | Get-FileHash -Algorithm SHA512).Hash;
    $fileSize = $snapinScript.Length;
    $json = @{
        "name"             = "$global:packageName"
        "file"             = "chocoPkgSnapin.ps1"
        "args"             = "-pkgname $global:packageName"
        "reboot"           = ""
        "shutdown"         = ""
        "runwith"          = "powershell.exe"
        "runwithArgs"      = "-ExecutionPolicy Bypass -NoProfile -File"
        "protected"        = "0"
        "isEnabled"        = "1"
        "toReplicate"      = "1"
        "hide"             = "0"
        "timeout"          = "0"
        "packtype"         = "0"
        "storagegroupname" = "default"
        "hash"             = "$hash"
        "size"             = "$fileSize"
    } | ConvertTo-Json;
    Invoke-FogApiChocoSnapin -uriPath 'snapin/new' -Method POST -jsonData $json -verbose;
} else {
    Write-Verbose "Snapin already exists";
}
Write-Verbose "Updating hashes for all snapins";
$snapinScript = Get-Item "path\to\chocoPkgSnapin.ps1";
$hash = ($snapinScript | Get-FileHash -Algorithm SHA512).Hash;
$fileSize = $snapinScript.Length;
$snapins = Get-FogObject -type object -coreObject snapin;
$snapins.snapins | Where-Object file -match 'choco' | ForEach-Object {
    $data = @{
        "id"          = "$($_.id)";
        "name"        = "$($_.name)";
        "file"        = "$($_.file)";
        "runwith"     = "powershell.exe";
        "runwithArgs" = "-ExecutionPolicy Bypass -NoProfile -File";
        "args"        = "$($_.args)";
        "protected"   = "0";
        "isEnabled"   = "1";
        "toReplicate" = "1";
        "hide"        = "0";
        "timeout"     = "0";
        "packtype"    = "0";
        "reboot"      = "";
        "shutdown"    = "";
        "size"        = "$fileSize";
        "hash"        = "$hash";
    } | ConvertTo-Json;
    Update-FogObject -type object -coreObject 'snapin' -IDofObject $_.id -jsonData $data -uri "snapin/$($_.ID)";
}
Write-Verbose 'Done!';
return;
Some of the snapins return 500 errors when I attempt to loop through them all and update their hash records.
Since that isn’t working I’m really hoping there’s some way to disable the hashing function, even if it’s some hackish way in the database or something.
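For what it’s worth, the hash my script stores is just a SHA-512 of the snapin file (that’s what Get-FileHash -Algorithm SHA512 computes), so you can at least check what value the record should hold. A minimal sketch from a Linux shell; the file content here is a made-up stand-in for a real snapin script:

```shell
# Create a stand-in snapin script and compute its SHA-512 hash,
# uppercased to match the format Get-FileHash emits on Windows.
printf 'Write-Host "hello from snapin"\n' > /tmp/chocoPkgSnapin.ps1
sha512sum /tmp/chocoPkgSnapin.ps1 | awk '{print toupper($1)}'
```

Comparing that against the hash column in the snapins table would at least tell you whether FOG is recomputing the hash or something else is rewriting the record.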