Windows FOG storage node and multicasting mess
-
@fry_p said in Windows FOG storage node and multicasting mess:
I only have Windows Server 2012 servers at each building
And those can run Hyper-V, which is included in the OS and within which you can install Linux.
FOG Storage Nodes in the 1.3.0 RC series can multicast from the storage nodes themselves, meaning your router matters much less, since multicast traffic no longer has to traverse broadcast domains in this setup.
-
@Wayne-Workman I’m not so excited about spinning up Hyper-V on each server: 1 - due to licensing and 2 - having 10+ separate virtual environments. It didn’t fly with my colleagues. They do like the idea of independent multicasting from each node. Would an OptiPlex 990 be suitable for this (with a girthy HDD, of course)?
-
@fry_p Just messing with you about the Windows storage nodes. But in the end it’s not a practical solution.
The 990s would work great for you. Depending on the total size of your images, I might even throw a small SSD in that 990 instead of the 750GB disks; if you already have the disks, recycle as needed.
For multicasting to work, the switches at your sites must support IGMP snooping so clients can join the multicast stream cleanly. Multicasting is typically blocked over WAN links (as you found out).
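If you want to sanity-check multicast on a site’s LAN before involving FOG at all, you can test with udpcast, the same tool FOG uses under the hood. A minimal sketch, assuming udp-sender/udp-receiver are installed and eth0 is the active interface on both machines:

```bash
# On the sending machine (stands in for the storage node):
# make a 50MB test file and stream it, waiting for one receiver to join.
dd if=/dev/urandom of=/tmp/mcast-test.bin bs=1M count=50
udp-sender --file /tmp/mcast-test.bin --interface eth0 --min-receivers 1

# On a receiving machine on the same LAN (stands in for a client):
udp-receiver --file /tmp/mcast-test.out --interface eth0
```

If the transfer completes, the local switches are passing multicast; if it hangs, look at IGMP snooping on the switches before blaming FOG.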
You can go the Hyper-V route as Wayne suggests. In a typical FOG image deployment the FOG server just moves files; there are no heavy CPU requirements for the FOG server. I was going to recommend an Intel NUC i3 or i5 for this setup. That coupled with a 250GB SSD would make a very nice deployment server.
Since you have 10 sites, I would configure one FOG storage node, treating it just like another client computer: build it, capture a mother image, and upload it to FOG. Then deploy that image to the 10 990s as just another client. If one fails, just spin up a new 990.
-
@fry_p said in Windows FOG storage node and multicasting mess:
They do like the idea of independent multicasting from each node. Would an OptiPlex 990 be suitable for this (with a girthy HDD, of course)?
I created a mobile FOG deployment server using an Intel NUC with a dual-core Celeron processor. For single unicast deployment it ran at about 70% of my production server’s speed.
-
@fry_p said in Windows FOG storage node and multicasting mess:
I’m not so excited about spinning up Hyper-V on each server: 1 - due to licensing and 2 - having 10+ separate virtual environments.
What licensing? Ubuntu and CentOS 7 are free, and Linux VMs in Hyper-V don’t consume any licenses at all. Look it up. One physical Windows Server Standard installation also allows for two Windows VMs in Hyper-V on that box at no extra cost. You could install 50,000 Linux VMs in Hyper-V without using a license for anything.
All 14 of our storage nodes here run in Hyper-V on 14 different machines in 14 different buildings. Dedicating more hardware when you already have everything you need wouldn’t be my choice. Not to mention the huge safety net you get from snapshotting a VM.
-
@Wayne-Workman So true, and probably the better approach.
-
@Wayne-Workman I was mistaken; I know Microsoft is usually a stickler for licensing, so I assumed the worst. That takes out one of the arguments I had. The other I am reconsidering due to the snapshot feature I hadn’t thought of. The only concern now is space. Each server is also the file server for its building. Can I selectively choose which images get replicated to other nodes? We have a vital few that would be necessary everywhere, but many superfluous specialized images not needed anywhere else. If you can answer that, I have a convincing case for my colleagues, plus I’d be sold on it.
-
@fry_p said in Windows FOG storage node and multicasting mess:
Can I selectively choose which images get replicated to other nodes?
Yes.
The way to do it is: make a storage group for every storage node you have, and then assign one storage node to each of the groups. The group names in this scenario are best named after the location they serve. Ours are named after the building abbreviations here. I think our storage group names even match our storage node names - but that doesn’t matter; it’s just how we kept it simple.
You’d also optionally set up the location plugin so that imaging stops crossing the WAN.
Then, you’d simply click the particular image you want to work with, click Storage Group, and then add the groups you want it shared with.
Keep the main server marked as primary in this area; primary is indicated by the big green checkbox.
The exact same concept applies to snapins.
This is how ours is set up at work.
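If you ever want to double-check those associations outside the web UI, you can query the database directly. A read-only sketch from memory of the FOG 1.3 schema - table names like `imageGroupAssoc` and `nfsGroups` may differ by version, so verify against your install first:

```bash
# List each image, the storage groups it replicates to, and whether that
# group is the image's primary. Table/column names are assumptions from
# the FOG 1.3 schema - adjust if your version differs.
mysql -u root -p fog -e "
  SELECT i.imageName, g.ngName AS storageGroup, a.igaPrimary
  FROM imageGroupAssoc a
  JOIN images i ON i.imageID = a.igaImageID
  JOIN nfsGroups g ON g.ngID = a.igaStorageGroupID
  ORDER BY i.imageName;"
```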
-
@george1421 said in Windows FOG storage node and multicasting mess:
In a typical FOG image deployment the FOG server just moves files; there are no heavy CPU requirements for the FOG server. I was going to recommend an Intel NUC i3 or i5 for this setup. That coupled with a 250GB SSD would make a very nice deployment server.
Here is the compromise my colleagues have come up with: we will put it to the test with a 990. If we see big benefits, we will do as @george1421 does and keep a mobile node for when we have mass imaging to do. They are still not keen on Hyper-V, and we don’t do enough mass deployment to justify having a machine in each building (they would just sit there most of the time). We just don’t do it often enough to justify the time and added complexity, I am told.
I would now like to know @george1421’s method for easily changing the IP on the node for portability. I am ignorant of how nodes work and how hard it is to change the IP, so I am open to suggestions. Thanks guys!
-
@fry_p You’d use the FOGUpdateIP script. It not only automatically reconfigures FOG with whatever IP DHCP gives it, it also automatically configures dnsmasq, so no DHCP changes are needed at whatever location you power it up in.
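Conceptually, a script like that just detects the box’s current IP and rewrites the old one wherever FOG recorded it. A rough sketch of the idea - not the actual FOGUpdateIP code - assuming a stock FOG install layout:

```bash
#!/bin/bash
# Sketch only: find the current IP and swap it for the old one in the
# places a standard FOG install records its address. The real script
# also handles dnsmasq and the database entries.
NEW_IP=$(hostname -I | awk '{print $1}')
OLD_IP=$(grep '^ipaddress=' /opt/fog/.fogsettings | cut -d"'" -f2)

if [ -n "$NEW_IP" ] && [ "$NEW_IP" != "$OLD_IP" ]; then
    sed -i "s/$OLD_IP/$NEW_IP/g" /opt/fog/.fogsettings   # installer settings
    sed -i "s/$OLD_IP/$NEW_IP/g" /tftpboot/default.ipxe  # iPXE boot script
    service apache2 restart
fi
```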
https://github.com/wayneworkman/FOGUpdateIP
And of course an OptiPlex 990 is more than you need; it would do fine. I’ve successfully run FOG on an old tower with an IDE HDD, a Pentium 4, and 256MB of slow-as-get-out RAM. It’s slower, yeah, but for a single deployment it still gave about half the performance our awesome servers at work give. That’s mostly due to compression, the gig network card, and the write-speed limitations of the target host’s HDD.
You would likely get full performance for most things with an OptiPlex 990. Remember, FOG’s biggest bottleneck is the write speed of the target host’s HDD - most just can’t keep up.
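If you want to see whether a given target’s disk really is the limit, a quick sequential write test gives a ballpark figure. A rough sketch - `oflag=direct` bypasses the page cache so the number reflects the disk itself, and the file path is just an example (point it at the drive you care about):

```bash
# Write 1 GiB straight to disk and note the MB/s dd reports at the end.
# Compare that against the throughput your FOG deployments achieve.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=1024 oflag=direct
rm -f /tmp/ddtest.bin   # clean up the test file
```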
-
@Wayne-Workman I am now trying to replicate, and I am unable to see anything on my newly set up node. When I go to look at the graph and click on it, it displays “A valid database connection could not be made”. I made sure to put in the proper SQL credentials. I tried a fresh install, but I think the problem is the management password in the node section of the GUI. I don’t know what that is supposed to be.
-
@fry_p The FOGUpdateIP project is designed for a full-blown FOG Server that is intended to be carried around from location to location, not for storage nodes.
-
@Wayne-Workman Oh OK. Dang. Going back to the node, what shall I do to remedy the replication issue? I currently do not have FOGUpdateIP installed anyway. The weird thing is that snapins are replicating fine… but no images.
EDIT: New error
[10-20-16 10:00:06 am] * Type: 2, File: /var/www/html/fog/lib/fog/fogftp.class.php, Line: 462, Message: ftp_login(): Login incorrect., Host: 10.1.34.84, Username: fog
I think it goes back to not knowing which credentials to enter.
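My working theory is that the login in that error is the `fog` system account on the node, and the replicator uses the node’s “management password” from the web UI for it. Here’s how I’m testing the combination (curl is just one convenient way to exercise an FTP login; swap in your node’s IP and stored password):

```bash
# Try the FTP login that FOG replication would use against the node.
# 'themanagementpassword' is a placeholder for the web UI's stored value.
curl --list-only "ftp://fog:themanagementpassword@10.1.34.84/"

# If the login fails, reset the fog system user's password on the node
# and set the node's management password in the web UI to match.
sudo passwd fog
```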
-
So I figured out the replication issue (not all of my passwords matched). The database issue is still there (I can’t see the graph or any info on it). My next question is how the node gets integrated into the system. How does a PC know to pull from that node instead of the main server? I am failing to understand that.
So if I have a node in a building and my main FOG server at the main site, the PC looks at the main server for “instructions” because DHCP tells it to during netboot. I don’t see anywhere I can specify where it should pull from. It is quite possible this intelligence is built in and I’m just an idiot.
-
@fry_p The missing bit of info you need is to install the location plugin on the root/master node. Then you can associate storage nodes with locations, and workstations with locations too; that way each workstation knows which node it should talk to for imaging.
-
@george1421 I just found that deep in the forums as you posted this. I had no idea this was a thing, but it is really cool! So the last piece is my database connection. I think it may be a permissions thing. I can log into my FOG database locally on the master server with a blank password. However, from the node, I am unable to connect with the same credentials. I already commented out “bind-address = 127.0.0.1” in /etc/mysql/my.cnf.
It’s probably a stupid thing, but I’m stumped.
-
@fry_p If you are using Ubuntu, you need to do something (sorry, RHEL guy here) to enable remote access to the database. I know Tom posted something just recently for another forum user. Let me see if I can find it.
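From memory, the gist on the master server is something like this (a sketch, not the exact post I’m thinking of; `fogstorage` is the account FOG typically creates for storage node access, but verify the user name and use a real password for your install):

```bash
# 1) Let MySQL listen beyond localhost. Config path assumed for Ubuntu;
#    on some versions it lives under /etc/mysql/mysql.conf.d/ instead.
sudo sed -i 's/^bind-address/#bind-address/' /etc/mysql/my.cnf
sudo service mysql restart

# 2) Allow the storage node's account to connect from other hosts.
#    'password' is a placeholder - match what the node is configured with.
mysql -u root -p -e "GRANT ALL ON fog.* TO 'fogstorage'@'%' IDENTIFIED BY 'password'; FLUSH PRIVILEGES;"

# 3) From the storage node, confirm the connection actually works:
mysql -h <master-server-ip> -u fogstorage -p fog -e 'SELECT 1;'
```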
-
@george1421 Again, just as you posted this, I found it in the wiki. I can now access the database from the node; I hope the graph in the GUI is soon to follow…
-
@fry_p Sorry, I’m running just a little slow today.
Glad you have it worked out. The remote storage node doesn’t have its own database; it uses the master node’s database, which is why it needs remote access to it.
-
@george1421 I appreciate your help though. I’ll give the GUI time to catch up, or if it still doesn’t work, make another post. Thanks everyone!