FOG Status
-
Any news on when a beta for the FOG distro will be available?
-
There’s a beta already available
Tom has been working really hard to keep up with all the bugs we can throw at him too.
-
That’s right
-
Can you link me to the distro?
-
I’m not creating the distro, so maybe communicate with Kevin directly; if he’s so willing, he can give you the link.
-
I misunderstood you… there currently isn’t a “FOG Distro” of Linux; it’s in talks. Really, I feel it is counterproductive to limit people to a single distro. I think the current installation process of FOG is simple and doesn’t need the added headache of building a distro. BUT TO EACH HIS OWN!!!
As long as I can still install FOG on my choice of Linux flavor, I will remain happy.
BUT if you are looking for the installation files of the current 0.33 beta:
Direct tarball download (must be decompressed):
[url]https://mastacontrola.com/fog_0.33b.tar.bz2[/url]
or check out the SVN:
[code]
svn co https://svn.code.sf.net/p/freeghost/code/trunk fog_0.33b
[/code]
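If the tarball route isn’t familiar, a rough sketch of downloading and unpacking it looks like this (the extracted directory name and installer path are assumptions, adjust to whatever the archive actually contains):
[code]
# grab the 0.33b tarball and decompress it
wget https://mastacontrola.com/fog_0.33b.tar.bz2
tar -xjf fog_0.33b.tar.bz2
# then run the installer from inside the extracted tree, e.g.:
# cd fog_0.33b/bin && sudo ./installfog.sh
[/code]
-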
This may not be the place to discuss this, however I’m going to throw it out there.
Any interest in building in config management abilities?
For example, Puppet? I know Razor is out there for bare metal, but I feel that FOG still has a place in terms of usability for helpdesk etc. My dream FOG setup would be such that a manifest could configure the install (correct subnet + desired services), rsync the images/kernels, and handle other bits and pieces. I have always strayed away from the node setup, as with multiple sites things get messy (all computers in one db). I am probably missing something, but I think that would be pretty sweet.
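To sketch just the rsync part of that idea (the hostname and paths below are made up purely for illustration):
[code]
# hypothetical: mirror images and boot files from the master FOG server to a remote node
rsync -avz --delete /images/ storagenode1:/images/
rsync -avz /tftpboot/ storagenode1:/tftpboot/
[/code]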
-
updog: I agree, but that approach would be Linux-centric… Though it’s a not-so-ugly way to think of snap-ins for Linux hosts.
-
Love using FOG. I used Ghost years and years ago but love FOG!!! Liked reading the comments so far - lots of different opinions. I first started using FOG about 4 years ago and have 12 nodes and 300 machines out there. The longest one up has been 800+ days running Ubuntu 11.04 - still going (no updates, no power blackouts there, touch wood… no UPS, as the sites are for public use only, so no need to fret if it all falls in a heap [B]but it never has[/B]). It still does what I need it to do after all this time… ain’t broke, don’t fix…
- Love the idea of a FOG VM image / distro, as most places these days run ESX within their corporate environment anyway, but you could always make your own VM template (CentOS minimal or any other distro, whatever flies your kite… a few config file changes and you’re away).
NFS - agree with security concerns
[QUOTE]Improve security in general, https out of the box, only serve images that have active tasks, etc.[/QUOTE]
Couldn’t you place all images in a holding directory outside of “/images” and, once a task has been created, move the image file into place for imaging and back again after finishing?
just trying to point out that instead of reinventing the wheel, just give it a wheel alignment - it might be putting more lipstick on a pig but in my humble opinion it’s the best damn pig at the show
-
[quote=“Muppet, post: 20820, member: 20418”]couldn’t you place all images in a holding directory outside of “/images” and once a task been created move the image file for imaging and back again after finishing?[/quote]
Why not just create a symbolic link? Just add and remove it when necessary?
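Purely as an illustration of that (the holding directory and image name here are hypothetical, not how FOG currently lays things out):
[code]
# keep images in a non-exported holding directory, e.g. /images_store
# link one into the NFS-exported /images only while its task is active
ln -s /images_store/win7base /images/win7base   # task created
rm /images/win7base                             # task finished
[/code]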
-
I guess I’m not fully understanding what’s wrong here.
NFS is a security concern simply because we’ve assumed chmod 777 on the directory and, in the exports file, given rw to /images/dev. This could be fixed by adding/changing NFS permissions so they’re tied to a user on the FOG server, authenticated within the pxelinux.cfg/default (or generated PXE) file. Just add the username and password during the creation of that file, which the fog script then uses to “authenticate” the user to the NFS share. Then the permissions could be rw for the entire /images directory. Create the file in /images/dev as per usual, and afterwards move (mv) it down to the /images directory.
This would remove the need for FTP, unless that’s how you still want image replication between storage nodes.
I’m doing the best with what I’ve got right now, but I haven’t the time to figure all of this out quite yet.
-
I wasn’t really understanding what the issue was myself. Now that you’ve outlined a solution I can see the problem. My question is: if you restrict the NFS export to a single user, where do you put the user info for the PXE boot upload or download connection? The pxelinux.cfg/default file is just for the PXE menu (debug, registration, etc.), right?
-
NFS doesn’t do any auth by itself; it can use Kerberos, but it doesn’t have a concept of “logging in”. Hence the use of FTP in the first place to “move” the image. NFS can restrict the IPs connecting to/accessing it, though, but that’s all you get. And I wouldn’t use Kerberos just for FOG.
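For reference, the IP restriction is just the host field in /etc/exports; something along these lines (the subnet and options are examples, not necessarily what FOG ships with):
[code]
# /etc/exports - limit the FOG shares to one subnet instead of the whole world
/images      192.168.1.0/24(ro,sync,no_wdelay,no_root_squash,insecure)
/images/dev  192.168.1.0/24(rw,sync,no_wdelay,no_root_squash,insecure)
[/code]
Run exportfs -ra afterwards to reload the export table.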
-
Any idea when 0.33 will be in final release?
-
Nope, it’s still heavily in the beta phase. I bet when we do have an idea we will post it on the forums or on the main site letting everyone know we are close. There is a thread you are more than welcome to monitor; it can be located here -> [url]http://fogproject.org/forum/threads/latest-fog-0-33b.6476/[/url]