Script to install Samba with settings for FOG
-
Tom is currently building CIFS support into an init and kernel.
I’ll be able to test this shortly.
Which brings up other questions about how permissions and users and groups should be structured, both directory permissions and samba permissions.
Obviously /images would be read/execute only, and only for a “download” user…
/images/dev would be read/write/execute… so, for local users, I am suggesting three accounts:
fog
fogupload
fogdownload
and a group: fogsamba
All three of those would go into that group, and permissions on /images could be:
[CODE]groupadd fogsamba
usermod -a -G fogsamba fog
usermod -a -G fogsamba fogupload
usermod -a -G fogsamba fogdownload
chown -R fogupload:fogsamba /images
chmod -R 740 /images[/CODE]
I’m still very new to permissions… FEEL FREE to critique me! I might learn something!
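Since I asked for critique: here is a minimal way to sanity-check what those mode bits actually grant, using a throwaway directory instead of the real /images (no root needed; assumes GNU stat):

```shell
# Sketch: verify what "chmod 740" actually grants, on a scratch
# directory rather than the real /images.
dir=$(mktemp -d)
chmod 740 "$dir"
# Octal mode should now read 740: owner rwx, group r--, others ---
stat -c '%a' "$dir"    # prints: 740
rmdir "$dir"
```

One thing worth checking: with 740 the group has read but not execute on the directory, so fogsamba members could list names in /images but not actually enter it or open files inside — 750 may be closer to the intent.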
-
@cspence said:
If we run authentication for FOG through Kerberos, uploads could prompt for a password to mount the share before imaging.
I disagree with this, because it would inhibit automated uploads and downloads via cron-style deployments.
Some people use FOG as a disaster recovery tool, and take regular uploads of servers and user computers. If they are not able to automate the upload / download process, then FOG is no longer a viable option for their usage.
Credentials must be passed to the client. I was asking Tom about this, and he’s thinking about doing a PHP query to get the credentials.
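On the client side, that PHP-query idea might look something like this — purely hypothetical, since the getcreds.php endpoint and the “user:pass” response format are illustrative, not real FOG code:

```shell
# Hypothetical sketch of fetching share credentials at boot instead of
# hard-coding them. The endpoint and response format are made up:
# creds=$(wget -q -O - "http://${web}service/getcreds.php?mac=$mac")
creds="fogupload:s3cret"    # stand-in for the wget response above
smbuser=$(echo "$creds" | cut -d':' -f1)
smbpass=$(echo "$creds" | cut -d':' -f2)
mount_opts="username=$smbuser,password=$smbpass"
echo "$mount_opts"    # prints: username=fogupload,password=s3cret
```

The mount line would then use `-o "$mount_opts"` rather than a literal password baked into the init.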
-
Kerberizing samba will not get in the way of this. If a job needs to be automated, a read-only account can be used.
-
Additionally, making the /images directory readable to ‘everyone’ would create security issues.
For those who upload images that may contain confidential and sensitive material, allowing the images directory to be accessible by anyone on the network would allow an intruder to copy the images and restore them via FOG… Even if FOG isn’t accessible via the internet, and without MAC address network authentication, anyone could walk in with a laptop, connect to WiFi, and download the images, or plug into a network port and download the images.
Therefore, a ‘fogdownload’ user must be used for read-only access.
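A hypothetical smb.conf share definition along those lines — the names follow the fogsamba proposal above and are not part of any official FOG configuration:

```ini
; Sketch: read-only [images] share for deploys, writable dev share for
; uploads. User names follow the fogsamba proposal above (illustrative).
[images]
   path = /images
   valid users = fogdownload
   read only = yes
   guest ok = no

[images_dev]
   path = /images/dev
   valid users = fogupload
   read only = no
   guest ok = no
```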
-
@cspence said:
Kerberizing samba will not get in the way of this. If a job needs to be automated, a read-only account can be used.
Kerberizing? Can that be done on a Linux machine? Say, for instance, the FOG admin has no Windows servers? This is the case for many, many small businesses in the U.S. and in countries in South America that can’t afford Windows Server.
-
Exactly.
This setup only allows us to improve the integrity of the /images directory over the current setup while confidentiality is still an issue. Then again, if you’re worried about confidentiality, you shouldn’t be doing deployments using FOG or any unencrypted imaging system.
-
@cspence said:
Exactly.
This setup only allows us to improve the integrity of the /images directory over the current setup while confidentiality is still an issue. Then again, if you’re worried about confidentiality, you shouldn’t be doing deployments using FOG or any unencrypted imaging system.
Linux supports encrypted directories… I use them on my laptop. If a FOG administrator wanted, he could create a /images directory during Linux installation and make it encrypted.
-
@Wayne-Workman said:
@cspence said:
Kerberizing samba will not get in the way of this. If a job needs to be automated, a read-only account can be used.
Kerberizing? Can that be done on a Linux machine? Say for instance the FOG admin has no windows servers? This is the case for many, many small businesses in U.S. and in countries in South America that can’t afford Windows Server.
Kerberos is an MIT thing, not a Microsoft thing. Also, if you want to emulate Active Directory, there’s always LDAP/Kerberos.
-
Well I don’t know anything about Kerberos… that’d be up to you guys.
-
Basically, you don’t have credentials flying around in the clear. You use tickets.
-
@cspence said:
Basically, you don’t have credentials flying around in the clear. You use tickets.
That sounds good.
I was just outlining how some use FOG… didn’t mean to ruffle feathers at all.
Some people do upload images with sensitive stuff on them…
and some people do automated uploads and downloads…
Those are the two main points I wanted to convey.
-
@Wayne-Workman said:
@cspence said:
Basically, you don’t have credentials flying around in the clear. You use tickets.
That sounds good.
I was just outlining how some use FOG… didn’t mean to ruffle feathers at all.
Some people do upload images with sensitive stuff on them…
and some people do automated uploads and downloads…
Those are the two main points I wanted to convey.
Don’t sweat it. Tom and I were talking these points over just a moment ago.
-
OK! so…
Good news and bad news…
GOOD NEWS:
Tom integrated CIFS support into the inits and kernels within a matter of HOURS… wow!
When I turn OFF NFS on my FOG server and then do a “debug download”,
I can successfully issue a mount command via CIFS to the /images directory.
I can then go into that directory and see my images, make files, delete files, etc.
BAD NEWS:
The script changes in my earlier post did not work…
So… I hard-coded everything into this file:
[CODE]/svn/trunk/src/buildroot/package/fog/scripts/bin/fog.checkin[/CODE]
and I was using this command for mounting, more or less:
[CODE]mount -t cifs -o username=root,password=PASSWORDHERE //10.0.0.3/images /images 2>/tmp/mntfail;[/CODE]
and it would SEEM that it’s not using that command to mount… the error says “failed to mount on 10.0.0.3:/images blah blah”, and I’m thinking that error is generated from the $storage variable and isn’t actually the output from my actual command to mount.
So… this begs the question… why can I issue the command to mount inside a debug download, but the regular download task fails?
I’m convinced that somehow it’s not using the commands that I wrote into the aforementioned file.
Here’s the file as it is… I just changed my password. Note that those mounting commands DO work if I issue them manually.
[CODE]#!/bin/bash
. /usr/share/fog/lib/funcs.sh
RUN_CHKDSK="";
HOSTNAME_EARLY="0";
OS_ID_WIN7="5";
OS_ID_WIN8="6";
for arg in $(cat /proc/cmdline); do
    case "$arg" in
        initsh)
            ash -i;
            ;;
        nombr)
            nombr=1;
            ;;
        *)
            ;;
    esac
done
clear;
displayBanner;
#setupDNS $dns;
osname="";
mbrfile="";
determineOS "$osid";
macWinSafe=$(echo $mac | sed 's/://g');
cores=$(grep "core id" /proc/cpuinfo | sort -u | wc -l);
sockets=$(grep "physical id" /proc/cpuinfo | sort -u | wc -l);
cores=$((cores * sockets));
arch=$(uname -m);
if [ "$cores" == "0" ]; then
    cores=1;
fi
if [ "$chkdsk" == "1" ]; then
    RUN_CHKDSK="-x";
fi
if [ "$hostearly" == "1" ]; then
    HOSTNAME_EARLY="1";
fi
if [ "$mc" == "yes" ]; then
    method="UDPCAST";
elif [ "$mc" == "bt" ]; then
    method="Torrent-Casting";
else
    method="NFS";
fi
debugPause;
#fdisk -l &> /tmp/fdisk-before;
echo "";
dots "Checking Operating System"
echo $osname;
dots "Checking CPU Cores"
echo $cores
echo "";
dots "Send method"
echo $method
blGo="0";
nfsServerName="";
if [ "$mode" == "clamav" ]; then
    dots "Checking In";
    queueinfo=$(wget -q -O - "http://${web}service/Pre_Stage1.php?mac=$mac&avmode=$avmode" 2>/dev/null);
    echo "Done";
    debugPause;
    dots "Mounting Clamav";
    if [ ! -d "/opt/fog/clamav" ]; then
        mkdir -p /opt/fog/clamav 2>/dev/null;
    fi
    #mount -o nolock,proto=tcp,rsize=32768,wsize=32768,intr,noatime $clamav /opt/fog/clamav;
    mkdir /opt
    mkdir /opt/fog
    mkdir /opt/fog/clamav
    mount -t cifs -o username=root,password=PASSWORDHERE //10.0.0.3/images /opt/fog/clamav;
    echo "Done";
    debugPause;
    dots "Adding clamav to path";
    if [ -d "/opt/fog/clamav/bin" ] && [ -d "/opt/fog/clamav/sbin" ]; then
        export PATH=$PATH:/opt/fog/clamav/bin:/opt/fog/clamav/sbin 2>/dev/null;
    else
        handleError "Cannot find clamav binaries to run task.";
    fi
    echo "Done";
    debugPause;
fi
if [ "$type" == "up" ]; then
    dots "Checking In"
    queueinfo=$(wget -q -O - "http://${web}service/Pre_Stage1.php?mac=$mac&type=$type" 2>/dev/null);
    echo "Done";
    debugPause;
    dots "Mounting File System"
    mkdir /images 2>/dev/null;
    #mount -o nolock,proto=tcp,rsize=32768,wsize=32768,intr,noatime $storage /images &> /dev/null;
    mkdir /images
    mount -t cifs -o username=root,password=PASSWORDHERE //10.0.0.3/images /images &> /dev/null;
    if [ "$?" == 0 ]; then
        echo "Done";
    else
        echo "Failed";
        handleError "Unable to mount NFS";
    fi
    debugPause;
elif [ "$type" == "down" ] && [ "$capone" != "1" ]; then
    mac64=$(getMACAddresses | base64);
    dots "Attempting to send inventory";
    doInventory 2>/dev/null;
    poststring="mac=${mac64}&sysman=${sysman64}&sysproduct=${sysproduct64}&sysversion=${sysversion64}&sysserial=${sysserial64}&systype=${systype64}&biosversion=${biosversion64}&biosvendor=${biosvendor64}&biosdate=${biosdate64}&mbman=${mbman64}&mbproductname=${mbproductname64}&mbversion=${mbversion64}&mbserial=${mbserial64}&mbasset=${mbasset64}&cpuman=${cpuman64}&cpuversion=${cpuversion64}&cpucurrent=${cpucurrent64}&cpumax=${cpumax64}&mem=${mem64}&hdinfo=${hdinfo64}&caseman=${caseman64}&casever=${casever64}&caseserial=${caseserial64}&casesasset=${casesasset64}";
    invres="";
    while [ "$invres" == "" ]; do
        invres=$(wget -O - --post-data="$poststring" "http://${web}service/inventory.php" 2>/dev/null);
        echo "$invres";
    done
    debugPause;
    dots "Checking In";
    while [ "$blGo" == "0" ]; do
        if [ "$capone" != "1" ]; then
            if [ "$mc" != "yes" -a "$mc" != "bt" ]; then
                queueinfo=$(wget -q -O - "http://${web}service/Pre_Stage1.php?mac=$mac&type=$type" 2>/dev/null);
                blPass=$(echo $queueinfo | grep "##");
                waittime=0;
                while [ ! -n "$blPass" ]; do
                    echo -n " * $queueinfo (";
                    sec2String "$waittime";
                    echo ")"
                    queueinfo=$(wget -q -O - "http://${web}service/Pre_Stage1.php?mac=$mac&type=$type" 2>/dev/null);
                    blPass=$(echo $queueinfo | grep "##");
                    sleep 5;
                    waittime=$(expr $waittime "+" 5);
                done
                echo "Done";
                debugPause;
                directive="${queueinfo:3}";
                if [ ! "$directive" = "GO" ]; then
                    tmpStorageIp=$(echo $queueinfo | cut -d'@' -f2 2>/dev/null);
                    tmpStorage=$(echo $queueinfo | cut -d'@' -f3 2>/dev/null);
                    tmpName=$(echo $queueinfo | cut -d'@' -f4 2>/dev/null);
                    if [ "$tmpStorage" != "" -a "$tmpStorageIp" != "" ]; then
                        storage=$tmpStorage;
                        storageip=$tmpStorageIp;
                        nfsServerName=$tmpName;
                    else
                        handleError "Error determining storage server!";
                        exit 1;
                    fi
                    dots "Using Storage Node"
                    echo "$nfsServerName"
                    debugPause;
                fi
            else
                queueinfo=$(wget -q -O - "http://${web}service/mc_checkin.php?mac=$mac&type=$type" 2>/dev/null);
                blPass=$(echo $queueinfo | grep "##");
                echo "Done";
                waittime=0;
                while [ ! -n "$blPass" ]; do
                    echo -n " * $queueinfo (";
                    sec2String "$waittime"
                    echo ")"
                    queueinfo=$(wget -q -O - "http://${web}service/mc_checkin.php?mac=$mac&type=$type" 2>/dev/null);
                    blPass=$(echo $queueinfo | grep "##");
                    sleep 5;
                    waittime=$(expr $waittime "+" 5);
                done
                if [ "$mc" == "bt" ]; then
                    dots "Using image"
                    # download $img.torrent file
                    wget -q -O /tmp/$img.torrent http://${web}/service/torrent.php?torrent=$img;
                    ctorrent /tmp/$img.torrent -x > /tmp/filelist.txt;
                    torrentDownloadSize=$(cat /tmp/filelist.txt | grep "Total:*" | awk '{print $2}');
                    echo "$img";
                    dots "Size of image to download"
                    echo "$torrentDownloadSize MB";
                    debugPause;
                fi
            fi
            dots "Mounting File System";
            mkdir /images $debugstring 2>/dev/null;
            #mount -o nolock,proto=tcp,rsize=32768,intr,noatime $storage /images 2>/tmp/mntfail;
            mkdir /images
            mount -t cifs -o username=root,password=PASSWORDHERE //10.0.0.3/images /images 2>/tmp/mntfail;
            mntRet="$?";
            if [ ! "$mntRet" == "0" ] && [ ! -f "/images/.mntcheck" ]; then
                blame=$(wget -q -O - "http://${web}service/blame.php?mac=$mac&type=$type" 2>/dev/null);
                if [ ! "$blame" == "##" ]; then
                    echo "Failed";
                    echo "";
                    echo "Error during failure notification: $blame";
                    while [ ! "$blame" == "##" ]; do
                        blame=$(wget -q -O - "http://${web}service/blame.php?mac=$mac&type=$type" 2>/dev/null);
                        if [ ! "$blame" == "##" ]; then
                            echo $blame;
                        fi
                        sleep 5;
                    done
                else
                    echo "Failed";
                    echo "";
                    cat /tmp/mntfail;
                    echo "";
                fi
                sleep 5;
            else
                echo "Done";
                blGo="1";
            fi
            debugPause;
        fi
    done
else
    echo "Done";
    dots "Mounting File System";
    mkdir /images $debugstring 2>/dev/null;
    #mount -o nolock,proto=tcp,rsize=32768,intr,noatime $storage /images 2>/tmp/mntfail;
    mount -t cifs -o username=root,password=PASSWORDHERE //10.0.0.3/images /images 2>/tmp/mntfail;
    echo "Done";
fi
# Local Variables:
# indent-tabs-mode: t
# sh-basic-offset: 4
# sh-indentation: 4
# tab-width: 4
# End:[/CODE]
-
So I’ve decided the mount validation probably just doesn’t like the output from my mount command or something, and it THINKS it failed… when in fact it succeeded…
So I’m gonna jimmy-rig this script so that it’s impossible to fail… rip out everything that has anything to do with “failing”.
And THEN we will see if it fails or not…
-
GOOD NEWS AND BAD NEWS… AGAIN!!!
Bad news:
did a debug download, was fiddling around with mounting…
did this:
[CODE]rm -rf /images[/CODE]
before this:
[CODE]umount /images[/CODE]
and all of my images and data … GONE!!! MOTHER F@&*$#
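A guard like this would have refused the rm while the share was still attached — a minimal sketch that reads /proc/mounts directly, so it is Linux-specific:

```shell
# Sketch: refuse to delete a directory while something is still mounted
# on it, by checking /proc/mounts first (Linux-specific).
safe_remove() {
    if grep -qs " $1 " /proc/mounts; then
        echo "$1 is still mounted; umount it first" >&2
        return 1
    fi
    rm -rf "$1"
}

# Demo on a scratch directory rather than the real /images:
demo=$(mktemp -d)
safe_remove "$demo" && echo "removed $demo"
```

With this in place, `safe_remove /images` on a still-mounted share prints a warning and touches nothing.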
Good news:
Restored my images from backup… was a process…
Ran another debug task.
Created the /images directory manually at the CLI:
[CODE]mkdir /images[/CODE]
Mounted the remote images directory via the CLI (ensured NFS was NOT running first):
[CODE]mount -t cifs -o username=root,password=PASSWORDHERE //10.0.0.3/images /images[/CODE]
Issued the fog command:
[CODE]fog[/CODE]
and BADA BING bada BOOM
mounting passed and imaging finished without incident.
So… Conclusion… something is going wrong with mounting using the fog.checkin script. I don’t know what it is… I removed all the failure code and replaced it with the success code for EVERY section!
When I do the mount BEFORE the fog command, when the fog command tries to mount, I suppose it errors out, but is still somehow able to succeed?? Maybe because I made failing impossible??? I HAVE NO IDEA
BUT,
I JUST IMAGED USING SMB !!!
WOOOOOOOOT :d
Now, as far as SPEED goes, I was running through a 1Gbps switch.
The source HDD was SATA 2 (3Gbps) and the destination was the same (I think). The target host has a 2.93 GHz Core 2 Duo processor with, I think, DDR2 RAM.
I saw speeds at roughly 3.25 GB / min in the partclone window.
According to Google:
3.25 (gigabytes / minute) = 0.433333333 Gbps
Using the EXACT same hardware, but running the image download via NFS (ensuring SMB is turned OFF),
I saw the same sustained speeds of 3.25ish GB / min.
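That conversion is easy to double-check — GB/min to Gbps is just ×8 bits per byte, ÷60 seconds per minute:

```shell
# 3.25 GB/min -> gigabits per second: 8 bits per byte, 60 s per minute.
awk 'BEGIN { printf "%.4f Gbps\n", 3.25 * 8 / 60 }'
# prints: 0.4333 Gbps
```

So the imaging rate uses a bit under half of the 1 Gbps link, which is why the disks (not SMB vs. NFS) are likely the bottleneck here.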
Could others please validate that there are no performance hits?
I’m using OLD equipment to test with.
-
Just in case you’re interested: https://github.com/cspenceiv/fog-imager
I have been building a simplified set of imaging scripts. They’ll be fairly similar to what is in use now, but hopefully much easier to read and understand. I’m attempting to get away from a lot of things we currently do.
As of right now, I only have the upload script functional (on an experimental basis). That upload script does not support xfs and jfs (and others that aren’t supported officially by FOG yet). Additionally, it only does multi-disk, multi-partition creates for everything on a system.
Resizability is something I’ll look at later once the basics are taken care of here.
Right now, my test platform is an Arch live disk I built specifically for this testing (that way I’m not testing the buildroot image at the same time). Of course, this is also why I don’t have xfs and jfs support right now (big whoop for this testing).
…and of course, I’m just using samba shares.
-
@cspence Very nice work. Have you seen any performance hits during your testing?
-
@Wayne-Workman said:
@cspence Very nice work. Have you seen any performance hits during your testing?
At this point, it’s all about building a working prototype with VMs. But my other testing didn’t show any slow down using samba. Then again, I’m just using plain SATA drives.
-
This returns the default-route interface’s IP address without relying on an internet connection:
[CODE]default_info=$(ip route list | awk '/^default/ {print $5}')
default_info=$(ip -o -f inet addr show $default_info | awk '{print $4}' | cut -f1 -d"/")
echo $default_info[/CODE]
-
Topic moved to Tutorials simply because of the Samba setup script in the OP.