Script to install Samba with settings for FOG
-
@cspence said:
Oh, I see! You were just trying to share your images directory. I’ve been talking about replacing NFS completely with samba. It would take care of some of the biggest security issues.
Bad performance…
-
Everything I’ve been reading has said there’s no difference, because the hardware becomes the limitation long before the protocol does. Any sources?
-
I feel a throughput test coming on…
-
If so, it won’t be me right now.
Also, in my experience, transferring from a Linux-based Samba server is just as quick.
-
Oh and FYI, Fedora 22 DOES NOT like the smbpasswd -s argument at all!
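For context, the usual non-interactive form is to pipe the password into smbpasswd with -s (read from stdin) and -a (add the user); that is the invocation that reportedly misbehaves on Fedora 22. A minimal sketch, assuming a local “fog” account already exists (the account name and password are placeholders):
[CODE]# Placeholder account and password; assumes the local "fog" user already exists.
FOG_SMB_PASS='changeme'
# -s reads the new password from stdin (given twice for confirmation),
# -a adds the account to the Samba password database.
printf '%s\n%s\n' "$FOG_SMB_PASS" "$FOG_SMB_PASS" | smbpasswd -s -a fog[/CODE]
If -s is what breaks, running smbpasswd -a fog interactively is the obvious fallback.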
-
@cspence The difference is IO access. NFS needs more IO access than CIFS. If you have good NAS/SAN/local storage, prefer NFS. If you have local/NAS storage on SATA II, take whichever you want.
Tests with: HP MSA, HP3par, EMC Clarion, EMC VNX, Synology RS2414 and Netgear Readynas RN3220
-
@ch3i said:
@cspence The difference is IO access. NFS needs more IO access than CIFS. If you have good NAS/SAN/local storage, prefer NFS. If you have local/NAS storage on SATA II, take whichever you want.
Tests with: HP MSA, HP3par, EMC Clarion, EMC VNX, Synology RS2414 and Netgear Readynas RN3220
Now you’ve got me reading loads of recent material on the subject…
-
Boys…
Check out my recent post here… I think I’m on a roll…
https://forums.fogproject.org/topic/5176/smb-setup-for-external-storage
-
I just attempted to image via SMB. It was a no-go. Tom says that CIFS support must be implemented in the kernel and init.
However, I did learn that NFS will accept a command designed for SMB for mounting.
Anyway, I edited the fog.checkin file, commented out the NFS stuff, and included commands for mounting via SMB.
At this point I just hard-coded my username and password, but in the future, variables for the user and password should be used.
[CODE]#!/bin/bash
. /usr/share/fog/lib/funcs.sh
RUN_CHKDSK="";
HOSTNAME_EARLY="0";
OS_ID_WIN7="5";
OS_ID_WIN8="6";
for arg in `cat /proc/cmdline`; do
    case "$arg" in
        initsh)
            ash -i;
            ;;
        nombr)
            nombr=1;
            ;;
        *)
            ;;
    esac
done
clear;
displayBanner;
#setupDNS $dns;
osname="";
mbrfile="";
determineOS "$osid";
macWinSafe=`echo $mac|sed 's/://g'`;
cores=$(grep "core id" /proc/cpuinfo|sort -u|wc -l);
sockets=$(grep "physical id" /proc/cpuinfo|sort -u|wc -l);
cores=$((cores * sockets));
arch=$(uname -m);
if [ "$cores" == "0" ]; then
    cores=1;
fi
if [ "$chkdsk" == "1" ]; then
    RUN_CHKDSK="-x";
fi
if [ "$hostearly" == "1" ]; then
    HOSTNAME_EARLY="1";
fi
if [ "$mc" == "yes" ]; then
    method="UDPCAST";
elif [ "$mc" == "bt" ]; then
    method="Torrent-Casting";
else
    method="NFS";
fi
debugPause;
#fdisk -l &> /tmp/fdisk-before;
echo "";
dots "Checking Operating System"
echo $osname;
dots "Checking CPU Cores"
echo $cores
echo "";
dots "Send method"
echo $method
blGo="0";
nfsServerName="";
if [ "$mode" == "clamav" ]; then
    dots "Checking In";
    queueinfo=`wget -q -O - "http://${web}service/Pre_Stage1.php?mac=$mac&avmode=$avmode" 2>/dev/null`;
    echo "Done";
    debugPause;
    dots "Mounting Clamav";
    if [ ! -d "/opt/fog/clamav" ]; then
        mkdir -p /opt/fog/clamav 2>/dev/null;
    fi
    #mount -o nolock,proto=tcp,rsize=32768,wsize=32768,intr,noatime $clamav /opt/fog/clamav;
    # CIFS mount in place of the NFS mount above; credentials are hard-coded for now.
    mount -t cifs $clamav -o username=root,password=PasswordHere /opt/fog/clamav;
    echo "Done";
    debugPause;
    dots "Adding clamav to path";
    if [ -d "/opt/fog/clamav/bin" ] && [ -d "/opt/fog/clamav/sbin" ]; then
        export PATH=$PATH:/opt/fog/clamav/bin:/opt/fog/clamav/sbin 2>/dev/null;
    else
        handleError "Cannot find clamav binaries to run task.";
    fi
    echo "Done";
    debugPause;
fi
if [ "$type" == "up" ]; then
    dots "Checking In"
    queueinfo=`wget -q -O - "http://${web}service/Pre_Stage1.php?mac=$mac&type=$type" 2>/dev/null`;
    echo "Done";
    debugPause;
    dots "Mounting File System"
    mkdir /images 2>/dev/null;
    #mount -o nolock,proto=tcp,rsize=32768,wsize=32768,intr,noatime $storage /images &> /dev/null;
    # CIFS mount for uploads; note the /images mount point must be given here.
    mount -t cifs $storage -o username=root,password=PasswordHere /images &> /dev/null;
    if [ "$?" == 0 ]; then
        echo "Done";
    else
        echo "Failed";
        handleError "Unable to mount CIFS share";
    fi
    debugPause;
elif [ "$type" == "down" ] && [ "$capone" != "1" ]; then
    mac64=`getMACAddresses | base64`;
    dots "Attempting to send inventory";
    doInventory 2>/dev/null;
    poststring="mac=${mac64}&sysman=${sysman64}&sysproduct=${sysproduct64}&sysversion=${sysversion64}&sysserial=${sysserial64}&systype=${systype64}&biosversion=${biosversion64}&biosvendor=${biosvendor64}&biosdate=${biosdate64}&mbman=${mbman64}&mbproductname=${mbproductname64}&mbversion=${mbversion64}&mbserial=${mbserial64}&mbasset=${mbasset64}&cpuman=${cpuman64}&cpuversion=${cpuversion64}&cpucurrent=${cpucurrent64}&cpumax=${cpumax64}&mem=${mem64}&hdinfo=${hdinfo64}&caseman=${caseman64}&casever=${casever64}&caseserial=${caseserial64}&casesasset=${casesasset64}";
    invres="";
    while [ "$invres" == "" ]; do
        invres=`wget -O - --post-data="$poststring" "http://${web}service/inventory.php" 2>/dev/null`;
        echo "$invres";
    done
    debugPause;
    dots "Checking In";
    while [ "$blGo" == "0" ]; do
        if [ "$capone" != "1" ]; then
            if [ "$mc" != "yes" -a "$mc" != "bt" ]; then
                queueinfo=`wget -q -O - "http://${web}service/Pre_Stage1.php?mac=$mac&type=$type" 2>/dev/null`;
                blPass=`echo $queueinfo|grep "##"`;
                waittime=0;
                while [ ! -n "$blPass" ]; do
                    echo -n " * $queueinfo (";
                    sec2String "$waittime";
                    echo ")"
                    queueinfo=`wget -q -O - "http://${web}service/Pre_Stage1.php?mac=$mac&type=$type" 2>/dev/null`;
                    blPass=`echo $queueinfo | grep "##"`;
                    sleep 5;
                    waittime=$(expr $waittime "+" 5);
                done
                echo "Done";
                debugPause;
                directive="${queueinfo:3}";
                if [ ! "$directive" = "GO" ]; then
                    tmpStorageIp=`echo $queueinfo|cut -d'@' -f2 2>/dev/null`;
                    tmpStorage=`echo $queueinfo|cut -d'@' -f3 2>/dev/null`;
                    tmpName=`echo $queueinfo|cut -d'@' -f4 2>/dev/null`;
                    if [ "$tmpStorage" != "" -a "$tmpStorageIp" != "" ]; then
                        storage=$tmpStorage;
                        storageip=$tmpStorageIp;
                        nfsServerName=$tmpName;
                    else
                        handleError "Error determining storage server!";
                        exit 1;
                    fi
                    dots "Using Storage Node"
                    echo "$nfsServerName"
                    debugPause;
                fi
            else
                queueinfo=`wget -q -O - "http://${web}service/mc_checkin.php?mac=$mac&type=$type" 2>/dev/null`;
                blPass=`echo $queueinfo|grep "##"`;
                echo "Done";
                waittime=0;
                while [ ! -n "$blPass" ]; do
                    echo -n " * $queueinfo (";
                    sec2String "$waittime"
                    echo ")"
                    queueinfo=`wget -q -O - "http://${web}service/mc_checkin.php?mac=$mac&type=$type" 2>/dev/null`;
                    blPass=`echo $queueinfo | grep "##"`;
                    sleep 5;
                    waittime=$(expr $waittime "+" 5);
                done
                if [ "$mc" == "bt" ]; then
                    dots "Using image"
                    # download $img.torrent file
                    wget -q -O /tmp/$img.torrent http://${web}/service/torrent.php?torrent=$img;
                    ctorrent /tmp/$img.torrent -x > /tmp/filelist.txt;
                    torrentDownloadSize=`cat /tmp/filelist.txt|grep "Total:*"|awk '{print $2}'`;
                    echo "$img";
                    dots "Size of image to download"
                    echo "$torrentDownloadSize MB";
                    debugPause;
                fi
            fi
            dots "Mounting File System";
            mkdir /images $debugstring 2>/dev/null;
            #mount -o nolock,proto=tcp,rsize=32768,intr,noatime $storage /images 2>/tmp/mntfail;
            # CIFS mount for deploys; credentials are hard-coded for now.
            mount -t cifs $storage -o username=root,password=PasswordHere /images 2>/tmp/mntfail;
            mntRet="$?";
            if [ ! "$mntRet" == "0" ] && [ ! -f "/images/.mntcheck" ]; then
                blame=`wget -q -O - "http://${web}service/blame.php?mac=$mac&type=$type" 2>/dev/null`;
                if [ ! "$blame" == "##" ]; then
                    echo "Failed";
                    echo "";
                    echo "Error during failure notification: $blame";
                    while [ ! "$blame" == "##" ]; do
                        blame=`wget -q -O - "http://${web}service/blame.php?mac=$mac&type=$type" 2>/dev/null`;
                        if [ ! "$blame" == "##" ]; then
                            echo $blame;
                        fi
                        sleep 5;
                    done
                else
                    echo "Failed";
                    echo "";
                    cat /tmp/mntfail;
                    echo "";
                fi
                sleep 5;
            else
                echo "Done";
                blGo="1";
            fi
            debugPause;
        fi
    done
else
    echo "Done";
    dots "Mounting File System";
    mkdir /images $debugstring 2>/dev/null;
    mount -o nolock,proto=tcp,rsize=32768,intr,noatime $storage /images 2>/tmp/mntfail;
    echo "Done";
fi
# Local Variables:
# indent-tabs-mode: t
# sh-basic-offset: 4
# sh-indentation: 4
# tab-width: 4
# End:[/CODE]
That file came out of r3530 btw.
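On the “variables for user and pass” point above, here is a rough, untested sketch of one way to do it: pass the credentials on the kernel command line and parse them the same way the script already parses initsh and nombr. The smbuser/smbpass names are made up for illustration, not anything FOG currently provides:
[CODE]# Hypothetical sketch only: pick the credentials up from the kernel command
# line (e.g. smbuser=fog smbpass=secret appended to the PXE boot arguments),
# the same way the script already parses /proc/cmdline for initsh/nombr.
for arg in `cat /proc/cmdline`; do
    case "$arg" in
        smbuser=*)
            smbuser="${arg#smbuser=}";
            ;;
        smbpass=*)
            smbpass="${arg#smbpass=}";
            ;;
    esac
done
# Fall back to defaults if nothing was passed.
smbuser="${smbuser:-fog}";
smbpass="${smbpass:-PasswordHere}";
mount -t cifs $storage -o username=$smbuser,password=$smbpass /images 2>/tmp/mntfail;[/CODE]
The obvious caveat is that anything on the kernel command line travels in the clear, which ties into the Kerberos discussion further down.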
-
Tom is currently building CIFS support into an init and kernel.
I’ll be able to test this shortly.
Which brings up other questions about how permissions, users, and groups should be structured, both the directory permissions and the Samba permissions.
Obviously /images would be read/execute only, but only to a "download" user…
/images/dev would be read/write/execute.
So… for local users… I am suggesting:
fog
fogupload
fogdownload
and a group: fogsamba
All three of those would go into that group, and permissions on /images could be:
[CODE]groupadd fogsamba
usermod -a -G fogsamba fog
usermod -a -G fogsamba fogupload
usermod -a -G fogsamba fogdownload
chown -R fogupload:fogsamba /images
chmod -R 740 /images[/CODE]
I’m still very new to permissions… FEEL FREE to critique me! I might learn something!
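To round the proposal out, here is a minimal sketch of the Samba side that would pair with those users: one [images] share where fogupload can write and fogdownload is read-only. The share options and service name are assumptions, not a settled layout:
[CODE]# Sketch only: a possible [images] share to pair with the users above.
cat >> /etc/samba/smb.conf << 'EOF'
[images]
    path = /images
    valid users = fogupload, fogdownload
    write list = fogupload
    read only = yes
    guest ok = no
    force group = fogsamba
EOF
# Give both accounts Samba passwords (interactive prompts), then reload.
smbpasswd -a fogupload
smbpasswd -a fogdownload
# Service is "smb" on Fedora/CentOS, "smbd" on Debian/Ubuntu.
systemctl restart smb[/CODE]
With read only = yes plus a write list, only fogupload can modify the share, which lines up with the integrity goals discussed below.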
-
@cspence said:
If we run authentication for FOG through Kerberos, uploads could prompt for a password to mount the share before imaging.
I disagree with this, because it would inhibit automated uploads and downloads via cron-style deployments.
Some people use FOG as a disaster recovery tool, and take regular uploads of servers and user computers. If they are not able to automate the upload / download process, then FOG is no longer a viable option for their usage.
Credentials must be passed to the client. I was asking Tom about this, and he’s thinking about doing a PHP query to get the credentials.
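For what it’s worth, on the client side of the init that PHP query could look something like the sketch below. The getsmbcreds.php endpoint and the user:pass response format are pure guesses, nothing that exists in FOG today:
[CODE]# Hypothetical sketch: fetch "user:pass" from a (made-up) endpoint on the
# FOG server instead of hard-coding it, then mount with those values.
creds=`wget -q -O - "http://${web}service/getsmbcreds.php?mac=$mac" 2>/dev/null`;
smbuser=`echo $creds | cut -d':' -f1`;
smbpass=`echo $creds | cut -d':' -f2`;
mount -t cifs $storage -o username=$smbuser,password=$smbpass /images 2>/tmp/mntfail;[/CODE]
Note this still sends the credentials over plain HTTP, which is exactly the gap the Kerberos suggestion is trying to close.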
-
Kerberizing samba will not get in the way of this. If a job needs to be automated, a read-only account can be used.
-
Additionally, making the /images directory readable to ‘everyone’ would create security issues.
For those who upload images that may contain confidential or sensitive material, allowing the images directory to be accessible by anyone on the network would let an intruder copy the images and restore them via FOG. Even if FOG isn’t accessible via the internet, and without MAC-address network authentication, anyone could walk in with a laptop, connect to WiFi, and download the images, or plug into a network port and download them.
Therefore, a ‘fogdownload’ user must be used for read-only access.
-
@cspence said:
Kerberizing samba will not get in the way of this. If a job needs to be automated, a read-only account can be used.
Kerberizing? Can that be done on a Linux machine? Say, for instance, the FOG admin has no Windows servers? This is the case for many, many small businesses in the U.S. and in South American countries that can’t afford Windows Server.
-
Exactly.
This setup lets us improve the integrity of the /images directory over the current setup, but confidentiality is still an issue. Then again, if you’re worried about confidentiality, you shouldn’t be doing deployments using FOG or any unencrypted imaging system.
-
@cspence said:
Exactly.
This setup lets us improve the integrity of the /images directory over the current setup, but confidentiality is still an issue. Then again, if you’re worried about confidentiality, you shouldn’t be doing deployments using FOG or any unencrypted imaging system.
Linux supports encrypted directories… I use them on my laptop. If a FOG administrator wanted, he could create a /images directory during Linux installation and make it encrypted.
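For anyone curious, here is a rough sketch of one way to do that with LUKS on a dedicated partition (the /dev/sdb1 device is just an example; eCryptfs or an encrypted LVM volume set up at install time would also work):
[CODE]# Example only: encrypt a spare partition and mount it as /images.
cryptsetup luksFormat /dev/sdb1            # prompts to confirm and set a passphrase
cryptsetup luksOpen /dev/sdb1 fogimages    # unlocks it as /dev/mapper/fogimages
mkfs.ext4 /dev/mapper/fogimages
mkdir -p /images
mount /dev/mapper/fogimages /images[/CODE]
The trade-off is that someone has to supply the passphrase (or a key file) every time the server reboots.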
-
@Wayne-Workman said:
@cspence said:
Kerberizing samba will not get in the way of this. If a job needs to be automated, a read-only account can be used.
Kerberizing? Can that be done on a Linux machine? Say, for instance, the FOG admin has no Windows servers? This is the case for many, many small businesses in the U.S. and in South American countries that can’t afford Windows Server.
Kerberos is an MIT thing, not a Microsoft thing. Also, if you want to emulate Active Directory, there’s always LDAP/Kerberos.
-
Well I don’t know anything about Kerberos… that’d be up to you guys.
-
Basically, you don’t have credentials flying around in the clear. You use tickets.
-
@cspence said:
Basically, you don’t have credentials flying around in the clear. You use tickets.
That sounds good.
I was just outlining how some use FOG… didn’t mean to ruffle feathers at all.
Some people do upload images with sensitive stuff on them…
and some people do automated uploads and downloads…
Those are the two main points I wanted to convey.