0.11 work, part 1 - perf improvements for //, shell style, minor bugfixes

Tim Foster 2008-07-14 00:47:24 +01:00
parent 89cb5a25ce
commit 15bbf270be
19 changed files with 304 additions and 318 deletions

View File

@ -1,15 +1,14 @@
NAME
ZFS Automatic Snapshot SMF Service, version 0.10
ZFS Automatic Snapshot SMF Service, version 0.11
DESCRIPTION
This is a simple SMF service which you can configure to take automatic,
scheduled snapshots of any given ZFS filesystem as well as perform simple
incremental or full backups of that filesystem.
This is a simple SMF service which will take automatic,
scheduled snapshots of given ZFS filesystems and can perform simple
incremental or full backups of those filesystems.
Documentation for the service is contained in the manifest file,
zfs-auto-snapshot.xml.
@ -21,11 +20,10 @@ This GUI is installed in the GNOME menu under:
Administration -> Automatic Snapshots
We also bundle a simple GUI application, which will query the user for the
properties required, and will proceed to build an instance manifest. This
properties required, and will then build an instance manifest. This
GUI is documented as part of the installation instructions below.
INSTALLATION
To install, as root, pkgadd TIMFauto-snapshot. This package now contains
@ -64,6 +62,10 @@ The properties each instance needs are:
command:
# zfs set com.sun:auto-snapshot:frequent=true tank/timf
When the "snap-children" property is set to true,
only locally-set filesystem properties are used to
determine which filesystems to snapshot -
property inheritance is not respected.
zfs/interval [ hours | days | months ]
@ -110,7 +112,7 @@ Usage: zfs-auto-snapshot-admin.sh [zfs filesystem name]
EXAMPLES
The following shows me running it for the ZFS filesystem
The following shows us running it for the ZFS filesystem
"tank/root_filesystem".
timf@haiiro[593] ./zfs-auto-snapshot-admin.sh tank/root_filesystem
@ -144,5 +146,7 @@ http://blogs.sun.com/timf/entry/zfs_automatic_snapshot_service_logging
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_8
http://blogs.sun.com/timf/entry/zfs_automatic_for_the_people
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_11
The ZFS Automatic Snapshot SMF Service is released under the terms of the CDDL.

View File

@ -4,7 +4,7 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<create_default_instance enabled='false' />
<instance name='tank-timf-torrent,foo' enabled='false' >
@ -12,7 +12,6 @@
<exec_method
type='method'
name='start'
exec='/lib/svc/method/zfs-auto-snapshot start'
timeout_seconds='10' />
<exec_method

View File

@ -36,7 +36,7 @@
# snapshot, destroy a snapshot as per the snapshot retention policy, unable to
# zfs send a dataset (if configured) or is unable to create or update the cron
# job.
#
#
@ -63,8 +63,19 @@ PREFIX="zfs-auto-snap"
# clients can get confused by colons. Who knew?
SEP=":"
# This variable gets set to the restarter/logfile property
# whenever we have $FMRI defined. Used by the print_log and
# print_note functions below for all output; it's defined
# by the schedule_snapshots, take_snapshots and unschedule_snapshots
# methods.
LOG=""
# Determine whether this invocation of the script can use
# the recursive snapshot feature in some versions of ZFS.
# A null string in this variable means it can't.
HAS_RECURSIVE=$(zfs snapshot 2>&1 | fgrep -e '-r')
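# For example: on ZFS versions whose "zfs snapshot" usage text lists -r,
# the fgrep match leaves that usage line in HAS_RECURSIVE; on older
# releases the pipeline prints nothing, the variable stays empty, and
# the recursive code paths below are skipped.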
# This function validates the properties in the FMRI passed to it, then
# calls a function to create a cron job implementing the snapshot
# schedule based on the properties set in the service instance.
@ -84,22 +95,21 @@ function schedule_snapshots {
case $BACKUP in
'full' | 'incremental' )
if [ -z "${BACKUP_SAVE_CMD}" ]
then
check_failure 1 "Backup requested, but no backup command specified."
fi
if [ -z "${BACKUP_SAVE_CMD}" ] ; then
check_failure 1 \
"Backup requested, but no backup command specified."
fi
;;
esac
# for now, we're forcing the offset to be 0 seconds.
typeset OFFSET=0
if [ "$FILESYS" != "//" ]
then
# validate the filesystem
zfs list $FILESYS 2>&1 1> /dev/null
check_failure $? "ZFS filesystem does not exist!"
fi
if [ "$FILESYS" != "//" ] ; then
# validate the filesystem
zfs list $FILESYS 2>&1 1> /dev/null
check_failure $? "ZFS filesystem does not exist!"
fi
# remove anything that's there at the moment
unschedule_snapshots $FMRI
@ -108,11 +118,10 @@ function schedule_snapshots {
# finally, check our status before we return
STATE=$(svcprop -p restarter/state $FMRI)
if [ "${STATE}" == "maintenance" ]
then
STATE=1
if [ "${STATE}" == "maintenance" ] ; then
STATE=1
else
STATE=0
STATE=0
fi
return $STATE
}
@ -139,51 +148,53 @@ function add_cron_job { # $INTERVAL $PERIOD $OFFSET $FMRI
case $INTERVAL in
'minutes')
TIMES=$(get_divisor 0 59 $PERIOD)
ENTRY="$TIMES * * * *"
TIMES=$(get_divisor 0 59 $PERIOD)
ENTRY="$TIMES * * * *"
;;
'hours')
TIMES=$(get_divisor 0 23 $PERIOD)
ENTRY="0 $TIMES * * *"
TIMES=$(get_divisor 0 23 $PERIOD)
ENTRY="0 $TIMES * * *"
;;
'days')
TIMES=$(get_divisor 1 31 $PERIOD)
ENTRY="0 0 $TIMES * *"
TIMES=$(get_divisor 1 31 $PERIOD)
ENTRY="0 0 $TIMES * *"
;;
'months')
TIMES=$(get_divisor 1 12 $PERIOD)
ENTRY="0 0 1 $TIMES *"
TIMES=$(get_divisor 1 12 $PERIOD)
ENTRY="0 0 1 $TIMES *"
;;
esac
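# Worked example: an instance with zfs/interval=hours and zfs/period=6
# gets TIMES="0,6,12,18" from get_divisor (defined below), so the cron
# entry becomes "0 0,6,12,18 * * *" - a snapshot at midnight, 6am,
# noon and 6pm.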
# Since we may have multiple instances all trying to start at
# the same time, we need some form of locking around crontab.
# Normally we'd be able to get SMF to manage this, by defining dependencies -
# Normally we'd be able to get SMF to manage this, by defining dependencies -
# but I'm not sure there's a way to prevent it from starting two instances
# at the same time (without requiring users to explicitly state dependencies
# and change them each time new instances are added)
# This isn't perfect (eg. if someone else if running crontab at the
# This isn't perfect (eg. if someone else is running crontab at the
# same time as us, we'll fail) but it'll do for now.
LOCK_OWNED="false"
while [ "$LOCK_OWNED" == "false" ]
do
mkdir /tmp/zfs-auto-snapshot-lock
if [ $? -eq 0 ]
then
LOCK_OWNED=true
else
sleep 1
fi
while [ "$LOCK_OWNED" == "false" ] ; do
mkdir /tmp/zfs-auto-snapshot-lock > /dev/null 2>&1
if [ $? -eq 0 ] ; then
LOCK_OWNED=true
else
sleep 1
fi
done
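# mkdir(2) is atomic, which is what makes it work as a crude mutex:
# exactly one concurrent invocation can create the lock directory, and
# every other one sleeps in this loop until whoever holds the lock
# removes the directory.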
# adding a cron job is essentially just looking for an existing entry,
# removing it, and appending a new one. Neato.
crontab -l | grep -v "/lib/svc/method/zfs-auto-snapshot $FMRI$" > /tmp/saved-crontab.$$
echo "${ENTRY} /lib/svc/method/zfs-auto-snapshot $FMRI" >> /tmp/saved-crontab.$$
crontab -l | grep -v "/lib/svc/method/zfs-auto-snapshot $FMRI$" \
> /tmp/saved-crontab.$$
echo "${ENTRY} /lib/svc/method/zfs-auto-snapshot $FMRI" \
>> /tmp/saved-crontab.$$
crontab /tmp/saved-crontab.$$
check_failure $? "Unable to add cron job!"
@ -203,18 +214,18 @@ function unschedule_snapshots {
# See notes on $LOCK_OWNED variable in function add_cron_job
LOCK_OWNED="false"
while [ "$LOCK_OWNED" == "false" ]
do
mkdir /tmp/zfs-auto-snapshot-lock
if [ $? -eq 0 ]
then
LOCK_OWNED=true
else
sleep 1
fi
done
while [ "$LOCK_OWNED" == "false" ]; do
mkdir /tmp/zfs-auto-snapshot-lock > /dev/null 2>&1
if [ $? -eq 0 ] ; then
LOCK_OWNED=true
else
sleep 1
fi
done
crontab -l | grep -v "/lib/svc/method/zfs-auto-snapshot $FMRI$" \
> /tmp/saved-crontab.$$
crontab -l | grep -v "/lib/svc/method/zfs-auto-snapshot $FMRI$" > /tmp/saved-crontab.$$
crontab /tmp/saved-crontab.$$
check_failure $? "Unable to unschedule snapshots for $FMRI"
@ -223,21 +234,19 @@ function unschedule_snapshots {
# finally, check our status before we return
STATE=$(svcprop -p restarter/state $FMRI)
if [ "${STATE}" == "maintenance" ]
then
STATE=1
if [ "${STATE}" == "maintenance" ] ; then
STATE=1
else
STATE=0
STATE=0
fi
}
# This function actually takes the snapshot of the filesystem. This is what
# really does the work. We name snapshots based on a standard time format
# This function actually takes the snapshot of the filesystem.
# $1 is assumed to be a valid FMRI
function take_snapshot {
typeset FMRI=$1
# want this to be global, used by check_failure
FMRI=$1
typeset DATE=$(date +%F-%H${SEP}%M${SEP}%S)
typeset FILESYS=$(svcprop -p zfs/fs-name $FMRI)
@ -245,8 +254,8 @@ function take_snapshot {
typeset SNAP_CHILDREN=$(svcprop -p zfs/snapshot-children $FMRI)
typeset BACKUP=$(svcprop -p zfs/backup $FMRI)
typeset STATE=0
typeset STATE=0
# An identifier allows us to set up multiple snapshot schedules
# per filesystem - so we append a <sep><label> token if the user has
@ -256,24 +265,21 @@ function take_snapshot {
# Shocking, I know.
typeset LABEL="$(svcprop -p zfs/label $FMRI)"
# the "//" filesystem is special. We use it as a keyword
# to determine whether to poll the ZFS "com.sun:auto-snapshot:${LABEL}"
# to determine whether to poll the ZFS "com.sun:auto-snapshot:${LABEL}"
# user property which specifies which datasets should be snapshotted
# and under which "label" - a set of default service instances that
# snapshot at defined periods (daily, weekly, monthly, every 15 mins)
if [ "$FILESYS" == "//" ]
then
FILESYS=$(get_snapshot_datasets $LABEL)
if [ "$FILESYS" == "//" ] ; then
FILESYS=$(get_snapshot_datasets $LABEL $SNAP_CHILDREN)
else
FILESYS=$FILESYS
FILESYS=$FILESYS
fi
if [ "$LABEL" != "\"\"" ]
then
LABEL="${SEP}${LABEL}"
if [ "$LABEL" != "\"\"" ] ; then
LABEL="${SEP}${LABEL}"
else
LABEL=""
LABEL=""
fi
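# Example (hypothetical instance labelled "daily"): LABEL becomes
# ":daily" here, so with PREFIX="zfs-auto-snap" and the date format
# above, snapshots end up with names along the lines of
# tank/timf@zfs-auto-snap:daily-2008-07-14-00:47:24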
# A flag for whether we're running in verbose mode or not
@ -284,89 +290,71 @@ function take_snapshot {
# Determine whether we should avoid scrubbing
typeset AVOIDSCRUB=$(svcprop -p zfs/avoidscrub $FMRI)
# prune out the filesystems that are on pools currently being
# scrubbed or resilvered. There's a risk that a scrub/resilver
# will be started just after this check completes, but there's
# also the risk that a running scrub will complete just after this
# check. Life's hard.
if [ "$AVOIDSCRUB" == "true" ]
then
# a cache of the pools that are known not to be scrubbing
NOSCRUBLIST=""
if [ "$AVOIDSCRUB" == "true" ] ; then
# a cache of the pools that are known not to be scrubbing
NOSCRUBLIST=""
# Create a list of filesystems scheduled for snapshots
# that are *not* on pools that are being scrubbed/resilvered
for fs in $FILESYS
do
POOL=$(echo $fs | cut -d/ -f1)
if is_scrubbing $POOL "$NOSCRUBLIST"
then
print_log "Pool containing $fs is being scrubbed/resilvered."
print_log "Not taking snapshots for $fs."
else
NOSCRUBLIST="$POOL $NOSCRUBLIST"
NOSCRUBFILESYS="$NOSCRUBFILESYS $fs"
fi
done
FILESYS="$NOSCRUBFILESYS"
# Create a list of filesystems scheduled for snapshots
# that are *not* on pools that are being scrubbed/resilvered
for fs in $FILESYS ; do
POOL=$(echo $fs | cut -d/ -f1)
if is_scrubbing $POOL "$NOSCRUBLIST" ; then
print_log "Pool containing $fs is being scrubbed/resilvered."
print_log "Not taking snapshots for $fs."
else
NOSCRUBLIST="$POOL $NOSCRUBLIST"
NOSCRUBFILESYS="$NOSCRUBFILESYS $fs"
fi
done
FILESYS="$NOSCRUBFILESYS"
fi
# walk each of the filesystems specified
for fs in $FILESYS
do
# Ok, now say cheese! If we're taking recursive snapshots,
# walk through the children, destroying old ones if required.
if [ "${SNAP_CHILDREN}" == "true" ]
then
# check if we have recursive snapshot capability, seeing
# a null string in this variable says we don't.
HAS_RECURSIVE=$(zfs snapshot 2>&1 | fgrep -e '-r')
for child in $(zfs list -r -H -o name -t filesystem,volume $fs)
do
destroy_older_snapshots $child $KEEP $LABEL
if [ -z "${HAS_RECURSIVE}" ]
then
print_note "Taking snapshot $child@$SNAPNAME"
zfs snapshot $child@$SNAPNAME
check_failure $? "Unable to take snapshot $child@$SNAPNAME."
fi
done
# take the recursive snapshots if we have the ability
if [ -n "${HAS_RECURSIVE}" ]
then
print_note "Taking recursive snapshot $fs@$SNAPNAME"
zfs snapshot -r $fs@$SNAPNAME
check_failure $? "Unable to take recursive snapshots $fs@$SNAPNAME."
fi
else
for fs in $FILESYS ; do
# Ok, now say cheese! If we're taking recursive snapshots,
# walk through the children, destroying old ones if required.
if [ "${SNAP_CHILDREN}" == "true" ] ; then
destroy_older_snapshots $fs $KEEP $LABEL
print_note "Taking snapshot $fs@$SNAPNAME"
zfs snapshot $fs@$SNAPNAME
check_failure $? "Unable to take snapshot $fs@$SNAPNAME."
if [ -z "${HAS_RECURSIVE}" ] ; then
for child in $(zfs list -r -H -o name -t filesystem,volume $fs) ; do
destroy_older_snapshots $child $KEEP $LABEL
print_note "Taking snapshot $child@$SNAPNAME"
zfs snapshot $child@$SNAPNAME
check_failure $? "Unable to take snapshot $child@$SNAPNAME."
done
else
destroy_older_snapshots $fs $KEEP $LABEL $HAS_RECURSIVE
print_note "Taking recursive snapshot $fs@$SNAPNAME"
zfs snapshot -r $fs@$SNAPNAME
check_failure $? "Unable to take recursive snapshots of $fs@$SNAPNAME."
fi
fi
else
destroy_older_snapshots $fs $KEEP $LABEL
print_note "Taking snapshot $fs@$SNAPNAME"
zfs snapshot $fs@$SNAPNAME
check_failure $? "Unable to take snapshot $fs@$SNAPNAME."
fi
# If the user has asked for backups, go ahead and do this.
if [ "${BACKUP}" != "none" ]
then
take_backup $fs $BACKUP "$LABEL" $FMRI
check_failure $? "Unable to backup filesystem $fs using \
$BACKUP backup strategy."
fi
# If the user has asked for backups, go ahead and do this.
if [ "${BACKUP}" != "none" ] ; then
take_backup $fs $BACKUP "$LABEL" $FMRI
check_failure $? "Unable to backup filesystem $fs using \
$BACKUP backup strategy."
fi
done
# finally, check our status before we return
STATE=$(svcprop -p restarter/state $FMRI)
if [ "${STATE}" == "maintenance" ]
then
STATE=1
if [ "${STATE}" == "maintenance" ] ; then
STATE=1
else
STATE=0
STATE=0
fi
return $STATE
}
@ -383,28 +371,31 @@ function destroy_older_snapshots {
typeset FILESYS=$1
typeset COUNTER=$2
typeset LABEL=$3
typeset HAS_RECURSIVE=$4
if [ "${COUNTER}" == "all" ]
then
return 0
if [ "${COUNTER}" == "all" ] ; then
return 0
fi
if [ -n "${HAS_RECURSIVE}" ] ; then
typeset RECURSIVE="-r"
fi
COUNTER=$(($COUNTER - 1))
# walk through the snapshots, newest first, destroying older ones
for snapshot in $(zfs list -r -t snapshot -H -o name $FILESYS \
| grep "$FILESYS@${PREFIX}${LABEL}" | sort -r)
do
if [ $COUNTER -le 0 ]
then
# using print_note, as this checks our $VERBOSE flag
print_note "$snapshot being destroyed as per retention policy."
zfs destroy $snapshot
check_failure $? "Unable to destroy $snapshot"
else
# don't destroy this one
COUNTER=$(($COUNTER - 1))
fi
| grep "$FILESYS@${PREFIX}${LABEL}" | sort -r) ; do
if [ $COUNTER -le 0 ] ; then
# using print_note, as this checks our $VERBOSE flag
print_note "$snapshot being destroyed ${RECURSIVE} as per \
retention policy."
zfs destroy ${RECURSIVE} $snapshot
check_failure $? "Unable to destroy $snapshot"
else
# don't destroy this one
COUNTER=$(($COUNTER - 1))
fi
done
}
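# Example: with keep=4 the counter starts at 3, so the three newest
# existing snapshots survive, everything older is destroyed, and the
# snapshot taken immediately afterwards brings the total back to 4.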
@ -414,15 +405,14 @@ function destroy_older_snapshots {
# as passed into this function.
#
function check_failure { # integer exit status, error message to display
typeset RESULT=$1
typeset ERR_MSG=$2
if [ $RESULT -ne 0 ]
then
print_log "Error: $ERR_MSG"
print_log "Moving service $FMRI to maintenance mode."
svcadm mark maintenance $FMRI
if [ $RESULT -ne 0 ] ; then
print_log "Error: $ERR_MSG"
print_log "Moving service $FMRI to maintenance mode."
svcadm mark maintenance $FMRI
fi
}
@ -431,18 +421,19 @@ function check_failure { # integer exit status, error message to display
# A function we use to emit output. Right now, this goes to syslog via logger(1)
# but it would be much nicer to be able to print it to the svc log file for
# each individual service instance - tricky because we're being called from
# cron, most of the time and are detached from smf.
# cron most of the time, and are detached from smf. We work around this
# by appending to the $LOG file.
function print_log { # message to display
logger -t zfs-auto-snap -p daemon.notice $*
echo $(date) $* >> $LOG
}
# Another function to emit output, this time checking to see if the
# user has set the service into verbose mode, otherwise, we print nothing
function print_note { # message to display
if [ "$VERBOSE" == "true" ]
then
logger -t zfs-auto-snap -p daemon.notice $*
if [ "$VERBOSE" == "true" ] ; then
logger -t zfs-auto-snap -p daemon.notice $*
echo $(date) $* >> $LOG
fi
}
@ -453,19 +444,18 @@ function print_note { # message to display
#
function get_divisor { # start period, end period, width of period
typeset START=$1
typeset END=$2
typeset WIDTH=$3
typeset RANGE=$START
typeset JUMP=$(( $RANGE + $WIDTH ))
typeset START=$1
typeset END=$2
typeset WIDTH=$3
typeset RANGE=$START
typeset JUMP=$(( $RANGE + $WIDTH ))
while [ $JUMP -le $END ]
do
RANGE="$RANGE,$JUMP"
JUMP=$(( $JUMP + $WIDTH ))
done
while [ $JUMP -le $END ] ; do
RANGE="$RANGE,$JUMP"
JUMP=$(( $JUMP + $WIDTH ))
done
echo $RANGE
echo $RANGE
}
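# Example: get_divisor 0 59 15 accumulates 0,15,30,45 (the next jump,
# 60, exceeds 59) and echoes "0,15,30,45" - the minute field used for
# a 15-minute snapshot schedule.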
@ -515,8 +505,7 @@ function take_backup { # filesystem backup-type label fmri
typeset BACKUP_DATASETS=""
# Determine how many datasets we have to backup
if [ "$SNAP_CHILDREN" == "true" ]
then
if [ "$SNAP_CHILDREN" == "true" ] ; then
BACKUP_DATASETS=$(zfs list -r -H -o name -t filesystem,volume $FILESYS)
else
# only one dataset to worry about here.
@ -524,8 +513,7 @@ function take_backup { # filesystem backup-type label fmri
fi
# loop through the datasets, backing up each one.
for dataset in $BACKUP_DATASETS
do
for dataset in $BACKUP_DATASETS ; do
# An initial check of the input parameters, to see how we should proceed
case $BACKUP in
@ -556,9 +544,9 @@ function take_backup { # filesystem backup-type label fmri
;;
esac
# Now perform the backup. Note that on errors, we'll immediately mark
# the service as being in maintenance mode, however, backups will still
# the service as being in maintenance mode, however, backups will still
# be attempted for other datasets in our list.
case $BACKUP in
"incremental")
@ -580,12 +568,21 @@ function take_backup { # filesystem backup-type label fmri
}
# Get a list of filesystems we should snapshot
function get_snapshot_datasets { #LABEL
# Get a list of filesystems we should snapshot. If snap_children is "true"
# then we don't list children that inherit the parent's property - we just look
# for locally set properties, and let "zfs snapshot -r" snapshot the children.
function get_snapshot_datasets { #LABEL #SNAP_CHILDREN
typeset LABEL=$1
typeset FS=$(zfs list -t filesystem -o name,com.sun:auto-snapshot:$LABEL \
| grep true | awk '{print $1}')
typeset SNAP_CHILDREN=$2
if [ "${SNAP_CHILDREN}" = "true" ] ; then
typeset FS=$(zfs get com.sun:auto-snapshot:$LABEL \
| grep local | grep true | awk '{print $1}')
else
typeset FS=$(zfs list -t filesystem,volume \
-o name,com.sun:auto-snapshot:$LABEL \
| grep true | awk '{print $1}')
fi
echo "$FS"
}
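# Example (hypothetical datasets): after
#   zfs set com.sun:auto-snapshot:daily=true tank/home
# the snap_children branch lists tank/home (the property source is
# "local") but not tank/home/docs, which merely inherits it - the
# later "zfs snapshot -r" picks such children up instead.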
@ -608,17 +605,15 @@ function is_scrubbing { # POOL SCRUBLIST
# the pool name in a known list of pools that were not scrubbing
# the last time we checked.
echo "$NOSCRUBLIST" | grep "$POOL " > /dev/null
if [ $? -eq 0 ]
then
return 1
if [ $? -eq 0 ] ; then
return 1
fi
SCRUBBING=$(env LC_ALL=C zpool status $POOL | grep " in progress")
if [ -z "$SCRUBBING" ]
then
return 1
if [ -z "$SCRUBBING" ] ; then
return 1
else
return 0
return 0
fi
}
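# Example: while a scrub or resilver runs, "zpool status <pool>"
# reports a line such as "scrub: scrub in progress ..." (or
# "resilver in progress"), so grepping for " in progress" catches
# both cases; an empty match means the pool is idle and safe to
# snapshot.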
@ -638,24 +633,24 @@ function is_scrubbing { # POOL SCRUBLIST
case "$1" in
'start')
export LOG=$(svcprop -p restarter/logfile $SMF_FMRI)
schedule_snapshots $SMF_FMRI
if [ $? -eq 0 ]
then
result=$SMF_EXIT_OK
if [ $? -eq 0 ] ; then
result=$SMF_EXIT_OK
else
print_log "Problem taking snapshots for $SMF_FMRI"
result=$SMF_EXIT_ERR_FATAL
print_log "Problem taking snapshots for $SMF_FMRI"
result=$SMF_EXIT_ERR_FATAL
fi
;;
'stop')
export LOG=$(svcprop -p restarter/logfile $SMF_FMRI)
unschedule_snapshots $SMF_FMRI
if [ $? -eq 0 ]
then
result=$SMF_EXIT_OK
if [ $? -eq 0 ] ; then
result=$SMF_EXIT_OK
else
print_log "Problem taking snapshots for $SMF_FMRI"
result=$SMF_EXIT_ERR_FATAL
print_log "Problem taking snapshots for $SMF_FMRI"
result=$SMF_EXIT_ERR_FATAL
fi
;;
@ -667,12 +662,12 @@ case "$1" in
case $SMF_FMRI in
svc:/*)
export LOG=$(svcprop -p restarter/logfile $SMF_FMRI)
take_snapshot $SMF_FMRI
if [ $? -eq 0 ]
then
result=$SMF_EXIT_OK
if [ $? -eq 0 ] ; then
result=$SMF_EXIT_OK
else
result=$SMF_EXIT_ERR_FATAL
result=$SMF_EXIT_ERR_FATAL
fi
;;
*)

View File

@ -9,7 +9,7 @@ PKG=TIMFauto-snapshot
NAME=ZFS Automatic Snapshot Service
ARCH=all
BASEDIR=/
VERSION=0.10
VERSION=0.11
MAXINST=1
CATEGORY=application
DESC=Takes automatic snapshots of ZFS filesystems on a periodic basis.

View File

@ -37,10 +37,12 @@ FILES="auto-snapshot-daily.xml auto-snapshot-monthly.xml auto-snapshot-frequ
for manifest in $FILES
do
echo Importing $manifest
/usr/sbin/svccfg import /var/svc/manifest/system/filesystem/$manifest
done
for fmri in $FMRIS
do
echo Enabling $fmri
/usr/sbin/svcadm enable $fmri
done

View File

@ -31,13 +31,10 @@ DEFAULT=svc:/system/filesystem/zfs/auto-snapshot:default
FMRIS="svc:/system/filesystem/zfs/auto-snapshot:frequent svc:/system/filesystem/zfs/auto-snapshot:hourly svc:/system/filesystem/zfs/auto-snapshot:daily svc:/system/filesystem/zfs/auto-snapshot:weekly svc:/system/filesystem/zfs/auto-snapshot:monthly"
for fmri in $FMRIS $DEFAULT
do
/usr/sbin/svcadm disable $fmri
while [ "$STATE" != "disabled" ]
do
sleep 1
STATE=`/usr/bin/svcs -H -o state $fmri`
done
for fmri in $FMRIS $DEFAULT ; do
STATE=$(/usr/bin/svcs -H -o state $fmri)
if [ "$STATE" = "online" ] ; then
/usr/sbin/svcadm disable -s $fmri
fi
/usr/sbin/svccfg delete $fmri
done

View File

@ -17,7 +17,7 @@ d none usr 0755 root sys
d none usr/share 0755 root sys
d none usr/share/applications 0755 root other
f none usr/share/applications/automatic-snapshot.desktop 0644 root bin
d none usr/bin 0755 root sys
d none usr/bin 0755 root bin
f none usr/bin/zfs-auto-snapshot-admin.sh 0755 root sys
d none lib 0755 root bin
d none lib/svc 0755 root bin

View File

@ -7,7 +7,7 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<create_default_instance enabled='false' />
<instance name='space-archive' enabled='false' >

View File

@ -9,7 +9,7 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<create_default_instance enabled='false' />
<instance name='space-timf,backup' enabled='false' >

View File

@ -8,7 +8,7 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<create_default_instance enabled='false' />
<instance name='space-timf,daily' enabled='false' >

View File

@ -8,7 +8,7 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<create_default_instance enabled='false' />
<instance name='space-timf,frequent' enabled='false' >

View File

@ -8,7 +8,7 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<create_default_instance enabled='false' />
<instance name='space-timf,monthly' enabled='false' >

View File

@ -8,7 +8,7 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<create_default_instance enabled='false' />
<instance name='tank-rootfs' enabled='false' >

View File

@ -4,12 +4,13 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<!-- This is one of the default instances that comes with the
ZFS Automatic Snapshot SMF Service. It snapshots all filesystems marked
with the ZFS User Property com.sun:auto-snapshot:daily=true daily,
and keeps 31 of these snapshots into the past.
ZFS Automatic Snapshot SMF Service. It recursively snapshots all
filesystems marked with the ZFS User Property
com.sun:auto-snapshot:daily=true daily, and keeps 31 of these
snapshots into the past.
-->
<create_default_instance enabled='false' />
@ -45,7 +46,7 @@
override="true"/>
<propval name="keep" type="astring" value="31"
override="true"/>
<propval name="snapshot-children" type="boolean" value="false"
<propval name="snapshot-children" type="boolean" value="true"
override="true"/>
<propval name="backup" type="astring" value="none"

View File

@ -4,12 +4,13 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<!-- This is one of the default instances that comes with the
ZFS Automatic Snapshot SMF Service. It snapshots all filesystems marked
with the ZFS User Property com.sun:auto-snapshot:frequent=true every
15 minutes, and keeps 4 of these snapshots into the past.
<!-- This is one of the default instances that comes with the
ZFS Automatic Snapshot SMF Service. It recursively snapshots all
filesystems marked with the ZFS User Property
com.sun:auto-snapshot:frequent=true every
15 minutes, and keeps 4 of these snapshots into the past.
-->
<create_default_instance enabled='false' />
@ -45,7 +46,7 @@
override="true"/>
<propval name="keep" type="astring" value="4"
override="true"/>
<propval name="snapshot-children" type="boolean" value="false"
<propval name="snapshot-children" type="boolean" value="true"
override="true"/>
<propval name="backup" type="astring" value="none"

View File

@ -4,12 +4,13 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<!-- This is one of the default instances that comes with the
ZFS Automatic Snapshot SMF Service. It snapshots all filesystems marked
with the ZFS User Property com.sun:auto-snapshot:hourly=true hourly,
and keeps 24 of these snapshots into the past.
<!-- This is one of the default instances that comes with the
ZFS Automatic Snapshot SMF Service. It recursively
snapshots all filesystems marked with the ZFS User Property
com.sun:auto-snapshot:hourly=true hourly,
and keeps 24 of these snapshots into the past.
-->
<create_default_instance enabled='false' />
@ -45,7 +46,7 @@
override="true"/>
<propval name="keep" type="astring" value="24"
override="true"/>
<propval name="snapshot-children" type="boolean" value="false"
<propval name="snapshot-children" type="boolean" value="true"
override="true"/>
<propval name="backup" type="astring" value="none"

View File

@ -4,12 +4,13 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<!-- This is one of the default instances that comes with the
ZFS Automatic Snapshot SMF Service. It snapshots all filesystems marked
with the ZFS User Property com.sun:auto-snapshot:monthly=true monthly,
and keeps 12 of these snapshots into the past.
<!-- This is one of the default instances that comes with the
ZFS Automatic Snapshot SMF Service. It recursively snapshots
all filesystems marked with the ZFS User Property
com.sun:auto-snapshot:monthly=true monthly,
and keeps 12 of these snapshots into the past.
-->
<create_default_instance enabled='false' />
@ -45,7 +46,7 @@
override="true"/>
<propval name="keep" type="astring" value="12"
override="true"/>
<propval name="snapshot-children" type="boolean" value="false"
<propval name="snapshot-children" type="boolean" value="true"
override="true"/>
<propval name="backup" type="astring" value="none"

View File

@ -4,12 +4,13 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<!-- This is one of the default instances that comes with the
ZFS Automatic Snapshot SMF Service. It snapshots all filesystems marked
with the ZFS User Property com.sun:auto-snapshot:weekly=true weekly,
and keeps 4 of these snapshots into the past.
<!-- This is one of the default instances that comes with the
ZFS Automatic Snapshot SMF Service. It recursively snapshots all
filesystems marked with the ZFS User Property
com.sun:auto-snapshot:weekly=true weekly,
and keeps 4 of these snapshots into the past.
-->
<create_default_instance enabled='false' />
@ -45,7 +46,7 @@
override="true"/>
<propval name="keep" type="astring" value="4"
override="true"/>
<propval name="snapshot-children" type="boolean" value="false"
<propval name="snapshot-children" type="boolean" value="true"
override="true"/>
<propval name="backup" type="astring" value="none"

View File

@ -22,7 +22,7 @@
CDDL HEADER END
Copyright 2006 Sun Microsystems, Inc. All rights reserved.
Copyright 2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
-->
@ -32,7 +32,7 @@
<service
name='system/filesystem/zfs/auto-snapshot'
type='service'
version='0.10'>
version='0.11'>
<!-- no point in being able to take snapshots if we don't have a fs -->
<dependency
@ -56,58 +56,57 @@
<propval name='duration' type='astring' value='transient' />
</property_group>
<!-- the properties we expect that any instance will define
they being :
<!-- the properties we expect that any instance will define
they being :
fs-name : The name of the filesystem we want to snapshot.
The special filesystem name "//" indicates we should
look at the com.sun:auto-snapshot:<label> ZFS user
property on datasets, set to "true" if the dataset
should have snapshots taken by this instance.
fs-name : The name of the filesystem we want to snapshot.
The special filesystem name "//" indicates we should
look at the com.sun:auto-snapshot:<label> ZFS user
property on datasets, set to "true" if the dataset
should have snapshots taken by this instance.
interval : minutes | hours | days | months
interval : minutes | hours | days | months
period : How many (m,h,d,m) do we wait between snapshots
period : How many (m,h,d,m) do we wait between snapshots
offset : The offset into the time period we want
offset : The offset into the time period we want
keep : How many snapshots we should keep, otherwise, we
delete the oldest when we hit this threshold
keep : How many snapshots we should keep, otherwise, we
delete the oldest when we hit this threshold
snapshot-children : Whether we should recursively snapshot
all filesystems contained within.
all filesystems contained within.
backup : If we want to perform a "zfs send" for our backup
we set this - either to "full" or "incremental".
If set to "none", we don't perform backups.
backup : If we want to perform a "zfs send" for our backup
we set this - either to "full" or "incremental".
If set to "none", we don't perform backups.
backup-save-cmd : A command string to save the backup - if unset,
we return an error and move the service to
maintenance.
we return an error and move the service to
maintenance.
backup-lock : A string we set when a backup operation is in
progress, to prevent two backups from the same
service instance running into each other. Not
completely flawless, but okay. Due to 6338294,
we use the keyword "unlocked" to indicate that
the lock is not held.
backup-lock : A string we set when a backup operation is in
progress, to prevent two backups from the same
service instance running into each other. Not
completely flawless, but okay. Due to 6338294,
we use the keyword "unlocked" to indicate that
the lock is not held.
label : A string that allows us to differentiate this set
of snapshot schedules from others configured for the
same filesystem. This is not usually needed and can
be left unset, but it can be useful in some
situations (particularly for backups).
label : A string that allows us to differentiate this set
of snapshot schedules from others configured for the
same filesystem. This is not usually needed and can
be left unset, but it can be useful in some
situations (particularly for backups).
verbose : Set to false by default, setting to true results
in the service printing more detail in the log
about what it's doing.
verbose : Set to false by default, setting to true results
in the service printing more detail in the log
about what it's doing.
avoidscrub : Set to true by default, this determines whether
we should avoid taking snapshots on any pools that have
a scrub or resilver in progress.
More info in the bugid:
6343667 need itinerary so interrupted scrub/resilver
doesn't have to start over
avoidscrub : Set to true by default, this determines whether
we should avoid taking snapshots on any pools that have
a scrub or resilver in progress.
More info in the bugid:
6343667 need itinerary so interrupted scrub/resilver
doesn't have to start over
-->
@ -194,7 +193,7 @@
<template>
<common_name>
<loctext xml:lang='C'>
ZFS automatic snapshots
ZFS automatic snapshots
</loctext>
</common_name>
<description>
@ -202,32 +201,17 @@
This service provides system support for taking automatic snapshots of ZFS
filesystems.
In order to use this service, you must create a service instance per set of
automatic snapshots you want to take.
In order to use this service, you must create a service instance per set of automatic snapshots you want to take.
On starting a service instance, a cron job corresponding to the properties
set in the instance is created on the host. This cron job will regularly take
snapshots of the specified ZFS filesystem.
On starting a service instance, a cron job corresponding to the properties set in the instance is created on the host. This cron job will regularly take snapshots of the specified ZFS filesystem.
On stopping the service, that cron job is removed.
We also have the ability to perform backups, done using the "zfs send" command.
A property set in the service called "backup-save-cmd" can be configured as the
command used to save the backup stream. See the zfs(1M) man page for an example.
The backups can be either "full" backups, or "incremental" backups - for each
incremental backup, a full backup must be configured first. If for some reason
an incremental backup fails, a full backup is performed instead.
We also have the ability to perform backups, done using the "zfs send" command. A property set in the service called "backup-save-cmd" can be configured as the command used to save the backup stream. See the zfs(1M) man page for an example. The backups can be either "full" backups, or "incremental" backups - for each incremental backup, a full backup must be configured first. If for some reason an incremental backup fails, a full backup is performed instead.
By default, snapshots will not be taken of any datasets resident on pools that
are currently being scrubbed or resilvered. This behaviour can be changed
using the zfs/avoidscrub service property.
Care should be taken when configuring backups to ensure that the time
granularity of the cron job is sufficient to allow the backup to complete
between invocations of each backup. We perform locking to ensure that two
backups of the same filesystem cannot run simultaneously, but will move the
service into "maintenance" state should this occur.
By default, snapshots will not be taken of any datasets resident on pools that are currently being scrubbed or resilvered. This behaviour can be changed using the zfs/avoidscrub service property.
Care should be taken when configuring backups to ensure that the time granularity of the cron job is sufficient to allow the backup to complete between invocations of each backup. We perform locking to ensure that two backups of the same filesystem cannot run simultaneously, but will move the service into "maintenance" state should this occur.
</loctext>
</description>
</template>