I am seeking advice about the backup strategies and possible use
of CVS to accomplish this task.
I happen to use 4-5 different computers on a daily basis for my work.
I use my laptop, desktop, and a file server at work, as well as my personal
desktop and my wife's laptop at home.
It is of paramount importance for me that my files are in sync on all of them,
for two reasons. First, I always want to start working with the latest,
most up-to-date version of my files, regardless of which computer I am using.
Secondly, if a HDD dies on one, or even on three or four computers at the same
moment, I will still have a backup copy from which to recover my work.
Up until now I have used a combination of tar, rarely dd, and my
home-brewed scripts to accomplish the above task. I would always start work
by running a script which would pull the tar files from either the file
server or a USB drive and untar them on my computer. After finishing work
I would run a script to tar the specific directories I was working on and
push them back to the file server and a USB drive.
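Roughly, that routine can be sketched as follows. All paths are placeholders, and a local directory stands in for the file server / USB drive so the example is self-contained; in the real setup the copy step would be scp(1) or a mounted drive:

```shell
#!/bin/sh
# Sketch of the tar-based pull/push routine described above.
# /tmp/fakeserver stands in for the file server or USB drive.
SERVER=/tmp/fakeserver
WORK=/tmp/workdir

mkdir -p "$SERVER" "$WORK"
echo "latest draft" > "$WORK/notes.txt"

# "push" after finishing work: tar the directory, copy it to the server
tar -czf "$SERVER/workdir.tar.gz" -C /tmp workdir

# simulate sitting down at another machine: no local copy yet
rm -rf "$WORK"

# "pull" before starting work: fetch the tarball and untar it
tar -xzf "$SERVER/workdir.tar.gz" -C /tmp
cat "$WORK/notes.txt"
```

The weak point is visible in the sketch itself: nothing forces the push step to run, which is exactly the "forgot the script" failure mode described below.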
However, it did happen once or twice in the past that I forgot to run the
script, which caused me a great deal of frustration. Suddenly I would have
two different versions of the same file on two different computers, and maybe
a third, older version on my file server. It also happened to me in the past
that I modified files, realized that the modifications sucked, and then could
not recover a specific older version of a particular file.
I do periodically burn DVDs with my entire home directory, date them, and
keep them on the shelf.
Are there any advantages to using CVS over my present method? It looks to me
that CVS could help me use a pull+push strategy for backing up the files, and
would give me an advantage over tar and dd by allowing incremental updates as
well as keeping past snapshots of my work.
I have seen a thread on misc about 2-3 months ago in which there was a post
by an OpenBSD user who wanted to use CVS to keep the /etc on his firewall
machines up to date, as well as to back up configuration files in case of
disaster.
I am open to any suggestions, but I do have a strong preference for the tools
from the base system. I noticed a couple of ports with poor man's tools for
accomplishing the above tasks.
I use the simpler RCS and mail a tarred copy of the RCS directory to one of my Gmail accounts.
On OpenBSD that could be automated with a /etc/rc.shutdown script.
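A minimal sketch of that automation, with placeholder paths and address; it assumes a working local mail setup and uuencode(1) to get the binary archive through mail(1) safely:

```
# Appended to /etc/rc.shutdown (sketch; paths and address are
# placeholders). Tars the RCS directory and mails it offsite.
tar -czf /tmp/rcs-backup.tar.gz -C /home/user project/RCS &&
    uuencode /tmp/rcs-backup.tar.gz rcs-backup.tar.gz |
    mail -s "RCS backup from $(hostname)" user@gmail.com
```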
You don't need to be a genius to debug a pf.conf firewall ruleset, you just need the guts to run tcpdump
I have used CVS for backups for a few years, but I recently switched to Subversion, mainly because it handles binary files better, and because of some other minor CVS annoyances ...
For me, it works pretty well.
UNIX was not designed to stop you from doing stupid things, because that would also stop you from doing clever things.
One utility you might want to look at is rsync. I've yet to employ it, but it
might be useful. I have a similar problem to the one you've brought up, Oko,
and have been using a CVS+SSH solution for months on OpenBSD, FreeBSD,
and Windows XP machines.
Over the years, I've made use of scp, tar, cvs, nfs, samba, and sshfs. For
small projects, I usually use a simple tar & scp operation to move it from
workstation to file server, and then pull it from the next machine that needs
it. Important stuff now gets stored under revision control, either locally or
committed to a central repository at home. Any kind of data that I always carry
around with me (like my vimrc), regardless of machine, gets placed under CVS.
Local modifications are tested, then committed back to the repos. on the OpenBSD
box, and other machines then periodically update from the CVS repository.
Anything that isn't suitable for use on all machines has a date with the M4
preprocessor or symbolic links right after checkout. Conflicts are, of course,
dealt with in the usual manner.
There are many version control systems available. If you have any intention of
shopping around for one, bzr, svn, and git are worth checking out, especially
if you'll be dealing with conflicting changes. My file server runs OpenBSD,
which ships CVS in the base system because that is what the OpenBSD project
uses, and thus that is what my personal repository runs -- I can live with CVS
perfectly well under that condition.
All physical data is of course backed up separately from the repository to
avoid a single point of failure.... It is also possible to compress the CVS
repository and bring a copy along via E-Mail or flash drive. It makes
working on an important project much nicer. If you use a setup like git, it
is even possible to have each machine that pulled a copy of files, in effect
become a living backup of your repository.
Trust me -- using a VCS/SCMS is worth it. The advantages you mention are
there, and the disadvantages basically depend on the software you choose to
use and the precise situation you face.
Getting the current copy of your files is trivial, as long as you've committed
them and can access the repository. Getting older versions of a file, or of
all the files, is somewhat the point of such software, not just an advantage.
One thing that you also gain, is the ability to maintain a log of your changes,
which you wouldn't have with a file synchronization or network mounting
solution. File sync tools like rsync, or network mounts like SSHFS or NFS,
help avoid the need to maintain current versions of files by hand, but they
suck during times without network access to your server. Things like RCS and CVS
help deal with files that will change, need history, and may have
multiple versions running around. The rcs program in the base system is a very
basic Version Control System (VCS), but IMHO it is less well suited to files that
are regularly used. You could think of rcs, cvs, and svn (subversion) as the
ed, ex, and vi of their problem domain.
Everything that tools like CVS do can be emulated with the file system, but
then it doesn't deal with the problem of getting lazy or "dang, I
forgot to ..." as cleanly as CVS does. Distributed Source Code Management
Systems (DSCM), or SCM systems capable of distributed work flows, can also be
handy (git, bazaar, perforce), especially if you want to maintain a file set
under version control on your computer while playing around, then push the
entire set of commits to a central repository when you are sure that what you
are committing, is what you want. (A good system also makes it easy to pick &
choose what parts you want to commit; some can even let you 'rewrite history'
before pushing it out, so to speak.) An SCMS is also much better suited to
situations where you have to live without network access to your home server.
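That commit-locally, push-later work flow looks roughly like this in git; a local bare repository stands in for the central server at home, and all paths are placeholders:

```shell
#!/bin/sh
# Sketch of the distributed work flow: commit offline, push the
# whole series later. A local bare repo stands in for the server.
rm -rf /tmp/central.git /tmp/clone
git init -q --bare /tmp/central.git

git init -q /tmp/clone
cd /tmp/clone
git config user.email you@example.com
git config user.name "You"
git remote add origin /tmp/central.git

# work offline: several local commits, no network needed
echo "v1" > file.txt && git add file.txt && git commit -qm "first"
echo "v2" > file.txt && git commit -qam "second"

# later, with access to the server, push the entire series at once
git push -q origin HEAD
```

Every clone that has pulled from the central repository carries the full history, which is the "living backup" effect mentioned earlier in the thread.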
Thou shalt check the array bounds of all strings (indeed, all arrays), for surely where thou typest ``foo'' someone someday shall type ``supercalifragilisticexpialidocious''.
Last edited by TerryP; 2nd February 2009 at 04:55 AM. Reason: spell check
However, if mere synchronization is the fundamental problem, then setting up NFS is even simpler, plus you don't have to worry about committing the local sandbox copy all the time, because with NFS whatever is being edited is the only copy.
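For completeness, a minimal sketch of that NFS setup; host name, network, and paths are invented:

```
# /etc/exports on the file server (placeholder network):
/home/shared -network=192.168.1 -mask=255.255.255.0

# /etc/fstab entry on each client (placeholder host and mount point):
fileserver:/home/shared /mnt/shared nfs rw,nodev,nosuid 0 0
```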