Originally Posted by JMJ_coder
Could you explain more?
I'm splitting your questions away from the original thread because your direction is different from that of the OP. Summarizing his situation:
Originally Posted by Oko
It is of paramount importance for me that my files are in sync on all
Promoting an NFS solution in this situation made sense given that edits to the target file(s) would be intrinsically seen everywhere else, since changes were being made to the same
file(s). This didn't lessen the need for backing up; in fact, sharing critical files across a network from the same central location makes backing up more
important. Why? Consider the consequences of losing the file(s). If the sky will fall because they cannot be retrieved, then regularly backing up is very
important. If losing the file(s) is merely an annoyance & the contents can be recreated with nominal effort, backing up is less crucial. How anyone measures "critical"
is a personal decision.
What I have just described are facets of a common repository. Nowhere did I mention version control because the OP never mentioned that reclaiming intermediate versions was ever an issue. Nevertheless, many people mix common repository features with version control because version control systems are also common repositories. But just because apples are round, & oranges are too, doesn't mean that apples are oranges.
So if you are still questioning whether you need to set up CVS to save all files, you need to ask yourself whether maintaining intermediate versions is important. If it is, then CVS/Subversion/etc.
may be in order; if maintaining intermediate versions isn't important, then version control software isn't the best technology match for the underlying problem.
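If intermediate versions do matter, the setup for rc files is small. A minimal sketch of the classic CVS workflow; the repository path ($HOME/cvsroot) and module name (rcfiles) are made-up examples, not anything from this thread:

```shell
#!/bin/sh
# Minimal sketch of putting rc files under CVS.  The repository path
# ($HOME/cvsroot) and module name (rcfiles) are made-up examples.
CVSROOT=$HOME/cvsroot; export CVSROOT

cvs -d "$CVSROOT" init                 # create the repository (once)

# Import the directory that currently holds the files:
cd "$HOME/rcfiles"
cvs -d "$CVSROOT" import -m "initial import" rcfiles vendor start

# Replace the original directory with a checked-out working copy:
cd "$HOME" && mv rcfiles rcfiles.orig
cvs -d "$CVSROOT" checkout rcfiles

# Day-to-day use from within the working copy:
cd "$HOME/rcfiles"
cvs update                             # pull in changes from elsewhere
cvs commit -m "describe the change"    # record your own edits
```

Each machine would keep its own checked-out copy; the repository itself could live on the NFS server, so this doubles as a common repository.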
In your situation:
Originally Posted by JMJ_coder
...I have several systems that I want to synchronize (especially things such as rc files)...
It is unclear whether you really mean "manage" when stating "synchronize", but perhaps you have a common configuration which is shared across numerous machines. This tends to beg for a common repository approach. But it does not answer how the centralized versions will be propagated outward.
However, on NFS, I can't guarantee that one of the systems will have an internet connection 100% of the time (laptop) to access a remote system all the time (maybe a solution with periodic synchronization).
This then means:
- A drastic solution such as moving /etc to an NFS-shared directory is now out of the question, given that you may need to boot when connecting to the network is impossible. Fair enough. But this introduces the problem of synchronizing local copies with those found in the common repository. My guess is that the decision to "push" or "pull" files will be based on the timestamps of the files, but then you will need to worry about whether all systems are using a common time. This is why setting up ntpd(1) is important.
- This is a decision for you to make. You will have to think through whether the ramifications/constraints imposed by setting up a cron(8) script answer the problem you are trying to solve. The common-time problem can be very real & files may be overwritten when they should not have been changed. When it comes to laptops, where usage is all over the place, predicting when the machine will be connected to the correct network may also be difficult. Personally, I back up laptops manually, but that's what works for me (at the moment...).
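If you do try cron(8) for the periodic case, the entry itself is the easy part; guarding against the laptop being off the network is what matters. A hypothetical crontab(5) line, where "central" and $HOME/bin/sync-rc.sh are stand-ins for your server and whatever sync script you settle on:

```
# min  hour dom mon dow  command
*/30   *    *   *   *    ping -c 1 -q central >/dev/null 2>&1 && $HOME/bin/sync-rc.sh
```

Even then, a file half-edited at the moment the job fires will be propagated as-is, which is one more argument for starting with manually triggered syncs.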
So the questions you need to think through are classic issues relating to database replication. Who pushes what, when? And if files are both pushed & pulled, how will discrepancies in system times be resolved?
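The push/pull mechanics can be sketched with rsync(1): its -u (--update) flag implements a "newest timestamp wins" rule, which is also exactly why it falls apart when clocks disagree. A minimal sketch; the paths are assumptions, with /net/central/config standing in for an NFS-mounted common repository:

```shell
#!/bin/sh
# Two-way "newest copy wins" sync between a local directory and the
# common repository.  The paths are assumptions: /net/central/config
# stands in for an NFS-mounted repository.  rsync -u skips any file
# that is already newer on the receiving side, which is exactly why
# every machine's clock must agree (hence ntpd).
REPO=/net/central/config
LOCAL=$HOME/config

rsync -au "$REPO/" "$LOCAL/"    # pull files that are newer in the repo
rsync -au "$LOCAL/" "$REPO/"    # push files that are newer locally
```

Note that "newest wins" silently discards the older copy when the same file was edited in both places; a version control system would at least flag that conflict.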
I would recommend that you figure out solutions which are manually started & live with this for a while. Build your solution incrementally. After some time, you may have more insight into how to deal with automatic synchronization.