Update: There is some new information and some good news at the bottom of the article.
One of the features that was introduced with Mac OS X Leopard was Time Machine. I use Time Machine constantly. I make sure my laptop (which is my primary machine) is backed up before I do anything particularly risky, like running tools that modify my drive, or taking my machine out of the house. That way I know that no matter what happens there is a safe copy of my data waiting at home.
The problem is that Time Machine is not automatic if you are a laptop user. I need to walk over, plug my laptop into a drive, and then wait while it runs. On my system it usually runs quickly, but it still requires me to get involved in the backup. It would be better if it could automatically back up across my wifi network. Apple supports network Time Machine backups between Leopard machines, and sells a backup NAS product, Time Capsule. I have a Time Capsule and a Leopard desktop machine that I use as an AFP server, but I have given up on using either of them for Time Machine backups, since they have corrupted my backups multiple times. Unfortunately, the current Time Machine over-the-network implementation is fundamentally flawed and will never work correctly.
How Time Machine Works
Time Machine works by literally cloning your drive into a subdirectory of another drive. If you search for it on Google you will find references to HFS+ hardlinks and metadata, but those are all internal implementation details to make it run with acceptable performance. If you drill down into a .backupdb bundle you will see several folders, and each one of them is a complete clone of your system at a specific point in time, minus any folders you have chosen to omit.
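The effect is similar to what you can build yourself with ordinary file hardlinks, the way rsync's --link-dest option does. Here is a minimal sketch of that idea, with plain file hardlinks standing in for Time Machine's directory hardlinks; the function and the "unchanged" test are my own illustration, not Apple's actual logic:

```python
import os
import shutil

def snapshot(source, backupdb, prev_snap, new_snap):
    """Clone `source` into backupdb/new_snap, hardlinking any file that is
    unchanged since backupdb/prev_snap instead of copying it again."""
    for dirpath, dirnames, filenames in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        dest_dir = os.path.join(backupdb, new_snap, rel)
        os.makedirs(dest_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dest_dir, name)
            prev = os.path.join(backupdb, prev_snap, rel, name) if prev_snap else None
            if prev and os.path.exists(prev) and \
               os.stat(prev).st_mtime == os.stat(src).st_mtime and \
               os.stat(prev).st_size == os.stat(src).st_size:
                # Unchanged since the last snapshot: hardlink it,
                # so the new "complete clone" costs no extra space.
                os.link(prev, dst)
            else:
                # New or modified: store a fresh copy (copy2 keeps basic metadata).
                shutil.copy2(src, dst)
```

Each snapshot directory looks like a full clone, but unchanged files share storage with every previous snapshot, which is why the scheme performs acceptably.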
This is great in many ways. In particular, it means that applications that use Time Machine don't have to pull files out of some archive format to work on them. That lets Finder navigate through them quickly and hand them to third-party QuickLook filters. It also means that any filesystem those files are stored on must support all of HFS+/HFSX's features, or there will be a loss of fidelity. By fidelity I mean precise accuracy of all details of the file data and metadata, including the full name (in whatever encoding your volume was using), extended attributes, permissions, ACLs, forks, etc.
Historically most filesystems have not been able to store a file that originated on an HFS volume with full fidelity (that is why Apple used to tuck data into ._ files: they were used to stash all the data that would otherwise be lost), though that has been getting better in recent years. While losing some information might be okay when transferring a file to a foreign computer, it is never okay for a backup system to lose it. Because fidelity is such an issue, and because the backup target has to support all of HFS+'s and HFSX's semantics, Apple generally creates HFSX volumes for Time Machine, since they can store the contents of both HFS+ and HFSX volumes with no loss.
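To make "fidelity" concrete, here is a hypothetical sanity check limited to the POSIX-visible metadata that plain Python can reach. The attribute list is illustrative, not exhaustive; forks, ACLs, Finder info, and extended attributes would need platform-specific calls, and those are exactly the pieces foreign filesystems tend to drop:

```python
import os

def fidelity_mismatches(original, copy):
    """Return the metadata fields that did not survive the round trip.
    A faithful backup must report an empty dict here; a real check would
    also cover forks, ACLs, and extended attributes."""
    a, b = os.lstat(original), os.lstat(copy)
    checks = {
        "name": os.path.basename(original) == os.path.basename(copy),
        "size": a.st_size == b.st_size,
        "mode/permissions": a.st_mode == b.st_mode,
        "owner/group": (a.st_uid, a.st_gid) == (b.st_uid, b.st_gid),
        "mtime": a.st_mtime == b.st_mtime,
    }
    return {field: ok for field, ok in checks.items() if not ok}
```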
Backing up to a network
Okay, so in the local case Apple copies files between two drives, and it works great. Once you move to networks, things get a lot more complicated. Beyond the reduced speed, most people are using laptops over wireless. Between the increased length of the backups and the transient nature of the connections, it is much more likely that you will have an interrupted backup (though that can also happen with a local disk-based backup; people love to just unplug drives...). Also, unless you are using something like iSCSI, you can't directly use HFS+ on a remote disk, so something has to change. There are a few obvious solutions, all of which have drawbacks.
1) Use a network filesystem
This would be an ideal solution, if not for the filesystem fidelity issue. There are currently no network filesystems in wide usage that preserve all HFS+/HFSX semantics (particularly if you include the directory hardlink "implementation detail" of Time Machine). Of course Apple has its own network filesystem, AFP, which it could rev to support the features it needs. There are two major problems with that. The first is that most network filesystems leak the semantics of the underlying filesystem. For instance, some SMB volumes preserve case and some don't, and that is a side effect of whether or not the server's filesystem preserves case.
So even if Apple revved AFP, the best it could do is guarantee that AFP served from HFSX using its own server software would have HFSX semantics. The second is that a large number of devices use embedded AFP servers on completely different OSes and filesystems. There is no way Apple can know how netatalk on a consumer NAS serving files off of ext3 will handle things, but it is a good bet it will not match the semantics Time Machine depends on. So Apple would need to either block all third-party devices or implement some sort of mangling in Time Machine to try to preserve all attributes in a way that would be durable. Since everyone hated ._ files the first time, that seems like a bad idea.
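You can watch that leak happen yourself. A quick probe of a mounted share (the probe logic is my own illustration, not anything Apple ships) shows that the answer comes from the server's filesystem, not from anything the client protocol guarantees:

```python
import os
import tempfile

def probes_case_sensitive(mount_point):
    """Probe whether a mounted volume treats 'CaseProbe...' and
    'cASEpROBE...' as distinct names. On an SMB or AFP share the result
    depends on the server's underlying filesystem."""
    fd, path = tempfile.mkstemp(prefix="CaseProbe", dir=mount_point)
    os.close(fd)
    swapped = os.path.join(os.path.dirname(path),
                           os.path.basename(path).swapcase())
    # On a case-insensitive volume the swapped name resolves to the
    # same file we just created; on a case-sensitive one it is free.
    distinct = not os.path.exists(swapped)
    os.unlink(path)
    return distinct
```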
2) Use iSCSI/nbd/AoE
Time Machine already works with an HFSX backup disk connected via USB, so why not just connect the disk over the network? That would certainly solve any potential fidelity issues. The problem is that it introduces a completely separate set of issues. When you lose a network connection while doing a file transfer via a network filesystem, the behavior is deterministic: the last files you sent over got there, the next ones you were planning to send didn't, and the one you were in the middle of may or may not be there depending on exactly what happened, but you can pick up where you left off once you check that one file.
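That determinism is what makes dumb resume logic safe. A sketch, assuming files are visited in a fixed order and that a size match means an intact transfer:

```python
import os
import shutil

def resume_backup(files, dest_dir):
    """Copy files in a fixed order; after an interrupted run, files that
    fully arrived are skipped and the one partial file is re-sent whole.
    This only works because a network *filesystem* fails at file
    granularity; a network *block device* offers no such checkpoint."""
    for src in files:
        dst = os.path.join(dest_dir, os.path.basename(src))
        if os.path.exists(dst) and os.path.getsize(dst) == os.path.getsize(src):
            continue  # already transferred intact on a previous run
        shutil.copy(src, dst)  # incomplete or missing: redo just this file
```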
Disk drives aren't that simple. Since your machine is directly responsible for the block allocation, the remote disk goes through the entire driver stack, just as if it were a local disk. The system does I/O scheduling, block layout, etc. Cutting a network connection is the equivalent of pulling out a USB cable without unmounting the drive. Mac OS X complains when you do that, because it can lead to data corruption. Most of the time it doesn't, but corruption is much more likely if you are in the middle of writing. Now take a situation where the cable is ethereal, it gets cut every time your computer is put to sleep, and it is only connected when it is actively backing up files (doing lots of writes). That is a recipe for unrecoverable filesystem corruption on your backup drive.
The fact that Apple does not include support for any of these technologies in OS X or its embedded storage products certainly does not improve the case for using them.
3) Use a custom protocol
This is what commercial network backup systems do. It lets them deal with disconnects in a sensible way, and they don't care about filesystem fidelity because instead of storing files one-to-one they store each backed-up file as a blob in a database somewhere, along with all of its attributes. This is a lot more work to implement, because now nothing in the backup is accessible through the normal filesystem interface. Depending on exactly how they implemented it, they might be able to do it on a network filesystem or a raw network block store, or they might need a custom server.
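Roughly, "file as a blob plus its attributes" looks like this; the SQLite schema and attribute set here are invented for illustration:

```python
import os
import sqlite3

def store_file(db_path, path):
    """Store one backed-up file as an opaque blob plus a metadata row.
    Fidelity stops being the target filesystem's problem: every attribute
    lives in an ordinary database column, restorable onto anything."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS backup
                   (path TEXT, mode INT, uid INT, gid INT,
                    mtime REAL, data BLOB)""")
    st = os.lstat(path)
    with open(path, "rb") as f:
        con.execute("INSERT INTO backup VALUES (?,?,?,?,?,?)",
                    (path, st.st_mode, st.st_uid, st.st_gid,
                     st.st_mtime, f.read()))
    con.commit()
    con.close()
```

The price is exactly what the paragraph above says: Finder and QuickLook can no longer browse the backup directly, because the files only exist inside the database.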
What Apple actually did…
Okay, so those are the three obvious options. I left out things like "design a whole new local and network filesystem from scratch" as pie in the sky and not doable in the short term, though those are certainly options. Apple did not take any of the three obvious choices. Instead it did something that allowed it to approximate solution number 2 using its existing technology stack. In short, it used HFSX disk images stored on AFP volumes.
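You can create the same kind of image by hand with hdiutil; this is a sketch of the sort of invocation involved, with an illustrative name and size (Time Machine's actual options and naming may differ):

```python
import subprocess

# Create a growable (thin-provisioned) case-sensitive HFS+ image of the
# kind Time Machine stores on the AFP share. The size is an upper bound;
# a sparse bundle only occupies space as data is written to it.
subprocess.run([
    "hdiutil", "create",
    "-size", "500g",
    "-type", "SPARSEBUNDLE",
    "-fs", "HFSX",
    "-volname", "Backup of mylaptop",
    "mylaptop.sparsebundle",
], check=True)
```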
The problem is that doing this has all the downsides of solution number 2. Every time you put your computer to sleep mid-backup, it is like pulling the plug on a hard drive mid-backup. Except that the drive is connected over a slow connection and is thinly provisioned (which makes it seem larger than it is), which makes actually performing fscks on it completely impractical, so they have to be omitted or cut down. And disconnects happen quite frequently, so the OS does not even pester you about disconnecting the drive. It is even worse because it is all happening over a network filesystem, which adds a whole extra layer of indirection and other issues.
If there were some way to make this solution work, it would also mean there is a way to make it safe to randomly unplug hard drives. Trust me, if Apple knew how to do that it would be done, and the OS would not chastise you for doing something stupid when you unplug your USB pendrive without telling it first. Since Apple hasn't figured out how to let you safely unplug USB drives unannounced, it seems like a bad idea to base a backup solution on what is in essence a wireless USB cable that is phasing in and out of existence.
There have been a bunch of great comments, but I want to call attention to one from Dominic. While my recent lost backup occurred even with all the newest updates, the backup itself was created before the latest software update and Time Capsule firmware. It is entirely possible the original corruption happened a while ago but only led to data loss recently. It sounds like if everything you are using is up to date and your backups are not already corrupted, then everything should work. I am creating a fresh backup right now in order to test that out.
If you have not updated, you should make sure you are using at least: