With SQL Server (or any RDBMS) there are known backup and recovery strategies and a WAL as added security. What kind of disaster recovery strategies are you employing with MongoDB?
That is an excellent question. Backing up a live SQL Server instance is straightforward because it supports Volume Shadow Copy Service (VSS) snapshots. The same isn't true for MongoDB.
Instead, we had to resort to running our MongoDB database on a three-node replica set, and that is our main strategy for resiliency. Additionally, one of the nodes does a daily full-database dump. The dump is almost guaranteed to be internally inconsistent, but it still gives us an extra degree of peace of mind.
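For reference, a daily dump like the one described can be a single cron entry; the schedule, host, and paths here are illustrative:

```shell
# Hypothetical crontab line: dump the full database from this node at 02:00 daily.
# --oplog also captures oplog entries written during the dump so that
# mongorestore --oplogReplay can produce a consistent point-in-time snapshot
# (only meaningful when dumping a replica set member).
0 2 * * * mongodump --host localhost --oplog --out /backups/mongo-$(date +\%F)
```

Without `--oplog`, a dump of a live server reflects different collections at different moments, which is the inconsistency mentioned above.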
So, when your data volume exceeds what fits in RAM, I'm guessing your plan is to shard across multiple MongoDB servers. Do you plan to keep adding replicas to each shard to handle DR?
What is really important is keeping your indexes in RAM; our data already greatly exceeds the amount of RAM we have available. Even our indexes are only partially in memory, and performance is still terrific.
2.0 has a new index format that should reduce your index sizes by ~20-30%, letting more fit in RAM. If you haven't looked at upgrading yet, it is probably worth testing with 2.0.1 to see how it performs in your use case. Note that existing indexes aren't converted automatically: you will need to run reIndex() (or restore from a dump) to take advantage of the new format.
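As a sketch, converting a collection's indexes after the upgrade looks like this in the mongo shell (the collection name `events` is just an example):

```javascript
// Run in the mongo shell after upgrading the binaries to 2.0.x.
// reIndex() drops and rebuilds every index on the collection, writing
// them in the new, more compact 2.0 on-disk format. It takes a write
// lock while it runs, so do it per-collection in a maintenance window.
db.events.reIndex()

// Compare index footprint before and after (value is in bytes):
db.events.stats().totalIndexSize
```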
Worth noting that you can define a secondary as hidden: true (so it never gets promoted to primary) and run it with a slaveDelay of X hours... it's a great way to keep a rolling backup.
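A minimal mongo shell sketch of that setup, assuming the delayed node is member 2 of the replica set and a four-hour delay:

```javascript
// Reconfigure member 2 as a hidden, delayed secondary.
// Hidden members must also have priority 0, which is what actually
// prevents them from ever being elected primary.
cfg = rs.conf()
cfg.members[2].priority = 0
cfg.members[2].hidden = true
cfg.members[2].slaveDelay = 4 * 3600  // delay behind the primary, in seconds
rs.reconfig(cfg)
```

The delayed member then stays a rolling X-hours-old copy of the data, giving you a window to recover from an accidental drop or bad write before it replicates everywhere.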