Choosing How to Scale Persistence

We have recently been trying to find a decent database scaling solution for a product we are building. We came up with lots of options, many of them fun and interesting, but none of them won our complete agreement. So we set out to find a structured way to choose which road to go down when building a distributed application with distributed persistence.

Brewer's (CAP) Theorem states:
There are three core systemic requirements that exist in a special relationship when it comes to designing and deploying applications in a distributed environment: Consistency, Availability and Partition Tolerance. You can only optimize for two at the expense of the third.
In layman's database terms, the above translates to:

A) Up-To-Date/Speed - reading and writing is consistent and quick, but failures mean down-time.

Possible DB Solution: Splitting the data over lots of databases.
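
To make option A concrete, here's a minimal sketch of sharding: each record is routed to one of several databases by hashing its key. The shard names are hypothetical and a print stands in for the real database call.

```python
import hashlib

# Hypothetical shard list -- in practice these would be connection strings
# for separate database instances.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]

def shard_for(key: str) -> str:
    """Pick a shard deterministically from the record key."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SHARDS[digest % len(SHARDS)]

def write_user(user_id: str, data: dict) -> None:
    shard = shard_for(user_id)
    # Reads and writes for a given user always hit the same shard, so they
    # stay consistent and quick -- but if that shard is down, this user's
    # data is unavailable until it comes back (we compromised recoverability).
    print(f"writing {data} for {user_id} to {shard}")

write_user("alice", {"plan": "pro"})
```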

B) Up-To-Date/Recoverable - the system is fault tolerant and reading and writing is consistent, but more coordination means less speed.

Possible DB Solution: Every write goes to every database.
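
A rough sketch of what option B might look like, assuming a hypothetical apply_write call that stands in for the real network write: the write only succeeds once every replica has acknowledged it.

```python
# Hypothetical replica names standing in for real database nodes.
REPLICAS = ["db-a", "db-b", "db-c"]

def apply_write(replica: str, key: str, value: str) -> bool:
    # In reality this would be an INSERT/UPDATE sent over the network.
    print(f"{replica}: {key} = {value}")
    return True

def replicated_write(key: str, value: str) -> None:
    # Waiting on every replica keeps reads consistent and lets us survive
    # losing a node, but the extra coordination makes each write slower.
    acks = [apply_write(replica, key, value) for replica in REPLICAS]
    if not all(acks):
        raise RuntimeError("a replica rejected the write; roll back")

replicated_write("alice:plan", "pro")
```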

C) Speed/Recoverable - the system is fault tolerant and quick, but the values you read might be out of date.

Possible DB Solution: Local DBs with an aggregation process.
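
Option C could look something like this sketch: each node writes to its own local store immediately, and a background aggregation process merges those writes into a shared view later. The in-memory dicts and queue are stand-ins for the real local databases and message bus.

```python
import queue
import threading
import time

local_store: dict[str, str] = {}      # fast, always-available local DB
aggregate_store: dict[str, str] = {}  # shared view built by the aggregator
pending: queue.Queue = queue.Queue()

def local_write(key: str, value: str) -> None:
    # Quick and fault tolerant: only the local store is touched.
    local_store[key] = value
    pending.put((key, value))

def aggregator() -> None:
    # Runs independently; until it catches up, reads of aggregate_store
    # return stale (or missing) values -- we compromised up-to-date.
    while True:
        key, value = pending.get()
        time.sleep(0.1)  # simulate replication lag
        aggregate_store[key] = value

threading.Thread(target=aggregator, daemon=True).start()
local_write("alice:plan", "pro")
print(aggregate_store.get("alice:plan"))  # probably None: not aggregated yet
time.sleep(0.3)
print(aggregate_store.get("alice:plan"))  # now "pro"
```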

Sadly we cannot have Up-To-Date/Speed/Recoverable all at once. Maybe quantum computing will change that, but for now let's stop trying to solve all three and pick what is suitable for us.

So we have to ask, can we live with...

... some down time whilst we resolve problems? (compromise recoverability)

... the data being stale for a while? (compromise up-to-date)

... users having to wait for their requests? (compromise speed)

Which of the above can we live with? As soon as we have that answer, we have a strong argument for choosing which road to go down.