Data replication is a well-known strategy for achieving availability, scalability, and performance goals in the data management world. However, the cost of keeping several database replicas strongly consistent at all times is very high. The CAP theorem shows that a shared-data system can provide at most two of three properties: consistency, availability, and tolerance to network partitions. In practice, most cloud-based data management systems sidestep the difficulties of distributed replication by relaxing the consistency guarantees of the system; in particular, they implement various forms of weaker consistency models such as eventual consistency. This approach is accepted by many new Web 2.0 applications (e.g., social networks), which can tolerate a wider window of data staleness (replication delay). Unfortunately, however, there are no generic, application-independent, consumer-centric mechanisms by which software applications can specify and manage the extent to which inconsistency can be tolerated. We introduce an adaptive framework for database replication at the middleware layer of cloud environments. The framework provides flexible mechanisms that enable software applications to maintain several database replicas (which can be hosted in different data centers) under different service level agreements (SLAs) for data freshness. Our experimental evaluation demonstrates the effectiveness of the framework in giving software applications the flexibility to express and optimize their requirements in terms of overall system throughput, data freshness, and invested monetary cost.