It’s actually not that slow if I just do a single get operation. I’m using multiple PHP scripts to import a MySQL database into Couchbase through PHP classes (the PHP classes hold the business logic and know how to format the MySQL data into JSON documents). I was getting around 3k ops/second. All of a sudden, I started getting 1.4k/second. I removed two gets from the PHP classes and the speed came back.

This is my setup.

Version 3.0.3-1716-rel on Mac OS

I have one bucket with 8 GB of RAM assigned to it.

(Question) Where do I check the cache size…?

I haven’t tried to add a cluster yet. I will test it out.

I’ve also noticed that increment/decrement operations slowed down as well.

The bucket currently has 20 million documents.

*** I think I should change the title. The performance is okay; it’s just slower than it was. ***

If you can start to pare down the document sizes and increase the number of documents, it may make life easier. 250KB is a very large doc, as you noted. Smaller documents with more lookups == higher operation rates, and typically more of the data fits in RAM, so overall lower latency in the long run.

That is a document modeling exercise, though, so it might be out of scope for your current activity, but I wanted to mention it.
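To illustrate the modeling idea above, here is a minimal PHP sketch of splitting one large document into a small parent plus per-item child documents referenced by key. The function name, key patterns (`order::`, `orderline::`), and fields are all hypothetical examples, not from any SDK or from the original poster's schema:

```php
<?php
// Sketch: break one large "order" document into a small parent document
// plus one small document per line item, linked by key references.
// All names and key formats here are illustrative assumptions.

function splitOrderDoc($orderId, array $order) {
    $docs = [];
    $lineKeys = [];

    // Store each line item as its own small document.
    foreach ($order['lines'] as $i => $line) {
        $key = "orderline::{$orderId}::{$i}";
        $docs[$key] = $line;
        $lineKeys[] = $key;
    }

    // The parent keeps only its scalar fields plus the child keys,
    // so reads of the header no longer pay for the full payload.
    $parent = $order;
    $parent['lines'] = $lineKeys;
    $docs["order::{$orderId}"] = $parent;

    return $docs;
}

// Example: each entry in $docs would then be upserted as its own document.
$order = [
    'customer' => 42,
    'lines'    => [
        ['sku' => 'A1', 'qty' => 2],
        ['sku' => 'B7', 'qty' => 1],
    ],
];
$docs = splitOrderDoc(1001, $order);
```

The trade-off is exactly the one described above: more (smaller) documents means more lookups per logical record, but each individual operation is cheaper and more of the working set stays resident in RAM.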