What would be the best way to try out new views without suffering the long
computation times on this big dataset? Should I create a "development"
database with only a subset (say, a couple of hundred documents) of the
data, work there until I have all the views I want, and then port those to
the "real" big database?
On Fri, Aug 1, 2008 at 1:42 PM, Jan Lehnardt <jan@apache.org> wrote:
>
> On Aug 1, 2008, at 18:08, Michael Hendricks wrote:
>
> On Thu, Jul 31, 2008 at 07:38:03PM -0300, Demetrius Nunes wrote:
>>
>>> The view I am trying to create is really simple:
>>>
>>> function(doc) {
>>>   if (doc.classe_id.match(/8a8090a20075ffba010075ffbed600028a8090a20075ffba010075ffbf7200c48a8090a20075ffba010075ffbf7200d9/))
>>>     emit(doc.id, doc);
>>> }
>>>
>>
>> You might try changing your emit() to
>>
>> emit(doc.id, null);
>>
>> I seem to recall some discussion on the mailing list that including the
>> document in the emitted value (especially for large documents) can
>> significantly affect view performance.
>>
>
> also, the doc id is always automatically included, so emit(null, null)
> does the trick as well :) for pagination use startkey_docid & endkey_docid.
>
> Cheers
> Jan
> --
>
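Putting Michael's and Jan's advice together, the lean version of the view
can be tried outside the server with a stubbed emit(). A sketch; the stub
and the test documents are hypothetical, and the class id in the regex is
shortened to a prefix of the real one for readability:

```javascript
// Stand-in for CouchDB's emit(), collecting rows so the map function can
// be exercised locally. Inside CouchDB, emit() is provided by the server.
const rows = [];
function emit(key, value) { rows.push({ key: key, value: value }); }

// Lean map function: emit neither key nor value. CouchDB records the doc
// id with every row automatically, so the index stays small and builds
// much faster than one that copies whole documents into the value.
function map(doc) {
  if (doc.classe_id && doc.classe_id.match(/8a8090a20075ffba/)) {
    emit(null, null);
  }
}

// Hypothetical test documents: one matching, one not.
map({ _id: "x", classe_id: "8a8090a20075ffba010075ffbed60002" });
map({ _id: "y", classe_id: "unrelated" });
// rows now holds a single { key: null, value: null } entry, for doc "x".
```

The `doc.classe_id &&` guard also keeps the map function from throwing on
documents that lack the field, which would otherwise abort indexing of
that document.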
--
____________________________
http://www.demetriusnunes.com