On Sat, 2007-04-14 at 17:10 -0700, Don Hopkins wrote:
> I've been working on a cairo-based tile engine written in C as a Python
> extension (part of my cellular automata machine module), which I now
> have working well enough to display animated cellular automata in two
> modes: as tiles or as pixels.
>
> To render tiles, you pass it an array of cairo surfaces you've read in
> (or rendered), one for each tile, and it draws them by painting the
> surfaces onto a cairo context you pass in.
>
> To render pixels, you pass it one cairo surface which it grabs pixels
> out of, and it draws by storing the pixels into a destination image
> surface. The caller can then render that destination surface on the
> screen itself (scaling if it likes).
>
> Theoretically the tile mode could render tiny 1x1 colored tiles to
> produce the same results as the pixel mode, but it would be much less
> efficient to draw individual pixels by calling Cairo to copy 1x1
> surfaces. So I implemented the pixel mode to support single color tiles
> directly, drawing into an offscreen cairo surface instead of using the
> cairo context.
>
> I've read on the mailing list that Cairo supports 565 ("begrudgingly",
> whatever that implies).
It means that cairo supports 565 only as a compatibility option, but
will _not_ allow you to create new surfaces as 565. Basically, if you
hand it a surface or pixel data that is in 565 format, it can use that
data. But you cannot create 565 surfaces manually. Cairo's native
format is 32 bit RGB with alpha.
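For reference, cairo's native image format (CAIRO_FORMAT_ARGB32) stores each pixel as a native-endian 32-bit word with alpha in the most significant byte and the color channels premultiplied by alpha. A minimal sketch of packing one such pixel, with a hypothetical helper name (not part of cairo's API):

```c
#include <stdint.h>

/* Build one CAIRO_FORMAT_ARGB32 pixel: alpha in the top byte,
 * R/G/B premultiplied by alpha, stored as a native-endian uint32.
 * (Hypothetical helper, for illustration only.) */
static uint32_t pack_argb32(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    /* Premultiply: channel * alpha / 255, rounded. */
    uint32_t pr = ((uint32_t)r * a + 127) / 255;
    uint32_t pg = ((uint32_t)g * a + 127) / 255;
    uint32_t pb = ((uint32_t)b * a + 127) / 255;
    return ((uint32_t)a << 24) | (pr << 16) | (pg << 8) | pb;
}
```

The premultiplication matters if your C code writes pixels straight into a surface's buffer: un-premultiplied data will composite incorrectly.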
The change to the LX and 256MB of RAM will help here, since the GPU on
the LX is more powerful and has more hardware accelerated conversion
operations. We'll need to do some tests, but we anticipate switching
the depth to 24bpp instead of 16.
Under the current builds, you just have to use 32-bit surfaces and let
cairo/X smash them down to 565 on the fly, along with the performance
loss, since the 8888 -> 565 conversion cannot happen in hardware on the
GX. There's not much you can do with 565 surfaces directly.
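That software smash-down amounts to a per-pixel repack: keep the top 5, 6, and 5 bits of red, green, and blue. A minimal sketch of the conversion, with hypothetical helper names (the real cairo/X path also deals with stride, endianness, and possibly dithering):

```c
#include <stdint.h>
#include <stddef.h>

/* Pack one x8r8g8b8 pixel down to r5g6b5 by truncating each
 * channel to its top bits.  (Hypothetical helper, for illustration.) */
static uint16_t argb8888_to_rgb565(uint32_t p)
{
    uint32_t r = (p >> 16) & 0xff;
    uint32_t g = (p >> 8)  & 0xff;
    uint32_t b =  p        & 0xff;
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Convert a whole buffer of n pixels -- this is the work that has to
 * happen in software, per frame, on the GX. */
static void convert_8888_to_565(const uint32_t *src, uint16_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = argb8888_to_rgb565(src[i]);
}
```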
Dan
> But the surfaces it's handing my C code are 32 bit (RGB or ARGB).
> (Or at least that's what's happening on the emulator with a 16 bit 565
> screen -- I haven't tested it on the actual OLPC yet).
> How can I get ahold of the actual 16 bit 565 buffer that X can directly
> and efficiently draw on the screen?
> I've read through the Cairo code, and apparently it has no internal
> support for 16 bit pixels.
> So is it the xlib/xrender back-end that actually has a 16 bit buffer and
> does the 8888 => 565 conversion?
> Is there any way for my C code to get ahold of that buffer to draw
> directly into it?
> Or do I have to puff everything up to 32 bit color, just to let Cairo's
> back-end stomp it back down to 565?
> If there's a 565 screen buffer somewhere, then there should be a way
> for C code to take a cairo context and use it to figure out the 565
> buffer to draw into (in the special case of the x backend).
> (Of course it would be the C code's responsibility to respect Cairo's
> CTM and know the pixel format and stuff like that, but that's just fine
> since it goes with the territory, and is worth doing to make it draw
> efficiently.)
>
> A direct access API to Cairo's 16 bit buffer would make it possible to
> integrate SDL and other rendering libraries (like Mesa) with Cairo,
> without going through another X window, so games and Python extensions
> could draw more efficiently without doing lots of unnecessary format
> conversions.
>
> I took a look at the USInvaders demo that uses PyGame and SDL, and it
> looks like it just makes another X window that SDL draws in directly,
> instead of going through Cairo. So SDL is drawing directly in 565, but
> going around Cairo's back, using a GTK "Socket" window.
>
> -Don
>
> _______________________________________________
> Sugar mailing list
> Sugar at laptop.org
> http://mailman.laptop.org/mailman/listinfo/sugar