Sharing memory between devices?

Beta 4 is an exciting release, but unfortunately I don't have a Radeon HD to try it with.

Is it possible with Beta 4 to create one device using the CPU and one using the GPU and "share" memory between them, so that part of the computation can be done on the CPU and part on the GPU?

I'm not thinking about executing the same kernels on the CPU and GPU and making them compute in parallel, but rather about separating the task into several sub-tasks, each computed on the CPU or the GPU depending on where it fits best, so as to use 100% of the platform.

Create a context using the CPU and GPU as devices. Then create a memory buffer using that context; the buffer will be created on BOTH devices. Create two command queues, one for the CPU and one for the GPU. You should then be able to enqueue different kernels on the respective device's command queue, each device operating on its own local copy of the memory buffer. Keep in mind that the two queues execute independently, so if both devices touch the same buffer you have to order those accesses yourself, e.g. with events or by calling clFinish on one queue before the other uses the buffer.
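For illustration, here is a minimal host-code sketch of those steps using the OpenCL 1.0 C API. Error checking is omitted for brevity, the kernels themselves are left out, and whether a CL_DEVICE_TYPE_CPU device is actually exposed depends on your platform and driver:

```c
/* Sketch: one context spanning CPU + GPU, one shared buffer,
   one command queue per device. Error checking omitted. */
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    /* Grab one CPU device and one GPU device from the same platform. */
    cl_device_id devices[2];
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &devices[0], NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &devices[1], NULL);

    /* One context containing both devices. */
    cl_context ctx = clCreateContext(NULL, 2, devices, NULL, NULL, NULL);

    /* A buffer belongs to the context, so it is visible to both devices;
       the runtime manages the per-device copies. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                1024 * sizeof(float), NULL, NULL);

    /* One command queue per device. */
    cl_command_queue cpu_q = clCreateCommandQueue(ctx, devices[0], 0, NULL);
    cl_command_queue gpu_q = clCreateCommandQueue(ctx, devices[1], 0, NULL);

    /* Enqueue one sub-task's kernel on cpu_q and another's on gpu_q here.
       The queues run independently, so accesses to `buf` from both
       queues must be ordered explicitly, either with the event arguments
       of clEnqueueNDRangeKernel or by calling clFinish() on one queue
       before the other touches the buffer. */

    clReleaseMemObject(buf);
    clReleaseCommandQueue(cpu_q);
    clReleaseCommandQueue(gpu_q);
    clReleaseContext(ctx);
    return 0;
}
```

The key point is that the buffer is created against the context, not against a device, which is what makes it reachable from both command queues.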