Wednesday, February 10, 2010

CUDA, DirectX and OpenCL. Wow, how great that technology is. Simply write your C code and have it executed on your GPU on up to 512 cores in parallel (NVIDIA Fermi architecture). I have seen many interesting approaches that use these methods. Besides the simulation and numerical computation folks, like physics or weather forecasting, there are a few approaches that use GPUs for databases: simply execute a few operations on the GPU, like joins or aggregations. OK, let's have the smart people do those things. But what about data security at this point? Let's look at CUDA. According to the documentation, all data is kept in device memory as long as the app runs. The GPU hardware is only isolated by a driver, which should actually prevent other applications from accessing data on the device. Well, I don't think the driver developers have paid too much attention to data security yet.

Here is my challenge. I have one approach for accessing another program's data in device memory. Does anyone else have one?

Wednesday, August 19, 2009

I have been developing applications for 32-bit systems ever since I was 13. This year I got around to my first 64-bit app. Amazing. When the vendors introduced 64 bit for common use (aka desktops and laptops) I started thinking, "what the hell kind of app would need 64 bit on a laptop?" The app I got to work on was an index for data. Nothing too fancy, but very data-centric. Since the goal was very fast operations on the index, we had to tune a few knobs. One of those knobs is the address space: virtual 32-bit address spaces. The principle is that you store one base pointer in your system and then reference everything relative to that base pointer, not with 64 bits but with 32. What is that useful for? Well, if you have a tree structure (like an AVL tree or a B-tree), one of the main concerns is the node size. Since the nodes of a tree contain pointers to their children, 64-bit pointers consume more space than necessary. To get rid of this overhead, simply use the reduced address space. You can of course tailor the offset width to your own space requirements.

Since the bottleneck between CPU and RAM will exist for the next N years (with N >> 10), we still need to reduce the number of bytes pushed around between RAM and CPU. Reducing the size of pointers is one approach.

Wednesday, August 12, 2009

For many years the rotating monster (the hard drive) has been one of the big points of interest for the algorithms around database systems. Indexes were created to retrieve and access data faster than the disk alone allows. With SSDs, a faster persistent storage medium is being established. At first it seems that with very fast reads we have to buy a high penalty for sequential writes (something like 1:10 in terms of throughput). So you can actually design different index structures and algorithms for SSDs and HDDs that work optimally on one kind of hardware and suboptimally on the other. But over time, product development will definitely reduce the write penalty of SSDs. Not completely, but quite significantly. So now what to do with a disk? What about another storage layer between RAM and HDD?