Accessibility and automation technologies - which to prefer?



I am currently studying various accessibility technologies because I want to create a simple (but, I hope, useful) screen reader (it will be my university study project). So far I know of three ways that most screen readers use to access elements on the screen and get their states (for example, whether a button is highlighted, pressed, disabled, and so on):

1. Display driver interception (DDI). I have not tried it yet. It seems tricky and a bit dangerous; it can mess up the whole OS if not used carefully. This was a good option on Win9x, where there were no better alternatives, but Windows now offers other technologies. Still, maybe this approach has some really strong pros?

2. Windows API hooks. Less dangerous than DDI, but still tricky. Pros: the technique is familiar to me, and there are lots of articles and examples on the Web.

3. Microsoft Active Accessibility (MSAA, now superseded by UI Automation in .NET Framework 3.0). Cons: less documented than hooking. Pros: it is the way Microsoft recommends :-). I have already played with it a bit, and it seems easier to implement than option 2. But does it have the same power that hooking has?

So, the question to all those who have experience with these (or other) ways of accessing everything on the screen: can you share your experience and tell me which approach would be the most effective (in terms of coding effort vs. usefulness)? Or is there anything I have completely missed?

Hmm, as far as I understand, DDI or a mirror driver lets me intercept GDI data, e.g. when text is being drawn. But how can I tell from GDI data where the buttons, menus and checkboxes are? To a driver they are just a bunch of lines and colors. Or am I wrong?


That is correct, menus and such things will just be lines and bitmaps. That may not be what you want.

"One technology that was not mentioned in the article was to virtual remote desktop connections to get at the video driver data stream. This technique has the advantage of being very compatible, dynamically installable and simultaneously supporting both local and (increasingly popular) remote or virtual sessions."

is intriguing, but googling for "video driver data stream" did not turn up anything useful.
Anyway, since Windows Vista changed the display driver model, this video-interception approach seems to be becoming obsolete. So the choice is between hooking and UI Automation.