
a) I don't use compiz - it's a broken POS and always was.
b) I don't use lenny. I don't use Debian. I never will.
c) I don't use ancient software, so no X server 1.4.x but something more recent - 1.6.1.
d) Even with earlier xorg-server versions I never saw that mysterious 'tearing'.
e) I am using tvtime, which does not work without Xv - and it works well.

Video tearing is weird; some people see it easily and it really bugs them, while other people (like most developers) don't see it at all. I was checking out the open drivers that shipped with Jaunty while watching Big Buck Bunny - I thought it was really smooth, but one of the guys from our multimedia team came over and pointed out tears every few seconds. I never saw a single one.

My eyes are trained to see those errors; I saw them immediately with VDPAU + composite enabled, too. But compared to fglrx, the OSS ATI driver is smooth. Nvidia is definitely the best driver for video, but the ATI OSS driver is not bad.

Well, 2.6.29 support can be patched - with a small patch if your distro provides some extra files, otherwise with a very large one. But 2.6.30 seems to be really problematic. Once you get it to compile, it uses two symbols which are no longer in the kernel; one could possibly be patched, but the other is used in the binary part, so no go. I don't understand why ATI, which only provides drivers for the latest cards, isn't able to test against a new kernel. Ubuntu even has a collection of every mainline kernel, including the rcs, so they wouldn't even need to compile one themselves. Nvidia somehow manages it for their current cards; for the older ones I am still waiting for official 2.6.30 support, but 2.6.29 support is there for every card.

I think you are talking about "pci_enable_msi".
I just made a small placeholder function like this:

#undef pci_enable_msi
int pci_enable_msi(struct pci_dev *pdev)
{
    int pci_out;

    pci_out = pci_enable_msi_block(pdev, 1);
    return pci_out;
}

and added it somewhere in the fglrx module.
I can't guarantee that it will actually work, because the last time I checked this I was on 2.6.30-rc2, and I have since deleted my patch.

Hmm, I don't really see why the '#define pci_enable_msi(pdev) pci_enable_msi_block(pdev, 1)' from the kernel sources should work any differently from the code you gave. That is, as far as I can see, your code is the same as calling pci_enable_msi_block(pdev, 1) directly, which should be the same as the macro, no?
ps. The macro was checked against linux-2.6.30-rc5.
Edit: Btw, make sure you have CONFIG_PCI_MSI enabled in the kernel or none of this will work. IIRC Catalyst warned about it at one point.

A macro is just a macro: an instruction for the preprocessor to replace one piece of text with another, and the preprocessor only works on source code.
It was never meant to touch binaries.

With fglrx the main problem is that pci_enable_msi is used by the binary-only part of the module:

nm libfglrx_ip.a.GCC4 | grep pci_enable_msi
         U pci_enable_msi

There must be a real function called pci_enable_msi or the module won't work.

In short, the macro is provided to keep the API backward compatible while the ABI changes.
Essentially the preprocessor just replaces the text "pci_enable_msi(pdev)" with "pci_enable_msi_block(pdev, 1)".
