
You are being unfair to ATI/AMD
The video side of Linux (Xorg, the 2D and 3D stacks) is a mess. Yes, it's a progressing mess, but that only means driver devs need to keep up with the changes in the mess.

And in the end we win, because the OSS driver gets more attention.

If it is so hard, then why does Intel never have a problem with Linux support? All their drivers for wireless, graphics, etc. are always rock solid and up to date. ATI needs to retool their drivers so they don't rely on stuff that is going to change in the next kernel or X server. They need to make them more modular so changes can be made with less effort.

tball: I have a Mobility FireGL V5700, which is an HD 3650.
Here is the support thread I started on ubuntuforums for my laptop, if you would like to see all the stuff I have to do to get it to work: http://ubuntuforums.org/showthread.php?t=1166667

It looks simple now that I wrote that thread, but it took me over a month to figure it all out (for Catalyst I only just found the last few tweaks). I consider having to use a patched X server a major tweak.

But yeah, I was too general about Catalyst not working. It doesn't work with my card on any of the supported distros without major tweaking.

But yeah, isn't Arch the s/h/i/t

Note: the Intel wifi driver problem is not a problem with the driver, it is a problem with Ubuntu. It works fine in Arch. Flash is also a problem in Ubuntu but works fine in Arch. In Ubuntu, when I play video games like supertuxkart, the game thinks I am pressing joy-down all the time, but in Arch that is not a problem.


ATI have probably done that as much as they can - but then we start getting into standardising interfaces, which the kernel devs don't like doing, and is one of the key issues of fglrx / kernel incompatibility. They probably do have support easily for new stuff, but due to their development cycle it'll bake for a month or two before being released (by which time things can change again). That's a con - a pro is that we get updated drivers every month!


Hum... standardizing is generally a good thing. I don't know the reasons why the kernel devs don't want to do it, though. Maybe they have a good one. Still, Linux is open source: they can just look at the code, talk to the devs, and make a driver that runs cherry on all distros. The X server and kernel are not distro-specific. The filesystem layout is, though, so ATI should have a way in place for distros to easily set where stuff gets installed.

The release cycle should not be based on things like Ubuntu's LTS cycle. They should base it on, oh... every other kernel and every X server release. Also, from what I remember reading, the Arch dev in charge of Catalyst support, who is on the mailing list, didn't think they were addressing any of the problems that were making it so hard to work with. So a new Catalyst driver update every month was probably just adding to the problem, not solving it.

As I understand it, kernel devs don't like standardizing the internal interfaces because they don't want to be put in a position where out-of-tree concerns end up holding more sway over the internal kernel architecture than in-tree concerns. They want to be free to reorganize, tear down and rebuild anything they think they can improve within the kernel, without being pressured to support every version of every interface going back a decade or more. OTOH they take standardization of userland<->kernel interfaces pretty seriously.


Ok, thanks for letting me know. I guess that is a good reason, and being open source should make up for changes in the structure. I mean, they should let everyone know ahead of time that they will be overhauling something.