Fedora from an Ubuntu point of view

In the interests of not becoming blinkered to one distribution, I thought I might give Fedora 11 a whirl. Not having used Fedora since FC4, I was surprised to see the adoption of a live CD installation and relieved to avoid a DVD-sized download. Just like Ubuntu it’s well polished – perhaps more so, with its graphical GRUB.

Installation is painless, launching from a desktop icon and going through the same steps as Ubuntu. I like the inclusion of encrypted filesystem support, enabled by ticking the box during the partitioning stage. This is more important in the environment I work in than it might seem. There have been a number of high profile cases of hard disks and laptops being lost within the MoD and it has taken steps to reduce this. All our laptops now use Flagstone and we have had copies of PGP Desktop Encryption bought for us (it’s being distributed through Forces Gateway) for personal laptops. When people look to me to assist them with a Linux install, encryption is always requested.

Speaking of Flagstone, it has a horrible interface. Fedora’s looks nice as does the boot process in general. The Fedora logo fills from white as the sequence completes. While clever, it’s not as clear as a progress bar – I thought it was hung on initial boot.

Once installed (which doesn’t take long), we’re presented with a first-run configuration. After a brief introduction and the license, we’re prompted to create the first user (Fedora, unlike Ubuntu, enables a root account). This would benefit from Ubiquity’s approach, where a suggested username is derived from your full name. Finally we set the time and are asked to submit our hardware profile.

The GDM login screen is welcoming enough. Red Hat has put some effort into fingerprint scanning, so that appears too. I haven’t a fingerprint scanner to test this with, and the HP laptops we use at work running RHEL don’t have theirs enabled.

For users of systems other than Ubuntu, the Free Desktop login sound will be familiar. As a fan of the Blubuntu theme, I also like the appearance. The Gnome desktop is instantly familiar, with the network manager applet, desktop places and icons all where you expect. Menus are sensibly laid out, the only caveats for those familiar with Ubuntu are that the terminal is in Applications->System Tools and that the shutdown and logout buttons are under System (as Ubuntu’s used to be). There are also additional applications to configure a firewall, how users authenticate and to configure SELinux.

At this point, I might mention I’m using VirtualBox under Windows Vista (I keep the Ubuntu system clean on another machine). Installing the VBox Additions brings my first brush with package management in Fedora since FC4. It has improved greatly: yum resolves dependencies well and works well from the command line, with a similar syntax to apt-get. Where it falls down is speed – everything seems to be checked before downloading, then again before installing. Strangely, Presto is available but not enabled by default. Presto downloads delta RPMs – so only the part of a package which has changed is downloaded and upgraded. This makes for a significant reduction in downloads and hence faster updates. I ran two updates, both averaging a 73% reduction in size.
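If you want to try Presto for yourself, it ships as a separate yum plugin; a rough sketch of enabling it follows (the package name yum-presto is as I found it on Fedora 11 and may differ on other releases):

```shell
# Install the Presto plugin so yum fetches delta RPMs where the
# repository provides them (package name assumed from Fedora 11)
yum install yum-presto

# From then on, a normal update should report the bandwidth saved
yum update
```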

It also lacks some of Ubuntu’s better thought out groups and packages; the equivalent of build-essential, for instance, is obtained (for the most part) by “yum install make automake gcc gcc-c++ kernel-devel”. It’s also worth noting that this installed kernel-devel for a version of the kernel other than the one running – preventing the installation of the VBox Additions.
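One workaround (a sketch based on common yum usage, so treat the exact commands as an assumption rather than what the installer does) is to update and boot into the new kernel first, then request headers matching the running kernel explicitly:

```shell
# Update the kernel and reboot into it before building anything
yum update kernel
reboot

# Install headers for the *running* kernel plus the usual build tools
yum install "kernel-devel-$(uname -r)" make automake gcc gcc-c++
```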

Fedora’s licensing policy is also rather restrictive – much more so than Ubuntu’s. I don’t disagree with this policy, but it’s not immediately obvious how to obtain software such as VLC, nor is the rationale behind its absence. It doesn’t take long to find third-party repositories such as RPM Fusion, but I can imagine this being a stumbling block for many Ubuntu users, who already frequently complain about software installation.
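For reference, RPM Fusion is enabled by installing a release package per repository; the URLs below are the ones published at the time of writing and may well have moved, so check rpmfusion.org first:

```shell
# Add the free and nonfree RPM Fusion repositories (URLs may be stale)
rpm -Uvh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm
rpm -Uvh http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm

# After which VLC installs like any other package
yum install vlc
```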

That said, Fedora’s update interface is excellent. Icons are used to show the state of each update – downloading, installing, cleaning up and so forth – as well as identifying updates as enhancements, security fixes or bug fixes. Coupled with a full description of each fix, notifications are clear, offering full or security-only updates. This is a nice touch, especially when you’re on a mobile broadband connection away from home. PackageKit has the ability to automatically download codecs, as with Ubuntu. However, it’s a welcome addition to see that this now extends to the automatic installation of new fonts.

As mentioned earlier, Fedora has a root account enabled. Ubuntu users are used to sudo, which is available, and the alterations required to make it work are simple.
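For the curious, the alterations boil down to membership of the wheel group and one line in sudoers (substitute your own username for the hypothetical dougie):

```shell
# As root: add your user to the wheel group
usermod -a -G wheel dougie

# Then run visudo and uncomment the wheel line so it reads:
#   %wheel  ALL=(ALL)  ALL
visudo
```

After logging out and back in, sudo behaves much as it does on Ubuntu.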

Pulseaudio is implemented, which seems to have had a mixed reception in Ubuntu. I haven’t noticed any issues with this in Fedora and it seems well integrated. I like Pulseaudio and think improved audio control is much needed for Linux to gain mainstream desktop acceptance.

The default filesystem is ext4, which seems stable, although I’m not running exhaustive tests on it. In any event, that’s coming in Karmic too, I believe.

Fedora implements SELinux. Dan Walsh has a much better explanation of this than I can give, available as a PDF. Ubuntu uses AppArmor, although as Jef Spaleta pointed out from OpenWeek (it’s the third comment), this might be replaced by SELinux. From a user’s point of view, this is more or less transparent. There are two tools provided, one to configure profiles and one to troubleshoot. Both work well, though I can’t see the configure tool being ventured into by most users.

Hardware recognition was mostly flawless, in much the same vein as Ubuntu. The only device it had issues with was a Freecom DVB-T USB card: Fedora refused the firmware and just keeps asking for it, even though it’s there and the same firmware works in Ubuntu and Arch. Of particular note, when I installed it on an Acer Aspire One it was the only major distribution I’ve tried on that machine to work out of the box without tweaking; in fact the only thing I noticed was that the WiFi lights are missing, but that’s fixed in recent kernels. With easy encryption, this makes Fedora a potential winner in the netbook market.

I’m impressed by Fedora. It’s familiar and friendly, with a well defined and complete appearance. Delta RPMs are a great idea – especially when we consider that not everyone has a fast internet connection (Sony might want to take this on board, as I wait here for another massive system update on the PS3). Encryption is very welcome, as is SELinux. On the downside, the licensing policy limits the distributed applications and yum is still comparatively slow.

The Development Tools group will get you most of what you need. You’ll notice that KDE and GNOME have their own development groups listed.

In addition, each repository is allowed to generate its own comps.xml file with whatever group definitions it likes. You can even make a repository that contains only a comps.xml and references, with no actual packages. Yum only pulls this file if you use a command that needs it, so the first time you run a group command you’ll see yum pull the file for each repository you have enabled.
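The group commands themselves are plain yum usage, for example:

```shell
# List the groups each enabled repository advertises
# (this is when yum first pulls the comps.xml files)
yum grouplist

# Install everything in the Development Tools group
yum groupinstall "Development Tools"
```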

Second, I think you are making a lot of unstated assumptions about how you think yum works to make a reliable comment about speed. I would encourage you to get more familiar with the logic flow yum uses to determine when repository metadata files need to be reacquired, and when in the process yum hands off to the librpm library.

I think you’re being a little prickly here Jef and would encourage you to take my post for what it is – an experience taken from the point of view of Ubuntu.

I needn’t understand the inner workings of yum in a default install any more than I need understand aptitude in a default Ubuntu install – it’s my experience and that experience is that yum is slower than aptitude. If a number of new users were to try Fedora and find the same experience, would it be wise to dismiss them all as not understanding how yum works?

The point of the post is to draw attention to other distributions, not to insult another distribution.

Prickly? Hmm, not intended to be. I went out of my way to give you some free advice concerning the use of group commands before commenting on your slowness comment.

Let me be clear I’m interested in constructive feedback as to what specifically is slower so that optimizations can be found so things can be improved. Unfortunately, your comparative analysis hasn’t been specific enough to be a constructive start. You’ve assumed an apples to apples functionality comparison across the tools.

I have no problem with someone being critical and pointing out that something is flawed or suboptimal, as long as you can take something specific enough from the critique to work on. There just isn’t enough in what you said to point to an area of operation that can be optimized.. and that’s just a real shame.

I thought prickly was a pretty good word for you Jef…. I generally don’t post on forums, because I just don’t feel like dealing with arrogant people behind an internet curtain, but your comment and subsequent reply just annoyed me.

The poster’s words were clearly a commentary on Fedora from an Ubuntu user’s perspective. Not a technical evaluation. We don’t all grasp onto benchmarks and comparisons like you do, Jef. We have experiences and opinions that don’t need to be technical or based on some specification. If Dougie has an opinion about Fedora, then let him have his opinion without getting flamed for not spending the next week researching why he felt the way he did. Not every person needs to “get more familiar with the logic flow yum uses to determine when repository metadata files need to be reacquired” just to share their experience. Get real.

Now I’m shure at this time you are feling the need to find any grammer erors in my commentso you can feel like a whiner. Hears som good ammo fer ya!

Ha! I’m the last person to comment on grammar or spelling errors. My own writing is the synthesis of Mark Twain’s fondness for alternative spellings and e. e. cummings’ disdain for punctuation.

No we don’t all grasp benchmarks, nor do I expect everyone to. But what I expect is that when someone points out a perceived flaw publicly, that they care enough about the issue to be willing to learn how to provide the necessary feedback so that the initial criticism leads to a technical fix by the people who have the ability to fix a problem.

I’ll be a little more blunt. Is the perceived slowness that traditional apt-get users are seeing in yum associated with the network activity required to pull repository metadata? The metadata cache expiration is configurable in yum. I’ve yet to see anyone making an effort to account for the network activity involved in syncing repository metadata that “apt-get update” does when commenting on yum’s relative speed.

If the original poster wants to feel apt-get is better than yum…he’s free to do that…I can’t tell him his feelings are wrong. But on the issue of the perception of slowness…if he cares enough about that issue to comment on the difference…then I do expect him to be open to instruction on how to provide more detailed information so that it can be better understood why it’s happening. If he doesn’t want to be helpful when asked to provide more detailed information…he should probably refrain from publicly commenting.

I haven’t refused to be helpful, Jef – I just haven’t had time to answer yet.

That said, I fail to understand the issue. This is a straight default install from a Fedora 11 live CD, in comparison with an Ubuntu 9.04 live CD install. If you’re saying it’s unfair because I haven’t taken time to configure yum then I’m being equally unfair to Ubuntu in saying it has an inferior boot appearance, as that can be recompiled with some effort and time too.

Further, if you wish to discuss a benchmark arrangement that would suit you I’m perfectly willing to run it.

Without wishing to fall out with you Jef I think you overstep the mark suggesting I shouldn’t comment publicly, that I don’t understand what’s happening and in particular the suggestion I’m trying to bolster aptitude in some way – I’ve shown you courtesy in my responses and would politely ask you to do the same.

Is assuming apples to apples a bad place to start from given the context of the article? They mightn’t be identical in terms of architecture or implementation but they are identical in terms of function, certainly from the point of view of someone moving from Ubuntu.

There’s a lot of differences in the design goals of apt and yum….and then there are configuration specific things on top of that.

Does apt-get in Ubuntu still require a separate apt-get update command before you can use apt-get for new security updates from a repository? Yum doesn’t require this as an additional step…not even in the background via a cronjob…and will attempt to refresh repository metadata files as needed as part of normal end-user operations. This makes repository metadata refreshes transparent to yum end-users. Why should users be required to do a separate metadata update command to see the latest available security updates?
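To spell out the difference in steps (typical usage of each tool, not a benchmark; the package name is a placeholder):

```shell
# Ubuntu: the metadata sync is an explicit, separate step
apt-get update
apt-get install somepackage    # somepackage is a placeholder

# Fedora: yum refreshes expired repository metadata as part of the command
yum install somepackage
```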

How is apt-get in Ubuntu configured in terms of which mirrors to use? A static mirror list? In Fedora, yum is configured to pull a dynamic mirrorlist from the central MirrorManager service instead of using a static list. This introduces additional network overhead…but gives you some very interesting capabilities. For example, it lets the Fedora Project prune stale mirrors out of public mirrorlists dynamically without users having to reconfigure anything. Another example: a university network administrator can register the university’s network segments with MirrorManager so that all Fedora clients on the local network automatically get handed the repository URL of the university’s local Fedora mirror without any client reconfiguration. That’s incredibly useful in terms of bandwidth management on private networks with a local mirror…and completely transparent to the end-user. Is the functionality of MirrorManager worth the extra network activity to pull a dynamic mirrorlist? I think it is.

Both of these are examples of design and configuration choices which impact when and how much network activity is seen, without even getting to the more complicated issue of how efficiently the depsolving is actually being done. These design and implementation decisions are directly tied to differences in functionality for the use cases the tools are designed for. It’s not an apples to apples comparison when you compare apt-get to yum, or even different configurations of yum (you can bypass the dynamic mirrorlist, lowering the network overhead). Because of the complexity of how the tools do their work, constructive criticism must consist of something more significant than “it feels slower”; there must be some effort to pinpoint which interactions are slower and to isolate functionality that is network I/O or disk I/O bound from CPU-intensive tasks.

The slowness appears to me exactly because of the implied update step and mirror selection that yum does whenever I want to install a package. Mirror selection seems to take ages on some connections, without any indication as to what is happening; yum ‘seems’ frozen.

Sometimes, installing machines on-site at customer locations, silly things are overlooked, like bad network settings. I find myself staring at a blank yum screen until after minutes it finally tells me it cannot resolve, or something like that.

On very rare occasions, it even picks a bad mirror and the whole process virtually never completes.

Yum seems to have pretty progress bar indicators and some simple terminal magic going on during the actual package installation. It would be tremendously helpful if it gave some progress indication during mirror selection too.

Want to see how long it takes to pull the mirrorlists from each enabled repository? That’s a two line patch into YumRepo.py in the _baseurlSetup function.
at the beginning of the function add
mirrorlist_st = time.time()

at the end of the function add
verbose_logger.debug('%s: Mirrorlist time: %0.3f' % (self.id, time.time() - mirrorlist_st))

this will time all the actions of _baseurlSetup which includes pulling the mirrorlist URL and doing some sanity checking.

Then when you run a yum -d 3 command… you’ll get a little timing message for each repository when it sets up the base URLs, including parsing any remotely pulled mirrorlists.

Note that mirrorlist information is cached, so the mirrorlist grab over the network only happens when the cache expires. How quickly the cache expires is configurable. Traditional apt-get users may feel more at home configuring yum to never expire its cache, so that it behaves more like apt-get, where you have to explicitly request a metadata sync from repositories before doing any actions. If that is what you prefer, all you need to do is set yum’s cache to never expire in yum.conf and then manually run “yum makecache” when you want to sync repository information, including the mirrors to use. Setting yum up this way is a closer apples to apples operational comparison.
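Concretely, that setup looks something like this (standard yum.conf options, shown here as a sketch):

```shell
# In /etc/yum.conf, under [main], stop metadata expiring on its own:
#   metadata_expire=never

# Then sync repository metadata explicitly, apt-get update style:
yum makecache
```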

Hum,
I live on a tropical island with a poor Internet connection.
Yum is so slow that you effectively have to enable the auto mirror list and delta RPMs (it is unusable without them). Even with these technologies, downloads are still really slow compared to every other distribution (Mandriva’s RPM tooling is faster too), and PackageKit spends so much time querying the package mirror list that you risk drinking too much coffee. Furthermore, PackageKit is poorly designed and partially buggy. SELinux is fairly complex, even if technically better (this remark also applies to some other administrative tools). Apart from these congenital drawbacks, an end user will not find many differences. But be aware that these differences negatively impact your usage, especially if you don’t want to waste your time learning how to configure yum so that RPM seems less bad. Fedora, like Debian, can break totally too (and of course so can Ubuntu). Ubuntu is far from perfect (I’d like to find a better distro that balances functionality, ergonomics, hardware support and community – but none exists that has all of these together). Just understand why Ubuntu, rather than Fedora or Debian, has seen a large increase in its user base. Nevertheless I think that Fedora is technically better; unfortunately, that’s all. If geeks cannot understand why that is insufficient, that’s their problem… Let them be proud to be the self-celebrated superior experts who have made the right choice that most people ignore (and are really happy to)…

FabriceV:
Are you claiming that the download speed of information across your connection to a particular mirror is highly dependent on what application is doing the download? Are you suggesting that when yum downloads a particular URL using the urlgrabber python module, the download speed is appreciably slower than using something like the curl or wget commands to download the same URL?

Would you be willing to provide urlgrabber benchmarks showing its reduced data transfer rates?
-jef

I did not accuse you of deliberately bolstering anything. Trust me. If I were going to accuse you of deliberate manipulation I would not be subtle about it.

I already handed you a reference blog article concerning a yum developer’s thoughts on the benchmark issue.

Here are my thoughts.
For an adequate and repeatable benchmark of yum performance, you’d have to configure both yum and apt to isolate as much network activity as possible. In the case of yum that means, at a minimum, configuring yum not to expire its cache and manually running yum makecache as a preparation step to pull all repository metadata files into the local cache. Should I assume that apt-get update does all the necessary metadata caching, similar to yum makecache?

In addition I would do any yum install or update commands twice. Once with the --downloadonly option (via the downloadonly plugin) to pull required packages over the wire into the local cache, and a second time, which will use the cached packages to complete the transaction, giving you a transaction time not tainted by the network activity associated with downloading the packages.
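Put together, a single timed transaction along those lines might look like this (the package name is a placeholder, and the downloadonly plugin is assumed to be installed):

```shell
# One-off preparation: pull all repository metadata into the local cache
yum makecache

# Pass 1: download the packages only; the network cost happens here
yum install --downloadonly somepackage

# Pass 2: complete the transaction from cache and time it, giving a
# depsolve + install figure untainted by download speed
time yum install -y somepackage
```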

Ideally you’d want to use local repository mirrors so that repeated runs of the benchmarking tests won’t be impacted by network congestion at any point if network activity is happening that you aren’t aware of.

If you want to make a comparison with apt in a real-world scenario, you’d have to take some care to make sure both tools are pulling data from package collections of similar complexity and size. This can be tough to do systematically unless you set up a synthetic repository for both tools, with specially crafted packages, so that you control the complexity of the repository’s dependency graph. Packaging policy determines how interconnected a given repository is and how much dependency information has to be processed, not the tool that processes the information. Comparing the performance of two different tools by looking at repositories that differ significantly in their dependency maps may give you a benchmark that measures the efficiency of packaging policy more than the performance of the tool, and you may interpret the result wrongly.

For the first cut, I would disable all update repositories and just look at the static repositories associated with a distribution release. Update policies differ wildly across distributions, and that affects the size and complexity of the update repository structures. These differences may matter more than tool performance. What if, at the end of the day, the slowness is attributable to the size and complexity of the update repository layered on top of the release repository, more than to the objective performance of one tool compared to another? If that’s the case it’s not really fair to point to the tool; the correction would need to be in the update policy.

Make sure the package transactions you are going to be repeatedly testing are comparable in terms of the number of dependencies that need to be resolved. Obviously something that has many, many deps to fill will mean more work for the tool. Don’t assume a package with the same name in one distribution is built the same way in another and requires the same number of additional packages. Here again, packaging policy across distributions can play a role in a way that makes it harder to determine the actual differences in tool performance. Going further, track transaction complexity across a large variety of installation commands, and trend performance as a function of dependency complexity.

And no matter what you do… publish the exact tool configurations and the benchmarking commands..so that the same methodology can be repeated and verified.
-jef

Not unreasonably slow.. but perceptually slower yes? I’m still interested in helping you quantify that perception into something specific enough to point towards an area that can be better optimized. I’m still not sure that the perceived slowdown is in the tool versus packaging or update policy which lies outside the scope of what can be fixed in the tool codebase.

Fun fact… there are over 4000 binaries in the fedora 11 updates repository…and over 6700 in the F10 updates repository… repositories that can require metadata syncing on a daily basis. Isn’t this significantly different from Ubuntu updates repositories size?

The size and rate of churn of the updates repository will impact the speed of any client-side tool which must sync that information and use it in any package install or update functionality. The more updates, the more metadata has to be pulled over the wire. The more churn, the more frequently that syncing has to be done, and the more information has to be parsed for depsolving with each transaction.

It would be inappropriate to lay the blame for perceived slowness at the feet of the tool if in reality the tool is performing close to optimally and the perceived slowdown is associated with project-wide policy that sets the churn rate for the update repositories. It can be difficult to distinguish policy from tool performance, but if the goal is to help make things better, we have to try our best to do just that. The worst thing we can do is keep implying that yum is somehow sub-optimal as a codebase, encouraging the yum developers to waste time trying to optimize code further, when the perceived slowdown is in reality tied to packaging or update policy.

Here’s the analogy: you don’t benchmark two different web browsers against each other by pointing them at completely different websites with very large differences in complexity and then conclude anything about relative browser performance. Such a test would mix website complexity in with rendering performance and you couldn’t draw any usable conclusions about rendering speed. No, to benchmark browsers you give them exactly the same information to chew on and make it a fair test of their ability to process the same information.

So it goes with the package management tools: pointing them at different repository structures and assuming the end-user experience is dominated by tool performance instead of repository structure. For that reason, a useful comparison of apt and yum is much more difficult than you probably realize, because they process different information. It’s far easier to compare the performance of yum versus smart as tools, because they can be pointed at the same repository information – just as web browsers can be pointed at the same websites to compare rendering speed.

Regardless of the implementation or design differences between two repository structures, or between the applications that access them, these are encapsulated from the end user. Because they are encapsulated, to the end user each application is the same – they ask it to fetch updates and it does so.

Most Fedora documentation, especially that likely to be accessed by someone migrating from Ubuntu makes reference to yum when installing software as we make reference to apt-get.

That analogy might suggest you would prefer me to change my original statement to suggest that the repositories are slow and not yum! Besides it isn’t totally correct because a browser is directed to the information it is to render by the end user whereas the user is only asking the package manager to carry out an action and is not concerned with how it achieves it. The browser analogy would be more accurate if you were considering rendering engines.

As long as the user perceives the application as a black box (which, regardless of our fondness for the subtlety of a solution as engineers, most users will), would you agree it’s fair to suggest that in one’s experience the update process is slower?

The most correct thing to say is that package management in Fedora is slower, in your opinion, compared to Ubuntu – without mentioning any tool. It is uncharitable to the tool developers to imply their tool is particularly slow when it could be non-tool policy issues they have no control over. If you want to tsk-tsk Fedora at the project level…feel free. But until we can say it’s a tool issue, and not a repository or packaging policy issue that is dominating the perceived slowness, don’t suggest it’s the fault of the tool. It’s not a fair conclusion to draw, so don’t draw it.

Again.. I will use the browser analogy to make my point. You can’t say Chrome is faster or slower than Firefox at rendering if you are pointing the two browsers at entirely different websites with significantly different information complexity. You could try to rationalize such a comparison with your encapsulation argument and say that the difference in website complexity is somehow hidden from end-users and shouldn’t matter…but you wouldn’t, because it’s not a good argument for browsers and you know it isn’t. You can’t handwave away the packaging-policy-driven information complexity here, just as you can’t handwave away the complexity of the website with browsers.

And no, I’m not going to support a blanket statement that suggests updating in Ubuntu is slower than in Fedora until someone somewhere publishes a repeatable methodology that can be agreed on in terms of procedure and what is being measured. If you want to make that statement based on your personal experience, I expect such statements to be backed up with a soundly communicated methodology, instead of anecdotal drive-by comments. If it’s important enough to comment on, it’s important enough to generate statistically valid numbers which inform the discussion. I’m not going to be a party to the propagation of a common misconception that can be shown to be factually incorrect simply because it’s fashionable to believe it to be true. People can talk themselves and others into believing all sorts of untrue things…without intending to be malicious or manipulative.

I’m not attempting to talk anyone into anything, rather the intention of this article was to encourage Ubuntu users to try other distributions.

I’m perfectly entitled to say what I find.

You may not choose to accept that updating is encapsulated from the user; that’s your decision. However, when comparing a live CD there is no choice but to use what is there.

You’re being unfair to suggest I “tsk tsk” the Fedora project when I’ve actually written a very positive article, with a difference of opinion over only a very small part of it. To suggest that I can’t state my experience without carrying out tests to your satisfaction is preposterous – if my experience isn’t to your standards then take no notice of it.

With regard to your analogy, again you discuss the web site it’s being pointed at when in fact the internal operation of the browser would be the comparison – I’m suggesting that the structure of the repository and its operation are akin to the rendering engine whereas you hold that the website is the repository. I’m certainly not handwaving anything – regardless of personal feelings on the mechanics of updating, the process is transparent to the user and in most cases that’s to be expected as the user cares only that their software is updated.

While I’m willing to accept that there are differences between the architecture that offer advantages and disadvantages and that these can be configured and tailored to your specific needs, I’m not willing to back down on the idea that this is for the most part a design choice that is irrelevant to the user.

I regret that you choose to believe I’m attempting to subvert Fedora in some fashion and I certainly don’t intend to do any other developer a disservice by criticising their work. That said, I think anyone reading my comments would appreciate that this is my opinion not an empirical test and that the use of the word “yum” was meant to convey the process of updating in the same way as the phrase “apt-get” conveys this to many Ubuntu users.

If smart were installed by default in both Ubuntu and Fedora then I would agree that it would be the fairest comparison. But it isn’t and if we were to take this approach then it would only be fair for me to address the positives in this review by making the same sort of alterations to a default Ubuntu install. In short, you’re asking me to start from a playing field that those reading the article are not coming from.

To be absolutely clear on this – I am not disparaging Fedora, attempting to spread misinformation, be malicious or manipulative.

At no point have I said that I believe you are attempting to do anything negative. I’ve made no claim or accusation as to your intention. What I am saying is that you’re drawing conclusions that are unwarranted based on the information available to you from the procedure you used. You don’t need to intend to be wrong…to be wrong. The conclusion you are drawing about the perceived difference in speed of yum (which you have admitted, in another comment, is not an unreasonably large difference) is not substantiated by the methodology you chose to perform. The fact that it’s an easy methodology to perform doesn’t make it an adequate one to support the conclusion you have chosen to draw…regardless of your intention.

I am also making several forward-looking suggestions on how to construct a test methodology that would adequately support the conclusion about the speed difference in operation that you say you have experienced.

What if I told you I’ve done my own anecdotal testing and I don’t see what you see? Would you believe me without verifying what I’m seeing for yourself, doing exactly what I’m doing to perform my tests? Or would you find it justifiable to discard my statements based on your preconceived understanding of my intention? I am not discarding your statements, but I also cannot verify them adequately based on the information you have so far presented.

If you did your own anecdotal testing, I would accept your experience as I have no reason to doubt you. I am unsure why you feel that so much time should be spent quantifying the one aspect of the article that is even remotely negative.

You don’t understand how an accurate depiction of a perceived deficiency is useful when determining how to better optimize performance?

You’ve pointed out a perceived deficiency. I make no claim as to whether the deficiency is important enough to have been brought up at all. The fact that you brought it up means it’s important enough for you, and that’s good enough for me to assume you care enough about the issue of package-management performance to want to be helpful in addressing the deficiency. If it’s important enough to talk about publicly, it’s important enough to talk about with enough accuracy that it can be optimized away. So I’m looking to you to provide additional, quantitative information so that something positive can be done to reduce the effect of the perceived deficiency to the level where you no longer feel it’s important enough to bring up.

smart supports multiple repository formats and both rpm and deb packages. It’s available in Ubuntu and Debian, packaged as smartpm.

I would humbly suggest that smart could be used as a common codebase to explore the effect of packaging policy versus tool performance across multiple distributions. You can use the speed of a common version of smart as a baseline against which other timing measurements can then be ranked.

Obvious questions that can be answered:
On Debian unstable, is apt or smart faster?
On Ubuntu Karmic, is apt or smart faster?
On Fedora 11, is yum or smart faster?
Is a package install/update with smart on Debian unstable faster or slower than the corresponding operation on Ubuntu Karmic?
Is a package install/update with smart on Fedora faster or slower than the corresponding operation on Ubuntu?

The performance of smart can be the measuring stick by which you distinguish tool performance from policy efficiency.
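The comparison sketched above needs repeatable numbers rather than a single timed run. Here is a minimal illustrative harness of my own (the `avg_time` helper name and run count are assumptions, not anything from the thread); it averages wall-clock time over several runs using GNU `date`’s nanosecond output, and you would substitute commands such as `smart update`, `apt-get update` or `yum makecache` for the harmless stand-in shown:

```shell
#!/bin/sh
# Sketch of a timing harness: run a command N times and print the mean
# wall-clock time in milliseconds. Assumes GNU date (%N nanoseconds),
# so Linux rather than BSD/macOS.
avg_time() {
    cmd="$1"
    runs="$2"
    total=0
    i=0
    while [ "$i" -lt "$runs" ]; do
        start=$(date +%s%N)              # nanoseconds before the run
        $cmd > /dev/null 2>&1            # discard the tool's own output
        end=$(date +%s%N)                # nanoseconds after the run
        total=$(( total + (end - start) / 1000000 ))
        i=$(( i + 1 ))
    done
    echo $(( total / runs ))             # mean milliseconds per run
}

# Example with a harmless stand-in; on real systems you would compare e.g.:
#   avg_time "smart update" 5
#   avg_time "apt-get update" 5
#   avg_time "yum makecache" 5
avg_time "true" 3
```

One caveat worth controlling for: the first run after a cache flush is dominated by metadata download, so either prime the cache before timing or time the cold and warm cases separately.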

It’s not really relevant to Dougie’s post. He is describing the out-of-the-box experience that an average user can expect to encounter. In this context his observations are completely fair, balanced and accurate. This is the type of information an end user wants when trying to decide which distro to choose without having to go through the installation themselves.

The points you’re trying to make are really the domain of a different conversation, one which you’re completely able to start on your own blog.

A refreshingly balanced and well-written review. It is nice to see people note that there is more in common between the distributions than there is different. At the end of the day most people could pick up either distro and be comfortable, and where differences exist, they are not deal-breakers.