Blog Archives

Microsoft is suing GPS vendor TomTom over alleged patent violations that might include Linux. Is this the beginning of the big Linux versus Microsoft patent showdown? Microsoft has long asserted that open source somehow infringes on its intellectual property.

My view is this is a great thing for Linux. Really.

You see, to date Microsoft has never formally engaged in patent litigation over Linux-related items. Yes, it has patent covenants with some vendors, including Novell, but the true scope of Microsoft's patent claims has never seen the light of day in a courtroom.

The problem I see (and that many others have commented on over the years too) is that Microsoft has never 'shown its hand' and laid out what its grievances are. Once it does, the open source community could potentially react with prior art to invalidate the patent and/or simply re-code the offending application so it no longer infringes.

By knowing what the issue is, Linux can defend itself against patent claims. Ignorance of the claim is not bliss and is not a defence.

Additionally, thanks in part to the legacy of SCO, the Linux community has resources and organizations that could mount a formal legal defence if the need should arise. As far as I can see, the current Microsoft claim is very specific to TomTom, but the prospect of a wider patent battle certainly exists.

"The Linux Foundation is working closely with our partner the Open Invention Network, and our members, and is well prepared for any claims against Linux," Jim Zemlin, Executive Director of the Linux Foundation, blogged. "We have great confidence in the foundation they have laid. Unfortunately, claims like these are a by-product of our business and legal system today. For now, we are closely watching the situation and will remain ready to mount a Linux defense, should the need arise."

Google is out with the 2.0.166.1 update to the dev-channel version of its Chrome web browser, adding several new features. Among them is one that I frankly hadn't noticed was missing (but it was): full screen support. That's right, friends, you can now for the first time go full screen in Google Chrome with this new dev-channel update.

Another improvement: Chrome will now report on more suspected malware instances. According to Google's code entry:

Any malware resource that we detect on a page is reported if the page that contains it is not in the blacklist AND the user has opted in to reporting stats.
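The quoted condition is just a two-part boolean check; a trivial Python sketch (the function and parameter names are mine, not Chrome's):

```python
def should_report_malware(page_in_blacklist: bool, user_opted_in: bool) -> bool:
    """Report a detected malware resource only when the containing page
    is not already blacklisted AND the user has opted in to stats."""
    return (not page_in_blacklist) and user_opted_in

# A blacklisted page is never reported, regardless of opt-in.
print(should_report_malware(page_in_blacklist=True, user_opted_in=True))   # False
print(should_report_malware(page_in_blacklist=False, user_opted_in=True))  # True
```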

Looks like another solid week of progress for Chrome development. This is one project that sure seems to be moving fast, on Windows at least. There is still no Mac or Linux version of Google Chrome publicly available.

Mozilla is now planning to add a fourth beta to its oft-delayed Firefox 3.1 open source web browser.

Firefox 3.1 will be the first major update to Firefox 3.0.x, which was first released in June of 2008 after five alphas, five betas and three release candidates. Firefox 3.1 development releases have been stalled at beta since the Beta 2 release in December of 2008. The plan now, according to Mozilla developer Mike Shaver, is to push out Firefox 3.1 Beta 3 next week, with Beta 4 to follow six weeks after that.

Firefox 3.1 is set to include a new JavaScript engine and new HTML 5 features, as well as Private Browsing (aka porn mode).

Given the time it has taken to push out Firefox 3.1, some developers are now calling on Mozilla to rename Firefox 3.1 to Firefox 3.5.

"Given all the efforts that went into FF3.1 and given its prolonged schedule and expanded scope, I was wondering whether it might make more sense to name it Firefox 3.5, just as Firefox 1.1 was renamed Firefox 1.5?" Mozilla developer Simon Paquet wrote in a mailing list posting. "That way we would more clearly communicate to users that this isn't just a minor update but a major step forward in many areas."

I personally see 3.1 as a big release, and agree with Paquet. Time will tell whether or not the people that make the naming decisions at Mozilla will agree.

A little bit of buzz today about the United Kingdom going open source - kinda/sorta. The BBC reports that the British government would "..ensure that open source solutions are considered properly and, where they deliver best value for money are selected for Government business solutions."

Sun's Simon Phipps blogged that the move will advance the digital tipping point for open source in the UK. Phipps noted a few key provisions that he is keen on, which include:

support the use of Open Document Format (action 8);

work to ensure that government information is available in open formats, and it will make this a required standard for government websites (action 8);

general purpose software developed by or for government will be released on an open source basis (action 9).

The new UK initiative, however, is not a wholesale rip and replace of the proprietary tools it already uses. It does not restrict the use of proprietary software either, but rather 'supports' open standards over closed proprietary lock-in.

Yes, this is a move in the right direction, since lock-in does not benefit government transparency. The standard open source argument that open source leads to better (lower) costs may well also be in play.

Don't forget, though, that Microsoft will argue (and has) that it uses open standards too (as it does) and that it too has open source software (check out CodePlex for a list).

In my opinion, the shift to open isn't something that will hurt tech vendors - but it might help to further encourage those that are not open standards based to rethink their ways.

Red Hat's Fedora Linux 10 has been out since the end of November 2008 and is now hovering around the 1 million installations mark. Fedora measures active installations by counting systems that check in with the update repositories, which indicates how many installations are in use.

Compared with Fedora 9 over a similar period, Fedora 10 is at 115 percent adoption (that is, a greater adoption rate over the first 12 weeks of release for Fedora 10 than for Fedora 9).

Apple is out with the first public beta of its Safari 4 web browser, with a claim of running JavaScript 4.2 times faster than Safari 3. JavaScript isn't the only thing that makes a browser faster, but it's a measurement that all browser vendors are competing on lately.

The general idea is that modern web sites use a lot of JavaScript and as such the faster a browser can deal with JavaScript, the faster the browsing experience will be for the user.

In its official press announcement for the Safari 4 beta, Apple claimed that its new Nitro JavaScript engine "..executes JavaScript up to 30 times faster than IE 7 and more than three times faster than Firefox 3. Safari quickly loads HTML web pages three times faster than IE 7 and almost three times faster than Firefox 3."

Apple does not make any mention of its relative performance against Google Chrome (which like Apple uses the WebKit rendering engine) or Opera. Mozilla is currently working on the TraceMonkey JavaScript engine for Firefox 3.1 which in my opinion will likely serve to change the results. Microsoft too has made speed improvements with its IE 8 browser so the gap between Safari and IE is likely to be a whole lot narrower.

Beyond speed, Safari 4 includes some HTML 5 support and new CSS functionality that web developers will notice. On the end user facing side Apple has taken a page from Chrome's playbook with a Top Sites feature that shows users which sites they've most frequently visited.

Apple has also improved Search with a full history search that looks through web addresses and titles to help users find what they're looking for.

"Apple created Safari to bring innovation, speed and open standards back into web browsers, and today it takes another big step forward," said Philip Schiller, Apple's senior vice president of Worldwide Product Marketing, in a statement. "Safari 4 is the fastest and most efficient browser for Mac and Windows, with great integration of HTML 5 and CSS 3 web standards that enables the next generation of interactive web applications."

Safari isn't just for Apple Mac users either. The new beta is also available for Windows (though there's no Linux version).

Where is Firefox 3.1? It's a question being asked by Mozilla developers and others as the release date continues to slip. Currently Firefox 3.1 is at Beta 2, with a Beta 3 coming - well, when it's ready.

Firefox 3.1 when ready will include a host of new features in the open source browser, but in my narrow world-view it is the expected performance improvement from the Tracemonkey JavaScript engine that will be its marquee feature. Tracemonkey is the next generation JavaScript engine from Mozilla and it will compete against Google Chrome's V8 and Safari's SquirrelFish Extreme.

Yet Tracemonkey has proven time-intensive for Mozilla developers to get fully stable for the Firefox 3.1 release, leading some to call for removing Tracemonkey from the release.

Here's my opinion: Removing Tracemonkey from Firefox 3.1 would be a major tactical and strategic error for Mozilla. As such, Mozilla should release Firefox 3.1 only when it's ready, Tracemonkey and all.

For better or for worse, speed is a major bragging rights claim in the modern browser wars. (Of course there are still other elements of a browser beyond JavaScript speed that make the overall browsing experience.)

The Mozilla Firefox 3.0.x browser is still a solid, reliable and fast browser. Mainstream users can wait for Firefox 3.1 until it's as feature complete and stable as Mozilla can make it.

From the great minds that brought us the Hoary Hedgehog, Intrepid Ibex, Dapper Drake and Jaunty Jackalope comes the next wacky name for an Ubuntu Linux release: Karmic Koala.

Ubuntu has always had wacky names, and Karmic Koala continues the tradition. The official release name is Ubuntu 9.10, meaning an October 2009 release; the next Ubuntu release, Jaunty, is due in April.

Ubuntu founder Mark Shuttleworth has already given some indication of what he wants the Koala to achieve, and once again he's aiming high.

As high as the clouds in fact.

"A good Koala knows how to see the wood for the trees, even when her head is in the clouds," Shuttleworth wrote in a mailing list posting. "Ubuntu aims to keep free software at the forefront of cloud computing by embracing the APIs of Amazon EC2, and making it easy for anybody to setup their own cloud using entirely open tools."

Yup, you read that right. Ubuntu is working on cloud computing, including the development of a build-your-own-cloud technology called Eucalyptus.

Shuttleworth is also aiming to further improve the Linux desktop experience with Koala:

The goal for Jaunty on a netbook is 25 seconds, so let's see how much faster we can get you all the way to a Koala desktop. We're also hoping to deliver a new login experience that complements the graphical boot, and works well for small groups as well as very large installations.

One thing is for sure from where I sit - amongst all the names that Ubuntu has ever had, the Koala is likely the first one that will inspire a degree of 'cuteness'. After all, have you ever seen a cute hedgehog, drake, ibex or jackalope?

WASHINGTON -- Security researcher Xinwen Fu took the stage at Black Hat today and claimed that he could break Tor anonymity with a single cell.

Tor is the global onion router network that provides anonymous internet transit for users. Traffic passes through a circuit with multiple hops; Fu explained that the entry point knows where a packet comes from and the exit router knows where the packet goes.

Fu claimed that he had discovered a number of mechanisms by which he could create malicious routers and inject them into a Tor router circuit.

Since the Tor network is made up of volunteers, Fu alleged that it isn't too hard to become an entry router that could capture or somehow learn about traffic.

"It's a volunteer based model and it's a big problem," Fu claimed. "An attacker can inject or 'donate' high bandwidth routers into the Tor network."

To make matters more difficult, Fu claimed that there is no way to defend against his Tor privacy attack, thanks to the anonymity built into the Tor routing protocols, which would make rogue routers difficult to detect.

A few people in the Black Hat audience questioned Fu's claims, noting that his approach could in fact be detected by various means. Fu shrugged and noted that he is working with Tor developers to figure out a real solution.

WASHINGTON DC -- Adobe's Flash format is everywhere on the web, but be warned: Flash files could potentially be carriers of security exploits.

At least that's the allegation of HP security researcher Prajakta Jagdale, who talked about Flash security today in a session at Black Hat DC. There are a number of different types of vulnerabilities that could affect Flash, including information disclosure and cross-site scripting issues. Ultimately, though, Jagdale argued that it comes down to proper coding and validation to secure Flash.

On the low hanging fruit side, Jagdale noted that some Flash developers hardcode username and password information into files. Jagdale used a simple Google search, with the query "filetype:swf inurl:login", to show how easy it is to identify vulnerable Flash sites.

Additionally, she noted that Flash allows for text boxes that can contain HTML values; as such, HTML injection could lead to an exploit.

"You always need to validate inputs," Jagdale said.

Again she did a basic Google search to try to find Flash sites potentially vulnerable to HTML injection, this time with the query "filetype:swf inurl:clickTag". She claimed the search returned at least 200 results, of which, in her analysis, 120 were vulnerable to XSS.

Jagdale advised that in addition to input validation developers should use SSL and should avoid storing sensitive information in the Flash application.
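Jagdale's 'validate inputs' advice generalizes beyond Flash: anything user-supplied that ends up rendered as markup should be escaped or validated first. A minimal Python sketch of output escaping (the `render_greeting` function is my own illustration; `html.escape` is the standard library helper):

```python
import html

def render_greeting(user_input: str) -> str:
    # Escaping turns markup characters into HTML entities, so an
    # injected tag is displayed as text instead of being executed.
    return "<p>Hello, {}!</p>".format(html.escape(user_input))

print(render_greeting("<script>alert('xss')</script>"))
# <p>Hello, &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;!</p>
```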

WASHINGTON -- One of the sessions I was really looking forward to ahead of the Black Hat DC event this year was Adam Laurie's session titled - Satellite Hacking for Fun and Profit.

The session didn't disappoint - Laurie is always entertaining - but it also revealed how much effort is actually required to try to get at satellite signals.

First off, Laurie prefaced his talk by noting that he wasn't going to talk about hacking the actual satellite in space itself.

"I'm playing it safe and just looking at what is coming down," Laurie told the Black Hat audience.

Instead what Laurie focused his talk on was something he called 'Feed Hunting' - that is looking for satellite feeds that are not supposed to be found. Laurie claimed that he has been doing satellite feed hunting for years - at least as far back as the untimely demise of the late Princess Diana in 1997. Laurie claimed that he was able to find a non-public feed from a TV broadcaster that had left their transponder on in a Paris hotel room.

Fast forward a dozen years, and Laurie commented that the technology to identify satellite feeds has progressed dramatically. Among the reasons satellite feed hunting has gotten easier is an open source-based satellite receiver called the Dreambox.

WASHINGTON DC -- With or without your knowledge, your web browser is storing information that could end up leaving you at risk - maybe. That's the gist of a presentation by security researcher Michael Sutton delivered at the Black Hat conference.

Browsers today store data in a variety of ways, including HTTP cookies, Flash local stored objects, and by way of Google Gears and the related HTML 5 storage specification.

With cookies, Sutton discussed an attack vector called client-side cross-site scripting that could potentially let an attacker read one site's cookies from another. Cookies have been used by browser vendors since the earliest Netscape releases and have a limited scope in terms of the amount of data they can hold.

When it comes to Flash, Flash files save data with local stored objects, which are similar in some respects to cookies and are also limited in their storage capacity.

Then there is Gears which provides a fully offline database for online web applications. Gears which began life as Google Gears is a Google technology used for offline Gmail and is also being used by several other third party vendors.

"The problem with Gears could be a data confidentiality issue," Sutton said. "Gears itself is secure, but if it is implemented insecurely by a site, that's where the problems can occur."

Read more after the jump - including one potential attack vector for Gears.

WASHINGTON DC -- We all rely on SSL and HTTPS to secure our web transactions. That's why Moxie Marlinspike's session at Black Hat DC on SSL/HTTPS attacks just blew my mind and has me 'concerned', to say the least.

Marlinspike demonstrated how a new tool he has developed, called sslstrip, can trick browsers into thinking they are on an SSL/HTTPS-secured site when in fact they are not.

The implication is that all the traffic from the regular HTTP site could then be easily collected by an attacker since the information is not secured.

"Lots of time the security of HTTPS comes down to the security of HTTP and HTTP is not secure," Marlinspike told the capacity crowd.

Marlinspike is no stranger to getting around SSL security. In 2002 he released the sslsniff tool, which could be used in a man-in-the-middle attack to inject an illegitimate SSL certificate into an HTTP stream, tricking users into thinking they were on the legitimate SSL-secured site (when in fact they were not).
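The core trick, as I understand it, is that the man in the middle rewrites secure links before the page ever reaches the victim, so the browser never makes the HTTPS request at all. A toy Python illustration of that rewriting step (a gross simplification, not the actual sslstrip code):

```python
import re

def strip_https_links(page_html: str) -> str:
    # A man in the middle can rewrite every https:// link to http://
    # before forwarding the page; the victim's browser then submits
    # forms and follows links over plain, interceptable HTTP.
    return re.sub(r"https://", "http://", page_html)

page = '<a href="https://bank.example/login">Log in</a>'
print(strip_https_links(page))  # <a href="http://bank.example/login">Log in</a>
```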

There is news out today about a 'new' IE 7 flaw. The 'funny' thing is that Microsoft itself predicted such an IE 7 exploit with its Patch Tuesday advisory last week.

Microsoft's own Exploitability Index pegged the flaw as a number 1 which means that the flaw can be replicated consistently and Microsoft expected an exploit to exist within 30 days.

So a little less than 30 days - but Microsoft's Exploitability Index is right on the money.

In my professional opinion, despite what others may write or blog, this new exploit is NOT a zero day. It is NOT at all like the flaw that forced Microsoft to issue an out-of-cycle update last year. This is a flaw Microsoft knew about, fixed, and properly disclosed the risk of in its Exploitability Index. The out-of-cycle update, by contrast, addressed a flaw that was in the wild before any patch existed, with no advance mitigation (which is the definition of zero day in my book).

Bringing this story full circle, Microsoft originally announced the Exploitability Index at Black Hat Las Vegas last summer as a way to be more transparent about what it perceives to be risk. This new IE7 exploit in the wild proves that Microsoft does have a grip on risk - at least this time.

So when Microsoft pegs a vulnerability in one of its own advisories as a 1 on the Exploitability Index, you'd better make sure you update quickly - you have 30 days or less until the flaw is attacked in the wild.

WASHINGTON. Black Hat events are often times when new security exploits are reported and discussed. For me this year, at the Black Hat DC event which kicks off tomorrow (for the Briefings, training is on today), I see a lot of reasons to be very optimistic.

Sure, there is a talk about how to hack satellites that could gravitate toward the pessimistic side, and there is a talk about new techniques for defeating SSL -- but overall, the talks here this year will, in my view, yield improvements in security.

Renowned database security researcher David Litchfield is talking about how to identify a compromised Oracle Database server. Dan Kaminsky (yes, that Kaminsky) is back talking about DNS (he did save the Internet after all), and I expect his talk will yield some interesting observations about the current state of DNS security. Flash, an often attacked but not well understood technology from a security perspective, also gets some Black Hat attention in a session where researcher Prajakta Jagdale will highlight the issues and provide mitigation techniques.

From a pro-active perspective, researcher Ryan Barret is going to talk about how to use Web Application Firewalls (WAFs) to help mitigate all types of threats while Peter Silberman is going to turn Snort IDS (Intrusion Detection System) signatures on their ear to detect issues in host memory.

Sure there are always a few items that emerge from any Black Hat event that could be causes for concern, but with new tools and new techniques to mitigate and protect users against risk - the only true risk is ignorance.

Red Hat and Microsoft have entered into a support and certification deal for each others virtualization technologies. Red Hat Enterprise Linux will now be a supported guest on Windows Server 2008 running Microsoft's Hyper-V virtualization. Microsoft Windows Server on the other side will now be a supported guest on Red Hat Enterprise Linux.

Both sides in the deal, which was announced yesterday (Presidents' Day), noted that it is narrow in scope and does not include any revenues or patent rights. It's a very different deal from the Microsoft-Novell interoperability deal of November 2006. This deal is just about providing support for what users are already doing.

Mike Evans, VP of corporate development at Red Hat, explained to me how the deal actually works from a practical point of view.

"You call the first company that you think you have the problem with, and if it can not be solved, Microsoft or Red Hat will work with the other vendor to come to a resolution for the mutual customer," Evans explained.

I see this as a win-win for Microsoft, Red Hat and even Novell.

Red Hat can now claim that it can support Microsoft without having had to 'sell its soul' as it were, like Novell did. Red Hat did not yield on the patent issues that make the Novell Microsoft deal what it is.

For Microsoft they can now offer a wider choice to users, claiming that Hyper-V supports both major enterprise Linux operating systems.

For Novell, there is now certified competition in the Windows virtualization space. Competition and choice are always a good thing. Instead of just telling customers that they are the only ones supported by Microsoft for virtualization, Novell can now try to compete on features, functionality, performance and, yes, even patent protections.

Choice is always a good thing and that's what this new Red Hat Microsoft deal provides.

It's always a bit of a mystery whether or not you need to use 'www' in front of a domain name - that is, www.example.com versus just example.com.

Sometimes one will refer to the other, and in some cases both will exist, which can end up confusing search engines with duplication. Google, Yahoo and Microsoft have now teamed up on a new search engine standard that provides a solution for the problem of specifying the canonical URL (that is, the preferred form of the address). It's the new link rel="canonical" tag, which can help specify what should be indexed and how.

"When you use the tag, you can indicate the canonical URL form for crawlers to use for each page of content, no matter how it was retrieved," Priyank Garg, Director of Product Management for Yahoo! Search, blogged. "This puts the preferred URL form with the content so that it is always available to the crawler, no matter which session id, link parameter, sort parameter, parameter order, or other source of variance is present in the URL form used to access the page."

Canonical links can also be extremely useful for session-ID-tagged pages that are dynamically generated. Those types of pages tend to be difficult to index and often get a mod_rewrite (that is, the web server rewrites the address to something human readable), but that still leaves two (or more) potential addresses for the same content that a search engine could find.
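Concretely, the tag is a single line in the head of each duplicate page, pointing at the preferred URL. A minimal sketch (example.com and the parameters here are placeholders of my own, not from any of the vendors' posts):

```html
<!-- On a duplicate URL such as http://www.example.com/product?sessionid=123&sort=price -->
<head>
  <link rel="canonical" href="http://example.com/product" />
</head>
```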

Google in its discussion of the new tag gives an example of yet another potential implementation of the link rel="canonical" tag. Google's example uses the Wikia page http://starwars.wikia.com/wiki/Nelvana_Limited, which specifies its rel="canonical" as http://starwars.wikia.com/wiki/Nelvana. According to Google's blog post on this issue:

The two URLs are nearly identical to each other, except that Nelvana_Limited, the first URL, contains a brief message near its heading. It's a good example of using this feature. With rel="canonical", properties of the two URLs are consolidated in our index and search results display wikia.com's intended version.

This is a really interesting development from my point of view that will both add complexity and simplicity to web developers' lives.

On the one hand, we've now got greater control than ever for search engine optimization of pages. On the other hand, this is yet another way to re-write URLs which makes overall site management even more complex than before. Instead of just having URLs and then maybe a few rewritten ones, now you've got to worry about natural URLs, rewritten URLs and then canonical ones. Then again a good Sitemap could really help out there too, keeping it all straight.

At 6:31:30 PM (EST) today, Unix time will equal '1234567890'. That is the number of seconds (so about 1.23 billion) since the beginning of the Unix epoch at 00:00:00 UTC on Jan. 1, 1970.
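You can verify the milestone, and the 1970 epoch, with a few lines of Python:

```python
from datetime import datetime, timezone

# 1,234,567,890 seconds after the Unix epoch (00:00:00 UTC, Jan. 1, 1970)
moment = datetime.fromtimestamp(1234567890, tz=timezone.utc)
print(moment)  # 2009-02-13 23:31:30+00:00, i.e. 6:31:30 PM EST
```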

Neat.

I will admit that at various points in my career I've used Unix time to stump non-admins about time issues (yeah, I know, it's not that funny!). But hey, a log file is a log file, and my default timestamps weren't Eastern Standard.

Today's numerical milestone is a once in a lifetime event and one that is being celebrated at parties around the world today.

Some Star Wars fans think that Episode V "The Empire Strikes Back" is the best of the original trilogy. A key part of that film is Bespin's Cloud City where Lando Calrissian makes a deal with Darth Vader to betray Han Solo.

What does that have to do with Mozilla?

Well for whatever reason Mozilla has chosen 'Bespin' as the name for its new extensible framework for Open Web development -- which to me is just a 'fancy' name for web editor.

Make no mistake about it though: Bespin will be a 'fancy' web editor, with a web browser interface, HTML 5 and built-in collaboration - or at least that's the plan. So in a way you could call it a 'cloud' web editor, though I personally think the 'cloud' term is ridiculously overused.

The initial demo version is interesting and shows the basic direction, but there is still lots for Mozilla developers to do here.


Years ago when I was still a Netscape and Mozilla Suite user I used the built-in Netscape Composer, which had its limitations. Over time (like most of my peers) I moved to Macromedia (now Adobe) Dreamweaver. It'll be interesting to see where this project goes, Mozilla has some solid ideas that could change the way many web developers develop.

After the jump I've embedded a Mozilla vid giving more details and direction.

Over the span of 90 minutes today I got a whole bunch of tweets from people I follow with the message "Don't Click." Apparently it was a clickjacking attack. Clickjacking involves getting the user to click on an element that then triggers a second or hidden element or action. I've written on this topic before; it affects all browsers, even though Microsoft has a 'fix'.

"..the harm was restricted to constant reposting of the link, but we take malicious attacks on Twitter users very seriously and this morning we submitted an update which blocks this clickjacking technique."

Twitter does not provide details on what the fix is (yet at least), but it's pretty easy to see what they've done. It's a frame busting script of some sort.

Back on January 30th I wrote about clickjacking Twitter, and it looks like that particular exploit vector has now been mitigated with the frame buster. With a frame buster, the Twitter login element can no longer be loaded inside a hidden frame on a different site.
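Twitter hasn't said exactly what it shipped, but a classic frame-busting check looks something like the sketch below (a generic illustration, not Twitter's actual code; the comparison is pulled into a plain function so it can run outside a browser):

```javascript
// Returns true when the page is being displayed inside a frame whose
// top-level location differs from the page's own location.
function isFramed(topHref, selfHref) {
  return topHref !== selfHref;
}

// In a browser the check would be wired up roughly like this:
//   if (isFramed(top.location.href, self.location.href)) {
//     top.location = self.location;  // bust out of the hostile frame
//   }

console.log(isFramed("http://evil.example/", "https://twitter.com/login")); // true
console.log(isFramed("https://twitter.com/login", "https://twitter.com/login")); // false
```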

Congrats Twitter on taking action on this - a little later than you could have - but hey it's the right move.

Reuters has an interesting story today titled, "Cuba launches own Linux variant to counter U.S."

The gist of the story is that Cuba is now going to produce its own Linux distro, called 'Nova', in some sort of attempt to not have to use American software.

Nonsense.

If I'm not mistaken, Linus Torvalds lives in the US where he leads the global Linux kernel development effort. Red Hat and Novell, the two leading Linux distribution vendors are both US companies as are Linux contributors Intel and Google.

Sure I agree and understand the need for open code to prevent proprietary lock-in, but that's not an anti-American stance at all. In fact, Linux is about as pro-American as you can get with its ideals of Freedom and openness while still providing a route for vendors like a Red Hat or Novell to make money.

Now I'm not calling Linux 'American' software here necessarily either, since it's a global effort. But it's not exactly un-American either, considering the tremendous influence that those living and working in the US have on the development of Linux and its broad ecosystem.

In the Reuters story, Cuba takes particular aim at Microsoft, arguing that its software could be infected by US security services and that it can't be updated due to the US embargo on the island. Considering that Linux has been around for more than 10 years, why now, Cuba? Is it just that Raúl Castro only now realized the issue?

Like other governments around the world, democratic or not, Cuba needs open code that provides transparency and a degree of control, and it is now waking up to that fact.

The Novell led Moonlight effort to enable Microsoft Silverlight on Linux has reached its 1.0 milestone release today. I'm not surprised.

In December of 2008, Miguel de Icaza, the Moonlight project lead, talked to me about the Moonlight 1.0 beta (which seemed complete to me) and told me that it would be finalized by the end of January 2009 (so the official release is, yeah, a little later, but not noticeably so). **UPDATED** Miguel tweeted me to let me know that the actual program release came out at the end of January, just prior to the Obama inauguration; it just took PR time to put out the 'official' release.

Silverlight of course is Microsoft's framework for rich media delivery and was widely used by NBC for delivering video content from the 2008 Summer Olympics. It was also the media framework used for the official feed of President Obama's inauguration. Others like Major League Baseball have not been so keen on using Silverlight.

Moonlight is an interesting idea and a helpful one for Linux users who want to view Silverlight content. The effort still has a lot of work to do, though, and frankly I'm looking forward to what Novell is trying to do for Moonlight version 2.0. Officially speaking, the 1.0 version syncs with Silverlight 1.0, though Moonlight does have many Silverlight 2.0 media capabilities.

Moonlight 2.0, if I understand the development correctly, will be more closely aligned with Silverlight 2.0 - though we likely aren't going to see Moonlight 2.0 until September of this year.

Many will argue with Miguel de Icaza over the fact that Silverlight uses proprietary codecs and that Moonlight is the result of Novell's collaboration with Microsoft (and thus not truly Free). The bottom line in my view is that, like it or not, Silverlight exists and it is used to deliver content. What Moonlight does is extend the reach of Silverlight so that it's not limited to just Microsoft users, and Linux users won't be left out.

Mozilla is out now with the first milestone release of its Fennec mobile browser for Windows Mobile based smartphones.

Fennec developers have labeled the release a pre-alpha, and it currently only supports the HTC Touch Pro. It's basically an early adopter play to get it out there for people to test on Windows Mobile.

The first Fennec browser Alpha came out in October of 2008 and was targeted at Nokia N810 Internet Tablets; a second Alpha followed in December. What is somewhat ironic with Fennec is that the Windows Mobile milestone build is coming months after the project's Linux-based Nokia builds. The last time Mozilla tried to build a mobile browser, with Minimo, Windows Mobile came first.

Anti-virus vendor Kaspersky was hacked over the weekend, allegedly the victim of a SQL injection attack. It's a disturbing development from my point of view and points to a security issue that can affect nearly anyone -- even those who should know better. SQL injection is, in my opinion, difficult (though not impossible) to defend against in a live production environment; it's something that needs to be fixed before a site or application goes live.

Officially speaking, Kaspersky put out a statement yesterday noting that it detected an attack but that no restricted information was lost:

The attack was unsuccessful and, despite their attempts, the hackers
were unable to gain access to restricted information stored on the
website. Claims by the hackers responsible for the attack that they had
managed to gain access to user data are untrue.

Though Kaspersky has claimed no data loss, it has hired noted database security expert David Litchfield to look at its databases.

I've sat in Litchfield's security sessions at Black Hat several times and I've always been impressed by his approach. Litchfield is what I would call a forensic investigator, looking for clues in database table rows that look fairly innocuous to normal humans.

The reality from where I sit is that anti-virus software cannot stop a SQL injection attack. SQL injection vulnerabilities typically exist either in application code that builds queries from unvalidated user input -- which needs to be fixed so that commands are validated or parameterized -- or, less often, in the database software itself, which needs to be patched.

From an end-user point of view there is no way to defend yourself from being a victim of a SQL injection attack. The web site (or application) itself needs to protect itself and, by extension, its end-users. Whether Kaspersky had unpatched software or some kind of configuration issue, or whether this was a new zero-day attack, is currently unknown. What is known is that SQL injection is a very real threat, and it's one that all vendors must take very seriously.
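To make the distinction concrete, here's a minimal sketch of how string-built SQL leaks data while a parameterized query does not. This uses Python's built-in sqlite3 module and an invented users table purely for illustration -- it's nothing specific to Kaspersky's site or software:

```python
import sqlite3

# Throwaway in-memory database with one "users" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob@example.com')")

# Attacker-controlled input: the classic "' OR '1'='1" payload.
user_input = "nobody' OR '1'='1"

# VULNERABLE: concatenating input into SQL lets the payload rewrite
# the WHERE clause, so the query matches (and dumps) every row.
unsafe_sql = "SELECT email FROM users WHERE name = '" + user_input + "'"
leaked = conn.execute(unsafe_sql).fetchall()  # both rows come back

# SAFE: a parameterized query treats the payload as a literal string
# value, so it matches no user and returns no rows.
safe = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked), len(safe))  # 2 0
```

The fix is exactly the kind of before-go-live change I mean: the query structure is fixed in the code, and user input can only ever be data, never SQL.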

President Obama has a lot of things to do to fix America. The open source community (or at least 14 open source vendors) wants Obama to consider using open source technology as part of the fix. In an open letter published this morning, the vendors make the standard open source argument of providing better value and transparency for all. Here are a few choice excerpts:

There are no 'black boxes' in open-source software and therefore no need to guess what is going on 'behind the scenes.' Ultimately, this means a better product for everyone, because there is visibility at every level of the application, from the user interface to the data implementation. Furthermore, open-source software provides for platform independence, which makes quick deployments that benefit our citizens much easier and realistic.

The letter also petitions the president to make having open source code a key element of the Government's procurement practices under the guise of accessibility.

... we urge you to make it mandatory to consider the source of an application solution (open or closed) as part of the government's technology acquisition process, just as considering accessibility by the handicapped is required today (as defined by section 508).

It's an interesting idea for sure in my opinion.

The list of companies that signed on to the letter is equally interesting, though. For one, it does not include a single Linux distribution.

As far as I know, open source software is already used by the US Government; Linux is used in multiple branches, including the military. The idea of making open source, or at least some form of open code, a Section 508 accessibility issue could work in favor of commercial closed source vendors too. Certainly a big vendor, be it a Microsoft or an Oracle, could make its code accessible to the US Government without necessarily being open source on a broader scale.

Still it's a good idea to ask and it will be interesting to see if the first president to use email in the Oval Office will respond with any measures.

If you're not using a Linux powered cell phone yet, you might be using one sooner than you think. The LiMo Foundation today announced that at least six major operators will be delivering Linux based mobile phones in 2009.

LiMo is a group focused on providing a standard Linux based operating platform for mobile providers. In June of 2008 it absorbed its rival, the LiPS (Linux Phone Standards) Forum, and in my view it now competes squarely against Google's Android and Nokia's Symbian.

The new Linux phones will come from NTT DOCOMO, Orange, SK Telecom, Telefonica, Verizon Wireless and Vodafone, all of whom are LiMo contributors in some way.

"The powerful commitments being made by LiMo's operator members clearly demonstrates that the LiMo Platform is delivering a highly efficient, consistent and flexible code base that can be optimized to meet the market and technical requirements of major mobile operators," said Morgan Gillis, executive director of the LiMo Foundation in a statement. "This also signals substantial growth and opportunity for OEMs and developers to create devices and applications that meet the needs of major operators."

LiMo now claims to have 33 commercial mobile phones certified as LiMo compliant.

While I've been writing about mobile Linux for years, this new push of handsets from some of the world's biggest carriers is a big deal. It furthers Linux's push into mobile and definitely positions Linux as a mainstream technology for mobile.

There comes a time when, for whatever reason, a system (Linux or Windows) won't boot. It was during one such emergency years ago that I discovered the System Rescue live Linux CD. System Rescue is a bootable Linux operating system that will show you what partitions are on a drive and enable you to 'fix' them.

The latest version of System Rescue, version 1.1.5, is out today and it includes a few notable improvements over its predecessors. The most important in my view is support for the ext4 Linux filesystem. Version 1.1.5 includes a new Linux kernel with support for ext4, as well as version 0.4.2 of the GParted partitioning software, which includes support for ext4.

Ext4 is not yet widely used in Linux distributions, though it will be real soon. Both Red Hat's Fedora 11 and Ubuntu's Jaunty, now in Alpha, support ext4. So what that means is that if you're running a Fedora 11 or Ubuntu Jaunty Alpha now and run into trouble, your trusty System Rescue CD will be able to help you out.

Sure, most mainstream Linux distributions have some form of partitioning software (often GParted based) as part of their install media -- but the reality is that when a system fails to boot an OS (for whatever reason), I personally have never found a better tool than System Rescue to try and fix the problem.

Where it also works wonders is for those of us who run multi-boot machines (Windows/Linux and/or multiple flavors of Linux). It's always a bit of a guess when installing an OS how much space to give it, but what do you do when you need to resize? Again, System Rescue to the rescue.
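Under the hood, the growing step that GParted performs on an ext filesystem boils down to enlarging the container and then running resize2fs. Here's a minimal sketch you can practice safely on a file-backed image rather than a real disk (it assumes the e2fsprogs tools -- mkfs.ext4, e2fsck, resize2fs, dumpe2fs -- are installed):

```shell
# Create a 16 MiB image and put an ext4 filesystem on it
# (-F lets mkfs work on a regular file instead of a block device).
truncate -s 16M disk.img
mkfs.ext4 -q -F disk.img

# To grow the filesystem: enlarge the container first, then resize.
# On a real disk, GParted/parted would enlarge the partition instead.
truncate -s 32M disk.img
e2fsck -f -p disk.img   # resize2fs wants a freshly checked filesystem
resize2fs disk.img

# The filesystem now spans the full 32 MiB.
dumpe2fs -h disk.img | grep 'Block count'
```

Doing it on an image file is a good way to get comfortable with the tools before you point them at a partition that actually matters.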