Category Archives: Adobe and the Web

Apple recently announced the official release of iOS 7. This release is packed with very significant changes, in particular a radical transition in the iOS user interface design and user experience.

Part of the iOS 7 update is an upgrade to mobile Safari that comes with multiple new features. One of these features is CSS Regions, a revolutionary CSS specification draft that allows a deeper separation of concerns in the way designers and developers structure their content and layout. They can now manage the way content flows across different regions of the page design (hence the name CSS Regions) separately from the content itself. Content can be made to flow through different chains of regions, typically laid out differently for mobile, tablet or desktop/laptop use. See the complete post with examples on Adobe’s Web Platform blog here.
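As a sketch of how this separation works (the class names below are illustrative, and at the time mobile Safari shipped these properties with the -webkit- prefix), content is pulled into a named flow with `flow-into` and poured into a chain of regions with `flow-from`:

```css
/* The article's content is extracted from the normal flow
   and placed into a named flow called "article-flow". */
.article-content {
  flow-into: article-flow;
}

/* Empty placeholder elements in the layout become regions;
   content pours from one region to the next in DOM order. */
.region-1,
.region-2,
.region-3 {
  flow-from: article-flow;
}

/* The regions themselves can be laid out however the design
   requires, e.g. differently for phone, tablet and desktop. */
@media (min-width: 768px) {
  .region-2 { width: 50%; float: left; }
}
```

Because the regions are ordinary elements, the same content can be re-flowed into a completely different arrangement of regions per breakpoint, without touching the content markup itself.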

Recently, I have been reflecting on, discussing and writing about open source. After the publication of an article on the Wired web site, one of my colleagues, Kristofer Joseph, came to me and essentially said: “I think there is something about open source that your article does not cover, that is important and that people often miss”. Of course, Kristofer had my full attention with that introduction. So we talked more, and Kristofer explained that he felt the ‘working in the open’ part of the open source culture was either overlooked or not understood. I think he is right, so let’s talk about that. Why does open source often mean working in the open (and why should it)? Why is it important?

Raising standards

When you work on an open source project, your code is visible to anyone. Not only your source code, but every single commit you make and every interaction (comments, issue tracker, mailing list) you have with the community. There is no hiding in open source. Your contributions and interactions paint a living memory of your persona.

You have to show your true colors, and that is why open source is a meritocracy rather than a status-based culture. Your work speaks for itself. That typically makes people raise their standards and strive for excellence, and I am convinced that this open collaboration explains a large part of open source software’s success.

Get early and constant feedback

If you work in the open, you can interact with people as you develop your code. In a way, it is related to the lean start-up model, which is geared towards early customer feedback that allows quick iteration and course correction. Transposed to an open source project, working in the open lets you be lean: your customers, the people who might use your project, can see ongoing work, try early versions of the software and comment. You can also reach out to them to ask for opinions, preferences or guidance. You can experiment fairly quickly and validate or invalidate the hypotheses you have.

A great example of early and constant feedback is the work that Kristofer and his team do on the Topcoat project. By working in the open, we have made a lot of design choices and course corrections thanks to the feedback we got from our bleeding-edge users.

But working in the open has its traps. As Kristofer mentioned, it is not always understood.

Working in the open means good and bad work in progress is shared

People not familiar with open source sometimes expect it to work like a product from a commercial company. So when they get code from the project and it is not perfect, there is disappointment.

This is where open source projects are usually careful to set expectations and guide their users, making it clear where to get a stable build versus a development version or a beta build. For example, you can get nightly builds of WebKit, and by design it is clear that this version of the project is work in progress.

The best open source projects use continuous integration to get natural quality assurance on their code for building, testing, coverage and performance regressions, which helps maintain high standards even for in-progress work.

Working in the open means that you can implement your own wishes!

Another thing that sometimes happens is that people expect to interact with an open source project the way they do with a vendor: if there is a feature I want, then I should be able to demand that it be added. It works a bit differently in open source.

Of course, projects typically welcome suggestions or requests for features. That is part of getting feedback and guidance from the users. But if you really want something to happen on your schedule, the best approach is to engage with the project and contribute to the effort yourself. A lot of individuals and companies do that routinely with great success, for example around the Eclipse open source project.

So, why work in the open? Not just for open source!

Working in the open makes your project run like a start-up: getting constant feedback, reacting to demand quickly and adjusting course as needed. It makes you and your team raise your standards. It means that you have to set expectations properly too, but that is ok. And it also allows you to welcome contributions to your project, making it more valuable than you could make it on your own. So Kristofer is right, this is all important!

A final, important point: working in the open can also be done for non-open-source projects. It is an approach you can take for internal projects, and even very large companies such as SAP have implemented it successfully, as Dirk Riehle described in his research.

A recent blog post from Drew Crawford has generated many comments and tweets about the relative performance of web and native apps. Drew’s well-written post is notable for its thorough documentation.

So should we all pack up our mobile web app work and go native? Well, no.

Though the article singles out JavaScript, it really dives into the trade-offs between garbage-collected languages such as JavaScript and Java and lower-level alternatives that require the developer to manage memory. Drew argues that applications which are extremely sensitive to the kind of unpredictable interruptions caused by garbage collection will always lag behind native implementations (or, rather, behind native implementations that manage their memory properly).

First, the performance of JavaScript needs to be put in perspective: it is only a subset of the performance profile of most web apps. HTML, CSS, SVG and the network also consume CPU/GPU cycles. For some web apps, the layout and rendering of HTML/CSS/SVG consume the majority of CPU cycles (it is even possible to write a game without any JavaScript code, albeit a limited one). Not only does this mean garbage collection affects only a fraction of an application’s overall execution budget, but also that a large part of the platform can be optimized and improved for all apps. So JavaScript is only one portion of a mobile web app’s code, and its strengths and shortcomings alike affect only that portion.

Figure 1: A web application’s CPU budget

One may argue that this does not matter: why would anyone ever want to use a language that is slower than native? Wouldn’t you always want the faster option?

As Drew’s article points out, productivity is also important to developers. Drew uses the example of hashmaps: while managed languages usually build them in, native apps either rely on harder-to-use versions or roll their own. Thus a big motivation for using a language such as JavaScript is its ease of use and dynamic nature. While not perfect, it remains a language that is more accessible to more people than native code. It is also the same language across many platforms and devices, while native code is inherently platform-specific. Trading off some performance (for a fraction of the application, as explained above) for broader reach is appealing for many applications.
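To make the hashmap point concrete, here is a minimal sketch in JavaScript: the language’s built-in `Map` gives you a dictionary with no library code, no memory management and no hashing logic to write yourself.

```javascript
// JavaScript ships a hashmap as a built-in: no allocation,
// hashing or collision handling for the developer to manage.
const hits = new Map();

function count(page) {
  // get/set handle missing keys gracefully; the garbage
  // collector reclaims the entries when the map is dropped.
  hits.set(page, (hits.get(page) || 0) + 1);
}

count("/home");
count("/home");
count("/about");
// hits.get("/home") is now 2, hits.get("/about") is 1
```

The native-code equivalent would involve choosing (or writing) a hash table implementation and deciding who owns and frees each entry, which is exactly the kind of friction Drew's productivity argument is about.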

In addition, a core motivation for using JavaScript is that it is part of the wider web platform and lets us leverage a very powerful native component: the browser engine. I think of JavaScript as the puppet master of the browser engine: a little bit of code can exercise a variety of powerful native features, from CSS layout and restyling to hardware-accelerated animations.
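The ‘puppet master’ idea can be sketched in a few lines (the class and element names here are illustrative): the script only flips a class name, and the engine’s native, often hardware-accelerated, CSS machinery does the heavy lifting.

```javascript
// A few lines of JavaScript direct the engine's native machinery:
// adding a class triggers a CSS transition that the browser can
// run on the GPU, off the main JavaScript thread.
function slideIn(el) {
  // The "visible" class is assumed to carry CSS rules such as:
  //   .card { transform: translateX(-100%); transition: transform 300ms; }
  //   .card.visible { transform: translateX(0); }
  el.classList.add("visible");
  return el;
}

// In a browser: slideIn(document.querySelector(".card"));
```

The JavaScript cost here is a single property write; the actual animation work runs in optimized native code inside the engine, which is why the relative slowness of the scripting language matters less than raw benchmarks suggest.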

Granted, there exist native libraries that provide similar graphical, animated or layout features to what browser engines offer (and more). However, no solution that I know of has the flexibility and ubiquitous reach that the web platform brings to the table.

Finally, a web app is not just client code. At the very heart of the web is the concept of distribution, of content as well as code. A web app can leverage the web and distribute its computing needs. The collaborative 3D authoring application Lagoa is a shining example of that possibility, as it distributes computation-intensive work to the cloud, handling operations that even the most powerful client code could not manage as well. Web apps, by nature, have access to the flexibility of this powerful architecture.

Web apps are way past the hype phase and climbing the slope of enlightenment. Articles grounded in hard data like Drew’s are certainly useful. But we need to be mindful of the decisions we make and consider a web application’s overall context before making the jump to native and forgoing the many benefits of the web architecture.

In some cases (e.g., a highly computation-intensive game), native code may indeed be the appropriate answer. But in most cases, web apps point the way to the future, even though the ‘puppet master’ code will run slower than its native counterpart. Remember that this relative slowness is a trade-off for other important benefits, such as higher productivity and unparalleled reach.

There are moments in technology that are turning points, and I think this is one of them: we just announced the first integration of Web Platform Docs content with one of our tools, Edge Code.

Why is this an important turning point? There are several reasons…

The Web Platform Docs effort is important

It is the Wikipedia of the web platform. This may sound like a grandiose statement, but I am convinced that Web Platform Docs is truly momentous. Beyond the immediate goal of documenting today’s web platform, the founding vision is for webplatform.org to grow with the web. The effort will scale in both scope and time.

Initiated by industry leaders and the W3C, webplatform.org aims to document all things Web in an open, user-friendly manner. The scope of the effort is as broad and deep as it sounds: from the web’s authoring markup (HTML, XML, CSS) to its file formats (PNG) through its protocols and APIs (HTTP, WebRTC, Web Apps APIs, Sys Apps APIs). Documenting all of these is an enormous task that can only be undertaken by the web community. No single private organization can match its size and talent.

Then there is the time scale of the effort, which is also unprecedented for technical work. It is critical for all of us that the body of knowledge and data we are creating on and about the web platform be accessible today and in the long run. Fifty, a hundred or five hundred years from now, future generations will need to understand and access the digital information we are leaving behind. We are taking the first step to document what is now a foundation for human knowledge. And just as no single company can match the effort of a community, no single private organization is likely to outlast the web either.

Open access to the latest documentation

This is another turning point: webplatform.org provides a free, online, up-to-date encyclopedia of reference documentation, techniques, samples and tutorials for designers and developers. The quantity and quality of the content will only improve as Web Platform Docs becomes the central repository we all invest in. To take but one example, maintaining up-to-date feature compatibility status has always been a challenge; several sources, like caniuse, quirksmode and the Mozilla Developer Network, are contributing their compatibility data to Web Platform Docs. This means all tools now have one common source of compatibility data, one that all creatives and developers can improve, edit and correct.

Where to from here?

Already, the collective Web Platform Docs effort is mature enough for use in development tools.

This first integration effort, led by Alan Greenblatt, illustrates just one of the ways Web Platform Docs contributes to the momentum of the web. Documentation from webplatform.org is available directly within Brackets and Edge Code. I believe this is only the beginning of a new norm for web standards documentation in development tools (Adobe’s as well as others’).

Stay tuned for more on the evolution of Web Platform Docs and Adobe tools!

The Web is an ever-changing place, and the first half of the year has been rich in surprises, big announcements and industry shifts! A diversity of implementations is good for many reasons we will discuss. But a more fragmented web could be the price to pay. Will that be the case?

Reactions were interesting as we went from WebKit monopoly concerns to worries about web platform fragmentation in a matter of weeks. Quite a 180-degree turn!

At Adobe, we actively contribute to Web standards and browser implementations (historically mostly WebKit and Chromium, even though we also make some contributions to Gecko). As such we are delighted to see Opera join one of the projects we contribute to. Their considerable web expertise will undoubtedly be an asset.

There was some debate before the Blink announcement about whether or not we were heading for a WebKit monoculture: a web where content is written with the assumption that a WebKit-based engine is most likely to render it. While WebKit browsers share much core layout code, they also differ in many ways at runtime: different JavaScript engines and graphics libraries, even different sets of features enabled by default. This makes it difficult in practice to write once for WebKit and run everywhere.

So we were not too concerned about a WebKit monoculture. But…

… there was a ‘but’ in that view. The web is bigger than any one of its leading browser implementations and too important to be limited to a single code base, even if that implementation has variations. The web is even growing into an OS platform (e.g., ChromeOS, FirefoxOS, the new Windows Runtime) and the core technology behind packaged applications (like PhoneGap applications). And ongoing innovation across HTML, JavaScript (in the TC-39 group at ECMA) and CSS needs validation, testing and consolidation.

“The web needs multiple implementations of its evolving standards to keep them interoperable.”

I believe this tenet to be central to delivering on the promise of the Open Web. A single implementation does not establish a standard. The W3C process even recommends two implementations in order for a specification to reach completion.

The Web needs Mozilla’s Gecko and Microsoft’s Trident engines to nurture an open, innovative environment. Historically, both companies have done a lot for the Web – think of XHR which Microsoft invented (among other key contributions) or WOFF from Mozilla – and they continue to innovate: Microsoft and Mozilla co-edit the CSS Grid specification, which provides much needed and improved layout flexibility to CSS.

I trust that the addition of Blink will strengthen an already healthy browser competition. Over time, the Blink code base will diverge from WebKit’s but no harm to the web occurs if both engines implement the same features in different ways. Only significantly different feature sets could result in harmful fragmentation. Making sure that WebKit, Blink and other browser engines interoperate is more important than it has ever been.

About testing, fragmentation and experimental features

As the founders of Test the Web Forward, we have come to appreciate the mutually reinforcing benefits multiple independent implementations bring to standards. Historically, testing has been key to the success of web standards. For example, the focused testing effort on CSS 2.1 shaped that specification and its implementations into the cornerstone CSS has become. A single implementation would leave a lot of stones unturned.

It should also be noted that the Blink policy regarding prefixes is really good for standards and compatibility across browsers: draft standard features can become truly experimental features that will not be used (and abused) in production. This should help avoid browser compatibility headaches down the line and I hope this example will be followed by all browsers.

About fragmentation and Adobe’s contributions

In this new web platform landscape, what about Adobe’s contributions to open source browsers? What impact does additional browser fragmentation have on Adobe’s efforts?

Adobe contributes to standards in open browser implementations for many reasons.

One of them is that our new-generation Edge tools use a ‘web design surface’. For well over a year now, we have chosen the Chromium Embedded Framework (CEF) to provide this surface. So naturally, we will contribute to Blink since it is now the core engine that powers CEF.

Another reason for contributing to open browsers is to accelerate the availability of new features on the web. This is why we collaborate with Mozilla on a number of standards and contribute code to Gecko (like this patch on masking for canvas). And this is why we will also contribute to WebKit, in addition to Blink, now that the two are separate projects.

An open, innovative and tested web!

So yes, I think it is good to have multiple browser engines, and Blink is a welcome addition to the web platform landscape. It brings a healthy diversity that I hope will help keep the web open and foster innovation, as long as all browsers strive to implement ‘the same web’.

And this is where testing efforts are key to achieving diversity without fragmentation. I hope testing activities (of browser code, of course, but also of standards test suites and the major initiatives the W3C is driving) will be a major focus for all browser vendors going forward, in particular for Google with its new Blink implementation.

It is really amazing to see the level of energy and enthusiasm around Web technology, and how the envelope is constantly pushed by developers. This becomes very visible at the conferences held around the world on the topic. In our teams, we attend and participate in a lot of events, both as a way to present new advances on the Web and as a way to learn and stay aware of developer pain points.

In the last week alone, for example, we have been involved in various events.

Unfortunately, I was not able to attend W3Conf myself, but that was for a good reason: I was speaking at another great conference in Bangalore, India. The conference was called Meta Refresh and gathered a lot of developers and designers. I enjoyed giving a talk about “The Quest for the Graphical Web” and listening to speakers who thought beyond simple responsive layouts to get into content prioritization and interleaving (see Arpan Chinta’s talk “Getting serious about Responsive Web Design”) and questioned our approach to design altogether (with Tulsi Dharmarajan’s talk “High on design”).

All those events are a testament to the activity around the Web platform (if proof were needed!) and how creative it is becoming. It is also nice to live in a day and age when, if you miss a conference or cannot travel to it, you can always catch up online with videos. See the W3Conf videos and the Meta Refresh videos for example.

We are actively involved in organizing or sponsoring events to make the web better; please join us at html.adobe.com/events!

Adobe acquired PhoneGap a little over a year ago because it was and continues to be the leading solution for Mobile Application developers who want to use their HTML5 skills to create native applications.

Essentially, PhoneGap lets developers write HTML, JavaScript and CSS content, use mobile APIs that provide much-needed functionality (such as device orientation, access to the address book or location) not yet always available in browsers, and package these applications as native ones so they can be distributed in application stores such as Google Play or the Apple App Store.
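A rough sketch of the model: Cordova fires a `deviceready` event once its native bridge is available, after which device APIs are safe to call. The geolocation call below follows the standard W3C API, which Cordova backs with native code on devices whose built-in web views lack it; the helper function names are illustrative.

```javascript
// Pure formatting helper, usable anywhere.
function describePosition(pos) {
  return `lat ${pos.coords.latitude}, lon ${pos.coords.longitude}`;
}

function onDeviceReady() {
  // Standard W3C geolocation API; on devices, Cordova provides a
  // native-backed implementation when the web view has none.
  navigator.geolocation.getCurrentPosition(
    (pos) => console.log(describePosition(pos)),
    (err) => console.error("geolocation failed:", err.message)
  );
}

// "deviceready" is Cordova's signal that the native bridge is up.
// Guarded so this sketch also loads outside a web view.
if (typeof document !== "undefined") {
  document.addEventListener("deviceready", onDeviceReady, false);
}
```

The same HTML/CSS/JavaScript bundle is then packaged per platform so it can be submitted to the respective application stores.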

PhoneGap is based on the open source Apache Cordova project, similar to the way the Chrome browser, for example, is based on the WebKit open source project.

Since PhoneGap is built on an open source platform targeted at developers and created by a community, here are recent updates on its different aspects.

Platforms

Despite the holidays, there was a flurry of activity in the mobile web world. The Cordova team released 2.3 with full support for Windows 8 and Windows Phone 8 (Windows 8 support was added in 2.2). The popular iOS and Android projects saw more performance improvements and bug fixes. The long-anticipated BlackBerry 10 is shipping this month with complete support. Working closely with Mozilla, the team also has Firefox OS on the horizon early this year.

Tools

The new common Command Line Interface (CLI) tooling for building projects is progressing to beta quality. The plugin tooling is now quite mature for iOS and Android. Work is now starting to migrate the core API to plugins and add support for BlackBerry and Windows Phone. The Ripple emulator received much love in December, bringing beta-quality support for the remote device proxy and the ability to host Ripple. Also good news: the long-awaited PhoneGap/Build CLI is ready for beta, and integration into PhoneGap can be expected in the coming releases.

Community

An open source community’s health is directly proportional to the activity on its code. Operationally speaking, Cordova offers monthly stable source-only releases and a bleeding-edge development channel. However, things are progressing, and we will likely see stable, beta and dev channels available in Cordova 2.4. The project has matured enough in adoption to justify this third release channel for developers who want to be on the bleeding edge. The team will continue to ship PhoneGap on the same cadence.

We added one committer from IBM in December, and have seen two new contributors from the Google Chrome team become active in the project.

Our team has been committed to spreading the practical application of Web standards and supporting their right usage. To this end, we initiated the Test the Web Forward events (there is one on Feb 8-9 2013 in Sydney, register here!) and we are very glad to host W3Conf, organized by the W3C.

The conference has a line-up of skilled experts as speakers; more details are available on our Web Platform Team blog. What is more, we are happy to offer a discount code ‘adobe’ for $100 off the early-bird price of $300! Registration is now open.

In conjunction with this event, we are also organizing a workshop with the acclaimed Digital Media artist Joshua Davis on 23 February 2013 at Adobe SF. The all-day workshop is available for the low price of $32 (including fees), so do register!

However, while features are obviously key to a better, richer web, they are one of several elements which, when combined, deliver an enhanced web experience.

One is proper implementation of the web standards. For the web platform to be reliable, it is very important that implementations follow the various standards properly and reliably. The specifications define what browsers and other web components should do, but we need to make sure that implementations stick to the specification, from the most common features down to the most obscure corner cases.

Tightly related to proper implementation is interoperability. It is possible, and it has happened many times in the past, to have standards and pretty solid implementations but poor interoperability, because of varying interpretations of the specifications by implementors. For example, in the early days of the Web, there were a lot of discrepancies between implementations of the Cascading Style Sheets specification. Interoperability issues are the plague of web developers, as they either neuter the use of a feature (because it is not guaranteed to work consistently across browsers) or weaken its appeal (because it will only be available to a fraction of end-users).

Testing is the key to ensuring proper implementation and addressing interoperability issues. And great testing is the recipe for great implementations and awesome interoperability. In the realm of web standards, testing comes in the form of specification test suites, which are used to validate that a specification is implementable.

The testing challenge

Unfortunately, writing tests is fraught with difficulties. It requires dedication, expertise, persistence and careful attention to detail. In addition, it is important to have the widest test coverage possible to ensure testing depth and the desired implementation quality and interoperability. Historically, it has sometimes been difficult for implementors of particular specifications, and for the working groups defining them, to create test suites that are as deep as they would like. This issue has been at the root of implementation, interoperability and adoption difficulties for new standards.

Test the Web Forward

“Move the Web Forward” is a grassroots movement that engages the community and challenges those passionate about the web to act on their desire for a better web. “Test the Web Forward” is exactly in that spirit: the developer community is painfully aware of implementation and interoperability issues, so let’s enable developers to do something about them and contribute to better test suites, which are an excellent way to improve the web.

Following that train of thought, Adobe and others in the community, such as Microsoft, Mozilla, Google, the W3C and Facebook, have started to engage the community to contribute to web standards tests with a series of events called “Test the Web Forward”. To date, three events have been held: one in San Francisco (in June), one in Beijing (in October) and one in Paris (also in October). So far, about 700 new tests have been created that will be contributed to web standards test suites. The events are typically held over a day and a half. During the first half day, experts from the standards working groups (such as the CSS or SVG working groups in the W3C) give short presentations about standards testing frameworks, browser bug filing, and other topics related to reporting issues, isolating problems or reading a specification carefully to identify testable portions. The full day that follows is dedicated to ‘hacking tests’ in groups, where web developers work with the experts to write new tests, convert tests that may need reformatting, or review existing tests so that they can be integrated into official test suites.
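To give a flavor of what these tests look like, here is a sketch in the style of the W3C’s testharness.js framework. The `test` and `assert_equals` functions are real testharness.js APIs; the tiny fallback definitions below are only there so the sketch runs standalone, outside a real test page.

```javascript
// Minimal stand-ins so this sketch runs outside a browser; in a real
// W3C test page these come from /resources/testharness.js.
if (typeof globalThis.test === "undefined") {
  globalThis.assert_equals = (actual, expected, msg) => {
    if (actual !== expected) {
      throw new Error(`${msg}: expected ${expected}, got ${actual}`);
    }
  };
  globalThis.test = (fn, name) => { fn(); console.log(`PASS: ${name}`); };
}

// A spec-driven check: CSS defines the initial value of 'opacity' as 1.
// In a browser the value would come from getComputedStyle; here the
// lookup is injected so the sketch stays portable.
function opacityInitialValueTest(getComputedOpacity) {
  test(() => {
    assert_equals(getComputedOpacity(), "1",
      "the initial value of 'opacity' must be 1");
  }, "opacity: initial value");
}

// In a browser:
// opacityInitialValueTest(() => {
//   const el = document.body.appendChild(document.createElement("div"));
//   return getComputedStyle(el).opacity;
// });
```

Much of the event work consists of reading a specification sentence like “the initial value of ‘opacity’ is 1” and turning it into a small, unambiguous assertion of this kind.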

The following blog posts relate the events as they happened in San Francisco, Beijing and Paris, and this video gives a good description of what the events are about, how they foster interest in testing the web, generate good discussions and suggestions, and produce concrete results.

Next Steps

While the Test the Web Forward events are fun, there is a desire to keep engaging between events, and at the recent W3C Technical Plenary meeting in Lyon, France, Adobe suggested concrete ways for interested web developers to keep contributing. There are also very interesting discussions about how the “Test the Web Forward” movement relates to the Web Platform Docs effort, and many suggestions that the two efforts should be closely related.

It is very encouraging and exciting to see the web community interested in contributing to a better web, offering time and expertise in efforts such as TestTWF. Our team at Adobe will continue working on this effort with our partners to help it grow and further demonstrate its efficacy in building a better web.

So if you and your team are passionate about the web and want to help move it forward, please follow #TestTWF on Twitter and visit http://testthewebforward.org to learn about upcoming events and new developments around this initiative!

Every great software platform needs some essential ingredients: one or more programming languages; great tools such as editors, compilers and debuggers; frameworks and libraries that make things easier; an enthusiastic community whose members help each other out; and good documentation that helps you get the most out of the platform. The web platform is probably the biggest, fastest-growing and most ubiquitous platform in the (short) history of computing. And while it has many of these essential elements, one was still lacking: official documentation.

Think about what you do when you have a question about HTML, CSS or JavaScript. There are probably a few sites you trust, or a few printed books you keep close at hand if you’re old-fashioned, but more often than not you just search what’s out there and see what comes up. It could be a well-maintained, up-to-date, credible source, or it could be articles or blog posts that are out of date or just plain wrong.

And the web platform is not static! Browsers keep evolving and implementing new functionality, specs keep getting updated, and new specs get proposed and implemented. Best practices evolve as well.

Since there’s no single, definitive resource to go to, there’s no way to know for sure, except through trial and error.

All of that is changing today. The W3C – in collaboration with Adobe, Apple, Facebook, Google, HP, Microsoft, Mozilla, Nokia, and Opera – is announcing the alpha release of Web Platform Docs, a new web destination that will become the definitive resource for all open web technologies. You can find the W3C press release here. The Web Platform Documentation (WPD) will include:

API documentation

Information on browser compatibility

Examples

Status of specifications

And the WPD project will be open and community-driven, just like the web. WPD is built on top of MediaWiki, the same engine that powers Wikipedia, which means that anyone can contribute. The initial content is being provided by many of the stewards listed above, but anyone with knowledge, examples, snippets or other relevant information is welcome and encouraged to contribute.

The stewards have been working incredibly hard on this project for a bit over a year, and I want to congratulate them on today’s launch. We are very proud to participate in this effort. It is the culmination of the work to build this infrastructure, but in many ways it is also a first step. It is now up to the web community to help create and maintain the most comprehensive and authoritative reference for web technologies. So go check it out and start contributing. Document the web!