Eclipse 4.0 RCP: Dynamic CSS Theme Switching
The CSS theming capabilities of the Eclipse 4.0 SDK are improving. In particular, the mechanism for implementing a dynamic theme switcher is much simpler now. If you want to implement a CSS theme switcher, you just need to contribute to the extension point org.eclipse.e4.ui.css.swt.theme. Here is an example from my e4 contacts demo for declaring two themes:
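The original snippet did not survive aggregation. A minimal sketch of such a plugin.xml contribution, with hypothetical theme ids, labels, and stylesheet paths (not the actual contacts demo code), might look like:

```xml
<extension point="org.eclipse.e4.ui.css.swt.theme">
   <!-- Each theme declares an id, a user-visible label, and a base stylesheet -->
   <theme id="com.example.theme.default"
          label="Default Theme"
          basestylesheeturi="css/default.css"/>
   <theme id="com.example.theme.dark"
          label="Dark Theme"
          basestylesheeturi="css/dark.css"/>
</extension>
```

At runtime the theme engine can then switch between the declared themes by id.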

The Eclipse Foundation is presenting Helios In Action - a virtual conference where you can interact with project leads involved in the release and see demos of the new features. The annual simultaneous release has now grown to 39 projects with over 33 million lines of code, contributed by committers around the world. With such a large global community, Eclipse wants to bring Helios to you!

A Maven/Tycho Experiment
With this said, I recently decided that the PsychoPath XPath processor needed to migrate off the Athena Common Build script it had been using to a Maven-based build, for three reasons:

I needed the ability to generate, and eventually host, a Maven repository for PsychoPath to help make it available outside of Eclipse. PsychoPath is just a bundle, but it doesn't have specific OSGi dependencies, so it can be used outside of an OSGi container.

Portability of the build script. While Athena itself is fairly portable, the underlying Eclipse base builder, and the assumptions it makes about project configuration and layout, can be difficult to work with at times and can be very system dependent.

Speed. Even with a project as small as PsychoPath, it takes on average 20 minutes to build and test using the Athena Common Builder. Much of this is due to the way the base builder and Athena set up the target platform.

Robert Munteanu answered a Twitter call to help get PsychoPath set up using Maven 3 and Tycho. The results, which can be found in the GitHub fork of the Eclipse code, were pretty impressive.

The average build time for PsychoPath is now under 2 minutes.

Test runs went from an average of 2 minutes to 46 seconds, while still running all 8,306 unit tests.

A p2 update site is created and available for consumption, and Maven repositories are available as well.

Pointing to the needed p2 repositories and the dependencies to install was just a matter of adding the appropriate repository URLs. Tycho figured out the dependencies and built just what was needed to compile and run the tests.
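In a Tycho pom.xml that boils down to declaring the p2 repositories with `layout` set to `p2`; a sketch (the repository URL here is an example, not necessarily the one the PsychoPath build uses):

```xml
<repositories>
  <!-- Tycho resolves OSGi dependencies against p2 repositories -->
  <repository>
    <id>helios</id>
    <layout>p2</layout>
    <url>http://download.eclipse.org/releases/helios</url>
  </repository>
</repositories>
```

With the repository declared, Tycho computes the target platform from the bundle manifests rather than from a hand-maintained dependency list.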

A Maven-based build is not without its disadvantages, in particular the fact that it downloads the Internet. However, after the initial download, and unless your upstream dependencies change, it only needs to do this once. Nick has made some improvements to Athena in this area, but the reconstruction of the target platform and the p2 provisioning remain the longest-running portion of an Athena build; almost half the build time is spent on provisioning.

After Helios is out and things have settled down again, I'll be migrating Robert's contribution back into the main PsychoPath code base. The Hudson-based build will be updated to use the new Maven build for the CI builds.

One item I have not yet figured out how to do correctly with Tycho and Maven is including code coverage tools like Emma and Cobertura in the builds. The instrumentation always seems to happen after the tests have been run from the test plugin, not before. Any insights on getting instrumentation working correctly with Tycho would be appreciated.

Updated June 8, 2010: Information on using Emma via ECLEmma headless can be found here.

Top 8, err... 9 features of Apple iPhone 4
As expected, Steve Jobs announced the Apple iPhone 4 today at WWDC 2010. He highlighted 8, err... 9 features as being particularly important in the new device:

1. All new design. The new phone is very thin, only 9.3mm. It has a stainless steel outer cover that doubles as the antenna for Bluetooth, WiFi, GPS, UMTS, and GSM. There’s a camera and LED flash on the back, and two microphones for noise cancellation (like the Nexus One). There’s a black and a white version.

2. Retina Display. At 326dpi, the iPhone 4 has the densest display on the market. According to Jobs, 300dpi is about the limit that the human retina can distinguish at 10-12 inches away from the eye, so he’s calling this the “retina display”. It uses IPS technology for a wide viewing angle (just like the iPad) and has an 800:1 contrast ratio.

Using the same 3.5 inch diagonal size as the iPhone 3GS, iPhone 4 packs in 4 times as many pixels (960×640). Because the aspect ratio is exactly the same as older iPhones, no change is needed from developers and all old apps run unmodified. However, developers can include higher resolution graphics if they choose.

3. New processor. The iPhone 4 will use the same A4 chip used in the iPad. Along with a bigger battery, the chip provides up to 7 hours talk time, 6 hours 3G browsing, and 300 hours of standby. Up to 32GB of storage is provided.

4. Gyroscope. The new device has a real gyroscope in addition to the accelerometer that is standard now in most smart phones. The combination gives you 6-axis motion sensing. Developers can use a new CoreMotion API for extremely precise movements. The standard compass, proximity sensor, and ambient light sensor are there too.

5. Camera. The camera’s resolution has been increased to 5 megapixels, plus they’re using a backside-illuminated sensor for higher quality. In addition to great pictures, it will also record HD video: 720p at 30fps. Geolocation is embedded for both photos and videos. There’s a camera on the front too (more on that later).

To go with the new camera, Apple has written a new app called iMovie for iPhone. It allows you to edit movies with transitions, music, titles, and so forth right on your phone. It’s for sale now for $4.99 in the App Store.

6. iOS 4. Formerly called “iPhone OS 4“, the new operating system has a ton of new features including multitasking, unified inbox, conversation threading, and so forth. Google will still be the default search engine on the iPhone but users will have the option of choosing Yahoo or Microsoft Bing.

The name change simply reflects that the same system will be running on iPhone, iPod Touch, and iPad. The golden master candidate of iOS 4 is being released to developers today. iOS 4 will be free to all 3GS, 3G, and iPod Touch 2nd generation users on June 21st.

7. iBooks. The iBooks app that originally appeared on the iPad is now available for the iPhone as well. That includes a new capability to read any PDF files, and iBookstore integration. You only have to buy a book once and it’s available on all your iOS devices. iBooks will automatically sync your place, bookmarks, and notes.

8. iAds. Advertisements allow developers to make money on free and low-cost apps in the App Store. With the new iAd platform, there’s a standardized, supported platform to create in-app ads that don’t bounce the user out into the browser. Apple sells and hosts the ads, and developers get a 60% cut via iTunes Connect (the same way you get paid for your apps).

iAds will be enabled July 1st but Apple has been selling spots for 8 weeks. So far they’ve sold over $60 million. JP Morgan estimated the entire mobile ad market to be worth $250 million this year, so that’s a significant number.

Originally Steve said he was going to highlight 8 features, but he couldn’t resist adding “one more thing”…

9. FaceTime. Video calling makes its Apple debut on the iPhone 4. It works with WiFi only right now. You can set it up so the other person sees you (via the front camera) or what you’re looking at (via the back camera). FaceTime is being proposed as an international open standard. It uses other standards like H.264, AAC, SIP, STUN, RTP, SRTP, TURN, and ICE.

So how much is all this going to cost you? With a 2 year contract, $199 for the 16GB model and $299 for the 32GB model. AT&T will let you upgrade to an iPhone 4 6 months earlier than usual if you re-up for another 2 years.

Preorders start June 15th, with the phone in stores on June 24th. 18 more countries in July, 24 more in August, and 40 more in September (total of 88 countries).

Developers made over $1 billion on iPhone, Jobs says
At today’s WWDC 2010, Apple announced that payments to developers for iPhone, iPod Touch, and iPad apps have now exceeded $1 billion US. Apple takes care of billing, taxes, and other administrative tasks, and in return takes a 30% cut of all sales. The rest goes directly to developers.

For free applications, developers can make money by showing advertisements instead of on the sale of the app. Apple’s new iAd platform aims to take over that market too, with a slightly larger cut going to Apple (40%).

According to a new report by Nielsen quoted by Apple, here are the US smartphone market shares for the first quarter of 2010:

RIM (Blackberry) 35%

Apple (iPhone) 28%

Microsoft (Windows Mobile) 19%

Google (Android) 9%

Other 9%

However these numbers don’t tell the whole story. Only iPhone and Android have any significant developer ecosystem. By the end of June 2010 there will be 100 million “iOS” devices (iPhone, iPod Touch, and iPad) in the field according to Apple. Apple has a good head start on Android, but Android is catching up fast.

Most analysts feel it is inevitable that Android will surpass Apple in the next couple of years, given the dozens of partners producing the devices, all trying to compete with each other and with Apple in terms of price and features. But for the near term, Apple still rules the roost.

Trends from the Eclipse Community Survey
The results of the Eclipse Community Survey 2010 are now available. Thank you to everyone, all 1696 people, that took the time to give us your feedback. A challenge for lots of open source communities is understanding the dynamics in the community, so these results provide a useful data point.

We have published a report, called the Open Source Developer Report, that provides a summary of the survey results. The detailed results and numbers are also available [xls] [ods]. For those interested in trends, we have done similar surveys in 2007 and 2009.

Each year I learn a lot analyzing the survey results. Last year I discovered the popular products used in the Eclipse community and in 2010 a lot of those same products are still very popular. However, some things did jump out as interesting trends for 2010.

Trend #1. Linux on the developer's desktop continues to grow. We asked developers what their primary operating system for software development was. In 2007, 20% said Linux was their development operating system. Now, in 2010, almost a third (33%) say Linux. The biggest loser seems to be Windows, down from 73.8% in 2007 to 58.3% in 2010. Interestingly, Mac OS X has only gone from 3.5% to 7.9%.

Trend #2. jQuery has a lot of momentum and usage in the RIA space. jQuery ranked the highest (26.9%) among RIA frameworks for those who stated RIA/Web Apps was their primary style of software. The next closest was Adobe Flex at 9.1%. In the 2009 survey, jQuery had around 5% adoption.

Trend #3. OpenJDK has gained a lot of adoption. I don’t follow the JVM market that closely, but I was pleasantly surprised to see 21.7% of the respondents state they target OpenJDK. Sun HotSpot predictably scored the highest at 68.8%.

Trend #4. DVCS usage is growing; CVS is shrinking. DVCS is a hot trend for software development and Git support is a hot topic for Eclipse project committers. Therefore, I was not surprised to see Git usage up from 2.4% (2009) to 6.8% (2010). Mercurial usage also increased from 1.1% to 3%. This growth seems to be coming from the decreased use of CVS, 20% (2009) to 12.6% (2010). Subversion continues to be the most popular at 58.3%.

Trend #5. Eclipse users upgrade quickly to new releases. 75.5% of the respondents said they were using Eclipse 3.5 (Galileo) and an additional 7.1% use the Helios milestones. I’ve always known the Eclipse community moves quickly to a new release but 82+% in less than 1 year is pretty impressive. If you are building products that target Eclipse users, providing support for older versions of Eclipse might not be that important. Granted, products that are built on top of Eclipse probably don’t move as fast.

Trend #6. Lots of fragmentation in the methodology space. I don’t follow the software methodology space that closely but I was surprised by the fact that 1) 25% of the respondents don’t use a methodology and 2) the most popular, Scrum, has only 15% adoption. The rest of the respondents identified over 18 different methodologies that they use for a development methodology.

Trend #7. Open source participation seems to have stalled. In the survey, we asked a question about corporate policies towards open source participation. In 2009, 48% claimed they could contribute back to OSS, but in 2010 only 35.4% claim they could contribute back. Conversely, 41% in 2010 claimed they use open source software but do not contribute back, up from 27.1% in 2009. Obviously this is not a trend any open source community would like to see. I am not sure why companies would become more restrictive in their open source policies. Any insight or feedback from the community would be appreciated.

Trend #8. The community is satisfied. Once again it appears the Eclipse community is pretty satisfied: 39.9% are very satisfied and 48.5% are satisfied. Pretty consistent with last year, so congratulations to everyone that makes Eclipse a great place.

There is a lot more information available in the report and in the detailed data [xls] [ods]. Let me know what you learned and your impressions. As with any survey, there are obvious biases and this is just one data point but I do think it represents a decent view of what developers are doing.

Xtext: Webinar and Eclipse Demo Camp
At 5pm CEST I will jump in for Sebastian in the Xtext webinar. Sven and I will give a free one-hour live seminar on Xtext, introducing the framework and demonstrating its fancy new Helios features. Moritz will assist us in answering your questions in the live chat. It's free and it's broadcast live via the Internet. Why not join?

If you're based in Germany's Ruhr area, you might want to visit the Eclipse Demo Camp in Dortmund. I will give another 20-minute demo of Xtext and, guess what, its new amazing features! But even for non-Xtext enthusiasts there will be nice demos, e.g. on SAP's new graphical modeling framework "Graphiti", on developing Android apps with Eclipse, and a lot more intriguing topics.

Finally, I plan to have my hopefully well-earned after-work pint in the Eclipse Stammtisch.

Off to Epicenter in Dublin
I am once again off to Dublin this week to speak at the Epicenter 2010 conference. Although I do enjoy a good Guinness, I am also looking forward to the conference, the topics I am covering this week, and the hours of travel to catch up on some pre-release tasks. We are just wrapping up our 2.1 release of EclipseLink as part of the Eclipse Helios release train, and there are many new features I am excited about, so travelling this week will give me a chance to complete some examples for the release and blog about the features all EclipseLink users will want to learn about.

I am speaking this week on performance and scalability. Even after 13 years of helping customers use TopLink and now EclipseLink, I still find diagnosing, solving, and innovating in these areas some of the most interesting work I take part in. Helping customers learn what they need to know about their models and application use cases, and translating that into their object-relational mappings, schema, and usage of their persistence layer, is challenging and often poorly understood by Java developers.

The persistence layer enables performance and scalability, but there are many simple decisions developers make while configuring and coding against their persistence layer that are made early in projects and have big effects late in projects, when trying to reach performance and scalability goals. Understanding what these decisions and their trade-offs are is so important, and hopefully I'll help some of our Irish community avoid common pitfalls.

After working with a couple of clients this week dealing with tough performance goals and very complex models, I have some great examples I'll be adding to my cookbook of slides on my way over the Atlantic.

Hope to see you in Dublin...

Helios and the Download Page
Helios Developer Builds.

The overall design goal for this page is to make the downloads page less cluttered while still providing all relevant information to the user. This is the first stage in that process; it's not finished, but I think we're close.

If you've got any feedback, comments, or complaints please visit Bug 310525 and tell us about them.

Writing Games for Android ... and other app stores.
At any rate, I watched Chris Pruett's session on Android game development from Google I/O and found it very educational. The first half was about technical details and tips behind Android game development, but the second half was just about good game development for mobile platforms. It's a great session to watch and I've embedded it at the end of this post.

The real game changer (sorry to use the pun :) in my mind is the growth of app stores or markets. Pretty much every mobile platform vendor is creating some sort of one stop shopping experience for their devices and it's a great vehicle for app developers to quickly get their wares into the hands of paying customers. And with systems like Valve's Steam, you also get this great experience on your desktops and laptops.

The biggest advantage of markets, as Chris illustrated, is that it's also a great way to keep in touch with your customers. Customers tend to be brutally honest about what you've provided them, and it's good to get first hand looks at that feedback. And also, thanks to the markets, it's also easy to get updates and fixes into their hands. It's a win/win.

Eclipse is to IBM, as an egg is to a chicken…
Actually, I’m not sure if that’s really a good analogy or not, but anyway…

opafan48: i have an issue with eclipse, i cant stand its connection to ibm, thats why i try to mind it.. how would u deal with this issue?
rcjsuen: opafan48: Eh?
rcjsuen: opafan48: Could you rephrase that?
rcjsuen: opafan48: You mean you don’t like the fact that IBM has people working on it?
opafan48: rcjsuen, not exactly, i dont like its origin.. which is as far as i read is in ibm..
rcjsuen: Well, there’s nothing you can do about it.
rcjsuen: That’s like saying
rcjsuen: I don’t like eggs because they come from chickens
rcjsuen: well okay it’s not like it’s the egg’s fault…?
rcjsuen: and it’s not like you can suddenly make eggs come from somewhere else
rcjsuen: Eclipse’s origin is IBM, and that isn’t going to change
rcjsuen: if you can’t stand it…i don’t know what to tell you
opafan48: a fork would be nice though :)
opafan48: with less commercial backing
rcjsuen: well, it’s open source, no one’s stopping anyone :o
rcjsuen: so it sounds like your problem is actually the fact that IBM has ppl on it
opafan48: i dont even kno the ppl.. so no problem with that
rcjsuen: Anyway, if you want to fork it, go right ahead.
opafan48: unfortunatelly i dont have either the talent nor the time..
rcjsuen: So what exactly would happen if it had less commercial backing anyway?
rcjsuen: I mean if it bothers you that much, feel free to use another tool. :)
opafan48: well i tried intellij idea, which is more sympathic in this issue.. but their license model doesnt fit my needs..
rcjsuen: I thought parts of IDEA was under ALv2
rcjsuen: And ALv2 is even more lax than the EPL.
opafan48: well thats not enough.. without the graph api
rcjsuen: opafan48: It sounds to me like you just want free stuff?
opafan48: rcjsuen, sure i do..

Well, at least this person’s honest…

NoobFukaire1: a fork OF ECLIPSE?
NoobFukaire1: do you realize the magnitude of that
rcjsuen: opafan48: well i dunno what you want us to say here exactly
rcjsuen: you want a fork, okay, but you don’t have the time, okay…

Twenty or so minutes later…

opafan48: anyway i check vi
* opafan48 has left #eclipse

So if you were on IRC at the time, what would you have said to opafan48?

As stated earlier, BIRT supports exporting to Excel. The BIRT Excel emitter creates a Microsoft Office XML XLS document that can be opened in Microsoft Office 2003 or greater. To use this feature, either add the __format=xls parameter to the BIRT viewer URL or use the AJAX export button.

If you are using the Report Engine API, simply set up a render option for XLS.
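As a sketch of what that looks like with the Report Engine API (this assumes an already initialized IReportEngine named `engine` and a report design file `report.rptdesign`; it is an illustration, not code from the original post):

```java
import org.eclipse.birt.report.engine.api.IReportEngine;
import org.eclipse.birt.report.engine.api.IReportRunnable;
import org.eclipse.birt.report.engine.api.IRunAndRenderTask;
import org.eclipse.birt.report.engine.api.RenderOption;

// Assumes 'engine' is an initialized IReportEngine instance.
IReportRunnable design = engine.openReportDesign("report.rptdesign");
IRunAndRenderTask task = engine.createRunAndRenderTask(design);

RenderOption options = new RenderOption();
options.setOutputFormat("xls");          // select the Excel emitter
options.setOutputFileName("report.xls"); // where to write the result
task.setRenderOption(options);

task.run();
task.close();
```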

While the XLS output is quite good, some features are not supported. For example, new worksheets are not created on page breaks, and images and charts are not exported to the XLS. While the team continues to improve the XLS emitter, there are some other options for emitting XLS. One of these is the Tribix emitter located on SourceForge. The Tribix project offers emitters for RTF, PPT, and XLS.

If you wish to use just the XLS emitter, download the org.uguess.birt.report.engine.emitter.xls_version and org.uguess.birt.report.engine.common_version plugins and copy them to the plugins directory in your Eclipse install location. You will also need to copy them to the runtime location; for example, if you are using the WebViewer, this will be the WebViewer/WEB-INF/Platform/plugins directory. You will also need to remove the org.eclipse.birt.report.engine.emitter.excel.config_version and org.eclipse.birt.report.engine.emitter.prototype.excel_version plugins from both locations to replace the out-of-the-box XLS emitters. Restart Eclipse with the -clean option and the Tribix emitter should work. No API changes should be required if you are using the RE API. New worksheets per page and image support should now work.

If you need more XLS output options, take a look at the Actuate XLS emitter that will be available in Actuate BIRT 11, which will be released this fall. It allows exporting charts either as images or as native XLS charts.

It also provides the capability to export formulas using a new scripting language called EasyScript, within the BIRT Expression Builder.

Reason I: You want to learn how easy Domain-Specific Language (DSL) development can be

People state that developing external DSLs is much more complicated than internal DSLs. With Xtext the opposite is the case. Xtext itself provides a concise language to describe DSLs, so that it takes only minutes to create a first draft of your language. You don't have to deal with complex meta programming or multiple syntax alternatives as you do in internal DSL development. With Xtext your language definition is concise and declarative and you get language specific IDE support.

Reason II: You want to see the new Helios features in action

In the Helios release, Xtext graduated to version 1.0. It has grown from a tiny little editor generator into a mature language development framework. Simple things are still simple, but you now get much more out of the box: namespace-based scoping, a workspace index, a builder infrastructure, validation and linking against dirty editor state, quick fixes, linking to any Java elements and tight integration with JDT, enhanced serialization and formatting support, and much more.

Reason III: You'd like to see what "Modeling 2.0" looks like

The days of heavyweight, dogmatic modeling approaches are over. In 2010, modeling technology is mature and a pragmatic solution to many problems. Xtext and EMF are a dream team! Models are now also code and integrate seamlessly with common development infrastructure such as version control.

Reason IV: You need a decent IDE for a particular programming language

It's astounding how many programming languages are used across the different industries. There are so many languages I'm sure you haven't heard of. Most of them have one thing in common: they lack decent tool support. I'm not talking about syntax coloring for Vim, but something close to what modern Java IDEs offer. We at itemis have already successfully implemented a couple of IDEs for various programming languages, and we will show one in the webinar.

Reason V: The webinar is free and interactive

As usual with Eclipse Live webinars, the event is free of charge and the integrated chat application allows us to communicate. The Xtext committers will attend and be happy to answer any questions. Unfortunately, Sebastian won't be able to join (for personal reasons), but Jan was so kind as to jump in and do the presentation with me. We are looking forward to meeting you online!

Have fun.

How to Run JUnit Programmatically
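The body of this post did not survive aggregation. As a stand-in, here is a minimal sketch of what running JUnit 4 programmatically typically looks like; the class and test names here are made up for illustration:

```java
import org.junit.Test;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;
import static org.junit.Assert.assertEquals;

public class RunJUnitProgrammatically {

    // A trivial test case to run programmatically.
    public static class MathTest {
        @Test
        public void addition() {
            assertEquals(4, 2 + 2);
        }
    }

    public static void main(String[] args) {
        // JUnitCore.runClasses executes all @Test methods and collects a Result.
        Result result = JUnitCore.runClasses(MathTest.class);

        // Report any failures, then a summary.
        for (Failure failure : result.getFailures()) {
            System.out.println(failure.toString());
        }
        System.out.println("Tests run: " + result.getRunCount()
                + ", successful: " + result.wasSuccessful());
    }
}
```

The Result object also exposes run counts and timing, which makes this approach handy for embedding test runs in build tooling.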

Upcoming Event: Patterns and Best Practices for Effective Java UI Testing
Register Now

Phil Quitslund (Instantiations)

Abstract:

Join Java and Eclipse expert Phil Quitslund as he presents best practices for creating effective, consistent, and robust user interface tests. The user interface (UI) of an application is your front-line connection with the customer—if it doesn’t look good and work effectively the first time, your company reputation is at stake. The talk will address testing at all the stages of the development process and concrete examples will be drawn from extensive real-world experience testing Java/Eclipse/RCP applications.

There is not much information on how to include Spring Beans within a report design. This post details an example of injecting the Spring ApplicationContext into BIRT’s AppContext object which will allow you to call your Spring Beans in BIRT expressions or event handlers. A link for the source is listed at the bottom. A readme file containing instructions for building the example is included in the download.

You can include the BIRT runtime by following the post above or by examining the download. The example implements a Spring controller with the following code.

Note that no error checking is implemented in this example. The controller class just extends the Spring AbstractController class and passes the response object to the BIRT engine to output the report. The report name is retrieved from the request. Before running the report, the BirtEngine class is used to retrieve the BIRT engine. This is virtually the same BirtEngine class used in the servlet example available in the BIRT wiki, with some exceptions that are noted here.

The setEngineHome method is passed a blank value, and setting the PlatformContext to a PlatformServletContext will, by default, look for the BIRT plugins in the WEB-INF/Platform/plugins directory. Next, we set the parent classloader for the report engine plugin so that classes available to the project will also be available to the BIRT engine. The final line gets the Spring ApplicationContext instance, loads it into the BIRT AppContext object, and names it “spring”. The method used for retrieving the Spring context is described in a great post here.

This class implements the ApplicationContextAware interface, which causes the Spring framework to call this class back with the Spring context when it is created. The Spring configuration file for this example looks like the following.
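The configuration file itself was lost from this copy of the post. A minimal sketch of what it might contain: the carPojo bean is named in the text, but the class names here are hypothetical placeholders:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Callback holder that captures the ApplicationContext for the BirtEngine class -->
    <bean id="contextProvider" class="com.example.ApplicationContextProvider"/>

    <!-- The bean accessed from BIRT expressions as spring.getBean("carPojo") -->
    <bean id="carPojo" class="com.example.CarPojo"/>
</beans>
```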

We have a carPojo bean available. To access it from a BIRT expression, all we have to do is use the following syntax.

var mypojo = spring.getBean("carPojo");

You can then call methods on the object, e.g. mypojo.getYear();

Output for the example is presented below.

Caveats

If you preview the report in the designer, the report will not work, because the Spring context is not available to the designer. The report has to be deployed to test it. There are many ways this example could be extended to circumvent this issue. BIRT provides an extension point for enhancing the expression builder, which could be used with Spring Remoting to access the Spring context. BIRT also provides an extension for implementing a BIRT application context object within the designer that could be used in the same fashion. I will try to implement one of these methods in the future to illustrate this concept.

EclipseLink Summit 2010 Wrap Up
Last week we held our first EclipseLink Summit here in Ottawa with attendees from Canada, Germany, India, and the US. The principal goal of the summit was the exchange of technical information and ideas. We believe the event was a tremendous success!

We spent 2 days with committers leading technical sessions discussing areas from high level components and architecture to the detailed workings of EclipseLink's querying, caching, transactions, management, diagnostics, metadata processing, JPA 2.0 metamodel, and several other subsystems.

We would like to extend a big thank you to all of the committers who led those sessions. The preparation time invested was obvious and the quality was amazing. We know it was tough to squeeze this additional work into your hectic pre-Helios schedules. The feedback from all attendees was excellent.

On the 3rd day of the Summit we focused more on the project itself, starting with a talk from Jeff McAffer (EclipseSource) on OSGi and Eclipse RT technologies. We then discussed EclipseLink and OSGi, documentation, development process, build, testing, and our road map planning.

While not all of these sessions allowed us to come to concrete conclusions, the discussions were great and will hopefully carry on in our weekly committer meetings, where we continue to improve our processes and refine our direction.

For those interested, a more detailed summary of our Thursday sessions will be published to the eclipselink-dev@eclipse.org mailing list, and topics requiring further discussion will be added to the weekly meeting agenda.

Thanks again to all the presenters and attendees. We are all looking forward to future EclipseLink Summits where we can gather committers, contributors, and users to share ideas and grow our community.

Your Summit Program Committee

Doug Clarke, Peter Krogh, and Shaun Smith

Welcome to 171 New Friends of Eclipse

The response to our Friends of Helios campaign has been simply outstanding! Since we launched the campaign, 171 people have joined as Friends of Eclipse. Thank you, thank you, thank you!

We just need 189 more people to join by the end of July to hit our goal of 360 Friends of Helios. If you haven’t already joined, please consider doing so today.

Opening files in Eclipse from the command line
query to see all the bugs fixed in the Eclipse Platform in 3.6; it is a long list (4309 and counting). Felipe gets credit for the oldest bug fixed (raised in 2001), but in a close second is bug 4922 (raised only a day later).

This bug is about opening files in eclipse from the command line. Fixing it required a coordinated effort between Platform UI, SWT, and the Equinox launcher. A lot of the credit for what was done goes to Kevin Barnes.

This post is an effort to explain some of the technical details of what is going on here.

On the Mac...: All we do on the Mac is handle the Apple event "kAEOpenDocuments"; most of the rest of this post doesn't apply to the Mac.

Windows and GTK... Everything below applies to Windows and GTK, though there are some differences in the implementation details.

On Motif... Sorry, this doesn't work on Motif.

The Launcher

Everything starts in the eclipse launcher. We added a few new command line options:

--launcher.openFile : open the specified file in eclipse.

--launcher.defaultAction : less obvious, specifies the action to take when the launcher is started without any '-' arguments on the command line. Currently the only supported value is "openFile".

--launcher.timeout : a timeout value for how long we should spend trying to communicate with an already running eclipse before we give up and just open a new eclipse instance. Default is 60 (seconds).

The first argument is obvious enough, open the specified file in eclipse.

eclipse --launcher.openFile myFile.txt

This is great, but it is a bit much to type on the command line and is not quite enough to make everyone happy. We introduced the "default action" argument. This goes in the eclipse.ini file, the value should be "openFile":
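Since eclipse.ini takes one argument per line, the entry looks something like this (a minimal sketch; it should appear before any -vmargs line):

```ini
--launcher.defaultAction
openFile
```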

This tells the launcher that if it is called with a command line that only contains arguments that don't start with "-", then those arguments should be treated as if they followed "--launcher.openFile".

eclipse myFile.txt

This is the kind of command line the launcher will receive on Windows when you double click a file that is associated with eclipse, or when you select files and choose "Open With" or "Send To" Eclipse.

Relative paths will be resolved first against the current working directory, and second against the eclipse program directory.
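The resolution order can be sketched as a small standalone method (class and method names are illustrative, not the launcher's real code):

```java
import java.io.File;

// Sketch of the launcher's resolution order for relative path arguments:
// absolute paths pass through; relative paths resolve first against the
// current working directory, then against the eclipse program directory.
public class LauncherPathResolution {
    public static File resolve(String arg, File cwd, File programDir) {
        File f = new File(arg);
        if (f.isAbsolute()) {
            return f;
        }
        File inCwd = new File(cwd, arg);
        if (inCwd.exists()) {
            return inCwd; // current working directory wins when the file exists there
        }
        return new File(programDir, arg); // otherwise fall back to the program directory
    }
}
```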

Talking to SWT

The launcher talks to SWT through the use of a hidden window. The launcher and SWT both need to agree on the name of this window. This allows the launcher to find an already running eclipse and tell it to open the file. Any RCP application will need to ensure they get this right for things to work.

The launcher bases this on its "official name". The official name can be set with the -name argument. If -name is not set, then the official name is derived from the launcher executable, the extension is removed and the first letter is capitalized: rcp.exe becomes Rcp.
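The derivation rule is simple enough to sketch (again, names here are illustrative):

```java
// Sketch of how the launcher derives its "official name" from the
// executable when -name is not given: drop the extension, capitalize
// the first letter ("rcp.exe" becomes "Rcp").
public class OfficialName {
    public static String fromExecutable(String exeName) {
        int dot = exeName.lastIndexOf('.');
        String base = (dot > 0) ? exeName.substring(0, dot) : exeName;
        return Character.toUpperCase(base.charAt(0)) + base.substring(1);
    }
}
```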

SWT bases this on the value set with the Display.setAppName() function. Normally, this is set by the Workbench when it creates the display and the value is the "appName" taken from the product extension point.

Listening to SWT

To take advantage of this, an RCP Application will need to register a listener for the SWT.OpenDocument event. It should register this listener before calling PlatformUI.createAndRunWorkbench so that the listener is in place before the workbench starts running the event loop.

The event loop will start running while the splash screen is still up, so events may arrive before the workbench is ready to actually open an editor for the file. This means that the listener should save the file paths it gets from the OpenDocument events so they can be opened at some later time. WorkbenchAdvisor#eventLoopIdle can be a good place to check for saved open file events.
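The buffering pattern can be sketched without the SWT and Workbench types, so the snippet stands alone (the class name is made up for illustration): the OpenDocument listener only records paths, and the idle hook drains them once the workbench can open editors.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: OpenDocument events may fire before the workbench can open
// editors, so file paths are queued by the listener and drained later.
public class OpenDocumentQueue {
    private final List<String> pending =
            Collections.synchronizedList(new ArrayList<String>());

    // Called from the SWT.OpenDocument listener with the event's file path
    public void onOpenDocument(String filePath) {
        pending.add(filePath);
    }

    // Called later, e.g. from WorkbenchAdvisor#eventLoopIdle, to open editors
    public List<String> drain() {
        synchronized (pending) {
            List<String> copy = new ArrayList<String>(pending);
            pending.clear();
            return copy;
        }
    }
}
```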

Implementation details

Here is an overview of the flow of events in the launcher when processing --launcher.openFile on Windows.

Get the Official Name. As mentioned above, this is the "-name" argument, or derived from the executable name. For this explanation, we will be using "OfficialName".

If multiple files are selected and opened on Windows, then a separate eclipse process will be created for each one. A mutex allows us to ensure only one eclipse instance is actually started.

One process will win the race to acquire the mutex. At this point, there will be no eclipse instance running that has the SWT window available. This process will start normally and eventually create the SWT window, at which point it will release the mutex.

All the other processes wait trying to acquire the mutex; once the original process releases it, they will be able to find the SWT window and post their open file message there.

Each process only waits for --launcher.timeout seconds (default 60 seconds) before giving up and just starting its own full eclipse instance.

Find the window named "SWT_Window_OfficialName"

If no such window exists, we are the first eclipse instance. In this case, we set a timer to look again later and then proceed with starting eclipse.

The timer fires every second for --launcher.timeout seconds.

If we don't find the SWT window before the timeout (perhaps it took too long for the workbench to create the display), then we will be unable to open the file.
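The retry loop amounts to polling once per second until the timeout expires; a standalone sketch (the interface and names are illustrative, not the launcher's C code):

```java
// Sketch of the launcher's polling behavior: check once per second for
// the named SWT window until the timeout expires.
public class WindowPoller {
    public interface WindowFinder {
        boolean exists(); // e.g. "does SWT_Window_OfficialName exist yet?"
    }

    public static boolean waitForWindow(WindowFinder finder, int timeoutSeconds) {
        for (int i = 0; i < timeoutSeconds; i++) {
            if (finder.exists()) {
                return true; // found the window, the open file message can be sent
            }
            try {
                Thread.sleep(1000); // the real launcher uses a timer firing every second
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // gave up: unable to deliver the file to a running instance
    }
}
```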

Send a message to the SWT window

Once we've found the SWT window, we create a custom message named "SWT_OPENDOC". We send this message with wParam & lParam specifying a shared memory id.

We write the name of the file to open into shared memory, and when SWT receives the SWT_OPENDOC message, it uses that id to read the shared memory.

The launcher has long used shared memory on all platforms for the splash screen, restarts and exit messages.

Once SWT reads the file name from shared memory, it posts its own SWT.OpenDocument event.

On GTK, a semaphore is used instead of a mutex. Semaphores are not cleaned up automatically if the process exits unexpectedly, so we try to hold the semaphore for as short a time as possible and we install SIGINT and SIGQUIT signal handlers for the time we hold it.

The launcher creates a hidden GTK window named SWT_Window_LauncherOfficialName, which is used in the same way as the mutex on Windows. This lets us avoid holding the semaphore for an extended time while the first eclipse process starts up.

The value is a colon-separated list of file paths to open. Shared memory is not used like it is on Windows.

]]>
My Helios release
Well, the same procedure as every year (no, it's not p2 this time:-) - here is my personal "Helios" release, including updates of all of my own plugins (there is no FindBugs Eclipse plugin update yet, as I was/am heavily involved in MercurialEclipse development). BTW, 5 of the plugins are now hosted on Eclipse Labs, using Mercurial as the version control system behind them.

Side note: Last August, after the disaster with bugfix propagation between different branches in the FindBugs project, I started looking for a replacement for SVN (which was the evil system). I quickly obtained committer rights in the MercurialEclipse plugin - and now, a few months later, I've managed all of my project updates in a full-featured IDE without the command line.

There are no groundbreaking changes - mostly some code polishing and a few small enhancements, as most of my plugins are mature enough. Go to the project pages to see what the differences to the old versions are. Warning: with this "Helios" release I give up support for Eclipse releases older than Eclipse 3.5. Technically there are no changes preventing the code from running on older Eclipse versions, but I simply don't have the time and energy to support ALL of the possible plugin/platform/OS combinations. Eclipse 3.5 + 3.6 should be enough from now on.

One of the biggest challenges when implementing the AUTOSAR standard is to create scalable solutions. AUTOSAR projects might become very, very big, and in order to work with such huge projects the tools need to perform very well. In the modeling world people usually strive for repository-based (i.e. database-based) solutions as soon as projects get really big. However, the folks at BMW Car IT wanted to develop AUTOSAR projects in a traditional text-based manner, and given all the good experiences with tools like JDT or IntelliJ, it's clear that text-based IDEs can scale very well.

In order to implement the language, they had first tried the old Xtext version from oAW, which was way too slow. Later, when the new TMF Xtext came around, they gave it another try and saw that the performance had improved significantly. Sebastian showed the following slide in his talk. After that he explained how they solved a couple of other problems, such as supporting different releases of the standard or making the language extensible. All the solutions looked very nice; they must have some very skilled people at BMW Car IT.

He compared working with the state-of-the-art commercial graphical modeling tool (What's the name of it?) and the ARText IDE and found that the use of ARText reduces development time by about 40%.

In the end there was this nice summary slide I don't want to withhold: It was a very nice talk and of course a pleasure to see that Xtext is used by such smart people in such an interesting environment. And even better, it seems that they've had as much fun using Xtext as we had, and still have, when developing (and using) it. :-)

Btw.: If you have other interesting applications of Xtext or Eclipse Modeling in general, please contact me. (Even if you don't want me to blog about it ;-))

]]>
Pair Programming in the Wild
Pair programming is one of XP's most controversial practices, and that may have been one of the reasons I initially got attracted to it about 5 years ago. After all, sticking only to practices that are mainstream will end up with mainstream results, yet studying out-of-the-box practices that may potentially yield a world of difference in productivity and quality is like discovering an O(1) algorithm in comparison to an O(n) one: big difference!

Notice the emphasis on how the programmers "solve problems together" as opposed to write code together. In other words, writing code is not the bottleneck, solving problems is.

If writing code was indeed a bottleneck, then pair programming would have been a very different skill. It would have been about one programmer learning how to type on two keyboards at once instead of two programmers typing on one keyboard. It would have been about dedicating your left brain for one computer monitor and your right brain for another. It would have been about writing code that writes code for you. All of these things would have been interesting skills to master if writing code was the bottleneck.

In reality though, writing code is just a tiny concern in comparison to solving big programming problems for business. And, here are just a few examples of the problems I am talking about:

Where do I put the responsibilities for reporting on a collection of objects to make the code as maintainable as possible in the future?

What is the most efficient SQL query I can write to have the report run fast?

Do I need pagination or is the result set small enough?

Is it worth applying the State Pattern to this problem or are the state related actions few enough to warrant not applying the pattern?

Do I need a layer of presentation objects between the models and the view or would the code end up simpler without it?

I cannot emphasize enough how often I have spent hours on such problems on my own, only to take a break and talk to another developer, and then get an immediate solution from their point of view.

That made me curious about all the scenarios that benefit from pair programming:

Decisions related to code aesthetics/API often get resolved quickly when validated against another developer's opinion, so the pair finishes faster and with clearer code.

When deciding on one of multiple alternative solutions to a problem, a developer working alone may hesitate quite a bit about picking what is best for the team. Having a second developer present provides more confidence and speeds up the decision process.

Synergy is the idea of 1 + 1 > 2. This can help a lot in solving problems that involve creativity. Often developer A has one solution in mind that is not optimal and developer B has another solution that is not optimal. So, leaving one developer to implement his solution alone may yield mediocre results whereas having the two developers discuss their solutions first may yield a new solution that is much better than the two original ones.

When solving a problem that requires multiple skills (e.g. OO skills vs SQL querying skills), it is common that no one developer on the team is the best in all of them. So, having two developers work on the problem will increase the chance of addressing all parts of the problem optimally, and at the same time cross pollinate the developer skills. For example, I have learned quite a bit from pairing with a developer who was proficient at SQL, while I helped him learn quite a bit about OO design.

When the driver spends too much time focusing on a problem that is of low priority, the navigator who has more of a bird's eye perspective will often notice that quickly and prevent the driver from getting derailed for a few hours unnecessarily.

Under the surface though, there are less apparent under-estimated benefits that improve developer skills and the development team quite a bit in the long term:

Having developers socialize while programming on a daily basis increases team bonding and commitment toward the success of the project.

It can be quite fun, thus greatly motivational.

When developers of different experiences pair together, they cross pollinate their knowledge, learning quite a bit from each other, and getting stronger in the long term. One example of this is the number of shortcuts I learned while programming with the Eclipse IDE on Java projects. I got to a point where I can almost do anything by keyboard without ever wasting time reaching for the mouse. And, whenever I paired with new programmers, they would get surprised by the number of shortcuts I knew, and tell me that it intimidated them to learn that many shortcuts. I had to explain to them that it was like watering a plant: I learned all my shortcuts a few shortcuts a week over 12 months of pairing with different developers, thus expending minimal yet consistent effort.

Given that I am clearly sold on pair programming, does that mean I do it all the time? Well, there are cases when I avoid it for practical reasons:

I get exhausted from pairing for 5 hours straight. Yes, pairing can get exhausting, so it is important for a pair to realize the point at which they need to take a break from pairing.

I come to work tired from lack of sleep. I know I would not be effective pairing in that mode.

I have boilerplate work that is mind numbing, such as data setup or the like. In this case, typing would indeed be the bottleneck; that can be a bad sign indicating a lack of automation or having the wrong person do the job (a developer doing the job of a data entry clerk).

I would like to work with a new technology on my own for a while in order to solidify my learning of it after having spent some time pairing with someone on learning it.

So to summarize, pair programming is about synergistically solving problems, not just having two developers typing on one machine. As a result, the benefits are:

Increased productivity

Higher code quality, indirectly contributing to productivity in the long term.

Better solutions, indirectly contributing to customer satisfaction.

Increased team commitment

Continuous improvement to developer skills

Comments are welcome, especially to share personal experiences or ask questions about pair programming in the wild.

]]>
Autonomy, Mastery, and bug 313989
day job to watch an interesting video on YouTube about what motivates people, as recommended by David Carver.

The video explains what might motivate people (who typically hold a challenging and rewarding day job) to contribute their efforts, for free, to open source projects, and it really got me thinking (spoiler alert -- go watch it first): The video concentrates on three significant drivers for motivation: Autonomy, Mastery, and a Sense of Purpose. I considered this in terms of my WTP interests:

"Autonomy" is right on target: Nobody told me to get involved with web tooling in Eclipse; this is purely an itch. For a free-time contributor, I believe I scratched my itch really well.

"Mastery" is on target too, since nothing teaches you a spec(*) as well as trying to implement it, or filing a bug against said implementation. It's nerdy, but rewarding!

"A Sense of Purpose" is a bit more difficult... What is the purpose of contributing to an open source project, anyway? Is it ... mastery for the sake of professional/career development? ... just "scratching an itch"? ... to improve the quality of a common resource? ... to earn the respect of my peers? ... to make the lives of users (other developers) easier? I'm not sure I have a clear answer on this one, but it got me thinking.

I wouldn't have taken as much notice if my watching this video didn't coincide with the WTP 3.2 RC2 build, which I took for a spin, and found three really annoying defects (bug 313989 being the most trivial of those). These bugs weren't really enough to warrant PMC reviews and all that process, but to me, they felt just like when you notice the first scratch on the paint of your brand new car: Sure, you realise it's probably going to get worse -- but you would have preferred not to know about it.

The point is this: I should have found those bugs earlier! If I hadn't been so busy investigating all kinds of other unrelated, non-WTP stuff (issues in Xalan and Hibernate, besides investigating how face recognition works, just because...) I could have done much better! Boom, there goes my sense of purpose, no matter how I look at it: Lost in unfocused dabbling.

So my conclusion was this: The "sense of purpose" motivating me to work with WTP is to develop the best IDE for working with XML schemas and documents, and to make our XPath2 implementation consumable for the likely adopters. My open source effort will be concentrated on that (WTP+XML) for the next two years: Make it to the New and Noteworthy. So while I might take other tools for a spin (Xtext rocks!), those other projects shouldn't wait up for patches from me -- for the next two years.

The mirroring task is used extensively in the Eclipse and RT Equinox build. Mirroring a p2 repository allows you to copy the metadata and artifacts to a new location. You can mirror your entire repository or a subset of IUs to a new location. We call mirroring a subset of IUs slicing.

We run the mirroring task with a comparator. This allows us to compare the bundles that have just been built with the bundles that already exist from other builds in the composite repository. We want to guarantee that bundles with the same unique identifier and version have the same binary content. Do you know which one of your bundles is not like the others?

A different compiler with the same source could produce different byte code. Using a new builder can change the content of your bundles, for instance if you enable source references.

We use a comparator with a baseline to compare the bundles with the same name and id available in our repositories with the ones that were just created in the current build. Newer bundles with the same id and version are discarded. This process guarantees that if a user installs a build from a repository or a zip, they will have the same bundles in their install. Otherwise, you risk inconsistent bundles for your users. Not good.

We call the p2.mirror task like this:
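Based on the description that follows, the Ant call looks roughly like this (the repository locations and the excluded artifact id are placeholder values, not the actual build's):

```xml
<p2.mirror ignoreErrors="true">
  <source location="file:${buildDirectory}/buildRepo"/>
  <destination location="file:${childRepo}"/>
  <comparator comparator="org.eclipse.equinox.p2.repository.tools.jar.comparator"
              comparatorLog="${buildDirectory}/comparator.log">
    <!-- the baseline: the existing composite repository from older builds -->
    <repository location="${baselineRepo}"/>
    <exclude>
      <artifact id="org.example.bundle.to.skip"/>
    </exclude>
  </comparator>
</p2.mirror>
```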

We are mirroring from the source (unzipped repository of our build time feature containing all features and plugins in the build) to the child repository location. The IgnoreErrors flag is set to true so the mirroring operation doesn't fail if there are differences. The org.eclipse.equinox.p2.repository.tools.jar.comparator is used to compare the bundles between the two locations and output the differences to a log. The repository location or baseline is the existing composite repository with content from older builds. The comparatorLog is parsed by a JUnit test which generates a failure if the log indicates differences. You can also exclude certain bundles from being compared, as you can see in the exclude stanza.

Hey Kim, what's with all the oranges? I'm very fortunate that I have the opportunity to run my first marathon with my friends on a beautiful course in Ottawa and Gatineau this Sunday. It will be a challenge. After the race, there will be sweet orange slices in the recovery area for the 39,000 people running in Ottawa this weekend.

]]>
No OSGi on your phone
OSGi was initially created for the mobile and embedded world. I think it is the dream of every OSGi geek to develop applications for a cool OSGi engine that is bolted together with the operating system of your phone. An OSGi phone would provide access to everything your phone can do, together with all the goodness that comes from OSGi. A dream phone would allow you to bring in your OSGi service that integrates with your cloud service, over the air, when declared as a dependency. Although OSGi has had a few attempts to really break into the mobile phone world, it is unfortunate that the OSGi phone will remain a dream for the foreseeable future.

Nokia did work on an OSGi-based Java environment for Symbian phones. Nokia even went to the trouble of creating a JSR for it, and established a pretty ambitious R&D program around OSGi. It was actually these ambitious goals that eventually caused it to fail. The R&D program was ambitious because it not only promised to provide OSGi but also tried to get midlets to work together with the OSGi engine. However, the OSGi-aware midlet model, especially MIDP security, did not fit OSGi, and the R&D effort was never able to deliver a solution that was acceptable. In my personal opinion, the main flaw of the effort was treating OSGi as another runtime on the device rather than the main engine.

Most of Nokia's effort did not get wasted, though. In the older MIDP environment, all the pieces of the Java environment (all the JSR implementations, etc.) were compiled into a single binary together with the VM and were loaded together with it. The OSGi model required a flexible architecture, so almost all the pieces of the Java environment were re-designed to be separate libraries consisting of a jar and possibly a native dll. These pieces are compiled separately and are loaded on demand by the VM. This architecture later made it into the MIDP environment as well. The Java environment of S60 3.2 and later devices, and the Java environment currently available as part of the open source Symbian Foundation code, carries this architecture. A few advanced APIs such as eSWT were also created in this era.

A second opportunity came when Google started working on Android. The goals of Android were almost a perfect match with OSGi. And this time OSGi would have been the engine that runs all the services and applications of the phone, which was lacking in the earlier Nokia attempt. It is also known that there were members of the Android team who knew OSGi well. It is hard to know as an outsider what really went on, but Android did not use OSGi and built its own versions of the concepts to provide similar functionality.

Although there were later attempts, like the Sprint Titan platform, to bring OSGi to mobile phones, they also failed when the smartphone market changed rapidly in different directions. Unfortunately, in the current climate it looks very unlikely that anyone will bother to spend the time and energy to make OSGi part of mobile phones.

]]>
An embedded interpreter for eclipse
As a Java developer who is starting to use ruby and javascript for a lot of things lately, there is one thing I miss most: an embedded shell/interpreter for eclipse!

An embedded console or interpreter is a very powerful tool: it allows you to do some very interesting things with your software as it is running, play around with it, tweak it, and anything else you can imagine. All of this without the edit-save-compile-relaunch cycle.

Lately I’ve been working on an embedded console for eclipse. The primary motivation was to try out scripting approaches for SWTBot. But I soon realized that I was using it for more than just scripting tests. I was using it to learn how eclipse works, try out different approaches to decide which one is best.

Some of the features include code completion and history lookup. This is possible using jruby’s objectspace and readline support.

Here’s a small teaser video of what you can do with an embedded jruby console for eclipse.

In this two-part screencast you are going to see how to install and use the SpringSource Tool Suite with the Google Web Toolkit to develop a working web application.
The screencast is based on a Google I/O 2010 presentation, with detailed step-by-step instructions on how to install and use STS with GWT 2.1.

Yes! This plugin notifies the test result to Growl. It is simple, but effective in driving the development of your application! The plugin uses a native library, so there are some constraints on using it. If you use the Eclipse for Cocoa 64-bit edition, you can't use this plugin because the native library can't load in your Eclipse environment.

The plugin is published below: http://kompiro.org/download/junit.extensions.eclipse.quick.mac.growl_0.1.0.201005252224.jar If you are interested in this plug-in, then download it and put it in your dropins folder!

]]>
Where Do Bugs Hide?

They hide in the code that is not being executed by your unit tests. The code that is not under test is just an incubator for them to multiply and fester. Along with code reviews and static code analysis tools, coverage tools like Emma and Clover should be a part of your build and development process.

For Maven users, it is as easy as adding emma:emma as one of the goals.

Hudson has a really nice Emma plugin that makes these results visible. The stats do not tell the whole story but they tend to reveal a lot about the test coverage of your current test suite.

]]>
My love-hate with SVN, Part 8: Installation Ease Of Use (UPDATED)
Back in July 2009, I blogged about My love-hate with SVN, Part 6: Installation Ease Of Use. With Helios just around the corner, I wanted to produce an updated repo for use with the latest & greatest Eclipse 3.6.

The conference was thick with Eclipse-love, starting with multiple mentions during the keynotes on Tuesday. It was clear that everybody that was on stage assumed that everybody in the audience knew about Eclipse.

We received a steady flow of guests at our booth in the Sandbox. Most of the visitors knew about Eclipse. Well… they knew at least something about Eclipse. Most knew about Eclipse, the IDE. Ian and I took the opportunity to broaden horizons wherever we could. “Yes, Eclipse is a Java IDE. But would it surprise you to learn that Eclipse is really an integration platform? A platform for building tools? The most comprehensive set of open source modeling tools and runtimes anywhere? A runtime platform? Would it surprise you to learn that Eclipse has entered the runtime space? Heck, we have more than 200 different projects covering everything from IDEs to identity management and object persistence” (it’s always a challenge to come up with a good pithy gamut for Eclipse).

A lot of our visitors use Eclipse to build applications with Google Web Toolkit (GWT); they came to us with both kudos and questions about the GWT Tooling. I was a little embarrassed that I have not spent any time with GWT development, but still took the time to tell them about recent efforts to provide EMF support for GWT, providing me with ample opportunity to introduce vast numbers of modeling rednecks to a brave new world.

Perhaps the lion’s share of the visitors to our humble booth use Eclipse to develop applications for Android. Again, there were kudos and questions. As is often the case with questions about Eclipse, the first challenge is to determine who is the right group to field the question. Since the Android SDK is based so heavily on Eclipse, it’s difficult to know if the Android SDK project, or the Java development tools (JDT) project, or the Eclipse Platform project, or some other source is the right place to go for help. Most of the questions were pretty solidly the domain of the Android SDK team, but the exercise highlighted the fact that finding help is still a big challenge. Frankly, I think that Eclipse Forums are an excellent place to find help; but I also quite like Stack Overflow (especially for questions that venture outside the domain of Eclipse projects). A couple of visitors asked about building Android apps with native code. Thankfully, Doug took interest in this topic some time ago, so we have an answer for this.

There were a lot of folks who just wanted to come by and bask in Eclipse greatness. I love the whole fan-boy thing. Some folks just want to say how much they love Eclipse. Others came to challenge me to show them something that they hadn’t already seen. For some, “CTRL-1” did the trick. For others, I pulled out Mylyn. Nobody left disappointed.

My main take away from the conference is that Eclipse is very much a part of Google’s tool strategy. My sense is that there is a lot of opportunity for other Eclipse technology; like every other audience of Eclipse technology, our task is to leverage the love of Eclipse-based IDEs into broader knowledge of Eclipse as a whole. I think we made some excellent progress on that front last week.

I also managed to take away two phones: a Nexus One and an HTC EVO (with a month of voice and data service that actually works in Canada). Both are very nice (though the EVO is a little bulky). They have inspired me to spend a little more time with the Android SDK. Let’s see what other Eclipse technology we can shove in there…

]]>
My take on Android Keynote at Google I/O

Despite the lack of surprises, there were a few interesting points that were made, and a couple that were not, which gave me pause.

First was Google TV. Yes, it too was rumored, and it came out pretty much as rumored. There were two things I found interesting about it. And no, a set top box based on Android isn't one of them, as we pretty much expected that eventually. No, first was confirmation that the initial Google TV boxes will be based on Intel Atom chips. What that means is the first real confirmation of an x86 port of Android from Google officials. The new Android Native Development Kit r4 that also came out today confirms it with an x86 port of the native libraries. It's not quite cooked yet, as there's no toolchain to build with and no image to run on, but the pieces are there and a sign of things to come.

I guess the other thing that struck me about Google TV, which I think was even more interesting, was that they were running Google Chrome on top of Android on top of x86. I've heard rumbling about the lack of information about Chrome OS at the conference. Now, if they have Chrome running on Android for Google TV, why not run it on netbooks too? And then why have Chrome OS at all? That's why I'm guessing there won't be one, or if it does appear, we'll all be wondering why when you can get all that plus Android apps to boot, just like on Google TV. Or maybe that is what Chrome OS is. We'll see (or maybe not)...

So there were a few new things in the Android NDK for me to chew on. One that will make the CDT build integration I've been working on easier to do is the ability to build outside of the NDK directory tree (weird, yeah, but they previously reused the build system from the platform that is done that way). So look for that sometime in the near future. There's also gdb support for native applications. I'll need to see what's needed to hook that up to CDT's debug interfaces. In the end it'll be a great public example of how to use the CDT in a cross compile and remote debug scenario. Not to mention it'll be fun to use :).

]]>
Helios RC1 +1, +2 +3, go

Helios is organized in layers; each Eclipse project gets a number like +1, +3, or -1, meant to indicate that a +2 project depends on the builds of +1 and lower projects.

For example, GEF is +1 and depends on the 0 layer Eclipse platform build.

Now that the GMF Restructure is completed, I was keen to make sure I adopt and build with all the RC1 builds for all the dependencies.

Historically, some projects do not do this. They build with the latest available. This leads to some problems and you will see an RC1a build here and there. I wanted to make sure GMF was clean.

GMF has historically been a +2 component. If you look at the diagram below, +2 never fit really well, but given there are +3 components depending on the GMF Runtime, we made it work.

The projects in green are the builds I produced this week. Unfortunately, I was "mostly late" on all of them.

I produced GEF RC1 late on Monday at 9:20 EST; we were late pushing fixes for a few critical bugs.

I was delayed with the EMF project builds. I was waiting for UML2 RC1 but decided I could not wait any longer. EMF Transaction was the last one, after EMF Query and EMF Validation, on Tuesday at 3:11 EST. (The UML2 team did confirm they were not going to produce an RC1.)

Since Tuesday was supposed to be the +2 day, GMF Runtime was going to be a little late as well. It was actually built at midnight on Wednesday, the +3 day.

I decided not to wait for M2M QVTOML RC1 and pushed the last GMF Tooling build. So GMF Tooling got done Wednesday morning at 9:09 AM, in time for the RC1 Helios packages.

Apologies to anyone out there waiting for any of the projects I am responsible for.

]]>
SWTBot and Eclipse 3.4 (Ganymede)
SWTBot has supported Eclipse Ganymede for over two years, since before the project moved to eclipse.org.

Ganymede is now almost 2 years old and the last bug fix release was in Feb 2009.

There was a Galileo release of Eclipse in the summer of 2009, and there is a new release, Helios, coming up on the horizon.

Given this situation, it is very difficult to continue to provide a lightweight testing tool that works across 3 different versions of Eclipse on 4 different platforms (Linux, Windows, and Mac OS X carbon/cocoa) while backporting APIs that only work on newer Eclipse versions.

In light of this, I’m considering dropping support for Eclipse 3.4 in future releases. I’m happy to assist anyone wanting to contribute efforts towards maintaining a release of SWTBot for Ganymede.

Version 2.0.0.568 of SWTBot, built last night, will be the last that supports Ganymede.

]]>
Implementing Date Support with Quickfix using Xtext
Intro

Now that Xtext is at 1.0 RC1, I thought it was time to start using more of the new features for Eclipse b3. One of the features I wanted to add was support for time stamps in a nice way in the editor. Internally, a time stamp is naturally stored as a java.util.Date, so there is never a question about the exact UTC instant it represents. When editing, however, you may want to use some other format (if not copying an actual timestamp, you may want to write something like 'feb 10, 11:00:00am').

The issue is that the reference to 'feb 10, 11:00:00am' in the source text has no time zone information, and the name of the month may not be in English, etc. For the source to be valid everywhere, it would be necessary to fully specify the date format used, as well as the time zone, and store this in the source. I chose a middle ground where the editor understands the more human-friendly formats and offers to convert them to a format that is always possible to parse.
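To make concrete why the fully qualified form is always parseable, here is a minimal standalone sketch (the class and method names are mine, not from b3): a timestamp with an explicit offset can be parsed anywhere and then normalized to UTC.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Standalone illustration of the canonical timestamp format discussed here.
public class TimestampDemo {
    static final String PATTERN = "yyyyMMddHHmmssZ";

    // Parse a fully qualified timestamp (any offset), re-emit normalized to UTC.
    static String canonicalize(String input) throws ParseException {
        SimpleDateFormat fmt = new SimpleDateFormat(PATTERN);
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        Date d = fmt.parse(input);
        return fmt.format(d);
    }

    public static void main(String[] args) throws ParseException {
        // 11:00 at offset +0100 normalizes to 10:00 UTC
        System.out.println(canonicalize("20100210110000+0100")); // prints 20100210100000+0000
    }
}
```

Because the pattern is purely numeric, the result is independent of the user's locale, which is exactly the property the source format needs.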

All of this may not be all that interesting in itself, but it gave me an opportunity to try some features of Xtext that I had not used before. The rest of this blog is about my first iteration of the implementation, and it shows some Xtext techniques such as:

Using an ecore data type in the grammar

A Date value converter

Overriding the SyntaxErrorMessageProvider

Providing a quick fix for a ValueConverterException

The Grammar

This simply declares that a language element 'Entity' has a 'timestamp'. The TIMESTAMP rule declares that it returns an ecore:EDate. Luckily, we don't have to state more than the import of ecore to make use of it in our language. Also in our favour is that EDate is declared in ecore. If this were a datatype not in ecore, we would need to create a model containing the definition of the data type. As that was not the case here, we can move on to the value converter.
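For reference, the grammar described above might look something like this sketch, reconstructed from the description (rule and keyword names are my guesses, not necessarily the actual b3 grammar):

```xtext
grammar org.example.TimestampDsl with org.eclipse.xtext.common.Terminals

import "http://www.eclipse.org/emf/2002/Ecore" as ecore

Entity:
    'entity' name=ID '{'
        'timestamp' timestamp=TIMESTAMP
    '}';

// Returns an ecore EDate; the conversion from the quoted source string
// is handled by a value converter registered for this rule.
TIMESTAMP returns ecore::EDate:
    STRING;
```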

Date Value Converter

This is almost boilerplate code, but there are some interesting details. Here is the converter method.

// (This lives in the language's value converter service class. Imports needed:
// java.text.DateFormat, java.text.ParseException, java.text.SimpleDateFormat,
// java.util.Date, java.util.TimeZone, and the Xtext conversion classes.)
@ValueConverter(rule = "TIMESTAMP")
public IValueConverter<java.util.Date> TimestampValue() {
    return new AbstractNullSafeConverter<Date>() {

        @Override
        protected String internalToString(Date value) {
            SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMddHHmmssZ");
            fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
            return '"' + fmt.format(value) + '"';
        }

        @Override
        protected Date internalToValue(String string, AbstractNode node) throws ValueConverterException {
            string = string.substring(1, string.length() - 1);

            // First choice: if it is a timestamp string, use it.
            try {
                // Allow non-UTC strings since they are fully qualified with an
                // offset and can thus be parsed by anyone.
                SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMddHHmmssZ");
                fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
                return fmt.parse(string);
            } catch (ParseException e) {
                // ignore and try the locale's default format
            }
            // Second choice: the Java default format for the locale.
            // Needs special processing as it probably does not contain a TZ in the string.
            try {
                // try the default locale style of date/time and see if it parses
                DateFormat.getDateTimeInstance().parse(string);
                // If this parsed, it is not likely that the default is the full
                // format with timezone offset, so flag this as a special error :)
                // that is fixable.
                // Although simple, it makes sense from a user perspective; a time in
                // local format can be entered and transformed to a timestamp.
                throw new ValueConverterException("Not in timestamp format", node, new NonUTCTimestampException());
            } catch (ParseException e) {
                DateFormat fmt = DateFormat.getDateTimeInstance();
                String defaultFormat = (fmt instanceof SimpleDateFormat)
                        ? ((SimpleDateFormat) fmt).toLocalizedPattern()
                        : "Default format for the locale";
                throw new ValueConverterException("Not in valid format: Use 'yyyyMMddHHmmssZ' or " + defaultFormat
                        + ". Parse error: " + e.getMessage(), node, null);
            }
        }
    };
}

The code first tries to convert the string entered by the user using the wanted timestamp format. If this fails, an attempt is made to use the default format. If that works, we know we have source text that (most likely) does not have the correct time zone information in it, and we want to offer a quick fix to convert the format. But how can that be done? The ValueConverterException does not allow us to specify a 'diagnostic code' that would let a quick fix detect the particular problem. The ValueConverterException is also final (in the 1.0 RC1 release at least), so the only option is to use a marker exception as the cause (in this case NonUTCTimestampException).

The final attempt to convert (again using the preferred timestamp format) is there simply to catch the error (it could have been remembered from the first attempt).

As you will see later, the design can be improved further by supplying the actual format that was used to successfully parse the entered timestamp in the marker exception, but I left that for a later iteration.

Note that the error message includes the two valid formats as feedback to the user in case the entered text was unparsable. It would be easy to try several formats.

Overriding the Syntax Error Message Provider

The default SyntaxErrorMessageProvider is a class that hands out SyntaxError instances describing a problem occurring in a particular context. In my case I just wanted to add handling of a ValueConverterException with my special non-UTC cause exception.

As you can see, this is straightforward: simply return a SyntaxErrorMessage with a diagnostic code (a static string) that I called IBeeLangDiagnostic.ISSUE_TIMESTAMP__NON_UTC. At this point, none of the new code (except the data value conversion) is in effect, and a bit of magic is needed to make it kick in.
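The overridden provider (whose listing did not survive into this text) might look roughly like the sketch below. The API names are recalled from Xtext 1.0 and should be checked against ISyntaxErrorMessageProvider in your Xtext version; the provider class name is mine.

```java
// Sketch only: verify class and context names against your Xtext version.
public class BeeLangSyntaxErrorMessageProvider extends SyntaxErrorMessageProvider {

    @Override
    public SyntaxErrorMessage getSyntaxErrorMessage(IValueConverterErrorContext context) {
        // Detect the marker cause set by the value converter and attach the
        // diagnostic code that the quick fix will later match on.
        if (context.getValueConverterException().getCause() instanceof NonUTCTimestampException) {
            return new SyntaxErrorMessage(context.getDefaultMessage(),
                    IBeeLangDiagnostic.ISSUE_TIMESTAMP__NON_UTC);
        }
        return super.getSyntaxErrorMessage(context);
    }
}
```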

Xtext makes good use of Google Guice dependency injection. In addition to standard Guice, there is also an advanced mechanism called 'polymorphic dispatching'. This means that even if the Guice module Xtext generates for a DSL does not make it apparent that something can be bound to a specialized class, it is still easy to bind almost anything by simply adding a method.
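Concretely, a custom ISyntaxErrorMessageProvider implementation (here a hypothetical BeeLangSyntaxErrorMessageProvider; the module class name is also illustrative) is bound by adding one method to the generated runtime module, for example:

```java
public class BeeLangRuntimeModule extends AbstractBeeLangRuntimeModule {

    // Guice binding by naming convention: Xtext's module base class picks up
    // methods named bind<Type>() automatically, so no explicit configure() code
    // is needed for this binding.
    public Class<? extends ISyntaxErrorMessageProvider> bindISyntaxErrorMessageProvider() {
        return BeeLangSyntaxErrorMessageProvider.class;
    }
}
```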

This is pretty much boilerplate code for a quick fix (when generating a DSL with Xtext, there is a sample that shows how it is done). The code above simply converts the source string using the default format in the value converter, turning it into a timestamp in the correct format. It then replaces the string in the input text.
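Since the quick fix listing itself did not survive into this text, here is a sketch of what it could look like against the Xtext 1.0 quickfix API (method and label names are mine; verify the signatures against DefaultQuickfixProvider in your Xtext version):

```java
// Sketch only: converts a locale-format date string into the canonical
// UTC timestamp format and replaces it in the document.
public class BeeLangQuickfixProvider extends DefaultQuickfixProvider {

    @Fix(IBeeLangDiagnostic.ISSUE_TIMESTAMP__NON_UTC)
    public void fixNonUTCTimestamp(final Issue issue, IssueResolutionAcceptor acceptor) {
        acceptor.accept(issue, "Convert to timestamp format",
                "Converts the date to the 'yyyyMMddHHmmssZ' UTC format", null,
                new IModification() {
                    public void apply(IModificationContext context) throws Exception {
                        IXtextDocument doc = context.getXtextDocument();
                        String text = doc.get(issue.getOffset(), issue.getLength());
                        // strip the surrounding quotes, parse with the locale default
                        Date date = DateFormat.getDateTimeInstance()
                                .parse(text.substring(1, text.length() - 1));
                        // re-emit as the canonical, always-parseable form
                        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMddHHmmssZ");
                        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
                        doc.replace(issue.getOffset(), issue.getLength(),
                                '"' + fmt.format(date) + '"');
                    }
                });
    }
}
```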

An improvement would be to pass the date format used in the 'Issue' (it is possible to pass data with a diagnostic code), but I have not yet looked into how to do this with the SyntaxError class.

A big thank you to Sebastian Zarnekow at Itemis for pointing me in the right direction.

]]>
Eclipse Banking Day in Copenhagen - just around the corner
Eclipse Banking Day in Copenhagen. The event takes place at IBM's offices in Lyngby, north of Copenhagen, on June 1. As of this moment there are 80 registered participants, so we have room for just a few more.

For those that don't know yet, here is the official description of the event:

Eclipse Banking Day is a day-long event for senior technical developers, architects and managers in the finance industry to learn how to better leverage Eclipse technology and the Eclipse community as part of their development strategy. The event will focus on three themes:

Eclipse as a platform for application development;

Leveraging Eclipse modeling technology for data exchange; and

Collaborating with the open source community.

Attendees will have the chance to hear speakers from leading financial institutions and experts from the Eclipse community. This event builds on the success of Eclipse Banking Days in London, New York and Frankfurt.

Attendees to the event must be employees or contractors of a financial institution. There is no cost to attend but pre-registration is required.

We have found many different and varied speakers - some with a banking background and some with a technical background. Some of them are:

Jochen Krause, Eclipse Source, talking about RAP and how an RCP application can be presented in a web browser

Oliver Wolf, SOPERA, talking about SOA technologies in Eclipse

Patrik Tennberg, Nordea, talking about their RCP application in the bank

And many more

The full program is found here. There will also be ample opportunity to talk with all these people during the day.

We have room for 100 people from the financial world in the Nordic countries (Denmark, Sweden, Norway and Finland) who help determine the development strategy of their firms. Typically that means people from banks, insurance companies, pension funds and mortgage companies, with titles such as CTO, development manager, system architect and the like.

Eclipse Banking Day is sponsored by BSI AG, EclipseSource, the Eclipse Foundation, IBM, Instantiations, Purple Scout, ReportSoft, SOPERA and the RCP Company. The support provided by these organizations has made it possible to offer this event free of charge to participants.

]]>
On Model-Based Modeling Builds...
This has become essential for the Modeling project, which now has roughly sixty active sub-projects, many of which are one- or two-committer efforts with no way to justify a full-time release engineer. With that in mind, the Modeling PMC has recently decided to standardize on one build engine - Buckminster (often affectionately referred to as "Bucky") - for all of its projects, starting with the Helios release.

Why standardize? The obvious reason is to spread the joy of supporting build infrastructure across multiple projects. Less obvious, but no less important, is our not-so-distant goal of having a single build chain that can support true continuous integration for the entire Modeling stack, which should be much simpler if all of the builds are using the same technology.

Why Buckminster? The people and technology were familiar, so that was obviously a factor. But we tried to make as objective a decision as possible. Key considerations were the following:

CDO and Teneo, having independently Buckminsterized last year, were enthusiastic supporters and made a strong case for the benefits.

Unlike the alternatives, Buckminster is model-driven (it uses EMF). This makes it a no-brainer for us modeling zealots.

We wanted to be able to reuse existing metadata, which is an advantage that Bucky has over Maven alternatives.

Having a build that runs the same way in a developer workspace as on the server makes it much more efficient to spread build responsibilities across the teams.

Adopting Buckminster gets us a step closer to using b3 (Buckminster will soon be supported as a build execution engine for b3), which we think is the future.

Last, but not least, someone (i.e., Cloudsmith) stepped up to do the work!

Upon closer inspection, Buckminster had a few holes that needed filling. Support for automated build identifier generation/insertion, CVS tagging, and dependency version range management were non-negotiable for build slackers like Ed Merks (not to mention the rest of us mere mortals), and automated build promotion via Hudson was also highly desirable. So we rushed these changes through in time for Helios.

The effort of migrating from various older build systems (PDE Build, Athena, and variants) was not inconsequential. However, it ended up being relatively painless. One reason I can say this is because Michal Ruzicka (Buckminster committer and my colleague at Cloudsmith) did pretty much all the work. Michal was able to Buckminsterize most of the key Modeling projects in roughly a month of effort, which was pretty amazing, all things considered. Thanks again, Michal!

The first Buckminster build of EMF went live with M7 two weeks ago and the many other Modeling projects will soon follow. We'll be cutting a few key projects over as Helios heads toward completion. A number of others have chosen to postpone switching until just after the Helios release.

We'll send out periodic updates as the individual projects adopt the new build system over the coming weeks, so stay tuned for more details. In the meantime, if you want to hear more about what we're doing (and how), let us know!

]]>
Mobile phone is the new browser
The Internet, and especially the web, has changed our world. It has become a major part of how we shop, have conversations, build relationships, learn, teach, and so on. The web browser has been the principal tool for most of our interaction with the Internet. Viola was the first web browser I ever used. Despite the many advances in web technology and the many browsers and browser versions that accompanied them, the main capability of the web browser has stayed constant.

The Internet, on the other hand, did not remain constant; it continued to grow as part of our work and leisure life. When we moved into the cloud computing era, a change in how we think about computing accompanied it. Our interaction with the Internet evolved to be two-way: creating content and taking part in social networks became the normal interaction, and using cloud services for all sorts of computing needs started to become the primary choice.

The new Internet experience and cloud services require a new browser to reach their full potential. A browser that is not made only for consuming content but also for creating it. One that can be an almost natural part of our daily life. Our current browsers provide only a limited, mostly textual way to participate on the web; that is a better-than-nothing interim solution, and they fail completely at becoming a natural part of our life. I believe the new browser is the mobile phone, and I do not mean just the mobile browser that comes with your phone.

I think it is easier to understand why we need a new tool for easier content creation. Active contribution has been central to Web 2.0, the concept that has been shaping the web for the last decade. We are at a point where we expect to be able to contribute to the web applications we use. Most mobile phones already include great content creation tools on board. Camera, video and GPS capabilities already provide opportunities for content creation. Web sites like CNN's iReport benefit from these capabilities. A quick visit to Flickr's camera finder reveals that at least one of the top five cameras in the Flickr community is a cameraphone. GPS is another built-in mobile phone feature that is having an impact on content creation. Panoramio is a good example of how geotagging content, in this case photos, innovates on what can be considered legacy content. An excellent example of how mobile phones can serve our needs for content creation in new and innovative ways is the Ocarina app for iPhone, an application that allows you to create music using the sensors of the phone and then share your creation. Applications like Ocarina are a precursor of how our new browser can innovate on our latest addiction.

Of course, a mobile phone, because it is mobile and with us all the time, is already part of our life. However, its communication capabilities are what make the mobile phone eligible to be the browser of our life. Broadband 3G and WLAN are crucial for communicating with cloud services. Besides its various built-in sensors, further communication technologies such as Bluetooth and NFC allow the mobile phone to act as a gateway for all kinds of remote sensors. What do I really mean by browsing your life, and how does it relate to sensors? Let me try to explain with some examples. An already widely used example of such applications is the Nike+ products, where data collected by sensors during a sports activity is uploaded to a service using a mobile device, in this case an iPod. Another similar service that I enjoy using is Sports Tracker, where data for outdoor sports is collected through GPS and optionally a heart rate monitor by a mobile phone and uploaded to a cloud service. This technology can easily stretch beyond sports. A product already exists that consists of a wearable monitor collecting medical information such as heart rate, respiration and body fluid status. That product uses its own separate transmitter to transfer the collected data to a web service for further processing. I think that in the future this transmitter will be replaced by mobile phone software, which in turn will make the service more affordable and common. I believe what we see today is just the beginning of the kind of services that will be built around the life-browsing capabilities of mobile phones.

I hope this gives another perspective on why the traditional consumer electronics companies are less relevant to the mobile phone market. Mobile phones are not about consumer electronics anymore; they are about the next, and possibly final, round of the browser wars.

]]>
Webmaster kudos
I'd like to take the opportunity to say thanks to the webmasters for some work they did this weekend that substantially improved CVS performance. From Denis:

"We moved the download.eclipse.org and archive.eclipse.org mounts to the other NFS server (the one that serves pserver and other less important stuff), and that seems to have made an enormous difference".

Our build now takes 40-50 minutes less because of this change. This makes our team much more productive. And a faster build makes me a happy release engineer.

Thank you Matt and Denis for all your hard work improving the eclipse.org infrastructure.

]]>
Staging changes, one by one…
It’s been exactly one month since I bickered about the EGit plug-in and I must say that it’s really come a long way. I would like to thank the developers for their hard work, their employers for letting them work on open source *cough, cough*, and for reviewing and committing my patches as I push them to Gerrit for review.

Regarding my last blog post on this matter, the synchronization feature that I hacked up has now been handed over to Dariusz Luksza, the (un?)lucky student who will be working on JGit/EGit thanks to the Google Summer of Code programme. Naturally, this is good news for me because it means I can spend more time on Demon’s Souls (holy mackerel, this game is hard, I’m pretty sure I died ten times in three hours) and StarCraft II. But, of course, programming is my true calling (or is it IRC?) so I’ve actually been cooking up a side dish during the evenings and the weekends.

Obviously, I’ve already pushed the change to Gerrit for review by the EGit developers. If anyone’s feeling adventurous and decides to fetch the change and test it, please feel free to leave a comment on bug 313263. As I said before, I don’t know anything about Git, so what makes sense to me probably doesn’t make any sense to you. ;)

Please don't even try to start your first Eclipse Labs project with Subversion. Subversion's time is over; it is now "legacy" and clearly behind Git and Mercurial. Please give Mercurial (and MercurialEclipse) a try! It is worth the effort, especially if you use MercurialEclipse as your Mercurial GUI frontend.

]]>
"Add new expression" inline in Expressions View
The debugger's Expressions view has a new feature: the ability to add a new expression without opening a dialog. When the user clicks the "Add new expression" entry, a cell editor is activated for entering the new expression.

I hope most people will appreciate this little convenience, but since introducing it a few months ago I have received one complaint and a request to make it optional. What do you think? Is it worth adding yet another preference to try to make everyone happy?

]]>
Introducing Eclipse Labs
Back in December, I discussed a number of initiatives that the Eclipse Foundation was going to be working on in 2010. The one that attracted the most feedback was “Eclipse Labs”. Well, we are very happy to announce that thanks to Google, this idea has become a reality. Better yet, Google has already released a cool new project “Workspace Mechanic” on Eclipse Labs.

The Eclipse community has a large and vibrant ecosystem of commercial and open source add-ons to the Eclipse platform. In the open source world, there are two options if you want to start an Eclipse oriented project: 1) propose a project with the Eclipse Foundation or 2) start a project on one of the existing forges, e.g. Google Code, SourceForge, Codehaus, etc. For some projects, the IP due diligence and development process expected of Eclipse projects is not warranted. However, creating an Eclipse project on a forge makes it difficult to gain visibility in the Eclipse community. Can we find a third option that allows projects to start and prosper without the process of the Foundation, while still gaining some of the visibility Eclipse projects often get by being at the Foundation?

Last year, we started a discussion with the people running the Project Hosting on Google Code service to see if they would be interested in creating an Eclipse area on Google Code. They had already been thinking along the same lines and were very receptive to the idea. Therefore, I am excited to announce the availability of Eclipse Labs, a third option for Eclipse oriented open source projects.

What is Eclipse Labs?
If you have ever created a project on Google Code you will quickly recognize Eclipse Labs. Eclipse Labs allows you to very quickly create an open source project with access to an issue tracking system, source code repository (Subversion or Mercurial) and a project web site. The default license is EPL but you can change it to the other licenses available on Google Code. Anyone can create a project on Eclipse Labs at any time. (Assuming you agree to the Google Code terms of use and the Eclipse Labs guidelines.) Eclipse Labs projects are encouraged to use the org.eclipselabs namespace, but are not required to do so.

Eclipse Labs project owners will also be encouraged to create tags/labels that describe their projects. We have pre-populated a set of Eclipse-specific labels that will be displayed on the Eclipse Labs search page. Eclipse Labs will also have an API that allows people to search on these labels. My hope is that Eclipse projects will begin to highlight on their own web sites the Eclipse Labs projects that are relevant to them. For example, Eclipse BIRT could list all the BIRT add-ons created on Eclipse Labs. We also want to populate Eclipse Marketplace with the projects from Eclipse Labs. The API is not yet available but it should be in the next couple of weeks. I think this will present a lot of opportunity for cross-pollination among Eclipse Labs projects.

What is Eclipse Labs Not?
Remember, this is a third option. Projects hosted on Eclipse Labs are not official Eclipse projects. Therefore, they can’t be called Eclipse projects, use the org.eclipse namespace or be included in the Release Train or Packages. If an Eclipse project wants to include an Eclipse Labs project they will need to go through the normal IP process. If a project wants any of these benefits they must become an Eclipse Foundation project. The details have been specified in the Eclipse Labs Guidelines.

Moving Forward
Eclipse Labs is open for business now. It is still in a beta form, so please provide your feedback.

Our hope is that Eclipse Labs quickly grows to a larger number of projects than are already hosted at the Eclipse Foundation. We need to make it as easy as possible for someone to open source their awesome Eclipse based technology. Not all projects need to be hosted at the Eclipse Foundation and in fact I am hoping more projects will start at Eclipse Labs and then, if they choose, graduate to the Eclipse Foundation.

Big Thanks to Google
The people at Google have been great during this process. Google has once again shown their commitment and support for the open source community. Obviously without this support Eclipse Labs would not have been possible.

Thanks also goes to Ian Skerrett for driving this from our side!

]]>
Beauty of the Maven POM editor

People still like the Maven POM editor I designed a few years ago for Maven Integration for Eclipse. The editor allows simple XML editing with a number of code completions and template support, as well as a structured, form-based view of the entire Maven POM model. It also includes several tools, such as the Dependency Hierarchy and Dependency Graph views for the current project.

Not many people know that the POM editor can be used with pom.xml files outside of the Eclipse workspace, including files opened from the CVS or SVN Repositories views, the History view or the Maven Repositories view. So you can see a form-based representation of a project's dependencies, as well as explore the dependency hierarchy of projects, without importing them into the Eclipse workspace.

Unfortunately, there have been several regressions since Sonatype took over the project. For example, you can't see a form-based representation of an effective POM, and the editor pages have been shuffled into an odd order, but most of the features are still there.

I believe that the POM editor plays a key role in Maven Integration for Eclipse, and it opens up a huge number of possibilities to help developers with various common tasks, from analyzing project dependencies (from the artifact level down to the class level) to collaboration within a project team. That is why I created extension points to allow 3rd-party integrations. For example, you can add a custom POM editor page/tab using the org.maven.ide.eclipse.editor.pageFactories extension point. Custom menus can also be added in various places using Eclipse's standard object contribution mechanism. So it is now up to you to extend it.]]>
Eclipse IAM WTP support, now EARs too
I recently had some time to spend in Eclipse IAM, working on improving the WTP support.

Version 0.11.0 already had good support for WAR projects, including war overlays (which was a bit tricky to implement in Eclipse). Now the last builds of the coming 0.12.0 version have EAR support.

You can import your Maven EAR projects and Eclipse will recognize the Maven-generated application.xml and automatically configure the dependencies to the other WAR projects open in the workspace, with no extra configuration from you. And from the usual WTP "Run in Server" wizard you can run the EAR project and all associated WAR files in your favorite application server.

When I am working with Eclipse, I quite often want to copy a file’s path or open its location in a Command Prompt to run some scripts. This is always cumbersome: I have to go to the properties and copy the path, go to the command prompt, cd to the location…

StartExplorer is a nice plugin that does all of this and more. The screenshot should say a lot… options like "show resource in explorer" and "start cmd.exe here", topped with keyboard shortcuts. It saves me a lot of time every day. You can create custom commands too.

]]>
What I have been working on for the last 2+ years
Many of you will have seen me around, often speaking at EclipseCon (since ‘07). But this year, I couldn’t make it - we have been working on releasing something cool. See the video below:

Most UML tools focus on generating code from diagrams; in addition, we want useful diagrams made easily from code. Where other tools require months of work to get something useful, we want to get you useful results in minutes, if not seconds. Some tools require reading lots of documentation; we wanted a tool that you can get up to speed with in 5 minutes.

What do you think of it? I would like to hear your thoughts. Post comments here or on the Architexa blog.

]]>
Virgo kernel checked in and ready for use

The Virgo kernel was checked in to Eclipse git earlier today and is ready for you to take it for a spin.

No, this isn't the Virgo kernel - it's a 5 MB hard disk from 1956 which weighed over a ton. The Virgo kernel zip file would have needed a couple of tons of hard disk for storage. Thank goodness times have moved on.

]]>
On Google I/O...

I'll be at Google I/O on May 19 and 20, talking up the work we've been doing with EMF on GWT and just generally learning more about all the great Google technologies we depend on.

Regarding EMF support for the Google Web Toolkit, we hope to have a working implementation of full modeling support for GWT applications before too long. Ed has been hard at work on this, and we'll soon have optimized object serialization between client and server, and a generic GWT editor for EMF-based models. We think this work will be really useful for GWT development once it's done.

With respect to other Google technologies, Cloudsmith is particularly interested in App Engine and BigTable; we're using them now but still coming up the learning curve. Next after that is Wave, which we'd like to use but doesn't seem quite ready for prime time. We're hoping/expecting to see a renewed Wave commitment and inspirational roadmap from Google at I/O next week.

Planning on being there? Let me know if you'd like to meet up!

]]>
The b3 aggregator
The Eclipse b3 Aggregator is based on, and part of, the Eclipse b3 project. Eclipse b3 provides a versatile and adaptable framework supporting build, assembly and deployment processes, and it supports a rich set of use cases. One of those - the aggregation of repositories - is the focus of the b3 Aggregator tool.

The Eclipse b3 Aggregator combines repositories from various sources into a new aggregated p2 repository. It can also be configured to produce a hybrid p2/Maven2 repository. There are many situations where using aggregated repositories is a good solution; here are some examples:

Projects want to provide convenient access to their products - Installation instructions requiring the user to visit several repos for a complete install are not uncommon. An aggregated repo for all those locations provides a convenient one-stop-shop strategy. The aggregation can perform mirroring of all consumed p2 repos or selectively provide indirection via a composite repo.

Organizations or teams want control over internally used components - It may be necessary to have gated access to relevant/"blessed" p2 repos where an organizational "healthcheck" has been performed prior to internal distribution. Furthermore, internally used aggregated repos can provide a common basis for all organizational users (i.e. for both IDE distribution as well as for content used when building internal applications).

Increase repository availability - by aggregating and mirroring what is used from multiple update sites into internally controlled servers.

Distributed Development Support - an overall product repository is produced by aggregating contributions from multiple teams.

Owners of a p2 repo for a given project may not be in a position to host all required or recommended components due to licensing issues - Buckminster's SVN support can serve as an example here, as it requires components available in the main Eclipse p2 repo as well as third-party components. Hence users have to visit several repos for a complete install.

The b3 Aggregator is focused on supporting these specific requirements, and it plays an important role in the full scope of the b3 project. The Aggregator is, however, also used in scenarios outside of the traditional "build domain", and this has been reflected in the user interface, which does not delve into the details of "building" and should therefore be easy to use by non-build experts.

Functional Overview

The b3 Aggregator performs aggregation and validation of repositories. The input to the aggregator engine (telling it what to do) is a b3aggr EMF model. Such a model is most conveniently created using the b3 Aggregator editor. This editor provides both editing and interactive execution of aggregation commands. The editor is based on a standard EMF "tree and properties view" style editor, where nodes are added and removed to form a tree, and the details of nodes are edited in a separate properties view. Once a b3aggr model has been created, it is possible to use the command-line/headless aggregator to perform aggregation (and other related commands). (Note that since the b3aggr is "just an EMF model", it can be produced via EMF APIs, transformation tools, etc., and thus supports advanced use cases.)

The model mainly consists of Contributions (specifications of what to include from different repositories) and Validation Repositories (repositories that are used during validation but are not included in the produced aggregation, i.e. they are not copied). The model also contains specifications of various processing rules (exclusions, transformation of names, etc.) and of Contacts: the individuals or mailing lists to inform when processing fails.

Here are some of the important features supported by the b3 Aggregator in Eclipse 3.6M7:

p2 and maven2 support — the aggregator can aggregate from and to both p2 and maven2 repositories.

Maven2 name mapping support — names in the p2 domain are automatically mapped to maven2 names using built-in rules. Custom rules are also supported.

Mirroring — artifacts from repositories are mirrored/downloaded/copied to a single location.

Selective mirroring — an aggregation can consist of a mix of references to repositories and mirrored repositories.

Cherry picking — it is possible to pick individual items when the entire content of a repository is not wanted. Detailed picking is supported as well as picking transitive closures like a product, or a category to get everything it contains/requires.

Pruning — it is possible to specify mirroring based on version ranges. This can be used to reduce the size of the produced result when historical versions are not needed in the aggregated result.

Categorization — categorization of installable units is important to the consumers of the aggregated repository. Categories are often chosen by repository publishers in a fashion that makes sense when looking at a particular repository in isolation, but when they are combined with others it can be very difficult for the user to understand what they relate to. An important task for the constructor of an aggregation is to organize the aggregated material in an easily consumable fashion. The b3 aggregator has support for category prefixing, category renaming, addition of custom categories, as well as adding and removing features in categories.

Validation — the b3 aggregator validates the aggregated result to ensure that everything in the repository is installable.

Blame Email — when issues are found during validation, the aggregator supports sending emails describing the issue. This is very useful when aggregating the results of many different projects. Advanced features include specifying contacts for parts of the aggregation, which is useful in large multi-layer project structures where issues may relate to the combination of a group of projects rather than one individual project - someone responsible for the aggregation itself should be informed about these cross-project issues. The aggregator supports detailed control over email generation, including handling of mock emails when testing aggregation scripts.

Documentation

The b3 aggregator documentation is available here on the Eclipse Wiki.

]]>
Patently RidiculousPatently RidiculousWhat an exciting and innovative idea, you would exclaim to yourself, and to those around you, as you jumped for joy, reveling in your own brilliance.

Sorry to disappoint you, but don't bother. IBM has patented that: 7506303. The lesson learned? Just because something is simple and obvious doesn't mean you can't patent it. So run, don't walk, to your nearest patent lawyer, turn your obvious ideas into incomprehensible legal babel, file a claim, and then sue someone's assets right off their balance sheet, perhaps with the help of a patent troll. Surely such patented ridiculousness serves primarily to suck the lifeblood of the software sector, much like collateralized debt obligations did the vital stuff of the financial sector.

]]>
Internet of SubjectsInternet of SubjectsWell, here’s a new organization, iosf.org, that I should have known about. I hope I can get to their event. It’s on July 5th and I have to be in Paris on the 6th for an Information Card workshop with FC2. Hmmm…should be possible.]]>
Oisín’s Precepts Version 1.0.qualifierOisín’s Precepts Version 1.0.qualifier
I’m making a set of precepts that I’m going to try and stick to from a professional engagements perspective. These have been very much influenced by Uncle Bob – especially his keynote for EclipseCon 2010, which provided the inspiration to put these together in this form – and of course by the many mistakes I’ve made in the past, which is what we call experience. So, in no particular order:

Don’t be in so deep you can’t see reality.

If you haven’t communicated with a user of your software in over a month, you could have departed the Earth for Epsilon Eridani and you wouldn’t know.

Seek to destroy hope, the project-killer.

When you hear yourself saying well, I hope we’ll be done by the end of the week, then you are officially on the way to that state known as doomed. If you are invoking hope, trouble is not far away. So, endeavour to destroy hope at every turn. You do this with data. Know where you are – use an agile style of process to collect data points. Iterate in fine swerves that give you early notice of rocks in the development stream.

When the meeting is boring, leave.

Be constructive about it, however. You should know what you want to get out of the meeting. If it’s moving away from what you are expecting, contribute to getting it back on track. It won’t always go totally your way. If you can’t retrieve it, then make your excuse and leave.

Don’t accept dumb restrictions on your development process.

Pick your own example here. Note that restrictions can also take the form of a big shouty man roaring "the effing developers don't have effing time to write effing tests!" (true story, that).

There must be a Plan and you must Believe It Will Work.

This is pretty simple on the face of it. One theory on human motivation includes three demands – autonomy, mastery, purpose – that all need to be satisfied to a certain degree before one is effectively motivated (see Dan Pink’s TED lecture). If there is no plan, or the plan stinks like a week-old haddock, then the purpose element of your motivation is going to be missing. Would you like to earn lots of money, work with fantastic technologies and yet have your work burnt in front of your eyes at the end of the month? I wouldn’t.

Discussions can involve shouting. That’s ok, but only now and then.

Without extensive practice, humans find it difficult to separate their emotions from discussions, especially when there is something potentially big at stake. Just look at the level of fear-mongering that politicians come out with to influence voters. There will be some shouting – expect it – but it’s not right if shouting is a regular occurrence.

Refuse to commit to miracles.

How many times have I done this already over the last eighteen years? Ugh.

Do not harm the software, or allow it to come to harm through inaction.

No making a mess – your Mom taught you that. Stick with your disciplines. Don’t let anyone else beat up on the software either. It’s your software too, and hence your problem if it is abused.

Neither perpetrate intellectual violence, nor allow it to be perpetrated upon you.

Intellectual violence is a project management antipattern, whereby someone who understands a theory, or a buzzword, or a technology uses this knowledge to intimidate others who do not know it. Basically, it’s used to shut people up during a meeting, preying on their reluctance to show ignorance in a particular area (this reluctance can be very strong in techie folks). Check out number nineteen in Things to Say When You’re Losing a Technical Argument. Stand up to this kind of treatment. Ask the perpetrator to explain his concern to everyone in the room.

Learn how you learn.

I know that if I am learning new technologies, I can do it best provided I have time to sleep and time to exercise. I also know that my learning graph is a little like a step function, with exponential-style curves leading to plateaus. I know that when I am working through problems and my brain suddenly tells me to go and get another coffee, or switch to some other task, or go and chat to someone, it means I am very close to hitting a new understanding plateau. So I have to sit there and not give in. I also know that I need to play with tiny solutions to help me, too.

You have limits on overtime, know them.

This should be easy for you – if you are tired, you are broken. Don’t be broken and work on your code. Go somewhere and rest. Insist on it.

Needless to say, at some point in the future this post will come back to haunt me, I am sure. But I’m hoping that if I produce a little laminated card with these precepts on it and keep it in my wallet, then I’ll at least not lose track by accident.

]]>
IntelliJ IDEA Support for dm ServerIntelliJ IDEA Support for dm Server
IntelliJ have announced support for dm Server. Although I haven't tried using the support, it appears to be at a reasonable level of function and only a little behind the SpringSource Tool Suite (as far as dm Server support is concerned), with plans to fill the gap. IntelliJ haven't mentioned Virgo yet, but I'm hopeful that the Eclipse (RT) branding won't put them off.

This, in my opinion, is a sign of a healthy runtime: multiple vendors competing on tooling. I'd like to see the same on management tooling, which could be implemented relatively easily as a layer on top of dm Server's JMX interface.

]]>
New BPMN2 project leadNew BPMN2 project lead
I was voted new BPMN2 project lead after earning Kenn’s trust.

Here is what we want to do:

Implement the reference metamodel for BPMN2

Create a basic editor with enough graphics that we become mainstream (I mean, we’d like to see the same icons reused all over the world)

Create a diagram editor with BPMN2. That would happen in the SOA BPMN modeler.

If you’re interested, we are looking for contributors:

Your tasks as a committer: reply on bugs, reply on newsgroups, possibly do some marketing.

Implement stuff as part of your commitment to the project, to the extent that your group is interested. I.e., if you have no stake in doing a validation framework, that's ok.

And help with doing the website.

All in all, more fun than paperwork, and a lot of community building. As lead, I’ll handle CQs, release reviews, move reviews, etc.

]]>
Making Sense of ComplexityMaking Sense of Complexity
Making Sense of Complexity.

The ideas are (mostly) presented in reference to complexity in social systems...but as someone interested in (reducing) complexity in software systems, as well as the psychology of complex system design and development...I also found the thoughts interesting from a software architecture and design point of view.

]]>
WTP 3.2 M7 Declared!WTP 3.2 M7 Declared!
WTP 3.2 M7 has been declared! Check out what's New and Noteworthy and download it now, or try it out in a Helios milestone today!

]]>
Epicenter 2010 Dublin – register now!Epicenter 2010 Dublin – register now!
At last, after a belated spring has sprung and the local flora are finally catching up with their deadlines, we have the usual Epicenter Early Bird closing date heaving into view. Well – it’s more a case of it already sitting on your lap, since today is the last Early Bird registration day. Run, don’t walk, to the tickets page.

Epicenter is in its second year and is well on the way to being Ireland’s top software conference. Good news for the local islanders – no jousting with ash clouds or having to urinate into a bottle on a Ryanair flight because you haven’t got change for the toilets! The conference is your typical multi-zone, multi-track affair, with each day focussed on a different technology or industry subject. Check out the website at epicenter.ie, scroll down the page for information on the speakers and talks.

There’s a good selection of speakers – Jeff Genender will be speaking, as will Eugene Ciurana – both well-known Open Source stalwarts. Matt Raible will join them, as will Eclipse Ecosystem buddy Doug Clarke. I’ve heard that the ever-Groovy Guillaume Laforge will be making an appearance too, but I can’t find his name on the website. Maybe I wasn’t meant to write that down. Ooops.

Just in case you are reading this and would like to help out from a sponsorship aspect, there are all sorts of packages that start at an accessible €200. You would think that even Enterprise Ireland and the IDA should be able to find that much down the back of the sofa for one of the biggest software conferences held in the country.

Since Epicenter is in June, I’ve half a mind to see if I can get some people together for an Eclipse Demo Camp maybe before or after the main event. Leave a comment if you would be interested in attending or presenting a demo.

]]>
Change Begets ChangeChange Begets ChangeChange is inevitable so best to embrace it, make it work in your favor. That being said, it's important to choose the path forward carefully, and the company you keep, wisely. It's all too easy to make a wrong turn or to pick up bad habits. Can you say gambling?

My life was on a very steady course up until the time I left IBM. That particular radical change was the beginning of many to follow; it was voluntary and good even in hindsight. But it was carefully planned like the rest of my life. The demise of my partner of 27 years, on the other hand, was untimely, arbitrary, and beyond my control. Such things make one reconsider life's carefully laid plans. It's clear that time is fleeting and that one must make the most of today because there is no guarantee for tomorrow. Workaholic Ed died and the phoenix that rose from his ashes took a really good look around. Guess what? There really is time for me to swim 2km every weekday morning. Go figure!

Looking around a little further, I discovered that I have the greatest neighbors in the world. Okay, granted Warren is a bit of a princess.

So's Linda come to think of it.

But they've helped me more in the last year and a half than one could reasonably expect from another human being. For example, they've looked after my girls for countless weeks whenever I traveled; the girls love it next door. Last week, they even threw a birthday party for Else, the most recent addition to my dog collection.

Most important of all, they've helped make my Frank feel more than a little welcome in his new home in Canada.

Warren and Linda are the epitome of what it means to be good friends and I consider myself fortunate for having them in my life.

Looking around further still at what's happening with modeling at Eclipse and beyond is also eye-opening in the extreme. Talk about change that begets change! I'm more than a little gratified and relieved to see that it's taken on such a vibrant life of its own. I don't need to obsess quite as much about driving the vision of modeling forward. There are so many others who do that job even better. I've learned an important lesson: don't push the river, it flows by itself! As Kim so aptly put it: Eclipse is like family. What a great family and what a great place to be. Thanks Cloudsmith and itemis for helping make it economically viable for me and for all their other great contributions to the Eclipse community.

Speaking of great places to be, it struck me a few months back that I'd much rather live back in British Columbia. I grew up there. My parents, brother, and sister live there. I like the weather better there. I can grow a more interesting garden there. Frank and I can build a new life together from scratch there. I only moved to Ontario for IBM. So I bought a great property with this view.

My house is already sold, I've got a rental house lined up, and I've scheduled my move for the end of May. Of course there are more changes yet to come.

I write this blog today from Berlin, in summer-like weather, as I anticipate traveling to JAX in Mainz next week, where there's an Eclipse day and a Modeling day. Could life get any more interesting and exciting?

Oh yes, and it turned out workaholic Ed didn't really die; he was merely transformed into a more well-rounded version of his former self. I've spent the past several weeks porting the core EMF runtime to GWT and modifying the generator to produce GWT-enabled models and edit support on top of that runtime. It's all committed to CVS in time for M7, but I've not had time for documentation yet. Modeling in the clouds; stay tuned for yet more change.

]]>
Drupalcon San FranciscoDrupalcon San Francisco

See me? I'm there in the back! Look closer! There were over 3000 of us at Drupalcon this year, including a few who managed to sneak past that big volcano! Even those who didn't make it over rallied together and had their own local Drupalcons all over Europe.

DrupalCon was a really great experience. It really reminded me of attending one of our fabulous EclipseCons. Absolutely everyone was super excited for Drupal, Open Source, and even Eclipse! I met a ton of people who were all using Eclipse for their Drupal Development. I even managed to give away a bunch of Eclipse Install stickers!

Here at the Eclipse Foundation we use Drupal for Eclipse Marketplace, Eclipse Live and the EPP Packaging Site. We chose Drupal when we launched Live because of its swelling community. It turns out we bet on the right horse. Drupal 7, when it releases, promises to be the best ever, and from my point of view they'll meet that goal and then some.

Viva Open Source! I highly recommend checking out Drupal founder Dries Buytaert's keynote, State of Drupal, as well as Tim O'Reilly talking about Open Source in the Cloud Era. If you've got time to spare, all the sessions are available to watch on archive.org. I'm sure you'll learn something!

So now that Eclipse has started drinking the Drupal Kool-aid, let's see if we can get them to chug some of our Eclipse Soda!

]]>
Categorize our OrbitCategorize our Orbit
As part of our “Give Orbit some love for Helios” initiative I started adding categories to Orbit repositories. Whenever you include an Orbit p2 repository in your target platform, you’ll now be able to cherry pick only the bundles you really need from a convenient set of categories. No need to download any unnecessary overhead.

Try it out yourself. Here is a link to an Orbit p2 repository with categories:

Frankly, I’m not a creative guy but I had to come up with some categories. Your feedback on the categories is really appreciated.

]]>
Asynchronous Remote Services - The future or the callbackAsynchronous Remote Services - The future or the callback
In previous postings I described how ECF now makes it very easy for OSGi service developers to expose asynchronous/non-blocking remote method calls to clients.

In short, all that's now required is to create an asynchronous version of the service's OSGi service interface. See this documentation for example and source. Just declaring this asynchronous interface is all that's needed. At proxy discovery time, ECF's implementation of OSGi remote services will provide the implementation of this asynchronous interface.

Future or Callback

There are various approaches to asynchronous remote method invocation; two common ones are callbacks and futures. For example, GWT uses callbacks, while Amazon EC2 uses futures for exposing asynchronous access to their APIs (like SNS, SQS, etc.). ECF's asynchronous remote services support both approaches. The asynchronous service interface declaration can, for a given synchronous method, use either a callback, a future, or both.

For example, let's say we have the following synchronous service interface method:

String foo(String bar);

The async declaration for this method using a callback would look like this:

void fooAsync(String bar, IAsyncCallback<String> callback);

The async declaration for this method using a future would look like this:

IFuture fooAsync(String bar);

And that's it. The remote service client can then use either/both of these fooAsync methods (if they are declared, of course), simply by casting the proxy to the async service interface type and calling the appropriate fooAsync method with the necessary params.

In this way, the remote service designer can determine what asynchronous style the client will have available...by declaring fooAsync using callback, future, both, or neither.
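The contrast between the two styles can be sketched with plain JDK concurrency types. This is only an illustrative sketch: the AsyncCallback interface and class below are stand-ins of my own, not ECF's actual IAsyncCallback/IFuture proxy machinery.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;

// Stand-in for ECF's IAsyncCallback, just to show the shape of the two styles.
interface AsyncCallback<T> {
    void onSuccess(T result);
    void onFailure(Throwable t);
}

class AsyncStyles {
    // Daemon threads so the pool never keeps the JVM alive.
    private static final ExecutorService pool =
            Executors.newCachedThreadPool(new ThreadFactory() {
                public Thread newThread(Runnable r) {
                    Thread t = new Thread(r);
                    t.setDaemon(true);
                    return t;
                }
            });

    // The original synchronous method.
    static String foo(String bar) {
        return "echo:" + bar;
    }

    // Future style: returns at once; the caller blocks (or polls)
    // only when it actually needs the value.
    static Future<String> fooAsyncFuture(final String bar) {
        return pool.submit(new Callable<String>() {
            public String call() {
                return foo(bar);
            }
        });
    }

    // Callback style: returns at once; the result is delivered to the
    // caller-supplied handler on a pool thread.
    static void fooAsyncCallback(final String bar, final AsyncCallback<String> cb) {
        pool.submit(new Runnable() {
            public void run() {
                try {
                    cb.onSuccess(foo(bar));
                } catch (Throwable t) {
                    cb.onFailure(t);
                }
            }
        });
    }
}
```

A client using the future style calls fooAsyncFuture("bar") and invokes get() only when it needs the result; the callback style returns immediately and delivers the result asynchronously.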

]]>
Planet Eclipse TestPlanet Eclipse Test
After upgrading to WordPress 2.9.2, my Eclipse-related blogs did not appear on Planet Eclipse anymore. If you see this post on Planet Eclipse, then it is working again.

]]>
OSGi &amp; Servlets: Flexibility by SimplicityOSGi &amp; Servlets: Flexibility by SimplicityStrangely enough, simple things tend to be more flexible than complex things. I bet you too have seen people go to great lengths to ensure a certain solution provides utmost flexibility. Often, this flexibility isn't needed, so you're introducing accidental complexity.

In a recent post, I showed you how to create a plain servlet and register it in an OSGi environment. As both Jeff and Scott pointed out, my using a ServiceTracker to register and unregister the servlet is a little bit clumsy and can be improved by using Declarative Services.

I highly recommend reading chapter 15 in "OSGi and Equinox", but in a nutshell Declarative Services allow you to define components which can provide and consume services. Binding and unbinding references between components and services is performed by the DS runtime (also known as the Service Component Runtime).

Without further ado, here are the changes I had to make to DS-ify my simple servlet:

Remove the activator. Yes, that's true: we don't need an Activator any more. Delete the class and also remove it from META-INF/MANIFEST.MF

Delete HttpServiceTracker. Registering the servlet with the HTTPService will be handled by the DS runtime.

Implement a component to register and unregister the servlet with the HttpService:
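A minimal DS 1.1 component definition along the following lines does the job (the component and class names here are illustrative, not necessarily the ones from my project):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
               name="simpleservlet.component">
   <!-- The component class implements setHttpService/unsetHttpService,
        registering and unregistering the servlet in those methods. -->
   <implementation class="simpleservlet.ServletComponent"/>
   <reference name="HttpService"
              interface="org.osgi.service.http.HttpService"
              cardinality="1..1"
              policy="static"
              bind="setHttpService"
              unbind="unsetHttpService"/>
</scr:component>
```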

I saved this file as OSGI-INF/component.xml and added it to the Service-Component header of META-INF/MANIFEST.MF. In fact, since I used the Create New OSGi Component wizard, the entry was added for me - it's really easy to forget this if you do it manually!

That's it!

Please note that I did not change the servlet implementation at all (apart from issuing a different text to make it easier to tell the servlets apart)!

Before launching, please make sure to add org.eclipse.equinox.ds and org.eclipse.equinox.util to your launch config to enable Declarative Services.

The biggest advantage of this approach is that you do not have to take care of acquiring the HTTPService. The DS runtime will only activate your component when all prerequisites have been met, i.e., all dependencies are available. If the HttpService is not available for any reason, your component will not be started. This makes the code for registering the servlet simpler and cleaner.

]]>
BIRT User Group UK — Don’t miss it!BIRT User Group UK — Don’t miss it!
If you’ll be in the London area this Tuesday evening, April 27th, don’t miss the BIRT User Group UK meeting! Register now to hold your spot before it fills up.

Besides being a great opportunity to network with other BIRT users, the organizers have two terrific speakers lined up for this meeting:

James Governor, the highly respected open source analyst from RedMonk, will be talking about how to leverage Web 2.0 and social media technologies into applications and how open source technologies can help keep up with innovation in these areas.

Virgil Dodson, our senior BIRT Exchange evangelist, will talk about Eclipse BIRT and cover a range of topics like using the BIRT Designer, re-using BIRT assets and giving BIRT some style.

Plus, Virgil will show off the BIRT Mobile Viewer on his new iPad! If you haven’t seen the future of mobile BIRT (or just want a first hand look at an iPad), you can register and/or get more information about the meeting here. I understand there will be giveaways too — sorry… but not for Virgil’s iPad.

]]>
Graphical Modeling at EclipseGraphical Modeling at Eclipse
The Graphical Modeling Framework (GMF) project received EMO approval for its restructure request. For those who were unaware of the restructure, read on.

The GMF restructure is mainly a reaction to the Graphiti project proposal. The original version of the proposal was to start a new modeling project.

The scope, architecture and dependencies of Graphiti align directly with the GMF Runtime. Given this fact, it did not make a lot of sense to create a second community. The Graphiti project belongs in the same umbrella project as GMF.

To this end, we proposed the creation of the new Graphical Modeling Project that includes the existing GMF project and will include Graphiti.

]]>
BPMN modeler mirrored on githubBPMN modeler mirrored on githubRemember last week, when I was telling you that the BPMN modeler had moved to git ?

Ketan and I maintain the eclipse account on github. We opened a support request and here we go:

http://github.com/eclipse/bpmnmodeler

The BPMN modeler is now mirrored on github! If you need another Eclipse project to be mirrored, open a support request on github and ask us to open an empty repository with the name of the project.

Kudos to the github team for their responsiveness and their outstanding service!

]]>
CheckCacheThenDatabase in TopLink GridCheckCacheThenDatabase in TopLink Grid
EclipseLink provides query hints that allow you to query the cache rather than, or before, the database. This is useful because if you know you've warmed up your cache, you can execute queries for objects without having to hit the database.

As of TopLink 11gR1 (11.1.1.2.0) TopLink Grid doesn't support these hints but it's only a problem for the 'Grid Cache' configuration in which Coherence is used as a shared cache replacement. In Grid Cache, only primary key queries are sent to Coherence and all other queries go to the database. In the 'Grid Read' and 'Grid Entity' configurations all read queries are directed to Coherence.

I was recently asked how to get a query to CheckCacheThenDatabase with Grid Cache and came up with the GridCacheQueryHelper utility class below. It provides the static method checkCacheThenDatabase which will adjust an EclipseLink JPA query so that it queries Coherence (if the query can be translated to a Filter) and if no results are found in Coherence then it queries the database.
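Stripped of the EclipseLink and Coherence specifics, the check-cache-then-database idea reduces to a two-tier lookup. The following self-contained sketch is illustrative only, not the actual GridCacheQueryHelper; the Store interface merely stands in for "run this query against the database", and a Map stands in for the Coherence cache.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a generic "check cache, then database" lookup.
// The real helper works on EclipseLink JPA queries and a Coherence cache.
class CheckCacheThenDatabase {

    /** A minimal stand-in for the database tier. */
    interface Store<K, V> {
        V get(K key);
    }

    static <K, V> V lookup(Map<K, V> cache, Store<K, V> database, K key) {
        V hit = cache.get(key);            // 1. check the cache first
        if (hit != null) {
            return hit;
        }
        V fromDb = database.get(key);      // 2. cache miss: hit the database
        if (fromDb != null) {
            cache.put(key, fromDb);        // 3. warm the cache for next time
        }
        return fromDb;
    }
}
```

In the real helper, the first tier is Coherence (used only when the query can be translated to a Filter) and the second tier is a JPA query against the database, as described above.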

]]>
BIRT Exchange DevShare Contest Winner for March 2010 SelectedBIRT Exchange DevShare Contest Winner for March 2010 Selected
The winner of the DevShare Contributor of the Month Contest for March 2010 is David Mehi (a.k.a. Le BIRT Expert) for his contribution to BIRT Exchange titled: Using a Popup Debug Window in BIRT.

He describes his submission as follows, “The Javascript Debug Popup Window was something that I had been using for over a year on various projects. I often found that when doing scripting in BIRT, I would make small mistakes that would cause the report to either stop running or produce the wrong result. Having this “window” to output messages to became invaluable when trying to find the right method calls and debug problematic JavaScript code. Since Eclipse is based on Java, opening up a JFrame and printing messages to it made sense. It was much more convenient to see the messages in their own window than looking in a log file every time.”

David Mehi is a Java certified programmer who worked as a Senior and Principal Consultant for Actuate Professional Services for almost 6 years where he was based out of New York City and London. He has worked on a wide variety of BIRT and Actuate projects across many different industries. David is currently based in Detroit, MI, USA and has recently started doing independent consulting through his company InfoLight Solutions, focusing on JEE, BIRT, Actuate and mobile related solutions.

In March of 2010, David created LeBIRTExpert.com, which is a web site and blog dedicated to bringing more information about BIRT to developers. It was created as a way to give back to the BIRT community and help foster its growth. David will soon be releasing an eBook detailing the BIRT Best Practices he has learned over the years. More information can be found at www.lebirtexpert.com.

David chose an iPod Shuffle for his award.

It’s not too late… to get your own DevShare article posted in time for the next month’s contest which closes on April 30th. If you’ve cooked up something that you can share with the BIRT Exchange community, please post it soon! You can create a tutorial, build a BIRT application, template or component, or write up some of the tips and solutions to problems you’ve run into during the past — anything that’s helped you use BIRT more effectively will probably be useful to others too. And don’t forget the BIRT Exchange Open Marketplace. Your code-based DevShare submission could also be a great candidate for the Marketplace. The monthly DevShare contest rules and list of previous winners can be found here.

]]>
The importance of a clear description for bug 8009The importance of a clear description for bug 8009
For the nth time, bug 8009 "Split File Editor" is being discussed - n being rather large... This never-ending story is one of my favorite pastimes when it comes to following Eclipse bugs... I really only have one wish: we need some new and fresh comments.

Well... lately, there has been some development in the comments: as already described by Holger Voormann, a bounty has now been started. In my book, the people behind the bounty now only need three things: a clear description, sufficient funds and a convinced committer. Will that happen? I think not.

In order to ask somebody to create a patch, the bounty givers/providers (?) must agree on a clear description of the problem and the wanted solution. Looking quickly through the 146 (!!) comments, I have not found a clear description of the wanted solution - actually I have found many different vague descriptions - so one must wonder: do the current bounty givers have a common consensus on the wanted solution? If that is not the case, I can only guess that some of the people are going to be rather disappointed when they see a solution...

Most of the committers of the platform seem to agree that any implementation of this bug is going to be non-trivial. Thus, one must expect that the bounty for the bug must be relatively large - unless somebody goes for the bounty just for the fame. But it is not enough to create a patch for the bug: as the patch is going to be relatively large, all sorts of committer rules are going to be in effect. Remember that somebody will have to maintain the patch afterwards. I.e. one of the existing committers for the platform must accept the provided patch as his/her own and maintain it ever after. (Admittedly, here I take it for granted that the bounty givers are not going to collect enough funds to pay for the continued maintenance of the solution... not an unreasonable assumption, I think.)

I make my living by developing or helping develop Eclipse RCP applications. So as a consumer of the Eclipse sources, I really, really don't want anything new in the platform unless all of the current committers can vouch for the solution. The platform has to be rock solid, otherwise we cannot base mission critical applications on top of the sources. (Yes, I know that this basically means the development of the platform slows to almost nothing as the current code base is so old and complicated, but... that is exactly one of the aims of the e4 project, right?)

So... summa summarum: 1) you need a clear description of the problem, 2) you need funds and 3) you need to convince an existing committer that your solution is sound and good...

I can see 2) happen, I think 1) is going to be hard and 3) is going to be nearly impossible!

In the next step, you get to configure the instance. A small instance will do for now.

Leave the Advanced Instance options as-is

Next, you need to create a key pair (this will allow you to sign in to the instance using SSH). If you haven't created an EC2 instance before, select Create a new Key Pair and follow the instructions. Otherwise, select an existing key pair.

To access the instance from outside using HTTP and SSH, you need to create and assign a security group. Make sure to add SSH and HTTP port configurations to this security group. Unfortunately, it is not possible to freely choose the port ranges, so we need to edit the security group after we're finished with the wizard.

On the summary page, review the configuration and click on Launch to actually start your instance.

After a little while, the newly created instance will show up in the list of running instances. You can test your instance by opening a terminal window and SSH'ing to your instance:
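The exact command was shown as an image in the original post; a minimal sketch, assuming a key pair file named gsg-keypair.pem, the public DNS name Amazon assigned to your instance, and a root login (the user name depends on the AMI):

```
ssh -i gsg-keypair.pem root@ec2-xxx-xx-xx-xx.compute-1.amazonaws.com
```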

Your Eclipse RT Equinox server is now up and running. Time to export the servlet!

Exporting the servlet

We will export our simple servlet using a feature based update site, so it can be installed using p2.

First, create a new feature project, naming it simple.servlet.feature. Provide the following details to the wizard:

ID: simple.servlet.feature

Version: 1.0.0.qualifier

Name: Simple Servlet Feature

Provider: Your name

Open the Plug-ins page and add simple.servlet to the list of packaged plug-ins.

Now, we can create the update site. It will only contain the feature we just created. Using the New Update Site wizard, create a new update site project, naming it simple.servlet.updatesite. Add the feature to the list of features on the Update Site Map page of the site.xml editor.

When you're done with that, you can build the update site by pressing the Build All button. This will build the servlet plug-in, the feature and finally the update site. After that, we're ready to deploy.

Installing the servlet

Installing the servlet into our Amazon EC2 instance is quite easy, as we can leverage p2:

Using your favourite SFTP client, copy the entire update site into the /tmp directory of your Amazon EC2 instance. When that's done, hop over to the terminal window that's connected to the OSGi console of your Eclipse RT server (remember, we started this server when setting up the Amazon EC2 instance). Issue the following commands:
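The concrete commands were shown as an image in the original; one plausible sketch installs the bundle straight from the copied update site (the version number and bundle ID here are illustrative, and the p2 console also offers provisioning commands for installing the whole feature):

```
osgi> install file:///tmp/simple.servlet.updatesite/plugins/simple.servlet_1.0.0.jar
Bundle id is 47
osgi> ss
```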

Start the servlet by typing start 47 (again, please use the bundle ID issued by ss).

Open a web browser and navigate to http://ec2-xxx-xx-xx-xx.compute-1.amazonaws.com:8080/simple (the DNS address is the same one you used to SSH to your instance) and you should see the servlet output:

Hello from the cloud!

Conclusion

In this post, you saw how easy it is to create an Amazon EC2 instance and deploy an OSGi-based servlet on it. With this knowledge, you can now start sky-diving into the joys of cloud computing. Have fun!

Most of this information has been taken from the Eclipse Wiki. You are encouraged to add your own tips & tricks to this wiki page. All you need is an Eclipse Bugzilla account. Setting one up is easy, start here.

Epsilon Flock
We’ve just released Epsilon v0.8.9, which includes many improvements and bug fixes, as well as a new task-specific language, Epsilon Flock. Epsilon Flock is a model-to-model transformation language tailored for migrating models following changes to their metamodels.

In some cases, specifying migration with existing model-to-model transformation languages can be cumbersome. For example, consider the ETL code below, for migrating Persons to a metamodel that has extracted a Telephone class:
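The ETL rule appeared as an image in the original post; a hypothetical reconstruction (the model names, and a telephone attribute being extracted into the new Telephone class, are assumed) might look like this:

```
rule MigratePerson
  transform s : Old!Person
  to t : New!Person, tel : New!Telephone {

  // Unaffected features must be copied by hand in ETL
  t.name = s.name;
  t.address = s.address;
  t.gender = s.gender;
  t.dob = s.dob;

  // Populate the extracted Telephone class
  tel.number = s.telephone;
  t.telephone = tel;
}
```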

In the transformation above, there is some redundancy. Firstly, model elements that have not been affected by the metamodel evolution (such as name, address, gender and dob), must be copied from old to new Persons. In Epsilon Flock, the values of these features are copied automatically. Secondly, the Person rule must define a source type and variable and a target type and variable. In Epsilon Flock, migration rules are scoped to a single type, and two built-in variables (original and migrated) are used to access old and new Persons. The following Epsilon Flock code is equivalent to the ETL above:
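The Flock rule was likewise an image; a hypothetical sketch of the equivalent migration, using Flock's built-in original and migrated variables (the model and attribute names are assumed, and the unaffected features are copied automatically):

```
migrate Person {
  var tel = new Migrated!Telephone;
  tel.number = original.telephone;
  migrated.telephone = tel;
}
```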

Epsilon Flock also provides concise mechanisms for changing the type of and deleting model elements, and we have plans to enhance the language in future versions of Epsilon.

We’ve added documentation, an example and a screencast to the Epsilon website to help you get started with Epsilon Flock. There’s also a technical report, which includes examples of Epsilon Flock for migrating a Petri nets model and a UML class diagram. The Epsilon Flock paper will appear at ICMT 2010 in late June. As always, please do leave questions and feedback in the Epsilon Forum, and we’ll get back to you.

Recently I have been laboring on porting a deployment system from shell scripts to Phing, a loose PHP port of Ant. And naturally, I miss the above. The only aid I can get from Eclipse is that Phing’s syntax is very close to Ant’s, so I can at least use the Ant editor for Phing files to enjoy property navigation and target integrity validation.

I would be more than happy to announce that I’m going to fill the gap and implement a Phing plugin for Eclipse PDT, but unfortunately - I’m too busy and too lazy. On the other hand, if you, my dear friend, suddenly decide to accept this challenge, I will gladly invest my time in architecture, design, review & testing free of charge. Or should I try to start it myself anyway?

Heart-Touching Quotation

…When I was going to school we were always taught, “In the olden days of computing, computers were expensive and programmers were cheap. Now it’s the reverse. Therefore…” We are back to the future. At internet scale, programmers are (sometimes) cheap compared to the cost of electricity.

Kent Beck

Be nice
This happens with disturbing frequency…

An Eclipse user is frustrated. S/he has been trying to make something work for a few hours. Maybe even a few days. It’s just not working like s/he thinks it should. Finally, s/he decides to send me a note. In that note, s/he questions my parentage, or muses openly about whether or not the synapses in my brain were functioning properly when I decided how I was going to implement some API or feature in Eclipse. In short, in frustration, s/he writes a nasty note with an underlying question or two about how something is supposed to work in Eclipse. In my response, I am nice. I completely ignore the nastiness and focus entirely on the problem expressed. I understand that the sender is frustrated. I always try very hard to answer questions earnestly and provide pointers to more information. Some hours later, I get a much more nicely worded note back thanking me for my thoughtful response along with an apology for the earlier rudeness. In that final response, the obvious frustration is replaced with obvious embarrassment.

Frankly, you don’t have to be nice when you email me. I’m going to be nice in my response regardless. However, I think it’s better for all involved if you are nice. It’ll save you from being embarrassed later.

General rule: take a deep cleansing breath before you type that email. More importantly, don’t wait until you’re frustrated. If you just don’t get it, and Google isn’t helping, try the forums. Then try me.

FWIW, my parents are both fine, upstanding, and productive members of society who have dedicated their lives to public service.

It’s been a long time since I last posted here and guess what … I am posting to ask for some help!!!

I sometimes use the Builder pattern as defined in Effective Java, 2nd Edition, Item 2. Incidentally, I recommend this book to everybody writing Java code. Because I am a lazy man, I love JDT’s Java templates and would be really happy to have such a template for the builder pattern.

Let’s take the following example of a Builder in order to illustrate this post (I just pasted 3 of the many parameters of the circle renderer class for readability):
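The example class did not survive in this copy of the post, so here is a minimal sketch of the Item 2 pattern for a hypothetical CircleRenderer; the field names are assumed, one required and two optional:

```java
// The field names below are assumed for illustration; the original post
// showed three of the circle renderer's many parameters.
class CircleRenderer {
    private final int radius;      // required
    private final String color;    // optional
    private final boolean filled;  // optional

    public static class Builder {
        // Required parameter, set in the constructor
        private final int radius;
        // Optional parameters, initialized to defaults
        private String color = "black";
        private boolean filled = false;

        public Builder(int radius) { this.radius = radius; }

        public Builder color(String color) { this.color = color; return this; }
        public Builder filled(boolean filled) { this.filled = filled; return this; }

        public CircleRenderer build() { return new CircleRenderer(this); }
    }

    private CircleRenderer(Builder builder) {
        this.radius = builder.radius;
        this.color = builder.color;
        this.filled = builder.filled;
    }

    public int getRadius() { return radius; }
    public String getColor() { return color; }
    public boolean isFilled() { return filled; }
}
```

Client code then reads fluently: new CircleRenderer.Builder(10).color("red").filled(true).build().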

I would like to create a builder template that asks the user questions. I.e. the builder template should ask the end user what the required fields of the enclosing class are, and what the default values of the non-required fields are. Is it possible to do that? If yes, where can I get information to start?

Note: I often use Eclipse’s “equals and hashCode” generator, and I guess it’s implemented in Java code. Thus the only solution to my problem may be to implement my Builder generator the same way…

Manu

Where is DS Annotation?
Declarative Services are cool stuff. They solve the bundle startup order problem. But one thing confuses me: how do I inject a service at startup time? I think services need to be initialized at startup, so I chose to solve it with a singleton class like the one below.
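The snippet referred to was lost in this copy; a hypothetical reconstruction of such a singleton (only the class name KanbanUIContext comes from the post, the service field and method names are assumed):

```java
// Hypothetical reconstruction; only the class name comes from the post.
final class KanbanUIContext {
    private static final KanbanUIContext INSTANCE = new KanbanUIContext();

    // The service to be looked up; a real implementation would use the
    // actual service interface instead of Object.
    private Object kanbanService;

    private KanbanUIContext() {}

    public static KanbanUIContext getInstance() {
        return INSTANCE;
    }

    // A Declarative Services component would call this from its bind method
    // when the service becomes available at startup.
    public void bindKanbanService(Object service) {
        this.kanbanService = service;
    }

    // Application code looks the service up through the singleton -
    // the Service Locator style the author is unhappy with.
    public Object getKanbanService() {
        return kanbanService;
    }
}
```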

It's not a good way (T_T)... KanbanUIContext is a singleton class used in the Service Locator pattern, and code written this way is not easy to test... As far as I know, there is no way to inject using an annotation for field injection. Is there any way to use annotations and field injection?

EPL/GPL Commentary
A while ago, we received a request to take a look at an open letter on the compatibility of the Eclipse Public License (EPL) and the GNU General Public License (GPL). This led to a number of conversations with the Free Software Foundation (FSF) on the topic. What we have learned and the conclusions that we have drawn are outlined below. You can also find the FSF’s summary and conclusions on their blog.

1. Introduction

In this context by “Eclipse plug-in” we mean a software module written in the Java programming language which is specifically intended to execute on top of the Eclipse platform, which is provided under the Eclipse Public License (EPL). Eclipse plug-ins can be distributed in two different ways: (a) combined (e.g. linked) with a copy of the Eclipse Platform, or (b) independently. In the latter case, such a plug-in would have to be combined by a user with the Eclipse platform in order to be executed. In short, Eclipse plug-ins are by definition useless without the availability of an instance of the Eclipse platform to be executed on.

2. Caveats

This blog post is not a substitute for professional legal advice. It is intended to provide general guidance to developers, but it was not prepared by an attorney nor is it in any way legal advice.

3. Generally Speaking, These Licenses Are Incompatible

The EPL and the GPL are inherently incompatible licenses. That is the position of both the Free Software Foundation and the Eclipse Foundation. In preparing this we consulted with the FSF to make sure we fully understood their interpretation of the GPL and how it interacts with the EPL. You may not link GPL and EPL code together and distribute the result. It doesn’t matter if the linking is dynamic, static or whatever. This bears repeating: if you or your organization are distributing EPL and GPL licensed code linked together into a single program you are almost certainly violating both the EPL and the GPL.

It is important to understand the role of linking in this discussion. If the EPL and GPL components interact with each other via pipes, sockets, etc., or if Eclipse is simply running as an application on top of GNU/Linux, then that is a completely different scenario and outside the scope of this analysis. The Eclipse CDT project, for example, makes use of gcc and gdb in its support of C/C++ development by restricting its interactions with those GPL-licensed facilities to the command line interface.

Note, however, that as free software proponents both of our organizations are interested in the freedom of users to make use of software. It may be possible for end users to create combinations of plug-ins where that same combination could not be lawfully distributed as a single program to third parties. The rest of this paper will look at this possibility and how such plug-ins could be distributed and assembled by end users.

It is the position of the Eclipse Foundation that the EPL is the preferred open source license for you to use for your Eclipse plug-ins. After all, the platform that you are leveraging when writing such a plug-in was provided to you under the EPL.

4. The EPL Perspective

The definition of “Contribution” in the EPL states that “Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program” (emphasis added). Further, we make it clear in the EPL FAQ that simply linking to EPL-licensed code does not constitute a derivative work. So, in other words, if you independently write an Eclipse plug-in that adds value on top of the Eclipse platform you are not required to license it under the EPL. In fact, many such plug-ins are licensed under commercial terms and conditions, as well as various open source licenses.

Under the terms of the EPL, it appears that it may be possible to license plug-ins under the GPL. What is clear, however, is that it is not possible to link a GPL-licensed plug-in to EPL-licensed code and distribute the result. Any GPL-licensed plug-in would have to be distributed independently and combined with the Eclipse platform by an end user. This is why, for example, the Eclipse Foundation has allowed the independent distribution of GPL-licensed plug-ins via our Eclipse Marketplace.

If you or your organization owns all of the copyrights to the plug-in(s) in question, there is one mechanism which may provide a solution to this dilemma. That is to create a license exception which specifically allows linking to EPL-licensed code. However, if you do not own all of the copyrights to your GPL-licensed plug-in or other GPL-licensed libraries used by your plug-in then this issue remains.

The FSF’s position is, however, a matter of some debate. It is clear that they prefer that Eclipse plug-ins not be licensed under the GPL. But in the same section of the FAQ they also say that “This is a legal question, which ultimately judges will decide”. It is important that you seek your own competent legal advice if this situation applies to you or your organization.

6. Summary

Choosing to license an Eclipse plug-in under either version of the GPL places important constraints on how you distribute that plug-in. The purpose of this blog post is to provide developers with some general guidance on those constraints and the requirements placed on them to conform with the terms and conditions of both the EPL and the GPL. But this post was not prepared by an attorney nor is it in any way legal advice. Seek the advice of your own attorney.

Xtext For Your Ecore Models
Ecore2Xtext Wizard. Why should you use the Ecore2Xtext Wizard?

You want your models to be in a syntax that humans can not only read but also understand.

You want a model editor that offers all the convenience of a modern IDE.

You already have an Ecore model but don't know how to start with Xtext.

Your Ecore model is huge and you want a quick start with Xtext. You can easily fine-tune the syntax later on.

Select the EMF generator models1 from your workspace for which you want a textual syntax and choose your root element's type.

Fill in all the language metadata on the second page of the wizard. Remember the file extension.

Click Finish and wait until Xtext has generated the two common Xtext plug-ins and the Xtext grammar for your language.

Run the MWE2 workflow located in the same directory as the grammar. Now Xtext generates the language infrastructure (parser, editor, formatter, etc.).

Spawn a new Eclipse runtime workbench, create a sample Java Project, and open a new model file with the file extension you have chosen in the wizard. Play around and have fun with your new textual model editor.

1 the genmodel is needed because we need the fully qualified names of the generated Java classes as well as the location of the Ecore file. The genmodel offers both of them.

Watch this short screencast to see it in action:

What is the generated syntax like?

Names of EClasses and EStructuralFeatures become keywords, containment is marked with curly braces, elements in lists are separated by commas, etc... Here's an example of an entity model in the generated language:
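The sample model was shown as an image in the original; purely as an illustration of the conventions just described (the metamodel and feature names here are assumed, not from the post), a generated syntax might look roughly like this:

```
Model {
  entities [
    Entity {
      name "Person",
      features [
        Feature { name "fullName" },
        Feature { name "address" }
      ]
    }
  ]
}
```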

What if it doesn't work?

The grammar is the primary artifact of every Xtext language, but there are a couple of further services you might have to configure:

An IQualifiedNameProvider to define how the fully qualified name of an element is derived.

An IScopeProvider to define which elements are candidates for a cross reference.

...

Please consult the Xtext documentation for more information.

Another geekish meta-confusing example

All right geeks, Ecore itself is defined in Ecore, so let's generate a textual syntax for Ecore and see how Ecore looks in that syntax! Only two adaptations of the generated code were necessary to get this editor:

(The '^' chars are automatically added by Xtext to distinguish identifiers from keywords, which of course collide a lot in this example) It is certainly not as complete as EMFatic, and it has a quite verbose syntax, but it could be the starting point for a nice textual Ecore editor.

Face time with our colleagues from Nokia, the Symbian Foundation, Eclipse and CDT communities. Ronnie King, Warren Paul, and I came from the Austin office and had a great time catching up with people we usually don't get to see in person very often. I missed EclipseCon last year but I heard attendance was up and organizations were able to send more people this time.

I really liked the session format: the reduced times seemed to really focus the presentations and kept a wide variety of material flowing throughout the day. I know the CDT session Doug and I did seemed to fly by quickly.

Had a great response to the new EDC (Eclipse Debugger for C/C++) that will be in Carbide 3.0 and our team is contributing to CDT 7.0. People liked the overall concept, features, and extensibility and I think we'll soon start seeing more contributions from the community.

Lots of discussions about usability and performance, both very important to us for Carbide. There is new work going on in the Eclipse & CDT communities that will help us smooth out some rough spots our Carbide users have had to put up with.

Visiting developers at the Nokia office in Mt. View. We love getting to shadow developers around and watch them work. Great unfiltered feedback. We had a chance to introduce them to some features in Carbide they didn't know about and help them through some difficult issues that are tough to understand unless you can look over the person's shoulder.

The Startup Lessons Learned Conference is by-and-for entrepreneurs, and only entrepreneurs. We have a lineup of speakers who are primarily active practitioners of the lean startup methodology. They'll be speaking about their real-life experiences trying to put these ideas into practice.

JBoss Developer Studio provides a single install of a full Eclipse based development environment which includes Eclipse 3.5, a productized subset of JBoss Tools 3.1, TestNG, SpringIDE and an (optional) bundled JBoss Enterprise Application Platform 5.

JBoss Developer Studio is for those who would like to just install an IDE and get work done without the hassle of configuring Eclipse and related runtimes.

Information about installation and migration from previous JBoss Developer Studio versions is available from the JBoss Developer Studio 3 page.

Have fun!

getting real
short article (30 seconds read time)

I just completed the book "getting real". The book promises a smarter, faster, easier way to build successful web applications, written by 37signals, the group that brought us "ruby on rails". It is a book about web app design, covering the development process from idea (over marketing) to support. The proposed approach is agile development with a "keep it simple" attitude. It does not contain many new ideas, but is well structured and a quick entertaining read with tons of quality references.

details (5 minutes read time)

What is the "getting real" approach? The most emphasised point in the book is: always strive early for "real things". There is no good in discussing designs you can't see, or software that you can't use. Build it early and then refine. Since you are always working on the real thing, developers stay motivated and customers are involved early. This is actually agile development 101. Another heavily emphasised point is: think a lot before you put in a feature. Always weigh the evident benefits of a feature against the costs of a heavier app.

What was in it for me? Despite the fact that many of the ideas are not new, it is good to have a comprehensive, easy-to-re-read compendium tailored for web apps. The presented design paradigms and ideas are important to anyone building web apps (or software generally). The book is thereby thin enough to be embraced by the open minded and thick enough to slap all others with it. If you have to deal with "not-knowing" colleagues, the book helps to communicate the ideas quickly.

Even if you are familiar with "agile and more" methodologies, there might be a few small points that you never saw so clearly before. Here are my personal blind spots:

The build and refine approach fits web apps perfectly. Since it is a web app, it can be put online early on and there are no costs in delivering new versions.

Think in terms of what you can do differently from competing products. Building distinctive characteristics becomes a design paradigm.

Three-state solution: always design for a regular, blank and error state.

Do not obey the customer and clutter your app with unimportant features. In terms of clear design, even reasonable features might cost your app more than they sell.

Take out the middle man and do your own support and feel customer pain first hand to stay "real".

What is it not? It is not about programming languages, frameworks or css-hacks. Some points work especially well for web apps, but it is not fully web app exclusive. It is not a text book, it is practical.

What is next? The follow up book "rework" should be in my mail tomorrow.

In search of slides

Want to find out which speakers have uploaded their slides or files related to their talks? Well, this year it is easier than ever to find out. At the top of the sessions page is a set of search options for drilling down into the types, categories or tags for a talk. As of today you can also find talks with slides and files the speaker has made available. Now we just have to figure out how to encourage all the speakers to make their slides and code samples available. Any ideas?

Eclipse plug-in sightseeing: Ribbon IDE

The Code Bubbles project reminds me of an older attempt that landed as a well known Eclipse project, which unfortunately abandoned a great idea for other, not so cool, things...

Undisputable Best e4-Rover Mars Challenge Client

Well, except for the fact I don't qualify for the challenge because I'm a Foundation Staff member, I would say the contest is over.

If you activate Chuck Norris mode in my client, the rover collects all the RFID tokens, beats them into submission, and then runs back and forth over them building a score of Infinity - twice in one turn.

Can you do better? See Lynn for a key and information on how to participate.

- Don

Apps and Personal Data Stores
This post presents an architecture comprised of apps, a dashboard, and a personal data store (PDS) that can be implemented by multiple developers, hosted by multiple operators over an open, personal data network and whose goal is to give users more control over their own identity (personal data, profiles, preferences, affiliations, and relationships). It is in support of aspirations that have been widely reported by others and called variously VRM, data portability, user-centric identity, the Data Web, Augmented Social Network (2003), and so on.

I’ve annotated the diagram above with little “H” and “A” markers so you can see specifically the areas that Higgins and Azigo are working on respectively. Lots of other folks are also working on other parts of the picture too, of course.

Apps

Apps are of course the most important kind of component since they are what the end user sees and appreciates. Apps gain access to the user’s data by making calls (e.g. getAttribute) to an API exposed by the PDS Client. Architecturally, we’ve seen the need to support the conventional kinds of apps: web, mobile (iPhone, Android, etc.), and desktop, as well as a more unusual kind of app I’ll call a Javascript app. In this latter case, Javascript is fetched from a web service (e.g. from Kynetx KNS) and injected locally into your browser by a browser extension. This same browser extension exposes the same PDS Client API to this Javascript program.

Dashboard

The dashboard is an admin GUI app for your personal data. It is an occasional-use tool that provides: (a) a control panel to manage the permissioning policies that control which of your attributes are shared with whom (including so-called “selector” functionality to approve the release of your info) (b) a dashboard GUI to see and manage all of your identity data attributes (including profile data, credentials, friends lists, etc.) whether stored in your own PDS or managed by others (c) a place to directly enter self-asserted attributes (d) an embedded app marketplace (e) a canvas area where apps can extend the UI to add their own admin interfaces (f) a place to import & manage your i-cards and OpenID OP relationships.

ASIDE: Dashboard is a new word I’m trying out. The reality is that this piece of software is a bit of a swiss army knife where each blade/tool is called something different. A few examples: Microsoft calls the aspect that pops up to give notice and consent to release a set of attributes an identity selector. Inside Google they call identity-related client add-ons to a browser an active client. The “show me all of my stuff” aspect does sound like a dashboard. On the other hand, the permissioning aspect is something Eve would call a relationship manager (or I think she would). And I think Bob Blakley would too.

The dashboard combines aspects of earlier client efforts. In 2006-2007 we saw Information Card Selectors like Windows CardSpace as well as the Higgins selectors provide an interface to view and manage multiple digital identities displayed as visual cards, as well as provide notice and consent to the release of your selected digital identity. In 2009 Azigo augmented the selector concept with support for Kynetx apps (along with cross-platform and card roaming support). Prototypes shown by Microsoft (e.g. OpenID Active Client) and Higgins at IIW in 2009 added OpenID support, thus demonstrating multi-protocol support. Mozilla Labs’ Account Manager is doing some great work in this area. The Higgins project is working on a next-generation client as part of the Higgins 2.0 Active Client expected in 2011.

Personal Data Store

A PDS is a web service that works on your behalf, giving you more control over your own personal data whether it is stored in the PDS or managed elsewhere. The PDS stores local attributes in blinded form so that only the user has the decryption key, not the PDS service provider. The PDS is an idea that has been under development for years. For some background see Joe Andrieu, Joe again, and Iain Henderson. The PDS is being developed as part of Higgins 2.0. Another interesting PDS development project is Mine!

PDS Client

The PDS Client has no UI, but provides an API for apps that wish to read/write attributes from the PDS. Here are some of its functions:

Maintains (and syncs to the PDS and other clients) the user’s “permissions” - the decisions that the user has made as to who (what app or relying party) has access to what attributes. For example, the first time a new app/RP asks for a certain set of attributes, the PDS Client will trigger the PDS Dashboard to present the policy decision to the user. The next time this same request happens, the PDS Client remembers the grant and usually doesn’t have to bother the user about it.

Maintains a local copy of some or all of the person’s personal data stored in the remote PDS

Maintains an OAuth WRAP access token that it gets by authenticating itself to an external authentication service. It passes this token along in XDI messages to the remote PDS service.

Can be configured to encrypt attribute values before they are sent over the wire (e.g. in XDI messages) to the remote PDS

Contains a local Security Token Service (STS) that allows it to create and sign SAML (for example) tokens for self-asserted attributes.

Contains an STS client to support remote IdP/STSes managed by external parties (e.g. to support managed i-cards).

Performs cross-context schema mapping.

The Higgins 2.0 PDS Client is packaged as either a C++ or Java code library or as a separate operating system process (e.g. on Windows it is a Windows Service).
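To make the functions listed above concrete, here is a small sketch. Everything in it is hypothetical (PdsClient, grantAccess and the in-memory storage are illustrative names, not actual Higgins APIs); it only shows the pattern of an app-facing getAttribute call being gated by the user's stored permission grants:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch only: these names are illustrative, not Higgins APIs.
interface PdsClient {
    void setAttribute(String name, String value);
    void grantAccess(String appId, String name);
    String getAttribute(String appId, String name);
}

// An in-memory stand-in for the real client, which would sync with the
// remote PDS over XDI and consult the user's stored permission grants.
class InMemoryPdsClient implements PdsClient {
    private final Map<String, String> attributes = new HashMap<>();
    private final Map<String, Boolean> grants = new HashMap<>();

    public void setAttribute(String name, String value) {
        attributes.put(name, value);
    }

    // Records a decision the user made via the Dashboard.
    public void grantAccess(String appId, String name) {
        grants.put(appId + ":" + name, Boolean.TRUE);
    }

    // Apps only see an attribute after the user has granted access; a real
    // client would trigger the Dashboard here rather than returning null.
    public String getAttribute(String appId, String name) {
        if (!grants.getOrDefault(appId + ":" + name, Boolean.FALSE)) {
            return null;
        }
        return attributes.get(name);
    }
}
```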

Network Protocol

Drummond Reed, with his OASIS XDI and OASIS XRI work, was to my knowledge the first to define an open data web. A few years later Tim Berners-Lee published his Linked Data paper. We’re starting to see implementations of Linked Data, so now the Semweb folks also have a data web. Both of these approaches are important.

An open community is starting to form around XDI, focused on PDS-related use cases and on creating what might be called a profile of XDI for this area. The community is leveraging XDI's existing strengths in identity management integration, security, access control, data sharing and versioning, and extending them where needed to meet the PDS-related requirements.

This focus probably provides a critical time-to-adoption advantage over the Linked Data effort in the PDS area. Since the objective is interoperability (i.e. an interoperable ecosystem of PDSes and apps over a common protocol), assembling a community focused on this area would seem to be the fastest way to get there. Linked Data (like "vanilla" XDI) has a much broader link-all-the-world's-data-together mission and lacks direct support for many of the PDS-related requirements. As a consequence, RDF developers (including Higgins) define ad hoc extensions to RDF to support the PDS use cases, and those extensions are only interoperable within their own developer communities.

PDS Schema

The Higgins PDS uses its own internal schema called the Persona data model. This is not to say that the PDS architecture imposes a single ontology on its clients. Quite the opposite. Every attribute call (e.g. getAttribute) may request attributes in any vocabulary. As I’ve mentioned in my schema mapping post, we follow the philosophy of mapping into and out from the internal schema.

Authorization Manager (AM)

The AM provides the “back end” authorization manager for access control of attributes managed by data services other than your own PDS. The Higgins project has been tracking the promising UMA Authorization Manager effort that Eve Maler and others have been developing.

Kynetx KNS

KNS is a web service that serves up compiled JavaScript apps for injection into browsers. The app developer uses the Kynetx AppBuilder tool to create apps. Each app is packaged as an information card. The developer puts this app on their website for folks to download and install. If you click on it and already have a PDS Dashboard, the new app gets installed in about one second. If you click on it and you don't already have a PDS Dashboard, then you download an installation package that includes a Dashboard (with the app pre-installed inside it).

]]>
e4-Rover Challenge - So Simple, even an MBA Can Play
Wow. I was just part of a beta test of next week's e4-Rover Challenge at EclipseCon, and this is pretty cool stuff. The NASA and e4 teams have clearly been working hard on this. I suspect it's going to become a major distraction for a lot of people next week.

Here's the best part - if I can do it, I can only imagine what you can do with it. And I'm not just talking about getting the instructions and doing a basic run-around. I was just hacking around e4 for the first time, adding new controls and writing a lag management API (lag is part of the game, remember - you're controlling something on Mars!). Impressive stuff all around.

Now, where did I put that Prolog-Java code I once had, and can I hook it up to the telemetry API...

Hmmmm, I'm thinking I might not be eligible for the Grand Prize trip to NASA JPL, but it would still be good to teach the whippersnappers a thing or two...

- Don

]]>
API Tooling at Eclipsecon
If you came to the overview presentation of API Tooling two years ago, this time you will learn more about API usage and migration reports.

All questions and comments are welcome.

See you there.

]]>
JDT at Eclipsecon 2010
If you want to find out more about what JDT has done for the Helios release, you might want to come to the following events at Eclipsecon 2010:
1) What's new in JDT? (lightning talk)
2) JDT Fundamentals (2h tutorial)

Of course I will be available for questions, discussions or comments during the whole conference. Send me a message if you want to meet there.

Thanks and see you soon at Eclipsecon.

]]>
Listen up Europeans!
Yes, we'll do it again. For all of you who are not getting out of EclipseCon and Santa Clara on Thursday: come by and have a drink with us European Eclipse enthusiasts. As every year, we will gather at David's Restaurant @ Santa Clara Golf and Tennis Club, about 0.2 miles from the convention center. And as every year, we will start our little event around 7pm Pacific time.

Thanks to the German OSGi User Group, we have been able to raise our liquids budget by 300 Euro. If you are from Europe and your company wants to become a sponsor too, please let me know! We'll happily add your money to keep the tap running.

PS: If you don't have a European passport, we'll let you in too :-) Promise! You might not qualify for a large beer though.

]]>
Running a SQL Script on startup in EclipseLink
There's no built-in support for this in EclipseLink, but it's easy to do thanks to EclipseLink's high extensibility. Here's a quick solution I came up with: I simply register an event listener for the session postLogin event, and in the handler I read a file and send each SQL statement to the database - nice and clean. I went a little further and supported setting the name of the file as a persistence unit property. You can specify all this in code or in the persistence.xml.

The ImportSQL class is configured as a SessionCustomizer through a persistence unit property and, on the postLogin event, reads the file identified by the "import.sql.file" property. This property is also specified as a persistence unit property, which is passed to createEntityManagerFactory. This example also shows how you can define and use your own persistence unit properties.
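The post's actual ImportSQL source isn't included here, but the shape of such a customizer can be sketched. Below is a minimal, self-contained sketch of the statement-splitting half; the EclipseLink wiring (a SessionCustomizer registering a postLogin SessionEventAdapter that runs each statement via session.executeNonSelectingSQL) is indicated in comments, since it requires EclipseLink on the classpath. The class and property names follow the post; everything else is illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the approach described above, not the author's actual class.
// With EclipseLink present, a SessionCustomizer would register a
// SessionEventAdapter whose postLogin(SessionEvent) reads the file named by
// the "import.sql.file" persistence unit property and executes each
// statement, e.g. via session.executeNonSelectingSQL(stmt).
public class ImportSql {

    // Split a SQL script into individual statements on ';', skipping blanks.
    static List<String> splitStatements(String script) {
        List<String> statements = new ArrayList<>();
        for (String part : script.split(";")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                statements.add(trimmed);
            }
        }
        return statements;
    }

    public static void main(String[] args) {
        String script = "CREATE TABLE T (ID INT);\nINSERT INTO T VALUES (1);";
        for (String stmt : splitStatements(script)) {
            System.out.println(stmt);
        }
    }
}
```

A naive split on ';' like this is fine for simple import scripts, though it would break on semicolons inside string literals, which is a reasonable trade-off for a startup script you control.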

]]>
Congratulations to Ed, Chris, and Boris
All three are immensely talented individually and will be even better as a team: Ed with his skills in coordinating multiple related projects; Chris with his experience attracting new technologies and mentoring non-corporate projects; and Boris with his extensive knowledge of what is needed to support the core committers. I know they will represent the needs of the most important part of the Eclipse community -- the engine of production and innovation -- the committers and contributors (especially the core committers). It will be hard work, but these three have stepped up to the challenge and I look forward to their accomplishments.

Congratulations!

]]>
Random acts of kindness
A 'thank you' to the ladies and gentlemen making all this possible, the unsung heroes of the more-or-less-oiled machinery called 'build', the closing-time-panic induced commitathoners, the map-file conflictinators, the cooler of the cool 'hope-the-next-build-will-be-green' cats, the ... well, you know who you are!

Thank you all very much!

For those of you going to EclipseCon this year, I recommend buying a pint for your build-wizard. But don't overdo it in case somebody (could be me) breaks a build somewhere!

]]>
Announcing the e4-Rover Mars Challenge
We have put together a cool programming contest to introduce e4 to the EclipseCon attendees, and to encourage them to try out e4 and learn about it. (See below for acknowledgments.)

The idea is to drive a LEGO rover to collect points - your mission is to align the robot's on-board "instruments" with Martian "rocks" in an arena. There are two ways to win: collect the most points to win a Lego Mindstorms NXT 2.0 set, or write the best e4-based client to win a Lego Mindstorms set and a trip to the NASA robotics lab in Los Angeles. Other prizes include credits for Amazon Web Services and T-shirts.

The architecture of the game system is interesting - the Lego robot executes commands sent to it from a local machine via Bluetooth. The clients won't communicate with this local machine directly, though - to make everything scalable, data about the state of the game, along with an overhead view of the arena and the rover, is made available using Amazon Web Services. We have an Equinox-based server on EC2, and are using S3 for making data available in a scalable way.

We provide a basic e4-based client, with a joystick-like way to control the rover. If you want to win the grand prize, you'll have to improve the client to make it look better and operate the robot more efficiently. We basically want people to hack the client to beat the game. To help you do this, we have started a tutorial (which we'll refine and expand over the next few days) and a FAQ. We will also add some more comments to the client code, and perhaps tweak the code a little bit, so make sure you check if there are newer versions of the source code available.

To give people a head start we are making the current source code for the client available today. You can download the code now and run the client, but you can't actually control the robot. You need a hash key to control the robot, and those will only be given out at EclipseCon. What you will be able to do is watch us have some fun playing with the robot, erm, I mean, perform more testing, because the client includes a webcam view of the robot. More detailed instructions are available on the contest web pages.

I have been playing with the robot for the last week. For a little while, I was at the top of the high score list, but since then others have gotten much better. I'll have to end the blog post here to do some more testing. ;-)

Driving the rover is a lot of fun! It's only a week or so until you can play too, at EclipseCon.

P.S. Credit for the idea goes to Jeff Norris, who will be presenting a keynote on Wednesday. Khawaja, Victor, and Mark from Jeff's team worked on the hardware and the server side, and Brian de Alwis, Benjamin Cabe and myself wrote the simple e4 based client. Lars Vogel is helping with the documentation. Ian Skerrett from the Eclipse Foundation is coordinating everything, Lynn is going to help run the contest at EclipseCon, and a good number of e4 committers have been recruited as volunteers.

]]>
Set Jetty buffer size (Maven)

Working with large cookies and Jetty, you may have faced this error:

2010-03-11 18:18:31.275:WARN::HttpException(413,FULL head,null)

This is because Jetty allows only 4 KB for HTTP request and response headers. Using large cookies is enough to reach the limit.

To add more room for headers, simply add <headerBufferSize>16384</headerBufferSize> to your connector configuration (16k should be enough).
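As a sketch (assuming the Jetty 6-era maven-jetty-plugin and its NIO connector; your plugin version and port will differ), the connector configuration in the pom.xml could look like this:

```xml
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <configuration>
    <connectors>
      <connector implementation="org.mortbay.jetty.nio.SelectChannelConnector">
        <port>8080</port>
        <!-- 16 KB instead of the 4 KB default, enough for large cookies -->
        <headerBufferSize>16384</headerBufferSize>
      </connector>
    </connectors>
  </configuration>
</plugin>
```

Note that declaring a connector this way replaces the plugin's default connector, so the port must be set explicitly.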

]]>
Eclipse Refactoring for legacy code
Heap Dump Analysis with Memory Analyzer, Part 2: Shallow Size
In the second part of the blog series dedicated to heap dump analysis with Memory Analyzer (see previous post here) I will have a detailed look at the shallow size of objects in the heap dump. In Memory Analyzer the term shallow size means the size of an object itself, without counting and accumulating the size of other objects referenced from it. For example the shallow size of an instance of java.lang.String will not include the memory needed for the underlying char[].

At first look this seems like a clear definition and a relatively boring topic to read about. So why did I decide to write about it? Because despite the understandable definition, it is not always straightforward (for the tool developers) to calculate the shallow size, or (for a user) to understand how the size was calculated. The reasons? Different JVM vendors, different pointer sizes (32/64 bit), different dump formats, insufficient data in some heap dumps, etc. These factors can lead to small differences in the shallow sizes displayed for objects of the same type, and thus to questions.

Is it really important to know the precise size? Not necessarily. If you got a heap dump from an OutOfMemoryError in your production system, and MAT helps you to easily find the leak suspect there - let's say it is some 500 MB object - then the shallow size of every individual object accumulated in the suspect's size doesn't really matter. The suspect is clear and you can go on and try to fix the problem.

On the other hand, if you are trying to understand the impact of adding some fields to your “base” classes, then the size of the individual instance can be of interest.

In the rest of the post I will have a look at the information available (or missing) in the different snapshot formats, explain what MAT displays as shallow size in each case, and try to answer some of the questions related to shallow size that we usually get. If you are interested, read further.

As I mentioned already, the various snapshot formats contain different pieces of information about the objects. I will look at each of them separately, and additionally differentiate between object instances and classes. For more information on the different heap dump formats see part one of the blog series.

Instance Size in HPROF Heap Dumps

Heap dumps in the HPROF binary format do not provide the correct size of each instance. What they provide is the number of bytes used to store the necessary data in the heap dump, but not the number of bytes the VM really needs to store the instance in the heap. Therefore, in MAT we attempt to model how the VM would store the instance and how much memory it would need.

The sizes originally provided in the HPROF file do not include the object header or the additional space the VM uses to keep object addresses aligned in a certain way. These are precisely the parameters we have to guess on our own. Does this always work? No, unfortunately not. In Bugzilla entry 231296 you can find some discussions on the topic, as well as the current state. Here is just a short summary:

With the formula we use to calculate the sizes for dumps from 32-bit Sun VMs, we observed correct results for 1.6 dumps, and small deviations for a handful of objects in 1.5 dumps.

For the special case of an x64 Sun VM with compressed OOPs we have no solution at the moment. We haven't found a way to tell from the HPROF file that the pointers were compressed.
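To illustrate the kind of modelling involved, here is a small sketch with assumed header and alignment values - this is one plausible formula, not MAT's actual one: pad the raw field bytes with an object header and round up to the VM's alignment boundary.

```java
// Illustrative only: one plausible way to model the heap size of an
// instance on a 32-bit VM, as MAT must do for HPROF dumps. The HEADER and
// ALIGNMENT values are assumptions, not MAT's real constants.
public class ShallowSizeSketch {
    static final int HEADER = 8;     // assumed two-word object header
    static final int ALIGNMENT = 8;  // assumed 8-byte object alignment

    // Round header + field bytes up to the next alignment boundary.
    static int shallowSize(int fieldBytes) {
        int raw = HEADER + fieldBytes;
        return ((raw + ALIGNMENT - 1) / ALIGNMENT) * ALIGNMENT;
    }

    public static void main(String[] args) {
        // e.g. an object with 16 bytes of fields: 8 + 16 = 24, already aligned
        System.out.println(shallowSize(16)); // 24
        // an object with a single byte field still pads out: 8 + 1 -> 16
        System.out.println(shallowSize(1));  // 16
    }
}
```

The guessing MAT does is exactly about picking the right HEADER and ALIGNMENT (and pointer size) for a given VM, which is why the results differ between VM versions as described above.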

Instance Size in IBM System Dumps (read with DTFJ)

DTFJ already provides the correct instance size for the objects, so in MAT we don't have to do any guessing - the instance sizes are correct. What needs to be mentioned is that two instances of the same class in the same heap dump may have different shallow sizes. Until recently the Memory Analyzer was not prepared to handle such a case. More information on when exactly such a difference can appear, and some discussion of the necessary changes, can be found in Bugzilla entry 301228.

Instance Size in PHD Dumps

The sizes provided in PHD (Portable Heap Dump, from IBM JVMs) files are also correct, and MAT just displays them without any further computation.

Class Size in HPROF

The HPROF format does not provide information about the memory needed for a class - for bytecode, for jitted code, etc. For every class, the Memory Analyzer shows as shallow size the sum of the shallow sizes of all static fields of the class.

Class Size in IBM System Dumps

DTFJ provides more information about class sizes. The shallow size for classes reported by MAT includes the size of all methods (bytecode and jitted code sections) and also the on-heap size of the java.lang.Class object.

Class Size in PHD Dumps

The PHD dumps do not contain information about the method sizes. The shallow size for classes in PHD dumps is just the size of the java.lang.Class object.

Shallow Size of a Set of Objects

The “Shallow Size” column appears in many views where objects have been aggregated in groups based on different criteria. The shallow size of a set of objects is just the sum of the shallow sizes of the individual objects in the set. There are two things to mention here, which have raised questions in the past:

in a class histogram (i.e. when objects are aggregated by their class), the table may contain more than one entry with the same class name. This happens if the same class is loaded by more than one class loader. If one is interested in the total shallow size of the instances of all these classes, one can filter the histogram and sum up the sizes.

the classes are added to the record for java.lang.Class. This means that the shallow size in the histogram entry for java.lang.Class is the sum of the shallow sizes of all classes (calculated as described above in the "Class Size …" paragraphs).

Summary

My personal view is that if one is using the Memory Analyzer to find the root cause of an OutOfMemoryError, then the shallow sizes of the individual objects are not that important.

If for a given purpose one needs to understand in detail the sizes of objects, then it is important to remember that they depend on the concrete JVM and heap dump type, and that in some cases the displayed sizes are not given by the VM but are calculated in MAT. I hope that the short overview given in this blog could be helpful for better understanding these details.

What Comes Next?

In the next post I plan to write again about size, but a different one - the retained (or keep alive) size of objects and object sets.

]]>
Aligning Carbide 3.0 with Eclipse Helios
We hesitated to do this in the past because it means we effectively have the entire CDT team contributing to Carbide in real time: any one of the committers can break our build or introduce bugs. But we're more confident now and our experience with the CDT community has been very good: people rarely break the build and the quality and review of contributions has been excellent.

And since we sync up with the latest platform and CDT eventually anyway, it makes more sense to discover any issues at the moment the work is developed and contributed, not months later. Now both our internal test team and the Carbide beta group can give the new platform and CDT stuff a workout before it's released. This also lets us get rid of our internal copy of CDT that we would occasionally sync with the public one. Trying to keep it updated was a pain, and it created a tempting way for us to make changes that weren't really ready to go back into the community.

Rather than explaining why you should vote for me :-), I thought it would be useful to summarize the key election facts:

You are voting for a one-year term starting April 2010. Voting is open as of yesterday, for three weeks until March 12, 3 pm Eastern time. Check out the committer candidates and their vision statements.

To vote, check if you received an email from emo@eclipse.org on Feb 19, 2010. It contains the URL for voting, as well as the required voting password. If you haven't received a voting password, you are probably not (yet) an Eclipse member, and have to sign the membership agreement first. To understand why this is required, read this page, then read this page to make sure you fill out the form correctly. No voting privileges without signing the form!

]]>
Meet the Eclipse Community at CeBIT 2010
From March 2 - 6, we will be at the CeBIT in Hannover, Germany.

With the help of the European Marketing Group, members of the Eclipse Foundation are teaming up to showcase Eclipse technologies and solutions at CeBIT 2010. Drop by the Eclipse Island, Booth #D38 in the Open Source Park, Hall 2, to meet representatives from EclipseSource, Actuate, SOPERA, Bredex and the Eclipse Foundation.

Visit the demo theater and see RCP, RAP, BIRT, and Swordfish in action, as well as products based on these Eclipse technologies, such as Yoxos and GUIdancer. And enter the drawing for a chance to win great prizes every day.

I'll be there on Tuesday, Wednesday and Friday and would be delighted to chat with you.

]]>
PDE will tell you
1) Plug-in Selection Spy (ALT+SHIFT+F1). Activate a part or dialog page and hit ALT+SHIFT+F1. Plug-in Spy will open a popup and describe the contents (at least it will try). It will tell you the ID and implementation class of the focused part, which plug-in contributed it, what the active identifiers (menu, help, etc.) are, and what type of selection the part publishes.

2) Plug-in Menu Spy (ALT+SHIFT+F2). Hit ALT+SHIFT+F2 and then pick a menu item. The popup will provide information about where that item lives, its action ID, command IDs if available, etc.

3) In the PDE editor, the Browse... button. Many extensions need IDs provided by another extension. For example, a menu contribution (org.eclipse.ui.menus) needs a commandId (org.eclipse.ui.commands). If you are asked to fill in an ID and there is a Browse... button, use it. It will give you a filterable list and cuts down on cut-and-paste errors.

These three are examples of PDE tools that you probably don't use very often, but when you need them they're *really* helpful.

]]>
Interesting opportunity for RCP/OSGi experts
One of the things I love about being a trainer is that I get to visit and work with so many development teams. Every group of developers has their own chemistry, culture, skills and domain interests. Sometimes I think I’m learning as much as the teams I’m training.

As it happens, one of the teams I’ve enjoyed working with the most is looking for a full-time RCP/OSGi project lead. The company, EXTOL, is a small, successful ISV that creates B2B integration tools. I don’t do this too often, but I wanted to pass this on because I think it’s such a cool opportunity. What’s so cool about it, you ask?

First, I don’t think there’s any better programming job than working for a small ISV. You can have a big impact and what you do matters. A lot.

EXTOL is re-architecting its products from the ground up (this is greenfield development) and they're using a lot of interesting technologies - RCP, OSGi, EMF, GMF and more.

One of the coolest things they’re doing is leveraging OSGi on both the client-side and server-side. Very few projects are leveraging OSGi in this way, and I think the opportunities here are awesome.

Finally, the team is great. The developers are smart and easy to work with, and management knows how to let developers be successful.

So what’s the catch? Well, whether there’s a catch or not depends on what you’re looking for in your life at the moment. EXTOL is located in a small town (Pottsville) in the hills of eastern Pennsylvania. It’s a beautiful area with lots to do outdoors, a great place to raise a family and much more. It’s not for everyone, but I imagine it would be great for more than a few of the developers I’ve met.

If you’re interested, here’s a post on the EXTOL blog that goes into more detail. They’ll also have developers at EclipseCon, so feel free to introduce yourself to one of them (or me) if you’re there as well.


]]>
UML may suck, but is there anything better?
UML has been getting a lot of criticism from all sides, even from the modeling community. Sure, it has its warts:

it is a huge language, that wants to be all things to all kinds of people (business analysts, designers, developers, users)

it has a specification that is lengthy, hard to navigate and often vague, incomplete or inconsistent

it is modular, but its composition mechanism (package merging) is esoteric and not well understood by most

it is extensible, but language extensions (profiles and stereotypes) are 2nd-class citizens

it lacks a reference implementation

its model interchange specification is so vague that often two valid implementations won’t work with each other

its committees work behind closed doors, there is no opportunity for non-members to provide feedback on specifications while they are in progress (membership is paid)

<add your own grudges here>

However, even though I see a lot of room for improvement, I still don’t think there is anything better out there. The more I become familiar with the UML specification, the more impressed I am about its completeness, and how issues I had never thought about before were dealt with by its designers. And it seems that the OMG recognizes some of the issues I raised above as shortcomings and is working towards addressing them. Unfortunately, some fundamental problems are likely to remain.

In my opinion (hey, this is my blog!), for a modeling language to beat UML:

it must be general purpose, not tailored to a specific architecture or style of software

it must not be tailored to an implementation language

it must be based on or compatible with the object paradigm

it must not be limited to one of the dominant aspects of software (state, structure, behavior)

it must be focused on executability/code generation (and thus suitable for MDD) as opposed to documentation/communication

it must be modular, and user extensions should be 1st class citizens

its specification should follow an open process

it must not be owned/controlled by a single company

it must not require royalties for adoption/implementation

My suspicion is that the next modeling language that will beat the UML as we know today is the future major release of UML. Honestly, I would rather see a new modeling language built from scratch, focused on building systems, that didn’t carry all that requirement/communication/documentation-oriented crap^H^H^H^Hbaggage that UML has (yes, I am talking about you, use case, sequence, instance and collaboration diagrams!), and developed in a more open and agile process than the OMG can possibly do. But I am not hopeful. The current divide between general purpose and domain specific modeling communities is not helping either.

So, what is your opinion? Do you think there are any better alternatives that address the shortcomings of UML without imposing any significant caveats of their own? Have your say.

]]>
Myths that give model-driven development a bad name
It seems that people who resist the idea of model-driven development (MDD) do so because they believe no tool can have the level of insight a programmer can. They are totally right about that last part. But that is far from being the point of MDD anyway. I think that unfortunate misconception is one of the main reasons MDD hasn't caught on yet. Because of that, I thought it would be productive to explore this and other myths that give MDD a bad name.

Model-driven development myths

Model-driven development makes programmers redundant. MDD helps with the boring, repetitive work, leaving more time for programmers to focus on the intellectually challenging aspects. Programmers are still needed to model a solution, albeit using a more appropriate level of abstraction. And programmers are still needed to encode implementation strategies in the form of reusable code generation templates or model-driven runtime engines.

Model-driven development enables business analysts to develop software (a variation of the previous myth). The realm of business analysts is the problem space. They usually don’t have the skills required to devise a solution in software. Tools cannot bridge that gap. Unless the mapping between the problem space and solution space is really trivial (but then you wouldn’t want to do that kind of trivial job anyways, right?).

Model-driven development generates an initial version of the code that can be manually maintained from there on. That is not model-driven, it is model-started at most. Most of the benefits of MDD are missed unless models truly drive development.

Model-driven development involves round-trip engineering. In MDD, models are king: 3GL source code is object code, and models are the source. The nice abstractions at the model level map to several different implementation artifacts, each capturing some specific aspect of the original abstraction combined with implementation-related concerns. That mapping is not without loss of information, so it is usually not reversible in a practical way, even less so if the codebase is manually maintained (and thus inherently inconsistent/ill-formed). More on this in this older post; pay attention to the comments as well.

Model-driven development is an all or nothing proposition. You use MDD where it is beneficial, combining it with manually developed artifacts and components where appropriate. But avoid mixing manually written code with automatically generated code in the same artifact.

What is your opinion? Do you agree these are myths? Any other myths about MDD that give it a bad name that you have seen being thrown around?

Rafael

]]>
e4 and "early" compatibility
As the model for the e4 workbench stabilizes, we're back working hard on the compatibility layer. Right now it consists of the gutted org.eclipse.ui.workbench plug-in. The idea is to support the API we have in org.eclipse.ui.workbench, but based on the e4 workbench model and e4 services, instead of the mass of internal code in parts, perspectives, and presentations.

We're taking a two-pronged approach. First, we're creating an e4 IDE application and slowly adding useful views and actions, seeing what is needed to bring them up. We want to support a useful number of views (like the Project Explorer and Problems view) sooner rather than later.

We're also running the org.eclipse.ui.tests.api.ApiTestSuite (after cleaning up internal references in the tests themselves with the aid of a tweaklet). ApiTestSuite covers the most common scenarios (opening and closing windows, perspectives, views, and editors), and supporting our API is a good way to help 3.x plugins run on e4 with the compatibility layer.

]]>
EclipseLink on LinkedIn
Ultimately we got the issues resolved in relatively short order and enjoyed a great meal with some of the consultants on the project in Halifax. I truly enjoy any chance I get to dig into an application and help developers solve their persistence challenges.

During my visit I made some notes on a couple of take-aways.

1. Update the EclipseLink wiki's best practices to include a couple of additional scenarios around long-running transactions.

2. Help connect the existing community of Java professionals using EclipseLink.

I have already started on the first and will post some highlights here when the work is completed. To address the second action item I created the EclipseLink Group on LinkedIn.

The goal of this group is to allow any and all Java professionals who use LinkedIn to connect and share ideas, job opportunities, news, and upcoming events. If this sounds interesting to you please join the group and share your ideas.

Doug

]]>
Dropins diagnosis
While there are certainly areas for improvement, the P2 team does a great job of fixing bugs. They are preparing API, which in my opinion will speed up P2 adoption, because you all will be able to wrap existing functionality into your UI, so there will be much more testing and many more bug reports :-).

P2 will tell you which bundles were found in the dropins/ folder, what request was generated, and what the installation plan is. Maybe it is not a detailed explanation of what actually happened and what went wrong, but it should give you a strong hint about where to start: was your bundle in the plan? Was it an installation problem (a P2 fault), or is it just not optimal to include your feature?

This is not a lot and a lot at the same time ;-).

]]>
Heap Dump Analysis with Memory Analyzer, Part 1: Heap Dumps
Almost two years have passed since the Memory Analyzer tool (MAT) was published at Eclipse. Since then we have collected a lot of feedback, questions and comments from people using it, and we have also gathered experience using the tool ourselves. Most people find their way to solving memory problems with MAT relatively easily, but I am convinced there are also a lot of unexplored features and concepts within the tool, which can be very handy if properly understood and used. Therefore I decided to start a series of blog posts dedicated to memory analysis (with MAT) - starting from the basics and covering the different topics in detail. I will try to answer some of the questions which pop up most often, give some (hopefully useful) hints, explain the benefit of certain “unpopular” queries, and (please, please, please…) collect your feedback.

As the Memory Analyzer is a tool working with heap dumps, I will start with a detailed look at heap dumps – what they are, which formats MAT can read, what can be found inside, how one can get them, etc… If you are interested in the topic, read further.

What Is a Heap Dump?

A heap dump is a snapshot of the memory of a Java process at a certain point in time. There are different formats for persisting this data, and depending on the format it may contain different pieces of information, but in general the snapshot contains information about the Java objects and classes in the heap at the moment the snapshot was triggered. As it is just a snapshot at a given moment, a heap dump does not contain information such as when and where (in which method) an object was allocated.

What Are Heap Dumps Good for?

So what are heap dumps good for? Well, for a lot of things:
If there is a system which is crashing sporadically with an OutOfMemoryError, then analyzing an automatically written heap dump with MAT can be a very easy way to find the root cause of the problem (read more here).
If you want to analyze the memory footprint of your application, then MAT and heap dumps are again a good choice. This combination can also help you find your biggest structures, redundant data structures, space wasted in unused collections, and much more. Such topics will be covered later in this blog series.
If, however, you are trying to find out why too many garbage objects are produced during a certain operation, or want to see which methods allocate most of the objects, then you would need a profiler which collects data over time from the VM. Leak-detection techniques relying on analysis of the objects' behaviour (allocation / garbage collection) are difficult to implement using heap dumps (see object identity below).

Types of Heap Dumps

Currently the Memory Analyzer is able to work with HPROF binary heap dumps (produced by Sun, HP, SAP, etc… JVMs), IBM system dumps (after preprocessing them), and IBM portable heap dumps (PHD) from a variety of IBM platforms. Let’s have a closer look at each of these types.

HPROF Binary Heap Dumps

A detailed specification of the content of an HPROF file can be found here.

Below is a summary of some of the important pieces of information used within MAT:

Information about all loaded classes. For every class the HPROF dump contains its name, its super-class, its class loader, the fields defined for instances (name and type), and the static fields of the class with their values.

Information about all objects. For every object one can find the class and the values of all fields – both references and primitive fields. The possibility to look at the names and the content of certain objects, e.g. the char[] within a huge StringBuilder, the size of a collection, etc., can be very helpful when performing memory analysis.

The call stacks of all threads (in heap dumps from JDK 6 update 14 and above).

IBM System Dumps

On IBM platforms one can preprocess a system dump (core file) from a Java process with the jextract tool, and analyze the result with Memory Analyzer on any other box (the DTFJ libraries have to be installed additionally; see details below in the “How To Get a Heap Dump” section). As the core file contains the whole process memory, this kind of dump also provides all the details seen in an HPROF heap dump (including the field names, primitive fields’ values, stacktraces, etc.). There is even more information available (e.g. process-related information), but at the moment it is not used in Memory Analyzer.

IBM Portable Heap Dumps (PHD)

The PHD files are much smaller in size than the corresponding system dumps. However, they contain less information.
The major difference between the HPROF dumps (or the IBM system dumps) and PHD dumps is that a PHD dump does not contain the values of the primitive fields. Only the non-null references from an object are provided. The second important difference is that the field names are not present, i.e. one can’t distinguish from which field a reference is made, and because of this the presented reference chains (paths) are not as concrete as with the other dumps. Using just the object graph is still enough for the analysis of many memory-related problems, but when the content of some fields is needed to get an idea why an object is too big then one has to use the system dumps.
Usually when a PHD dump is generated there is also a corresponding javacore file. If they are put together in the same directory when the PHD dump is opened with MAT, then some of the data in the javacore file will also be used.

So, having less information has both advantages and disadvantages - the PHD dumps are way easier to transport from a customer (smaller size) and can still be used to find the biggest objects in the heap. And as they are usually written by default, they are a good place to start the analysis. However, in some cases the information is not enough to analyze the root cause of a problem in detail.

A Common API for Them All?

Having different formats for the heap dumps is definitely easier for the VM providers, as they can provide very efficiently the specific data they have. This however doesn’t hold true for the tools, which are faced with the different formats, have to understand each of them, and possibly optimize for every format separately.
An attempt to solve this problem and make the life of tool writers easier is being made under the Apache Kato project and the related JSR 326. They are putting effort into providing a common API for accessing data from vendor-specific snapshots, and thus giving tools a standard way to extract the data needed for post-mortem diagnostics (including memory-related problems).

How To Get a Heap Dump

How to obtain a heap dump depends on the platform and the used JVM. In general all VMs provide the possibility to request a heap dump manually, or to get one written from the VM when an OutOfMemoryError occurs. The second option is very convenient for the analysis of problems happening on production systems, or happening only sporadically, as one does not have to observe the system and wait for the problem to reoccur.
A detailed description how a heap dump can be obtained depending on the JVM is provided here.
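To make the "manual request" option concrete: on HotSpot-based JVMs (Java 6 and newer) you can also trigger an HPROF heap dump programmatically via the HotSpotDiagnostic MXBean. This is a minimal sketch under that assumption; the class name HeapDumper and the output file name manual.hprof are mine:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

public class HeapDumper {

    // Writes an HPROF heap dump of the running JVM to the given file.
    public static void dumpHeap(String filePath, boolean liveObjectsOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // liveObjectsOnly = true triggers a GC first, so only reachable objects are dumped
        bean.dumpHeap(filePath, liveObjectsOnly);
    }

    public static void main(String[] args) throws Exception {
        File dump = new File("manual.hprof");
        dump.delete(); // dumpHeap fails if the target file already exists
        dumpHeap(dump.getPath(), true);
        System.out.println("dump exists: " + dump.exists());
    }
}
```

The same kind of dump can be requested externally with jmap, e.g. jmap -dump:live,format=b,file=manual.hprof <pid>.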

Object Identity

One of the questions we are asked very often is whether MAT can recognize the same objects in two or more heap dumps from the same process. The answer is unfortunately still no. The object IDs provided in the heap dump formats known to us are just the addresses at which the objects are located. As objects are often moved and reordered by the JVM during a GC, these addresses change; therefore they cannot be used to compare objects. Tagging objects as they are allocated is something a profiler could do (usually at a relatively high cost), but in the standard heap dumps described above such information is missing. Some ideas on how to guess identical objects were discussed in this bugzilla entry.

Are Dead Objects Present In the Heap Dump?

Another question which often pops up is whether garbage objects are included in the heap dump. This again depends on the heap dump, but usually a GC is performed before the heap dump is written. Nevertheless, there are always some objects which are unreachable from the GC roots, i.e. which should be thrown away. The Memory Analyzer removes such objects during the initial parsing of a heap dump in order to simplify the analysis. If you want to have a look at the “garbage”, or even want those objects to remain, then find here what to do.

In Closing …
This was my attempt to give a detailed explanation of the different heap dump formats which the Memory Analyzer understands, and also to answer some of the questions we frequently get. I’m sure there are still questions to be answered, and the MAT team will be very happy to get them from you, be it as comments here, in our forum, or in bugzilla.

It seems that I am contradicting myself, because in 1443 I propose to create your own (copy) of a declarative UI and in 1483 I whine about all those similar looking interfaces. How does this go together?

Well, in the case of all the different property stores, they all do the same thing, and having 10 different classes/interfaces is indeed annoying. On the other hand, I simply do not believe that there is "the" declarative UI that solves everybody's problems. Whenever you try to create the ultimate declarative UI you are doomed to fail, because it is either simple but too restrictive, or powerful but too complicated (I'd be very happy if someone could prove me wrong).

The idea of a domain-specific language is to be domain specific! One could argue that UI is a domain and an (e.g. XML-based) declarative UI is a domain-specific language for creating UIs. But in reality, UI is not a single domain. It's like creating a domain-specific language for programming: you end up creating a Turing-complete general-purpose language. OTOH, if you want to create a domain-specific language for form-based UIs on top of EMF, you have a good chance it becomes a good abstraction. But do not expect this language to be used to create something like JDT or PDE.

I do not believe there is the ultimate declarative UI language. But I believe that saving properties is so universal (and simple) that one 'language' should be sufficient.

]]>
The Ribbon IDE - a leaner, modern UI for Eclipse
Hexapixel Ribbon widget into Eclipse. I'm working on this over the holidays. It will be available for free under EPL.

]]>
Xtext - how to start
Sometimes it is better to have a screwdriver than a Swiss Army knife. At least people from the area of DSLs think so. Well, I have decided to follow this path. I need a screwdriver. In our world, one of the options is to use Xtext. Xtext is a framework for the development of textual domain-specific languages (DSLs).

What does it mean?

You can build a grammar description and, with the help of the framework, get:

An EMF model related to this grammar.

A fully functional text editor.

A scaffold for the generation tool.

What for?

I need to record some data in a structured form. I need a model. A text editor is more convenient. But this is only my motivation. There is a sea of use cases.

How to start ?

You can download the Eclipse Modeling Tools distribution from the Eclipse downloads site, or use the update site. Then you can follow the Xtext documentation. And this is the reason for this post: although you don't need to spend weeks learning Xtext principles to build usable tools, there are still some things missing from the documentation.

You have created the project and written your grammar. Now comes the generation step. Is the build successful? Not really. If you look at the Console view, you can find out why: you are generating without ANTLR. It is highly recommended to download and use the plug-in 'de.itemis.xtext.antlr' using the update site http://download.itemis.com/updates/milestones. Of course, the solution is to download the plug-in. After that you can follow the getting started tutorial.

The code is generated, and now it is time to build your Xpand templates to generate Java code from your model. How do you do that? There is no "generate sth.." button in your target environment. The simplest way is to import your xxx.generator plug-in project from the source workbench into the target one. You are almost done. The only thing left is to add the required dependencies to the imported project. It is also helpful to add the Xpand nature to this project. After that you can follow the tutorial, build Xpand templates, and generate Java code.

Hope this short note helps somebody start the adventure with the screwdriver even more smoothly ;) Next time I will show you how to change the editor's default coloring.

]]>
PDE Headless Build and P2
As we say here in France, mieux vaut tard que jamais - better late than never. I spent the last few days (almost 1.5 years after the first P2 release) playing with P2 and the new PDE Build facilities on top of it.

It took longer than expected (I mean longer than what I expected) to grasp the main concepts of P2, but I think I finally get them!!! And I’m happy with that!!!

In order to help others not already familiar with P2 and the PDE Build facilities on top of it, I am gathering here all the links you need to get started:

All the information required to grasp P2 and PDE Build is available through these 6 links!!!!

Good luck, and I hope this helps others get started quicker than I did ;o)

PS: I started this blog entry for ME, to avoid searching Google over and over to find this information.

]]>
Tolerance and Respect
While I no longer work on Eclipse nor for a member company, I regard many, many people in the community as my friends, so I felt compelled to write.

I think one measure of the strength of a community, or of a society for that matter, is the degree to which it tolerates viewpoints and discussions which challenge, irritate, or upset it. At best it may learn from them, at the least it can disregard them while it celebrates the fact that those voices exist. That voice could be yours.

There’s been much discussion about the need for openness. But that openness starts with a willingness to listen to, or at least accept, the words of others regardless of whether they align with our own. It begins with tolerance and respect. When as a community we engage in disrespectful acts such as personal attacks and name calling, and entertain the banning of those whom we no longer have the desire to listen to, we are no longer open. We lessen ourselves, and are not much of a community.

Is this who you want to be?

]]>
Eclipse: Empowering the Universal Platform Technical Briefing
Eclipse: Empowering the Universal Platform is a technical briefing that provides an introduction to Eclipse - the platform, the foundation, and the ecosystem. This is a nice introduction if you're starting out with Eclipse or want to get a better sense of what's available in Eclipse land.

]]>
Eclipse RT Day Toronto
Eclipse RT Day page.) My interest is currently in server-side Equinox. I first started playing with this technology about two years ago, and my current product, Rational Insight, recently released v1.0, which includes a server-side Equinox based component. The runtimes are really coming along nicely and have been stable and usable for some time. For me the big takeaways from the day are:

1. The required ancillary features, like filter support for server-side Equinox, security, and provisioning, have now been or are currently being addressed.
2. Tool support is growing for Eclipse RT.

This one day event was really great. The two tracks kept everyone together for most of the day, there was low overhead for me as this was in Toronto (and where I work no less), and there was only a single day commitment, which is much easier on my schedule than a multi-day conference. Thanks to everyone who came down from Ottawa and other places to present and participate. I hope to see more of these events in the future.

]]>
Altering OSGi Service Lookups with Service Registry Hooks
OSGi Services are great. They can solve many problems very elegantly. Take for instance SPI patterns. SPI patterns are used to make implementations pluggable. Outside OSGi, this type of pattern often comes with a bunch of external configuration (e.g. in META-INF/services) that makes it hard to manage and gives you a one-off chance to choose your implementation, which you're then stuck with for the rest of the VM lifecycle. OSGi Services provide a much more elegant way to solve this. In many cases clients simply want to use some functionality and don't particularly care how that functionality was created. When using OSGi Services, the functionality is looked up in the OSGi Service Registry, which is a directory of available services. Services can be looked up by implemented interface (like a phone book) or by provided attribute (like the yellow pages). How the service got there is not interesting to the client, and the client is not involved in that. What if the service gets replaced while the client is active? No problem - the OSGi Service programming model is actually built around this dynamicity. You don't need to stop service consumers when you are replacing or updating a service; they are automatically rebound.

The Service Registry also provides us with an attribute based selection mechanism which is very nice if there are multiple implementations to choose from. As an example, take a system where every available printer is represented in the Service Registry as a Service that implements the org.acme.Printer interface. Each printer will have additional properties registered that help with the selection:
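In plain OSGi code, such a registration could look like the following sketch. The Printer interface and LaserPrinterImpl are illustrative names, the property values match the first printer shown further down in this post, and the registerService call is left as a comment so the snippet stands on its own:

```java
import java.util.Dictionary;
import java.util.Hashtable;

public class PrinterRegistration {

    // Builds the service properties for printer "p1".
    static Dictionary<String, Object> p1Properties() {
        Hashtable<String, Object> props = new Hashtable<String, Object>();
        props.put("name", "p1");
        props.put("location", "b283");
        props.put("capabilities", new String[] {"Double-sided"});
        props.put("paper-size", new String[] {"A3", "A4"});
        return props;
    }

    public static void main(String[] args) {
        Dictionary<String, Object> props = p1Properties();
        // Inside a BundleActivator you would then register the service:
        // context.registerService(Printer.class.getName(), new LaserPrinterImpl(), props);
        System.out.println(props.get("name") + " @ " + props.get("location"));
    }
}
```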

When looking for a Printer OSGi allows you to use an LDAP-style filter to select the printer you want. So if you want a printer that can do A3 you would use this LDAP filter:

(&(objectClass=org.acme.Printer)(paper-size=A3))

or if you want a printer in a location that starts with 'b' you do this:

(&(objectClass=org.acme.Printer)(location=b*))

This is all great and you can build your system on this using either plain OSGi code or component frameworks such as OSGi Blueprint or OSGi Declarative Services. However sometimes you may want to tweak the properties at a later date without having to rewrite your system. Assume that in your organisation all of a sudden the office numbering has changed, which would really mean that you'd have to update the 'location' attribute of all the printers. Or maybe you want to add some additional metadata to existing printers that influence the selection process, like their energy rating.

Service Registry Hooks provide the building blocks that make this possible. I wrote a little bundle called ServiceJockey that uses these to manipulate service registrations and how they are looked up. BTW you can find details at the end of this posting about getting ServiceJockey, which is open source and available freely under the Apache License.

Service Registry Hooks: what are they?

One of the driving factors for Service Registry Hooks (an OSGi standard introduced in the 4.2 Core specification) was the Remote Services (Distributed OSGi) work done in the OSGi Alliance. One of the things that implementations of the Remote Services spec need to know is what kind of services consumers are looking for, so that they can go out to a discovery system to see if they might be available remotely. Eagerly registering all available remote services is clearly not scalable, so we need a smarter mechanism that allows for transparent, on-demand discovery of remote services. This is what the ListenerHook and FindHook provide. Together they allow us to find out what services consumers are requesting, so that we only need to look in a remote discovery system for those that are relevant to the current framework.

Another problem was frequently brought up: what if you have an existing bundle that doesn't know anything about Remote Services? What is the default behaviour? Should all service lookups all of a sudden always include remote services? The Remote Services spec does include a special property that you can add to your filter to influence this (service.imported), but you clearly don't want to change all your existing service consumers to add this extra property to their lookup filters.

It turned out that there wasn't really a valid answer to that question applicable to all cases. If you're working on a system only using Remote Services for a select set of services it may make sense to default all other services consumers to not use Remote Services. On the other hand, maybe if you're building a Cloud-based infrastructure where services can move freely from one container to another the default behaviour could be that you do allow remote services for just about anything.

Since there isn't really a one-size fits-all here the Service Registry Hooks make it possible to provide a policy for this problem in a separate bundle. You can create EventHooks and FindHooks to influence what services consumers can see. This effectively allows you to add extra conditions to the lookup of service consumers without the need to modify those consumers.
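As a sketch of what such a policy hook might look like, here is a FindHook written against the org.osgi.framework.hooks.service API from the 4.2 spec. The energy-rating rule is my illustration of a possible policy, not ServiceJockey's actual implementation:

```java
import java.util.Collection;
import java.util.Iterator;

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.framework.hooks.service.FindHook;

// Hides services with a poor energy rating from all service lookups.
public class EnergyRatingFindHook implements FindHook {

    public void find(BundleContext context, String name, String filter,
            boolean allServices, Collection references) {
        for (Iterator it = references.iterator(); it.hasNext();) {
            ServiceReference ref = (ServiceReference) it.next();
            Object rating = ref.getProperty("energy-rating");
            // Remove references the requesting bundle should not see
            if (rating instanceof Integer && ((Integer) rating).intValue() > 50) {
                it.remove();
            }
        }
    }
}
```

The hook itself is registered as an ordinary service, e.g. context.registerService(FindHook.class.getName(), new EnergyRatingFindHook(), null), ideally from a bundle with a low start level as discussed below.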

Finally, EventHooks and FindHooks, together with a ServiceListener, allow you to proxy service registrations, where you provide a second registration of the same object with modified properties, hiding the original.

The ability to provide an alternate registration on the fly and the possibility to impose extra conditions on existing service consumer lookups is what I built ServiceJockey around.

ServiceJockey doesn't replace the Service Object, it only effectively replaces the registration in the Service Registry plus it can put additional constraints on client lookups. But you could go further. If you wanted to actually replace or shadow the real service object and do some work when somebody uses it (maybe log it or something) you could put an additional object between the consumer and the original service object.

Because the Service Registry Hooks effectively modify some basic behaviour of the framework (the visibility of Services) they should really be started early in the Framework lifecycle. You can achieve that by giving them a low start level.

The Service Registry Hooks are available in Felix 1.8.0 and Equinox 3.5 and newer versions of these.

Altering Service Registrations

Let's start with a bundle that consumes (uses) a Printer. It isn't interested in which printer, as long as it can print A4:

BundleContext context = ...; // from Activator.start()
Filter filter = context.createFilter(
        "(&(objectClass=org.acme.Printer)(paper-size=A4))");
ServiceTracker st = new ServiceTracker(context, filter, null) {
    public Object addingService(ServiceReference ref) {
        // print out some information on the printer
        // or use the printer
        return super.addingService(ref);
    }
};
st.open();

When I run this it reports both printers that I have in my system:

Printer:
  objectClass: [org.acme.Printer]
  service.id: 27
  name: p1
  location: b283
  capabilities: [Double-sided]
  paper-size: [A3, A4]

Printer:
  objectClass: [org.acme.Printer]
  service.id: 28
  name: p7
  location: a12
  capabilities: [Colour, Staple]
  paper-size: [A4, Letter]

So the client has access to both of them. Now let's see if we can modify the client's behaviour so that it only uses a printer with an energy rating < 50.

But hold on, we don't even know about energy ratings yet! We need to add this information to the Printer registration without changing the bundle(s) that register the Printers. Let's use ServiceJockey to do this.

ServiceJockey uses the OSGi extender model. This means that it is driven by a data file in a bundle.

So I've got a bundle (called PrinterJockey) that contains META-INF/sj.xml:

I will have to tell the Service-Jockey that my bundle contains configuration for it, for that I'm adding a header to the Manifest:

Service-Jockey: META-INF/sj.xml

Lets run the Printer Consumer bundle together with Service Jockey and my 'Printer Jockey' configuration bundle that contains the sj.xml file:

id State  Level Bundle
 0 ACTIVE     0 org.eclipse.osgi_3.5.1.R35x_v20090827
 1 ACTIVE     1 org.coderthoughts.servicejockey_0.0.1
 2 ACTIVE     1 PrinterJockey_1.0.0
 3 ACTIVE    10 Printers_1.0.0

In my case the Printers bundle both registers and consumes the Printers, but that's because it's a little test bundle; it's not really relevant. Note that the Printers bundle has a higher start level than the Jockey ones and therefore starts later.

When I run my system again I can see that it's doing some work. Now my client reports these services:

You can specify a regular expression to say which bundle symbolic names of consumer bundles the rule applies to. In my case .* matches anything, so it applies to any bundle that has OSGi service consumers.

You specify which services the additional filter applies to, as well as the additional filter itself. In my case I'm adding the (energy-rating<=50) condition to any Printer service returned to a consumer (the condition looks a bit dodgy because of the XML escaping of the '<' sign - you could use a CDATA section instead...).

Note that the service filter applies to any service returned to a service consumer, regardless of what the filter used by the client itself looks like.

Let's run the client again:

Printer:
  objectClass: [org.acme.Printer]
  service.id: 32
  name: p7
  capabilities: [Colour, Staple]
  location: a12
  paper-size: [A4, Letter]
  energy-rating: 50
  .ServiceJockey: Proxied

Bingo - it only selects the service I wanted it to select, based on the additional properties and criteria listed in my ServiceJockey configuration file. I did not have to modify the Printer service registration bundle or the consumer bundle...

Is it overkill?

It seems like a lot of work for just adding a service property - is there not a lighter way to at least add values to service registrations? Well, at least that was my initial impression. However, looking at the ServiceJockey bundle, it's only 14kB. Besides, there is currently no other way to achieve this...

Service Jockey - ride your service interactions

I didn't show any of the code to actually use the EventHook and the FindHook. Have a look at the HidingEventHook and HidingFindHook source code for that.

You can check out the source (Apache License) as an Eclipse project from SVN here. If you fancy modifying the code, I would recommend you also get the tests project and add new tests. The test bundle depends on the Mockito bundle for its mock objects...

I'm sure ServiceJockey isn't perfect, because I only wrote it during an airplane flight, so feel free to send patches :)

After almost 10 years at EmbeddedSupportTools/WindRiver/Intel, I'm heading off to pursue a new opportunity. I will be joining National Public Radio as Director of Technology for Public Interactive. This Boston-based division of NPR is responsible for the web technology platform used by many of the NPR affiliate stations. NPR is a great organization with a great mission, and I'm excited to be joining their team.

For my colleagues in Eclipse who know me as the embedded and mobile guy, the leader of DSDP, the guy who's always on the EclipseCon Program Committee, or one of your friendly Committer Reps, this is clearly "something completely different" as the Monty Python crew would say. In my new role, we will predominantly be users of Eclipse rather than contributors, and as such I will unfortunately be stepping down from my leadership roles in the Eclipse community.

I want to thank Wind River for supporting my work at Eclipse and for investing in the CDT, DSDP, and Platform projects. I also want to thank the many leaders in the Eclipse community and at the Foundation, from whom I've learned a great deal about meritocracy, IP policy, coopetition, copyright, governance, collaboration, and free beer.

]]>
Java modularity presentation in Prezi
One of the talks I give most often is called “Why Java Modularity Matters”. This is my attempt to explain how modularity in general and OSGi in particular represent the next logical step in the evolution of software development. I’m actually giving this talk at the Madison Java Users Group tomorrow night, and if you’re in the area please feel free to stop by.

Anyway, I spent some time last week moving the presentation over to Prezi, which I’ve been interested in trying for a while. What I like about Prezi is that it allows you to convey structure and meaning in ways that are impossible with regular slideware.

If you’re interested in what this looks like, check out the presentation embedded below. It’s obviously not meant to convey a lot of information on its own, but you’ll get the general idea. And as always, I’d be interested to hear what you think.

]]>
Mo, Mo, Mo, Movember
crazy idea on Friday afternoon, with a little encouragement from Kevin Barnes, is starting to take off. We now have an Official Eclipse Mommitter team -- a team of Eclipse contributors who plan on growing Mustaches during the month of Movember to raise awareness (and funds) for men's health issues.

But if you're reading this blog it means you're interested in Eclipse and you are encouraged to join the team. In fact, I would like to put a challenge out there: let's try and get 20 people to join the Eclipse Mommitters (no voting needed). Considering there are close to 1,000 committers, 20 people is only 2% of that population. Considering the release train is known to have a few million users, that's less than 0.002%! It would also be cool to get at least one committer from each top level project. (And someone from the Foundation too).

Since November has already started (it's likely November 2nd when you're reading this) we have to act quickly. There are a few things you must do:

Our awesome Eclipse Release Engineers are considering using our Mommitters logo during one of the integration builds to help raise awareness too -- You could be part of Eclipse history!

No doubt you have a few questions, so here are answers to some of the more common questions:

What is Movember?

Movember (the month formerly known as November) is a moustache growing charity event held during November each year that raises funds and awareness for men's health.

At the start of Movember guys register with a clean shaven face. The Movember participants, known as Mo Bros, have the remainder of the month to grow and groom their Mo, raising money along the way to benefit men's health. -- In Canada we are raising funds for prostate cancer, however, different countries are raising funds for local charities related to men's health.

How can women get involved?

While growing a Mo is left to the guys, Mo Sistas do a lot of important work for Movember. Mo Sistas can get involved by:

Registering online, recruiting a team and raising money

Organising events like Mo Parties

Making a donation to a Mo Bro

Supporting and showing love for the Mo

I already have a Mo - how can I participate?

If you already have a Mo you can do a ‘reverse Movember’ and have people donate to you to shave it off. Alternatively, you could shave off your moustache at the start of Movember and then re-grow your Mo throughout the month…. Maybe it’s time to try a new Mo style?

Are goatees or beards allowed?

The definition of a Moustache:

There is to be no joining of the Mo to side burns – That’s a beard.

There is to be no joining of the handlebars – That’s a goatee.

A small complimentary growth under the bottom lip is allowed (aka a tickler).

Remember, it’s Movember, not ‘Beardvember’ or ‘Goateevember’

]]>
Problems with Eclipse buttons in Ubuntu 9.10
After upgrading to Ubuntu 9.10 (Karmic Koala) some buttons no longer work in Eclipse 3.5. Clicking has no effect but keyboard shortcuts still work.

It looks like Eclipse is doing some advanced hacking in SWT on GTK. This bug is fixed in 3.6M2, but you can work around the issue in Eclipse 3.5 by launching Eclipse through the following small shell script (assuming Eclipse is installed in /opt/eclipse-3.5):

#!/bin/sh
export GDK_NATIVE_WINDOWS=1
/opt/eclipse-3.5/eclipse

Future of eclipse: The Road Construction Analogy
Frank Gerhard will host a session at ESE called Symposium on Eclipse Foundation 2.0. Half a year ago I burned my fingers with my "Eclipse Freeloader Award" blog entry. I got lots of negative comments and mails. I also got a few positive reactions. I decided not to continue talking about the future of eclipse.

Today, half a year later, I look back, and one interesting mail I wrote at that time is an e-mail press interview. I decided to post it as I wrote it, and I use it as my position paper for the symposium.

For the last months I have been heavily involved in an internal Wind River project based on eclipse and modeling, especially Xtext, and I have not had much time to work on eclipse except for bug reporting and asking stupid questions on the tmf newsgroup.

I have no idea how the community will react to that (very long) post.

Disclaimer: The text below was written half a year ago. It expresses my personal opinion at that time. I have not followed the recent developments and I might change my mind at any time... ;-)

My reply to an e-mail interview on April 18. 2009:

Q: If you were ready to give the freeloader award, who would be the three finalists for the "honor." Why these guys?

A: I had no particular company in mind. However, it is the general mentality of the industry that frustrates me: the attitude of taking advantage of something like open source without giving anything back to the system. This is also known as the "Tragedy of the Commons". Scott Lewis pointed out that the bigger a community is, the fewer people participate: "The Logic of Collective Action"

From an architectural perspective, there are things to be done in eclipse that should not be driven by the direct interests of some companies, but by the common interest of the community. IBM somehow took this role for a while, when they put some of their best people on the core of eclipse. At that time they were interested in the overall success of eclipse. A few years ago they started removing quite a few developers from eclipse (and put them on the Jazz/RTC project).

I believe there should be an independent group of developers driving eclipse. In my opinion it would be best if they would be paid by the community but act independently in the best interest of eclipse.

Now the question is: why should companies put money into something that is free? If they put money into the system and their competitors do not, they have a competitive disadvantage. They support the community but they have no direct advantage. And in fact the company I work for (Wind River) ended their membership (which is a sponsorship of the foundation) just recently.

That is why I was brainstorming ideas on how to stimulate companies to contribute. I am really afraid that eclipse will suffer in the future because the architecture degenerates over time and there is not enough manpower to keep modernizing the architecture. Yes, there is e4 (the next generation of eclipse), but I am not 100% convinced that this is the solution.

To stimulate companies to give money/resources to support the commons, there has to be a benefit. If there is no positive benefit, why would a company do that? So, my idea was to create peer pressure. One way to create peer pressure on companies is to make them want to avoid negative press. A freeloader award would create negative pressure. No company would like to win the freeloader award.

Q: Other people I've spoken to, including Mike M. at Eclipse, are much less concerned about the problem. Why do you see it as so serious?

A: Well, I probably have a different perspective than Mike. Mike sees the eco system and the new companies joining eclipse. My focus is more on the architecture of eclipse. So, the irony is that although eclipse is widely successful, the underlying architecture ages and dissolves slowly. Partly because eclipse is stretched in so many different directions. It is never good for an architecture to be pulled in too many directions at the same time. When eclipse started, it was a platform for IDE-like applications. It was later retrofitted to be the basis for rich client applications. People ported eclipse to embedded systems. Then it was used for the web and for servers... All this is good for short-term success, but I am afraid that the long-term impact on the architecture could be disastrous. The architecture council is not really focusing on architecture. Partly because the members have never been the ones that created and drove the original architecture. And partly because architecture by committee is doomed to fail.

One analogy for the general eclipse community problem I have been thinking about recently is road infrastructure in a country. Suppose a big company (=IBM) created a basic road infrastructure and decided to make this available to the public for free (=original eclipse).

In the beginning, the big company somehow drove the journey and continued building the free infrastructure. They also had some interest in this infrastructure because they wanted to have a well accepted public road system to deliver their products. But over time they gave the roads to the public and let the other companies continue building the roads. The hope is that the market is self regulating and it would continue maintaining the road infrastructure, and new roads would be built by other companies.

In the beginning the companies understood the plan of the big company and continued building in that spirit. But some companies realized that it is not in their interest to build roads for their competitors. And it makes no sense for them to maintain the highways in general. They might build junctions to their private road network, but hope that others maintain the net of highways created by the big company long ago. Sure, they would fix some obvious holes in the roads. One could argue that roads built like that really reflect the needs of the community. But some companies make sure the competitors do not get access to their key roads. What happens is: many private roads are built in parallel, instead of some new highways. Companies build "public roads" that connect their private roads. If the companies understood that they would all benefit from good public roads, then they could come together and hire an independent road building organization and give them money (a kind of tax) to build the roads that are in the interest of the entire eco system. But at the moment there is no authority to enforce the tax payment. Lots of the roads are public and some companies donate "public roads" to the system. Who could blame those companies? They give something free to the community but also act in their own interest. They are much better than companies that use the public roads without contributing back. But something is wrong here. The nicely designed infrastructure degenerates over time. Can we blame the ones who contribute for that? That is the key to the problem: companies are contributing, but the contributions are not really in the best interest of the eco system.

How to solve the problem? How to make sure that roads are built that are in the common interest? Would it make sense to have a "tax paid" independent organization build and maintain the infrastructure? How can we motivate the community to pay the taxes? Can we blame the companies that fix the roads and add new ones?

Q: Any suggestions, besides public disgrace, to ease the problem?

A: The question boils down to: how can we motivate companies to maintain and enhance the common infrastructure?

Positive motivation: all participants understand that they have to invest in the system beyond their direct interests. But do all participants understand that? And how to deal with community members that do not want to pay their taxes? If someone can get away with not paying the taxes, the others who pay the taxes are the stupid ones. So, positive motivation works if all participants understand that they depend on the system. It essentially requires a certain level of morality. The bigger the community (and the more anonymous), the easier it is to get away without paying the tax. And in a capitalist world companies act in their own interest. Who can blame them for doing so? How else can they survive?

Negative motivation: have a police force, a pillory, public opinion (press), a freeloader award. Something that puts pressure on members that do not want to pay their "taxes"...

Q: I always prefer to quote people by name, but if you like, I can withhold yours.

A: You can quote me. Those who know me know that I can change my mind very quickly. Partly because I simply forget what I say and partly because I like to look at problems from very different angles. Therefore, I could never ever become a politician because you could find lots of statements where I contradict myself.

If I look into the eclipse world, I see that the number of people warning about future problems is underrepresented. That is why I raised my voice. If everybody were screaming and warning about the future of eclipse, I would certainly find a lot of good arguments why eclipse has a bright future, why the current eclipse eco system works so exceptionally well, and why eclipse is a great example of a successful open source project.

But at the moment I think there is not enough awareness of the problems of the tragedy of the commons. And that companies think they act in their own interest, but in the long term and from a higher level, they do not act in their own interest because they "ultimately destroy a shared limited resource".

[sorry for the long reply, but this is a topic that upsets me at the moment because I can see the disaster coming]

As an example, I've created my own Eclipse product. It is composed of the Eclipse Platform, CVS support, the CDT and Mylyn. I'm calling it the ADT (Andrew's Development Tools).

It's not hard to create a feature based product that includes these things, and do a product build to end up with something like this:

As explained in this newsgroup post, there are two kinds of things that are included in an Eclipse install:

Things that are explicitly installed

Things that are required by the things that are installed.

Here in my example, only my development tools feature "org.example.adt" is installed; the rest (CDT, CVS, Mylyn) are required by my product.

Only things that are explicitly installed will be searched for when you look for updates. Also, the installed things generally specify the versions of things they require, which makes it hard to install/update those required items independently of the root product. In both the newsgroup postings I referred to above, the problem was trying to install/update one of the required items without updating the root product.

So the question becomes how to allow updating sub-components of the product without updating the product itself.

Composing for Updatability

What we want to do is update sub-components of the product without updating the root product itself. In this example we do not allow updating the Eclipse Platform independently; to do that, the user will need to update the product itself.

I have created an example builder to do this. Get it from CVS (dev.eclipse.org:/cvsroot/eclipse/pde-build-home/examples/adt.builder).

I am able to get this project from CVS using the CVS repository perspective in Eclipse. If the link to ViewCVS does not yet show the adt.builder project, I expect there is simply some delay in refreshing and the project will show up there sometime soon.

We need to do two things:

Use version ranges to include sub-components in our product so that we allow upgrading those components.

Explicitly install those sub-components so they will be found when checking for updates. This is essentially a book-keeping step.

The ADT .product File

There is an adt.builder/product/adt.product file which we will use to run a product build. If we were to include the features for our sub-components in the .product file, then we would end up with requirements on specific versions of those components. Instead we only include the platform feature [1].

To get requirements to our sub-components, we use a p2.inf file to customize the metadata. We add requirements with entries that look like this:
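A representative p2.inf fragment looks like the following; the feature ids and version ranges here are illustrative, not the exact values from the ADT example:

```
requires.1.namespace = org.eclipse.equinox.p2.iu
requires.1.name = org.eclipse.cdt.feature.group
requires.1.range = [6.0.0,7.0.0)
requires.2.namespace = org.eclipse.equinox.p2.iu
requires.2.name = org.eclipse.mylyn_feature.feature.group
requires.2.range = [3.0.0,4.0.0)
```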

The .feature.group suffix is the name of the p2 Installable Unit corresponding to the features we are interested in. We specify the version ranges in which we will allow those components to be updated.

The ADT Builder

The adt.builder project includes a buildADT.xml ant script which will run a headless product build for us. The first thing it does is download zips containing the things we need. This example illustrates three different ways of reconsuming metadata.

The CDT and CVS both come as zipped p2 repositories. Things that are not referenced directly by the .product file only need to be available as repositories. We can reuse these zips directly by specifying them as context repositories using jar: urls. See the p2.context.repos property in the adt.builder/build.properties file.
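In build.properties this amounts to something like the following entry (the zip paths are illustrative):

```
p2.context.repos = jar:file:/path/to/cdt-repo.zip!/,jar:file:/path/to/cvs-repo.zip!/
```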

Mylyn is not a p2 repository, it is a zipped old style update site. For this, we use a publisher task to generate p2 metadata for it. [2]

The Eclipse Platform is a p2 repository just like the CDT and CVS. It is similar to the delta pack in that it contains the org.eclipse.equinox.executable feature that is needed to get launchers in product builds. Because the platform feature is included directly in the product, we can't just specify the platform as a context repository; we need the bundles available to pde.build like in a normal headless build. To do this we transform the repository using the p2.repo2runnable task. See the transformedRepoLocation and repoBaseLocation properties in the build.properties file. The transformed repository automatically gets included along with the pluginPath property used by pde.build.
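The corresponding build.properties entries look roughly like this (the directory names are illustrative; the property names are the ones the transform task uses):

```
repoBaseLocation = ${buildDirectory}/inputRepositories
transformedRepoLocation = ${buildDirectory}/transformedRepo
```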

Adding additional director calls

In order for our sub-components to be independently updatable, they need to be explicitly installed in our resulting product. By default PDE/Build performs a director install for just the product being built. We can use a customAssembly.xml script to perform additional director[3] calls before the final archive is created.

We make director calls for each of the sub components we allow to be updated. In the example we do CVS, Mylyn, CDT, and the CDT-Mylyn bridge.
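Conceptually, each of those director calls is equivalent to a standalone invocation like the one below; the repository path, profile name, and destination are illustrative, and in the builder the call goes through the generated scripts rather than being typed by hand:

```
eclipse -nosplash \
  -application org.eclipse.equinox.p2.director \
  -repository file:/path/to/buildRepo \
  -installIU org.eclipse.mylyn_feature.feature.group \
  -destination /path/to/adt/eclipse \
  -profile ADTProfile
```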

The final result

Run the adt.builder by right-clicking on buildADT.xml and choosing Run As -> Ant Build... Be sure to run in the same JRE as the workspace. After running the build, the results are available under adt.builder/buildDirectory/I.<timestamp>.

Running the resulting product, we see that the CDT, Mylyn and CVS are all showing up as installed roots, and are therefore independently updatable.

Notes

[1] PDE/Build will automatically generate start level configuration information, but only for things that are included in the .product file. If we didn't include the platform feature, or at least the bundles that need start level information, then this would not happen automatically and we would need to handle start levels ourselves. See the help page here for more information on configuring start levels.

[2] We publish the p2 metadata for Mylyn into ${p2.build.repo}. This property specifies the location of the p2 repository that will be used internally by the build. Publishing the Mylyn metadata here instead of some location specified as a context repository saves the build from mirroring the required IUs into the build repository.

[3] PDE/Build provides a "runDirector" target that can be used to invoke the director. This works by executing the director application in a new process. Normally, this requires setting the "equinoxLauncherJar" property specifying the location of the equinox launcher to use, but because we are calling the director from customAssembly.xml, we inherit this property from the generated assembly scripts.

Running this build produces a properly p2-enabled product. It does not produce a corresponding repository for that product other than the build-time repository ${p2.build.repo}. To produce a final repository containing the product, define the properties p2.metadata.repo and p2.artifact.repo in the build.properties. The product and its requirements will then be automatically mirrored into that repo.
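For example (the repository location is illustrative):

```
p2.metadata.repo = file:${buildDirectory}/finalRepo
p2.artifact.repo = file:${buildDirectory}/finalRepo
```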

Inevitable for growth

From one perspective this has been an inevitable move: code editors have turned into big platforms. They are used for much more than developing pure text-based artifacts. To remain competitive you need to allow the maximum level of integration, openness and visibility, which becomes impossible without open-sourcing all the core components. Conversely, using a permissive license (ASL, BSD, EPL, LGPL, not GPL) can turn your business into a charity organization.

Commercial offering

IDEA Platform plus Java, Groovy and Scala support are all open-sourced. It looks like IntelliJ is retaining some of the revenue stream by keeping Java EE stack closed-source, calling it IDEA Ultimate and offering it as a commercial product.

The Importance of Java Enterprise (JEE) Tooling

I think IntelliJ's decision to keep EE as a separate commercial product is a very important indicator for the whole IDE marketplace, and particularly for Eclipse. Much to my surprise, some very interesting download stats were reported at the last Eclipse members meeting Q3 call: the Eclipse Galileo IDE for JEE gets 41% of the downloads, while Classic and Java combined get 34%! (Off-topic, but the RCP/plug-in edition was at 2%.)

This 41% of JEE downloads does not include 3rd party distribution providers; many application server vendors have their own bundles with pre-configured settings (for example the GlassFish Tools Eclipse Bundle).

Indication for a better future

The IntelliJ move and the Eclipse Galileo JEE downloads clearly demonstrate how important good tooling support is for Java Enterprise development.

The IntelliJ move can also be a sign for Eclipse that there is now a friendly competitor who has bet on outperforming the current Eclipse JEE feature set with a commercial offering.

I have no experience with the IntelliJ JEE offering; is it better than the JEE tooling from NetBeans and Eclipse?

A single Calendar from Desktop to Phone
I'm really happy with the C905 because it has all the features I wanted (Wifi, GPS, 8MP camera, loads of memory) but it still looks like a phone and it's not too big.

One problem I was starting to run into was that I had too many calendars going. I had the calendar in my email client (Thunderbird + Lightning), I had my Google Calendar and now the phone also has a calendar. I really wanted a single calendar that works on all of my devices. I had already been playing with the Google Calendar Provider for Lightning and quite liked it. If only there was something like that for the phone...

After a little digging I found out that Google supports the Exchange ActiveSync protocol. And my Sony Ericsson (and many other models too) supports it as well! It's not very well documented so I decided to write down the steps I took to get it all working.

1. Go to the Organiser and select Synchronization.

2. Create a new Account.

3. Select Exchange ActiveSync and give it a name, I called mine 'Google'.

4. In the General tab, enter https://m.google.com as the server address, and also provide your Google user name and password. I didn't provide any other information here.

5. On the applications tab I only selected Calendar, although the Contacts and Email integration probably also work.

6. Now it asks me to delete all my contacts and calendar items. Very annoying, because I don't want to sync my contacts, but for best results you have to clean them all. So I backed up my contacts first and selected clean here. I can restore my contacts later...

Ok - I'm all set up!

Let's create a calendar event on the phone:

Synchronize with Google...

I think you can even do this automatically, but I haven't enabled that because I don't want my phone bill to explode :)

Let's have a look at Google mail:

Yep, it's there! Now let's see in Thunderbird Lightning:

And in all three modes I have read/write access. Aaaaah, nice!

September Eclipse Board Meeting

Rather than my usual tome, here's a quick update from the Eclipse Board meeting in Boston.

First, the Board voted unanimously to allow jGit to use EDL licensing. jGit is a dependent component of the eGit project. By approving this license, the entire Eclipse tooling stack for git support can now live at Eclipse as a project. This is great news. Vive le git! Or is that "too le git to quit?"

Second, project plans for next year are due. Wayne's blog covers the details much better than I can. Project plans are an essential tool for communicating to the world what you are doing, and from now on, projects won't pass reviews without them. Please work with your PMC on getting your plans together.

The other stuff we covered was the usual: KPI's, operations, money, and the beginnings of a strategic plan for next year. I'll let Mike comment on that.

New Horizons
I have taken a new job at a new company and unfortunately will no longer be working on Eclipse.

I just wanted to say what a great pleasure it’s been working on Eclipse (again). It’s a really fantastic community around some excellent technology. As good as the technology is, it’s nothing without fine people such as yourselves to tend to it, advance it, and promote it. Eclipse is one of the strongest open source communities and it’s been an honour helping to build that in some small way. I’ve really enjoyed getting to know many of you, professionally and then as friends.

Best of luck and hope our paths continue to cross,
Kevin

PS. For those who would like to keep in touch, I am on linkedIn.

Better late than never

I won't tell you where I found it but it has been there for almost 6 years ;) Your day has come!

The bundles to analyze (which can be provided as an API baseline, a PDE target definition, or simply as a directory)

Whether to report API and/or internal references

Optionally, the scope to analyze (use regular expressions to include bundles to search and bundles you are interested in references to, or leave blank to scan everything)

A directory to write the report in

In this example, I ran a report for API & internal references in one of the M2 warm up builds. I only included references between org.eclipse.* bundles. What does it show? Lots of stuff.

OSGi 4.2 specs are now available!
The final OSGi 4.2 specifications are available for download here.

Work continues in the OSGi expert groups on the enterprise specification.

Baseline Eclipse for Java Game Development?
The Eclipse IDE as it is offered for download at eclipse.org has become quite large. Size in bytes is probably not the biggest issue, but I think some of the functionality showing up in the UI is unnecessary clutter for many purposes that Eclipse could be used for. One such purpose I’m personally interested in is game development on the Java Platform (using Scala, Java, or another JVM language).

Some Java game development tools are already being built as Eclipse plug-ins and most likely more will be in the future. But right now they must either be provided as plug-ins for an existing Eclipse installation that has all the clutter, or built as a completely custom Eclipse-based application, possibly an RCP app that doesn’t even include the Java tools. I think this shouldn’t have to be the case: users of Java game dev tools should be able to download a version of Eclipse that is freed (as much as possible) from the cruft they will not need, and be able to install specific tools and engine-specific libraries into that baseline game development environment.

So I’m thinking of creating a lighter distribution of Eclipse JDT + XML tools that removes rarely used features from the Eclipse for Java Developers distribution and is specifically geared towards game development rather than enterprise Java or application development. Perhaps this should also extend to C/C++, but I am not familiar with the CDT so I’m not considering that aspect right now.

This distribution shouldn’t come from eclipse.org of course, but rather from the open source game engine and tools developers. I am willing to work on such a project, as long as it provides actual value to game developers and doesn’t eat up too much of my time. The first version of it could be fairly basic, just a lighter distro of Eclipse. Later versions can start incorporating game dev specific tools and UI concepts. Is there anyone interested in contributing to such a project? Or using such a build of Eclipse? Please let me know.

The first step of the project should be to create a reduced version of Eclipse JDT + XML tools that is still updateable, so that game development plug-ins and plug-ins such as Scala IDE, M2Eclipse, Subclipse, EGit can be installed. I have done some initial experiments and found an approximate list of features that can be easily removed from Eclipse JDT. If you would be interested in using such a distribution, please comment what you would like to be kept:

APT or Annotation Processing Tool support — I think this is very rarely used and just clutters the UI in some screens.

Ant support — I think a lot of people would be against this, so it should probably stay

CVS support — CVS should not be used by anyone for anything other than legacy reasons, now that we have Git, Mercurial, SVN etc. And I expect that game development will not involve much poking around in legacy repositories. I think most open source game tools use SVN (correct me if I’m wrong).

Help — this is debatable, but I would remove it from the first version and decide later what to do with it

Welcome screen — same as help, can be added back later if it’s found to have some use

JUnit3 support — JUnit4 should be enough, I think

Various internal tools and plug-ins, backwards compatibility stuff

I’m estimating that the size of this distro would be somewhere between 50 and 70 MB. The p2 update manager included in Eclipse will increase the size on disk though, because it may create a lot of metadata and caches. But disk space is relatively cheap, and I don’t think there’s a better update manager available. My main concern is removing UI clutter so that game dev tools that plug into it will have more UI space and better visibility. Compare this to the Eclipse IDE for Java Developers distribution, which lists the following “features” (a feature in Eclipse terms is a collection of plug-ins):

org.eclipse.cvs

org.eclipse.epp.usagedata.feature

org.eclipse.equinox.p2.user.ui

org.eclipse.help

org.eclipse.jdt

org.eclipse.mylyn.bugzilla_feature

org.eclipse.mylyn.context_feature

org.eclipse.mylyn.ide_feature

org.eclipse.mylyn.java_feature

org.eclipse.mylyn.wikitext_feature

org.eclipse.mylyn_feature

org.eclipse.platform

org.eclipse.rcp

org.eclipse.wst.xml_ui.feature

Only these would remain (and even those not in complete form):

org.eclipse.equinox.p2.user.ui (this is the update manager)

org.eclipse.jdt (custom lite version with APT and maybe a couple of more small things removed)

org.eclipse.platform (custom, with various plug-ins removed)

org.eclipse.rcp

org.eclipse.wst.xml_ui.feature (possibly custom with a couple of plug-ins removed)

Later versions could add optional updates that bundle engine-specific tools and libraries, Scala, SVN, Mercurial, Git or Maven support etc.
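One cheap way to prototype such a trimmed distribution, before doing a proper custom build, might be to uninstall root feature IUs from a stock install using the p2 director. The sketch below is untested and the removable set would need experimenting with; the ids are taken from the feature list above, with the .feature.group suffix p2 uses for feature IUs:

```
eclipse -nosplash \
  -application org.eclipse.equinox.p2.director \
  -uninstallIU org.eclipse.cvs.feature.group,org.eclipse.mylyn.bugzilla_feature.feature.group
```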

What do you think? Is there a need for such an Eclipse distribution? I’m especially interested in the opinions of Eclipse-based game tool developers, if any of you happen to read this post. Would you contribute if someone did the initial work? Has someone already done something like this (not engine-specific)?

Eclipse Guest (Web) Lecture at Rajagiri College @ India
Eclipse Summit India
Plugin Development - Tips and Tricks" and "Design Patterns Used in Eclipse". In the first workshop I showed how to make a simple RCP application flexible with OSGi Services, extensible with Extension Points, and how to achieve production quality by leveraging advanced concepts like Adapters, JFace Data Binding and the Presentation API, finally deploying the product with the p2 provisioning mechanism. In the second workshop we had a look at OOP design patterns from another perspective: how this or that pattern is used in Eclipse, and what the possible advantages or pitfalls are.

Eclipse has a future.
We’re getting close to releasing 0.9 of e4, so I wanted to take some time to talk a bit more about the context for why we started on this path, and how far we’ve gotten in the last year. To give this some structure, let’s look at a few of the big questions…

1) Does Eclipse have a future on the desktop?
2) What would an ideal Eclipse platform look like?
3) What is e4?

… and maybe a couple of more pragmatic ones too…

4) When is e4 done?
5) What can I do with e4 0.9?

Does Eclipse have a future on the desktop?

Easy answer: “Yes, but not if we sit still”.

I’m sure it’s not news to anyone in our community that, unlike in the early days of Eclipse, there are many credible threats out there when it comes to technologies for building desktop applications. Adobe AIR, Silverlight, even pure HTML5+JS, for example, can all be used to build rich, cross-platform desktop apps. For Eclipse to be a viable platform in this environment we need to innovate on many dimensions, including packaging/distribution/size issues, flexibility and richness of presentation, integration with other technologies and, most especially, ease of development.

We’ve got to get to the point where developers are choosing Eclipse because it’s both easier to use than the other technologies out there and because the results work better and integrate better than anything built any other way.

What would an ideal Eclipse platform look like?

Well, obviously it needs to address the issues above, but there’s more to it than that. To me, the ideal Eclipse platform is one that has the ability to move when there are new pressures on it. Technologies change, uses change, the community itself changes over time, and whenever that happens, either the platform has to be able to move with it, or we die.

The ideal platform should have a strong separation between presentation and implementation, and it should be dynamically configurable to run in the widest possible range of contexts. It should have the smallest possible number of “pre-canned” constraints, and it has to be easy for new people to start using it so that it can be discovered and consumed by whole new communities.

<intermission>

“But wait!”, some of you are saying, “for my uses of Eclipse, I don’t need any of those things. I just use the IDE to develop; I don’t care what it looks like. And besides, we’ve already built all our plug-ins. The most important thing is don’t break us.”

I get it. You’ve invested in Eclipse long enough to get something that works for you, and you just want the world to stop so you can get as much value from that investment as possible. That’s ok. You’re a part of this community too, and we need to support you. That’s why the focus for the R3.x stream of the SDK is the strongest possible backwards compatibility: If you ran on R3.4 or R3.5, we want you to move to R3.6 because it’s effortless.

But the thing is…

This post isn’t for you.

What we are talking about here is what it’s going to take for Eclipse to be relevant in another ten years. This is for those of you who care about making that happen in addition to getting the great day-to-day value you currently get. If you believe, like me, that Eclipse is simply too important to allow it to fade away, then read on.

</intermission>

What is e4?

A few years ago, the Eclipse Project developers were facing what seemed like an insoluble problem: We could see that our industry was heading toward another period of big change and the platform was going to have to react to it, while at the same time we continued (and still do!) to get more and more real-world, important products built on us — products that could not afford to deal with the potential for instability caused by major work on platform internals.

At the same time, we were (yes, finally) beginning to realize that we needed new blood; that the monolithic nature of the team, which had helped us make progress early on, was now hurting our ability to grow our community and potentially risking our future.

This was one of those times when things “just worked”. From those starting points, we have become a new community, based around the e4 Incubator project, that is made up of a wide range of people who are working together to build that “ideal Eclipse platform” for the future.

It’s been a wild ride (and we’re not close to done) but I have to say that it blows my mind how far we’ve come in the last year. I’m not going to go into the individual technology details, but I will point you at John Arthorne’s excellent e4 Technical Overview white paper for the full description.

When is e4 done?

Hopefully, e4 the incubator is never done; it will be there as long as there is a need for the platform to innovate. Today though, you’re probably more interested in:

· when things that are currently being worked on become part of the platform
· when Eclipse 4.0 appears
· when Eclipse 4.0 gets on the release train

We have always said that code that is developed in e4 will mature at different rates. Some of the things that are being worked on could simply be moved to the R3.x stream, once they meet the backwards compatibility and stability requirements. Personally, I’d like to see CSS skinning of the widget layer, improved web browser integration, and the flexible resources work all move to R3.6, but that’s still to be determined.

Some of the changes that are being made, however, are simply too fundamental to ever make it into the R3.x stream. Those will only appear in a new, separate development stream of the Eclipse SDK, with the first version, called “4.0”, shipping a year from now.

The thing to keep in mind though, is that a year from now the odds that Eclipse 4.0 will have been subjected to the kind of testing and tuning that it absolutely must have for it to be used as the basis for product delivery are pretty low. It will be, in the truest sense, an entirely new platform that is capable of hosting your plug-ins. Will it be backwards compatible? Yes. Will it behave in exactly the same way as R3.x, so that every possible subtle interaction with your existing plug-ins is supported? No. Wait… what?

This is probably the most critical part of understanding the Eclipse 4.x stream: Even though we plan to do everything we possibly can to make Eclipse 4.0 backward API compatible, it’s possible that your plug-ins may not run, whether because of use of internals, dependencies on unspecified timing/ordering behavior, or any number of other subtleties. Since the only way we will know about that is if you tell us, our success will depend on you making an investment in trying to run on Eclipse 4.0 builds:

We will work with everyone who does this and finds problems to either improve our own code, or help them fix the incompatibilities in theirs.

Believe me, I understand that we’re imposing on you; if you don’t see value in moving to the next Eclipse, that’s ok, we really do plan to develop the R3.x stream for as long as anyone is using it. But at some point, one of two things will have happened: either the Eclipse platform itself will no longer be relevant, or so much of the community will have moved to the R4.x stream that your consumers will be asking you to move too. My crystal ball doesn’t say when that happens, but I know it will.

In case that sounded too harsh, I must also say that this is going to be an organic process that is controlled by the consuming projects, not by us. The best example of this is the answer to the question about when Eclipse 4.x gets on the release train:

We will not ask to put Eclipse 4.x on the release train until every other release train project with a platform dependency runs on it.

So, no it won’t be in Helios. However, it will be real enough at that point that you should take it, work with it, and help us understand what needs to be done. If you’re an RCP developer, you will be very pleasantly surprised by how much easier it is to get the look and feel you want. For the SDK, we plan to have every single view and editor that is in R3.6 also working on R4.0. That should be a good start on backwards compatibility.

What can I do with e4 0.9?

All that’s great, but we’re shipping something this week too.

The “e4 Compatibility Platform” is what we’re calling the main download for the 0.9 release. It’s not even close to being complete enough to be an “Eclipse SDK” yet, but it is enough for self-hosted development (using views and editors from R3.5(!)) and demonstrates well all of the things we’ve been talking about. Kevin McGuire wrote a great post that describes a perfect example of why we’re so psyched:

Despite some immediately obvious missing pieces (like no save/restore of workbench layout, no main toolbar, no min/max behavior, and numerous small bugs), it’s definitely usable. For the last week or so, I’ve been spending my evenings using the e4 Compatibility Platform (with SchemeWay and Ahti Kitsik’s Virtual Word Wrap [edit: updated link] plug-ins installed) to write some wiki software in Gambit Scheme. For my use, this worked great.

I also know some people on the team are using the e4 compatibility platform for all development. Currently, that takes more perseverance than any of us would like, but the good news is that we know what the issues are, we just ran out of time to fix them.

In any case, just so there are no illusions about exactly how “young” this code is, here are some hints that I learned while using it:

If you can’t get a menu or keybinding to work, try the popup menu instead.

This is a symptom of all the myriad ways we have for hooking up menus in R3.x, some of which just haven’t been wired off yet. That’s a perfect example of the kinds of things that make it hard for new people to understand Eclipse R3.x development.

Don’t check the “always exit without asking” check box when you close the shell.

Until save/restore starts working, it’s painful to have to re-open/re-layout all of your views. You don’t want to exit by accident.

If you close a perspective, don’t immediately open that one again.

If you try, it silently fails. Instead, open any other perspective first, then you can open the original. I’m not sure why this is happening, but it’s just a bug.

Well, that’s where we are. Once R0.9 is out, we’ll be reviewing all of the work areas, identifying the missing pieces, and building the plan to get from here to Eclipse SDK 4.0.

In Closing…

Please, give the e4 Compatibility Platform a try. If you see the value in what we’re doing, please consider participating. We are entirely open to new people who want to work with us on the existing development areas or on anything else that you think is interesting. Even if you don’t think we’re on track, come talk to us: if we’re missing something important, we need to know.

For me, I think we have hit the “sweet spot”: we’ve found a way to continue to work with interesting people, on interesting problems, innovating a future for Eclipse on the desktop, while still providing the backwards compatible, it-just-has-to-work, world that we must have on the release train.

Further steps: Need more people to update the table with results (see http://wiki.eclipse.org/Automated_Testing#UI_tests)

]]>
Portable Eclipse and Portable Java
I like portable software. I carry around a bunch of it on my flash drive. I just found out that both Java and Eclipse are available as portable versions at PortableApps.com. The Java version they have is Java 6 update 14, but Eclipse is stuck at 3.4.2.

Eclipse uses the portable Java, so Java need not be installed on the PC to run it. Nice. The portable version has some trouble finding the workspace, but once you correct the path it works fine.

I updated the Eclipse binaries to the just-released 3.5 and it works great too. A nice way to take Eclipse and your work with you.

]]>
P2 Still Not Awesome
P2 has surely seen a lot of improvement in Eclipse 3.5, but some functionality that was actually somewhat acceptable in the old Update Manager is still lacking awesomeness. I’m trying to install VE into Galileo (Eclipse for Java + M2Eclipse + Subclipse + Scala IDE).

There are two usability issues with this. The first is that after selecting the “Visual Editor” feature from the site, I don’t have a “select dependencies” option. But from past usage of VE I remember that it requires Java EMF Model. So I select that too. A new user would not know to select this. I’ll have to navigate to the next page to find out if there are unresolved dependencies. And then we come to the second problem, which is this screen:

Am I really supposed to decipher this text and take action based on that? No thanks, I’ll just skip installing VE this time, I don’t have actual need for it right now.

]]>
Everywhere a Tweet Tweet..
Not to be left behind in the buzz surrounding Twitter, the Submissions System now tweets when a new talk is proposed. There are already three talks proposed on the Summit Twitter page; new followers are always welcome.

Not planning on presenting a talk at Eclipse Summit but still want to try out the tweets? Then cruise on over to the demo conference and propose a new talk. The demo conference exists just to try out new features. The Demo conference Twitter page already has one follower (not sure how that happened).

]]>
Subclipse and Eclipse 3.5/Galileo
Subclipse 1.4.x is based on the Subversion 1.5 client API.
Subclipse 1.6.x is based on the Subversion 1.6 client API.

Install the version of Subclipse based on the version of Subversion you want to use. This is mainly an issue if you want to use multiple clients with the same Subversion working copy. If you do all of your work from Eclipse, then just grab the latest version. All Subversion 1.x clients can work with all Subversion 1.x servers. So, if possible, just use the latest version.

In other news, Subclipse 1.6.x now includes the CollabNet Merge client. This was developed as part of the merge tracking feature in Subversion 1.5 and makes merging from Eclipse very easy to do and manage. The CollabNet Merge client is part of the CollabNet Desktop - Eclipse Edition, which includes Mylyn and connectors for CollabNet's trackers. The merge client is now also available directly for Subclipse users with no other dependencies. Users that want the full merge client, which adds the change set merge option, can install the CollabNet Desktop.

]]>
Notes & Thoughts from Portland Eclipse DemoCamp Galileo 2009
These topics were the focus of demos at the Portland Eclipse DemoCamp sponsored by Instantiations. Here are my notes from Wednesday night with my commentary at the end. Get in touch with the presenters and check out the links to learn more.

ECF: First up was Scott Lewis (EclipseSource) with a demo of the Eclipse Communication Framework. Some 3.0 highlights:

Next up was John Roberts (Mind Warm): Using Android with Eclipse. John threw out that Android could overtake the iPhone by 2011?! Demo of the Eclipse Android tooling. It seemed very quick to try out and develop with an emulator. Challenges that remain with the toolset: it's tough to design a UI with the base set within Eclipse. One recommendation was DroidDraw, recommended as a WYSIWYG tool.

The CodePro AnalytiX JUnit Test generator needed a new UI. It was challenged with making a modern UI using only the SWT features from Eclipse 2.1 (wow!!). A nice review of the choices and reasoning behind the decisions. But hey... please don't start your demo with "This is not very exciting". It may not have been exciting, but it was interesting. Let your audience decide ;-)

My commentary: I have been to many, many DemoCamps. I have been doing Eclipse development since the dawn of time. What hit me several times last night was the diverse nature of what was being done within and with Eclipse, AND how a large majority of the demos were on OLDER versions of Eclipse. This would never have happened several years ago. Eclipse has reached the maturity level where cool new products can be developed on versions of Eclipse that are 2 or 3 years old. 3.5 and the latest and greatest is really cool (see Ian Bull's countdown and all the Galileo blog reviews), but at the same time older versions are cool as well. Take my day job: we just shipped Jazz Foundation 1.0, which is based on Eclipse 3.4.

Is this bad? I really don't think so. Bleeding-edge, envelope-pushing work can continue at Eclipse as it always has, while the community can adopt and adapt only as much or as little as they deem necessary for their success. This keeps everyone happy and winning. I propose that Eclipse has fully become the framework it was intended to be. Eclipse can now respectfully step back out of the spotlight and allow the applications that have been built, and will be built, on its impressive shoulders their time to shine. Just like every proud parent should do... (These are of course my thoughts alone and may or may not reflect the position of anyone else.)

]]>
EuGENia: Polishing your GMF editor
EuGENia is a front-end for GMF that enables developers to generate a fully functional GMF editor by attaching a few high-level annotations to the Ecore metamodel. The original aim of EuGENia was to lower the entrance barrier for new GMF users and enable people to quickly and easily develop the first version of their editor.

However, after the initial excitement of (at last) being able to get a working GMF editor with minimal effort and no cryptic error messages, all users (including ourselves) used to come to a dead-end. EuGENia could generate an editor that looked 90% similar to what we wanted but when we started polishing the .gmfgraph, .gmftool, .gmfmap and .gmfgen models manually to get to 100%, it meant we couldn’t use EuGENia any more as subsequent invocations of the tool would overwrite our manual changes. Obviously, providing support in EuGENia for all the options that GMF provides wouldn’t be reasonable either as this would progressively make EuGENia as complex as GMF itself.

Our first thought was to try merging (instead of overwriting) the generated with the existing GMF models, but due to the complexity and inter-weaving of these models we’ve found no sensible way of doing this in an automated way. Therefore, we’ve come up with a different solution.

Behind the scenes, EuGENia runs a set of model-to-model transformations written in EOL which generate the necessary GMF models from the annotated Ecore metamodel. In order to accommodate the need for persistent customizations, we’re now allowing developers to specify their own little EOL scripts next to the annotated Ecore metamodel, which are responsible for customizing the generated GMF models.

Let’s go straight into a short example. We define this minimal flowchart metamodel using Emfatic.
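The metamodel itself was shown as a screenshot in the original post. As a rough sketch (assuming the usual EuGENia annotations; the class names here are illustrative, not necessarily the ones from the example project), an annotated Emfatic metamodel for a flowchart might look like this:

```
@namespace(uri="flowchart", prefix="fc")
package flowchart;

// @gmf.diagram marks the root object of the diagram
@gmf.diagram
class Flowchart {
  val Node[*] nodes;
  val Transition[*] transitions;
}

abstract class Node {
  attr String name;
}

// Rendered as a labelled node on the canvas
@gmf.node(label="name")
class Action extends Node {}

// Rendered as a labelled link between two nodes
@gmf.link(source="source", target="target", label="name")
class Transition {
  attr String name;
  ref Node source;
  ref Node target;
}
```

From this single annotated file, EuGENia derives the .gmfgraph, .gmftool and .gmfmap models.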

Now, each time we click Eugenia->Generate GMF tool, graph and map models, EuGENia will perform the built-in transformation and then it will also run our custom script on the generated models so that we can get the tailored output we need. Similarly, by specifying a transformation named FixGMFGen.eol we can customize the generated .gmfgen model (in this example we add a dependency to the figures plugin).
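As an illustration of what such a customization script could look like, here is a small EOL sketch in the spirit of FixGMFGen.eol (the model alias GmfGen and the figures plug-in id are my assumptions, not taken from the post):

```
// FixGMFGen.eol - runs automatically after EuGENia regenerates the .gmfgen model
var plugin = GmfGen!GenPlugin.all.first();
var figures = 'org.eclipse.epsilon.eugenia.examples.flowchart.figures';

// Add the custom figures plug-in as a dependency of the generated editor plug-in
if (not plugin.requiredPlugins.includes(figures)) {
  plugin.requiredPlugins.add(figures);
}
```

Because the script runs after every regeneration, the customization survives repeated invocations of EuGENia.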

Besides enabling developers to use EuGENia throughout the GMF editor development process, this extension also provides a good evolution mechanism for EuGENia itself. Recurring scripts will be good candidates for inclusion in the form of high-level annotations that will be natively supported by future versions of the tool.

The complete source code of this example is available (projects named org.eclipse.epsilon.eugenia.examples.flowchart.*) in the SVN. You can also find another example, as well as more technical details about this extension here.

]]>
After the break up ...

"A lot of people knew I left. I was a fool not to do what Paul did, which was use it to sell a record." John Lennon (1970)

Unless you've been living in a cave for a while, you've noticed that there's an ideological war underway around content. By content, I mean software, music, tv programs, movies, books, and any other piece of information or entertainment you can package. The war is between paid and free, closed and open, restricted and unrestricted. Did this war start with open source software? I'm not sure, but open source has definitely helped arm the conflict. Let's consider the content categories.

When you look for a software application or you need to perform a task with software, you can almost always find free software to do just about anything. In many instances, the free stuff is nowhere near as good as its commercial counterpart (e.g. GIMP vs. Photoshop). This reality keeps us software types employed for now. But the free stuff is still there, and sometimes it's good enough.

Then there's entertainment media sites like hulu, youtube, pandora, last.fm, and countless others that are supplying us with endless time-shifted and (mostly) free entertainment. Sure, it's not always in HD on your giant flat panel or in CD-quality through your audiophile stereo, but often it's good enough.

Switching to books, you can find lots of online material in Google Books or in any number of free audio book libraries. If you're looking for open college materials, check out MIT Open Courseware or the excellent collection of CC-licensed college lectures at Academic Earth. Wikipedia, The New York Times, and many other excellent sources of information are all open and free. Google alone is hell-bent on ensuring all content is in the open, whether people want it to be or not.

And this is all of the legal stuff. For everything else, grab a torrent client, search a database, and (in some cases) break the law to find what you're looking for. DRM? Forget it. For every smart group of engineers that implements DRM, there's another smart group that cracks it. It's a waste of money to even bother implementing it. Perhaps this is why Apple is dropping DRM from much of its iTunes library and why Amazon MP3 never had it in the first place.

Now consider Diller's claim: "people have always paid for content," and once this "accident of historical moment" passes, people will again be paying for it. Are you kidding me?! Um…you know that point in your life when some teenagers drive by in their car blasting music, and you think it's too loud…and then suddenly you feel really old? (Well, it hasn't happened to me yet, but I've heard of it.) Anyway, this is what it looks like when it happens to someone else. And at a Web 2.0 conference no less!

The open content ship has sailed. This content war is about a more fundamental question: the accessibility of information. And the challenge for all of our businesses—software, music, entertainment, publishing—is not about restricting access to content, it's how to support open or very inexpensive content while still making enough money to keep producing it. This is what progress looks like. Diller, you'd better crank up your stereo.

]]>
More from the archives
Thanks to Jim des Rivieres for pulling out his camera and documenting some of the artifacts that we found there. There's a lot of OTI and IBM history amongst the rubble as well as a few things I thought were funny over the years. I could explain it all but that would detract from the fun.

Steve

]]>
I'm moving to Wordpress
]]>
news from the dead
After Bonnaroo, things should calm down a bit, leaving me more time to focus my attention here.

]]>
Minor Changes to Swordfish WIKI pages
Minor changes have been made to the Swordfish WIKI pages. There is now a Swordfish for Contributors category and a section under it called User Documentation for Contributors.

All user documentation aimed at the Swordfish project committers and contributors will be collected there.

All end-user documentation on the use of Swordfish can be found in the Swordfish WIKI pages under the category User Documentation.

]]>
Swordfish Tooling: Sneak Peek
For the past few months we have been working on tooling support for Swordfish. This has resulted in an impressive Swordfish tooling project that helps a user perform various tasks, such as getting started with the Swordfish tooling and creating a JAX-WS service provider project from an existing WSDL.

Getting Started with Swordfish Tooling

A quick “Getting Started” section in our help enables users to set up a Target Platform.

Creating a JAX-WS Service Provider Project from WSDL

We have provided three ways to start this task:

Using the plug-in project wizard: This is the basic method, by which the user can create a plug-in project using the Eclipse Plug-in Project wizard. After choosing a name for the plug-in project and choosing the necessary template, the user can select the WSDL from the workspace. Alternatively, the user can select the WSDL from the file system. After this the new service provider project is visible in the workspace.

Using the import wizard: This method involves the Import option in the File menu. After making the selection as shown in the next screen, the steps to be followed are identical to the previous option, “Using the plug-in project wizard”.

Using the context menu of a WSDL in the workspace: For this method, you must right-click the WSDL and select the Import option. The steps that follow are identical to the previous section, “Using the import wizard”.

]]>
Using Qt to write Equinox-OSGi-UI-Applications
I'll start with a technical topic, because it's a really exciting thing, I guess, not only for me but for the whole Equinox-OSGi/Java community.

For some time now Qt has been available under the LGPL, and as of a few weeks ago their Java binding, named Qt Jambi, is available under the LGPL too. I had been playing with Qt Jambi before (because my UFaceKit project has a Qt port), but now that the code is under the LGPL it's getting more interesting to the wider Java audience, and naturally also to people who use Equinox-OSGi for their applications.

A simple QtJambi-Application

Before digging into the details of what I've done, let's look at a simple QtJambi application when we are not using Equinox-OSGi.
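The sample was shown as a screenshot in the original post. A minimal QtJambi application of that era (against the com.trolltech.qt API; this sketch is only runnable with the QtJambi jars and the Qt native libraries available) looks roughly like this:

```java
import com.trolltech.qt.gui.QApplication;
import com.trolltech.qt.gui.QPushButton;

public class HelloQtJambi {
    public static void main(String[] args) {
        // Initialize the Qt runtime (roughly the counterpart of creating
        // an SWT Display)
        QApplication.initialize(args);

        // Unlike SWT, no parent widget needs to be passed at construction time
        QPushButton button = new QPushButton("Hello QtJambi");
        button.show();

        // Enter the event loop; returns when the last window is closed
        QApplication.exec();
    }
}
```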

This doesn't look much different from an SWT application, besides the fact that one doesn't have to pass a parent when creating a widget, and that instead of running the event loop oneself one simply calls QApplication.exec().

QtJambi and Equinox-OSGi

Couldn't be hard, you think, if you've already used other UI toolkits (SWT, Swing) in your Equinox-OSGi applications. But those were the easy cases: Swing is not problematic because it is part of the JRE, and SWT is shipped as an (in fact multiple) Equinox-OSGi bundle/fragment.

What we need to do is Equinox-OSGify the bundles coming from Qt, but this task is more complex than it looks at first sight. The simple converter provided by PDE doesn't give us a solution, because the QtJambi code expects to load its libraries in a very special way, which means we need to patch their Java code to make it aware of Equinox-OSGi.

The really cool thing is that creating and maintaining the patch is easier than one might think, because they provide their sources through a git repo: one can simply clone it and maintain the patched sources there. Thanks to git, maintaining the patch is easier than it is, for example, to maintain a patch for the Eclipse platform.

The tough thing is to get the environment set up so that one can produce .jars from the sources, because one:

has to compile the Qt sources

has to generate the Java binding classes for the Qt sources (extracted from the C++ header files)

which is a bit time consuming and not documented very well at the moment. Though this is doable for a medium-skilled Java dev, I think one should be able to check out the complete project with native and generated Java code and not have to compile all the stuff.

After having managed to set up a build environment, I patched the library loading classes and recreated the .jar packages. QtJambi is split into 2 .jars:

qtjambi.jar: holds the platform-independent Java classes

qtjambi-${os}.jar: holds the native libraries for the platform and the JNI glue

So the setup is similar to SWT, except that in SWT the Java code is also part of the native fragment (because it differs from platform to platform) and the host bundle is simply an empty bundle. In contrast, for Qt the host bundle holds all the Java classes, and the native fragments hold the native libs and the JNI glue.
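To make the host/fragment split concrete, the OSGi-fied packaging could be sketched with manifests along these lines (all symbolic names, versions and package lists here are illustrative, not the actual ones; a real fragment would additionally need the patched library loading or Bundle-NativeCode entries):

The host bundle (qtjambi.jar):

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.qtjambi
Bundle-Version: 4.5.0
Export-Package: com.trolltech.qt,
 com.trolltech.qt.core,
 com.trolltech.qt.gui
```

A platform-specific fragment (qtjambi-macosx.jar):

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.qtjambi.macosx
Bundle-Version: 4.5.0
Fragment-Host: org.qtjambi
Eclipse-PlatformFilter: (osgi.os=macosx)
```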

I also added Ant tasks that fetch the native libs from Qt Software and repackage them.

One could also use these ant-tasks when not using UFaceKit (I'm using it for my RCP-Development-Setup).

The Equinox-OSGi support is not fully finished and I'll maybe rework it a bit in the future once I understand the code better, but for now it's sufficient to go on and file a CQ to make use of Qt in UFaceKit. Let's see what comes out of this now that Qt is LGPL.

Simple Qt-Jambi and Equinox-OSGi-Application

Let's create an Equinox-Application which uses Qt as UI-Toolkit now. The easiest thing is to use the PDE-Wizard to create a "Headless Hello RCP" and add a MainWindow.java.

Well, as you see I'm not a good designer and the application looks, well, not really nice, though it does look native on my OS X. This is only faked by Qt, because as far as I understand it they draw everything on the screen themselves.

One could think that this fact is a drawback of Qt, but IMHO it's the other way round, because with this strategy they can support things SWT can't support easily: completely restyling your application using a declarative language. And they use CSS, like e.g. e4 does too.

The first thing to do is to add a method to load a stylesheet to Application.java:
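The snippet was shown as an image in the original post; here is a sketch of what such a method might look like (the bundle id, the stylesheet path, and the exact QtJambi call are my assumptions, and the java.io/java.net/org.eclipse.core.runtime imports are omitted for brevity):

```java
// Assumed helper in Application.java: reads a Qt stylesheet (.qss)
// shipped inside the bundle and applies it to the running QApplication.
private void applyStyleSheet() {
    try {
        URL url = FileLocator.find(Platform.getBundle("my.qt.rcp.app"),
                new Path("css/default.qss"), null);
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(url.openStream()));
        StringBuilder css = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            css.append(line).append('\n');
        }
        reader.close();
        // Qt stylesheets use a CSS-like syntax; applying one restyles
        // every widget in the application
        ((QApplication) QApplication.instance()).setStyleSheet(css.toString());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
```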

As you see, this is also not my design but the one you get when using the Eclipse Forms API. The difference is that in Eclipse one has to learn a new API besides SWT, whereas in Qt the UI code is still plain Qt, styled by a declarative syntax. If you ask me, the Forms API is going to be replaced in e4 by SWT + CSS, but this is only my personal opinion.

So should we all now move to Qt-Jambi to write UI-Applications in Java like we did years ago when we abandoned Swing and started using SWT?

Let's look at some potentially problematic areas:

Qt and QtJambi lack an application framework like the one Eclipse RCP provides for SWT application developers

Qt and QtJambi lack databinding support like the one Eclipse Databinding provides for SWT, JavaBeans and EMF

Nokia removed all resources from QtJambi development and wants to build a community to work on it

For at least 2 of the above there are solutions already today:

e4's core application platform is UI-toolkit agnostic, so although e4 won't be released for another 1.5 years, it would give people the possibility to use Qt as their UI toolkit of choice, which supports many, many things, from animations to multimedia integration, ...

Still, the killer problem is the lacking support from Nokia for QtJambi, and it's unclear whether a community could be built around it that not only maintains it but also adds new features.

I think this is a pity, because with a real application framework arriving with e4, and its theming, multimedia and animation support, QtJambi could offer a real possibility to write cross-platform RCP applications in Java without the sometimes really painful lowest-common-denominator problem we have in SWT.

So what should one do? Though QtJambi looks like a real solution for writing nice-looking RCP applications, the uncertainty caused by Nokia's cutting of resources makes it unusable for most companies.

For form developers, I can point you once more to UFaceKit, which supports both SWT and Qt: your form application code is not affected by changing the underlying technology, but you can still rely on native stuff where needed (e.g. using Qt's animation/multimedia support).

For me, as one of the e4 committers and the UFaceKit project lead, it means:

I'll try to keep the application runtime widget-agnostic if possible (well, we are on a good track here)

I'll file a CQ to let UFaceKit make use of QtJambi and provide first class JFace-Viewer and Eclipse-Databinding support for QtJambi

1. Updated my wrong capitalization of Qt.
2. Please note that I'm using Equinox-specific stuff to make this work, so it is maybe not runnable on other OSGi implementations, but I'm happy to incorporate feedback and suggestions into my git clone to support other OSGi implementations.

]]>
Eclipse Galileo update site organization could be more useful
I saw Wayne's call for Mac users to try the Eclipse 3.5 RC1 Cocoa port. So, being an Eclipse user and a Mac user, I thought I would give it a try. I have been using 3.4 for my OSGi development work since it came out last June. I have updated to each point release and am on 3.4.2 now. I started with the EPP for RCP developers, since that very closely described what I needed to do. All I needed to add to that was an SVN team provider.

So I downloaded the 3.5 RC1 Cocoa driver, unzipped it and started it. But it was rather bare bones. I was missing the extra goodies from the RCP EPP download and, of course, an SVN team provider. So I went to the Install New Software dialog and selected the Galileo site. But it took me quite some time to figure out what I needed to select to get back to the function I already had in my 3.4 install. It would have been much more useful to have the update site organized like the EPPs, that is, by the type of developer I am and the things I need to do. Then I could have found a grouping for RCP developer and installed all the things under that grouping. Since that was not there, I had to consult the 3.4 EPP pages to figure out what the RCP EPP build added and then search for those things on the update site (as well as the Subversive SVN team providers sans SVN connector :-( ).

So I think I have all the function I need installed, but it could have been much easier.

]]>
Eclipse DemoCamp Galileo 2009
A public service announcement for all those Eclipse enthusiasts who live close to Portland... or who need an excuse to visit our amazingly beautiful city!!

ECLIPSE DEMO CAMP GALILEO 2009 – Portland, OR

Instantiations and The Eclipse Foundation will co-host a pizza and salad buffet, including beverages. Come as a presenter and demo the cool stuff that you have been working on in Eclipse and network with your local Eclipse community!

If you have questions, please contact Tina Kvavle at Instantiations. Feel free to pass this along to your colleagues, and be sure to sign up on the wiki if you would like to attend or present! We look forward to seeing you there.

Even though most of the speakers were lawyers (speaking a funny kind of English), I did learn quite a few things:

"Distribution" has a different meaning in US law and European law.

The real power of the EUPL is that it shows the growing political/economical importance of free and open source software.

Licenses that have compatibility clauses might have loopholes to relicense something with a more permissive license. For example: 1000 lines of EUPL + 1 line of GPLv2 results in 1001 lines of GPLv2 which means you've lost the Affero-specific clauses.

Bruno Lowagie, the author of iText, gave a very interesting presentation about his struggle with software licensing and how the inclusion of iText in the Eclipse Callisto simultaneous release helped him to clean up dubious pieces of source code.

]]>
BiRT ANT TaskBiRT ANT Task
BiRT ANT task for use with Apache ANT.]]>
Our Old Friend the libpthread.so thread crashOur Old Friend the libpthread.so thread crash
The libpthread.so thread crash that was fixed in Java 6 update 10 is showing up again. I was using Java 6 update 13 and Eclipse 3.4.2, and everything seemed to trigger it on Ubuntu 9.04 32-bit. I am not sure if the workaround at the bottom of this old bug report works.]]>
HOWTO: Create an archived p2 repoHOWTO: Create an archived p2 repo
Generating a p2 repo can be done a number of ways, from the trivial case for a single feature build (add p2.gathering=true to your build.properties file, as discussed with slides and a working sample here) to the more elaborate, with signed & packed jars (e.g., using the buildUpdate task here).
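For the trivial single-feature case, the change really is one line in the feature's build.properties. A minimal fragment (shown in isolation; the rest of the file depends on your build):

```properties
# build.properties -- enable p2 metadata generation during the build
p2.gathering = true
```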

Should you want an unzipped version of your p2 repo published on download.eclipse.org (e.g., so it can be consumed by EPP or Galileo), watch bug 266374. I'm working on improving this currently manual process (copy zip, then unzip zip) so it'll be less effort and more automated.

As always, feedback & contributions welcome.

]]>
ApacheCON slides: "Enterprise Build and Test in the Cloud" and "Eclipse IAM, Maven integration for Eclipse"ApacheCON slides: "Enterprise Build and Test in the Cloud" and "Eclipse IAM, Maven integration for Eclipse"
Here are the slides from my talks at ApacheCON:

Enterprise Build and Test in the Cloud

Building and testing software can be a time- and resource-consuming task. Cloud computing / on-demand services like Amazon EC2 offer a cost-effective way to scale applications; applied to building and testing software, they can reduce the time needed to find and correct problems, meaning a reduction in both time and costs. Properly configuring your build tools (Maven, Ant, ...), continuous integration servers (Continuum, CruiseControl, ...), and testing tools (TestNG, Selenium, ...) allows you to run the whole build/testing process in a cloud environment: simulating high-load environments, distributing long-running tests to reduce their execution time, using different environments for client or server applications, and, in the case of on-demand services like Amazon EC2, paying only for the time you use. In this presentation we will introduce a development process and architecture using popular open source tools for the build and test process, such as Apache Maven or Ant for building, Apache Continuum as the continuous integration server, and TestNG and Selenium for testing, and show how to configure them to achieve the best results and performance in several typical use cases (long-running testing processes, different client platforms, ...) by using the Amazon Elastic Compute Cloud (EC2), thereby reducing time and costs compared to other solutions.

Eclipse IAM, Maven integration for Eclipse

Eclipse IAM (Eclipse Integration for Apache Maven), formerly "Q for Eclipse", is an Open Source project that integrates Apache Maven and the Eclipse IDE for faster, more agile, and more productive development. The plugin allows you to run Maven from the IDE, import existing Maven projects without intermediate steps, create new projects using Maven archetypes, synchronize dependency management, search artifact repositories for dependencies that are automatically downloaded, view a graph of dependencies, and more! Join us to discover how to take advantage of all these features, as well as how they can help you improve your development process.

]]>
Java BarCamp Paris 4th ed. : Cloud and DDDJava BarCamp Paris 4th ed. : Cloud and DDD
A big success! Google's great office was packed. Two time slots and four rooms made for a total of 7 sessions; here are the two I took part in:

Cloud computing
Not really a Java subject, but it attracted people. We tried to define cloud computing and agreed that there are three kinds of offering:

IAAS (Infrastructure as a Service): these are the typical Amazon products, S3 for storage and EC2 for virtualized servers. Amazon offers a fairly basic service today, with a very powerful management tool as an Eclipse plug-in (see the demo). There is also Elastic Grid, which proposes to develop and deploy easily on the Amazon infrastructure, and GoGrid, an Amazon competitor. I believe that the recent IBM / Sun merger will create new offerings.

SAAS (Software as a Service): here we find a lot of solutions (often based on the previous offering), for example Amazon SimpleDB, Amazon SQS, Google Apps, Microsoft Azure Services, CloudMQ, ZumoDrive ... and the list goes on ...

PAAS (Platform as a Service): hosting the application on a common and scalable platform. This is typically Google App Engine, where you can deploy entire web applications if you know Python. Microsoft probably has something comparable in Azure (I should have a look), and Sun has just launched Zembly.

There was a lot of discussion about offline use, security, and where Java fits in the cloud. For me, offline mode is really important in a world of increasingly nomadic people. The cloud is primarily storage space that lets me share my data between my devices, then an area of services, and finally a deployment platform for my apps.
Done with managing backups that never happen, wasting time finding a way to share data and falling back on a USB key: now my data is in the cloud and synchronized on all my devices. I have set up ZumoDrive in my company and it's very cool; documents are shared even outside the company and I don't have to worry about backups.
Security is the biggest barrier to acceptance in companies; I heard the same remarks about paying over the Internet ten years ago. All these services are secure, and there is no such thing as zero risk.
Java has its place in the cloud on both the client and server side. The multi-platform aspect facilitates development on the client (e.g., the ZumoDrive client is written in Java), and I want to see more and more "cloud-ready" Java APIs facilitating the integration of these services in code. Similarly, on the server side, I look forward to Google App Engine in Java.
Finally, the advantage of cloud computing is primarily economic: small companies are the first customers and have found lower costs and flexible capacity.

DDD (Domain Driven Design)
I had little success with this subject at the last barcamp; this time it was proposed by others much stronger than me, who made relevant arguments for the benefits of the concept. One important point raised: we spend too much time talking about technical details and framework implementation instead of focusing on the reality of the domain, something we tend to forget when we want to put our shiny new framework in our code. I talked about Qi4j, which is not a pure DDD implementation but is, for me, the best way to model reality. I want to do a demo of my medical record application implemented with Qi4j to really prove that this approach is relevant.
Of course, I'm convinced that a DDD refactoring of existing code is difficult. DDD is a best practice and a new vision of development. One to follow, for sure ...

Thank you again to the organizers. It is always a good opportunity to exchange ideas. And I hope Google will open their doors as often as possible ;)

]]>
Got a bug? Who ya gonna call?Got a bug? Who ya gonna call?

No one!

Seriously, this is the problem we face all the time when users run into a problem with our software. They don't call; they figure out an inefficient workaround, get frustrated, or stop using the software. It's counterintuitive, but I've seen it over and over again. The reasons and excuses are endless, but it all boils down to the fact that few problems are ever going to make it back to you.

Have you read Joel Spolsky's The Joel Test: 12 Steps to Better Code? You should. I like it enough to add a thirteenth step: log all error messages. Logging errors to a file that no one reads doesn't count. You need to track the errors so that you can be proactive in finding solutions.

We took Manoel Marques's Plugging in a logging framework for Eclipse plug-ins and extended it so all errors are logged to a database and emailed to our user support group and the appropriate software developers. We've even gone as far as having wiki pages that show the latest trends in real time.

This doesn't mean we're being constantly interrupted during our work day. But it does give us the flexibility to monitor users and jump in to solve a usability, training, or software bug issue.

Eclipse is robust. I've been amazed at how well it handles nasty NullPointerExceptions without crashing. Don't get me wrong, that is a good thing, but it can hide problems from the user. In a lot of cases our users don't know they are in trouble until we burst into their office like Murray, Aykroyd, and Ramis looking to bust up a few ghosts.

We found it a great help to extend Marques's code to let us customize the logging properties. We load log properties in the following order.

Load from the user's home account

We can drop a custom logging properties file into the user's account, restart our application, and monitor what's happening in greater detail.

Load from command line

Customize logging from the run configuration dialog. Developers use this to avoid inundating the group with errors.

Load from bundle

This is what is shipped to our clients. Thanks to Eclipse plug-in fragments, each client gets to set up their own settings.

Load from default

In case the log properties file is broken, we have a default setup. Paranoid, you say? Yep!
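The cascading lookup above can be sketched in plain Java. This is a minimal illustration rather than our actual code: the LogConfig class, the file locations, and the logConfig system property are hypothetical names chosen for the example.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Sketch of a cascading logging-properties lookup: try each candidate
// location in order, falling back to hard-coded defaults if none loads.
public class LogConfig {

    public static Properties load(String[] candidatePaths, Properties defaults) {
        for (String path : candidatePaths) {
            if (path == null) {
                continue; // e.g. no -DlogConfig=... was given on the command line
            }
            try (FileInputStream in = new FileInputStream(path)) {
                Properties p = new Properties(defaults); // defaults fill any gaps
                p.load(in);
                return p;
            } catch (IOException e) {
                // Missing or unreadable file: fall through to the next location.
            }
        }
        return defaults; // the paranoid last resort
    }

    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("log.level", "WARN");

        String[] locations = {
            System.getProperty("user.home") + "/log.properties", // 1. user's home account
            System.getProperty("logConfig"),                     // 2. command-line override
            "config/log.properties"                              // 3. copy shipped in the bundle
        };
        System.out.println("log.level=" + load(locations, defaults).getProperty("log.level"));
    }
}
```

In the real application the bundled step would resolve the file inside the plug-in fragment rather than a relative path, but the fallback order is the point here.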

We have a set of standard logging tags. When you play back the log file (think airplane black boxes), it makes it easier to figure out what the user was doing leading up to their problem.

We log messages to the user's log file and to a PostgreSQL database, and send emails. Having errors stored in a database allows us to generate wiki reports showing the latest trends in usage, bugs, etc.

The payback: obviously fewer bugs, but what was really important was a more relaxed user community willing to help us improve the software. In many cases we're not tracking down software bugs but working to improve software usability. Users want to use our software, I have more time to spend writing new code, and tracking down bugs takes less time.

]]>
March 2009 Eclipse Board MeetingMarch 2009 Eclipse Board Meeting

While most EclipseCon attendees were enjoying the excellent tutorials or participating in the annual Members Meeting, the Eclipse Board was quietly having our quarterly face to face board meeting. As usual, here is a brief summary of our meeting.

Elections. We said goodbye to elected Board Members stepping down at the end of their terms: Robert Day, Mik Kersten, Jeff McAffer, Emma McGrattan and Tracy Ragan. The committer reps would especially like to recognize our fellow reps, Mik and Jeff. Jeff has been on the Board for several years and has contributed extensively to the direction of Eclipse, both in his project leadership and participation and at the Board level. I have personally worked with Mik over the past year on committer issues, and he is a strong community advocate, in addition to leading one of the coolest projects at Eclipse, Mylyn. To this year's re-elected Committer Reps, Chris Aniszczyk, myself (Doug Gaff) and Ed Merks, we welcome newly-elected Boris Bokowski to our ranks. Boris has a long history with Eclipse and will serve the committer population well.

Key Performance Indicators. At every Board meeting, we review a large deck of slides called KPI's. These slides summarize progress on the Strategic Initiatives for the Foundation, membership statistics, and other vitality metrics such as site traffic, download stats, project activity, EPIC stats, etc. While some of the information is Board confidential, much of it can and should be shared with the public. Doug and Mike agreed to work on publishing some of the KPI's to the community for better visibility. Stay tuned.

EclipseCon. We reviewed the statistics on EclipseCon registration and expenses. While attendance was down this year, anecdotally because of attendee travel restrictions, the conference was still very well attended. Bjorn and the EclipseCon staff worked diligently to keep expenses in check with income while not affecting the quality of the conference. You probably didn't even notice the cost-savings measures, as most were completely hidden. The food, beverages, receptions, and wifi were all on par with past years. The Board formally recognized Bjorn for his incredible commitment to making EclipseCon a great conference.

Git. Yes, we discussed git, too, although my other post has much more current information on the topic.

Automotive Industry Working Group. Ralph Mueller, Director of Ecosystem in Europe, presented an update to the Board on the Automotive Industry Working Group currently under discussion with industry participants. This will be the second IWG created at Eclipse. (Pulsar, the Mobile Working Group, is the first.) In short, things are progressing nicely and Ralph hopes the group will form later this year. The Board feels very strongly that Industry Working Groups are a critical future direction for the Foundation and Ecosystem, and we're excited by the progress.

Membership and Finances. It is no secret that the entire world is in a massive economic recession. The Eclipse Foundation and Eclipse Member companies are not immune to this reality, and the Foundation has experienced a net drop in membership. The Executive Director (Mike Milinkovich) and the Financial Officer (Chris Larocque) presented a very detailed revised 2009 budget that cut expenses in several areas while still preserving the level of service and staffing that the community has come to rely upon from the Foundation. The board was very impressed with the detail and thoroughness of the analysis, and we want to remind the community that the Eclipse Foundation is in excellent managerial hands. On an additional positive note, we'd like to welcome our first Enterprise-level member, Research in Motion (RIM).

Finally, as is apparently a tradition for March Board Meetings, we had a guest speaker. (Think "tutorial envy".) John Hagel from the Deloitte Center for Edge Innovation spoke to us about "Shaping Strategies". While I don't expect to do this topic justice in a single sentence, Shaping Strategies are proactive market strategies that leverage disruptive technology and allow a company to instantly grab market share in a new space. Think of Google's foray into the telecom space as one example. Check out John's paper for a much better explanation.

]]>
What a great EclipseCon!What a great EclipseCon!
planet knows, last week was EclipseCon time. As always, it was a great experience. Seriously... at how many conferences can one discuss, over lunch, the double-checked locking pattern or how EMF could support facets? (thanks Eike and Kenn)

All talks I've attended were awesome. It's reinvigorating to see all the energy and technology happening behind Eclipse: from known technologies to hot topics like E4 and XText.

And it gets better! The people at the conference are simply fantastic. Just to name a few, it was great to chat with Nick, Chris, and Boris and to finally meet in person Tom, Kevin, Peter, and Paul Webster. There are many others that, although not listed here, have contributed to make last week a memory that will not fade away from my mind.

On a more personal note, I like to think that the EMF tutorial was well received. We chose to go deeper into several subjects instead of mechanically reading all the slides. Unfortunately, because of that, we didn't cover all the material. The offer to be available to answer individual questions after the conference is still valid ;-)

I got some positive reviews for the modeled UI in e4 talk, which definitely doesn't seem to agree with the official numbers. I wonder if we were too "slide driven" while the folks attending were expecting to see more action. Or perhaps my delusional-self is correct and people did mistake the '+1' for the '-1' bucket. Oh well... I am sure we'll do better the next time.

Finally, a special thanks to Steve: I was peacefully seated at the closing event when he handed me a winning deck for Thursday's poker game. My son loved the RC boat and will enjoy playing with it in the summer.

Another finally... The EMF book was the first one to sell out at EclipseCon! I had the pleasure of autographing a few copies. On the subject, EMF and modeling in general are everywhere. It is good to see something I helped build making others more productive ;-)

]]>
Updating UI Elements From Command HandlersUpdating UI Elements From Command Handlers
upcoming talk at EclipseCON (should I keep plugging EclipseCON, or is that enough already?), and one of my simple samples was building on the Hello World, CommandPDE sample. I wanted a simple toggle command instead of the default push-style command, but I could not get the menu item to toggle state (checked versus unchecked) when I executed my command with the keybinding. A little searching led me to this newsgroup post, and an eventual solution.

]]>
Ant WizardryAnt Wizardry
Ant. Back in the old days, I always wished there was an easy way to do simple string replacement/concatenation, creating a new property as a result. Sure you could use the antcontrib tasks, but I usually did not have access to them when it really counted and it was one more dependency to worry about. The other option is to mess with temporary files or some other naughty-feeling hackery (I think I used pathconvert once to do something like this and felt really dirty about it).

Anyway, as of Ant 1.7, check out what is possible. Run the following build file:

test:
     [echo] abc=One two three four
     [echo] one=two
     [echo] three=four
     [echo] test=two two four four
     [echo] test2=two two foul foul

BUILD SUCCESSFUL
Total time: 0 seconds

I wish I had this in my back pocket about four years ago...

]]>
EVars UpdateEVars Update
Semantics for Xtext LanguagesSemantics for Xtext Languages
The semantic annotation toolkit provides support for another approach. Languages are built from scratch (i.e. there is no reuse between grammar fragments), but the semantics of various language building blocks are still reusable. By annotating grammar elements with semantic annotations, the necessary Xtext infrastructure can be generated to make those grammar elements behave in a specific way.

Technically, this is implemented by generating extensions and checks as well as by model transformations and extensions of the meta-model.

Because the selected movies have the same director and same writer, those values are displayed in the master-detail fields at the bottom. However, since they have different titles and release dates, those fields use a stand-in "multi value" instead.

Editing the release date field simultaneously changes the release date of all movies in the selection.

Here's the code that was used to hook up the detail fields to the multi-selection of the viewer:

Disclaimer: The method DuplexingObservableValue.withDefaults() was added after the 3.5M5 milestone, so in order to use it you need to check out the databinding projects from CVS HEAD, or use a more recent integration build.

If you use TM as a dependency for your offering, you may want to check that the stuff you need works in M5!

What can you do?

Just try out the stuff that you'd like to work fine, and file a bug if it doesn't. We'll provide a bug reporting template for you, so it's super fast and easy to participate.

How long will it take you?

If you've just got 1 hour for downloading, installing and trying it out that's a very valuable input for us already. Of course you're free to report enhancement requests as well!

Any additional information like the test candidate to download, bug reporting template, and other information will be on the Eclipse Wiki. For any other questions, please contact us on the TM mailing list.

Latest update (1-Feb): Test downloads have been provided, and instructions are updated. Thanks for joining the public test!

]]>
EMF crafting wondersEMF crafting wonders

Some days ago, while chatting with a colleague (very clever at electronics, I must say), I was asking myself how to easily achieve command handling with queues. The goal was to asynchronously post/consume commands to drive RGB LEDs on a card connected over a serial USB connection.

It came to my mind that I would have to model something before coding anything ... ;-)

Doing so, I got to thinking about how to design an application with several "threads" concurrently processing commands posted to and/or consumed from queues, in transmission (Tx) or reception (Rx).

Here is what I finally got after some time:

The central piece is "CmdEngine", owning the Tx/Rx queues as well as a list of events, effectively a record of what happened in the system over time.

Okay, this is a model! Where are the "threads", "synchronized", "volatile"?

I know; first, let me tell you that I prefer Jobs to plain threads. As you can see in EngineClient, we introduced an attribute which is actually the core of command processing!

Trust me or not, in conjunction with EMF Transaction and EMF Query, Jobs are a very simple and very efficient solution for creating asynchronous applications in Eclipse.
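For readers outside Eclipse, the post/consume idea can be sketched with plain java.util.concurrent primitives. This is only a rough analogue of the Jobs-based engine (the MiniCmdEngine name and the poison-pill shutdown are my own choices for the example); the real application relies on Eclipse Jobs plus EMF Transaction and EMF Query.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal analogue of a Tx queue: one thread posts commands, a consumer
// thread processes them asynchronously and records events as it goes.
public class MiniCmdEngine {
    static final String STOP = "__STOP__"; // poison pill that ends the consumer

    public static List<String> run(List<String> commands) throws InterruptedException {
        BlockingQueue<String> txQueue = new ArrayBlockingQueue<>(16);
        List<String> events = new ArrayList<>(); // records what happened, in order

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String cmd = txQueue.take(); // blocks until a command is posted
                    if (STOP.equals(cmd)) {
                        return;
                    }
                    events.add("processed:" + cmd); // only this thread touches events until join()
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        for (String cmd : commands) {
            txQueue.put(cmd); // producer side: post commands without waiting for processing
        }
        txQueue.put(STOP);
        consumer.join(); // happens-before: events is safe to read after this
        return events;
    }

    public static void main(String[] args) throws InterruptedException {
        // prints [processed:red, processed:green, processed:blue]
        System.out.println(run(java.util.Arrays.asList("red", "green", "blue")));
    }
}
```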

Using just the code generated from the previous model and a simple framework built with EMF Transaction, GEF and BIRT, I put this application together in three weekends (a video would have been a better demo for something that moves):

This application lets you monitor commands posted on the queues, all live!

I created a SourceForge project: xqz.sf.net ("xqz" stands for "Cross Queues"). I also made an RCP application available here. (Thank you, PDE Build and JUnit!)

I'm not sure whether this could become an Eclipse technology project, but I had a lot of fun crafting all this MDD/EMF(T) stuff.

]]>
BPMN WebinarBPMN Webinar
The BPMN modeler project will be hosting a webinar on the 12th of February. Please contact us on the BPMN modeler newsgroup if you have specific topics in mind, or have questions about it.]]>
Welcome LilyWelcome Lily If finishing school, buying a house and starting a job wasn't enough excitement, on Friday my wife gave birth to our second daughter, Lily Marie Bull. Lily weighed in at 8lbs 7oz, and is already starting to understand the complexities of Graph Visualization, Modeling and of course, p2. :) Both Tricia and Lily are doing Great!

]]>
Eclipse 3.3 Startup Changes Take ThreeEclipse 3.3 Startup Changes Take Three
Hopefully this will be the last post with the words “startup changes” in its title for a very long time to come.

As mentioned previously, in 3.3 we prevent unknown Runnables from executing during the startup procedure via Display.syncExec() and Display.asyncExec(). I last mentioned a strategy for avoiding use of such runnables during the initialization of editors. I believe that advice is still valid. However, there are scenarios where you may legitimately need access to these methods. Without them, for instance, splash screen implementors are forced to spin the event loop themselves if they want to do any clever UI work while the workbench is coming up. In answer to this problem, we've added the new org.eclipse.ui.application.DisplayAccess class as API in the 3.4 stream. This class has one static method, accessDisplayDuringStartup(). Calling this method on any thread not created by the UI (i.e., any user thread) will allow that thread to access the Display.asyncExec() and Display.syncExec() methods as if it were one of our privileged startup threads.

This simple splash handler creates a canvas on the splash shell that will initially have a yellow oval taking up the entirety of the drawing area. We then create a thread which declares that it will be used to access the display during startup. After a short nap this thread will set the color of the oval to green and cause a repaint. You can verify that without the call to DisplayAccess.accessDisplayDuringStartup() the oval will remain yellow until the splash comes down.

]]>
Upcoming Changes to the Transform BundlesUpcoming Changes to the Transform Bundles
I wanted to give a heads-up to anyone who's currently using the org.eclipse.equinox.transforms projects that live in the Equinox incubator. They will be changing shape shortly in an effort to resolve some intractable build issues (alluded to previously). While providing transforms will remain virtually identical, there will be a migration path for older transform bundles. I will outline the changes when the new code is in the incubator, which I expect to happen sometime in the next week or so. In the meantime I've tagged the existing code with Version_1 for any clients who were making use of it.]]>
Guess What HappenedGuess What Happened
Three things are certain:
Death, taxes, and lost data.
Guess which has occurred.

It all happened so quickly. I had created a new project in Eclipse to add some minor library to the prototype application I had been writing for three days. Since my projects sit outside the workspace, I had to uncheck the little checkbox suggesting I put the new project in the default location and type the new location (a directory not far from there) myself.

For some strange reason, when you uncheck this box, the text field with the directory path becomes empty, so despite the fact that the directory I needed was right beside the one written there by default, I had to switch to Explorer and copy and paste the path manually into the field.

This is where I erred. After pasting the directory, I forgot that this was the path of the directory where all my projects resided (the parent directory), not the directory of the new project I wanted to create. Before I realized that, I hit OK, and there it was: my new project, created in the projects' parent directory instead of a nice little directory of its own.

Well, I suppose these things happen, said I when I realized my mistake, I will just delete that project and create a new one in the correct directory. So I said, and pressed the "delete" button.

But this story is not about me, it's about Eclipse. After I pressed that button, Eclipse inconspicuously asked me whether I wanted to remove the project from the workspace only or delete its contents as well, all with a straight poker face, calm as a frozen lake. I thought, "why would I need to keep the contents of an empty, just-created project in the wrong location?", so I hit "delete contents", and that is exactly what Eclipse did.

It deleted the contents of the entire directory: the three meaningless files that were created with the new plug-in project, and ALL SUBDIRECTORIES WITH ALL MY OTHER PROJECTS as well.

It was almost too easy. BOOM - everything was gone.

I know what you are thinking, so NO - I did not have a backup, NO - it was not in source control, and NO - I did not have anyone to blame. Three days of work were gone. Eclipse does not have any Recycle Bin, I did not have good recovery software installed, and I only have one logical drive - so if I installed recovery software now, I would risk overwriting the very data I was trying to rescue. The irony tends to get thick in situations like these.

It looked hopeless, but I know you are waiting for the happy ending, so here it comes.

Once, at one obscure Eclipse conference, I heard that there is such a thing called "local history" in Eclipse 3.2, and that this history saves all the files you work on, so you can go back, undo changes you have made, and compare previous versions. Luckily for me, this history is saved in the workspace's .metadata directory, which was not deleted during the accident. After a few minutes I found it: a bunch of tiny little files with long meaningless names, just lying there happily, each holding a class full of my precious code.

It took only three hours of scraping around to restore 90 percent of the code I had written, and I filled in the rest - so in almost no time I had all my stuff back in good working condition. Even better, in some cases I found bugs while reviewing the code and looking for the changes.

Now for the moral.

First, do back up, do use source control, and don't be an idiot.

Second, be extra careful with what you DELETE in Eclipse. Eclipse has its own file system underneath, so everything it deletes does not go to the Windows Recycle Bin, or any other place you can salvage it from. This is just plain barbaric. Even my browser has a trash can, in case I accidentally close a tab I have not yet bookmarked.

Third, just for the sake of emergencies: find, try, and buy a good piece of software that restores deleted files, and keep it installed on your computer at all times. It might save you some nerves one day or another.

And last, listen carefully at conferences; you never know what might save your day next time :)

]]>
Calling all artistsCalling all artistsThe Google Summer of Code (SoC) program [1] is underway and Eclipse has over 20 projects [2]. SoC, which is sponsored by Google, provides funding for students to work on open source projects over the summer. We (the students) submitted proposals which were reviewed by several Eclipse committers (and many of your favorite evangelists). While the program is sponsored by Google, the management and mentoring is done by the Eclipse community. For the past month many of the students have been getting involved with their projects, meeting people on mailing lists / newsgroups, getting access to the code, etc...

Finally, we (the students) will be using this blog [4] to update the community on our progress, solicit ideas, etc. Over the next few days some of us will outline what we have been up to and how we are proceeding with our work.

Since my last blog entry about the project, there've been changes in the main goals of the project after discussion with my mentor Francois Granade and Daniel Megert.

Here are the current goals of this project:

General Goal:

Unifying "Search" facilities so that there won't be three different search (and replace) functionalities in Eclipse

Specific Goals:

Unifying Ctrl+F (the Find/Replace dialog) and Ctrl+J (Incremental Search) by mostly behaving like Ctrl+J, and providing better UI utilization by not showing modal dialogs unless the user requests wider options. Summary of the idea: a Firefox-style "Find" with enhancements, i.e. a lightweight Find/Replace. Our main goal here is providing all the functionality available in Ctrl+F within this lightweight Find/Replace. Here's the related Bugzilla entry: https://bugs.eclipse.org/bugs/show_bug.cgi?id=195455

Providing "bridges" between Ctrl+F and Ctrl+H. This means carrying query information and search parameters back and forth between them when possible. To explain it better: it doesn't mean "sharing" input/settings; it enables the user to easily transfer "Find/Replace" input/settings to "Search" and extend his/her current search scope & query.

Investigating better presentation of the search results in Ctrl+H (adding an alternative view, like the Problems view, with results in a table might be a good way to show result lines and numbers, paths, etc.).

Current Progress

I'm working on the lightweight Find/Replace (the first specific goal above) right now. I've converted Incremental Find's Label widget on the status bar into a Text widget, and using IFocusService I've managed to enable cut/copy/paste input for this Text widget when it is focused.
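The IFocusService wiring described here can be sketched roughly as follows; the control variable and the focus ID string are hypothetical, but addFocusTracker() is the real entry point:

```java
import org.eclipse.swt.widgets.Text;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.swt.IFocusService;

// Sketch, not the actual patch: register the status-bar Text widget with
// IFocusService so cut/copy/paste handlers can be enabled while it has focus.
// The "findText" variable and the id "com.example.search.findField" are
// made up for illustration.
public class FindFieldFocus {
    static void registerFindField(Text findText) {
        IFocusService focusService = (IFocusService) PlatformUI.getWorkbench()
                .getService(IFocusService.class);
        focusService.addFocusTracker(findText, "com.example.search.findField");
        // Handlers contributed with an activeFocusControlId core expression
        // keyed on "com.example.search.findField" will now activate when the
        // Text widget gains focus.
    }
}
```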

For this lightweight Find/Replace, I've tried to create a prototype figure and attached it to the Bugzilla entry https://bugs.eclipse.org/bugs/show_bug.cgi?id=195455. What I have in mind is basically a "Find" field appearing on the status bar on the first Ctrl+F; when the user presses Ctrl+F again, the "advanced feature" set appears as a menu above the status bar showing the necessary checkboxes, parameters, etc. This prototype figure and behaviour are open to change.

After adjusting the behaviour of the code a bit, I'm going to work next week on the floating menu that will show up when the user requests "advanced Find/Replace features".

I've created a feature and an update site for the project, but even though the feature exports fine, when the feature is installed into a new Eclipse installation, Eclipse doesn't start. I've also tried creating a Plugin Fragment project first (since I've made all my modifications to org.eclipse.ui.workbench.texteditor), and then creating a Feature & Update Site for it. This time, everything worked fine except that my changes to org.eclipse.ui.workbench.texteditor didn't show up in the new Eclipse installation.

I've committed my work as a patch to the "soc-search" module in "eclipse-incub". It's very primitive right now, but you can check it out anytime and see what I'm doing. First take a look at readme.txt to see how to build the project.

Feel free to comment here or on Bugzilla. Thank you for your time and feedback.

]]>
From iterator-loop to foreach-loop with one regex.
EMF. We've learned a lot during the process and actually had some fun trying to make sense out of Generics and Annotations. Of course there were also some very annoying tasks, such as converting Java 1.4 "Iterator loops" into the new "foreach loop" style. For example, this snippet
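The conversion in question takes a Java 1.4 iterator loop to a Java 5 foreach loop; a representative before/after pair (variable and method names here are illustrative) looks like this:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class LoopStyles {
    // Java 1.4 style: explicit Iterator plus a cast on every element
    static int sumOldStyle(List<Integer> numbers) {
        int sum = 0;
        for (Iterator iter = numbers.iterator(); iter.hasNext();) {
            Integer integer = (Integer) iter.next();
            sum += integer.intValue();
        }
        return sum;
    }

    // Java 5 style: the equivalent foreach loop, no cast needed
    static int sumNewStyle(List<Integer> numbers) {
        int sum = 0;
        for (Integer integer : numbers) {
            sum += integer.intValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3);
        System.out.println(sumOldStyle(numbers) + " " + sumNewStyle(numbers)); // prints: 6 6
    }
}
```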

Unfortunately, Eclipse's "Source > Clean Up..." magic action doesn't do a good job here, since it doesn't keep the names and types of the "each" variable. At least back in December, I would end up with Object object : instead of Integer integer : for the example above.

Either to boost my productivity or just to exercise the right to be lazy ;-), I've come up with a regular expression that does the conversion for me. It works flawlessly with Eclipse's Find/Replace dialog (Ctrl+F):
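For illustration, a regex in this spirit (a sketch, not necessarily the exact expression used) can be exercised with java.util.regex:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ForEachConverter {
    // Group 1: iterator variable, 2: collection, 3: element type,
    // 4: element variable, 5: cast type (usually the same as group 3).
    private static final Pattern ITER_LOOP = Pattern.compile(
            "for \\(Iterator (\\w+) = (\\w+)\\.iterator\\(\\); \\1\\.hasNext\\(\\);\\) \\{\\s*"
          + "(\\w+) (\\w+) = \\((\\w+)\\) \\1\\.next\\(\\);");

    static String convert(String source) {
        Matcher m = ITER_LOOP.matcher(source);
        // Keep the declared type ($3), the variable name ($4) and the collection ($2)
        return m.replaceAll("for ($3 $4 : $2) {");
    }

    public static void main(String[] args) {
        String old = "for (Iterator iter = numbers.iterator(); iter.hasNext();) {\n"
                + "    Integer integer = (Integer) iter.next();";
        System.out.println(convert(old)); // prints: for (Integer integer : numbers) {
    }
}
```

The same find/replace pair can be typed into Eclipse's regex-enabled Find/Replace dialog; the backreferences preserve exactly the type and variable names that the automated clean-up loses.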

First step: install the plugin. If you are here, I am sure you know how to do it.

Second step: create a Java application which uses JNI.

Third step: set a breakpoint in the Java code after the line that loads the native library.

Fourth step: set a breakpoint in the C code at the beginning of a native method.

Fifth step: open the debug dialog and create a new configuration of the "Java JNI Application" kind.

In the tabs you have to configure the Java and C projects, and specify the javaw path as the C/C++ application. On Linux machines you may have to add the current directory to the library path environment variable.

Then click on Debug.

Sixth step: the Debug perspective will be launched, with the two debuggers.
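For the second step, a minimal JNI application might look like this (class, method, and library names are examples only; the native library must export Java_HelloJni_add):

```java
// Minimal JNI example for the steps above. "hello" must exist as a native
// library (libhello.so on Linux, hello.dll on Windows) on the library path.
public class HelloJni {
    private native int add(int a, int b); // implemented in C

    public static void main(String[] args) {
        System.loadLibrary("hello");        // Java breakpoint goes after this line (step 3)
        int sum = new HelloJni().add(2, 3); // C breakpoint inside Java_HelloJni_add (step 4)
        System.out.println("2 + 3 = " + sum);
    }
}
```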

I've been working on C source file visualisation for three weeks now. I've spent too much time on this and can't find out why it doesn't work.

To summarize:

I have created a JMLSourceLookupDirector

I have added CParticipant and JavaParticipant

I have created a JMLSourcePathComputer which contains a JavaSourcePathComputer and a CDelegate. For the computeSourceContainers() method, I simply merge the results obtained from the JavaSourcePathComputer and CDelegate computeSourceContainers() methods. I checked with the debugger, and it works well.
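The merge can be sketched like this (the field and constructor details are assumptions; the computeSourceContainers() signature comes from ISourcePathComputerDelegate):

```java
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.debug.core.ILaunchConfiguration;
import org.eclipse.debug.core.sourcelookup.ISourceContainer;
import org.eclipse.debug.core.sourcelookup.ISourcePathComputerDelegate;

// Sketch of a combined source path computer: simply concatenate the
// containers found by the Java and C/C++ delegates.
public class JMLSourcePathComputer implements ISourcePathComputerDelegate {
    private final ISourcePathComputerDelegate javaComputer;
    private final ISourcePathComputerDelegate cDelegate;

    public JMLSourcePathComputer(ISourcePathComputerDelegate javaComputer,
            ISourcePathComputerDelegate cDelegate) {
        this.javaComputer = javaComputer;
        this.cDelegate = cDelegate;
    }

    public ISourceContainer[] computeSourceContainers(ILaunchConfiguration config,
            IProgressMonitor monitor) throws CoreException {
        ISourceContainer[] java = javaComputer.computeSourceContainers(config, monitor);
        ISourceContainer[] c = cDelegate.computeSourceContainers(config, monitor);
        ISourceContainer[] merged = new ISourceContainer[java.length + c.length];
        System.arraycopy(java, 0, merged, 0, java.length);
        System.arraycopy(c, 0, merged, java.length, c.length);
        return merged;
    }
}
```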

When I try to debug a JNI application with my plug-in, the JVM is launched, gdb is attached, and execution stops at the first breakpoint in the Java code. Then I debug step by step until the native call. When I arrive at the breakpoint in the C code, it stops correctly, but the editor doesn't display the file.

Of course the path is correct and the directory is in the source path.

]]>
Why users don't bother to file bug reports
This has to be the saddest bug I have ever seen. Unfortunately, I see this type of response all too often, where the user, despite having a perfectly readable stack trace, dump or error message, is expected to either a) prove to the developers that the problem still exists in the latest nightly build or b) provide a reproducible test case.

Running the latest nightly build may be trivial for client software, but for server software running on busy, production servers, this is impractical and difficult, if not impossible. Furthermore, in a production environment, reproducibility is not an easy feat, as conditions are never the same, and accurately reproducing the load of hundreds of users is far from scientific.

Need more examples of sadness? Here's another one. It's a MySQL bug about corruption in a storage engine. This one is particularly bad, as the developers keep insisting on trying to reproduce the problem with various versions, despite several users (myself included) confirming the problem across many versions.

But wait - it gets even sadder. Here is how the above PHP bug was closed (comment by the reporter):

Whatever. If you do not want bug reports, I will not post any. I thought you welcome help and want to improve the product but it seems you care only about having less work. Forget it. Let this bug be.

IMHO, that's a fair statement.

The MySQL bug is closed with this automated message:

No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".

I understand that the developers' time is precious and that good bug reports are required, but users are not intimate with the source code, and often cannot easily provide more than a crash dump or an error message. That doesn't mean there is not a problem with the code, so relying on the user to do all the heavy lifting seems quite unfair, and a great way to convince your users to not report bugs.

]]>
Plugin Releases
Flying Metrics batman!]]>
"No repository found" errors in Eclipse
Trouble installing Eclipse plugins!]]>
In Nashville, TN and Washington, D.C.
Just a quick note: team Architexa is in Nashville this week to present our work at OOPSLA. If you are around and want to see a demo, or just want to chat with us about Eclipse or other fun software challenges, feel free to drop a note.

Next week is EclipseWorld in DC, and I will be there as well. E-mail me if you want to meet up then.

]]>
Server-side OSGi: is it really useful?

The recent Paris JUG was an opportunity to talk about OSGi technology, already mentioned several times in this blog, and we continue to hear about it. Although OSGi is well established on the client side with Eclipse, its adoption on the server side, and especially in Java EE environments, sometimes leaves developers unconvinced. SpringSource (Spring dm Server), ObjectWeb (JOnAS), Sun (GlassFish) and IBM (WebSphere 6.1) have clearly made the choice. What are the real benefits for our server-side applications?
First, don’t forget that OSGi is a specification designed for the embedded domain. This leads to implementations that avoid the newer Java 5 features (annotations, generics, etc.), which can make OSGi look like an old technology. But OSGi remains attractive, because what matters above all is the concept: modularization, a concept no OOP developer can ignore. Project after project, we have tried to improve the way we write code, organizing it to avoid inter-dependencies and to maximize reusability. The arrival of the DI (dependency injection) pattern helped us do that, and the success of Spring is a good example. OSGi continues this trend by offering an infrastructure that obliges us to respect the rules and allows us to manipulate our components dynamically. The dynamic aspect and hot deployment are the icing on the cake, but they are not what makes OSGi essential on the server side; current deployment techniques with clustered servers, or even a simple tool like WebObjectsMonitor, already let us update applications gracefully. What is interesting is how the code is organized and structured through the dependency management that OSGi imposes, both in application servers and in the applications themselves.
So, in fact, this specification is not ideally suited to Java EE and remains technically difficult to grasp, but the concept of modularization is a good approach to improving the quality of our developments. That is why Spring focused on it: it fits with their framework.
Moreover, the reconciliation between the JCP and OSGi promises well (hopefully in the right direction) to make the best of both worlds: everything that exists in OSGi on one hand, and the server aspect and the new Java 5 features from Sun on the other.
However, we must not forget the dynamic aspect: although users do not insist on seeing a new button appear dynamically every time they need new functionality, the fact is that with OSGi it is technically possible. But is it really an improvement? With a classic web application it is also possible in PHP, or in Java (the session must be reloaded). For RIAs this becomes more complicated, because part of the functionality is moved to the client side, and an update requires a complete reload. This is typically what Chris Brind has managed to improve by combining Flex and OSGi with Solstice. This framework shows the potential of the modular approach in this domain.
Again, what is important is the concept: the modular approach will bring more quality to our developments and greater flexibility in deployment. Let the community choose the best technology to do it...]]>
Eclipse in the Banking Industry
Despite all the troubles in the financial sector, I would still like to remind you about the Eclipse in the Banking Industry symposium (at Eclipse Summit Europe 2008). I definitely welcome everyone who sees a point in using great open source Eclipse technologies in the financial world. See you in Ludwigsburg, a beautiful small town that should not be missed.

]]>
OSGi on Amazon EC2 is available

Cloud Studio with OSGi support is finally out. It is now possible to upload your bundles into an S3-based bundle repository and create an instance profile (similar to a "Run Configuration" in Eclipse). After the profile is created, an EC2 instance that hosts an OSGi framework with the required bundles can be launched.

]]>
BPMN modeler sub-project proposal available for feedback
The BPMN modeler component is planning to evolve from the status of a component to that of a sub-project.

]]>
Update is a many-splintered thing
A recent post on the p2-dev@eclipse.org mailing list got me thinking about the use cases for different ways to install software. Considering that the Linux world has this solved (and then some!), let’s look at the different ways I can update my recently repurposed xubuntu 8.04 laptop. Bear in mind this is without installing OTHER tools, just the ones that come OOTB with an xubuntu installation. (Yes, there are other choices too. Fedora has its tools, gentoo has its solution, etc.)

1. apt-cache: a commandline tool for querying the repositories for available packages, versions, and details.

2. apt-get: a commandline tool for installing/removing packages.

3. Adept Manager: a GUI tool to query the repos for available packages & to install/remove them.

4. Synaptic Package Manager: a more refined GUI tool to query the repos for available packages & to install/remove them.

5. Add/Remove Applications…: a simpler GUI for installing/removing complete applications rather than individual packages.

6. Update Manager: a task-tray-resident application that monitors the repositories for updates and alerts users when they are available; it also helps ease Windows users into the Linux experience.

So, do we need all of these? Perhaps not all six, but Linux distros are still trying to sort out their target audience, so they often include more tools than you need.

Sure, you can manage updates with apt-get, Synaptic, or Adept, but the Update Manager is smaller and more end-user focused.

Sure, you can install everything in Add/Remove Applications… with the tools above it, but Add/Remove Applications… is friendlier.

Personally, I use apt-get/apt-cache (if I more or less know what I want to install), Synaptic (if I want to browse for something new or install something with many dependencies), and Update Manager (if I just want patches/security updates), because different tools are suited to different needs.

You can remove a screw with a coin, or hammer in a nail with a shoe, but there are better-suited tools for those tasks.

Do we need all of them? Try them and decide for yourself. Like with Debian/Ubuntu installers, there’s bound to be some overlap. But each serves a purpose by itself, and does so with as little installation overhead as possible.

Could things be merged? Perhaps, if p2 wanted to follow the hierarchical model of Synaptic building on apt-get, and Add/Remove Applications… & Update Manager building on Synaptic. There are certainly places to simplify the UI experience. What would you do?

]]>
Help as a Standalone Application

The first question here is about help: "We have a set of doc plugins for our product. How to host them on the net like the one at help.eclipse.org?"

Eclipse ships a class called org.eclipse.help.standalone.Infocenter, which is available in org.eclipse.help.base_{version}.jar. It has a main method, so you can run it as a standalone application. You need the following arguments:

When a customer has a bug, all he has to do is start Eclipse with the -debug option and send you the output. Simple? The problem is that your customer may have installed umpteen plugins, and the log would be practically unreadable. How do you enable logging for a specific plugin alone?

The .options file comes to the rescue. This is a normal properties file where you can specify options, including which plugins should be in debug mode and which should not. You can google for sample files.
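A sample .options file might look like this (the plug-in IDs and option names are examples; real ones come from each plug-in's own .options template):

```
# Turn tracing on only for the plug-ins you care about
com.example.myplugin/debug=true
com.example.myplugin/debug/parser=true
org.eclipse.otherplugin/debug=false
```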

Plugins are expected to look for this before printing the statement. So the above code should be:
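A typical guard reads the option via Platform.getDebugOption() once and checks it before printing (the plug-in ID below is a placeholder):

```java
import org.eclipse.core.runtime.Platform;

public class TracingExample {
    // getDebugOption returns the value set in .options, or null if tracing is off
    private static final boolean DEBUG = "true".equalsIgnoreCase(
            Platform.getDebugOption("com.example.myplugin/debug"));

    void doSomething() {
        if (DEBUG) {
            System.out.println("doSomething() called"); // printed only when tracing is on
        }
    }
}
```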

]]>
Eclipse Summit Europe
Now that Ganymede is out the door and the teams have had a chance to breathe a bit, it's worth taking a few minutes to think about how you can promote all the hard work you've put into the release. One of the best forums you could ask for is Eclipse Summit Europe. I've been to all of the ESE events and have found them to be fantastic. The conference is chock full of interesting people and talks. The venue is very inviting and, if we are lucky, there will be another caipirinha night. Why not make one of those talks (and drinks) be yours this year? The first step is to propose a talk. The conference is in November, but the talk submission deadline is coming up in September. Don’t put it off, submit early and often…]]>
What's new in Ganymede for Java EE
Eclipse Ganymede has been out for about two weeks now, but when perusing its download page I didn't find links to the New and Noteworthy items in each package. The Java EE package would appear to be a hit, likely because that's an easy way to install the Web Tools Platform. Admittedly I'm biased since I both work on WTP and regularly coordinate its New and Noteworthy documents, but check out what's new in WTP right here. As if that weren't enough, to really see all that's new in the "Eclipse IDE for Java EE Developers", you also want to read about what's new in the Data Tools Platform, the DSDP's Remote System Explorer, EMF, Mylyn, GEF, and the Eclipse Platform with its JDT and PDE trimmings.

You might want to go grab a drink before you sit down and read it all.

]]>
Documentation goodies in 3.4
Cola is just too cool!
This is just too cool. The ECF guys (in particular Mustafa Isik) have come up with a real-time shared editing system called Cola. I’ve not tried it myself, but the screencast they put together is so compelling that I had to stop watching it and post this! Check it out. Very cool. And the HD mode on the video is awesome. Well done guys.

p.s., yes I realize that I used “cool” three times in this short post. What can I say…

]]>
The More Things Slow Down, The More They Heat Up

Or is it?

Firstly, I'm busy prepping to participate again this year in the Ride for Sight, which is the longest running motorcycle charity in Canada. It's going to be a full day this coming Saturday, with many hours of riding, and I have to make sure all my gear is in order. And also, it's the final stretch of fundraising, so I'm busily annoying all my friends. (Shameless plug... to find out how you can donate, go to my donation page ). Hopefully this weekend I won't forget my hat like I did at the Port Dover Friday the 13th Rally last weekend and get sunburnt...

But more on topic: my team still has a lot of work going on. Right now most of us are working on Remote Development Tools, which is not on the Ganymede train. We just contributed our initial drop of C/C++ remote indexing tools to Bugzilla for consideration, and are still working hard to get to the point where people might actually start using this stuff.

So... what is this stuff?

Essentially, we are trying to create a development environment where you can run your IDE on your local workstation, but the actual code resides on another target machine. Maybe this machine is a mainframe with no graphical windowing capabilities, maybe it's a gigantic supercomputer that you don't physically have on your desk (or if you did, you'd need a REALLY big desk...). In any case, the code you're actually working on resides somewhere that is not local.

Many of the most exciting value-adds provided by Eclipse, compared to other development environments, require knowledge of the structure of the user's source code. Features such as source code navigation, content assist, intelligent search, the call hierarchy view, the type hierarchy view, the include browser, refactoring, and others all require parsing the user's source code and producing an index which allows for name-based lookups of source code elements.

Parsing and indexing are both very CPU and memory intensive operations, and good performance is a key requirement if these features are to be used by the average user. The remote scenario provides for some unique, additional challenges which have to be overcome in order for the features to work quickly and correctly.

Some important points to consider:

Network-mounting the files and operating on them "locally" has proven to be slow, even on fast (100 megabit) connections with very few intermediate hops.

Downloading the entire set of remote files (both project files and included system headers, which are not generally found on the local machine) is similarly slow.

Sometimes the remote machine uses a different text codepage encoding than the local machine. This means that not only must the source files be transferred, but they may have to undergo a codepage conversion process, which slows things down even further.

Abstract Syntax Trees (ASTs) and indices are typically much larger than the original source code from which they are produced, because they store much more information. I.e., they store a certain amount of syntactic and/or semantic understanding, which is inherently more information than is imparted by the raw bytes that correspond to the source text. As such, it's even more impractical to transfer ASTs or indices than it is to just transfer the original source.

The way a file needs to be parsed in order to be interpreted correctly is often dependent upon the manner in which the file is compiled. E.g., macros and include paths may be defined/redefined on the command line of any individual compilation of any individual file. A successful parse requires that those same macros and include paths be provided to the parser when it runs.

The remote machine often has greater CPU power than the local machine, so it can complete parsing and indexing tasks more quickly.

Remote machines are often accessed from geographically distant locations. The intermediate network topology can be complicated, with many hops and slow links. As such, to maintain performance it's important that as little data as possible be transferred back and forth between the local machine and the remote machine.

As such, we feel that if the Remote Development Tools are to be successful, then they must provide remote services that allow the user to do all of the parsing and indexing on the remote machine. The local machine can query the remote machine for data it is interested in, and only this data gets transferred over the remote connection.

So, that's the motivation. We just contributed a framework and reference implementation that implements the following features for C/C++:

A New Remote C/C++ Project wizard that allows you to create remote projects and configure your service model (files are served by EFS)

But that's not all. Currently I'm working on a remote make builder, which will essentially let you farm out the build commands for your project over the protocol of your choice (e.g. SSH), scan the build output for errors (like we already do in CDT), and map those back to the EFS resources in your project so that you can do standard things like click on errors in the Problems View from your remote compile and be whisked away to the corresponding source location.

The builder in fact is probably the most important feature to have. Really the "I" in IDE isn't there if you can't build anything. Source introspection tools are nice, but if your tool can't build my source, chances are I'm not going to use it.

At any rate... it's looking like it's going to be a busy summer...

]]>
Compare Merge Viewer Example: Merging Word Documents
a Wiki article describing how to implement a custom (i.e. non-text based) compare merge viewer. The example I used was a Word document comparison. Hopefully I'll be able to get the code into the Eclipse 3.5 stream once the floodgates open for 3.5 development but, in the meantime, there's a link in the article from which you can download the bundle (it's a fragment of the org.eclipse.compare plug-in). The code is compatible with 3.3 and 3.4, so if you are interested in seeing how to implement a custom compare merge viewer, or you want to try out the Word document compare viewer, check out the article.

]]>
3.4 New and Noteworthy
The Problem of Perspective Multiplicity
Some six years ago, I switched my primary IDE from NetBeans to Eclipse JDT (then 2.0). At the time, I did this primarily because NetBeans was too much of a resource hog for my pathetic development machine, but I quickly learned to appreciate the power of the Eclipse development environment. NetBeans has since made great strides of course, but at the time, Eclipse was lightyears beyond it in both features and polish.

One of the more interesting features offered by Eclipse was the concept of a “perspective”, a collection of views in a specific layout conducive to performing a specific series of tasks. The major upshot of this was that instead of the debugger views popping in and out, they simply remained hidden in a separate perspective, ready to be restored to your customized configuration as necessary. This innovation was also present in other areas, such as the CVS Team view and the Update Manager (yes, the Eclipse update system was once a set of views and editors).

You could switch between these perspectives manually of course, but most of the time Eclipse was able to just detect which perspective you needed and make the switch automatically. If you were to launch an application in debug mode for example, the “Debug” perspective would be opened automatically, bringing useful views to the fore. Once you were done debugging, it was easy to switch back to the “Java” perspective for more streamlined editing. It was a good system, and it worked well.

Unfortunately, times have changed. Don’t get me wrong, I still love having all my debug views and layout saved for me in a discrete section of the app, ready to access on a moment’s notice. But Eclipse is no longer the single-purpose application it once was. Yes, I know that it has always been billed as “an open tool platform for everything and nothing in particular”, but back in the day (and especially before OSGi) most people had yet to realize this. The only language supported by Eclipse on any serious level was Java, thus the perspective system worked extremely well for organizing IDE views. Now, Eclipse serves as the foundation for IDE frameworks supporting dozens of different languages, requiring an equal (if not greater) number of perspectives.

Even in this screenshot, I’m still hiding easily 70% of the perspectives available to Eclipse. With all of these different view collections and configurations, it’s no wonder that people often find Eclipse to be confusing compared to other IDEs. In NetBeans (for example) you can work with as many languages as you want within a single perspective/layout/configuration. The outline shows the relevant information for whatever file you have open, and the project explorer view is fully integrated with each language, showing all available projects and their associated structure. Most importantly, this view is able to show project logical structure as dictated by the support module for that language (e.g. src/, test/, etc).

Effectively, other IDEs have evolved a single “Development” perspective, one which shows a generic set of views common to all languages. Unlike Eclipse, which requires switching to the Ruby perspective or the C/C++ perspective to get the appropriate project viewer, NetBeans has one project viewer which is extensible by any module. Eclipse has some of this with the Package Explorer, but some plugins like DLTK don’t properly integrate and so the view isn’t as streamlined. Additionally, some functions like “Open Type” don’t work appropriately unless in the corresponding perspective for a given language.

Yes, I am aware that I could simply open any views I want within a single perspective, but that’s not what I’m looking for. I don’t want to open five different views for navigating project files, I want to have one master view which shows me everything through the filter of whatever language is relevant to the project. Project Explorer comes close, but it fails to handle the tighter integration (such as the “Referenced Libraries” in JDT or script outlines for DLTK).

Theoretically, Eclipse only needs four or five perspectives for the average developer working with any number of languages: Develop, Debug, Test, Repositories, Synchronize. Obviously, more perspectives would be needed for functionality which does not conform to normal development conventions (such as “Planning” or even “Email”), but I think that these core perspectives could provide a consistent, generic framework to which any language IDE could conform. We can already see something similar happening with the Debug perspective, which is used by Java, Ruby, Scala and C/C++ alike.

What is needed is a common super-framework to be extended by actual language implementations such as JDT, CDT and the like (similar to what DLTK provides but more encompassing). This framework should provide a common platform with features such as project viewing, outline, documentation, type hierarchy, call hierarchy, open type, etc. This platform would then be specialized by the relevant IDE and the same views would allow extension to fit the needs of the language in question. This already happens with the Outline view, but it needs to occur with other common functions as enumerated. Views which are not common to different languages (such as Ant Build or Make Targets) would of course not be contained within this super-framework, but would be separate views as they are now. This framework would allow a developer to use a single set of views for any language, never requiring a workflow-disrupting change of perspective.

The building blocks are all in place, and such an effort would still be in line with the Eclipse philosophy of total extensibility, it’s merely a question of implementation and opinion. The implementation is simple, as I said, most of the functionality is already available (often redundantly) in any one of the many IDE packages. The bigger challenge is to convince those who have the power to make the decision. Eclipse 4.0 is coming, it should be an interesting road to follow.

]]>
e4: Now With More Openness!
Remote Development Tools (RDT) initiative. (So flat out, in fact, that the wiki page that link goes to is woefully out of date.) A lot of late hours and many Red Bulls later, we've got a first proof-of-concept demo up and running on the new RDT framework in time for our upcoming deadline, so I can finally take a bit of time again for things like attending meetings and blogging about them.

I spent Thursday and Friday last week at the e4 Summit. I won't bore everyone with all the technical details, as those can be found on the wiki page, but I wanted to make a point to express some serious kudos to everyone that made this meeting happen, especially given my past criticisms of openness in the platform.

The platform is entering a new era of openness. Seeing thirty people from thirteen different companies/organizations all working together under the scrutiny of the public eye to help design and build the next generation of the Eclipse Platform is a wonderful sight to behold. All the discussion and decisions are being made in the open where anyone can participate, whether it's at the Summit, on the mailing lists, or on the upcoming project conference calls. All the people who attended the summit are going to receive commit rights (if they want them) under the aegis of the new incubator project (which is not the old incubator project).

So, massive props to the Platform Team and everyone else involved. You are doing a great thing right now, and you deserve to be recognized for it. Good on ya!

We've got a lot of momentum going right now on e4. Great opportunities are opening up for everyone to participate in something special here, whether it's on the new browser-based-Eclipse stuff, or working on a new resource model, or something else. If you think there is something that should be a part of e4 that isn't, now is your time to speak up and contribute.

I want to emphasize the word contribute as well. People have harped on the Platform for years about it being hard to contribute to. Now that things are changing in that respect, the onus is on the rest of the community to put their code where their mouth is. Being open is not the silver bullet to build a thriving, diverse committer community. Ultimately it takes people writing code, and you could be one of those people. If you want to see something happen, pick up a keyboard and lend a hand, any way you can. The Platform folks are doing their part in opening things up, now it's the community's turn to take advantage of the opportunity. Don't waste it.

]]>
Eclipse Lite LanguagesEclipse Lite Languages
As part of the Eclipse University Outreach initiative, I have developed plug-ins that allow students to evaluate Scheme and Prolog in Eclipse. A beta version of the Scheme plug-ins is now available. After resolving some licensing issues, the Prolog beta version should be available in June. You can view the user guides here for both to get a feel for what they can do. You can also download the Scheme plug-ins from the same location. Next up will be a lite version for Java.
]]>
Allow to move mouse into hover; or how to make it stickAllow to move mouse into hover; or how to make it stick
EMF Meta Tooling : Textual Search/Replace Infrastructure GenerationEMF Meta Tooling : Textual Search/Replace Infrastructure Generation
For some time now, I have been busy developing an Ecore Search "meta tooling" framework in my "garage" ... ;-)

Hope all this stuff will help developers in EMF space by giving possibility to get a quick access to customizable search engines and UI integrations.

Go Eclipse, Go Modeling !

NB: for those interested in the movie in the query tab, see this video. I was so sure these tomatoes were coming from Jupiter... crazy me ;-)

]]>
Screencast: Introduction to the Scala Developer ToolsScreencast: Introduction to the Scala Developer Tools
Virtually everyone who has visited the Scala project page has seen the info page for the Scala plugin for Eclipse. There are a few screenshots, an update site and very little instruction on how to proceed from there. Those of you who have actually installed this plugin can vouch for how terribly it works as well as the remarkable lack of usefulness in its functionality. It’s basically a very crude syntax highlighting editor for Scala embedded into Eclipse. It has the ability to run programs and compile them within the IDE, but that’s about all. Worse than that, it seems to make everything else about Eclipse less stable; somehow crashing random, unrelated plugins (such as DLTK). Needless to say, it’s often a race to see how fast we can remove the Scala Eclipse plugin from our systems.

What is far less widely known is that there is a second Eclipse plugin which offers support for Scala development. Basically, the guys at LAMP decided that it wasn’t worth trying to build out the original plugin any further. Instead, they started from scratch and created a whole new implementation. The result is entitled the “Scala Developer Tools” (or SDT, if you’re into short and phonetically confusing acronyms). Basically, this plugin is a very unstable, very experimental attempt to build a first-class IDE for Scala on top of Eclipse. Obviously, they still have a ways to go:

In case you were wondering, no that isn’t my default editor font. To say the least, the plugin suffers from an annoying plethora of UI-related bugs. Behavior is inconsistent, and often times changing a value doesn’t seem to be permanent (it took me several tries to get the syntax highlighting to stop shifting before my very eyes). To make matters worse, it seems that installing the plugin in the first place is a bit like playing a game of hopscotch using un-anchored floats in the middle of a pool. The update site has a nasty habit of throwing a 404 about 50% of the time. You know what they say: if at first you don’t succeed…

The good news is that once you get the plugin installed, the preferences beaten into submission, and the UI bugs safely ignored, things become quite nice indeed. The new editor is vastly improved over the old one, and it’s easy to see tremendous potential in the project. Things are actually getting to a point where I would consider using the plugin rather than my current jEdit setup.

Of course, it’s hard to get a good idea of how a tool works until you see it in action. That’s why I took the time to put together a small screencast which illustrates some of the highlights of the new editor. I made no attempt to hide the bugs which cropped up during my testing, so this should give you a fair approximation of the current state of the plugin and whether it’s worth trying for your own projects. The screencast has been produced at a reasonably high resolution (1024×732) in both Flash and downloadable AVI format. Enjoy!

]]>
e4 Summit — You Need to be Theree4 Summit — You Need to be There
Thanks to Boris, we actually have a place to record whether or not you are coming to the e4 Summit. I know many people, who said that they wanted to come, whose name is not on that list yet, so if you plan to be there, please add your name.

Note that there are two distinct sections to that page:

If you are planning to come to the summit, add yourself to the Attendees section

If you are interested in working on a particular area, even if you can’t come to the summit, please add yourself to the Work areas section. Our expectation is that those lists really will be the starting points for the major work areas, so if you want to work on something that isn’t covered by any of those lists please add a new row.

For example, one area that I believe is not covered yet, which I am sure I heard a lot of people say they were interested in at EclipseCON, is UI macro recording/playback. Although the work on Make it easy to script Eclipse (e.g., expose DOMs) will help people who want to implement UI macro recorders, it isn’t the same — the existing item is talking about making it easier to drive eclipse via scripting languages. So if you want to work on macro recording, you will have to drive it.

]]>
More CollaborationMore Collaboration
JavaWorld compared Eclipse with Netbeans. The outcome was a draw with a slight edge for Netbeans. While I heavily dispute the result, one thing struck me: There is not much support for collaboration within teams in Eclipse. There is the Eclipse Communication Framework (ECF), but I think there should be more collaboration features built right into the Eclipse platform. I think this should be a focus for the new Eclipse e4 version. But maybe IBM does not want to cannibalize its upcoming Jazz platform?

]]>
The bursting of bubblesThe bursting of bubbles
A recent post covered the content-type specific icons in Eclipse 3.4M6 and showed how to make use of them through an extension of org.eclipse.ui.editors, but I think that something needs to be pointed out about that solution. Developers who do this aren't just specifying their icons, they're specifying whole new editors. This isn't a bad thing, but every editor has to have a unique ID, and it's easy to forget that some other features depend on that editor ID. The most common example is the org.eclipse.ui.actionSetPartAssociations extension point, which adds menu and toolbar actions automatically as you switch from workbench part to workbench part. In WTP, this is how the JSP and Web Page Editors ensure that the Run menu is present and populated with the Launch action set, and it's how most editors expose the Annotation Navigation action set. So when you're registering an editor to get that custom icon showing up, keep in mind that there may be more to it than reusing the right editor and editor contributor class.
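As a rough sketch (the editor ID here is hypothetical; the action set ID is the Launch action set mentioned above), re-associating an action set with a newly registered editor ID in plugin.xml might look like this:

```xml
<!-- Hypothetical plugin.xml fragment: if you registered a new editor ID
     just to get a custom icon, re-add the part associations the old
     editor ID carried, e.g. the Launch action set for the Run menu. -->
<extension point="org.eclipse.ui.actionSetPartAssociations">
   <actionSetPartAssociation targetID="org.eclipse.debug.ui.launchActionSet">
      <!-- the new editor ID introduced for the custom icon -->
      <part id="com.example.myCustomIconEditor"/>
   </actionSetPartAssociation>
</extension>
```

Without a fragment like this, users of the new editor silently lose the menu and toolbar contributions tied to the original editor ID.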

]]>
Dynamic Context HelpDynamic Context Help
I admit that I've passed over this user assistance feature for the last few years. My reasons: it's fluff, the documentation is vague, and no one will use it. I couldn't have been more wrong.

We first wired up dynamic help to our job deck editor in January. Looking at the above image you will see that the *TFSUPPRESS tag NOISEDIP has been selected and the help view has offered a link for more details. It is simple, it works, and it is not annoying.

The idea is to offer the user help on what has been selected in the active view or editor. The help appears in the Eclipse help view. I've put together an example plugin that shows how it works. It is a view listing three characters from the classic movie "The Good, the Bad and the Ugly". The view class is called DynamicHelpView.

Download the plugin source here from SourceForge.net. The code assumes you are using Eclipse 3.3 and at least Java 5.0. I'm assuming you have some experience in adding help to your plugin. If not, spend a few minutes looking at the plugin.xml, context.xml and reference/*.* files.

The Eclipse help view, if visible, will listen for view or editor activation and selection events. In our example, when it sees one of these events it calls DynamicHelpView.getAdapter(Class) with a request for an IContextProvider.

To get this to work, make a selection and request dynamic help by pressing the F1 key.

The ContextProvider retrieves the current table selection and returns a SelectionContext which provides the help view with help context resources to display. The help context resources can be related topics links or external links.

The SelectionContext creates a list of IHelpResources. The first is a single external link to Wikipedia, followed by some internal links to related static help. The static help was registered in plugin.xml.
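The wiring described above can be sketched roughly as follows (a non-authoritative sketch based on the org.eclipse.help APIs; the viewer field and the SelectionContext class are from the example plugin, not the platform):

```java
// Sketch of DynamicHelpView.getAdapter(): the help view calls this with
// IContextProvider.class when the view is activated or selection changes.
public Object getAdapter(Class adapter) {
    if (adapter == IContextProvider.class) {
        return new IContextProvider() {
            public int getContextChangeMask() {
                // ask the help view to re-query the context on every selection change
                return IContextProvider.SELECTION;
            }
            public IContext getContext(Object target) {
                // build a help context from the current table selection
                ISelection selection = viewer.getSelection();
                return new SelectionContext(selection);
            }
            public String getSearchExpression(Object target) {
                // no custom search expression in this example
                return null;
            }
        };
    }
    return super.getAdapter(adapter);
}
```

Returning IContextProvider.SELECTION from getContextChangeMask is what makes the help view track selection events rather than only part activation.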

There is a small bug 173073 which requires you to hit a few keys to have the help view update its list of links. It has been fixed in Eclipse 3.4 (to be released in June 2008). Don't let this bug stop you from using the dynamic help.

Chapters six through ten cover various ways that reports can be enhanced. Each of these chapters is relatively self-contained, making it easy to refer back for details later when writing reports. Chapter six describes how reports can be parameterized, obviously a necessary capability to promote reuse. In particular the distinction between data set and report parameters in BIRT needs to be understood, and chapter six does a good job in explaining this difference. Also explained are more advanced parameter concepts, such as dynamic, cascading and group parameters. While a number of these parameter concepts are covered quickly, if you are a Java developer as the book assumes, the explanation should be a sufficient overview, and the BIRT documentation can serve to fill in the details.

Report projects and libraries are described in chapter seven. By using the Eclipse projects configured for BIRT and report libraries, further reuse is enabled. For example, images that need to be shared across a team of developers creating reports can be stored in libraries, and these libraries are then referenced by each consuming report. As with the previous chapter, Java developers should have no problem understanding these concepts, and the use of specific project types will be very familiar to experienced Eclipse developers as well. Chapter seven also contains a tutorial about reusing resources, and this is useful for checking understanding.

An important aspect of report development is being able to customize rendering. Obviously we’d like to separate rendering instructions from main report data (if possible), so changes in rendering can be made independently. In chapter eight there are examples of several style options BIRT supports: BIRT built-in styles, custom styles, CSS, and style templates. For simple formatting requirements either the built-in style support or slight customizations of it will suffice. Style templates are more useful to apply over a range of reports, probably across groups or departments. Finally, the capability to use CSS allows BIRT users leverage vast resources from that style language. The examples in chapter eight are brief, but detailed enough to suggest the possibilities in each option.

Charts are a common requirement for reports. Luckily, BIRT’s Charting Engine supplies thirteen popular chart types, including scatter, pie, bar and line charts. Further, drill-down (the ability to see more detailed information for a specific chart element) is supported by the Chart Engine. Chapter nine uses the pie, gauge and bar charts in simple, illustrative examples. While table reports are common and useful, you really get to see the power of BIRT as a reporting tool through these chart examples. Well designed charts can convey a lot of information in an attractive form, and drill-down allows you to present additional details without cluttering the initial chart presentation. Perhaps because of the visual appeal of charts, I found the examples in chapter nine more interesting than those in other chapters, and this made me wonder if incorporation of charts throughout the book might have been a good strategy.

BIRT includes scripting support using Java and JavaScript. Chapter ten discusses these capabilities, interestingly starting with a comment that knowledge of Java is useful for understanding the scripting examples. Yet the assumptions stated at the beginning of the book include being a Java developer and, as I’ve mentioned several times, those without Java experience will have to work hard to grasp much of the book’s content. Perhaps in an attempt to limit chapter size or to keep it accessible to those without Java experience, the script examples in this chapter only scratch the surface. A minor criticism: much of the code has pedestrian comments (about things that method names, etc. should suggest) and subsequent paragraphs have explanations similar to the comments. It would have been better to omit these comments, hence making the code more compact. Event handling, an integral part of BIRT scripting, is also covered briefly in a few examples. My feeling is that this chapter should assume a fair amount of experience with Java and the ability to pick up JavaScript while showing more detailed examples. Granted this would increase the length of the chapter and still would only show a fraction of the possibilities, but a more comprehensive example would be more instructive to the (stated) target readership.

The final two chapters deal with report deployment and a case study. The deployment material is good to get started with, and ideally your deployment requirements will fit within the basic cases. But, as is often the case with real world deployment scenarios, there will likely be complications requiring further study of the BIRT documentation for alternatives. The working example chapter is only suggestive – to follow the example exactly requires a lot of set up and I suspect few readers will attempt it. As a recap of many of the concepts covered earlier, however, the case study is useful for pulling all of the previous threads together.

In conclusion, I believe John Ward’s book does a fine job of providing a quick start to BIRT. If you are a Java developer using Eclipse and want to take advantage of BIRT, starting with the BIRT “all in one” download and working through Practical Data Analysis and Reporting with BIRT will quickly get you up and running.

]]>
ReclipseReclipse
Hello everybody out there using Eclipse -

I'm doing a (free) Ruby version of Eclipse (“Reclipse”; just a hobby, won't be big and professional like regular Eclipse) for Ruby and scripting language clones. This has been brewing since last April 1st, and is starting to get ready. I'd like any feedback on things people like/dislike in Eclipse, as my Reclipse resembles it somewhat (same physical layout of the widgets (due to practical reasons) among other things).

I've currently ported JDT (3.4M3) and PDE (3.4M5), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)

]]>
My experiences at EclipseCon 2008My experiences at EclipseCon 2008This is my very first blog, so please pardon my tremor and anxiety. I promise I will not make this a “Hello world” blog! :-)

I live and work in the UK with a small Eclipse consultancy, Etish Limited. You can learn all about my professional interests through my profile.

This blog is all about our (my partner in crime Joel’s and my) experiences at EclipseCon 2008, which took place in Santa Clara, California last week. It was our first time at this conference and we were literally astonished by the aplomb with which it was organised and implemented.

The venue was comfortable and very friendly. Lots of nice meeting rooms, no overcrowding, and very decent food and beverages served at the various receptions and meal breaks. Not that we had many opportunities to really sample all that was on offer, we were too busy talking and connecting!… And let me tell you about the beds at the Hyatt hotel that is directly connected to the convention centre: they are definitely something to not only write home about, but sing of! Vast! And oh so comfortable!… I wish I could have fit one in my luggage coming back home. ;-)

The organisation of the conference was spotless. In the entire week, there was not a single glitch, at least from my vantage point. Everything was perfectly planned and executed, down to specific persons being put in charge of updating the flash sticks that were distributed to all delegates at registration and that contained all conference materials. I am sure this was done to avoid excessive queues and long waiting times at the designated update points, but I found it a great service: all of a sudden what I was viewing as a chore became a pleasure. I would walk up to the nice gentleman and hand him my stick. He would put it in his left pocket and out of the right pocket would immediately emerge an updated stick for me to walk away with. An outsider might at first have thought this to be some kind of spook trick, but a second look at our (the delegates’) possessed faces, funky attires and impossible hair dos would have immediately dispelled such fantasies :-)

I also understand why so many people yearn to participate in that particular conference. The crowd and the atmosphere are so fantastic, it is almost like being back at university. The atmosphere is relaxed, open and friendly, decidedly techy. Everybody yearns to exchange experiences, learn, share and offer some advice, born of experience, whenever possible.

All the tutorials and talks I attended had something (and oftentimes a lot) of interest and novelty to add to my (meagre... ;-) pot of knowledge.

We not only thoroughly enjoyed ourselves, but we also had a great opportunity to meet face to face many persons with whom we had, up to now, only been entertaining cyber conversations. This definitely added an invaluable dimension to our relationships.

Last, but not least, we had an incredibly breathtaking reaction to our proposed Eclipse framework, the Open Requirements Management Framework (ORMF for friends) and the first exemplary tool we built on top of it, Useme, which is use case centric. We presented a poster on the project (and I even have the two pictures to prove it! Many thanks to Alexey Khoroshilov from the Institute for System Programming, Moscow, for taking them and sending them off to us) and hosted a Birds of a Feather session, both of which were very well attended and were the source of innumerable suggestions, expressions of interest and many offers of contributions. Our effort of building a community around our proposal for ORMF certainly received a great boost at the conference, and our friends are now a lot more numerous and more eager. We were also approached, among many, by the authors of the Open System Engineering Environment (OSEE), another Eclipse based project created by Boeing that is currently in the incubation phase. Theirs is an industrial strength but customisable environment that is intended to facilitate the management of all aspects of a project's life cycle, and they were very keen to see our framework built on top of their environment as a structured requirements sub-project. After meeting with the good OSEE folks and discussing and exploring with them the possibilities of synergy between the two projects, we decided to accept their offer. I think this is going to prove a bold but fantastic move, as a large number of the important services that we were envisaging providing as part of ORMF some time in the future are already there, in OSEE, for the taking. So we warmly thank the OSEE gang for making this environment so easily available to us and for opening up a lot of hitherto dreamed of possibilities.

To summarise all this in a single sentence, ORMF has come out of EclipseCon a lot stronger!

To conclude I just want to express a final thought, that perhaps represents the most important aspect of my entire experience at EclipseCon, far beyond our personal successes (and I hope you are still with me :-): throughout the week I felt around me a sense of possibility that is very hard, and very thrilling, to find. The Eclipse Foundation members were incredibly welcoming and encouraging, constantly catalysing connections and suggesting synergies. Theirs was an honest effort to make us heard and make us count, no matter who we were or how small an organisation we represented. Every day of the meeting I was buoyed by the feeling that, with the help of this community, I could achieve anything I dreamed of. If that is not a fantastic achievement on the part of a community, what is?

So thank you very much, EclipseCon organisers, Eclipse Foundation members and conference attendees in general for making our week such a rich, exciting and successful week!

Google Summer of Code (GSoC) is a program that offers student
developers stipends to write code for various open source
projects. [GSoC FAQ]

Since our main focus is model driven software development, the projects we will be mentoring will obviously have a modeling aspect as well. We have posted some ideas on the GSoC page on the Eclipse wiki. If you are interested in working with us on a modeling-related project, there are two things you can do:

Have a look at the list of ideas in the Eclipse wiki. Look for "M2T/Xpand" to see our proposals.

]]>
Creating a model template in Rational’s modelling toolsCreating a model template in Rational’s modelling toolsPlugging SWT Leaks in Your Eclipse AppPlugging SWT Leaks in Your Eclipse App
The tool for the job turned out to be Sleak. The "limitation" of JProfiler is that it works more from a generic Java memory point of view. I needed to find where handles were leaking - even if they were not using that much memory. Sleak solves this simply.

First, I'll go through getting it set up - it didn't work exactly for me as in the included instructions - but you'll need to refer to them to understand what I'm talking about from here. Just dropping it into my plugins dir as directed didn't work (not blaming the instructions here - this could've been my fault); I ended up with class loader errors. I decided to repackage it myself, so I started by importing it into my workspace as an Eclipse project. Then, I exported it as a jar from the plugin overview screen. At this point, I had a Sleak jar in the specified output directory. I dropped this into my plugins directory, and followed the other instructions. The only other change I would recommend here is to just set the properties to true instead of adding lines, since they are already in the .options file.
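For reference, with those properties set to true the relevant part of the .options file would look roughly like this (a sketch; double-check the exact keys against the instructions bundled with Sleak):

```properties
# Enable SWT graphics resource tracking so Sleak can record
# where each handle (Color, Image, Font, ...) was allocated.
org.eclipse.ui/debug=true
org.eclipse.ui/trace/graphics=true
```

Remember that these options only take effect when you launch with the -debug flag, as described below.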

Now you're ready to find leaks. Launch Eclipse normally with the workspace containing all the code for your application. Open your favorite debug configuration and add "-debug -clean" to the end of your Program Arguments. Then, just launch the configuration. Once it loads, open the Sleak view by using the Window->Show View->Other->SWT Tools menu. The buttons you'll see are a little different from the instructions - you will see "Snap" and "Diff", which I interpret as "Take a Snapshot of the Handles in Use" and "Take Another One and Show the Differences from the Previous Snapshot". All you have to do is use the Snap button, perform the operation you think has leaks and then press Diff. You will get a nice list of the handles still hanging around. Clicking on them will often show you a preview in the window (this works for Colors, Images, and a few other visual handles). Clicking the stack box will even show you where in the code they were created.

I hope this helps. I really appreciate the work by the SWT team for releasing this tool for all us developers! If you wrote this, please leave a comment so you can take some credit.

]]>
Textual Editing Framework TutorialTextual Editing Framework Tutorial
A tutorial for the Textual Editing Framework is out. After watching all the TEF screencasts, you can finally try it yourself. We provide a small tutorial, based on plug-in extension templates, that allows you to create your first textual editor in seconds. For all of you who always wanted a textual notation and a proper textual editor for languages based on existing meta-models: check it out.

A typical workspace for textual editor development with TEF.

]]>
Eclipse University Outreach SiteEclipse University Outreach Site
As part of the Eclipse University Outreach initiative, I have started a web site based on Moodle that is currently hosted at Carleton University. Check it out at Eclipse University Outreach. Log in as a guest to see the current content. I have videos and PDF files available on different Eclipse topics. The goal is for people to use, provide and promote Eclipse content at the site. Interested? Have material? Please contact me.
]]>
EMF - Implementing EOperations without touching generated codeEMF - Implementing EOperations without touching generated code
In our current development of M2T/Xpand3 we make a lot of use of Ecore models. Although there are a lot of obvious advantages in using the Ecore implementation over implementing domain models in Java directly, there are some problems.

One of them is that EMF's generator encourages you to use generated code. As we don't want to do that for several reasons, we had to find another way.

EMF's generator supports specification of Java code via a specific EAnnotation. But of course we don't want to program Java within small properties text widgets. Instead we developed an Action, which automatically adds the respective EAnnotations with delegation code to each EOperation.

So here's an example. First there is the plain ecore file:

The Action we built can be invoked via context menu and transforms it to something like the following:
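A sketch of what the transformed .ecore might contain (class, operation and helper names here are hypothetical; the annotation source URI and the "body" key are what EMF's generator recognizes for inlined method bodies):

```xml
<!-- Hypothetical EOperation after the Action has run: the GenModel
     annotation carries the delegation body the generator will emit. -->
<eOperations name="evaluate"
    eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EString">
  <eAnnotations source="http://www.eclipse.org/emf/2002/GenModel">
    <details key="body"
        value="return org.example.util.ExpressionOperations.evaluate(this);"/>
  </eAnnotations>
</eOperations>
```

The generator copies the "body" value verbatim into the generated method, which is how the delegation to the hand-written helper class ends up in the otherwise untouched generated code.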

EMF's generator now generates the nested delegation code instead of the usual "throw new UnsupportedOperationException();" code.

Of course there are compile errors at first....

but after adding the referenced class and needed methods manually the compiler is happy again:

The delegation implementation for abstract types even supports some kind of polymorphism:

So we can add special implementations for more specific types when needed (i.e. subclasses):

]]>
XSD Export from EMFXSD Export from EMFHappy Chinese New Year!Happy Chinese New Year!
I am probably not the only Chinese developer blogging about Eclipse in English, but I believe that I am one of the most active ones. With that background, I would like to point out that today is the Chinese New Year, and this year is the year of the “rat”.
So, how is it related to Eclipse? Actually, if you ask me which of the 12 animals best describes the life of software developers, I would say the “rat”. So, this year is the year of software developers.]]>
My Last Week’s Eclipse UsageMy Last Week’s Eclipse Usage
I found a cool plugin a couple of weeks ago, Ergo by Channing Walton. It monitors the shortcut and menu invocation usage of Eclipse.

I soon discovered that I’d like to know what shortcuts other users use. I whipped up a patch that would let the plugin users specify a reporting URL. Channing liked the idea, open-sourced the plugin and gave me access.

Last week I had the plugin enabled and now I have produced a page of the statistics. The stats show that I was pasting all last week.

I was also introduced to the EPP’s Usage Data Collector. I have not checked if it also monitors actions’ invocations. If it does not we’ll try to propose a patch of some sort.

I’ll be uploading my stats for another couple of weeks and compare to my colleagues and see if I can find anything interesting.

If anybody is interested in seeing their stats, they can download the plugin from the project’s page and use “http://tom.jabber.ee/reporting/” as the URL. Every installation is assigned a UUID, and if you specify the reporting URL your workspace/.metadata/.plugins/com.stateofflow.ergo/commands.properties will be reported to the specified URL every hour. The stats can be seen from http://tom.jabber.ee/reporting/result.php?uuid=YOUR-UUID

]]>
I am going to attend EclipseCon 2008! Well, probably…I am going to attend EclipseCon 2008! Well, probably…
My talk has been accepted by EclipseCon 2008. Although the travel has yet to be approved, it is very likely that I will be able to go to Santa Clara again this year.

Actually, I attended EclipseCon in 2005 and spoke in 2006. I missed 2007 due to the busy status of my project. It was EclipseCon that brought me the chance to meet all the people working on related things, and it finally led to my committer status. So, I am really excited about attending EclipseCon again.

However, I am tired of all the trouble with US visas. The whole visa application process is rather time consuming and irrational. The first time I applied for a US visa, four years ago, I got a multi-entry one valid for one year. It allowed me to present at several conferences in the US that year, and I felt good about the US. When I had to renew my visa after that year, they kept me waiting for 3 months. One week before the conference, I could no longer wait, so I sent emails and faxes to the consulate and finally got one - but only a single entry valid for three months. Why? Both due to the busy status of my project and the fact that I tried to avoid traveling to the US, I greatly reduced the number of conferences I attended in the US over the last two years. By the end of last year, there was one important meeting I had to attend, and I started the whole process again. This time, despite a good record of entering and leaving the US more than 6 times, I was again kept waiting until I could not wait any more. Again, I got a visa for a single entry, although I had submitted documents showing several conferences to attend in the coming year. Well, if that were all the trouble I met, I would be much happier. The fact is that I was put on a special list of people required to go through a second interview after arriving at the Los Angeles airport. And I had to report to the same office again when I left the US one week later. The reason - “Tech Match” - what is that? Although I do believe that I am a good developer, I never thought that I knew special techniques that would threaten the US or any other country. Because I work on the Eclipse parallel tools platform and do projects on Grid computing? But do those guys really know what those are? All this suggests to me that the US is not as welcoming to people from other countries as it was before.
I really hope this will not happen again this time.

]]>
A Quick Eclipse-based XML Editor Using EMF, Part IA Quick Eclipse-based XML Editor Using EMF, Part I
Introduction

I've recently been dabbling in OSGi technology and found the DS (Declarative Services) feature neat for automagically registering and finding/consuming services. Basically, instead of having to call OSGi APIs for registering Java objects (services), you will just need to "declare" these provided services in an XML file and the DS framework will take care of registering them for you. Similarly, if your application needs a service, instead of finding (and tracking) it yourself in code, just declare it in the XML file and DS will take care of "injecting" it into your object and tracking dynamic registration/unregistration. It is very reminiscent of Spring Dependency Injection and in fact, Spring has a very similar technology called Spring DM (Dynamic Modules). In the end, these technologies are all about decoupling your business logic from framework/container (e.g. OSGi) APIs, which promotes greater code reusability.
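For illustration (all the names below are hypothetical; the element structure follows the SCR chapter of the OSGi compendium spec), a minimal DS component description looks something like this:

```xml
<!-- Hypothetical OSGI-INF/dictionary.xml: registers DictionaryImpl as a
     DictionaryService and asks DS to inject an optional LogService. -->
<component name="org.example.dictionary">
   <implementation class="org.example.DictionaryImpl"/>
   <service>
      <provide interface="org.example.DictionaryService"/>
   </service>
   <reference name="log" interface="org.osgi.service.log.LogService"
              bind="setLog" unbind="unsetLog"
              cardinality="0..1" policy="dynamic"/>
</component>
```

The bundle points at this file via a Service-Component header in its manifest, and DS handles all the register/track/inject plumbing described above.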

Anyway, like I said, the configuration is XML-based, and being the lazy (in a good way ;-)) developer that I am, I wanted an integrated editor in Eclipse to edit these DS files. Nowadays, whenever I see an XML<->Java binding (plus Eclipse integration) problem, my neurons automatically fire "EMF!", and so I set about trying to use this technology to create my own editor.

As a modeling technology, EMF is pretty solid and feature-rich. However, for this case, we don't need to create models because there is already a schema available - it's in the OSGi specs. EMF already has the ability to convert any XSD schema to an ecore model, and vice-versa.

The DS schema looks something like this:

I just copied it from the specs and saved it to a file in my system.

Here, then, is the quick procedure for creating the editor. Are you ready? Don't blink your eyes because this will be F-A-S-T! :-)

3. In the next page you will be asked what packages to import. EMF tries to generate a model filename from the schema contents, but it's sometimes not nice (e.g. _0.ecore). You can rename it if you want; here I renamed it to ds.ecore.

4. When the wizard is finished, you will have a new project in your workspace, with the generated ecore model (ds.ecore). Also, there will be a ds.genmodel file which is a generator model for your model: it tells EMF about code-generator-specific stuff like where to put the generated files, what base package name to use, etc.

You can view and edit these files via the Ecore and GenModel editors already bundled with EMF:

Again, the package name derived from the schema might not look nice ("_0"), so here we can rename it. In the Ecore editor, select the package _0, and in the Properties View, set the Name property to "ds".

5. Now that that's out of the way, we can configure the code generator. First, double-click the ds.genmodel file. It will open the GenModel editor:

We can set the Base Package property to org.example so that the generated package will start with org.example.ds. And also rename the Prefix property to "Ds" (previously it was "_0").

If successful, it will generate the Java code for your model, as well as 2 new projects: a UI-agnostic .edit project which you can re-use outside of the Eclipse environment, and an Eclipse-specific .editor project which is tightly bound to (you guessed it) the Eclipse environment.

The editor application is complete!

7. You can test it by launching a new Eclipse instance from the workbench:

It will run a new Eclipse environment with your core, edit, and editor plug-ins included. In it you can try to create a new DS file via New->Other...->Example EMF Model Creation Wizards->Ds Model.

If you try to double-click this new file, it will invoke the .editor plugin you just created, and show a nice tree-based editor for you!

The great thing is, if you try to look at My.ds using a text editor, you will see the underlying data is fully compliant XML!

What's Next

We've gone from zero (editing XML by hand) to hero (editing the same XML by a neat, integrated, GUI-based editor) in just a few mouse clicks. But that is not all that EMF can do! I just hope this article has gotten you interested enough to want to have a deeper look. :-)

From here, where can our DS Editor application go? Well, I can see a few things for improvement already:

1. The generated class names are ugly!

Did you have a look at the generated model/Java code? There are classes whose names are Tservice, Tproperties, and so on (fugly!). These are a direct consequence of the <complexType name="xxx"> declarations in the schema. It would be nice if the generated class names were more, hmm... how you say... Java-ish? :-p Like Service and Properties, respectively, for example.

In the next parts I will show how to "coax" EMF to generate (nicer, if you wish) class names, so you won't be forced to use the schema's type names if you don't want to. :-)

2. I want more Eclipse IDE integration!

For example, when I edit the attribute interface, I don't want to type the Java interface name in a text box. I want something similar to those nice JDT dialogs where I can search and select for the Java interface:

]]>
Read...+ Write?Read...+ Write?
post I mentioned the possibilities opened up by having access to a C/C++ project's DOM AST within Eclipse. Now there's a new bugzilla in CDT to work on making this AST writable. Aside from the obvious benefit (aka "refactoring"), I can imagine a whole bunch of neat new features that this work could make possible: more intelligent C/C++ code generation, quick fixes, among others.

Eclipse CDT is becoming more and more like the excellent JDT in terms of features and API, and this is great news for us C/C++ developers and Eclipse plug-in developers alike.

]]>
STEM 0.2.1 Now AvailableSTEM 0.2.1 Now Available
Making Eclipse Newsgroups More EffectiveMaking Eclipse Newsgroups More Effective
newsgroups to be a great source of information and probably the best source of documentation outside the included help docs. However, I've always found it hard to find information in the platform newsgroup. The problem is that it has turned into a kind of catch-all for Eclipse users, plugin developers, jface developers, etc. This makes it really hard when you are searching for something on a topic from a particular point of view. A few months ago, I filed this bug to request a change. My proposal is to try to organize the information as follows:

platform.users
platform.pluginDev
platform.jface
...

In order to get some support for this change, I meant to post back then (really, I did). It was brought back to my attention by a comment on the bug, so I thought this would be a good time to see if any others in the community share the same feeling. If not, I'll just have to get better at searching ;)

]]>
Eclipse Monkey &amp; RSEEclipse Monkey &amp; RSE
Lately I have been trying out Eclipse Monkey. The Dash project website describes Eclipse Monkey best: "Eclipse Monkey is a dynamic scripting tool for the automation of routine programming tasks. Monkey scripts are little JavaScript programs using either the Eclipse APIs or custom Monkey DOMs." This tool caught my eye as I always wanted to try out Greasemonkey for Firefox and it looked like it could do the same for Eclipse. Reading the wiki for creating scripts showed that it was very easy to do. Now I just needed an idea for a script.

Nick Boldt had an enhancement request to be able to have two Remote Systems views open at once. Until this is added, we came up with a workaround using the Remote Scratchpad. You might be thinking, Remote Scratchpad? It is one of the views that comes with RSE, but it is hidden behind the Properties view. Its purpose is to let you drag and drop any RSE object into it for later use. It's great for doing copy and paste across connections. So, if I can drag and drop any RSE object into it, I can populate the view to act as a secondary Remote Systems view.

With this bug in mind, I decided to make a script that would copy all of my connections to the Scratchpad and display the Scratchpad view. The script can be found on bug #210574. If you want to try this out, install Eclipse Monkey from the Europa update site and copy the contents of the file attached to the bug. Go to Scripts > Paste inside Eclipse. You have now installed your first RSE Eclipse Monkey script, which clones the Remote Systems view inside the Remote Scratchpad.

]]>
Boosting Eclipse Plugin Development With JavaRebelBoosting Eclipse Plugin Development With JavaRebel
The latest development snapshots of JavaRebel include support for the Eclipse platform. This means that plugin developers can launch an Eclipse Application to test their plugins and make changes to the source code and see the results in the already opened Eclipse instance. No need to relaunch Eclipse instances to see changes happen.

]]>
Automated Eclipse GUI testing the quick and simple wayAutomated Eclipse GUI testing the quick and simple way
We’re very test driven here at Cape Clear; we develop automated tests for everything we do. We’re not strict about writing our tests first. I, for one, write my tests with my code, iterating between one and the other in the course of realising a story (we follow Scrum, a lightweight wrapper process on agile/XP). I wouldn’t dream of writing code without some level of automated test coverage; to me it is meaningless – how do I know the feature code works if I don’t have something that proves it works now, and as I refactor the code base through iterations of the product? Writing tests makes me many orders of magnitude more productive as a developer. I still hear the “lack of time and resources to write automated tests” excuse from developers I know in other companies. Sometimes I argue with them, sometimes I just smile benignly: you don’t need more resources or time to write tests, writing tests gives you more resources and time, and of course results in far superior quality software.

But the fly in the ointment for us has been automating the GUI testing of our Eclipse-based tools. We have extensive junit tests for the non-GUI parts of the tools, like our (WTP) facet install delegates, our builders, our models etc. Eighteen months ago we chose Eclipse TPTP, the GUI recorder and playback toolkit, to automate our GUI tests. Maybe others have had more success with TPTP than we have, but our experience was less than satisfactory. In the end we only achieved a tiny amount of coverage with it, and it is difficult to keep these kinds of tests passing and running continuously across multiple branches of the product. In general, GUI recorder/playback tests are very brittle to even minor changes in the user interaction.

Several things happened recently that made me realise we could and should drop that approach. We started to push our (PDE) junit tests up into the UI, specifically in relation to testing our GMF-based SOA Assembly Editor. We wrote tests that did things like clicking on all the tools in the palette and checking that the edit parts and model elements were created correctly. PDE junit tests run in the UI thread. It struck me that we already had 99% of what we needed to automate our GUI tests from junit. What we did not have was:

a test framework, test APIs, which read like a GUI test specification

the ability to automate the testing of blocking UI elements, namely wizards and dialogs

The first was easy: I took a couple of our WSDL-to-Java project wizard GUI test cases and prototyped the kind of APIs I wanted; they read just like we write our test specs. Then I implemented the APIs, most of which were very thin (but more test friendly) facades on existing APIs and existing test code. That left me with the wizards and dialogs problem. When you launch an SWT wizard or dialog from a PDE junit test, it blocks because it’s waiting on input from the SWT event queue. The blocking happens in the open() method of org.eclipse.jface.window.Window, from which WizardDialog is derived. In an automated test, we want the input to come from the junit test code, not from the SWT event queue. Fortunately, open() is public. I will resist going off on a tangent here about one of my pet gripes: the excessive marking of methods as private and classes as final etc. – let me decide how I want to specialise your code, you cannot see all ends and mostly I know what I’m doing. Anyway, back on topic: so now we have our own CcWizardDialog (which extends WizardDialog), and the code looks like this:

The cctest member is an object that implements a simple callback interface; typically it’s the junit test itself. Adding these kinds of test hooks is no different from ordinary test-driven development: we write our code to be testable by code. Remember that we’re not trying to test SWT or core Eclipse platform components – we know they are well covered, stable, mature; basically, we know they work. And of course we do manually test and use our tools too. The ICcWizard interface looks like this:

Now you can start to see how the junit test reads, just like you’d write a GUI test spec: launch the wizard, set a value in a text box, select a value in a combobox, go to the next page in the wizard, select a radio button, press finish. After which the call stack unwinds back to the junit test, which can then use project APIs to verify the results. I’ve been deliberately vague about ICcWizardPage. How that works is also quite interesting, and very simple. I will detail this and more in another posting. What I really like about the whole approach is that the tests are quick to write and simple to maintain.
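To make the pattern concrete, here is a framework-free sketch of the same idea. All names here are my own invention (not Cape Clear's actual CcWizardDialog/ICcWizard code, which depends on SWT/JFace): the dialog's open() consults an injected test callback instead of blocking on an event queue.

```java
// Hypothetical sketch of the test-hook pattern, stripped of SWT so it runs anywhere.
interface TestDriver {
    void drive(FakeWizardDialog dialog); // plays the role of the cctest callback
}

class FakeWizardDialog {
    private final TestDriver cctest; // null when running interactively
    String enteredText;
    boolean finished;

    FakeWizardDialog(TestDriver cctest) {
        this.cctest = cctest;
    }

    // In the real dialog this would override Window.open(); under test the
    // input comes from the callback, not the SWT event queue, so it never blocks.
    int open() {
        if (cctest != null) {
            cctest.drive(this);
            return 0; // the equivalent of Window.OK
        }
        throw new UnsupportedOperationException("interactive mode not modeled here");
    }

    void setText(String s) { enteredText = s; }
    void pressFinish()     { finished = true; }
}
```

A test then reads like a GUI test spec: construct the dialog with a driver that fills in fields and presses finish, call open(), and assert on the results after the call stack unwinds.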

]]>
2nd and Last round of voting for the Eclipse Ajax Toolkit Framework project name change2nd and Last round of voting for the Eclipse Ajax Toolkit Framework project name change
After a first pass and over 50 responses to the survey, we have a crop of good potential names for a final ballot. I am making a new survey to select the winners: click to vote

Note that this is not a normal vote. The committers WILL HAVE final say and their votes count much more than those of common mortals.
This is what I call algebraic democracy. And the Eclipse Foundation also has its word in the game, as they like to have names that can be trademarked or are clear of trademarks.

]]>
Fighting AIDS with open sourceFighting AIDS with open source
I met last weekend with folks (Paul and Burke) from the OpenMRS project during the Google Summer of Code mentor summit. Their project is important, as it tries to provide an open source platform for managing medical records in economically disadvantaged countries — such as African countries — and to bring its modest contribution to the fight against serious diseases such as AIDS.

It makes all of our work as software developers –open source or closed source– look futile and misplaced.

I found out they use Eclipse BIRT a lot and are on the lookout for a visual XForms editor to create new forms for medical records.

I thought it would be awesome if we could do some collaboration together around a visual XForms editor.

Such an editor would be open source – of course – would fit within the Visual Editor project charter, and, even better, it would be coding for a good cause.

I am calling for volunteers who might be interested in making something like that happen.
If member companies want to contribute in kind or with resources to help (resources are always better, unless you have a visual XForms editor for Eclipse to contribute), that would be awesome.

The Global Scope (as it applies to JavaScript) is the super scope which holds all the objects, types and fields accessible to all methods. It's honestly one of the horrible inefficiencies of JavaScript, as just about everything gets pushed into the Global Scope.

JSDT users can customize a project's Global Scope through the ‘JavaScript Include Path’ page. Each library is a collection of JavaScript files that prototypes all objects within that library. (Additional information can be attached to objects with JsDoc.) When a library is added to a project, all of the prototype code in each library's JavaScript files is modeled and placed in the project's Global Scope. The types, fields and methods are then available for resolution and content completion in every JavaScript file within the project. As a matter of fact, every JavaScript file sees the Global members of every other JavaScript file within a project (unless you’ve done some careful tweaking in the Include Path page).

A piece of JS code running in a browser has access to many of the objects, fields and methods provided by the browser. The browser's ‘Window’ object is an extension of ‘Global’, which extends ‘Object’. Each browser provides different types, fields and methods for almost all of the common objects (and additionally handles these types, fields and methods differently across browsers). Hence the cross-browser compatibility pain points that arise when developing JavaScript.

The JSDT enforces browser compatibility by defining and extending ECMA 3 objects through the libraries. By adding and removing libraries you can nail down the types, functions and fields available in code completion and resolution. This helps ensure your project supports the objects available in specific browsers.

Libraries aren’t just useful for browsers. They work well for toolkits, or common utility scripts across a site.

]]>
Hello EveryoneHello Everyone
First post!

I’m a developer on the JavaScript Development Toolkit
(http://wiki.eclipse.org/JSDT).

If you're unfamiliar with it, the JSDT is a rewrite of the JDT in Eclipse to make it work with JavaScript. This means you get most of the Java language features in Eclipse, but for JavaScript instead of Java.

As many of you are aware, the available JavaScript tooling is small and, frankly, SUCKS. Our goal is to add professional tools that treat JavaScript like a real language. I’ll be posting comments and changes as time progresses.

]]>
A Community Site For Report DeveloperA Community Site For Report Developer
http://www.birt-exchange.com/ for report developers today. It provides typical "community" features such as forums and a wiki. One unique feature on the site is the DevX listing, where anyone can post code examples, report design tips, tutorials, product downloads, consulting services, or any cool thing that one wants to share. DevX items have ratings, review comments, and visit counts that help me find out which of them are really good to pick up. I can also search for items by keyword using the filtering tool.

Today, content on the site is mostly about Eclipse BIRT and the Actuate iServer and iPortal products. But the site is built in a way that it can serve other technology communities too; all we need to do is set up different interest areas and forums for each community. Well, the first step is to find out what works on birt-exchange.com and add features to make it a useful place for report developers.

]]>
Time to Write a BookTime to Write a Book
GMF and the QVT contribution are available in CVS, so it's time to write a book on using the Modeling project as a DSL Toolkit.

Designed to complement the "EMF book" (version 2.0 is out soon!) in the same Addison-Wesley series, the book is scheduled to be completed at the end of March, 2008 and be some 500 pages long. So far, I have just about 100 pages written in draft form, so hopefully it will be a long cold winter indoors.

The book will utilize a series of DSL projects to cover in detail the development of graphical concrete syntaxes (using GMF), model-to-model transformations (using QVT Operational Mapping Language), and model-to-text transformations (using Xpand). In the future, it could perhaps be extended to cover concrete textual syntaxes, if and when the proposed Textual Modeling Framework project becomes a reality.

At this point, I'd be interested in the community's feedback on this book and its scope. To me, a DSL Toolkit should include all aspects of model-driven software development as they relate to a domain (semantic) model. And since the world already has an excellent book on EMF itself, it's about time we had one to cover these other important capabilities in the Modeling project.

]]>
Modeling is the Very Model of a Modern Majorly General and Diverse CommunityModeling is the Very Model of a Modern Majorly General and Diverse CommunityOf course there are some big corporations involved

but also smaller companies

as well as academics and some unique very unique individuals

including more than a few you might easily overlook because they blend in so well.

With the call for EclipseCon talks coming out this week, it's time to start thinking about emerging from your little hidey-hole to share a bit of yourself with others.

I'll bet you have a unique story to tell.

In any case, come to Eclipse Summit Europe and come to EclipseCon to sponge up some knowledge.

Bring a friend.

Hope to see you there!

]]>
New Subclipse Build Posted -- Dialog ImprovementsNew Subclipse Build Posted -- Dialog Improvements
CollabNet Merge Tracking Early Adopter program. The reason being that these builds require a development build of Subversion 1.5, and I am coordinating these builds so that they use the same Subversion binaries (to make it easier to test).

This new build includes some dialog UI improvements I have been wanting to make for a long time. I have really grown to like the simplicity of the CVS commit dialog and have heard comments many times about the general usability difference between the CVS plug-in and Subclipse. So the intent here is to close the gap some more and incorporate some of the same UI while maintaining the Subclipse features that we can. Here is a fairly complicated example that shows most of the features:

There are a few major changes.

The dialog uses a wizard-style UI which is pretty common in Eclipse. This gives us a chance to include a graphic and just generally make the dialog look better.

The presentation of files has changed from a table with checkboxes to the more friendly and graphical mode that CVS uses. There are three presentation models to choose from.

Because we no longer have a table to show data and text, we needed a way to show when there are Subversion property changes. We are using a second decorator to do this. Currently we only use this in these dialogs; there are no plans to do this in Eclipse views.

The biggest change is that you now have to right-click and use Remove from view to not commit something. You used to be able to uncheck a check-box.

Earlier this year I wrote a post that detailed the Features of the Subclipse Commit dialog. You can review that post if you want to compare the differences in the UI. Here is another screenshot that is a little simpler to give another taste of the changes.

I think everyone will agree the dialog looks better. I think where there might be some controversy is in how you decide to not commit a certain file. This new approach is definitely optimizing for the scenario where you typically commit everything in the dialog. Users that work mostly from the Synchronize view, as an example, should really like this better.

Personally, I find this approach more usable. Even though it is a little more difficult to right-click and remove something than it was to uncheck it, the fact that the item no longer shows up in the view makes it more obvious what is going to be committed.

The Revert and Lock dialogs got the same treatment:

Other dialogs like Switch and Create Branch/Tag also received the new wizard look. Please give these builds a try and let me know what you think. The best places to reply would be the Subclipse users@ mailing list or the tracker for issue 682.

]]>
End of gsoc : the show must go on !End of gsoc : the show must go on !
First I would like to thank Philippe and the mentors from Eclipse, the other students (check out their projects, they achieved great results!), and Google of course. Thanks also to the people who followed and commented on this irregular diary of my work :).

I would also like to provide some feedback. From a student's point of view, it was an awesome experience! I learned a lot, both technical skills and communication skills. Eclipse is divided into a large number of projects, and is thus a continuing source of knowledge for "curious" people like me. As for what GSoC brings to Eclipse, I think it helps to integrate students into the Eclipse community, and that is a point where, in my opinion, some progress can be made. For instance, I noticed that for the next Eclipse Summit there is no "registration fee" waiver or invitation for students.

Regarding my plug-in I will provide soon new screencasts with some comments, and detail what are the next plans.

]]>
Impressed by GMF, but a hefty dose of patience is requiredImpressed by GMF, but a hefty dose of patience is required
Over the last couple of months we’ve been developing (should I say “modeling”?) a GMF-based graphical editor to support the new assembly and multi-channel, multi-tenant mediation features of Cape Clear 7.5. Actually, I’ve had pretty much no hands-on involvement with it myself; a colleague of mine championed its use and has become quite expert with it, but I thought I’d pass on some of our experience. We are using GMF 1.0.3 as we are shipping with Eclipse 3.2.2 (we plan to move to 3.3 later in the year). The promise of GMF is rapidly rolled graphical editors with lots of eye candy and neat features for free, achieved with near zero code, or at least approaching zero when compared with GEF-coded editors. Our BPEL editor is GEF-coded, so we have something very substantial to compare our Assembly Editor with. Generally, I am very impressed by GMF, but much patience is required to get the best out of it. See pretty picture (a high-res version is viewable here).

Our core model is described in XML Schema and we generate our EMF model from that. A few points then:

It can seem like there is a huge gap between an initially generated editor (based on your model/schema) and where you know it needs to be. There may well be, but it is not necessarily a code gap. The temptation is to jump in and start extending and modifying generated code; patience is required to do the right thing. Doing the right thing means trawling through examples and forums, posting questions to newsgroups and waiting on answers.

Go back to your source schema, modify and constrain it appropriately to help get the end feature the way you want in the generated editor.

If you have a schema that contains a lot of similar things, start with a subset of the schema which contains just one example of all the distinct elements, and get a working editor for that. Then add back in the other stuff – which is hopefully just repetition. In other words, reduce the problem space to something manageable to start with.

Make one model change then re-generate. The error reporting in the generation process is not very clear and diagnosing one cause and effect at once is by far the easiest way to work. Read all the error information reported and watch for compilation errors in the generated code too.

Make small changes to the model, regenerate and test them. When you are happy, commit those distinct changes to source control. Move on to the next task.

GMF has made it possible for us to have a great graphical editor for our Assemblies in a timeframe we would not have been able to GEF-code one in – or at least, nothing this polished and (almost) ready to ship! Hats off to the folks behind it.

]]>
An Auto-configuration Plug-in for EclipseAn Auto-configuration Plug-in for EclipseHave you ever written a plug-in and thought it would be neat if you could automatically detect the programs the plug-in needs, so you can make life a bit easier for your users? If so, the auto-configuration plug-in for Eclipse, Discovery, is just what you need.

Once you've installed the plug-in, you simply extend two extension points to get the functionality you need. The first extension is for finders; you specify a class that finds the program that you want discovered for your users. The other extension is for consumers; in this extension, you specify a class that inserts the services (the term we use for programs, or any other thing found by the finder) into your plug-in. And that's pretty much it. You can check out our wiki site to read the documentation and download a few source examples.

Note that Discovery is built on top of ECF, so if you've used ECF's API, you will have no trouble working with Discovery.

]]>
Hotel Europe in Amsterdam!Hotel Europe in Amsterdam!
As my vacation trip had a stopover in Amsterdam, I was pleased to notice that they named a hotel after Eclipse's latest and greatest release :-):

]]>
The beginning of the endThe beginning of the endIn other news, I spent most of last week creating a Web site. In the process, I read the CSS 1.0 specification from beginning to end. Compared to the CSS 2.0 spec, CSS 1.0 is simple, small, and darn elegant. Of course, things like absolute positioning become a bit of a pain, but you really wonder if they couldn't have made CSS more powerful without adding so much more stuff to it.

]]>
compiling windows apps on linuxcompiling windows apps on linux
]]>
Cross platform development with the Eclipse CDTCross platform development with the Eclipse CDT
]]>
Few questions if we may...Few questions if we may...jLibrary 1.1 has been releasedjLibrary 1.1 has been released
jLibrary 1.1 has been released. jLibrary is a very easy to use Document Management System that can be used from the desktop using the provided Eclipse RCP based application, and that is built on top of Apache Jackrabbit, the JSR-170 reference implementation. You can take a look at the changes summary to see all the changes; a short summary is below, though.

The server has been improved with a new HTTP tunnelling layer replacing the old web services one, improving scalability and lowering memory usage. The Web Services layer is now an optional add-in. Documents can also have custom properties that you can add, remove, search on, etc. Other important changes are the migration to Maven 2, an easier-to-use build system, and the addition of plenty of unit tests to help developers start coding with jLibrary.

On the client side, many bugs have been fixed; the core system has been migrated to Eclipse 3.2, the build system is now also much easier, and stability has been improved.

And finally, thanks to the Eclipse Maven PDE plug-in and the new deployment system, there are now stable versions available for 32- and 64-bit Linux and for Mac OS X.

Hope you like it!

]]>
JMX Scripts with Eclipse MonkeyJMX Scripts with Eclipse Monkey
Continuing the series about “writing JMX scripts in a dynamic language”, after Ruby (part I & II), let’s do that in JavaScript.

Aside from the use of a different scripting language, this example differs completely from the Ruby one in its context of execution: it will be integrated into Eclipse and called directly from its user interface (using Eclipse Monkey as the glue).

The example will:

graphically ask the user for a logging level

update all the JVM’s loggers with this level

display all the loggers of the JVM

in 50 lines of code.

This example is simple but it implies several interesting steps:

connect to a JMX Server

retrieve a MBean

retrieve value of MBean attributes

invoke operations on the MBean

There are many use cases where you have to perform these steps repeatedly. It’s tedious to do that in a JMX console (e.g. jconsole or eclipse-jmx), and most of the time it is not worth writing a Java application.

These use cases beg to be scripted.

We will again use jconsole as our managed java application (see this previous post to start jconsole with all System properties required to manage it remotely).

logging represents an MBean (it is not a “real” MBean, more on that later) and mbsc represents an MBeanServerConnection (but it is not a “real” MBeanServerConnection, more on that later). logging.LoggerNames returns the value of the LoggerNames attribute of the MBean (note that the first letter must be in upper case), which is an array of strings.
For each element of this array, we invoke the setLoggerLevel operation using mbsc.invoke().
This method is very similar to the “real” MBeanServerConnection.invoke() method; it takes:

something representing an MBean (instead of an ObjectName)

the MBean operation name

the parameters of the operation

the types of the parameters
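For comparison, the same steps can be sketched in plain Java against the in-process platform MBeanServer, using the standard javax.management API rather than the jmx DOM (the class name and helper method here are my own):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class LoggerLevels {
    // Set every java.util.logging logger to the given level via JMX.
    public static void setAllLoggerLevels(String level) throws Exception {
        // connect to a JMX server (here: the in-process platform MBeanServer)
        MBeanServer mbsc = ManagementFactory.getPlatformMBeanServer();
        // retrieve the Logging MBean by its well-known ObjectName
        ObjectName logging = new ObjectName("java.util.logging:type=Logging");
        // read the LoggerNames attribute (an array of logger names)
        String[] names = (String[]) mbsc.getAttribute(logging, "LoggerNames");
        // invoke the setLoggerLevel operation on each logger
        for (String name : names) {
            try {
                mbsc.invoke(logging, "setLoggerLevel",
                        new Object[] { name, level },
                        new String[] { "java.lang.String", "java.lang.String" });
            } catch (Exception e) {
                // a weakly-held logger may have been GC'd between the two calls
            }
        }
    }

    public static void main(String[] args) throws Exception {
        setAllLoggerLevels("FINE");
        System.out.println(java.util.logging.Logger.getLogger("").getLevel());
        // prints FINE
    }
}
```

The Monkey script wraps exactly these calls, which is why its invoke() takes the same operation name, parameter array, and parameter-type array.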

jmx: an Eclipse Monkey DOM

What do I mean when I write that logging is not the “real” LoggingMXBean and that mbsc is not the real MBeanServerConnection?

These two types of objects are created by the jmx object in the main() method. This jmx object is in fact an Eclipse Monkey DOM that is contributed by the plug-in listed in the DOM directive at the top of the script:

This plug-in was included in the “JMX Monkey” feature which was installed the first time you ran this script.

The jmx DOM has a single method connect(host, port) which connects to a JMX Server using the standard JMX Service URL.
The object returned by this method is a ScriptableMBeanServerConnection. This class encapsulates the “real” MBeanServerConnection (still available via the getObject() method) but exposes only its invoke() method.

It also exposes a getMBean() method which returns a ScriptableMBean. In turn this class exposes the attributes of the MBean as JavaScript attributes.

To sum up, these are the operations you can perform using the jmx DOM:

connect to a JMX server: mbsc = jmx.connect(host, port)

get an MBean: mbean = mbsc.getMBean(objectName)

get the value of an MBean attribute: val = mbean.AttributeName

get the “real” MBean server connection and use it: objectNames = mbsc.getObject().queryNames(name, query)

invoke an operation on an MBean: mbsc.invoke(mbean, operation, params, param_types)

Conclusion

This script example is simple but quite interesting thanks to its integration with Eclipse.

I believe there is a use for such scripts: repeatable management operations that need to be tweaked from time to time.
It’s tedious to do that with a GUI and it’s even more tedious to write Java applications to do so.

Last year, during EclipseCon’06, I blogged about a use case for scripting an RCP application using Eclipse Monkey.
This is a concrete example: I’m using eclipse-jmx to manage Java applications that I develop. When I realize that I perform the same kind of management task, I write a monkey script which automates it.

Next time you have to perform the same operation on many MBeans, or many operations on the same MBean, and you think it is not worth writing a Java application to automate it, ask yourself whether it could simply be automated by a script such as the one in this post.

I need YOU to help Install/Update!
Pascal 'LeNettoyeur' led me to believe in the shuttle from Santa Clara to San Francisco airport after this year's EclipseCon. I am still not convinced that 'features suck' as Ed Merks likes to yell when protected by the EclipseCon crowd (even he expressed concern that they may end up with the same 'sucky' thing with a different name :-). Nevertheless, I am sure that our Equinox friends will come up with something new and shiny and that it will take the world by storm almost as much as OSGi did.

But let's take our eyes off the future and focus on the here and now. While we are dreaming (or should I say 'provisioning') new Update dreams, the old Update still needs to work and tide us over, or the new Update will be crushed by unrealistic expectations before the birth pangs subside. There is Europa to ship, new features to post, patches to publish for all that brilliant software that was bug-free when you unleashed it on the unsuspecting public. Unfortunately, Install/Update is down to one active committer (yours truly), and he is a manager with a full plate. That's as if we are down to 0.1 committer with a short attention span :-).

As I am typing this, Alex Blewitt ran away with the prize by blogging about this first. I am not going to out-Alex him but here is the plan for a few good men (or women - we are equal opportunity here :-):

Pick a bug that you would really like to see fixed and that has enough steps to be reproducible (but call it first so that others don't investigate the same problem)

Pick a fairly recent build (M6+) and set it up for Update development using these instructions (read under 'Feature-based self-hosting')

Try to pinpoint the problem; if you feel confident, try to fix it

If it seems to work, post a patch to the bug report, but be diligent - we want to fix existing problems, not create new ones, right?

What's in it for me, you ask? You can make your code a proud part of Eclipse Platform (names and emails of contributors will be prominently displayed in file copyright notices). You can get a high from fixing a hard problem. At the end of 3.3, we will create an 'Update Hall of Fame' with pictures and short bios of top Update contributors. Finally, you will get to help yourselves by fixing problems that affect your own projects and components (he is looking at you, Mylar :-).

Last but not least, you will learn the whole problem domain of installing and updating bundles. When the time comes to switch to the new and shiny Equinox Provisioning, you will know what works, what doesn't, what you like and what is, to use Ed's immortal words, sucky.

Alessandro Ribeiro also has an interesting three-post piece where he shows how to use the JDK 1.5 ‘jps’ and ‘jstat’ tools to help diagnose the same problem (hint: “jstat -gcpermcapacity” will give you a nice summary of the status of the VM with the target PID).

Users have been running into this when running Eclipse-based products on Sun’s JDK 1.5 for some time, and Eclipse Bug 92250 has been quietly collating lots of feedback and suggestions from the Eclipse community along the way. A lot of people have been lurking/waiting/hoping that someone would identify the source(s) of the issue and produce a fix, so it is good to see increased activity there. However, the recent debate about whether this is a Sun or an Eclipse issue (the debate is where I found links to the above blog posts – thanks Karsten!) might be a little discouraging, but it appears there is acknowledgement that some responsibility lies within the Eclipse community to help diagnose the problem.

I’d imagine many adopters would be willing to lend a hand in diagnosing this since it is a potential stability issue for all Eclipse-based products when run on what is the de-facto Java VM. However, I suspect any such analysis effort would need to be done in coordination with some platform/WTP folks since they would need a good insight into the expected behaviour of all of the active bundles to be able to determine which are at fault (or if indeed the fault lies with the VM).

What intrigues me about this issue is that Eclipse/Equinox doesn’t normally destroy classloaders so classloaders not getting garbage collected can’t be the source of the problem. But with that assertion, how can any application be hitting a 512Mb permGen limit? It does smell of a VM bug but then why doesn’t it happen with other Java based applications? Clearly some feature common to many Eclipse applications is rubbing the Sun VM up the wrong way.

I guess the only way to find out is to do a very deep analysis but a pre-requisite for that is to find a simple use case that can reliably and repeatedly reproduce the problem. We don’t seem to have one with our product but perhaps someone else out there does.

TextProcessor and BIDI

Let's start from the javadoc:

"This class is used to process strings that have special semantic meaning (such as file paths) in RTL-oriented locales so that they render in a way that does not corrupt the semantic meaning of the string but also maintains compliance with the Unicode BiDi algorithm of rendering Bidirectional text.

Processing of the string is done by breaking it down into segments that are specified by a set of user provided delimiters. Directional punctuation characters are injected into the string in order to ensure the string retains its semantic meaning and conforms with the Unicode BiDi algorithm within each segment."

So where do I use it?
Anywhere the string you are displaying has a series of segments whose ordering is the same no matter what language you are displaying. The most common examples are file associations, file paths, URLs and URIs.
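For illustration, typical usage looks something like this (a sketch assuming org.eclipse.osgi.util.TextProcessor from the org.eclipse.osgi bundle is on the classpath; the custom delimiter string in the URL call is just an example, not a recommended set):

```java
import org.eclipse.osgi.util.TextProcessor;

public class TextProcessorSketch {
    public static void main(String[] args) {
        // A file path rendered in an RTL locale: process() injects directional
        // punctuation so the segment order stays fixed across languages.
        String path = "d:\\myFolder\\FOLDER\\myFile.java";
        String display = TextProcessor.process(path);       // default delimiter set

        // Custom delimiters can be supplied for other segmented strings, e.g. URLs.
        String url = TextProcessor.process("http://www.eclipse.org/equinox", ":/.");

        // deprocess() strips the injected directional characters back out.
        String original = TextProcessor.deprocess(display);
        System.out.println(original.equals(path));
    }
}
```

In a non-bidirectional Locale both calls are pass-throughs, which is why the round trip above is cheap.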

When is it processing Strings?
The TextProcessor will do bidirectional String processing anytime you are running with your Locale set to one of the four bidirectional languages (Arabic, Farsi, Hebrew or Urdu).

Will it process if my layout is right to left?
Only if your Locale is one of the languages mentioned above.

Do these extra characters get picked up if I copy the String?
Yes. In some applications that do not try to interpret these characters (such as Notepad) they will show up as control characters.

How do I strip the extra directional characters out?
TextProcessor#deprocess(String) will allow you to remove them.

What should I do about input fields for these Strings?
The Eclipse Platform tries to avoid creating behaviour that is inconsistent with the operating system as much as possible. As a result we do not process the input String - the user gets whatever the operating system wants to show them.

Once again we would love to hear from anyone who is using our bidirectional support day to day to give us more feedback.

Bidirectional Support FAQ

1) What is the difference between using -dir rtl and -nl iw?

-dir rtl was a command line parameter that Help was using before the workbench added its general BIDI support in 3.1. When we added the BIDI support ourselves we continued to support the old flag, but we also felt that the -nl command line parameter made more sense. If you use an Arabic, Farsi, Hebrew or Urdu locale as the argument to the -nl parameter you will get right to left support.

The TextProcessor does not support the -dir rtl parameter, so it is recommended that you use the -nl parameter to get all of the bidi support. Note that TextProcessor will process BIDI strings whenever the Locale is bidirectional, whether the Locale was set by -nl or taken from the OS.

So if you use -dir rtl your orientation will be set to right to left but your paths etc. will not use the text processing.

2) What is the difference between TextProcessor and Window#getDefaultOrientation ?

TextProcessor is a class supplied by OSGi used to process strings that have special semantic meaning (such as file paths) in RTL-oriented locales so that they render in a way that does not corrupt the semantic meaning of the string but also maintains compliance with the Unicode BiDi algorithm of rendering bidirectional text (from the javadoc).

So for instance http://www.eclipse.org/ will render the entire String in left to right order but will process each segment right to left.

org.eclipse.jface.Window#getDefaultOrientation is a method used to determine the default orientation for windows. If it is not set the default value will be unspecified (SWT#NONE) (also from the javadoc).

Dialogs, Windows, IWorkbenchParts and FormToolkits use this value to determine their default orientation. All of these classes allow the orientation to be overridden.

3) Why do I not get bidirectional text processing in left to right orientations?

Using the TextProcessor to evaluate paths is slower than not processing them at all (of course). For performance reasons we do not use TextProcessor unless you are in a bidirectional Locale.

4) How do I set my inputs to use bidirectional support?

All input processing is done using the platform widgets. Eclipse generally does not try to work differently than the OS, so we use the OS input methods.

5) What happens if I use multiple direction parameters?

There is a precedence among these parameters. As documented in the help at org.eclipse.platform.doc.isv/reference/misc/bidi.html:

The orientation of the workbench is determined in one of the following ways (in order of priority):

-dir command line parameter. If the -dir command line option is used this will be the default orientation. Valid values are -dir rtl or -dir ltr.

system properties. If the system property eclipse.orientation is set this will be used. The recognized values of this property are the same as the -dir command line argument.

-nl command line parameter. If the -nl option is used and the language is Arabic, Farsi, Hebrew or Urdu the orientation will be right to left.

Failing all of the above, the platform defaults to a left to right orientation.
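The priority list above can be sketched as a tiny hypothetical helper (none of these names are Eclipse API; it simply encodes the documented order):

```java
import java.util.Arrays;
import java.util.List;

public class OrientationPrecedence {
    // Arabic, Farsi, Hebrew ("iw" is the legacy code, "he" the modern one), Urdu
    static final List<String> RTL_LANGS = Arrays.asList("ar", "fa", "iw", "he", "ur");

    /** Returns "rtl" or "ltr" following the documented priority order. */
    static String resolve(String dirArg, String orientationProp, String nlLang) {
        if (dirArg != null) return dirArg;            // 1. -dir command line parameter
        if (orientationProp != null) return orientationProp; // 2. eclipse.orientation system property
        if (nlLang != null && RTL_LANGS.contains(nlLang))
            return "rtl";                             // 3. -nl with a bidirectional language
        return "ltr";                                 // 4. default: left to right
    }

    public static void main(String[] args) {
        System.out.println(resolve("ltr", null, "ur")); // -dir wins over -nl
        System.out.println(resolve(null, null, "fa"));  // -nl alone flips orientation
        System.out.println(resolve(null, null, "en"));  // default
    }
}
```

Note this models window orientation only; as the text says, the TextProcessor checks just the Locale and ignores this precedence.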

The TextProcessor is only checking Locale and does not use this precedence.

So...

-dir rtl will give you right to left windows but no text processing

-nl fa will give you right to left windows and text processing

-nl ur -dir ltr will give you left to right windows and text processing

Starting Eclipse in a bidirectional Locale with no arguments will give you left to right windows and text processing

6) Is right to left ever the default?
No. When we initially did this work we made right to left the default for users in the right to left locales, but the overwhelming response was that they developed in left to right and that we should only switch orientation if explicitly asked to.

So if you start Eclipse on an Urdu machine with no command line arguments you will get left to right orientation. If you start Eclipse using the command line argument -nl ur on the same machine you will get right to left orientation.

7) Who uses this support?
This is actually a question for you. We know there are lots of users who use Eclipse left to right in a right to left locale, but we are interested in hearing from:

Users who use right to left for their Eclipse development - especially if they want some right to left support changes in JDT or EMF.

Users who use left to right but frequently have to deal with right to left strings and find the lack of bidirectional text support problematic

If you are one of these people please log a bug to Bugzilla or add a comment to this blog.

IProxyService: a service that allows clients to access and modify the proxy settings for HTTP, HTTPS (or SSL) and SOCKS and ensures that the values specified are put into the corresponding Java system properties. This service is located in the org.eclipse.core.net plug-in and there is an associated preference page for setting the proxies.

IJSchService: a service that complements the JSch SSH2 client by ensuring that JSch is properly configured (using settings from the SSH2 preference page) when clients attempt to make SSH2 connections. The service also uses the proxy service to configure JSch proxies. The JSch service is found in the org.eclipse.jsch.core plug-in while a generic prompter can be found in the org.eclipse.jsch.ui plug-in.

In order to use these services, you need to obtain them using OSGi. Neil provides some general articles on working with OSGi services. To illustrate, here's an example of how you can use the JSch service (and, indirectly, the proxy service) to make a connection using a proxy.

The first thing you need to do is add a service tracker in your bundle activator (or, in the old world lingo, add a tracker to your plug-in class). Here's the code I needed to add to my Activator class.
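A hedged sketch of such an Activator follows; the ServiceTracker pattern itself is standard OSGi, but the Plugin superclass and the accessor names here are illustrative assumptions, not the author's exact code:

```java
// Sketch of an Activator that tracks IJSchService; assumes the
// org.eclipse.jsch.core and OSGi framework bundles are on the classpath.
import org.eclipse.core.runtime.Plugin;
import org.eclipse.jsch.core.IJSchService;
import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

public class Activator extends Plugin {
    private static Activator plugin;
    private ServiceTracker tracker;

    public void start(BundleContext context) throws Exception {
        super.start(context);
        plugin = this;
        // Open a tracker so the service can be looked up lazily on demand
        tracker = new ServiceTracker(context, IJSchService.class.getName(), null);
        tracker.open();
    }

    public void stop(BundleContext context) throws Exception {
        tracker.close();
        plugin = null;
        super.stop(context);
    }

    public static Activator getDefault() {
        return plugin;
    }

    public IJSchService getJSchService() {
        // May be null if the org.eclipse.jsch.core bundle is not running
        return (IJSchService) tracker.getService();
    }
}
```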

The above connection methods will use any proxies that are specified in the proxy service. However, with the JSch service, you can also use proxies with other types of connections (e.g. CVS pserver). Here's how:
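A hedged sketch of that lookup (getProxyForHost and the IProxyData constant are from org.eclipse.jsch.core and org.eclipse.core.net as I recall them - verify against the javadoc - and the Activator accessor is assumed to expose the tracked service as described above):

```java
// Fragment only: assumes an Activator that exposes the tracked IJSchService.
import com.jcraft.jsch.Proxy;
import org.eclipse.core.net.proxy.IProxyData;
import org.eclipse.jsch.core.IJSchService;

IJSchService service = Activator.getDefault().getJSchService();
// Ask the service for a proxy suitable for reaching the given host
Proxy proxy = service.getProxyForHost(hostName, IProxyData.SOCKS_PROXY_TYPE);
```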

If the returned proxy is null, you can make a direct connection. Otherwise, you can use the Proxy#connect method to connect to a specific host. There's a helper method on the JSch service to connect to a proxy in a responsive fashion (i.e. respond to cancellation) that you can use as well.

service.connect(proxy, hostName, port, timeout, monitor)

So, there you have it. No more need to define static classes for accessing singleton services. Initially, I found accessing the OSGi services a bit more cumbersome than the static class approach but I think the added flexibility is worth it.

Have fun at EclipseCon!

I have three special requests:

If you take any pictures be sure to upload them to flickr and tag them with 'EclipseCon 2007'. Please indicate whether or not it's ok to use the pictures elsewhere (for example I might put together a gallery for ZDNet if there are enough pics). Here are the pics for last year if you'd like to reminisce.

I'm a huge Dilbert fan so if somebody could get Adams' autograph for me I'd be eternally grateful.

If the new Ambassador has a party this year, be sure to go and get to know a few new people. Last year's party was a blast. The best thing about Eclipse is its community, and that means you. (Just be careful not to stand between Steve and the beer cart, that could be hazardous to your health. :) )

Have fun!--Ed

Eclipse in a browser – Eclipse Rich Ajax Platform (RAP)
I’ll not be at EclipseCon this year (it conflicts with polishing our next major product release this week) but if I were there I’d be going to any sessions related to the new Rich Ajax Platform (RAP) incubator project. Think the Eclipse UI running in your browser, with the Eclipse application, developed as plug-ins, running remotely on a web server.

The Yoxos Eclipse On Demand online Eclipse distribution builder is a good example of the types of applications this framework could support. In fact, given that Yoxos is run by Innoopract, the folks who are driving the RAP project, it is probably the best example out there. It’d be interesting to find out how far along they are with their RWT API implementation – attempting to build a remote client framework based on a local client framework like SWT is perhaps a dangerous thing to attempt, but we’ll see how they fare.

If I'm going to screw up, why does it have to be so public?

Normally that would be a good thing, except the title used for the story was "How to get 66.6 TeraFlops for $600". If you go to the link, it says "How to get 520 GigaFlops for $600". Just a tiny difference.

While the original title was much cooler, it was wrong. I didn't discover the error until about a half hour after hitting the 'Publish' button. The 66.6 TFlops number was based on two sentences in the GeForce 8800 Architecture Technical Brief. The first says:

Teraflops, plural. Later it says that the card has 128 stream processors and that:

Each stream processor on a GeForce 8800 GTX operates at 1.35 GHz and supports the dual issue of a scalar MAD and a scalar MUL operation, for a total of roughly 520 gigaflops of raw shader horsepower.

Hmm, so if each stream processor gets 520 GFlops and there are 128 stream processors, that works out to 66.6 TFlops, right? Well, that's what it looked like to me. But if you look closely, 1.35 GHz times 3 flops per cycle = 4.05 GFlops per processor, not 520. The 520 number was for the whole card (128 * 4.05 = 518.4).

Doh!
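The corrected arithmetic is easy to verify with a throwaway program (nothing here is from the original article beyond the numbers themselves):

```java
public class FlopsCheck {
    public static void main(String[] args) {
        double clockGHz = 1.35;   // per-stream-processor clock
        int flopsPerCycle = 3;    // a MAD (2 flops) dual-issued with a MUL (1 flop)
        int processors = 128;

        double perProcessor = clockGHz * flopsPerCycle;  // 4.05 GFlops, not 520
        double total = perProcessor * processors;        // 518.4 GFlops, i.e. roughly 520

        System.out.println(perProcessor + " GFlops per processor");
        System.out.println(total + " GFlops for the whole card");
    }
}
```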

Since ZDNet's blogging system doesn't let me preview my articles, I do my final editing pass after clicking 'Publish' so I can see the article in context. It was during this pass that I saw the numbers didn't add up and I went back to double-check the figures. I corrected the article, but by then the damage was done. People were starting to comment on it, and it had been dugg. One poster said that a card that delivered 66.6 TFlops "gives new meaning to 'fast as Hell'". Heh.

To make matters worse, when I got the ZDNet email alert several hours later it also had the wrong title. It's really too bad because it would still have been noteworthy with the smaller number. I sent a note to digg's feedback address to see if they could fix it there, but got no response.

Mea culpa. Sometimes, the internet is a little *too* fast.

All I can say is, "The devil made me do it".

Add a filter to a TreeViewer
In eclipse-jmx, a plug-in to manage Java applications through JMX, I have a view which displays registered MBeans using a TreeViewer.
Since there can be many MBeans to display (e.g. more than 200 just for Tomcat), it is tedious to navigate in the Tree by expanding many nodes before finding the MBeans I want to manage.
To make it more usable, I wanted to add a filter text to the view to show only the MBeans