March 03, 2015

If you aren’t aware of the Open Source Initiative (OSI), you should be. They are a fantastic not-for-profit organization responsible for the Open Source Definition (which everyone should read at least once in their lives), and they maintain a list of compliant license definitions on top of promoting open source across the world.

They are also a membership-driven organization, supported by individuals and affiliates. As far as I know, they are the only organization that brings together such a variety of open source individuals and institutions to cross-promote ways to work together and improve the adoption of open source software.

They are also in the last month of their membership drive, so if you’re interested in supporting their cause, I highly recommend you consider joining as a member.

Also, more selfishly, the OSI currently has nominations open for its board of directors election, in which I’m taking part. The current nominations include a great group of folks from all over the open source ecosystem, and I’d love to have the opportunity to serve; my plans include expanding corporate membership and more.

So please consider supporting the OSI and vote your interests; they really do make the greater open source community a better place.

March 02, 2015

If you’ve already gotten started with trying out
Docker and JBoss Tools, and want to see what other options are available, this blog post will
explain how to run WildFly in a Docker container and configure it for management tasks and
managed deployments.

Customizing your Dockerfile

Since the default jboss/wildfly Docker configuration doesn’t expose the management port, we’ll need
to customize it, and this means writing your very first Dockerfile. Our goals here are
to add a management user in the Dockerfile and to expose the management port when
running the container.

First, you’ll want to make a folder on your filesystem that you can play around in;
I’ll name mine docker_jbds_wf_mgmt.
Inside this folder, we’ll make a new file named Dockerfile and give it the following content:
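A minimal Dockerfile along these lines might look like the following (the add-user.sh invocation and the example password are assumptions of mine, not the original listing):

```dockerfile
# Extend the official jboss/wildfly image
FROM jboss/wildfly

# Add a management user (example credentials only; customize the password)
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin123! --silent

# Launch standalone.sh with the public and management interfaces bound to all hosts
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
```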

You can see that we’re extending the original jboss/wildfly container.
In the above RUN command, we’re also adding a management user. Feel free
to customize the password as you wish.

Exposing actual or sensitive passwords in a publicly accessible Dockerfile is dangerous.
Make sure to only use example credentials in publicly available Dockerfiles.

Finally, we make sure that when the Docker container runs
standalone.sh, it also binds the management interface to
all possible hosts, exposing it for use.

Building your new Dockerfile

You can give your new configuration any name you want when building it.
I’ll build mine here, and give it the name wildfly-mgmt:

docker build --tag=wildfly-mgmt .

Running your new Dockerfile

To run your new configuration, run the following command, replacing the last
parameter with the name you chose when building the Dockerfile.

docker run -it -p 8080:8080 -p 9990:9990 wildfly-mgmt

Note that, unlike the previous post, we do not need to launch with custom volume mappings.
All we need is the addition of the management port.

Configuring a Server in JBoss Tools

When creating the server, remember to set the host to dockerhost (at least on OS X and Windows).

Since we’ve configured the server to be remote, and for communication with the server
to be handled over the management port, we mark it as Remote and
controlled by Management Operations in the second page of the New Server wizard.
We also don’t require a runtime here, though we may need one later when creating
a functional web project with classes. For now, we won’t create one. You’ll also
note that we have marked that the Server lifecycle is externally managed,
which means we won’t be starting or stopping the server via JBoss Tools, since
you’ll be controlling that via Docker on your own.

On the next page, you’ll note that our remote runtime details are optional.
Since the server is configured only for management operations, we have no real need
to know where the filesystem is located or how to access it. We can safely ignore
this page and just proceed through it.

Now, your server is created, but we still need to set the management credentials.
First, double-click your Server in the Servers View to open the Server Editor.
Then, set your credentials to match those you used in your Dockerfile.
You’ll note that some default values are already there; you’ll need to
delete them and set your own values.

Creating Your Web Project

In this example, we can create a very simple web project by browsing to
File → New → Dynamic Web Project. Once the web project is created, we can
create a simple index.html in the WebContent folder.

Starting the Server

Now that everything’s set up in Eclipse, we can start our Docker container as we mentioned before:

docker run -it -p 8080:8080 -p 9990:9990 wildfly-mgmt

Starting the Server Adapter

In Eclipse, we can now right-click our server, and select Start. This
shouldn’t launch any commands, since we marked the server as Externally Managed.
The server adapter is configured to check over the management port at dockerhost:9990
to see if the server is up or not, so it should quickly move to a state of [Started, Synchronized].

Deploying the Web Application

We can now right-click on our index.html project, and select
Run As → Run On Server and follow the on-screen directions to deploy
our web application. We should then notice the Eclipse internal browser
pop up and display the content of our index.html file.

Conclusion

In this second example, we’ve seen how to install and configure a
WildFly Docker image customized for management operations in JBoss Tools.

To summarize, here are the steps needed:

Create your own Dockerfile that uses the existing jboss/wildfly configuration, but also adds a management user and starts the server with the management port exposed

Start Docker with ports 8080 and 9990 mapped

Configure the server to run on dockerhost, using remote management settings, with its lifecycle externally managed

As you hopefully noted, this kind of setup is much more straightforward (no messing with paths); unfortunately, it
has a downside: all publishes are full publishes over the management API, so incremental updates will not work in this case.

In a future example, I hope that we’ll see how to create an image customized for SSH access,
which will allow starting and stopping the server and support incremental updates.

So you’ve heard the buzzwords and all the hype around Docker,
but you haven’t had the chance to play around with it yet or see if it fits your needs.
You’re already an avid user of JBoss Tools, and are fairly familiar with starting and stopping
your local or remote Wildfly installation, and deploying web applications to it.

If this describes you, and you’re interested in trying out Docker, then this blog is targeted to you.

Running the default wildfly image

These images are JBoss’ standard Docker images, and they expose no more than
the bare minimum needed for production and reuse. This first blog will show how to use them as-is, but going
forward we will show how to configure them to be a bit more useful for development use cases.

To make sure your Docker installation works, and that WildFly can
start without any errors, you can do the following:

docker run -it -p 8080:8080 jboss/wildfly

This command will run the jboss/wildfly Docker image in its default state, with no customizations, and
map port 8080 on your dockerhost to port 8080 of the running jboss/wildfly container.

Once this is run, the command will not only start up your container, but also launch the server
in standalone mode, and connect a terminal to it so you can see the output.

Here I only have one container running; you might have more. To
kill any container, execute docker kill f70149043400, replacing
the hash with the value from the CONTAINER ID column of docker ps.
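For example (hypothetical output; your container ID and columns will differ):

```shell
docker ps
# CONTAINER ID   IMAGE           STATUS         PORTS
# f70149043400   jboss/wildfly   Up 5 minutes   0.0.0.0:8080->8080/tcp

docker kill f70149043400
```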

Externally Managed Local Server With Deployment Folder Mapping

Since the Docker image in this example does not have SSH enabled, and the WildFly server
does not expose the management port, we will need to configure JBoss Tools to use custom filesystem deployments.
The way to do this is to map in a local folder from our host into our container.

And since we start WildFly via the Docker image on the command line,
we’ll want to configure our server adapter in JBoss Tools as an externally managed server,
so it will manage neither the start nor the stop of the server.

Mapping a deployment folder

For this example, we’ll make a temporary directory somewhere on our host,
and tell our Docker container to treat that as the standalone/deployments folder inside the container. In this way,
changes made to the folder can be visible in both the host and container.

Mapping folders may cause IO errors with SELinux! To ensure your container can actually read and write to the folder,
you’ll need to run setenforce 0 to disable SELinux, or, alternatively, give Docker the necessary permissions and exceptions in SELinux.
See this Stack Overflow post for more information.
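Assuming the temporary folder is /home/rob/tmp/dockertest1 (the path used later in this post), the run command with the volume mapping would look something like:

```shell
docker run -it -p 8080:8080 \
  -v /home/rob/tmp/dockertest1:/opt/jboss/wildfly/standalone/deployments/:rw \
  jboss/wildfly
```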

The ':rw' at the end is important; without it, the Docker container can only read from the folder, not write to it.

If you now place a .war file inside /home/rob/tmp/dockertest1, it will be picked up by the deployment scanner,
and be visible in a web browser. Take note of the folder, since we’ll use it when configuring the server adapter.

Making your Server Adapter

The final step of this example is to create your Wildfly 8.2 server adapter in JBoss Tools,
and to create and deploy a web application to your temporary folder, which in my case is
/home/rob/tmp/dockertest1

First, we’ll open the Servers View and create a new Wildfly 8.x server adapter.
Since we’re exposing our container’s ports on dockerhost, we need to set the host to
dockerhost.

The server should still be marked as Local and Controlled by Filesystem and shell operations.

Since we’re hacking a Local Filesystem server adapter to work for Docker, we’ll still need a local
runtime in this example, so point it to any locally installed WildFly server you have.

Configuring your Server Adapter

Once your server is created, you’ll find it in the Servers View, where we can double-click
it to open the Server Editor. From here, we can make what configuration changes we’ll need.
First, we’ll need to make sure the server is Externally Managed. This means
JBoss Tools will not attempt to start and stop it from Eclipse; it
expects you, the user, to handle that via Docker.

Next, we’ll need to disable the tooling for keeping deployment scanners in sync with
the locations JBoss Tools expects to be deploying. Since we’ve already mapped the folder
in via the Docker command line, we won’t need any additions to the deployment scanners at all.

And finally, on the Deployment tab of the Server Editor, we’ll want to
mark the default deploy folder as a Custom location, and choose the folder
that we previously mapped in via Docker’s command line.

Once all this is done, we can save the editor, and our server adapter is configured properly.

Make a Web Project

In this example, we can create a very simple web project by browsing to
File → New → Dynamic Web Project. Once the web project is created, we can
create a simple index.html in the WebContent folder.

Starting the Server

Now that everything’s set up in Eclipse, we can start our Docker container with the port and volume mappings described earlier.

Starting the Server Adapter

In Eclipse, we can now right-click our server, and select Start. This
shouldn’t launch any commands, since we marked the server as Externally Managed.
The server adapter is configured to check dockerhost:8080 to see if the server is
up or not, so it should quickly move to a state of [Started, Synchronized].

Deploying the Web Application

We can now right-click on our index.html project, and select
Run As → Run On Server and follow the on-screen directions to deploy
our web application. We should then notice the Eclipse internal browser
pop up and display the content of our index.html file.

Congratulations - you just used JBoss Tools to deploy a local running Docker hosted WildFly server.

What could be better?

The default Docker image is restricted: it does not have the
management port exposed, nor JMX, nor filesystem access via SSH.

All this means that currently you have to go through some setup to use them from existing tools,
but luckily we are doing two things:

we will post more blogs explaining how to enable some of these features so today’s tools (not just JBoss Tools)
can be used with 'raw' Docker.

we are working on making the steps simpler when using Docker 'raw'

Conclusion

In this first example, we’ve seen how to install and configure the default
WildFly Docker images.

To summarize, here are the steps needed:

Start Docker with 8080 mapped and with /opt/jboss/wildfly/standalone/deployments mounted as volume

Configure the server to run on dockerhost, be externally managed, and use a Custom deploy folder pointing at the volume above

In future examples, we’ll see how to extend those images for Management or SSH/SCP usecases.

I'm interviewed in the roundtable article on the future of release engineering, along with Chuck Rossi of Facebook and Boris Debic of Google. There are interesting discussions of the current state of release engineering at organizations that scale to large numbers of builds and tests and release frequently, as well as the challenges of mobile releases versus web deployments. Finally, there is a discussion of how to find good release engineers, and what the future may hold.

Thanks to the other guest editors on this issue - Stephany Bellomo, Tamara Marshall-Klein, Bram Adams, Foutse Khomh and Christian Bird - for all their hard work that made this happen!

As an aside, when I opened the issue, the image on the front cover made me laugh. It's reminiscent of the cover on a mid-century science fiction anthology. I showed Mr. Releng and he said "Robot birds? That is EXACTLY how I pictured working in releng." Maybe it's meant to represent that we let software fly free. In any case, I must go back to tending the flock of robotic avian overlords.

February 27, 2015

The Orion Project is pleased to announce the release of Orion 8.0. You can check out the new release now on OrionHub, or grab your own copy of the Orion server from the download page. The Orion Node.js implementation matching this release is orion 0.0.35 on NPM. The Orion 8.0 release includes over 1000 commits from 36 unique contributors, fixing 280 bugs. The major themes for this release were building higher-value tools for JavaScript/CSS/HTML development and making the Orion server highly scalable.

On the server side, the main focus for Orion 8.0 has been on clustering support. For a site with heavy traffic, it is useful to have multiple Orion instances operating on the same Orion workspace content. This also allows you to have a running standby instance for fail-over and upgrade without any site downtime. To support this, Orion 8.0 has introduced file-based locking to manage contention across multiple instances. In addition, persistent state has been divided between instance-private and shared locations, to prevent sharing of the server’s own internal state that is not designed for sharing across instances.

A final major change on the server side was completely rewriting our search infrastructure. The new Orion search implementation provides global search and replace based on a server side search crawler, rather than indexing. This means searches take a little longer, but they have perfect accuracy. Server side search now also supports regular expressions and case-sensitive search, making this kind of search much faster than before. This implementation also simplifies Orion server management for large-scale Orion installs, because it saves Orion server admins from having to deal with search index sharding and Solr clustering. This server side global search is currently only available on the Java implementation of the Orion server, but there is a team in the community working on a Node.js implementation for this as well.

I am happy to announce some of the highlights of the upcoming release of the Red Hat JBoss Fuse Tooling.
It will be available via early access in the JBoss Tools Integration Stack 4.2 / Developer Studio Integration Stack 8.0.

What is new?

There are a lot of improvements and bugfixes and I just want to pick the most important things for now. You can see a full list of changes in the What’s New section for the release.

Apache Camel Debugger

You probably already used the tracing functionality for a running Camel Context, but now we are happy to finally give you a Camel Debugger. Using the Eclipse Debug Framework, we created our own Camel Debugger which works fully through the design view of the Camel editor. Here you can set your breakpoints (static and conditional ones), and hit breakpoints are highlighted in your design view. Instead of using the "Run as → Local Camel Context" menu, you can now use "Debug as → Local Camel Context" to start up the Context in debug mode. Once a breakpoint is hit, Eclipse will automatically switch to the Eclipse Debug perspective.

Step through your routes, add watch expressions, change message content on the fly, or simply monitor what your routes do with the messages. Add a conditional breakpoint if you only want to debug on a certain condition; you choose the condition language and set up the condition in an easy-to-use expression builder.

Palette and Properties

Over the past weeks we worked on improving the usability of our Camel Route Designer. As a result, we introduced a new drawer in the designer's palette which provides easy-to-use items for Apache Camel components. In the past you had to know that the Endpoint palette entry had to be used to create a connector for a Camel component, by prepending the right protocol name to the endpoint's uri attribute field. That still left you alone in adding the correct Maven dependency to your project's pom.xml. Now, when you drop a component connector onto the route, the pom.xml gets updated automatically, so you no longer need to take care of it.

Starting with version 2.14, the Apache Camel developers began implementing a model to determine URI parameters and their metadata. We now use this functionality to give our users improved property pages for the Apache Camel components.

Server Adapters

The wizard pages for creating the servers have been reworked too and you are now able to download the binaries directly from within your Eclipse session.

Another thing to mention is that we replaced the old deployment options in favor of module publishing via the Servers view. You can select the server entry there and choose to Add or Remove modules to/from the server. The deployed projects from your local workspace will be visible as child nodes under the server item. Depending on your server publishing settings, your application will be republished automatically when it gets out of sync or is changed locally.

The Eclipse Planning Council—with help from the community—is trying to sort out the name for our eleventh named simultaneous release in 2016. We had some trouble with the first set of names that we selected, and so we’re into the second round.

The “Eclipse Lynn” Lobby has significant momentum.

We started following the alphabet a few years ago, naming our fifth release “Helios”, and now we’re up to the letter N. We’ve been bouncing around some N names on Twitter; I quite like the idea of just going with “N!”, and stated as much.

I originally put in the exclamation mark to make it seem more exciting, but then it occurred to me that the exclamation mark has special meaning in Lisp. In a functional programming language like Lisp, you generally avoid changing state, so Lisp functions that make changes are marked with a cautionary exclamation point (spoken as “bang”). When you invoke a bang function, things are going to change:
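A minimal Scheme illustration of the convention (my own example, not the original post's):

```scheme
(define counter 0)

;; An ordinary expression computes a value; no state changes.
(+ counter 1)

;; A bang function mutates its target: counter itself is changed.
(set! counter (+ counter 1))
```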

Of course, we’re also trying to tackle some pretty big things. We’ve come a long way towards having a proper installer for Eclipse. I’m also optimistic that we’ll be including Gradle support in Eclipse (more on this later).

We need your help and so we’ve started the Great Fixes for Mars skills competition. To enter, all you need to do is take responsibility for a bug, mark it as “greatfix”, and submit a patch. We’ve even provided a list of great suggestions where you can make the biggest impact. There’s prizes. Cool prizes.

Enter the Great Fixes for Mars skills competition; there’ll be prizes!

This shift in momentum will build through Mars and into the 2016 release. I’m certain that N-bang will usher in even bigger changes.

Having added the functionality of visualising online repositories through the Fork Visualisation View, the next use case was to allow forking them into your account and cloning them into the Eclipse Git perspective.

We have now implemented this. You can fork a repository from the Fork Visualisation View into your account, and also clone it to add it to your Git Repositories.

A new button is available on the action bar of the view. Select the repo that you want to fork and click on the Fork and Clone action.

On successful forking, a message is shown asking for clone confirmation. If you click YES, it opens the Clone dialog with the required URL filled in, just as when you paste a URL in the Git Repositories view.

It is available for download from the Eclipse Marketplace: http://marketplace.eclipse.org/content/github-extensions. Code is available at https://github.com/ANCIT/eGit-extensions

February 25, 2015

Last year I announced J2V8, a new JavaScript engine for Java that wraps V8 with a set of Java Bindings. We have been using this technology to power Tabris.js on Android — giving us much better performance than Rhino could. While J2V8 was very stable, it wasn’t very easy to consume. Today I’m happy to announce that J2V8 2.0 has shipped, and it’s much easier to embed in your Java Applications.

There are several notable improvements in J2V8 2.0:

Self Loading Native Libraries

Following the lead of SWT from years ago, we have included a Library Loader that will load the native library automatically for you. You no longer need to twiddle with the java.library.path to get J2V8 to work. This also works on Android.

Bundled Native Libraries

The native libraries have also been bundled with the Jar file. Currently we are shipping 5 different Jar files for J2V8:

The final one (j2v8_android.jar) contains the native library for both armv7l and x86, so you can use it on several different Android devices. Add the jar to your classpath and call V8 runtime = V8.createV8Runtime(); (on Android, use V8.createV8Runtime(null, activity.getApplicationInfo().dataDir); because you need to specify where to unpack the native library) and the native library will be loaded automatically for you. We can add more platforms, but each one takes time, as I need to build V8 on the platform first. Is there a particular platform I should target next?

Available in Maven Central

J2V8 is now available in Maven Central. You no longer need to build the library yourself. For example, add the following snippet to your pom.xml to use J2V8 on MacOS:
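The snippet would look something like the following (the artifactId and version are my best guess for the 2.0 MacOS artifact; verify the exact coordinates on Maven Central):

```xml
<dependency>
    <groupId>com.eclipsesource.j2v8</groupId>
    <artifactId>j2v8_macosx_x86_64</artifactId>
    <version>2.0</version>
</dependency>
```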

Hello, World

Once you’ve added the dependency to your pom.xml, you can begin embedding JavaScript in your Java application using J2V8. Here is a simple Hello, World! that demonstrates how to call JavaScript from Java and register a callback.
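A minimal sketch of such a program (the class and method names follow the J2V8 2.0 API as I understand it; treat the exact signatures as assumptions):

```java
import com.eclipsesource.v8.JavaVoidCallback;
import com.eclipsesource.v8.V8;
import com.eclipsesource.v8.V8Array;
import com.eclipsesource.v8.V8Object;

public class HelloWorld {
    public static void main(String[] args) {
        // Create a V8 runtime; this loads the bundled native library
        V8 runtime = V8.createV8Runtime();
        try {
            // Register a Java callback as the global JS function 'print'
            runtime.registerJavaMethod(new JavaVoidCallback() {
                public void invoke(V8Object receiver, V8Array parameters) {
                    System.out.println(parameters.getString(0));
                }
            }, "print");

            // The script calls from JavaScript back into the Java callback
            runtime.executeVoidScript("print('Hello, World!');");
        } finally {
            // Native resources must be released explicitly
            runtime.release();
        }
    }
}
```

This assumes the J2V8 jar from the previous section is on your classpath.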

February 24, 2015

For the 2016 release of the Scout framework we will introduce two significant changes. These changes address major pain points we have suffered from in the past. As the scope of the two changes is substantially larger than in previous releases, we would like to start talking about them well ahead of time. So remember in the text below that this is a blog about the future of Eclipse Scout scheduled for 2016, not about the upcoming Mars release.

The first change is driven by our vision of the future of UI technologies for business applications. The second change addresses the need of our customers and BSI that Scout applications should be easy to integrate in a Java EE or Spring based environment. In the text below we will discuss these two changes individually.

A new HTML5 Renderer

The first change is based on our belief that the future of business applications mostly lies in the domain of web applications. In consequence, Eclipse Scout needs to provide the best possible web experience for the users of Scout applications. And to achieve this goal for the 2016 release, we already have started to write a new Scout web rendering engine, directly based on HTML5/CSS3 standards. Thanks to this substantial investment of BSI, Scout applications will comply with the new de-facto standard for web applications and will be able to take advantage of the latest web technologies in the future.

This decision also implies that future Scout web applications will no longer be based on the Eclipse RAP project. At the same time, after discussing this topic extensively with our customers, we found that their demand for the existing Swing and SWT rendering components no longer matches the expense necessary to maintain these components. This is why we decided to discontinue the Scout SWT and Swing desktop rendering components after the Eclipse Mars release.

Eclipse Scout will become a Java framework

The second change affects the foundation of Eclipse Scout applications. Currently, Eclipse Scout applications are based on OSGi and the Eclipse runtime platform. In the past, we have addressed the challenge of integrating a plugin-based application with Java EE technologies again and again. And according to some Scout customers, integrating the plugin-based Scout server with Spring technology cost them significantly more time than anticipated.

Observing the market and the needs of our customers over the past years, we have come to the conclusion that Scout’s dependencies on the OSGi/Eclipse platform have brought more harm than good to Scout projects. This is why we have now started to implement replacements for familiar Eclipse concepts such as jobs, extension points and services.

As a result of this second change, Eclipse Scout applications will become standard Java applications that will seamlessly integrate with Java EE technologies and other Java frameworks, such as Spring. We also hope that this change will increase the adoption of Eclipse Scout in the Java domain.

What will stay the same?

Although the changes mentioned above may seem substantial, it is important to keep in mind that most aspects of the Eclipse Scout framework remain the same.

Scout will continue to be an Eclipse Open Source project.

The Scout SDK will continue to be based on the Eclipse IDE.

The existing Scout application model will (mostly) stay as it is.

Migration efforts for existing Scout applications will remain rather modest (with the exception of the rendering part for your custom controls, if any).

From the Scout developer perspective these changes will probably not feel exciting. But this is by design.

And to avoid confusion: The Scout Mars release will still be shipped with the known RAP, SWT, and Swing rendering components.

Your Feedback?

Whether you like these changes or are concerned about your plans for Eclipse Scout, please let us know on the Scout forum. For discussions we have created separate topics: one for the new HTML renderer and another for removing the Eclipse/OSGi dependencies.

If you prefer a less public channel for discussing these changes you can contact us by email to scout@bsiag.com. Your feedback is very valuable to us and we would like to find/discuss options if you have any concerns or questions.

I've recently joined the Eclipse Orion team; after >10 years on the eclipse platform it was time for a change. Working on the eclipse UI was easily the highlight of my career so far. The opportunity to work with many truly exceptional people in such an energetic ecosystem made the work even more rewarding. While I missed out on EclipseCon this year I *will* find my way to at least one more so that I can properly thank (in person...with Beer !) at least some of the crew that has been instrumental in moving eclipse forward over the years; far too many to name individually...you know who you are..and thanks !!

Now on to the future and Orion...

It took me literally no time at all to realize that Orion needed some tooling work. Immediately upon my first attempt to work on some JS code I realized that the tooling that made me look effective in JDT just didn't exist. I was reduced to regex searches and the like to wend my way through the maze of JS / CSS / HTML files comprising Orion. This made going up the learning curve for the codebase seem more like climbing a mountain of molasses. Something had to be done...

...and it has ! Take a look at the Orion 8 N&N to see the new tooling in Orion 8 and, for those lucky enough to be at EclipseCon, don't miss Simon Kaegi's talk: "JavaScript Language Tools. In Orion.". These are just the first fruits of the efforts to make the Orion editor more than just a fancy notepad; more are on the way.

Since the new functionality has already been announced, this post will look more closely at where we're going with the tooling. It's worth noting that all the Orion 8 feedback needed was a change in how the existing hover tooltip gets its information, plus new 'quick fix' commands (pretty good bang for the buck IMO).

It's all about the context

Most of the help from JDT comes from crawling large, statically analyzed parse trees (ASTs). Since this is largely impossible in JavaScript, what can we do ?

There are (loosely speaking) three different types of tooling feedback:

Information: Regular tooltip style giving JSDocs or showing a color when hovering over parts of a file...

Manipulation: This is for feedback that allows the user to modify their files in some way (think quick fixes and, eventually, refactoring and color/font definition).

Navigation: This is the ability for the editor to recognize and locate files referenced from the one they're editing.

Of these three, the one that is most important is 'Navigation'. One of the first problems we encountered was that the code is (of course) always written reflecting the directory structure of where it is *deployed* (i.e. where it lives on the server). In cases where anything but the 'null' deploy is used (in the 'null' deploy, the code is in the same structure for development as it is when deployed), we needed some sort of 'File Map' to allow us to properly locate the development file given a reference in a deployed file. This is currently available through the file navigation hovers in Orion 8, but this is just the beginning. It's about more than just 'require' and 'import'; the tooling should also be capable of deeper introspection, such as locating a specific CSS class used in a JS snippet that assigns a class to a classList (by searching the available CSS classes for that specific class name).

With the ability to navigate from 'required' references in one file to the actual JS (or 'imported' CSS files) we also gain the ability to walk dependencies looking for useful info to show the user (i.e. JSDoc content assist for methods declared in a required JS file...). As we become more proficient in parsing relevant info we will start to gain the ability to answer some of the trickier questions like "Where is this JS / CSS class used ?" and "Where is this method used ?" (needed for eventual refactoring).

Orion 9 and beyond

Here are some of the things we'll be focused on for future releases...

Type Inferencing

Due to its inherently mushy nature, JavaScript is effectively tooling-resistant. Our goal is not to restrict what the developer wants to do, but instead to warn them if they're doing something that will make the tooling ineffective. For example, Orion uses RequireJS to access instances of various JS services. The 'required' service instances are assigned to variables in the object doing the requiring. The tooling will work only if the code never assigns a different, unknown value to this var. To help here, the tooling will produce a warning should such an assignment be used (since it prevents us from doing content assist on that var).
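A hypothetical sketch of the pattern (the module and function names are invented for illustration):

```javascript
define(['orion/fileClient'], function(mFileClient) {
    // The 'required' service instance is assigned once; the tooling can
    // infer its type and offer content assist on 'fileClient'.
    var fileClient = new mFileClient.FileClient();

    // Assigning a different, unknown value defeats that inference;
    // this is the kind of line the tooling would flag with a warning:
    fileClient = someDynamicLookup();
});
```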

Note that this can (and should) be generalized to account for new techniques to 'typify' JS. If we parse a 'type' (whether from JSDoc or some other type description), the underlying mechanisms will be common enough to show the warning any time a 'typed' var is assigned to, regardless of how that type was originally specified (unless, of course, the value being assigned is of a matching type).

Deeper Context Introspection

It's the union of all the references within a web project that truly defines its 'context'. The more we can infer about the resources being used in a particular scenario, the better the tooling support we can provide. One of the possibilities we're looking at is the ability to specify the HTML doc in whose context we're doing our development. This would provide access to the specific CSS files used by that page (which may be different from those used on a different page), allowing us to provide more accurate feedback; in the end, everything has some page at its root.

Since the tooling relies heavily on the file map, we'll be making it simpler to create and maintain the map for your particular setup, as well as providing this functionality 'in the UI' (i.e. allowing the user to explicitly define the file being referenced and having the UI remember this choice, plus some sort of support for people writing the deploy scripts...). The goal here is to make this as simple (for you) as possible, so that the advantages of having access to great tooling far outweigh the effort needed to provide the file map for your project.

As a complete noob to the web development world, one of the first differences I noticed is that web pages are the result of the *merge* of the HTML, JS and CSS files. I found that working on these files in separate tabs was a pain. We'll be working on various ways of allowing the coder to focus more on working and less on navigation issues. I don't think it'll take 10 years to show multiple editors in one tab within Orion...;-).

What I'd like to see is something like this:

This uses everything we've seen so far. Type inferencing tells us that '_thumb' is a DIV, meaning that we can infer that 'splitLayout' is a CSS class name. We can then search the appropriate CSS files to locate the actual CSS class and extract it to show in the tooltip. Ideally, in this scenario, the tooltip would allow you to edit the contents of the class so you can 'tweak' it without ever having to open the file.