So far I am just playing to get a rough idea of what is needed. I have extracted the contents of AbstractController.resolveContexts(boolean) and what it calls with AbstractDependencyResolver and DefaultDependencyResolver. AbstractDependencyResolver contains a reference to the controller and utility methods to do work on the controller. DefaultDependencyResolver contains the "work" done by resolveContexts and the methods it called.

At the moment DefaultDependencyResolver contains too much controller specific stuff, which I want to move out, but I just wanted to get something in for further discussion.

When a bundle is INSTALLED, only its consistency in terms of OSGi metadata in the manifest is verified. No connection to other Bundles is established. From then on, the Framework is free to resolve the Bundle at any time. When a Bundle gets started (explicitly) it first becomes RESOLVED and then ACTIVE. For a Bundle to become RESOLVED, all its required Package-Imports and other mandatory requirements must be satisfied. If a Bundle fails to get RESOLVED it remains in the INSTALLED state and may get RESOLVED at a later time (when more requirements become available). In the transition from RESOLVED to ACTIVE the optionally associated BundleActivator.start() method is called. If that fails, the Bundle remains in state RESOLVED and the user may retry starting the Bundle at some later time.
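The lifecycle above can be sketched as a tiny state machine. This is purely illustrative (the state names come from OSGi, but the class and the boolean flags are my own stand-ins, not framework API):

```java
// Illustrative model of the INSTALLED -> RESOLVED -> ACTIVE transitions.
enum BundleState { INSTALLED, RESOLVED, ACTIVE }

class BundleLifecycle {
    BundleState state = BundleState.INSTALLED;

    // Resolution succeeds only if all mandatory requirements are satisfied;
    // on failure the bundle simply stays INSTALLED and may be retried later.
    boolean resolve(boolean requirementsSatisfied) {
        if (state == BundleState.INSTALLED && requirementsSatisfied) {
            state = BundleState.RESOLVED;
        }
        return state != BundleState.INSTALLED;
    }

    // Starting implies resolving first; a failing activator leaves the
    // bundle RESOLVED so the user can retry the start later.
    boolean start(boolean requirementsSatisfied, boolean activatorSucceeds) {
        if (!resolve(requirementsSatisfied)) return false;
        if (activatorSucceeds) state = BundleState.ACTIVE;
        return state == BundleState.ACTIVE;
    }
}
```

Note how a failed resolve leaves the bundle in INSTALLED while a failed activator leaves it in RESOLVED, matching the two retry points described above.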

There are various operations that implicitly (try to) resolve a Bundle. These are

During bundle resolution, so-called 'wires' are established between the exporter of a package and its importers. The resolver may have multiple exporters to choose from for a given package name/version, and a common case is that a Bundle contains a certain package that it may also import from another Bundle. In this case the wire may get established to the Bundle itself: a 'self wire'.
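As a toy illustration of the self-wire case, here is one possible exporter-selection policy (the preference for the self wire is just an example policy, not something the spec mandates, and the string ids are stand-ins for real bundles):

```java
import java.util.List;

// Illustrative exporter choice: when the importing bundle itself exports the
// package, the resolver may wire the import back to that bundle (a 'self wire').
class WireChooser {
    // Returns the chosen exporter id for the given importer, or null when the
    // package has no exporter at all.
    static String chooseExporter(String importer, List<String> exporters) {
        if (exporters.isEmpty()) return null;
        return exporters.contains(importer) ? importer : exporters.get(0);
    }
}
```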

The resolver is encouraged to find the best possible solution that leads to a consistent class space. A consistent class space is one that satisfies the requirements of all importers and in which a given package is exported by one and only one exporter. This can become quite complicated, especially when exported packages define a 'uses' directive. The uses directive means that all Bundles in a class space must get wired to (i.e. use) the same exporter for a given package. In the face of multiple possible wiring outcomes, the uses directive helps to make more deterministic choices. Generally the resolver should aim to resolve as many Bundles as possible by walking the tree of possible wirings. With many unresolved (i.e. hundreds of) Bundles this can become a very expensive operation. Equinox disregards possible wirings beyond a certain level of complexity and may not find a good solution. Felix considers more possible solutions but may take ages to finally return with "sorry, can't do it". Currently, Equinox and Felix are working together on a standalone resolver that may be used by both frameworks (and hopefully ours) in the future.

Once a wiring is established, it is never changed unless the Framework is restarted or a set of Bundles is explicitly refreshed through PackageAdmin.refreshPackages(Bundle[] bundles). It is important to understand that Bundle.uninstall() does NOT remove the wiring. Even after a Bundle is uninstalled it is still possible to load classes from that Bundle. The exception is an uninstalled Bundle that was never chosen as an exporter, so that no wiring points to it; such a Bundle can be removed from the Framework straight away.

Here are a few requirements that I would have on the resolver API:

It must be possible to resolve multiple Modules at the same time

Resolution must be based on mandatory/optional requirements and capabilities

BadDependencyTestCase - probably since the calls it checks for now happen differently

OnDemandDependencyTestCase.testChangeDependencyReinstall() - since the ordering is now slightly different

What I have so far is quite simple. I am still playing around at the moment, and it needs a lot of tidying up, but here is the basic idea:

When calling Controller.install(ControllerContext) the context's dependencies are indexed in the resolver. I put them in a map where the key is the dependency name and the value is another map, whose key is the dependent state and whose value is a list of controller contexts.
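A minimal sketch of that index, with plain strings standing in for the real context and state types (the method names here are mine, not the actual resolver API):

```java
import java.util.*;

// Sketch of the resolver's dependency index:
//   dependency name -> (dependent state -> contexts waiting in that state)
class DependencyIndex {
    final Map<String, Map<String, List<String>>> index = new HashMap<>();

    // Called on Controller.install(): record that 'context' waits for
    // 'dependencyName' in order to reach 'whenRequiredState'.
    void add(String dependencyName, String whenRequiredState, String context) {
        index.computeIfAbsent(dependencyName, k -> new HashMap<>())
             .computeIfAbsent(whenRequiredState, k -> new ArrayList<>())
             .add(context);
    }

    // Called on Controller.uninstall(): drop the context's entry again.
    void remove(String dependencyName, String whenRequiredState, String context) {
        Map<String, List<String>> byState = index.get(dependencyName);
        if (byState == null) return;
        List<String> waiters = byState.get(whenRequiredState);
        if (waiters != null) waiters.remove(context);
    }

    // Who is waiting for this dependency in this state?
    List<String> waiters(String dependencyName, String state) {
        return index.getOrDefault(dependencyName, Map.of())
                    .getOrDefault(state, List.of());
    }
}
```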

Controller.uninstall() removes the context's dependencies from the mentioned map.

AbstractController.resolveContexts(boolean) has been modified to AbstractController.resolveContexts(ControllerContext, boolean), where the parameter is the context being installed via Controller.install() or Controller.change(). This delegates to IDR2.resolvePlugins, which will try to install the context in question as far as it can, until it can no longer resolve the dependencies required to enter a particular state. Once it cannot advance to another state, it simply returns.

I have added a DependencyResolver.stateIncremented(ControllerContext, boolean) callback that is called by the controller once a context has successfully entered a new state. So, for example, when a context enters the INSTALLED state, this callback is called and the dependency map is checked for entries matching the context's name and aliases, to find the contexts waiting for that dependency to enter that state. If there are contexts waiting, we call resolveContexts() on them.
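The callback flow might look roughly like this (the types, states and callback shape are placeholders I made up for the sketch, not the actual kernel signatures):

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch of the stateIncremented callback: when a context reaches a new
// state, look up who was waiting for it (by name or alias) and re-resolve them.
class StateCallback {
    // name -> (state -> waiting contexts), as indexed at install time
    final Map<String, Map<String, List<String>>> index = new HashMap<>();

    void stateIncremented(String contextName, List<String> aliases,
                          String newState, Consumer<String> resolveContext) {
        List<String> names = new ArrayList<>(aliases);
        names.add(contextName);
        for (String name : names) {
            List<String> waiting = index
                .getOrDefault(name, Map.of())
                .getOrDefault(newState, List.of());
            // Retry resolution for each context that was blocked on this one.
            waiting.forEach(resolveContext);
        }
    }
}
```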

In addition there is a bit of extra housekeeping for OnDemand contexts.

So far I have not yet looked at scoping, and need to come up with a way to handle implicit dependencies such as supply/demand, contextual injection with qualifiers and so on. Running the kernel project I get 47 failures and 47 errors out of 1575 tests with the new DependencyResolver, which should be fixed once I implement what I have just mentioned.

I'll make which resolver is used configurable via a system property. Once this is more stable we want to run all the tests against any DependencyResolver. For the dependency project I can just create additional sub classes of the tests for the different resolvers, but for kernel that would be a big job since there are so many test classes, so I'll look at using maven profiles instead.

I have added scaffolding to support different types of dependency lookups. So far, I have only implemented the name/alias based one, which is just what I had before in a different place.

The basic idea is that for each kind of dependency item there is an implementation of ResolverMatcher / ResolverMatcherFactory. When indexing a context's dependencies, if the dependency is of type AbstractDependencyItem, the name/alias based one I mentioned is used. I'll implement matchers for when the dependency is a demand or a contextual injection dependency. I'm not 100% happy with needing to implement ResolverMatcher(Factory) for each kind of dependency item, but it is the only idea I have at the moment.
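A rough shape of that scaffolding, to make the idea concrete (these signatures are my guess at a sketch, not the actual ResolverMatcher / ResolverMatcherFactory interfaces):

```java
// One ResolverMatcher per kind of dependency item; a factory picks the
// matcher based on the dependency item's type.
interface ResolverMatcher {
    boolean matches(Object dependencyItem, String candidateName);
}

interface ResolverMatcherFactory {
    ResolverMatcher matcherFor(Object dependencyItem);
}

// Name/alias based matcher, roughly what is described above for
// AbstractDependencyItem: the dependency is satisfied by an exact name hit.
class NameAliasMatcher implements ResolverMatcher {
    public boolean matches(Object dependencyItem, String candidateName) {
        return String.valueOf(dependencyItem).equals(candidateName);
    }
}

class SimpleMatcherFactory implements ResolverMatcherFactory {
    public ResolverMatcher matcherFor(Object dependencyItem) {
        // Real code would dispatch on the concrete dependency item type
        // (name/alias, demand, contextual injection); only one kind exists here.
        return new NameAliasMatcher();
    }
}
```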

There is also a requirement to do a 'try run' of bundle resolution. This would allow answering questions like: if I install these bundles, will I end up with a consistent result where everything can get resolved? To answer this, the running system must not be affected - remember, you cannot "unresolve" a bundle.

In real life, you would give a provisioning system a small set of top level bundles and a pointer to a repository. The provisioning system needs to figure out if and how the complete set of transitive dependencies can be satisfied before it installs anything in (i.e. modifies) the running system.
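A toy version of such a dry-run check, working purely on a model of the repository so the running system is untouched (the repository-as-map shape is entirely my invention for illustration):

```java
import java.util.*;

// Toy dry run: can all transitive requirements of the top-level bundles be
// satisfied from the repository, without modifying the running system?
class DryRunCheck {
    static boolean canProvision(Set<String> topLevel,
                                Map<String, Set<String>> repository) {
        Deque<String> todo = new ArrayDeque<>(topLevel);
        Set<String> seen = new HashSet<>();
        while (!todo.isEmpty()) {
            String bundle = todo.pop();
            if (!seen.add(bundle)) continue;        // already visited
            Set<String> reqs = repository.get(bundle);
            if (reqs == null) return false;         // requirement not satisfiable
            todo.addAll(reqs);                      // walk transitive requirements
        }
        return true;
    }
}
```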

I noticed that some tests where dependency items come from annotations were failing, so I have fixed that by adding a DependencyInfo decorator that tracks when dependency items are added/removed and pushes those changes to the index.

KernelAllTestSuite now has 45 errors and 37 failures. Next I'll look at indexing other types of dependency, such as Supply/Demand and contextual injection.

Looking at supporting contextual injection in the indexing dependency resolver, I've stumbled upon an issue. The basic idea is simple, record which contexts depend on a given Class. When installing a context, check which contexts depend on it (and its superclasses + interfaces) and increment those.

The problem is how to know the class of the context being installed?

KernelControllerContext.getBeanInfo() - do we want to require it to be a KCC?

ControllerContext.getTarget() - will not be valid until INSTANTIATED, but then again it probably does not make sense to have contextual injection with a lesser state, although I am not sure this is checked anywhere.
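Whichever way the class is obtained, indexing by type needs the whole hierarchy; a sketch of collecting the candidate keys (the class and method names are mine):

```java
import java.util.*;

// For contextual injection indexing: a context of class C can satisfy
// dependencies on C, on its superclasses and on all implemented interfaces.
class TypeKeys {
    static Set<Class<?>> keysFor(Class<?> type) {
        Set<Class<?>> keys = new LinkedHashSet<>();
        collect(type, keys);
        return keys;
    }

    private static void collect(Class<?> type, Set<Class<?>> keys) {
        if (type == null || !keys.add(type)) return; // null parent or already seen
        collect(type.getSuperclass(), keys);
        for (Class<?> iface : type.getInterfaces()) collect(iface, keys);
    }
}
```

On install, each of these keys would be checked against the recorded class-based dependencies, and any waiting contexts incremented.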

I have got contextual injection and basic supply/demand working, so KernelAllTestSuite is down to 18 errors and 32 failures, most of which are related to scoping.

The supply/demand implementation is pretty simple so far. When a context is registered, its demand dependency items get recorded. When a context has supplies and its state is incremented, we try to resolve the contexts with demand items waiting for that state. I hope that will suffice, since I don't want to get involved with the matchers. I might be wrong, but I don't think they lend themselves to hash lookups; I would have to iterate over all of them anyway.
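The record-then-lookup idea might look like this (again with strings standing in for real demand items and contexts; the class is hypothetical):

```java
import java.util.*;

// Supply/demand sketch: demands are recorded when a context registers; when a
// supplying context's state is incremented, the waiting demanders are looked
// up by plain hash lookup on the supply, with no matchers involved.
class SupplyDemandIndex {
    // supply name -> contexts with a matching demand item
    final Map<String, List<String>> demanders = new HashMap<>();

    void registerDemand(String demand, String context) {
        demanders.computeIfAbsent(demand, k -> new ArrayList<>()).add(context);
    }

    // Called when a context supplying 'supply' reaches the relevant state;
    // returns the contexts whose resolution should be retried.
    List<String> supplyAvailable(String supply) {
        return demanders.getOrDefault(supply, List.of());
    }
}
```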

Similarly, the contextual injection mechanism is also quite light: we only index the wanted class on registering, and on incrementing the state we check for the classes. I don't do anything about the qualifiers yet.

One thing I think we're lacking is "wrong order" tests for contextual injection, at least with qualifiers, so I need to add some. The same might be the case for supply/demand, which I need to check.

I've been using concurrent collections in my DependencyResolver and Matchers, which might not be necessary. I think all the access to the DependencyResolver and Matchers happens with the controller lock taken, but again I need to check that.

Say we have a context A, and context B depends on it. If we install context B first and then install context A, context A's dependsOnMe does not get populated until we resolve context B's iDependOn items. There is nothing there to tell us which contexts to try to resolve, so we would need to look at all contexts to do it this way.

Ah, yes, you're right. DependsOnMe is not set until you resolve it from the "other" side.

So, I guess what you're doing is OK, as there is no other way than to go per dependency type.