Discussions

Aspect-Oriented Programming (AOP) has become extremely popular as a way to solve certain kinds of problems that are hard to address using procedural or object-oriented methodologies. Nevertheless, AOP still has its critics. The latest TSS cartoon prompts us to ask, should we heed the warnings of these AOP critics?

I read this from your blog, which I was kind of expecting:

"Once you start thinking in AOP, then your implementation and design will degrade, since you know you can shortcut your way out of anything into an un-traceable mess."

Hmm... Isn't that a variation on the exact same objection a procedural/C developer would have against object orientation: "duh, I can't follow and debug the code in a straight line, it's too abstract, object-orientation is bad!"

And here I was thinking that abstracting design/architecture from implementation was exactly what we wanted.. :)

It's more abstract and harder to trace in "a straight line" because, well, that's part of the point!

Personally, I can see some merit (although I don't fully agree) in the argument that AOP is bad because it breaks encapsulation and people might weave in "malicious" aspects that break security at some level, but the "untraceable" and "too abstract" argument is just pure folly that OOP originally faced as well..

Hmm... Isn't that a variation on the exact same objection a procedural/C developer would have against object orientation: "duh, I can't follow and debug the code in a straight line, it's too abstract, object-orientation is bad!"

And here I was thinking that abstracting design/architecture from implementation was exactly what we wanted.. :)

In the defense of the blog, it is an editorial ;-)

The same debate is in the Java forums, and there are people there talking about crosscutting the ability to light monkeys on fire for an insurance company (not kidding). Not to repeat myself, but AOP is best used for later adapting your code to a specific context or protocol, not in the "implementation and design" phases.

On Java.net, there's another new weblog about how best to handle refactoring with AOP. The example is decorating a method with behavior, then refactoring the method signature, causing it to fall out of the scope of your aspect's pointcut. One person pointed out that it's a tools issue and no different from "plain" Java, but in reality it's a lot different, since we are talking about different programming paradigms. How can one raise an error when a method's new signature suddenly falls outside a pointcut's scope? "Plain" Java does raise one: the code simply stops compiling.

Just like I said in the blog, if you think in AOP during your implementation and design phases, you are asking for trouble down the road when it comes time to refactor and you have to worry about what Frank did in his new Aspect to modify the behavior of all your code.

"if you think in AOP during your implementation and design phases, you are asking for trouble down the road when it comes time to refactor and you have to worry about what Frank did in his new Aspect to modify the behavior of all your code."

I don't see a problem thinking in AOP terms during implementation and design; the problem is _really_ thinking in terms of AOP as _aspects_ and nothing else. Using aspects prudently to separate logically different aspects is the key, with the typical aspect candidates being the classical cases of:

* logical flow of execution of the actual application
* logging
* authentication/authorization
* persistence

..might have missed something obvious. However, if you do aspectify in logical aspects, I'd say you actually make the code _more_ testable. Say that you have a lot of classes that would normally depend on persistence code. With aspects, you can test the persistence code separately, and test the previously persistence-dependent classes without the actual persistence dependency. This is very much non-intrusive, and it enhances code reuse and decoupling and reduces duplicate code.
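The testability claim above can be sketched in plain Java, using a JDK dynamic proxy as a stand-in for a real weaver. All names here (`OrderService`, `withPersistence`, etc.) are invented for illustration; the point is that the bare object is testable with no persistence dependency, while production code gets the persistence advice woven around it.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class PersistenceAspectSketch {
    public interface OrderService {
        String placeOrder(String item);
    }

    // Plain business logic: no persistence dependency, trivially unit-testable.
    public static class OrderServiceImpl implements OrderService {
        public String placeOrder(String item) { return "order:" + item; }
    }

    // The "persistence aspect", applied via a JDK dynamic proxy as a
    // stand-in for a real weaver. Here it just records what it would save.
    public static final List<String> savedRecords = new ArrayList<>();

    public static OrderService withPersistence(OrderService target) {
        InvocationHandler h = (proxy, method, args) -> {
            Object result = method.invoke(target, args);
            savedRecords.add(String.valueOf(result)); // "persist" the result
            return result;
        };
        return (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[] { OrderService.class }, h);
    }

    public static void main(String[] args) {
        // In tests: use the bare object, no persistence involved.
        OrderService plain = new OrderServiceImpl();
        System.out.println(plain.placeOrder("book")); // order:book

        // In production: the same object, with persistence woven around it.
        OrderService persistent = withPersistence(plain);
        persistent.placeOrder("book");
        System.out.println(savedRecords); // [order:book]
    }
}
```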

But the big hiccups come when you start using aspects as "shortcuts" for things that aren't really aspects of your application. I think that is what you are trying to point out, but perhaps fail to mention the distinction between logical aspects of an application and pure "hey, let's use cool aspects"-hacks?

You mean, if you change a method signature it will fail only in those bad aspects but still run fine in other ordinary Java code? :)) LOL

I don't think you read my original statement fully-- I said in Java it wouldn't compile if I simply changed the method signature on an object. Tools aside, even from the command line, you would know which objects used your method and now aren't compiling. In AOP, everything would still compile, but the method could no longer match the intended pointcut-- which I presume would be a pain to debug.
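The silent-miss problem is easy to demonstrate without any AOP framework. The sketch below fakes a name-based "pointcut" with plain reflection (all names are invented for illustration); the point is that pattern-based matching, unlike a direct Java call, reports nothing when it stops matching:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class PointcutMiss {
    // The advised class. Renaming deposit() still compiles everywhere,
    // because nothing below refers to it by a compiler-checked reference.
    public static class Account {
        public void deposit(int amount) { }
        public void withdraw(int amount) { }
    }

    // A toy name-based "pointcut". Real pointcuts such as AspectJ's
    // execution(* deposit*(..)) match the same way: by pattern,
    // not by a compiler-checked reference.
    public static List<String> matchedMethods(Class<?> target, String prefix) {
        List<String> matched = new ArrayList<>();
        for (Method m : target.getDeclaredMethods()) {
            if (m.getName().startsWith(prefix)) {
                matched.add(m.getName());
            }
        }
        return matched;
    }

    public static void main(String[] args) {
        // The pointcut matches today:
        System.out.println(matchedMethods(Account.class, "deposit")); // [deposit]
        // A pointcut still written against an old name, after a rename,
        // matches nothing -- and nothing fails to compile:
        System.out.println(matchedMethods(Account.class, "addFunds")); // []
    }
}
```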

I should focus my argument a little better and say that pointcuts, not aspects in general, can create hurdles when maintaining a program. If you are going to use aspects, I would recommend using annotative declarations, but only in order to provide extrinsic behavior within a system. Standard OOP should be providing all intrinsic behavior through proper encapsulation and composition.

Just like I said in the blog, if you think in AOP during your implementation and design phases, you are asking for trouble down the road when it comes time to refactor and you have to worry about what Frank did in his new Aspect to modify the behavior of all your code.

Not an AOP expert, but check out some of the papers the Xerox guys published. It might alleviate your concern about "trouble down the road." The AOP implementation of their image processing system was 1/10th the size of the original implementation and much cleaner code.

Not an AOP expert, but check out some of the papers the Xerox guys published. It might alleviate your concern about "trouble down the road." The AOP implementation of their image processing system was 1/10th the size of the original implementation and much cleaner code.

In the paper, they cover their original implementation for image processing, which pretty much took a 'final' image and returned a new image for each operation. As they state, this caused redundant computation, a large memory footprint from creating multiple images, and inefficient data caching.

They came to the solution of AOP to basically create a system of filters with caches. While their AOP solution worked great, the problem they were trying to solve could have just as easily been solved in OOP. It seemed as if they had an OO solution that 'outgrew' its intended purpose or was being used for something it was never intended for.

In table 1 of their paper, they go over perf/loc numbers. Undoubtedly, the original 'naive' Java implementation had the poorest performance; again, they were probably trying to 'hammer a bunch of nails in with some screwdrivers' because it's all they had. Their 'hacked' version of the OO solution did give them the performance, but added many lines of code (continuing the analogy, they tied a bunch of heavy rocks to their screwdrivers). Finally, their AOP solution took 4x the time to run, but had far fewer lines of code.

From those results, I could come to the following conclusions:

- AOP is the best way to patch the original code they had.
- 'Hacked' Java outperformed the AOP solution (which is why I believe they added # of images created to strengthen their AOP argument), yet they could still tweak the AOP implementation and beat the 'hacked' Java version.
- Their AOP solution could have just as easily been modeled in Java in an OO way using pipes/filters and java.io.*-like implementations.
- AOP introduces another learning hurdle into the maintenance of the program by other programmers-- K.I.S.S.

In summary, while the AOP solution worked, you could have just as easily modeled the same solution in straight Java using stateless filters and pipes. The 'fusing' optimizations they are talking about with AOP could easily be modeled in OOP through dependencies/comparators within a pipe of transformations. I don't think you can necessarily say that an OOP solution is always better than an AOP solution, but why introduce AOP if you don't need to?

While their AOP solution worked great, the problem they were trying to solve could have just as easily been solved in OOP. It seemed as if they had an OO solution that 'outgrew' its intended purpose or was being used for something it was never intended for.

I defer to those who know the code best, but assume that if they could have used a traditional OO approach to solve the problem they would have.

In table 1 of their paper, they go over perf/loc numbers. Undoubtedly, the original 'naive' java implementation had the poorest implementation, probably, again, they were trying to 'hammer a bunch of nails in with some screwdrivers' because it's all they had. Their 'hacked' version of the OO solution did give them the performance, but added many lines of code (continuing, they tied a bunch of heavy rocks to their screw drivers). Finally, their AOP solution took 4x the time to run, but had lots fewer lines of code.

You make fairly bold assumptions that the original implementation was poor (maybe it was, maybe it wasn't). OO and performance often don't go together. A good, nice OO design/implementation is often slow as hell. That's where tuning and optimization come in, which often makes the code more complex and less manageable (what you call a 'hack').

From those results, I could come to the following conclusions:

- AOP is the best way to patch the original code they had.
- 'Hacked' Java outperformed the AOP solution (which is why I believe they added # of images created to strengthen their AOP argument), yet they could still tweak the AOP implementation and beat the 'hacked' Java version.
- Their AOP solution could have just as easily been modeled in Java in an OO way using pipes/filters and java.io.*-like implementations.
- AOP introduces another learning hurdle into the maintenance of the program by other programmers-- K.I.S.S.

So you're saying

* The original version was bad (written by incompetent programmers).
* The optimized version was a hack because they didn't know how to optimize it properly to make it fast and extensible, simple, etc.
* They weren't intelligent enough to use pipes and filters (those don't exist in Lisp; if only they had used Java...).
* We should all code in assembly, because everything else requires us to learn something else.

In summary, while the AOP solution worked, you could have just as easily modeled the same solution in straight Java using stateless filters and pipes. The 'fusing' optimizations they are talking about with AOP could easily be modeled in OOP through dependencies/comparators within a pipe of transformations. I don't think you can necessarily say that an OOP solution is always better than an AOP solution, but why introduce AOP if you don't need to?

OO is not the end all. Different ways to solve different problems. AOP and OOP are not mutually exclusive.

You asked for a non-trivial example and here's one, but you seem to dismiss it rather quickly.

You make fairly bold assumptions that the original implementation was poor (maybe it was, maybe it wasn't). OO and performance often don't go together. A good, nice OO design/implementation is often slow as hell. That's where tuning and optimization come in, which often makes the code more complex and less manageable (what you call a 'hack').

The article does describe how the OO version worked-- actually in a procedural manner (as the paper states), where, again, a final image was passed in and a completely new one was returned for each step of the processing they now wanted to do. Of course there are better ways of 'chaining' modifications to I/O data, and I'm sure anyone would agree.

OO is not the end all. Different ways to solve different problems. AOP and OOP are not mutually exclusive.

Correctly used, the two are fairly mutually exclusive. It would be improper to write a whole domain model in aspects or declare all of your intrinsic business behavior in aspects. Aspects should be there to support external dependencies of your system (logging, caching, persistence, contextual behavior), but never to model the internal business behavior of your system. In the latter case, AOP is really unnecessary and *can* be improperly used in this context. In reality, my views aren't too far 'right' from the pro-AOP folks.

I look at the nature of what AspectJ accomplishes in being able to modify the behavior of your object, for everyone, by weaving in additional code at compile time. The fact that your objects have now changed, to be truthful, scares me and could possibly be scary to someone who has to now maintain my code.

If my goal is to provide additional behavior within a system, I would prefer an annotated approach since you are not modifying your object itself, but declaring behavior for an external context. An example is the EJB 3.0 spec where you annotate your objects in order to provide persistence rules *specific* to the external tasks of persistence. Does the nature of your object change for everyone outside of the EJB container where the annotations were intended? No. By handling 'aspects' in this manner, you don't have to worry about behavior leaking into other processes that they were never intended for or expected.

Correctly used, the two are fairly mutually exclusive. It would be improper to write a whole domain model in aspects or declare all of your intrinsic business behavior in aspects.

I agree with you on this. Although... :) ... Last week I had an interesting dinner meeting with a JBoss user. He was telling me that he scopes out the easy, simple, monotonous work to his offshore team and is starting to use AOP to weave in the more complex stuff he needs to write that the offshore team can't handle. I'll have to get back to him a few months from now to see if this approach was successful.

Bill

Aspects should be there to support external dependencies of your system (logging, caching, persistence, contextual behavior), but never to model the internal business behavior of your system. In the latter case, AOP is really unnecessary and *can* be improperly used in this context. In reality, my views aren't too far 'right' from the pro-AOP folks.

I look at the nature of what AspectJ accomplishes in being able to modify the behavior of your object, for everyone, by weaving in additional code at compile time. The fact that your objects have now changed, to be truthful, scares me and could possibly be scary to someone who has to now maintain my code.

If my goal is to provide additional behavior within a system, I would prefer an annotated approach since you are not modifying your object itself, but declaring behavior for an external context. An example is the EJB 3.0 spec where you annotate your objects in order to provide persistence rules *specific* to the external tasks of persistence. Does the nature of your object change for everyone outside of the EJB container where the annotations were intended? No. By handling 'aspects' in this manner, you don't have to worry about behavior leaking into other processes that they were never intended for or expected.

That is interesting-- AOP (I'm more familiar with AspectJ) seems to me like OO + aspects. I don't see AOP as a system that uses only aspects, but as a system where I add aspects to an OO implementation. That is, with the first aspect I add, I obtain an AO version of the code. In my opinion, you cannot have a system that is both OO and AO (at least if you don't say that OO is AO without any aspects).

The fact that some concerns cannot be well modularized in OO, and that we need aspects for that, means that you replace the scattered OO implementation of that concern with an AO one => AO replaces OO. And that's what AOP should aim for: replace OOP by keeping the good things and adding aspects to those parts where it doesn't work well. However, it might be a matter of interpretation, but I think this affirmation is too often used and not that obvious.

From those results, I could come to the following conclusions:

- AOP is the best way to patch the original code they had.
- 'Hacked' Java outperformed the AOP solution (which is why I believe they added # of images created to strengthen their AOP argument), yet they could still tweak the AOP implementation and beat the 'hacked' Java version.
- Their AOP solution could have just as easily been modeled in Java in an OO way using pipes/filters and java.io.*-like implementations.
- AOP introduces another learning hurdle into the maintenance of the program by other programmers-- K.I.S.S.

In summary, while the AOP solution worked, you could have just as easily modeled the same solution in straight Java using stateless filters and pipes. The 'fusing' optimizations they are talking about with AOP could easily be modeled in OOP through dependencies/comparators within a pipe of transformations. I don't think you can necessarily say that an OOP solution is always better than an AOP solution, but why introduce AOP if you don't need to?

I've had this argument before with Spille. The main problem is that the AOP community has not focused on the Why, but rather too much on the What. We tried to focus on the why in JBoss AOP's The Case for Aspects user guide.

I think AOP will first be used by the framework developers to make coding easier for framework users. The masses won't know they're using AOP, but using the framework will be butt-simple. From this, the framework developers will discover design patterns. The users will learn from these patterns and be able to apply them to their code. It is a natural evolution.

AOP + Annotations is a perfect example of simpler design. It allows the framework developers to easily encapsulate the functionality their annotations are supposed to introduce without relying on messy unmaintainable code generation (I've written an IDL compiler, so I know what I'm talking about). Users get an easy way to apply these annotations. They just tag their code and use it.

A perfect example of AOP + Annotations is the Asynchronous Aspect Clause Haussenet contributed to JBoss AOP. You tag a method as @Asynchronous which introduces an advice that runs the method in the background, and a mixin that allows you to access an API to get the response of the method asynchronously. I've recently extended this to work remotely. Clean encapsulation of asynchronous behavior combined with the ease-of-use a middleware user expects.

JBoss' EJB 3.0 implementation is based entirely on JBoss AOP. The aspect-oriented design has given us a container that is completely extendible.

JBoss Cache is another example of a framework using AOP for its design internally. They needed to aspectize their code because it was becoming too bloated, and they also used AOP to provide transparent caching to their users.

One large problem with AOP is that the ordering of advices is critical in applications of it. For instance, the Asynchronous aspect needs to come before transaction demarcation, but after security. This is where IDE integration can be very important as you need to see what aspects are applied to a specific joinpoint and what order they are in. Advice ordering, IMO, is the one significant hurdle that may make AOP hard to adopt in many situations.
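The ordering concern can be modeled as a toy interceptor chain in plain Java (invented names, not JBoss AOP's actual API). The binding order of the advices is the nesting order around the join point, which is exactly why getting it wrong silently changes behavior:

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class AdviceOrderingSketch {
    // A toy "advice": wraps the trace of everything it surrounds.
    public static UnaryOperator<String> advice(String name) {
        return inner -> name + "(" + inner + ")";
    }

    // Weave a chain around a join point. The first advice in the list
    // runs outermost -- the declarative binding order IS the nesting order.
    public static String weave(List<UnaryOperator<String>> chain, String joinpoint) {
        String woven = joinpoint;
        for (int i = chain.size() - 1; i >= 0; i--) {
            woven = chain.get(i).apply(woven);
        }
        return woven;
    }

    public static void main(String[] args) {
        // The constraint from the text: security outside asynch,
        // asynch outside transaction demarcation.
        System.out.println(weave(
                List.of(advice("security"), advice("asynch"), advice("tx")),
                "method"));
        // prints security(asynch(tx(method)))
    }
}
```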

All in all, I find the HUGE adoption problem with AOP is that it's hard to understand by reading about it. You need to see examples before you make the connections. Maybe OOP had the same problem. I can't remember back that far.

A perfect example of AOP + Annotations is the Asynchronous Aspect Clause Haussenet contributed to JBoss AOP. You tag a method as @Asynchronous which introduces an advice that runs the method in the background, and a mixin that allows you to access an API to get the response of the method asynchronously. I've recently extended this to work remotely. Clean encapsulation of asynchronous behavior combined with the ease-of-use a middleware user expects.

Even though the method call is asynchronous, all operations on the object must be synchronous since it binds the result of the last call to a ThreadLocal variable of the mixin (please politely catch me on this if I'm wrong ;-). Sorry to use this as an example, but if I were a developer that was going to go back and unknowingly maintain an object that uses this aspect, the use of threadlocal from an aspect could be a *huge* gotcha.

From a maintenance standpoint, I would have hoped the code took a more direct approach, with a wrapper object that decorates the original calls with a callback reference owned by the object's caller (maybe throw in some simple reflection). IMHO, it would be a lot easier to trace, and it would keep code ownership a lot more direct.

Example is here, except the 'HeavyProcess' is a delegated method invocation.
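Since the linked example isn't reproduced in the thread, here is a minimal sketch of the decorator-plus-callback ("push") approach being described; all names (`PriceService`, `AsyncPriceService`) are invented for illustration. The caller decides to go asynchronous and supplies the callback; the target object never changes.

```java
import java.util.concurrent.CountDownLatch;
import java.util.function.Consumer;

public class AsyncDecoratorSketch {
    // The target object, unchanged: its intrinsic behavior stays synchronous.
    public static class PriceService {
        public int quote(String symbol) { return symbol.length() * 10; }
    }

    // A caller-owned decorator: the *caller* opts into asynchrony
    // and owns the callback; other callers still see the plain object.
    public static class AsyncPriceService {
        private final PriceService delegate;
        public AsyncPriceService(PriceService delegate) { this.delegate = delegate; }

        public void quote(String symbol, Consumer<Integer> callback) {
            new Thread(() -> callback.accept(delegate.quote(symbol))).start();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        int[] result = new int[1];
        new AsyncPriceService(new PriceService()).quote("JBOSS", r -> {
            result[0] = r;     // pushed to the caller when ready
            done.countDown();
        });
        done.await();
        System.out.println(result[0]); // 50
    }
}
```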

A perfect example of AOP + Annotations is the Asynchronous Aspect Clause Haussenet contributed to JBoss AOP. You tag a method as @Asynchronous which introduces an advice that runs the method in the background, and a mixin that allows you to access an API to get the response of the method asynchronously. I've recently extended this to work remotely. Clean encapsulation of asynchronous behavior combined with the ease-of-use a middleware user expects.

Even though the method call is asynchronous, all operations on the object must be synchronous since it binds the result of the last call to a ThreadLocal variable of the mixin (please politely catch me on this if I'm wrong ;-). Sorry to use this as an example, but if I were a developer that was going to go back and unknowingly maintain an object that uses this aspect, the use of threadlocal from an aspect could be a *huge* gotcha.

I fail to see the gotcha. The idea is to support concurrent asynchronous calls with a VERY simple, but powerful API. The asynchronicity is encapsulated in an aspect and isolates the user from how the asynchronicity is accomplished. Since this asynchronicity is encapsulated in a class it can be extended to add new functionality, or replaced entirely with a different aspect binding with no changes to user code.

From a maintenance standpoint, I would have hoped the code took a more direct approach with a wrapper object that decorates the original calls with a callback reference that is the owner of the object's caller. (maybe throw in some simple reflection). IMHO, it would be a lot easier to trace and keep code ownership a lot more direct.

Example is here, except the 'HeavyProcess' is a delegated method invocation.

Actually, IMNSHO, your example is a lot harder to maintain. For example, let's say you started off with the "Heavy Process" approach. After a few weeks you realize that it would be more efficient to use a thread pool to implement this pattern. A few weeks later, you find that you want to have timeouts, max thread pool size, and other configuration tweaks, and find that Doug Lea's Oswego Concurrent framework would be a better choice for implementation. Then a few months from now, you are porting your app to run on JDK 5.0 and find that the java.util.concurrent package has native OS hooks that are extremely efficient, so you refactor your codebase yet again to use JDK 5.0's java.util.concurrent.

This evolution of your 'Heavy Process' would require code changes to each and every piece of code that used your 'Heavy Process' pattern. On the other hand, if you used AOP to encapsulate asynchronicity, you'd have a cleaner migration and evolution path.

BTW, the 'Heavy Process' pattern is a push pattern. The JBoss Asynch Aspect uses a pull pattern by encapsulating the Future API provided by Oswego and/or JDK 5.0 java.util.concurrent. Push and Pull are both viable ways of doing asynchronicity. There's also no reason that the JBoss Future couldn't be extended to register a callback object.
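The pull pattern described here maps directly onto java.util.concurrent, the JDK 5.0 descendant of the Oswego library. A minimal sketch (the `compute` method is an invented stand-in for a slow operation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PullPatternSketch {
    public static int compute(String symbol) {
        return symbol.length() * 10; // stand-in for a slow computation
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Pull pattern: the caller fires the call, keeps a Future handle,
        // and pulls the result whenever it is ready. This is the style of
        // API an asynch aspect can hide behind an annotation.
        Future<Integer> pending = pool.submit(() -> compute("JBOSS"));

        // ... caller does other work here ...

        System.out.println(pending.get()); // blocks only if not done: 50
        pool.shutdown();
    }
}
```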

Actually, IMNSHO, your example is a lot harder to maintain. For example, let's say you started off with the "Heavy Process" approach. After a few weeks you realize that it would be more efficient to use a thread pool to implement this pattern. A few weeks later, you find that you want to have timeouts, max thread pool size, and other configuration tweaks, and find that Doug Lea's Oswego Concurrent framework would be a better choice for implementation. Then a few months from now, you are porting your app to run on JDK 5.0 and find that the java.util.concurrent package has native OS hooks that are extremely efficient, so you refactor your codebase yet again to use JDK 5.0's java.util.concurrent.

If JBoss's asynch Aspect changes its annotation signature, the same reflective changes would need to be made in the classes/methods/xml that use it. I could encapsulate the same kind of behavior in a single decorator that serves the specific 'caller' that wants async behavior *for* the 'targeted' object. We would still be hiding the how, allowing us to modify the underlying implementation within the decorator, yet retaining contextual behavior ownership within the 'caller'. The intended behavior of the 'targeted' object is still retained and can be used as expected by all other 'callers'.

Actually, IMNSHO, your example is a lot harder to maintain. For example, let's say you started off with the "Heavy Process" approach. After a few weeks you realize that it would be more efficient to use a thread pool to implement this pattern. A few weeks later, you find that you want to have timeouts, max thread pool size, and other configuration tweaks, and find that Doug Lea's Oswego Concurrent framework would be a better choice for implementation. Then a few months from now, you are porting your app to run on JDK 5.0 and find that the java.util.concurrent package has native OS hooks that are extremely efficient, so you refactor your codebase yet again to use JDK 5.0's java.util.concurrent.

If JBoss's asynch Aspect changes its annotation signature, the same reflective changes would need to be made in the classes/methods/xml that use it.

It depends on how the syntax of the annotation changed. This is where having a flat metadata model would allow for easier evolution.

For instance:

@Asynchronous
@MaxThreadPoolSize(10)

instead of:

@Asynchronous(maxThreads=10)
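A sketch of how the flat style reads at runtime, with both annotation types declared locally. These are illustrative stand-ins, not JBoss AOP's actual types; the point is that @MaxThreadPoolSize can be dropped or swapped (say, for a JMS destination) without ever touching the @Asynchronous type itself.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class FlatMetadataSketch {
    // Flat style: orthogonal annotations, each evolving independently.
    @Retention(RetentionPolicy.RUNTIME) public @interface Asynchronous { }
    @Retention(RetentionPolicy.RUNTIME) public @interface MaxThreadPoolSize { int value(); }

    public static class Service {
        @Asynchronous
        @MaxThreadPoolSize(10)
        public void process() { }
    }

    // What a weaver/framework would do: read the orthogonal pieces separately.
    public static boolean isAsync(Method m) {
        return m.isAnnotationPresent(Asynchronous.class);
    }

    public static int poolSizeOf(Method m) {
        MaxThreadPoolSize size = m.getAnnotation(MaxThreadPoolSize.class);
        return size != null ? size.value() : -1; // -1: implementation default
    }

    public static void main(String[] args) throws Exception {
        Method m = Service.class.getMethod("process");
        System.out.println(isAsync(m) + " " + poolSizeOf(m)); // true 10
    }
}
```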

I could encapsulate the same kind of behavior in a single decorator that serves the specific 'caller' that wants async behavior *for* the 'targeted' object.

Yes you could, but the instrumentation would be much, much, more verbose. The goals here are:

1) Encapsulation of the cross-cutting concern so that the concern can more easily evolve.
2) Encapsulation so that user code is isolated from the evolution of asynch behavior.
3) Finally, the most important: ease-of-use and transparency.

We would be still hiding the how, allowing us to modify the underlying implementation within the decorator, yet retaining contextual behavior ownership within the 'caller'. The intended behavior of the 'targeted' object is still retained and can be used as expected by all other 'callers'.

Even if you were ok with hurting ease-of-use there's still other things to consider. For instance, ordering of other cross-cutting concerns. Asynch needs to happen after security checks and before transaction demarcation. Asynch needs to happen before metrics and synchronization. For remote asynch, asynch needs to happen after tx and security is propagated to the context of the invocation. etc... The asynch code shouldn't be concerned with other aspects that are being used in the system. AOP gives a declarative way of specifying these orders and dependencies.

It depends on how the syntax of the annotation changed. This is where having a flat metadata model would allow for easier evolution. For instance:

@Asynchronous
@MaxThreadPoolSize(10)

instead of:

@Asynchronous(maxThreads=10)

Not sure why it makes a difference. Refactoring will be able to rewrite correctly either of these syntaxes.

Also, I'm not sure I like the idea that a method can declare itself "asynchronous". This is the kind of decision only a caller should be able to make.

If anything, the method should declare that it "can be called" asynchronously, but this information is already present in the form of the "void" return type.

It depends on how the syntax of the annotation changed. This is where having a flat metadata model would allow for easier evolution. For instance:

@Asynchronous
@MaxThreadPoolSize(10)

instead of:

@Asynchronous(maxThreads=10)

Not sure why it makes a difference. Refactoring will be able to rewrite correctly either of these syntaxes.

The difference is that the @Asynchronous annotation is independent of the implementation of the asynchronicity. For instance, if you wanted to use JMS as the backbone, maxThreads wouldn't make sense. The Asynchronous annotation shouldn't be extended with every implementation type.

I guess what I am saying is that the maxThreads attribute is configuration dependent on the implementation you're working with.

Not sure why it makes a difference. Refactoring will be able to rewrite correctly either of these syntaxes.

Also, I'm not sure I like the idea that a method can declare itself "asynchronous". This is the kind of decision only a caller should be able to make.

I take your point that the caller should be able to decide whether or not to call asynchronously a method marked as supporting asynchronous calls.

If anything, the method should declare that it "can be called" asynchronously, but this information is already present in the form of the "void" return type.

Why can't a method returning a non-void object be called asynchronously?

The returned object could be part of the asynchronous response, which can be retrieved through the mixin implementation. See the JUnit test cases bundled with the JBoss Aspect library.

This, along with the SQL/C++ comparison, are great analogies for the differences between AOP and OOP. I believe AOP is best used (if at all) for supporting your code, not supporting your application. That statement does sound confusing, but the point is that there shouldn't be a dependency in your application logic on AOP decoration.

On what do you base your belief? Surely not on actual AOP practice, or any of the examples in Ramnivas Laddad's excellent book.

Classes are modules. Aspects are modules. Aspects have a different structure than classes, but they are modules just the same. Sometimes there is no dependency from your application logic to aspects (e.g. logging aspects), sometimes there is (e.g. business rule aspects). Sometimes there is no dependency from your application logic to certain classes (i.e. log4j support), sometimes there is (ShoppingCart class).

With pointcut aspects, you lose some compile-time safety, and application logic can become more difficult to follow for new developers. To me, this means aspects that support logging/auditing (which do not affect application logic at all) can be very beneficial in supporting your code.

If you start rolling in transaction management aspects and event-related aspects, you start to lose compile-time safety, and others who wish to leverage your OOP application will face higher hurdles of entry without fully realizing the aspects that *might* affect the logic they write when refactoring.

Pointcut aspects lose static type safety? Hmmm... that's news to me. Perhaps you could show us a fragment of AspectJ code that has this property?

In addition, what users report is that for cases where it was appropriate to use aspects, the AOP code was much easier to comprehend than the non-AOP code. This point is critical; let me try to explain why it's true.

The Forest and The Trees

In the non-AOP code, you have a number of places in the code that all do the same thing. But they don't do it by accident. They do it because there is some common property they have, and they all do the same thing because they need to ensure that some structural invariant is preserved. For example, "be sure to grab this lock when entering this region". In the non-AOP code it's easy to see that *one particular place* does something, but it can be very hard to see the common structure or pattern. So you might see one lock grab, but not understand the overall rule.

In the AOP code, simple IDE support makes it easy to see each specific place that grabs the lock. So you've got what you used to have in the non-AOP code. But, and this is critical, in the AOP code it's also easy to see the overall structure. The pointcut is a declarative description of what the common points are. A well-named pointcut, like a well-named procedure, can make very clear "what is going on".
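The "grab this lock when entering this region" rule described above can be sketched even in plain Java, without AspectJ, using a JDK dynamic proxy. This is a hedged analogue, not the aspect code from the discussion: the `matches` predicate plays the role of the pointcut, and all names (`Region`, `updateBalance`, etc.) are hypothetical.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Plain-Java analogue of a locking aspect: the locking rule is stated once,
// in one place, instead of being scattered through every method body.
public class LockAspectSketch {
    public interface Region {
        void updateBalance(int delta);
        int readBalance();
    }

    public static class RegionImpl implements Region {
        int balance = 0;
        public void updateBalance(int delta) { balance += delta; }
        public int readBalance() { return balance; }
    }

    // The "pointcut": which join points need the lock.
    static boolean matches(Method m) {
        return m.getName().startsWith("update");
    }

    // Records which calls were advised, so the effect is observable.
    public static final List<String> trace = new ArrayList<>();

    public static Region withLocking(Region target) {
        ReentrantLock lock = new ReentrantLock();
        InvocationHandler h = (proxy, method, args) -> {
            if (matches(method)) {                 // advice applies here
                lock.lock();
                trace.add("lock:" + method.getName());
                try {
                    return method.invoke(target, args);
                } finally {
                    lock.unlock();
                }
            }
            return method.invoke(target, args);    // unadvised join point
        };
        return (Region) Proxy.newProxyInstance(
                Region.class.getClassLoader(), new Class<?>[]{Region.class}, h);
    }

    public static void main(String[] args) {
        Region r = withLocking(new RegionImpl());
        r.updateBalance(10);
        r.readBalance();
        System.out.println(trace);  // only the update call was locked
    }
}
```

The point of the sketch matches the post above: reading `matches` tells you the whole rule at once, while in hand-written code you would only ever see one lock grab at a time.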

Even though the method call is asynchronous, all operations on the object must be synchronous since it binds the result of the last call to a ThreadLocal variable of the mixin (please politely catch me on this if I'm wrong ;-).

You're caught.. :-) The asynchronous Aspect does support multiple asynchronous calls on the same or different methods of the same object.

Take a look at the JUnit test case test2AsynchronousCalOnSameInstance, which is bundled with the JBoss Aspect Library.

Sorry to use this as an example, but if I were a developer that was going to go back and unknowingly maintain an object that uses this aspect, the use of threadlocal from an aspect could be a *huge* gotcha.

Not anymore, based on my previous answer ;-) I use the ThreadLocal to handle multiple asynchronous calls on the same instance object from different threads. The typical use case is running the Asynchronous Aspect within a web container such as Tomcat, Resin, or Jetty.
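The JBoss Asynchronous Aspect itself is not shown in this thread, but the ThreadLocal idea being debated can be sketched in a few lines. This is a minimal illustration under assumed names (`callAsync`, `lastResult`), not the library's code: each *caller* thread gets its own slot for the pending result, so concurrent calls on one object from different threads do not clobber each other.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of per-thread result binding for asynchronous calls.
public class AsyncResultSketch {
    public static final ExecutorService pool = Executors.newFixedThreadPool(4);

    // One Future slot per caller thread, mimicking the mixin's ThreadLocal.
    public static final ThreadLocal<Future<Integer>> lastCall = new ThreadLocal<>();

    public static void callAsync(int input) {
        // Fire the work and record the pending result for THIS thread only.
        lastCall.set(pool.submit(() -> input * 2));
    }

    public static int lastResult() throws Exception {
        // Block for this thread's own call; other threads are unaffected.
        return lastCall.get().get();
    }

    public static void main(String[] args) throws Exception {
        Runnable client = () -> {
            try {
                int in = (int) (Thread.currentThread().getId() % 100);
                callAsync(in);
                if (lastResult() != in * 2) throw new AssertionError();
            } catch (Exception e) { throw new RuntimeException(e); }
        };
        Thread a = new Thread(client), b = new Thread(client);
        a.start(); b.start(); a.join(); b.join();
        pool.shutdown();
        System.out.println("per-thread results kept separate");
    }
}
```

This also shows why the web-container use case mentioned above works: each servlet request runs on its own thread, so each request sees only its own pending result.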

I'm sorry, but this blog doesn't qualify you as a major critic of AOP. What's in there are just the usual objections people raise after only a few minutes of thinking about it.

These days, to be a major critic, you have to address issues like: Will runtime weaving be efficient? What is the right semantics for dynamic deployment? How can developers identify aspects early in development? What refactoring support is needed for AOP? How does AOP interact with modeling? Are aspect libraries possible? Are reusable pattern implementations useful? And so on.

You also have to avoid saying things like "sprinkle your objects with behaviors" and "AOP is best used in cases of later adapting your code to a specific context or protocol" and other things that are just plain wrong.

Try this as a warm-up exercise: take the shapes example evolution scenario from http://www.cs.ubc.ca/~gregor/papers/04-02-04-eclipsecon.zip. Now implement it, without AOP, but with the same locality properties. The litmus test is that you should only have to edit one place in the code at the point in the evolution where the changed shape needs to be passed to the updater. Once you can do this, then you can tell a story about not "coding yourself into a corner on your next project".

If you think AOP is about doing things that you could have done with OOP, or even annotations, then you've missed the whole point.

I'm sorry, but this blog doesn't qualify you as a major critic of AOP. What's in there are just the usual objections people raise after only a few minutes of thinking about it.

Heheh, your comments remind me of Tucker Carlson's comments to Jon Stewart on CNN's Crossfire, where Tucker criticizes Stewart for his reporting style. If only I had known to pay my dues to the 'major' critics association, I could have put a banner on my personal blog to exemplify my rants as anything more than simple opinion. :-)

Try this as a warm-up exercise: take the shapes example evolution scenario from http://www.cs.ubc.ca/~gregor/papers/04-02-04-eclipsecon.zip. Now implement it, without AOP, but with the same locality properties. The litmus test is that you should only have to edit one place in the code at the point in the evolution where the changed shape needs to be passed to the updater. Once you can do this, then you can tell a story about not "coding yourself into a corner on your next project".

I would have taken the parent shape object and added the ability to register 'move' listeners with a shape. The pointcuts in your example make external assertions about the internal behavior of the object (a case in point of my issues with improper AOP). Your pointcut assumes that if a 'client' calls setPoint1(int, int) on an object, the object actually moved anywhere; it should be up to the Line to decide whether it actually moved.

The sample pointcut also suffers from the same redundancy as the Java code: if a movement method on Line delegates to other movement operations, you will be requesting the display to update multiple times in a single call.

Since rendering is asynchronous by nature, IMHO, simply adding the ability to register movement listeners to your shapes would allow them to simply notify the display that they've moved. It would be up to the display to then decide when it will actually render, thereby separating the concerns of the system, because your shapes still properly encapsulate the knowledge of whether and how they moved.
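The listener-based alternative proposed above can be sketched as follows. This is an illustration of the poster's argument under hypothetical names (`Shape`, `Line`, `setPoint1`), not code from the shapes example zip: the Line itself decides whether a call actually moved it, and notifies registered 'move' listeners only for real changes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Observer-pattern sketch: the shape encapsulates "did I actually move?"
// and the display merely registers interest in moves.
public class MoveListenerSketch {
    public static class Shape {
        private final List<Consumer<Shape>> moveListeners = new ArrayList<>();
        public void addMoveListener(Consumer<Shape> l) { moveListeners.add(l); }
        protected void fireMoved() { moveListeners.forEach(l -> l.accept(this)); }
    }

    public static class Line extends Shape {
        int x1, y1;
        public void setPoint1(int x, int y) {
            if (x == x1 && y == y1) return;  // the Line decides: no actual move
            x1 = x; y1 = y;
            fireMoved();                     // one notification per real change
        }
    }

    public static void main(String[] args) {
        Line line = new Line();
        int[] updates = {0};
        line.addMoveListener(s -> updates[0]++);  // stands in for the display
        line.setPoint1(1, 1);   // real move -> notify
        line.setPoint1(1, 1);   // no-op -> no notification
        System.out.println(updates[0]);  // 1
    }
}
```

Note this is exactly the design the reply below criticizes on locality grounds: every mutator must remember to call `fireMoved()`, so the definition of "change" is spread across the class.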

I would have taken the parent shape object and added the ability to register 'move' listeners with a shape.

But this fails the test that if I change my mind about what constitutes a change, or I want to pass the object, I have to edit more than one place.

It also decentralizes the description of what it means for something to be a change, thereby making it harder to reason about the overall structure of the code.

The sample pointcut also suffers from the same redundancy as the Java code: if a movement method on Line delegates to other movement operations, you will be requesting the display to update multiple times in a single call.

Good point. And another illustration of why AOP is a winner in this case. The simple change to the AOP code to solve this problem is to change the advice to:

This is easy because an AOP programmer thinks about crosscutting structure in a first-class and composable way. So they say:
- change is a well-defined crosscutting concept
- I want the top-level change, not any changes within them
- I get that with cflowbelow, also a crosscutting relationship on a pointcut
- and simple pointcut composition (&& !)
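The cflowbelow idea can be imitated in plain Java with a nested-call depth counter, which makes the semantics concrete even if you have never used AspectJ. This is a hand-rolled analogue under assumed names (`moveBoth`, `moveX`, `moveY`), not the declarative AspectJ version: a move that runs *below* another move suppresses its own display notification.

```java
// Simulating !cflowbelow(move()): only the outermost move in the call
// stack triggers a display update, so delegation causes one update, not three.
public class TopLevelChangeSketch {
    public static int updates = 0;
    public static int moveDepth = 0;   // stands in for cflowbelow(move())

    static void notifyDisplayIfTopLevel() {
        if (moveDepth == 1) updates++;  // only the outermost move counts
    }

    public static void moveBoth(int dx, int dy) {  // delegates to two other moves
        moveDepth++;
        try { moveX(dx); moveY(dy); notifyDisplayIfTopLevel(); }
        finally { moveDepth--; }
    }

    public static void moveX(int dx) {
        moveDepth++;
        try { notifyDisplayIfTopLevel(); } finally { moveDepth--; }
    }

    public static void moveY(int dy) {
        moveDepth++;
        try { notifyDisplayIfTopLevel(); } finally { moveDepth--; }
    }

    public static void main(String[] args) {
        moveBoth(1, 2);
        System.out.println(updates);  // 1, not 3
    }
}
```

The contrast is the point of the post: here the depth bookkeeping is threaded by hand through every method, while in AspectJ the same relationship is one composable pointcut expression.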

It also decentralizes the description of what it means for something to be a change, thereby making it harder to reason about the overall structure of the code.

Then, I would be interested in knowing your thoughts on the validity of breaking encapsulation/responsibility in order to decentralize change?

What breaking of encapsulation? I don't see AOP breaking any encapsulation in that example.

What I see is that in the non-AOP implementation, the modularity (encapsulation if you will) of the Shape classes is violated by manually scattering code having to do with signalling the changes throughout them.

In the AOP version, the modularity of the Shape classes is enhanced. They only contain the functionality of the Shape classes. Similarly, the encapsulation of the DisplayUpdating aspect is better, because the code for that "concern" is now a modular unit.

Now you may see some violation of encapsulation if you think about how the compiler/weaver work inside. You may see them insert bits of code here and there. But that isn't what the AOP programmer should be thinking about, and that isn't what the semantics of the AOP languages is. That's no more than the way that some optimizing compilers or JITs sometimes inline code.

I recommend that people read "Agile Software Development, Principles, Patterns, and Practices", by Robert C. Martin, before looking at AOP as an architectural solution to common OO mistakes.

Again, any reasonable AOP proponent always says that if you have a case where OO can modularize well all by itself, then by all means use OO. AOP was invented for the cases that OO can't handle properly alone. Cases like the shapes and updating example.

We are trying to determine whether AOP is a good fit for us or whether it is too dangerous. I have been reading a number of discussion threads and I must say that I do agree that caution is needed. Conceptually (I would even say academically) the concept is sound. There are definitely Aspects in our world, things like:
- Monitoring
- Security
- Traceability
- Caching
- Money :) (If only AOP could help with this) LOL

But to give this tool into the hands of average developers can cause a disaster. Here are some questions I came up with that I hope we can discuss. Ok, here goes.

1. In a large organization many groups work on the project and its pieces at the same time. Many components are developed in isolation and binaries are reused. When using Aspects to post-process classes, how can you be sure that you do not pick up something you don't want? For example, in your code you want to trace the enter event for every function that starts with set* and print the String value of the parameter passed; however, a public method from a component you did not write accepts a setKey(keyvalue) for a security key, and you will log this value into the log. In many organizations this is a major security violation.

2. In the world of open source and vendor software, many libraries are in your classpath. With the growing popularity of AOP, how can you be sure that the classes have not already been post-processed for AOP, and will your processing of them break their behavior? Caching is a perfect example. What will happen to an object if the vendor used AOP to return a cached value of the object and your code adds an around() advice and breaks that?

3. What about transactional processing and synchronization? It makes my head spin how those can be hurt by AOP. Let's say an Aspect enters within a method and starts operating on variables that are otherwise protected by an inline lock without knowing about it; and even if the aspect does know about it, it cannot get access to the instance of that lock.

4. I might think of more later, but I think AOP can be much better if it is part of the Java language itself, rather than a post-processing step. The problem is not in the Aspect way of thinking but rather in the physical disconnect, because however you slice it, it is bytecode manipulation. How would anyone like it if someone decided that your production code was not optimized enough and just went and adjusted the bytecode by hand? I've done that for binary code on RISC processors, and finding problems is very difficult.
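The security concern in question 1 is easy to demonstrate concretely. The sketch below is a hedged illustration in plain Java (a dynamic proxy standing in for a tracing aspect; `Component` and `setKey` are hypothetical names): a rule that logs every set* parameter will happily log a security key too, because the pointcut matches on names, not on intent.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// A name-based "trace every set* call" rule leaking a secret into the log.
public class TraceLeakSketch {
    public interface Component {
        void setName(String name);
        void setKey(String secretKey);  // security-sensitive, but still "set*"
    }

    public static final List<String> log = new ArrayList<>();

    public static Component traced(Component target) {
        InvocationHandler h = (proxy, method, args) -> {
            if (method.getName().startsWith("set")) {      // the "set*" pointcut
                log.add("enter " + method.getName() + "(" + args[0] + ")");
            }
            return method.invoke(target, args);
        };
        return (Component) Proxy.newProxyInstance(
                Component.class.getClassLoader(), new Class<?>[]{Component.class}, h);
    }

    public static void main(String[] args) {
        Component c = traced(new Component() {
            public void setName(String n) { }
            public void setKey(String k) { }
        });
        c.setName("order-7");
        c.setKey("s3cr3t");          // the secret ends up in the trace log
        System.out.println(log);
    }
}
```

The same leak happens with a real weaving tool: nothing in a name-based pointcut distinguishes a harmless setter from a sensitive one on a component you did not write.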

I wasn't aware that AOP was killing thousands of people every day. We'd better start a public fund so we can create commercials and warn everyone about AOP. While we're at it, let's also make a public service announcement that says, "the surgeon general warns that assembly and C are dangerous to your mental health and may have detrimental effects."

I remember the days when I worked on a large project made of C++ classes with an SQL stored-procedure counterpart. I remember it felt bad to read the C++ code and miss the real behaviour of the application, since you were not aware of the SQL part of the business logic. Sometimes it happened that the C++ compiled correctly and everything ran fine until you hit the first call of an SP with the wrong number of parameters. But I also remember that sometimes we were saved by a small change in an SQL script instead of redeploying the whole application. So where's the need to tag AOP as good or evil? Why not try getting the best out of all the technologies you can (and know how to effectively) use?

The application code and scripting code are not aware of each other because they are not dependent on each other at compile time.

This, along with the SQL/C++ comparison, is a great analogy for the differences between AOP and OOP. I believe AOP is best used (if at all) for supporting your code, not supporting your application. That statement does sound confusing, but the point is that there shouldn't be a dependency in your application logic on AOP decoration. With pointcut Aspects, you lose some compile-time safety and application logic can become more difficult to follow for new developers. To me, this means Aspects that support logging/auditing (which do not affect application logic at all) can be very beneficial in supporting your code.

If you start rolling in transaction management aspects and event-related aspects, you start to lose compile-time safety, and others who wish to leverage your OOP application will have higher hurdles of entry without fully realizing the aspects that *might* affect logic when refactoring.

For anyone who's had to maintain large applications, debugging often starts with a stack trace in a log. If your classes are being woven with aspects at compile time, it sounds like the weaving process would make it more difficult to track the call stack and where errors may have occurred. Maybe someone else can offer a little more input on how easy it is to debug AOP applications without unit tests and a class decompiler?


How about a debugging aspect? That way, one could shut down the server, turn on the debugging aspect, and restart. The server would inject all the normal aspects plus debugging. Of course, that's not to say it will be easy to find and fix the bug.

Why would an application with AOP not have unit tests? It would be tested differently, but I fail to see how AOP == no unit tests.

What is "considered dangerous by some", including me, is the people who believe that AOP is the mother of all solutions, not AOP itself. Our industry, at least the 90-95% of it that "tends to mediocrity" as Hani so well puts it, has a very annoying way of exclaiming they have found the holy grail as soon as some new tool comes up. Like a carpenter who finds a new type of hammer and goes about building a house with it as his only tool. Whether for money, nerdship (compare: worship) or pure stupidity, pleeeease stop that.

And yes, the TSS cartoons are, despite the authors' best intentions, amateurish at best. Please stop them too. :-|

For people who do not know how to apply OO. They may know OO buzzwords, but do not know how to leverage OO for productivity! It's just a way to hack code.. V

"AOP is just another design tool, not a replacement of OOP"

This is like arguing that since database and OS software were never OO, OO is just a design tool. OO never replaced POP completely. The post is clear that AOP is solving what OO is not solving properly. In this niche it is as good as the shift from POP to OO.

It's a tool, right? So if your company or group feels it's too dangerous, then don't use it. If your team is mostly experts, who are agile and can adapt to new techniques quickly, then it might be a good solution.

It's like saying, "knives can kill people, therefore we should not make knives." Well, knives are dangerous, but they come in handy when I need to cut some meat. If I want to enable some POJOs with logging, but the jar is provided by a third party, I may be able to weave in logging calls. Of course, I can always decompile the code and manually add the calls. A tool is a tool. Some tools are only used by a few individuals, while other tools are used by the masses. I would think the market will decide how AOP is used.

I had a college professor who helped invent virtual memory. He explained a similar debate that raged back at the time (the 60's) that he became interested in researching this technology. The "real" programmers contended that automated memory management would fail. Proponents of automated memory management contended that software development was slow, expensive, and error-prone.

There were good points made on both sides of the argument. Many mistakes were made on the way to an adequate memory management system. But eventually, they came up with the right equations and algorithms to make it work.

The new world of automated memory management ushered in greater innovation, faster development cycles, and lower costs. And, oh yeah, more people are now willing and/or able to have a software development career. All at the theoretical cost of less efficient programs and the introduction of "less capable" programmers.

All at the theoretical cost of less efficient programs and the introduction of "less capable" programmers.

It will be interesting to see what happens now that the chip industry has reached the limits of physics (Intel's new direction of not focusing on processor speed anymore, for example). When will "less efficient" no longer be acceptable or plausible?

What I'd like to know is why the Pro AOP-crew continues to assume that, just because there are people arguing against AOP, it must somehow be just as good and relevant as, say, OO or Automated Memory Management and so on. Fact of the matter is.. really.. that there are a lot of things that people argue about. I didn't try to pull the 'But OO had to go through this too' on my girlfriend yesterday when I was trying to convince her that she should really do the dishes every day.

What I'd like to know is why the Pro AOP-crew continues to assume that, just because there are people arguing against AOP, it must somehow be just as good and relevant as, say, OO or Automated Memory Management and so on.

Well, the arguments against new technology are predictable and as regular as clockwork, because people get knee-jerk reactions due to:
* Not understanding the applications or context of a new technology.
* Believing their currently used technology cannot be improved on and is the Bible, silver bullet, and end-all Holy Grail, to be followed religiously.
* Or finally, my favourite: people "protecting their investment" in knowledge because they think the new technology might make their current knowledge obsolete (which it won't).

AOP is not a silver bullet or holy grail, however, once the market starts to work out the best ways of implementations, best contexts of use and generally best practices, it will be very useful.

And of course there will be terrible aspect code written. To quote something I just heard today: "Object orientation doesn't stop bad programmers from writing terrible code of 3000-line classes and 500-line methods." So why is everyone so horrified about badly written aspect code? It's not like bad code will be new to the industry.. However, AOP will be very useful to those who can learn how and when to use it to the best effect.

What I'd like to know is why the Pro AOP-crew continues to assume that, just because there are people arguing against AOP, it must somehow be just as good and relevant as, say, OO or Automated Memory Management and so on.

You assume that because I find AOP to be a significant technology, it must be because people are arguing about it. I find it significant because it has greatly accelerated my latest software development project, after many fits and starts over the past few years with other more intrusive technologies.

I also see how another development project that I'm completing would've been much easier, more stable, and less labor-intensive had we used AOP concepts.

I think AOP can be dangerous. But as a previous poster mentioned, many things can be dangerous if misused.

No, that's not it at all. What I mean is that pro-AOP folks tend to want people to look at previous battles, such as the ones between procedural and object-oriented languages, thereby implying that by being on the argued-against side of the debate, together with friends like Object-Oriented Programming, Automated Memory Management, and so on and so forth, AOP somehow assumes some or all of their qualities.

Fact of the matter is guys, that you need to make a case all by yourself to prove why AOP is something to be remembered.

And also, it's not that I'm (to assume some of the fire directed against skeptics like myself) trying to protect myself and my way of implementing programs against the Evil Forces of AOP, or whatever assumptions are made about people not fawning all over AOP. It's about me not finding problems to solve that are solvable by the use of an AOP framework. I can think of a lot of things that I find tedious/bad/negative about my current methods. But to assume that some or all of them (or even one of them) is easily (or at all) solved by AOP is silly.

The few things, like inventing transparent proxies of various kinds for business objects or what have you, that I do see AOP providing solutions to, come at a cost: the cost of losing lots of compile-time safety and predictability (some of the reasons that brought us Generics). AOP is Perl to me. Good for hackity, write-only ways of going from A to D without passing through B and C.

I've got a question here... what restrictions are available to ensure that someone does not inject malicious code?

I am not talking about application-level security or crosscutting concerns, but about the bytecode level, the way AOP really works. Can't someone cause real damage that way? Is there any way that we can make it more secure?
