A Survey on IT Technologies, related Domains and Markets


Introduction

Java 9 was finally released on 21 September 2017. Among its features, one in particular addresses a long-standing gap in the Java platform: modularization. Modularization is a matter of code organization at a different scale than Java packages. Java packages allow us to organize our classes and group them by functionality, with the desired degree of granularity. Java modularization solves this organization problem from a larger perspective: not that of a group of classes, but of the artifacts that are eventually deployed in the runtime environment. We can also say that modules play a role similar to objects in object-oriented programming: objects encapsulate data and behavior, while modules encapsulate whole sets of classes, resources and configurations, exposing to other modules only the functionality that is meant to be shared and hiding the rest. Modularization is implemented by the Java 9 Module System.

In the following paragraphs we will expose the basics of the Java 9 Module System.

Module descriptors

A Java module is a collection of code, in the form of classes organized in packages, plus static resources of any kind, such as property files. A fundamental characteristic of a module is that it is self-describing: its definition provides the outside environment with all the information required to use it. The module describes itself through a module declaration like this:

module com.myproject.module1 {
    requires com.myproject.module2;
    exports com.myproject.alpha;
    exports com.myproject.beta;
}

The declaration starts with the reserved word module followed by the name of the module. There is no mandatory naming rule to follow, but it is advisable to use the “reverse domain” format typical of package definitions.

Inside the braces, one or more requires or exports directives may be present. The requires directive declares the module’s dependency on some other module. The exports directive declares which packages are accessible to other modules.

If there is no requires declaration, the module has no dependencies on other modules; if there is no exports declaration, none of the module’s packages are accessible to other modules.

The module declaration must be written in a file called module-info.java stored at the root of the whole source tree:

module-info.java
com/myproject/alpha/Alpha.java
com/myproject/beta/Beta.java

The file is then compiled as usual into a file named module-info.class stored at the root of the compiled classes tree.

Module artifacts

Java artifacts are usually built as JAR files, so storing modules as JAR files is a natural choice. The contents of a JAR file representing a module would look something like this:

META-INF/
META-INF/MANIFEST.MF
module-info.class
com/myproject/alpha/Alpha.class
com/myproject/beta/Beta.class
...

Alongside the usual manifest file there is, in addition, the module-info.class file.

Java 9 Module System platform modules

Since Java 9 introduces the new module standard, we would expect the Java platform itself to be delivered as a set of modules, and that is exactly what happens. The main module is called java.base, with a definition like the following:

module java.base {
    exports java.io;
    exports java.lang;
    exports java.lang.annotation;
    exports java.lang.invoke;
    exports java.lang.module;
    exports java.lang.ref;
    exports java.lang.reflect;
    exports java.math;
    exports java.net;
    ...
}

The base module defines and exports all the core packages, including that of the module system itself (java.lang.module).

The base module has no dependencies on other modules, and every other module’s dependency on the base module is implicit, i.e. there is no need to declare it with a requires directive.

Dependencies graph

When a Java 9 application is loaded, the module system reads all the module declarations with their requires directives and resolves all the dependencies. The dependency resolution results in a directed graph such as the one depicted in Figure 1.

There are two main concepts to remember when talking about the relations between modules:

Readability

A module is said to read another one when it contains a requires declaration naming the latter. So in the graph, module1 reads module2, and module2 reads module3 and module4.

Accessibility

The exports clause complements the requires relationship by declaring which packages of a module are accessible to the modules that depend on it. For instance, if the declaration of module2 looks like this:

module com.myproject.module2 {
    requires com.myproject.module3;
    requires com.myproject.module4;
    exports com.myproject.delta;
}

it means that module1, which as we saw depends on module2, will be able to access only the package com.myproject.delta of module2.

Figure 1

Implied readability

If we look at the diagram in the previous paragraph (Figure 1), we see that module1 depends on module2 and module2 depends on module4. But what if module1 uses some class in module2 whose method signatures reference a type defined in module4? module1 has no direct dependency on module4, so how can we address this? This issue is handled by another important feature of the module system: implied readability, which is basically a transitive dependency. If module2 declares its own dependencies with the transitive modifier, like this:

module com.myproject.module2 {
    requires transitive com.myproject.module3;
    requires transitive com.myproject.module4;
    exports com.myproject.delta;
}

then module1, which directly reads module2, will also implicitly read the modules that module2 requires transitively. This is depicted in Figure 2, which shows with additional blue arrows that module1 also reads module3 and module4. (Early access builds of JDK 9 used the modifier public for this; the released version uses transitive.)

Figure 2

The unnamed module, automatic modules and migration

Using the features described above we can run our Java 9 applications with the code organized in a fully modular way, but what if our applications were written with previous versions of Java?

The module system loads modules from the module path, but it also loads all the JAR files found on the classpath, if present. To provide a consistent model, all code from the classpath is treated as a special module called the unnamed module. The unnamed module implicitly reads all other modules. This way we can be sure that our old code will work as expected, since it will have dependencies at least on the Java core types, which in Java 9 are available only through modules.

So far so good, we have a way to run our old applications but what about migrating them to Java 9 version? We must consider two main scenarios:

All the sources of the jar files that constitute our application are handled by ourselves

The right way to proceed in this case is to explore the dependencies of our JAR files with some tool (for instance jdeps, shipped with the JDK) and then configure them as modules, possibly after some code reorganization to fit the new modular standard.

Not all JAR file sources are maintained by ourselves

If we turn the JAR files we maintain into named modules, and they depend on third-party JAR files that are not configured as modules, they could fail to run, because those JAR files could only be put on the classpath. If we had the sources of the third-party JAR files we could fork them and maintain them ourselves, but since we do not control their source versioning this would clearly not be the best solution. Fortunately, the module system helps us by allowing JARs that are not configured as modules to be placed on the module path: they will be loaded by the module system as so-called automatic modules. We can think of them as implicit modules, as opposed to explicit named modules. An automatic module derives its name implicitly from the JAR’s file name.
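As an illustration, the naming rule can be sketched in plain Java. This is a simplified version of the rule documented for the JDK’s ModuleFinder (drop the extension, drop a trailing version, turn the remaining separators into dots); the real implementation handles more corner cases:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AutomaticModuleName {

    // Simplified sketch of how an automatic module derives its name
    // from the jar file name.
    public static String derive(String jarFileName) {
        // Drop the ".jar" extension.
        String name = jarFileName.replaceAll("\\.jar$", "");
        // Drop a trailing "-1.2.3"-style version, if present.
        Matcher m = Pattern.compile("-(\\d+(\\.|$))").matcher(name);
        if (m.find()) {
            name = name.substring(0, m.start());
        }
        // Replace any remaining non-alphanumeric runs with dots.
        name = name.replaceAll("[^A-Za-z0-9]+", ".");
        // Trim leading/trailing dots.
        return name.replaceAll("^\\.|\\.$", "");
    }

    public static void main(String[] args) {
        System.out.println(derive("guava-19.0.jar"));        // guava
        System.out.println(derive("commons-lang3-3.4.jar")); // commons.lang3
    }
}
```

So putting guava-19.0.jar on the module path yields an automatic module named guava, which other modules can then require by that name.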

Other modules can read automatic modules as if they were regular ones, so our modules can read them as they can do with any other module.

Since we cannot know its dependencies in advance, an automatic module is made to read all other modules, automatic and explicit. Furthermore, since it could contain methods whose signatures refer to types in other automatic modules, it grants implied readability to all other automatic modules; and since it is impossible to know which of its packages other modules may use, it exports all of its packages.

Another important feature is that automatic modules also read the unnamed module. This way, if there are JAR files that for some reason cannot be run as automatic modules, we can still use automatic modules as bridges between them and explicit modules.

Conclusion

Java 9 fills a gap that has existed since the very first version of Java: a native implementation of a modular way of organizing code. This will surely change the way we design Java software in the future.

A short summary

We have seen that the Sid entity can represent either a Principal or a GrantedAuthority. This is the crucial point that allows us to exploit the ACL itself to secure method execution in a fully dynamic way. We recall here that every secured object in the ACL model is associated with one and only one ACL entity.

The ACL entity can have multiple Access Control Entries (ACEs), each represented by Permission, Sid and Acl instances. An ACE in which the Sid is a GrantedAuthority can be seen as a permission on an object granted to a role, where the role is the GrantedAuthority.

If our goal is to secure method execution (normally we would secure the public methods of services), then the object associated with the ACL would be a method and the permission would relate to its execution. So we can define a custom permission, calling it ‘execute’ for instance.

The Acl would represent a method execution, with its set of ACEs carrying the ‘execute’ permission granted to a user or role (i.e. to a Principal or GrantedAuthority).

The only thing we would have to do then is define a PermissionEvaluator with a custom permission factory, and a custom voter. We will see below how to implement a simple example.

Dynamic Spring Security Sample

The example runs with an in-memory HSQLDB database. The DataSourcePopulator class initializes the database with all the ACL tables and records and creates two users, ‘granted’ and ‘notGranted’. The first user is given the execution permission on the method ‘secure’ of the TestSecuredMethodService class. The second user does not have any permission.

The execution permission is implemented by the class CustomPermission:
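A hedged sketch of what such a class could look like, following the pattern of Spring Security’s BasePermission (the mask value and code letter are illustrative, not taken from the original listing):

```java
import org.springframework.security.acls.domain.BasePermission;
import org.springframework.security.acls.model.Permission;

public class CustomPermission extends BasePermission {

    // A custom 'execute' permission, alongside the built-in READ/WRITE/... masks.
    public static final Permission EXECUTE = new CustomPermission(1 << 5, 'E');

    protected CustomPermission(int mask, char code) {
        super(mask, code);
    }
}
```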

The custom permission factory is configured in the file applicationContext-security.xml, where the permission evaluator is given our custom permission factory as the value for the “permissionFactory” property. In the same file the access decision manager is configured with a custom voter which is implemented as:

The vote(…) method first checks whether the object passed as a parameter is an instance of ReflectiveMethodInvocation. If so, it means that a method annotated with Spring Security’s @Secured, @PreAuthorize or @PostAuthorize is being executed. Since in our specific model we want to mark a method as secured without any reference to roles, we choose to implement a custom annotation like the following:
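A hedged reconstruction of such an annotation (the declaration is a sketch consistent with the description below, not the original listing):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.springframework.security.access.annotation.Secured;

// "ROLE_DUMMY" is not a meaningful role: it only makes methods annotated
// with @SecureMethodExecution visible to Spring Security's machinery.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Secured("ROLE_DUMMY")
public @interface SecureMethodExecution {
}
```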

This annotation uses the @Secured annotation as a meta-annotation. The “ROLE_DUMMY” string does not represent, as its name implies, a meaningful role; its only purpose is to make our SecureMethodExecution annotation recognized by Spring Security.

Our vote(…) implementation uses the permission evaluator to check whether the Authentication object that represents the authenticated principal has the EXECUTE permission on the method being executed.

The MethodWrapper class is a wrapper around the Method object retrieved from the ReflectiveMethodInvocation instance. Its purpose is to provide an ID by which the method can be stored in the ACL as a secured object. As you can see, the constructor calculates an ID using the object’s meta-information:

We have two users, “granted” and “notGranted”, both with password “user”. When we log in we are shown the following page:

The “Execute Secured Method” link will execute the secured() method of the TestSecuredMethodService class, and “Execute Not Secured Method” the notSecured() one. If we log in as the “granted” user and click on the first link, we will see the page below with the “Method executed!” message.

If we log in as the “notGranted” user, we will see the following error page:

If we click on the second link with either user, we will get the “Method executed!” page, because the “notSecured” method is not under security control, i.e. it is not annotated with @SecureMethodExecution.

Introduction

When implementing some sort of plugin architecture for a web application whose layout is based on the Tiles framework (https://tiles.apache.org/), there may be the need to reload new Tiles definitions on the fly.

How to load Tiles definitions programmatically

To load new Tiles definitions, first of all we get the Tiles container by passing the servlet context to the getContainer method of the TilesAccess class:
TilesContainer container = TilesAccess.getContainer(servletContext);

Then, assuming it is a BasicTilesContainer, we get the definitions factory out of it with the following:
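A hedged sketch of what this could look like against the Tiles 2 API (the cast, factory access and refresh call are assumptions based on that API; error handling omitted):

```java
import org.apache.tiles.definition.DefinitionsFactory;
import org.apache.tiles.definition.ReloadableDefinitionsFactory;
import org.apache.tiles.impl.BasicTilesContainer;

BasicTilesContainer basicContainer = (BasicTilesContainer) container;
DefinitionsFactory factory = basicContainer.getDefinitionsFactory();

// If the factory supports reloading, newly added definition files
// can be picked up without restarting the application.
if (factory instanceof ReloadableDefinitionsFactory) {
    ((ReloadableDefinitionsFactory) factory).refresh();
}
```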

Introduction

Resource bundles are objects characterized by a specific Locale, i.e. they are specific to particular geographical areas in terms of language, date format and other conventions. Usually they are represented by simple properties files on the classpath, with a suffix that indicates the targeted locale, and the application loads the appropriate resource bundle for the current locale using this suffix. Resource bundles are usually all loaded during the application’s startup, but sometimes there is the need to copy and load other resource bundles dynamically, without restarting the application.

How to load ResourceBundles dynamically with LocalizedTextUtil

A common situation in which there may be the need to load new resource bundles on the fly is a typical plugin architecture. We can imagine a web application in which components made of classes, JSP pages, CSS files and, of course, resource bundles can be dynamically loaded without restarting the application.

First of all, a resource bundle file should be placed on the classpath to make it available. If we choose to name the message bundles global-messages, then the following must be set in the struts.properties configuration file:

struts.custom.i18n.resources = global-messages

Then a class loader must be created with the URL of the path in which the bundle is stored.
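A hedged sketch of the idea, assuming a Struts 2 / XWork version of LocalizedTextUtil that exposes setDelegatedClassLoader and addDefaultResourceBundle (the directory path is purely illustrative):

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import com.opensymphony.xwork2.util.LocalizedTextUtil;

// Directory containing global-messages*.properties (illustrative path).
File bundleDir = new File("/path/to/plugin/resources");

// Expose the directory through a URLClassLoader so the bundle is resolvable.
URLClassLoader loader = new URLClassLoader(
        new URL[] { bundleDir.toURI().toURL() },
        Thread.currentThread().getContextClassLoader());

// Let XWork resolve bundles through our loader, then register the bundle.
LocalizedTextUtil.setDelegatedClassLoader(loader);
LocalizedTextUtil.addDefaultResourceBundle("global-messages");
```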

Introduction

Spring can use its own MVC framework or integrate other MVC frameworks. For instance, it can integrate with Struts 2 through a specific plugin. The plugin overrides the Struts object factory, providing a way to configure Struts actions as beans in the Spring context. Sometimes some customized behaviour is needed and the Spring plugin as it is is not enough. In this case the StrutsSpringObjectFactory, which is the core class of the plugin, can be extended, and the customized version can be configured instead of the default one.

How to provide a customized StrutsSpringObjectFactory

In order to extend the StrutsSpringObjectFactory, the buildBean method of the Spring Struts plugin should be overridden. In the following example the buildBean method is overridden and its logic customized to retrieve a bean from a Spring context different from the default one, if the bean does not exist in the default. This Spring context is stored in the ServletContext, which can be retrieved from the default Spring application context.
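A hedged sketch of such an extension, assuming the buildBean(String, Map) signature of the Struts 2 ObjectFactory and a ServletContext attribute named "pluginContext" (both names are illustrative):

```java
import java.util.Map;
import javax.servlet.ServletContext;
import org.apache.struts2.spring.StrutsSpringObjectFactory;
import org.springframework.beans.factory.NoSuchBeanDefinitionException;
import org.springframework.context.ApplicationContext;
import org.springframework.web.context.WebApplicationContext;

public class CustomStrutsSpringObjectFactory extends StrutsSpringObjectFactory {

    @Override
    public Object buildBean(String beanName, Map<String, Object> extraContext)
            throws Exception {
        try {
            // Try the default Spring application context first.
            return appContext.getBean(beanName);
        } catch (NoSuchBeanDefinitionException e) {
            // Fall back to a plugin context previously stored in the ServletContext.
            ServletContext servletContext =
                    ((WebApplicationContext) appContext).getServletContext();
            ApplicationContext pluginContext =
                    (ApplicationContext) servletContext.getAttribute("pluginContext");
            if (pluginContext != null && pluginContext.containsBean(beanName)) {
                return pluginContext.getBean(beanName);
            }
            throw e;
        }
    }
}
```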

Introduction

When we implement a Java servlet web application we face the problem of choosing in which scope to put information, depending on our needs. In the normal scenarios we essentially cope with the Context (i.e. ServletContext), Session and Request scopes.
A slightly different requirement comes up when we want to store some object or piece of information in the current thread, so that it is isolated from other threads. One might say that the ServletRequest object fits this requirement, because each request runs in a separate thread and one could simply store the desired information as a request attribute; but in a classical multilayer application the request object is not available in the business logic layer or in the data access layer.
A Java class called ThreadLocal comes in handy in these situations. In this brief article we will describe the very basics of ThreadLocal’s usage.
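In its simplest form, the behavior can be shown in a few lines (a minimal sketch; the names and values are illustrative): each thread that touches the variable sees its own copy.

```java
public class ThreadLocalBasics {

    // Each thread reading this variable gets its own, isolated value.
    public static final ThreadLocal<String> CONTEXT =
            ThreadLocal.withInitial(() -> "default");

    public static void main(String[] args) throws InterruptedException {
        CONTEXT.set("main-value");

        Thread other = new Thread(() -> {
            // This thread never called set(), so it sees the initial value.
            System.out.println("other thread: " + CONTEXT.get());
            CONTEXT.set("other-value");
        });
        other.start();
        other.join();

        // The other thread's set() did not affect the main thread's copy.
        System.out.println("main thread: " + CONTEXT.get());
    }
}
```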

A Sample Scenario

As a scenario that describes well why there is a need for storing objects in a thread scope, we will describe the implementation of a typical Data Access Object (DAO) layer, keeping it as simple as possible. We have picked this example only because it illustrates the point well; please keep in mind that there are many out-of-the-box solutions for the DAO pattern, both in the form of pure JDBC (Spring’s JdbcTemplate) and of more advanced ORM frameworks, and there is no point in, as they say, ‘reinventing the wheel’.
A typical issue when implementing a DAO layer is to encompass two or more database operations in a single transaction. Here we are talking about operations that change the data state, such as inserts, deletes and updates. To deal with this we can create the JDBC connection, set its autocommit property to false, pass it as a parameter to each of the DAO method calls involved and then, after having called all the methods in the transaction, execute commit (or rollback in case of errors) on the connection object.
This approach, though, couples the DAO method signatures with the JDBC connection. It would be nice if we could get a ‘cleaner’ version of our DAO interfaces, making them unaware of the connection. A way to do this is to implement some sort of transaction manager class with static methods, with the responsibility of creating the connection (or getting it from a connection pool), storing it in the current thread, giving it to the calling DAO object and handling the transaction boundaries between the DAO method calls. The part of storing the connection in the current thread can be done using a Java class called ThreadLocal; in the following paragraph we show a simple example of how this can be done.

Concrete Example

In the following example we use very simple classes just to explain how the whole ThreadLocal mechanism works. First of all we implement a minimal transfer object:

The SampleTransactionManager class has a startTransaction method that creates a new connection, sets autocommit to false, so that no database operation is committed until the commit method on the connection is explicitly called, and finally stores the connection in a static ThreadLocal variable. The magic behind the ThreadLocal class makes the connection actually stored per thread. This method is called outside of the DAO object’s method calls to mark the transaction’s start.
The getConnection method retrieves the connection from the ThreadLocal variable and returns it to the caller, i.e. the DAO object.
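A hedged reconstruction of what SampleTransactionManager could look like (this is a sketch, not the original listing; here the connection is passed in by the caller rather than created internally, to keep the example free of driver details):

```java
import java.sql.Connection;
import java.sql.SQLException;

public class SampleTransactionManager {

    // The connection is stored per thread: concurrent requests do not interfere.
    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();

    // Called before the DAO calls to mark the transaction's start.
    public static void startTransaction(Connection connection) throws SQLException {
        connection.setAutoCommit(false);
        CURRENT.set(connection);
    }

    // Called by the DAO methods to obtain the current thread's connection.
    public static Connection getConnection() {
        return CURRENT.get();
    }

    public static void commit() throws SQLException {
        Connection c = CURRENT.get();
        try {
            c.commit();
        } finally {
            CURRENT.remove();
            c.close();
        }
    }

    public static void rollback() throws SQLException {
        Connection c = CURRENT.get();
        try {
            c.rollback();
        } finally {
            CURRENT.remove();
            c.close();
        }
    }
}
```

Note the CURRENT.remove() in the finally blocks: cleaning the ThreadLocal matters in servlet containers, where threads are pooled and reused across requests.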

The employeeDao instance is used to add two values to a database table through the transferObject1 and transferObject2 variables. As we can see, SampleTransactionManager is used to start the transaction and to commit or roll back. The addSampleField method does not need the connection to be passed in as a parameter since, as we see in the DAO implementation, it is retrieved internally using SampleTransactionManager’s static getConnection method.

Conclusions

We have seen how the use of ThreadLocal allows us to access the current thread’s context and store objects in it. In this particular example we managed to keep the DAO method signatures ‘cleaner’ and independent of the connection in a typical transactional scenario (what is shown here could be improved to free the DAO internals of JDBC boilerplate code and keep them focused mainly on SQL, as Spring does with JdbcTemplate).

Introduction

When dealing with HTTP web frameworks, sooner or later one has to cope with requirements that go beyond the standard features offered by the chosen platform. A common issue is having to change, on the fly, the request submitted by the client or the response content returned to it. Java servlet technology deals with these issues basically with servlet filters and some complementary tricks that we will explain in the following paragraphs.

How to change the http request

Changing the servlet HTTP request can be done using the servlet filter mechanism, but that alone is not enough. Most of the HttpServletRequest fields are read-only, since the standard scenario does not cover the possibility that the original request information submitted by the client could be changed. The strategy to overcome this limitation is to wrap the request in another class, customize the desired getter methods and pass the wrapper object, instead of the original request, down the filter chain.
The Java servlet API already comes with two classes, ServletRequestWrapper and HttpServletRequestWrapper, that can be used as wrappers for the request. In order to change the original request, one has to create a class that extends one of these two, depending on which fields need to be changed (if the field is available in the ServletRequest class, extending ServletRequestWrapper will do). In this class one can then override the getter methods that provide the needed fields and implement the custom logic to compute their values. Finally, in the servlet filter’s doFilter method, an instance of the wrapper class is created, passing the original request to its constructor, and is handed to the chain’s doFilter call in place of the original request.
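A minimal sketch of such a filter, assuming a parameter named "lang" should default to "en" when absent (the parameter name and default value are illustrative):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

public class DefaultParameterFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Wrap the request with an anonymous inner class overriding getParameter.
        HttpServletRequest wrapped =
                new HttpServletRequestWrapper((HttpServletRequest) request) {
                    @Override
                    public String getParameter(String name) {
                        String value = super.getParameter(name);
                        if (value == null && "lang".equals(name)) {
                            return "en"; // illustrative default value
                        }
                        return value;
                    }
                };
        // Hand the wrapper, not the original request, to the rest of the chain.
        chain.doFilter(wrapped, response);
    }

    public void init(FilterConfig config) { }

    public void destroy() { }
}
```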

In this example a default value is provided when a specific request parameter is found to be null. An anonymous inner class is used to extend HttpServletRequestWrapper, to make the implementation terser. The example above is trivial; nevertheless, it shows just what is needed to hook into the servlet request lifecycle.

Introduction

In the previous post, How to modify the servlet request, we explained how to change the servlet request on the fly using the servlet filter mechanism and the class HttpServletRequestWrapper. In the following paragraphs we are going to show how to change the response content just before it is sent back to the client.

How to transform the HTTP response content

We can transform the servlet HTTP response content just before it is sent back to the client, using the servlet filter mechanism and the class HttpServletResponseWrapper. The wrapper is needed because the output stream of the original response is handled and closed by the servlet engine. What we need is to take the original content generated by the servlet, transform it and write it again to the response output stream; but we cannot do this in the normal request-response flow, because as soon as the content is written to the response output stream the latter gets closed, and it is no longer possible to write anything to it. The solution is to wrap the response in an extension of the HttpServletResponseWrapper class and provide it with a custom output stream. The wrapper is then passed to the filter chain’s execution instead of the original response, and the servlet engine writes its content to the custom output stream. The content can then be taken, transformed and written to the original response output stream. In the following paragraph a simple example is shown.

Http response transformation example

The following is a simple example of how the whole thing works. First of all we define a wrapper that extends HttpServletResponseWrapper:
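A hedged sketch of the wrapper and of the filter that uses it (a reconstruction consistent with the description below, not the original listing; the two classes are shown together for brevity):

```java
import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Buffers the servlet's output in a CharArrayWriter instead of letting it
// reach the real (engine-managed) output stream.
class CharResponseWrapper extends HttpServletResponseWrapper {

    private final CharArrayWriter buffer = new CharArrayWriter();

    public CharResponseWrapper(HttpServletResponse response) {
        super(response);
    }

    @Override
    public PrintWriter getWriter() {
        return new PrintWriter(buffer);
    }

    @Override
    public String toString() {
        return buffer.toString();
    }
}

class TransformFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        CharResponseWrapper wrapper =
                new CharResponseWrapper((HttpServletResponse) response);
        chain.doFilter(request, wrapper);

        // Manipulate the buffered content: prepend an HTML h1 title.
        CharArrayWriter writer = new CharArrayWriter();
        writer.write("<h1>Transformed content</h1>");
        writer.write(wrapper.toString());

        // Write the changed content to the real response output stream.
        response.setContentLength(writer.toString().length());
        PrintWriter out = response.getWriter();
        out.write(writer.toString());
        out.close();
    }

    public void init(FilterConfig config) { }

    public void destroy() { }
}
```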

Here the chain object’s doFilter is executed with the wrapper instead of the original response. When the chain’s doFilter execution completes, the content is taken from the wrapper and manipulated using a CharArrayWriter, and an HTML h1 title is added to it. Finally, the changed content is written to the original response output stream and the latter is closed.

Current IT Scenario

The current IT market is characterized by fast change. Many different issues bring great complexity; among them we can mention the need for multi-tenancy and cloud platforms. These new paradigms require more sophisticated instruments to implement solutions at an effective production pace. Some issues in the software lifecycle must be dealt with through new processes and methodologies; others are more focused on tools. When it comes to discussing tools, we have plenty of choices among frameworks and middleware. But what about the Java core? Is the language itself fully fitted for the current IT market and for the near future?

New Java core capabilities

If we look at the latest Java releases, up to Java 8, we see that several new features have been implemented. Among them, generic programming and functional programming deserve special attention: they fill a gap in the development world and let Java keep pace with other technologies. In the following two paragraphs we summarize them.

Generic Programming

Generic programming can be defined simply as a style of programming in which algorithms are written in terms of types that are specified parametrically. It provides a concept of reuse different from the general object-oriented paradigm: when a set of classes share the same behavior and data structure and differ only in the types involved, there is an opportunity to exploit generic programming. It thus provides a form of reuse complementary to the usual object-oriented one, based more on templating than on inheritance and other OO concepts.
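A minimal illustration in Java (an illustrative sketch, not tied to any specific library): the last-in-first-out algorithm is written once, against a type parameter T, and reused unchanged for strings, integers or any other element type.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NoSuchElementException;

// A minimal generic container: the algorithm is written in terms of T.
public class SimpleStack<T> {

    private final List<T> items = new ArrayList<>();

    public void push(T item) {
        items.add(item);
    }

    public T pop() {
        if (items.isEmpty()) {
            throw new NoSuchElementException("empty stack");
        }
        return items.remove(items.size() - 1);
    }

    public boolean isEmpty() {
        return items.isEmpty();
    }
}
```

The same class serves a SimpleStack<String> and a SimpleStack<Integer>, with the compiler enforcing the element type in both cases.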

Functional Programming

Giving a precise and definitive definition of functional programming is not so easy, as most of the literature on the subject lacks a formal approach and scatters across a variety of different descriptions. As an object-oriented programming (OOP) language, Java was originally designed primarily to support standard imperative (procedural) programming. With imperative programming, the code describes in exact detail the steps to accomplish a task; we can also call this algorithmic programming. In this style of programming, some state must be stored in a shared manner. Functional programming avoids putting state information in ‘external’ variables and instead consists in decomposing a problem into a set of self-contained functions: each function takes an input and returns an output, and the output of one function can be the input of another, with no shared state stored along the way. We can say that the functional approach focuses not on how to perform tasks (algorithms) but on what information is required and what transformations produce it. Even if Java is not a pure functional language, it has introduced functional constructs through streams and lambda expressions. This lacks the performance of a pure functional language, but it gives the opportunity to approach specific problems in a more robust and less error-prone way.
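The contrast can be sketched with a small example (illustrative names; any Java 8+ runtime): the imperative version spells out the steps and mutates an accumulator, while the stream version declares the transformation as a pipeline of side-effect-free steps.

```java
import java.util.Arrays;
import java.util.List;

public class FunctionalStyle {

    // Imperative: describe the steps and mutate a shared accumulator.
    static int sumOfEvenSquaresImperative(List<Integer> values) {
        int total = 0;
        for (int v : values) {
            if (v % 2 == 0) {
                total += v * v;
            }
        }
        return total;
    }

    // Functional: declare what we want as a pipeline of transformations.
    static int sumOfEvenSquaresFunctional(List<Integer> values) {
        return values.stream()
                .filter(v -> v % 2 == 0)
                .mapToInt(v -> v * v)
                .sum();
    }

    public static void main(String[] args) {
        List<Integer> values = Arrays.asList(1, 2, 3, 4);
        System.out.println(sumOfEvenSquaresImperative(values)); // 20
        System.out.println(sumOfEvenSquaresFunctional(values)); // 20
    }
}
```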

Is that enough?

The support for generic and functional programming is certainly of great importance and covers programming issues that would otherwise have to be addressed outside the Java technology stack, but other issues still lack sufficient support. JEE offers, in some way, a standard, and following standards is a good thing; but its releases follow a very long timeframe: you have to wait years for something new, which is out of touch with the rapid growth of new requirements in the IT world. The new IT tendencies are towards SaaS and cloud paradigms, which require highly configurable systems. Many software features need to be configured dynamically, in a programmatic way. Designing solutions as plugin-oriented is a must, and it is also a must that the components can be dynamically installed and uninstalled. Dealing with these issues requires an underlying framework that must be robust, flexible, maintainable and as standard as possible. The OSGi technology was meant to be a standard that copes with these requirements, but is it really a viable solution?

OSGi

OSGi is a standard developed by the Java IT industry aimed at implementing dynamic modular systems. It can deal with components that can be installed, started, stopped and uninstalled on the fly, without affecting the application server’s lifecycle, and it also maintains a registry that adapts to the components’ installation state. The main drawback is that its implementations are somewhat heavyweight and complex: they do not fit well with other frameworks (Spring, Struts…) and they require a specific runtime environment on the application server side. It is definitely not a lightweight solution. It addresses the problem of dealing with highly dynamic, component-based systems, but it imposes too many constraints on the overall software ecosystem. Its only chance of future success would be if all the application server vendors came to consider it a definitive standard; but normally software frameworks succeed when they fit well with the world, not when the world has to fit them. All the crucial features that the IT market requires should be addressed by the Java core platform in the first place, leaving to external frameworks only the burden of the high-level concerns.

Java Core And Dynamic Class Loading

The main problem in developing lightweight solutions lies in the lack of support from the Java core itself. Java classes can be loaded dynamically through the class loader architecture, but when it comes to loading and unloading classes on the fly, things are not so straightforward and special attention is required. Furthermore, matters become more complex if we must deal not just with the lifecycle of single classes but with bundles of classes as a whole (JAR libraries, for instance). There is no reliable mechanism to deal with this, and there is also no support for handling and sharing different versions of the same library in the virtual machine, something like the Global Assembly Cache of the .NET platform. It is frankly difficult to understand why this has not received any attention in the latest releases of Java, since overcoming this limitation would give a great impulse towards addressing even that segment of the web applications market that is up to now almost fully owned by dynamic language platforms like PHP.

Conclusion

The lack of support in the Java core for reliable dynamic loading and unloading of class bundles, and for their versioning, represents a great obstacle to the growth of mature and flexible solutions based on the Java stack in the new IT scenarios of highly dynamically configurable systems. If future releases of the Java core fill this gap, heavy implementations like OSGi will lose importance and there will be much more room for light and extensible implementations.

Preface

Spring Security provides robust support for securing Spring-based applications, but it falls short in some ways when it comes to designing dynamically configurable security, especially regarding the dynamic configurability of access to Java methods. How can we overcome these limitations?

Introduction

When we want to secure an application we must define access policies to its functions, and we basically cope with two main models, which we can call ‘role-based security’ and ‘object-based security’: the first works by defining roles played by users and using them to limit access to specific system functions, while the second focuses on permissions defined on single domain objects. This dichotomy holds for the Spring framework as well. Spring Security provides both models, and for each it provides a robust solution on its own: role-based security is implemented by the base Spring Security authorization API, and object-based security by the ACL module. Each solves a particular problem area, and together they probably cover most needs, but there are some limitations when it comes to designing more advanced solutions.
What if we want, for instance, to dynamically configure the authorization of method execution? In Spring Security we can secure methods by setting an annotation with a role-based expression, but a role is something that is defined and configured in advance, not dynamically. Another possibility would be to use ACL to secure a method based on the permissions held by a domain object passed as an argument, but this does not cover the situation in which we only want to authorize the method execution without any reference to its parameters. There are certainly ways of customizing the Spring Security classes to overcome these limitations, but in this article we want to point out a possible solution that exploits the ACL security model itself to provide a unified basis for securing the whole application in a dynamic way. But first, let’s have a quick look at how methods are authorized with role-based security and ACL in practice.

Role-based security

A role in Spring Security is represented by an instance of the GrantedAuthority interface. A list of GrantedAuthority objects can be stored on an Authentication object to represent the roles played by the currently authenticated user. The AuthenticationManager is responsible for inserting the GrantedAuthority instances into the Authentication object. An AccessDecisionManager is responsible for making authorization decisions based on statements configured in Spring XML configuration files or as expressions in annotations. One can implement one's own AccessDecisionManager or use one of the Spring implementations based on voting through the AccessDecisionVoter interface. Methods can be secured either with AOP configuration or, more simply, using annotations and expressions like the following:

@PreAuthorize("hasRole('ROLE_USER')")
public void method();

Secure objects (ACL)

ACL relies on an API backed by database tables to define authorization permissions (like write, delete, admin) on single domain objects. A common way to secure an object is to use the hasPermission expression in an annotation like the following:
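A minimal sketch of such an annotation, consistent with the description that follows and assuming a hypothetical Contact domain object passed as the contact parameter:

```java
// Sketch: execution of admin is authorized only if the current user
// holds the 'admin' permission on the Contact instance passed as 'contact'.
@PreAuthorize("hasPermission(#contact, 'admin')")
public void admin(Contact contact);
```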

In the example above, execution of the admin method is authorized only if the current user holds the 'admin' permission on the contact parameter.

A solution to dynamically secure method execution

Spring does not seem to offer an out-of-the-box solution to dynamically secure methods, i.e. to set the permission to execute a method on the fly. One can limit access to methods using roles, or define access permission rules on method parameters through ACL. Roles are a rather static way to define access rules: they must be defined in advance and are coarse-grained, since a role is not directly targeted at a single method or object but represents some general rule that limits access to certain areas of the application. ACL, on the other hand, is used for securing single objects by permissions, which are very fine-grained concepts directly related to the objects to be secured and not to some general application behavior.

One way to overcome these limitations would be to customize the Spring Security API, for instance by providing one's own implementation of the AccessDecisionManager interface. Nevertheless, at the heart of the Spring Security ACL model there is already something that could do the trick, perhaps in a more straightforward and cleaner way. The key is to represent a role not as a general application behavior associated with a user but simply as a set of permissions. Let's briefly recall the main entities involved in the ACL design:

Acl: represents an object, normally a domain object, through an ObjectIdentity, and stores a set of AccessControlEntry instances.

AccessControlEntry (ACE): composed of a Permission, a Sid and an Acl.

Permission: represents what can be done to an object (such as write, read, admin) and is implemented as an immutable bit mask.

Sid: represents either a Principal or a GrantedAuthority.

ObjectIdentity: each domain object is represented internally within the ACL module by an ObjectIdentity.
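The bit-mask idea behind Permission can be illustrated with a plain-Java sketch; the permission values below are illustrative and do not necessarily match Spring Security's BasePermission constants:

```java
// Each permission occupies one bit of an integer mask, so a set of
// permissions can be combined and checked with cheap bitwise operations.
public class PermissionMaskDemo {
    static final int READ    = 1 << 0; // 1
    static final int WRITE   = 1 << 1; // 2
    static final int ADMIN   = 1 << 2; // 4
    static final int EXECUTE = 1 << 3; // 8, e.g. a custom permission

    // True if every bit of 'permission' is set in 'mask'
    static boolean isGranted(int mask, int permission) {
        return (mask & permission) == permission;
    }

    public static void main(String[] args) {
        int mask = READ | EXECUTE;                  // grant read + execute
        System.out.println(isGranted(mask, EXECUTE)); // true
        System.out.println(isGranted(mask, WRITE));   // false
    }
}
```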

These classes are persisted to the database by the following set of tables:

ACL_SID stores Sid instances.

ACL_CLASS identifies the class of any domain object in the system.

ACL_OBJECT_IDENTITY stores information for each unique domain object instance in the system; it is related to Acl instances and holds a foreign key to the ACL_CLASS row representing the object's type.

ACL_ENTRY stores AccessControlEntry instances.

If we consider that a Sid can represent either a Principal or a GrantedAuthority, we are taken straight to the point: the ACL model already offers us a way to implement roles as sets of permissions, since an ACE ties together a Permission, a Sid and an Acl. A set of ACEs in which the Sid represents a single GrantedAuthority can be seen exactly as a role made up of a set of permissions. We can even assign permissions directly to a user, using a Sid as a Principal instead of a GrantedAuthority. But what kind of permission can we associate with a method? What we want to secure is method execution, so we can define a custom permission and call it 'execute', for instance. We can then represent an Acl as a method execution, precisely as a wrapper around an instance of the java.lang.reflect.Method class. The wrapper is needed to provide an additional id property identifying the specific method execution instance. The Acl will then be given its own set of ACEs with the 'execute' permission associated with a user or role (i.e. with a Principal or GrantedAuthority). In order to secure a method, a custom annotation could then be implemented; let's call it SecureMethodExecution:
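A minimal sketch of such a declaration, matching the description that follows (the meta-annotation and the dummy role attribute are the essential parts):

```java
// Custom annotation piggybacking on Spring Security's @Secured as a
// meta-annotation; ROLE_DUMMY only exists to make the default machinery
// treat annotated methods as secured.
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Secured("ROLE_DUMMY")
public @interface SecureMethodExecution {
}
```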

Here the SecureMethodExecution annotation declaration uses Spring Security's @Secured annotation as a meta-annotation, so that @SecureMethodExecution is recognized by Spring as if it were @Secured with the attribute value “ROLE_DUMMY”. The sole purpose of the “ROLE_DUMMY” attribute is to make the default AccessDecisionManager “think” that @SecureMethodExecution is a regular @Secured annotation.

Then the methods could be annotated like this:

@SecureMethodExecution
public void methodName(){...}

Finally, a specific implementation of the AccessDecisionVoter interface would provide the access logic. The following is an example of what the vote method code might be:
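A possible sketch of such a voter, where MethodExecutionVoter, the MethodExecution wrapper and the methodId helper are illustrative names introduced here, not part of Spring Security:

```java
import java.io.Serializable;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.security.access.AccessDecisionVoter;
import org.springframework.security.access.ConfigAttribute;
import org.springframework.security.acls.domain.GrantedAuthoritySid;
import org.springframework.security.acls.domain.ObjectIdentityImpl;
import org.springframework.security.acls.domain.PrincipalSid;
import org.springframework.security.acls.model.Acl;
import org.springframework.security.acls.model.AclService;
import org.springframework.security.acls.model.NotFoundException;
import org.springframework.security.acls.model.ObjectIdentity;
import org.springframework.security.acls.model.Permission;
import org.springframework.security.acls.model.Sid;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.GrantedAuthority;

// Grants access when the current user (or one of its authorities) holds
// the custom 'execute' permission on the Acl representing the invoked method.
public class MethodExecutionVoter implements AccessDecisionVoter<MethodInvocation> {

    private final AclService aclService;
    private final Permission executePermission; // the custom 'execute' permission

    public MethodExecutionVoter(AclService aclService, Permission executePermission) {
        this.aclService = aclService;
        this.executePermission = executePermission;
    }

    @Override
    public boolean supports(ConfigAttribute attribute) {
        // Only react to the dummy attribute carried by @SecureMethodExecution
        return "ROLE_DUMMY".equals(attribute.getAttribute());
    }

    @Override
    public boolean supports(Class<?> clazz) {
        return MethodInvocation.class.isAssignableFrom(clazz);
    }

    @Override
    public int vote(Authentication authentication, MethodInvocation invocation,
                    Collection<ConfigAttribute> attributes) {
        // The Sids for the current user: the principal plus its authorities
        List<Sid> sids = new ArrayList<>();
        sids.add(new PrincipalSid(authentication));
        for (GrantedAuthority authority : authentication.getAuthorities()) {
            sids.add(new GrantedAuthoritySid(authority));
        }
        // MethodExecution is the wrapper described above, giving the invoked
        // Method an identifier usable as an ObjectIdentity
        ObjectIdentity oid = new ObjectIdentityImpl(MethodExecution.class,
                methodId(invocation.getMethod()));
        try {
            Acl acl = aclService.readAclById(oid, sids);
            return acl.isGranted(Arrays.asList(executePermission), sids, false)
                    ? ACCESS_GRANTED : ACCESS_DENIED;
        } catch (NotFoundException e) {
            // No Acl or no matching ACE for this method: deny by default
            return ACCESS_DENIED;
        }
    }

    // Hypothetical helper mapping a Method to a stable identifier
    private Serializable methodId(Method method) {
        return method.getDeclaringClass().getName() + "#" + method.getName();
    }
}
```

Denying on NotFoundException is a design choice: a method with no Acl is locked until someone explicitly grants 'execute' on it; abstaining instead would fall through to other voters.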

Using this model, a user interface could be built through which ACE instances can be created or removed on the fly for every method that needs to be secured (usually service methods), without the need to change static configuration and restart the application.
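Such an interface could sit on top of Spring's MutableAclService; a hedged sketch, where MethodPermissionManager, grantExecute and the MethodExecution wrapper are illustrative assumptions:

```java
import org.springframework.security.acls.domain.GrantedAuthoritySid;
import org.springframework.security.acls.domain.ObjectIdentityImpl;
import org.springframework.security.acls.model.MutableAcl;
import org.springframework.security.acls.model.MutableAclService;
import org.springframework.security.acls.model.NotFoundException;
import org.springframework.security.acls.model.ObjectIdentity;
import org.springframework.security.acls.model.Permission;

// Admin-side service granting the custom 'execute' permission at runtime.
public class MethodPermissionManager {

    private final MutableAclService aclService;

    public MethodPermissionManager(MutableAclService aclService) {
        this.aclService = aclService;
    }

    public void grantExecute(String methodId, String role, Permission execute) {
        ObjectIdentity oid = new ObjectIdentityImpl(MethodExecution.class, methodId);
        MutableAcl acl;
        try {
            acl = (MutableAcl) aclService.readAclById(oid);
        } catch (NotFoundException e) {
            acl = aclService.createAcl(oid); // first grant for this method
        }
        // Append a granting ACE: this role may execute this method
        acl.insertAce(acl.getEntries().size(), execute,
                new GrantedAuthoritySid(role), true);
        aclService.updateAcl(acl); // takes effect immediately, no restart needed
    }
}
```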

Conclusions

There could be several different ways to dynamically secure methods in Spring Security, but the solution above relies on the Spring Security architecture itself and needs only minor customizations.