Nowadays everyone knows URL shortener services, and even if you don't realize it, you are using them.

Twitter automatically translates every URL into a shorter URL, Google has its own service, goo.gl, and there are many others...

In my last project I had to configure NWBC and I faced a problem. Did you know that the URL field for menu entries in PFCG is just 132 characters long?

Usually this is enough, but in my case it was not...

I had to link Cognos reports, which, believe me, have really long URLs. On top of that, due to a bug in SAP Fiori Launchpad, I had to apply the workaround described in note 2092412 to get my theme applied, and that generates long URLs too.

So I decided to use shortened URLs, but I faced two problems:

Users have no access to URL shortening services (Internet access is limited)

Using an external shortening service is against the company's security policy

The idea

Why not use a URL shortening service from ABAP?

I checked the SAP Help and didn't find any solution. In 2010 Roel van den Berge wrote a blog about integrating a public URL shortening service with ABAP (URL Shortener Service in ABAP), but that can't be used in my scenario due to constraint number 2.

Why not create a URL shortening service in our own ABAP system?

The approach

The URL shortening concept is really simple: find a short string that identifies a long URL. Usually this short identifier is a 6-character string...

With this in mind, I mapped the alphanumeric characters using this scale:

0 → a

1 → b

...

25 → z

26 → A

...

51 → Z

52 → 0

...

61 → 9

Having a 6-character string as a unique identifier and using this scale is like having 62^6 = 56,800,235,584 unique short URLs. Is that enough?

Where does this number come from?

Each short URL can be considered a unique ID: ID 0 (000000) is the short URL aaaaaa, ID 1 is aaaaab, and so on up to ID 56,800,235,583, which is the short URL 999999.
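The conversion between IDs and short codes is plain base-62 arithmetic. Here is a minimal sketch in Java (the real implementation lives in the ABAP class ZCL_T3G_AUS_MODEL; the class and method names below are mine, for illustration only):

```java
// Sketch of the short-URL codec described above: a 6-character base-62 code
// using the mapping 0->a .. 25->z, 26->A .. 51->Z, 52->0 .. 61->9.
public final class ShortUrlCodec {
    private static final String ALPHABET =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    private static final int LENGTH = 6;

    // ID -> 6-character short code, e.g. 0 -> "aaaaaa", 1 -> "aaaaab"
    public static String encode(long id) {
        char[] code = new char[LENGTH];
        for (int i = LENGTH - 1; i >= 0; i--) {
            code[i] = ALPHABET.charAt((int) (id % 62)); // least significant digit last
            id /= 62;
        }
        return new String(code);
    }

    // 6-character short code -> ID (the database key of the long URL)
    public static long decode(String code) {
        long id = 0;
        for (int i = 0; i < code.length(); i++) {
            id = id * 62 + ALPHABET.indexOf(code.charAt(i));
        }
        return id;
    }

    public static void main(String[] args) {
        System.out.println(encode(0));             // aaaaaa
        System.out.println(encode(1));             // aaaaab
        System.out.println(encode(56800235583L));  // 999999 (the largest 6-char code)
        System.out.println(decode("aaaaab"));      // 1
    }
}
```

Decoding simply reverses the process, so the handler can map an incoming short code back to the numeric database key without any lookup table for the code itself.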

The solution

I created a model class ZCL_T3G_AUS_MODEL that contains the conversion logic from short URL to ID and vice versa, a transparent table ZT3G_AUS_LINKS that stores the generated short URLs, and a blacklist/reserved table ZT3G_AUS_SLINKS.

Why a blacklist/reserved list? You probably don't want a short URL to spell the name of your biggest competitor or an impolite word; moreover, you may want to keep some friendly short URLs like <mycompanyname> or <myproductname> free for specific links.

In SICF I created a new independent service named s (this can be whatever you want).

An HTTP handler class ZCL_T3G_AUS_HANDLER is associated with this service and is responsible for translating the short URL into the original URL.
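To make the two tables and the handler's job concrete, here is a hypothetical in-memory analogue in Java (the real implementation uses the ABAP tables ZT3G_AUS_LINKS and ZT3G_AUS_SLINKS; everything below is invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// In-memory stand-in for the two tables: links plays the role of
// ZT3G_AUS_LINKS, reserved the role of the blacklist ZT3G_AUS_SLINKS.
public final class ShortUrlStore {
    private final Map<String, String> links = new HashMap<>();
    private final Set<String> reserved;

    public ShortUrlStore(Set<String> reserved) {
        this.reserved = reserved;
    }

    // Generator side: refuse blacklisted/reserved codes and codes already taken.
    public boolean register(String code, String url) {
        if (reserved.contains(code) || links.containsKey(code)) {
            return false;
        }
        links.put(code, url);
        return true;
    }

    // Handler side: /s/<code> -> original URL. The HTTP handler would then
    // answer with a 301/302 redirect, or a 404 when the code is unknown.
    public String resolve(String code) {
        return links.get(code);
    }
}
```

The handler logic is then just: take the path after /s/, resolve it, and redirect the browser to the stored long URL.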

Before I answer the question, let me step back and set the stage for my arguments! I was invited by Leigh Jin, an associate professor at San Francisco State University, to give a guest lecture on OpenUI5 two weeks ago. I was obviously very excited and prepared some code examples and a presentation for the lecture. Here's what I did: I created all my code examples on jsbin.com, a website for collaboratively coding in JavaScript. It's an amazing site, allowing you to develop a web page using HTML, JS and CSS. But here are the two best features from my point of view: you can see the outcome of your code and interact with it, and you can use libraries like jQuery, Twitter's Bootstrap, AngularJS and ... OpenUI5!

During my presentation, I displayed just this website and did some live coding to demonstrate how fast you can get things done with OpenUI5. I also shared the URL and let the students live code by themselves. The outcome was fantastic! Leigh and the students liked it a lot! For me, it was kind of a big surprise: I've been in the SAP ecosystem for less than two years and I've heard a lot about how complex things can get. Leigh is experienced in conducting SAP student classes and she also knows many SAP solutions. She explained to me that this live coding approach is exactly what she was looking for! By the way, she had not even considered teaching OpenUI5 before I showed her how I used to teach people about it. She knows how much preparation SAP classes require: setting up the servers, getting and uploading sample data, setting up the students' PCs with the necessary development tools, ... the list goes on and on. But with this approach, you don't need any of that. You can throw away the overhead and jump into code directly - within the browser of your choice!

And finally, here's the list you were probably looking for after reading the title:

No need to download or install any sources, and no dealing with folder structures, correct paths and so on

No need to download and set up any IDEs, plugins, SDKs or the like: just open your pre-installed browser

And now it's your turn: What do you think? Try out that approach and share your experience in the comments!

If you're looking for some code examples or just would like to see my slides, click here. And if you're interested in the SFSU class and our collaboration with them, please check out this site. Also, it's nice to see that by adding this web development class with OpenUI5, SFSU is bringing SAP's Student Recognition Award to their MBA program! Congrats!

It's been a while since I blogged about our monthly Open Source meetups in the Bay Area! We just had our fourth meetup last week - with 40 participants (out of 74 RSVPs)! As usual, we had a theme wrapped around the whole meetup, which this time was ... (surprise): Christmas! I was very excited about it and we decorated our facilities with Christmassy things (see the pictures below). But that was not all! Our first speaker, Aaron Williams, talked about how to prototype with IoT, and he came up with software-controlled Christmas lights, running on an Arduino, with sensors and a UI for maintenance purposes!

In the meantime, our Open Source Bay Area community has grown to 280 members, with 162 participants overall across 4 meetups in the last 5 months. But that was not the only change! Based on the feedback we received from our previous meetups, our participants really enjoy Q&A sessions to get their more detailed questions answered. At the end of the day, participants join our meetups to gain new knowledge, and our survey results underline that content is king! Based on that, we changed the format: our speeches are now only 15 minutes long, followed by 30 minutes of Q&A. This also leaves us more time to mingle and connect with like-minded people!

As for the last meetup, we had two speakers joining the speakers panel plus one spontaneous lightning talk by Ralf Pieper:

I was really glad to see such amazing engagement within our community. Ralf came up with the idea for a lightning talk just before the meetup. We had been thinking about such spontaneous talks a few times already, and we figured it would be very nice to give our community members the opportunity to speak to the community and bring up issues or thoughts they had!

We also introduced online surveys this time, so let me share some results with you:

And here's what our community is most interested in right now:

Big Data and cloud security

IoT everything (e.g. Connected Cars) - but live demos!

NoSQL databases and Big Data infrastructures

Crypto-currency in the cloud

OSS business models & success factors

Configuration Management

We received very good feedback, and people would like us to keep it going - which we will obviously do! So, if you are interested, join our community and RSVP for the next meetup using the link at the bottom of this blog! Also, if you know any cool speakers that would fit perfectly in one of our meetups, please let us know!

Thanks everyone for participating, and thanks also to Inga Bereza and Garick Chan for co-organizing all of our meetups!

If you are an Open Source enthusiast, please join our meetup group and RSVP for the next meetup on January 21st!

High Availability

Create a highly available 2 node virtual environment using DRBD and KVM

Choices in designing HA clusters from a reliability, scalability, and performance perspective (e.g., when to use network bonding, OCFS2 versus file-system fail-over, or DRBD)

OpenStack, KVM and PaaS

OpenStack deployments and troubleshooting

KVM on a grid enables dynamic management and resource allocation of virtual machines in large scale high-performance environments

Build Platform as a Service (PaaS) with WSO2 Middleware and EC2

Big Data (Apache Hadoop)

Deploy an elastic auto-scalable cluster with OpenStack to consume and process business data on demand

Ceph Storage

Sizing and performance of Ceph storage

Ceph for Cloud and Virtualization use cases, including thin provisioning to make your storage cluster go further

SAP on Linux

How T-Systems leverages Linux and SAP LVM capabilities within their data center

Optimized Linux for SAP applications

Automate SAP HANA System Replication

Manage SAP HANA Scale-Out Linux systems

Register Today

All of the open source technical sessions above are available at the annual user conference SUSECon 2014 (Nov 17-21, 2014, Orlando). Interested in attending? Request your $200 discount off the current full conference pass and meet with SAP & Open Source architects (email).

I wrote a blog last month (in July) on just how much I have been enjoying Ubuntu on my desktop machine.

I can't see myself going back - I am a total convert.

So, just on the chance that I might win some more converts to the cause, here are 7 more reasons why you might find Ubuntu to be your next OS choice.

7. Wobbly Windows

Oh, this is available for other operating systems too, but having a few great window effects makes development life much more fun. Given that I'm running a pretty standard GNOME desktop, I use the Compiz plugin to get this working for me. Wobbly Windows adds stretchy, snappy effects to desktop windows, but the best part is that Compiz comes with other effects to snap windows into different parts of your desktop. So with a couple of keystrokes I can set up a browser window on one half of the screen and an editor (like Sublime) on the right-hand side of my screen.

I can also quickly get several terminals up, snap them into the four quarters of the screen and ssh into a different server in each one. Although you can achieve the same thing with ...

6. Terminator

These things can start to get a little 'my-terminal-is-better-than-your-terminal', but after I was introduced to Terminator I have rarely used the standard terminal.

The best feature of Terminator is that you can have many windows and tabs open within the app, replicating what I was doing with separate terminal sessions snapped to different corners of the screen. To take this to another level (because that is not the killer use case), you can link windows together and issue the identical command to all of them. This was invaluable on a recent project where I was managing a cluster of servers and wanted to issue the same SQL query to each of them simultaneously to determine whether they were all in sync.

5. Cowsay

Again this is a pretty minor item, all things considered, but it does make logging messages that much more fun.

I have been doing a lot of work with Ansible, an open source provisioning tool. I will have more to say on this in a future blog, but for now, if you are not familiar with it, consider it a way to script your server deployments so that it is easy to deploy new servers with identical configuration.

Ansible uses cowsay to output many of its messages to the screen as the playbooks run. Given that some playbooks take time, it helps break up the monotony as the cows mooove across the screen.

Just to give you a feel of how cowsay can immediately improve your life:
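If you have never seen it, a cowsay-decorated message looks roughly like this (install it with sudo apt-get install cowsay and try it yourself):

```
$ cowsay "Moo!"
 ______
< Moo! >
 ------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```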

4. Multiple virtual desktops

I don't know how I would be able to work on 1024x768 again. I have grown used to multiple screens - and not just multiple screens, but multiple virtual screens. Ubuntu and GNOME make this a no-brainer, and you can easily have 4 virtual screens with real estate of 3840x2160. By splitting this into virtual screens of 1920x1080, I can put different types of work on different virtual screens and focus on one particular type of work at a time. I like to have a little PHP going on, a little SAPUI5 there, and perhaps email and other messaging on another. I can set up each screen with all the resources I need for that work and then leave it until I want to come back to it. This saves on context-switch time, as it is no trouble for my machine to leave a project or two open with the virtual machines they need so I can come back to them when I want.

I might restart my machine once a week if I need to, so being able to set all this up and then leave it running is a great timesaver.

3. A better understanding of your computer

In my last blog I mentioned the command line as one of the great benefits of Ubuntu. Now, I love great user interfaces as much as the next person. In fact, I am passionate about creating great user experiences for my clients. One of the best ways to do this is to simplify, simplify, simplify, and remove all the complexity that does not affect the transaction at hand.

As a developer and as a DevOps'er you need to be familiar with what is going on with your servers. There are many great graphical programs that enable this, but by using the command line I feel like I am operating much closer to the computer, and after a while the muscle memory kicks in and it becomes second nature. Also, things like $ and ^ from regular expressions have identical meanings in vi (yes, vi). Knowing basic vi is also handy for when you are ssh'd into your headless server and, guess what, Sublime isn't installed but vi is. Nano probably is too, but I'd rather not talk about that.

2. Alias you, Alias me.

Whilst Jason Bourne is flying around the world with half a dozen passports, an alias or two can be a great thing to stick in your back pocket or .bash_aliases file.

An alias can take a long command line sequence and reduce it to a couple of easily typed letters. For example, I have a scripted Vagrant box; I used to have to change to the correct directory before starting it, but with a simple short script aliased to two letters I can start that virtual machine, and with another alias I can ssh into it and I'm away.

Also, the ~/.ssh/config file is a winner, as you can define easy-to-remember aliases for all those servers you are managing and specify which user to log in as. There is also a trick of using ssh config to differentiate your GitHub accounts if you have multiple accounts for multiple clients.
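To make this concrete, here is the kind of thing I mean (the host names, paths and key files are made up for the example):

```
# ~/.bash_aliases - boot and enter a Vagrant box with two keystrokes
alias vu='cd ~/projects/mybox && vagrant up'
alias vs='cd ~/projects/mybox && vagrant ssh'

# ~/.ssh/config - memorable server aliases and per-client GitHub identities
Host web1
    HostName web1.example.com
    User deploy

Host github-clienta
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_clienta
```

With the second entry, git clone git@github-clienta:org/repo.git uses the client-specific key, which is the trick for juggling multiple GitHub accounts.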

1. Configurability

Yes, I saved the best for last. The best part of Ubuntu, or any other variant of Linux, is its configurability. If you don't like the UI, or pretty much any part of the OS, you can swap it out for another. This is one thing that MacOS and Windows don't offer to any large extent. While most of the tips and tools mentioned here can be used on those OSes, you can't swap out your UI.

This sort of discussion about editors, terminals and OSes can get a little heated for no good reason, and I am not saying that what I run is best and there is no other. Work out what works for you. In fact I run all these OSes (Linux, MacOS and Windows) and they all have advantages, but for my main machine, Ubuntu is where I am staying.

Last week, the Developer Relations team organized an Open Source meetup around success stories, failures and best practices of Open Source initiatives in bigger organizations. This time - it was our second meetup - we had 75 people attending. Our community grew from 18 to 75 within a month - that is pretty impressive! I was obviously really excited to welcome all participants and our 3 speakers: Zach Chandler from Stanford University, SAP's Tools Team (Ben Boeser, Dominik Tornow, David Farr), and Mark Hinkle from Citrix! Based on the feedback from our participants, most people really enjoyed the variety of speakers and valued the different perspectives on Open Source initiatives in bigger organizations.

If you were not able to attend, please find the slides of the talks below:

I also wanted to thank Inga Bereza and Garick Chan for their support as Co-Organizers - they did a great job!

For this second meetup, we actually changed our format based on the feedback we received from our early community members. This time we had one more speaker, shorter speaking slots and more focused talks. For the next meetup the speaking slots will get shorter again - our members like to discuss things in more detail and exchange their experiences. They also want to have small "pitching slots" to talk about their projects within the community! Oh yeah - we will also have more pizza - that was requested most.

Rick A: "Great talk from a wide variety of speakers regarding use of open source at their workplace. Very informative, exactly what I was hoping to hear about. Pragmatic talks about open source in general, specific discussions on Drupal, Git, OpenSSL, and others."

Daniel K.: "It was great! Very reassuring that other big organizations are having similar experiences...and overcoming them."

Jack P.: "Very useful meeting, great hosts and talks."

Greg P.: "All the speakers were great. Big thanks to the organizers & SAP for hosting."

If you are an Open Source enthusiast, please join our meetup group and RSVP for the next meetup on September 24th!

Earlier this year I was about to take on a new client and it was very clear that I would need to upgrade my computer.

The fun part about this new client is that there was no SAP technology to be seen, and it was a very open source house - open source in the sense that it used a lot of open source technologies and open source thinking.

The fun part for me at the start of this assignment was picking out a new beast on which to practise my craft.

After looking into the various options available I went for a DELL Latitude with stacks of RAM and an SSD drive and chose Ubuntu for the OS.

WOW! I can almost hear you falling off your chairs.

I have been a Windows guy all my career. Not that I have particularly enjoyed it - Windows can be a right pain in the neck at times, but at the end of the day it works most of the time and had everything I needed. I saw a lot of my developer colleagues heading down the shiny iMBP or iAir path, and while that looked very shiny and attractive, here are my reasons for going with Ubuntu, enjoying it, and never going back to Windows again (unless I am forced to).

Everything I need is available on Ubuntu. There is nothing that I need that is not on Ubuntu. Actually, that is not strictly true in the most pedantic sense of the word, but for everything I need to do there is an option on Ubuntu.

What's good for the server is good for the desktop. The great thing about working with Ubuntu on the desktop is muscle memory. The servers run Ubuntu - web servers, database servers, monitoring servers, email servers are all running Ubuntu. Not that all those processes are running on the desktop, but it does mean that when you are working on the production servers, all the same commands work exactly the same way. Need to work out if your server is running out of disk? Using the same df or du commands makes it easy to remember.

Embrace your inner command line. I loved Windows because I could avoid the command line. Even though Windows now has PowerShell, and it is powerful, I used to avoid the DOS command line because it was a real pain in the neck. With Ubuntu, and even with the MacOS systems in my life, I love the command line. A lot of the time it is easier to type a command than use a GUI equivalent. Also, because tools like grep become part of everyday work, regular expressions become (slightly) less daunting. They just become part of your muscle memory.

Do I need to mention Windows 8? The short answer is no. I have used Windows 8 a little on some machines where I had to, and I can't say it was a pleasant experience. It really is two user interface paradigms nailed together badly.

Installing software is a snap. I had this impression that installing software on Linux was compile, make and so on, but because Ubuntu and similar Debian-based systems have critical mass, the software repositories are up to date and it is easy to sudo apt-get install <program>. Pretty much anything you need is an apt-get away.

The performance is awesome. This is perhaps down to Dell and the fact that I have all the memory and SSD that I do, but to be up and running from a cold start in 30 seconds is fantastic. My old clunky, creaking Windows machine was literally a case of "come back after you have made your second coffee". I know I am not comparing apples to apples here, but I haven't yet really made this machine creak.

Virtual machines rock. VirtualBox is the best. Teamed with Vagrant and Ansible, it makes a great combination for local servers that can be easily created, provisioned, deployed and destroyed. They make it easy to work on similar setups right across the software landscape.

Those are seven good reasons to leave the realm of Windows and not get dragged over to the expensive side of the force.

If you are looking to replace your machine soon take another look at Ubuntu. It is not as scary as you might think.

I was first introduced to Ubuntu by a basis consultant years ago. Now I look back and wonder why it took me so long to get on board.

I would love to hear your feedback and how SAP software can be made more Linux-friendly.

FISL (International Free Software Forum) is one of the biggest events aimed at promoting and adopting free software. It takes place every year in Porto Alegre, the capital of Rio Grande do Sul, the southernmost state of Brazil and the state where SAP Labs Latin America is located.

The event is a good place to exchange ideas and knowledge, and there you find students, researchers, social movements for freedom of information, entrepreneurs, Information Technology (IT) enterprises, governments, and other interested people. It gathers discussions, speeches, personalities and novelties, both national and international, from the free software world.

I have gone to FISL every year since 2009 (its 10th edition at that time), and in 2010 SAP made its first partnership with the event. That's when I got to know SAP better and had an interview for a developer position during the event. Less than a month later, I was working at SAP.

This year SAP participated in the event again, and I was able to give something back by being at FISL representing SAP.

SAP @ FISL15

I was there talking about our Open Source contributions (OpenUI5, Eclipse, Apache projects, etc.) and sharing my experience as an SAP employee. The results of the event were great: many people came by our stand (not only for the gifts) and we had many good conversations. But in the end, I think the most important thing for me is that I may have inspired others, just as I was inspired 4 years ago.

About two years ago SAP started to invest in a new OData Library (Java). The goals for this effort were to implement a library supporting OData Specification Version 2, with nearly the same feature set found in SAP NetWeaver Gateway, and to open source the library at Apache in order to build a developer community around OData.

In mid-2013 SAP did a software grant of the library and contributed the source code to the newly formed Apache Olingo Incubator project. Shortly after, the project released version 1.0.0 in October 2013 and version 1.1.0 in February 2014. The next version, 1.2.0, is already on its way and currently available as a snapshot on Apache Olingo Incubator, where you can also find the release notes. These releases cover the OData Specification Version 2. The committers of the project work constantly on the documentation for users of the open source library and are happy to answer questions via the dev mailing list or via Jira.

In the meantime, OData is evolving into an OASIS standard, so watch for news from the OASIS OData Technical Committee. The community's work now focuses on implementing both client and server libraries for the OASIS OData Standard (Version 4). These efforts are supported by new contributions for Java (ODataClient) and JavaScript (datajs), both client libraries for consuming OData services.

Apache Olingo is evolving into a project hosting OData implementations in different languages and technologies, which is already a great success, but the community also has some more milestones to focus on:

Graduation, which means that the project leaves the incubator behind and becomes a top level project within the Apache Software Foundation

Agreement within the community for a common roadmap of V4 feature development

Merge the contributions into a common code base to go forward with the OData OASIS Standard (Version 4) feature development

Release a first version of an OData Java Library supporting V4

Release a first version of datajs supporting V4

Last but not least, I also want to share some short facts about Apache Olingo (Incubator):

2 releases, the third one is on its way

19 initial committers

7 new committers

75 persons active on the mailing list

1025 commits in the git repositories

more than 1500 mails via dev mailing list

more than 150 Jira Issues closed / resolved

about 20 tutorials available

With that, I think there are interesting times ahead of us in shaping the future of the Apache Olingo project.

Introduction

An application using OpenUI5 at the front end will sooner or later need to connect to back-end services for some business logic processing. In this blog entry we'll show how to use the popular Spring MVC framework to expose REST-like endpoints for such server-side processing. Spring MVC makes it very simple to set up and configure an interface that handles requests with Json payloads, converting all domain model objects from Json to Java and back for us.

Simple Maven project with embedded Tomcat for testing locally

Servlet 3.0, no-XML, set-up for the web application using Spring's annotation based configuration

JSR-303, Bean Validation through annotations, used on the model POJOs

Spring MVC set up with a web jar for OpenUI5 runtime and automatic serialization of the model to Json

Application

This is a very simple single-page application with a table of fruit, each having a name (String) and a quantity (integer). One can add a new fruit, delete an existing entry from the table, or update an existing fruit using inline editing.

Taking just the "add" operation as an example, we can see that the home view, home.view.js, calls the controller with a JavaScript object constructed so as to represent a Fruit when it is serialized as part of the Ajax request by the controller.

The controller, home.controller.js, then simply sends the serialized Fruit object as the content of a POST request to the appropriate endpoint (/home/add) made available by the Spring MVC controller. Once the Ajax call returns the updated model data, it is simply rebound to the JSONModel associated with the view.

In what follows, we'll look in detail at how to implement a REST-like endpoint handling Json payloads with the Spring MVC framework.

Spring MVC set-up

We are using the Servlet 3.0, no-web.xml approach based on Java annotations to set up a simple Spring MVC web application. For this we need an implementation of org.springframework.web.WebApplicationInitializer, in which we specify the class to be used when constructing an instance of org.springframework.web.context.support.AnnotationConfigWebApplicationContext and in which we declare a dispatcher servlet. Here is our implementation, com.github.springui5.conf.WebAppInitializer.
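A typical implementation of this wiring looks like the following sketch (the standard Spring API calls are as described; the WebConfig configuration class name is an assumption for illustration, not necessarily the project's actual class):

```java
import javax.servlet.ServletContext;
import javax.servlet.ServletRegistration;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.servlet.DispatcherServlet;

public class WebAppInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext servletContext) {
        // Annotation-based application context, configured by a @Configuration class
        AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
        context.register(WebConfig.class); // assumed name of the @EnableWebMvc config class

        // Register and map the dispatcher servlet - this replaces web.xml entirely
        ServletRegistration.Dynamic dispatcher =
                servletContext.addServlet("dispatcher", new DispatcherServlet(context));
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }
}
```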

We use the helpful EnableWebMvc annotation, which configures our application with some useful defaults. For example, Spring will automatically configure an instance of the org.springframework.http.converter.json.MappingJackson2HttpMessageConverter message converter, which uses Jackson to serialize to Json the models returned by the Ajax-handling methods of the controller.

Another interesting thing to notice is that we are using Spring's resource servlet to serve the static JavaScript (OpenUI5 runtime) from the web JAR available on the classpath of the application. To create the web JAR, we can simply package the OpenUI5 runtime JavaScript, available for download, into a JAR and add it to the WEB-INF/lib directory of our project.

We are autowiring the view-model bean into the controller. It will be reinitialized by Spring automatically for each new client of the application (a new browser session, for example). Ajax request handling is configured at the class and method levels via RequestMapping annotations specifying URL paths of the form /home or /home/add. Some methods accept a model object (Fruit) deserialized, or unmarshalled, from the Json in the body of the POST request via the RequestBody annotation.

Each controller method returns the instance of HomeModel, which will be automatically serialized (marshalled) to Json and later bound to the JSONModel on the client side.

Upon the initial request for the model data (/home), this is what the controller returns. Notice how the list of Fruit domain objects was automatically serialized to Json for us.

If an invalid value is submitted as part of the request body (for example, a quantity of 0 when adding a new fruit), it is automatically picked up by Spring and assigned to the org.springframework.validation.BindingResult parameter of the corresponding request-handling method. The application then exposes the validation error message as the value of the model's "error" attribute.
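Putting the pieces from the last few paragraphs together, the controller might look roughly like this sketch (HomeModel's addFruit/setError methods and the exact mappings are assumptions based on the description above, not the project's actual code):

```java
import javax.validation.Valid;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.validation.BindingResult;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping("/home")
public class HomeController {

    @Autowired
    private HomeModel model; // view-model bean, reinitialized per client

    // GET /home - initial model data, serialized to Json by Jackson
    @RequestMapping(method = RequestMethod.GET)
    @ResponseBody
    public HomeModel home() {
        return model;
    }

    // POST /home/add - Fruit is unmarshalled from the Json request body;
    // JSR-303 violations land in the BindingResult instead of throwing
    @RequestMapping(value = "/add", method = RequestMethod.POST)
    @ResponseBody
    public HomeModel add(@Valid @RequestBody Fruit fruit, BindingResult result) {
        if (result.hasErrors()) {
            model.setError(result.getFieldError().getDefaultMessage());
        } else {
            model.addFruit(fruit);
        }
        return model;
    }
}
```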

Testing the application

This is a standard Maven application which needs some mandatory dependencies to compile and run.

It also uses the Tomcat Maven plugin for running the project in an embedded Tomcat 7 via mvn tomcat7:run from the command line.

Conclusion

Using Spring MVC with OpenUI5 the way we have described here has some advantages. We can easily set up a REST-like endpoint which automatically converts Json payloads to Java domain objects, allowing us to concentrate on manipulating the model in Java without worrying about how the changes will be reflected in the JavaScript on the client side. We can also plug in domain-object validation based on annotations (JSR-303), using Spring's validation mechanism. This allows us to handle all business-logic validation on the server side in a declarative and transparent manner, leaving only checks for formatting errors on the client side.

There are some disadvantages to this approach, however. The main one, of course, is that we return the entire model for each request, which results in unnecessarily large data transfers. This should not be a limitation for relatively simple views, but it can become a problem for complicated views with a lot of data.

Open source is changing the way software is developed and consumed. It is also SAP's intention to contribute to open source and to integrate open source into the product line. With the same intention, the OData JPA Processor Library headed down the open source road a few months back, and yes, we are now open source software, along with the OData Library (Java), at the Apache Software Foundation (ASF); see the Apache Olingo project for details.

The OData JPA Processor Library is a Java library for transforming Java Persistence API (JPA) models, based on the JPA specification, into OData services. It is an extension of the OData Library (Java) that enables Java developers to convert JPA models into OData services. For more details, check SAP OData Library Contributed to Apache Olingo (Incubator), which gives you an introduction to why OData and to the features of the OData Library (Java).

The artifacts to get started with the OData JPA Processor Library, the documentation, and the code are all available on Apache Olingo. The requirements for building an OData service based on a JPA model are quite low; for a quick start, you can refer to the following tutorial. In short, you just have to create a web application project in Eclipse (both Kepler and Juno versions are supported), implement a factory to link to the JPA model, and register the factory class in the web.xml file. It’s that simple. The OData JPA Processor Library also supports more advanced features, such as the ability to redefine the metadata of the OData services (for example, renaming entity types and their properties) and to add additional artifacts, such as function imports, to the OData service.
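As an illustration of the factory step, a sketch along the lines of the Olingo 2 JPA processor API might look like the following. The persistence unit name "SalesOrderProcessing" and the class name are hypothetical, and the exact package names should be checked against the Olingo release you use, as they changed during incubation:

```java
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.apache.olingo.odata2.jpa.processor.api.ODataJPAContext;
import org.apache.olingo.odata2.jpa.processor.api.ODataJPAServiceFactory;
import org.apache.olingo.odata2.jpa.processor.api.exception.ODataJPARuntimeException;

// Sketch of the factory that links the OData JPA Processor to a JPA model.
// This class is what gets registered in web.xml so the library can expose
// the JPA entities as an OData service.
public class JpaServiceFactory extends ODataJPAServiceFactory {

    // Hypothetical persistence unit defined in persistence.xml.
    private static final String PERSISTENCE_UNIT = "SalesOrderProcessing";

    @Override
    public ODataJPAContext initializeODataJPAContext()
            throws ODataJPARuntimeException {
        ODataJPAContext ctx = getODataJPAContext();
        // Hand the JPA EntityManagerFactory over to the processor.
        EntityManagerFactory emf =
            Persistence.createEntityManagerFactory(PERSISTENCE_UNIT);
        ctx.setEntityManagerFactory(emf);
        ctx.setPersistenceUnitName(PERSISTENCE_UNIT);
        return ctx;
    }
}
```

The fully qualified name of this factory class is then wired into the OData servlet entry in web.xml, as the tutorial on Apache Olingo describes.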

So if you are on the lookout for reliable and easy-to-use software for transforming JPA models into OData services, you now know where to go. The libraries are out there in the open; please do explore them, extend them, and let us know the new faces you give them. Use the mailing list available here not just to tell us how you have used the libraries but also to report bugs and ask questions; the team will be glad to hear from you and to answer your queries.

With that, we can say we have arrived in the open-source software world! And this is just the beginning; there is more to come (or at least that is the intent), so keep an eye on the Apache Olingo project.

In today’s mobile and agile business environment, it is important to unlock the enterprise data held by applications and other systems and enable its consumption from anywhere. The Open Data Protocol (OData), on its way to being standardized by Microsoft, IBM, SAP, and many other companies within OASIS (an international standards body for advancing open standards for the information society), provides a solution that simplifies data sharing across applications in enterprises, in the cloud, and on mobile devices. OData leverages proven Internet technologies such as REST, Atom, and JSON, and provides a uniform way to access data as well as data models.

SAP recently contributed the Java OData Library to Apache Olingo (Incubator). After just a few days in public, it attracted a lot of interest from other companies. That makes us confident we can build up a community working on evolving this library toward the upcoming OData standard that will result from the standardization process at OASIS.

Talking about features, there is already a lot to discover in the library. Since the Entity Data Model, URI parsing (including all system query options), and (de)serialization for Atom/XML and JSON are already supported, one can build OData services supporting advanced read/write scenarios. Features like $batch are currently being added, while conditional handling, advanced client support, and detailed documentation are on the roadmap for the upcoming months.
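To give a feel for what "system query options" means in practice, here are a few requests of the kind such a service can answer (the service root and entity set names are hypothetical; the $-prefixed options are defined by the OData 2.0 specification):

```
GET /MyService.svc/SalesOrders?$filter=Amount gt 100&$orderby=CreatedAt desc&$top=10&$format=json
GET /MyService.svc/SalesOrders(1)?$expand=Items&$select=Id,Amount
GET /MyService.svc/SalesOrders/$count
```

Parsing these options and serializing the responses as Atom/XML or JSON is exactly what the library already handles for you.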

The guiding principles during the implementation of the OData Library were to be compliant with the OData 2.0 specification and to have an architecture in place that allows the library to be enhanced in a compatible manner as much as possible. A clear separation between core and API, and keeping dependencies down to a minimum, are important. The community should have the option to build extensions for various data sources on top of the library. The JPA Processor, provided as one additional module, is an excellent example of such an extension.

Besides the Core and API packages, there is also an example provided in the ref and ref.web packages, in order to show the features in an OData service implementation and to enable new features to be integrated into that service, also for full integration tests (fit).

We’ll keep you posted once the first release is available to digest. You can already dig into the code and provide bug reports, feature requests, and questions via Jira or the mailing list. All the information is available in the support section of the web site.

SAP's software is known for its role running many of the world's largest companies, but not necessarily for its user-friendliness. As part of an ongoing effort to change this perception, SAP unveiled Fiori, a set of 25 lightweight "consumer-friendly" applications that can run on desktops, tablets and mobile devices, on Wednesday at the Sapphire conference in Orlando.

Fiori applications are written in HTML5, which makes multiplatform deployments possible. They also target some of the most common business processes a user might perform, such as creating sales orders or getting their travel expenses approved, according to SAP's announcement.

SAP has grouped the initial Fiori applications into four separate employee types, including manager, sales representative, employee and purchasing agent. Fiori is priced per user and available now, but specific costs weren't disclosed Wednesday. It's possible to deploy Fiori as a single group of applications, as well as separate Web applications and within portals, according to a statement. Some 250 customers helped SAP develop Fiori and make the apps more user-friendly, SAP said.

SAP has basically been compelled to develop something like Fiori, according to one observer. "Customers want enterprise-class apps with consumer-grade experiences," said analyst Ray Wang, CEO of Constellation Research. "Fiori is one of the ways SAP customers can pull the data out of their existing systems, and democratize that information so that everyone can benefit from access to the SAP system."

"For years, the issue was that SAP data was hidden or not easily accessed," Wang added. "This is one small step to make that change."

SAP's App Haus, a startup-like development group within the company, has been working to create more usable and appealing application interfaces. It wasn't immediately clear Wednesday whether the App Haus team is involved with Fiori.

The vendor has also launched a product called Screen Personas, which gives users the ability to rejigger SAP software screens to better fit their job role and personal preferences. There's plenty more to come, SAP co-CEO Jim Hagemann Snabe said during a keynote.

As some of you might know, SAP is a contributor to the open-source project Eclipse. As part of that engagement we also organize so-called "Eclipse DemoCamps" to show what one can do with this great development platform, which is widely used in the IT industry and is also the IDE of choice for the SAP HANA Cloud Platform.

In case you are interested in joining the event, you can register for free or even propose a speaking slot at the Eclipse DemoCamp and join speakers like Mike Milinkovich, the Executive Director of the Eclipse Foundation.

You'll be able to listen to interesting talks, enjoy free drinks and food, and find plenty of opportunities to connect with other developers during the event.