Oracle – info.michael-simons.eu (https://info.michael-simons.eu)
Just another nerd blog (feed generated Fri, 14 Dec 2018 10:11:38 +0000)

Add pluggable databases to Oracle's Database Docker images
https://info.michael-simons.eu/2017/06/07/add-pluggable-databases-to-oracles-database-docker-images/
Wed, 07 Jun 2017 14:20:53 +0000

I recently needed a database including a huge schema for a demo on very short notice. I know how to run expdp/impdp, but that usually brings hassle regarding users, schemas and so on.

As some of you might know, I have been using Oracle's Database images for Docker for quite some time now (first described here). You can still build them to your own needs from the sources (here) or use the new Oracle Container Registry, which works quite nicely if you already have an OTN account.

The 12c database consists by default of a container database (CDB) and one or more pluggable databases (PDBs), and the 12c images adhere to that. So does our main development database.

What I did on the development database: I stopped and unplugged the pluggable database containing the schema and user of our application (luckily, I was clever enough to separate those from other applications the last time I set this up). Then I copied the files over to my local machine. That took about 15 minutes, including giving my coworkers a short note, and was way faster than exporting the whole 40 GB or so.

How to do this? Log in as the SYS user or any user with administrative rights; sqlplus / as sysdba will do just nicely on the database server. Then:
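A sketch of those statements (assuming, as throughout this post, a pluggable database named pdb_awesome_app; adjust the name and target path to your setup):

```sql
-- Close the pluggable database on the development server
ALTER PLUGGABLE DATABASE pdb_awesome_app CLOSE IMMEDIATE;
-- Unplug it, writing all metadata into an XML file
ALTER PLUGGABLE DATABASE pdb_awesome_app UNPLUG INTO '/tmp/pdb_awesome_app.xml';
-- Remove it from the CDB but keep the datafiles on disk
DROP PLUGGABLE DATABASE pdb_awesome_app KEEP DATAFILES;
```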

This closes and unplugs the database and writes all metadata into pdb_awesome_app.xml. Then copy the database files. Those usually live in a folder like /opt/oracle/oradata/NAME_OF_THE_CONTAINER_DB/pdb_awesome_app. Copy the whole directory onto the machine that will run the container. Also copy /tmp/pdb_awesome_app.xml onto your machine.

Now it's time to get your coworkers working again. The last command dropped the pluggable database from your server, which is fine, since we still have the database files and the metadata. Still on the server, execute:
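The plug-back can be sketched like this; NOCOPY reuses the datafiles that are still in place on the server:

```sql
CREATE PLUGGABLE DATABASE pdb_awesome_app
  USING '/tmp/pdb_awesome_app.xml' NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE pdb_awesome_app OPEN;
```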

That will plug the database again and open it for your coworkers to enjoy.

Back to your machine running the Docker instance. Start the Oracle Database image that fits the version your server has; I was using 12.1.0.2 here. Also note that the container must have the same database options available as the source. You can still add them inside the container with dbca, which is part of the Docker image. dbca has a -silent option so it doesn't bother you with the X11 UI.

Note: You might want to check the character set of your database! The container database's character set should match the pluggable database's. Oracle's Docker images use AL32UTF8 by default. Luckily, you can pass the character set with ORACLE_CHARACTERSET; it is used for that instance only. For example, use ORACLE_CHARACTERSET=WE8MSWIN1252 to start a database with WE8MSWIN1252.
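A sketch of starting such a container (the image tag and container name are examples; the volume target matches the data location used throughout this post):

```shell
docker run -d --name oracle-12102 \
  -p 1521:1521 \
  -e ORACLE_CHARACTERSET=WE8MSWIN1252 \
  -v /some/path/OracleFiles:/opt/oracle/oradata \
  oracle/database:12.1.0.2-ee
```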

It will map /some/path/OracleFiles into the running container. If this folder contains a database (for example when you already used the container), startup will be fast, otherwise the container takes some time to create the initial container database. It’s important that you map a local folder into this docker container here. Oracle databases are huge and slow to create, so it’s not really an option to keep them inside the docker container. Also, it makes the running docker container even bigger than it already is.

Also, for the purpose of this exercise, it's important to take note of /some/path/OracleFiles. While your container is starting, you can already move the pdb_awesome_app folder you copied from the database server into /some/path/OracleFiles. Also copy pdb_awesome_app.xml into the same folder.

Here is an important step: you may have to fix paths inside the XML file if your server doesn't have its database files under /opt/oracle/oradata. Take your favorite editor and fix each entry in the XML file if necessary.

Then, use Kitematic or the Docker command line tools to open a bash inside the running database container. Log in to the container database as follows:
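A sketch of those steps, using the hypothetical container name from above:

```shell
docker exec -it oracle-12102 bash -c "sqlplus / as sysdba"
```

Inside SQL*Plus, plug in the copied database, pointing to the XML file as it appears inside the container (here the data volume is mounted at /opt/oracle/oradata):

```sql
CREATE PLUGGABLE DATABASE pdb_awesome_app
  USING '/opt/oracle/oradata/pdb_awesome_app.xml' NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE pdb_awesome_app OPEN;
```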

If the database version and all options match: congratulations, you just copied a 40 GB schema in 30 minutes instead of several hours. I personally had some time to write this blog post before rushing to a JUG meeting and then to a customer 700 km away.

Oracle JET: JavaScript components for mere mortals?
https://info.michael-simons.eu/2016/11/14/oracle-jet-javascript-components-for-mere-mortals/
Mon, 14 Nov 2016 19:46:12 +0000

The final post in this series is about adding a front end to the analytic queries created with jOOQ and published through Spring Boot / MVC throughout the previous posts:

Deciding on a frontend technology for web applications

As much as I still like server-side rendered HTML, there is some great stuff out there ready to use. I wrote about creating an Angular 1 front end for a Spring Boot app nearly two years ago, so I had to research the current options.

Alexander Casall gave a very good presentation at W-JAX last week on which of those techniques is the right choice for you. They have very different approaches: ReactJS focuses more on a modern programming paradigm, whereas Angular 2 provides a more complete solution in terms of how one structures an application, provides dependency injection and so on. (On my todo list is an update to the above posts, where I will showcase the upcoming Spring Framework 5 / Boot 2 with Angular 2.)

What both Angular 2 and ReactJS are missing for the use case developed in this series, however (providing fast access to a lot of business-relevant data), is a set of curated UI components, especially collections and data visualizations. I know there is great stuff out there, but all of it must be carefully selected to work together.

“It’s a collection of open source JavaScript libraries along with a set of Oracle contributed JavaScript libraries that make it as simple and efficient as possible to build applications […]”

The sales pitch ends up in “consuming and interacting with Oracle products”. Well, you can do that, too.

I think of Oracle JET more as a curated set of well-proven open source modules. It basically provides:

RequireJS for managing used modules inside your scripts

Knockout for providing two-way bindings between models and views

jQuery

jQuery UI which is the foundation for the great list of UI components included

Getting started with OracleJET

A new Oracle JET application can be created with a Yeoman generator which comes in various flavors: A basic starter which provides only the basic project structure and an empty page, a nav bar starter that has a responsive side bar and a starter that already provides hooks for going native through Cordova.

You have several options to use an application generated like that. One is using the provided grunt tasks like grunt build and grunt serve and, especially later when going to production, grunt build:release, which does all the black magic of minifying and optimizing your views and models. Again, you can use NetBeans for those tasks:

I have a second Oracle JET based application up and running, which is the admin backend of my user group's site, available here. This application is built as described above and then deployed to Pivotal CF using the static buildpack.

The demo

In the demo for the DOAG Konferenz und Ausstellung I chose a different approach. Suppose you want to deploy your analytic queries together with an easy-to-use frontend: all static resources you put into src/main/resources/static are delivered "as is" by the embedded application container of your Spring Boot application, Tomcat being the default as of writing. This is what I did in #8158f5a (the commit is really huge, containing all the small JS files…). The single page application (SPA) is prefixed with "app", so that I can display my carefully crafted "Spring Boot loves jOOQ" banner on the entry page before.

Note: this works quite well, but I wouldn't want to deliver the huge bulk of those JS, CSS and HTML files this way into production! I wanted to have a single, deployable artifact with a build process I can demonstrate in 90 minutes, even to non-JavaScript developers. In production, make sure you use the grunt build:release target mentioned above before this stuff reaches your customers' browsers!

Oracle JET uses Knockout for data binding, which is a prominent representative of a Model-View-ViewModel (MVVM) framework.

Notice in the first line how you import Oracle JET modules by means of RequireJS. The defined function then provides a nested function (artistsContentViewModel()) that creates the actual view model. The view model is then passed to the view.

Interesting to see are the ko.observable calls, which are the data bindings. In this demo I make every HTTP call through the jQuery API. There are probably better ways for tables, lists and such; have a look at those scripts, which use the oj.Collection API for the cool table component to achieve the same without the dirty details.

Coming back to my example. The model fields areaSeriesValue and areaGroupsValue go directly into a snippet that I basically copy and pasted from Line with Area Chart component:

Cumulative plays per artist and day

This may not be fancy, but it does several things pretty well: it's accessible, fully translated and easily understandable on the user side, and easy enough to implement on the developer side.

The cookbook

The Cookbook is really valuable. It presents all the components in great detail, the links to the API are up to date, and it even gives you hints on where and when to use a certain type of visualization. With it at hand I created the frontend for DOAG2016 in about three days:

Verdict

If you want to be always bleeding edge, which I can fully understand, don't go near Oracle JET. You won't like it. If you have to suit a project's needs, presenting lots of data, often visualized, and don't have the time or the will to figure out all the ways you can set up a JavaScript application yourself, give Oracle JET a try.

I find the basic architecture well thought through, highly extensible (I did so with several Spring collection APIs and more) and excellently documented.

One of the best things, actually, from a Java developer's perspective, is the fact that it doesn't try to hide JavaScript and HTML code and fragments behind Java code, which in my experience brings in too much abstraction. You can achieve a lot by using, for example, GWT-based stuff, but the moment you have to create your own components, you basically have to learn the internals of the chosen framework, plus JavaScript, HTML and CSS… The latter three are somewhat standards, which is why I like to have them in my toolbelt anyway.

Summary

In five posts I walked you from setting up a skeleton Spring Boot project that provides a database connection, a connection pool and a jOOQ context, to creating an Oracle Database inside a container for development and integration purposes, to advanced analytic queries, and in the end showed one way to provide a UI. After five posts we end with a decent-looking, database-centric but not database-dependent web application.

In all steps I opted for the tools that make my life as a developer easier, facilitate the features I already paid for and help my customers achieve their business goals.

There are several possible deployment scenarios: Run the application on Pivotal CF, get yourself a PostgreSQL database from the marketplace. You just have to push it. Put your frontend on some CDN. Or also on CF with the static build pack.

If you need an Oracle database, maybe Amazon RDS is the right choice for you. It doesn't take much effort to run a Spring Boot application on Amazon EC2.

Do you prefer getting your Oracle Database right from the vendor? I once was in a talk by both Bruno and Josh who deployed a Spring Boot application to the Oracle Cloud.

Regarding your database: I still love coding in Java. Even more so with Java 8. But why not use some of the awesome analytic functions my database, for example PostgreSQL or Oracle, offers? Both support fantastic spatial queries, XML and JSON. Maybe you already have relational database knowledge in your company: use it! Get rid of the bad practice of putting everything into the database, but use it for what it was designed for. Don't reinvent the wheel.

Last but not least: If you have the time and ambition, try out new features (not only in the front end). Implement stuff yourself. It's fine. Learn. But if you have a project goal, maybe a prepackaged structure for a client side web application is enough and something like OracleJET is of value to you as well.

Thank you so much for your attention and reading so far. The demo project and the talk have been in the making for about 6 months. We actually use most of it inside new projects (not the OracleJET part yet) and it was time well invested researching that stack.

An HTTP api for analytic queries
https://info.michael-simons.eu/2016/11/02/an-http-api-for-analytic-queries/
Wed, 02 Nov 2016 15:25:59 +0000

This post has been featured on This Week in Spring – November 8, 2016, and I'm so happy to be referred to as a "friend of the Spring community".

So, you have read through the previous parts of this series (1, 2 and 3) and you now have a Spring Boot project with jOOQ enabled, an Oracle Database with a simple schema containing musical data. What can you do with it?

Content

In the previous post I had a simple select, getting all artists from my library ordered by name. For queries like that I wouldn't bother investigating jOOQ. I have written extensively about my use of Hibernate and summarized my thoughts in this interview. In short: you can get very far with JPA / Hibernate; you can even create portable JPQL queries that support some analytics, as Thorben shows. Thorben also has a nice little tool that helps you decide whether JPA / Hibernate is the right tool for the job; check it out: Should you use JPA for your next project?

Imagine the following use cases based on this schema:

I want a list of top n albums by a list of artists ordered by play count

I want a list of artists and how many times I played a track by them cumulative by day

Given a specific month, I want a top n list of tracks played in this month, ordered by play count and also including the change in position compared to the previous month

Please read through the links at the top of this post to grok each step.

An introductory example

I want a list of top n albums by a list of artists ordered by play count

For the younger generation: An album was a coherent collection of tracks that you often listened from the first to the last track. You could buy those on CDs or LPs, some even owned compact cassettes.

The task looks simple, but if you look closely, I haven't bothered normalizing the tracks table into 2NF with regard to the "album" column. Stuff like this exists! The specific reason the schema above looks like this is that I was just too lazy to normalize it all the way. I would have at least added the artists to the albums as well, but even then the album is non-unique in some cases. Anyway.

Writing it down in SQL is an easy task:

SELECT
tracks.album,
count(*) as cnt
FROM plays
JOIN tracks
ON plays.track_id = tracks.id
WHERE tracks.artist_id IN (54, 86)
GROUP BY
tracks.artist_id,
tracks.album
ORDER BY
cnt DESC,
tracks.album ASC

It shouldn't be a big problem writing that down in JPQL either. But then you would have to use a kind of projection, a custom object to be instantiated and mapped through ResultSet mappings or a constructor call inside the query itself. Not a nice outlook.

How does this look with jOOQ? Having the imports in place, that is, the generated tables and their records, it looks pretty much identical:
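A sketch of how the query above might look with jOOQ's fluent API (illustrative only; the demo's actual generated table classes and column names may differ slightly):

```java
// create is an injected org.jooq.DSLContext; PLAYS and TRACKS are jOOQ-generated tables,
// statically imported along with org.jooq.impl.DSL.count
create.select(TRACKS.ALBUM, count().as("cnt"))
      .from(PLAYS)
      .join(TRACKS).on(PLAYS.TRACK_ID.eq(TRACKS.ID))
      .where(TRACKS.ARTIST_ID.in(54, 86))
      .groupBy(TRACKS.ARTIST_ID, TRACKS.ALBUM)
      .orderBy(count().desc(), TRACKS.ALBUM.asc());
```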

So far nothing has been executed. At this point you can also use #getSQL to just retrieve the generated SQL. #fetch will get you a result of records; any of the overloaded methods allows you to retrieve just the columns you need into a record. Note that no mapping to concrete objects is necessary at this point! You can, however, use generated POJOs or my idea of providing RecordMappers from a Spring context.

The result you get from calling #fetch has some nice additional methods that all start with "format", and we are gonna use one of those to get a useful representation. As this post is called "An HTTP api for analytic queries", I'm not targeting a real REST API speaking of resources, but a representation of analytic queries that is useful for a well-defined set of analytic dashboards and such. Looking from that perspective, one may call those queries themselves resources.

jOOQ offers formatJSON to get an actually useful JSON result. We are not gonna use the method that returns a string and pass that as the response value from our Spring Boot controller, but instead write the content directly to the response, avoiding the unnecessary overhead of a String-to-JSON converter.

But wouldn't that be boring: no limits, no possibility to query stuff? Let's introduce both: a query that allows restricting to certain dates and certain artist IDs as well as limiting the query result:

Getting the parameters into the controller is super easy with Spring Boot. You'll see @PathVariable, which matches an array of IDs labelled "artistIds" inside the request mapping. You can call this URL with a comma-separated list of values. Then you'll find several parameters annotated with @RequestParam. Those are query parameters that even get converted into a usable format, as you can see for the date parameters final Optional<LocalDate> from and to:

You get those into your jOOQ query like this, with quite some cool things to discover: the array of IDs can be passed to an "in()" clause as is, without having to fear SQL injection anywhere. The worst thing that could happen is a type mismatch exception at controller level.

For a correct query on the days, I want to truncate the played-on date to the day. Easy to do with SQL, even easier with jOOQ, as I can reference that expression as a variable PLAYED_ON_TRUNCATED_TO_DAY. And last but not least, you'll see how nicely Java 8 Optionals fit into such an expression: if the date is set, create a greaterOrEqual expression, otherwise just a simple true condition:

Window functions with jOOQ

I want a list of artists and how many times I played a track by them, cumulative by day

The second use case. This is something you could achieve with client-side programming working on the result set of a query very similar to the one above: group the plays by day and artist and keep a running total. Throw in some nice lambdas and you're done.

The whole idea behind window functions is to allow you to process several values of the result set at a time: you see through the window some peer rows and are able to compute a single output value from them, much like when using an aggregate function.

I listened to a lot of Motörhead after Lemmy passed away at the end of 2015. I want to know how many times I heard Motörhead on January 1st, how many times on the 2nd, 3rd and so on, in total: for example 5 times, then an additional 10 times (makes 15) and another 7 (should show as 22). The result should look like this:
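As an aside, the running total the window function computes can be sketched client-side in plain Java (hypothetical per-day play counts, matching the numbers above):

```java
import java.util.Arrays;

public class CumulativePlays {

    // Running total over daily play counts for one artist
    static int[] cumulative(int[] daily) {
        int[] out = new int[daily.length];
        int running = 0;
        for (int i = 0; i < daily.length; i++) {
            running += daily[i];
            out[i] = running;
        }
        return out;
    }

    public static void main(String[] args) {
        // 5 plays on day 1, 10 more on day 2, 7 more on day 3
        System.out.println(Arrays.toString(cumulative(new int[]{5, 10, 7})));
        // prints [5, 15, 22]
    }
}
```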

To make it a bit more interesting, I’ve added the additional option to query multiple artists. So, what do we have? I have added some comments to the query:

@RequestMapping(path = "/{artistIds}/cumulativePlays")
public void getCumulativePlays(
@PathVariable final BigDecimal[] artistIds,
@RequestParam
@DateTimeFormat(iso = ISO.DATE)
final Optional<LocalDate> from,
@RequestParam
@DateTimeFormat(iso = ISO.DATE)
final Optional<LocalDate> to,
final HttpServletResponse response
) throws IOException {
response.setContentType(MediaType.APPLICATION_JSON_UTF8_VALUE);
this.create
// What to select
.select(
// The play date, truncated to the day as explained above
PLAYED_ON_TRUNCATED_TO_DAY,
// The artists
ARTISTS.ARTIST,
// The sum of the play counts in different partitions
sum(count())
.over( // This is where the definition of the window begins
// without further restriction the window moves from
// the first result up and including to the current
partitionBy(ARTISTS.ARTIST) // The window has to move through multiple partitions
// here, the artist as I want each cumulated playcount separately
.orderBy(PLAYED_ON_TRUNCATED_TO_DAY) // The window entries are ordered by playdate
).as("cumulativePlays")
)
.from(PLAYS)
.join(TRACKS).onKey()
.join(ARTISTS).onKey()
.where(ARTISTS.ID.in(artistIds))
.and(from.map(Date::valueOf)
.map(PLAYED_ON_TRUNCATED_TO_DAY::greaterOrEqual)
.orElseGet(DSL::trueCondition)
)
.and(to.map(Date::valueOf)
.map(PLAYED_ON_TRUNCATED_TO_DAY::lessOrEqual)
.orElseGet(DSL::trueCondition)
)
.groupBy(PLAYED_ON_TRUNCATED_TO_DAY, ARTISTS.ARTIST)
.orderBy(PLAYED_ON_TRUNCATED_TO_DAY, ARTISTS.ARTIST)
.fetch()
.formatJSON(response.getOutputStream());
}

The important thing here is that jOOQ not only makes it particularly easy to use those standard SQL methods, it also helps me get them right. I could hide this beautiful query inside an annotation, passing it to the JPA provider of my choice, but then I would not only have no compile-time checking, I would also have to deal with the result set on my own.

If you query the URL above, this is the query which gets generated and executed by jOOQ as formatted by jOOQ:

Given a specific month, I want a top n list of tracks played in this month ordered by play count

Do you know those chart lists that present the top 10 or top 40 musical tracks of the last months? With changes relative to the previous month (how many positions a track won or lost)? I have such a chart in my daily photo project that looks like this:

This is already the UI I'm gonna present in the next article, though. Back then, I had written an atrocious MySQL statement which I nearly sent to #CTMMC some months ago. That beast is huge, slow and complicated.

What does it do? There are two CTEs, previous and current month, both including a window function over their "position", where position is defined as the count of plays in the given period of time. The dense_rank() function works much like row_number(), but "equal" rows (same count in this case) are given the same number, with dense_rank having no gaps. Exactly what we need to compute the chart. In the actual query I can join those CTEs like normal tables and compute the change very easily.
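The structure can be sketched in plain SQL roughly like this (column names and the hard-coded date ranges are assumptions; the real query lives in ChartReportController):

```sql
WITH previous_month AS (
  SELECT t.name, dense_rank() OVER (ORDER BY count(*) DESC) AS position
  FROM plays p JOIN tracks t ON p.track_id = t.id
  WHERE p.played_on >= DATE '2016-09-01' AND p.played_on < DATE '2016-10-01'
  GROUP BY t.name
), current_month AS (
  SELECT t.name, dense_rank() OVER (ORDER BY count(*) DESC) AS position
  FROM plays p JOIN tracks t ON p.track_id = t.id
  WHERE p.played_on >= DATE '2016-10-01' AND p.played_on < DATE '2016-11-01'
  GROUP BY t.name
)
SELECT c.name,
       c.position,
       p.position - c.position AS change -- positive when the track climbed
FROM current_month c
LEFT JOIN previous_month p ON p.name = c.name
ORDER BY c.position;
```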

If I use jOOQ to break this down into chunks, it probably looks more readable to non-SQL developers, see ChartReportController:

Those CTEs can be "saved" onto variables and subsequently be used in a pretty much ordinary query.

Summary and use cases

The above query is not one I write for an everyday use case, and maybe it shouldn't be written at all. In real life though, I often have to deal with complex data models and data silos that need to be queried in many different ways. Those analyses are really useful and helpful to a lot of people.

With the combination of Spring Boot and jOOQ I can provide those analyses in no time. Having those queries available as an HTTP API without depending on some HTTP tool inside the database provides much value. I can map simple and complex queries alike and get great logging for free.

Using Spring's awesome MVC support, I can pass URL query parameters directly to the jOOQ builder without risking injection scenarios.

But: I strongly believe that JPA / Hibernate is not evil at all. Quite the contrary. Combine it with Spring Data JPA, implement the repository pattern, and you'll get easy-to-test modules that take care of a lot of boilerplate for you.

If I had write access of some kind, aka OLTP, in my scenario here, I'd rather use JPA for it than fill in those inserts with jOOQ myself, for example.

Gavin King, creator of Hibernate, said on Reddit: "Just because you're using Hibernate, doesn't mean you have to use it for everything," and I fully agree with him. After working on a jOOQ-based project for 13 months now, this is what an ideal workflow for a database-centric but not database-dependent application looks like to me:

Always use automatic database migrations, for example Flyway or Liquibase

Get your domain model as "right" as possible, normalize your database and use JPA / Hibernate in combination with Spring Data JPA.

Use JPQL queries if necessary but don't hide native queries somewhere in annotations

For complex queries, projections or use cases like merge statements, use jOOQ. Either directly, or pass the generated SQL to JPA or, if you like it, to Spring's JdbcTemplate. You can write your statements as strings, but then the burden of using prepared statements and the like correctly is on you.

But the most important point to make: SQL, JPA and jOOQ are expert tools. You have to know them to get the most out of them. Read about SQL and persistence in common, read the reference docs. I personally recommend the following sites and their authors:

If you enjoyed this post, you may follow me on Twitter, leave some feedback or even star the demo. Next week I'm gonna publish the last entry in this series, where I add a UI in which those analytic queries really shine.

Modify memory settings of the Oracle Aurora JVM
https://info.michael-simons.eu/2014/02/07/modify-memory-settings-of-the-oracle-aurora-jvm/
Fri, 07 Feb 2014 08:57:04 +0000

Use the following class to view and modify the settings of the embedded JVM inside an Oracle database. The class has been tested on 10g and 11g; memory settings and information are in megabytes.

How to retrieve tables of custom object types with JDBC
https://info.michael-simons.eu/2012/07/24/how-to-retrieve-tables-of-custom-object-types-with-jdbc/
Tue, 24 Jul 2012 12:42:53 +0000

The following is not only a personal reminder for me but also a first try to answer some questions on Stack Overflow…

Sadly, we have a legacy application that makes extensive use of such types… We had to rewrite most of the procedures.

Oracle, JDBC, CallableStatements and named parameters
https://info.michael-simons.eu/2012/07/23/oracle-jbdc-callablestatements-and-named-parameters/
Mon, 23 Jul 2012 12:30:43 +0000

Update: Please take note of comment #5 by DSurber. He works for Oracle and sheds some light on named parameters. In short: the names of the parameters do not refer to named placeholders in the statement (like in JPA, for example) but to PL/SQL named parameters! Thank you very much.

Well, easy: I had fun with the Oracle JDBC driver and named parameters. I modified one call, added a parameter and set this parameter out of order… No problem, I thought, as I was using a callable statement with named parameters. As it turns out, named parameters (":blah") are not so named after all; the order of setting them does matter, see the following tests:
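What the two tests boil down to, sketched with a hypothetical procedure and parameter names (only the order of the two setString calls differs between the passing and the failing case):

```java
// connection is an open java.sql.Connection to an Oracle database;
// some_procedure(p1 IN VARCHAR2, p2 IN VARCHAR2) inserts p1 into column1, p2 into column2
try (CallableStatement cs = connection.prepareCall("{call some_procedure(:p1, :p2)}")) {
    cs.setString("p1", "value1"); // set in declaration order: works as expected
    cs.setString("p2", "value2");
    cs.execute();
}

try (CallableStatement cs = connection.prepareCall("{call some_procedure(:p1, :p2)}")) {
    cs.setString("p2", "value2"); // set out of order: the values end up in the wrong columns
    cs.setString("p1", "value1");
    cs.execute();
}
```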

In this test I actually used the same parameter names as in the stored procedure, and you can clearly see that p1 should be inserted into column1 and p2 into column2.

#testCallableStatementParameterInOrder succeeds, but #testCallableStatementParameterNoOrder fails. The only difference is that I set p2 before p1… To me, this is a severe bug, as named parameters imply a very different behavior.

I strongly advise my colleagues to be very careful about using plain named parameters. The order in which they are set is important!

I could reproduce this behavior with OJDBC 11.1.0.7 and 11.2.0.3 against Oracle 10g and 11g. The example does not work at all with OJDBC 10.2.x.

Optimizing the Oracle Query Optimizer
https://info.michael-simons.eu/2010/10/20/optimizing-the-oracle-query-optimizer/
Wed, 20 Oct 2010 10:38:10 +0000

The Oracle Query Optimizer does a great job of rewriting queries. For example, a nested subquery correlated on S.cust_id = C.cust_id will be rewritten into a semi-join of the form SELECT C.cust_last_name, C.country_id FROM customers C, sales […].
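Reconstructed as a sketch, the example looks roughly like this (Oracle's SH sample schema; the excerpt only preserves the threshold 1000, so the predicate column is an assumption):

```sql
-- Correlated subquery form
SELECT C.cust_last_name, C.country_id
FROM customers C
WHERE EXISTS (SELECT 1
              FROM sales S
              WHERE S.amount_sold > 1000
                AND S.cust_id = C.cust_id);

-- Optimizer rewrite: the same query executed as a semi-join,
-- which stops probing sales after the first match per customer
SELECT C.cust_last_name, C.country_id
FROM customers C, sales S
WHERE S.amount_sold > 1000
  AND S.cust_id = C.cust_id;
```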

In the first form, the subquery will be invoked for each entry in customers, replacing the value of the correlated column C.cust_id. Thus the query cannot be parallelized, among other implications. Most of the time, if the columns are correctly indexed, the semi-join will be much faster.

This technique is called query merging. Here the join should also be faster in most cases.

Both techniques are good and will improve many queries.

But consider a scenario where the subqueries query remote tables (via database link, for example) or call PL/SQL or even Java stored procedures, and the main table (customers in the examples) contains many more rows than the joined table, or the where clause is less selective than the where clause of the subquery: the subquery will be executed as many times as there are rows in the main table. This can lead to very poor performance.