My last blog posting in 2011 will not contain technical stuff. The last few days of a
year are a good chance to relax a bit - so I'd just like to post some screenshots
I took while installing the Oracle 4.1.4 files I found a couple of weeks ago.
No XML, no PL/SQL, no Oracle TEXT, no APEX ... but EMP ... and DEPT.

This morning I got a SQL question - nothing difficult - but I think this kind of requirement
is quite frequent, so I decided to publish the answer for everyone. The question
is based on table data (as always) ...

The right-hand column cost_share should contain the row's share of the total cost, expressed in percent. So
the cost column needs to be divided by SUM(COST) over all rows. I'm lazy: so instead of creating
this table I decided to solve the problem for the SAL column of the well-known EMP table.
Let's start with "classic" SQL.

This query should work on almost every database system. First we compute the total
salary amount in an inline view using the WITH clause. This inline view
is then joined in the main query. It returns only one row, so we don't need any
join criteria, and we can access the total salary amount just like a table column. And
this can be extended - if we'd like to see the salary share expressed as "percentage of the
department total",
we add a GROUP BY to the inline view and extend the join in the main query as follows ...
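The two queries described above might look as follows (a sketch against the standard EMP demo table; the cost_share example translates to SAL accordingly):

```sql
-- Share of the total salary, "classic" SQL: the WITH clause computes
-- the total once; the single-row result is joined without criteria.
with t as (
  select sum(sal) as total_sal from emp
)
select e.ename, e.sal,
       round(e.sal / t.total_sal * 100, 2) as sal_share
from   emp e, t;

-- Variant: share of the department total - the inline view gets
-- a GROUP BY, and the main query joins on DEPTNO.
with t as (
  select deptno, sum(sal) as total_sal
  from   emp
  group  by deptno
)
select e.ename, e.deptno, e.sal,
       round(e.sal / t.total_sal * 100, 2) as dept_share
from   emp e, t
where  e.deptno = t.deptno;
```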

This query uses SUM without GROUP BY, because it's the analytic SUM function.
The query window,
which defines the rows to be aggregated, is specified in the OVER() clause. The aggregate
is then calculated for each row of the query result set. An empty OVER() clause simply
means "over all rows". But we can also do the calculation at the department level.
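The analytic version needs no extra join at all - one pass over EMP is enough:

```sql
-- Analytic SUM: the empty OVER() aggregates over all rows,
-- and the aggregate is available in every result row.
select ename, sal,
       round(sal / sum(sal) over () * 100, 2) as sal_share
from   emp;
```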

Now we have the keywords PARTITION BY inside the OVER() clause, so the aggregate
is computed per department. Don't confuse this PARTITION BY clause
with table partitioning: unlike the latter, PARTITION BY
within an analytic function has nothing to do with table storage and does not
require an additional license. But we can solve the original problem even more
elegantly. Why? Because there is a special function for exactly this purpose: RATIO_TO_REPORT.
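Side by side, the per-department variants could look like this:

```sql
-- PARTITION BY computes the aggregate per department ...
select ename, deptno, sal,
       round(sal / sum(sal) over (partition by deptno) * 100, 2) as dept_share
from   emp;

-- ... and RATIO_TO_REPORT even does the division for us.
select ename, deptno, sal,
       round(ratio_to_report(sal) over (partition by deptno) * 100, 2) as dept_share
from   emp;
```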

Analytic functions are absolutely worth the learning effort. Beyond the problem described here,
analytic functions provide an easy query syntax for things like moving averages, rankings or
other aggregates with flexible query windows. Solving this with "classic" SQL is possible (of course),
but it most often gets cumbersome and difficult to read. You'll find more information in the
documentation.

UPDATE: Just yesterday I installed this package on my own "production" instance - and when I wanted to log into APEX today I encountered my own message Cannot DROP/TRUNCATE table "WWV_FLOW_USER_ACCESS_LOG2$". So APEX performs some regular DROP / TRUNCATE operations which must not be trapped by the trigger. I therefore extended the trigger code a bit (further extensions might be necessary in the future) - just have a look at the new trigger code. If you also encounter DROP or TRUNCATE operations which need to be generally allowed in the trigger, feel free to post me a message or a comment.

Recently I again came across the database event triggers in PL/SQL - using these, you can place a trigger not only on table or view DML, but
also on DDL commands or database events like STARTUP, SHUTDOWN or LOGON. A first thought would be prohibiting
the DROP operation on a production instance - the trigger code for this is rather simple ...
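A minimal sketch of such a trigger (the trigger name and error text are my choice; ora_dict_obj_* are the documented event attribute functions):

```sql
-- DDL trigger: reject every DROP on this instance.
create or replace trigger prevent_drop
before drop on database
begin
  raise_application_error(
    -20000,
    'DROP is not allowed on this instance: ' ||
    ora_dict_obj_type || ' ' || ora_dict_obj_owner || '.' || ora_dict_obj_name
  );
end;
/
```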

Simple and powerful. On this instance no object can be dropped any more (except the trigger itself, of course).
But this can also lead to a lot of problems: even on production systems, sometimes something
must be dropped. Then the trigger needs to be disabled before and re-enabled after the operation. There
might also be objects which are frequently dropped and re-created, and others which are not. And what about
the TRUNCATE operation? It cannot be undone with FLASHBACK TABLE TO BEFORE DROP - so accidental use
of TRUNCATE is even more dangerous than the DROP operation ...

So my next thought was to try a slightly more sophisticated approach - I'd like to see the "Are you sure?" feature
which we all know from Windows in the database as well. And although we cannot intercept the DROP operation to ask
the "Are you sure?" question, we can implement a similar approach: the idea is to declare the object to be
dropped beforehand. So we have a package which holds the declared objects and a trigger which looks into
the package and raises an error if the object has not been declared. First we create the package.
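The original package source isn't reproduced here; a minimal sketch of how such a PREDROP interface could look (procedure and function names and signatures are my assumption):

```sql
create or replace package predrop as
  -- declare the next object allowed to be dropped;
  -- SQL NULL in any parameter matches everything
  procedure next_drop(
    p_name  in varchar2,
    p_type  in varchar2 default 'TABLE',
    p_owner in varchar2 default user
  );
  -- like NEXT_DROP, but the declaration stays valid
  -- until the end of the session
  procedure sess(
    p_name  in varchar2,
    p_type  in varchar2 default 'TABLE',
    p_owner in varchar2 default user
  );
  -- called by the trigger: does this DROP match a declaration?
  function is_declared(
    p_name  in varchar2,
    p_type  in varchar2,
    p_owner in varchar2
  ) return boolean;
end predrop;
/
```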

Don't forget to grant EXECUTE privileges and to create a public synonym in order to make the
package accessible to everyone. Creating the trigger is the next step. As said before, it simply
checks whether the object to be dropped (ora_dict_obj_name, ora_dict_obj_type, ora_dict_obj_owner)
has been declared with the PREDROP package. If not, the trigger raises an error; otherwise it does
nothing and the operation succeeds.
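The trigger could then be sketched like this (assuming a PREDROP.IS_DECLARED check function; the exact package interface may differ):

```sql
-- BEFORE DROP trigger: allow the DROP only if the object
-- was declared via the PREDROP package beforehand.
create or replace trigger check_predrop
before drop on database
begin
  if not predrop.is_declared(
           ora_dict_obj_name, ora_dict_obj_type, ora_dict_obj_owner)
  then
    raise_application_error(
      -20001,
      'Object not declared for DROP - call PREDROP first.');
  end if;
end;
/
```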

This implementation of the PREDROP package already provides some flexibility. Setting
the object name, type or owner to SQL NULL matches everything. Normally a declaration
is valid until it has been matched by a DROP operation - after calling PREDROP.SESS it will be
valid until the end of the session - this is useful when running SQL scripts.
Using regular expressions or LIKE syntax would make the package even more flexible ... but
this is a story for another blog posting ...

In the past I had quite a number of postings in which I made use of the Java engine within the Oracle
Database. Examples are the previous posting
about automated tweets from PL/SQL (which was about using "twitter4j" in the database) or the
postings about
File- and operating system access.
Today I'd like to elaborate a bit about a fundamental thing which one has to accomplish when
using Java in the database: the parameter mapping. This posting is about how to map input and
return parameters of a Java method to SQL and PL/SQL types in a package. And the focus will not
be on the simple mapping of VARCHAR2, NUMBER or DATE datatypes ... these are rather simple,
as we can see with this code example ...

The "experts" might have noticed that all Java methods are "static" - that's no accident. PL/SQL
is a procedural, not an object-oriented language - so when we want to integrate PL/SQL with
Java in the database, we need to use the "procedural part" of Java, which is static class methods.
If you want to map Java methods to procedures and functions in a PL/SQL package, you must use
static methods for that.

When mapping DATE values from Java to PL/SQL, a bit of attention is needed. Basically,
a date can be represented in Java using java.sql.Date or java.sql.Timestamp. When
those are mapped to DATE in SQL or PL/SQL, the java.sql.Date class only
maps day, month and year - the time in the DATE is set to midnight. To keep the time
component as well, you need to use java.sql.Timestamp. Apart from this, the mapping
of simple scalar datatypes between Java and SQL or PL/SQL is quite simple. So the first rule
is to use simple types whenever possible. Complex objects should only be used when they're
really needed ... because, as we will see, they require additional coding ...
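The truncation itself happens in the JDBC mapping layer, but the difference between the two classes is visible even in plain Java: java.sql.Date carries (and renders) only the date portion, while java.sql.Timestamp keeps the time of day. A small self-contained illustration:

```java
import java.sql.Date;
import java.sql.Timestamp;

public class DateVsTimestamp {
    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // java.sql.Date formats as yyyy-mm-dd only ...
        System.out.println("java.sql.Date:      " + new Date(now));
        // ... java.sql.Timestamp keeps hours, minutes, seconds (and nanos)
        System.out.println("java.sql.Timestamp: " + new Timestamp(now));
    }
}
```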

And now we'll handle those: let's assume we have a Java library doing some work - for
this example we use the java.io.File class (which is part of standard Java). We'd like to access file
attributes with SQL functions, and we don't want a separate call for each
attribute. We need a SQL function which collects all attributes in one call - so we need
a datatype containing all these attributes on the SQL side. We have object types (or user-defined types)
for that purpose, so the first thing we want to do is create an object type representing
a file.
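Such a FILE_T type could look as follows (the eight attribute names and datatypes are my assumption; the original definition isn't reproduced here - the flags are modeled as NUMBER(1), since SQL has no BOOLEAN column type):

```sql
create type file_t as object (
  file_name     varchar2(4000),
  file_path     varchar2(4000),
  file_size     number,
  last_modified date,
  is_directory  number(1),
  is_hidden     number(1),
  is_readable   number(1),
  is_writable   number(1)
)
/
```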

So far we have a Java representation of a file - java.io.File - and a SQL representation
of a file - our new FILE_T. But there is absolutely no connection between the two. Of course, we
cannot map the java.io.File class to our FILE_T type directly - the database has no clue how to map
the attributes. We need to build a "bridge"
between java.io.File and FILE_T. And this bridge is a special Java class: oracle.sql.STRUCT.
So we now need to implement a static Java method (we can only use static methods) which
uses java.io.File to collect the file attributes and builds an oracle.sql.STRUCT instance which can
be mapped to FILE_T. The code goes here ...
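The method could be sketched like this (class and method names are placeholders; the FILE_T attribute order - name, path, size, modification date and four flags - is my assumption):

```java
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Date;
import oracle.sql.STRUCT;
import oracle.sql.StructDescriptor;

public class FileInfo {
    public static STRUCT getFile(String path) throws Exception {
        // the "internal" JDBC connection: a pointer to the current session
        Connection con = DriverManager.getConnection("jdbc:default:connection:");
        File f = new File(path);
        // descriptor matching the SQL object type FILE_T
        StructDescriptor sd = StructDescriptor.createDescriptor("FILE_T", con);
        // attribute order must match the type definition exactly
        Object[] attrs = new Object[] {
            f.getName(),
            f.getPath(),
            f.length(),
            new Date(f.lastModified()),
            f.isDirectory() ? 1 : 0,
            f.isHidden()    ? 1 : 0,
            f.canRead()     ? 1 : 0,
            f.canWrite()    ? 1 : 0
        };
        return new STRUCT(sd, con, attrs);
    }
}
```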

The Java engine within the database needs to interact with the SQL layer - and we need
a JDBC database connection for that. This connection is no more than a kind of pointer
to the database session in which the Java code runs. The
StructDescriptor class is a utility which helps us create a STRUCT object
exactly matching FILE_T. Note the usage of the connection object and "FILE_T" when the
StructDescriptor instance is created. After that we collect all relevant file
attributes in an array of the fundamental Java class Object (Object[]). The developer needs to take
care of the order within that array: our object type has eight simple, scalar attributes. The
Java types in the Object array (String, int, java.sql.Date) must match the attributes of the object type (VARCHAR2, NUMBER, DATE).
With this array, the StructDescriptor instance and the database connection, the STRUCT instance
is created and returned in the last step of the program. This STRUCT instance exactly
matches the FILE_T definition in the SQL layer.

The final step is (as always) the PL/SQL wrapper for the static method. We create a
function which takes a VARCHAR2 (containing the file path) and returns a FILE_T. These
are mapped to java.lang.String and oracle.sql.STRUCT. All object types are
encoded as oracle.sql.STRUCT - the StructDescriptor object takes care of the mapping to
the correct SQL type.
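The call specification could look like this (the Java class and method names, FileInfo.getFile, are placeholders):

```sql
-- PL/SQL wrapper: VARCHAR2 maps to java.lang.String,
-- FILE_T maps to oracle.sql.STRUCT.
create or replace function file_info (p_path in varchar2)
return file_t
as language java
name 'FileInfo.getFile(java.lang.String) return oracle.sql.STRUCT';
/
```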

Ah, yes: you need Java privileges in order to access the file system from a
Java stored procedure. During development you might grant yourself the JAVASYSPRIV
role - you will then be a "Java superuser" - but on production systems this is not recommended;
grant individual privileges there instead. The practical bit is that Oracle not only throws
the error message - it also gives the complete PL/SQL call to grant the required permission. So once you have the
privilege, try again ...
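An individual grant could look like this (to be executed with DBA privileges; the schema name and the file pattern are examples - the exact call is contained in Oracle's error message):

```sql
begin
  dbms_java.grant_permission(
    'SCOTT',
    'SYS:java.io.FilePermission',
    '<<ALL FILES>>',
    'read'
  );
end;
/
```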

So far, so good. We managed to create a structured object in Java and pass it to
PL/SQL. You can use this approach for any kind of structured object - and yes: you
can nest one oracle.sql.STRUCT within another oracle.sql.STRUCT - just as you
can nest object types in each other. The basic steps are:

Create the SQL object types

Create the static Java method which builds the STRUCT instance matching the object type,
using the Object[] array and the StructDescriptor class.

Return the STRUCT instance from Java to SQL and PL/SQL

Create the PL/SQL wrapper for your Java stored procedure

Then we'll move on to the next (and more interesting) level: we want to pass
not just a single structured object but a list of structured objects from Java to PL/SQL.
In the PL/SQL world we have table functions for that purpose - and to match these,
we now first create another object type representing our list of files ...

create type file_ct as table of file_t
/

And again: we now need to create an object within Java which maps to this FILE_CT type.
For VARRAY and nested table types there is another "bridge" class: oracle.sql.ARRAY. It's used the
same way as oracle.sql.STRUCT. We first create an ArrayDescriptor instance
using the JDBC connection and the type name. This helper object, the JDBC connection
and a plain Java array are then used to create the actual instance of oracle.sql.ARRAY.
And the code goes here ...
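A sketch of the listing method (again, class and method names and the FILE_T attribute order are assumptions):

```java
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import oracle.sql.ARRAY;
import oracle.sql.ArrayDescriptor;
import oracle.sql.STRUCT;
import oracle.sql.StructDescriptor;

public class FileInfo {
    public static ARRAY listFiles(String path) throws Exception {
        Connection con = DriverManager.getConnection("jdbc:default:connection:");
        StructDescriptor sd = StructDescriptor.createDescriptor("FILE_T", con);
        // one STRUCT per file, collected in a plain Java array
        File[] files = new File(path).listFiles();
        STRUCT[] rows = new STRUCT[files.length];
        for (int i = 0; i < files.length; i++) {
            File f = files[i];
            rows[i] = new STRUCT(sd, con, new Object[] {
                f.getName(), f.getPath(), f.length(),
                new java.sql.Date(f.lastModified()),
                f.isDirectory() ? 1 : 0, f.isHidden() ? 1 : 0,
                f.canRead() ? 1 : 0, f.canWrite() ? 1 : 0
            });
        }
        // descriptor plus Java array make the oracle.sql.ARRAY
        ArrayDescriptor ad = ArrayDescriptor.createDescriptor("FILE_CT", con);
        return new ARRAY(ad, con, rows);
    }
}
```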

As you can see, this code is basically just a small extension of the code above. We loop through
the File[] array returned by java.io.File.listFiles(), create an oracle.sql.STRUCT instance for each
java.io.File object and place it into a Java array (STRUCT[]). This Java array is then used
to create the oracle.sql.ARRAY instance, which is finally passed back to PL/SQL. On the
PL/SQL side we create the wrapper function, and that's it. Try ...
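The wrapper and a test query could look like this (function, class and method names assumed; the directory path is an example):

```sql
create or replace function dir_list (p_path in varchar2)
return file_ct
as language java
name 'FileInfo.listFiles(java.lang.String) return oracle.sql.ARRAY';
/

-- use the result like a table
select * from table(dir_list('/tmp'));
```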

It's quite simple, isn't it? Using this approach you can pass any data structure between
Java and PL/SQL - you can use any Java library you want. The only prerequisite is that
your Java library does not depend on native code - it must be pure Java.
First you might design a simple interface consisting of static Java
methods which can be easily mapped to PL/SQL calls. Use simple, scalar types whenever
possible and use object types and arrays only when you really need them. Then you
need to implement the static java methods (using STRUCT and ARRAY, if necessary),
create the SQL object types and PL/SQL wrapper
packages and you are done. The code packages for
filesystem access,
for accessing POP3 or IMAP4 mail servers with PL/SQL, or for packing and unpacking ZIP archives all work that way. And as said: you can
integrate any Java library doing interesting stuff into the Oracle database. There are
no limits.

My teammates and I are using the Twitter account
@oraclebudb to post interesting
news about the Oracle database (events in German, releases and more) for a German
audience. So if you are a German reader and interested in regular updates about the
Oracle database, follow @oraclebudb.
English readers might follow @oracledatabase.

But sometimes one wants to schedule a tweet - perhaps for tomorrow at 10:00am. And if one
wants to use Twitter for regular updates about something (think of floodwater levels of
a river), some kind of automation or scheduling is needed - manual tweets are not feasible.
So the question is: how to implement this? I'd like to do it with APEX, the Oracle database
and PL/SQL. So I asked Mr. Google how to do this and got some PL/SQL fragments which I could
not get to work on my system. But I saw that the "OAuth" mechanism which Twitter uses
for authentication is nothing I could implement in five minutes in PL/SQL.

Java developers have the very popular twitter4j
library - and since the Oracle database can run Java stored procedures, I decided not to
reinvent the wheel and to use what is already there (XE users: sorry). The following
posting describes how to post automatic tweets from the Oracle database using the
Twitter4j library and some Java and PL/SQL coding with your "own" Twitter account; the
library also allows authorization for other Twitter accounts - but this is not
described here.

Step 1: Register your application on "dev.twitter.com":

Log into dev.twitter.com
with your Twitter account. In the menu in the upper right corner (where you can see your
Twitter username) navigate to My Applications and click on
Create a new Application. Then fill in the form. You need a name, a description
and a website URL (which doesn't need to work for now). The callback URL is not needed - after
that, solve the CAPTCHA and your application is registered.

Register your application

You now see the key pair (Consumer Key and Consumer Secret) for your
application. We will need these later on.

The application has been registered.

Now navigate to the Settings tab and adjust the Privileges of your
Application. It will need
Read and Write privileges, since a tweet is a "Write".

Grant "Read and Write" privileges to the application

Then go back to the Details tab and scroll down to the button named
Create My Access Token - this will generate the keys for your account
(Access Token and Access Token Secret). Our application has to
create a signature using the two application keys as well as the two account keys - but
the Twitter4j library will do the necessary work.

Generated "Access Token" and "Access Token Secret"

Now you're done with registering the application. We can now start the implementation
in the Oracle database.

Step 2: Load the "twitter4j" library into the database.

Download twitter4j from twitter4j.org.
Twitter4j is open source software under the Apache License, so you are free to use it
for commercial or non-commercial purposes without a fee. Unpack the ZIP archive into
a folder of your choice - it's best to place it directly on the database server.

Contents of the downloaded ZIP archive

The Java libraries are located in the subfolder lib. For simple status updates (tweets)
the archive twitter4j-core-2.2.4.jar is sufficient. Load it into the Oracle database as follows.
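A loadjava call along these lines should do it (user, password and connect string are examples and need to be adjusted; -resolve compiles and links the classes inside the database):

```shell
loadjava -user scott/tiger@localhost:1521:orcl -resolve -verbose twitter4j-core-2.2.4.jar
```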

It would be a lot of work to map all the API methods to PL/SQL procedures and functions. There
are many methods, and most of them have Java objects in their signatures. It's better to implement
the needed operations as Java methods with simple signatures and finally map those Java methods
to PL/SQL. So the following code creates a simple Java method "updateStatus" which can be
mapped to PL/SQL very easily. This Java code was created based on the twitter4j examples, which
are also part of the downloaded archive. Note the placeholders for the key pairs from dev.twitter.com
in the call to setOAuthAccessToken, which need to be replaced. The calls to System.setProperty
in the previous lines also contain placeholders for the proxy server and port, which also need to be adjusted to
your environment.
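The method could be sketched as follows (a sketch based on the twitter4j 2.2.x API; the class name and all bracketed values are placeholders to be replaced):

```java
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterFactory;
import twitter4j.auth.AccessToken;

public class TweetDB {
    public static void updateStatus(String tweet) throws Exception {
        // proxy settings - adjust or remove for your environment
        System.setProperty("http.proxyHost", "[proxy host]");
        System.setProperty("http.proxyPort", "[proxy port]");
        Twitter twitter = new TwitterFactory().getInstance();
        // the application key pair from dev.twitter.com ...
        twitter.setOAuthConsumer("[consumer key]", "[consumer secret]");
        // ... and the account key pair ("Create My Access Token")
        twitter.setOAuthAccessToken(
            new AccessToken("[access token]", "[access token secret]"));
        Status status = twitter.updateStatus(tweet);
        System.out.println("Posted: " + status.getText());
    }
}
```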

Posting a tweet is a network request. A "normal" database user does not have privileges
to execute network operations from Java. To get the above Java class actually working, the
database schema needs Java privileges to set the proxy server (if needed) and to perform
the network operation. The following SQL script, which has to be executed with DBA privileges,
does the job.
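Such a script could look like this (schema name, host pattern and property names are examples; if no proxy is needed, the PropertyPermission grants can be omitted):

```sql
begin
  -- allow network access to Twitter
  dbms_java.grant_permission(
    'SCOTT', 'SYS:java.net.SocketPermission',
    '*.twitter.com', 'connect,resolve');
  -- allow setting the proxy system properties
  dbms_java.grant_permission(
    'SCOTT', 'SYS:java.util.PropertyPermission',
    'http.proxyHost', 'write');
  dbms_java.grant_permission(
    'SCOTT', 'SYS:java.util.PropertyPermission',
    'http.proxyPort', 'write');
end;
/
```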

Point your browser at your Twitter page - you should see that posting - done by SQL in
the Oracle database. From this point on we can use this procedure like any other.
Creating a DBMS_SCHEDULER job is as easy as creating an APEX application from which
tweets are posted. A simple example is an APEX application with a form on a table
where the user can enter a tweet and a timestamp at which to post it. A DBMS_SCHEDULER
job then runs at regular intervals and posts all tweets due. The twitter4j API allows many
more operations ... we could query the home timeline and pass it back to
SQL - but this is another story ... for another blog posting ...
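As a sketch, the PL/SQL wrapper and a one-off scheduler job for "tomorrow at 10:00am" could look like this (procedure, job and class names are assumptions):

```sql
create or replace procedure tweet (p_text in varchar2)
as language java
name 'TweetDB.updateStatus(java.lang.String)';
/

-- schedule a single tweet for tomorrow, 10:00am
begin
  dbms_scheduler.create_job(
    job_name   => 'TWEET_TOMORROW',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'begin tweet(''Good morning, Twitter!''); end;',
    start_date => trunc(sysdate + 1) + 10/24,
    enabled    => true
  );
end;
/
```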

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle Corporation or Oracle Germany. The views and opinions expressed by visitors to this blog are theirs and do not necessarily reflect mine.