In this article I’ll talk about the identity column, one of the new features introduced with Oracle DB 12c R1. I hope you find it useful.

One of our long-standing complaints has been that we could not make a primary key increment automatically as a built-in feature while creating a table. We worked around this either by manually supplying values from a sequence during insert, or by defining a trigger on the table that populated the PK column from a sequence automatically. Oracle 12c R1 introduces the infrastructure to make this operation automatic. We can now create numeric columns that increment automatically.
The identity column supports three different generation modes behind the scenes. We can pick and use whichever suits us best.

These modes are:

1- GENERATED ALWAYS AS IDENTITY

2- GENERATED BY DEFAULT AS IDENTITY

3- GENERATED BY DEFAULT ON NULL AS IDENTITY

Before looking at the details of these modes, note that there are certain restrictions on identity columns. We need to keep the following in mind when creating them:

A table can have only one identity column.

The identity column must be defined with a numeric data type.

We can’t define a default value for an identity column.

If we copy a table with CREATE TABLE AS SELECT and the source table contains an identity column, the identity property of that column is not carried over to the new table.

The moment we define an identity column, a NOT NULL constraint is implicitly defined on it.

Now, let’s examine each of the identity column modes with examples.

1- GENERATED ALWAYS AS IDENTITY: With this mode, we cannot supply a value for the identity column ourselves. Let’s see it with an example.
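A minimal sketch of such a test (the table and column names here are illustrative, not from the original post):

```sql
-- Identity column whose value can never be supplied manually.
CREATE TABLE emp_always (
  id   NUMBER GENERATED ALWAYS AS IDENTITY,
  name VARCHAR2(50)
);

-- Fails: an explicit value is not allowed for a GENERATED ALWAYS column
-- (Oracle raises ORA-32795).
INSERT INTO emp_always (id, name) VALUES (1, 'Emrah');

-- Succeeds: the identity column is populated automatically.
INSERT INTO emp_always (name) VALUES ('Mete');
```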

As we can see from the run logs, the first insert failed because we supplied an explicit value for the identity column. The second statement, however, ran successfully: since there was no such interference, the identity column started incrementing automatically.

2- GENERATED BY DEFAULT AS IDENTITY: With this mode, we can supply an explicit value for the identity column, but we still can’t insert NULL. Let’s look at this one too.
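A sketch of the scenario, again with illustrative names:

```sql
CREATE TABLE emp_default (
  id   NUMBER GENERATED BY DEFAULT AS IDENTITY,
  name VARCHAR2(50)
);

-- Succeeds: the identity column generates the value automatically.
INSERT INTO emp_default (name) VALUES ('Emrah');

-- Also succeeds: an explicit value overrides the sequence,
-- here duplicating the auto-generated value of the first row.
INSERT INTO emp_default (id, name) VALUES (1, 'Mete');

-- Fails: NULL is still not allowed for the identity column.
INSERT INTO emp_default (id, name) VALUES (NULL, 'Test');
```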

As we can see, the second insert succeeded. However, because the value we supplied manually was the same as the first value the identity column generated automatically, the two records ended up with the same value. If the identity column were the primary key, the second statement would fail with a PK constraint violation.

3- GENERATED BY DEFAULT ON NULL AS IDENTITY: If we create our identity column with this option, the column value keeps incrementing automatically without any errors, even when we insert NULL.
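A sketch of this mode (names illustrative):

```sql
CREATE TABLE emp_on_null (
  id   NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
  name VARCHAR2(50)
);

-- Both succeed: an explicit NULL is replaced by the next sequence value.
INSERT INTO emp_on_null (id, name) VALUES (NULL, 'Emrah');
INSERT INTO emp_on_null (name) VALUES ('Mete');
```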

We observe that both insert statements ran successfully. Now, let’s take a look at the state of our table.

We have seen how identity columns can be used in all three modes. Now, let’s look at what happens behind the scenes.

The moment we create an identity column, Oracle automatically creates a sequence that produces the values for that column.
Afterwards, for every new record, a value is generated from this sequence and written to the table. You can find the sequence created for our identity column with the data dictionary query below.

select * from ALL_TAB_IDENTITY_COLS;

As the dictionary query shows, a sequence was created behind the scenes. This might beg the question: can we customize how the sequence behaves while creating the identity column? In other words, can we make the sequence advance by increments of five? The answer is yes. Let’s demonstrate it in the example below:
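A sketch of a customized identity column (names and values illustrative):

```sql
-- The identity clause accepts the usual sequence options.
CREATE TABLE emp_custom (
  id   NUMBER GENERATED ALWAYS AS IDENTITY (START WITH 100 INCREMENT BY 5),
  name VARCHAR2(50)
);

INSERT INTO emp_custom (name) VALUES ('Emrah');  -- id = 100
INSERT INTO emp_custom (name) VALUES ('Mete');   -- id = 105
```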

As one can see, we can customize the sequence that drives the identity column.
In this article, I took a detailed look at the identity column, one of the new features that arrived with 12c. I hope it has been useful.

Last week, I was at Oracle Open World 2015 to receive the Oracle Database Developer Choice Award. I also got around and tried to benefit from the event as much as I could. Oracle Open World is held in the last week of October every year in San Francisco, where Oracle is also headquartered. It may well be the world’s biggest IT event. It’s an IT marathon that lasts 5 days and includes over 1000 presentations, numerous product demonstrations, and over 50 thousand attendees from all over the world. Perhaps its biggest advantage is the opportunity to meet people working on your product group and share ideas with them.

As there are so many presentations and events at Oracle Open World, it is simply not possible to follow all of them. Realistically you can only follow the news from your own domain plus a few interesting sessions, which is what I did: I followed the presentations on Oracle Data Integration, Oracle Database Development, SQL, PL/SQL, and Big Data. The main emphasis in these sessions was on the new features of Oracle 12c, and among them in-memory, the area with the largest investment, was the hottest topic of every presentation. It seems that growing data volumes and the need to run business workloads in real time will push the big players to invest more in in-memory solutions. With the 12c release, Oracle has clearly made big investments to satisfy customers’ needs in this area, supporting in-memory both in mid-tier solutions and at the database level. The emphasis on in-memory in almost every presentation shows how much Oracle cares about this topic. It would also be fair to say that real-time analytics, NoSQL, and Big Data solutions were hot topics as well.

At Oracle Open World 2015, especially in the keynote talks, the main topic was, as everyone could guess, Cloud and the projections for the next 10 years. Both Larry Ellison, the founder of Oracle, and Mark Hurd, the CEO of Oracle, focused on Cloud, Cloud security, and the best practices companies have built with Oracle’s solutions in that space. It seems we are entering an era in which we will hear about Cloud solutions ever more frequently, and in which the companies in our country will not stay indifferent to this trend. So, as professionals working in the IT field, I should say we need to invest in ourselves technically in order to enter this era well prepared.

Attending global events such as Oracle Open World is very valuable for understanding what is being discussed around the world and which problems are being worked on. I hope we catch these kinds of opportunities from time to time and keep up with the industry.

Hi everyone, I was nominated for the Oracle Database Developer Choice Awards from Turkey approximately five months ago. On 15th October, I learnt from Oracle Technology Network that I was a finalist. Voting then started to determine the winner in each category, and I finished the voting in first place thanks to the support of the Turkish community.
The winners were announced by Oracle in San Francisco (Oracle Open World 2015, YesSQL Celebration) last week. I was there and received the award from Steven Feuerstein and Andy Mendelsohn (Executive Vice President at Oracle). As I said at the award ceremony in San Francisco, I am so honoured to be a winner, and I would like to express my gratitude to all the communities who supported me (especially TROUG and TurkeyJUG). Hopefully more awards like this will come to our country, and we will have the chance to show our capabilities all around the world.

Today, I received very good news from Oracle in the USA. I want to share this news, and I would also appreciate your support. I made the finalist list for the “SQL Developer of 2015” award, which is given by Oracle worldwide. I passed a very difficult elimination stage and am now on the final list. The winner will be determined by a community vote, and I am counting on your support to receive this distinguished award.

Emrah is an active community member, especially among Turkish Oracle professionals. He is one of the core founding members of TROUG (Turkish Oracle User Group), plays an active role in that non-profit organisation, served as Database Development SIG Chairman for two terms, and has contributed – and still continues to contribute – to local Oracle conferences as a speaker and organiser. Thanks to this work, he is well known in the Oracle community, and his posts are followed by almost the whole Oracle community in Turkey. He is currently involved in large and very large scale projects that draw on his expertise in Oracle SQL, PL/SQL, and Oracle Data Integrator (ODI), and these projects have also given him deep knowledge of Oracle database architecture. He is especially known as one of the Oracle SQL and PL/SQL experts in Turkey. Not only his colleagues, but also his blog and social media followers consult him about complex database queries, query optimisation, and Oracle-related technical questions, and he never tires of answering them. He has been maintaining and writing his blogs, devoted exclusively to Oracle technologies, for about 5 years (https://emrahmete.wordpress.com/ in Turkish, https://emrahmeteen.wordpress.com/ in English). He prefers to provide his content in Turkish so that local Oracle professionals can understand it better and discuss it with ease. To me, his blog is helpful and satisfying in terms of content, and it is one of the most followed Oracle blogs in Turkey.

Hi everyone,
In this article I’ll talk about the APPROX_COUNT_DISTINCT function. I hope you find it useful.

Both in routine data tests and in data analysis, we use COUNT(DISTINCT …) most of the time. Depending on the size of our data, this can take a long time to run, mainly because DISTINCT triggers a sort operation behind the scenes, and, as is well known, sorting is one of the most expensive basic operations at the database level as data grows.
Oracle introduced the APPROX_COUNT_DISTINCT function as a faster alternative for software developers and analysts, avoiding the long run times of COUNT(DISTINCT …) on large data sets. This function is faster, but unlike COUNT(DISTINCT …), it does not always produce an exact result: depending on the data size, it swiftly estimates the result with a deviation that can usually be ignored.

Now, let’s compare the performance of this function against COUNT(DISTINCT …) with respect to data size.
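A sketch of such a comparison (the SALES table and its customer_id column are hypothetical, stand-ins for any large table):

```sql
-- Show elapsed time for each statement in SQL*Plus / SQL Developer.
SET TIMING ON

-- Exact, but triggers an expensive sort/deduplication.
SELECT COUNT(DISTINCT customer_id) AS exact_count
FROM   sales;

-- Approximate, but considerably faster on large data sets.
SELECT APPROX_COUNT_DISTINCT(customer_id) AS approx_count
FROM   sales;
```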

In this test, our COUNT(DISTINCT …) query ran longer, as expected, and produced the exact result. Our APPROX_COUNT_DISTINCT query ran faster, but it did not produce the exact count; it calculated the result with an approximate estimate.
As our tests suggest, it is fair to say that the gap in run times between the two functions will widen as data size increases, and that APPROX_COUNT_DISTINCT will drift somewhat from the true distinct count.

The APPROX_COUNT_DISTINCT function accepts a parameter of any data type except BFILE, BLOB, LONG, LONG RAW, CLOB, and NCLOB. It returns a value of type NUMBER and ignores NULL values while estimating the distinct count.

Finally, I am sharing the explain plans of the two functions to give you an idea and help your interpretation.

COUNT(Distinct)

APPROX_COUNT_DISTINCT

You can follow the links under the “references” section for more detailed test results and information. I hope it has been useful.

In this article I’ll talk about the JSON support that arrived with Oracle DB 12c 12.1.0.2.

I hope this will be a useful article in terms of awareness.

With the 12.1.0.2 release, Oracle 12c is now capable of recognizing and querying JSON data. I like this feature because we would usually need extra work in order to query and analyze application logs after saving them to the database. With this feature, that effort is no longer necessary.
Now, let’s take a look at what we can do with an example. Let’s start by creating a table.
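A sketch of such a table (the table and column names are illustrative):

```sql
CREATE TABLE app_logs (
  id     NUMBER,
  applog VARCHAR2(4000),
  -- Tell the database that this column may only hold valid JSON.
  CONSTRAINT applog_is_json CHECK (applog IS JSON)
);
```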

We created our table. The thing to note here is the last line, in which we defined a check constraint. With it, we told Oracle DB that we will store JSON data in the appLog column; from now on, the database knows this column holds JSON.
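Sample records in the spirit of the original test (the JSON contents here are illustrative):

```sql
INSERT INTO app_logs VALUES (1,
  '{"tckn":"11111111111","OKUL":"Yildiz Teknik Üniversitesi","adres":{"il":"Istanbul","ilce":"maltepe"}}');
INSERT INTO app_logs VALUES (2,
  '{"tckn":"22222222222","OKUL":"Bogazici Üniversitesi","adres":{"il":"Istanbul","ilce":"besiktas"}}');
INSERT INTO app_logs VALUES (3,
  '{"tckn":"33333333333","OKUL":"Yildiz Teknik Üniversitesi","adres":{"il":"Istanbul","ilce":"kadikoy"}}');
COMMIT;
```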

We added three records, each with JSON data in its appLog column. If a record had not conformed to this format, the database would have raised an error.

Now, let’s query our data through the appLog column and see how we reference the elements of the JSON data in our query. Say we want to retrieve the records whose OKUL element, inside the JSON data in the appLog column, contains “Yildiz Teknik Üniversitesi”.
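With a column declared IS JSON, the simple dot notation can be used through a table alias; a sketch against the illustrative app_logs table above:

```sql
-- JSON element names are case-sensitive in the dot notation.
SELECT *
FROM   app_logs a
WHERE  a.applog.OKUL = 'Yildiz Teknik Üniversitesi';
```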

As we can see in the result, we managed to run a query over the JSON data. The point to note is that, through a table alias, we can reference each element with the simple dot notation appLog.element_name.

Now, let’s write one more query example to practice the syntax. This query will return the tckn value of the records whose ilce element, nested inside the adres element of the JSON data in the appLog column, is “maltepe”.
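A sketch of the nested-element query, again against the illustrative table:

```sql
-- Nested elements are reached by chaining the dot notation.
SELECT a.applog.tckn
FROM   app_logs a
WHERE  a.applog.adres.ilce = 'maltepe';
```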

In the example above, we declared via a constraint that we would keep JSON data in the appLog column while creating our table. Now, let’s see how we can query JSON data in a table with no such constraint defined.

First, I’ll create a table and insert one record containing JSON data and another that doesn’t.
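A sketch of the unconstrained case (names illustrative):

```sql
CREATE TABLE raw_logs (
  id     NUMBER,
  applog VARCHAR2(4000)
);

INSERT INTO raw_logs VALUES (1, '{"OKUL":"Yildiz Teknik Üniversitesi"}');
INSERT INTO raw_logs VALUES (2, 'this row is plain text, not JSON');
COMMIT;

-- Without an IS JSON constraint the simple dot notation is unavailable;
-- we can still filter the JSON rows and extract values explicitly.
SELECT JSON_VALUE(applog, '$.OKUL') AS okul
FROM   raw_logs
WHERE  applog IS JSON;
```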

As we can see from the examples above, the JSON support that arrived with Oracle 12c 12.1.0.2 makes it much easier to process application logs in particular. You can follow the links below for more detailed information on this topic. I hope this has been helpful.

In this article I’ll talk about a new SQL feature which has come with Oracle 12c.
I hope this will be a useful article in terms of awareness.

In fact, this topic has been on my mind for a while, but I just couldn’t find the chance to write about it because of work. I did, however, give a presentation about it at the TROUG Datawarehouse SIG event at ITU in 2014. Those who attended got an early look, and I think this article will come in handy for everyone else.

Oracle’s pattern matching feature adds capabilities that strengthen Oracle’s already powerful analytic query infrastructure even further. Thanks to this feature, we can produce valuable input for the business by writing queries that reveal the trends in our data.

We can use the pattern matching feature for business needs such as:

Financial applications

Analysis of the fluctuation of product prices over time

Analysis of the fluctuation of stock prices over time, and taking action

Spotting unusual activity in security applications, and taking action

Fraud detection

Time-based inspection of data coming from sensors

Complex Event Processing scenarios

Before this feature existed, we couldn’t perform these kinds of analyses with a single query; we could only reach these results by carrying out a series of operations and transformations, which cost a fair amount of time and effort. With this feature, however, we can analyze our data with a single query quickly and easily, and report the results.

Now, let’s follow a sample case in order to grasp how this mechanism works.

Let’s say we have data containing a company’s hourly download volumes.
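A sketch of such data, shaped to contain a five-hour upward run (the values and timestamps are my own illustration, matching the column names used in the query below):

```sql
CREATE TABLE network_trafic (
  trafic_type        VARCHAR2(20),
  trafic_date        DATE,
  transfer_amount_mb NUMBER
);

-- Flat through the early night, then five consecutive hourly increases,
-- then a drop.
INSERT INTO network_trafic VALUES ('DOWNLOAD', TO_DATE('01/01/2015 01:00','DD/MM/YYYY HH24:MI'), 10);
INSERT INTO network_trafic VALUES ('DOWNLOAD', TO_DATE('01/01/2015 02:00','DD/MM/YYYY HH24:MI'),  8);
INSERT INTO network_trafic VALUES ('DOWNLOAD', TO_DATE('01/01/2015 03:00','DD/MM/YYYY HH24:MI'), 15);
INSERT INTO network_trafic VALUES ('DOWNLOAD', TO_DATE('01/01/2015 04:00','DD/MM/YYYY HH24:MI'), 25);
INSERT INTO network_trafic VALUES ('DOWNLOAD', TO_DATE('01/01/2015 05:00','DD/MM/YYYY HH24:MI'), 40);
INSERT INTO network_trafic VALUES ('DOWNLOAD', TO_DATE('01/01/2015 06:00','DD/MM/YYYY HH24:MI'), 60);
INSERT INTO network_trafic VALUES ('DOWNLOAD', TO_DATE('01/01/2015 07:00','DD/MM/YYYY HH24:MI'), 90);
INSERT INTO network_trafic VALUES ('DOWNLOAD', TO_DATE('01/01/2015 08:00','DD/MM/YYYY HH24:MI'), 20);
COMMIT;
```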

A graphical demonstration of this data looks as follows:

Now, let’s explain our case. Since no one is at work at night, we don’t expect download volumes to increase at our company during those hours. I want to raise an alarm when the download volume increases for 5 consecutive hours. Actually, generating the alarm is a second step that we won’t implement here; what matters is detecting that the downloaded volume rose continuously over the last five-hour window.

The trend I’m looking for is shown more clearly:

My trend starts with a rise, keeps rising for five successive hours, and forms a five-point upward run.

Now, let’s write the query that will catch this trend, and then walk through its technical details.

SELECT *
FROM NETWORK_TRAFIC
MATCH_RECOGNIZE (
PARTITION BY TRAFIC_TYPE
ORDER BY TRAFIC_DATE
MEASURES
STRT.TRAFIC_DATE AS START_TSTAMP,
LAST (UP.TRAFIC_DATE) AS END_TSTAMP
ONE ROW PER MATCH
AFTER MATCH SKIP TO LAST UP
PATTERN (STRT UP UP UP UP UP DOWN)
DEFINE
UP AS UP.TRANSFER_AMOUNT_MB > PREV(UP.TRANSFER_AMOUNT_MB),
DOWN AS DOWN.TRANSFER_AMOUNT_MB < PREV(DOWN.TRANSFER_AMOUNT_MB)) MR
ORDER BY MR.TRAFIC_TYPE, MR.START_TSTAMP;

There! When we run the query above, we get the result below.

As a result, the query caught the trend we described and returned it. We can interpret the result as follows: our trend started at 3am and ended at 7am. If no run of rows had matched the pattern, no result would have been returned.

Now, let’s explain the bold-written parts of our query:

partition by: the part in which we declare how to split the data logically. We used the trafic_type field here.

order by: declares which fields order each logical group. We used the trafic_date field here because we are analyzing the flow over time.

measures: defines the metrics, that is, the columns the query will output. We wanted a starting point and an ending point, so we defined those.

one row per match: tells the engine to produce a single row of output for each set of rows matching our pattern.

after match skip to last up: after every match, the point from which the search for a new trend starts is the last ‘up’ row of the matched trend.

pattern (STRT UP UP UP UP UP DOWN): describes exactly the trend we are looking for: a starting row, five consecutive rises, then a fall.

define: the conditions that classify a row as UP or DOWN, here by comparing each row’s transfer_amount_mb with the previous row’s value.

Although this new feature seems quite challenging at first, I found that once you get used to it, queries can be written quickly. It’s a clause with rather many parameters, but it saves us the trouble of dealing with many stages of trend analysis, and it performs well at the end of the day. We walked through a very simple implementation here; you can use this sample as a model to practice with, for example to detect ‘V’ or ‘W’ shaped trends in your own data. I hope this has been helpful.

P.S. I recommend you to follow the links I share above for more detailed information and examples.