
I was trying to use DBMS_COMPRESSION on an 11gR2 (11.2.0.4 on RHEL 6.3) database the other day and was helped by this article from Neil Johnson. I was having some trouble with permissions until I read that article which nicely summarises everything you need to know – thanks Neil!

One thing I did notice is that Neil stated that you can’t parallelise the calls to the advisor, since it uses the same named objects each time and this would then cause conflicts (and problems). Neil illustrated the calls the advisor makes, based on his tracing of the sessions…

Because I kept having permissions issues I was repeatedly running the advisor, and I ended up in a situation where one of the transient objects (or so I thought) had been left in place; when I tried the next rerun it complained that the object already existed. I can’t reproduce this as I can’t remember all the steps that I took and I wasn’t recording my session at the time – it’s not really the point of this blog in any case, rather the knowledge it led to. Since the error was that the object existed, I figured I just needed to find the object and drop it and I’d be good to carry on – so, based on the code fragments Neil illustrated, I started to search for the two objects in question (DBMS_TABCOMP_TEMP_UNCMP and DBMS_TABCOMP_TEMP_CMP) but found nothing. I then searched for DBMS_TABCOMP% and again found nothing.

Somewhat confused, I then looked for the most recently created objects and found that they were actually called something completely different, of the form CMPx$yyyyyyyy. I think this must have changed since Neil wrote his article (it is from 2013, after all).

I can’t work out what “x” is – at first I thought it was the RAC instance number, but that turned out to be a coincidence: I happened to see a 3 while I was on instance 3 of a RAC cluster. In fact, on a single instance database (test below) I saw numbers higher than 1, so it’s not the RAC instance number and I can’t work out what it is. “yyyyyyyy” is definitely the OBJECT_ID of the data object, confirmed by cross-referencing the data dictionary.
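
If you run into the same leftover objects, something like this should track them down – a sketch, run from any suitably privileged account, using the fact that the suffix maps back to OBJECT_ID:

sqlplus -s / as sysdba <<'EOF'
-- list the advisor's transient CMPx$yyyyyyyy objects, newest first
select object_name, object_type, created
from   dba_objects
where  object_name like 'CMP_$%'
order  by created desc;
exit
EOF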

Given that this naming is object specific, it suggests that you could execute these advisor calls in parallel after all – a sketch of the sort of thing I mean is below.
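
Here’s a rough illustration of firing off two advisor calls concurrently from the shell – the credentials, scratch tablespace and table names are all placeholders, so treat it as a sketch rather than a recipe:

# run one advisor call per table, each in a background sqlplus session
for t in BIG_TABLE_1 BIG_TABLE_2; do
  sqlplus -s scott/tiger <<EOF &
set serveroutput on
declare
  l_blkcnt_cmp   pls_integer;
  l_blkcnt_uncmp pls_integer;
  l_row_cmp      pls_integer;
  l_row_uncmp    pls_integer;
  l_cmp_ratio    number;
  l_comptype_str varchar2(100);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'SCRATCH_TS',
    ownname        => user,
    tabname        => '$t',
    partname       => null,
    comptype       => dbms_compression.comp_for_oltp,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line('$t compression ratio: ' || l_cmp_ratio);
end;
/
exit
EOF
done
wait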

Just to be clear, I’m not sure what version of 11g Neil was using but I am using 11.2.0.4:

Start by navigating to the Azure login page and enter your details. If you have never visited before, your screen will look like this:

If you’ve logged in before the page will show your login and you can just click it:

After you login, you’ll arrive at the Dashboard:

Choose the “Marketplace” link at the bottom right, which leads to the following screen where you can type “HDP” and it will show you the options for Hortonworks Data Platform. There are currently two options, 2.4 and 2.5 – I chose 2.5:

When you choose 2.5 it will bring up this screen which shows the details of the option you have chosen and offers you the “Create” button to go ahead and start the creation process – click on Create:

After clicking on Create, the process moves on to a five step wizard, the first step of which allows you to choose “Basic options” for the VM. I set the following options:

Name: oramosshdp25sandbox

VM Disk Type: SSD

User name: jeff

SSH Public key: my public SSH key

Subscription: Leave set to Free Trial (if that’s what you are using, as per screenshot, or your Corporate/Pay As You Go subscription if you have one)

Resource Group: Create New called hdp25sandbox_rg

Location: UK West

A screenshot of these options looks like this:

Click on OK and move on to the 2nd step in the wizard for choosing the size of the VM. I chose the DS3_V2 size, which seemed to work OK – you might be able to get away with something smaller.

Click on Select and move on to step 3 of the wizard which is about configuring optional features. For this step I set the following:

Use managed disks: Yes

Leaving all other options as defaults this looks like:

Click on OK and move on to step 4 which is just a summary of the configuration:

If you’re happy, click on OK and move on to step 5 where you accept the terms of use and “buy” the VM:

If you’re happy, click on Purchase and that’s the end of the wizard. Azure then goes off to deploy the VM, which can take a few minutes. You’ll be returned to the dashboard screen where you’ll see the VM at the top right with the word Deploying on it:

As I say, it takes a few minutes to complete, but when it does, you’ll see a popup notification in the top right of the screen and the VM tile will change to look as below:

So, you now have the Sandbox VM up and running.
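
As an aside, if you prefer the command line to the portal, much the same VM can be stood up with the Azure CLI. A sketch is below – the marketplace image URN is a guess you would need to confirm (e.g. with az vm image list):

# create the resource group, then the VM (image URN is an assumption)
az group create --name hdp25sandbox_rg --location ukwest
az vm create \
  --resource-group hdp25sandbox_rg \
  --name oramosshdp25sandbox \
  --image hortonworks:hortonworks-sandbox:sandbox25:latest \
  --size Standard_DS3_v2 \
  --admin-username jeff \
  --ssh-key-value ~/.ssh/id_rsa.pub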

The VM by default only has inbound SSH access enabled and can only be accessed by IP address so we’ll make some changes to these next. First we’ll give the VM a DNS name which allows you to access it on the internet via a name rather than an IP address. From the dashboard screen (above) click on the VM and it takes you to this screen:

You’ll notice the Public IP address is a hyperlink…click on that link and it takes you to the following screen, where you can specify the DNS Name, which gives the machine a Fully Qualified Domain Name that you can access via the internet. I set my DNS Name to oramosshdp25sandbox and, given I’d previously chosen UK West as the location, the Fully Qualified Domain Name is thus oramosshdp25sandbox.ukwest.cloudapp.azure.com as per the screenshot below:
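
The same change can be made from the CLI – assuming the auto-created public IP resource carries the name below, which you should check first:

# set the DNS label on the VM's public IP (resource name is an assumption)
az network public-ip update \
  --resource-group hdp25sandbox_rg \
  --name oramosshdp25sandbox-ip \
  --dns-name oramosshdp25sandbox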

Now, navigate to the Inbound Security Rules page which is under the Network Security Group page (access from the Resource List on the dashboard). Notice that the only rule existing is one to allow inbound SSH communication:

In order to facilitate additional capabilities you should open up a few more ports, as follows:

8888 – HDP

8080 – Ambari

4200 – Web SSH access

50070 – HDFS NameNode

21000 – Atlas

9995 – Zeppelin

15000 – Falcon

6080 – Ranger

Click to add a new Inbound Security Rule, which takes you to the page for maintaining these rules, and enter the details for the 8888 port. I specified the name as default-allow-8888 and the port as 8888 as shown below:

Click on OK to create the rule. Carry out the same process for the other ports.
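
Alternatively, rather than clicking through eight times, the rules can be scripted – a sketch, assuming the auto-created Network Security Group is named as below (again, check first):

# one rule per port, with a unique priority for each
prio=1010
for port in 8888 8080 4200 50070 21000 9995 15000 6080; do
  az network nsg rule create \
    --resource-group hdp25sandbox_rg \
    --nsg-name oramosshdp25sandbox-nsg \
    --name default-allow-$port \
    --priority $prio \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-range $port
  prio=$((prio+10))
done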

Now that we’ve undertaken these additional activities we can access the VM using an SSH terminal, logging on to oramosshdp25sandbox.ukwest.cloudapp.azure.com as the user you created (jeff in my case), using the private SSH key:
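
For example (the key path is an assumption):

ssh -i ~/.ssh/id_rsa jeff@oramosshdp25sandbox.ukwest.cloudapp.azure.com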

Whilst you are in the SSH terminal you can reset the Ambari password. This is not strictly necessary unless you want to login to Ambari as admin, but I’ll describe it anyway.

First become root with:

sudo su - root

Now SSH into the Docker Image as root:

ssh root@172.17.0.2

You will be prompted to change the password for root on this first login – the current password is hadoop.

After changing the password run the Ambari password reset process:

ambari-admin-password-reset

Follow the instructions to reset the password and after that it will start the Ambari server process.

Once all that is done, exit out of the sessions and the original SSH terminal.

Now go into HDP via the web interface by logging on to the following URL:

The first time you access this URL you’ll be given a welcome (marketing) page which asks for your details:

Fill out the details and hit Submit which will take you to the main entry page for HDP:

Choose the Launch Dashboard option on the left, which brings up a pair of browser windows that use the entire desktop and show the Ambari login page on the left hand browser and the Tutorials website on the right hand browser like this:

You can use either the admin user that you just reset the password for, or the predefined user raj_ops (password raj_ops), to access Ambari. Click on Sign In on the left hand browser once you’ve entered the credentials and it takes you into the main Ambari homepage:

This is the main systems management environment for Hortonworks – more documentation here.

If we now close this pair of browsers, go back to the main HDP entry page and choose the Quick Links option on the right, we get this page:

From here you can choose to use any of these specific components.

NOTE – I couldn’t get Atlas and Falcon to work – they need more configuration/setup to get them functional. Ranger, Zeppelin and the Web SSH client work fine though.

Looking on MOS highlights this article, which suggests that you shouldn’t have been able to do this in 11gR2, i.e. it was a bug, and that 12cR2 has fixed this bug, so you can no longer create such a virtual column (the article refers to function-based index and check constraint use cases as well).

In my case, I was able to rewrite the virtual column to use simple string functions such as SUBSTR, TRANSLATE and INSTR to achieve what I wanted and the virtual column was allowed to be created with these – problem solved – a shame really as the REGEXP_REPLACE approach was far neater.

Obviously nobody could beat Tim to getting the comprehensive installation instructions out first, but here are my notes for installing it on a proxmox container environment which is what I use as my research platform. Some of the calls used are from or based on Tim’s prior 12cR1 installation article – thanks Tim.

NOTE – this post is just a guide and is based on my environment – you will likely need to make changes to suit your own environment.

Instead you have to make these kernel parameter changes at the host level – and only if you think they are relevant and that those settings wouldn’t upset all of your other environments running on that host. I haven’t tried it, but you could potentially just tell the GUI installer to ignore the warnings relating to these entries and not make the changes at all, especially if you’re only using it for small scale research purposes.

Install the software and a database by running through the GUI screens and following the instructions. The installer complains on the prerequisite checks screen about some of the kernel memory parameters (rmem%, wmem%) which you can ignore.
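
For reference, the usual documented minimums for those parameters are shown below; if you do decide to set them, it’s something like this on the Proxmox host itself (verify the figures against the installer’s warnings):

# set on the Proxmox host, not in the container
sysctl -w net.core.rmem_default=262144
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_default=262144
sysctl -w net.core.wmem_max=1048576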

Configure Auto Start

Follow these instructions from Tim to set up auto start using the runuser method – make sure you change the ORACLE_HOME to 12.2.0.1, not the 12.1.0.2 that is mentioned.
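
The gist of the runuser method is a systemd unit along these lines – a sketch only, with an assumed ORACLE_HOME, and remember the instance needs to be set to Y in /etc/oratab for dbstart to pick it up; Tim’s article has the full detail:

# create a unit that runs dbstart/dbshut as the oracle user
cat > /etc/systemd/system/dbora.service <<'EOF'
[Unit]
Description=Oracle Database auto start/stop
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/runuser -l oracle -c '/u01/app/oracle/product/12.2.0.1/db_1/bin/dbstart /u01/app/oracle/product/12.2.0.1/db_1'
ExecStop=/sbin/runuser -l oracle -c '/u01/app/oracle/product/12.2.0.1/db_1/bin/dbshut /u01/app/oracle/product/12.2.0.1/db_1'

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable dbora.service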

Now reboot the container and it should return with the database automatically started.

I created a 4 node container based (Proxmox LXC) Hortonworks Data Platform 2.5 Hadoop cluster recently and all went well, apart from the fact that all the charts on the Ambari homepage were blank or showing “N/A”, like this:

An outline of the environment:

4 node cluster of LXC containers on Proxmox host

Centos 7 Linux OS

Nodes are called bishdp0[1-4], all created from the same template and with identical configuration

All containers are on 192.168.1.0/24 network

DNS Server also available on same network and all hosts can resolve each other via DNS

Hortonworks Data Platform version 2.5

Proxmox host sits on a corporate network and the host has iptables set to allow the containers on 192.168.1.0/24 to reach the internet via the corporate proxy server, e.g. for yum access

Other than the blank charts, everything appeared to be working fine

After much reading around, it turned out that I hadn’t quite set up the proxying correctly – specifically, I hadn’t told Ambari to ignore some hosts, namely the bishdp0[1-4] hosts on the 192.168.1.0/24 network, when proxying. I can’t find an HDP 2.5 version of the document for setting up proxying for Ambari, but the 2.2 instructions worked.
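
The change boils down to extending AMBARI_JVM_ARGS in /var/lib/ambari-server/ambari-env.sh and restarting the server – the proxy host and port below are placeholders for your own:

# append to the AMBARI_JVM_ARGS line in /var/lib/ambari-server/ambari-env.sh
export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 -Dhttp.nonProxyHosts=localhost|127.0.0.1|192.168.1.*|bishdp0*"

# then restart Ambari for it to take effect
ambari-server restart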

I got a new laptop the other day and installed all the software, including SQL Developer Data Modeler, all fine. I then opened a model which had a bunch of tables with columns based on Domains…the columns did not have Domains against them; these had been replaced with Logical data types instead.

After some research, the fix, in this case, involved copying the file “defaultdomains.xml” from the following directory on my old laptop, to the same place on the new laptop:

%SQL Developer Home%\sqldeveloper\extensions\oracle.datamodeler\types

After restarting and reopening the model all was back to normal.

What I probably could have done in the first place was to have created my own Domains file for the Design, saved in the Design folder and then when I transferred the Design by copying across the Design folder the domains would have come with it. I could have then just opened the Domain file on the new laptop. I guess it depends on whether I would want these domains to be Design specific or part of the defaults for all designs.

The Oracle Big Data Lite VM, available on Oracle Technet, provides a pre-built environment for learning about a number of key Oracle products, including Oracle Database 12c, Big Data Discovery and Data Integrator, as well as Cloudera Distribution including Apache Hadoop (CDH 5.8.0).

The download ultimately delivers an OVA “appliance” file for use with Oracle VirtualBox, but there isn’t anything to stop you running this as a VM on Proxmox 4, with a bit of effort, as follows.

Follow the instructions on the download page to convert the multiple files into one single OVA file.

For Oracle VirtualBox, simply follow the rest of the instructions in the Deployment Guide.

For Proxmox, where I was running LVM storage for the virtual machines, first rename the single OVA file to .ISO, then upload that file (BigDataLite460.iso) to a storage area on your Proxmox host – in this case, mine was called “data”. You can upload the file through the Proxmox GUI, or manually via the command line. My files were uploaded through the GUI and ended up in “/mnt/pve-data/template/iso”.

Now, bring up a shell and navigate to the ISO directory and then unpack the ISO file by running “tar xvf BigDataLite460.iso”. This should create five files which include one OVF file (Open Virtualisation Format) and four VMDK files (Virtual Machine Disk).

Now, create a new VM in proxmox via the GUI or manually. The VM I created had the required memory and CPUs as per the deployment guide, together with four Hard Disks – mine were all on the SCSI interface and were set to be 10G in size initially – this will change later.

The hard disks were using a storage area on Proxmox that was defined as type LVM.

Now convert the VMDK files to RAW files which we’ll then push to the LVM Hard Disks as follows:
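
A sketch of the sort of commands involved is below – the unpacked VMDK file names, VM ID and volume group are assumptions, so adjust them for what tar actually produced and for your own storage, and make sure each logical volume has first been grown (e.g. with lvresize) to at least the size of its raw image:

# convert each VMDK to a raw image
for i in 1 2 3 4; do
  qemu-img convert -f vmdk -O raw BigDataLite460-disk00$i.vmdk disk$i.raw
done

# push each raw image onto its logical volume
for i in 1 2 3 4; do
  dd if=disk$i.raw of=/dev/data/vm-100-disk-$i bs=4M
done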

My kids are getting to the age where they can’t keep away from the laptop, various pads or the Smart TV to go online…time for some protection I thought.

I figured, for the Windows 10 laptops, that I’d use the Microsoft Accounts approach and use the “big brother” features there to stop the kids watching things they shouldn’t and restrict their access time.

First step was to convert my local account into a Microsoft one – simple enough and worked fine.

Next step was to create additional Microsoft accounts and then have them linked up as part of the “Family” – again, fine.

Then tell the PC to add those users – again all fine and simple to do.

All going well up until now, but then when I try to logout of my working Microsoft account on the laptop and login to one of the Family Microsoft accounts, it fails with a “User Profile Service failed the logon” error:

After much googling and trying various things, the one which worked for me was to copy the directory C:\Users\Default from a working Windows 7 Ultimate machine onto the laptop with the problem (where the directory did not exist at all). The advice I found actually referred to copying from another Windows 10 machine, but I didn’t have one of those – only a Windows 7 one.

I then added the family Microsoft accounts back in and, after logging out, logging in as one of these added accounts worked fine!

I can’t be certain what the issue was, but various reading suggested an issue where the machine was upgraded from Windows 7/8 to 10 and where the local profile (C:\Users\Default) was either missing or corrupted. Copying in a working one from another machine fixed the issue in my case.

Although it was introduced with Oracle 10g Release 1 back in 2004, I first used it in a production scenario around 2008, when upgrading a hardware platform for a Data Warehouse. It seemed like a logical choice for me and the DBAs at the time, although the storage team were less pleased at losing some control. Ultimately it proved a big success on that project and is still in stable, reliable use today.


I had a scenario today where I was loading a table in which a particular column is known by different names in different source systems, and thus to different people. In order to make everyone happy on this occasion, I wondered if I could create a normal column for one of the names and then use virtual columns, pointing at the normal column, for the other names.

I’m aware there are several ways of skinning this cat and that virtual columns were probably not the best choice in the first place, but I was just playing with an idea and it didn’t quite end up where I thought…so it was interesting in that respect.

I expected it to just work, but I ran into two problems which scuppered my idea. Firstly, the virtual column can’t simply refer to a normal column without any changes to it, otherwise it fails with ORA-54016. The error isn’t particularly helpful, but eventually I worked out it was raised because the expression was simply a straight reference to a non-virtual column. Working around that by adding zero to the numeric column gets it to work, but it’s an ugly hack.

In my scenario there are three different names for this column, depending on the users involved, which leads on to the next issue: I’d need two virtual columns pointing at the same source column. Unfortunately, if I use the same hack twice it fails with ORA-54015, because you can’t have two virtual columns with exactly the same expression! A slight variant of the hack and it works, but it’s getting uglier and uglier! The sketch below shows the idea.
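
To make it concrete, here’s a sketch with made-up table and column names – the first virtual column needs the “+ 0” hack to dodge ORA-54016, and the second needs a different expression again to dodge ORA-54015:

sqlplus -s scott/tiger <<'EOF'
create table customers (
  cust_num    number,
  -- a bare reference to cust_num here raises ORA-54016
  account_no  number generated always as (cust_num + 0) virtual,
  -- repeating (cust_num + 0) raises ORA-54015, hence the variant
  client_id   number generated always as (cust_num + 0 + 0) virtual
);
exit
EOF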

There is no such thing as Unstructured Data. All data has structure. If it didn’t have structure we wouldn’t be able to use it.

What about free text? Well, that’s just a single column value (stored in a CLOB in Oracle, for example) and the free text is, more often than not, on a row with other columns, such as identifiers and timestamps, i.e. yet more structure.

I think what people mean when they use this “marketing foam”™ term is “data that we have not yet defined the structure for”, but in order to use it at some later stage, the structure will need to be defined – that definition process doesn’t actually give the data structure in and of itself, it simply defines what that structure is, in order to be able to use it.

Interestingly, the Wikipedia article for Unstructured Data calls out the imprecise nature of the term:

Data with some form of structure may still be characterized as unstructured if its structure is not helpful for the processing task at hand.

Unstructured information might have some structure (semi-structured) or even be highly structured but in ways that are unanticipated or unannounced.

In other words, it does have structure, but maybe we’ve not written it down, or the structure isn’t helpful to processing or is structured in ways we were not expecting – so what?…it’s still structured!

All of the above seem to me to support the view that all data does indeed have structure.