We're always trying to make SQLcl easier to connect to your database, whether it's on your own machine or in the cloud. So, one other thing we have added to enable you to drill into your cloud databases is an SSHTUNNEL command. Let's take a look at the help for it, which you can get as follows.

SQL> help sshtunnel

SSHTUNNEL
---------

Creates a tunnel using standard ssh options, such as port forwarding with the -L option, where the given port on the local host is forwarded to the given remote host and port on the remote side. It also supports identity files, using the ssh -i option. If passwords are required, they will be prompted for.

Specifies that the given port (Localport) on the local (client) host is to be forwarded to the given remote host (Remotehost) and port (Remoteport) on the remote side. This works by allocating a socket to listen to the port on the local side. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and a connection is made to Remotehost and Remoteport from the remote machine.

-i identity_file

Selects a file from which the identity (private key) for public key authentication is read.

SQL>

So for this to work, we need to decide which local ports we are going to use and which remote machine and port we want to map our local ports to. We also need an RSA key file for the target host. In this example, we have created one with the default name of id_rsa. The format of the flags follows the standard ssh rules and options, so -i for identity files and -L for port forwarding. Here's an example connecting to a remote host via a tunnel.
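As a sketch, assuming the syntax follows the -L and -i options described in the help (the host name, key path and ports here are made up for illustration):

```
SQL> sshtunnel opc@myclouddb.example.com -i /home/user/.ssh/id_rsa -L 1521:myclouddb.example.com:1521
```

Once the tunnel is up, a connect to localhost:1521 on your machine lands on port 1521 of the remote host.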

We've been playing with our console drawing in SQLcl for a while now and this week, we hooked up some keys to make editing and running much easier. The video will show the following keys for managing your buffer in the console. This will make it into the next Early Access candidate soon.

up arrow - previous history (this will continue to walk back through history unless you move into the text to edit it).

down arrow - next history, which works the same way as above.

If we are editing and not showing history, then the up and down arrow will move up and down the buffer.

ctrl-W will take you to the top left of the buffer and ctrl-S will take you to the bottom of the buffer.

left arrow moves left, with ctrl-A taking you to the extreme left of that line.

right arrow moves right, and ctrl-E takes you to the extreme right of that line.

ESC takes you out of edit mode, back to the SQL> prompt

ctrl-R will execute your buffer if you are editing it.

Editing SQL in SQLcl

At the start of the video, we paste in a large piece of SQL from Kris' blog and all NBSP characters get stripped out, so you get the full SQL and none of the dross.

If you are at the end of the buffer and terminate your statement correctly, the next CR will run the contents of your buffer. If you are anywhere else in the buffer, ctrl-R will run the buffer for you.

Check out the latest one on OTN and come back for these features when we drop the new version of SQLcl on OTN.

Turloch posted this today, which is like aliases for SQL*Net connection URLs, which are used in connections like this:

connect <USERNAME>/<Password>@URL

This works great and simplifies the connection strings that you use. Vadim wired this into the code completion, so we can now complete, via a key, a connection string that you have used before, or you can set up a new one using the net command.

We support many ways to connect in SQLcl, including lots from SQL*Plus which we need to support to make sure all your SQL*Plus scripts work exactly the same way using SQLcl as with SQL*Plus.

I've added several ways to show how to connect with SQLcl. If there is one you want to see added that is not here, let me know and I'll add it to the list. So far, we have the following:

EZConnect

TWO_TASK

TNS_ADMIN

LDAP

At any time when connected, you can use the command 'SHOW JDBC' to display what the connection is and how we are connected. Here are some details of the types above.

EZCONNECT

The easy connect naming method eliminates the need for service name lookup in the tnsnames.ora files for TCP/IP environments. It extends the functionality of the host naming method by enabling clients to connect to a database server with an optional port and service name in addition to the host name of the database:
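For example (the user, password, host and service name below are placeholders):

```
SQL> connect hr/hr@localhost:1521/orcl
```

The host is required; the port and service name are optional and default to 1521 and the host name's default service.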

The TWO_TASK (on UNIX) or LOCAL (on Windows) environment variable can be set to a connection identifier. This removes the need to explicitly enter the connection identifier whenever a connection is made in SQL*Plus or SQL*Plus Instant Client.

In SQLcl, we can set this up as a JDBC-style connection like this:

$export TWO_TASK=localhost:1521/orcl

TNS_ADMIN

Local Naming resolves a net service name stored in a tnsnames.ora file stored on a client. We can set the location of that file in the TNS_ADMIN variable.
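As a sketch (the directory, entry name and host are placeholders), with a tnsnames.ora in that directory containing an entry like this:

```
# /home/user/tns/tnsnames.ora
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl)))

$ export TNS_ADMIN=/home/user/tns
$ sql hr/hr@ORCL
```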

Here's a little preview of the code insight we have in SQLcl. These changes are part of EA2, which is coming out very soon. This also shows the buffer and cursor management which was introduced in SQLcl.

This allows you to move around the buffer easily and add and change text as you would in a normal text editor, not a console window like this.

We're also adding hotkeys to run the buffer from anywhere or to jump out of the buffer to do something else without losing the contents of the buffer.

Since we released our first preview of SDSQL, we've made a lot of changes and enhanced a lot of things to make it more usable. One specific one was support for LDAP, which some customers use with SQL Developer as a standard in their organisations; our first release precluded them from working with this.

Well, to add this, we wanted a way to specify the LDAP strings and then use them in a connect statement. We introduced a command called SET LDAPCON for setting the LDAP connection. You can set it like this at the SQL> prompt:

set LDAPCON jdbc:oracle:thin:@ldap://scl58261.us.oracle.com:389/#ENTRY#,cn=OracleContext,dc=ldapcdc,dc=lcom

Then you can come along and, as long as you know your service name, we're going to swap out the #ENTRY# delimiter in the LDAP connection with your service name. We're working on a more permanent way to allow these to be registered and used so they are more seamless.

Since we dropped our beta of SQL Developer 4.1 and announced SDSQL, we've been busy getting some of the new things out to users. We support SQL*Plus editing straight out of the box, but one thing that was always annoying was when you made a mistake and couldn't fix it until you had finished typing, and then had to go back and add a line like this.

This was always the way, as console editors didn't let you move around; the best you could hope for on the command line was a decent line editor, and anything above was printed to the screen and not accessible except through commands like you see in the images above.

Well, not any more. In SDSQL we've taken a look at several things like history, aliases and colors, and we've now added a separate multiline console editor which allows you to walk up and down your buffer and make all the changes you want before executing. Sounds normal, right? So, that's what we did. Have a look and tell us what you think.

So my MacBook Pro died the other day and, much to my wife's amusement, my Dell laptop died 30 minutes later with disk errors, as I hadn't used it in forever. She wasn't laughing long though, cos I swiped her MacBook Air to get me out of a hole while the Apple store replaces the MagSafe card. (Don't worry, though, cos Lisa grabbed one of the kids' laptops and now they are the only ones fuming.)

So, here we are, no development environment to speak of on this laptop, not even Xcode tools or anything, and a release to go out! First thing out of the box was to download Eclipse, from eclipse.org, which at the time of writing is still Kepler.

Download it and expand it, then take the complete eclipse folder and drop it into your /Applications folder. It'll look like this.

Also, when you click on the Launcher, you'll see eclipse added to the list of applications.

Now, when you run it, you may be asked if you want to install Java 1.6 to run Eclipse. Accept the install and sit back until it completes. When it's installed, you'll be able to run Eclipse, so click the icon in the launcher, as above.

Eclipse will appear like this below.

We'll want to see what Java versions we have installed, and for that you can go to Preferences and type jdk into the filter box, which will show a number of Java-related options.

As you can see, we have a preference called Installed JREs which, when we click on it, will only have the Apple JDK we installed when we first tried to start Eclipse. I want JDK 8 and JDK 7, and I got them from the Oracle site for Java. Download both dmg files from Oracle, double-click them and follow the instructions on the installer to drop them in. If you restart Eclipse and go back to this page in the preferences, you will now see the appropriate JDKs installed and you can choose your default for your project.

Now, part two: getting Subversion into your Eclipse, which turns out to be kinda difficult when you are trying to figure out which path to take. There are various schools of thought on how to get Subversion onto your Mac, but for me so far, I have found Brew to be one of the best of the latest package installers out there. If you do not have Brew installed, you can do that really quickly by running this command in a terminal window.
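At the time, the Homebrew install command looked something like the one below; the install script moves around, so check brew.sh for the current one before pasting anything into your terminal:

```
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```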

Badda bing. Now we can install Subversion from the Brew repository, and as all Homebrew experts know, you keep your home-brews in the Cellar, so look out for /usr/local/Cellar appearing. You may be asked for your administrator password as you do this, because brew sets up the Cellar under /usr/local and needs to create that directory and set its permissions if it does not exist. Next, we'll want to install Subversion with Brew.
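That install is a one-liner (brew resolves the dependencies for you):

```
$ brew install subversion
```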

This will install subversion and its dependencies for you.

Now, lastly, you'll need to install SVN support in Eclipse. The best one I've seen, and the one I have been using for ages, is Subclipse from Tigris.org. If you go to the download page, you'll see some notes and sections for each release like this.

What we want to pick up is the Eclipse update site URL. We can then use it in Eclipse to install Subclipse for us. So, open Eclipse again and go to Help > Install New Software.

This will pop up the window below for available software, and if you use the drop box, you'll see things like eclipse and Mylyn and other update sites which base Eclipse uses.

We need to add another for Subclipse. Remember we grabbed the update URL from the Subclipse site; we can add a new site by clicking Add and pasting in the URL and a name for the site as shown.

This will appear like this and will give you the options below to install Subclipse and SVNKit.

Install these; it's normally good to restart Eclipse after these installs. The last thing you need to do is make sure you are using the right SVN kit in Eclipse once you restart. You can check this by going to the preferences again and searching for SVN. Click on the main SVN preference and make sure the SVN interface is set to SVNKit instead of JavaHL.

Now SVN should be all set up, and you can go look at adding new repositories and checking out code.

This, the last part of a four-part tutorial, goes over the movement of data using files generated by Oracle SQL Developer. In part 1 we generated the offline capture scripts to take to the SQL Server machine, unloaded the metadata, zipped it up and copied it back to our local machine. In part 2 we used SQL Developer to create a migration project and load the capture files into SQL Developer. We then converted the metadata into its Oracle equivalent. In part 3, we generated DDL and ran it against an Oracle database.

Looking at the data move scripts that we generated in an earlier part, we need to zip up the files and copy them to the SQL Server machine to run. Let's look at that now. The images below show the files moved to our SQL Server machine. We go into the main directory under data move and run the batch file MicrosoftSQLServer_data.bat. This batch file takes a number of parameters.
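I haven't re-checked the exact parameter order here, but the invocation follows the same shape as the capture script from part 1 (sa user, password, database, server); treat the line below as a sketch with placeholder values:

```
C:\datamove> MicrosoftSQLServer_data.bat sa saPASSWORD Northwind SQLSERVER_HOST
```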

This script then unloads the data from the databases selected earlier. We can see the .dat files in the image above. Now we just need to transfer the data to the Oracle database machine for loading. We can go back out to the main datamove directory and zip up the entire directory, including the scripts. We then need to ftp that to the database machine.

Unzip the files on that machine and cd down into the main directory until you find a file called oracle_loader.sh.

We can run the files as below. The output shows the exact result of running the oracle_loader.sh script on the data we have taken from SQL Server.

We can now take a look at some data in the Oracle database by going to the dbo_northwind connection we made earlier and look at the data.

And that's it. In these four parts we have generated capture scripts from SQL Developer to unload metadata from SQL Server. In part two, we loaded the metadata and converted it into an Oracle version. In part three, we generated the DDL and ran it, creating the new Oracle users. In part four, we unloaded the data and copied it to the Oracle machine. We then loaded it using the scripts we generated from Oracle SQL Developer.

In part 1 we generated the offline capture scripts to take to the SQL Server machine, unloaded the metadata, zipped it up and copied it back to our local machine. In part 2 we used SQL Developer to create a migration project and load the capture files into SQL Developer. We then converted the metadata into its Oracle equivalent.

In this episode we will generate DDL from our migration project. Right now, we can see the Oracle objects in the Converted Database Objects node. If we right-click on Converted Database Objects and choose Generate, we can generate DDL to create the Oracle schema and objects.

The wizard appears again with the introduction screen. Clicking Next takes us directly to the target database screen.

Click on offline to choose generation of files. For specifics of how the files get generated, click on advanced options

You can select what way you want to generate your files, all in one file, a file per object type or a file per object. You can also choose the types of objects you want to generate and run.

In this demo, I will just generate tables, data and supporting objects. Clicking next will take us to the data move page where we again choose offline to generate files.

Choosing advanced options allows us to be specific about date masks and delimiters for data unload.

Once we have chosen our options, we click next and review the summary.

Finally, we click finish and the files are generated in the output directory we specified when setting up the project in part 2.

Now, let's go see what we generated. If we go to the output directory we specified in the project, we can see the list of files generated. Remember the options we chose for generation.

We also get the master.sql file opened in SQL Developer, which looks like this:

Now, let's try to run this file and create the users and objects. First, we choose a connection to run the script. This user must have the privileges to create users and all their ancillary objects.

We can run this script to create the users. Notice the worksheet output showing the output of the files.

Once this is complete, we can create a connection in SQL Developer to one of the users created: dbo_Northwind, dbo_pubs or emulation.

Now, we have created the schema from the DDL which was generated. In the next and final episode of this, we will visit the data move. We will run the data move scripts on SQL Server and extract the data which we can load via SQL Loader or external tables.

OK, now that we have the files generated and moved in part 1, we can start SQL Developer to load them. Start up SQL Developer and create a connection with the following privileges: CONNECT, RESOURCE and CREATE VIEW.

When the connection is opened, right click on it and choose Migration Repository then Associate Migration Repository. This will create the repository in the connection.

Now we can start the migration wizard. You can do this either by going to the Tools menu and selecting Migrate from the Migration menu, or by selecting the migrate icon from the migration project navigator. The wizard will pop up and you can walk through the steps as outlined below.

Clicking the Next button brings up the repository page, where we can choose the repository connection we just made.

On the next page, we need to create a project to hold the captured databases.

The output directory in the page above is the directory where any log files or generated files will be placed. When we generate DDL or data move files, this is where they will go. The next page is the capture page. To use the files from part 1, we need to choose Offline, which will then show the page below, asking us to select the offline capture file.

This offline capture file is in the zip file we brought over from SQL Server. Browse to sqlserver2008.ocp. This file tells SQL Developer what to expect in the directory; it will look for the databases that have been unloaded.

When it's selected, SQL Developer parses the files and shows you a list of the databases you ran the offline capture scripts for in part 1.

Choose both databases and click next.

The next page shows a list of the SQL Server datatypes on the left and a list of equivalent Oracle datatypes on the right. You can choose a different type if you want, and you can also create a new mapping by clicking "Add New Rule".

The next page lists the objects to be translated. Because we have not captured anything yet, the best we can do is to tell SQL Developer to translate everything. We can come back later and choose specific stored programs to convert and translate.

At this stage, we can click Proceed to Summary and then Finish once we review the summary page.

When finish is pressed, SQL Developer will capture the database metadata from the files and convert it to its Oracle equivalent.

When this completes, you will see a new node with the project name you chose earlier. If you click on it, you will get an editor on the right hand side with a summary of the data captured and converted.

Many people want to migrate their SQL Server databases but do not have direct network access to the database. In Oracle SQL Developer, we can migrate from SQL Server to Oracle using a connection to SQL Server, or using files to extract the metadata from SQL Server and convert it to an Oracle equivalent.

Today, we'll show you how to use scripts to convert a SQL Server database. First we need to start up SQL Developer, choose the Tools menu, then select Migration and Create Offline Capture Scripts.

When the dialog appears, choose the SQL Server and the appropriate version you want. You will also need to choose a directory to put the scripts into.

This will generate a set of files which we will need to move to our SQL Server machine to run.

So on disk, these look like this.

Now we can zip this up and ftp it to the SQL Server machine you want to migrate from, or in my case, I'll scp it to the machine.

Now, let's go to SQL Server and run the scripts against the SQL Server database. Looking below, I have opened a command window, created a directory called blog and moved the sqlserver.zip file into it.

Now we have the scripts on the SQL Server box, ready to run. It's important that when you run the scripts on a server, you always run them from the same place. The capture script takes a number of parameters:

OMWB_OFFLINE_CAPTURE sa superuser_password databasename server

OMWB_OFFLINE_CAPTURE sa saPASSWORD DBNAME_TO_CAPTURE SQLSERVER_SERVER

This will unload the metadata from the database to flat files. You need to run this script once for each database you want to migrate. You'll see something like these as you go.

This is one run for the Northwind database. I've run this again for the pubs database; let's look and see what files exist now.

Now, we go up a directory and zip all this up so we can move it to the machine where we will translate it.

Now we can move that zip file. Take a look at it: it is very small for this demo, but even for a large system, we are only capturing the metadata structure of the database. If you are working with a partner or SI, this is the file you will want to send them for analysis.

Ok, for those of you who are doing this right now, read on.

When you have the capture.zip file transferred, unzip it into a clean directory. We will use SQL Developer on this to convert these metadata files into DDL to create the new Oracle schema and the data move scripts which can be used to unload the data from SQL Server and load it into Oracle.

Now, we use SQL Developer to load this data. We will need access to an Oracle database to create a schema to use as a repository. The repository is used to hold the source database information and the converted data.

The next post will walk through SQL Developer loading these files and converting the metadata to an Oracle equivalent.

Sometimes people want to connect to MySQL from Oracle and copy table data between the databases. You can do that with Oracle Heterogeneous Services via ODBC. This post will show how to create an ODBC connection to your MySQL database, which is the first part of this.

For my example, I'm using unixODBC, and it's on the Oracle public yum repository.
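The two files unixODBC needs are odbcinst.ini (which registers the driver) and odbc.ini (which defines the DSN). The paths, driver name and DSN name below are examples from my setup; adjust them to wherever your MySQL ODBC driver actually lives:

```
# /etc/odbcinst.ini -- registers the MySQL ODBC driver
[MySQL]
Description = MySQL ODBC driver
Driver      = /usr/lib64/libmyodbc5.so

# /etc/odbc.ini -- defines a DSN called mysqldsn
[mysqldsn]
Driver   = MySQL
Server   = localhost
Port     = 3306
Database = mysql
User     = root
Password = oracle
```

With those in place, `isql mysqldsn` should drop you into an ODBC session against MySQL.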

Cool. When we get this, we are connected via ODBC to the DSN. Now we can prove it by doing a SHOW TABLES or something similar to show it's working. NB: if you get an error at this stage asking for libraries, it's likely you specified your drivers incorrectly in odbcinst.ini. Now that we have this working, we can set up HS on the Oracle side.

I find myself installing and running MySQL of different versions in different places for different reasons all the time (well, often enough to do it and not remember the little things that cost time when setting up). It's with that in mind that I'm making notes for myself, and for you as well, to help you along.

We use MySQL a lot with Oracle SQL Developer, and many use SQL Developer to co-exist between MySQL and Oracle.

For most versions of Oracle Linux, we will install MySQL from the yum repository. If you don't have one set up, you can configure one under /etc/yum.repos.d. These notes for yum are a reference (blatant copy) from the Oracle Linux Admin guide.

As root, change directory to /etc/yum.repos.d.

# cd /etc/yum.repos.d

Use the wget utility to download the repository configuration file that is appropriate for your system.

# wget http://public-yum.oracle.com/public-yum-release.repo

For Oracle Linux 6, enter:

# wget http://public-yum.oracle.com/public-yum-ol6.repo

The /etc/yum.repos.d directory is updated with the repository configuration file, in this example, public-yum-ol6.repo.

You can enable or disable repositories in the file by setting the value of the enabled directive to 1 or 0 as required.

Now we are ready to install MySQL. If you haven't used yum before, play with some of the options to list packages and switch repos as you need them. It's a great tool, saving us all lots of time with dependencies.
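On my box this amounted to something like the following; the package names vary by release, so check `yum list 'mysql*'` first:

```
# yum install mysql mysql-server
# service mysqld start
```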

In general, you can start mysql on the server without a server password in order to set one up for yourself. My one caveat here is that all this is for development, folks; someone with a security hat on will complain (bitterly). I'm going to show you how to clear down all permissions so you can connect from any machine.
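A sketch of that clear-down, for a development box only; this widens root to connect from any host, which is exactly what the security folks will complain about:

```
mysql> update mysql.user set host='%' where user='root';
mysql> delete from mysql.user where user='';
mysql> flush privileges;
```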

Now, if you want to change your passwords, make sure you do that now. On 5.1 and above, secure_auth is on and old passwords are off by default. In my version, 5.0, I need to set these to get new passwords and secure_auth, which is the default on all MySQL clients now. This is done in /etc/my.cnf, followed by a restart of mysql.

old_passwords=0
secure-auth=1

mysql> update user set password=PASSWORD('oracle') where user='root';

Then flush privileges and exit.

mysql> flush privileges;

Lastly, I like my prompts to be informative, so you can set this in your profile to set up your prompts.

export MYSQL_PS1="\u@\h [\d] > "

That gives me a prompt like the one below when I log in with:

root@localEl5> mysql -uroot -poracle -Dmysql

giving this prompt in mysql

root@localEL5 [mysql] >

Now you are all set to connect from SQL Developer to this instance. We can also install the sample databases from http://dev.mysql.com/doc/index-other.html.

Running lots of scripts in SQL*Plus is nice when they are working correctly. But what about when they fail? It's a pain to figure out what went wrong, unless you have a log file, and even then you have to hunt the errors down.

Well, I've had enough of that. Laziness has forced us to create clickable errors in the SQL Worksheet as part of SQL Developer.

Here's a simple example. Take three statements, one of which has an obvious error. Hitting F5 runs the script, and the errors now appear in the script output. Notice that they are coloured blue at this point to show us that they are clickable.

When you click on the error, you get taken to the point where you made the error in the worksheet. In this case, line 3. One of the things we wanted to do as much as possible when doing this was to keep tabs on the error should we change the file. Above, I add a few lines and return to click on the error and it brings us to the right spot again. In this instance, line 5.

The same thing works in files. If we have a problem in a nested file like @test/test.sql, the error is reported and we can click on it in the same way and be brought to the file at the appropriate line and column offset.

When you have a load of SQL*Plus scripts, they can be a pain to manage and simply to get around. In the next version of SQL Developer, you'll be able to hold the Ctrl key down, click on a file name, and have it open up. This also works on the Mac with the Cmd key.

Remember, hold the Ctrl key down over an @file.sql or @@file.sql, and the line will be decorated so you can click on it.

This is the simplest type of replacement, taking the source strings found and replacing them with the string in the replacement tag.

Regex

The regular expression replacement can either simply replace text, or it can also use regular expressions to rearrange the string that was found. For example, function(a,b,c) can be switched to myfunction(c,a,b)
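The rule syntax itself belongs to the scanner tool, but the underlying trick is plain regular-expression capture groups. Here's the same rearrangement sketched with sed (the function names are just the ones from the example above):

```shell
# Capture the three arguments of function(...) and emit them reordered.
echo 'call function(a,b,c) here' \
  | sed -E 's/function\(([^,]+),([^,]+),([^)]+)\)/myfunction(\3,\1,\2)/'
# -> call myfunction(c,a,b) here
```

Each parenthesised group captures one argument, and the replacement emits them in the new order.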

Translator

The translator type allows the user to take the string found and pass it to a language translator denoted by the type. In our example, the type is SYBASE, which will call our sybase translator and translate the source string.

In the rules file above, we have two rulesets defined, the first doing text and regex replacements, and the second doing translator replacements. All these can be mixed together, though. If you have a lot of rules, it makes sense to delineate them in rulesets so the tool can filter out what is not required. Now, taking a look at the source we had in Part 1:

So, there we have a translated file which was running in iSQL on Sybase, but is now running in SQL*Plus with Oracle-specific SQL.

Now, if you are interested, here's what the scanner said as it was running. It prints out individual statements as they are found, and where in the source they were found. At the end, the tool summarises what was found and where.

I was working through a bug the other day using SQL*Plus, which for the most part doesn't annoy me too much. However, one of the things that does is having to retype lots of stuff. (We don't have that problem in SQL Developer.)

Having hunted around for a few minutes, I found rlwrap, which is a GNU readline wrapper. All this means is that when we use it with SQL*Plus, it gives us keyboard history and user-defined completion. I've found a few posts about it too, which are referred to below, but I wanted to do this for our virtual machine.

We use our Oracle Developer Days VM a lot internally, as it's great for spooling up a DB and having a full environment ready to play with and test features. I'm using that for this post.

You can download rlwrap from here. There are RPMs available too. I pulled down the tarball; expand it out and you have a bunch of files for a standard build.

Firstly, we need to run the ./configure script to find all the dependencies. You can see a cut down version of the output of that below.

checking for tgetent in -lcurses... no
checking for tgetent in -lncurses... no
checking for tgetent in -ltermcap... no
configure: WARNING: No termcap nor curses library found
checking for readline in -lreadline... no
configure: error:

You need the GNU readline library (ftp://ftp.gnu.org/gnu/readline/) to build this program!

[root@localhost rlwrap-0.37]#

Running configure on my system flagged that I didn't have the readline package installed. However, when I went to install it with

I discovered it was already installed. A quick look through the config.log from the configure process, though, shows that the -lreadline library dependency could not be satisfied. It needed the development package to build.
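On this VM that meant pulling in the -devel packages and rerunning the build; the package names below are the Oracle Linux ones, so adjust for your distro:

```
# yum install readline-devel ncurses-devel
$ ./configure && make
# make install
```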

So, now I'm a lot happier and can zip through loading files and getting my previous statements.
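Day-to-day use is just prefixing the command, which I wrap in an alias in my profile (the connect string is a placeholder):

```
$ alias sqlplus='rlwrap sqlplus'
$ sqlplus scott/tiger@orcl
```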

Now, I know there are issues with using this when we redirect files into SQL*Plus, as noted on other blogs like this one from Lutz Hartmann, but for me, working with Plus in a terminal window, this will do nicely.