Archive for the ‘Consolidation’ Category

Early tall-building designers, fearing a fire on the 13th floor, or fearing their tenants’ superstitions, decided to omit the 13th floor from their elevator numbering. The practice became common and eventually found its way into American mainstream culture and building design. If hotel floors are lettered, would you mind staying on floor M?

Next month, thousands of Oracle professionals will come to San Francisco, where Oracle will host Oracle OpenWorld for the 13th time in a row.

Delivering 13 presentations in honor of this jubilee is technically impossible, but here are three that I will give next month in San Francisco:

Abstract: I will do a LIVE demo and try to create an Oracle compute instance from scratch in less than 13 minutes. The countdown stops once I am root in the virtual machine.

Infrastructure as a service (IaaS) is the fastest-growing area of public cloud computing. Oracle Cloud IaaS, with built-in security and high availability, offers elastic compute, networking, and storage to help any company quickly reach both value and productivity. This presentation covers the benefits of Oracle IaaS over other cloud providers, and shows how fast and easy it is to set up IaaS services in Oracle Cloud.

Oracle Database Exadata Cloud Service provides service instances that contain a full Oracle Database hosted on Oracle Exadata inside Oracle Cloud. This presentation is about best practices on how to migrate and consolidate Oracle databases onto Oracle Database Exadata Cloud Service. It covers the three phases—planning, migration, and validation—of Oracle’s database consolidation workbench that helps in end-to-end consolidation of databases and enables consolidation of more databases on the same Oracle Exadata system, both on premises and in the public cloud.

If you are reading this post, I welcome you to join my talks at OpenWorld. Thank you in advance. Please also join the other presentations from our team, the Accenture Enkitec Group. Here are the remaining talks:


“Whenever you find yourself on the side of the majority, it is time to pause and reflect.” – Mark Twain

Oracle Cloud Infrastructure allows large businesses and corporations to run their workloads, replicate their networks, and back up their data and databases in the cloud. And I would say in a much easier and more efficient way than with any other provider!

Oracle provides a free software appliance for accessing cloud storage on premises. The Oracle Storage Cloud Software Appliance is offered free of charge. You do not get this from Amazon. And from Azure, you do not get as much memory per core on a VM as you get from Oracle. In addition to the hourly metered service, Oracle also provides non-metered compute capacity with a monthly subscription, so that you can provision resources up to twice the subscribed capacity. This is a way to control the budget through a predictable monthly fee rather than the less controllable pure pay-as-you-go model.

Creating an Oracle Compute Service instance took me (the first time) less than 10 minutes. Accessing it was immediate. This is simple, fast, easy, and most of all I had no issues whatsoever. OK, I did not find lshw, but I installed it in a minute.

– Both metered and non-metered options of Oracle Compute Cloud Service are now generally available.
– You can no longer subscribe for 50 or 100 OCPU configurations. Instead, you can specify the required number of 1 OCPU subscriptions.
– If you have a non-metered subscription, you can now provision resources up to twice the subscribed capacity. The additional usage will be charged per hour and billed monthly.
– Oracle provides images for Microsoft Windows Server 2012 R2 Standard Edition.
– Oracle provides images for Oracle Solaris 11.3.
– You can clone storage volumes by taking a snapshot of a storage volume and using it to create new storage volumes.
– You can clone an instance by taking a snapshot and using the resulting image to launch new instances.
– You can increase the size of a storage volume, even when it’s attached to an instance.
– You can now find the public and private IP addresses of each instance on the Instances page. Earlier, this information was displayed only on the instance details page of each instance.
– The CLI tool for uploading custom images to Oracle Storage Cloud Service has been updated to support various operating systems. The tool has also been renamed to uploadcli. Earlier it was called upload-img.


How does Oracle’s Database Public Cloud compare with Amazon’s Relational Database Service (RDS) for enterprise usage? Let us have a look.

Oracle’s Database comes in four editions:

– Personal Edition
– Express Edition (XE): free of charge and used by very small businesses and students
– Standard Edition (SE): a light version of Enterprise Edition, purposely designed to lack most features needed for running production-grade workloads
– Enterprise Edition (EE): provides the performance, availability, scalability, and security required for mission-critical applications

In the comparison in this post, we will evaluate Oracle and Amazon in relation to the Enterprise Edition of Oracle’s database.

Amazon RDS for Oracle Database supports two different licensing models – “License Included” and “Bring-Your-Own-License (BYOL)”. In the “License Included” service model, you do not need separately purchased Oracle licenses. Here are a few characteristics:

– Enterprise Edition supports only db.r3.large and larger instance classes, up to db.r3.8xlarge
– Need to choose between Single-AZ (= Availability Zone) Deployment and Multi-AZ Deployment
– For Multi-AZ Deployment, Amazon RDS will automatically provision and manage a “standby” replica in a different Availability Zone (prior to failover you cannot directly access the standby, and it cannot be used to serve read traffic)
– Only 2 instance types support 10 Gigabit network: db.m4.10xlarge and db.r3.8xlarge
– Amazon RDS for Oracle is an exciting option for small to medium-sized clients and includes Oracle Database Standard Edition in its pricing
– Several applications with limited requirements might find Amazon RDS to be a suitable platform for hosting a database
– As the enterprise requirements and resulting degree of complexity of the database solution increase, RDS is gradually ruled out as an option

So, here is a high-level comparison:

Notes:

– Oracle’s price includes the EE license with all options
– Amazon AWS is BYOL for EE
– Prices above are based on the EU (Frankfurt) region
– Amazon’s Oracle database hour prices vary from $0.290 to $4.555 for Single-AZ Deployments and from $0.575 to $9.105 for Multi-AZ Deployments
– Oracle’s database hour prices vary from $0.672 to $8.569


1. For DBAs and Developers, the words READ and SELECT have been for years somehow synonyms. In 12c, is there now any difference?

2. Before pluggable databases, selecting data from the SALES table for instance meant selecting data from a table called SALES in a certain SCHEMA within the database. How about if a table called SALES belongs to several pluggable databases under the same schema name?

The aim of this blog post is to shed some light on these new concepts.

1. New READ privilege.

Until Oracle 12.1.0.2, the SELECT object privilege allowed users to perform the following two operations, in addition to just reading data from the SALES table:

LOCK TABLE sales IN EXCLUSIVE MODE;
SELECT ... FROM sales FOR UPDATE;

These 2 commands enabled the users to lock the rows of the SALES table.

The READ object privilege does not provide these additional privileges. For better security, grant users the READ object privilege if you want to restrict them to performing queries only.

In addition to the READ object privilege, you can grant users the READ ANY TABLE privilege to enable them to query any table in the database.

When a user who has been granted the READ object privilege wants to perform a query, the user still must use the SELECT statement. There is no accompanying READ SQL statement for the READ object privilege.
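To illustrate the difference, here is a minimal sketch (the APP schema and the user JULIAN are made-up names):

GRANT READ ON app.sales TO julian;

-- connected as julian, with only READ granted:
SELECT * FROM app.sales;                 -- allowed
SELECT * FROM app.sales FOR UPDATE;      -- fails with ORA-01031: insufficient privileges
LOCK TABLE app.sales IN EXCLUSIVE MODE;  -- fails with ORA-01031: insufficient privileges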

The GRANT ALL PRIVILEGES TO user SQL statement includes the READ ANY TABLE system privilege. The GRANT ALL PRIVILEGES ON object TO user statement includes the READ object privilege.

If you want the user only to be able to query tables, views, materialized views, or synonyms, then grant the READ object privilege. For example:

GRANT READ ON SALES TO julian;

2. Querying a table owned by a common user across all PDBs.

Consider the following scenario:

– The container database has several pluggable databases; for example, a separate PDB for each office location of the company.
– Each PDB has a SALES table that tracks the sales of the office, i.e., the SALES table in each PDB contains different sales information.
– The root container also has an empty SALES table.
– The SALES table in each container is owned by the same common user.

To run a query that returns all of the sales across the company, connect to each PDB as a common user, and create a view with the following statement:

CREATE OR REPLACE VIEW sales AS SELECT * FROM sales;

The common user that owns the view must be the same common user that owns the sales table in the root. After you run this statement in each PDB, the common user has a view named sales in each PDB.

With the root as the current container and the common user as the current user, run the following query with the CONTAINERS clause to return all of the sales in the sales table in all PDBs:

SELECT * FROM CONTAINERS(sales);

You can also query the view in specific containers. For example, the following SQL statement queries the view in the containers with a CON_ID of 3 and 4:

SELECT * FROM CONTAINERS(sales) WHERE CON_ID IN (3,4);
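To map a CON_ID to a PDB name, you can query V$PDBS from the root; the CON_ID pseudo-column of the CONTAINERS clause can also be used for aggregation. A small sketch:

SELECT con_id, name, open_mode FROM v$pdbs ORDER BY con_id;

-- sales row counts per container
SELECT con_id, COUNT(*) FROM CONTAINERS(sales) GROUP BY con_id ORDER BY con_id;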

3. Delegate.

Something else: starting with 12.1.0.2, when granting a role to a user, you can specify the WITH DELEGATE OPTION clause. The grantee can then do the following two things:

A) Grant the role to a program unit in the grantee’s schema
B) Revoke the role from a program unit in the grantee’s schema
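A sketch of the syntax (the role, user, and procedure names are made up):

GRANT hr_admin TO julian WITH DELEGATE OPTION;

-- julian can now attach the role to program units in his own schema:
GRANT hr_admin TO PROCEDURE julian.process_salaries;
REVOKE hr_admin FROM PROCEDURE julian.process_salaries;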


Cross-platform transportable database is not the same thing as transportable tablespace. When performing x-platform transportable database we copy the entire database, including the SYSTEM and SYSAUX tablespaces from one platform to another. The usual containment checks are no longer needed and because the SYSTEM tablespace is also being copied, no metadata datapump export/import step is required. But cross-platform transportable database can only be performed between platforms that have the same endian format.

When consolidating a large number of databases onto Exadata or SuperCluster, the work has to be automated as much as possible. When the source and the target platform share the same endian format (see the two endian groups below), then the best option is to use the transportable database method.

– Some parts of the database cannot be transported directly: redo log files and control files from the source database are not transported. New control files and redo log files are created for the new database during the transport process (alter database backup controlfile to trace resetlogs;), and an OPEN RESETLOGS is performed once the new database is created.

– BFILEs are not transported. RMAN provides a list of objects using the BFILE datatype in the output for the CONVERT DATABASE command, but users must copy the BFILEs themselves and fix their locations on the destination database. Execute DBMS_TDB.CHECK_EXTERNAL in order to identify any external tables, directories or BFILEs.

– Tempfiles belonging to locally managed temporary tablespaces are not transported. The temporary tablespace will be re-created on the target platform when the transport script is run. After opening with resetlogs, run alter tablespace TEMP add tempfile…

– External tables and directories are not transported. RMAN provides a list of affected objects as part of the output of the CONVERT DATABASE command, but users must redefine these on the destination platform. Run select DIRECTORY_NAME, DIRECTORY_PATH from DBA_DIRECTORIES and ensure that the same paths are available on the target system.

– Password files are not transported. If a password file was used with the source database, the output of CONVERT DATABASE includes a list of all usernames and their associated privileges. Create a new password file on the destination database using this information.
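Before running CONVERT DATABASE, the readiness checks can be scripted with the DBMS_TDB package; the target platform name below is just an example, and the database must be open read-only for these checks:

SET SERVEROUTPUT ON
DECLARE
  ready BOOLEAN;
BEGIN
  -- can the whole database be transported to the target platform?
  ready := DBMS_TDB.CHECK_DB('Linux x86 64-bit', DBMS_TDB.SKIP_NONE);
  -- list external tables, directories and BFILEs that must be handled manually
  ready := DBMS_TDB.CHECK_EXTERNAL;
END;
/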

86. Transportable tablespaces: implicit use is supported in the BRSPACE function “-f dbcreate” (Note 748434) and in the “Tablespace Point in Time Recovery” function of BRRECOVER. Explicit use as part of system copying is tolerated.

87. Transportable database: Can be used (Note 1367451).

And finally, here is some more information on how to migrate your databases to Oracle Engineered Systems:


Building database services based on Exadata, Oracle Enterprise Manager and Oracle Database 12c is a powerful combination, especially if implemented properly and by skillful DBAs.

A recent article in Forbes explains why Database as a Service (DBaaS) will be the breakaway technology of 2014. A 451 Research report estimated that DBaaS providers generated revenue of $150 million in 2012, and that this revenue will grow at a compound annual growth rate of 86% to reach $1.8 billion by 2016.

Paraphrasing “Animal Farm” author George Orwell, Oracle’s Alexander Wolfe stated that some DBaaS offerings provide a lot more services than others. I would like to clarify why this is indeed true.

What is DBaaS?

Kellyn Pot’vin from Enkitec says that Database as a Service (DBaaS) is an architectural and operational approach enabling DBAs to deliver database functionality as a service to internal and/or external customers.

According to scaledb.com, Database-as-a-Service (DBaaS) is a service that is managed by a cloud operator (public or private) that supports applications, without the application team assuming responsibility for traditional database administration functions.

Techopedia says that Database-as-a-Service (DBaaS) is a cloud computing service model that provides users with some form of access to a database without the need to set up physical hardware, install software, or configure for performance.

Kellyn’s definition is the way I understand it, based on my experience at least. The other two definitions make me feel like one could just as easily define Compression-as-a-Service or Temporary-tablespace-as-a-Service.

Regardless of how the essence of DBaaS is put into words, it is all about simplifying, enhancing, and automating database provisioning, monitoring, administration, security, and operational efficiency. In short: centralizing and harmonizing database administration.

Although I wrote “simplifying” above, it really means reducing complexity and having a simplification plan, rather than ending up with an elementary and transparent database environment. I have seen that this is doable, though mostly in PowerPoint presentations.

Now, the reference architecture of “Database as a Service” can be found here. But most of the white papers and reference notes that one can find on the web are written for decision makers, not for implementers. The reality, though, is that DBaaS is DBA driven. Regardless of how cunning a company’s plan to implement DBaaS is, it is still very much up to the ones who implement the service: their skills, practical knowledge, and experience. So, here is the DBA cookbook.

A very experienced DBA team can offer many more services than another. Highly experienced database architects can enable businesses to deploy new databases quickly, securely, and cheaply. What is part of the service varies, and the business is often not even aware of what to request.

The long list of what can be included in the service and how the implementation is handled is not part of this blog post but all database experts know that if implemented properly, DbaaS will lead to:

So, why do some DBaaS offerings provide a lot more services than others? The Forbes article sums up the flexibility inherent in the DBaaS model by applying the “Burger King” analogy: DBaaS lets you have it your way. And that comes with its pros and cons. In order to always have the upper hand, I try to follow some simple principles:

– no more than 2 database versions, including patchset levels
– databases should not run on more than 2 different operating systems
– at least 2 environments for every database: never end up with just production
– standby database/replica/ADG for every mission critical database
– 2 OEM environments: either PROD and non-PROD or primary and secondary data center
– 2 DBAs to verify every important DB change
– at minimum 2 RMAN catalogs: one for PROD and one for non-PROD
– do not mix 2 databases based on different COTS software (like SAP and Siebel for example)

In a DBaaS, all databases are equal; however, for the business, some databases are more important than others. DBAs are aware of this, and they know how to handle this nontrivial complexity.


The Database Configuration Assistant (DBCA) and the “CREATE DATABASE” statement are the 2 possible ways to create a container database in 12c.

In Oracle 12.1.0, ENABLE_PLUGGABLE_DATABASE is a bootstrap init.ora parameter used to create a CDB: it lets a database in NOMOUNT startup mode know that the database should be created as a CDB. One CDB can have up to 252 pluggable databases. Enough for a start, I would say.

The parameter ENABLE_PLUGGABLE_DATABASE must be set in init.ora before creating a CDB. The default is FALSE but in future Oracle releases all databases will probably be created as container databases.

In a CDB, the DB_NAME parameter specifies the name of the root. You may want to set the SID to the name of the root, although on the screenshot above you see that they are different. The maximum number of characters for this name is 30.

You can create the database manually using the create database command as follows:
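A minimal sketch of such a statement (the passwords, file paths, and sizes are illustrative, not a template to copy as-is):

CREATE DATABASE cdb1
  USER SYS IDENTIFIED BY mypassword
  USER SYSTEM IDENTIFIED BY mypassword
  EXTENT MANAGEMENT LOCAL
  DEFAULT TEMPORARY TABLESPACE temp
  UNDO TABLESPACE undotbs1
  ENABLE PLUGGABLE DATABASE
    SEED
      FILE_NAME_CONVERT = ('/u01/oradata/cdb1/', '/u01/oradata/cdb1/pdbseed/')
  USER_DATA TABLESPACE user_data
    DATAFILE '/u01/oradata/cdb1/pdbseed/user_data01.dbf' SIZE 200M AUTOEXTEND ON;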

The USER_DATA tablespace created above is for storing user data and database options such as Oracle XML DB (a must-install in 12c). PDBs created using the seed include this tablespace and its data file. The USER_DATA tablespace is not used by the root.

Afterwards, run the following script as sysdba:

@?/rdbms/admin/catcdb.sql

As several people have already noticed, the script is missing from the admin directory. There is an open bug about this: Bug 17033183 : MISSING FILE CATCDB.SQL IN $ORACLE_HOME/RDBMS/ADMIN.

It installs all of the components required by a CDB: PL/SQL packages in the root, etc.

Note that every PDB has its own SYSTEM and SYSAUX tablespace that differs from those of the root.

For a CDB, you can configure Oracle Enterprise Manager Database Express (EM Express) for the root and for each PDB by setting the HTTP or HTTPS port. You must use a different port for every container in a CDB.

Here is what I ran in order to get to the image shown above:

exec DBMS_XDB_CONFIG.SETHTTPSPORT(5501);

I have been using EM DB Express for more than a year, starting with the first beta version of 12c. If you cannot get the GUI running, then just restart the database after executing the above command.
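For a PDB, switch to its container first and pick a port that no other container uses (the PDB name and port number below are examples):

ALTER SESSION SET CONTAINER = pdb1;
EXEC DBMS_XDB_CONFIG.SETHTTPSPORT(5502);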

After you create a CDB, it consists of really not much. Just the root and the seed! The root contains almost no user data. The user data resides in the PDBs.

Therefore, after creating a CDB, one of the first tasks is to add all the PDBs. Database consolidation and simplification. You name it!

There are 4 methods for adding PDBs to a CDB:

• Create new PDB from PDB$SEED pluggable database
• Plug in a non-CDB
• Clone a PDB from another PDB into the same or another CDB
• Plug an unplugged PDB into another CDB
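As a sketch of the first method, a PDB can be created from the seed with something like this (the PDB name, admin user, and paths are made up):

CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdb1admin IDENTIFIED BY mypassword
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/pdb1/');

ALTER PLUGGABLE DATABASE pdb1 OPEN;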

If you look on the left, under memory settings, you will see a new init.ora parameter: PGA_AGGREGATE_LIMIT. It is a real hard limit on PGA memory usage! If the value is reached, then Oracle aborts or terminates the sessions or processes that are consuming the most untunable PGA memory in the following order:

1. Calls for sessions that are consuming the most untunable PGA memory are aborted.

2. If PGA memory usage is still over the PGA_AGGREGATE_LIMIT, then the sessions and processes that are consuming the most untunable PGA memory are terminated.
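The limit can be inspected and adjusted like any other init.ora parameter; the 4G value below is just an example:

SHOW PARAMETER pga_aggregate_limit

ALTER SYSTEM SET pga_aggregate_limit = 4G SCOPE = BOTH;

-- how much PGA is actually in use
SELECT name, ROUND(value/1024/1024) AS mb
FROM   v$pgastat
WHERE  name IN ('total PGA allocated', 'maximum PGA allocated');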

P.S. One of my 4 sessions this September at Oracle OpenWorld will be “DBA Best Practices for Performance Tuning in a Pluggable World”.

The article says: “The new approach is embodied in a technology strategy pioneered by Oracle and recently endorsed/followed by IBM (although IBM’s effort to date is rather modest): building richly integrated and fully optimized systems from the ground up, with hardware and software expressly created to work together to deliver maximum performance.”

And as you might guess from the image above, this time I am not only after the technical benefits and advantages of Exadata. I would like to clarify what they bring to business. And see how Oracle Exadata compares to IBM P-Series.

• IBM 3 year TCO is 31% higher than Oracle.
• Exadata can be deployed more quickly and easily requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
• Exadata requires 40% fewer sysadmin hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
• Exadata delivers dramatically higher performance, typically up to a 12x improvement over the prior solution, as described by customers.
• Exadata will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
• Exadata supplies a highly available, highly scalable, and robust solution with reserve capacity that makes it easier for IT to operate, because IT administrators can manage proactively, not reactively.

Overall, Exadata operations and maintenance keep IT administrators from “living on the edge.” And it’s pre-engineered for long-term growth.

I personally think that the benefits of Exadata are even bigger, provided the system is properly configured, which I see is not always the case; but as I said, I will not comment on technical issues this time.

But after all, this is a DBA blog, so this part of the research might be of interest for most DBAs:

“For this emerging Database Machine Administrator (DMA) job category, IT employees are cross-trained to handle tasks currently undertaken by admin specialists in hardware, operating systems, network, applications or storage. IT managers who pursue this adaptive path likely will gain operational efficiencies for managing packaged solutions, although it may take several years as IT administrators are trained in new competencies.

The emergence of the DMA also may help restructure IT departments into more efficient operations, but the full benefits of this development cannot be fully realized until most older systems that demand a stove-piped IT organization are decommissioned and IT organizations adapt. At that time, IT operations managers may be able to reduce headcount. In time, packaged solutions should involve not only fewer workers but also fewer IT groups, which should reduce costs; in the meantime IT will be able to do more without adding headcount.”

That is very important! Let me quote here Paul Vallee, who in a recent discussion predicted that in the near future organizations will need few but very skillful DBAs, an opinion I 100% agree with!

“This change in job roles is not necessarily comfortable for everyone in IT because Exadata marginalizes various system administrators as it empowers the DBA: “The DBAs are doing more hardware tasks and diagnostics because most of the Exadata stuff is geared around database commands, not hardware commands or operating system commands. The gearheads have designed Exadata from the DBA’s perspective—when you look at the sys admin portion, it’s all written by a DBA, not by a Sys Admin,” lamented a System Administrator at a Business Services Co.

Other System Administrators have expressed similar sentiments as many of their traditional responsibilities shift towards the DBA—the source of much of the operational savings we have identified.”

“The biggest disadvantage is that you are adding more complexity to your database architecture. With more complexity comes a higher cost in maintaining and administering the database along with a higher chance that something will go wrong.

The second biggest disadvantage is the cost associated with RAC. Oracle is touting RAC on Linux as a way to achieve cost savings over large Unix servers. With RAC, the costs shift from hardware to software as you need additional Oracle license fees. The big question is will this shifting of costs result in any cost savings. In some cases, yes, and in other cases, no.”

Which would be the third big disadvantage of RAC? I say the bugs! Or let me put it more mildly: RAC just develops random features. And hunting errors in RAC is complex, right? On top of ORA-600, we now have even ORA-700.

Let me offer you some quotes from the RAC debate on FlashDBA.com:

“Then there are the younger DBAs looking to gain more experience, who may say that RAC is a great thing. Secretly that might not necessarily be true but they want the experience.”

“We also see that there are “no application changes necessary”. I have serious doubts about that last statement, as it appears to contradict evidence from countless independent Oracle experts.”

“Complexity is the enemy of high availability – and RAC, no matter how you look at it, adds complexity over a single-instance implementation of Oracle.”

“At no time do I ever remember visiting a customer who had implemented the various Transparent Application Failover (TAF) policies and Fast Application Notification (FAN) mechanisms.”

So, is Database Virtualization the answer?

I think that database virtualization is a concept that has been misunderstood and, most of all, wrongly defined by many. According to the Oxford Dictionaries, in computing, virtual means something that does not physically exist as such but is made by software to appear to do so.

Decide for yourself, then, what a virtual database is. At least, it is not a database built in a virtual server! I discussed this at the Oracle ACE Director Product Briefing at Oracle headquarters this week with some of the world’s top database experts, and what can I say: the topic is highly controversial.


Julian is the Global Database Lead of Accenture. His primary responsibility is managing and leading the Global Oracle Technology Practice which includes Autonomous Cloud, IaaS, PaaS, Database Services, Engineered Systems, Java, Middleware, Security and all other areas falling under Oracle Technology. He is also the Accenture-Enkitec Group Managing Director for ...