Hi
I'm on 12.2 EE on Windows Server 2016.
I have the following SQL, which selects from a 550 GB table (yes, it really is GB, due to massive GDPR logging):
<code>
create table GFAUDIT.fga_log$_kopi_201809 as
select /*+ PARALLEL (8)*/ (select instance_name from v...

Why is this statement returning a value?
<code>select * from (
SELECT 'AAaaanders4n' name
FROM dual
)
WHERE REGEXP_LIKE (name, '^[A]{1}');</code>
I have specified {1} in REGEXP_LIKE, yet the statement still returns 'AAaaanders4n'.
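REGEXP_LIKE succeeds as soon as the pattern matches somewhere in the string: '^[A]{1}' only demands that the string start with one 'A' and says nothing about what may follow. A minimal sketch, assuming the intent is to match names starting with exactly one 'A':
<code>
-- 'AAaaanders4n' starts with an 'A', so '^[A]{1}' matches it.
-- Anchoring what follows excludes a second leading 'A':
SELECT name
FROM   (SELECT 'AAaaanders4n' name FROM dual)
WHERE  REGEXP_LIKE(name, '^A($|[^A])');   -- returns no rows
</code>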

Looking at cloud databases has me thinking about the speed of light. Wikipedia says that the speed of light is about 186,000 miles per second. If my calculations are correct, that is 5.37 microseconds per mile. The United States is about 2,680 miles wide, so it would take light about 14.4 milliseconds to cross the US.

If I ping one of my favorite web sites, it takes tens of milliseconds, which kind of makes sense because those sites are in other cities and I am going through various routers. I did some tests with my company’s storage and found that reading from our storage when the data is cached in the storage server takes around 200 microseconds. That is 200 microseconds for a round trip. I’m sure that our database servers and storage are a lot less than a mile apart, so most of that time has nothing to do with the speed of light.

I heard about a cloud vendor whose fast network connection took 100 microseconds plus the speed of light. I guess 100 microseconds is the cost of getting your data onto fiber, and light does the rest. If your cloud database were on the other side of the country, I guess it could take at least 14 milliseconds each way for each SQL request. If the cloud database were in your own city, say 10 miles away, that would only tack on about 53.7 microseconds each way on top of the 100 microseconds overhead. I guess it makes sense. Maybe 100 microseconds plus the speed of light is the cost of moving data in the best case?
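A quick query reproduces the arithmetic above (the constants are the ones from the text; TRUNC matches the rounding used in the post):
<code>
SELECT trunc(1e6 / 186000, 2)                 AS us_per_mile,     -- 5.37 microseconds per mile
       trunc(2680 * (1e6 / 186000) / 1000, 1) AS ms_across_us,    -- 14.4 milliseconds coast to coast
       trunc(10 * (1e6 / 186000), 1)          AS us_for_10_miles  -- 53.7 microseconds for 10 miles
FROM   dual;
</code>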

A TRUNCATE or DROP Partition makes Global Indexes on a Partitioned Table UNUSABLE.

You may be lucky if the target partition was empty, resulting in Oracle maintaining Global Indexes as valid. However, the accepted rule is that you either (a) use the UPDATE INDEXES clause [resulting in the TRUNCATE or DROP taking longer to run, effectively locking the table partitions] OR (b) do a REBUILD of the Indexes that become UNUSABLE after the TRUNCATE or DROP.

12c has introduced what it calls Asynchronous Global Index Maintenance. With this feature present, the TRUNCATE or DROP runs much faster as a DDL without actually removing the target rows from the Global Indexes [but still requires the UPDATE INDEXES clause to be specified].

So, I could (1) wait for the next run of the job, OR (2) manually trigger the job (which will scan the entire database for all indexes that require such maintenance), OR (3) execute DBMS_PART.CLEANUP_GIDX to initiate the maintenance for the specific index, OR (4) execute an ALTER INDEX REBUILD to make the Index USABLE again.
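A minimal sketch of options (3) and (4), with hypothetical names (MY_SCHEMA, MY_TABLE, MY_GLOBAL_IDX); the ORPHANED_ENTRIES column of USER_INDEXES shows whether an index still carries entries awaiting the deferred cleanup:
<code>
-- Option (3): clean up orphaned global index entries for one table
-- (EXEC is SQL*Plus shorthand for an anonymous block)
EXEC DBMS_PART.CLEANUP_GIDX('MY_SCHEMA', 'MY_TABLE');

-- Option (4): rebuild an index that has gone UNUSABLE
ALTER INDEX my_global_idx REBUILD;

-- Check the current state
SELECT index_name, status, orphaned_entries
FROM   user_indexes
WHERE  table_name = 'MY_TABLE';
</code>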

Generate values based on start and end columns, without using a procedure.
How should the select query be modified?
<code>select key_column, start_point, end_point FROM tab1 WHERE key_column='10254';</code>
key_column start_point end_point
10254 -2 ...
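Assuming the goal is to expand each row into every integer from start_point up to end_point, one common plain-SELECT approach is a CONNECT BY row generator, here wrapped in a 12c LATERAL inline view (tab1 and the column names are taken from the question; this assumes end_point >= start_point):
<code>
SELECT t.key_column,
       t.start_point + n.step AS generated_value
FROM   tab1 t,
       LATERAL (SELECT level - 1 AS step
                FROM   dual
                CONNECT BY level <= t.end_point - t.start_point + 1) n
WHERE  t.key_column = '10254';
</code>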

Hi Team, first of all a big thanks for your support.
Now I want to know the steps to recover a fully operational database from an RMAN backup.
I haven't done this scenario before, so I am going for a test case here.
My requirement is:
1) I have dat...

I am the admin user and can create tables and procedures in any schema. I have a few tables in Schema B which I am referencing in a package I am creating in Schema A; however, upon compiling, it does not see the tables in Schema B.
Schema B does not ha...
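The usual cause is a missing direct object grant: privileges received through a role are not visible when compiling definer's-rights PL/SQL, so Schema A needs explicit grants on each referenced table. A sketch with a hypothetical table name:
<code>
-- Run as an admin or as Schema B; a grant through a role is not enough
-- for stored PL/SQL, so grant directly to the schema that owns the package.
GRANT SELECT ON schema_b.some_table TO schema_a;

-- Inside the package owned by Schema A, reference the table with its
-- schema prefix (or create a synonym for it):
--   SELECT ... FROM schema_b.some_table ...
</code>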

I’ve recently received a number of correspondences regarding one of my most popular blog posts, dating back to February 2008: Index Rebuild vs. Coalesce vs. Shrink Space (Pigs – 3 Different Ones). In the comments section, there’s an interesting discussion where I mention: “If Oracle19 does everything for you and all the various indexes structures get […]

I have the following example:
COLUMN
19
20
26
28
29
32
33
34
I need to count the rows based on pairs: 19-20, 28-29, 32-33. I'm having difficulty checking whether a pair has already been counted or not. Any suggestions? The result should be s...
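One way to avoid double-counting (so that 33 is not used for both 32-33 and 33-34) is MATCH_RECOGNIZE, which by default resumes matching after the last row of each match; a sketch assuming the table is t and the column is n:
<code>
SELECT count(*) AS pair_count
FROM   t
MATCH_RECOGNIZE (
  ORDER BY n
  MEASURES a.n AS pair_start
  ONE ROW PER MATCH
  PATTERN ( a b )
  DEFINE b AS b.n = a.n + 1   -- a pair = two consecutive values
);
</code>
With the sample data this counts 19-20, 28-29 and 32-33 (three pairs); 33-34 is not counted because 33 was already consumed by the 32-33 match.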

Hi, I have a problem creating a dump with SQL Developer; the generated PL/SQL is:
<code>
set scan off
set serveroutput on
set escape off
whenever sqlerror exit
DECLARE
h1 number;
s varchar2(1000):=NULL;
errorvarchar varchar2(1...

Hi
I have the following PL/SQL code. If run the 1st time, it works fine; running it a 2nd or 3rd time, it fails with a "Subscript beyond count" error.
If I make the declaration of g_response private to the procedure (not global in the package), it works...
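Without the full code this is only the classic pattern, but a package-level variable keeps its state for the entire session, so a second run can index into (or past) leftovers from the first. A sketch of the usual fix, with hypothetical names:
<code>
CREATE OR REPLACE PACKAGE my_pkg AS
  TYPE t_response_tab IS TABLE OF VARCHAR2(4000);
  g_response t_response_tab := t_response_tab();  -- package state: survives between calls
  PROCEDURE do_call;
END my_pkg;
/
CREATE OR REPLACE PACKAGE BODY my_pkg AS
  PROCEDURE do_call IS
  BEGIN
    -- Reset the collection so a 2nd/3rd run does not address
    -- subscripts left over from the previous invocation.
    g_response := t_response_tab();
    g_response.EXTEND;
    g_response(g_response.COUNT) := 'first element';
  END do_call;
END my_pkg;
/
</code>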

Providing fast and flexible analysis of any data from any source is a business requirement these days. Oracle Analytics Cloud is a cloud-first analytics platform, built on the industry-leading Oracle...

Set up this simple scheduler job as SYSDBA to purge the objects in the recycle bin. This is one of the most space-consuming locations that DBAs often forget to clean up, and the objects get piled up, occupying a lot of space. Based on how long you want to keep these dropped objects, set up a job under the scheduler to run the PL/SQL block below either daily, weekly or monthly.
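A minimal sketch of such a job (the job name and the weekly schedule are only examples; adjust the repeat interval to your retention needs):
<code>
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_DBA_RECYCLEBIN_JOB',  -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN EXECUTE IMMEDIATE 'PURGE DBA_RECYCLEBIN'; END;]',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=2',
    enabled         => TRUE,
    comments        => 'Purge dropped objects from the recycle bin');
END;
/
</code>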

The database object must reside in your own schema, or you must have the DROP ANY ... system privilege for the type of object to be purged, or you must have the SYSDBA system privilege. To perform the PURGE DBA_RECYCLEBIN operation, you must have the SYSDBA or PURGE DBA_RECYCLEBIN system privilege.

Suppose you have an on-premises Oracle database, which you have now moved over to the AWS Cloud as AWS Aurora PostgreSQL.

For your Oracle database, you have been using v$ views to monitor the runtime performance of your instance: long-running operations, top SQL from ASH, blocking, and so on. How do you continue doing that when you migrate your database to the cloud, especially to AWS Aurora PostgreSQL?

Well, PostgreSQL provides statistics collection views, a subsystem that collects runtime dynamic information about certain server activities, such as statistical performance information. For example, you can use the pg_stat_activity view to check for long-running queries.
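For instance, something along these lines lists sessions whose current statement has been running for a while (the 5-minute threshold is arbitrary):
<code>
SELECT pid,
       now() - query_start AS runtime,
       state,
       query
FROM   pg_stat_activity
WHERE  state <> 'idle'
AND    now() - query_start > interval '5 minutes'
ORDER  BY runtime DESC;
</code>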

There are various other statistics views in PostgreSQL too, such as pg_stat_all_tables, which shows how each table is accessed (full table scans versus index scans), approximate row counts, and so on. There are other views to check I/O on tables and indexes, and a plethora of others.
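For example, a quick look at sequential (full) scans versus index scans per table; a large seq_scan count on a big table is often worth investigating:
<code>
SELECT relname,
       seq_scan,     -- number of sequential (full table) scans
       idx_scan,     -- number of index scans
       n_live_tup    -- approximate number of live rows
FROM   pg_stat_all_tables
WHERE  schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER  BY seq_scan DESC
LIMIT  20;
</code>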

In addition to these statistics views, Aurora PostgreSQL provides a nifty tool called Performance Insights. Performance Insights monitors Amazon RDS and Aurora databases (both MySQL and PostgreSQL) and captures workloads so that you can analyze and troubleshoot database performance. Performance Insights visualizes the database load and provides very useful filtering by various attributes, such as waits, SQL statements, hosts, or users.

As part of operational excellence, it's imperative after a database migration that performance is monitored, documented, and continuously improved. Performance Insights and the statistics views are great for proactive and reactive database tuning in AWS RDS and AWS Aurora.

Hi
I am new to Oracle and not sure how to provide the LiveSQL link.
I have 2 tables to join
huge_table contains about 1 billion rows
big_table contains about 100 million rows
and a small table contains 999 rows providing the condition to fil...