Hello,
One of our clients has upgraded a Test environment Oracle database from 10.2.0.5.0 to 12.1.0.2.0 with Production data.
After upgrading the database, we are facing performance issues in a few processes, which are taking a huge amount of time to complete com...

Hello.
I need to export an Oracle schema using exp, including the "CREATE USER" statement.
The only way I know to do that is to perform a full export of the database. The problem is that in my database there are a lot of users and th...

Thomas, happy holidays. I have some questions related to RMAN:
1. What is the difference between
backing up the current control file and
backing up a control file copy (BACKUP CONTROLFILECOPY)?
2. When using either of the above commands, tag and ...
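For reference, the two operations being contrasted look roughly like this in RMAN (the file path below is illustrative only):

# back up the control file currently in use by the instance
RMAN> BACKUP CURRENT CONTROLFILE;

# first create an image copy of the control file, then back up that copy
RMAN> BACKUP AS COPY CURRENT CONTROLFILE FORMAT '/backup/cf_copy.ctl';
RMAN> BACKUP CONTROLFILECOPY '/backup/cf_copy.ctl';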

Verifying upgrading prerequisites ...
*** WARNING: Ensure that each Oracle VM Server for x86 has at least 200MB of available space for the /boot partition and 3GB of available space for the / partition.
*** WARNING: Recommended memory for the Oracle VM Manager server installation using Local MySql DB is 7680 MB RAM

Starting Upgrade ...

Reading database parameters from config ...

==========================
Typically the current Oracle VM Manager database password will be the same as the Oracle VM Manager application password.

Running full database backup ...
Successfully backed up database to /u01/app/oracle/mysql/dbbackup/3.4.4_preUpgradeBackup-20190125_051150
Running ovm_preUpgrade script, please be patient this may take a long time ...
Exporting weblogic embedded LDAP users
Stopping service on Linux: ovmcli ...
Stopping service on Linux: ovmm ...
Exporting core database, please be patient this may take a long time ...
NOTE: To monitor progress, open another terminal session and run: tail -f /var/log/ovmm/ovm-manager-3-install-2019-01-25-051107.log

Passwords:
There are no default passwords for any users. The passwords to use for Oracle VM Manager, Database, and Oracle WebLogic Server have been set by you during this installation. In the case of a default install, all passwords are the same.

Oracle VM Manager UI: https://oraVMManager.fritz.box:7002/ovm/console
Log in with the user 'admin', and the password you set during the installation.

For more information about Oracle Virtualization, please visit: http://www.oracle.com/virtualization/

3.2.10/3.2.11 Oracle VM x86 Servers and SPARC agent 3.3.1 managed Servers are no longer supported in Oracle VM Manager 3.4. Please upgrade your Server to a more current version for full support.
For instructions, see the Oracle VM 3.4 Installation and Upgrade guide.

Navigation is critical to any business application. Classic used breadcrumbs for navigation. As I'm sure you noticed, Fluid is different, using Tiles and Homepages as the starting point for application navigation. "In Fluid, tiles and homepages represent the primary navigation model, replacing Classic's breadcrumb menu."

In Classic, breadcrumb navigation is managed by administrators. It is fixed, not variable, not personalizable. Users cannot personalize Classic navigation (other than creating favorites). Did I say Fluid is different? Yes. Fluid gives users significant control over their navigational view by allowing them to personalize tiles and homepages. This can cause significant problems, with users removing tiles that represent critical business functions. There are a few solutions for this problem (disable personalization, mark tiles as required, etc.; see Section 4 of Simon's blog post for ideas). What I want to focus on is confusion regarding optional default tiles, where an optional default tile doesn't default onto a homepage. Here is the scenario:

A homepage already exists

As an administrator, you configure a new tile as Optional Default

After configuring the homepage, all users that have NOT personalized will see the tile. Put another way, any user that has personalized the homepage will not see the new tile (and a simple accidental drag and drop will result in a personalization). Here is what users that personalize will see:

If it is optional default, what happened to the default part? When users personalize their homepages, PeopleSoft clones the current state of the homepage into a user table. Let's say Tom and Jill both personalize their home pages. Tom will now have a personalized copy of the default configuration and Jill will have an entirely different personalized copy.

Administrators will continue to insert optional default content into homepages, but Tom and Jill will not see those optional default tiles. Tom and Jill's homepages are now detached from the source. We can push optional default tiles into Tom's and Jill's copies by using the Tile Publish button available to each homepage content reference (in the portal registry). This App Engine program inserts a row for each optional default tile into each user's copy of the homepage metadata.

Pretty clear and straightforward so far? OK, let's make it more complicated. Let's say an administrator adds a new optional default tile to the default homepage described above and presses the Publish Tile button. After the App Engine runs, the administrator notices Tom sees the tile, but Jill does not. What went wrong? If Jill doesn't have security access to the tile's target, Jill won't see the new tile. Let's say Jill is supposed to have security access, so we update permissions and roles. We check Jill's homepage again. Does Jill see the tile? No. Why not? When we published the tile, Jill did not have security access, so PeopleSoft didn't insert a row into Jill's personalization metadata. How can we make this tile appear for Jill? We could publish again. If we recognize and resolve the security issue immediately after publishing, this may be reasonable.

Let's play out this scenario a little differently. Some time has passed since we published. Tom has seen and removed the new tile from his homepage. One day Tom is at the water cooler talking about this annoying new tile that just appeared one day so he removed it. Jill overhears Tom and logs in to look for this annoying tile. After some searching, however, she doesn't see it on her homepage. She calls the help desk to find out why she doesn't have access to the annoying tile (that she will probably remove after seeing it). This is when you discover the security issue and make the tile available to Jill. For Jill to see this tile as a default, however, you will need to republish the tile. When you republish the tile, what will happen to Tom's homepage? Yes, you guessed it. Tom will see the tile appear again and will likely call the help desk to complain about the annoying tile that just reappeared.

What's the solution? At this time there is no delivered, recommended solution. The App Engine is very short, containing a couple of SQL statements. Using it as a guide, it is trivial to write a one-off metadata insert for Jill and all others affected by the security change without affecting Tom. When writing SQL inserts into PeopleTools tables, however, we must consider cache, version increments, and many other risk factors (I probably would not do this). I would say it is safer to annoy Tom.

--

Jim is the Principal PeopleTools instructor at JSMPROS. Take your PeopleTools skills to the next level by scheduling PeopleTools training with us today!

A SQL query to remove all the special characters from a particular column of a table.
For example:
Input: ABC -D.E.F
Output: ABC DEF
And if there are two words, like
ABC PRIVATE LTD
one space should be maintained between the two words.
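One possible approach is a nested REGEXP_REPLACE (a sketch; my_table and col_name are placeholders): strip everything that is not a letter, a digit or a space, then collapse any run of spaces into a single space:

SELECT TRIM(REGEXP_REPLACE(
         REGEXP_REPLACE(col_name, '[^A-Za-z0-9 ]', ''),  -- remove special characters
         ' {2,}', ' '))                                   -- keep one space between words
  FROM my_table;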

Oracle E-Business Suite is often extended, customized and integrated with third-party products. We highly recommend that you follow our published guidance for performing extensions, customizations and third-party integrations.

In addition to these guidelines, you should also follow our equivalent recommendations for granting access to Oracle E-Business Suite database objects. Our guidance for accessing an EBS database takes a least-privileges approach to providing access to database objects. The database access recommendations complement our guidance for deploying customizations, extensions and third-party integrations.

The blog post was very popular, touching on the subjects of Big Data and Data Streaming. To put my own twist on it, I decided to:

not use Twitter as my data source, because there surely must be other interesting data sources out there,

use Scala, my favourite programming language, to see how different the experience is from using Python.

Why Scala?

Scala is admittedly more challenging to master than Python. However, because Scala compiles into Java bytecode, it can be used pretty much anywhere Java is used. And Java is used everywhere. Python is arguably even more widely used than Java; however, it remains a dynamically typed scripting language that is easy to write in but can be hard to debug.

Is there a case for using Scala instead of Python for the job? Both Spark and Kafka were written in Scala (and Java), hence they should get on like a house on fire, I thought. Well, we are about to find out.

When it comes to finding sample data sources for data analysis, the selection out there is amazing. At the time of this writing, Kaggle offers 14,470 freely available datasets, many of them in easy-to-digest formats like CSV and JSON. However, when it comes to real-time sample data streams, the selection is quite limited. Twitter is usually the go-to choice - easily accessible and well documented. Too bad I decided not to use Twitter as my source.

Another alternative is the Wikipedia Recent changes stream. Although in the stream schema there are a few values that would be interesting to analyse, overall this stream is more boring than it sounds - the text changes themselves are not included.

Fortunately, I came across the OpenWeatherMap real-time weather data website. They have a free API tier, limited to one request per second - quite enough for tracking changes in weather. Their different API schemas return plenty of numeric and textual data, all interesting for analysis. The APIs work in a very standard way: first you apply for an API key; with the key you can query the API with a simple HTTP GET request (apply for your own API key instead of using the sample one - it is easy).
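In Scala, for instance, the request can be as simple as this (a sketch; the endpoint shown is illustrative, and YOUR_API_KEY stands for your own key):

import scala.io.Source

val apiKey = "YOUR_API_KEY"
// Current weather for one city, returned as a raw JSON string.
val url = s"https://api.openweathermap.org/data/2.5/weather?q=London,uk&appid=$apiKey"
val response = Source.fromURL(url).mkString
println(response)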

There are several options for getting your data into a Kafka topic. If the data will be produced by your application, you should use the Kafka Producer Java API. You can also develop Kafka Producers in .NET (usually C#), C, C++, Python, and Go. The Java API can be used from any programming language that compiles to Java bytecode, including Scala. Moreover, there are Scala wrappers for the Java API: skafka by Evolution Gaming and Scala Kafka Client by cakesolutions.
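For reference, a minimal Scala producer built on the Java API looks like this (a sketch; the broker address, topic name and payload are my assumptions):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
// Key: the city name; value: the raw weather JSON payload.
producer.send(new ProducerRecord[String, String]("weather", "London", "{...}"))
producer.close()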

OpenWeatherMap is not my application, and what I need is integration between its API and Kafka. I could cheat and implement a program that consumes OpenWeatherMap's records and produces records for Kafka. The right way of doing it, however, is to use a Kafka Source connector, for which there is an API: the Connect API. Unlike Producers, which can be written in many programming languages, for Connectors I could only find a Java API. I could not find any nice Scala wrappers for it. On the upside, Confluent's Connector Developer Guide is excellent - rich in detail, though not quite a step-by-step cookbook.

However, before we decide to develop our own Kafka connector, we must check for existing connectors. The first place to go is Confluent Hub. There are quite a few connectors there, complete with installation instructions, ranging from connectors for particular environments like Salesforce, SAP, IRC and Twitter, to ones integrating with databases like MS SQL and Cassandra. There is also a connector for HDFS and a generic JDBC connector. Is there one for HTTP integration? It looks like we are in luck: there is one! However, this connector turns out to be a Sink connector.

Ah, yes, I should have mentioned - there are two flavours of Kafka Connectors: the Kafka-inbound ones are called Source Connectors and the Kafka-outbound ones are Sink Connectors. And the HTTP connector in Confluent Hub is Sink only.

Googling for Kafka HTTP Source Connectors yields few interesting results. The best I could find was Pegerto's Kafka Connect HTTP Source Connector. Contrary to what the repository name suggests, the implementation is quite domain-specific - it extracts stock prices from particular websites - and it has very little error handling. Searching Scaladex for 'Kafka connector' does yield quite a few results, but nothing for HTTP. However, there I found Agoda's nice and simple JDBC Source connector (though for a very old version of Kafka), written in Scala. (Do not use this connector for JDBC sources; instead use the one by Confluent.) I can use it as an example to implement my own.

Creating a custom Kafka Source Connector

The best place to start when implementing your own Source Connector is the Confluent Connector Development Guide. The guide uses JDBC as an example. Our source is an HTTP API, so early on we must establish whether our data source is partitioned, whether we need to manage offsets for it, and what the schema is going to look like.

Partitions

Is our data source partitioned? A partition is a division of source records that usually depends on the source medium. For example, if we are reading our data from CSV files, we can consider the different CSV files to be a natural partition of our source data. Another example of partitioning could be database tables. But in both cases the best partitioning approach depends on the data being gathered and its usage. In our case, there is only one API URL and we are only ever requesting current data. If we were to query weather data for different cities, that would make a very good partitioning key - by city. Partitioning would allow us to parallelise the Connector's data gathering - each partition would be processed by a separate task. To make my life easier, I am going to have only one partition.
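For illustration only - had we partitioned by city, each source record would carry a small partition map like the following (hypothetical; our single-partition connector does not need it):

import java.util.Collections

// A Kafka Connect source partition is just a map describing which
// slice of the source a record came from.
val sourcePartition: java.util.Map[String, String] =
  Collections.singletonMap("city", "London")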

Offsets

Offsets are for keeping track of the records already read and the records yet to be read. An example of that is reading data from a file that is continuously being appended to - there can be rows already inserted into a Kafka topic, and we do not want to process them again, to avoid duplication. Why would that be a problem? Surely, when going through a source file row by row, we know which row we are looking at: anything above the current row is processed; anything below is new records. Unfortunately, most of the time it is not as simple as that. First of all, Kafka supports concurrency, meaning there can be more than one Task busy processing source records. Another consideration is resilience: if a Kafka Task process fails, another process will be started up to continue the job. This can be an important consideration when developing a Kafka Source Connector.

Is it relevant for our HTTP API connector? We are only ever requesting current weather data. If our process fails, we may miss some time periods, but we cannot recover them later on. Offset management is not required for our simple connector.

So that is Partitions and Offsets dealt with. Can we make our lives just a bit more difficult? Fortunately, we can. We can create a custom Schema and then parse the source data to populate a Schema-based Structure. But we will come to that later.
First let us establish the Framework for our Source Connector.

Source Connector - the Framework

The starting point for our Source Connector is two Java API classes: SourceConnector and SourceTask. We will put them into separate .scala source files, but they are shown here together.
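Here is a minimal sketch of the pair (HttpSourceConnector, HttpSourceTask and HttpSourceConnectorConfig follow the post's naming; the configDef and connectorProperties members are my assumptions):

import java.util.{List => JavaList, Map => JavaMap}
import scala.collection.JavaConverters._
import org.apache.kafka.common.config.ConfigDef
import org.apache.kafka.connect.connector.Task
import org.apache.kafka.connect.source.{SourceConnector, SourceRecord, SourceTask}
import org.slf4j.LoggerFactory

class HttpSourceConnector extends SourceConnector {
  private val logger = LoggerFactory.getLogger(classOf[HttpSourceConnector])

  // Defined at class level; assigned in start() and read in taskConfigs().
  private var connectorConfig: HttpSourceConnectorConfig = _

  override def version(): String = "0.0.1"

  // Tells the framework which Task class to instantiate.
  override def taskClass(): Class[_ <: Task] = classOf[HttpSourceTask]

  override def config(): ConfigDef = HttpSourceConnectorConfig.configDef

  // Called once, upon creation of the Connector instance (not the Task).
  override def start(props: JavaMap[String, String]): Unit =
    connectorConfig = new HttpSourceConnectorConfig(props.asScala.toMap)

  // No HTTP logout/sign-off is required - just log the shutdown.
  override def stop(): Unit = logger.info("Stopping the HTTP Source Connector")

  // One configuration per Task; we only ever run a single task.
  override def taskConfigs(maxTasks: Int): JavaList[JavaMap[String, String]] =
    List(connectorConfig.connectorProperties.asJava).asJava
}

class HttpSourceTask extends SourceTask {
  override def version(): String = "0.0.1"
  override def start(props: JavaMap[String, String]): Unit = ()   // fleshed out below
  override def stop(): Unit = ()
  override def poll(): JavaList[SourceRecord] = null              // fleshed out below
}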

Two functions to be overridden here are start and stop. These are called upon the creation and termination of a SourceConnector instance (not a Task instance). JavaMap here is an alias for java.util.Map - a Java Map, not to be confused with the native Scala Map, which cannot be used here. (If you are a Python developer: a Map in Java/Scala is similar to a Python dictionary, but strongly typed.) The interface requires Java data structures, but that is fine - we can convert between the two. By far the biggest problem here is the assignment of the connectorConfig variable - we cannot have a functional-programming-friendly immutable value here. The variable is defined at the class level

private var connectorConfig: HttpSourceConnectorConfig = _

and is set in the start function and then referred to in the taskConfigs function further down. This does not look pretty in Scala. Hopefully somebody will write a Scala wrapper for this interface.

Because there is no logout/shutdown/sign-out required for the HTTP API, the stop function just writes a log message.

HttpSourceConnectorConfig is a thin wrapper class for the configuration.
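A sketch of what such a wrapper might look like (the property key names are my assumptions):

class HttpSourceConnectorConfig(val connectorProperties: Map[String, String]) {
  // Typed accessors for the settings the connector needs.
  val apiUrl: String       = connectorProperties("http.api.url")
  val apiKey: String       = connectorProperties("http.api.key")
  val topic: String        = connectorProperties("kafka.topic")
  val pollIntervalMs: Long = connectorProperties.getOrElse("poll.interval.ms", "1000").toLong
}

object HttpSourceConnectorConfig {
  import org.apache.kafka.common.config.ConfigDef

  // Advertises the accepted settings to the Connect framework.
  val configDef: ConfigDef = new ConfigDef()
    .define("http.api.url", ConfigDef.Type.STRING, ConfigDef.Importance.HIGH, "Weather API URL")
    .define("http.api.key", ConfigDef.Type.PASSWORD, ConfigDef.Importance.HIGH, "Weather API key")
    .define("kafka.topic", ConfigDef.Type.STRING, ConfigDef.Importance.HIGH, "Target Kafka topic")
    .define("poll.interval.ms", ConfigDef.Type.LONG, 1000L, ConfigDef.Importance.MEDIUM, "Poll interval")
}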

We are almost done here. The last function to be overridden is taskConfigs.
This function is in charge of producing (potentially different) configurations for different Source Tasks. In our case, there is no reason for the Source Task configurations to differ. In fact, our HTTP API will benefit little from parallelism, so, to keep things simple, we can assume the number of tasks always to be 1.

The name of the taskConfigs function changed in Kafka version 2.1.0 - please bear that in mind when using this code with older Kafka versions.

Source Task class

In a similar manner to the Source Connector class, we extend the SourceTask abstract class. It is only slightly more complex than the Connector class.

Just like for the Connector, there are start and stop functions to be overridden for the Task.

Remember the taskConfigs function from above? This is where the task configuration ends up: it is passed to the Task's start function. Also, similarly to the Connector's start function, we parse the connection properties with HttpSourceTaskConfig, which is the same as HttpSourceConnectorConfig - the configuration for the Connector and the Task is the same in our case.

We also set up the HTTP service that we are going to use in the poll function - we create an instance of the WeatherHttpService class. (Please note that start is executed only once, upon the creation of the task, and not every time a record is polled from the data source.)
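Putting the pieces together, the Task might look like this (a sketch; WeatherHttpService's constructor arguments and its fetchCurrentWeather method are assumed names):

import java.util.{Collections, List => JavaList, Map => JavaMap}
import scala.collection.JavaConverters._
import org.apache.kafka.connect.data.Schema
import org.apache.kafka.connect.source.{SourceRecord, SourceTask}

class HttpSourceTask extends SourceTask {
  private var taskConfig: HttpSourceTaskConfig = _
  private var weatherService: WeatherHttpService = _

  override def version(): String = "0.0.1"

  // Executed once, when the Task is created - not on every poll.
  override def start(props: JavaMap[String, String]): Unit = {
    taskConfig = new HttpSourceTaskConfig(props.asScala.toMap)
    weatherService = new WeatherHttpService(taskConfig.apiUrl, taskConfig.apiKey)
  }

  // Fetch the current weather JSON and pass it through as a plain string.
  override def poll(): JavaList[SourceRecord] = {
    val inputString = weatherService.fetchCurrentWeather()
    Collections.singletonList(
      new SourceRecord(
        null, null,            // no source partitions or offsets (see above)
        taskConfig.topic,      // destination topic
        Schema.STRING_SCHEMA,  // pre-defined passthrough schema
        inputString))
  }

  // Nothing to release for a stateless HTTP client.
  override def stop(): Unit = ()
}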

Here we say we just want to use the pre-defined STRING_SCHEMA value as our schema definition. And pass inputString straight to the output, without any alteration.

Looks too easy, does it not? Schema parsing could be a big part of Source Connector implementation. Let us implement a proper schema parser. Make sure you read the Confluent Developer Guide first.

Our schema parser will be encapsulated in the WeatherSchemaParser object. KafkaSchemaParser is a trait with two type parameters - the inbound and outbound data types. This indicates that the parser receives data in String format and produces a Kafka Struct value as its result.

object WeatherSchemaParser extends KafkaSchemaParser[String, Struct]
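The trait itself can be as small as this (a sketch; the member names are assumptions):

import org.apache.kafka.connect.data.Schema

// In: the raw inbound data type; Out: the parsed type handed to Kafka Connect.
trait KafkaSchemaParser[In, Out] {
  val schema: Schema
  def output(input: In): Out
}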

The first step is to create a schema value with the SchemaBuilder. Our schema is rather large, therefore I will skip most fields. The field names given reflect the hierarchy structure of the source JSON. What we are aiming for is a flat, table-like structure - a likely schema creation scenario.
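An abridged sketch of the idea (the field names here illustrate the flattening and are not the post's exact list):

import org.apache.kafka.connect.data.{Schema, SchemaBuilder}

// Nested JSON attributes ("main" -> "temp") become flat field names ("mainTemp").
val schema: Schema = SchemaBuilder.struct().name("weatherSchema")
  .field("city", Schema.STRING_SCHEMA)
  .field("weatherDescription", Schema.STRING_SCHEMA)
  .field("mainTemp", Schema.FLOAT64_SCHEMA)
  .field("mainHumidity", Schema.INT64_SCHEMA)
  .field("windSpeed", Schema.FLOAT64_SCHEMA)
  .build()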

For JSON parsing we will be using the Scala Circe library, which in turn is based on the Cats library. (If you are a Python developer, you will see that Scala JSON parsing is a bit more involved - this might be an understatement - but, on the flip side, you can be sure about the result you are getting out of it.)

That is easy enough. Please note that the case class attribute names match one-to-one the attribute names in JSON. However, our weather JSON schema is rather relaxed when it comes to attribute naming: you can have names like type and 3h, both of which are invalid value names in Scala. What do we do? We give the attributes valid Scala names and then implement a decoder.
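A sketch of how that might look (abridged; the case-class shapes and names here are my assumptions, with WeatherSchema standing in for the post's full definition):

import io.circe.Decoder
import io.circe.generic.auto._
import io.circe.parser.decode

// "3h" is not a legal Scala identifier, so the case class field is named
// rain3h and a hand-written decoder maps the JSON name onto it.
case class Rain(rain3h: Option[Double])
object Rain {
  implicit val rainDecoder: Decoder[Rain] = Decoder.forProduct1("3h")(Rain.apply)
}

// Abridged - the real WeatherSchema mirrors the full JSON document.
case class Weather(description: String)
case class WeatherSchema(name: String, weather: List[Weather], rain: Option[Rain])

// decode returns an Either: a parsing error on the Left, the result on the Right.
val structInput: String = """{"name":"London","weather":[{"description":"mist"}]}"""
val parsed: Either[io.circe.Error, WeatherSchema] = decode[WeatherSchema](structInput)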

structInput here is the input JSON in String format. WeatherSchema is the case class we created above. The Circe decode function returns a Scala Either monad: an error on the Left(), a successful parsing result on the Right() - nice and tidy. And safe.

Now that we have the WeatherSchema object, we can construct the Struct object that will become part of the SourceRecord returned by the sourceRecords function in the WeatherHttpService class. That in turn is called from the HttpSourceTask's poll function, which is used to populate the Kafka topic.
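A sketch of that final step (the helper name is mine; the field names must match the SchemaBuilder definition above):

import org.apache.kafka.connect.data.Struct

// A helper the parser's output(input: String) method can call once the
// JSON has been decoded: copy the values into the schema-based Struct.
def toStruct(weather: WeatherSchema): Struct =
  new Struct(schema)
    .put("city", weather.name)
    .put("weatherDescription",
         weather.weather.headOption.map(_.description).orNull)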

Considering that Schema parsing in our simple example was optional, creating a Kafka Source Connector for us meant creating a Source Connector class, a Source Task class and a Source Service class.

Creating JAR(s)

JAR creation is described in Confluent's Connector Development Guide. The guide mentions two options: either all the library dependencies are added to the target JAR file (a.k.a. an 'uber-JAR'), or the dependencies are copied to the target folder. In the latter case, they must all reside in the same folder, with no subfolder structure. For no particular reason, I went with the latter option.

The Developer Guide says it is important not to include the Kafka Connect API libraries there. (Instead, they should be added to the CLASSPATH.) Please note that for the latest Kafka versions it is advised not to add these custom JARs to the CLASSPATH; instead, we will add them to the connector's plugin.path. But that we will leave for another blog post.
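In sbt terms, the dependency handling might translate to something like this (a sketch; the version numbers are assumptions):

// build.sbt (fragment): the Connect API is marked "provided" so that it is
// not bundled with the connector - the Connect runtime supplies it.
libraryDependencies ++= Seq(
  "org.apache.kafka" %  "connect-api"   % "2.1.0" % Provided,
  "io.circe"         %% "circe-parser"  % "0.10.2",
  "io.circe"         %% "circe-generic" % "0.10.2"
)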

Scala - was it worth using it?

Only if you are a big fan. The code I wrote is very Java-like, and it might have been better to write it in Java. However, if somebody writes a Scala wrapper for the Connector interfaces or, even better, a Kafka Scala API is released, writing Connectors in Scala would be a very good choice.

You are asked to deploy a three-tier, highly available application, including a Load Balancer, on Oracle’s Gen 2 Cloud. Then: ✔ How would you define Networks, Subnets & Firewalls in Oracle’s Gen 2 Cloud? ✔ How do you allow the database port, but only from the Application Tier? ✔ What are Ingress and Egress Security Rules? […]

While much of the discussion around 5G has centered on consumer devices, enterprises are looking towards the tremendous impact the technology can have on their ability to serve customers and the bottom line. Not only are most companies (97 percent) aware of the benefits of 5G, but 95 percent are already strategically planning how they will take advantage of this next generation of wireless connectivity to power core business initiatives from new services, to IoT and smart ecosystems.

“Enterprises clearly want to capitalize on the promise of 5G. However, to be successful, IT and business leaders must avoid thinking of 5G as just another ‘G,’ and should instead consider it an enabler of the smart ecosystem we have long talked about,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “This means asking the right questions at the outset, and considering how 5G can help enable upcoming solutions, what timeframe should be considered, and how they will procure and use 5G capabilities as part of their business evolution.”

Born in the cloud, 5G will enable enterprises to provision or “slice” core pieces of their networks to power mission-critical new offerings and smart ecosystems. This can range from providing the highest-speed connections for life-saving 911 services, to enabling autonomous vehicles to communicate with each other quickly, to ensuring IoT devices in smart factories provide real-time information on the health of machines and assets.

Outside specific initiatives, respondents believe 5G will have a widespread impact across their business, including increasing employee productivity (86 percent), reducing costs (84 percent), enhancing customer experience (83 percent), and improving agility (83 percent). Business decision makers are most focused on the quality of experience the technology will bring, while IT is concerned with network speed and resiliency.

Unleashing the Promise of 5G Ecosystems in the Enterprise

When it comes to 5G, enterprises are most focused on:

Unlocking the potential of IoT: Beyond initial benefits such as speed and quality of experience, 84 percent of respondents feel that 5G networks will be transformative and have a lasting impact on the way their companies do business. Another 73 percent agree the IoT will be revolutionized by 5G networks and 68 percent feel it will be transformative to their customers.

Monetizing new services: Eighty percent expect 5G to generate new revenue streams for their business. Forty-one percent of respondents would deploy new monetization solutions specifically for 5G services alongside existing systems, while 34 percent say they would replace their existing systems with a single, converged solution for all services. Just over one in five (22 percent) said they will utilize and extend their existing monetization solutions for 5G.

Experience and efficiencies: Eighty-four percent of respondents agree that 5G networks will be transformative and have a lasting impact on the way their companies do business. While business respondents are focused on the quality of experience improvements made possible by 5G, IT respondents care more about the network technologies and the internal efficiencies 5G may enable.

Security: While excited about the potential of 5G, both business and IT respondents cited security as a top priority, with 51 percent ranking it as their highest concern.

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world - from network evolution to digital business to customer experience. www.oracle.com/communications

Do you wish to deploy Oracle EBS (R12) on Oracle Cloud Infrastructure (OCI) using EBS Cloud Manager, but are unable to do so because you don't know what steps to follow? If yes, then don't you worry - we've got you covered. Visit https://k21academy.com/ebscloud28 to learn about: ✔ Background History ✔ How To Deploy EBS Cloud […]

If, in the pfile used to start the instance, you specified the recovery destination and size parameters, the instance will try to enable flashback.
Enabling flashback before or during the recovery is not allowed, so we will deactivate it for the moment.
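A minimal sketch of that deactivation (standard commands, run from SQL*Plus with the database mounted):

-- disable flashback for the duration of the restore/recover ...
ALTER DATABASE FLASHBACK OFF;

-- ... and re-enable it once the recovery is complete
ALTER DATABASE FLASHBACK ON;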

Step 3: Restore/recover completed successfully. We will try to open the database, but we get some errors:

2.b Update the database name in the server.ini file
As during the Docbase Name change, this is a workaround to avoid the error below:

...
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

Check the tnsnames.ora and note the service name; in my case it is dctmdb.local.
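For reference, a tnsnames.ora entry matching the connection data above could look like this (the alias name is an assumption):

DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = dctmdb.local)
    )
  )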

Hi Tom,
I've created a function that grants access to tables and views in a given schema to a given user.
As a result, the function returns a table of a custom type, containing the prepared statement and the exception message, if one was thrown.
1. Creating types:
<code...

Hi Tom!
As there is a restriction on GTTs:
Distributed transactions are not supported for temporary tables
does that mean that inline views in a query (i.e. using the WITH clause, but those with the MATERIALIZE hint) will not work properly...