While preparing cost comparisons for customers across multiple cloud vendors, it became apparent that there are differences in the most fundamental metric of all – processing power – that not all customers may be aware of. When factored in with costs, these differences make the already strong Oracle Cloud offering an even more compelling proposition.

Oracle measures compute in OCPUs, whilst Amazon AWS, Microsoft Azure and Google Cloud all use vCPUs – but the two measures cannot be directly aligned, as they are very different.

Let’s get right to the conclusions before we get into the detailed rationale below.

Real cores vs virtual CPUs

Developers benefit from real CPU core resources in the Oracle cloud

Workload Comparison

This means we can directly compare on-premises workload to the Oracle cloud

Value

The value of the investment in each Oracle Cloud instance is 100% realized.

AWS, Azure and GC vCPUs are charged at the Thread level: a standard Intel processor core with Hyperthreading enabled has two Threads, and each Thread is then shared between VMs. Customers can pay more for dedicated Cores, but these are not the norm.

OCPUs: Oracle Cloud Infrastructure (OCI)

Oracle OCPUs are charged at the Core level, with no sharing of compute resources! So, customers buying a single OCPU get a dedicated core with 2 Threads.

The dedicated Cores/Threads ensure guaranteed performance for workloads, with no contention.

Thread Contention

Going beyond simple thread counts, even the view that 2 vCPUs = 1 OCPU does not hold, due to thread contention – other VMs are using those same threads. This can have a considerable impact when hosting multi-threaded applications that scale to serve multiple users. Developers and architects cannot directly compare on-premises hardware to cloud-hosted vCPUs.

The exact ratio of vCPUs to OCPUs required to provide the same amount of processing power depends on how overloaded the vCPUs are at any point in time, but it will almost always be greater than 2:1.

The only resolution is to over-purchase, or flex onto a larger number of vCPUs (with the associated costs) until workloads are met – or to leverage known, dedicated OCPUs for guaranteed performance and pricing.
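As a rough illustration of why the ratio exceeds 2:1, the effective capacity behind a set of vCPUs can be sketched as follows. The oversubscription factor is an illustrative assumption, not a figure published by any vendor:

```python
# Rough model: a vCPU is one hardware thread, and an OCPU is one
# dedicated core (two threads). Under oversubscription, each thread
# is shared, so effective capacity shrinks by the contention factor.

def effective_ocpu_equivalent(vcpus, oversubscription=1.0):
    """Approximate OCPU-equivalent capacity of a set of vCPUs.

    vcpus            -- number of billed vCPUs (hardware threads)
    oversubscription -- average number of VMs sharing each thread
                        (1.0 means no sharing)
    """
    threads = vcpus / oversubscription   # share of thread time actually available
    return threads / 2                   # two threads per dedicated core

# 4 vCPUs with no contention look like 2 OCPUs...
print(effective_ocpu_equivalent(4))          # 2.0
# ...but with 2x oversubscription they behave like 1 OCPU, which is why
# the real ratio is usually worse than 2 vCPU : 1 OCPU.
print(effective_ocpu_equivalent(4, 2.0))     # 1.0
```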

Price Comparison

The following comparison was created using public list prices from Oct 2017.

Vendor | CPU Cores | Mem | Storage | Utilization | Monthly Cost | Shape
Oracle OCI Frankfurt | 2 OCPU / 2 Cores / Dedicated | 14GB | 400GB | 100% | $131.00 | VM.Standard1.2 on BMC
AWS Frankfurt | 4 vCPU / 2 Cores / Oversubscribed | 16GB | 400GB | 100% | $202.15 | t2.xlarge linux
Azure Germany Central (Frankfurt) | 4 vCPU / 2 Cores / Oversubscribed | 14GB | 512GB | 100% | $217.72 | D3 v2

In this case, the shapes are driven by memory needs along with minimum core count. The Oracle shape provides 2 cores/4 threads with no contention at a lower price than AWS & Azure’s 2 cores/4 threads with contention.

Linux was used to avoid any differences in Windows licensing models.
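One way to read the table is as monthly cost per physical core, remembering that only the Oracle cores are dedicated. A quick sketch using the list prices above:

```python
# Compare monthly cost per physical core using the Oct 2017 list prices
# from the table above. Only the Oracle cores are dedicated; the AWS and
# Azure cores are oversubscribed, so their real per-core value is lower
# still once contention is factored in.

shapes = {
    "Oracle VM.Standard1.2": {"monthly": 131.00, "cores": 2, "dedicated": True},
    "AWS t2.xlarge":         {"monthly": 202.15, "cores": 2, "dedicated": False},
    "Azure D3 v2":           {"monthly": 217.72, "cores": 2, "dedicated": False},
}

per_core = {name: s["monthly"] / s["cores"] for name, s in shapes.items()}

for name, cost in sorted(per_core.items(), key=lambda kv: kv[1]):
    note = "dedicated" if shapes[name]["dedicated"] else "oversubscribed"
    print(f"{name}: ${cost:.2f} per core per month ({note})")
```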

Conclusion

The primary conclusion is to ensure your measurements for compute and other key performance metrics are balanced and allow a proper cost/performance assessment across cloud vendors.

Oracle has come from the high-end enterprise down, with a starting focus on massive workloads, guaranteed performance and stability. This engineering-led viewpoint has driven its expansion into cloud, from Gen1 through to the Gen2 Bare Metal Cloud, now known as OCI. The focus is on engineering quality for the enterprise.

By comparison, AWS and Azure have come from the lowest commodity scale point of view and are slowly moving up. This is a key factor when planning the migration of enterprise workloads from on-premises to the cloud.


With Oracle 12c now established in most organizations as the standard, we wanted to highlight a few of the lesser-known features of 12c Real Application Clusters:

Oracle Flex Cluster:

Oracle introduced the Flex Cluster feature with the 12c release, which uses a Hub and Leaf architecture. A Hub node is like a standard Grid Infrastructure node: storage is mounted and it has direct access to the Oracle Cluster Registry and Voting Disk. Leaf nodes are part of the cluster but have no storage mounted; they are connected via the Interconnect.

With the introduction of GoldenGate Studio, a replication solution can be pre-configured, and deployment templates can be created by dragging and dropping data services and replication paths onto the solution diagram.

Prior to GoldenGate Studio, data mapping meant creating the mapping configuration manually for schema, table and column mappings, by analyzing the respective schemas and tables. With GoldenGate Studio, all of this can be achieved through the GUI, and deployment templates and profiles can be created to redeploy the same solution across different projects.

We’ve all experienced that frustrating wait while a web page loads or a business application takes an age to process something. Poor user experiences cost your organization money and risk damaging its reputation. And with so many companies now using customer experience as a differentiator, any that lag behind are at a serious disadvantage.

But how do you go about addressing this type of slow performance? Enterprise applications can be incredibly complex, relying on a variety of servers, storage, networking infrastructure, databases and potentially other applications and services.

Identifying what exactly is causing the slow application performance can be exceptionally challenging. The monitoring tools used by your administration teams will probably give you little of value, because they focus on the big picture, rather than the details. Equally, it’s not uncommon for each team to be able to ‘prove’ it isn’t their cog slowing down the machine.

The four key questions of performance enhancement

The secret to successfully diagnosing and tackling performance issues with applications running on Oracle databases is to answer four key questions. Let’s look at each in turn and why it’s significant.

How long is the slow process taking?

There are really two parts to this question: what exactly is taking too long, and how long is it taking?

Is a particular screen or web page slow to load? Is it a specific batch process? Or does the whole application feel slow? If it’s the whole app, think about which element of it would deliver the greatest benefit if you could resolve it. This is where you should start your investigation – you can come back to other elements later.

Having defined what you’re aiming to speed up, you need to know how long it’s taking now – in simple, straightforward minutes and seconds. This ensures total focus on the actual user experience.

At this stage, it’s also helpful to identify any specific requirements you have around how long the process should take. Do you have a customer SLA that means it needs to complete within three seconds 99.9% of the time? Or did it use to take 10 minutes and now takes an hour? Be careful not to over-commit at this stage, because you don’t yet know what’s possible or practical – that understanding will come out of the next steps.

Why is it taking so long?

Having defined exactly what you’re looking at, you need to ascertain why your application is taking the time it is. As we’ve touched on, standard monitoring tools look at what the system as a whole is doing, not the individual application. So while a systems-level tool may show the CPU is running at 100% capacity, this doesn’t necessarily mean the CPU is your program’s bottleneck. System-wide statistics can throw you off the scent.

You need a different approach that enables you to pinpoint exactly how your application is spending your time, from the moment you start the process in question to the moment it concludes. This is what Cintra’s Method R approach does.

What if…?

Having identified what your application is doing, the next step is to identify one or more remedial actions that are likely to help accelerate the process. This will require a good understanding of how Oracle works.

Next, ask yourself ‘what if’ you make each of the changes. Quantify which will give you the greatest net benefit. This enables you to target your remedial actions to have maximum impact and avoid costly trial-and-error.

How straightforward this question is to answer will depend on the nature of the change: it can be relatively easy to know how much time you’ll save if you can eliminate 99% of the calls to a given subroutine, for example. But if you’re looking at more complex alterations, such as a database upgrade, or wholesale replatforming, you’ll need to do more work to model and/or measure what the likely performance improvement will be (and also whether there could be any side-effects). This work is where your team or your performance partner will really prove their worth.
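The easy case above can be put into numbers. This is a simple projection under stated assumptions – the response-time profile below is illustrative, not from a real trace:

```python
# A simple 'what if' projection: given a response-time profile (seconds
# spent in each component of the slow process), estimate the new total
# if a proposed change eliminates some fraction of one component.

def project_response_time(profile, component, reduction):
    """Return (current_total, projected_total) after cutting
    `reduction` (0..1) of the time spent in `component`."""
    current = sum(profile.values())
    projected = current - profile[component] * reduction
    return current, projected

# Illustrative profile of a 60-second process
profile = {"subroutine_calls": 48.0, "db_file_reads": 9.0, "cpu": 3.0}

# What if we eliminate 99% of the calls to the subroutine?
current, projected = project_response_time(profile, "subroutine_calls", 0.99)
print(f"{current:.1f}s -> {projected:.1f}s")
```

Note that the projected benefit is capped by how much of the total time the targeted component actually consumes – which is exactly why the profiling step must come first.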

Once you’ve identified that what you’re proposing will deliver the improvements you need, it’s time to roll out the change. And because of the scientific, data-driven approach you’ve followed so far, you’ll be highly confident that it will deliver the enhanced performance you’ve forecast.

What else?

With your most pressing need taken care of, you may now want to go back and look at other areas where your business would benefit from improved performance. This may be another element of the process you’ve been addressing so far. It may also be another part of the same application, or another app running on an Oracle database.

In this way, you can quickly and methodically tackle performance issues that are currently hampering your organization’s ability to offer the customer experiences you aspire to.

Why you need specialist tools to answer these questions

These questions are common sense in themselves. The difficulty many organizations have is that they don’t have the tools to answer them. The standard Oracle tools don’t collect the right data, and even if DBAs or application developers were to gather it another way, understanding and analyzing it can be challenging.

This is where the Cintra Method R tools are truly revolutionary, because they make it easy to answer these four questions. They are your proven, fast-track route to better performance and user experiences.

So with customer experience already a key differentiator in many sectors, you need to act now to ensure your Oracle-based enterprise systems are delivering the best-possible performance. Cintra’s Method R approach, using laser-focused response-time data, has a track record of delivering results fast.

Audit Vault is one part of Oracle AVDF (Audit Vault & Database Firewall). It consolidates and secures audit event data from one or more Oracle or non-Oracle sources, and provides extensive, customizable reporting to fulfil an organization’s security and compliance requirements.

What is the architecture of Audit Vault?

Audit Vault is distributed as a software appliance and can be deployed on a standalone server or a virtual machine. It comprises the following two components.

Audit Vault Server

Central repository that stores audit data from one or more sources (secured targets)

Encrypts data using TDE (Transparent Data Encryption)

Provides a web interface to accomplish these tasks among others

Configure Secured Targets

Configure Audit Trails

Configure data retention policies

Set up high availability

Configure external storage

Set up access control

Audit Vault Agent

Deployed one per host, usually where the audit data is generated but can also be installed remotely

Retrieves audit data from various secured targets and sends it to the AV Server

Secured Targets can be Oracle or non-Oracle databases, operating systems or file systems

Can Audit Vault be configured in a high availability architecture?

Audit Vault can be configured in a high availability architecture. It is configured from within the AV GUI; however, standard Data Guard is configured in the background, including the Data Guard Broker. Any attempt to connect to the secondary AV server is automatically re-routed to the primary. Switchover or failover is managed from within the AV GUI.

How can I manage the amount of data held within Audit Vault?

Data in Audit Vault can be archived as part of your company’s retention policy. This is accomplished by creating an archive location and a retention policy. Retention times are based on the time that the audit event happened in the secured target. Currently, archiving has to be started manually but this is easily done via the AV GUI.
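The retention rule described above can be sketched as a simple eligibility check. The 12-month window here is an illustrative policy, not an AVDF default:

```python
# Sketch of the retention rule: an audit record becomes eligible for
# archiving once the retention window, measured from the time the event
# happened in the secured target, has elapsed.
from datetime import datetime, timedelta

def eligible_for_archive(event_time, now, retention_months=12):
    # Approximate months as 30-day periods for simplicity.
    return now - event_time > timedelta(days=30 * retention_months)

now = datetime(2017, 10, 1)
print(eligible_for_archive(datetime(2016, 1, 15), now))  # True
print(eligible_for_archive(datetime(2017, 8, 1), now))   # False
```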

Data in the archive area can’t be reported on; however, archived data can be restored online via the AV GUI.

What can you do with the data in Audit Vault?

As an Audit Vault Auditor, you can run reports to examine data across various secured targets as well as Database Firewall if that has also been deployed.

The reports are organized into different categories – for example, activity reports and compliance reports – helping your company to meet its compliance requirements.

Reports can be saved or scheduled in either PDF or Excel format. Filters can also be applied to reports that you view online.

Alerts can also be configured in Audit Vault. Notifications can also be set up to enable users or a security officer to be alerted where appropriate.

How can I backup Audit Vault data?

Audit Vault comes with a backup utility which ultimately runs RMAN in the background. As expected, you can run a full or incremental backup strategy as well as cold backups if desired.

How can I monitor Audit Vault?

Audit Vault can be monitored via OEM. To accomplish this, you must download and deploy the AV Enterprise Manager plug-in and discover the targets. The Audit Vault home page displays a high level view from which you can drill down to display individual components.

The Summary section of the home page displays the following.

AV Server version

Status of the AV Console

AV Repository Name and status

Number of AV agents

Number of source databases

Number of collectors

You can also see information on your AV agents, Audit Trails and historical information on any upload issues.

The Oracle Database Appliance (ODA) X6-2S and X6-2M are two of the latest ODA models released by Oracle which offer greater simplicity, increased optimization and flexibility as well as the ability to support several types of application workloads, deployment solutions and database editions.

In addition to the various customizable solutions that the ODA X6-2S and X6-2M offer, the configuration options for the ODA are much simpler, as well as adaptive to customer needs and application requirements.

Introduction

WebLogic thread monitoring is very important for all slow or hanging application server issues. All WebLogic (middleware) requests are processed by a thread pool, which therefore becomes a very important place to check for problems. In Oracle DBA terms, it’s very similar to checking database active sessions.

WebLogic server architecture

A WebLogic domain has a centralized AdminServer that allows deployment and configuration of resources such as JNDI configuration, JMS (Java Message Service) and Data Sources (database connection pools). A WebLogic domain can have multiple managed servers. Managed servers can run individually, and they can also be clustered.

Each WebLogic server processes different classes of work in different queues, based on priority and ordering requirements, and to avoid deadlocks.

WebLogic server uses a single thread pool, in which all types of work are executed. It prioritizes work based on rules and run-time metrics, and automatically adjusts the number of threads: the pool monitors throughput over time and, based on historical data, increases or decreases the number of threads in the managed server.

For each Managed server and AdminServer we can monitor the threads, which is an effective way to monitor the workload on Weblogic servers.

The Admin Console

The most common mechanism for monitoring WebLogic server is the Administration Console, which runs on the AdminServer. From the console we can administer each managed server in the domain – for example startup/shutdown and configuration of various properties.

We can log in using the WebLogic username and password. The administrator superuser is usually the weblogic user, which is created when the domain is created. Using this account we can create other accounts, either administrators or lower-privileged users for monitoring purposes only.

After logging in you will see the following screen. To check the AdminServer or a Managed Server, click Servers below Environment (or, in the left-hand panel, expand Environment and click Servers).

Once you click Servers you will see the following screen, showing the AdminServer and all the managed servers in the domain. In this example the managed servers are grouped into four different clusters. Managed servers can run on a single machine or on different machines. Click on the managed server you want to monitor.

Thread monitoring

Once you click on a server, to monitor threads click Monitoring and then the Threads tab, and you will see the following screen.

On the above screen, you can page through further threads by clicking Next (in the top right-hand corner of the Self-Tuning Thread Pool Threads table).

You can customize the table to display all of the threads on a single page by clicking Customize this table and selecting a Number of rows displayed per page value from the drop-down. You can also select or de-select the columns displayed in the table.

The console will remember these settings for the specific managed server the next time you log in.

Thread state

To investigate slow application response or a hanging application, you should be interested in Hogging and Stuck threads.

Let’s look at each thread state.

Active: The number of active execute threads in the pool (shown in Self Tuning thread pool table first column).

Total: Total number of threads.

Idle: Threads that are ready to pick up new work.

Standby: These threads remain in the standby pool and are activated when more threads are needed.

Hogging: A thread held by a request. A hogging thread will either return to the pool before the configured timeout, or become a stuck thread after it.

Stuck: A thread stuck working on a request for more than the configured stuck thread maximum time. You can think of a stuck thread as a long-running job. Once the condition causing the stuck threads is cleared, they disappear and return to the thread pool.

The managed server status will show Health as Warning when there are Stuck threads in the thread pool.

To identify potential issues or to check Stuck threads, there are various methods:

In the Admin Console, sort the thread pool table on the Stuck column in descending order, or check for threads with a Stuck value of True, then look at the Current Request column and the thread ID. Using the thread ID, you can check what the thread was doing before it became STUCK.

You will find complete details of a stuck thread in the managed server log file on the server node. In the managed server log, search for the string ‘BEA-000337’ or the word ‘STUCK’ together with the timestamp. The log entry shows everything about the request and the potential or current problem. The advantage of the log file is that you can also check historical STUCK thread occurrences.

To understand more about STUCK threads, you can take a thread dump by clicking the “Dump Thread Stacks” button on the thread monitoring page; this gives you a complete dump of all threads in the pool. You can then locate all the STUCK threads in the dump and investigate them further.
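The log-file method above is easy to script. A minimal sketch – the sample log lines are made up, but the BEA-000337 message id is the real marker WebLogic writes for stuck threads:

```python
# Minimal scan of a managed server log for stuck-thread entries
# (message id BEA-000337 or the word STUCK), as described above.
import re

def find_stuck_threads(log_lines):
    """Yield log lines that report a stuck thread."""
    pattern = re.compile(r"BEA-000337|STUCK")
    for line in log_lines:
        if pattern.search(line):
            yield line

# Illustrative log lines, not real WebLogic output
sample = [
    "####<Oct 12> <Info> <Server started>",
    "####<Oct 12> <Error> <BEA-000337> <[STUCK] ExecuteThread: '12' "
    "has been busy for 621 seconds>",
]
for hit in find_stuck_threads(sample):
    print(hit)
```

Because it reads plain lines, the same function works over historical logs, which is exactly the advantage of the log-file approach noted above.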

Summary

By monitoring WebLogic servers using the Admin Console, or by checking log files, you can identify potential issues or ongoing problems causing slow or hanging applications.

To learn more, or for assistance with any WebLogic issues, contact Cintra today!

Back in February, Oracle announced its new ZS5 Storage Appliance, designed to deliver high performance across the board for all applications.

Oracle has built the ZFS Appliance from the beginning on a powerful SMP design, which allows all the CPU cores in its controllers to work effortlessly. The ZFS Appliance also runs a multithreaded OS (Solaris), and the controllers each have very large DRAM caches, which gives them great performance. The new ZS5 models now take advantage of all-flash storage, enabling the Oracle ZFS Appliance to power some very demanding applications while delivering very fast query responses and rapid transaction processing times.

The new ZS5 Storage Appliance comes in two models, the ZS5-2 and the ZS5-4. Both use the latest 18-core Intel processors, and both can be configured with a large amount of DRAM: the ZS5-2 can scale to 1.5TB of DRAM, and the ZS5-4 can max out at 3TB. That’s with 36 or 72 Intel CPU cores to make the ZFS really scream with performance. Both models can be configured with all-flash SSD storage: each disk shelf can be configured with 20 or 24 3.2TB 2.5-inch SSDs. The ZS5-2 can connect to 16 storage shelves, for a total of 1.2 petabytes of all-flash storage, and the ZS5-4 can connect to 32 disk shelves, for a total of 2.4 petabytes.
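The quoted maximum capacities follow directly from the shelf arithmetic, assuming fully populated 24-drive shelves:

```python
# Capacity check for the ZS5 figures above: a fully populated shelf
# holds 24 x 3.2TB SSDs.
tb_per_shelf = 24 * 3.2              # 76.8 TB per shelf

zs5_2_tb = 16 * tb_per_shelf         # 16 shelves -> ~1228.8 TB (~1.2 PB)
zs5_4_tb = 32 * tb_per_shelf         # 32 shelves -> ~2457.6 TB (~2.4 PB)

print(f"ZS5-2: {zs5_2_tb:.1f} TB, ZS5-4: {zs5_4_tb:.1f} TB")
```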

The ZFS Appliance has a built-in secret sauce, designed to work with Oracle’s infrastructure stack, also known as the “Red Stack”. This speeds up applications and databases – specifically 12c databases – while delivering efficiency and storage consolidation. Oracle achieved this by having its storage engineers work with its database engineers to design the storage to take advantage of all the software features built into the Oracle database. The ZFS Storage Appliance is truly an engineered system.

A great new feature is that the Oracle ZFS Storage Appliance easily integrates into the Oracle Cloud, making cloud storage available to all users at the click of a button.

From the beginning, the ZFS engineers created Hybrid Storage Pools (HSPs), made up of DRAM, read SSDs, write SSDs and spinning SAS hard drives; data moved in and out of DRAM, through the read or write SSDs, and on to the SAS drives. Today the ZFS engineers have created the Hybrid Cloud Pool (HCP), which works in much the same way, but against the Oracle public cloud.

The best part of this integration to the Oracle Cloud is that it’s FREE! Another benefit is that the new ZFS Appliance also eliminates the need for external gateways. It is all built into the ZFS controllers.

Truth be told, Oracle has been using the ZFS Storage Appliance in its own cloud since it purchased Sun Microsystems seven years ago.

And finally, as you evolve your storage model, you’ll want to extend what you’re doing on-premises to the public cloud. And, ideally, you’d do this easily and seamlessly with the new ZFS ZS5 Storage Appliance.

These days every IT department is familiar with x86 server virtualization – a useful strategy to reduce hardware footprint, provisioning time and downtime, and to increase flexibility and reliability.

But few know that the same paradigm can be also applied to SPARC servers.

Taking advantage of the latest SPARC server platforms, and consolidating legacy SPARC systems – without changing applications – onto a more cost-effective, scalable, agile and flexible infrastructure, is pretty easy and can be achieved with a zero-risk approach.

How?

By virtualizing (P2V) legacy SPARC servers into Oracle Solaris Zones or Oracle VM for SPARC LDOMs, so that all system peculiarities (system ID, configuration, software installations, etc.) are captured and available on the new servers. In this way, one of the most complex and difficult processes – a full OS and software reinstallation and reconfiguration – is skipped, and the business risk associated with the migration is dramatically reduced.

Consider that all systems deployed starting from Feb 2000 can be virtualized and run on latest SPARC systems.

Whether they should be migrated into a Solaris Zone or an LDOM is determined by the Solaris version of the legacy system.

Basically:

If the legacy system is running Solaris 8 or Solaris 9 or Solaris 10 9/10, then it can be virtualized into a Solaris zone

If the legacy system is running Solaris 10 1/13 or Solaris 11, then it can be virtualized into a Solaris zone or an LDOM
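The rule of thumb above can be written as a small lookup. The version strings are the Solaris release labels used in this article; treat the exact cut-offs as something to verify against Oracle's support matrix:

```python
# Decision rule from the article: which virtualization targets are
# available for a given legacy Solaris version.
ZONE_ONLY = {"Solaris 8", "Solaris 9", "Solaris 10 9/10"}
ZONE_OR_LDOM = {"Solaris 10 1/13", "Solaris 11"}

def virtualization_options(solaris_version):
    if solaris_version in ZONE_OR_LDOM:
        return ["Solaris Zone", "LDOM"]
    if solaris_version in ZONE_ONLY:
        return ["Solaris Zone"]
    return []  # outside the range described in the article

print(virtualization_options("Solaris 9"))    # ['Solaris Zone']
print(virtualization_options("Solaris 11"))   # ['Solaris Zone', 'LDOM']
```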

It is also possible, with Oracle VM for SPARC and Oracle Solaris Zones, to create from very simple to pretty complex virtual environments made of a wide range of Solaris versions.

For example, the picture below shows the various levels of virtualization technology available on SPARC M-Series servers:

From the bottom to the top, we can identify:

The server platform.

PDOMs, the first level of virtualization, available only on high-end server platforms. Formerly known as Dynamic Domains, they are electrically isolated hardware partitions that can be powered up and down without affecting any other PDOM

LDOMs. Each PDOM can be further virtualized using the hypervisor-based Oracle VM for SPARC, or can natively run Oracle Solaris 11. LDOMs run their own Oracle Solaris kernel and manage their own physical I/O resources. Different Oracle Solaris versions, running different patch levels, can run within different LDOMs on the same PDOM

The next virtualization level is Oracle Solaris Zones (formerly called Oracle Solaris Containers), available on all servers running Oracle Solaris. A zone is a software-based approach that provides virtualization of compute resources by enabling the creation of multiple secure, fault-isolated partitions (or zones) within a single Oracle Solaris instance.

While similar virtualization technologies from other vendors cost several thousand dollars per year, PDOMs (on high-end SPARC servers), LDOMs (on every SPARC system) and Oracle Solaris Zones are virtualization options that come at no cost with a valid support contract on SPARC servers.

In conclusion, a technology refresh of SPARC systems can:

significantly reduce the risk of running business critical applications on old hardware

increase the security – thanks to Silicon Secured Memory features

improve overall system performance

be accomplished with minimal effort and a zero risk approach

For more information on how this strategy could benefit your business, contact Cintra today!

Written by Daniel Procopio, Director of Systems, Cintra Italy – May 2017

Do you wonder what the future holds for the principal Oracle Database platform in terms of its RDBMS Architecture? The Oracle Multitenant Architecture is the platform for the future.

Putting this plainly, the current non-Multitenant architecture (aka the legacy architecture) will ultimately be de-supported, so if you’re serious as an Oracle DBA, you had best get on board! But rest easy for now: the 12cR1 Oracle Database release still supports both the Multitenant and legacy configurations.

Key Features

Just in case you were still wondering what it’s all about, and have not dug in and tried to decipher the multitude and sprawl of information out there via a quick Google search, here is a quick heads-up on the key elements of the Multitenant architecture you need to be aware of.

The Multitenant option is only available on Oracle 12c with a database COMPATIBLE parameter of 12.0.0.0 or greater. This value also applies to each planned Pluggable Database (PDB) being deployed within it.

You can run Oracle Container Databases (CDBs) and Non-CDBs on the same Server and can even share the same ORACLE_HOME.

A Multitenant database provides the capability for a database to act as a CDB. There are two types of container to be aware of:

A Pluggable Database (PDB) is a self-contained, portable database that can be plugged into a Container Database. It contains all the usual object types you would find in a standard legacy database.

A Container Database (CDB) can contain zero, one or more PDBs. A CDB consists of a root container as well as a seed PDB, which is READ ONLY by default.

Then again, you can find this information anywhere; the question is, what are the key architectural considerations you need to focus on as a DBA?

Structural Differences

Essentially, we need to first look at the structural differences between the Legacy and Multitenant architectures.

A container database can exist on its own with or without additional PDBs. Putting aside RAC Architectures, a legacy Oracle database is by design associated with one Oracle instance. In the same context, an Oracle CDB (multitenant Container Database) is associated with one instance. Therefore you have a 1:1 relationship between a database and instance.

However, a key variant of the Container Database architecture is that the CDB and all its associated PDBs “share the same Oracle instance.” Hence you have a many-to-one (n:1) database-to-instance relationship, where:

n = CDB and all its associated PDBs

In the context of Oracle High Availability RAC architectures, the same statement translates to an Oracle database being associated with “one or more Oracle instances”. Hence an Oracle CDB (and its associated PDBs) is associated with one or more RAC instances in an n:m relationship, where:

n = CDB and all its associated PDBs

m = Oracle instance(s) belonging to the same RAC infrastructure

Hence, in a multitenant environment, for one CDB with one or more PDBs we have the following common/shared entities residing in the root container:

SGA Memory Structures

Control files (at least one)

Background processes

Online Redo logs

SPFILE

Oracle Wallet

Alert Log

Flashback logs

Archived Redo logs

Undo tablespace and its associated datafiles

Default database Temp tablespace and tempfiles (however, each PDB can also have its own separate Temp tablespace)

The Oracle Data Dictionary metadata is principally stored in the root container at the CDB level with links for each PDB dictionary object defined to this from the PDB.

Deployment Architectural considerations and benefits

If you need to deploy a CDB with multiple PDBs, and are wondering what you need to consider and how to optimize it, then – while not a complete list of review points – some items for consideration are:

Resource management and delineation of resource consumption across the PDBs

Sizing and impact of each PDB on the shared Redo Logs, Temp Tablespace and Undo tablespaces

Understanding that all Oracle upgrades are performed at the CDB level and impact all associated PDBs

Strategy to leverage new security features related to separation of users and responsibilities between and across the CDBs and PDBs

Strategy for consolidated performance tuning

Communication to the business of the benefits of reduced cost of platform and ease of platform management

Plan for consolidation, especially for same version databases with small storage and memory footprints

Consideration for Oracle DataGuard configuration for related PDBs

Limitations and Restrictions

In spite of all the benefits, you know what they say: with every good thing comes … Below are a few restrictions (relative to the known operations of legacy databases) for RMAN on PDBs:

RMAN Restrictions from PDBs

Tablespace point in Time recovery

Duplicate Database

Point in Time recovery

Flashback operations from RMAN

Table recovery

Summary

So now you have the basics of what you need to be aware of, and to explore from an architectural standpoint. You can now dig into each section to obtain more information and see how to further optimize within the context of your requirements, resource availability and deployments to meet with your planned application needs.

At Cintra, as Database Architects, these are just a subset of the kind of considerations we delve into when looking at deploying customer databases. Get in touch if you would like to discuss how the Multitenant Option might be of value to your business.