Introduction

Staging to Production is the process by which data from one WebSphere Portal environment is copied to a second WebSphere Portal environment. It is also known as a deployment process. This document will discuss many different questions that arise when devising a deployment process for your Portal environments.

Format

This document is intended to be a living document and will be updated regularly. The format is a Frequently Asked Questions (FAQ) document with multiple sections and multiple questions/answers per section. The questions are not in any particular order, and information in different sections will repeat/overlap. Some questions will be posted without answers - these will be answered in time. Items marked with ***** are considered incomplete. The writing style is a bit more informal/free-form rather than adhering to strict technical writing guidelines - that is, content accuracy and readability are prioritized.

Community feedback is welcome - suggestions for improvement, questions/answers to add to the document (contributions will be attributed!), and corrections to mistakes are all welcome. See the end of this document for author contact information.

Intended Audience

This document is intended for Portal administrators with one year or more of experience who are familiar with both WebSphere Application Server administration and Portal administration. New Portal administrators are welcome to read through this document to become familiar with the terminology and concepts of staging to production. However, some of the more fundamental concepts of Portal administration will be assumed throughout this document and not explained. If unfamiliar with any of the concepts discussed, I would recommend first reading through an overview of Portal administration.

Section 1: Staging to Production Overview

A quick note before we begin - the discussion in this section is high-level and not specific to the WebSphere Portal product, i.e. you could substitute the term "WebSphere Portal" with "My cool website" and the same principles would apply. Skip ahead to the later sections if you want to see Portal-specific considerations.

Q1) What is staging to production?

A1) Staging to production, from a high level, is taking data from one environment and copying it to a second environment. In non-Portal terminology, this is also referred to as a deployment process. The terminology used with Portal implies the two environments typically involved in a deployment process: a staging environment and a production environment. However, data may be copied between any two environments.

Q2) What environments are involved in staging to production?

A2) A minimum of two environments, no maximum. Environments are typically referred to by their roles:

DEV - A development environment. May be a standalone workstation used by a single developer or a shared server used by multiple developers.
INTEGRATION - Work from multiple DEV environments is brought together here and tested. If it does not work, it is rejected and sent back to the developer for additional work.
STAGE / QA / UAT - Staging / Quality Assurance / User Acceptance Testing environment. Changes are much more tightly controlled, and verification testing is performed in this environment before promoting to PROD REND.
PROD AUTH - Authoring environment where new web content originates.
PROD REND / Delivery - Production rendering environment which serves data to end users.
DR - Disaster Recovery environment - should be a 100% identical mirror of PROD REND. If PROD REND suddenly fails, you fail over to this environment.

In practice we have seen some environments combined to reduce costs - e.g. INTEGRATION and STAGE. This is not recommended, as it may introduce additional risk when performing deployments. We have also seen the STAGE / QA / UAT environments split into multiple separate environments, each performing a specific form of validation testing as the data progresses through (hence the multiple names/labels you may see). Determine what risk level vs. cost is acceptable when architecting your deployment process.

Q3) How does data flow between environments?

A3) Typically we see two flows established - one for web content, and a second for all other data.

Web Content Flow:

PROD AUTH originates WCM content. Typically a single piece of content is not considered high risk and can be syndicated between environments freely.

In summary, PROD AUTH contains the master copy and all copies of web content should originate from it. No other environment should originate web content.

Other Data:

DEV originates pages, portlets, WCM design elements (authoring and presentation templates), and theme and skin changes. In summary - anything which can functionally touch/update multiple locations on the Portal site and present risk should originate in DEV. Why not originate authoring and presentation templates in PROD AUTH? One mistake and you end up impacting all content authors' work and delaying new deliverables. Mitigate that risk by putting higher-risk elements through a more rigorous set of tests in QA before pushing to either production environment.

DEV to INTEGRATION
INTEGRATION to QA
QA to PROD AUTH
QA to PROD REND

Q4) Is there a way to push updates without affecting a live production environment?

A4) Yes. You may have two PROD REND environments available: one which is live to end users (PROD-A), and a second which is not live and has changes made to it (PROD-B). This is known as an active/passive configuration. During a deployment window the changes are made to the passive environment. Once the changes are made and validation testing passes, update your DNS records to have the external hostname point to the passive environment (PROD-B) - your passive environment now becomes the active environment. Should something unexpected go wrong on the recently updated environment (PROD-B), you may revert back to the environment which had no changes made to it (PROD-A) via a quick DNS change.
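As a concrete (and entirely hypothetical) illustration of the validation step before the DNS cutover: the passive environment can be smoke-tested directly, bypassing DNS, by overriding the Host header. The hostnames below are placeholders, not values from any real deployment.

```shell
# Hypothetical hostnames -- substitute your own. Send a request directly
# to the freshly updated passive environment (PROD-B), overriding the
# Host header so the request looks like post-cutover end-user traffic.
curl -sk -H "Host: www.example.com" \
     -o /dev/null -w "HTTP %{http_code}\n" \
     https://prod-b.internal.example.com/wps/portal
# Only after this returns a healthy status (and functional validation
# passes) would the DNS record for www.example.com be repointed to PROD-B.
```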

Active/passive offers a low-risk means of pushing changes between environments. The primary disadvantage is cost - approximately double the hardware and software costs for a second PROD environment - in addition to the overhead of maintaining a second environment. Further, the Portal servers are not the only servers which would need to be duplicated in an active/passive configuration. Consider duplication of deployment managers, web servers, load balancers, LDAP servers, database servers, etc. in addition to the Portal servers.

Q5) How should I update my Disaster Recovery environment (DR)?

A5) Ideally your PROD REND and DR environments should be identical to each other such that if the PROD REND environment fails, you can failover to a disaster recovery environment within seconds. This is more commonly known as an active/active configuration.

When pushing updated data - do you update DR simultaneously to ensure the environments remain identical? Or do you wait to update the DR environment? The answer to this question is also dependent on your acceptable business risk level. From experience, we have seen changes pushed to PROD and DR simultaneously many times, with proper validation testing performed in QA, and everything works as expected. We have also seen cases where, despite proper validation performed in QA, the same changes pushed to PROD and DR simultaneously ended up breaking both environments, resulting in costly outages.

We would recommend updating PROD first during a maintenance window, allowing normal end users to access the site following the maintenance window, then scheduling the same updates for DR thereafter. While this creates a slight timing gap in PROD/DR being 100% identical, it also mitigates the risk of the updates completely breaking both environments. Arguably, it is better to have a working environment with some new features/updates missing than no working environment at all. It is less costly to fail over to a backup than to troubleshoot an extended outage.

Q6) When should I update these various PROD environments?

A6) We'll break this out based on some hypothetical scenarios.

Scenario #1 - active/active
- Peak hours are during normal business hours, 0800-1700, Monday-Friday.
- PROD - Friday night maintenance window
- DR - Friday night maintenance window the following week
*Rationale: ~45 total hours of busy activity on the system following the change. Ensures change does not cause unwanted side effects over a long period of time.

Scenario #2 - active/active
- Global site that is accessed with similar usage patterns 24x7. Schedule as follows:
- PROD - Friday night maintenance window
- DR - Sunday night maintenance window the following week
*Rationale: Ensures change does not cause unwanted side effects over a long period of time. In this case it is preferable to keep a functional disaster recovery site available at all times. Better to failover and have a functional site with new functionality missing (deployed during the maintenance windows) than a completely non-functional site.

Scenario #3 - active/passive (PROD-A live, PROD-B passive) plus DR
*Rationale: PROD-A is your fallback if the update fails on both PROD-B and DR.

Note: These are not hard and fast schedules for updating systems but ones we have observed implemented in practice. In most cases, the discussion is based on cost vs. risk assessment with the business. For example - will your business permit changes to a PROD system during business hours? Probably not in scenarios #1 and #2, possibly in scenario #3.

Q7) How frequently should I be updating data?

A7) As often as your business risk will allow it.

- For WCM content updates, these are typically considered smaller / less risky changes, and hourly updates are normal to see.
- For all other updates, schedule the updates weekly, monthly and quarterly, etc. as business risks will allow. Hourly is technically possible but often not acceptable from a risk perspective.

Q8) Can I perform deployments with different code levels?

A8) Yes - so long as the major code level is the same (e.g. 7.0, 8.0, 8.5, etc.), this is supported. We recognize that in practice, keeping your PROD environments on the same code levels as your lower environments often is not feasible within a given deployment cycle - i.e. we cannot stop deployments / stop meeting business deliverables while we wait for upgrades of the various environments to complete.

Across Portal code levels - in Portal 6.0-8.0 this was not as much of an issue, as new features were not introduced in the middle of a release. In Portal 8.5, with its new philosophy of continuous delivery, new features are being introduced as part of cumulative fixes, which may introduce new data into the system that will not promote between environments. Let's take an example scenario:

DEV: Portal 8.5 CF11
PROD: Portal 8.5 CF05

DEV has new work performed using a new feature introduced in CF11 - the Script Application. A ReleaseBuilder DIFF captures these changes and is eventually moved up to PROD. The DIFF contains a command to update the Script Application. The Script Application does not exist in 8.5 CF05, and therefore the promotion of the data between systems fails. Could we work around this by ensuring new features are not utilized until all environments are on the same code levels? Yes; however, this requires change control processes to be established and coordination between teams to ensure this scenario does not occur.
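For illustration, a hedged sketch of how such a DIFF is typically produced - hostnames, credentials and file names below are placeholders; consult the xmlaccess/ReleaseBuilder documentation for your Portal version:

```shell
# Placeholders throughout -- adjust hosts, ports, credentials and paths.
# Export the current release configuration from each environment:
./xmlaccess.sh -user wpsadmin -password secret \
    -url http://prod.example.com:10039/wps/config \
    -in ExportRelease.xml -out ProdRelease.xml
./xmlaccess.sh -user wpsadmin -password secret \
    -url http://dev.example.com:10039/wps/config \
    -in ExportRelease.xml -out DevRelease.xml

# ReleaseBuilder computes the delta between the two exports; the
# resulting Diff.xml is what gets imported into PROD -- and is where a
# CF11-only artifact such as the Script Application would trip up CF05.
./releasebuilder.sh -inOld ProdRelease.xml -inNew DevRelease.xml \
    -out Diff.xml
```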

Q9) What should I do for backups?

A9) WebSphere Portal as a product does not offer any backup functionality out of the box. Backups are assumed to be handled by software external to Portal.

Backup the following:
- Deployment Manager if not located on same server as Portal
- Entire Portal filesystem
- Portal databases

On the filesystem backups, a full filesystem backup is recommended. We've seen from experience that backing up only /opt/IBM/WebSphere* by itself can lead to issues when restoring from a backup, because certain configuration elements NOT stored in this location are missing from the backup.

For PROD systems, a proper backup solution should be used.
For DEV systems, the author of this article has the DMGR, Portal and DB2 installed on the same virtual machine. Typically a snapshot of the virtual machine is "good enough" should something go wrong.
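A minimal sketch of what such a backup might look like for the all-in-one box described above - every path, profile name and the database name are assumptions about a typical default install, not values from the article:

```shell
# Sketch only -- paths, profile names and the database name (WPSDB) are
# assumptions; adjust to your installation. Stop the servers first so
# filesystem and database contents are consistent with each other.
STAMP=$(date +%Y%m%d)
BACKUP_DIR=/backups/portal/$STAMP
mkdir -p "$BACKUP_DIR"

# 1) Deployment Manager profile (run on the DMGR server if separate)
tar czf "$BACKUP_DIR/dmgr_profile.tar.gz" /opt/IBM/WebSphere/profiles/Dmgr01

# 2) Portal filesystem -- note the recommendation above is a FULL
#    filesystem backup, since some configuration lives outside this tree
tar czf "$BACKUP_DIR/portal_fs.tar.gz" /opt/IBM/WebSphere

# 3) Portal databases (DB2 shown; repeat for each database)
su - db2inst1 -c "db2 backup db WPSDB to $BACKUP_DIR"
```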

Section 2 - Portal Architecture Considerations

Q1) Does Portal support multiple installations in a single environment?

A1) Yes. The installer will detect existing running WAS/Portal instances and prompt you to install to a different directory. It will also recommend a different series of port numbers for the additional Portal installation to run on. The author of this article has successfully installed and simultaneously run all of the following versions in a single environment: Portal 6.0, 6.1, 7.0, 8.0 and 8.5.

Q2) Does Portal support multiple profiles in a single environment?

A2) Yes - you may have a single set of WAS binaries (/AppServer) and Portal binaries (/PortalServer) with multiple Portal profiles. e.g.

/wp_profile_DEV
/wp_profile_INTEGRATION
/wp_profile_QA

Or possibly separated by line of business within a single environment:
/wp_profile_PROD_b2b
/wp_profile_PROD_b2c
/wp_profile_PROD_b2g
etc.

This may help reduce software costs. However, one tradeoff is that application binaries are shared between environments. Meaning, if you update either WAS or Portal code levels, you must do so across all profiles simultaneously. We have found from experience this can work in a non-production environment, but not in a production environment. Why? Different lines of business have different timelines for their deliverables. Having to halt ALL deliverables for one line of business so a second line of business can perform a deployment is often unacceptable.

Example #1 - line of business #1 needs to add a new feature to allow single sign-on functionality via SAML login. This change needs to be configured at the DMGR cell level, meaning all profiles will need their Portal servers restarted once the change is made. While the other lines of business do not use SAML login - more importantly - they need to ensure the cell-level / global SAML login configuration change does NOT affect their line of business.

Example #2 - one line of business uses purely portlets. A second line of business uses purely web content. One of those lines of business receives an APAR iFix from IBM for a defect in the software. The APAR must be installed across all profiles simultaneously. One line of business is impacted by a change that a second line of business needs - a change which in NO way benefits the first.

Q3) What is a virtual portal?

A3) Let's first expand on a different question: "What is a WebSphere Portal server?" It's a series of .ear files, .war files, and .jar files that run on top of WebSphere Application Server. Several configuration elements of WebSphere Application Server - such as JDBC datasources for databases, global security for LDAP, resource environment providers for global variables independent of the code, etc. - are utilized, such that it becomes a bit more complex than a single traditional WAS application.

When you access the Portal server primary URL - /wps/portal - WebSphere Application Server routes this URL to the wps.ear application based on the servlet-filter mappings inside the web.xml file for the wps.ear application. WebSphere Portal code thereafter takes over and sends you to the "base" portal of the system - i.e. the out-of-the-box site you see following installation of the product. The base portal is a construction of HTML based on data in the Portal databases and the Portal filesystem.

Virtual portals are discussed in detail in the Portal Infocenter. Virtual portals allow a logical separation of the Portal site into distinct areas, typically one virtual portal per line of business. For example, ibm.com would be a base portal; IBM Commerce would have its own virtual portal, IBM Watson its own virtual portal, IBM Cloud its own virtual portal, etc. Each distinct area may have a different look and feel for the given LOB and also be managed differently per LOB. Virtual portals are distinguished from one another in that each may be accessed by a unique URL context, e.g. www.ibm.com/wps/portal/watson, OR by a unique hostname, e.g. watson.ibm.com/wps/portal. Note that a combination of unique URL context + hostname is not permitted, e.g. watson.ibm.com/wps/portal/watson could not be created.

Under the hood, virtual portals use the EXACT SAME binaries and database as the base portal. The ONLY difference between a base portal and a virtual portal is the manner by which data is loaded from the Portal database - that's it. How do we know whether we should be loading up a base portal or a virtual portal when accessing the Portal server? Inside the wps.ear application's web.xml file exists a servlet-filter that executes on every URL access to the Portal server. That servlet-filter determines at run-time whether the URL is accessing a base portal or a virtual portal and loads up database data accordingly. At the present time, there is no means to separate out a virtual portal from a base portal. If you truly need them separated so one LOB is not affected by another LOB, you may need a different architecture than virtual portals. See question #4 for those details.

For reference, the servlet-filter in question is named "VirtualPortal Filter" and is implemented by the class com.ibm.wps.engine.VirtualPortalFilter.
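For illustration, a hedged sketch of creating a virtual portal by unique URL context via ConfigEngine (it can also be done through the administration UI). The profile path, title, context and password are placeholders; check the task's documented parameters for your Portal version:

```shell
# Placeholders throughout -- adjust profile path, title, context and
# credentials. Creates a virtual portal reachable at /wps/portal/watson.
cd /opt/IBM/WebSphere/wp_profile/ConfigEngine
./ConfigEngine.sh create-virtual-portal \
    -DPortalAdminPwd=secret \
    -DVirtualPortalTitle=Watson \
    -DVirtualPortalContext=watson
```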

Q4) Should I have multiple installations, multiple profiles, multiple clusters, or multiple virtual portals? Or some combination of all of them?

A4) A loaded question - but a common one. There is no right/wrong answer on this one, so the most common response starts with "it depends". We'll list the primary pro/con of each.

Multiple installations:
- Pro: Least risky. Separation of lines of business ensure changes to one LOB do not affect a second LOB.
- Con: Most costly - hardware, software and overhead all go up significantly.

Multiple profiles:
- Pro: Single common environment / configuration for multiple lines of business. e.g. Only need to configure LDAP once.
- Con: Changes to WAS or Portal code must be applied to all profiles simultaneously, unlike multiple installations.

Multiple clusters in single profile:
- Pro: Simpler to maintain than multiple profiles configuration - only a single profile to update.
- Con: Cell-level changes - such as security changes - must be applied to all LOBs simultaneously. Changes to WAS or Portal code must be applied to all clusters simultaneously.

Multiple virtual portals:
- Pro: Unique to WebSphere Portal product. Simplest of configurations to maintain+update. Can separate LOB per virtual portal.
- Con: Cell- and cluster-level changes - such as security and database changes - must be applied to all LOBs simultaneously. Changes to WAS or Portal code must be applied to all virtual portals simultaneously.

What does IBM typically use internally for its environments? Multiple virtual portals. Each LOB has their own virtual portal and can work on their own deliverables independent of the other LOB. Coordination between LOB is needed for some elements that are not separated out by virtual portal - for example WAS/Portal code levels.

Q5) What about Portal farms?

A5) In this author's experience, 99% of Portal environments will want to choose clusters for their architecture. Portal farms are not bad by any means - however, they are intended to fulfill a specific need, as documented in the Portal Infocenter, and they have limitations that clusters do not have. One notable limitation - managed pages are not supported in a farm, and generally speaking, customers utilizing IBM Web Content Management (WCM) would not want to choose a farm architecture. A one-hour presentation given by two Portal architects goes into extreme detail on the pros/cons of Portal farms - the 11-15 minute marks discuss when to choose a farm vs. when not to, answering this question.

Q6) What about shared Portal databases?

A6) WebSphere Portal supports sharing of the customization, community, feedback and likeminds databases. The release and jcr databases may not be shared between environments. Let's take a few example scenarios:

Scenario #1: DEV + INTEGRATION: Separate release and jcr databases. Shared customization, community, feedback and likeminds databases.
- Create a new virtual portal in DEV. Not yet promoted to INTEGRATION.
- Do some testing in DEV, including creation of customization data. Decide the virtual portal in DEV is no longer needed and delete it.
- Data in the customization database is not deleted, but instead orphaned: the Portal administrator forgot to specify -DremoveResourcesInSharedDomains=true when deleting the virtual portal with shared domains.
* No functional issues with this scenario, but illustrates additional administrative overhead / deviation from common procedures involved in certain scenarios.
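A hedged sketch of the deletion command the scenario refers to - the profile path, context and password are placeholders:

```shell
# Placeholders throughout. When domains are shared, pass
# removeResourcesInSharedDomains=true so the virtual portal's data in
# the shared domains (e.g. customization) is cleaned up rather than
# orphaned -- only do this if no other environment still needs it.
cd /opt/IBM/WebSphere/wp_profile/ConfigEngine
./ConfigEngine.sh delete-virtual-portal \
    -DPortalAdminPwd=secret \
    -DVirtualPortalContext=watson \
    -DremoveResourcesInSharedDomains=true
```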

Scenario #2: INTEGRATION + QA: Separate release and jcr databases. Shared customization, community, feedback and likeminds databases.
- Make a change to the INTEGRATION release database. Not yet promoted to QA. Good here, release database is separated.
- Do some testing in INTEGRATION, including a change to customization data. Now the data is in the shared INTEGRATION+QA customization DB, yet QA can't utilize the customization data until the release data is promoted from INTEGRATION to QA.
* No functional issues with this scenario, but the value-add of the shared databases is limited.

Scenario #3: PROD + DR: Separate release and jcr databases. Shared customization, community, feedback and likeminds databases.
- Make a change to the PROD + DR release databases close enough in time so they are 100% identical for failover purposes.
- The network in the datacenter where PROD is located catastrophically fails - you have to fail over to the DR datacenter.
- The customization, community, feedback, and likeminds databases are in the same datacenter as PROD. DR has no access to these databases given PROD datacenter is completely down.
* Functional issues with this scenario given the databases were shared.

Now the good news - for purposes of promoting changes between Portal environments, the four databases noted typically are NOT included in the promotion of changes between environments. Only release and jcr database data is typically promoted between environments. Therefore shared databases are typically not impacted by deployment scenarios.

So in this author's opinion - where does database sharing make sense? Typically niche/rarer configurations - such as multiple cluster or farm configurations - would make sense for sharing databases.

Q7) I've seen some terminology discussing "scoping" of resources. What is this?

A7) This goes back to our discussion on the base portal and virtual portals. The base portal and virtual portals use the same files on the filesystem, but different data in the database. There are some parts of the virtual portals which are completely separated from the base portal - most notably pages. There are other parts of the virtual portal which are shared with the base Portal that cannot be separated - most notably portlets. The "scoping" of resources refers to whether or not the artifact in reference can be isolated to a given virtual portal, or, if it must be shared in some means with the base portal.

The big three which are unscoped / must be shared - themes, skins, portlets
The big three which are scoped / are not shared - pages, web content libraries, search collections

A full listing is in the Portal Infocenter.

Q8) How does scoping affect my deployment process?

A8) Changes that are unscoped may affect a large number of teams. For example - let's suppose we redeploy the WCM Rendering portlet as a part of a deployment. Given the change is an unscoped change, it will impact the base portal and ALL virtual portals. Now, let's suppose we redeploy a custom portlet ... same idea, the change will impact the entire site because it is unscoped. If a custom portlet is only actively in use by the base portal or a single virtual portal - then the change probably won't impact other LOBs. If however the portlet is commonly used across locations in the Portal server (such as the WCM rendering portlet), then there is a higher risk associated with updating that particular portlet.

Changes which are scoped will only impact the base portal and/or the virtual portal in which they exist. They will not impact other LOBs if updated. For example, if I create a page named "test page" in my "test virtual portal" - no other virtual portal will see the page, nor will the base portal see the page.

Special note: Web content libraries are scoped in Portal v8.0 and v8.5 only if managed pages is enabled - they are unscoped if managed pages is disabled. In Portal v7 and earlier, the base portal and all virtual portals had access to all WCM libraries. Now, in v8.0 and v8.5 with managed pages enabled / scoping enforced, the base portal and each virtual portal MUST have their own copies of the WCM libraries they require, and the WCM libraries will function independently of each other. If WCM libraries need to be shared between two or more locations on the site (e.g. base portal + virtual portal, 2+ virtual portals, etc.), this can be performed via a series of two-way syndication relationships. Choose one location as the "master" copy (preferably the base portal), and the other locations to receive updates once made to the master location. This is not quite the typical syndicator/subscriber pairing we think of between systems - here we are setting up these relationships within the SAME system. Once the syndication pairings are set up, all WCM libraries scoped to each location can be kept up to date with each other.

Q9) I want to use a Deployment Manager for administrative purposes. How does this work with Portal?

A9) WebSphere Portal installs to an unmanaged node out of the box. If you wish to utilize a managed node topology / federate to a deployment manager, we recommend following the steps outlined in the Portal cluster guides per version: v7.0 link, v8.0 link, v8.5 link. Note - if you federate to a Deployment Manager, WebSphere Portal has an additional requirement that you create a cluster within that Deployment Manager for Portal. Even with a single server in a DMGR, you must have a Portal cluster. Federated to a DMGR but not clustered is NOT supported by WebSphere Portal.
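A heavily abbreviated sketch of the flow those cluster guides describe - the task names follow the v8.x guides, but parameters, ordering and prerequisites vary by version, so treat this as orientation only; passwords and the cluster name are placeholders:

```shell
# Orientation only -- consult the version-specific cluster guide before
# running anything; paths, passwords and cluster name are placeholders.
cd /opt/IBM/WebSphere/wp_profile/ConfigEngine

# Prepare the Portal node and federate it into the DMGR cell:
./ConfigEngine.sh cluster-node-config-pre-federation -DWasPassword=secret
./ConfigEngine.sh cluster-node-config-post-federation -DWasPassword=secret

# Required even for a single server: define the Portal cluster itself.
./ConfigEngine.sh cluster-node-config-cluster-setup \
    -DClusterName=PortalCluster -DWasPassword=secret
```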

Why is it not supported? The issue is with how Portal stores its global variables in Resource Environment Providers within WebSphere. In an unmanaged node configuration, all variables are scoped to the server level. When you federate to a DMGR, they remain scoped to the server level. Now let's add a second Portal server to the DMGR ... the variables are still scoped to the server level. Therefore, the "global" variables are no longer global - e.g. a change in a variable on Portal server #1 at the server scope will not update the variables for Portal server #2 at the server scope. Portal's architecture requires a cluster scope to be created to ensure global variable updates can be made easily for all servers which require access to this information.

On a practical note - a Portal server in a single-server configuration federated to a DMGR but not clustered (an unsupported configuration!) does work. Meaning, you can start the Portal server and perform most functions correctly. However, many administrative operations will fail if attempted - in particular, ConfigEngine configuration tasks will act erratically in a federated-but-not-clustered scenario. This unfortunately flares up most frequently when performing a cumulative fix upgrade. The configuration must be an unmanaged node / standalone system, OR fully clustered, for the cumulative fix upgrade to complete - there is no means to ensure a successful CF upgrade otherwise.

Q10) What is a database transfer? Why do I need to perform one?

A10) Portal installs with an out-of-the-box database called Derby. Derby is a lightweight database intended as a starting point only, for proof-of-concept work / sandbox environments. An enterprise database - DB2, Oracle or SQL Server - is recommended for any significant development work. An enterprise database is required for PROD environments. A full listing of which versions of the enterprise databases Portal supports is available here (click the supported software tab). The database transfer procedure takes the content of the Derby database and copies the data, via a series of SQL commands, to the enterprise database.

Special note #1: Derby does not scale well with large amounts of Web Content Management data. If you plan on performing a large amount of WCM authoring work, we recommend transferring to an enterprise database in that environment before beginning any significant work.

Special note #2: You must perform a database transfer before creating a Portal cluster. Derby may functionally work in a single-server cluster scenario, but it is not a supported configuration. In a two-server cluster configuration, Derby absolutely will not work. Why? Derby is included with each installation of the Portal server ... so Portal server #1 has its own Derby database, and Portal server #2 has its own Derby database. Both JVMs need to access the same database in order to keep their data synchronized. However, Derby does not support multiple JVMs accessing the same Derby database.

Thus, we recommend performing the database transfer early in the system configuration process to avoid potential headaches.
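A hedged sketch of the transfer itself - it assumes the target database connection properties have already been entered in the wkplc property files, and the password shown is a placeholder:

```shell
# Placeholders throughout; database connection properties must already
# be set in wkplc_dbdomain.properties / wkplc_dbtype.properties.
cd /opt/IBM/WebSphere/wp_profile/ConfigEngine

# Validate connectivity to the target enterprise database first:
./ConfigEngine.sh validate-database -DWasPassword=secret

# Then copy the Derby data into the enterprise database:
./ConfigEngine.sh database-transfer -DWasPassword=secret
```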

Q11) I messed up creating my Portal site. I want to go back to the copy of the data in the Derby database which works. How can I do so?

A11) If the Derby database contains data at the SAME code level you are presently on with your enterprise database, you may rerun database-transfer, which will drop and recreate all tables in the enterprise database and recopy the data from the Derby database. HOWEVER, if the code levels are not the same, you may NOT perform a database-transfer, as it will create unpredictable results with the database data being at a code level different from the .war/.ear files on the filesystem. Let's use a few example scenarios to illustrate this concept:

Q12) I have common jar files that need to be loaded by multiple custom applications. I need them in a single location. Where should I place these jar files?

A12) We recommend utilizing a WAS shared library on a common location on the filesystem, e.g. /opt/MyCustomLibs. More details on this are available in the WAS Infocenter.
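A minimal sketch of creating such a shared library with wsadmin (Jython) - the cell name, library name, and path here are hypothetical, and the exact attributes should be verified against the WAS Infocenter:

```shell
# Create a WAS shared library pointing at a common filesystem location (sketch only)
./wsadmin.sh -lang jython -c "
scope = AdminConfig.getid('/Cell:MyCell/')
AdminConfig.create('Library', scope, [['name', 'MyCustomLibs'], ['classPath', '/opt/MyCustomLibs']])
AdminConfig.save()"
```

After creating the library, associate it with your application or server classloader via the admin console so it is visible to the custom applications.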

Historical Note - the Portal server could load custom libraries if they were placed in a directory that was recognized and read on server startup by the classloader. However, these directories were intended not for custom code but rather for short-term debug modules issued by IBM L3/Development. Portal cumulative fixes in newer Portal releases now check for the presence of custom code in these shared directories and will fail the upgrade immediately if custom code is detected. Generally speaking - do not use PortalServer/shared/app, PortalServer/wcm/prereq.wcm/wcm/shared/app/, or /AppServer/lib. Notable exception: custom TAIs should be placed in AppServer/lib/ext for each application server (including the DMGR!) according to the TAI documentation.

Q13) I have common configuration settings that need to be loaded outside of my custom application code. Where should I place these configuration settings?

A13) You may create a custom resource environment provider in the DMGR admin console at the cluster scope which contains settings common to your custom applications. This is where WebSphere Portal stores many of its configuration settings. Utilize a Java Authentication and Authorization Service (JAAS) - J2C authentication data configuration if you need to store sensitive data - such as a username/password combination. This is where WebSphere Portal stores the userid/password for its databases and search collections.

Q14) How do realms and virtual portal scoping relate?

A14) The base portal in Portal is always assigned to the "default" realm in the wimconfig.xml file. Further, Portal has a requirement (not WAS products overall, but Portal specifically) that all base entries in wimconfig.xml must belong to the default realm. Net result - all users by default will be able to log in to the base Portal. This has created situations that are not always desirable - for example, separation by line of business. Thus, a recommended architecture has emerged for such requirements: the base Portal is used ONLY for administration purposes, and a separate virtual portal is created for each LOB.

The realm of a virtual portal is defined in wimconfig.xml of the WebSphere Application Server configuration. For example, the following configuration is valid:
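As an illustrative, abridged fragment of what such a realm configuration might look like - the element and attribute names here are a sketch and should be checked against your environment's actual wimconfig.xml:

```xml
<!-- Abridged sketch: a default realm for the base portal plus a realm for one VP -->
<config:realmConfiguration defaultRealm="defaultRealm">
  <config:realms delimiter="/" name="defaultRealm" securityUse="active">
    <config:participatingBaseEntries name="dc=ibm,dc=com"/>
  </config:realms>
  <config:realms delimiter="/" name="vp1Realm" securityUse="active">
    <config:participatingBaseEntries name="ou=lob1,dc=ibm,dc=com"/>
  </config:realms>
</config:realmConfiguration>
```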

To answer the question - Pages, WCM Content and Search Collections are scoped to each respective base/virtual portal. Themes, Skins and Portlets are unscoped and are shared among all locations. Therefore, while some separation of resources is possible, other resources cannot be separated / must be shared (much to the disappointment of the Sneetches!).

Q15) Can I fully administer my virtual portal with only a virtual portal administrator group?

A15) No - this is not possible. Various Portal administration functions - in particular during cumulative fix upgrades - assume the base portal administrator has administrative access to the virtual portal. The recommendation is to allow the superadmin user of Portal, e.g. wpsadmin, full access to the base and virtual portals. Assigning a dedicated location in LDAP / a specific base entry in wimconfig.xml for such a purpose simplifies this setup.

Q16) Can I put my entire site in the base portal and not worry about separating it out into virtual portals?

A16) Yes - this is possible. However, administration of the Portal site becomes incredibly complex. For example, you would need to have LOB #1 and LOB #2 granted full administrative access to the Portal administration area - which would allow LOB #1 to potentially modify LOB #2 pages and portlets. While this scenario is highly unlikely to happen in practice (each LOB would leave their respective areas of the Portal site well enough alone) - the fact there is a potential risk for interference - whether accidental or intentional - is enough of a concern such that virtual portals do make logical sense.

Q17) Is there a performance difference between a base portal and virtual portal?

A17) There is not a noticeable performance difference when comparing two similar sites. The base portal and virtual portal(s) share the EXACT same binaries / code on the filesystem. The only difference is the data in the database. Thus, if a base portal uses 1GB of data in the database with 10 users and a virtual portal uses 100GB of data with 1000 users - yes, there will most likely be a performance difference. However, given equivalent usage patterns and database sizings, there is not a noticeable difference.

Q18) What are the artifacts that we need to be concerned with when working with a deployment process and what tools are used to move them?

A18) This is a common question that is answered in detail by many other elements of this document. A summarized answer follows:

Q19) Is it better to redeploy the whole site during the deployment process or just deploy my changes?

A19) The answer depends on the size of your site and your level of risk acceptance.

If your site is small and your risk level is low, it is best to deploy changes only, in a manual manner, without using IBM tooling. Systems are not guaranteed to be 100% identical with manual changes, but the overhead is typically lower.

If your site is small and your risk level is high, it is best to deploy the entire site via IBM tooling using an active/passive configuration. Systems are guaranteed to be 100% identical following the deployment, and changes need not be tracked / coordinated. The downside to this approach is that a second PROD site is required to deploy the changes, as a full redeploy requires a wipe of the current data and a full reimport of the new data to create a perfect clone.

If your site is medium/large and your risk level is low, it is best to deploy changes only using IBM tooling. Systems are guaranteed to be 100% identical, and the tooling scales well with larger sites that may have uncoordinated change control (e.g. departments in a large company each making independent changes to their area of the Portal site).

If your site is medium/large and your risk level is high, it is best to deploy the entire site via IBM tooling using an active/passive configuration. Systems are guaranteed to be 100% identical following the deployment, and changes need not be tracked / coordinated. Further, for larger sites, additional time is given for validation testing without impacting the live PROD site.

If your site is small, it is easier to redeploy the whole site as you move from one environment to another. Remember that in order to handle deletes (like a page deletion) you will need to create an empty portal in the target environment first and then deploy the site.

If your site is medium to large, it is much more efficient to deploy only the changes. This is accomplished with the ReleaseBuilder tool mentioned below. Note how important it is to deploy the whole site the first time (and never create ad hoc pages/portlets in the target environments) so all the Portal object ids are identical.

Section 3 - ReleaseBuilder overview

Q1) What is ReleaseBuilder?

A1) ReleaseBuilder is an executable .bat|.sh file located in wp_profile/PortalServer/bin. It is a critical tool used as part of the overall staging to production process. In some documentation, you may see references to a ReleaseBuilder process. This terminology is synonymous with the staging to production / deployment process for WebSphere Portal.

Q2) What does ReleaseBuilder do?

A2) From a high level - the ReleaseBuilder tool allows you to compare the data on a system from two different points in time, capture the changes made on the system, and create an output file noting those changes. The output file, commonly called a DIFF file, can thereafter be imported into another environment to have the EXACT same changes made on that second environment. This ensures the exact changes made on one environment can be replicated to a second environment, i.e. we need only copy over the delta of changes made, not a full copy of the data. ReleaseBuilder is a standalone Java process that does not require a running WAS or Portal server to execute.

Q3) How does ReleaseBuilder actually work?

A3) Example Scenario:

January 1st, 2016: Take an ExportRelease.xml XMLAccess export from STAGE. Name it 2016.01.01_stage_exportrelease.xml.

February 1st, 2016: Take another ExportRelease.xml XMLAccess export from STAGE. Name it 2016.02.01_stage_exportrelease.xml. Then run ReleaseBuilder against the two exports to produce 2016.01.01_2016.02.01_STAGE_DIFF.xml.

The 2016.01.01_2016.02.01_STAGE_DIFF.xml file contains a capture of all changes made to the STAGE environment release database over a one-month period of time. It may now be imported into another environment, such as a PROD environment, and the PROD environment will automatically have the one month of changes made in the STAGE environment applied to it. The STAGE and PROD release databases will be in sync / identical thereafter.
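The scenario above can be sketched as a ReleaseBuilder invocation (verify the exact flags against the ReleaseBuilder documentation for your release):

```shell
# Compare the two exports from the SAME (STAGE) environment and emit the DIFF
./releasebuilder.sh -inOld 2016.01.01_stage_exportrelease.xml \
                    -inNew 2016.02.01_stage_exportrelease.xml \
                    -out   2016.01.01_2016.02.01_STAGE_DIFF.xml
```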

Q4) Can ReleaseBuilder be used with two different environments?

A4) No, this is not supported. Let me repeat - THIS IS NOT SUPPORTED!!! (Author's note - I would use blink tags here if I could to emphasize this point). If you create a DIFF file with the intention of importing it into another environment, the DIFF file _MUST_ be created from XMLAccess exports from the SAME environment. If you create a DIFF file using ReleaseBuilder exports from two different environments and import that DIFF file, this is NOT supported and could create unpredictable results. We have seen from field experiences a loss of 25% or more of Portal pages when ReleaseBuilder is used in this manner. ALWAYS use ReleaseBuilder with two exports from the SAME environment.

Q5) Can ReleaseBuilder be used with two different environments if I don't perform an import?

A5) Yes - this is supported. You may generate a ReleaseBuilder DIFF between two different environments so long as the DIFF is NOT used in an import. The tool itself does not prevent this action, and actually - it can be helpful to compare two different environments. IBM Support performs this action regularly when analyzing customer data to check for differences between environments.

Q6) OK, how exactly does ReleaseBuilder "work"? Why can't I just do my deployments manually?

A6) Now we get into the heart of how WebSphere Portal works as a product and this will be a lengthy answer feeding into several other answers.

Dial back the clock several years - circa 2005, pre Web-2.0 days. Portlets were the primary focus back then - the capability to check your bank account balance, personalized health information specific to the user login, etc. were new and exciting. Today we consider them standard and boring, but back then they were significant new features. WebSphere Portal as a product needed a means to copy the configuration of one environment to a second environment so what was tested in QA is what ended up in a PROD environment. Direct SQL statements against each environment's databases were risky - not only were the SQL statements risky in and of themselves, but disparate environment configurations - e.g. Derby in one environment (DEV) and DB2 in a different environment (STAGE) - would result in different SQL statements. To ensure data in environment #1 made it to environment #2 successfully, an abstraction away from SQL statements was needed. Hence a scripting language specific to Portal - called XMLAccess - was created to provide a database representation of data that was not tied to a specific database implementation. As a result, data could be exported from one environment and imported to another environment, independent of the individual databases in use, and be expected to produce the same end result at runtime.

One of the key principles of any database is primary keys - ensuring uniqueness of data in a given database table. The same is true with XMLAccess - each artifact in the Portal server must have some means of being uniquely identified. The title of a page itself is not considered unique - for example, environment #1 could have a page called "test page", and environment #2 could have a page called "test page". Both have the same title, but radically different page contents. Exporting/importing between environments would not produce the same result - nor should it. To ensure consistency of data between environments, the XMLAccess scripting language introduced the concept of "objectIDs". Each artifact in the Portal release database has an objectID, and each objectID uniquely identifies the data it represents. There is a specific structure to objectIDs, which you may read about in question #4 of the XMLAccess FAQ. High-level summary: objectIDs uniquely identify data in the Portal database, and objectIDs can be used to copy data between systems. Think of objectIDs like primary keys ... looking in the Portal database you will not find a primary key called objectID, but think of the two concepts as roughly equivalent.

How does this tie back to ReleaseBuilder and a deployment process? If you create a page named "test page" in environment #1, it may end up objectID/primary key "12345". Create a page named "test page" in environment #2 and you'll have objectID/primary key "67890". Exporting/importing the data between environments, you'll have two "test page" pages with the same name, but different objectIDs / primary keys. Our goal is to ensure the objectIDs / primary keys are the same across ALL environments for a successful deployment process. ReleaseBuilder and XMLAccess are both tools which can assist with guaranteeing the same objectIDs / primary keys exist across numerous environments. Thus, the database data in one environment is the same in a second environment independent of the actual database in use.
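For illustration, a hypothetical XMLAccess fragment - the objectid value here is made up, but it shows how the identifier rides along with the page definition in an export:

```xml
<!-- Hypothetical fragment: the objectid attribute is the "primary key" for this page -->
<content-node action="update" active="true" objectid="Z6_EXAMPLEID0000000000000001"
              uniquename="my.test.page">
  <localedata locale="en">
    <title>test page</title>
  </localedata>
</content-node>
```

Importing this fragment into a second environment creates/updates a page with the same objectid, which is what keeps the two systems' "primary keys" aligned.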

Fast forward the clock to today - while portlets are still used throughout the WebSphere Portal product - there has been an increasing emphasis on web content, mobile, and multimedia rich websites to entice end users. The architecture of Portal remains the same even with the shifts in the market - thus we still use ReleaseBuilder to move some data between systems while other forms of data (namely web content) may use a different tool - such as syndication - to move between systems. ReleaseBuilder has been in the WebSphere Portal product for a LONG time and continues to be a fundamental tool used as a part of a deployment process.

Q7) Am I required to use ReleaseBuilder / IBM tools for my deployments? What if I want to use my own tools?

A7) ReleaseBuilder and XMLAccess help guarantee the same objectIDs / primary keys between environments. IBM does not require the use of these tools - though does recommend them as a best practice for deployments to help ensure two environments remain the same with their database data. You may use any other tooling at your disposal to perform deployments - the primary focus is guaranteeing the objectIDs remain the same between systems.

Q8) What happens if the objectIDs are not the same between two different Portal systems?

A8) This is a condition we refer to as the systems being "out of sync". Let's take an example to illustrate an issue:

You deploy a portlet on DEV - it has objectID ABC
You deploy a portlet on QA - it has objectID XYZ
You create a page on DEV - it has objectID 12345. You add the portlet to the page. The portlet referenced has objectID ABC.
You export the page from DEV. You import to QA.

The QA system does not know about the objectID "ABC" ... it knows about the objectID of "XYZ". The page will properly import, but the page will NOT render the portlet correctly.

In summary, if the objectIDs are different between systems, export/import between systems becomes difficult to manage due to differences in objectIDs. Correcting the mismatched objectIDs is not a trivial task ... you MUST delete the "wrong" objectID / data in the database and reimport the "correct" objectID / data in the database for export/import to work.

Q9) My deployments are small. Do I have to know / care about the systems having the same objectIDs?

A9) If the number of changes you make is relatively small - say 10 total changes or less per deployment - we have advised many folks to perform their deployments manually rather than utilizing the ReleaseBuilder / XMLAccess tooling. Why? The overhead to perform the same tasks manually is often less than the overhead required to utilize the IBM tooling. XMLAccess and ReleaseBuilder - admittedly - are focused on larger environments. The more changes made to an environment, the more useful the tools become to capture those changes and move them to a second environment. For example, if the number of changes increases from 10 to 100, would you feel confident in performing all 100 changes manually? Now what about 1000 changes? At that scale of changes the tooling has extraordinary benefit.

Q10) My deployments are large. How do I perform a proper ReleaseBuilder update?

A10) The following steps assume you have baselined your systems. See question #11 for how to baseline your systems if you have not done so already. The following instructions also assume two virtual portals are in use, though can be adapted for zero to hundreds of virtual portals in use.

0) Assume changes have been made in the SOURCE environment that need to be moved to TARGET.

1) Locate the three files that represent exports of your initial baseline

- REV1_base.xml

- REV1_vp1.xml

- REV1_vp2.xml

2) Create a new XMLAccess export from SOURCE of the base portal and both virtual portals, e.g.
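A sketch of the exports (hostname, port, credentials and VP context roots are placeholders; verify the xmlaccess syntax for your release):

```shell
# New exports from SOURCE: base portal plus each virtual portal
./xmlaccess.sh -user wpsadmin -password wpspass \
  -url http://source.example.com:10039/wps/config -in ExportRelease.xml -out REV2_base.xml
./xmlaccess.sh -user wpsadmin -password wpspass \
  -url http://source.example.com:10039/wps/config/vp1 -in ExportRelease.xml -out REV2_vp1.xml
./xmlaccess.sh -user wpsadmin -password wpspass \
  -url http://source.example.com:10039/wps/config/vp2 -in ExportRelease.xml -out REV2_vp2.xml
```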

15) Restart your Portal servers. This is not strictly required at this time; however, if you restart the servers as part of normal maintenance anyway, now would be a good time to do so.

Q11) How do I "baseline" my systems?

A11) We talked about objectIDs previously. If you install a Portal 8.0 or 8.5 system, the IBM-provided pages and portlets will all have the same objectIDs on installation. Cumulative fix upgrades will also preserve the same objectIDs. It is new data added to the system which may introduce differences in objectIDs between two different Portal servers. If we need to ensure the data in two different Portal servers is the same, proceed with the steps documented here: https://www.ibm.com/developerworks/community/blogs/portalops/entry/staging_to_production_portal_8_without_paa?lang=en

Steps repeated here with some slight modifications:

1) Backup the filesystems and databases on TARGET

2) Obtain an XMLAccess export of your SOURCE environment using ExportRelease.xml as the input file. Do NOT use Export.xml, e.g.
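For example (placeholder host and credentials):

```shell
# Full release export of SOURCE using ExportRelease.xml (NOT Export.xml)
./xmlaccess.sh -user wpsadmin -password wpspass \
  -url http://source.example.com:10039/wps/config -in ExportRelease.xml -out REV1_base.xml
```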

*Note #1: If you have any WCM libraries scoped to the VP, delete those prior to deleting the VP

*Note #2: You may use either the Manage Virtual Portals administration portlet, OR, the delete-virtual-portal command line to do so.

*Note #3: This step is needed on early code levels of 8.5 (before CF06) and 8.0 (before CF10). It is recommended to perform the steps on later code levels to avoid trouble, but it is not strictly required.

Q12) Do my LDAP schemas need to be the same between environments?

A12) Not necessarily - differences can be accommodated. It is permitted to create the DIFF file from INTEGRATION and perform a find/replace of "dc=ibmdev,dc=com" to "dc=ibmqa,dc=com". Other examples include a find and replace of the full username/group DN in the DIFF file. This is also permitted. The key in the DIFF files is the objectIDs remaining the same between systems. The metadata - such as the LDAP DNs - can be modified as needed. However, note that the more manual changes that are made as part of the deployment process (rather than allowing XMLAccess / ReleaseBuilder to perform all changes in a scripted manner), the more potential there is for error to be introduced and for the end result in PROD to not be functionally correct.

Q13) How do portlet deployments work with ReleaseBuilder?

A13) If you update the .war file by itself, ReleaseBuilder will NOT recognize the change. ReleaseBuilder will recognize changes in the Portal database, but not changes purely on the filesystem. There are three potential means of addressing this scenario:

1) Update the uid= or id= of the portlet.xml file with a versioning schema.
Pro: Each portlet deployment will force an update in the Portal database and Portal will redeploy the portlet in such a scenario.
Con: Most source control systems have their own versioning schema independent of the application id.

2) Manually update the .war file in the Portal or WAS administration console - commenting out the tags in the XMLAccess scripts that are used to deploy portlets.
Pro: More control over portlet deployments if portlet updates are performed infrequently.
Con: Additional manual steps required during a deployment

Comment: We have seen this performed in practice on more than a few occasions. There is often a desire to tightly control when portlets are redeployed in a system and we have seen a number of DIFF files from ReleaseBuilder with the tags commented out to allow for this control. This is supported/permitted, so long as only the tags are commented out and other database changes for the portlet remain in the DIFF file to be promoted between environments.

3) Write a custom XMLAccess script to redeploy custom applications on each deployment.
Pro: Guarantees updates each and every time for custom portlets. Scripts are relatively quick to author - no more than 15 minutes - a series of copy/paste.
Con: Murphy's Law / Bad luck could cause unforeseen issues on applications that have not changed and if redeployed now have issues.

As an example of scenarios #2 and #3 - a ReleaseBuilder DIFF file may show the following output to deploy a portlet - in this case the World Clock portlet:
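An abridged, hypothetical sketch of such a stanza (the real uid values and nested elements will differ in your own export):

```xml
<!-- Sketch only: commenting out this web-app stanza suppresses the .war redeploy (scenario #2);
     copying similar stanzas per application forces a redeploy (scenario #3) -->
<web-app action="update" active="true" uid="worldclock.war.webmod">
  <url>file://localhost/$server_root$/deployed/archive/WorldClock.war</url>
  <portlet-app action="update" active="true" uid="worldclock.war">
    <portlet action="update" active="true" name="WorldClockPortlet"/>
  </portlet-app>
</web-app>
```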

For scenario #3 - you may copy/paste similar stanzas for all of your custom applications and redeploy them during each deployment.

Q14) What is virtualPortalMode in ReleaseBuilder?

A14) For deployments you MUST move over both the unscoped artifacts (which exist in the base Portal) and the virtual Portal artifacts for a deployment to be successful. Because some resources are unscoped - they may show up twice in XMLAccess exports of the system - once in the base portal export and a second time in a virtual portal export. This is CORRECT behavior - the resources must show up / exist in the virtual portal export. However, if we want to create a releasebuilder DIFF file to move changes between systems - we do not want to move the unscoped resources twice. So do we pick the base portal or the virtual portal to move over unscoped resources?

Answer - the base portal should move over unscoped resources. First move over the base portal artifacts, then the virtual portal artifacts. The use of the -virtualPortalMode flag on the ReleaseBuilder command guarantees that unscoped resources do not appear in the DIFF file for virtual portals and therefore unscoped resources are not accidentally moved over with the virtual portal.

Author commentary - from experience I can state not using this flag will lead to problems on XMLAccess import. ReleaseBuilder will work as expected and produce a DIFF file for the virtual portal without the virtualPortalMode flag. However, the DIFF file will not contain a complete definition for the unscoped resources. As a result, XMLAccess import will fail due to the incomplete definition. A ReleaseBuilder DIFF of the base portal will produce a complete definition of the unscoped resources which will import correctly to the TARGET environment.

Could you potentially omit the virtualPortalMode flag and include both scoped and unscoped resources in the virtual portal export? Answer - yes, this is technically possible but absolutely NOT RECOMMENDED. Why? With two virtual portals both including unscoped resources, you could run into a situation where both virtual portals attempt to make changes to the unscoped resources ... VP #1 succeeds, then makes its scoped changes; VP #2 fails to make the unscoped changes (which occur first) and thereafter cannot make its scoped changes. Even with a single virtual portal, a risk of data inconsistencies exists by omitting this flag - the recommendation is to only allow unscoped changes to move over with the base portal DIFF via ReleaseBuilder.
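A sketch of the recommended ordering (the exact flag syntax is an assumption to verify against your release's ReleaseBuilder documentation):

```shell
# Base portal DIFF first - this carries the unscoped resources
./releasebuilder.sh -inOld REV1_base.xml -inNew REV2_base.xml -out base_diff.xml

# Each virtual portal DIFF with virtualPortalMode, so unscoped resources are excluded
./releasebuilder.sh -inOld REV1_vp1.xml -inNew REV2_vp1.xml -out vp1_diff.xml -virtualPortalMode true
./releasebuilder.sh -inOld REV1_vp2.xml -inNew REV2_vp2.xml -out vp2_diff.xml -virtualPortalMode true
```

Import base_diff.xml first on the TARGET, then the virtual portal DIFF files.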

Q15) What data does ReleaseBuilder not move over?

A15) Let's examine the name of the command, "ReleaseBuilder" ... the tool helps primarily with moving over data in the release database. Data outside the release database is not moved over and must be moved by separate means - for example, Credential Vault data, themes/skins, and custom jar files / shared libraries, each covered in the following questions.

Q16) How do I move over Credential Vault Data?

A16) Credential Vault data typically stores sensitive information, such as userid/password combinations. If you export the Credential Vault with XMLAccess, a Portal administrator will NOT be able to look at the XMLAccess export and casually glean several different userids/passwords. XMLAccess will not import/export credential vault data without ensuring the data is encrypted AND XMLAccess itself is run over https. See the Portal Infocenter for more details: http://www.ibm.com/support/knowledgecenter/SSHRKX_8.0.0/admin/adxmltsk_cmdln_sntx_crd_vlt.html

Q18) How do I move over Themes and Skins Data?

A18) For Portal 8.5 and later, we recommend using PAA. For initial deployments, use the build-initial-release-paa configuration task. For subsequent deployments, use the Theme Analyzer export.

Q19) How do I move over Custom Jar Files and Shared Libraries?

A19) File changes are not moved over as a part of the staging to production process. Utilize standard file transfer tools, e.g. SFTP, to copy data between environments. If you have a centralized repository to check-in/check-out code level changes to a filesystem (Git, SVN, etc.) - these certainly can be leveraged.

Q20) What is the difference between Export.xml, ExportRelease.xml, ExportManagedPagesRelease.xml and ExportUniqueRelease.xml?

A20)

Export.xml - exports the release, customization and community databases and policy nodes from the JCR database. Useful for analysis - do NOT use for staging to production.
ExportRelease.xml - exports the release database only - includes managed and non-managed pages. The traditional means to move data between systems.
ExportManagedPagesRelease.xml - exports the release database only - non-managed pages only. This will grant you access to the Administration area of Portal, but no other pages will be present. You MUST set up syndication to syndicate managed pages.

Q21) Can I take a DIFF of REV1 and REV3?

A21) To put this question in perspective, we have:
REV1 = golden copy of the system
REV2 = changes to system after 1 month
REV3 = changes to system after 2 months

Typically we look to compare the two most recent revisions via ReleaseBuilder, e.g. REV1/REV2 or REV2/REV3. While it is technically possible to compare REV1/REV3, this may lead to incorrect results. For example, let's suppose you delete a portlet in REV2, then recreate the same portlet in REV3. ReleaseBuilder may not detect the change (depending on the exact database-level changes made) and therefore not perform the update. Always compare the two most recent revisions when creating a DIFF file to ensure systems remain identical.

Q22) How do I take changes from a higher level environment and push them down to a lower environment with ReleaseBuilder?

A22) Typically ReleaseBuilder is a one-way set of changes, promoting from a lower environment to a higher environment. Taking a set of changes from a higher environment to a lower environment with ReleaseBuilder is NOT recommended and will not be commented on in this article.

In practice, we realize emergency changes need to be made to the PROD environment during maintenance windows. The recommendation would be to update the lower environments with the SAME / identical changes, then push the changes back up to PROD during the next maintenance window. The net result should be the lower environments updated with the changes, while PROD itself has no effective change made to it.

Q23) Can I perform an full XMLAccess export/import on each deployment?

A23) Yes, this is technically possible - but time-consuming. Each portlet would be redeployed if you attempted this, which in a PROD environment would impact live end users. You would only want to perform this technique with a VERY small number of portlets - OR - preferably in an environment that would not impact any PROD end users. Assuming a mitigation of impact to PROD end users - you will need an active/passive configuration and would want to run empty-portal on your passive configuration each time you wish to promote changes from a lower environment to the passive environment. Thereafter you can use the full XMLAccess export/import (using either ExportRelease.xml or ExportManagedPagesRelease.xml depending on your configuration) to copy data between environments.

Q24) I need to baseline my environments. I have a running PROD system. What do I do?

A24) This is one of the least fun questions to try to answer. You have two choices:
i. Promote all changes to your PROD environment - make it the "golden" copy. Follow the steps in Q13 to copy the PROD environment to the lower environments. This - unfortunately - will require all new work in the lower environments to be put on hold, as they will be wiped to bring in a perfect golden copy of PROD.
ii. Synchronize your lower environments / baseline them with each other. During a maintenance window, follow the steps in Q13 to bring in a perfect copy of QA to PROD. This option is often NOT implemented in practice unless an active/passive configuration is in place. Quite frankly, asking the business team for permission to wipe a PROD environment of most of its data - even with sufficient backups in place - is a discussion that typically doesn't go over well. The first option thus is pursued most frequently in practice.

Q25) Is there a required naming convention for DIFF files?

A25) There is not a required naming convention. In practice we will see the following naming which incorporates a timestamp, the name of the SOURCE and TARGET environments, and (optionally) the hostname of the SOURCE and TARGET environments, e.g:
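For example (hypothetical environment and host names):

```
2016.02.01_2016.03.01_STAGE_to_PROD_diff.xml
2016.02.01_2016.03.01_STAGE_to_PROD_stagehost01_prodhost01_diff.xml
```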

Note: The use of the terms REV1, REV2, etc. is used in technical terminology but rarely used in practice. The timestamp itself can help signify when the deployment occurred.

Q26) Can we customize the location portlets are stored in to a location other than /deployed/archive?

A26) Yes - this is technically possible but not recommended. In an XMLAccess export, a tag is used to denote the location of the portlet on the filesystem. Typically it points to the /deployed/archive directory. While it is possible to update this value to a different directory, e.g. /MyNASDirectory/portlets - doing so and ensuring all systems enforce this configuration often proves to be more costly than leaving the default configuration as-is without modification.

Q27) Can the ReleaseBuilder process be automated?

A27) Yes. Both IBM Lab Services and IBM business partners are confirmed as having automated solutions available. Unconfirmed (but strongly suspected) is additional third-party solutions being available that can perform the automation. The automation itself, from a high level, is a series of custom scripts - Bash shell, Python, the choice of implementation varies - that execute the tools needed to perform the ReleaseBuilder deployment.

Q28) Do these automated / third-party deployment tools ultimately use the IBM tooling under the hood?

A28) Yes - under the hood such tools ultimately invoke one or more WAS/Portal tools to perform the deployment, i.e. they are effectively wrappers around the tools that perform the heavy lifting. This is not to discount said custom build tools' usefulness - indeed they can simplify a good portion of the integration of all the disparate tooling - however, at the end of the day they are not performing the actual work to commit the changes - product-level tools provided by WAS/Portal perform those changes.

Q29) Does managed pages impact which files I use to create the DIFF file?

A29) Yes - see Q20. If you use Managed Pages with syndication, use ExportManagedPagesRelease.xml to create your DIFF file. If you do not use Managed Pages with syndication, use ExportRelease.xml to create your DIFF file. Most instructions provided by IBM assume managed pages is NOT in use; thus, you will often see ExportRelease.xml used in examples.

Section 4: Syndication, Web Content and Managed Pages

Q1) What is IBM's recommended syndication strategy across a large number of environments?

A1) The following model was derived from one of the individuals noted in the acknowledgements section.

The following model conceptually assumes two PROD environments - a PROD AUTH environment where content authors originate new web content, AND a PROD REND environment where end users access the real data. Both are treated as true PROD environments and therefore need their own respective QA environments in which to validate changes before pushing to PROD.

Additional commentary:

- WCM design elements (presentation template, authoring template, etc.) represent a significant change to the Portal site and are inherently riskier. Such changes are NOT pushed automatically. WCM content changes are less risky (e.g. a minor word change from a negative connotation to a positive connotation) and may be pushed automatically.

- SANDBOX environment - not pictured. Purely experimental and may blow up at any time. Useful for proof of concept of new features.

- DEV environment - originates theme changes and portlet changes. Use ReleaseBuilder to move changes to QA AUTH.
- QA AUTH environment - validates changes from DEV work as expected. Changes may be rejected and sent back to DEV for further work. Pushes to QA REND and PROD AUTH to further validate changes.
- QA REND environment - no changes originate outside of this environment. This is the true "validation" environment. If it doesn't work here, it doesn't get pushed to either of the PROD environments.
- PROD AUTH environment - changes from QA AUTH (including theme/portlet changes) are validated in QA REND before being pushed here. Once validated, content is added to "complete" the page and previewed in PROD AUTH. Thereafter, pushed to PROD REND as completed.
- PROD REND environment - the final environment all changes are pushed to. By this point everything should be bulletproof / fully tested, with multiple sets of eyes having reviewed it in lower environments.

Q2) What is IBM's recommended syndication strategy for a single environment?

A2) The following answer assumes managed pages is enabled.
- Setup your WCM content and WCM design (similar to Q1) into separate libraries.
- Do NOT store any of your own custom content or design elements in the Portal Site Library. If IBM code stores data in there, let it, but otherwise keep your own custom WCM libraries independent of the Portal Site Library to manage data.
- You may automatically syndicate all libraries _EXCEPT_ the Portal Site Library between the base portal and virtual portal(s) within a single environment. This ensures all design/content elements - e.g. branding changes and actual content updates - are automatically updated between the base and virtual portals.
- Note, in some cases - such as virtual portals separated by lines of business that do NOT share any common branding or content items - it may make sense not to have syndication between the base portal and virtual portal(s), since the libraries do not need to stay in sync. Hence, the answer to this question has a large element of "it depends" in determining what syndication strategy makes sense for your environment. Contact IBM Support with any questions.

Q3) What is cross-version syndication?

A3) A Portal migration - whether a traditional migration through WAS/Portal commands, OR a manual migration exporting/importing data between environments - can take a fair bit of planning and therefore a fair bit of time to complete. Halting all new deliverables to the current PROD environment (example, Portal v7.0) while the new environment is being built up (example, Portal v8.5) is often not acceptable to the business. We MUST have a means to deliver new content to BOTH environments. Hence the introduction of Portal cross-version syndication. With cross-version syndication, you may update a PROD AUTH environment on a lower Portal version (say v7.0) and syndicate that content both to the same-version PROD REND environment (v7.0) AND to a newer Portal version environment (say v8.5) that is not yet live. Thus when the changeover is made from v7.0 to v8.5, the environments are roughly equivalent.

Q4) Can I cross-version syndicate the Portal Site Library?

A4) This is not supported. Equally important (arguably more important), this doesn't work technically, and cross-version syndication WILL fail on the Portal Site Library if you attempt it. Why? The traditional Portal migration process removes pages that are no longer applicable in newer Portal versions. Cross-version syndication does NOT have similar logic in place to determine what is good in a previous version of Portal vs. what is no longer good in a newer version of Portal. It will simply attempt to syndicate, and will not try to logically reconcile differences between the two major versions.

With new WCM Libraries, such reconciliation need not be performed. With the Portal Site Library - which affects Portal pages in the release database - further consideration must be given.

Q5) How do I remove WCM libraries?

A5) You may remove individual WCM libraries via the Web Content Libraries administration portlet in the Portal Administration area. However, in practice we have found that libraryB may have a dependency on libraryA, and you cannot remove libraryA without first removing the dependency that libraryB has on libraryA. Resolving such references ... if they involve hundreds or THOUSANDS of items ... is not trivial. As a result, removing a single library in practice may not be feasible; it may be necessary to remove multiple WCM libraries simultaneously.

Presently the UI in the Portal administration area does not allow for removal of multiple libraries simultaneously. You must use a ConfigEngine configuration task to remove multiple WCM libraries simultaneously. *****

Q6) I have a requirement to completely automate my system setup. How can I setup syndication/subscriber pairing by command-line / without using the Portal Administration console?

A6) *****

Q7) I have a large amount of web content. Do I need to manually syndicate all of that content, OR, is some alternative available?

A7) There are three primary methods of moving web content between environments:
i. Syndication
ii. Configuration tasks (notably some invocation of export-wcm-data and import-wcm-data)
iii. Database cloning

Method #1 has the advantage of ensuring all data elements are copied between environments. The disadvantage is that for a sufficiently large amount of web content (say 100GB or more of data), this process can be TIME-CONSUMING, and an alternative may be sought to help reduce the time needed to copy that much data.

Method #2 has the advantage of containing all data within a filesystem so it is easily transportable between environments (this is ultimately what a PAA file does under the hood). The disadvantage is that not all information may be transmitted with this method - for example - WCM versions are not preserved.

Method #3 is preferred for larger environments. You may clone the JCR database and end up with the same data in the SOURCE and TARGET environments, and this can be MUCH faster than method #1 or #2 for copying a large amount of data. Disadvantages include SEVERE limitations for this method in Portal 8.0 (author's personal commentary - please just don't try it in Portal 8.0 - contact IBM Support if you do wish to attempt). For Portal 8.5, the same limitations do not exist - however, it is recommended you disable BOTH syndication and Portal search before attempting to clone the JCR database. One practical consideration with database cloning ... will the database schema owner in SOURCE (such as QA) be the same as the database schema owner in TARGET (such as PROD)? While this can be fixed, if you plan on utilizing JCR cloning, ensure such considerations are weighed AHEAD of time vs. finding out about them in the middle of a tight maintenance window.
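For method #2, a hedged sketch of the export side follows. The -Dexport.* parameter names below are assumptions recalled from the product documentation - verify them against the documentation for your exact Portal level - and the command is only composed and echoed, not executed:

```shell
# Compose an export-wcm-data invocation (parameter names are assumptions).
ce=/opt/IBM/WebSphere/wp_profile/ConfigEngine/ConfigEngine.sh  # typical path
cmd="$ce export-wcm-data -DWasPassword=*** -DPortalAdminPwd=*** \
  -Dexport.singlelibrary=true -Dexport.libraryname=MyContentLib \
  -Dexport.directory=/tmp/wcmexport"
echo "$cmd"
```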

Q8) I saw something about WCM library import and a rename conflict flag?

A8) *****

Q9) What is the Audit Service in WebSphere Portal? Why is it important for syndication?

A9) SystemOut logs are diagnostic for what went wrong; audit logs tell you who did what. A real story: all pages were deleted in PROD AUTH, and the deletion auto-syndicated to PROD REND. Who did it? Without audit logs there was no way to determine. ***** Backups saved the day, but audit logs were needed for root cause.

Q10) Are there any naming conventions I should be using for syndication?

A10) *****

Q11) OK, so, I conceptually understand there are primary keys in the Portal database that need to be maintained between different Portal environments. Portal tooling (XMLAccess, Syndication, etc.) helps preserve those primary keys. What about fresh installs of Portal? Will they have the same primary keys?

A11) YES. Out of the box, Portal v8.0 and later installations are GUARANTEED to have the same objectIDs between different systems. This allows you to perform syndication of managed pages and other IBM-provided portlets immediately with no additional configuration required. Portal cumulative fix upgrades also GUARANTEE the same objectID / uniqueness is preserved on IBM-provided pages/portlets/etc. artifacts.

CAVEAT: Portal 8.0 base installations. These will have the same objectIDs on installation, but the moment you attempt to add additional features (WCM, personalization, etc.), those added features may produce different objectIDs / bring the systems out of sync. Portal 8.5 does NOT offer a base installation primarily (though not exclusively) for this reason. Full installations are recommended for Portal 8.0 and Portal 8.5 if you plan on utilizing the managed pages feature.

Q12) What are content-mappings and system content-mappings I see in my XMLAccess exports?

A12) *****

Q13) How do I remove versioned items from the JCR database?

A13) *****

Q14) How do I purge expired items from the JCR database?

A14) *****

Q15) How do I remove deleted items from the JCR database?

A15) *****

Section 5: Portal Application Archive

Q1) How do I update to the Script Application?

A1) This answer focuses specifically on Portal 8.5 only. The Script Portlet was introduced via the IBM Greenhouse catalog as v1.2 or v1.3. Installing Portal 8.5 CF09 or CF10 would automatically update the Script Portlet to the Script Application v1.4. Installing Portal 8.5 cumulative fix 11 or later will automatically update the Script Portlet (any version) to the Script Application v1.4. If you installed the Script Portlet to virtual portals, those will be taken into consideration as well and the Script Portlet --> Script Application conversion will occur there too.

WARNING: Going from Portal 8.5 CF09/10 to Portal 8.5 CF11/CF12 does NOT correctly update the Script Portlet to the Script Application. APAR PI70882 (included in Portal 8.5 CF13) addresses this issue. See the APAR text for a workaround to manually correct affected CF11/CF12 systems.

NOTE: The reason this question is listed in this section is that if you attempt a PAA deployment on a system upgraded from CF09/CF10 to CF11/CF12, it WILL fail due to this APAR. PAA deployments are not the only operation that will fail under these circumstances (any XMLAccess operation in Portal that attempts to operate on the Script Application will also fail), however, it is the operation most commonly seen in the field.

Q2) How do I move the Web Content Management Multi-Lingual Solution (MLS) between systems?

A2) For initial deployments, use build-initial-release-paa. This will capture ALL of the data needed to copy this between systems - EXCEPT for the actual MLS PAA file itself. This actually is not a huge concern - the PAA itself is a zip file containing elements of the Portal server. Thus, when you copy a LARGER zip file with all elements (build-initial-release-paa) vs. a single zip file (the MLS file) with fewer elements, you are including a superset of all the data. Well, what about updating the actual PAA files? The next time you upgrade the SOURCE and TARGET systems to the latest Portal cumulative fix, the cumulative fix installation will detect that MLS is installed and update all files as needed. No further actions are needed (e.g. you do NOT need to manually run install-paa but skip deploy-paa on TARGET ... that would get complicated in a hurry to maintain across multiple environments).
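A hedged sketch of the initial capture follows. The task name comes from the text above; the two credential parameters are the standard ConfigEngine ones, the profile path is illustrative, and the command is only composed and echoed, not executed:

```shell
# Compose the build-initial-release-paa invocation (path is illustrative).
ce=/opt/IBM/WebSphere/wp_profile/ConfigEngine/ConfigEngine.sh  # typical path
cmd="$ce build-initial-release-paa -DWasPassword=*** -DPortalAdminPwd=***"
echo "$cmd"
```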

Note: IBM products that are officially supported - such as MLS - are guaranteed to maintain the same uniqueness across multiple systems and will NOT change during an upgrade.

Q3) How do I move the Web Content Management Content Template Catalog (CTC) between systems?

A3) For initial deployments, use build-initial-release-paa. This will capture ALL of the data needed to copy this between systems - EXCEPT for the actual CTC PAA file itself. This actually is not a huge concern - the PAA itself is a zip file containing elements of the Portal server. Thus, when you copy a LARGER zip file with all elements (build-initial-release-paa) vs. a single zip file (the CTC file) with fewer elements, you are including a superset of all the data. Well, what about updating the actual PAA files? The next time you upgrade the SOURCE and TARGET systems to the latest Portal cumulative fix, the cumulative fix installation will detect that CTC is installed and update all files as needed. No further actions are needed (e.g. you do NOT need to manually run install-paa but skip deploy-paa on TARGET ... that would get complicated in a hurry to maintain across multiple environments).

Note: IBM products that are officially supported - such as CTC - are guaranteed to maintain the same uniqueness across multiple systems and will NOT change during an upgrade. This answer is almost a direct copy/paste of Q2 in this same section; this is intentional, emphasizing that items which are officially supported have a guaranteed quality level and consistency associated with deployments across multiple environments.

Q4) What is the "WCM Support Tools" portlet? How do I use it?

A4) The WCM Support Tools portlet is a read-only portlet that lets you explore data as it exists in the JCR database. Some items - such as web content - will appear similar to how they appear in the Web Content Libraries portlet. Other items - such as theme elements, policy nodes, etc. - will be different. The WCM Support Tools portlet is an optional download and is NOT required to be deployed on systems. IBM L2/L3 support may request it be downloaded and installed in specific situations to help troubleshoot issues for which it may be difficult to enable verbose tracing in a PROD environment. The WCM Support Tools portlet may be downloaded from here: https://greenhouse.lotus.com/plugins/plugincatalog.nsf/assetDetails.xsp?action=editDocument&documentId=AE2BB2412F20AA318525772E006F7014

Q5) What is the CacheViewer portlet? How do I use it?

A5) WebSphere Application Server offers a form of caching available to all WAS-based products known as Dynacache. In memory, Dynacache in its barebones form is a distributed HashMap. In an extraordinarily complex application such as Portal that may serve thousands of users per second, maintaining a distributed HashMap across multiple servers in a cluster becomes a complex undertaking.

For WebSphere Portal specifically, a downloadable portlet is available that can indicate how full a given Dynacache is on a specific Portal server, the % of hits to the cache, the % of misses to the cache, etc. The added benefit of the CacheViewer portlet is that it ALSO provides recommendations SPECIFIC to WebSphere Portal caches for adjustments to make. Other Dynacache analysis tools - such as the WebSphere Application Server CacheMonitor.ear application, OR monitoring tools - may be able to provide statistical information but NOT recommendations for cache tuning based on those statistics. The Cache Viewer portlet provides both the statistics and the tuning information all in one package. The Cache Viewer portlet may also invalidate caches (both in the current server AND across the entire cluster!) with the click of a button for testing purposes. The Cache Viewer portlet may be downloaded from: https://greenhouse.lotus.com/plugins/plugincatalog.nsf/assetDetails.xsp?action=editDocument&documentId=81637C45380C3B4E85257CEB00451825

Q6) What effect does flipping managed pages from "enabled" to "disabled" have?

A6) Managed pages has a SIGNIFICANT effect on the JCR database. Let's talk about Portal versions (v7 and prior) that did not have managed pages. In Portal v7 and prior, all Web Content Libraries were shared among the base and all virtual portals. While it was possible to secure access to web content by specific LDAP group(s), in practice this became extraordinarily difficult to maintain. In some cases, it was desirable for one virtual portal with a specific LDAP group to have access to the web content ... BUT ... a second virtual portal with the SAME LDAP group was not to have access to that web content. Seems odd, but in practice IBM has observed such business requirements. With Portal v7 and prior, given the WCM libraries were shared, separating the libraries by permissions alone did not always meet requirements.

With Portal 8.0 and later, the managed.pages setting was introduced in the WP ConfigService Resource Environment Provider section of the Deployment Manager (DMGR) console. You may think, well, GREAT, I don't use managed pages, no harm no foul, right? Well ... the setting applies to more than just managed pages. In particular, managed.pages=true in the WP ConfigService Resource Environment Provider will allow managed pages syndication to work, but more importantly, will FORCE a separation of WCM content libraries. Namely, the base portal will have its own copy of the WCM libraries, each virtual portal will have its own copy of the WCM libraries, etc. with managed.pages=true. With managed.pages=false in Portal v8 or later, the behavior reverts to that of Portal v7 and prior, meaning WCM libraries are shared everywhere. With managed.pages=true, the WCM libraries are separated by base and virtual portals.

The WCM Support Tools portlet (see Q4) can demonstrate this change most notably. Start with a system with 1 base portal and 2 virtual portals with managed.pages=false, and look up the JCR workspace in the WCM Support Tools portlet. You'll note a single JCR workspace, meaning a single location in the Portal JCR database for WCM content libraries to be stored. If you change managed.pages to true in the WP ConfigService Resource Environment Provider, the updated configuration takes effect on the next Portal server restart (NOT when you make the change, but at the next restart!). In particular, the Portal JCR database will reorganize itself such that the WCM libraries are separated per portal. Once the reorganization takes place - that is, changing from managed.pages = false to managed.pages = true - the WCM Support Tools portlet will show multiple JCR workspaces (one for the base portal and each individual virtual portal) rather than a single JCR workspace. (Going the other direction, from true to false, libraries are consolidated back into a single JCR workspace, with the base portal taking priority should a conflict exist.)

NOTE: IBM does NOT recommend switching between managed.pages = false and managed.pages = true on a whim. Such decisions are intended to be a ONE-TIME effort, e.g. during a migration, and should NOT be performed multiple times. Reorganizing the JCR database "on the fly" / outside of well established controls may lead to unexpected outcomes. From field experience, we have seen a VERY high success rate with the conversion, but in cases where it has NOT gone over smoothly (for one reason or another...), it can be difficult to troubleshoot and recover from. Contact IBM Support if you have questions about this setting and what effect(s) it will have on the Portal JCR database.

NOTE - some of these options ARE mutually exclusive. For example, exportPortalSiteLib and exportManagedPages should be used together in a SPECIFIC combination. While it is technically possible to use them in other combinations, the end result - frankly - will not produce anything usable. We recommend consulting the following document - Step-By-Step Guide to performing staging to production using Portal Application Archive in WebSphere Portal 8.5 - for the options which will apply to 80%+ of Portal environments. Contact IBM Support with further questions.

Q8) How do I guarantee that all of my Portal server(s) will route through a given web server as I move across environments? I do NOT want my PROD system to route through my QA web server!

A8) There are two answers to this:
- a. The web server itself, via the plugin-cfg.xml file, will ALWAYS point to the specific Portal servers it will use. Therefore, in practice it is NOT possible for a QA web server to try to contact PROD Portal servers to serve traffic. Further, the firewall rules in place will often prevent any such "crossing of streams" from occurring.
- b. Independent of safeguards in the web server/firewall, there are settings in WebSphere Portal which will guarantee a specific host:port combination will ALWAYS be used. For example - let's suppose you wanted to route ALL web traffic to a specific hostname for load balancing purposes and also a specific port - how would you accomplish that? With WebSphere Portal as an application, the links that are generated can either defer to the incoming host:port headers (default behavior) - OR - can be hardcoded to a specific host:port combination no matter WHAT the incoming request states. The author of this document comments this is a DELICATE balance between trusting the incoming information vs. forcing a hardcoded value to enforce security. Both have their pros/cons; choose the one that makes the most sense for your deployment. For a hardcoded set of values, you may perform the following actions:
i. Login to the DMGR
ii. Navigate to Resources > Resource Environment Providers > WP ConfigService > Custom Properties
iii. Locate the following custom properties
host.name
host.port.http
host.port.https
iv. Update each of the values of the custom properties to a hardcoded value of your choice. In practice:
host.name = frontendloadbalancer.hostname.com
host.port.http = 80
host.port.https = 443
v. Save changes. Sync nodes. Restart the Portal server(s).

Q9) I have virtual portals with hostnames. How does that work across deployments?

A9) Virtual portal hostnames are great for PROD environments. However, moving changes from non-PROD to PROD environments requires some knowledge on the part of the Portal administrator to execute.

For example, if you run the following xmlaccess command on DEV:
./xmlaccess.sh -user wpsadmin -password wpsadmin -url http://vphostname.com:10039/wps/config -in /opt/IBM/WebSphere/PortalServer/doc/xml-samples/ExportRelease.xml -out /tmp/vp.exportrelease.xml

...this WILL work correctly. The hostname vphostname.com for the DEV system will resolve and allow the XMLAccess export to work. HOWEVER, what happens if you execute the same command on a QA system? Will DNS resolve the DEV virtual portal hostname ... OR the QA virtual portal hostname? Which system will actually produce the correct XMLAccess export ... can you trust each respective system to produce the correct XMLAccess export?

In practice, what is recommended for system administration purposes is NOT to use the hostname of the virtual portal - as that is LARGELY dependent on how DNS resolves on the individual system where XMLAccess is run. Instead, use the special context root associated with that virtual portal. The special context root can be observed via the Virtual Portal administration portlet - OR - you may use the ./ConfigEngine.sh list-all-virtual-portals administration task. A practical example:
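As a hedged practical sketch (the hostnames and the context root "hrvp" are invented for illustration; the pattern of appending the virtual portal's context root to /wps/config is the point), the commands are composed and echoed here rather than executed:

```shell
# The SAME context root is appended to /wps/config on every system, so the
# command targets the right virtual portal regardless of DNS resolution.
vp_context=hrvp
for host in dev-portal.example.com qa-portal.example.com; do
  url="http://${host}:10039/wps/config/${vp_context}"
  echo "./xmlaccess.sh -user wpsadmin -password *** -url $url" \
       "-in ExportRelease.xml -out /tmp/${host}.exportrelease.xml"
done
```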

Note in particular the -url is different for the SOURCE and TARGET environments. Correct objectIDs ARE produced as a result of the commands and the environments (for all intents and purposes) are identical - however - the syntax of the commands is NOT dependent on DNS, and the virtual portal URLs are guaranteed in such cases to resolve independent of DNS.

Q9) OK, I get the idea of using ReleaseBuilder to perform DIFF updates. How can I perform similar "DIFF" updates using PAA?

A9) At this time - October 2016 - PAA itself can NOT be used to perform "DIFF" updates in the manner by which ReleaseBuilder performs them. For example, ReleaseBuilder will perform a DIFF comparison based on the two input files you give it, e.g. REV1 and REV2. PAA, in contrast, will ALWAYS assume REV0 as the first file, AND REVN as the next file you give it.

So ... why is this problematic? Comparing the "first" and the "latest" (a la PAA) should produce a correct end result, right? Well, possibly/maybe, but not always. Consider the following scenario.

REV0: Golden copy. Everything is 100% perfect as-is.
REV1: Three months later. One portlet is removed due to business requirements. No problem.
REV2: Six months later. The same portlet is recreated due to business requirements.

OK, so, REV0/REV1 would create a DIFF1.xml file which would delete the portlet.
Thereafter, REV1/REV2 would create a DIFF2.xml which would recreate the portlet. GOOD HERE with ReleaseBuilder. The old objectID would be removed, AND, the new objectID would be created.

NOW, let's take PAA which uses ONLY REV0/REVlatest, in this case REV2.
OK, so, REV0/REV2 would create a DIFF1.xml file which would NOT recognize the deletion/recreation of the portlet.
The old objectID would NOT be removed, AND, the new objectID would NOT be created.

Therefore, at this time (October 2016) we recommend PAA ONLY for initial deployments; ReleaseBuilder should be used for subsequent deployments. If you wish to use PAA for subsequent deployments in a manner that will guarantee environments remain in sync - please open a feature request, as PAA presently cannot make such guarantees given it is (presently) always set to use REV0 for its comparison: https://www.ibm.com/developerworks/rfe/

So where is the "DIFF" that PAA offers helpful? Answer: if you intend to perform a one-time update, and ONLY ONE update, it can be useful. In practice we realize there are multiple updates over the course of a system's life and hence recommend using ReleaseBuilder for subsequent updates.

Q10) OK, got it - data in the Portal database is fairly sensitive to changes with respect to objectIDs. How do I handle non-Portal data that is not in the Portal database? e.g. Independent .ear file updates?

A10) The Portal server/cluster itself does NOT care about independent .ear files that you deploy to the WebSphere Application Server(s) ... so long as those applications do NOT have any direct interaction with the Portal server. The MOMENT you need those applications to be integrated with and recognized by the Portal server, Portal must register them in its database. In technical terminology, this is what Portal refers to as "predeployed applications" ... applications (.ear/.war - either one) deployed to the WAS server that the Portal database initially has no awareness of. Once the Portal database needs to have awareness of / register those WAS applications as associated with the Portal server, the game plan changes slightly: we need to ensure this is not purely a .war/.ear application change that occurs across systems, but also a Portal database change that occurs across systems.

Generally speaking, while such applications - aka "predeployed applications" - CAN be made to work with Portal, IBM would ask overall: "why would you not include these applications as part of your deployment process with Portal / why keep them separate?". From field experience, we have seen a handful of valid answers - however, in the vast majority of cases it results in minor changes for application developers (create a JSR 286 .war instead of a pure J2EE .ear) which will bridge the gap between pure development and operations. As always, please feel free to contact IBM Support with any planning questions. The details are often the make-or-break on a case-by-case basis, and specifics can be discussed to determine what makes sense for a particular set of use case requirement(s).

Q11) What is the difference between install-paa and deploy-paa? When do I use each?

A11) *****

Q12) What is the difference between remove-paa and uninstall-paa? When do I use each?

A12) *****

Q13) I made an update to my application in DEV. How do I repackage this into a custom PAA and ensure it gets pushed to higher-level environments?

A13) *****

Section 6: Misc. Topics

Q1) How can I migrate from Portal 6.0 or Portal 6.1 to Portal 8.5?

A1) Both migration methods - traditional and manual - are described in detail by the Portal chief migration architect here. In practice, we have found the manual migrations to have a high success rate. A prerequisite for the success of a manual migration is having a skillset with XMLAccess scripting to troubleshoot any issues that may arise when importing page/portlet data. The cross-version syndication of WCM content thereafter is fully tested and supported and generally works well between environments.

Q2) What is the Portal Page Migration tool?

A2) Portal pages in Portal 6.1 typically were created as a "standard" page. This meant you could choose from a number of predefined IBM page layouts, such as 1column, 2column, etc. However, customized layouts and other advanced features were often difficult to incorporate into standard pages. Starting with Portal 7.0 - the default type of page created for a new page is a "static" page. Static pages allow the advanced functionality and also can be modified using the newer modular theme architecture in Portal 8.0 and 8.5.

The migration process - either traditional or manual - will not attempt to convert from a standard page layout to a static page layout. Deleting the standard pages and recreating them individually as static pages certainly is an option; however, if there are hundreds or thousands of such pages, a tool which can automatically update the pages is preferred. A separate tool is available - the Portal Page Migration tool - which can convert from a standard page layout to a static page layout. See the following link for more details on the tool and how it can be utilized in your Portal environment.

Q3) If I want to use managed pages in Portal 8.0 or 8.5, what is the recommended approach to migrating the environments?

A3) Let's suppose you have five different environments - normally we would look to migrate each of the five environments with their current data. However, if we intend to use managed pages, all of the environments MUST be baselined with each other / have the same objectIDs. Thus, migrating five different environments will not do us much good if we intend to make one of those environments the "golden" environment of which the other four environments will become perfect copies. Thus, the recommendation is to choose a single environment to be the golden copy, migrate that environment, then perform a baseline procedure (see section 3) post-migration to the other four environments. This will ensure all five environments have the same objectIDs and managed pages will work post-migration.

Q4) I have two environments - one of those environments was migrated, the other is not-migrated. How do I get managed pages to work between both environments?

A4) Ideally such a scenario should not occur - that is, new work should not be started before an environment is migrated - however, we know in practice this situation can come up. Pick an environment to become the golden copy, then baseline the other environment from it. Most likely the migrated environment will be the golden copy and the non-migrated environment will be blown away / baselined from the migrated copy. Any existing work in the non-migrated environment should be moved to the migrated environment to ensure it is not lost during the baselining.

Q5) I need SSL configured across my entire deployment. I have an SSL offloader in my architecture. Why are some links being generated with HTTP rather than HTTPS?

A5) Many WebSphere Application Server products (not just WebSphere Portal!) will see the incoming traffic from the web server as non-SSL and generate HTTP links as a result. The HTTP links are sent to the web browser, and the web browser's attempt to access an HTTP link through the SSL offloader will fail because the offloader requires all connections to be over HTTPS.

WebSphere Application Server offers a configurable parameter to account for this scenario. See this Technote for more details: http://www-01.ibm.com/support/docview.wss?uid=swg21221253. The end result: Portal will detect/honor the header indicating an SSL offloader is present and generate HTTPS links rather than HTTP links.
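As a rough sketch of what the configuration change in that Technote looks like, the WAS web container custom property can be set with wsadmin. Everything here is an assumption to be verified against the Technote for your release: the cell/node/server names are placeholders, and "X-SSL-Offload" stands in for whatever header name your offloader actually injects.

```python
# Hedged wsadmin (Jython) sketch - NOT a standalone script. The property
# "httpsIndicatorHeader" tells the web container to treat requests carrying
# the named header as SSL, so HTTPS links are generated. Cell/node/server
# names and the header value are placeholders for your environment.
server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:WebSphere_Portal/')
wc = AdminConfig.list('WebContainer', server)
AdminConfig.create('Property', wc,
                   [['name', 'httpsIndicatorHeader'],
                    ['value', 'X-SSL-Offload']])
AdminConfig.save()
# Restart the Portal server, then configure the SSL offloader to add the
# X-SSL-Offload header to every request it forwards.
```

The offloader-side header name must match exactly; if the header is missing or spelled differently, the web container falls back to treating the traffic as plain HTTP.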

Section 7: Fixing Common Issues

Q1) I created an artifact named "test". I deleted it. I want to recreate the same artifact named "test". It's throwing an error. Why is this happening?

A1) When you delete a non-content item in WebSphere Portal, by default it is not removed from the release database immediately. Such operations can be complex and resource intensive, so they are instead queued for later processing. The item in the database is marked for deletion, but is not actually removed until a later time. A scheduled task (similar to a cron job) exists that will clean up the Portal database twice a week during early morning hours. If an immediate cleanup is required - such as to delete and recreate a page with the same name - run the Task.xml XMLAccess script, which forces an immediate cleanup of the release database.
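For reference, the cleanup can be triggered with the xmlaccess client along these lines. This is a sketch only: the install paths, host name, port, and credentials below are placeholders for your environment.

```shell
# Hedged example - all paths, host/port, and credentials are placeholders.
# The Task.xml sample ships with the product under the PortalServer
# doc/xml-samples directory; adjust the paths for your installation.
cd /opt/IBM/WebSphere/wp_profile/PortalServer/bin
./xmlaccess.sh -user wpsadmin -password mypassword \
    -url http://localhost:10039/wps/config \
    -in /opt/IBM/WebSphere/PortalServer/doc/xml-samples/Task.xml \
    -out task_result.xml
```

Check task_result.xml for a successful result, then monitor SystemOut.log for the cleanup task as described in the notes below.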

Note #1: This script is global and need only be run once to apply to the base portal and virtual portal(s).
Note #2: The starting and completion of the cleanup is an asynchronous event, i.e. you may continue using Portal while it's running. Monitor the SystemOut.log files: the task begins immediately after the XMLAccess command is run and completes thereafter. Typically the task will run in seconds, but it may take a few minutes if the number of deletions is large.
Note #3: This will NOT purge the JCR database of versioned items, expired items, or deleted items. Separate actions are needed to remove those items.

Q2) My lower environments have a specific DN structure associated with their data, e.g. dc=IBMDEV,dc=COM. My PROD environments have a different structure, e.g. dc=IBM,dc=COM. What do I do to handle this mismatch as data moves across environments?

A2) In your ReleaseBuilder/xmlaccess scripts, you may safely find and replace "dc=IBMDEV,dc=COM" with "dc=IBM,dc=COM" and achieve a functionally correct end result.
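As a sketch, that find-and-replace can be scripted. The file name below is hypothetical - substitute your actual ReleaseBuilder differential or XMLAccess export.

```shell
# Replace the lower-environment DN suffix with the PROD suffix in place.
# "release_diff.xml" is a placeholder file name; -i.bak keeps a backup
# copy of the original file with a .bak extension.
sed -i.bak 's/dc=IBMDEV,dc=COM/dc=IBM,dc=COM/g' release_diff.xml
```

Run the replacement before importing the file into the PROD environment, and spot-check the result to confirm no other strings in the export accidentally matched.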

Q3) My DEV system filesystem is filling up. I've found out most of the data is coming from wp_profile/PortalServer/deployed/archive. Can I delete this directory?

Q4) What happens if I use Export.xml instead of ExportRelease.xml, ExportManagedPagesRelease.xml or ExportUniqueRelease.xml with ReleaseBuilder?

A4) Export.xml contains non-release data - in particular, domain="cust" and domain="comm" data and policy-node elements - and ReleaseBuilder will error out on it. If no other exports are available for use with ReleaseBuilder, you can edit the XMLAccess export taken with Export.xml and remove all non-release database data (namely the domain="cust", domain="comm", and policy-node items) to make it equivalent to an XMLAccess export taken with ExportRelease.xml. Contact IBM Support if you have questions on how to perform these manual actions.
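A minimal sketch of that manual cleanup, assuming the non-release items are exactly the elements carrying a domain="cust" or domain="comm" attribute plus any policy-node elements, as the answer above describes. The function name is hypothetical; verify the result against a real ExportRelease.xml export before importing, and involve IBM Support if in doubt.

```python
# Hedged sketch: strip non-release data from a full Export.xml so it
# approximates an ExportRelease.xml export. Assumes the offending items
# are elements with domain="cust"/"comm" attributes and policy-node tags.
import xml.etree.ElementTree as ET

def strip_non_release(in_path, out_path):
    tree = ET.parse(in_path)
    root = tree.getroot()
    # ElementTree has no parent pointers, so walk every element and
    # remove offending children from each parent. Snapshot the iterator
    # with list() because we mutate the tree while walking it.
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag == 'policy-node' or \
               child.get('domain') in ('cust', 'comm'):
                parent.remove(child)
    tree.write(out_path, encoding='UTF-8', xml_declaration=True)
```

Removing an element drops its whole subtree, which is the intent here: everything beneath a cust/comm-domain node is non-release data as well.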

Q5) I am unable to delete a virtual portal in Portal 8.0 or later.

A5) Notably, in Portal 8.0 and later each virtual portal has its own copy of the WCM libraries. When a virtual portal is deleted, both the release data (pages/portlets/etc.) and the JCR data (web content) are deleted at the same time. The more web content in the system, the longer it will take to delete. The default timeout of 2 minutes for a database operation is good for normal run-time of the system - it protects the Portal database(s) from long-running transactions - however, it can be a limiting factor when performing database-intensive administrative activities such as this one.
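One hedged way to work around that limit for the duration of the delete is to temporarily raise the WAS transaction service timeout. This is a sketch, not a prescribed procedure: the cell/node/server names are placeholders, and the attribute name and appropriate value should be verified for your WAS release before use.

```python
# Hedged wsadmin (Jython) sketch - NOT a standalone script. Raises the
# transaction service's total transaction lifetime timeout (default 120
# seconds) so a large virtual portal deletion can complete. All names
# below are placeholders for your environment.
server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:WebSphere_Portal/')
ts = AdminConfig.list('TransactionService', server)
AdminConfig.modify(ts, [['totalTranLifetimeTimeout', '3600']])
AdminConfig.save()
# Restart the server for the new timeout to take effect, perform the
# virtual portal deletion, then restore the original value of 120.
```

Restoring the default afterward is important: the short timeout exists precisely to keep runaway transactions from hanging the Portal database(s) during normal operation.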

Appendix A: Terminology

S2P: Staging to Production
SOURCE: system where data originates
TARGET: system where data is imported from SOURCE
XMLAccess: command line tool to export/import Portal release database data
ReleaseBuilder: command line tool to compare two different XMLAccess exports of the same Portal server and produce a differential file showing changes made
REV1: Revision 1 - most commonly the Portal server before any changes were made. First of two files used as inputs to ReleaseBuilder
REV2: Revision 2 - most commonly the Portal server after changes have been made. Second of two files used as inputs to ReleaseBuilder
DIFF / Delta: the differential file created by ReleaseBuilder by comparing REV1 and REV2
PAA: Portal Application Archive
Syndication: a tool in the Portal administration console which can copy JCR data (WCM Libraries, Managed Pages, Managed Rules, etc.) between two different locations
One-way syndication: when syndication sends data from SOURCE to TARGET
Two-way syndication: when syndication sends data from SOURCE to TARGET, or TARGET to SOURCE
LOB: line of business
Base Portal: the main portal site - typically /wps/portal
Virtual Portal: a separate area of the Portal site with a different look and feel, typically assigned per LOB. Uses either a URL context or a hostname, e.g.

wps/portal/hr OR hr.ibm.com/wps/portal

but not
hr.ibm.com/wps/portal/hr

CF: Cumulative fix. Updated code levels and new features of WebSphere Portal delivered approximately every 8-12 weeks.

Appendix Y: Acknowledgements

- The WebSphere Portal Information Development team for providing the Product Documentation.
- David Batres, Staff Software Engineer at IBM. For countless whiteboard discussions to hash out various technical topics that contributed to many parts of this document.
- Jim Madl, IBM SEAL WebSphere Portal at IBM. For feedback about the technical content of this document. Also for a significant contribution about a best practices syndication model noted in Section 6.
- John DeBinder, Chief Programmer WebSphere Portal at IBM. For feedback about the technical content of this document.

Appendix Z: About the Author of this Document

Travis Cornwell is an Advisory Software Engineer at IBM working out of Research Triangle Park, North Carolina, USA. He began supporting the WebSphere Portal product in 2009 and is a subject matter expert in the areas of installation, configuration, administration, security, and performance. Travis has written numerous technical documents for WebSphere Portal and has been the primary technical reviewer/editor for many additional items.

If you have any feedback about the content of this document, Travis can be reached at: travis.cornwell@us.ibm.com.
If you encounter any failures following the steps in this document or have questions on how the content of this document can be utilized in your deployment process, you may open a PMR with WebSphere Portal Level 2 Support.