Preface

The Data Analyzer Administrator Guide provides information on administering Data Analyzer, including managing user access and report schedules and exporting and importing objects in a Data Analyzer repository. It also discusses performance tuning and server clusters.

The Data Analyzer Administrator Guide is written for system administrators. It assumes that you have knowledge of relational databases, SQL, and web technology.

Informatica Resources

Informatica Customer Portal

As an Informatica customer, you can access the Informatica Customer Portal site at http://my.informatica.com. The site contains product information, user group information, newsletters, access to the Informatica customer support case management system (ATLAS), the Informatica Knowledge Base, Informatica Documentation Center, and access to the Informatica user community.

Informatica Documentation

The Informatica Documentation team takes every effort to create accurate, usable documentation. If you have questions, comments, or ideas about this documentation, contact the Informatica Documentation team through email at infa_documentation@informatica.com. We will use your feedback to improve our documentation. Let us know if we can contact you regarding your comments.

Informatica Web Site

You can access the Informatica corporate web site at http://www.informatica.com. The site contains information about Informatica, its background, upcoming events, and sales offices. You will also find product and partner information. The services area of the site includes important information about technical support, training and education, and implementation services.

Informatica Knowledge Base

As an Informatica customer, you can access the Informatica Knowledge Base at http://my.informatica.com. Use the Knowledge Base to search for documented solutions to known technical issues about Informatica products. You can also find answers to frequently asked questions, technical white papers, and technical tips.

CHAPTER 1

Data Analyzer Overview

This chapter includes the following topics:
♦ Introduction, 1
♦ Data Analyzer Framework, 2
♦ Data Analyzer Basics, 4
♦ Security, 5
♦ Data Lineage, 5
♦ Localization, 7

Introduction

PowerCenter Data Analyzer provides a framework for performing business analytics on corporate data. With Data Analyzer, you can extract, filter, format, and analyze corporate information from data stored in a data warehouse, operational data store, or other data storage models. Data Analyzer uses the familiar web browser interface to make it easy for a user to view and analyze business information at any level.

You can use Data Analyzer to run PowerCenter Repository Reports, Data Profiling Reports, Metadata Manager Reports, or create and run custom reports.

The Reporting Service is the application service that runs the Data Analyzer application in a PowerCenter domain. You can create a Reporting Service in the PowerCenter Administration Console. For more information about the Reporting Service, see the PowerCenter Administrator Guide.

Data Analyzer works with the following data models:
♦ Analytic schema. Based on a dimensional data warehouse in a relational database. Data Analyzer uses the characteristics of a dimensional data warehouse model to assist you to analyze data. When you set up an analytic schema in Data Analyzer, you define the fact and dimension tables and the metrics and attributes in the star schema.
♦ Operational schema. Based on an operational data store in a relational database. Use the operational schema to analyze data in relational database tables that do not conform to the dimensional data model. When you set up an operational schema in Data Analyzer, you define the tables in the schema, identify which tables contain the metrics and attributes for the schema, and define the relationship among the tables.
♦ Hierarchical schema. Based on data in an XML document. A hierarchical schema contains attributes and metrics from an XML document on a web server or an XML document returned by a web service operation.

Each schema must contain all the metrics and attributes that you want to analyze together.

Data Analyzer supports the Java Message Service (JMS) protocol to access real-time messages as data sources. To display real-time data in a Data Analyzer real-time report, you create a Data Analyzer real-time message stream with the details of the metrics and attributes to include in the report. Data Analyzer updates the report when it reads JMS messages.
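The star schema relationship described above — metrics stored in fact tables, attributes supplied by dimension tables — can be illustrated with a small sketch. All table, column, and value names below are invented for illustration; they are not part of Data Analyzer.

```python
# Toy star schema: a fact table holds metric values (revenue) keyed to a
# dimension table that supplies attribute values (region names).
sales_fact = [
    {"region_key": 1, "revenue": 120.0},
    {"region_key": 1, "revenue": 80.0},
    {"region_key": 2, "revenue": 50.0},
]
region_dim = {1: "North", 2: "South"}

def revenue_by_region(fact_rows, dimension):
    """Aggregate the revenue metric by the region attribute."""
    totals = {}
    for row in fact_rows:
        region = dimension[row["region_key"]]
        totals[region] = totals.get(region, 0.0) + row["revenue"]
    return totals
```

A report over this schema would group the metric by the attribute, which is exactly what the function sketches.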
Data Analyzer Framework

Data Analyzer works within a web-based framework that requires the interaction of several components. It includes components and services that may already exist in an enterprise infrastructure, such as an enterprise data warehouse and authentication server.

Main Components

Data Analyzer is built on JBoss Application Server and uses related technology and application programming interfaces (API) to accomplish its tasks. Data Analyzer uses the application server to handle requests from the web browser. It generates the requested contents and uses the application server to transmit the content back to the web browser. Data Analyzer stores metadata in a repository database to keep track of the processes and objects it needs to handle web browser requests.

Data Analyzer

Data Analyzer is a Java application that provides a web-based platform for the development and delivery of business analytics. In Data Analyzer, you can read data from a data source, create reports, and view the results on the web browser. Data Analyzer stores metadata for schemas, metrics and attributes, queries, and other objects in the Data Analyzer repository.

When you create a Reporting Service, you need to specify the Data Analyzer repository details. The Reporting Service configures the Data Analyzer repository with the metadata corresponding to the selected data source. When you run reports for any data source, Data Analyzer uses the metadata in the Data Analyzer repository to determine the location from which to retrieve the data for the report and how to present the report.

Note: If you create a Reporting Service for another reporting source, you need to import the metadata for the data source manually.

Web Server

Data Analyzer uses an HTTP server to fetch and transmit Data Analyzer pages to web browsers. If the application server contains a web server, you do not need to install a separate web server. You need a separate web server to set up a proxy server to enable external users to access Data Analyzer through a firewall.

Application Server

JBoss Application Server helps Data Analyzer manage its processes efficiently. JBoss Application Server is a Java 2 Enterprise Edition (J2EE)-compliant application server. The Java application server provides services such as database access and server load balancing to Data Analyzer. The Java application server also provides an environment that uses Java technology to manage application, network, and system resources. Data Analyzer uses the following Java technology:
♦ Java Servlet API
♦ JavaServer Pages (JSP)
♦ Enterprise Java Beans (EJB)
♦ Java Database Connectivity (JDBC)
♦ Java Message Service (JMS)
♦ Java Naming and Directory Interface (JNDI)

Data Analyzer Repository

The repository stores the metadata necessary for Data Analyzer to track the objects and processes that it requires to effectively handle user requests. The metadata includes information on schemas, user profiles, personalization, reports and report delivery, and other objects and processes. The Data Analyzer repository must reside in a relational database. Data Analyzer connects to the repository with JDBC drivers.

Data Source

For analytic and operational schemas, Data Analyzer reads data from a relational database. It connects to the database through JDBC drivers. The data for an analytic or operational schema must reside in a relational database. You can create reports based on the schemas without accessing the data warehouse directly.

For hierarchical schemas, Data Analyzer reads data from an XML document. The XML document can reside on a web server, or it can be generated by a web service operation. Data Analyzer connects to the XML document or web service through an HTTP connection.

Supporting Components

Data Analyzer has other components to support its processes, including an API that allows you to integrate Data Analyzer features into other web applications and security adapters that allow you to use an LDAP server for authentication. Although you can use Data Analyzer without these components, you can extend the power of Data Analyzer when you set it up to work with these additional components.

Authentication Server

You use PowerCenter authentication methods to authenticate users logging in to Data Analyzer. When you use the Administration Console to create native users and groups, the Service Manager stores the users and groups in the domain configuration database and notifies the Reporting Service. The Reporting Service copies the users and groups to the Data Analyzer repository.

Note: You cannot create or delete users and groups, or change user passwords in Data Analyzer. You can only modify the user settings such as the user name or the contact details in Data Analyzer.
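The rows that a hierarchical schema derives from an XML data source can be pictured with a short sketch. The element names and document structure below are assumptions for illustration only, not a format Data Analyzer prescribes.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML returned by a web service operation; names are invented.
document = """
<sales>
  <row><region>North</region><revenue>200</revenue></row>
  <row><region>South</region><revenue>50</revenue></row>
</sales>
"""

def rows_from_xml(xml_text):
    """Flatten each <row> element into a dict of column name -> value."""
    root = ET.fromstring(xml_text)
    return [
        {child.tag: child.text for child in row}
        for row in root.findall("row")
    ]
```

Each flattened dict corresponds to one row in the rowset, with the child element tags acting as column names.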
PowerCenter

You create and enable a Reporting Service on the Domain page of the PowerCenter Administration Console. When you enable the Reporting Service, the Administration Console starts Data Analyzer. You launch Data Analyzer from the Administration Console, PowerCenter Client tools, Metadata Manager, or by accessing the Data Analyzer URL from a browser. You log in to Data Analyzer to create and run reports on data in a relational database or to run PowerCenter Repository Reports, Data Analyzer Data Profiling Reports, or Metadata Manager Reports.

Mail Server

Data Analyzer uses Simple Mail Transfer Protocol (SMTP) to provide access to the enterprise mail server and facilitate the following services:
♦ Send report alert notifications and SMS/Text Messages to alert devices.
♦ Forward reports through email.
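The alert notifications and forwarded reports described above are ordinary SMTP messages. The sketch below shows how such a notification might be assembled with Python's standard email library before being handed to an SMTP server; the helper and its field values are illustrative, not part of Data Analyzer.

```python
from email.message import EmailMessage

def build_alert_email(sender, recipient, report_name, threshold):
    """Assemble an alert notification message (illustrative only)."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = f"Alert: {report_name}"
    msg.set_content(
        f"The report '{report_name}' crossed the threshold {threshold}."
    )
    # The finished message would be delivered with
    # smtplib.SMTP(host).send_message(msg).
    return msg
```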
Web Portal

The Data Analyzer API enables you to integrate Data Analyzer into other web applications and portals. The API specifies the functions available to developers to access Data Analyzer dashboards, reports, and other objects and display them in any web application or portal.

Data Analyzer Basics

This section lists the steps you need to complete to access analytic data in Data Analyzer. This book presents the tasks that a system administrator typically performs in Data Analyzer. Data Analyzer has many more features you can use to analyze and get the most useful information from your corporate data.

Using Data Analyzer

When you use Data Analyzer to analyze business information, complete the following steps:
1. Define the data source. Set up the connectivity information so that Data Analyzer can access the data warehouse, operational data store, or XML documents. You can configure Data Analyzer to access more than one data source. You need system administrator privileges to define data sources.
2. Import table definitions. Import table definitions from the data warehouse or operational data store into the Data Analyzer repository, or define the rowsets and columns for web services or XML data sources. Import the table definitions from JDBC data sources or set up rowsets and columns from XML sources. You need system administrator privileges to import table definitions or define rowsets.
3. Define an analytic, operational, or hierarchical schema. Define the fact and dimension tables for an analytic schema, set up the tables for an operational schema, or define a hierarchical schema. Define the metrics and attributes in the schemas. If you set up an analytic schema, set up a time dimension. You need system administrator privileges to define the schemas in Data Analyzer.
4. Set up the data connector. Create a data connector to identify which data source to use when you run reports. You need system administrator privileges to set up data connectors.
5. Create and run reports. Create reports based on the metrics and attributes you define. Create analytic workflows to analyze the data. Set up schedules to run reports regularly.
6. Create indicators and alerts for the report. Set up alerts on reports based on events and threshold values that you define.
7. Create dashboards. Create a dashboard and customize it to display the indicators and links to reports and shared documents to which you want immediate access.

Configuring Session Timeout

By default, Data Analyzer terminates a user session if it does not have any activity for a length of time. For example, if you log in to Data Analyzer but you do not use it for 30 minutes, the session terminates or times out. To preserve application resources, you can set the session timeout period according to the Data Analyzer usage in your organization. The system administrator can change the session timeout period by editing the value of the session-timeout property in the web.xml file. For more information, see "Configuration Files" on page 129.
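For reference, the session-timeout property lives in the standard servlet deployment descriptor. A sketch of the relevant web.xml fragment follows; the 30-minute value matches the example above, and the exact location of web.xml depends on your installation.

```xml
<!-- Fragment of web.xml: the session-timeout value is in minutes. -->
<session-config>
    <session-timeout>30</session-timeout>
</session-config>
```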
Security

Data Analyzer provides a secure environment in which to perform business analytics. It also provides system administrators a way to control access to Data Analyzer tasks and data based on privileges and roles granted to users and groups. It supports standard security protocols like Secure Sockets Layer (SSL).

Data Analyzer uses the PowerCenter authentication methods to authenticate users set up in the PowerCenter domain configuration database. You manage users and groups in the PowerCenter Administration Console. For more information about the PowerCenter authentication methods, see the PowerCenter Administrator Guide.

Data Analyzer reads data from the data warehouse and stores data in a repository to support its different components. Data Analyzer depends on database servers to provide their own security and data integrity facilities. Security and data integrity in the database servers that contain the data warehouse and the repository are essential for a reliable system environment.

Data Lineage

You can access the data lineage for Data Analyzer reports, attributes, and metrics. Data lineage shows the flow of the data displayed in a report. Use data lineage to understand the origin of the data, how it transforms, and where it is used.

Metadata Manager is the PowerCenter metadata management and analysis tool. When you access data lineage from Data Analyzer, Data Analyzer connects to a Metadata Manager server. The Metadata Manager server displays the data lineage in an Internet Explorer browser window. You cannot use data lineage with the Mozilla Firefox browser. Use the PowerCenter Administration Console to configure data lineage for a Reporting Service.

The data lineage for a Data Analyzer report, metric, or attribute displays one or more of the following objects:
♦ Data Analyzer repositories. You can load objects from multiple Data Analyzer repositories into Metadata Manager. Metadata Manager displays metadata objects for each repository.
♦ Data structures. Data structures group metadata into categories. For a Data Analyzer data lineage, the data structures include the following:
− Reports
− Fact tables
− Dimension tables
− Table definitions
♦ Fields. Fields are objects within data structures that store the metadata. For a Data Analyzer data lineage, fields include the following:
− Metrics in reports
− Attributes in reports
− Columns in tables

You can access data lineage for metrics, attributes, and reports from the following areas in Data Analyzer:
♦ Report: Find tab
♦ Metric: Schema Directory > Metrics page, Create > Report > Select Metrics page, or Analyze tab
♦ Attribute: Schema Directory > Attributes page, Create > Report > Select Attributes page, or Analyze tab

Note: The Metadata Manager server must be running when you access data lineage from Data Analyzer.

Data Lineage for a Report

You can access data lineage for cached and on-demand reports. Figure 1-1 shows the data lineage for a Data Analyzer report:

Figure 1-1. Data Lineage for a Report
[The figure shows the fields, data structures, and repository in the data lineage for a report.]

In Figure 1-1, PA5X_RICH_SRC is the repository that contains metadata about the report, and fields are the metrics and attributes in the report. In this example, the following data structures display in the data lineage:
♦ Data Analyzer report: Sales report
♦ Data Analyzer dimension tables: Countries, Regions
♦ Data Analyzer fact table: Costs Data
♦ Data Analyzer table definitions: COUNTRIES, REGIONS, COSTS_DATA

The direction of the arrows in the data lineage shows the direction of the data flow. In Figure 1-1, the data lineage shows that the COUNTRIES table definition populates the Countries dimension table, which provides the Country Name attribute for the Sales report.

Each field contains the following information:
♦ Field Name. Name of the field.
♦ Parent. Data structure that populates the field. In Figure 1-1, the parent for the Country Name field is the Countries dimension table.
♦ Repository. Name of the Data Analyzer repository that contains metadata for the report.

After you access the data lineage, you can view details about each object in the data lineage. You can export the data lineage to an HTML, Excel, or PDF file. You can also email the data lineage to other users. For more information, see the Metadata Manager User Guide.
Data Lineage for a Metric or Attribute

The data lineage for a metric or attribute is similar to the data lineage for a report. For a metric or attribute, Metadata Manager displays the data flow for that metric or attribute only. The data lineage also shows data structures for reports that use the metric or attribute. Figure 1-2 shows the data lineage for an attribute:

Figure 1-2. Data Lineage for an Attribute
[The figure shows the data lineage for an attribute and the data structures for reports that use the attribute.]

The attribute name (Brand) appears within the data structure for the report. The attribute name is the only field that appears in the data lineage.

Localization

Data Analyzer uses UTF-8 character encoding for displaying in different languages. To avoid data errors, you must ensure that the language settings are correct when you complete the following tasks in Data Analyzer:
♦ Back up and restore Data Analyzer repositories. The repositories you back up and restore must have the same language type and locale setting, or the repository you restore must be a superset of the repository you back up. For example, if the repository you back up contains Japanese data, the repository you restore to must also support Japanese.
♦ Import and export repository objects. When you import an exported repository object, the repositories must have the same language type and locale setting, or the destination repository must be a superset of the source repository.
♦ Import table definitions from the data source. When you import data warehouse table definitions into the Data Analyzer repository, the language type and locale settings of the data warehouse and the Data Analyzer repository must be the same, or the repository must be a superset of the data source.

Language Settings

When you store data in multiple languages in a database, enable UTF-8 character encoding in the Data Analyzer repository and data warehouse. UTF-8 character encoding is an ASCII-compatible multi-byte Unicode and Universal Character Set (UCS) encoding method. For more information about how to enable UTF-8 character encoding, see the database documentation. A language setting is a superset of another language setting when it contains all characters encoded in the other language.

Data Analyzer Display Language

You can change the display language for the Data Analyzer client regardless of the locale of the Data Analyzer server. You change the display language for Data Analyzer in the Manage Accounts tab in Data Analyzer. You must change the display language for the Data Analyzer login page separately in the browser. For more information, see the Data Analyzer User Guide.

Date and Number Formats

The language setting for your Data Analyzer user account determines the numeric, date, and time formats Data Analyzer displays. When you enter a date in an SQL expression or define a date value for a global variable, enter the date in the same format used in the data warehouse. When Data Analyzer performs date calculations in calculated or custom metrics, Data Analyzer uses the format for the repository database language setting.

Setting the Default Expression for Metrics and Attributes

When you set the default expression for metrics and attributes, Data Analyzer uses the same expression regardless of the locale of the Data Analyzer server. If you want to use a different default expression for a different locale, you must change the default expression in the metric or attribute property. For more information, see the Data Analyzer Schema Designer Guide.

Exporting Reports with Japanese Fonts to PDF Files

If a report contains Japanese fonts and you export the report to a PDF file, you must download the Asian Font Package from the Adobe Acrobat web site to view the PDF file. Save the Asian Font Package on the machine where you want to view the PDF file. You can find the Asian Font Package at the following web site:
http://www.adobe.com/products/acrobat/acrrasianfontpack.html
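The superset rule for language settings can be illustrated with character encodings. The sketch below checks whether sample text survives a given encoding; UTF-8 behaves as a superset of Latin-1 because it can encode every Latin-1 character. The sample strings are arbitrary.

```python
def encodable(text, encoding):
    """Return True if every character in text exists in the encoding."""
    try:
        text.encode(encoding)
        return True
    except UnicodeEncodeError:
        return False
```

A repository restored into a UTF-8 database can therefore hold a backup taken from a Latin-1 database, but not the other way around once non-Latin characters are present.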
CHAPTER 2

Managing Users and Groups

This chapter includes the following topics:
♦ Overview, 9
♦ Managing Groups, 10
♦ Managing Users, 11

Overview

You create users, groups, and roles in the PowerCenter Administration Console. Use the Security page of the PowerCenter Administration Console to create users, groups, and roles for a Data Analyzer. PowerCenter stores the users, groups, and roles in the PowerCenter domain configuration database. You assign privileges to users, groups, and roles in the PowerCenter Administration Console. You can edit some user and group properties in Data Analyzer. For more information about creating users, groups, and roles, see the PowerCenter Administrator Guide.

Data Analyzer allows login access only to individuals with user accounts in Data Analyzer. Users in Data Analyzer need their own accounts to perform tasks and access data. A user must have an active account to perform tasks and access data in Data Analyzer. Users can perform different tasks based on their privileges.

Restricting User Access

You can limit user access to Data Analyzer to secure information in the repository and data sources.

Setting Permissions

You can set permissions to determine the tasks that users can perform on a repository object. You set access permissions in Data Analyzer.

Authentication Methods

The way you manage users and groups depends on the authentication method you are using:
♦ Native. You create and manage users, groups, and roles in the Security page of the PowerCenter Administration Console. The Service Manager stores users and groups in the domain configuration database and copies the list of users and groups to the Data Analyzer repository. The Service Manager periodically synchronizes the list of users and groups in the repository with the users and groups in the domain configuration database.
♦ LDAP authentication. You manage the users and groups in the LDAP server, but you create and manage the roles and privileges in the PowerCenter Administration Console. The Service Manager periodically synchronizes users in the LDAP server with the users in the domain configuration database. In addition, the Service Manager synchronizes the users in the Data Analyzer repository with the updated LDAP users in the domain configuration database.

For more information about authentication methods, see the PowerCenter Administrator Guide.

User Synchronization

You manage users and groups, and which privileges and roles are assigned to them, in the PowerCenter Administration Console. When you assign privileges and roles to users and groups for the Reporting Service in the Administration Console, or when you assign permissions to users and groups in Data Analyzer, the Service Manager stores the privilege, role, and permission assignments with the list of users and groups in the Data Analyzer repository.

Note: If you edit any property of a user other than roles or privileges, the Service Manager does not synchronize the changes to the Data Analyzer repository. Similarly, if you edit any property of a user in Data Analyzer, the Service Manager does not synchronize the domain configuration database with the modification.

Managing Groups

Groups allow you to organize users according to their roles in the organization. For example, you might organize users into groups based on their departments or management level. You can restrict data access by group.

In Data Analyzer, you can edit some group properties such as department, color schemes, or query governing settings. You cannot add users or roles to the group, or assign a primary group to users in Data Analyzer.

Editing a Group

You can see groups with privileges on a Reporting Service when you launch the Data Analyzer instance created by that Reporting Service.

To edit a group in Data Analyzer:
1. Connect to Data Analyzer from the PowerCenter Administration Console, PowerCenter Client tools, Metadata Manager, or by accessing the Data Analyzer URL from a browser.
2. Click Administration > Access Management > Groups. The Groups page appears.
3. Select the group you want to edit and click Edit. The properties of the group appear.
4. Edit any of the following properties:
♦ Department. Choose the department for the group. For more information, see "Configuring Departments and Categories" on page 90.
♦ Color Scheme Assignment. Assign a color scheme for the group. For more information, see "Managing Color Schemes and Logos" on page 74.
♦ Query Governing. Query governing settings on the Groups page apply to reports that users in the group can run. For more information, see "Setting Rules for Queries" on page 85.
5. Click OK to return to the Groups page.

Note: If a user belongs to one or more groups in the same hierarchy level, Data Analyzer uses the largest query governing settings from each group.
Managing Users

Each user must have a user account to access Data Analyzer. To perform Data Analyzer tasks, a user must have the appropriate privileges for the Reporting Service. You assign privileges to a user, add the user to one or more groups, and assign roles to the user in the PowerCenter Administration Console. You cannot assign a group to the user or define a primary group for a user in Data Analyzer.

Editing a User Account

You can see users with privileges on a Reporting Service when you launch the Data Analyzer instance created by that Reporting Service. You can edit a user account in Data Analyzer to change the color scheme or modify other properties of the account.

To edit a user account in Data Analyzer:
1. Connect to Data Analyzer from the PowerCenter Administration Console, PowerCenter Client tools, Metadata Manager, or by accessing the Data Analyzer URL from a browser.
2. Click Administration > Access Management > Users. The Users page appears.
3. Enter a search string for the user in the Search field and click Find. Data Analyzer displays the list of users that match the search string you specify.
4. Select the user record you want to edit and click on it. The properties of the user appear.
5. Edit any of the following properties:
♦ First Name, Middle Name, Last Name. First name, middle name, and last name of the user. If you edit the first name, middle name, or last name of the user, Data Analyzer saves the modification in the Data Analyzer repository. When the Service Manager synchronizes with the Data Analyzer repository, it does not update the records in the domain configuration database.
♦ Title. Describes the function of the user within the organization or within Data Analyzer. Titles do not affect roles or Data Analyzer privileges.
♦ Email Address. Data Analyzer sends the email to this address when it sends an alert notification to the user. Data Analyzer uses this as the email for the sender when the user emails reports from Data Analyzer.
♦ Department. Department for the user. You can associate the user with a department to organize users and simplify the process of searching for users. For more information, see "Configuring Departments and Categories" on page 90.
♦ Color Scheme Assignment. Select the color scheme to use when the user logs in to Data Analyzer. Color schemes assigned at user level take precedence over color schemes assigned at group level. If no color scheme is selected, Data Analyzer uses the default color scheme when the user logs in. For more information, see "Managing Color Schemes and Logos" on page 74.
♦ Query Governing. Specify query governing settings for the user. The query governing settings on the User page apply to all reports that the user can run. For more information, see "Setting Rules for Queries" on page 85.

Note: Users can edit some of the properties of their own accounts in the Manage Account tab. Unless users have administrator privileges, they cannot change the color scheme assigned to them.

Full Name for Data Analyzer Users

Data Analyzer displays the full name property in the PowerCenter domain as the following user account properties:
♦ First name
♦ Middle name
♦ Last name

You cannot edit this information. Data Analyzer determines the first, middle, and last names based on the following rules:
1. If the full name does not contain a comma, the full name has the following syntax:
[<FirstName>] [<MiddleName>] <LastName>
2. If the full name contains a comma, the full name has the following syntax:
<LastName>, <FirstName> [<MiddleName>]
Any full name that contains a comma is converted to use the syntax without a comma:
[<FirstName>] [<MiddleName>] <LastName>
3. After the conversion, the full name is separated into first, middle, and last names based on the number of text strings separated by a space:
♦ If the full name has two text strings, there is no middle name.
♦ If the full name has more than three text strings, any string after the third string is included in the last name.

Adding Data Restrictions to a User Account

You can restrict access to data based on user accounts. To add data restrictions to a user account, click Administration > Access Management > Users, and then click the Data Restrictions button for the data for which you want to restrict user access. For more information, see "Restricting Data Access" on page 16.
CHAPTER 3

Setting Permissions and Restrictions

This chapter includes the following topics:
♦ Overview, 13
♦ Setting Access Permissions, 13
♦ Restricting Data Access, 16

Overview

You can customize Data Analyzer user access with the following security options:
♦ Access permissions. Restrict user and group access to folders, reports, dashboards, attributes, metrics, template dimensions, and schedules. Use access permissions to restrict access to a particular folder or object in the repository.
♦ Data restrictions. Restrict access to data in fact tables and operational schemas using associated attributes. Use data restrictions to restrict users or groups from accessing specific data when they view reports. When you create data restrictions, you determine which users and groups can access particular attribute values. When a user with a data restriction runs a report, Data Analyzer does not display restricted data associated with those values.

13

Setting Access Permissions

Access permissions determine the tasks you can perform for a specific repository object. When you set access permissions, you determine which users and groups have access to folders and repository objects. By customizing access permissions on an object, you determine which users and groups can Read, Write, Delete, or Change Access permission on that object.

When you create an object in the repository, every user has default Read and Write permission on that object. You can assign the following types of access permissions to repository objects:
♦ Read. Allows you to view a folder or object.
♦ Write. Allows you to edit an object. Also allows you to create and edit folders and objects within a folder.
♦ Delete. Allows you to delete a folder or an object from the repository.
♦ Change permission. Allows you to change the access permissions on a folder or object.

Use the following methods to set access permissions:
♦ Inclusive. Permit access to the users and groups that you select. To grant more extensive access to a user or group, use inclusive access permissions. For example, you can grant the Analysts group inclusive access permissions to delete a report.
♦ Exclusive. Restrict access from the users and groups that you select. You can completely restrict the selected users and groups or restrict them to fewer access permissions. To restrict the access of specific users or groups, use exclusive access permissions. For example, you can use exclusive access permissions to restrict the Vendors group from viewing sensitive reports.

You can use a combination of inclusive, exclusive, and default access permissions to create comprehensive access permissions for an object. For example, you can select Read as the default access permission for a folder, grant the Sales group inclusive Write permission to edit objects in the folder, and use an exclusive Read permission to deny an individual in the Sales group access to the folder.

Note: Any user with the System Administrator role has access to all Public Folders and to their Personal Folder in the repository and can override any access permissions you set.

If you have reports and shared documents that you do not want to share, save them to your Personal Folder or your personal dashboard.

When you modify the access permissions on a folder, you can override existing access permissions on all objects in the folder, including subfolders.

Note: Permissions set on composite reports do not affect permissions on the subreports. Setting access permissions for a composite report determines whether the composite report itself is visible, but does not affect the existing security of subreports. Users or groups must also have access permissions to view individual subreports. Only those subreports where a user or group has access permissions display in a composite report. Therefore, a composite report might contain some subreports that do not display for all users.

14    Chapter 3: Setting Permissions and Restrictions

To set access permissions:
1. Navigate to the repository object you want to modify. The following table shows how to navigate to the repository object you want to modify:

To set access permissions on...        Click...
Content folder in Public Folders       Find > Public Folders > folder name
Content folder in Personal Folder      Find > Personal Folder > folder name
Report in Public Folders               Find > Public Folders > report name
Report in Personal Folder              Find > Personal Folder > report name
Composite Report in Public Folders     Find > Public Folders > composite report name
Composite Report in Personal Folder    Find > Personal Folder > composite report name
Public Dashboard                       Find > Public Folders > dashboard name
Personal Dashboard                     Find > Personal Folder > dashboard name
Metric Folder                          Administration > Schema Design > Schema Directory > Metrics folder > metric folder name
Attribute Folder                       Administration > Schema Design > Schema Directory > Attributes folder > attribute folder name
Template Dimensions Folder             Administration > Schema Design > Schema Directory > Template Dimensions folder > template dimensions folder name
Metric                                 Administration > Schema Design > Schema Directory > Metrics folder > metric folder name > metric name
Attribute                              Administration > Schema Design > Schema Directory > Attributes folder > attribute folder name > attribute name
Template Dimension                     Administration > Schema Design > Schema Directory > Template Dimensions folder > template dimension folder name > template dimension name
Time-Based Schedule                    Administration > Scheduling > Time-Based Schedules > time-based schedule name
Event-Based Schedule                   Administration > Scheduling > Event-Based Schedules > event-based schedule name
Filterset                              Administration > Schema Directory > Filtersets > filterset name

2. Click the Permissions button for the repository object. The Access Permissions page appears. The object name appears in quotes.
3. From the General Permissions area, set the default access permissions. Use the General Permissions area to modify the default access permissions for the object. Click Yes to allow all users to receive the default access permissions you select. By default, Data Analyzer grants Read permission to every user in the repository. Click No to prevent all repository users from receiving default access permissions. If you are editing access permissions on a folder, you can select Replace Permissions on Subfolders to apply access permission changes to all subfolders. You can also select Replace Permissions on All Items in Folder to apply access permission changes to the reports and shared documents in the folder. If you are editing access permissions on an item, such as a report or shared document, skip to step 4.
4. Click Make a Selection to search for a group or user. You can select groups or users by criteria such as name or department.
5. Refine the selection by choosing the search criteria for the group or user.
6. Click Find. The Query Results field displays groups or users that match the search criteria.
7. Select the group or user in the Query Results field.
8. Select the access permissions you want to include or exclude. Click Include to include the user or group in the access permissions you select.
-or-
Click Exclude to exclude the user or group from the access permissions you select.
9. Click OK to save the access permissions settings.

Setting Access Permissions    15
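The way default, inclusive, and exclusive permissions combine can be modeled as set operations: start from the default permissions, add inclusive grants, then subtract exclusive restrictions. The sketch below is a hypothetical illustration of that rule using the Sales folder example from this section; the names and data structures are invented and this is not a Data Analyzer API.

```python
# Hypothetical model: effective permissions = (default + inclusive) - exclusive.
DEFAULT_PERMISSIONS = {"Read"}                 # General Permissions set to Yes

INCLUSIVE = {"Sales": {"Write"}}               # Sales group may edit folder objects
EXCLUSIVE = {"Hansen": {"Read"}}               # Hansen is denied Read on the folder

def effective_permissions(user, groups):
    perms = set(DEFAULT_PERMISSIONS)
    for principal in [user] + list(groups):
        perms |= INCLUSIVE.get(principal, set())   # inclusive permissions add access
    for principal in [user] + list(groups):
        perms -= EXCLUSIVE.get(principal, set())   # exclusive permissions remove it
    return perms

print(sorted(effective_permissions("Lopez", ["Sales"])))   # ['Read', 'Write']
print(sorted(effective_permissions("Hansen", ["Sales"])))  # ['Write']
```

In this model, an ordinary Sales member gets Read and Write, while the exclusive Read permission denies Hansen access to view the folder even though his group can edit it.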

Data Analyzer displays a minus sign (-) next to users or groups you exclude. For example, everyone might have Read permission on the Sales folder unless restricted below, with the Corporate Sales group granted additional Write permission; red text and a minus sign would indicate that the user Hansen is not permitted to read the Sales folder.

Restricting Data Access

You can restrict access to data associated with specific attribute values. Create data restrictions to keep sensitive data from appearing in reports. You can apply a data restriction to any user or group in the repository. When a report contains restricted data, a Data Restrictions button appears in the report.

For example, you can create a data restriction that restricts the Northeast Sales group to sales data for stores in their region. When users in the Northeast Sales group run reports that include the SALES fact table and Region attribute, they view sales data for their region only. They cannot see sales data for western or southern regions.

You can create data restrictions using one of the following methods:
♦ Create data restrictions by object. Access the fact table or operational schema that contains the metric data you want to restrict and specify the associated attributes for which to restrict the metric data. Use this method to apply the same data restriction to more than one user or group.
♦ Create data restrictions by user or group. Access the user or group you want to restrict. You can apply the data restriction to a single fact table or operational schema or to all related data in the repository. Use this method to apply multiple data restrictions to the same user or group or to restrict all data associated with specified attribute values.

When you create a data restriction, you specify the users or groups to be restricted. If you have multiple data restrictions, Data Analyzer applies the data restrictions in the order in which they appear in the Created Restrictions task area. By default, Data Analyzer displays the data restrictions in simple grouping mode. In this mode, Data Analyzer uses the AND operator to apply all restrictions. In the advanced grouping mode, you can use the OR or AND operator to group the data restrictions and create a complex expression with nested conditions. This allows you to make the data restriction as specific as required. For example, the following condition allows users to view data from every March and from the entire year of 2007:

IN March OR IN 2007

You can also use parentheses to create more complex groups of restrictions. For example, you can group three data restrictions:

Region NOT IN 'North' AND (Category IN 'Footware' OR Brand IN 'BigShoes')

In the above example, Data Analyzer allows users to view data which is not included in the North region and which is either in the Footware category or has the BigShoes brand.

16    Chapter 3: Setting Permissions and Restrictions
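A grouped restriction such as Region NOT IN 'North' AND (Category IN 'Footware' OR Brand IN 'BigShoes') behaves like a row filter on report data. The sketch below illustrates the logic only, with invented sample rows; Data Analyzer actually enforces restrictions in the SQL it generates for the report.

```python
# Sample rows a report might return without any data restriction (invented data).
rows = [
    {"Region": "East",  "Category": "Footware", "Brand": "BigShoes"},
    {"Region": "North", "Category": "Footware", "Brand": "BigShoes"},
    {"Region": "West",  "Category": "Apparel",  "Brand": "BigShoes"},
    {"Region": "South", "Category": "Apparel",  "Brand": "OtherCo"},
]

def visible(row):
    # Region NOT IN 'North' AND (Category IN 'Footware' OR Brand IN 'BigShoes')
    return row["Region"] not in ("North",) and (
        row["Category"] in ("Footware",) or row["Brand"] in ("BigShoes",)
    )

print([row["Region"] for row in rows if visible(row)])  # ['East', 'West']
```

The North row is removed by the first condition, and the South row fails both parenthesized conditions, so only the East and West rows remain visible.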

You can restrict access to data in the following objects:
♦ Fact tables
♦ Operational schemas

You cannot create data restrictions for hierarchical schemas. Also, you cannot create data restrictions on fact tables or operational schemas using CLOB attributes.

Understanding Data Restrictions for Multiple Groups

A restricted user assigned to a restricted group is subject to both individual and group restrictions. Data Analyzer joins the restrictions with the AND operator. For example, if the user has the restriction Region IN 'West' and the user's group has the restriction Region NOT IN 'West', Data Analyzer joins the two restrictions and returns no data:

Region IN 'West' AND Region NOT IN 'West'

When a user belongs to more than one group, Data Analyzer handles data restrictions differently depending on the relationship between the two groups. The following describes how Data Analyzer handles multiple group situations:
♦ Both a group and its subgroup. Data Analyzer joins the data restrictions with the AND operator. For example, if Group A has the restriction Region IN 'East' and Subgroup B has the restriction Category IN 'Women', Data Analyzer joins the restrictions with AND:
Region IN 'East' AND Category IN 'Women'
♦ Two groups that belong to the same parent group. Data Analyzer joins the data restrictions with the OR operator. For example, if Group A has the restriction Region IN 'East' and Group B has the restriction Category IN 'Women', Data Analyzer joins the restrictions with OR:
Region IN 'East' OR Category IN 'Women'

Using Global Variables

You can use global variables when you define data restrictions. When you use a global variable in a data restriction, Data Analyzer updates the data restriction when you update the global variable value.
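The combination rules for multiple groups can be summarized: restrictions within a group hierarchy (or between a user and the user's group) are joined with AND, while restrictions from sibling groups under the same parent are joined with OR. A minimal sketch, using a hypothetical helper rather than any product API:

```python
def combine(restrictions, operator):
    """Join restriction conditions with AND or OR (hypothetical helper)."""
    return (" " + operator + " ").join(restrictions)

# A group and its subgroup (or a user and the user's group): AND
print(combine(["Region IN 'East'", "Category IN 'Women'"], "AND"))
# Region IN 'East' AND Category IN 'Women'

# Two groups under the same parent group: OR
print(combine(["Region IN 'East'", "Category IN 'Women'"], "OR"))
# Region IN 'East' OR Category IN 'Women'

# Contradictory user and group restrictions joined with AND return no data:
print(combine(["Region IN 'West'", "Region NOT IN 'West'"], "AND"))
# Region IN 'West' AND Region NOT IN 'West'
```

The AND case narrows the visible data to the intersection of both restrictions; the OR case widens it to the union.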
Restricting Data Access by Object

Create data restrictions by object when you want to apply the restriction to more than one user or group or to create more than one data restriction for the object.

Restricting Data Access    17

To create data restrictions by object:
1. Navigate to the object you want to restrict:

To create data restrictions for...    Click...
Fact Table                            Administration > Schema Design > Analytic Schemas > Show Fact Tables
Operational Schema                    Administration > Schema Design > Operational Schemas

2. Click the Data Restrictions button ( ) of the object you want to restrict. The Data Restrictions page appears.
3. Click Select a Group/User. The Select Group or User window appears.
4. To create a data restriction for a group, select Group. If the number of groups is less than 30, a list of available groups appears. If the number of groups is 30 or more, the group search option appears.
-or-
To create a data restriction for a user, select User. If you know the user name you want to restrict, enter it in the User field.
5. Click Find to search for a user or group.
6. Select the user or group you want to restrict and click OK.
7. In the Create Restriction task area, select an attribute from the attribute list. Recently-used attributes appear in the list. To browse or find other attributes, click Select Other Attributes. The Attribute Selection window appears, and Data Analyzer displays the attributes for the object. Navigate to the attribute you want and select an attribute. CLOB attributes are not available for use in data restrictions.
8. From the condition list, select an operator.
9. Enter attribute values. You can select attribute values from a list, or you can search for specific values and Ctrl-click to select more than one. You can also manually enter attribute values. Use the asterisk or percent symbols as wildcard characters. If a global variable contains the attribute values you want to use, you can select a global variable.
10. Click Add. The data restriction appears in the Created Restrictions task area.
11. To view the SQL query for the restriction, click Advanced. Data Analyzer displays the SQL query for the restriction in advanced mode. In advanced mode, you can edit the SQL query for a restriction. Data Analyzer displays buttons for adding numbers and operators to the SQL query for the data restriction. Click within the SQL query, and then click the buttons to add numbers or operators to the SQL query.
12. Use the Basic or Advanced mode, described in steps 7 to 11, to create more restrictions for the same user or group.

18    Chapter 3: Setting Permissions and Restrictions

13. If you create more than one data restriction, you can adjust the order of the restrictions and the operators to use between restrictions. To adjust the restrictions, click Advanced in the Created Restrictions task area. In advanced mode, Data Analyzer displays lists for adding parentheses and operators: click to add a left parenthesis, click to add a right parenthesis, click to add operators, click the appropriate list to group the restrictions, and click to change the order of the restrictions.
14. When you have completed adding data restrictions for the user or group, click Apply Restrictions. Applied restrictions appear in the Current Data Restrictions area. To remove a data restriction, click the Remove button. To remove all data restrictions, click Cancel.
15. Click OK to save the changes.

Restricting Data Access by User or Group

Edit a user or group to restrict data when you want to create more than one restriction for the user or group. When you edit a user or group, you can create data restrictions for metrics in any fact table or operational schema. Data restrictions limit the data that appears in the reports.

You can restrict data in a single fact table or operational schema for an associated attribute. When the attribute is associated with other fact tables or operational schemas in the repository, you can restrict all data related to the attribute values you select. For example, if the Region attribute is associated with both the Sales fact table and the Salary fact table, you can create a single data restriction to restrict all sales and salary information from Europe.

You cannot create data restrictions for hierarchical schemas. Also, you cannot create data restrictions on fact tables or operational schemas using CLOB attributes.

To create data restrictions for users or groups:
1. To create data restrictions for users, click Administration > Access Management > Users.
-or-
To create data restrictions for groups, click Administration > Access Management > Groups. Then click Groups to display all groups.
2. Click the Data Restrictions button ( ) of the user or group profile you want to edit. The Data Restrictions page appears. The page shows a list of fact tables and operational schemas. Hierarchical schemas are not available for use in data restrictions.
3. Select a schema from the list of available schemas. To select all schemas, select All Schemas. This applies the data restriction to all data in the repository associated with the attribute you choose.

Restricting Data Access    19

4. In the Create Restriction task area, select an attribute from the attribute list. Recently-used attributes appear in the list. To browse or find an attribute, click Select Other Attributes. The Attribute Selection window appears, and Data Analyzer displays all attribute folders for the object. Navigate to the attribute you want and select an attribute. CLOB attributes are not available for use in data restrictions.
5. From the condition list, select an operator.
6. Enter attribute values. You can select attribute values from a list, or you can search for specific values and Ctrl-click to select more than one. You can also manually enter attribute values. If a global variable contains the attribute values you want to use, you can select a global variable.
7. Click Add. The data restriction appears in the Created Restrictions task area.
8. To view the SQL query for the restriction, click Advanced. Data Analyzer displays the SQL query for the restriction in advanced mode. In advanced mode, you can edit the SQL query for a restriction. Data Analyzer displays buttons for adding numbers and operators to the SQL query for the data restriction. Click within the SQL query, and then click the buttons to add numbers or operators to the SQL query.
9. Use the Basic or Advanced mode, described in steps 3 to 8, to create more restrictions for the same user or group.
10. If you create more than one data restriction, you can adjust the order of the restrictions and the operators to use between restrictions. To adjust the restrictions, click Advanced in the Created Restrictions task area. In advanced mode, the Created Restrictions task area displays lists for adding parentheses and operators: click to add a left parenthesis, click to add a right parenthesis, click to add operators, click the appropriate list to group the restrictions, and click to change the order of the restrictions.
11. When you have completed adding data restrictions for the user or group, click Apply Restrictions. Applied restrictions appear in the Current Data Restrictions area. To remove a data restriction, click the Remove button. To remove all data restrictions, click Cancel.
12. Click OK to save the changes.

20    Chapter 3: Setting Permissions and Restrictions

CHAPTER 4

Managing Time-Based Schedules

This chapter includes the following topics:
♦ Overview, 21
♦ Creating a Time-Based Schedule, 22
♦ Managing Time-Based Schedules, 23
♦ Managing Reports in a Time-Based Schedule, 25
♦ Using the Calendar, 28
♦ Defining a Business Day, 29
♦ Defining a Holiday, 29
♦ Monitoring a Schedule, 29

Overview

A time-based schedule updates reports based on a configured schedule. When Data Analyzer runs a time-based schedule, it runs each report attached to the schedule. You can attach any cached report to a time-based schedule.

To use a time-based schedule, complete the following steps:
1. Create a time-based schedule. Configure the start time, date, and repeating option of the schedule when you create or edit a time-based schedule.
2. Attach reports to the time-based schedule as tasks. Attach a report to the time-based schedule when you create or edit the report. Attach imported cached reports to tasks from the time-based schedule.

You can configure the following types of time-based schedules:
♦ Single-event schedule. Updates report data only on the configured date. Create a single-event schedule for a one-time update of the report data. For example, if you know that the database administrator will update the data warehouse on December 1, but do not know when other updates occur, create a single-event schedule for December 2.
♦ Recurring schedule. Updates report data on a regular cycle, such as once a week or on the first Monday of each month. Create a recurring schedule to update report data regularly. You might use a recurring schedule to run reports after a regularly scheduled update of the data warehouse. For example, if you know that the data warehouse is updated the first Friday of every month, create a time-based schedule to update reports on the second Monday of every month.

21

If you want to update reports when a PowerCenter session or batch completes, you can create an event-based schedule.

You can set up business days and holidays for the Data Analyzer Calendar. Monitor existing schedules with the Calendar or the Schedule Monitor. The Calendar provides daily, weekly, and monthly views of all the time-based schedules in the repository. The Schedule Monitor provides a list of the schedules currently running reports.

After you attach reports to a time-based schedule, you can create indicators and alerts for the reports.

Creating a Time-Based Schedule

You can create single-event or recurring schedules to run reports as tasks. Single-event schedules run tasks once. Recurring schedules can repeat every minute, hourly, daily, weekly, monthly, or quarterly. If a scheduled run falls on a non-business day, such as a weekend or configured holiday, Data Analyzer waits until the next scheduled run to run attached reports.

To create a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules > Add. The Properties page appears.
2. Enter the following information:

Name: Name of the time-based schedule. The name can include any character except a space, tab, newline character, and the following special characters: \/:*?"<>|'&[]
Description: Description of the time-based schedule.
Business Day Only: When selected, the schedule runs reports on business days only.
Start Date: Date the schedule initiates. Default is the current date on Data Analyzer.
Start Time: Time the schedule initiates. Default is 12:00 p.m. (noon).

3. Select a repeat option. For a single-event schedule, select Do Not Repeat. For a repeating schedule, select one of the following repeat options:

Repeat Every (Number) (Minute/Hour/Day/Week/Month/Year): Repeats every specified number of units of time. You can select Minute, Hour, Day, Week, Month, or Year as a unit of time. Select minutes in increments of five. Use this setting to schedule recurring updates of report data.
Repeat Every (Monday/Tuesday/Wednesday/Thursday/Friday/Saturday/Sunday): Repeats each week on the specified day(s). Use this setting to schedule weekly updates of report data.
Repeat the (First/Second/Third/Fourth) (Monday/Tuesday/Wednesday/Thursday/Friday/Saturday/Sunday) of every (Month/Year): Repeats on the specified day of the week of every month or year. Use this setting to schedule monthly or yearly updates of report data.
Repeat on (Number) of days from the (Beginning of/End of) the (First/Second/Third Month) of each Quarter: Repeats every specified number of days from the beginning or end of the specified month. Use this setting to schedule quarterly updates of report data.

22    Chapter 4: Managing Time-Based Schedules

4. Select the repeat condition:

Always: Schedule repeats until disabled or deleted from the repository. Default is Always.
Until (Month) (Day) (Year): Schedule repeats until the date you specify. Default is the current date on Data Analyzer.

5. Click OK.
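The Business Day Only option described above can be modeled as a filter over scheduled runs: a run that falls on a weekend or a configured holiday is skipped, and the schedule waits for its next scheduled run. A hypothetical sketch of that rule follows; the holiday list and function names are illustrative, not part of Data Analyzer.

```python
from datetime import date, timedelta

HOLIDAYS = {date(2007, 12, 25)}        # illustrative configured holiday

def is_business_day(day):
    return day.weekday() < 5 and day not in HOLIDAYS   # Mon=0 .. Fri=4

def runs_that_execute(start, every_days, count, business_day_only=True):
    """Return the scheduled runs that actually execute."""
    runs = [start + timedelta(days=every_days * i) for i in range(count)]
    if business_day_only:
        runs = [run for run in runs if is_business_day(run)]
    return runs

# A daily schedule starting Friday 2007-12-21 skips the weekend and Dec 25:
print([run.isoformat() for run in runs_that_execute(date(2007, 12, 21), 1, 6)])
# ['2007-12-21', '2007-12-24', '2007-12-26']
```

The Saturday, Sunday, and holiday runs are dropped rather than deferred, matching the documented behavior of waiting until the next scheduled run.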
Managing Time-Based Schedules

After you create a time-based schedule, you can attach reports to the schedule. You can attach any cached report to a time-based schedule. When Data Analyzer runs a time-based schedule, it runs each attached report. You can complete the following tasks for a time-based schedule:
♦ Edit a schedule.
♦ Edit schedule access permissions.
♦ View or clear the schedule history.
♦ Start the schedule immediately.
♦ Stop the schedule immediately.
♦ Disable the schedule.

Editing a Time-Based Schedule

After you create a time-based schedule, you can edit schedule properties. When you update the schedule of a time-based schedule, the change impacts all attached reports and alerts.

To edit a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules. The Time-Based Schedules page appears.
2. Click the name of the schedule you want to edit. The Properties page appears.
3. Edit schedule properties if necessary.
4. Click Tasks to remove reports from the schedule. You can also remove reports or change the order in which they run.
5. Click OK.

Managing Time-Based Schedules    23

Editing Access Permissions for a Time-Based Schedule

Access permissions determine which users and groups can attach reports to the schedule, modify the schedule, or change access permissions to the schedule. By default, every user with the appropriate privileges can edit a schedule. You can change the access permissions for a schedule to protect the security of the schedule. To set access permissions, click the Permissions button for the schedule.

Viewing or Clearing a Time-Based Schedule History

You can view the history of a time-based schedule. Each time-based schedule has a history containing the following information:
♦ Start time. The date and time Data Analyzer started running the schedule.
♦ End time. The date and time Data Analyzer stops running the schedule.
♦ Status. Lists whether the schedule or task completed successfully or the number of errors that occurred.

When you view schedule histories, you can determine how long all tasks attached to the schedule take to update, the number of successfully completed schedule runs, and the number of recurring errors during the run. You can also clear the history of a schedule. You might clear a schedule history at the end of a quarter or to save space in the repository.

To view or clear the history of a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules. The Time-Based Schedules page appears.
2. Select the schedule you want to view.
3. Click History. The Schedule History page appears. The schedule name appears in parentheses.
4. To clear the history of the schedule, click Clear.
5. Click OK.

Starting a Time-Based Schedule Immediately

You can start a time-based schedule immediately instead of waiting for its next scheduled run. You might start a time-based schedule immediately to test attached reports. You might also start the schedule if errors occurred during the previously scheduled run.

24    Chapter 4: Managing Time-Based Schedules

To start a time-based schedule immediately:
1. Click Administration > Scheduling > Time-Based Schedules. The Time-Based Schedules page appears.
2. For the time-based schedule you want to start, click Run Now. Data Analyzer starts the schedule and runs the attached reports.

Stopping a Time-Based Schedule Immediately

You can stop a time-based schedule immediately, aborting all attached reports. You can stop a schedule immediately when you need to restart the server. For more information, see "Stopping a Schedule" on page 30.

Disabling a Time-Based Schedule

You can disable a time-based schedule when you do not want it to run. You might disable a schedule when it has no attached reports or when the update of source data is temporarily interrupted. When you want the schedule to resume, you can enable the schedule.

To disable a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules. The Time-Based Schedules page appears.
2. Click the Enabled button for the schedule you want to disable. The status of the schedule changes to Disabled. When you want to enable the schedule again, click the Disabled button.

Removing a Time-Based Schedule

You can remove time-based schedules from the repository. Before you remove any schedule from the repository, reassign all tasks attached to the schedule.

To remove a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules.
2. Click the Remove button for the schedule you want to delete.
3. Click OK.

Managing Reports in a Time-Based Schedule

After you create a time-based schedule, you can attach reports to the schedule. You can attach any cached report to a time-based schedule. When Data Analyzer runs a time-based schedule, it runs each attached report. You can complete the following schedule-related tasks for a report:
♦ Attach a report to a time-based schedule.
♦ View a list of attached reports.
♦ View task properties.
♦ View or clear a task history.
♦ Remove a report from a time-based schedule.

Managing Reports in a Time-Based Schedule    25

Attaching Reports to a Time-Based Schedule

You can attach a report to a time-based schedule using one of the following methods:
♦ Save a new report as cached. Select the schedule option when you save a new report to the repository.
♦ Save an existing report as a cached report. Select Save As on an existing report, and change the scheduling options.
♦ Add an imported report to a schedule. Select a schedule and use the add task option to attach multiple imported cached reports to an existing schedule.

You can attach multiple reports to a single schedule. If you attach multiple reports to a schedule, Data Analyzer runs the reports concurrently. To make troubleshooting easier, attach a small number of reports to a schedule. Set up multiple schedules to run a large number of reports.

You can attach reports that have alerts on a predefined schedule to a time-based schedule, but not to an event-based schedule. If you attach a report that has alerts on a predefined schedule to a time-based schedule, the report schedule must update more often than the alert schedule updates.

When a user selects broadcast or alert rules for a time-based schedule, Data Analyzer attaches the rules to the schedule but does not display the rules on the list of tasks for the schedule. Although the rules do not display on the Tasks page for the schedule, Data Analyzer applies the rules when it runs the report on the time-based schedule.

Attaching Imported Cached Reports to a Time-Based Schedule

When you import cached reports to the repository, the following message appears:
Some of the imported reports must be put on schedules. Please assign the reports to schedules immediately.
You must attach any cached reports that you import to a schedule. You can attach imported cached reports to time-based or event-based schedules. You can attach each imported report individually or attach multiple imported reports from a list to a single schedule. To attach multiple reports from the list, you must attach the reports during the same Data Analyzer session. If the session expires or you log out before attaching multiple reports from the import list, you cannot attach multiple reports. You must then attach the imported reports individually.

To attach an imported cached report to a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules.
2. Click the time-based schedule that you want to use.
3. Click Tasks. The list of the tasks attached to the schedule appears.
4. Click Add. The Add button appears only when you have unscheduled imported reports in the repository. The Imported Scheduled Reports window appears.
5. Select the imported reports that you want to add to the schedule. To add all available imported reports as tasks for the schedule, select the All check box next to Select Reports.
6. Click Apply. The report appears as an item on the task list.

Viewing Attached Reports

All reports that are attached to a time-based schedule display as a list of tasks for the schedule. You can view these tasks on the Tasks page for the schedule.

26    Chapter 4: Managing Time-Based Schedules

Managing Reports in a Time-Based Schedule

Viewing or Clearing a Task History

You can view a task history for reports attached to time-based schedules. You can view a task history to compare the number of successful runs on different schedules. View report histories to determine how long the report takes to update, the number of successfully completed runs, or recurring errors when running the report. You can also clear the history of a report. You can clear a task history at the end of a quarter or to save space in the repository.

To view or clear a task history:
1. Click Administration > Scheduling > Time-Based Schedules. The Time-Based Schedules page appears.
2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report. The Task Properties page appears.
5. Click History.
6. To clear the task history, click Clear. To return to Task Properties, click OK.
7. Click OK to close the Task Properties page.

Viewing Task Properties

You can view the task properties for any report attached to a time-based schedule. You cannot modify the task properties.

To view task properties:
1. Click Administration > Scheduling > Time-Based Schedules.
2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report. The Task Properties page appears.
5. Click OK to close the Task Properties page.
Removing a Report from a Time-Based Schedule

You can remove a report from a time-based schedule. Remove a report when you plan to disable the schedule or when the report requires a new update strategy. When you remove a task, you must attach the report to another schedule to ensure it updates in a timely manner.

To remove a report from a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules. The Time-Based Schedules page appears.
2. Click the name of the schedule you want to edit.
3. Click Tasks.
4. Select the check box for the report you want to remove. If you want to remove all attached reports, select the check box in the title bar next to Name.
5. Click Remove, and then click OK.

Using the Calendar

Use the Calendar in the Scheduling section to view all enabled time-based schedules in the repository. The Calendar lists schedules by day, week, or month. The Calendar recognizes leap years. The default Calendar display is a view of the current day.

To view the Calendar:
1. Click Administration > Scheduling > Calendar. The Calendar appears.
2. Click Weekly or Monthly to change the view of the Calendar.

Navigating the Calendar

The Calendar provides daily, weekly, and monthly views. You can navigate from one view to another.

Navigating the Daily View

The Calendar opens to the Daily view by default. The Daily view displays the current day and organizes the time-based schedules for the current day by hour. Use the left and right arrows to navigate to the previous and next day, respectively. To view a different date, select a different date or month in the calendar.

Navigating the Weekly View

The Weekly view opens to the current week by default. The Weekly view displays all time-based schedules for the week. Use the left and right arrows to navigate to the previous and following weeks, respectively. To access a Daily view, click a date.

Navigating the Monthly View

The Monthly view opens to the current month by default. The Monthly view displays all time-based schedules for the month. Use the left and right arrows to navigate to the previous and following months, respectively. To access a Daily view, click the specific date. To access a Weekly view, click a week.

Defining a Business Day

You can define business days for the Data Analyzer Calendar. Business days are the days Data Analyzer treats as regular working days. The default business days are Monday through Friday. You can change these business days to fit your work schedule. After you define business days, you can create time-based schedules that run only on those days.

The business day setting overrides all other recurring schedule settings you create. If the schedule falls on a non-business day, like a weekend or holiday, Data Analyzer runs the reports on the next scheduled day. For example, you create a schedule to run reports on the first of the month and configure the schedule to run only on business days. If March 1 falls on a Sunday, Data Analyzer waits until the next scheduled day, April 1, to run the schedule.

To define business days:
1. Click Administration > Scheduling > Business Days. The Business Days page appears. By default, the configured business days are Monday through Friday.
2. Select the days you want to define as business days. Clear the days you do not want defined as business days.
3. Click Apply.

Defining a Holiday

You can define holidays for the Data Analyzer Calendar. Data Analyzer treats holidays as non-business days. When a schedule falls on a holiday, Data Analyzer postpones the schedule to run attached reports on the next scheduled day. Time-based schedules configured to run reports only on business days do not run on holidays. Time-based schedules that are not configured to run only on business days still run on configured holidays. By default, there are no configured holidays. View all configured holidays on the Holidays page.

To define a holiday:
1. Click Administration > Scheduling > Holidays. The Holidays page appears.
2. Click Add. The Holiday Properties page appears.
3. Enter the name, date, and a brief description of the holiday.
4. Click OK.
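The postponement rule described above can be illustrated with a small script. This sketch is not part of Data Analyzer; it assumes GNU date and models the guide's own example of a monthly schedule set for the first of the month that runs only on business days:

```shell
#!/bin/sh
# Illustration of the business-day rule: a monthly schedule set for the
# 1st of the month skips any run date that is not a business day
# (here, Monday through Friday) and waits for the next scheduled
# occurrence. Requires GNU date (the -d option).

is_business_day() {
    # %u prints the ISO weekday: 1 = Monday ... 7 = Sunday.
    [ "$(date -d "$1" +%u)" -le 5 ]
}

next_run() {
    run="$1"
    # Skip ahead one scheduled occurrence (one month) at a time until
    # the run date falls on a business day.
    while ! is_business_day "$run"; do
        run="$(date -d "$run +1 month" +%Y-%m-%d)"
    done
    echo "$run"
}

# 2026-03-01 is a Sunday, so the schedule waits until 2026-04-01.
next_run 2026-03-01
```

Holidays could be modeled the same way by treating a configured list of dates as non-business days inside `is_business_day`.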
Monitoring a Schedule

The Schedule Monitor provides a list of all schedules that are currently running in the repository. You might check the Schedule Monitor before you restart Data Analyzer to make sure no schedules are running. You might also use the Schedule Monitor to verify whether Data Analyzer runs reports at the scheduled time.

To monitor a schedule, click Administration > Scheduling > Schedule Monitoring. Data Analyzer displays schedules that are currently running.

Stopping a Schedule

You can stop a running schedule and all attached reports through the Schedule Monitor. You might stop a schedule when you need to restart the server or when a problem arises with source data.

To stop a running schedule:
1. Click Administration > Scheduling > Schedule Monitoring. The Schedule Monitor lists all currently running schedules.
2. Click Remove to stop a running schedule.
3. Click OK.

CHAPTER 5

Managing Event-Based Schedules

This chapter includes the following topics:
♦ Overview
♦ Updating Reports When a PowerCenter Session Completes
♦ Managing Event-Based Schedules
♦ Managing Reports in an Event-Based Schedule

Overview

PowerCenter Data Analyzer provides event-based schedules and the PowerCenter Integration utility so you can update reports in Data Analyzer based on the completion of PowerCenter sessions. You can create indicators and alerts for the reports in an event-based schedule. You can monitor event-based schedules with the Schedule Monitor. The Schedule Monitor provides a list of the schedules currently running reports. You cannot use the PowerCenter Integration utility with a time-based schedule.

To update reports in Data Analyzer when a session completes in PowerCenter, complete the following steps:
1. Create an event-based schedule and attach cached reports to the schedule. For more information, see “Step 1. Create an Event-Based Schedule” on page 32.
2. Configure a PowerCenter session to call the PowerCenter Integration utility as a post-session command and pass the event-based schedule name as a parameter. For more information, see “Step 2. Use the PowerCenter Integration Utility in PowerCenter” on page 33.

If the PowerCenter Integration utility is set up correctly, Data Analyzer runs each report attached to the event-based schedule when a PowerCenter session completes.

Updating Reports When a PowerCenter Session Completes

When you create a Reporting Service in the PowerCenter Administration Console, PowerCenter installs the PowerCenter Integration utility. PowerCenter installs a separate PowerCenter Integration utility for every Reporting Service that you create. PowerCenter suffixes the Reporting Service name to the notifyias folder. For example, if you create a Reporting Service and call it DA_Test, the notifyias folder would be notifyias-DA_Test.

When you create a Reporting Service, PowerCenter sets the properties in the notifyias.properties file to point to the correct instance of the Reporting Service. The PowerCenter Integration utility considers the settings in the notifyias.properties file to update reports in Data Analyzer. The notifyias.properties file contains information about the Reporting Service URL and the schedule queue name.

The PowerCenter Integration utility creates a log file when it runs after the PowerCenter session completes. The logfile.location property determines the location and the name of the log file. Open the notifyias.properties file in the notifyias-<Reporting Service Name> folder and set the logfile.location property to the location and the name of the PowerCenter Integration utility log file.

Before you run the PowerCenter Integration utility, complete the following steps:
1. Back up the notifyias file before you modify it.
2. Open the notifyias file in a text editor. On UNIX, the file is notifyias.sh; on Windows, it is notifyias.bat.
3. Set the JAVA_HOME environment variable to the location of the JVM.
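The preparation steps above amount to a backup, an environment variable, and one property edit. The sketch below is illustrative only: the folder location, the Reporting Service name DA_Test, the JVM path, and the log file path are all assumptions to replace with your own installation's values.

```shell
#!/bin/sh
# Illustrative preparation of the PowerCenter Integration utility.
# All paths below are assumptions; substitute the values from your own
# notifyias-<Reporting Service Name> folder.

NOTIFY_DIR="${NOTIFY_DIR:-/tmp/notifyias-DA_Test}"

# Stand-in files so the sketch is runnable; in a real installation
# these are created when you create the Reporting Service.
mkdir -p "$NOTIFY_DIR"
touch "$NOTIFY_DIR/notifyias.sh" "$NOTIFY_DIR/notifyias.properties"

# 1. Back up the notifyias file before modifying it.
cp "$NOTIFY_DIR/notifyias.sh" "$NOTIFY_DIR/notifyias.sh.bak"

# 2-3. Set JAVA_HOME to the location of the JVM the utility should use.
export JAVA_HOME="${JAVA_HOME:-/opt/java}"

# Point logfile.location at the log file the utility should write
# after each PowerCenter session completes.
echo "logfile.location=$NOTIFY_DIR/notifyias.log" >> "$NOTIFY_DIR/notifyias.properties"

grep "logfile.location" "$NOTIFY_DIR/notifyias.properties"
```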
You can find the PowerCenter Integration utility in the following folder:
<PCInstallationfolder>\server\tomcat\jboss\notifyias-<Reporting Service Name>

Step 1. Create an Event-Based Schedule

To run reports in Data Analyzer after a session completes in PowerCenter, create an event-based schedule in Data Analyzer and attach the reports that you want to run after the PowerCenter session completes. When you create an event-based schedule, you need to provide a name and description for the schedule. You do not need to provide information about the PowerCenter session you want to use.

To create an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click Add. The Add an Event-Based Schedule page appears.
3. Enter a name and description for the schedule.
4. Click OK.

After you create the event-based schedule, you can attach it to a cached report when you save the report.

Attaching Reports to an Event-Based Schedule

You can attach a report to an event-based schedule with one of the following methods:
♦ Save a new report as a cached report. Select the cached report option and a specific schedule when you save a new report to the repository.
♦ Save an existing report as a cached report. Select Save As on a report, then change the scheduling options.

You can attach multiple reports to a single schedule. If you attach multiple reports to a schedule, Data Analyzer runs the reports concurrently. To make troubleshooting easier, attach a small number of reports to a schedule. Set up multiple schedules to run a large number of reports.

Step 2. Use the PowerCenter Integration Utility in PowerCenter

Before you can use the PowerCenter Integration utility in a PowerCenter post-session command, create an event-based schedule as outlined in the previous step. In the PowerCenter Workflow Manager, configure the PowerCenter session to call the PowerCenter Integration utility as a post-session command, and specify the name of the event-based schedule that you want to associate with the PowerCenter session. You can set up the post-session command to send Data Analyzer notification when the session completes successfully. Data Analyzer then connects to the PowerCenter data warehouse to retrieve new data to update reports. For more information about configuring post-session commands, see the PowerCenter Workflow Administration Guide.

Use the following post-session command syntax for PowerCenter installed on Windows:
notifyias.bat Event-BasedScheduleName

Use the following shell command syntax for PowerCenter installed on UNIX:
notifyias.sh Event-BasedScheduleName

Event-BasedScheduleName is the name of the Data Analyzer event-based schedule that contains the tasks you want to run when the PowerCenter session completes. If the system path does not include the path of the PowerCenter Integration utility, you need to prefix the utility file name with the file path. When you use the PowerCenter Integration utility in the post-session command, you need to navigate to the correct notifyias-<Reporting Service name> folder.

You can also run the PowerCenter Integration utility as a command task in a PowerCenter workflow. If you want to run the PowerCenter Integration utility after all other tasks in a workflow complete, run it as the last task in the workflow.
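For example, a post-session success command on a UNIX server might give the full path to the script. This is a hypothetical configuration value: the installation path and the schedule name Sales_Warehouse_Refresh are placeholders for your own Reporting Service folder and event-based schedule name.

```
/opt/Informatica/PowerCenter/server/tomcat/jboss/notifyias-DA_Test/notifyias.sh Sales_Warehouse_Refresh
```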
Managing Event-Based Schedules

You can perform the following tasks to manage an event-based schedule:
♦ Edit a schedule.
♦ Edit schedule access permissions.
♦ View or clear the schedule history.
♦ Start a schedule immediately.
♦ Stop a schedule immediately.
♦ Disable a schedule.
♦ Remove a schedule.

Editing an Event-Based Schedule

After you create an event-based schedule, you can edit its name and description.

By default, the system administrator and users with the Set Up Schedules and Tasks privilege and Write permission on the schedule can edit an event-based schedule.

To edit an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click the name of the schedule you want to edit. The Edit an Event-Based Schedule page appears.
3. Edit the name or description of the event-based schedule. If you want to view the reports assigned as tasks to the schedule, click Tasks. If you want to view the history of the schedule, click History. To edit access permissions, click the Permissions button.
4. Click OK.

Editing Access Permissions for an Event-Based Schedule

Access permissions determine which users and groups can attach reports to the schedule, modify the schedule, or change access permission for the schedule. To secure a schedule, you can change the access permissions for the schedule.
Viewing or Clearing an Event-Based Schedule History

You can view the history of an event-based schedule to see the following information:
♦ Start time. The date and time Data Analyzer started the schedule.
♦ End time. The date and time the schedule completes.
♦ Status. Lists the successful completion of the schedule or the number of errors that have occurred.

View schedule histories to determine how long attached reports take to complete, the number of successfully completed runs of the schedule, or the number of recurring errors. You can also clear the history of an event-based schedule. You might clear a schedule history at the end of a quarter or to save space in the repository.

To view an event-based schedule history:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click the schedule you want to view.
3. Click History. The Schedule History page appears with the schedule name in parentheses.
4. To clear the schedule history, click Clear.
5. Click OK.

Starting an Event-Based Schedule Immediately

You can start an event-based schedule immediately instead of waiting for the related PowerCenter session to complete. You might start an event-based schedule immediately to test attached reports and report alerts. You might also start the schedule if errors occurred during the last run of the schedule.

To start an event-based schedule immediately:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. For the event-based schedule you want to start, click Run Now. Data Analyzer starts the schedule and runs the attached reports.

Stopping an Event-Based Schedule Immediately

You can stop an event-based schedule immediately, which stops all attached reports. You can stop a schedule immediately when you need to restart the server. For more information, see “Stopping a Schedule” on page 30.

Disabling an Event-Based Schedule

You can disable an event-based schedule when you do not want it to run. You might disable a schedule when it has no attached reports or when the update of source data has been interrupted. When you want the schedule to resume, you can enable the schedule.

To disable an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click the Enabled button for the schedule you want to disable. The Status of the schedule changes to Disabled. To enable the schedule again, click Disabled.

Removing an Event-Based Schedule

You can remove event-based schedules from the repository. You might want to remove an event-based schedule when the PowerCenter session is no longer valid. Before removing a schedule from the repository, reassign all attached reports to another schedule.

To remove an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click the Remove button for the schedule you want to delete.
3. Click OK.

Managing Reports in an Event-Based Schedule

After you create an event-based schedule, you can attach any cached reports to the schedule. When Data Analyzer runs an event-based schedule, it runs each attached report. You can perform the following tasks to manage reports in an event-based schedule:
♦ View a list of attached reports.
♦ View or clear a report history.
♦ View task properties.
♦ Remove a report from an event-based schedule.
♦ Attach imported cached reports to a schedule.
Viewing Attached Reports

You can view all reports attached to an event-based schedule.

To view tasks attached to an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click the name of the schedule you want to edit. The schedule properties display.
3. Click Tasks. Data Analyzer displays all attached reports.

Viewing or Clearing a Report History

You can view a report history for the reports attached to an event-based schedule. You might want to view a report history to compare the number of successful runs on different schedules. View report histories to determine how long a report takes to update, the number of successfully completed runs, or recurring errors when running the report. You can also clear the report history. You might clear history at the end of a quarter or to make space in the repository.

To view or clear a report history:
1. Click Administration > Scheduling > Event-Based Schedules.
2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report. The Task Properties page appears.
5. Click History. Data Analyzer displays the report history.
6. To clear the history, click Clear. To return to Task Properties, click OK.
7. Click OK.

Viewing Task Properties

You can view the properties of any report attached to an event-based schedule.

To view task properties:
1. Click Administration > Scheduling > Event-Based Schedules.
2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report. The Task Properties page appears.
5. Click OK.
Removing a Report from an Event-Based Schedule

You can remove a report from an event-based schedule. You might want to remove a report when you plan to disable the schedule or when the report requires a new update strategy. When you remove a cached report, attach it to another schedule to ensure it updates in a timely manner.

To remove a report from an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules.
2. Click the name of the schedule you want to edit, and then click Tasks.
3. Select the check box for the report you want to remove. If you want to remove all attached reports, select the check box in the title bar next to Name.
4. Click Remove, and then click OK.

Attaching Imported Cached Reports to an Event-Based Schedule

You must attach each imported cached report to a schedule. When you import cached reports to the repository, Data Analyzer displays the following message:
Some of the imported reports must be put on schedules. Please assign the reports to schedules immediately.
You can attach imported cached reports to time-based or event-based schedules. You can attach imported reports individually, or attach multiple imported reports from a list to a single schedule. To attach multiple reports from the list, you must attach them during the same Data Analyzer session. If the session expires or you log out before attaching the reports from the import list, you cannot attach multiple reports. You must attach the imported reports individually.

To attach an imported cached report to an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules.
2. Click the event-based schedule that you want to use.
3. Click Tasks. The list of the tasks assigned to the schedule appears.
4. Click Add. The Imported Scheduled Reports window appears. The Add button appears only when you have unscheduled imported reports in the repository.
5. Select the reports that you want to add to the schedule. If you want to add all available imported reports to the schedule, click the All check box.
6. Click Apply. The report appears as an item on the task list.

Exporting and importing repository objects uses considerable system resources. If you perform these tasks while users are logged in to Data Analyzer, users might experience slow response or timeout errors. Schedule exporting and importing tasks so that you do not disrupt Data Analyzer users.

When you export repository objects, Data Analyzer creates an XML file that contains information about the exported objects. Use this file to import the repository objects into a Data Analyzer repository. You can view the XML files with any text editor. However, do not modify the XML file created when you export objects. Any change might invalidate the XML file and prevent you from using it to import objects into a Data Analyzer repository.

When you save the XML file on a Windows machine, verify that you have enough space available in the Windows temp directory, usually in the C: drive, for the temporary space typically required when a file is saved.

You can also export repository objects using the ImportExport command line utility. For more information, see “Using the Import Export Utility” on page 65.

Exporting a Schema

You can export analytic and operational schemas. When you export a schema from the Data Analyzer repository, Data Analyzer exports different objects based on the type of schema you select. You can select individual metrics within a schema to export, or you can select a folder that contains metrics. You can also choose whether to export only metric definitions or to export all metrics, tables, attributes, and other schema objects associated with the metric.

You can export the following metrics and schemas:
♦ Operational schemas or metrics in operational schemas
♦ Analytic schemas or metrics in analytic schemas
♦ Hierarchical schemas or metrics in hierarchical schemas
♦ Calculated metrics

Exporting Metric Definitions Only

When you export only metric definitions, Data Analyzer exports the metrics you select. It does not export the definition of the table or schema that contains the metrics, or any other schema object associated with the metric or its table or schema. When exporting a calculated metric, Data Analyzer also exports all associated metrics that are used to calculate the calculated metric.

Exporting Metrics and Associated Schema Objects

When Data Analyzer exports a metric or schema and the associated objects, the objects it exports depend on the type of schema the metric belongs to.

Exporting Analytic Schemas

When exporting a metric from an analytic schema, Data Analyzer exports the definitions of the following schema objects associated with the metric:
♦ Fact tables associated with the exported metric. Data Analyzer exports the fact table associated with each exported metric.
♦ Dimension keys in the exported fact table.
♦ Dimension tables associated with the exported fact tables.
♦ Attributes in the exported dimension tables.
♦ Drill paths associated with any of the attributes in the dimension tables.
♦ Aggregate fact tables associated with the exported fact tables. Data Analyzer exports all schema objects associated with the metrics in these fact tables.
♦ Aggregate, template, and snowflake dimension tables associated with the dimension tables. If you export a template dimension table associated with the exported metric, Data Analyzer exports only one definition of the template dimension.

When exporting a fact table associated with a time dimension, Data Analyzer does not export the time dimension. You can export the time dimensions separately. If you export only a template dimension, Data Analyzer exports only the template dimension and its attributes. You can also export template dimensions separately.

Exporting Operational Schemas

When Data Analyzer exports a metric from an operational schema, it also exports all metrics, attributes, and tables in the operational schema and the join expressions for the operational schema tables.

Exporting Hierarchical Schemas

When Data Analyzer exports a metric from a hierarchical schema, it also exports all metrics and attributes in the hierarchical schema. It does not export any associated schema object.

Exporting Calculated Metrics

Calculated metrics are derived from two or more base metrics from analytic, operational, or hierarchical schemas. If you export a calculated metric, Data Analyzer also exports the base metrics used to calculate it, along with their associated schema objects. For example, you have the following metrics:
♦ Base metric 1 (BaseMetric1) and base metric 2 (BaseMetric2) are metrics from fact tables in an analytic schema.
♦ Base metric 3 (BaseMetric3) is a metric from an operational schema (OpSch1).
♦ Base metric 4 (BaseMetric4) is a metric from a different operational schema (OpSch2).

If you export a calculated metric which is calculated from BaseMetric1 and BaseMetric2, Data Analyzer exports each metric, its associated fact table, and the schema objects associated with the metric in that fact table. If you export a calculated metric which is calculated from BaseMetric1 and BaseMetric3, Data Analyzer exports BaseMetric1, its associated fact table, and the schema objects associated with the metric in that fact table. Data Analyzer also exports BaseMetric3 and its entire associated operational schema. If you export a calculated metric which is calculated from BaseMetric3 and BaseMetric4, Data Analyzer exports BaseMetric3 and its entire operational schema, and BaseMetric4 and its entire operational schema.

To export schema objects:
1. Click Administration > XML Export/Import > Export Schemas. The Export Schemas page displays all the folders and metrics in the Metrics folder of the Schema Directory. If you define a new object in the repository, or if you create a new folder or move objects in the Schema Directory, the changes may not immediately display in the export list. Click Refresh Schema to display the latest list of folders and metrics in the Schema Directory.
2. Select the type of information you want to export. To export only metric definitions, select Export Metric Definitions Only. To export the metric definitions and associated tables and attributes, select Export the Metrics with the Associated Schema Tables and Attributes.
3. Select the folders, metrics, or template dimensions that you want to export. At the top of the Metrics section, you can select Metrics to select all folders and metrics in the list. You can select Template Dimensions to select all template dimensions in the list, or select a metrics folder to export all metrics within the folder. You can also select individual metrics in different folders.
4. Click Export as XML. The File Download window appears.
5. Click Save. The Save As window appears.
6. Navigate to the directory where you want to save the file.
7. Enter a name for the XML file and click Save. Data Analyzer exports the schema to an XML file.
Exporting a Time Dimension

You can export time dimension tables to an XML file. Time dimension tables contain date- and time-related attributes that describe the occurrence of a metric.

To export a time dimension table:
1. Click Administration > XML Export/Import > Export Time Dimensions. The Export Time Dimensions page displays the time dimension tables in the repository.
2. Select the time dimension you want to export.
3. Click Export as XML. The File Download window appears.
4. Click Save. The Save As window appears.
5. Navigate to the directory where you want to save the file.
6. Enter a name for the XML file and click Save. Data Analyzer exports the time dimension table to an XML file.

Exporting a Report

You can export reports from public and personal folders. You can export cached and on-demand reports. You can export multiple reports at once. When you export a folder, Data Analyzer exports all reports in the folder and its subfolders. When exporting cached reports, Data Analyzer exports the report data and the schedule for the reports.

When you export a report, Data Analyzer always exports the following report components:
♦ Report table
♦ Report charts
♦ Filters
♦ Calculations
♦ Custom attributes
♦ All reports in an analytic workflow
♦ All subreports in a composite report
By default, Data Analyzer also exports the following components associated with reports. You can choose not to export any of these components:
♦ Indicators
♦ Alerts
♦ Highlighting
♦ Permissions
♦ Schedules
♦ Filtersets

Data Analyzer exports all current data for each component, with the following exceptions:
♦ Gauge indicators. Exported personal gauge indicators do not keep their original owner. The user who imports the report becomes the owner of the gauge indicator, and the gauge indicator becomes personal to that user. Exported public gauge indicators keep their original owner.
♦ Alerts. Exported personal and public alerts use the state set for all report subscribers as the default alert state.
♦ Highlighting. Exported public highlighting uses the state set for all users as the default highlighting state. Data Analyzer does not export any personal highlighting.

When you export the originating report of an analytic workflow, Data Analyzer exports all the workflow reports. To export an analytic workflow, you need to export only the originating report.

When you export a report that uses global variables, Data Analyzer lists the global variables used in the report. Although the global variables are not exported with the report, you can export them separately.

To export a report:
1. Click Administration > XML Export/Import > Export Reports. The Export Report page displays all public and personal folders in the repository that you have permission to access. If you create, modify, or delete a folder or report, the changes may not immediately display in the report export list. Click Refresh Reports to display the latest list of reports from Public Folders and Personal Folder.
2. Select the folders or reports that you want to export. Select a folder to export all subfolders and reports in the folder.
3. To modify the report components to export, click Export Options.
4. From the list of Export Options, clear each component that you do not want to export to the XML file.
5. Click Export as XML. The File Download window appears.
6. Click Save. The Save As window appears.
7. Navigate to the directory where you want to save the file.
8. Enter a name for the XML file, and then click Save.
Data Analyzer exports the definitions of all selected reports.
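An export file is plain XML, so you can inspect it before moving it to another repository. The sketch below lists object names from such a file with Python's standard library. The element and attribute names (EXPORT, REPORT, NAME) are hypothetical stand-ins; the actual structure is defined by the Data Analyzer DTD.

```python
import xml.etree.ElementTree as ET

# Hypothetical export file content. The real element and attribute
# names are defined by the Data Analyzer DTD, not shown here.
SAMPLE = """<?xml version="1.0"?>
<EXPORT>
  <REPORT NAME="Sales Summary" FOLDER="Public Folders/Sales"/>
  <REPORT NAME="Returns by Region" FOLDER="Public Folders/Sales"/>
</EXPORT>"""

def report_names(xml_text):
    """Return the NAME attribute of every REPORT element in the file."""
    root = ET.fromstring(xml_text)
    return [r.get("NAME") for r in root.iter("REPORT")]

print(report_names(SAMPLE))  # ['Sales Summary', 'Returns by Region']
```

A listing like this is useful for confirming which reports a file contains before deciding whether to overwrite objects in the target repository.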

Exporting a Global Variable

You can export any global variables defined in the repository. When you export multiple global variables, Data Analyzer creates one XML file for the global variables and their default values.

To export a global variable:
1. Click Administration > XML Export/Import > Export Global Variables. The Export Global Variables page appears, listing all the global variables in the repository.
2. Select the global variables that you want to export. Optionally, select Name at the top of the list to select all the global variables in the list.
3. Click Export as XML. The File Download window appears.
4. Click Save. The Save As window appears.
5. Navigate to the directory where you want to save the file.
6. Enter a name for the XML file and click Save.
Data Analyzer exports the definitions of all selected global variables.

Exporting a Dashboard

You can export any of the public dashboards defined in the repository. You can export more than one dashboard at a time. When you export a dashboard, Data Analyzer exports the following objects associated with the dashboard:
♦ Reports
♦ Indicators
♦ Shared documents
♦ Dashboard filters
♦ Discussion comments
♦ Feedback

Data Analyzer does not export the following objects associated with the dashboard:
♦ Access permissions
♦ Attributes and metrics in the report
♦ Real-time objects

When you export a dashboard, you cannot select specific components to export. Therefore, the Export Options button is unavailable.

To export a dashboard:
1. Click Administration > XML Export/Import > Export Dashboards. The Export Dashboards page appears, listing all the dashboards in the repository that you can export.
2. Select the dashboards that you want to export. Optionally, select Name at the top of the list to select all the dashboards in the list.
3. Click Export as XML. The File Download window appears.
4. Click Save. The Save As window appears.
5. Navigate to the directory where you want to save the file.
6. Enter a name for the XML file and click Save.
Data Analyzer exports the definitions of all selected dashboards and objects associated with the dashboard.
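Because Data Analyzer writes the selected global variables and their default values into one XML file, a post-export audit can read them back as name/value pairs. This is a minimal sketch using the standard library; the element names are assumptions, not the actual Data Analyzer DTD.

```python
import xml.etree.ElementTree as ET

# Hypothetical shape of an exported global-variables file; the real
# element names come from the Data Analyzer DTD.
SAMPLE = """<?xml version="1.0"?>
<GLOBALVARIABLES>
  <VARIABLE NAME="CurrentRegion" VALUE="EMEA"/>
  <VARIABLE NAME="FiscalYearStart" VALUE="04-01"/>
</GLOBALVARIABLES>"""

def variable_defaults(xml_text):
    """Map each exported variable name to its default value."""
    root = ET.fromstring(xml_text)
    return {v.get("NAME"): v.get("VALUE") for v in root.iter("VARIABLE")}

print(variable_defaults(SAMPLE))
```

Comparing this mapping against the target repository helps, because Data Analyzer will not overwrite global variables that already exist there.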
Exporting a Security Profile

Data Analyzer keeps a security profile for each user or group in the repository. A security profile consists of the access permissions and data restrictions that the system administrator sets for a user or group. Data Analyzer allows you to export one security profile at a time.

When Data Analyzer exports a security profile, it exports access permissions for objects under the Schema Directory, which include folders, attributes, and metrics. Data Analyzer does not export access permissions for filtersets, reports, or shared documents.

If a user or group security profile you export does not have access permissions or data restrictions, Data Analyzer does not export any object definitions and displays the following message:
There is no content to be exported.

Exporting a User Security Profile

You can export a security profile for one user at a time.

To export a user security profile:
1. Click Administration > XML Export/Import > Export Security Profile. The Export Security Profile page displays a list of all the users in the repository. If there are a large number of users in the repository, Data Analyzer lists one page of users and displays the page numbers at the top. To view a list of users on other pages, click the page number.
2. Click Export from Users.
3. Select a user whose security profile you want to export.
4. Click Export as XML. The File Download window appears.
5. Click Save. The Save As window appears.
6. Navigate to the directory where you want to save the file.
7. Enter a name for the XML file and click Save.
Data Analyzer exports the security profile definition of the selected user.

Exporting a Group Security Profile

You can export a security profile for only one group at a time.

To export a group security profile:
1. Click Administration > XML Export/Import > Export Security Profile. The Export Security Profile page displays a list of all the groups in the repository. If there are a large number of groups in the repository, Data Analyzer lists one page of groups and displays the page numbers at the top. To view groups on other pages, click the page number.
2. Click Export from Groups.
3. Select the group whose security profile you want to export.
4. Click Export as XML. The File Download window appears.
5. Click Save. The Save As window appears.
6. Navigate to the directory where you want to save the file.
7. Enter a name for the XML file and click Save.
Data Analyzer exports the security profile definition for the selected group.

Exporting a Schedule

You can export a time-based or event-based schedule to an XML file. Data Analyzer runs a report with a time-based schedule on a configured schedule. Data Analyzer runs a report with an event-based schedule when a PowerCenter session completes. When you export a schedule, Data Analyzer does not export the history of the schedule.

To export a schedule:
1. Click Administration > XML Export/Import > Export Schedules. The Export Schedules page displays a list of the schedules in the repository.
2. Select the schedule you want to export. You can click Names at the top of the list to select all schedules in the list.
3. Click Export as XML. The File Download window appears.
4. Click Save. The Save As window appears.
5. Navigate to the directory where you want to save the file.
6. Enter a name for the XML file and click Save.
Data Analyzer exports the definitions of all selected schedules.
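Every export procedure above ends with saving an XML file, and Data Analyzer prompts you to overwrite or rename when a file of that name already exists in the directory. If you archive export files with a script outside Data Analyzer, the same rename-on-collision rule can be sketched as follows. The numeric-suffix scheme is an assumption of this sketch, not Data Analyzer's own behavior:

```python
def unique_name(existing, base="schedules.xml"):
    """Return `base`, or `base` with a numeric suffix if that name is
    already taken, mirroring a rename-on-collision policy."""
    if base not in existing:
        return base
    stem, ext = base.rsplit(".", 1)
    n = 1
    while f"{stem}_{n}.{ext}" in existing:
        n += 1
    return f"{stem}_{n}.{ext}"

print(unique_name({"schedules.xml"}))  # schedules_1.xml
```

Passing the set of file names already in the target directory (for example, from os.listdir) gives a safe name without overwriting earlier exports.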

Troubleshooting

After I export an object, I double-click the XML file and receive the following error:
The system cannot locate the resource specified. Error processing resource 'Principal<DTDVersion>.dtd'.

If you double-click the XML file, the operating system tries to open the file with a web browser. The web browser cannot locate the DTD file Data Analyzer uses for exported objects. Use a text editor to open the XML file. However, do not edit the file. Changes might invalidate the file.
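The browser error occurs because browsers try to resolve the DTD reference in the file's DOCTYPE, and the DTD is not shipped alongside the export file. A non-validating parser skips the external DTD, so you can pretty-print the file for inspection instead of opening it in a browser. The sketch below uses the Python standard library; the DOCTYPE name and content are hypothetical stand-ins for a real export file.

```python
import xml.dom.minidom

# Hypothetical export file with a DTD reference like the one in the
# error message. minidom's parser is non-validating, so the missing
# DTD file does not stop the parse.
SAMPLE = """<?xml version="1.0"?>
<!DOCTYPE EXPORT SYSTEM "Principal80.dtd">
<EXPORT><SCHEDULE NAME="Nightly"/></EXPORT>"""

pretty = xml.dom.minidom.parseString(SAMPLE).toprettyxml(indent="  ")
print(pretty)
```

This is read-only inspection; as the guide warns, do not edit and save the file, because changes might invalidate it.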

CHAPTER 7

Importing Objects to the Repository

This chapter includes the following topics:
♦ Overview, 49
♦ Importing a Schema, 50
♦ Importing a Time Dimension, 53
♦ Importing a Report, 54
♦ Importing a Global Variable, 56
♦ Importing a Dashboard, 57
♦ Importing a Security Profile, 59
♦ Importing a Schedule, 61
♦ Troubleshooting, 62

Overview

You can import objects into the Data Analyzer repository from a valid XML file of exported repository objects. You can import the following repository objects from XML files:
♦ Schemas
♦ Time dimensions
♦ Reports
♦ Global variables
♦ Dashboards
♦ Security profiles
♦ Schedules
♦ Users
♦ Groups
♦ Roles

You can import objects into the same repository or a different repository. When you import a repository object that was exported from a different repository, both repositories must have the same language type and locale settings, or the destination repository must be a superset of the source repository. For more information, see “Localization” on page 7.

Data Analyzer imports objects based on the following constraints:

♦ You can import objects from Data Analyzer 5.0 repositories or later. For more information, see “Importing Objects from a Previous Version” on page 50.
♦ You cannot overwrite global variables that already exist in the repository. Except for global variables, if you import objects that already exist in the repository, you can choose to overwrite the existing objects.

You might want to back up the target repository before you import repository objects into it. You can back up a Data Analyzer repository in the PowerCenter Administration Console. For more information, see the PowerCenter Administrator Guide.

Exporting and importing repository objects use considerable system resources. If you perform these tasks while users are logged in to Data Analyzer, users might experience slow response or timeout errors. Make sure that you schedule exporting and importing tasks so that you do not disrupt Data Analyzer users.

You can also import repository objects using the ImportExport command line utility. For more information, see the PowerCenter Configuration Guide.
XML Validation

When you import objects, you can validate the XML file against the DTD provided by Data Analyzer. Ordinarily, you do not need to validate an XML file that you create by exporting from Data Analyzer. However, if you are not sure of the validity of an XML file, you can validate it against the Data Analyzer DTD file when you start the import process.

Make sure that you do not modify an XML file of exported objects. If you modify the XML file, you might not be able to use it to import objects into a Data Analyzer repository. If you try to import an invalid XML file, Data Analyzer stops the import process and displays the following message:
Error occurred when trying to parse the XML file.

Importing Objects from a Previous Version

You can import objects from Data Analyzer 5.0 repositories or later. When you import objects from a previous version, Data Analyzer upgrades the objects to the current version. For example, when you import a Data Analyzer 5.0 report that uses a custom attribute with groups, Data Analyzer 8.x upgrades the attribute to one with an advanced expression. For more information about upgrading objects in the repository, see the PowerCenter Configuration Guide.
Object Permissions

When you import a repository object, Data Analyzer grants you the same permissions to the object as the owner of the object. Data Analyzer system administrators can access all imported repository objects.

If you publish an imported report to everyone, all users in Data Analyzer have read and write access to the report. You can limit access to the report for users who are not system administrators by clearing the Publish to Everyone option. You can then change the access permissions to the report to restrict specific users or groups from accessing it.

Importing a Schema

You can import schemas from an XML file. A valid XML file can contain definitions of the following schema objects:
♦ Tables. The schema tables associated with the exported metrics in the XML file. The file might include the following tables:
− Fact table associated with the metric
− Dimension tables associated with the fact table
− Aggregate tables associated with the dimension and fact tables
− Snowflake dimensions associated with the dimension tables
− Template dimensions associated with the dimension tables or exported separately

♦ Schema joins. The relationships between tables associated with the exported metrics in the XML file. The file can include the following relationships:
− Fact table joined to a dimension table
− Dimension table joined to a snowflake dimension
♦ Metrics. All metrics exported to the XML file. The file can include calculated metrics and base metrics. You can import a metric only if its associated fact table exists in the target repository or the definition of its associated fact table is also in the XML file. When you export metrics with the associated schema tables and attributes, the XML file contains different types of schema objects. If you export the metric definition only, the XML file contains only a list of metric definitions. If the XML file contains only the metric definition, you must make sure that the fact table for the metric exists in the target repository.
♦ Attributes. The attributes in the fact and dimension tables associated with the exported metrics in the XML file.
♦ Drill paths. The drill paths associated with exported attributes.
♦ Time keys. The time keys associated with exported tables. If you import a schema that contains time keys, you must import or create a time dimension. For more information, see “Importing a Time Dimension” on page 53.
♦ Operational schemas. When you import an operational schema, Data Analyzer imports the following objects:
− Tables in the operational schema
− Metrics and attributes for the operational schema tables
− Schema joins
♦ Hierarchical schemas. When you import a hierarchical schema, Data Analyzer imports the metrics and attributes in the hierarchical schema.
To import a schema:
1. Click Administration > XML Export/Import > Import Schemas. The Import Schemas page appears.
2. Click Browse to select an XML file from which to import schemas.
3. Click Open. The name and location of the XML file display on the Import Schemas page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML. When you import a schema, Data Analyzer displays a list of all the definitions contained in the XML file. It then displays a list of all the object definitions in the XML file that already exist in the repository. You can choose to overwrite objects in the repository. The lists of schema tables, schema joins, metrics, attributes, drill paths, time keys, and operational schemas display in separate sections.

Table 7-1 shows the information that Data Analyzer displays for schema tables:

Table 7-1. Imported Schema Table Description
♦ Name. Name of the fact or dimension tables associated with the metric to be imported.
♦ Last Modified Date. Date when the table was last modified.
♦ Last Modified By. User name of the Data Analyzer user who last modified the table.
Table 7-2 shows the information that Data Analyzer displays for the schema joins:

Table 7-2. Imported Schema Join Expression
♦ Table1 Name. Name of the fact table that contains foreign keys joined to the primary keys in the dimension tables. Can also be the name of a dimension table that joins to a snowflake dimension.
♦ Table2 Name. Name of the dimension table that contains the primary key joined to the foreign keys in the fact table. Can also be the name of a snowflake dimension table associated with a dimension table.
♦ Join Expression. Foreign key and primary key columns that join a fact and dimension table or a dimension table and a snowflake dimension in the following format: Table.ForeignKey = Table.PrimaryKey

Table 7-3 shows the information that Data Analyzer displays for the metrics:

Table 7-3. Imported Metrics Information
♦ Name. Name of the metric to be imported.
♦ Last Modified Date. Date when the metric was last modified.
♦ Last Modified By. User name of the person who last modified the metric.
♦ Analyzer Table Locations. Fact table that contains the metric. If the metric is a calculated metric, square brackets ([]) display in place of a fact table.

Table 7-4 shows the information that Data Analyzer displays for the attributes:

Table 7-4. Imported Attributes Information
♦ Name. Name of the attributes found in the fact or dimension tables associated with the metric to be imported.
♦ Last Modified Date. Date when the attribute was last modified.
♦ Last Modified By. User name of the person who last modified the attribute.
♦ Analyzer Table Locations. Fact or dimension table that contains the attribute.

Table 7-5 shows the information that Data Analyzer displays for the drill paths:

Table 7-5. Imported Drill Paths Information
♦ Name. Name of the drill path that includes attributes in the fact or dimension tables associated with the metric to be imported.
♦ Last Modified Date. Date when the drill path was last modified.
♦ Last Modified By. User name of the person who last modified the drill path.
♦ Paths. List of attributes in the drill path that are found in the fact or dimension tables associated with the metric to be imported.
Table 7-6 shows the information that Data Analyzer displays for the time keys:

Table 7-6. Imported Time Keys Information
♦ Name. Name of the time key associated with the fact table.

Table 7-7 shows the information that Data Analyzer displays for the operational schemas:

Table 7-7. Imported Operational Schemas Information
♦ Name. Name of the operational schema to be imported.
♦ Last Modified Date. Date when the operational schema was last modified.
♦ Last Modified By. User name of the person who last modified the operational schema.

Table 7-8 shows the information that Data Analyzer displays for the hierarchical schemas:

Table 7-8. Imported Hierarchical Schema Information
♦ Name. Name of the hierarchical schema to be imported.
♦ Last Modified Date. Date when the hierarchical schema was last modified.
♦ Last Modified By. User name of the person who last modified the hierarchical schema.

6. If objects in the XML file are already defined in the repository, a list of the duplicate objects appears. You can choose to overwrite objects in the repository:
♦ To overwrite all the schema objects, select Overwrite All.
♦ To overwrite the schema objects of a certain type, select Overwrite at the top of each section.
♦ To overwrite only specific schema objects, select the object.
7. Click Continue. If you select to overwrite schema objects, confirm that you want to overwrite the objects.
8. Click Apply.
Data Analyzer imports the definitions of all selected schema objects.

Importing a Time Dimension

Time dimension tables contain date- and time-related attributes that describe the occurrence of metrics and establish the time granularity of the data in the fact table. You can import a time dimension table from an XML file. When you import a time dimension table, Data Analyzer imports the primary attribute, secondary attribute, and calendar attribute of the time dimension table.

To import a time dimension table:
1. Click Administration > XML Export/Import > Import Time Dimensions. The Import Time Dimensions page appears.
2. Click Browse to select an XML file from which to import time dimensions.
3. Click Open. The name and location of the XML file display on the Import Time Dimensions page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML. Data Analyzer displays the time dimensions found in the XML file.
Table 7-9 shows the information that Data Analyzer displays for the time dimensions:

Table 7-9. Imported Time Dimension Information
♦ Name. Name of the time dimension table.
♦ Last Modified Date. Date when the time dimension table was last modified.
♦ Last Modified By. User name of the user who last modified the time dimension table.

6. If objects in the XML file are already defined in the repository, a list of the duplicate objects appears. Select the objects you want to overwrite.
7. Click Continue.
Data Analyzer imports the definitions of all selected time dimensions. If you successfully import the time dimensions, Data Analyzer displays a message that you have successfully imported the time dimensions.

Importing a Report

You can import reports from an XML file. You can import cached and on-demand reports. Depending on the reports included in the file and the options selected when exporting the reports, the XML file might not contain all supported metadata. When available, Data Analyzer imports the following components of a report:
♦ Report table
♦ Report chart
♦ Indicators
♦ Alerts
♦ Filters
♦ Filtersets
♦ Highlighting
♦ Calculations
♦ Custom attributes
♦ All reports in an analytic workflow
♦ Permissions
♦ Report links
♦ Schedules

Data Analyzer imports all data for each component, with the following exceptions:
♦ Gauge indicators. Imported gauge indicators do not keep their original owner. The user who imports the report becomes the owner of the gauge indicator. If the gauge indicator is personal, it becomes personal to the user who imports the report.
♦ Alerts. Imported personal and public alerts use the state set for all report subscribers as the default alert state.
♦ Highlighting. Imported public highlighting uses the state set for all users as the default highlighting state. Data Analyzer does not export any personal highlighting.
Data Analyzer does not import report data for cached reports. You can run imported cached reports in the background immediately after you import them. Running reports in the background can be a long process, and the data may not be available immediately. If you try to view an imported cached report immediately after you import it, the following error appears:
Result set is null.
To view the data for the report, you first must run the report. You can also edit the report and save it before you view it to make sure that Data Analyzer runs the report before displaying the results.

If during the export process you chose to export schedules associated with a report, then Data Analyzer also imports the schedule stored in the cached report.

When you import a report, make sure all the metrics, attributes, and global variables used in the report are defined in the target repository. If you import a report that uses objects not defined in the target repository, you must import or recreate the objects before you run the report.

If a report of the same name already exists in the same folder, you can overwrite the existing report. If you choose to overwrite the report, Data Analyzer also overwrites the workflow reports. If you import a report and its corresponding analytic workflow, the XML file contains all workflow reports. Data Analyzer does not import analytic workflows containing the same workflow report names. When importing multiple workflows, ensure that all imported analytic workflows have unique report names prior to export. If you import a composite report, the XML file contains all the subreports. You can choose to overwrite the subreports or composite report if they are already in the repository.

Importing Reports from Public or Personal Folders

You can import reports exported from any folder in the repository. When possible, Data Analyzer imports reports to the same folder in the target repository. For example, it imports reports from the public folder to the public folder. When Data Analyzer imports a report to a repository that does not have the same folder as the originating repository, Data Analyzer creates a new folder of that name for the report.

When you import a report exported from a personal folder, Data Analyzer creates a new folder within the public folder called Personal Reports with the date of import, such as Personal Reports (Imported 8/10/04), and creates a subfolder named for the owner of the personal folder. For example, if you import a report exported from a personal folder called Mozart, Data Analyzer creates a public folder called Personal Reports with the import date and copies the imported report into a subfolder called Mozart. Thus, to ensure security for the reports from the personal folders, you are the owner of the new public folder.

Steps for Importing a Report
To import a report:
1. Click Administration > XML Export/Import > Import Reports. The Import Reports page appears.
2. Click Browse to select an XML file from which to import reports.
3. Click Open. The name and location of the XML file display on the Import Reports page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML. Data Analyzer displays the reports found in the XML file.

Table 7-10 shows the properties that Data Analyzer displays for the reports:

Table 7-10. Imported Report Properties
♦ Name. Name of the reports found in the XML file.
♦ Last Modified Date. Date when the report was last modified.
♦ Last Modified By. User name of the Data Analyzer user who last modified the report.
♦ Path. Location of the report in the Public Folders or Personal Folder.
6. If reports in the XML file are already defined in the repository, a list of the duplicate reports appears. To overwrite any of the reports, select Overwrite next to the report name. To overwrite all reports, select Overwrite at the top of the list.
7. To allow all users to have access to the reports, select Publish to Everyone. To immediately update the data for all the cached reports in the list, select Run Cached Reports after Import. Data Analyzer runs the cached reports in the background. For more information about attaching the imported cached reports to a schedule, see “Attaching Imported Cached Reports to a Time-Based Schedule” on page 26 and “Attaching Imported Cached Reports to an Event-Based Schedule” on page 37.
8. Click Continue. If attributes or metrics associated with the report are not defined in the repository, Data Analyzer displays a list of the undefined objects. If you import the report, you might not be able to run it successfully. Create the required objects in the target repository before attempting to import the report again. To cancel the import process, click Cancel.
Data Analyzer imports the definitions of all selected reports. If you successfully import the reports, Data Analyzer displays a message and lists any folders created for the reports. If you import cached reports without running them, Data Analyzer displays a message that you need to assign the cached reports to a schedule in the target repository.

Importing a Global Variable

You can import global variables that are not defined in the target repository. Data Analyzer does not import global variables whose names exist in the repository, even if the values are different.

To import a global variable:
1. Click Administration > XML Export/Import > Import Global Variables. The Import Global Variables page appears.
2. Click Browse to select an XML file from which to import global variables.
3. Click Open. The name and location of the XML file display on the Import Global Variables page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML. Data Analyzer displays the global variables found in the XML file.
Table 7-11 shows the information that Data Analyzer displays for the global variables:

Table 7-11. Imported Global Variable Description
♦ Name. Name of the global variable found in the XML file.
♦ Value. Value of the global variable.

6. Click Continue. If the XML file includes global variables already in the repository, Data Analyzer displays a warning. If you continue the import process, Data Analyzer imports only the variables that are not in the repository. To continue the import process, click Continue.

Importing a Dashboard

Dashboards display links to reports, shared documents, and indicators. When you import a dashboard from an XML file, Data Analyzer imports the following objects associated with the dashboard:
♦ Reports
♦ Indicators
♦ Shared documents
♦ Dashboard filters
♦ Discussion comments
♦ Feedback

Data Analyzer does not import the following objects associated with the dashboard:
♦ Access permissions
♦ Attributes and metrics in the report
♦ Real-time objects

When you import a dashboard, Data Analyzer does not import the attributes and metrics in the reports associated with the dashboard. If the attributes or metrics in a report associated with the dashboard do not exist in the repository, the report does not display on the imported dashboard.

When you import a dashboard, Data Analyzer imports all indicators for the originating report and workflow reports in a workflow. However, indicators for workflow reports do not display on the dashboard after you import it. You must add those indicators to the dashboard manually.

Dashboards are associated with the folder hierarchy. Data Analyzer stores the imported dashboard in the following manner:
♦ Dashboards exported from a public folder. Data Analyzer imports the dashboards to the corresponding public folder in the target repository. When Data Analyzer imports a dashboard to a repository that does not have the same folder as the originating repository, Data Analyzer creates a new folder of that name for the dashboard.
♦ Dashboards exported from a personal folder. Data Analyzer imports the dashboards to a new Public Folders > Personal Dashboards (Imported MMDDYY) > Owner folder.
♦ Personal dashboard. Data Analyzer imports a personal dashboard to the Public Folders folder.
♦ Dashboards exported from an earlier version of Data Analyzer. Data Analyzer imports the dashboards to the Public Folders > Dashboards folder. If the Dashboards folder already exists at the time of import, Data Analyzer creates a new Public Folders > Dashboards_n folder to store the dashboards (for example, Dashboards_1 or Dashboards_2).
Data Analyzer does not automatically display imported dashboards in your subscription list on the View tab. You must manually subscribe to imported dashboards to display them in the Subscription menu.

When you import a dashboard, make sure all the metrics and attributes used in reports associated with the dashboard are defined in the target repository.

To import a dashboard:
1. Click Administration > XML Export/Import > Import Dashboards. The Import Dashboards page appears.
2. Click Browse to select an XML file from which to import dashboards.
3. Click Open. The name and location of the XML file display on the Import Dashboards page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML. Data Analyzer displays the list of dashboards found in the XML file.

Table 7-12 shows the information that Data Analyzer displays for the dashboards:

Table 7-12. Imported Dashboard Information
♦ Name. Name of the dashboard found in the XML file.
♦ Last Modified Date. Date when the dashboard was last modified.
♦ Last Modified By. User name of the Data Analyzer user who last modified the dashboard.

6. If the dashboards, reports, or shared documents in the XML file are already defined in the repository, a list of the duplicate objects appears. To overwrite a dashboard, report, or shared document, select Overwrite next to the item name. To overwrite all dashboards, reports, and shared documents, select Overwrite at the top of the list.
7. Click Continue. If the attributes or metrics in a report associated with the dashboard are not in the repository, Data Analyzer displays a list of those metrics and attributes. To continue the import process, click Apply. To cancel the import process, click Cancel.
8. Click Apply.
Data Analyzer imports the definitions of all selected dashboards and the objects associated with the dashboard.

Importing a Security Profile

A security profile consists of data restrictions and access permissions for objects in the Schema Directory. Data Analyzer keeps a security profile for each user or group in the repository. You can assign the same security profile to more than one user or group.

When you import a security profile and associate it with a user or group, you can either overwrite the current security profile or add to it. When you overwrite a security profile, Data Analyzer removes the old restrictions associated with the user or group and assigns the user or group only the data restrictions and access permissions found in the new security profile. When you append a security profile, Data Analyzer appends new data restrictions to the old restrictions but overwrites old access permissions with the new access permissions. When a user or group has a data restriction and the imported security profile has a data restriction for the same fact table or schema and associated attribute, Data Analyzer joins the restrictions using the OR operator.

For example, the Sales group has an existing Sales fact table data restriction: Region Name show only 'Europe'. You import a security profile with the following data restriction for the Sales fact table: Region Name show only 'United States'. If you overwrite existing security profiles, the Sales group restriction changes to show only data related to the United States. If you append the profile, the Sales group data restriction changes to the following restriction: Region Name show only 'United States' OR Region Name show only 'Europe'.

Importing a User Security Profile

You can import a user security profile and associate it with one or more users. When you import a security profile from an XML file, you must first select the user or group to which you want to assign the security profile.

To import a user security profile:

1. Click Administration > XML Export/Import > Import Security Profiles. The Import Security Profiles page appears.
2. Click Browse to select an XML file from which to import a security profile.
3. Click Open. The name and location of the XML file display on the Import Security Profiles page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML. The Import Security Profile page displays all users in the repository.
6. Select the users you want to associate with the security profile. To associate the security profiles with all displayed users, select the check box under Users at the top of the list. To associate the security profile with all users in the repository, select Import To All.
7. Click Import to Users. The Import Security Profiles window displays the access permissions and data restrictions for the security profile.
8. Click Overwrite to replace existing security profiles with the imported security profile. Or, click Append to add the imported security profile to existing security profiles.
9. Click Continue. Data Analyzer imports the security profile and associates it with all selected users.
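The overwrite and append semantics above can be sketched in a few lines of Python. This is an illustrative model only — the function and dictionary layout are invented, not Data Analyzer internals — and it models data restrictions only (remember that append still overwrites old access permissions with the new ones):

```python
def merge_restrictions(existing, imported, mode):
    """Model of overwrite vs. append for data restrictions, keyed by fact table.

    existing/imported: dict mapping fact table name -> restriction string.
    mode: "overwrite" keeps only the imported profile; "append" keeps old
    restrictions and joins same-table restrictions with the OR operator.
    """
    if mode == "overwrite":
        return dict(imported)          # only the new profile's restrictions survive
    merged = dict(existing)
    for table, condition in imported.items():
        if table in merged:
            # Same fact table restricted on both sides: join with OR.
            merged[table] = f"{condition} OR {merged[table]}"
        else:
            merged[table] = condition
    return merged

# The Sales group example from the text:
existing = {"Sales": "Region Name show only 'Europe'"}
imported = {"Sales": "Region Name show only 'United States'"}

print(merge_restrictions(existing, imported, "overwrite")["Sales"])
# Region Name show only 'United States'
print(merge_restrictions(existing, imported, "append")["Sales"])
# Region Name show only 'United States' OR Region Name show only 'Europe'
```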

Importing a Group Security Profile

You can import a group security profile and associate it with one or more groups.

To import a group security profile:

1. Click Administration > XML Export/Import > Import Security Profile. The Import Security Profile page appears.
2. Click Browse to select an XML file from which to import a security profile.
3. Click Open. The name and location of the XML file display on the Import Security Profile page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML. The Import Security Profile page displays all groups in the repository.
6. Select the groups you want to associate with the security profile. To associate the security profiles with all displayed groups, select the check box under Groups at the top of the list. To associate the security profile with all groups in the repository, select Import To All.
7. Click Import to Groups. The list of access permissions and data restrictions that make up the security profile appears.
   Table 7-13 shows the information that Data Analyzer displays for the restricted objects:

   Table 7-13. Imported Security Profile: Restricted Objects

   Property      Description
   Object Name   Indicates the Schema Directory path of the restricted schema object if the restricted object is a folder. Indicates the fact or dimension table and attribute name if the object is an attribute. Indicates the fact table and metric name if the object is a metric.
   Type          Indicates whether the schema object is a folder, attribute, or metric.

   Table 7-14 shows the information that Data Analyzer displays for the data restrictions:

   Table 7-14. Imported Security Profile: Data Restrictions

   Property             Description
   Schema Table Name    Name of the restricted table found in the security profile.
   Security Condition   Description of the data access restrictions for the table.

8. Click Overwrite to replace existing security profiles with the imported security profile. Or, click Append to add the imported security profile to existing security profiles.
9. Click Continue. If objects in the security profile are not defined in the repository, Data Analyzer displays a list of the objects in the security profile that are not in the repository. It imports access permissions and data restrictions only for objects defined in the repository.
10. To continue the import process, click Continue. To cancel the import process, click Cancel.
11. Click Continue. Data Analyzer imports the security profile and associates it with all selected groups.

Importing a Schedule

You can import a time-based or event-based schedule from an XML file. When you import a schedule, you do not import the task history or schedule history. Data Analyzer does not attach the schedule to any reports. You can then attach reports to the imported schedule.

To import a schedule:

1. Click Administration > XML Export/Import > Import Schedules. The Import Schedules page appears.
2. Click Browse to select an XML file from which to import a schedule.
3. Click Open. The name and location of the XML file display on the Import Schedules page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML. The list of objects found in the XML file appears.
   Table 7-15 shows the information that Data Analyzer displays for the schedules found in the XML file:

   Table 7-15. Imported Schedule Information

   Property             Description
   Name                 Name of the schedule found in the XML file.
   Last Modified Date   Date when the schedule was last modified.
   Last Modified By     User name of the person who last modified the schedule.

6. Click Continue. If the schedules in the XML file are already defined in the repository, a list of the duplicate schedules appears. To overwrite a schedule, click the Overwrite check box next to the schedule. To overwrite all schedules, click the Overwrite check box at the top of the list. To cancel the import process, click Cancel.
7. Click Continue. Data Analyzer imports the schedules.

Troubleshooting

When I import my schemas into Data Analyzer, I run out of time. Is there a way to raise the transaction time out period?

The default transaction time out for Data Analyzer is 3600 seconds (1 hour). If you are importing large amounts of data from XML and the transaction time is not enough, you can change the default transaction time out value. To change the default transaction time out for Data Analyzer, edit the value of the import.transaction.timeout.seconds property in the DataAnalyzer.properties file. For more information about editing the DataAnalyzer.properties file, see "Configuration Files" on page 129. After you change this value, you must restart the application server. You can now run large import processes without timing out.

I have an IBM DB2 8.x repository database. When I import large XML files, Data Analyzer generates different errors. How can I import large XML files?

The Data Analyzer installer installs a JDBC driver for IBM DB2 8.x. If you use this driver to connect to a DB2 8.x repository, Data Analyzer might display error messages when you import large XML files. Depending on the error that Data Analyzer generates, you might want to modify the following parameters:

♦ DynamicSections value of the JDBC driver
♦ Page size of the temporary table space
♦ Heap size for the application

You can modify the settings of the application server, the database, or the JDBC driver to solve the problem. You might need to contact your database system administrator to change some of these settings.
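The transaction time out change described above is a one-line edit. A sketch of the relevant entry in DataAnalyzer.properties (the value 7200 is an example, not a recommendation; the default corresponds to 3600):

```properties
# Transaction time out for XML imports, in seconds (default: 3600 = 1 hour)
import.transaction.timeout.seconds=7200
```

Restart the application server after editing the file, as noted above.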
Increasing the DynamicSections Value

Data Analyzer might display the following message when you import large XML files:

javax.ejb.EJBException: nested exception is: Exception: SQL Exception: [informatica][DB2 JDBC Driver]No more available statements. Please recreate your package with a larger dynamicSections value.

The error occurs when the default value of the DynamicSections property of the JDBC driver is too small to handle large XML imports. The default value of the DynamicSections connection property is 200. You must increase the value of the DynamicSections connection property to at least 500. Use the DataDirect Connect for JDBC utility to increase the default value of the DynamicSections connection property and recreate the JDBC driver package.

To increase the value of the DynamicSections property:

1. Download the utility from the Product Downloads page of the DataDirect Technologies web site:
   http://www.datadirect.com/download/index.ssp
2. On the Product Downloads page, click the DataDirect Connect for JDBC Any Java Platform link and complete the registration information to download the file. The name of the download file is connectjdbc.jar.
3. Extract the contents of the connectjdbc.jar file in a temporary directory and install the DataDirect Connect for JDBC utility. Follow the instructions in the DataDirect Connect for JDBC Installation Guide.
4. On the command line, run the following file extracted from the connectjdbc.jar file:
   Windows: Installer.bat
   UNIX: Installer.sh
5. Enter the following license key and click Add: eval
6. Click Next twice and then click Install.
7. Click Finish to complete the installation. The installation program for the DataDirect Connect for JDBC utility creates the testforjdbc folder in the directory where you extracted the connectjdbc.jar file.
8. In the testforjdbc folder, run the Test for JDBC Tool:
   Windows: testforjdbc.bat
   UNIX: testforjdbc.sh
9. On the Test for JDBC Tool window, click Press Here to Continue.
10. Click Connection > Connect to DB.
11. In the Database field, enter the following:
    jdbc:datadirect:db2://<ServerName>:<PortNumber>;databaseName=<DatabaseName>;CreateDefaultPackage=TRUE;ReplacePackage=TRUE;DynamicSections=500
    ServerName is the name of the machine hosting the repository database. PortNumber is the port number of the database. DatabaseName is the name of the repository database.
12. In the User Name and Password fields, enter the user name and password you use to connect to the repository database from Data Analyzer.
13. Click Connect, and then close the window.

Restart the application server. You can now run large import processes without timing out. If you continue getting the same error message when you import large XML files, you can run the Test for JDBC Tool again and increase the value of DynamicSections to 750 or 1000.

Modifying the Page Size of the Temporary Table Space

Data Analyzer might display the following message when you import large XML files:

SQL1585N A temporary table space with sufficient page size does not exist

This problem occurs when the row length or number of columns of the system temporary table exceeds the limit of the largest temporary table space in the database. To resolve the error, create a new system temporary table space with a page size of 32KB. For more information, see the IBM DB2 documentation.

Increasing Heap Size for the Application

Data Analyzer might display the following message when you import large XML files:

[informatica][DB2 JDBC Driver][DB2]Virtual storage or database resource is not available ErrorCode=-954 SQLState=57011

This problem occurs when there is not enough storage available in the database application heap to process the import request. To resolve the problem, increase the value of the application heap size configuration parameter (APPLHEAPSZ) to 512:

1. Log out of Data Analyzer and stop the application server.
2. On the repository database, increase the value of the application heap size configuration parameter (APPLHEAPSZ) to 512.
3. Restart the application server.

For more information, see the IBM DB2 documentation.
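The two DB2-side fixes described in this section — a 32KB system temporary table space and a larger application heap — can be sketched as DB2 command line statements. The buffer pool and table space names and the container path below are invented placeholders, not values from this guide; confirm the exact syntax for your DB2 version with your database administrator:

```sql
-- Create a buffer pool and a system temporary table space with 32KB pages
-- (BP32K, TEMP32K, and the path are example names only).
CREATE BUFFERPOOL BP32K SIZE 1000 PAGESIZE 32K;
CREATE SYSTEM TEMPORARY TABLESPACE TEMP32K
    PAGESIZE 32K
    MANAGED BY SYSTEM USING ('/db2/temp32k')
    BUFFERPOOL BP32K;

-- Raise the application heap for the repository database to 512 pages.
UPDATE DATABASE CONFIGURATION FOR <DatabaseName> USING APPLHEAPSZ 512;
```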


CHAPTER 8

Using the Import Export Utility

This chapter includes the following topics:

♦ Overview, 65
♦ Running the Import Export Utility, 66
♦ Error Messages, 69
♦ Troubleshooting, 70

Overview

The Import Export utility lets you import and export Data Analyzer repository objects from the command line. Use the Import Export utility to migrate repository objects from one repository to another. For example, you can use the utility to quickly migrate Data Analyzer repository objects from a development repository into a production repository. You can also use the utility to archive your repository without using a browser. You can use the Import Export utility to import objects from Data Analyzer 5.0 repositories or later.

When you run the Import Export utility, Data Analyzer imports or exports all objects of a specified type. For example, you can run the utility to import all reports from an XML file or export all dashboards to an XML file. You must run the utility multiple times to import or export different types of objects. You can also use the Data Analyzer Administration tab to import or export all objects of a specified type. When you use the Import Export utility, the same rules as those about import or export from the Data Analyzer Administration tab apply. For example, you can import only those global variables that do not already exist in the repository, with the Import Export utility or the Data Analyzer Administration tab.

Use the utility to import or export the security profile of an individual user or group. You cannot use the utility to import or export other individual objects. For example, you cannot use the utility to export a specific user or report to an XML file. To import or export individual objects, use the Data Analyzer Administration tab.

If Data Analyzer is installed with the LDAP authentication method, you cannot use the Import Export utility to import users, groups, or roles. With the LDAP authentication method, Data Analyzer does not store user passwords in the Data Analyzer repository. Data Analyzer authenticates the passwords directly in the LDAP directory.

Running the Import Export Utility
Before you run the Import Export utility to import or export repository objects, you must meet the following requirements:
♦ To run the utility, you must have the System Administrator role or the Export/Import XML Files privilege. To import or export users, groups, or roles, you must also have the Manage User Access privilege.
♦ Data Analyzer must be running.

You can import Data Analyzer objects from XML files that were created when you exported repository objects from Data Analyzer. You can use files exported from Data Analyzer 5.0 or later.

The default transaction time out for Data Analyzer is 3,600 seconds (1 hour). If you are importing large amounts of data from XML files and the transaction time is not enough, you can change the default transaction time out value. To change the default transaction time out for Data Analyzer, edit the value of the import.transaction.timeout.seconds property in DataAnalyzer.properties. After you change this value, you must restart the application server.

When you run the Import Export utility, you specify options and arguments to import or export different types of objects. Specify an option by entering a hyphen (-) followed by a letter. The first word after the option letter is the argument. To specify the options and arguments, use the following rules:
♦ Specify the options in any order.
♦ Utility name, options, and argument names are case sensitive.
♦ If the option requires an argument, the argument must follow the option letter.
♦ If any argument contains more than one word, enclose the argument in double quotes.

To run the utility on Windows, open a command line window. On UNIX, run the utility as a shell command.
Note: Back up the target repository before you import repository objects into it. You can back up a Data Analyzer repository with the Repository Backup utility.
To run the Import Export utility:

1. Go to the Data Analyzer utilities directory. The default directory is <PCAEInstallationDirectory>/DataAnalyzer/import-exportutil/.
2. Run the utility with the following format:
   ImportExport [-option_1] argument_1 [-option_2] argument_2 ...

Table 8-1 lists the options and arguments you can specify:
Table 8-1 lists the options and arguments you can specify.

Table 8-1. Options and Arguments for the Import Export Utility

Option: -i
Argument: repository object type
Description: Import a repository object type. For more information about repository object types, see Table 8-2 on page 68. Use the -i or -e option, but not both.

Option: -e
Argument: repository object type
Description: Export a repository object type. For more information about repository object types, see Table 8-2 on page 68. Use the -i or -e option, but not both.

Option: -w
Argument: No argument
Description: Import only. Instructs the Import Export utility to overwrite existing repository objects of the same name. If you do not specify this option and a repository object with the same name already exists, the utility exits without completing the operation. If you do not use this option when importing a security profile, the security profile being imported is appended to the existing security profile of the user or group. If you use this option when exporting repository objects, the utility displays an error message.

Option: -f
Argument: XML file name
Description: Name of the XML file to import from or export to. The XML file must follow the naming conventions for the operating system where you run the utility. You can specify a path for the XML file.
If you specify a path for the XML file:
- When you import a repository object type, the Import Export utility looks for the XML file in the path you specify.
- When you export an object type, the utility saves the XML file in the path you specify. For example, to have the utility save the file in the c:/PA directory, enter the following command:
  ImportExport -e user -f c:/PA/Users.xml -u admin -p admin -l http://my.server.com:7001/ias
If you do not specify a path for the XML file:
- When you import a repository object type, the Import Export utility looks for the XML file in the directory where you run the utility.
- When you export an object type, the utility saves the XML file in the directory where you run the utility. For example, when you enter the following command, the utility places Users.xml in the directory where you run the utility:
  ImportExport -e user -f Users.xml -u admin -p admin -l http://my.server.com:7001/ias

Option: -u
Argument: user name
Description: Data Analyzer user name.

Option: -p
Argument: password
Description: Password for the Data Analyzer user name.

Option: -l
Argument: url
Description: URL for accessing Data Analyzer. Contact the system administrator for the URL. The Data Analyzer URL has the following format:
  http://host_name:port_number/ReportingServiceName
ReportingServiceName is the name of the Reporting Service that runs the Data Analyzer instance. For example, PowerCenter runs on a machine with hostname fish.ocean.com and has a Reporting Service named IASReports with port number 18080. Use the following URL for Data Analyzer:
  http://fish.ocean.com:18080/IASReports

Option: -h
Argument: No argument
Description: Displays a list of all options and their descriptions, and a list of valid repository objects.

Option: -n
Argument: user name or group name
Description: Use to import or export the security profile of a user or group. For more information, see Table 8-2 on page 68.

Table 8-2 lists the repository object types you can specify:

Table 8-2. Repository Object Types

Object Type: usersecurity
Description: Security profile of a user. You must specify the following security profile option: -n <user name>

Object Type: groupsecurity
Description: Security profile of a group. You must specify the following security profile option: -n <group name>
Example: To export the security profile of group Managers to the Profiles.xml file, use the following command:
  ImportExport -e groupsecurity -n Managers -f Profiles.xml -u admin -p admin -l http://localhost:7001/ias

Object Type: schedule
Description: Schedules.
Example: To export all schedules to the Schedules.xml file, use the following command:
  ImportExport -e schedule -f c:\Schedules.xml -u jdoe -p doe -l http://localhost:7001/ias

Object Type: user
Description: Users.
Example: To export all users to the Users.xml file, use the following command:
  ImportExport -e user -f c:\Users.xml -u jdoe -p doe -l http://localhost:7001/ias

Object Type: group
Description: Groups.
Example: To import groups from the Groups.xml file into the repository, use the following command:
  ImportExport -i group -f c:\Groups.xml -u jdoe -p doe -l http://localhost:7001/ias

Object Type: role
Description: Roles.
Example: To import roles from the Roles.xml file into the repository, use the following command:
  ImportExport -i role -f c:\Roles.xml -u jdoe -p doe -l http://localhost:7001/ias

The Import Export utility runs according to the specified options. If the utility successfully completes the requested operation, a message indicates that the process is successful. If the utility fails to complete the requested operation, an error message displays.
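Because the utility imports or exports one object type per run, repetitive migrations are easier with a small wrapper script. The sketch below only builds and prints the command lines — the URL, credentials, and file names are placeholders, not values from this guide:

```shell
#!/bin/bash
# Build one ImportExport command per repository object type (placeholders).
URL="http://localhost:7001/ias"
CMDS=()
for TYPE in schedule user group role; do
  CMDS+=("ImportExport -e $TYPE -f ${TYPE}s.xml -u admin -p admin -l $URL")
done
printf '%s\n' "${CMDS[@]}"   # review the commands, then run each one
```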


Error Messages

If the Import Export utility fails to complete the requested operation, it displays an error message. The error message indicates why the requested operation failed. If the requested operation fails because a required option or argument is missing or not specified correctly, the Import Export utility also displays a list of all options and their descriptions, and a list of valid repository objects.

The Import Export utility can display the following error messages:

Unknown error.
Cause: Utility failed to run for unknown reasons.
Action: Contact the system administrator or Informatica Global Customer Support.

Unknown option.
Cause: You entered an incorrect option letter. For example, you entered -x or -E to export a file.
Action: Check the validity and case sensitivity of the option letters.

Illegal option value.
Cause: You entered an incorrect argument for an option letter.
Action: Check the spelling of the option values you entered.

Incorrect number of command-line options.
Cause: You omitted an option or included more options than needed.
Action: Check the syntax and spelling.

Invalid username or password.
Cause: The user does not exist in Data Analyzer or the password is incorrect.
Action: Check that the user exists in Data Analyzer or the password is correct.

The user does not have privileges to import/export.
Cause: The user does not have the Export/Import XML Files privilege or the Manage User Access privilege to import or export users, groups, or roles.
Action: Assign the appropriate privileges to the user.

The import file does not exist or cannot be read.
Cause: The XML file to be imported does not exist, does not contain valid XML data, or the utility cannot access the file.
Action: Check the XML file name. Check that a valid XML file, with the specified name, exists in the specified directory.

The import file contains a different repository object type than the repository object type given for the option -i.
Cause: The XML file specified for the import (-i) option does not contain the correct object type.
Action: Use the correct object type or a different XML file.

The export file cannot be written.
Cause: The directory where you want to place the XML file is read only or has run out of hard disk space.
Action: Assign write permission to the user for the directory where you want to place the XML file. Or, make sure there is enough hard disk space.
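Several of these messages concern the XML file itself (missing, empty, or not valid XML). A hypothetical pre-flight check — not part of the utility — can catch those conditions before a run; note it tests well-formedness only, not validity against the DTD:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def preflight(xml_path):
    """Return a problem description, or None if the file looks importable."""
    p = Path(xml_path)
    if not p.exists():
        return "import file does not exist"
    if p.stat().st_size == 0:
        return "import file is empty"
    try:
        ET.parse(p)  # checks well-formedness only, not validity against the DTD
    except ET.ParseError as exc:
        return f"not valid XML: {exc}"
    return None
```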

A communication error has occurred with Data Analyzer. The root cause is: <error message>.
Cause: See the root cause message.
Action: The action depends on the root cause. Check that Data Analyzer is running and try to run the utility again. Check that the URL is correct and try to run the utility again. If the error still occurs, contact Informatica Global Customer Support.

Import file is empty.
Cause: There is no data in the XML file.
Action: Use a valid XML file.

The Data Analyzer session is invalid.
Cause: Data Analyzer session has timed out.
Action: Run the utility again.

The user or group does not exist.
Cause: User name or group name that you typed for importing or exporting a security profile does not exist.
Action: Check the spelling of the user name or group name.

The configured security realm does not support the import of users, groups and roles.
Cause: Data Analyzer is installed with the LDAP authentication method. You cannot use the Import Export utility to import users, groups, or roles.
Action: Contact the Data Analyzer system administrator.

Global variables cannot be overwritten.
Cause: You cannot import global variables if they already exist in the repository.
Action: If you want to import global variables already in the repository, first delete them from Data Analyzer, and then run the utility.

An export file with the provided filename already exists.
Cause: An XML file of the same name already exists in the specified path.
Action: Delete the XML file before you enter the command.

Troubleshooting

Importing a Large Number of Reports

If you use the Import Export utility to import a large number of reports (import file size of 16MB or more), the Java process for the Import Export utility might run out of memory and the utility might display an exception message. If the Java process for the Import Export utility runs out of memory, increase the memory allocation for the process. To increase the memory allocation for the Java process, increase the value for the -mx option in the script file that starts the utility.

Note: Back up the script file before you modify it.
. or roles. If you want to import global variables already in the repository. Use a valid XML file.

To increase the memory allocation:

1. Locate the Import Export utility script file in the Data Analyzer utilities directory. The default directory is <PCAEInstallationDirectory>/DataAnalyzer/import-exportutil/.
2. Open the script file with a text editor:
   Windows: ImportExport.bat
   UNIX: ImportExport.sh
3. Locate the -mx option in the Java command:
   java -ms128m -mx256m -jar repositoryImportExport.jar $*
4. Increase the value for the -mx option from 256 to a higher number depending on the size of the import file.
   Tip: Increase the value to 512. If the utility still displays an exception, increase the value to 1024.
5. Save and close the Import Export utility script file.

Using SSL with the Import Export Utility

To use SSL, Data Analyzer needs a certificate that must be signed by a trusted certificate authority (CA), such as Verisign. By default, the trusted CAs are defined in the cacerts keystore file in the JAVA_HOME/jre/lib/security/ directory. If Data Analyzer uses a certificate signed by a CA defined in the default cacerts file, you do not need to specify the location of the trusted CA keystore when you run the Import Export utility. If Data Analyzer uses a certificate signed by a CA not defined in the default cacerts file or if you have created your own trusted CA keystore, you must provide the location of the trusted keystore when you run the Import Export utility. To specify the location of the trusted CAs, add the following parameter to the Import Export utility script:

-Djavax.net.ssl.trustStore=<TrustedCAKeystore>

When you run the Import Export utility, make sure that the URL you provide with the -l option starts with https:// and uses the correct port for the SSL connection.

Note: Back up the Import Export script file before you modify it.

To specify the location of the trusted CAs:

1. Locate the Import Export utility script in the Data Analyzer utilities directory:
   <PCAEInstallationDirectory>/DataAnalyzer/import-exportutil
2. Open the script file with a text editor:
   Windows: ImportExport.bat
   UNIX: ImportExport.sh
3. Add the trusted CA parameter to the Java command that starts the Import Export utility:
   java -ms128m -mx256m -Djavax.net.ssl.trustStore=<TrustedCAKeystore> -jar repositoryImportExport.jar
   TrustedCAKeystore is the keystore for the trusted CAs.
4. Save and close the Import Export utility file.

♦ Report header and footer. Create the headers and footers printed in Data Analyzer reports.
♦ Display Settings. Control display settings for users and groups.
♦ Metadata configuration.
♦ Create department and category names for your organization. You can associate repository objects with a department or category to help you organize the objects. When you associate repository objects with a department or category, you can search for these objects by department or category on the Find tab.

74
Chapter 9: Managing System Settings

Managing Color Schemes and Logos

A color scheme defines the look and feel of Data Analyzer. You can edit existing color schemes or create new color schemes, using your own images and colors. You can set a default color scheme for all users and groups. You can also assign users and groups to specific color schemes. By default, the Informatica color scheme is the default color scheme for all users and groups in Data Analyzer.

The color schemes and image files used in Data Analyzer are stored in the EAR directory. You can modify or add color schemes and images in the EAR directory to customize the Data Analyzer color schemes and images for the organization. Data Analyzer references the image and logo files in the Data Analyzer images directory on the web server associated with the application server. All file names are case sensitive.

Using a Predefined Color Scheme

Data Analyzer provides the following predefined color schemes that you can use or modify:

♦ Informatica color scheme. This is the default Data Analyzer color scheme. The EAR directory containing images for this color scheme is in the following location:
  /custom/images/standard
  This is the default image directory for Data Analyzer.
♦ Betton Books color scheme. Alternative predefined color scheme. The EAR directory containing images for the Betton Books color scheme is in the following location:
  /custom/images/standard/color/green

Adding a Logo to a Predefined Color Scheme

To use a predefined color scheme with your own logo or login page image, complete the following steps:

1. Copy the logo or login image file to the predefined images folder.
2. Edit the predefined color scheme and change the file name of the Logo Image URL field or the Login Page Image URL field to the name of your image file.
3. Enter the following information in the predefined color scheme settings:
   ♦ Images Directory. Predefined color scheme folder name. For the Informatica color scheme, leave the Images Directory field blank. For the Betton Books color scheme, use green for the Images Directory field.
   ♦ Logo Image URL. Enter the name of the logo image file you want to use.
   ♦ Login Page Image URL. Enter the name of the login page image file that you want to use.

Data Analyzer uses all the colors and images of the selected predefined color scheme with your logo or login page image.

To edit the settings of a color scheme:

1. Click Administration > System Management > Color Schemes and Logos. The Color Schemes and Logos page displays the list of available color schemes.
2. To edit the settings of a color scheme, click the name of the color scheme. The Color Scheme page displays the settings of the color scheme. It also displays the directory for the images and the URL for the background, login, and logo image files.
   Table 9-1 shows the display items you can modify in the Color Scheme page:

   Table 9-1. Display Items in the Color Scheme Page

   Display Item   Description
   Background     Background color of Data Analyzer.
   Page Header    Page header of Data Analyzer.
   Primary        –
   Secondary      –
   Heading        Report heading on the Analyze tab.
   Sub-Heading    –

3. Enter hexadecimal color codes to represent the colors you want to use. Use any HTML hexadecimal color code to define colors.
4. Optionally, enter file and directory information for color scheme images:
   ♦ Images Directory. Name of the color scheme directory where you plan to store the color and image files. If blank, Data Analyzer looks for the images in the default image directory.
   ♦ Background Image URL. Name of a background image file in the color scheme directory or the URL to a background image on a web server. If you specify a URL, use the forward slash (/) as a separator.
   ♦ Login Page Image URL. Name of the login page image file in the color scheme directory or the URL to a login image on a web server. If you specify a URL, use the forward slash (/) as a separator.
   ♦ Logo Image URL. Name of the logo image file you want to use. For example, enter the following URL in the Logo Image URL field:
     http://monet.PaintersInc.com:7001/CompanyLogo.gif
     The URL can point to a logo file in the Data Analyzer machine or in another web server.
To edit a predefined color scheme: 1.
All file names are case sensitive. Login Page Image URL.
4. Logo Image URL. you might lose your changes when you upgrade to future versions of Data Analyzer. see “HTML Hexadecimal Color Codes” on page 121. if the host name of the web server where you have the logo file is http://monet.
Editing a Predefined Color Scheme
You can edit the colors and image directories for predefined color schemes and preview the changes.PaintersInc. To display the login page properly. If you modify a predefined color scheme.
Managing Color Schemes and Logos
75
.You can also enter a URL for the logo and login image files. The height of your login page image must be approximately 240 pixels. The color scheme uses the hexadecimal color codes for each display item. Name of a logo file image in the color scheme directory or the URL to a logo image on a web server. or the width of your monitor setting. Report sub-heading on the Analyze tab. Background Image URL. For example. the width of your login page image must be approximately 1600 pixels. Section heading such as the container heading on the View tab. Section sub-heading such as the container sub-heading on the View tab. If you specify a URL.

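The color codes entered in the Color Scheme page are standard six-digit HTML hexadecimal RGB values. As a quick illustration (this helper is not part of Data Analyzer, and the sample codes are arbitrary), you can sanity-check a code before entering it:

```python
import re

# Six hexadecimal digits (RRGGBB), optionally prefixed with "#".
HEX_COLOR = re.compile(r"^#?[0-9A-Fa-f]{6}$")

def is_valid_hex_color(code: str) -> bool:
    """Return True if code is a valid HTML hexadecimal color value."""
    return bool(HEX_COLOR.match(code))

print(is_valid_hex_color("FFFFFF"))   # white
print(is_valid_hex_color("#003366"))  # a dark blue
print(is_valid_hex_color("12345G"))   # "G" is not a hex digit
```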
Table 9-1. Display Items in the Color Scheme Page (continued)
Display Item                     Description
Section                          Background color for sections, such as forms on the Administration tab.
Odd Table Row                    Odd rows in a list.
Even Table Row                   Even rows in a list.
Selected Rows                    Rows you select in the report table or on tabs such as the Find tab.
Primary Navigation Tab Colors    Tabs in Data Analyzer, including the View, Find, Analyze, Create, Administration, and Manage Account tabs.
Secondary Navigation Colors      Menu items on the Administration tab, including Schema Design, Real-time Configuration, XML Export/Import, Scheduling, System Management, Access Management, and Alerts, as well as pop-up windows and tabs with drop-down lists.
Button Colors                    Buttons in Data Analyzer.
Tab Colors                       Tabs under the Primary Navigation tab. Tabs include items such as the Define Report Properties tab in Step 5 of the Create Report wizard and the toolbar on the Analyze tab. Use the same color in Section for the Selected field in Tab Colors so that color flows evenly for each tab under the Primary Navigation tab.

5. To preview the choices, click Preview.
The Color Scheme Preview window displays an example of the way Data Analyzer will appear with the color scheme.
6. Click Close to close the Color Scheme Preview window.
7. Click OK to save your changes.

Creating a Color Scheme

You can create a Data Analyzer color scheme. When you create a color scheme, you can use your own images and logos. To create a color scheme, complete the following steps:
1. Create a folder for the images and logo, and make sure it contains the new images. Add the directory and files for the new color scheme under the default image directory.
2. Create a new color scheme in Data Analyzer and use the new folder as the Images Directory.

Step 1. Create a New Color Scheme Folder

Create a folder in the color schemes directory and copy the image files you want to use to this folder. The name of the color scheme folder can be up to 10 characters. Make sure Data Analyzer can access the images to use with the color scheme.

To create a new color scheme folder:
1. Create a folder for the new color scheme under:
/custom/images/standard/color/
2. Copy your image files into the new folder. For example, if you want to create a /CompanyColor directory for your new color scheme, copy your logo and image files into the new directory:
/custom/images/standard/color/CompanyColor
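The folder setup in Step 1 can also be scripted. The sketch below is illustrative only: the EAR root path and source directory are placeholders you would replace with your own locations. It enforces the 10-character folder name limit and copies the GIF and JPG files while preserving their case-sensitive names:

```python
import shutil
from pathlib import Path

def create_color_scheme_folder(ear_root: str, scheme_name: str,
                               source_dir: str) -> Path:
    """Create /custom/images/standard/color/<scheme_name> under the EAR
    directory and copy the image files into it. File names are copied
    as-is because Data Analyzer file names are case sensitive."""
    if len(scheme_name) > 10:
        raise ValueError("color scheme folder name can be up to 10 characters")
    target = (Path(ear_root) / "custom" / "images" / "standard"
              / "color" / scheme_name)
    target.mkdir(parents=True, exist_ok=True)
    for image in Path(source_dir).iterdir():
        # Predefined color scheme images are GIF or JPG files.
        if image.suffix.lower() in (".gif", ".jpg"):
            shutil.copy2(image, target / image.name)
    return target
```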
Step 2. Create a New Color Scheme in Data Analyzer

After you set up the folder for the images to use in a new color scheme, you can create the color scheme in Data Analyzer and use the new color scheme directory. On the Color Schemes page, set the colors you want to use for the color scheme and provide the new folder name for the images. The new color scheme folder must exist in the EAR directory for Data Analyzer to access it.

You must have image files for all buttons and icons that display in Data Analyzer. Since Data Analyzer references the image files to display them in Data Analyzer, the image files for your color scheme must have the same names and format, GIF or JPG, as the image files for the predefined color schemes. The background and logo image files can have file names that you specify. All file names are case sensitive.

To create a new color scheme in Data Analyzer:
1. Click Administration > System Management > Color Schemes and Logos.
The Color Schemes and Logos page appears.
2. Click Add.
The Color Scheme page appears.
3. Enter the name and description of the new color scheme.
4. In the Images Directory field, enter the name of the color scheme folder you created.
5. In the Background Image URL field, enter the file name of the background image you want to use. Make sure the image file is saved in the color scheme folder you created earlier.
6. In the Logo Image URL field, enter the file name of the logo image to use.
7. In the Login Page Image URL field, enter the file name of the login page image to use.
8. Enter the hexadecimal codes for the colors you want to use in the new color scheme. If you do not set up new colors for the color scheme, Data Analyzer uses a default set of colors that may not match the colors of your image files. For more information about display items on the Color Scheme page, see Table 9-1 on page 75. For more information about hexadecimal color codes, see "HTML Hexadecimal Color Codes" on page 121.
9. Click Preview to preview the new color scheme colors.
10. Click OK to save the new color scheme.

Selecting a Default Color Scheme

You can select a default color scheme for Data Analyzer. Data Analyzer uses the selected color scheme as the default for the repository. If you do not specify a color scheme for a user or group, Data Analyzer uses the default color scheme.

To select a default color scheme:
1. Click Administration > System Management > Color Schemes and Logos.
The Color Schemes and Logos page displays the list of available color schemes.
2. To set the default color scheme for Data Analyzer, select Default next to the color scheme name.
3. Click Apply.
Assigning a Color Scheme

You can assign color schemes to users and groups. Assign specific color schemes when you want a user or group to use a color scheme other than the default color scheme. You can assign color schemes to users and groups when you edit the color scheme. You can also assign color schemes when you edit the user or group on the Access Management page.

When you assign a user and its group to different color schemes, the user color scheme takes precedence over the group color scheme. When a user belongs to more than one group, the color scheme for the primary group takes precedence over the other group color schemes. If the user does not have a primary group, Data Analyzer uses the default color scheme.

To assign a color scheme:
1. Click Administration > System Management > Color Schemes and Logos.
2. Click the name of the color scheme you want to assign.
3. To assign the color scheme to a user or group, click Edit.
The Assign Color Scheme window appears.
4. Use the search options to produce a list of users or groups.
5. In the Query Results area, select the users or groups you want to assign to the color scheme, and click Add.
6. To assign additional users or groups, repeat steps 3 to 5.
7. Click OK to close the dialog box.
8. Click OK to save the color scheme.

Managing Logs

Data Analyzer provides the following logs to track events and information:
♦ User log. Lists the location and login and logout times for each user. You can view, clear, and save the user log.
♦ Activity log. Lists Data Analyzer activity, including the user requesting the activity, the objects used for the activity, the activity type, the success or failure of the activity, and the duration of the request and activity. You can also configure it to log report queries.
♦ System log. Lists error, warning, informational, and debugging messages.
♦ JDBC log. Lists all repository connection activities.
♦ Global cache log. Lists error, warning, informational, and debugging messages about the size of the Data Analyzer global cache.

Viewing the User Log

With the user log, you can track user activity in Data Analyzer. Data Analyzer stores the user log entries in the repository. The user log lists the following information:
♦ Login name. The name of the user accessing Data Analyzer.
♦ User role. The role of the user. To view the role of the user, hold the pointer over the user name.
♦ Remote host. The host name accessing Data Analyzer when available.
♦ Remote address. The IP address accessing Data Analyzer when available.
♦ Login time. The date and time the user logged in, based on the machine running the Data Analyzer server.
♦ Logoff time. The date and time the user logged out, based on the machine running the Data Analyzer server.
♦ Duration. The difference between login and logout times for each user. If the user has not logged out, duration displays the length of time the user has been logged into Data Analyzer.

By default, Data Analyzer displays up to 1,000 rows in the user log. You can change the number of rows by editing the value of the logging.user.maxRowsToDisplay property in DataAnalyzer.properties. For more information about editing DataAnalyzer.properties, see "Configuration Files" on page 129. If you sort the user log by a column, Data Analyzer sorts on all user log data, not just the currently displayed rows.

To view the user log, click Administration > System Management > User Log.

Saving and Clearing the User Log

You can save the user log to an XML file. You might save a user log before clearing it to keep a record of user access.

To save a user log:
1. Click Administration > System Management > User Log.
2. Click Save, and then follow the prompts to save the log to disk.

You can also clear the Data Analyzer user log. When you clear the user log, Data Analyzer deletes the log entries from the repository, clearing all entries except for users who have logged in during the past 24 hours and have not yet logged off.

To clear the user log:
1. Click Administration > System Management > User Log.
2. Click Clear.

Configuring and Viewing the Activity Log

With the activity log, you can track the activity requests for your Data Analyzer server, such as the number of requests to view or run reports. Data Analyzer stores the activity log entries in the repository. By default, the activity log tracks the following information:
♦ Activity ID. The identification number of the activity.
♦ Request ID. The identification number of the request that the activity belongs to.
♦ User name. The Data Analyzer user requesting the activity.
♦ User role. The role of the user. To view the role of the user, hold the pointer over the user name.
♦ Start time. The time the user issued the activity request.
♦ Duration. The overall time in milliseconds it takes to perform the request.
♦ DB access. The time in milliseconds Data Analyzer takes to send the activity request to the data warehouse. Use this statistic to optimize database performance and schedule reports.
♦ Activity. The requested activity, such as Execute or Update.
♦ Object name. The name of the object requested.
♦ Object type. The type of object requested, such as report.
♦ Source. The source type of the activity request, such as web, API, or scheduler.
♦ Status. The status of the activity, such as Success or Failure.
♦ SQL. (XML file only.) The SQL statement used to run a report.
♦ Tables. (XML file only.) The tables used in the SQL statement for a report.

By default, Data Analyzer displays up to 1,000 rows in the activity log. You can change the number of rows by editing the value of the logging.activity.maxRowsToDisplay property in the DataAnalyzer.properties file. If you sort the activity log by a column, Data Analyzer sorts on all activity log data, not just the currently displayed rows.

To view the activity log, click Administration > System Management > Activity Log.

You can configure the activity log to provide the query used to perform the activity and the database tables accessed to complete the activity. This additional information appears in the XML file generated when you save the activity log. To view the information, save the activity log to file.

To configure the activity log:
1. Click Administration > System Management > Log Configuration.
2. Click SQL in the Activity Log area to log queries. To log the tables accessed in the query, select both SQL and Tables.
Data Analyzer logs the additional details.

Saving and Clearing the Activity Log

You can save the activity log to an XML file. You might save the activity log to file before you clear it to keep a record of Data Analyzer activity. You might also save the activity log to view information about the SQL statements and tables used for reports.

To save an activity log:
1. Click Administration > System Management > Activity Log.
2. Click Save, and then follow the prompts to save the log to disk.

You can clear the activity log of all entries to free space. Clear the activity log on a regular basis to optimize repository performance. When you clear the activity log, Data Analyzer clears all entries from the log.

To clear the activity log:
1. Click Administration > System Management > Activity Log.
2. Click Clear.

Configuring the System Log

Data Analyzer generates a system log file named ias.log, which logs messages produced by Data Analyzer. By default, the system log displays error and warning messages. You can choose to display the following messages in the system log:
♦ Errors
♦ Warnings
♦ Information
♦ Debug

To specify the messages displayed in the system log file, click Administration > System Management > Log Configuration.

You can view the system log file with any text editor. You can locate the system log file in the following directory:
<PowerCenter_install folder>/server/tomcat/jboss/server/informatica/log/<Reporting Service Name>

You can change the name of the log file and the directory where it is saved by editing the log4j.xml file.

To configure the name and location of the system log file:
1. Locate the log4j.xml file in the following directory:
<PowerCenter_install folder>/server/tomcat/jboss/server/informatica/ias/<Reporting Service Name>/META-INF
This folder is available after you enable the Reporting Service and the Data Analyzer instance is started.
2. Open the file with a text editor and locate the following lines:
<appender name="IAS_LOG" class="org.jboss.logging.appender.DailyRollingFileAppender">
<param name="File" value="${jboss.server.home.dir}/log/<Reporting Service Name>/ias.log"/>
3. Modify the value of the File parameter to specify the name and location for the log file. For example, if you want to save the Data Analyzer system logs to a file named mysystem.log in a folder called Log_Files in the D: drive, modify the File parameter to include the path and file name:
<param name="File" value="d:/Log_Files/mysystem.log"/>
If you specify a path, use the forward slash (/) or two backslashes (\\) in the path as the file separator. Data Analyzer does not support a single backslash as a file separator.
4. Save the file.
Your changes will take effect in Data Analyzer within several minutes.

Configuring the JDBC Log

Data Analyzer generates a JDBC log file. You can view the log file with any text editor. If you installed JBoss Application Server using the PowerCenter installer, locate the JDBC log file in the following directory:
<PowerCenter_install folder>/server/tomcat/jboss/bin/

You can change the name of the file and the directory where it is saved by editing the jdbc.log.file property in the DataAnalyzer.properties file. You can also determine whether Data Analyzer appends data to the file or overwrites the existing JDBC log file by editing the jdbc.log.append property in DataAnalyzer.properties.

Managing LDAP Settings

Lightweight Directory Access Protocol (LDAP) is a set of protocols for accessing information directories. You can use LDAP in the following ways:
♦ Authentication. You use the PowerCenter LDAP authentication to authenticate the Data Analyzer users and groups. For more information about LDAP authentication, see the PowerCenter Administrator Guide.
♦ Access LDAP directory contacts. You use the LDAP settings in Data Analyzer to access contacts within the LDAP directory service when you send email from Data Analyzer. After you set up the connection to the LDAP directory service, users can email reports and shared documents to LDAP directory contacts.

To access contacts in the LDAP directory service, you can add the LDAP server on the LDAP Settings page. When you add an LDAP server, you must provide a value for the BaseDN property. In the BaseDN property, enter the Base distinguished name entries for your LDAP directory. The Base distinguished name entries define the type of information that is stored in the LDAP directory. If you do not know the value for BaseDN, contact your LDAP system administrator.

If you use Microsoft Active Directory as the LDAP directory, you must choose System authentication as the type of authentication on the LDAP Settings page. You must enter a valid system name and system password for the LDAP server. Contact your LDAP system administrator for the system name and system password.

To add an LDAP server:
1. Click Administration > System Management > LDAP Settings.
The LDAP Settings page appears.
2. Click Add.
3. Enter the following information. Table 9-2 lists the LDAP server settings you can enter:

Table 9-2. LDAP Server Settings
Setting           Description
Name              Name of the LDAP server you want to configure.
URL               URL for the server. Use the following format: ldap://machine.domain.com
BaseDN            Base distinguished name entry that identifies the type of information stored in the LDAP directory.
Authentication    Authentication method your LDAP server uses. Select Anonymous if the LDAP server allows anonymous authentication. Select System if your LDAP server requires system authentication or if you use Microsoft Active Directory as an LDAP directory.
System Name       System name of the LDAP server. Required when using System authentication.
System Password   System password for the LDAP server. Required when using System authentication.

The following example lists the values you need to enter on the LDAP Settings page for an LDAP server running Microsoft Active Directory:
Name: Test
URL: ldap://machine.company.com
BaseDN: dc=company_name,dc=com
Authentication: System
System Name: cn=Admin,cn=users,dc=company_name,dc=com
System Password: password

The following example lists the values you need to enter on the LDAP Settings page for an LDAP server running a directory service other than Microsoft Active Directory:
Name: Test
URL: ldap://machine.company.com
BaseDN: dc=company_name,dc=com
Authentication: Anonymous

4. Click OK to save the changes.

To modify the settings of an LDAP server, click the name of the LDAP server on the LDAP Settings page.
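Several of the log settings above live in DataAnalyzer.properties. A sketch of the relevant entries is shown below; the property names come from this section, but the values shown are illustrative assumptions, not authoritative defaults:

```properties
# Maximum rows displayed in the user and activity logs
# (the documented default is 1,000 rows)
logging.user.maxRowsToDisplay=1000
logging.activity.maxRowsToDisplay=1000

# JDBC log file name/location, and whether Data Analyzer appends
# to the file or overwrites it (values here are assumed examples)
jdbc.log.file=jdbc.log
jdbc.log.append=true
```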
Managing Delivery Settings

You can determine how users access Data Analyzer and which functions they can access with delivery settings. You can configure the following delivery settings:
♦ Mail server. Allows Data Analyzer users to email reports and shared documents, and receive email alerts.
♦ External URL. Allows users to connect to Data Analyzer from the internet.
♦ SMS/Text messaging and mobile carriers. Allows users to register an SMS/Text pager or phone as an alert delivery device.

Configuring the Mail Server

The mail server provides outbound email access for Data Analyzer and users. With the outbound mail server configured, users can email reports and shared documents, and receive email alerts. The mail server you configure must support Simple Mail Transfer Protocol (SMTP). You can configure one outbound mail server at a time. Depending on the mail server, you might need to create a mail server connector before configuring the mail server.

To configure the mail server:
1. Click Administration > System Management > Delivery Settings.
The Delivery Settings page appears.
2. In the Mail Server field, enter the URL to the outbound mail server.
3. Click Apply.

Configuring the External URL

The external URL links Data Analyzer with your proxy server. Configure an external URL so that users can access Data Analyzer from the internet.

To configure the external URL:
1. Click Administration > System Management > Delivery Settings.
The Delivery Settings page appears.
2. In the External URL field, enter the URL for the proxy server you configured during installation.
The URL must begin with http:// or https://.
3. Click Apply.

Configuring SMS/Text Messaging and Mobile Carriers

To allow users to receive one-way SMS/Text message alerts on a phone or pager, you must configure SMS/Text messaging. To receive SMS/Text message alerts, the users also need to select a mobile carrier. For more information about using an SMS/Text pager or phone as an alert device, see the Data Analyzer User Guide.

Data Analyzer configures the following mobile carriers:
♦ ATT
♦ Cingular
♦ Nextel
♦ Sprint
♦ Verizon

You can configure additional mobile carriers by entering connection information for the carriers.

To configure SMS/Text messaging and mobile carriers:
1. Click Administration > System Management > Delivery Settings.
The Delivery Settings page displays.
2. In the Delivery Settings area, select SMS/Text Messaging.
3. To add a mobile carrier, in the Mobile Carriers task area, enter the name and address for the mobile carrier. In the address field, enter the domain and extension of the email address associated with your device. For example, if the wireless email address for ATT is myusername@mobile.att.net, you enter mobile.att.net. If you do not know the domain and extension, see your wireless carrier documentation.
Data Analyzer adds the mobile carrier to the list of mobile carriers.
4. Click Apply.

Specifying Contact Information

When a system problem occurs, users may need to contact the system administrator. You can specify contact information for the system administrator in the System Management area.

To specify contact information:
1. Click Administration > System Management > Contact Information.
2. Enter the name, phone number, and email address of the system administrator.
3. Click Apply.

Viewing System Information

On the System Information page, you can view information about Data Analyzer and the machine that hosts it. The System Information page contains the following sections:
♦ System Information. The System Information section lists the Data Analyzer version and build, repository version, database server type, database version, driver name, driver version, JDBC connection string, and user name.
♦ Java. The Java section displays the following information about the Java environment on the machine hosting Data Analyzer:
− Application Server. The version of the application server that runs Data Analyzer.
− Servlet API. The version of the Java Servlet API.
− Java Version. The version of the Java Virtual Machine (JVM).
− Vendor. The Java vendor.
− Vendor URL. The Java vendor web site.
− Home. The home directory of the JVM.
− Classpath. A list of the paths and files contained in the Java classpath system variable.
♦ Operating System. The Operating System section displays the operating system, version, and architecture of the machine hosting Data Analyzer.

To view system information, click Administration > System Management > System Information.
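The mobile carrier entry works like an email gateway: the alert is delivered to an address of the form <user>@<carrier domain>. As a small illustration (this helper is not part of Data Analyzer), composing the device address from the carrier entry looks like this:

```python
def alert_address(user: str, carrier_domain: str) -> str:
    """Compose the SMS/Text gateway email address for an alert device."""
    return f"{user}@{carrier_domain}"

# Matches the ATT example in this section.
print(alert_address("myusername", "mobile.att.net"))  # myusername@mobile.att.net
```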
Setting Rules for Queries

You can configure the time limit on each SQL query for a report, the time limit on processing a report, and the maximum number of rows that each query returns. You can set up these rules for querying at the following levels:
♦ System
♦ Group
♦ User
♦ Report

When you change the system query governing setting or the query governing setting for a group or user, you must log out of Data Analyzer and log in again for the new query governing settings to take effect.

Setting Query Rules at the System Level

You can specify the query governing settings for all reports in the repository. These settings apply to all reports, unless you override them at the group, user, or report level.

To set up system query governing rules:
1. Click Administration > System Management > Query Governing.
The Query Governing page appears.
2. Enter the query governing settings you want to use. Table 9-3 describes the system query governing rules you can enter:

Table 9-3. System Query Governing Settings
Setting                        Description
Query Time Limit               Maximum amount of time for each SQL query. You may have more than one SQL query for the report. Default is 240 seconds.
Report Processing Time Limit   Maximum amount of time allowed for the application server to run the report. Report Processing Time includes time to run all queries for the report. Default is 600 seconds.
Row Limit                      Maximum number of rows SQL returns for each query. If a query returns more rows than the row limit, Data Analyzer displays a warning message and drops the excess rows. Default is 20,000 rows.

3. Click Apply.

Setting Query Rules at the Group Level

You can specify query governing settings for all reports belonging to a specific group. Query governing settings for the group override system query governing settings. If a user belongs to one or more groups in the same level in the group hierarchy, Data Analyzer uses the largest query governing setting from each group.

To set up group query governing rules:
1. Click Administration > Access Management > Groups.
2. Click Edit next to the group whose properties you want to modify.
3. In the Query Governing section, clear the Use Default Settings option. When you clear this option, Data Analyzer uses the query governing settings entered on this page. When this option is selected, Data Analyzer uses the system query governing settings.
4. Enter the query governing rules. For more information about each setting, see Table 9-3 on page 85.
5. Click OK.
Data Analyzer saves the group query governing settings.
Setting Query Rules at the User Level

You can specify query governing settings for all reports belonging to a specific user. Query governing settings for the user override group and system query governing settings.

To set up user query governing rules:
1. Click Administration > Access Management > Users.
2. Click the user whose properties you want to modify.
3. In the Query Governing section, clear the Use Default Settings option. When you clear this option, Data Analyzer uses the query governing settings entered on this page. When this option is selected, Data Analyzer uses the query governing settings for the group assigned to the user.
4. Enter the query governing settings you want to use. For more information about each setting, see Table 9-3 on page 85.
5. Click OK.
Data Analyzer saves the user query governing settings.
Query Governing Rules for Users in Multiple Groups

If you specify query governing settings for a user, Data Analyzer uses those query governing settings when it runs reports for the user. If you do not specify query governing settings for a user, Data Analyzer uses the query governing settings for the group that the user belongs to. If a user belongs to multiple groups, Data Analyzer assigns the user the least restrictive query governing settings available. Data Analyzer ignores groups with the system default query governing settings.

For example, you have not specifically configured query governing settings for a user. The user belongs to three groups with the following query governing settings:

Group     Row Limit                          Query Time Limit
Group 1   25 rows                            30 seconds
Group 2   Default query governing settings
Group 3   18 rows                            120 seconds

Data Analyzer does not consider Group 2 in determining the group query governing settings to use for the user reports. For the row limit, Data Analyzer uses the setting for Group 1 since it is the least restrictive setting. For the query time limit, Data Analyzer uses the setting for Group 3 since it is the least restrictive setting.
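The resolution rule above can be sketched as a small function. This is an illustration of the documented behavior, not Data Analyzer code: a group on the system default settings is ignored, and the largest (least restrictive) row limit and query time limit among the remaining groups win.

```python
from typing import List, Optional, Tuple

def resolve_governing(groups: List[Tuple[Optional[int], Optional[int]]],
                      system_default: Tuple[int, int]) -> Tuple[int, int]:
    """Each group is (row_limit, query_time_limit); (None, None) means the
    group uses the system default query governing settings and is ignored.
    Returns the least restrictive (largest) limits among explicit groups."""
    explicit = [g for g in groups if g[0] is not None and g[1] is not None]
    if not explicit:
        return system_default
    row_limit = max(g[0] for g in explicit)
    time_limit = max(g[1] for g in explicit)
    return (row_limit, time_limit)

# The example from this section: Group 1 (25 rows, 30 s),
# Group 2 on defaults, Group 3 (18 rows, 120 s).
print(resolve_governing([(25, 30), (None, None), (18, 120)], (20000, 240)))
# → (25, 120)
```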
Setting Query Rules at the Report Level

You can specify query governing settings for a specific report. Query governing settings for a specific report override group, user, and system query governing settings.

To set up report query governing rules:
1. Click Publish.
2. On the Report Properties tab, click More Options.
3. In the Query Governing section, clear the Use Default Settings option. When you clear this option, Data Analyzer uses the query governing settings entered on this page. When this option is selected, Data Analyzer uses the query governing settings for the user.
4. Enter the query governing settings you want to use. For more information about each setting, see Table 9-3 on page 85.
5. Click Save.
Configuring Report Table Scroll Bars
You can configure report tables to appear with a scroll bar. When you enable the Show Scroll Bar on Report Table option, Data Analyzer displays a scroll bar when data in a report table extends beyond the size of the browser window. When the option is disabled, you use the browser scroll bar to navigate large report tables. By default, Data Analyzer displays scroll bars in report tables.
To change report table scroll bar display:
Configuring Report Headers and Footers
In the Header and Footer page, you can configure headers and footers for reports. You can configure Data Analyzer to display text, images, or report information such as the report name. Headers and footers display on the report when you complete the following report tasks:
♦ Print. Headers and footers display in the printed version of the report.
♦ Export. Headers and footers display when you export to an HTML or PDF file.
♦ Broadcast. Headers and footers display when you broadcast a report as an HTML, PDF, or Excel file.
♦ Archive. Headers and footers display when you archive a report as an HTML, PDF, or Excel file.
♦ Email. Headers and footers display when you email a report as an HTML or PDF file.

You can display text or images in the header and footer of a report. When you select the headers and footers to display, preview the report to verify that the headers and footers display properly with enough space between text or images. Table 9-4 lists the options you can select to display in the report headers and footers:
Table 9-4. Display Options for Report Headers and Footers

Header/Footer    Display Options
Left Header      Text or image file. Select to display text or an image, or select to display both.
Center Header    Text. Select an option and enter the text to display.
Right Header     Text. Select an option and enter the text to display.
Left Footer      One or more of the following report properties:
                 - Name. Name of the report.
                 - Last Update. Date when the report was last updated.
                 - Printed On. Date and time when you print, export, broadcast, archive, or email the report.
                 - User Name. Name of the user. Users can specify their names on the Manage Account tab. If a user specifies a first name, middle name, or last name, Data Analyzer displays the specified name in the footer.
Center Footer    Text and page number.
Right Footer     Text or image file.


Chapter 9: Managing System Settings

The image files you display in the left header or the right footer of a report can be any image type supported by your browser. By default, Data Analyzer looks for the header and footer image files in the image file directory for the current Data Analyzer color scheme. The report header and footer image files are stored with the color scheme files in the EAR directory. If you want to modify or use a new image for the left header or right footer, you must update the images in the EAR directory. If you want to use an image file in a different location, enter the complete URL for the image when you configure the header or footer. For example, if the host name of the web server where you saved the Header_Logo.gif image file is monet.PaintersInc.com and the port is 7001, enter the following URL:

http://monet.PaintersInc.com:7001/Header_Logo.gif

If Data Analyzer cannot find the header or footer image in the color scheme directory or at the URL, Data Analyzer does not display any image for the report header or footer.

When you enter a large amount of text in a header or footer, Data Analyzer shrinks the font to fit the text in the allotted space by default. You can use the PDF.HeaderFooter.ShrinktoWidth property in the DataAnalyzer.properties file to determine how Data Analyzer handles long headers and footers. You can also configure Data Analyzer to keep header and footer text at the configured font size, allowing Data Analyzer to display only the text that fits in the header or footer.

To configure report headers and footers:
1. Click Administration > System Management > Header and Footer. The Report Header and Footer page appears.
2. To configure report headers, select the headers you want to display and enter the header text. To use text for the left header, select the top field and enter the text to display. To use an image for the left header, select the lower field and enter the name of an image file in the Data Analyzer EAR file, or specify a URL for the image. If the image is not in the default image directory, specify the complete URL. Data Analyzer looks for the header and footer images in the image directory for the color scheme.
3. To configure report footers, select the footers you want to display. For left footers, you can choose properties specific to the report. To use text for the right footer, select the top field and enter the text to use. To use an image for the right footer, select the lower field and enter the name of the file to use. For more information about the header and footer display options, see Table 9-4 on page 88.
4. Click Preview to see how the report will look with the headers and footers you selected. Adobe Acrobat launches in a new browser window to display a preview of the report.
   Note: If you make more changes in the report header and footer configuration, close the preview window and click Preview again to see the new report header and footer.
5. Close the preview window.
6. On the Report Header and Footer page, click Apply to set the report header and footer, or click Cancel to discard the changes to the headers and footers.

Configuring Departments and Categories
You can associate repository objects with a department or category to organize repository objects. You might use department names to organize repository objects according to the departments in your organization, such as Human Resource and Development. You might use category names to organize repository objects according to object characteristics, such as Quarterly or Monthly. Associating repository objects with a department or category can also help you search for these objects on the Find tab.

To configure a department and category:
1. Click Administration > System Management > Metadata Configuration. The Categories Departments page appears.
2. In the Departments area, enter the name of the department.
3. Click Add. The department name appears in the list in the Departments area.
4. In the Categories area, enter the name of the category.
5. Click Add. The category name appears in the list in the Categories area.
6. Click OK. Data Analyzer saves the department or category names you added. You can associate the category or department you created with repository objects.

Configuring Display Settings for Groups and Users
By default, if you have more than 100 groups or users, Data Analyzer displays a Search box so you can find the group or user you want to edit. You can customize the way Data Analyzer displays users or groups. Data Analyzer provides the following properties in a file named web.xml so you can configure the user or group display according to your requirements:
♦ showSearchThreshold. Determines the number of groups or users Data Analyzer displays before displaying the Search box. Default is 100.
♦ searchLimit. Determines the maximum number of groups or users in the search results before you must refine the search criteria. Default is 1,000. If Data Analyzer returns more than 1,000 groups or users in the search results, refine the search criteria.

Note: The web.xml file is stored in the EAR directory.

To change group or user display options in web.xml:
1. Back up the web.xml file before you modify it.
2. Open the /custom/properties/web.xml file with a text editor and locate the line containing the following property:
   showSearchThreshold
   The value of the showSearchThreshold property is the number of groups or users Data Analyzer displays without providing the Search box.
   <init-param>
   <param-name>
   InfUserAdminUIConfigurationStartup.com.informatica.ias.useradmin.showSearchThreshold
   </param-name>
   <param-value>100</param-value>
   </init-param>
3. Change the value of the showSearchThreshold property according to your requirements.
4. Locate the line containing the following property:
   searchLimit
   The value of the searchLimit property is the maximum number of groups or users in the search result before you must refine the search criteria.
   <init-param>
   <param-name>
   InfUserAdminUIConfigurationStartup.com.informatica.ias.useradmin.searchLimit
   </param-name>
   <param-value>1000</param-value>
   </init-param>
5. Change the value of the searchLimit property according to your requirements.
6. Save and close web.xml.
7. Restart Data Analyzer.
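As an illustration of the kind of edit this procedure describes, the standard-library sketch below bumps the showSearchThreshold value in a web.xml-style fragment. The fragment, the new value of 200, and the ElementTree approach are illustrative assumptions; in practice you edit /custom/properties/web.xml by hand as described above.

```python
# Illustrative sketch (not an Informatica tool): change showSearchThreshold
# in a simplified web.xml-style fragment using only the standard library.
import xml.etree.ElementTree as ET

fragment = """<web-app><servlet><init-param>
  <param-name>InfUserAdminUIConfigurationStartup.com.informatica.ias.useradmin.showSearchThreshold</param-name>
  <param-value>100</param-value>
</init-param></servlet></web-app>"""

root = ET.fromstring(fragment)
for param in root.iter("init-param"):
    name = param.findtext("param-name", default="").strip()
    if name.endswith("showSearchThreshold"):
        # Hypothetical new threshold; pick a value that suits your group count.
        param.find("param-value").text = "200"

print(ET.tostring(root, encoding="unicode"))
```

Remember that a real web.xml lives inside the EAR directory and that Data Analyzer must be restarted for the change to take effect.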


CHAPTER 10
Working with Data Analyzer Administrative Reports
This chapter includes the following topics:
♦ Overview, 93
♦ Setting Up the Data Analyzer Administrative Reports, 94
♦ Using the Data Analyzer Administrative Reports, 97

Overview
Data Analyzer provides a set of administrative reports that enable system administrators to track user activities and monitor processes. The reports provide a view into the information stored in the Data Analyzer repository. They include details on Data Analyzer usage and report schedules and errors. You can enhance the reports to suit your needs and help you manage the users and processes in Data Analyzer more efficiently. If you need additional information in a report, you can modify it to add metrics or attributes. You can add charts or indicators, or change the format of any report.
The Data Analyzer administrative reports use an operational schema based on tables in the Data Analyzer repository. They require a data source that points to the Data Analyzer repository. They also require a data connector that includes the Data Analyzer administrative reports data source and operational schema.
After you set up the Data Analyzer administrative reports, you can view and use the reports just like any other set of reports in Data Analyzer. You can view the administrative reports in two areas:
♦ Administrator’s Dashboard
♦ Data Analyzer Administrative Reports folder

Administrator’s Dashboard
The Administrator’s Dashboard displays the indicators associated with the administrative reports. On the Administrator’s Dashboard, you can quickly see how well Data Analyzer is working and how often users log in. The Administrator’s Dashboard has the following containers:
♦ Today’s Usage. Provides information on the number of users who logged in for the day, the number of reports accessed in each hour for the day, and any errors encountered when Data Analyzer runs cached reports.
♦ Historical Usage. Displays the users who logged in the most number of times during the month, the longest running on-demand reports, and the longest running cached reports for the current month. Also provides reports on the most and least accessed reports for the year.
♦ Future Usage. Lists the cached reports in Data Analyzer and when they are scheduled to run next.
♦ Admin Reports. Provides a report on the Data Analyzer users who have never logged in.

Data Analyzer Administrative Reports Folder
The Data Analyzer Administrative Reports folder stores all the administrative reports. You can access all administrative reports in the Data Analyzer Administrative Reports public folder under the Find tab. You can view, open, and run reports from this folder.

Setting Up the Data Analyzer Administrative Reports
Informatica ships a set of prepackaged administrative reports for Data Analyzer. After you create a Reporting Service in the PowerCenter Administration Console and the corresponding Data Analyzer instance is running properly, you can set up the administrative reports on Data Analyzer. You must enable the Reporting Service and access the Data Analyzer URL to set up the administrative reports.
The administrative reports display information from the Data Analyzer repository. To run the administrative reports, you need a data connector that contains the data source to the repository.
To set up the administrative reports, complete the following steps:
1. Create a data source for the Data Analyzer repository. You need a data source to connect to the repository. For more information, see “Step 1. Set Up a Data Source for the Data Analyzer Repository” on page 94.
2. Import the administrative reports to the Data Analyzer repository. Import the XML files in the <PowerCenter_install folder>/DA-tools/AdministrativeReports folder to the Data Analyzer repository. For more information, see “Step 2. Import the Data Analyzer Administrative Reports” on page 95.
3. Add the repository data source to a data connector. For more information, see “Step 3. Add the Data Source to a Data Connector” on page 95.
4. Add the administrative reports to a schedule. To have the reports and indicators regularly updated, you can run the administrative reports on specific schedules. For more information, see “Step 4. Add the Administrative Reports to Schedules” on page 96.

Step 1. Set Up a Data Source for the Data Analyzer Repository
The administrative reports provide information on the Data Analyzer processes and usage. The information comes from the Data Analyzer repository. You must create a data source that points to the Data Analyzer repository, and then add the data source to a data connector.

Note: If you have a data source that points to the Data Analyzer repository, you can skip this step and use the existing data source for the administrative reports.

To create the repository data source:
1. Click Administration > Schema Design > Data Sources. The Data Sources page appears.
2. On the Data Sources page, click Add.
3. Select JDBC Data Source.
4. Enter a name and description for the data source.
5. Select the server type of your Data Analyzer repository. The server type list includes the following databases:
   ♦ Oracle. Select to connect to an Oracle repository.
   ♦ SQL Server. Select to connect to a Microsoft SQL Server repository.
   ♦ DB2. Select to connect to an IBM DB2 repository.
   ♦ Sybase ASE. Select to connect to a Sybase repository.
   ♦ Teradata. Data Analyzer does not support a Teradata repository.
   ♦ DB2 (OS/390). Data Analyzer does not support a DB2 repository on OS/390.
   ♦ DB2 (AS/400). Data Analyzer does not support a DB2 repository on AS/400.
   ♦ Other. Select if you want to use a different driver or you have a repository that requires a different driver than those provided by Data Analyzer.
   Data Analyzer provides JDBC drivers to connect to the Data Analyzer repository and data warehouse. When you select the server type, Data Analyzer supplies the driver name and connection string format for the JDBC drivers that Data Analyzer provides. When you select Other, you must provide the driver name and connection string.
6. Customize the JDBC connection string with the information for your Data Analyzer repository database.
7. Enter the user name and password to connect to the repository database. Consult your database administrator if necessary.
8. Test the connection. If the connection fails, verify that the repository database information is correct.
9. Click OK.

Step 2. Import the Data Analyzer Administrative Reports
Before you import the Data Analyzer administrative reports, ensure that the Reporting Service is enabled and the Data Analyzer instance is running properly. Import the XML files under the <PowerCenter_install folder>/DA-tools/AdministrativeReports folder. The XML files contain the schemas, schedules, dashboards, and database-specific global variables that you need to run the administrative reports. For more information about importing XML files, see “Importing Objects to the Repository” on page 49.

Step 3. Add the Data Source to a Data Connector
Data Analyzer uses a data connector to connect to a data source and read the data for a report. Typically, Data Analyzer uses the system data connector to connect to all the data sources required for Data Analyzer reports. To enable Data Analyzer to run the administrative reports, add the administrative reports data source to the system data connector. If you have several data connectors and you want to use a specific data connector for the administrative reports, add the administrative reports data source to the specific data connector. For more information about data connectors, see the Data Analyzer Schema Designer Guide.
If Data Analyzer does not have a data connector, you must create one before running the Data Analyzer administrative reports.

To add the administrative reports data source to the system data connector:
1. Click Administration > Schema Design > Data Connectors. The Data Connectors page appears.
2. Click the name of the system data connector. Data Analyzer displays the properties of the system data connector.
3. In the Additional Schema Mappings section, click Add. Data Analyzer expands the section and displays the available schemas in the repository.
4. In the Data Source list, select the administrative reports data source you created earlier.
5. In the Available Schemas section, select PA_Reposit and click Add >>. The PA_Reposit operational schema is one of the schemas installed by the PowerCenter Reports installer.
6. Click OK. Data Analyzer displays the additional schema mapping for the system data connector.
You can now run the administrative reports using the system data connector.

Step 4. Add the Administrative Reports to Schedules
Data Analyzer provides a set of schedules that you can use to run the administrative reports on a regular basis. The Hourly Refresh schedule is one of the schedules installed by the PowerCenter Reports installer. The Midnight Daily schedule is one of the schedules created when you install Data Analyzer. After you import all the necessary objects for the administrative reports, verify that the cached reports are assigned to the appropriate schedules.

To add the administrative reports to schedules:
1. Click the Find tab.
2. In the folders section of the Find tab, click Public Folders.
3. Locate and click the folder named Data Analyzer Administrative Reports. The public folder named Data Analyzer Administrative Reports contains the administrative reports.
4. Select a report to add to a schedule.
5. Click Edit.
6. Click Publish. The report appears in the Create Report wizard.
7. On the Properties tab, select Cached, and then select Hourly Refresh from the list of schedules.
8. Save the report.
9. Repeat steps 1 to 8 to verify that the following administrative reports are assigned to the appropriate schedules:

Report                                                     Schedule
Todays Logins                                              Hourly Refresh
Todays Report Usage by Hour                                Hourly Refresh
Top 5 Logins (Month To Date)                               Midnight Daily
Top 5 Longest Running On-Demand Reports (Month To Date)    Midnight Daily
Top 5 Longest Running Scheduled Reports (Month To Date)    Midnight Daily
Total Schedule Errors for Today                            Hourly Refresh

After you complete the steps to add the reports to the schedules, you might want to review the list of reports in the Data Analyzer Administrative Reports folder to make sure that the cached reports have been added to the correct schedule. To review the schedule for a report in the Data Analyzer Administrative Reports folder, select a report and look at the Report Properties section.

Using the Data Analyzer Administrative Reports
The Data Analyzer administrative reports are located in the Data Analyzer Administrative Reports public folder on the Find tab. You can also access these reports from the Administrator’s Dashboard. Data Analyzer provides the following administrative reports, listed in alphabetical order:
♦ Activity Log Details. Use this on-demand report to view the activity logs. You can access this report from the Find tab.
♦ Bottom 10 Least Accessed Reports this Year. Use this on-demand report to determine the 10 least used reports in the current calendar year. It is the primary report for an analytic workflow. You can access this report from the Historical Usage container on the Administrator’s Dashboard and from the Find tab.
♦ Report Activity Details. When you run the Report Activity Details report from the Find tab, the report provides detailed information about all reports accessed by any user in the current day. View this report as part of the analytic workflows for several primary reports or as a standalone report.
♦ Report Activity Details for Current Month. This on-demand report provides information about the reports accessed within the current month. When you run this report from the Find tab, it displays access information for all reports in the repository. You can access this report from the Find tab.
♦ Report Refresh Schedule. This report provides information about the next scheduled update for cached reports. Use this report to monitor the update time for various reports. Data Analyzer updates this cached report based on the Hourly Refresh schedule. You can access this report from the Future Usage container on the Administrator’s Dashboard and from the Find tab.
♦ Reports Accessed by Users Today. Use this report to get information on the reports accessed by users in the current day. It is the primary report for an analytic workflow. Data Analyzer updates this cached report based on the Hourly Refresh schedule. You can access this report from the Today’s Usage container on the Administrator’s Dashboard and from the Find tab.
♦ Todays Logins. This report provides the login count and average login duration for users who logged in on the current day. Use this report to determine the system usage for the current day. It is the primary report for an analytic workflow. Data Analyzer updates this cached report based on the Hourly Refresh schedule. You can access this report from the Today’s Usage container on the Administrator’s Dashboard and from the Find tab.
♦ Todays Report Usage by Hour. This report provides information about the number of reports accessed for each hour of the current day. You can view this report as part of the analytic workflow for the Todays Logins primary report or as a standalone report. Data Analyzer updates this cached report based on the Hourly Refresh schedule. You can access this report from the Today’s Usage container on the Administrator’s Dashboard and from the Find tab.
♦ Top 10 Most Accessed Reports this Year. Use this report to determine the reports most accessed by users in the current calendar year. The report shows the list of 10 reports that users find most useful. It is the primary report for an analytic workflow. You can access this report from the Historical Usage container on the Administrator’s Dashboard and from the Find tab.
♦ Top 5 Logins (Month To Date). Use this report to determine the users who logged in to Data Analyzer the most number of times in the current month. The report displays the user names and number of times each user logged in. Data Analyzer updates this cached report based on the Midnight Daily schedule. You can access this report from the Historical Usage container on the Administrator’s Dashboard and from the Find tab.
♦ Top 5 Longest Running On-Demand Reports (Month To Date). This report displays the average response time for the five longest-running on-demand reports in the current month to date. Use this report to help you tune the database or web server. You can also use it to determine whether an on-demand report needs to run on a schedule. Data Analyzer updates this cached report based on the Midnight Daily schedule. You can access this report from the Historical Usage container on the Administrator’s Dashboard and from the Find tab.
♦ Top 5 Longest Running Scheduled Reports (Month To Date). This report displays the time that Data Analyzer takes to display the five longest running cached reports in the current month to date. Use this report for performance tuning and for determining whether a cached report needs to run on demand. Data Analyzer updates this cached report based on the Midnight Daily schedule. You can access this report from the Historical Usage container on the Administrator’s Dashboard and from the Find tab.
♦ Total Schedule Errors for Today. This report provides the number of errors Data Analyzer encountered when running cached reports. Use this report to monitor cached reports and modify them if necessary. Data Analyzer updates this cached report based on the Hourly Refresh schedule. You can access this report from the Today’s Usage container on the Administrator’s Dashboard and from the Find tab.
♦ User Log Details. Use this on-demand report to view the user logs. You can access this report from the Find tab.
♦ User Logins (Month To Date). This report displays the number of times each user logged in during the month. Use this report to determine how often users log in to Data Analyzer. Data Analyzer updates this cached report based on the Midnight Daily schedule. You can access this report from the Historical Usage container on the Administrator’s Dashboard and from the Find tab.
♦ Users Who Have Never Logged On. This report provides information about users who have never logged in to Data Analyzer. Use this report to make administrative decisions about disabling accounts. You can access this report from the Admin Reports container on the Administrator’s Dashboard and from the Find tab.
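Step 6 of the data source setup asks you to customize the JDBC connection string for your repository database. As a rough illustration, generic vendor JDBC URL patterns look like the following. These templates and the helper function are illustrative assumptions using the vendors' standard formats; the JDBC drivers that Data Analyzer actually bundles may use different prefixes and options, so always start from the connection string that Data Analyzer supplies for your server type:

```python
# Generic, vendor-standard JDBC URL patterns (illustrative only; not
# necessarily the formats used by the drivers shipped with Data Analyzer).
JDBC_TEMPLATES = {
    "Oracle": "jdbc:oracle:thin:@{host}:{port}/{service}",
    "SQL Server": "jdbc:sqlserver://{host}:{port};databaseName={db}",
    "DB2": "jdbc:db2://{host}:{port}/{db}",
    "Sybase ASE": "jdbc:sybase:Tds:{host}:{port}/{db}",
}

def connection_string(server_type, **kw):
    """Fill a template with hypothetical host, port, and database values."""
    return JDBC_TEMPLATES[server_type].format(**kw)

# Hypothetical repository host and service name:
print(connection_string("Oracle", host="dbhost", port=1521, service="DA_REP"))
# jdbc:oracle:thin:@dbhost:1521/DA_REP
```

The point is only that each server type has its own URL shape, which is why Data Analyzer pre-fills the format when you pick a server type and requires you to supply it yourself when you pick Other.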

CHAPTER 11

Performance Tuning
This chapter includes the following topics:
♦ Overview
♦ Database
♦ Operating System
♦ Application Server
♦ Data Analyzer

Overview
Data Analyzer requires the interaction of several components and services, including those that may already exist in the enterprise infrastructure, such as the enterprise data warehouse and authentication server. Data Analyzer is built on JBoss Application Server and uses related technology and application programming interfaces (APIs) to accomplish its tasks. JBoss Application Server is a Java 2 Enterprise Edition (J2EE)-compliant application server. Data Analyzer uses the application server to handle requests from the web browser. It generates the requested contents and uses the application server to transmit the content back to the web browser. Data Analyzer stores metadata in a repository database to keep track of the processes and objects it needs to handle web browser requests. You can tune the following components to optimize the performance of Data Analyzer:
♦ Database
♦ Operating system
♦ Application server
♦ Data Analyzer

Database
Data Analyzer has the following database components:
♦ Data Analyzer repository
♦ Data warehouse


The repository database contains the metadata that Data Analyzer uses to construct reports. The data warehouse contains the data for the Data Analyzer reports. The data warehouse is where the report SQL queries are executed. Typically, it has a very high volume of data. The execution time of the reports depends on how well tuned the database and the report queries are. Consult the database documentation on how to tune a high volume database for optimal SQL execution. The Data Analyzer repository database contains a smaller amount of data than the data warehouse. However, since Data Analyzer executes many SQL transactions against the repository, the repository database must also be properly tuned to optimize the database performance. This section provides recommendations for tuning the Data Analyzer repository database for best performance.
Note: Host the Data Analyzer repository and the data warehouse in separate database servers. The following repository database tuning recommendations are valid only for a repository that resides on a database server separate from the data warehouse. If you have the Data Analyzer repository database and the data warehouse in the same database server, you may need to use different values for the parameters than those recommended here.

Oracle
This section provides recommendations for tuning the Oracle database for best performance.

Statistics
To ensure that the repository database tables have up-to-date statistics, periodically run the following command for the repository schema:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '<RepositorySchemaName>', cascade => true, estimate_percent => 100);

For more information about tuning an Oracle database, see the Oracle documentation.

User Connection
For an Oracle repository database running on HP-UX, you may need to increase the number of user connections allowed for the repository database so that Data Analyzer can maintain continuous connection to the repository. To enable more connections to the Oracle repository, complete the following steps:
1. At the HP-UX operating system level, raise the maximum user process (maxuprc) limit from the default of 75 to at least 300.
   Use the System Administration Manager tool (SAM) to raise the maxuprc limit. Raising the maxuprc limit requires root privileges. You need to restart the machine hosting the Oracle repository for the changes to take effect.
2. In Oracle, raise the values for the following database parameters in the init.ora file:
   ♦ Raise the value of the processes parameter from 150 to 300.
   ♦ Raise the value of the pga_aggregate_target parameter from 32 MB to 64 MB (67108864).

Updating the database parameters requires database administrator privileges. You need to restart Oracle for the changes to take effect. If the Data Analyzer instance has a high volume of usage, you may need to set higher limits to ensure that Data Analyzer has enough resources to connect to the repository database and complete all database processes.


Linux
To optimize Data Analyzer on Linux. Enlarge the maximum open file descriptors.
Operating System
For all UNIX operating systems. periodically run the following command for the repository schema:
REORGCHK UPDATE STATISTICS on SCHEMA <DBSchemaName>
Analysis of table statistics is important in DB2. set the following parameter values for the Data Analyzer repository database:
LOCKLIST = 600 MAXLOCKS=40 DBHEAP = 4000 LOGPRIMARY=100 LOGFILSIZ=2000
For more information about DB2 performance tuning.boulder. you need to make several changes to your Linux environment.html?Open
Microsoft SQL Server 2000
To ensure that repository database tables and indexes have up-to-date statistics.IBM DB2
To ensure that the repository database tables have up-to-date statistics.com/Redbooks.nsf/RedbookAbstracts/sg246432.
Enlarging Shared Memory and Shared Memory Segments
By default. For optimal performance. refer to the following IBM Redbook:
http://publib-b. periodically run the sp_updatestats stored procedure on the repository schema. You need to increase these values because the Java threads need to have access to the same area of shared memory and its resultant segments. The following recommendations for tuning the operating system are based on information compiled from various application server vendor web sites. make sure the file descriptor limit for the shell running the application server process is set to at least 2048.ibm. enter the following commands as root on the machine where you install Data Analyzer:
# echo "2147483648" > /proc/sys/kernel/shmmax # echo "250 32000 100 128" > /proc/sys/kernel/sem
These changes only affect the system as it is running now. Enlarge the maximum per-process open file descriptors. Linux limits the amount of memory and the number of memory segments that can be shared among applications to a reasonably small value. you may encounter transaction deadlocks during times of high concurrency usage. If you do not update table statistics periodically.d/rc.local Operating System 101
. You must modify basic system and kernel settings to allow the Java component better access to the resources of your system:
♦ ♦ ♦
Enlarge the shared memory and shared memory segments. To change these parameters. Enter the following commands to make them permanent:
# echo '#Tuning kernel parameters' >> /etc/rc. Use the ulimit command to set the file descriptor limit.

local
Enlarging the Maximum Per-Process Open File Descriptors
Increase the maximum number of open files allowed for any given process. By default, this is set to 4096 files. Make sure the file descriptor limit for the shell running the application server process is set to at least 2048. Use the ulimit command to set the file descriptor limit. Enter the following commands as root to increase the maximum open file descriptors per process:
# echo '# Set soft and hard process file descriptor limits' >> /etc/security/limits.conf
# echo '* soft nofile 4096' >> /etc/security/limits.conf
# echo '* hard nofile 4096' >> /etc/security/limits.conf
# echo 'session required /lib/security/pam_limits.so' >> /etc/pam.d/login
Chapter 11: Performance Tuning
Enlarging the Maximum Open File Descriptors
Linux has a programmed limit for the number of files it allows to be open at any one time. Increasing this limit removes any bottlenecks from all the Java threads requesting files. Enter the following command as root to increase the maximum number of open file descriptors:
# echo "65536" > /proc/sys/fs/file-max
These changes affect the system as it is currently running. Enter the following command to make them permanent:
# echo 'echo "65536" > /proc/sys/fs/file-max' >> /etc/rc.d/rc.local
Additional Recommended Settings
Table 11-1 shows additional recommended settings for Linux operating system parameters:
Table 11-1. Recommended Settings for Linux Parameters
Linux Parameter                       Suggested Value
/sbin/ifconfig lo mtu                 1500
kernel.msgmni                         1024
net.ipv4.tcp_max_syn_backlog          8192
HP-UX
You can tune the following areas in the HP-UX operating system to improve overall Data Analyzer performance:
♦ Kernel
♦ Java Process
♦ Network
Kernel Tuning
HP-UX has a Java-based configuration utility called HPjconfig which shows the basic kernel parameters that need to be tuned and the different patches required for the operating system to function properly. You can download the configuration utility from the following HP web site:
http://h21007.www2.hp.com/dspp/tech/tech_TechDocumentDetailPage_IDX/1,1701,1620,00.html
The HPjconfig recommendations for a Java-based application server running on HP-UX 11 include the following parameter values:
Max_thread_proc = 3000
Maxdsiz = 2063835136
Maxfiles = 2048
Maxfiles_lim = 2048
Maxusers = 512
Ncallout = 6000
Nfile = 30000
Nkthread = 3000
Nproc = 2068
Note: For Java processes to function properly, it is important that the HP-UX operating system is on the proper patch level as recommended by the HPjconfig tool.
For more information about kernel parameters affecting Java performance, see the document titled "Tunable Kernel Parameters" on the following HP web site:
http://docs.hp.com/hpux/onlinedocs/TKP-90203/TKP-90203.html
Java Process
You can set the JVM virtual page size to improve the performance of a Java process running on an HP-UX machine. The default value for the Java virtual machine instruction and data page sizes is 4 MB. Increase the value to 64 MB to optimize the performance of the application server that Data Analyzer runs on. To set the JVM virtual page size, use the following command:
chatr +pi64M +pd64M <JavaHomeDir>/bin/PA_RISC2.0/native_threads/java
Network Tuning
For network performance tuning, use the ndd command to view and set the network parameters. Table 11-2 provides guidelines for ndd settings:
Table 11-2. Recommended ndd Settings for HP-UX
ndd Setting                   Recommended Value
tcp_conn_request_max          16384
tcp_xmit_hiwater_def          1048576
tcp_time_wait_interval        60000
tcp_recv_hiwater_def          1048576
tcp_fin_wait_2_timeout        90000
For example, to set the tcp_conn_request_max parameter, use the following command:
ndd -set /dev/tcp tcp_conn_request_max 1024
After modifying the settings, restart the machine. For more information about tuning the HP-UX kernel, see the HP documentation.
Solaris
You can tune the Solaris operating system to optimize network and TCP/IP operations in the following ways:
♦ Use the ndd command.
♦ Set parameters in the /etc/system file.
♦ Set parameters on the network card.
Setting Parameters Using ndd
Use the ndd command to set the TCP-related parameters, as shown in the following example:
ndd -set /dev/tcp tcp_conn_req_max_q 16384
Tip: Use the netstat -s -P tcp command to view all available TCP-related parameters.
Table 11-3 lists the TCP-related parameters that you can tune and their recommended values:
Table 11-3. Recommended ndd Settings for Solaris
ndd Setting                                 Recommended Value
/dev/tcp tcp_time_wait_interval             60000
/dev/tcp tcp_conn_req_max_q                 16384
/dev/tcp tcp_conn_req_max_q0                16384
/dev/tcp tcp_ip_abort_interval              60000
/dev/tcp tcp_keepalive_interval             30000
/dev/tcp tcp_rexmit_interval_initial        4000
/dev/tcp tcp_rexmit_interval_max            10000
/dev/tcp tcp_rexmit_interval_min            3000
/dev/tcp tcp_smallest_anon_port             32768
/dev/tcp tcp_xmit_hiwat                     131072
/dev/tcp tcp_recv_hiwat                     131072
/dev/tcp tcp_naglim_def                     1
/dev/ce instance                            0
/dev/ce rx_intr_time                        32
/dev/tcp tcp_fin_wait_2_flush_interval      67500
Note: Prior to Solaris 2.7, the tcp_time_wait_interval parameter was called tcp_close_wait_interval. This parameter determines the time interval that a TCP socket is kept alive after issuing a close call. The default value of this parameter on Solaris is four minutes. When many clients connect for a short period of time, holding these socket resources can have a significant negative impact on performance. Setting this parameter to a value of 60000 (60 seconds) has shown a significant throughput enhancement when running benchmark JSP tests on Solaris. You might want to decrease this setting if the server is backed up with a queue of half-opened connections.
Setting Parameters in the /etc/system File
Each socket connection to the server consumes a file descriptor. To optimize socket performance, configure your operating system to have the appropriate number of file descriptors. Change the default file descriptor limits, the hash table size, and other tuning parameters in the /etc/system file.
Table 11-4 lists the /etc/system parameters that you can tune and the recommended values:
Table 11-4. Recommended /etc/system Settings for Solaris
Parameter                        Recommended Value
rlim_fd_cur                      8192
rlim_fd_max                      8192
tcp:tcp_conn_hash_size           32768
semsys:seminfo_semume            1024
semsys:seminfo_semopm            200
*shmsys:shminfo_shmmax           4294967295
autoup                           900
tune_t_fsflushr                  1
*Note: Set only on machines that have at least 4 GB of RAM.
Note: Restart the machine if you modify /etc/system parameters.

Setting Parameters on the Network Card
Table 11-5 lists the CE Gigabit card parameters that you can tune and the recommended values:
Table 11-5. Recommended CE Gigabit Card Settings for Solaris
Parameter                  Recommended Value
ce:ce_bcopy_thresh         256
ce:ce_dvma_thresh          256
ce:ce_taskq_disable        1
ce:ce_ring_size            256
ce:ce_comp_ring_size       1024
ce:ce_tx_ring_size         4096
For more information about Solaris tuning options, see the Solaris Tunable Parameters Reference Manual.
AIX
If an application on an AIX machine transfers large amounts of data, you can increase the TCP/IP or UDP buffer sizes. Use the no and nfso commands to set the buffer sizes. For example, to set the tcp_sendspace parameter, use the following command:
/usr/sbin/no -o tcp_sendspace=262144
Table 11-6 lists the no parameters that you can set and their recommended values:
Table 11-6. Recommended Buffer Size Settings for no Command for AIX
Parameter         Recommended Value
tcp_sendspace     262144
tcp_recvspace     262144
rfc1323           1
tcp_keepidle      600
Table 11-7 lists the nfso parameters that you can set and their recommended values:
Table 11-7. Recommended Buffer Size Settings for nfso Command for AIX
Parameter             Recommended Value
nfs_socketsize        200000
nfs_tcp_socketsize    200000
To permanently set the values when the system restarts, add the commands to the /etc/rc.net file.
For more information about AIX tuning options, see the Performance Management Guide on the IBM web site:
http://publib16.boulder.ibm.com/pseries/en_US/aixbman/prftungd/prftungd.htm

Windows
Usually, the Windows 2000 default settings for the TCP/IP parameters are adequate to ensure optimal network performance. However, disable hyper-threading on a four-CPU Windows 2000 machine to provide better throughput for a clustered application server in a high concurrency usage environment.
Application Server
JBoss Application Server consists of several components, each of which has a different set of configuration files and parameters that can be tuned. The following are some of the JBoss Application Server components and recommendations for tuning parameters to improve the performance of Data Analyzer running on JBoss Application Server.
Servlet/JSP Container
JBoss Application Server uses the Apache Tomcat 5.5 Servlet/JSP container. You can tune the Servlet/JSP container to make an optimal number of threads available to accept and process HTTP requests. Increasing the number of threads means that more users can use Data Analyzer concurrently. However, more concurrent users may cause the application server to sustain a higher processing load, leading to a general slow down of Data Analyzer. Decreasing the number of threads means that fewer users can use Data Analyzer concurrently. Fewer concurrent users may alleviate the load on the application server, leading to faster response times. However, if the number of threads is too low, some users may need to wait for their HTTP request to be served.
To tune the Servlet/JSP container, modify the following configuration file:
<JBOSS_HOME>/server/informatica/deploy/jbossweb-tomcat55.sar/server.xml
The following is a typical configuration:
<!-- A HTTP/1.1 Connector on port 8080 -->
<Connector port="8080" address="${jboss.bind.address}"
    maxThreads="250" strategy="ms" maxHttpHeaderSize="8192"
    emptySessionPath="true" enableLookups="false" redirectPort="8443"
    acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true"/>
The following parameters may need tuning:
♦ maxThreads. Maximum number of request processing threads that can be created in the pool, which determines the maximum number of simultaneous requests that the Servlet/JSP container can handle. If not specified, the parameter is set to 200. By default, Data Analyzer is configured to have a maximum of 250 request processing threads, which is acceptable for most environments. You may need to modify this value to achieve better performance. If the number of threads is too low, the following message may appear in the log files:
ERROR [ThreadPool] All threads are busy, waiting. Please increase maxThreads
♦ minSpareThreads. Number of request processing threads initially created in the pool. If not specified, the parameter is set to 4.
♦ maxSpareThreads. Maximum number of unused request processing threads that can exist before the pool begins stopping the unused threads. If not specified, the parameter is set to 50. Set the attribute to a value smaller than the value set for maxThreads.
Although the Servlet/JSP container configuration file contains additional properties, Data Analyzer may generate unexpected results if you modify properties that are not documented in this section. For additional information about configuring the Servlet/JSP container, see the Apache Tomcat Configuration Reference on the Apache Tomcat website:
http://tomcat.apache.org/tomcat-5.5-doc/config/index.html
The Servlet/JSP container configuration file does not determine how JBoss Application Server handles threads. You can also define and configure thread handling in the JBoss Application Server configuration files. For more information about configuring thread management on JBoss Application Server, see the JBoss Application Server documentation.
JSP Optimization
Data Analyzer uses JavaServer Pages (JSP) scripts to generate content for the web pages used in Data Analyzer. Typically, the JSP scripts must be compiled when they are executed for the first time. To avoid having the application server compile JSP scripts when they are executed for the first time, Informatica ships Data Analyzer with pre-compiled JSPs. If you find that you need to compile the JSP files, either because of customizations or while patching, you can modify the following configuration file to optimize the JSP compilation:
<JBOSS_HOME>/server/informatica/deploy/jbossweb-tomcat55.sar/conf/web.xml
The following is a typical configuration:
<servlet>
    <servlet-name>jsp</servlet-name>
    <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
    <init-param>
        <param-name>logVerbosityLevel</param-name>
        <param-value>WARNING</param-value>
        <param-name>development</param-name>
        <param-value>false</param-value>
    </init-param>
    <load-on-startup>3</load-on-startup>
</servlet>
The following parameters may need tuning:
♦ development. When set to true, checks for modified JSPs at every access. In a production installation, set the development parameter to false.
♦ checkInterval. Checks for changes in the JSP files on an interval of n seconds. This works only when the development parameter is set to true. If you set the development parameter to true, you can set the checkInterval parameter to specify when the JSPs are checked. For example:
<param-name>checkInterval</param-name>
<param-value>99</param-value>
Note: Make sure that the checkInterval is not too low. Typically, set it to 600 seconds.
Repository Database Connection
Data Analyzer accesses the repository database to get metadata information. Since Data Analyzer accesses the repository very frequently, it keeps a pool of database connections for the repository. To optimize Data Analyzer database connections, you can tune the database connection pools. To tune the repository database connection pool, modify the JBoss configuration file:
<JBOSS_HOME>/server/informatica/deploy/<DB_Type>_ds.xml
The name of the file includes the database type. <DB_Type> can be oracle, db2, or another database. For example, for an Oracle repository, the configuration file name is oracle_ds.xml.
The following is a typical configuration:
<datasources>
    <local-tx-datasource>
        <jndi-name>jdbc/IASDataSource</jndi-name>
        <connection-url>
            jdbc:informatica:oracle://aries:1521;SID=prfbase8
        </connection-url>
        <driver-class>
            com.informatica.jdbc.oracle.OracleDriver
        </driver-class>
        <user-name>powera</user-name>
        <password>powera</password>
        <exception-sorter-class-name>
            org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter
        </exception-sorter-class-name>
        <min-pool-size>5</min-pool-size>
        <max-pool-size>50</max-pool-size>
        <blocking-timeout-millis>5000</blocking-timeout-millis>
        <idle-timeout-minutes>1500</idle-timeout-minutes>
    </local-tx-datasource>
</datasources>
The following parameters may need tuning:
♦ min-pool-size. Minimum number of connections in the pool. The pool is empty until it is first accessed. After you use it, it will contain at least the minimum number of pool-size connections.
♦ max-pool-size. Maximum size of the connection pool. The max-pool-size value needs to be at least five more than the maximum number of concurrent users, because each report needs a database connection and there may be several scheduled reports running in the background.
♦ blocking-timeout-millis. Maximum time in milliseconds that a caller waits to get a connection when no more free connections are available in the pool. It may block other threads that require new connections.
♦ idle-timeout-minutes. Length of time an idle connection remains in the pool before it is cleaned out. Set a higher value for idle-timeout-minutes, because it consumes resources to check for idle connections and clean them out.
EJB Container
Data Analyzer uses Enterprise Java Beans extensively. It has over 50 stateless session beans (SLSB) and over 60 entity beans (EB). There are also six message-driven beans (MDBs) used for scheduling and real-time processes.
Stateless Session Beans
For SLSBs, the most important tuning parameter is the EJB pool. You can tune the EJB pool parameters in the following file:
<JBOSS_HOME>/server/Informatica/conf/standardjboss.xml
The following is a typical configuration:
<container-configuration>
    <container-name>Standard Stateless SessionBean</container-name>
    <call-logging>false</call-logging>
    <invoker-proxy-binding-name>stateless-rmi-invoker</invoker-proxy-binding-name>
    <container-interceptors>
        <interceptor>org.jboss.ejb.plugins.ProxyFactoryFinderInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.LogInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.SecurityInterceptor</interceptor>
        <!-- CMT -->
        <interceptor transaction="Container">org.jboss.ejb.plugins.TxInterceptorCMT</interceptor>
        <interceptor transaction="Container" metricsEnabled="true">org.jboss.ejb.plugins.MetricsInterceptor</interceptor>
        <interceptor transaction="Container">org.jboss.ejb.plugins.StatelessSessionInstanceInterceptor</interceptor>
        <!-- BMT -->
        <interceptor transaction="Bean">org.jboss.ejb.plugins.StatelessSessionInstanceInterceptor</interceptor>
        <interceptor transaction="Bean">org.jboss.ejb.plugins.TxInterceptorBMT</interceptor>
        <interceptor transaction="Bean" metricsEnabled="true">org.jboss.ejb.plugins.MetricsInterceptor</interceptor>
        <interceptor>org.jboss.resource.connectionmanager.CachedConnectionInterceptor</interceptor>
    </container-interceptors>
    <instance-pool>org.jboss.ejb.plugins.StatelessSessionInstancePool</instance-pool>
    <instance-cache></instance-cache>
    <persistence-manager></persistence-manager>
    <container-pool-conf>
        <MaximumSize>100</MaximumSize>
    </container-pool-conf>
</container-configuration>
The following parameter may need tuning:
♦ MaximumSize. Represents the maximum number of objects in the pool. If <strictMaximumSize> is set to true, then <MaximumSize> is a strict upper limit for the number of objects that will be created. Otherwise, if <strictMaximumSize> is set to false, the number of active objects can exceed the <MaximumSize> if there are requests for more objects. However, only the <MaximumSize> number of objects will be returned to the pool.
You can set two other parameters to fine tune the EJB pool. These parameters are not set by default in Data Analyzer. They may be tuned after you have completed proper iterative testing in Data Analyzer to increase the throughput for high concurrency installations:
♦ strictMaximumSize. When the value is set to true, the <strictMaximumSize> parameter enforces a rule that only <MaximumSize> number of objects will be active. Any subsequent requests will wait for an object to be returned to the pool.
♦ strictTimeout. If you set <strictMaximumSize> to true, then <strictTimeout> is the amount of time that requests will wait for an object to be made available in the pool.
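If testing shows that the strict settings help, they are added inside <container-pool-conf>. The following is a sketch of the resulting fragment; the element names follow the standardjboss.xml pool configuration, but the true/30000 values are illustrative assumptions, not Informatica recommendations:

```xml
<container-pool-conf>
    <MaximumSize>100</MaximumSize>
    <!-- Optional strict settings; add only after iterative load testing.
         The values below are illustrative, not recommended defaults. -->
    <strictMaximumSize>true</strictMaximumSize>
    <strictTimeout>30000</strictTimeout>
</container-pool-conf>
```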
Message-Driven Beans (MDB)
MDB tuning parameters are very similar to stateless session bean tuning parameters. The main difference is that MDBs are not invoked by clients. Instead, the messaging system delivers messages to the MDB when they are available. To tune the MDB parameters, modify the following configuration file:
<JBOSS_HOME>/server/informatica/conf/standardjboss.xml
The following is a typical configuration:
<container-configuration>
    <container-name>Standard Message Driven Bean</container-name>
    <call-logging>false</call-logging>
    <invoker-proxy-binding-name>message-driven-bean</invoker-proxy-binding-name>
    <container-interceptors>
        <interceptor>org.jboss.ejb.plugins.ProxyFactoryFinderInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.LogInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.RunAsSecurityInterceptor</interceptor>
        <!-- CMT -->
        <interceptor transaction="Container">org.jboss.ejb.plugins.TxInterceptorCMT</interceptor>
        <interceptor transaction="Container" metricsEnabled="true">org.jboss.ejb.plugins.MetricsInterceptor</interceptor>
        <interceptor transaction="Container">org.jboss.ejb.plugins.MessageDrivenInstanceInterceptor</interceptor>
        <!-- BMT -->
        <interceptor transaction="Bean">org.jboss.ejb.plugins.MessageDrivenInstanceInterceptor</interceptor>
        <interceptor transaction="Bean">org.jboss.ejb.plugins.MessageDrivenTxInterceptorBMT</interceptor>
        <interceptor transaction="Bean" metricsEnabled="true">org.jboss.ejb.plugins.MetricsInterceptor</interceptor>
        <interceptor>org.jboss.resource.connectionmanager.CachedConnectionInterceptor</interceptor>
    </container-interceptors>
    <instance-pool>org.jboss.ejb.plugins.MessageDrivenInstancePool</instance-pool>
    <instance-cache></instance-cache>
    <persistence-manager></persistence-manager>
    <container-pool-conf>
        <MaximumSize>10</MaximumSize>
    </container-pool-conf>
</container-configuration>
The following parameter may need tuning:
♦ MaximumSize. Represents the maximum number of objects in the pool. If <strictMaximumSize> is set to true, then <MaximumSize> is a strict upper limit for the number of objects that will be created. Otherwise, the number of active objects can exceed the <MaximumSize> if there are requests for more objects. However, only the <MaximumSize> number of objects will be returned to the pool.
Enterprise Java Beans
Data Analyzer EJBs use bean-managed persistence (BMP) as opposed to container-managed persistence (CMP). The EJB tuning parameters are in the following configuration file:
<JBOSS_HOME>/server/informatica/conf/standardjboss.xml
The following is a typical configuration:
<container-configuration>
    <container-name>Standard BMP EntityBean</container-name>
    <call-logging>false</call-logging>
    <invoker-proxy-binding-name>entity-rmi-invoker</invoker-proxy-binding-name>
    <sync-on-commit-only>false</sync-on-commit-only>
    <container-interceptors>
        <interceptor>org.jboss.ejb.plugins.ProxyFactoryFinderInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.LogInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.SecurityInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.TxInterceptorCMT</interceptor>
        <interceptor metricsEnabled="true">org.jboss.ejb.plugins.MetricsInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.EntityCreationInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.EntityLockInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.EntityInstanceInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.EntityReentranceInterceptor</interceptor>
        <interceptor>org.jboss.resource.connectionmanager.CachedConnectionInterceptor</interceptor>
        <interceptor>org.jboss.ejb.plugins.EntitySynchronizationInterceptor</interceptor>
    </container-interceptors>
    <instance-pool>org.jboss.ejb.plugins.EntityInstancePool</instance-pool>
    <instance-cache>org.jboss.ejb.plugins.EntityInstanceCache</instance-cache>
    <persistence-manager>org.jboss.ejb.plugins.BMPPersistenceManager</persistence-manager>
    <locking-policy>org.jboss.ejb.plugins.lock.QueuedPessimisticEJBLock</locking-policy>
    <container-cache-conf>
        <cache-policy>org.jboss.ejb.plugins.LRUEnterpriseContextCachePolicy</cache-policy>
        <cache-policy-conf>
            <min-capacity>50</min-capacity>
            <max-capacity>1000000</max-capacity>
            <overager-period>300</overager-period>
            <max-bean-age>600</max-bean-age>
            <resizer-period>400</resizer-period>
            <max-cache-miss-period>60</max-cache-miss-period>
            <min-cache-miss-period>1</min-cache-miss-period>
            <cache-load-factor>0.75</cache-load-factor>
        </cache-policy-conf>
    </container-cache-conf>
    <container-pool-conf>
        <MaximumSize>100</MaximumSize>
    </container-pool-conf>
    <commit-option>A</commit-option>
</container-configuration>
The following parameter may need tuning:
♦ MaximumSize. Represents the maximum number of objects in the pool. If <strictMaximumSize> is set to true, then <MaximumSize> is a strict upper limit for the number of objects that will be created. Otherwise, if <strictMaximumSize> is set to false, the number of active objects can exceed the <MaximumSize> if there are requests for more objects. However, only the <MaximumSize> number of objects will be returned to the pool.
You can set two other parameters to fine tune the EJB pool. These parameters are not set by default in Data Analyzer. They may be tuned after you have completed proper iterative testing in Data Analyzer to increase the throughput for high concurrency installations:
♦ strictMaximumSize. When the value is set to true, the <strictMaximumSize> parameter enforces a rule that only <MaximumSize> number of objects will be active. Any subsequent requests will wait for an object to be returned to the pool.
♦ strictTimeout. If you set <strictMaximumSize> to true, then <strictTimeout> is the amount of time that requests will wait for an object to be made available in the pool.
Data Analyzer Processes
To design schemas and reports and use Data Analyzer features more effectively, use the following guidelines.
Aggregation
Data Analyzer can run more efficiently if the data warehouse has a good schema design that takes advantage of aggregate tables to optimize query execution. Data Analyzer performance also improves if the data warehouse contains good indexes and is properly tuned.
Ranked Reports
Data Analyzer supports two-level ranking. A report with second level ranking, such as the top 10 products and the top five customers for each product, requires a multi-pass SQL query to first get the data to generate the top 10 products and then get the data for each product and corresponding top five customers. If the report has one level of ranking, Data Analyzer delegates the ranking task to the database by doing a multi-pass query to first get the ranked items and then running the actual query with ranking filters. If the ranking is defined on a calculation that is performed in the middle tier, Data Analyzer has to pull all the data before it evaluates the calculation expression, ranks the data, and applies the filter. These types of reports consume resources and may slow down other Data Analyzer processes.
For optimal performance, create reports with two levels of ranking based on smaller schemas or on schemas that have good aggregate tables and indexes. If you have a data warehouse with a large volume of data, avoid creating reports with ranking defined on custom attributes or custom metrics.
Interactive Charts
An interactive chart uses less application server resources than a regular chart. On the machine hosting the application server, an interactive chart can use up to 25% less CPU resources than a regular chart. On a typical workstation with a CPU speed greater than 2.5 GHz, interactive charts display at about the same speed as regular charts. Use interactive charts whenever possible to improve performance. For more information about editing your general preferences to enable interactive charts, see the Data Analyzer User Guide.
Date Columns
By default, Data Analyzer performs date manipulation on any column with a datatype of Date. If a column contains date and time information, Data Analyzer includes conversion functions in the WHERE clause and SELECT clause to get the proper aggregation and filtering by date only. However, conversion functions in a query prevent the use of database indexes and make the SQL query inefficient. Use the Data Source is Timestamp property for an attribute to control whether Data Analyzer includes conversion functions in the SQL query. If a report includes a column that contains date and time information but the report requires a daily granularity, set the Data Source is Timestamp attribute property so that Data Analyzer includes conversion functions in the SQL query for any report that uses the column. If a column contains date information only, not including time, clear the Data Source is Timestamp attribute property so that Data Analyzer does not include conversion functions in the SQL query for any report that uses the column.
Datatype of Table Columns
Data Analyzer uses JDBC drivers to connect to the data warehouse. JDBC uses a different data structure when it returns data, based on the column datatype defined in the database. For example, if a column has a numeric datatype, JDBC packages the returned data in a BigDecimal format, which has a high degree of precision. If a high degree of precision is not required, then a BigDecimal format for columns in tables with a large volume of data adds unnecessary overhead. Set column datatypes to reflect the actual precision required.
JavaScript on the Analyze Tab
The Analyze tab in Data Analyzer uses JavaScript for user interaction. Each cell in a report on the Analyze tab has embedded JavaScript objects to capture various user interactions. If there are over 5,000 cells in a report, the report may take several minutes to display. On a slow workstation with a CPU speed less than 1 GHz, Data Analyzer may display messages warning that the JavaScripts on the page are running too slow.
Make sure that a report displayed in the Analyze tab has a restriction on the number of cells displayed on a page. You can control the number of rows displayed on a page in Layout and Setup, Step 4 of the Create Report wizard. On the Formatting tab, set the number of rows to display per page for a report on the Analyze tab. Also, if the report is defined to show Total Others at End of Table, Data Analyzer runs another SQL query to get the aggregated values for the rows not shown in the report. Consider making such a report cached so that it can run in the background.
Row Limit for SQL Queries
Data Analyzer fetches all the rows returned by an SQL query into the JVM before it displays them on the report.maxInMemory
When a user runs a report. Since a chart may use only a subset of the report columns and rows as a datapoint. To improve performance. user level. Do not use the report schedules to frequently update reports to simulate real-time reports. This situation can drastically affect the performance of Data Analyzer. Data Analyzer runs only the tasks in an event in parallel mode. you add ReportA to a schedule that runs every five minutes. For more information about query governing. see “Setting Rules for Queries” on page 85. performing time comparisons. If there are a large number of concurrent users on Data Analyzer and each runs multiple reports.
Frequency of Schedule Runs
Setting the report schedules to run very frequently, such as every five minutes, can create problems. If ReportA takes six minutes to run, Data Analyzer starts running ReportA again before the previous run is completed. If you require reports to deliver real-time data, use the real-time message stream features available in Data Analyzer.

Query Governing

You can restrict the number of rows returned by an SQL query for a report with the query governing settings in Data Analyzer. You can set parameters in Data Analyzer to restrict the number of rows returned by an SQL query for a report and to manage the amount of memory it uses. You can set these parameters at the system level, at the server level, and at the report level.

By default, Data Analyzer displays only 20 rows in a page. However, Data Analyzer must pre-fetch all the rows so that the full dataset is available for operations such as ranking or ordering data, or formatting reports into sections. As a result, it may already have fetched hundreds of rows and stored them in the JVM heap. It is therefore important to restrict the number of rows returned by the SQL query of a report. To keep Data Analyzer from consuming more resources than necessary, limit the number of returned rows to a small value, such as 1,000. You can increase the value for specific reports that require more data.

Scheduler and User-Based Security

Data Analyzer supports parallel execution of both time-based and event-based schedulers. Within a task, however, Data Analyzer runs subtasks sequentially. Since generating a report for each security profile is a subtask for each report, Data Analyzer cannot take advantage of parallel scheduler execution and sequentially generates the report for each security profile.

For example, suppose you have five reports with user-based security and there are 500 security profiles for subscribers to the report. Data Analyzer must execute each of the five reports for each of the 500 security profiles. To keep Data Analyzer scalable, minimize the number of security profiles in Data Analyzer.

Data Analyzer Processes 113

Number of Charts in a Report

Data Analyzer generates the report charts after it generates the report table. Data Analyzer generates a subset of the report dataset for each chart. This means that each chart that Data Analyzer generates for a report has computing overhead associated with it. Report designers who create a large number of charts to cover all possible user requirements can weaken the performance and scalability of Data Analyzer. For optimal performance, consider the overhead cost associated with report charts and create the minimum set of charts required by the end user.
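The row cap discussed under Query Governing above relates to a fetch property that Table B-1 in Appendix B documents. A minimal DataAnalyzer.properties sketch, with the documented default value; report- and system-level row limits themselves are set in the Data Analyzer user interface, not in this file:

```properties
# Maximum number of rows that Data Analyzer fetches in a report query
# (documented default; shown here only for illustration).
datamart.defaultRowPrefetch=20
```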

ProviderContext.maxInMemory

Data Analyzer saves the dataset returned by the report query in the user session until the user terminates the session. When many users run reports concurrently, the memory requirements can be considerable. You can edit the providerContext.maxInMemory property in DataAnalyzer.properties to set the number of reports that Data Analyzer keeps in memory. By default, Data Analyzer keeps two reports in the user session at a time. It uses a first in first out (FIFO) algorithm to overwrite reports in memory with more recent reports. The value must be greater than or equal to 2. Set the value as low as possible to conserve memory. Typically, the default value of 2 is sufficient.

Data Analyzer retains report results that are part of a workflow or drill path in memory irrespective of the value set in this property. Because Data Analyzer keeps the datasets for all reports in a workflow in the user session, include only reports that have small datasets in a workflow.

Note: A user must log out of Data Analyzer to release the user session memory. Closing a browser window does not release the memory immediately. When a user closes a browser window without logging out, Data Analyzer releases the memory after the expiration of the session timeout, which, by default, is 30 minutes.

ProviderContext.abortThreshold

When a user runs a report that involves calculation or building large result sets, Data Analyzer might run out of memory, which results in the users getting a blank page. Before Data Analyzer starts calculating the report or building the tabular result set, it checks the amount of available memory. If the amount of free memory does not meet a pre-defined percentage, Data Analyzer displays an error and stops processing the report request.

You can edit the providerContext.abortThreshold property in the DataAnalyzer.properties file to set the maximum percentage of memory that is in use before Data Analyzer stops building report result sets and executing report queries. To calculate the percentage, divide the used memory by the total memory configured for the JVM. For example, if the used memory is 1,000 KB and the total memory configured for the JVM is 2,000 KB, the percentage of memory that is in use is 50%. If the percentage is below the threshold, Data Analyzer continues with the requested operation. If the percentage is above the threshold, Data Analyzer displays an error and stops processing the report request. You can set a threshold value between 50% and 99%. The default value is 95%.

Purging of Activity Log

Data Analyzer logs every activity or event that happens in Data Analyzer in the activity log. Similarly, Data Analyzer records every user login in the user log. These logs can grow quickly. To improve Data Analyzer performance, you must clear these two logs frequently. For more information about managing the activity and user logs, see "Managing System Settings" on page 73.

Data Analyzer provides an estimate of the length of time a report takes to display. Data Analyzer uses the data in the activity log to calculate the Estimated Time to Run the Report for an on-demand report.

Indicators in Dashboard

Data Analyzer uses two parallel threads to load indicators in the dashboards. These parallel threads are default threads spawned by the browser. Data Analyzer has been optimized to handle the way multiple indicators are queued up for loading:

♦ In a dashboard with indicators based on both cached and on-demand reports, Data Analyzer loads all indicators based on cached reports before it loads indicators based on on-demand reports.
♦ When there are multiple indicators based on a single report, Data Analyzer runs the underlying report once, both for cached and on-demand reports. All indicators on a dashboard based on the same report use the same resultset.

Gauges based on cached reports load the fastest because gauges have only one data value and they are cached in the database along with the report model. Data Analyzer obtains the report model and the datapoint for the gauge at the same time and can immediately create the gauge. The table indicators use plain HTML instead of DHTML, which results in very little overhead for rendering the table indicators on the browser.

114 Chapter 11: Performance Tuning
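The abortThreshold arithmetic described above can be sketched as follows. This is an illustrative model of the check, not Data Analyzer's internal code; the function names are invented for the example:

```python
# Sketch of the providerContext.abortThreshold check: Data Analyzer aborts a
# report request when used memory exceeds a percentage of the configured JVM memory.

def memory_in_use_percent(used_kb: float, total_kb: float) -> float:
    """Percentage of JVM memory in use: used divided by total, times 100."""
    return used_kb / total_kb * 100.0

def report_request_aborts(used_kb: float, total_kb: float,
                          threshold_percent: float = 95.0) -> bool:
    """True if the request would be stopped (default threshold is 95%)."""
    return memory_in_use_percent(used_kb, total_kb) > threshold_percent

# The example from the text: 1,000 KB used of 2,000 KB configured is 50%.
print(memory_in_use_percent(1000, 2000))  # 50.0
print(report_request_aborts(1000, 2000))  # False: 50% is below the 95% default
```

With the default threshold of 95%, the documented example (50% in use) falls below the limit, so the operation continues.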
If the activity log contains a lot of data, the SQL query to calculate the estimated time may take considerable CPU resources because it calculates the estimated time by doing an average of all the entries for a specified number of days. You can specify the number of days that Data Analyzer uses for the estimate by editing the queryengine.estimation.window property in DataAnalyzer.properties. For most cases, the default value of 30 days is fine. For more information about the estimation window property, see "Properties in DataAnalyzer.properties" on page 130.

Recommendations for Dashboard Design

Typically, dashboards provide summarized information. When you design a dashboard, use the following recommendations:

♦ For dashboard indicators, use indicators based on cached reports instead of on-demand reports.
♦ Use aggregate tables for indicators based on on-demand reports on the dashboards.
♦ Use position-based indicators instead of value-based indicators for reports with a volume of more than 2,000 rows. Position-based indicators can use indexes in the java collection for faster access, whereas value-based indicators have to perform a linear scan of the rowset to match up the values. The scan can get progressively slower for large datasets.
♦ Use interactive charts on the dashboard. Regular charts are rendered at server side and use the server CPU resources. Interactive charts are rendered on the browser and require much less server resources.

Chart Legends

When Data Analyzer displays charts with legends, the Data Analyzer charting engine must perform many complex calculations to fit the legends in the limited space available on the chart. Depending on the number of legends in a chart, it might take Data Analyzer from 10% to 50% longer to render a chart with legends. If legends are not essential in a chart, consider displaying the chart without legends to improve Data Analyzer performance.

Connection Pool Size for the Data Source

Data Analyzer internally maintains a pool of JDBC connections to the data warehouse. This pool of JDBC connections is different from the pool of connections to the repository defined at the application server level. In a high usage environment, to optimize the database connection pool for a data source, modify the connection pool settings in DataAnalyzer.properties. The following is a typical configuration:

# Datasource definition
#
dynapool.minCapacity=2
dynapool.maxCapacity=20
dynapool.evictionPeriodMins=5
dynapool.waitForConnectionSeconds=1
dynapool.connectionIdleTimeMins=10
datamart.defaultRowPrefetch=20

The following parameters may need tuning:

♦ dynapool.minCapacity. Minimum number of connections maintained in the data source pool. Set to 0 to ensure that no connections are maintained in the data source pool. If the value is 0, Data Analyzer creates a new connection to the data source to calculate a report. Default is 2.
♦ dynapool.maxCapacity. Maximum number of connections that the data source pool can grow to. Set the value to the total number of concurrent users. If you set a value less than the number of concurrent users, Data Analyzer returns an error message to some users.
♦ dynapool.evictionPeriodMins. Number of minutes between eviction runs or clean up operations during which Data Analyzer cleans up failed and idle connections from the connection pool. Enter a positive value for this parameter. You can set the value to half of the value set for the parameter dynapool.connectionIdleTimeMins so that Data Analyzer performs the eviction run, frees the connections for report calculations, and does not allow a connection to remain idle for too long. Default is 5 minutes.
♦ dynapool.waitForConnectionSeconds. Number of seconds Data Analyzer waits for a connection from the pool before it aborts the operation. If you set the parameter to 0, Data Analyzer does not wait and aborts the operation. Default is 1.
♦ dynapool.connectionIdleTimeMins. Number of minutes that a connection may remain idle. If you set the parameter to 0 or a negative value, Data Analyzer sets the parameter to the default value. Data Analyzer ignores this property if the parameter dynapool.evictionPeriodMins is not set. Default is 10.
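Applying the guidelines above to a hypothetical deployment with about 50 concurrent users yields a configuration like the following. The values are illustrative examples of the guidelines, not recommendations from the guide:

```properties
# Assumed: roughly 50 concurrent users against one data source.
dynapool.minCapacity=2
dynapool.maxCapacity=50              # total number of concurrent users
dynapool.connectionIdleTimeMins=10
dynapool.evictionPeriodMins=5        # half of connectionIdleTimeMins
dynapool.waitForConnectionSeconds=1
datamart.defaultRowPrefetch=20
```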
Server Location

Data Analyzer runs on an application server and reads data from a database server. For optimal performance, these servers must have enough CPU power and RAM, and there should be minimal network latency between them.

Server Location and CPU Power and RAM

If you locate the application server and database server on a single machine, the machine must have enough CPU power and RAM to handle the demands of each of the server processes. Although a single-machine architecture means that there is no network latency, the requirement for a very powerful machine makes it an expensive solution. It also becomes a single point of failure.

An alternative to the single-machine architecture is a distributed system where the servers are located on different machines across a network. This type of distributed architecture can be more economical because it can leverage existing infrastructure. However, network latency is an issue in a distributed architecture. The choice of architecture depends on the requirements of the organization. As with any major software implementation project, carefully perform capacity planning and testing before a Data Analyzer deployment, and make sure that all processes have enough resources to function optimally.

Server Location and Network Latency

There are two database components in Data Analyzer: the repository and the data warehouse.

Note: Data Analyzer connects to only one repository database. However, it can connect to more than one data warehouse.

Data Analyzer runs a large number of SQL queries against the repository to get the metadata before running any report. The SQL queries that Data Analyzer runs against the repository are not CPU or IO intensive, but since Data Analyzer runs a large number of them, network latency between the application server and the repository database must be minimal. For optimal performance, have the repository database as close as possible to the application server Data Analyzer runs on.

Data Analyzer runs only a few SQL queries against the data warehouse. However, the SQL queries that Data Analyzer runs against the data warehouse return many rows and are CPU and IO intensive. Since the queries return many rows, network latency between the application server and the data warehouse must also be minimal.

You can keep the repository and the data warehouse on the same database but in separate schemas as long as the machine has enough CPU and memory resources to handle the repository SQL queries and the data warehouse SQL queries. Typically, the data warehouse requires more CPU power than the repository database.
CHAPTER 12
Customizing the Data Analyzer Interface

This chapter includes the following topics:
♦ Overview, 117
♦ Using the Data Analyzer URL API, 117
♦ Using the Data Analyzer API Single Sign-On, 118
♦ Setting Up Color Schemes and Logos, 118
♦ Setting the UI Configuration Properties, 118

Overview

You can customize the Data Analyzer user interface so that it meets the requirements for web applications in your organization. Data Analyzer provides several ways to allow you to modify the look and feel of Data Analyzer. You can use the following techniques to customize Data Analyzer:
♦ Use the URL API to display Data Analyzer web pages on a portal.
♦ Use the Data Analyzer API single sign on (SSO) scheme to access Data Analyzer web pages without a user login.
♦ Set up custom color schemes and logos on the Data Analyzer Administration tab.
♦ Set the user interface (UI) configuration properties in the DataAnalyzer.properties file to display or hide the Data Analyzer header or navigation bar.

Using the Data Analyzer URL API

You can use the URL interface provided with the Data Analyzer API to provide links in a web application or portal to specific pages in Data Analyzer, such as dashboard, report, or tab pages. The URL consists of the Data Analyzer location and parameters that determine the content and interface for the Data Analyzer page. For more information about the Data Analyzer URL API, see the Data Analyzer SDK Guide.

Using the Data Analyzer API Single Sign-On

When you access Data Analyzer, the login page appears, and you must enter a user name and password. Ordinarily, if you display Data Analyzer web pages in another web application or portal, the Data Analyzer login appears even if you have already logged in to the portal where the Data Analyzer pages are displayed. To avoid multiple logins, you can set up an SSO mechanism that allows you to log in once and be authenticated in all subsequent web applications that you access. The Data Analyzer API provides an SSO mechanism that you can use when you display Data Analyzer pages in another web application or portal. You can configure Data Analyzer to accept the portal authentication and bypass the Data Analyzer login page. For more information about the Data Analyzer API SSO, see the Data Analyzer SDK Guide.

Setting Up Color Schemes and Logos

Data Analyzer provides two color schemes for the Data Analyzer interface. You can use the default Informatica color scheme and the sample color scheme named Betton Books as a starting point for a custom color scheme. You can also create color schemes and use custom graphics, buttons, and logos to match the standard color scheme for the web applications in your organization. For more information, see "Managing Color Schemes and Logos" on page 74.

Setting the UI Configuration Properties

In DataAnalyzer.properties, you can define a user interface configuration that determines how Data Analyzer handles specific sections of the user interface. The UI configuration includes the following properties:

uiconfig.<ConfigurationName>.ShowHeader=true
uiconfig.<ConfigurationName>.ShowNav=true

The properties determine what displays in the header section of the Data Analyzer user interface, which includes the logo, the logout and help links, and the navigation bar:

[Figure: the Data Analyzer header section, including the navigation bar]

118 Chapter 12: Customizing the Data Analyzer Interface

Default UI Configuration

By default, when a user logs in to Data Analyzer through the Login page, the logo, the logout and help links, and the navigation bar display on all the Data Analyzer pages. The default settings determine what Data Analyzer displays after the Login page. To hide the navigation bar or the header section on the Data Analyzer pages, you can add a UI configuration named default to DataAnalyzer.properties and set the properties to false. To hide the whole header section, add the following property:

uiconfig.default.ShowHeader=false

To hide only the navigation bar, add the following property:

uiconfig.default.ShowNav=false

Setting the ShowHeader property to false implicitly sets the ShowNav property to false.

Configuration Settings

Use the following guidelines when you set up a configuration in DataAnalyzer.properties:
♦ The default configuration properties are not required in DataAnalyzer.properties. Add them only if you want to modify the default configuration settings or create new UI configurations.
♦ The configuration name can be any length and is case sensitive. It can include only alphanumeric characters. It cannot include special characters.

Tip: DataAnalyzer.properties includes examples of the properties for the default UI configuration. If you want to change the default configuration settings, uncomment the default properties and update the values of the properties.

UI Configuration Parameter in Data Analyzer URL

If you use the URL API to display Data Analyzer pages on another web application or a portal, you can add a configuration to DataAnalyzer.properties and include the configuration name in the URL. For example, to display the Data Analyzer administration page on a portal without the navigation bar, complete the following steps:

1. Add the following properties to DataAnalyzer.properties, specifying a configuration name:
uiconfig.Fred.ShowHeader=true
uiconfig.Fred.ShowNav=false
2. Include the parameter <UICONFIG> and the configuration name in the URL when you call the Data Analyzer Administration page from the portal:
http://HostName:PortNumber/InstanceName/jsp/api/ShowAdministration.jsp?<UICONFIG>=Fred

The header section of the Data Analyzer page appears on the portal according to the setting in the configuration. For more information about the Data Analyzer URL API, see the Data Analyzer SDK Guide.

If you access a Data Analyzer page with a specific configuration through the URL API and the session expires, the Login page appears. After you log in, Data Analyzer displays the Data Analyzer pages based on the default configuration, not the configuration passed through the URL. To avoid this, complete one of the following tasks:
♦ Change the values of the default configuration instead of adding a new configuration.
♦ Set the default configuration to the same values as your customized configuration.
♦ Customize the Data Analyzer login page to use your customized configuration after user login.

For more information about modifying the settings in DataAnalyzer.properties, see "Configuration Files" on page 129.

The following examples show what appears on the Data Analyzer header when the UI configuration properties are set to different values:
♦ ShowHeader=true and ShowNav=true (default setting)
♦ ShowHeader=true and ShowNav=false
♦ ShowHeader=false and ShowNav=false

Note: Data Analyzer stores DataAnalyzer.properties in the Data Analyzer EAR file. You must use the EAR Repackager utility to extract DataAnalyzer.properties from the Data Analyzer EAR file before you can modify the UI configuration properties.
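One way to apply the session-expiration workaround above is to keep the default configuration identical to the custom configuration. A sketch, using a hypothetical configuration named Portal; the property names are from this chapter, the values and the Portal name are illustrative:

```properties
# Keep "default" in sync with the custom configuration so the same look
# survives a session expiration and re-login.
uiconfig.default.ShowHeader=true
uiconfig.default.ShowNav=false
uiconfig.Portal.ShowHeader=true
uiconfig.Portal.ShowNav=false
```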
APPENDIX B
Configuration Files

This appendix includes the following topics:
♦ Overview, 129
♦ Modifying the Configuration Files, 129
♦ Properties in DataAnalyzer.properties, 130
♦ Properties in infa-cache-service.xml, 137
♦ Properties in web.xml, 141

Overview

To customize Data Analyzer for your organization, you can modify the Data Analyzer configuration files. The configuration files define the appearance and operational parameters of Data Analyzer. You can modify the following configuration files:
♦ DataAnalyzer.properties. Contains the configuration settings for an instance of Data Analyzer.
♦ infa-cache-service.xml. Contains the global cache configuration settings for Data Analyzer. Although infa-cache-service.xml contains many settings, you only need to modify specific settings.
♦ web.xml. Contains additional configuration settings for an instance of Data Analyzer. Although web.xml contains many settings, you only need to modify specific settings.

The configuration files are stored in the Data Analyzer EAR directory.

Modifying the Configuration Files

Each instance of Data Analyzer has an associated enterprise archive (EAR) file. The following configuration files that contain the settings for an instance of Data Analyzer are stored in its EAR file:
♦ DataAnalyzer.properties
♦ infa-cache-service.xml
♦ web.xml

To change the settings in the configuration files stored in the Data Analyzer EAR file, complete the following steps:
1. With a text editor, open the configuration file you want to modify and search for the setting you want to customize.
2. Change the settings and save the configuration file.
3. Restart Data Analyzer.

Properties in DataAnalyzer.properties

The DataAnalyzer.properties file contains the configuration settings for an instance of Data Analyzer. You can modify DataAnalyzer.properties to customize the operation of an instance of Data Analyzer. You must customize some properties in DataAnalyzer.properties together to achieve a specific result. In the following groups of properties, you may need to modify more than one property to effectively customize Data Analyzer operations:

♦ Dynamic Data Source Pool Properties. Data Analyzer internally maintains a pool of JDBC connections to the data source. Several properties in DataAnalyzer.properties control the processes within the connection pool. To optimize the database connection pool for a data source, modify the following properties:
− dynapool.minCapacity
− dynapool.maxCapacity
− dynapool.evictionPeriodMins
− dynapool.waitForConnectionSeconds
− dynapool.connectionIdleTimeMins
− datamart.defaultRowPrefetch
For more information, see "Connection Pool Size for the Data Source" on page 115.
♦ Security Adapter Properties. If you use LDAP authentication, Data Analyzer periodically updates the list of users and groups in the repository with the list of users and groups in the LDAP directory service. Data Analyzer provides a synchronization scheduler that you can customize to set the schedule for these updates based on the requirements of your organization. To customize the synchronization scheduler, modify the following properties:
− securityadapter.frequency
− securityadapter.syncOnSystemStart

130 Appendix B: Configuration Files
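The dynamic data source pool properties listed above must be tuned together. The following sketch checks a property set against the guidelines from Chapter 11; the checker itself is illustrative and is not part of Data Analyzer:

```python
# Sanity-check dynapool.* values before repackaging the EAR.
# Property names come from the text; the rules encode its guidelines.

def check_pool(props: dict) -> list:
    problems = []
    if props["dynapool.maxCapacity"] < props["dynapool.minCapacity"]:
        problems.append("dynapool.maxCapacity must be >= dynapool.minCapacity")
    if props["dynapool.evictionPeriodMins"] <= 0:
        problems.append("dynapool.evictionPeriodMins must be a positive value")
    # Guideline: eviction period is half the idle timeout.
    if props["dynapool.evictionPeriodMins"] * 2 != props["dynapool.connectionIdleTimeMins"]:
        problems.append("set dynapool.evictionPeriodMins to half of "
                        "dynapool.connectionIdleTimeMins")
    return problems

typical = {
    "dynapool.minCapacity": 2,
    "dynapool.maxCapacity": 20,
    "dynapool.evictionPeriodMins": 5,
    "dynapool.connectionIdleTimeMins": 10,
}
print(check_pool(typical))  # [] - the typical configuration passes
```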
xml determines how the global cache is configured. Default is alert@informatica. Determines whether global caching is enabled for the repository. Set to true to increase Data Analyzer performance. This set of properties determine the look and feel of the Data Analyzer user interface.properties
Property alert. Number of days before a subscription for cached reports expires.Report.NoOfDaysToExpire
Chart.compatibility. you need to enter a valid email address for your organization. The font must exist on the machine hosting Data Analyzer. see the Data Analyzer User Guide.The machine where the repository database resides performs fast enough that enabling global caching does not provide a performance gain. If you are using the Internet Explorer browser.ShowNav
Note: Do not modify the properties in the section of DataAnalyzer.Fontname
Properties in DataAnalyzer. If you use an SMTP mail server. Default is true. Data Analyzer retrieves objects from the repository each time a user accesses them. you must enter an email address that includes a domain.GlobalCaching
Cache. Compatibility level of the API.xml” on page 137.Subscription. see “Properties in infa-cache-service. However.0 and 4.fromaddress Description From address used for alerts sent by Data Analyzer.com. infa-cache-service. Data Analyzer retrieves it from the cache instead of accessing the repository. You might want to disable global caching for the following reasons: .level
Cache. When global caching is enabled. Together they define a single user interface configuration. When a user accesses an object that exists in the cache. . the font does not have to exist on the workstation. have installed Adobe SVG Viewer. Font to use in all charts generated by this instance of Data Analyzer.ConfigurationName. Leaving the default value does not affect alert functionality.1 API. For more information about editing your general preferences to enable interactive charts. If you are using the Mozilla Firefox browser. Set it to blank to use the current API.properties
131
.
api. Supported values are 40 or blank.ConfigurationName. If set to true. the font must also exist on the workstation that accesses Data Analyzer. If set to false. Properties in DataAnalyzer. Default is Helvetica. Table B-1 describes the properties in the DataAnalyzer. Set it to 40 to force the current API to behave in the same way as the Data Analyzer 4. you can modify the following properties:
− −
uiconfig. and enabled interactive charts. Default is 7.properties labeled For Data Analyzer system
use only. Data Analyzer creates a cache in memory for repository objects accessed by Data Analyzer users. To customize the navigation and header display of Data Analyzer.ShowHeader uiconfig.properties file:
Table B-1. For more information about configuring global caching.♦
UI Configuration Properties. You can modify several properties in this file to customize how the global cache works.The machine running Data Analyzer has insufficient memory for the global cache.

Minfontsize
compression.MaxDataPoints
Chart. but will not use a font size smaller than the value of this property. Maximum number of containers allowed in custom layouts for dashboards. Default is 512.compressableMimeTypes after verifying browser support. MIME types for dynamic content that Data Analyzer compresses. By default. the browser might display an error.Fontsize. Enter a comma-separated list of MIME types. if Data Analyzer compresses a MIME type not supported by the browser. If the browser does not support compressed files of a MIME type. The value must be smaller than the value of Chart. but will not use a font size larger than the value of this property.
Chart. Default is 10. However. By default. an error message appears.Fontsize Description Maximum font size to use on the chart axis labels and legend. Default is 7. Properties in DataAnalyzer. text/javascript.compressableMimeTypes. the default is sufficient for most organizations. Maximum number of rows that Data Analyzer fetches in a report query. These MIME types may work with compression regardless of whether the browser supports compression or if an intervening proxy would otherwise break compression. Minimum size (in bytes) for a response to trigger compression. Default is 30. MIME types for dynamic content that Data Analyzer always compresses.MaximumNumberofContainers
datamart. Data Analyzer compresses only the MIME types listed in compressionFilter.compressThreshold
CustomLayout.properties
Property Chart.defaultRowPrefetch
132
Appendix B: Configuration Files
. Minimum font size to use on the chart axis labels and legend. Data Analyzer determines the actual font size. Data Analyzer determines the actual font size. Data Analyzer compresses dynamic content of the following MIME types: text/html. Maximum number of data points to plot in all charts. application/x-javascript. Default is 20. Data Analyzer does not compress dynamic content of the unsupported MIME type.Table B-1. Using this property may result in marginally better performance than using compressionFilter. Default is 1000. Data Analyzer compresses responses if the response size is larger than this number and if it has a compressible MIME type. Typically.alwaysCompressMimeTypes
compressionFilter. Enter a commaseparated list of MIME types. If Data Analyzer users select more data points than the value of this property.compressableMimeTypes
compressionFilter. no MIME types are listed. Some MIME types are handled by plug-ins that decompress natively. without verifying that the browser can support compressed files of this MIME type.

Default is 2. Dirty reads and non-repeatable reads cannot occur. . Supported values are: . Determines the maximum number of characters in a CLOB attribute that Data Analyzer displays in a report cell. Nonrepeatable reads and phantom reads can occur. and phantom reads can occur. Dirty reads. For example. . Data Analyzer uses the default transaction level of the database. String to use as a prefix for the dynamic JDBC pool name.NONE. If no property is set for a data source.poolNamePrefix
Properties in DataAnalyzer. Increasing this setting can slowData Analyzer performance. Default is 2.transactionIsolationLevel. Phantom reads can occur. Dirty reads. Data Analyzer uses the data restriction merging behavior provided in Data Analyzer 5. Add the following entries: .datalength
dynapool.properties
133
.maxCapacity. Transactions are not supported. Default is 20.transactionIsolationLevel. Default is 1000. If set to true.datamart.x and previous releases and does not support AND/OR conditions in data restriction filters.transactionIsolationLevel. and phantom reads cannot occur.READ_COMMITTED. Minimum number of initial connections in the data source pool. If set to false. non-repeatable reads.ias_test=REPEATABLE _READ DataRestriction. non-repeatable reads.capacityIncrement dynapool. . Data Analyzer uses the data restriction merging behaviors in Data Analyzer 4. Set the value to 25% of the maximum concurrent users.REPEATABLE_READ.READ_UNCOMMITTED.SERIALIZABLE. Determines whether the pool can shrink when connections are not in use. Properties in DataAnalyzer. Default is false.0. Default is IAS_. Maximum number of connections that the data source pool may grow to. see the Data Analyzer Schema Designer Guide. Number of connections that can be added at one time.maxCapacity
dynapool.
datatype.initialCapacity
dynapool.datamart. you have a data source named ias_demo that you want to set to READ_UNCOMMITTED and another data source named ias_test that you want to set to REPEATABLE_READ (assuming that the databases these data sources point to support the respective transaction levels).Table B-1.properties
Appendix B: Configuration Files

Table B-1. Properties in DataAnalyzer.properties

datamart.transaction.isolationlevel.DataSourceName
Transaction isolation level for each data source used in your Data Analyzer instance. Add a property for each data source and then enter the appropriate value for that data source. For example:
datamart.transaction.isolationlevel.ias_demo=READ_UNCOMMITTED
At the READ_COMMITTED isolation level, dirty reads cannot occur. For more information about CLOB support, see the Data Analyzer User Guide.

datamart.OldBehavior
Provided for backward compatibility. The current data restriction behavior was introduced in Data Analyzer 8.1.1 and supports AND/OR conditions in data restriction filters.

dynapool.allowShrinking
Determines whether the connection pool can shrink back to its initial capacity when connections are idle. Default is true.

dynapool.initialCapacity
Initial number of connections in the pool. The value must be greater than zero and cannot exceed the value of dynapool.maxCapacity.

dynapool.maxCapacity
Maximum number of connections in the pool. Set the value to the total number of concurrent users.

dynapool.refreshTestMinutes
Frequency in minutes at which Data Analyzer performs a health check on the idle connections in the pool. Data Analyzer should not perform the check too frequently because it locks up the connection pool and may prevent other clients from grabbing connections from the pool. Default is 60.

dynapool.shrinkPeriodMins
Number of minutes Data Analyzer allows an idle connection to be in the pool. After this period, the number of connections in the pool reverts to the value of its initialCapacity parameter if the allowShrinking parameter is true. Default is 5.

dynapool.waitForConnection
Determines whether Data Analyzer waits for a database connection if none are available in the connection pool. Default is true.

dynapool.waitSec
Maximum number of seconds a client waits to grab a connection from the pool if none is readily available before giving a timeout error. Default is 1.

GroupBySuppression.GroupOnAttributePair
Determines whether Data Analyzer groups values by row attributes in cross tabular report tables for reports with a suppressed GROUP BY clause when the data source stores a dataset in more than one row in a table. Set to true to group values by the row attributes. Set to false if you do not want the Data Analyzer report to group the data based on the row attributes. If the data source stores a dataset in a single row in a table, the value of this property does not affect how the report displays. For more information, see the Data Analyzer User Guide. Default is true.

help.files.url
URL for the location of Data Analyzer online help files. By default, the installation process installs online help files on the same machine as Data Analyzer and sets the value of this property.

host.url
URL for the Data Analyzer instance. By default, the Data Analyzer installation sets the value of this property in the following format: http://Hostname:PortNumber/InstanceName/

import.transaction.timeout.seconds
Number of seconds after which the import transaction times out. To import a large XML file, you might need to increase this value. Default is 3600 seconds (1 hour).

Indicator.pollingIntervalSeconds
Frequency in seconds that Data Analyzer refreshes indicators with animation. Default is 300 seconds (5 minutes).

jdbc.log.append
Determines whether to append or overwrite new log information to the JDBC log file. Set to true to append new messages. Set to false to overwrite existing information. Default is true.
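Taken together, the connection pool settings above might appear in DataAnalyzer.properties as follows. This is an illustrative sketch, not a recommendation; the initialCapacity and maxCapacity key names follow the parameter names referenced in the descriptions, and the pool sizes are examples only:

```properties
# Transaction isolation for a data source named ias_demo
datamart.transaction.isolationlevel.ias_demo=READ_UNCOMMITTED

# Pool sizing: initial size must be > 0 and must not exceed maxCapacity;
# maxCapacity is set to the expected number of concurrent users
dynapool.initialCapacity=5
dynapool.maxCapacity=50

# Let idle connections shrink back to the initial capacity after 5 minutes
dynapool.allowShrinking=true
dynapool.shrinkPeriodMins=5

# Health-check idle connections every 60 minutes; wait up to 1 second
# for a free connection before returning a timeout error
dynapool.refreshTestMinutes=60
dynapool.waitForConnection=true
dynapool.waitSec=1
```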

Table B-1. Properties in DataAnalyzer.properties (continued)

jdbc.log.file
Name of the JDBC log file. Default is iasJDBC.log. If you do not specify a path, Data Analyzer creates the JDBC log file in the following default directory:
<PowerCenter_install folder>/server/tomcat/jboss/bin/
To specify a path, set the value of the property to include the path and file name. For example, to set the log file to myjdbc.log in a directory called Log_Files in the D: drive:
jdbc.log.file=d:/Log_Files/myjdbc.log
Use the forward slash (/) or two backslashes (\\) as the file separator. Data Analyzer does not support a single backslash as a file separator.

logging.activity.maxRowsToDisplay
Maximum number of rows to display in the activity log. If set to zero, Data Analyzer displays an unlimited number of rows. Displaying a number larger than the default value may cause the browser to stop responding. Default is 1000.

logging.user.maxRowsToDisplay
Maximum number of rows to display in the user log. If set to zero, Data Analyzer displays an unlimited number of rows. Displaying a number larger than the default value may cause the browser to stop responding. Default is 1000.

Maps.Directory
Directory where the XML files that represent maps for the Data Analyzer geographic charts are located. The directory must be located on the machine where Data Analyzer is installed. The default location is in the following directory:
<PCAEInstallationDirectory>/DataAnalyzer/maps

PDF.HeaderFooter.ShrinkToWidth
Determines how Data Analyzer handles header and footer text in reports saved to PDF. Set to true to allow Data Analyzer to shrink the font size of long headers and footers to fit the configured space. Set to false to use the configured font size and allow Data Analyzer to display only the text that fits in the header or footer. Default is true. For more information, see "Configuring Report Headers and Footers" on page 88.

providerContext.abortThresHold
Defines the maximum percentage of memory that is in use before Data Analyzer stops building report result sets and running report queries. The percentage is calculated by dividing the used memory by the total memory configured for the JVM. If the percentage is above the threshold, Data Analyzer displays an error and notifies the user about the low memory condition. If the percentage is below the threshold, Data Analyzer continues with the requested operation. Default is 95.

providerContext.maxInMemory
Number of reports that Data Analyzer keeps in memory for a user session. The default value is 2. Data Analyzer does not retain report results when you set the property value below 2. Data Analyzer does not consider the value set for this property while retaining results of the reports that are part of a workflow or drill path.

queryengine.estimation.window
Number of days used to estimate the query execution time for a particular report. Data Analyzer estimates the execution time for a report by averaging all execution times for that report during this estimation window. Default is 30.

report.maxRowsPerTable
Maximum number of rows to display for each page or section for a report on the Analyze tab. Default is 300.
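As a sketch, the JDBC logging and log-display settings described above could be combined like this; the drive and directory in the path are examples only:

```properties
# Write the JDBC trace to a custom location, appending to the existing file.
# Use forward slashes (or doubled backslashes) -- a single backslash
# is not supported as a file separator.
jdbc.log.file=d:/Log_Files/myjdbc.log
jdbc.log.append=true

# Cap the activity and user logs at the default 1000 rows;
# a value of 0 would display an unlimited number of rows
logging.activity.maxRowsToDisplay=1000
logging.user.maxRowsToDisplay=1000
```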
Table B-1. Properties in DataAnalyzer.properties (continued)

ReportingService.batchsize
Number of users that the PowerCenter Service Manager processes in a batch. During synchronization, the Service Manager copies the users from the domain configuration database to the Data Analyzer repository in batches. The Service Manager considers the value set for this property as the batch size to copy the users. You can add this property to DataAnalyzer.properties and set the value of the batch size. Default is 100.

report.maxSectionSelectorValues
Maximum number of attribute values users can select for a sectional report table. Default is 65.

report.maxSectionsPerPage
Maximum number of sectional tables to display per page on the Analyze tab. If a report contains more sectional tables than this number, Data Analyzer displays the sections on multiple pages. If the value is not an increment of 5, Data Analyzer rounds the value up to the next value divisible by 5. Default is 15.

report.showSummary
Determines whether Data Analyzer displays the Summary section in a sectional report table when you email a report from the Find tab or when you use the Data Analyzer API to generate a report. Set to true to display the Summary section and hide the Grand Totals section on the Analyze tab, in reports emailed from the Find tab, and in reports generated by the Data Analyzer API. Set to false to display both the Summary and Grand Totals sections on the Analyze tab but hide these sections in reports emailed from the Find tab and in reports generated by the Data Analyzer API. Default is true.

report.userReportDisplayMode
Determines the default tab on which Data Analyzer opens a report when users double-click a report on the Find tab. Possible values are view or analyze. Users can change this default report view by editing their Report Preferences on the Manage Account tab. Default is view.

securityadapter.frequency
Determines the number of minutes between synchronization of the Data Analyzer user list. This property specifies the interval between the end of the last synchronization and the start of the next synchronization. If you set the time interval to 0, Data Analyzer disables all user list synchronization, including synchronization at startup. Default is 720 minutes (12 hours).

securityadapter.syncOnSystemStart
Determines whether Data Analyzer synchronizes the user list at startup. If true, Data Analyzer synchronizes the user list when it starts. If the property is not set, or is set to false, Data Analyzer does not synchronize the user list at startup. Default is false.

servlet.compress
Determines whether the servlet compresses files. Set to true to enable servlet compression. Set to false to disable. Set to false only if you see problems with compressed content. Default is true.

servlet.compress.jscriptContentEncoding
Determines whether the servlet compresses JavaScript loaded by <script> tags through content-encoding for browsers that support this compression. Set to true to enable servlet compression of JavaScript. Set to false only if you see problems with compressed JavaScript. Default is true.
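A minimal sketch of the synchronization and compression settings discussed above. The fully qualified jscriptContentEncoding key name is assumed from the fragments in the table; the values shown are the documented defaults:

```properties
# Synchronize the user list at startup and every 12 hours thereafter;
# setting the frequency to 0 disables all user list synchronization
securityadapter.syncOnSystemStart=true
securityadapter.frequency=720

# Servlet compression, including compressed JavaScript via content-encoding
servlet.compress=true
servlet.compress.jscriptContentEncoding=true
```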
Table B-1. Properties in DataAnalyzer.properties (continued)

servlet.compress.useCompressionThroughProxies
Determines whether the server verifies that the browser contains an Accept-Encoding header, and thus supports compression, before sending a compressed response. Set to true to allow the server to send compressed files without checking for browser support. Set to true only if all browsers used by Data Analyzer users support compression. Set to false to force the server to check whether the browser can handle compression before sending compressed files; the check is safer but can have an impact on performance. Default is false.

TimeDimension.useDateConversionOnPrimaryDate
Determines whether Data Analyzer converts a primary date column from date and time to date before using the primary date in SQL queries with date field comparisons. The date conversion ensures that Data Analyzer accurately compares dates. Applicable to the following types of time dimension:
- Date only
- Date and time in separate tables
- Date and time as separate attributes in same table
For example, if the datatype of the primary date column in the table is TIMESTAMP, the data source is DB2, you define a Date Only time dimension, and this property is set to the default value of false, Data Analyzer uses the primary date in date comparisons without any date conversion. In this case, DB2 generates an error when Data Analyzer compares the primary date column with another column that has a DATE datatype. To ensure that Data Analyzer always converts the primary date column to DATE before using it in date comparisons, set this property to true. Set this property to false if the primary date is stored in a DATE column and date conversion is not necessary. Default is false.

uiconfig.ConfigurationName.ShowHeader
Determines whether to display the header section for the Data Analyzer pages, including the logo, navigation bar, and logout links, for the given user interface configuration. Set to true to display the header section. Set to false to hide the header section. Setting ShowHeader to false implicitly sets ShowNav to false. Default is true.

uiconfig.ConfigurationName.ShowNav
Determines whether to display the Data Analyzer navigation bar for the given configuration. Set to true to display the navigation bar. Set to false to hide the navigation bar. Default is true.

Properties in infa-cache-service.xml

A cache is a memory area where frequently accessed data can be stored for rapid access. The Cache.GlobalCaching property in DataAnalyzer.properties determines whether global caching is enabled for Data Analyzer. For more information about enabling global caching, see "Properties in DataAnalyzer.properties" on page 130.
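Note that global caching itself is switched on in DataAnalyzer.properties, not in infa-cache-service.xml. A minimal sketch:

```properties
# Enable the global cache; when this is false, Data Analyzer
# ignores every property in infa-cache-service.xml
Cache.GlobalCaching=true
```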

When global caching is enabled, Data Analyzer creates a global cache in memory for repository objects accessed by Data Analyzer users. Data Analyzer uses JBoss Cache to maintain the global cache, and the properties in infa-cache-service.xml determine how the global cache is configured. Data Analyzer stores data in the global cache in a hierarchical tree structure consisting of nodes. A node contains the data for a single cached object, for example, a report or dashboard.

When a user first accesses an object, Data Analyzer retrieves the object from the repository and then stores the object in memory. The next time a user accesses the same object, Data Analyzer retrieves the object from the global cache instead of the repository. When a user modifies an object that exists in the global cache, Data Analyzer removes the object from the cache and then saves the updated object to the repository. The next time a user accesses the updated object, Data Analyzer retrieves the object from the repository.

Use infa-cache-service.xml to configure the following global cache features:
♦ Lock acquisition timeout
♦ Eviction policy

If you disable global caching in the Cache.GlobalCaching property in DataAnalyzer.properties, Data Analyzer ignores the properties in infa-cache-service.xml. Although infa-cache-service.xml contains a number of properties to support the global cache, only the properties documented in this section are supported by Data Analyzer. Changes to the default values of the unsupported properties may generate unexpected results.

For more information about JBoss Cache, see the JBoss Cache documentation library:
http://labs.jboss.com/portal/jbosscache/docs

Configuring the Lock Acquisition Timeout

The global cache uses an optimistic node locking scheme to prevent Data Analyzer from encountering deadlocks. When a user updates an object that exists in the global cache, Data Analyzer acquires a lock on the object node when it commits the update or delete transaction to the repository. When the transaction completes, Data Analyzer releases the lock on the object node.

Data Analyzer may not be able to acquire a lock on an object node in the global cache under the following conditions:
♦ Another user or background process has locked the same object node.
♦ Data Analyzer has lost the connection to the repository.

The LockAcquisitionTimeout attribute in infa-cache-service.xml determines how long Data Analyzer attempts to acquire a lock on an object node. If Data Analyzer cannot acquire a lock during this time period, it rolls back the transaction and displays an appropriate message to the user. If Data Analyzer frequently rolls back transactions due to lock acquisition timeouts, you can increase the value of the LockAcquisitionTimeout attribute.

To configure the lock acquisition timeout:
1. In the directory where you extracted the Data Analyzer EAR file, locate infa-cache-service.xml in the following directory:
<PowerCenter_Install folder>/server/tomcat/jboss/server/informatica/ias/<reporting service name>/properties
2. Open the infa-cache-service.xml file with a text editor.
3. Locate the following text:
name="LockAcquisitionTimeout"
4. Change the attribute value according to your requirements. By default, the LockAcquisitionTimeout attribute is set to 10,000 milliseconds:
<attribute name="LockAcquisitionTimeout">10000</attribute>
5. Save and close infa-cache-service.xml.
6. Add infa-cache-service.xml back to the Data Analyzer EAR file.
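In the file, the timeout is a single attribute on the cache service definition. The surrounding element and service name in this sketch are illustrative of a typical JBoss Cache configuration and may differ in the file Informatica ships; only the attribute line itself is documented here:

```xml
<!-- JBoss Cache service definition (wrapper element and name illustrative) -->
<mbean code="org.jboss.cache.TreeCache"
       name="jboss.cache:service=DataAnalyzerGlobalCache">
  <!-- Roll back a transaction if a node lock cannot be acquired
       within 20 seconds (the documented default is 10000 ms) -->
  <attribute name="LockAcquisitionTimeout">20000</attribute>
</mbean>
```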
Configuring the Eviction Policy

To manage the size of the global cache, Data Analyzer uses an eviction policy to remove the least frequently used objects from the cache when the cache approaches its memory limit. The eviction policy works on regions of the global cache. Each global cache region contains the cached data for a particular object type. For example, the /Reports region contains all cached reports. infa-cache-service.xml defines the following global cache regions:

♦ /Dashboards. Dashboard definitions.
♦ /Reports. Report definitions.
♦ /Reports/User. User specific objects defined for reports, for example, indicators, gauges, and highlighting rules added by each user.
♦ /Reports/Variables. Global variables used in reports.
♦ /Attributes. Attribute definitions.
♦ /Metrics. Metric definitions.
♦ /Schemas. Operational, hierarchical, and analytic schemas.
♦ /Time. Calendar and time dimension definitions, including current time values.
♦ /DataSources. Data source definitions.
♦ /DataConnectors. Data connector definitions.
♦ /Trees. Content folder definitions in the Find tab.
♦ /Users. User profiles, including color schemes, delivery settings, and contact information, as well as user, group, and role definitions.
♦ /Security. Access permissions on an object and data restrictions defined for users or groups.
♦ /System. Administrative system settings and logs.
♦ /_default_. Default region if an object does not belong to any of the other defined regions.

You can configure a different eviction policy for each region so that Data Analyzer caches more or fewer objects of a particular type. For example, if a large number of concurrent users frequently access dashboards but not reports, you can increase the maximum number of dashboards and decrease the maximum number of reports that Data Analyzer stores in the global cache.

infa-cache-service.xml includes several eviction policy attributes. You can modify these attributes to customize when Data Analyzer removes objects from the global cache. Table B-2 lists the eviction policy attributes you can configure for each global cache region defined in infa-cache-service.xml:

Table B-2. Eviction Policy Attributes

wakeUpIntervalSeconds
Frequency in seconds that Data Analyzer checks for objects to remove from the global cache. You can decrease this value to have Data Analyzer run the eviction policy more frequently. Default is 60 seconds.

maxNodes
Maximum number of objects stored in the specified region of the global cache. Set the value to 0 to have Data Analyzer cache an infinite number of objects. Data Analyzer writes informational messages to a global cache log file when a region approaches its maxNodes limit. Default varies for each region.
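Following the dashboards-versus-reports example above, a configuration that favors dashboards might look like this. The region syntax mirrors the snippet shown in the procedure below; the maxNodes values are illustrative, not defaults:

```xml
<!-- Cache up to 500 dashboards, but only 50 reports -->
<region name="/Dashboards">
  <attribute name="maxNodes">500</attribute>
</region>
<region name="/Reports">
  <attribute name="maxNodes">50</attribute>
</region>
```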

Table B-2. Eviction Policy Attributes (continued)

timeToLiveSeconds
Maximum number of seconds an object can remain idle in the global cache. Defined for each region of the global cache. Set the value to 0 to define no time limit. Default varies for each region. By default, infa-cache-service.xml defines an idle time limit only for regions that contain user specific data. For example, the /Users region has a timeToLiveSeconds value of 1,800 seconds (30 minutes): Data Analyzer removes cached user data if it has not been accessed for 30 minutes. If Data Analyzer runs on a machine with limited memory, you can define idle time limits for the other regions so that Data Analyzer removes objects from the cache before the maxNodes limit is reached.

maxAgeSeconds
Maximum number of seconds an object can remain in the global cache. Defined for each region of the global cache. Set the value to 0 to define no time limit. Default varies for each region. By default, infa-cache-service.xml defines a maximum age limit for only the /_default_ region. If Data Analyzer runs on a machine with limited memory, you can define maximum age limits for the other regions so that Data Analyzer removes objects from the cache before the maxNodes limit is reached.

Data Analyzer checks for objects to remove from the global cache at the following times:
♦ The wakeUpIntervalSeconds time period ends. Data Analyzer removes objects that have reached the timeToLiveSeconds or maxAgeSeconds limits.
♦ A global cache region reaches its maxNodes limit. Data Analyzer removes the least recently used object from the region. Data Analyzer also removes objects from any region that have reached the timeToLiveSeconds or maxAgeSeconds limits.

To configure the eviction policy:
1. In the directory where you extracted the Data Analyzer EAR file, locate infa-cache-service.xml in the following directory:
<PowerCenter_Install folder>/server/tomcat/jboss/server/informatica/ias/<reporting service name>/properties
2. Open the infa-cache-service.xml file with a text editor.
3. Locate the following text:
name="wakeUpIntervalSeconds"
4. Change the value of the wakeUpIntervalSeconds attribute according to your requirements:
<attribute name="wakeUpIntervalSeconds">60</attribute>
5. Locate the region whose eviction policy you want to modify. For example, to locate the /Dashboards region, locate the following text:
region name="/Dashboards"
6. Change the attribute values for the region according to your requirements. For example, to change the attribute values for the /Dashboards region, modify the following lines:
<region name="/Dashboards">
  <attribute name="maxNodes">200</attribute>
  <attribute name="timeToLiveSeconds">0</attribute>
  <attribute name="maxAgeSeconds">0</attribute>
</region>
7. Repeat steps 5 to 6 for each of the global cache regions whose eviction policy you want to modify.
8. Save and close infa-cache-service.xml.
9. Add the infa-cache-service.xml file back to the Data Analyzer EAR file.

Properties in web.xml

The web.xml file contains configuration settings for Data Analyzer. You can modify this file to customize the operation of an instance of Data Analyzer. Although the web.xml file contains a number of settings, you typically modify only specific settings in the file. Table B-3 describes the properties in web.xml that you can modify:

Table B-3. Properties in web.xml

enableGroupSynchronization
If you use LDAP authentication, this property determines whether Data Analyzer updates the groups in the repository when it synchronizes the list of users and groups in the repository with the LDAP directory service. By default, during synchronization, Data Analyzer deletes the users and groups in the repository that are not found in the LDAP directory service. If you want to keep user accounts in the LDAP directory service but keep the groups in the Data Analyzer repository, set this property to false so that Data Analyzer does not delete or add groups to the repository during synchronization. When this property is set to false, Data Analyzer synchronizes only user accounts, not groups, and you must maintain the group information within Data Analyzer. Default is true.

login-session-timeout
Session timeout, in minutes, for an inactive session on the Login page. If the user does not successfully log in and the session remains inactive for the specified time period, the session expires. After the user successfully logs in, Data Analyzer resets the session timeout to the value of the session-timeout property. Default is 5.

searchLimit
Maximum number of groups or users Data Analyzer displays in the search results before requiring you to refine your search criteria. Default is 1000.

session-timeout
Session timeout, in minutes, for an inactive session. Data Analyzer terminates sessions that are inactive for the specified time period. Default is 30.

showSearchThreshold
Maximum number of groups or users Data Analyzer displays before displaying the Search box so you can find a group or user. Default is 100.

TemporaryDir
Directory where Data Analyzer stores temporary files. Default is tmp_ias_dir. By default, Data Analyzer creates the directory in the following default directory:
<PCAEInstallationDirectory>/JBoss403/bin/
You can specify a full directory path such as D:/temp/DA. To specify a path, use the forward slash (/) or two backslashes (\\) as the file separator. Data Analyzer does not support a single backslash as a file separator. In a cluster, the directory must be a shared file system that all servers in the cluster can access.

NOTICES

This Informatica product (the "Software") includes certain drivers (the "DataDirect Drivers") from DataDirect Technologies, an operating company of Progress Software Corporation ("DataDirect"), which are subject to the following terms and conditions:

1. THE DATADIRECT DRIVERS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.

2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT INFORMED OF THE POSSIBILITIES OF DAMAGES IN ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT LIMITATION, BREACH OF CONTRACT, BREACH OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.