Directory structure

There is a working directory on the file system of the HPE Consumption Analytics Portal server containing files used in data processing. Those files are organized into the following subdirectories:

Processing: <working_dir>/processing

Intermediate files created or consumed by batch jobs. There is generally one subdirectory for each active batch process, and each of these subdirectories contains feed directories and the intermediate files.

Legacy batch jobs are also stored here as XML files. The job launcher searches this directory for job files if no path is provided.

You can run the job files in this directory with the ccjob command-line utility, or from the Job Maintenance page of the Cloud Cruiser Portal.

Collector usage files: <working_dir>/usage_files

A staging area for raw data to be read by collectors. Under this directory, you typically create a subdirectory for each feed of data from your cloud resources.

Logs: <working_dir>/logs

Logs for each job run.

Scripts: <working_dir>/scripts

Miscellaneous executable scripts called by workbooks and batch jobs to perform processing that is not available natively.

Email templates: <install_dir>/templates

Email message templates.

jdbc_jars

An optional directory for custom JDBC JAR files used in data collection.
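The layout described above can be sketched as a directory tree. In this sketch the working-directory path and the job and feed names are illustrative only; substitute the names used in your own configuration.

```shell
# Illustrative working-directory layout. The path and the job/feed
# names below are examples, not values required by the product.
WORK_DIR=$(mktemp -d)                                  # stands in for <working_dir>
mkdir -p "$WORK_DIR/processing/nightly_load/aws_feed"  # intermediate files, per job and feed
mkdir -p "$WORK_DIR/usage_files/aws_feed"              # staging area for raw collector data
mkdir -p "$WORK_DIR/logs" "$WORK_DIR/scripts"          # job logs and helper scripts

# List the subdirectories relative to the working directory
find "$WORK_DIR" -mindepth 1 -type d | sed "s|$WORK_DIR/||" | sort
```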

Locate your working directory somewhere other than the directory where you installed HPE Consumption Analytics Portal, for example C:\cc-working. This lets you upgrade the software without affecting your working scripts and files.

In the dialog box, enter the complete path to an existing directory and click OK. The base path changes for all the directories shown. You can optionally change the name and location of individual directories.