DATASTAGE QUESTIONS

1. What is the flow of loading data into fact & dimensional tables?
A) Fact table - Table with a collection of foreign keys corresponding to the primary keys in the dimension tables. Consists of fields with numeric values (the measures).
Dimension table - Table with a unique primary key.
Load - Data should first be loaded into the dimension tables. Based on the primary key values in the dimension tables, the data is then loaded into the fact table.

2. What is the default cache size? How do you change the cache size if needed?
A) Default cache size is 256 MB. We can increase it by going into DataStage Administrator, selecting the Tunables tab and specifying the cache size there.

3. What are types of Hashed File?
A) Hashed files are classified broadly into 2 types.
a) Static - subdivided into 17 types based on primary key pattern.
b) Dynamic - subdivided into 2 types:

i) Generic ii) Specific.
Dynamic files do not perform as well as a well-designed static file, but do perform better than a badly designed one. When creating a dynamic file you can specify several tuning parameters, such as the hashing algorithm (although all of these have default values).

By default a hashed file is "Dynamic - Type Random 30 D".

4. What does a Config File in parallel extender consist of?
A) The config file consists of the following (a minimal sample follows the list):
a) Number of Processes or Nodes.
b) Actual Disk Storage Location.
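As a minimal sketch (the node name, host name and directory paths are illustrative), a one-node configuration file looks like this:

{
   node "node1"
   {
      fastname "etl_server"
      pools ""
      resource disk "/data/ds/datasets" {pools ""}
      resource scratchdisk "/data/ds/scratch" {pools ""}
   }
}

Adding further node blocks to this file is what gives a parallel job more partitions to run on.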
5. What is Modulus and Splitting in Dynamic Hashed File?
A) In a dynamic hashed file the size of the file changes as data is added and deleted. The modulus is the current number of groups in the file. When the file grows beyond its load threshold, a group is split into two ("splitting"); when it shrinks, groups are merged back together.

6. What are Stage Variables, Derivations and Constants?
A) Stage Variable - An intermediate processing variable that retains its value between rows and does not pass its value into a target column.
Derivation - An expression that specifies the value to be passed on to the target column.
Constraint - A condition that is either true or false and controls the flow of data on a link.

7. Types of views in Datastage Director?
There are 3 types of views in DataStage Director:
a) Job View - jobs and the dates they were compiled.
b) Status View - status of each job's last run.
c) Log View - event messages logged during a job run.

9. Orchestrate Vs Datastage Parallel Extender?
A) Orchestrate itself is an ETL tool with extensive parallel processing capabilities, running on UNIX platforms. DataStage used Orchestrate with DataStage XE (beta version of 6.0) to incorporate the parallel processing capabilities. Ascential then purchased Orchestrate and integrated it with DataStage XE, releasing a new version, DataStage 6.0, i.e. Parallel Extender.

10. Importance of Surrogate Key in Data warehousing?
A) A surrogate key is the primary key for a dimension table. Its main importance is that it is independent of the underlying database, i.e. the surrogate key is not affected by changes going on in the source database.
11. How to run a Shell Script within the scope of a Data stage job?
A) By using the "ExecSH" subroutine in the Before/After job properties.

12. How to handle Date conversions in Datastage? Convert a mm/dd/yyyy format to yyyy-dd-mm?
A) We use:
a) "Iconv" function - Internal Conversion.
b) "Oconv" function - External Conversion.
The function to convert mm/dd/yyyy format to yyyy-dd-mm is:
Oconv(Iconv(Fieldname, "D/MDY[2,2,4]"), "D-YDM[4,2,2]")

13. How do you execute datastage job from command line prompt?
A) Using the "dsjob" command as follows:
dsjob -run -jobstatus projectname jobname

14. Functionality of Link Partitioner and Link Collector?
Link Partitioner: It splits data into various partitions or data flows using various partition methods.
Link Collector: It collects the data coming from the partitions, merges it into a single data flow and loads it to the target.

15. Types of Dimensional Modeling?
A) Dimensional modeling is subdivided into 2 types (a brief example follows the list).
a) Star Schema - Simple & Much Faster. Denormalized form.
b) Snowflake Schema - Complex with more Granularity. More normalized form.

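For example (the table names are illustrative): in a star schema a SALES fact table joins directly to DATE, PRODUCT and STORE dimension tables. In the snowflake version the PRODUCT dimension is further normalized into PRODUCT -> BRAND -> CATEGORY tables, which reduces redundancy but adds joins.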
16. Differentiate Primary Key and Partition Key?
A) Primary Key is a combination of unique and not null. It can be a collection of key values, called a composite primary key. A Partition Key is just a part of the Primary Key. There are several partitioning methods, such as Hash, DB2 and Random. When using Hash partitioning we specify the Partition Key.
17. Differentiate Database data and Data warehouse data?
A) Data in a Database is:
a) Detailed or Transactional
b) Both Readable and Writable
c) Current
Data in a Data Warehouse, by contrast, is detailed and summarized, primarily read-only, and historical (non-volatile).

18. Containers Usage and Types?
A) A container is a collection of stages used for the purpose of reusability. There are 2 types of containers:
a) Local Container: job specific.
b) Shared Container: can be used in any job within a project.

19. Compare and Contrast ODBC and Plug-In stages?
ODBC:
a) Poor performance.
b) Can be used for a variety of databases.
c) Can handle stored procedures.
Plug-In:
a) Good performance.
b) Database specific (only one database).
c) Cannot handle stored procedures.

Data Stage Administrator: Used to create projects. Contains a set of project properties.
We can set the buffer size (by default 128 KB) and increase it.
We can set the Environment Variables.
In Tunables we have in-process and inter-process row buffering:
In-process - the stages of a job run in a single process and rows are read sequentially through a buffer.
Inter-process - the stages run as separate processes, so a stage reads the data as it comes from the previous stage.
The Administrator itself only interfaces to the metadata.

Data Stage Manager: We can view and edit the metadata repository.
We can import table definitions.
We can export DataStage components in .xml or .dsx format.
We can create routines and transforms.
We can compile multiple jobs.

Data Stage Designer: We can create the jobs. We can compile the jobs. We can run the jobs. We can declare stage variables in a Transformer; we can call routines, transforms, macros and functions. We can write constraints.

Data Stage Director: We can run the jobs.
We can schedule the jobs (scheduling can be done daily, weekly, monthly or quarterly).
We can monitor the jobs.
We can release the jobs.

Q 23 What is Meta Data Repository?
Meta data is data about the data. The repository also contains:
Query statistics
ETL statistics
Business subject areas
Source information
Target information
Source-to-target mapping information.
Q 24 What is Data Stage Engine?
It is the server engine (derived from the UniVerse engine, not a Java engine) that runs in the background on the DataStage server and executes server jobs.

Q 25 What is Dimensional Modeling?
Dimensional Modeling is a logical design technique that seeks to present the data in a standard framework that is intuitive and allows for high-performance access.

Q 26 What is Star Schema?
A star schema is a de-normalized multi-dimensional model. It contains centralized fact tables surrounded by dimension tables.
Dimension Table: It contains a primary key and descriptive attributes for the entities referenced by the fact table.
Fact Table: It contains foreign keys to the dimension tables, plus measures and aggregates.

Q 27 What is surrogate Key?
It is typically a 4-byte integer that replaces the transaction / business / OLTP key in the dimension table. A 4-byte integer can hold roughly 2 billion values.

Q 28 Why we need surrogate key?
It is used for integrating data from multiple sources and performs better than a natural primary key for index maintenance, joins, table size, key updates, disconnected inserts and partitioning.

Q 29 What is Snowflake schema?
It is a partially normalized dimensional model in which at least one dimension is represented by two or more hierarchy-related tables.

Q 30 Explain Types of Fact Tables?
Factless Fact: It contains only foreign keys to the dimension tables, with no measures.
Additive Fact: Measures can be added across any dimension.
Semi-Additive: Measures can be added across some dimensions but not others, e.g. account balances (additive across every dimension except time).
Non-Additive: Measures cannot be added across any dimension, e.g. percentages, averages.

Conformed Fact: The definitions of the measures in the two fact tables are the same, so the facts are measured across the dimensions with the same set of measures.

Q 31 Explain the Types of Dimension Tables?
Conformed Dimension: If a dimension table is connected to more than one fact table, the granularity defined in the dimension table is common across those fact tables.
Junk Dimension: A dimension table which contains only flags (and similar low-cardinality attributes).
Monster Dimension: A very large or rapidly changing dimension is known as a Monster Dimension.
Degenerate Dimension: It occurs in line-item-oriented fact table designs; the dimension key sits in the fact table without a corresponding dimension table.

Q 32 What are stage variables?
Stage variables are declared in the Transformer Stage and used to store values. Stage variables are active at run time (because memory is allocated at run time).

Q 33 What is sequencer?
It sets the sequence of execution of server jobs.

Q 34 What are Active and Passive stages?
Active Stage: Active stages model the flow of data and provide mechanisms for combining data streams, aggregating data and converting data from one data type to another. E.g. Transformer, Aggregator, Sort, Row Merger etc.
Passive Stage: A passive stage handles access to databases or files for the extraction or writing of data. E.g. IPC stage, file stages, UniVerse, UniData, DRS stage etc.

Q 35 What is ODS?
An Operational Data Store is a staging area where data can be rolled back.

Q 36 What are Macros?
They are built from DataStage functions and do not require arguments. A number of macros are provided in the JOBCONTROL.H file to facilitate getting information about the current job, and the links and stages belonging to the current job. These can be used in expressions (for example in Transformer stages), job control routines, filenames and table names, and before/after subroutines.
These macros provide the functionality of using the DSGetProjectInfo, DSGetJobInfo, DSGetStageInfo, and DSGetLinkInfo functions with the DSJ.ME token as the JobHandle, and they can be used in all active stages and before/after subroutines. The macros provide the functionality for all the possible InfoType arguments for the DSGet...Info functions. See the Function call help topics for more details. The available macros are:
DSHostName
DSProjectName
DSJobStatus
DSJobName
DSJobController
DSJobStartDate
DSJobStartTime
DSJobStartTimestamp
DSJobWaveNo
DSJobInvocations
DSJobInvocationId
DSStageName
DSStageLastErr
DSStageType
DSStageInRowNum
DSStageVarList
DSLinkRowCount
DSLinkLastErr
DSLinkName

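As a sketch of how these macros are used (the message text and log category are illustrative), a before/after subroutine or job control routine could log the current job name and start time:

Call DSLogInfo("Job " : DSJobName : " started " : DSJobStartTimestamp, "Startup")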
Examples:
To obtain the name of the current job:
MyName = DSJobName
To obtain the full current stage name:
MyName = DSJobName : "." : DSStageName

Q 37 What is keyMgtGetNextValue?
It is a built-in transform that generates sequential numbers. Its input type is a literal string and its output type is string.

Q 38 What are stages?
Stages are either passive or active. Passive stages handle access to databases for extracting or writing data. Active stages model the flow of data and provide mechanisms for combining data streams, aggregating data, and converting data from one data type to another.

Q 39 What index is created on Data Warehouse?
A bitmap index is typically created in a Data Warehouse.

Q 40 What is container?
A container is a group of stages and links. Containers enable you to simplify and modularize your server job designs by replacing complex areas of the diagram with a single container stage. You can also use shared containers as a way of incorporating server job functionality into parallel jobs.
DataStage provides two types of container:

• Local containers. These are created within a job and are only accessible by that job. A local container is edited in a tabbed page of the job's Diagram window.
• Shared containers. These are created separately and are stored in the Repository in the same way that jobs are. There are two types of shared container.

Q 41 What is function? (Job Control - Examples of Transform Functions)
Functions take arguments and return a value.
BASIC functions: A function performs mathematical or string manipulations on the arguments supplied to it, and returns a value. Some functions have 0 arguments; most have 1 or more. Arguments are always in parentheses, separated by commas, as shown in this general syntax:

FunctionName(argument, argument)
DataStage BASIC functions: These functions can be used in a job control routine, which is defined as part of a job's properties and allows other jobs to be run and controlled from the first job. Some of the functions can also be used for getting status information on the current job; these are useful in active stage expressions and before- and after-stage subroutines.

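Here is a minimal job control sketch using these functions; the project, job and parameter names are illustrative and error handling is kept to a single check. The full list of functions follows it.

hJob = DSAttachJob("LoadCustomerDim", DSJ.ERRFATAL)
ErrCode = DSSetParam(hJob, "RunDate", "2007-01-31")
ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)
ErrCode = DSWaitForJob(hJob)
Status = DSGetJobInfo(hJob, DSJ.JOBSTATUS)
If Status = DSJ.RUNFAILED Then Call DSLogFatal("LoadCustomerDim failed", "JobControl")
ErrCode = DSDetachJob(hJob)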
To do this ... Use this function:
Specify the job you want to control - DSAttachJob
Set parameters for the job you want to control - DSSetParam
Set limits for the job you want to control - DSSetJobLimit
Request that a job is run - DSRunJob
Wait for a called job to finish - DSWaitForJob
Get the metadata details for the specified link - DSGetLinkMetaData
Get information about the current project - DSGetProjectInfo
Get buffer size and timeout value for an IPC or Web Service stage - DSGetIPCStageProps
Get information about the controlled job or current job - DSGetJobInfo
Get information about the meta bag properties associated with the named job - DSGetJobMetaBag
Get information about a stage in the controlled job or current job - DSGetStageInfo
Get the names of the links attached to the specified stage - DSGetStageLinks
Get a list of stages of a particular type in a job - DSGetStagesOfType
Get information about the types of stage in a job - DSGetStageTypes
Get information about a link in a controlled job or current job - DSGetLinkInfo
Get information about a controlled job's parameters - DSGetParamInfo
Get the log event from the job log - DSGetLogEntry
Get a number of log events on the specified subject from the job log - DSGetLogSummary
Get the newest log event, of a specified type, from the job log - DSGetNewestLogId
Log an event to the job log of a different job - DSLogEvent
Stop a controlled job - DSStopJob
Return a job handle previously obtained from DSAttachJob - DSDetachJob
Log a fatal error message in a job's log file and abort the job - DSLogFatal
Log an information message in a job's log file - DSLogInfo
Put an info message in the job log of a job controlling the current job - DSLogToController
Log a warning message in a job's log file - DSLogWarn
Generate a string describing the complete status of a valid attached job - DSMakeJobReport
Insert arguments into the message template - DSMakeMsg
Ensure a job is in the correct state to be run or validated - DSPrepareJob
Interface to the system send-mail facility - DSSendMail
Log a warning message to a job log file - DSTransformError
Convert a job control status or error code into an explanatory text message - DSTranslateCode
Suspend a job until a named file either exists or does not exist - DSWaitForFile
Check if a BASIC routine is cataloged, either in the VOC as a callable item or in the catalog space - DSCheckRoutine
Execute a DOS or DataStage Engine command from a before/after subroutine - DSExecute
Set a status message for a job to return as a termination message when it finishes - DSSetUserStatus

Q 42 What is Routines?
Routines are stored in the Routines branch of the DataStage Repository, where you can create, view or edit them. The following programming components are classified as routines: Transform functions, Before/After subroutines, Custom UniVerse functions, ActiveX (OLE) functions, Web Service routines.

Q 43 What is data stage Transform?
Q 44 What is Meta Brokers?
Q 45 What is usage analysis?
Q 46 What is job sequencer?

Question: Dimensional modelling is again sub divided into 2 types.
Answer: A) Star Schema - Simple & much faster. Denormalized form.
B) Snowflake Schema - Complex with more granularity. More normalized form.

Question: Importance of Surrogate Key in Data warehousing?
Answer: A surrogate key is the primary key for a dimension table. Its main importance is that it is independent of the underlying database, i.e. the surrogate key is not affected by changes going on in the database.
Question: Differentiate Database data and Data warehouse data?
Answer: Data in a Database is:
A) Detailed or Transactional
B) Both Readable and Writable
C) Current
Data in a Data Warehouse is detailed or summarized, primarily read-only, and historical in nature.
Question: What is the flow of loading data into fact & dimensional tables?
Answer: Fact table - Table with a collection of foreign keys corresponding to the primary keys in the dimension tables. Consists of fields with numeric values.
Dimension table - Table with a unique primary key.
Load - Data should first be loaded into the dimension tables. Based on the primary key values in the dimension tables, the data is then loaded into the fact table.

Question: Orchestrate Vs Datastage Parallel Extender?
Answer: Orchestrate itself is an ETL tool with extensive parallel processing capabilities, running on UNIX platforms. DataStage used Orchestrate with DataStage XE (beta version of 6.0) to incorporate the parallel processing capabilities. Ascential then purchased Orchestrate and integrated it with DataStage XE, releasing a new version, DataStage 6.0, i.e. Parallel Extender.
Question: Differentiate Primary Key and Partition Key?
Answer: Primary Key is a combination of unique and not null. It can be a collection of key values, called a composite primary key. A Partition Key is just a part of the Primary Key. There are several partitioning methods, such as Hash, DB2 and Random. When using Hash partitioning we specify the Partition Key.
Question: What are Stage Variables, Derivations and Constants?
Answer: Stage Variable - An intermediate processing variable that retains its value between rows and does not pass its value into a target column.
Constraint - A condition that is either true or false and controls the flow of data on a link.
Derivation - An expression that specifies the value to be passed on to the target column.

Question: What is the default cache size? How do you change the cache size if needed?
Answer: Default cache size is 256 MB. We can increase it in the DataStage Administrator by selecting the Tunables tab and specifying the cache size there.

Question: What is Hash file stage and what is it used for?
Answer: Used for look-ups. It is like a reference table. It is also used in place of ODBC or OCI tables for better performance.

Question: What are types of Hashed File?
Answer: Hashed files are classified broadly into 2 types.
A) Static - subdivided into 17 types based on primary key pattern.
B) Dynamic - subdivided into 2 types:

i) Generic ii) Specific.
The default hashed file is "Dynamic - Type Random 30 D".

Question: What are Static Hash files and Dynamic Hash files?
Answer: As the names themselves suggest. In general we use Type-30 dynamic hash files. The data file has a default size limit of 2 GB and the overflow file is used if the data exceeds that size.
Question: What is the Usage of Containers? What are its types?
Answer: A container is a collection of stages used for the purpose of reusability.
There are 2 types of Containers.
A) Local Container: Job Specific
B) Shared Container: Used in any job within a project.

Question: Compare and Contrast ODBC and Plug-In stages?
Answer:
ODBC: Poor performance; can be used for a variety of databases; can handle stored procedures.
PLUG-IN: Good performance; database specific (only one database); cannot handle stored procedures.

Question: How do you execute datastage job from command line prompt?
Answer: Using the "dsjob" command as follows:
dsjob -run -jobstatus projectname jobname

Question: What are the command line functions that import and export the DS jobs?
Answer: dsimport.exe - imports the DataStage components.
dsexport.exe - exports the DataStage components.

Question: How to run a Shell Script within the scope of a Data stage job?
Answer: By using the "ExecSH" subroutine in the Before/After job properties.

Question: What are OConv () and Iconv () functions and where are they used?
Answer: Iconv() - Converts a string to an internal storage format.
Oconv() - Converts an expression to an output format.

Question: How to handle Date conversions in Datastage? Convert mm/dd/yyyy format to yyyy-dd-mm?
Answer: We use the following (a worked example appears after the list):
a) "Iconv" function - Internal Convertion.
b) "Oconv" function - External Convertion.

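As a worked sketch (the field name InDate is illustrative): to turn "12/31/2007" held as mm/dd/yyyy into 2007-12-31, use

Oconv(Iconv(InDate, "D/MDY[2,2,4]"), "D-YMD[4,2,2]")

Iconv first parses the external string into DataStage's internal day number; Oconv then formats that internal value with the new mask (a mask of "D-YDM[4,2,2]" would give the yyyy-dd-mm layout instead).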
Question: What does a Config File in parallel extender consist of?
Answer: The config file consists of the following:
a) Number of Processes or Nodes.
b) Actual Disk Storage Location.

Question: Functionality of Link Partitioner and Link Collector?
Answer: Link Partitioner: It splits data into various partitions or data flows using various partition methods.
Link Collector: It collects the data coming from the partitions, merges it into a single data flow and loads it to the target.

Question: What is Modulus and Splitting in Dynamic Hashed File?
Answer: In a dynamic hashed file the size of the file changes as data is added and deleted. The modulus is the number of groups in the file; when the file grows a group is split into two (splitting), and when it shrinks groups are merged back together.

Question: Did you Parameterize the job or hard-code the values in the jobs?
Answer: Always parameterize the job. The values come either from Job Properties or from a 'Parameter Manager' - a third-party tool. You should never hard-code parameters in your jobs. The most often parameterized variables in a job are: DB DSN name, username, password, and the dates against which the data is to be processed.
Question: Have you ever been involved in updating DS versions, like DS 5.X? If so, tell us some of the steps you took in doing so.
Answer: Yes.
The following are some of the steps:
1. Definitely take a backup of the whole project(s) by exporting each project as a .dsx file.

2. Use the same parent folder for the new version so that old jobs that use hard-coded file paths continue to work.
3. After installing the new version, import the old project(s); you will have to compile them all again. You can use the 'Compile All' tool for this.
4. Make sure that all your DB DSNs are created with the same names as the old ones. This step applies when moving DS from one machine to another.
5. If you are just upgrading your DB from Oracle 8i to Oracle 9i, there is a tool on the DS CD that can do this for you.
6. Do not stop the 6.0 server before the upgrade; the version 7.0 install process collects project information during the upgrade. There is NO rework (recompilation of existing jobs/routines) needed after the upgrade.
Question: How did you handle reject data?
Answer: Typically a Reject link is defined and the rejected data is loaded back into the data warehouse. A Reject link has to be defined for every output link on which you wish to collect rejected data. Rejected data is typically bad data, like duplicates of primary keys or null rows where data is expected.
Question: What are other Performance tunings you have done in your last project to increase the performance of slowly running jobs?
Answer: Staged the data coming from ODBC/OCI/DB2UDB stages or any database on the server using hash/sequential files, for optimum performance and also for data recovery in case the job aborts. Tuned the OCI stage 'Array Size' and 'Rows per Transaction' numerical values for faster inserts, updates and selects. Tuned the 'Project Tunables' in Administrator for better performance.
Used sorted data for Aggregator.
Sorted the data as much as possible in the DB and reduced the use of DS-Sort, for better performance of jobs. Removed unused data from the source as early as possible in the job. Worked with the DB admin to create appropriate indexes on tables for better performance of DS queries. Converted some of the complex joins/business logic in DS to stored procedures for faster execution of the jobs. If an input file has an excessive number of rows and can be split up, then use standard logic to run jobs in parallel. Before writing a routine or a transform, make sure that the functionality required is not already in one of the standard routines supplied in the sdk or ds utilities categories. Constraints are generally CPU intensive and take a significant amount of time to process. This may be the case if the constraint calls routines or external macros, but if it is inline code then the overhead will be minimal. Try to have the constraints in the 'Selection' criteria of the jobs itself; this will eliminate unnecessary records even getting in before joins are made. Tuning should occur on a job-by-job basis.
Use the power of DBMS.
Try not to use a sort stage when you can use an ORDER BY clause in the database.
Using a constraint to filter a record set is much slower than performing a SELECT ... WHERE ... in the database. Make every attempt to use the bulk loader for your particular database; bulk loaders are generally faster than using ODBC or OLE.

Question: Tell me one situation from your last project where you faced a problem and how did you solve it?
Answer: 1. The jobs in which data is read directly from OCI stages were running extremely slowly. I had to stage the data before sending it to the transformer to make the jobs run faster.
2. The job aborted in the middle of loading some 500,000 rows. We had the option of either cleaning/deleting the loaded data and then running the fixed job, or running the job again from the row at which it aborted. To make sure the load was proper we opted for the former.
Question: Tell me the environment in your last projects.
Answer: Give the OS of the server and the OS of the client of your most recent project.
Question: How did you connect with DB2 in your last project?
Answer: Most of the time the data was sent to us in the form of flat files; the data is dumped and sent to us. In some cases where we needed to connect to DB2 for look-ups, we used ODBC drivers to connect to DB2 (or) DB2-UDB depending on the situation and availability. Certainly DB2-UDB is better in terms of performance, as native drivers are always better than ODBC drivers. 'iSeries Access ODBC Driver 9.00.02.02' - ODBC driver to connect to AS400/DB2.
Question: What are Routines and where/how are they written and have you written any routines before?
Answer: Routines are stored in the Routines branch of the DataStage Repository, where you can create, view or edit them. The following are the different types of routines:
1. Transform functions
2. Before/After job subroutines
3. Job control routines

Question: How did you handle an 'Aborted' sequencer?
Answer: In almost all cases we have to delete the data inserted by it from the DB manually, fix the job, and then run the job again.

Question: What are Sequencers?
Answer: Sequencers are job control programs that execute other jobs with preset job parameters.

Question: Read the String functions in DS.
Answer: Functions like [] -> the substring function and ':' -> the concatenation operator. A short sketch follows the syntax lines below.
Syntax:
string [ [ start, ] length ]
string [ delimiter, instance, repeats ]
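A short sketch of these operators (the values are illustrative):

Name = "JOHN,SMITH"
First = Name[",", 1, 1]    ;* delimiter form, returns "JOHN"
Tail = Name[6, 5]          ;* start/length form, returns "SMITH"
Full = First : " " : Tail  ;* concatenation, returns "JOHN SMITH"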
Question: What will you do in a situation where somebody wants to send you a file and use that file as an input or reference and then run the job?
Answer: • Under Windows: Use 'WaitForFileActivity' under the Sequencers and then run the job. You may be able to schedule the sequencer around the time the file is expected to arrive.
• Under UNIX: Poll for the file. Once the file has arrived, start the job or sequencer depending on the file.

Question: What is the utility you use to schedule the jobs on a UNIX server other than using Ascential Director?
Answer: Use the crontab utility to invoke the dsjob command line with the proper parameters passed (a sample entry follows this group of questions).

Question: Did you work in UNIX environment?
Answer: Yes. This is one of the most important requirements.

Question: How would you call an external Java function which is not supported by DataStage?
Answer: Starting from DS 6.0 we have the ability to call external Java functions using a Java package from Ascential. In this case we can even use the command line to invoke the Java function, write the return values from the Java program (if any) to a file, and use that file as a source in a DataStage job.
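Returning to the UNIX scheduling question above, as a sketch (the engine install path, project and job names are illustrative), a crontab entry that runs a job at 2 a.m. every day could look like:

0 2 * * * /opt/Ascential/DataStage/DSEngine/bin/dsjob -run -jobstatus MyProject MyJob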
Question: How will you determine the sequence of jobs to load into the data warehouse?
Answer: First we execute the jobs that load the data into dimension tables, then fact tables, then the aggregator tables (if any).

Question: The above might raise another question: why do we have to load the dimension tables first, then fact tables?
Answer: As we load the dimension tables the (primary) keys are generated; these keys are foreign keys in the fact tables.

Question: Does the selection of 'Clear the table and Insert rows' in the ODBC stage send a Truncate statement to the DB or does it do some kind of Delete logic?
Answer: There is no TRUNCATE on ODBC stages. 'Clear the table' issues a DELETE FROM statement. On an OCI stage such as Oracle, you do have both Clear and Truncate options. They are radically different in permissions (Truncate requires you to have ALTER TABLE permission whereas Delete doesn't).
Question: How do you rename all of the jobs to support your new file-naming conventions?
Answer: Create an Excel spreadsheet with new and old names. Export the whole project as a .dsx. Write a Perl program which can do a simple rename of the strings, looking them up in the Excel file. Then import the new .dsx file, preferably into a new project for testing. Recompile all jobs. Be aware that the names of the jobs may also have been changed in your job control jobs or Sequencer jobs, so you have to make the necessary changes to those Sequencers.
Question: When should we use ODS?
Answer: DWHs are typically read-only and batch-updated on a schedule. ODSs are maintained in more real time and trickle-fed constantly.

Question: What other ETL's have you worked with?
Answer: Informatica, and also DataJunction if it is present in your resume.

Question: How good are you with your PL/SQL?
Answer: On a scale of 1-10, say 8.5-9.

Question: What versions of DS have you worked with?
Answer: DS 7.5, DS 7.0.2, DS 6.0, DS 5.2.

Question: What's the difference between Datastage Developers and Datastage Designers?
Answer: A DataStage developer is one who codes the jobs. A DataStage designer is one who designs the jobs; he deals with the blueprints and designs the jobs with the stages that are required in developing the code.
Question: What are the requirements for your ETL tool?
Answer: Do you have large sequential files (1 million rows, for example) that need to be compared every day against yesterday's? If so, ask how each vendor would do that. Think about what process they are going to use. Are they requiring you to load yesterday's file into a table and do lookups? If so, RUN!! Do they have a match/merge routine that knows how to process this in sequential files? Then maybe they are the right one. It all depends on what you need the ETL to do.
If you are small enough in your data sets, then either would probably be OK.

Question: What are the main differences between Ascential DataStage and Informatica PowerCenter?
Answer: Chuck Kelley's Answer: You are right; they have pretty much similar functionality. However, what are the requirements for your ETL tool? Do you have large sequential files (1 million rows, for example) that need to be compared every day against yesterday's? If so, ask how each vendor would do that. Think about what process they are going to use. Are they requiring you to load yesterday's file into a table and do lookups? If so, RUN!! Do they have a match/merge routine that knows how to process this in sequential files? Then maybe they are the right one. It all depends on what you need the ETL to do. If you are small enough in your data sets, then either would probably be OK.
Les Barbusinski's Answer: Without getting into specifics, here are some differences you may want to explore with each vendor:
• Does the tool use a relational or a proprietary database to store its metadata and scripts? If proprietary, why?
• What add-ons are available for extracting data from industry-standard ERP, Accounting, and CRM packages?
• Can the tool's metadata be integrated with third-party data modeling and/or business intelligence tools? If so, how and with which ones?
• How well does each tool handle complex transformations, and how much external scripting is required?
• What kinds of languages are supported for ETL script extensions?
Almost any ETL tool will look like any other on the surface. The trick is to find out which one will work best in your environment. The best way I've found to make this determination is to ascertain how successful each vendor's clients have been using their product, especially clients who closely resemble your shop in terms of size, industry, in-house skill sets, platforms, source systems, data volumes and transformation complexity.
Ask both vendors for a list of their customers with characteristics similar to your own that have used their ETL product for at least a year. Then interview each client (preferably several people at each site) with an eye toward identifying unexpected problems, benefits, or quirkiness with the tool that have been encountered by that customer. Ultimately, ask each customer – if they had it all to do over again – whether or not they’d choose the same tool and why? You might be surprised at some of the answers.
Joyce Bischoff’s Answer: You should do a careful research job when selecting products. You should first document your requirements, identify all possible products and evaluate each product against the detailed requirements. There are numerous ETL products on the market and it seems that you are looking at only two of them. If you are unfamiliar with the many products available, you may refer to www.tdan.com, the Data Administration Newsletter, for product lists.
If you ask the vendors, they will certainly be able to tell you which of their product’s features are stronger than the other product. Ask both vendors and compare the answers, which may or may not be totally accurate. After you are very familiar with the products, call their references and be sure to talk with technical people who are actually using the product. You will not want the vendor to have a representative present when you speak with someone at the reference site. It is also not a good idea to depend upon a high-level manager at the reference site for a reliable opinion of the product. Managers may paint a very rosy picture of any selected product so that they do not look like they selected an inferior product.
Question: In how many places can you call Routines?
Answer: You can call a routine in four places:
1. Transform of a routine
a. Date Transformation
b. Upstring Transformation
2. Transform of the Before & After Subroutines
3. XML transformation
4. Web-based transformation

Question: What is the Batch Program and how can you generate it?
Answer: A batch program is generated at run time and maintained by DataStage itself, but you can easily change it on the basis of your requirements (Extraction, Transformation, Loading). Batch programs are generated depending on your job's nature, either a simple job or a sequencer job; you can see this program in the job control option.
Question: Suppose 4 jobs are controlled by a sequencer (job 1, job 2, job 3, job 4). If job 1 has 10,000 rows and after the run only 5,000 rows have been loaded into the target table, the rest are not loaded and the job aborts. How can you sort out the problem?
Answer: Suppose the job sequencer synchronizes or controls 4 jobs but job 1 has a problem. In this situation you should go to the Director and check what type of problem is showing: a data type problem, a warning message, a job failure or a job abort. If the job fails it may be a data type problem or a missing column action. Go to the Run window -> click -> Tracing -> Performance, or in your target table -> General -> Action, where there are two options:
(i) On Fail - Commit, Continue (ii) On Skip - Commit, Continue.
First check how much data has already been loaded, then select the On Skip option and Continue; for the remaining data that was not loaded select On Fail, Continue. Run the job again and you should get a success message.

Question: What happens if RCP is disabled?
Answer: In such a case OSH has to perform the import and export every time the job runs, and the processing time of the job is increased.

Question: How do you rename all of the jobs to support your new file-naming conventions?
Answer: Create an Excel spreadsheet with new and old names. Export the whole project as a .dsx. Write a Perl program which can do a simple rename of the strings, looking them up in the Excel file. Then import the new .dsx file, preferably into a new project for testing. Recompile all jobs. Be aware that the names of the jobs may also have been changed in your job control jobs or Sequencer jobs, so you have to make the necessary changes to those Sequencers.
Question: What will you do in a situation where somebody wants to send you a file and use that file as an input or reference and then run the job?
Answer: A. Under Windows: Use 'WaitForFileActivity' under the Sequencers and then run the job. You may be able to schedule the sequencer around the time the file is expected to arrive.
B. Under UNIX: Poll for the file. Once the file has arrived, start the job or sequencer depending on the file.
Question: What are Sequencers?
Answer: Sequencers are job control programs that execute other jobs with preset job parameters.

Question: How did you handle an 'Aborted' sequencer?
Answer: In almost all cases we have to delete the data inserted by it from the DB manually, fix the job, and then run the job again.

Question 34: What is the difference between the Filter stage and the Switch stage?
Ans: There are two main differences, and probably some minor ones as well. The two main differences are as follows.
1) The Filter stage can send one input row to more than one output link. The Switch stage cannot - like the C switch construct, it has an implicit break in every case.
2) The Switch stage is limited to 128 output links; the Filter stage can have a theoretically unlimited number of output links. (Note: this is not a challenge!)
Question: How can I achieve constraint-based loading using DataStage 7.5? My target tables have interdependencies, i.e. primary key / foreign key constraints. I want my primary key tables to be loaded first and then my foreign key tables, and also the primary key tables should be committed before the foreign key tables are executed. How can I go about it?
Ans: 1) Create a Job Sequencer to load your tables in sequential mode. In the sequencer, call all the primary key table loading jobs first, followed by the foreign key table jobs; trigger the foreign key table load jobs only when the primary key load jobs run successfully (i.e. an OK trigger).
2) To improve the performance of the job, you can disable all the constraints on the tables and load them. Once loading is done, check the integrity of the data, raise the rows that do not meet it as exceptional data and cleanse them. This is only a suggestion; normally when loading with constraints enabled, performance drops drastically.
3) If you use star schema modeling, when you create the physical DB from the model you can delete all constraints, and referential integrity is then maintained in the ETL process by looking up all your dimension keys while loading the fact tables. Once all dimension keys are assigned to a fact, the dimensions and fact can be loaded together; at the same time RI is being maintained at the ETL process level.
Question: How do you merge two files in DS?
Ans: Either use the Copy command as a before-job subroutine if the metadata of the 2 files is the same, or create a job to concatenate the 2 files into one if the metadata is different.

Question: How do you eliminate duplicate rows?
Ans: DataStage provides us with a Remove Duplicates stage in Enterprise Edition. Using that stage we can eliminate duplicates based on a key column.

Question: How do you pass a filename as a parameter to a job?
Ans: During job development we can create a parameter 'FILE_NAME' and the value can be passed at run time.

Question: How did you handle an 'Aborted' sequencer?
Ans: In almost all cases we have to delete the data inserted by it from the DB manually, fix the job, and then run the job again.

Question: Is there a mechanism available to export/import individual DataStage ETL jobs from the UNIX command line?
Ans: Try dscmdexport and dscmdimport. They won't handle the "individual job" requirement; you can only export full projects from the command line. You can find the export and import executables on the client machine, usually someplace like C:\Program Files\Ascential\DataStage.

Question: Diff. between JOIN stage and MERGE stage.
Answer:
JOIN: Performs join operations on two or more data sets input to the stage and then outputs the resulting data set.
MERGE: Combines a sorted master data set with one or more sorted update data sets. The columns from the records in the master and update data sets are merged so that the output record contains all the columns from the master record plus any additional columns from each update record that are required.
A master record and an update record are merged only if both of them have the same values for the merge key column(s) that we specify. Merge key columns are one or more columns that exist in both the master and update records.
Question: Advantages of the DataStage?
Answer: Business advantages:
• Helps in making better business decisions;
• It is able to integrate data coming from all parts of the company;
• It helps to understand new and existing clients;
• We can collect data about different clients and compare them;
• It makes research into new business opportunities possible;
• We can analyze trends in the data it reads.
Technological advantages:
• It handles all company data and adapts to the needs;
• It offers the possibility of organizing complex business intelligence;
• Flexible and scalable;
• It accelerates the running of the project;
• Easily implementable.

DATASTAGE FAQ2

1. What is the architecture of data stage?
Basically the architecture of DS is a client/server architecture: client components & server components. Client components are of 4 types:
1. Data stage designer
2. Data stage administrator
3. Data stage director
4. Data stage manager

Data Stage Designer is used to design the jobs.
Data Stage Manager is used to import & export projects and to view & edit the contents of the repository.
Data Stage Administrator is used for creating projects, deleting projects & setting the environment variables.
Data Stage Director is used to run the jobs, validate the jobs and schedule the jobs.
Server components:
DS server: runs executable server jobs, under the control of the DS Director, that extract, transform, and load data into a DWH.
DS Package Installer: a user interface used to install packaged DS jobs and plug-ins.
Repository or project: a central store that contains all the information required to build a DWH or data mart.

2. What are the stages you worked on?

3. I have some jobs; every month I want to automatically delete the log details. What steps do I have to take for that?
We have to set the AutoPurge option in the DS Administrator.

4. I want to run multiple instances of a job. How can you handle that?
In job properties set the option ALLOW MULTIPLE INSTANCES.

5. What is version controlling in DS?
In DS, version controlling is used to back up the project or jobs.
This option is available in DS 7.1 version onwards.
Version controls are of 2 types:
1. VSS- visual source safe
2. CVSS- concurrent visual source safe.

VSS is designed by Microsoft, but its disadvantage is that only one user can access a file at a time; other users must wait until the first user completes the operation.
With CVSS many users can access concurrently. Compared to VSS, the cost of CVSS is higher.
6. What is the difference between clear log file and clear status file?
Clear log - we can clear the log details by using the DS Director; under the Job menu the Clear Log option is available. By using this option we can clear the log details of a particular job.
Clear status file - lets the user remove the status of the records associated with all stages of selected jobs (in DS Director).

7. I developed 1 job with 50 stages; at run time one stage is missing. How can you identify which stage is missing?
By using the usage analysis tool, which is available in DS Manager, we can find out which items are used in the job.

8. My job takes 30 minutes to run and I want it to run in less than 30 minutes. What steps do we have to take?
By using the performance tuning facilities available in DS we can reduce the time:
In DS Administrator: in-process and inter-process row buffering.
In between passive stages: IPC (inter-process) stage.
OCI stage: Array Size and Transaction Size.
Also use Link Partitioner & Link Collector stages between passive stages.

9. How to do row transposition in DS?
The Pivot stage is used for transposition. Pivot is an active stage that maps sets of columns in an input table to a single column in an output table.

10. If a job is locked by some user, how can you unlock the particular job in DS?
We can unlock the job by using the Clean Up Resources option, which is available in DS Director. Otherwise we can find the PID (process id) and kill the process on the UNIX server.

11. What is a container? How many types of containers are available? Is it possible to use a container as a lookup?
A container is a group of stages and links. Containers enable you to simplify and modularize your server job designs by replacing complex areas of the diagram with a single container stage.
DataStage provides two types of container:
• Local containers. These are created within a job and are only accessible by that job.
• Shared containers. These are created separately and are stored in the Repository in the same way that jobs are. Shared containers can be used by any job in the project.
Yes, we can use a container as a lookup.

12. How to deconstruct the shared container?
To deconstruct a shared container, first you have to convert the shared container to a local container, and then deconstruct the container.

13. I am getting an input value like X = Iconv("31 DEC 1967", "D"). What is the X value?
The X value is zero. The Iconv function converts a string to an internal storage format; it takes 31 Dec 1967 as day zero and counts days from that date (a worked example follows this group of questions).

14. What is Unit testing, integration testing and system testing?
Unit testing: for DS, a unit test will check for data type mismatches, the size of a particular data type, and column mismatches.
Integration testing: according to the dependencies we integrate all jobs into one sequence, called a control sequence.
System testing: system testing is nothing but the performance tuning aspects in DS.

15. What are the command line functions that import and export the DS jobs?
dsimport.exe - to import the DataStage components.
dsexport.exe - to export the DataStage components.

16. How many hashing algorithms are available for static hash files and dynamic hash files?
Sixteen hashing algorithms for static hash files.
Two hashing algorithms for dynamic hash files (GENERAL or SEQ.NUM).

17. What happens when you have a job that links two passive stages together?
Obviously some process has to occur. Under the covers DS inserts a cut-down Transformer stage between the passive stages, which just passes data straight from one stage to the other.
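A worked example of the internal date format from question 13: Iconv("31 DEC 1967", "D") returns 0, Iconv("01 JAN 1968", "D") returns 1, and Iconv("31 DEC 1968", "D") returns 366 (1968 was a leap year), since internal dates simply count days from 31 December 1967.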
18. What is the use of the Nested Condition activity?
Nested Condition: allows you to further branch the execution of a sequence depending on a condition.

19. I have three jobs A, B, C which are dependent on each other. I want to run jobs A & C daily and job B only on Sunday. How can I do it?
First schedule jobs A & C Monday to Saturday in one sequence. Then take the three jobs, ordered according to their dependencies, in one more sequence and schedule that sequence only on Sunday.

My DataStage and Data Warehouse Materials

My DataStage Tips:
1. The Aggregator stage does not support more than one source; if you try to do this you will get the error "The destination stage cannot support any more stream input links".
2. You can give N number of input links to a Transformer stage, but you cannot give a sequential file stage as a reference link. You can give only one sequential file stage as the primary link and a number of other links as reference links. If you try to give a sequential file stage as a reference link you will get the error "The destination stage cannot support any more stream input links", because a reference link represents a lookup table; a sequential file cannot be used as a lookup table, whereas a hashed file can.
Sequential file stage:
• The Sequential file stage is provided by DataStage to access data from a sequential (text) file.
• The access mechanism of a sequential file is sequential order.
•We cannot use a sequential file as a lookup.
• The problem with a sequential file is that we cannot directly 'filter rows', and querying is not supported.
Update actions in Sequential file:
• Overwrite existing file (radio button).
• Append to existing file (radio button).
• Backup existing file (check box).
Hashed file stage:
• A Hashed file stage is used to store data in a hash file.
• A hash file is similar to a text file but the data is organized using a 'hashing algorithm'.
• Basically a hashed file is used for lookup purposes.
• The retrieval of data from a hashed file is faster because it uses a 'hashing algorithm'.
Update actions in Hashed file:
• Clear file before writing.
• Backup existing file.
(Both are check boxes.)
Conformed dimension:
• A dimension table that connects to more than one fact table. We present this same dimension table in both schemas and refer to it as a conformed dimension.

Conformed fact:
• When definitions of measurements (facts) are highly consistent, we call them conformed facts.

Junk dimension:
• A convenient grouping of random flags and attributes, to get them out of a fact table and into a useful dimensional framework.

Degenerate dimension:
• Usually occurs in line-item-oriented fact table designs. Degenerate dimensions are normal, expected and useful.
• The degenerate dimension key should be the actual production order number, and should sit in the fact table without a join to anything.

Time dimension:
• It contains a number of useful attributes for describing calendars and navigating them.
• An explicit time dimension is required because SQL date semantics and functions cannot generate several important features and attributes required for analytical purposes.
• Attributes like weekdays, weekends, holidays and fiscal periods cannot be generated by SQL statements.

Factless fact table:
• Fact tables which do not have any facts are called factless fact tables; they may consist only of keys.
• The first type of factless fact table records an 'event'. Many event-tracking tables in dimensional data warehouses turn out to be factless. Ex: a student tracking system that details each 'student attendance' event each day.
• The second type of factless fact table is coverage. Coverage tables are frequently needed when a primary fact table in a dimensional DWH is sparse. Ex: a sales fact table that records the sales of products in stores on particular days under each promotion condition.

Types of facts:
• Additive: facts involved in calculations for the purpose of deriving summarized data across any dimension.
• Semi-additive: facts that can be involved in calculations only in a particular context of time.
• Non-additive: facts that cannot be involved in such calculations at any point.

Link Collector collecting methods:
Round-Robin. This is the default method. Using the round-robin method the stage will read a row from each input link in turn.
Sort/Merge. Using the sort/merge method the stage reads multiple sorted inputs and writes one sorted output.
• Sort Key. This property is only significant where you have chosen the Sort/Merge collecting algorithm. It defines how each of the partitioned data sets is known to be sorted and how the merged output will be sorted. The key has the following format:
Columnname [sortorder] [, Columnname [sortorder]]...
Columnname specifies one (or more) columns to sort on.
sortorder defines the sort order as follows:
In an NLS environment, the collate convention of the locale may affect the sort order. The default collate convention is set in the DataStage Administrator, but can be set for individual jobs in the Job Properties dialog box.
Ascending order: a, asc, ascending, A, ASC, ASCENDING
Descending order: d, dsc, descending, D, DSC, DESCENDING
For example: FIRSTNAME d, SURNAME D
This specifies that rows are sorted according to the FIRSTNAME column and then the SURNAME column, in descending order.
The Link Collector stage can have up to 64 input links. This is where the data to be collected arrives. The Input Name drop-down list on the Inputs page allows you to select which of the 64 links you are looking at.
The Link Collector stage can have a single output link.

DataStage Tutorial
1. About DataStage
2. Client Components
3. DataStage Designer.
4. DataStage Director
5. DataStage Manager
6. DataStage Administrator
7. DataStage Manager Roles
8. Server Components
9. DataStage Features
10. Types of Jobs
11. DataStage NLS
12. JOB
13. Aggregator
14. Hashed File
15. UniVerse

About DataStage
DataStage is a tool set for designing, developing, and running applications that populate one or more tables in a data warehouse or data mart. It consists of client and server components.
Client Components
DataStage Designer. A design interface used to create DataStage applications (known as jobs). Each job specifies the data sources, the transforms required, and the destination of the data. Jobs are compiled to create executables that are scheduled by the Director and run by the Server.
DataStage Director. A user interface used to validate, schedule, run, and monitor DataStage jobs.
DataStage Manager. A user interface used to view and edit the contents of the Repository.
DataStage Administrator. A user interface used to configure DataStage.

DataStage Manager Roles
• Import table or stored procedure definitions
• Create table or stored procedure definitions, data elements, custom transforms, server job routines, mainframe routines, machine profiles, and plug-ins
There are also more specialized tasks that can only be performed from the DataStage Manager. These include:
• Perform usage analysis queries.
• Report on Repository contents.
• Importing, exporting and packaging DataStage jobs.

Server Components
There are three server components which are installed on a server:
• Repository. A central store that contains all the information required to build a data mart or data warehouse.
• DataStage Server. Runs executable jobs that extract, transform, and load data into a data warehouse.
• DataStage Package Installer. A user interface used to install packaged DataStage jobs and plug-ins.

DataStage Features
Extracts data from any number or types of database.
Handles all the meta data definitions required to define your data warehouse.
Aggregates data. You can modify SQL SELECT statements used to extract data.

Transforms data. DataStage has a set of predefined transforms and functions you can use to convert your data.
Loads the data warehouse.

Types of jobs
There are three basic types of DataStage job:
• Server jobs. These are compiled and run on the DataStage server. A server job will connect to databases on other machines as necessary, extract data, process it, then write the data to the target data warehouse.
• Parallel jobs. These are available only if you have Enterprise Edition installed. Parallel jobs are compiled and run on a DataStage UNIX server, and can be run in parallel on SMP, MPP, and cluster systems.
• Mainframe jobs. These are available only if you have Enterprise MVS Edition installed. A mainframe job is compiled and run on the mainframe. Data extracted by such jobs is then loaded into the data warehouse.
There are two other entities that are similar to jobs in the way they appear in the DataStage Designer, and are handled by it. These are:
Shared containers. These are reusable job elements. They typically comprise a number of stages and links. Copies of shared containers can be used in any number of server jobs and edited as required.
Job Sequences. A job sequence allows you to specify a sequence of DataStage jobs to be executed, and actions to take depending on results.
DataStage NLS
• Process data in a wide range of languages
• Accept data in any character set into most DataStage fields
• Use local formats for dates, times, and money (Server Jobs)
• Sort data according to local rules
JOB
A job consists of stages linked together which describe the flow of data from a data source to a final data warehouse.
Built-In Stages – Server Jobs
Aggregator. Aggregator stages are active stages that classify data rows from a single input link into groups and compute totals or other aggregate functions for each group. The summed totals for each group are output from the stage via an output link.
Hashed File. Extracts data from or loads data into databases that contain hashed files. Also acts as an intermediate stage for quick lookups. Hashed File stages represent a hashed file, i.e., a file that uses a hashing algorithm for distributing records in one or more groups on disk. You can use a Hashed File stage to extract or write data, or to act as an intermediate file in a job. The primary role of a Hashed File stage is as a reference table based on a single key field. Each Hashed File stage can have any number of inputs or outputs. Input links specify the data you are writing. Output links specify the data you are extracting.
UniVerse. Extracts data from or loads data into UniVerse databases.
UniData. Extracts data from or loads data into UniData databases.
ODBC. Extracts data from or loads data into databases that support the industry standard Open Database Connectivity API. This stage is also used as an intermediate stage for aggregating data.
ODBC stages are used to represent a database that supports the industry standard Open Database Connectivity API. You can use an ODBC stage to extract, write, or aggregate data.
Each ODBC stage can have any number of inputs or outputs. Input links specify the data you are writing. Output links specify the data you are extracting and any aggregations required.
Sequential File. Extracts data from or loads data into "flat files" in the Windows NT file system. Sequential File stages are used to extract data from, or write data to, a text file in the server file system. The text file can be created or exist on any drive that is either local or mapped to the server. Each Sequential File stage can have any number of inputs or outputs.
Folder Stage. Folder stages are used to read or write data as files in a directory located on the DataStage server. The folder stages can read multiple files from a single directory and can deliver the files to the job as rows on an output link. By default, the file content is delivered with newlines converted to char(254) field marks. The folder stage can also write rows of data as files to a directory. The rows arrive at the stage on an input link.
Transformer. Receives incoming data, transforms it in a variety of ways, and outputs it to another stage in the job. Transformer stages do not extract data or write data to a target database. They are used to handle extracted data, perform any conversions required, and pass data to another Transformer stage or a stage that writes data to a target data table.
Transformer stages in server jobs can have any number of inputs and outputs. The link from the main data input source is designated the primary input link. There can only be one primary input link, but there can be any number of reference inputs.
Container. Represents a group of stages and links. The group is replaced by a single Container stage in the Diagram window. IPC Stage. Provides a communication channel between DataStage processes running simultaneously in the same job. It allows you to design jobs that run on SMP systems with great performance benefits.
An inter-process (IPC) stage is a passive stage which provides a communication channel between DataStage processes running simultaneously in the same job. It allows you to design jobs that run on SMP systems with great performance benefits. To understand the benefits of using IPC stages, you need to know a bit about how DataStage jobs actually run as processes; see Chapter 2 of the Server Job Developer's Guide for information.
The output link connecting the IPC stage to the stage reading data can be opened as soon as the input link connected to the stage writing data has been opened. You can use Inter-process stages to join passive stages together. For example, you could use them to speed up data transfer between two data sources.
Link Collector Stage. Takes up to 64 inputs and allows you to collect data from these links and route it along a single output link.
Job Sequences
DataStage provides a graphical Job Sequencer which allows you to specify a sequence of server or parallel jobs to run. The sequence can also contain control information, for example, you can specify different courses of action to take depending on whether a job in the sequence succeeds or fails. Once you have defined a job sequence, it can be scheduled and run using the DataStage Director. It appears in the DataStage Repository and in the DataStage Director client as a job.
LEARN FEATURES OF DATASTAGE
DATASTAGE: DataStage has the following features to aid the design and processing required to build a data warehouse: Uses graphical design tools. With simple point-and-click techniques you can draw a scheme to represent your processing requirements. Extracts data from any number or type of database. Handles all the metadata definitions required to define your data warehouse. You can view and modify the table definitions at any point during the design of your application.
Aggregates data. You can modify SQL SELECT statements used to extract data. Transforms data. DataStage has a set of predefined transforms and functions you can use to convert your data. You can easily extend the functionality by defining your own transforms to use.
Loads the data warehouse. COMPONENTS OF DATASTAGE: DataStage consists of a number of client and server components. DataStage has four client components 1.DataStage Designer. A design interface used to create DataStage applications (known as jobs). Each job specifies the data sources, the transforms required, and the destination of the data. Jobs are compiled to create executables that are scheduled by the Director
and run by the Server (mainframe jobs are transferred and run on the mainframe). 2.DataStage Director. A user interface used to validate, schedule, run, and monitor DataStage server jobs and parallel jobs. 3.DataStage Manager. A user interface used to view and edit the contents of the Repository. 4.DataStage Administrator. A user interface used to perform administration tasks such as setting up DataStage users, creating and moving projects, and setting up purging criteria.
SERVER COMPONENTS: There are three server components: 1. Repository. A central store that contains all the information required to build a data mart or data warehouse. 2. DataStage Server. Runs executable jobs that extract, transform, and load data into a data warehouse. 3. DataStage Package Installer. A user interface used to install packaged DataStage jobs and plug-ins.
DATASTAGE PROJECTS: You always enter DataStage through a DataStage project. When you start a DataStage client you are prompted to attach to a project. Each project contains:
•DataStage jobs.
•Built-in components. These are predefined components used in a job.
•User-defined components. These are customized components created using the

DataStage Manager. Each user-defined component performs a specific task in a job. DATASTAGE JOBS: There are three basic types of DataStage job: 1.Server jobs. These are compiled and run on the DataStage server. A server job will connect to databases on other machines as necessary, extract data, process it, then write the data to the target data warehouse.
2.Parallel jobs. These are compiled and run on the DataStage server in a similar way to server jobs, but support parallel processing on SMP, MPP, and cluster systems.
3.Mainframe jobs. These are available only if you have Enterprise MVS Edition installed. A mainframe job is compiled and run on the mainframe. Data extracted by such jobs is then loaded into the data warehouse.
SPECIAL ENTITIES: •Shared containers. These are reusable job elements. They typically comprise a number of stages and links. Copies of shared containers can be used in any number of server jobs or parallel jobs and edited as required. •Job Sequences. A job sequence allows you to specify a sequence of DataStage jobs to be executed, and actions to take depending on results. TYPES OF STAGES: •Built-in stages. Supplied with DataStage and used for extracting, aggregating, transforming, or writing data. All types of job have these stages. •Plug-in stages. Additional stages that can be installed in DataStage to perform specialized tasks that the built-in stages do not support. Server jobs and parallel jobs can make use of these. •Job Sequence Stages. Special built-in stages which allow you to define sequences of activities to run. Only Job Sequences have these.
DATASTAGE NLS: DataStage has built-in National Language Support (NLS). With NLS installed, DataStage can do the following: •Process data in a wide range of languages
•Accept data in any character set into most DataStage fields
•Use local formats for dates, times, and money (server jobs)
•Sort data according to local rules

To load a data mart or data warehouse, you must do the following: • Set up your project
•Create a job
•Develop the job
•Edit the stages in the job
•Compile the job
•Run the job

SETTING UP YOUR PROJECT: Before you create any DataStage jobs, you must set up your project by entering information about your data. This includes the name and location of the tables or files holding your data and a definition of the columns they contain. Information is stored in table definitions in the Repository.
STARTING THE DATASTAGE DESIGNER: To start the DataStage Designer, choose Start → Programs → Ascential DataStage → DataStage Designer. The Attach to Project dialog box appears: TO CONNECT TO A PROJECT: 1. Enter the name of your host in the Host system field. This is the name of the system where the DataStage Server components are installed. 2. Enter your user name in the User name field. This is your user name on the server system.
3. Enter your password in the Password field.
4. Choose the project to connect to from the Project drop-down list box.

5. Click OK. The DataStage Designer window appears with the New dialog box open, ready for you to create a new job:
CREATING A JOB: Jobs are created using the DataStage Designer. For this example, you need to create a server job, so double-click the New Server Job icon. Choose File → Save to save the job. The Create new job dialog box appears:
DEFINING TABLE DEFINITIONS: For most data sources, the quickest and simplest way to specify a table definition is to import it directly from your data source or data warehouse.
IMPORTING TABLE DEFINITIONS: 1. In the Repository window of the DataStage Designer, select the Table Definitions branch, and choose Import → Table Definitions… from the shortcut menu. The Import Metadata (ODBC Tables) dialog box appears: 2. Choose a data source name from the DSN drop-down list box. 3. Click OK. The updated Import Metadata (ODBC Tables) dialog box displays all the files for the chosen data source name: 4. Select project.EXAMPLE1 from the Tables list box, where project is the name of your DataStage project. 5. Click OK. The column information from EXAMPLE1 is imported into DataStage. 6. A table definition is created and is stored under the Table Definitions → ODBC → DSNNAME branch in the Repository. The updated DataStage Designer window displays the new table definition entry in the Repository window.
DEVELOPING A JOB: Jobs are designed and developed using the Designer. The job design is developed in the Diagram window (the one with grid lines). Each data source, the data warehouse, and each processing step is represented by a stage in the job design. The stages are linked together to show the flow of data.
For example, we can develop a job with the following three stages: • A UniVerse stage to represent EXAMPLE1 (the data source). • A Transformer stage to convert the data in the DATE column from a YYYY-MM-DD date in internal date format to a string giving just year and month (YYYY-MM). • A Sequential File stage to represent the file created at run time (the data warehouse in this example).
Adding Stages: Stages are added using the tool palette. This palette contains icons that represent the components you can add to a job. The palette has different groups to organize the tools available.
To add a stage: 1. Click the stage button on the tool palette that represents the stage type you want to add. 2. Click in the Diagram window where you want the stage to be positioned. The stage appears in the Diagram window as a square. You can also drag items from the palette to the Diagram window.
We recommend that you position your stages as follows:
Data sources on the left
Data warehouse on the right
Transformer stage in the center

When you add stages, they are automatically assigned default names. These names are based on the type of stage and the number of the item in the Diagram window. You can use the default names in the example.
Once all the stages are in place, you can link them together to show the flow of data. Linking Stages You need to add two links:
• One between the Universe and Transformer stages
• One between the Transformer and Sequential File stages

Links are always made in the direction the data will flow, that is, usually left to right. When you add links, they are assigned default names. You can use the default names in the example.
To add a link: 1. Right-click the first stage, hold the mouse button down and drag the link to the Transformer stage. Release the mouse button. 2. Right-click the Transformer stage and drag the link to the Sequential File stage. The following screen shows how the Diagram window looks when you have added the stages and links:
Editing the Stages
Your job design currently displays the stages and the links between them. You must edit each stage in the job to specify the data to use and what to do with it. Stages are edited in the job design by double-clicking each stage in turn. Each stage type has its own editor.
Editing the UniVerse Stage
The data source (EXAMPLE1) is represented by a UniVerse stage. You must specify the data you want to extract from this file by editing the stage. Double-click the stage to edit it. The UniVerse Stage dialog box appears: This dialog box has two pages: • Stage. Displayed by default. This page contains the name of the stage you are editing. The General tab specifies where the file is found and the connection type. • Outputs. Contains information describing the data flowing from the stage. You edit this page to describe the data you want to extract from the file. In this example, the output from this stage goes to the Transformer stage. To edit the UniVerse stage:
1. Check that you are displaying the General tab on the Stage page. Choose localuv from the Data source name drop-down list. localuv is where EXAMPLE1 is copied to during installation. The remaining parameters on the General and Details tabs are used to enter logon details and describe where to find the file. Because EXAMPLE1 is installed in localuv, you do not have to complete these fields, which are disabled.
2. Click the Outputs tab. The Outputs page appears: The Outputs page contains the name of the link the data flows along and the following four tabs:
• General. Contains the name of the table to use and an optional description of the link.
• Columns. Contains information about the columns in the table.
• Selection. Used to enter an optional SQL SELECT clause (an Advanced procedure).
• View SQL. Displays the SQL SELECT statement used to extract the data.
3. Choose dstage.EXAMPLE1 from the Available tables drop-down list.
4. Click Add to add dstage.EXAMPLE1 to the Table names field.
5. Click the Columns tab. The Columns tab appears at the front of the dialog box. You must specify the columns contained in the file you want to use. Because the column definitions are stored in a table definition in the Repository, you can load them directly. 6. Click Load…. The Table Definitions window appears with the UniVerse → localuv branch highlighted. 7. Select dstage.EXAMPLE1. The Select Columns dialog box appears, allowing you to select which column definitions you want to load. 8. In this case you want to load all available column definitions, so just click OK. The column definitions specified in the table definition are copied to the stage. The Columns tab contains definitions for the four columns in EXAMPLE1:
9. You can use the Data Browser to view the actual data that is to be output from the UniVerse stage. Click the View Data… button to open the Data Browser window. 11. Choose File → Save to save your job design so far.
Editing the Transformer Stage
The Transformer stage performs any data conversion required before the data is output to another stage in the job design. In this example, the Transformer stage is used to convert the data in the DATE column from a YYYY-MM-DD date in internal date format to a string giving just the year and month (YYYY-MM).
There are two links in the stage:
• The input from the data source (EXAMPLE1)
• The output to the Sequential File stage

To enable the use of one of the built-in DataStage transforms, you will assign data elements to the DATE columns input and output from the Transformer stage. A DataStage data element defines more precisely the kind of data that can appear in a given column. In this example, you assign the Date data element to the input column, to specify the date is input to the transform in internal format, and the MONTH.TAG data element to the output column, to specify that the transform produces a string of the format YYYY-MM. Double-click the Transformer stage to edit it. The Transformer Editor appears: 1. Working in the upper-left pane of the Transformer Editor, select the input columns that you want to derive output columns from. Click on the CODE, DATE, and QTY columns while holding down the Ctrl key.
2. Click the left mouse button again and, keeping it held down, drag the selected columns to the output link in the upper-right pane. Drop the columns over the Column Name field by releasing the mouse button. The columns appear in the top pane and the associated metadata appears in the lower-right pane:
3. In the Data element field for the DSLink3.DATE column, select Date from the drop-down list. 4. In the SQL type field for the DSLink4 DATE column, select Char from the drop-down list. 5. In the Length field for the DSLink4 DATE column, enter 7. 6. In the Data element field for the DSLink4 DATE column, select MONTH.TAG from the drop-down list. Next you will specify the transform to apply to the input DATE column to produce the output DATE column. You do this in the upper right pane of the Transformer Editor.
7. Double-click the Derivation field for the DSLink4 DATE column. The Expression Editor box appears. At the moment, the box contains the text DSLink3.DATE, which indicates that the output is directly derived from the input DATE column. Select the text DSLink3 and delete it by pressing the Delete key. 10. Select the MONTH.TAG transform. It appears in the Expression Editor box with the argument field [%Arg1%] highlighted. 11. Right-click to open the Suggest Operand menu again. This time, select Input Column. A list of available input columns appears: 12. Select DSLink3.DATE. This then becomes the argument for the transform. 13. Click OK to save the changes and exit the Transformer Editor. Once more the small icon appears on the output link from the transformer stage to indicate that the link now has column definitions associated with it.
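When you finish, the completed derivation for the output DATE column should read as follows (this is simply the example's input column wrapped in the built-in transform, producing the YYYY-MM string):

MONTH.TAG(DSLink3.DATE)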
Editing the Sequential File Stage The data warehouse is represented by a Sequential File stage. The data to be written to the data warehouse is already specified in the Transformer stage. However, you must enter the name of a file to which the data is written when the job runs. If the file does not exist, it is created. Double-click the stage to edit it. The Sequential File Stage dialog box appears:
This dialog box has two pages: • Stage. Displayed by default. This page contains the name of the stage you are editing and two tabs. The General tab specifies the line termination type, and the NLS tab specifies a character set map to use with the stage (this appears if you have NLS installed).
• Inputs. Describes the data flowing into the stage. This page only appears when you have an input to a Sequential File stage. You do not need to edit the column definitions on this page, because they were all specified in the Transformer stage.
To edit the Sequential File stage: 1. Click the Inputs tab. The Inputs page appears. This page contains: • The name of the link. This is automatically set to the link name used in the job design. • General tab. Contains the pathname of the file, an optional description of the link, and update action choices. You can use the default settings for this example, but you may want to enter a file name (by default the file is named after the input link).
• Format tab. Determines how the data is written to the file. In this example, the data is written using the default settings, that is, as a comma-delimited file. • Columns tab. Contains the column definitions for the data you want to extract. This tab contains the column definitions specified in the Transformer stage's output link. 2. Enter the pathname of the text file you want to create in the File name field, for example, seqfile.txt. By default the file is placed in the server project directory (for example, c:\Ascential\DataStage\Projects\datastage) and is named after the input link, but you can enter, or browse for, a different directory.
3. Click OK to close the Sequential File Stage dialog box.
4. Choose File → Save to save the job design.
The job design is now complete and ready to be compiled.

Compiling a Job When you finish your design you must compile it to create an executable job. Jobs are compiled using the Designer. To compile the job, do one of the following:
• Choose File → Compile.
• Click the Compile button on the toolbar.
The Compile Job window appears:

Running a Job
Executable jobs are scheduled by the DataStage Director and run by the DataStage Server. You can start the Director from the Designer by choosing Tools → Run Director.
Defining Aggregator Input Data
The Inputs page has the following field and two tabs:
• Input name. The name of the input link to the Aggregator stage.
• General. Displayed by default. Contains an optional description of the link.

• Columns. Contains a grid displaying the column definitions for the data being written to the stage, and an optional sort order. The grid has the following columns:
Column name: The name of the column.
Sort Order: Specifies the sort order. This field is blank by default, that is, there is no sort order. Choose Ascending for ascending order, Descending for descending order, or Ignore if you do not want the order to be checked.
Key: Indicates whether the column is part of the primary key.
SQL type: The SQL data type.
Length: The data precision. This is the length for CHAR data and the maximum length for VARCHAR data.
Scale: The data scale factor.
Nullable: Specifies whether the column can contain null values.
Display: The maximum number of characters required to display the column data.
Data element: The type of data in the column.
Description: A text description of the column.

Defining Aggregator Output Data
When you output data from an Aggregator stage, the properties of output links and the column definitions of the data are defined on the Outputs page in the Aggregator Stage dialog box.
The Outputs page has the following field and two tabs: • Output name. The name of the output link. Choose the link to edit from the Output name drop-down list box. This list box displays all the output links from the stage. • General. Displayed by default. Contains an optional description of the link. • Columns. Contains a grid displaying the column definitions for the data being output from the stage. The grid has the following columns: Column name. The name of the column. Group. Specifies whether to group by the data in the column. Derivation. Contains an expression specifying how the data is aggregated. This is a complex cell, requiring more than one piece of information. Double-clicking the cell opens the Derivation dialog box.
Transformer Stages
Transformer stages do not extract data or write data to a target database. They are used to handle extracted data, perform any conversions required, and pass data to another Transformer stage or a stage that writes data to a target data table.
Using a Transformer Stage Transformer stages can have any number of inputs and outputs. The link from the main data input source is designated the primary input link. There can only be one primary input link, but there can be any number of reference inputs.
When you edit a Transformer stage, the Transformer Editor appears. An example Transformer stage is shown below. In this example, metadata has been defined for the input and the output links.
Link Area
The top area displays links to and from the Transformer stage, showing their columns and the relationships between them. The link area is where all column definitions, key expressions, and stage variables are defined. The link area is divided into two panes; you can drag the splitter bar between them to resize the panes relative to one another. There is also a horizontal scroll bar, allowing you to scroll the view left or right. The left pane shows input links, the right pane shows output links. The input link shown at the top of the left pane is always the primary link. Any subsequent links are reference links. For all types of link, key fields are shown in bold. Reference link key fields that have no expression defined are shown in red (or the color defined in Tools → Options), as are output columns that have no derivation defined.
Within the Transformer Editor, a single link may be selected at any one time. When selected, the link's title bar is highlighted, and arrowheads indicate any selected columns.
Metadata Area
The bottom area shows the column metadata for input and output links. Again this area is divided into two panes: the left showing input link metadata and the right showing output link metadata. The metadata for each link is shown in a grid contained within a tabbed page. Click the tab to bring the required link to the front. That link is also selected in the link area.
If you select a link in the link area, its metadata tab is brought to the front automatically. You can edit the grids to change the column metadata on any of the links. You can also add and delete metadata.
Input Links The main data source is joined to the Transformer stage via the primary link, but the stage can also have any number of reference input links. A reference link represents a table lookup. These are used to provide information that might affect the way the data is changed, but do not supply the actual data to be changed. Reference input columns can be designated as key fields. You can specify key expressions that are used to evaluate the key fields. The most common use for the key expression is to specify an equi-join, which is a link between a primary link column and a reference link column. For example, if your primary input data contains names and addresses, and a reference input contains names and phone numbers, the reference link
name column is marked as a key field and the key expression refers to the primary link’s name column. During processing, the name in the primary input is looked up in the reference input. If the names match, the reference data is consolidated with the primary data. If the names do not match, i.e., there is no record in the reference input whose key matches the expression given, all the columns specified for the reference input are set to the null value.
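As a concrete illustration of such a key expression (the link name DSLink1 is assumed here for illustration; it is not from the text): if the reference link's NAME column is the key field, its key expression would simply reference the primary link's column:

DSLink1.NAME

During processing, each primary row's name value is then used as the lookup key into the reference table.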
Output Links You can have any number of output links from your Transformer stage. You may want to pass some data straight through the Transformer stage unaltered, but it’s likely that you’ll want to transform data from some input columns before outputting it from the Transformer stage. You can specify such an operation by entering a BASIC expression or by selecting a transform to apply to the data. DataStage has many built-in transforms, or you can define your own custom transforms that are stored in the Repository and can be reused as required. The source of an output link column is defined in that column’s
Derivation cell within the Transformer Editor. You can use the Expression Editor to enter expressions or transforms in this cell. You can also simply drag an input column to an output column's Derivation cell, to pass the data straight through the Transformer stage. In addition to specifying derivation details for individual output columns, you can also specify constraints that operate on entire output links. A constraint is a BASIC expression that specifies criteria that data must meet before it can be passed to the output link. You can also specify a reject link, which is an output link that carries all the data not output on other links, that is, columns that have not met the criteria. Each output link is processed in turn. If the constraint expression evaluates to TRUE for an input row, the data row is output on that link. Conversely, if a constraint expression evaluates to FALSE for an input row, the data row is not output on that link.
Constraint expressions on different links are independent. If you have more than one output link, an input row may result in a data row being output from some, none, or all of the output links. For example, if you consider the data that comes from a paint shop, it could include information about any number of different colors. If you want to separate the colors into different files, you would set up different constraints. You could output the information about green and blue paint on Link A, red and yellow paint on Link B, and black paint on Link C. When an input row contains information about yellow paint, the Link A constraint expression evaluates to FALSE and the row is not output on Link A. However, the input data does satisfy the constraint criterion for
Link B and the rows are output on Link B. If the input data contains information about white paint, this does not satisfy any constraint and the data row is not output on Links A, B or C, but will be output on the reject link. The reject link is used to route data to a table or file that is a “catch-all” for rows that are not output on any other link. The table or file containing these rejects is represented by another stage in the job design.
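A hedged sketch of the constraint expressions for this paint-shop example (the link name DSLink3 and column name COLOUR are assumed for illustration):

Link A constraint: DSLink3.COLOUR = "Green" Or DSLink3.COLOUR = "Blue"
Link B constraint: DSLink3.COLOUR = "Red" Or DSLink3.COLOUR = "Yellow"
Link C constraint: DSLink3.COLOUR = "Black"

A white-paint row fails all three expressions, so it is output only on the designated reject link.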
Inter-Process Stages
An inter-process (IPC) stage is a passive stage which provides a communication channel between DataStage processes running simultaneously in the same job. It allows you to design jobs that run on SMP systems with great performance benefits. To understand the benefits of using IPC stages, you need to know a bit about how DataStage jobs actually run as processes.
In this example the job will run as two processes, one handling the communication from the Sequential File stage to the IPC stage, and one handling communication from the IPC stage to the ODBC stage. As soon as the Sequential File stage has opened its output link, the IPC stage can start passing data to the ODBC stage. If the job is running on a multi-processor system, the two processes can run simultaneously so the transfer will be much faster. You can also use the IPC stage to explicitly specify that connected active stages should run as
separate processes. This is advantageous for performance on multi-processor systems. You can also specify this behavior implicitly by turning inter-process row buffering on, either for the whole project via DataStage Administrator, or individually for a job in its
Job Properties dialog box.
Using the IPC Stage
When you edit an IPC stage, the InterProcess Stage dialog box appears. This dialog box has three pages: • Stage. The Stage page has two tabs, General and Properties. The General tab allows you to specify an optional description of the page. The Properties tab allows you to specify stage properties.
• Inputs. The IPC stage can only have one input link. The Inputs page displays information about that link. • Outputs. The IPC stage can only have one output link. The Outputs page displays information about that link.
Defining IPC Stage Properties
The Properties tab allows you to specify two properties for the IPC stage: • Buffer Size. Defaults to 128 Kb. The IPC stage uses two blocks of memory; one block can be written to while the other is read from. This property defines the size of each block, so that by default 256 Kb is allocated in total.
• Timeout. Defaults to 10 seconds. This gives a time limit for how long the stage will wait for a process to connect to it before timing out. This normally will not need changing, but may be important where you are prototyping multi-processor jobs on single-processor platforms and there are likely to be delays.
Defining IPC Stage Input Data The IPC stage can have one input link. This is where the process that is writing connects.
The Inputs page has two tabs: General and Columns.
• General. The General tab allows you to specify an optional description of the stage.
• Columns. The Columns tab contains the column definitions for the data on the input link. This is normally populated by the metadata of the stage connecting on the input side. You can also Load a column definition from the Repository, or type one in yourself (and Save it to the Repository if required). Note that the metadata on the input link must be identical to the metadata on the output link.
Defining IPC Stage Output Data The IPC stage can have one output link. This is where the process that is reading connects.
The Outputs page has two tabs: General and Columns.
• General. The General tab allows you to specify an optional description of the stage.
• Columns. The Columns tab contains the column definitions for the data on the output link. This is normally populated by the metadata of the stage connecting on the input side. You can also Load a column definition from the Repository, or type one in yourself (and Save it to the Repository if required). Note that the metadata on the output link must be identical to the metadata on the input link.
Link Partitioner Stage: The Link Partitioner stage is an active stage which takes one input and allows you to distribute partitioned rows to up to 64 output links. The stage expects the output links to use the same metadata as the input link.

Convert NonOsDelim To OsDelim In TargetFileList
OsCmd = 'Copy' : ' ' : Flag : ' ' : SourceFileList : ' ' : TargetFileList
Call DSLogInfo('Copying ' : SourceFileList : ' to ' : TargetFileList, RoutineName)
Call DSExecute(OsType,OsCmd,OsOutput,OsStatus)
If OsStatus Then

* Routine to look to see if the key passed exists in the file passed
* If so, then the non-key field from the file is returned
* If not found, "***Not Found***" is returned
** The routine requires the UniVerse file named to have been created
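The body of that lookup routine is cut off in this document; a minimal sketch of what the comments describe might look like this (file and variable names are assumed, not the original code):

Open FileName To F.Hash Then
   Read Rec From F.Hash, Key Then
      Ans = Rec<1>              ;* key found - return the non-key field
   End Else
      Ans = "***Not Found***"   ;* key not present in the file
   End
End Else
   Ans = "***Not Found***"      ;* file could not be opened
End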

Call DSExecute(OsType, OsCmd, OsOutput, OsStatus)
If OsStatus Then
   Call DSLogInfo('The move command (' : OsCmd : ') returned status ' : OsStatus : ':' : @FM : OsOutput, 'DSMoveFiles')
End Else
   Call DSLogInfo('Files moved...', 'DSMoveFiles')
End
Ans = OsStatus

Routine Name: ErrorMgmtDummy
Value: The Value to Be Mapped
FieldName: The Name of the source field that the Value is contained in
Format: The name of the Hash file containing the mapping data
Default: The Default value to return if value is not found
Msg: Any text you want to store against an error
SeverityInd: The Error Severity Indicator: I-Information, W-Warning, E-Error, F-Fatal
ErrorLogInd: An Indicator to indicate if errors should be logged (Note this is not yet implemented)
HashFileLocation: A Hashfile could be either local to the Module or Generic. Enter "G" for Generic, "L" for Local

* FUNCTION Map(Value,FieldName,Format,Default,Msg,ErrorLogInd)
** Executes a lookup against a hashed file using a key
** Input Parameters : Arg1: Value = The Value to be Mapped or checked
*                     Arg2: FieldName = The Name of the field that is either the Target of the Derivation or the source Field that value is contained in
*                     Arg3: Format = The name of the Hash file containing the mapping data
*                     Arg4: Default = The Default value to return if value is not found
*                     Arg5: Msg = Any text you want stored against an error
*                     Arg6: SeverityInd = An Indicator of the Severity Level
*                     Arg7: ErrorLogInd = An Indicator to indicate if errors should be logged
*                     Arg8: HashfileLocation = An Indicator of whether the hash file is Generic ("G") or Local ("L") (Note this is not yet implemented)
** Return Values: If the Value is not found, return value is: -1. or

If (Ans = "-1" or Ans = "-2" or UpCase(Ans)= "BLOCKED") and ErrorLogInd = "Y" Then Ret_Code=LogToHashFile(Mod_Run_Num,Ticket_Group,Ticket_Sequence,Set_Key ,Table,FieldName,Chk_Value,Ans,Msg,SeverityInd) End RETURN(Ans) FileExists:
Checks whether the file passed as Filename exists.
Function FileExists(Filename)
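The body of FileExists is also cut off here; a minimal sketch of an assumed implementation (not the original code) would return 1 if the file can be opened for sequential access, otherwise 0:

OpenSeq Filename To FileVar Then
   CloseSeq FileVar   ;* the file opened, so it exists
   Ans = 1
End Else
   Ans = 0            ;* the file could not be opened
End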

End
* Gets the timestamp. Doesn't handle the case where there are suffix types and the timestamp only contains 5 digits without "_" in between
If Index(File_Name, "_", 6) = 0 Then

vString = '-' : Trim(vString, '0', 'L')
End Else
vString = Trim(Arg1, '0', 'L')
End
Ans = vString
End
FTPFile:
Script_Path: The path to where the Unix Script file lives
File_Path: The Value to Be Mapped
File_Name: The Name of the source field that the Value is contained in
IP_Address: The name of the Hash file containing the mapping data
User_ID: The Default value to return if value is not found
Password: Any text you want to store against an error
Target_Path: The target path where the file is to be saved on the target server

IF vParamFile AND FileFound(vParamFile) Then
OPENSEQ vParamFile TO hParamFile Then
Loop
READSEQ vLineRaw FROM hParamFile ON ERROR
Call DSLogWarn('Error from ':vParamFile:'; Status = ':STATUS(),vCurRoutineName)
CLOSE hParamFile
vFailed = @TRUE

* Attempt to find the table name in our cache.
Locate FileName in FilesOpened Setting POS Then
Write Err_Rec To FileHandles(POS), Key Then

TAns = 0
End Else
TAns = -1
End
End Else
* Table is not in cache of opened tables, so open it.
Openpath PathFile To FileHandles(POS) Then
FilesOpened = FileName
Write Err_Rec To FileHandles(POS), Key Then TAns = 0
Else

TAns = -1
End
End Else
TAns = -2
End
End
Ans = TAns
RETURN(Ans)

MandatoryFieldCheck: Check whether the field name passed is mandatory
Routine to check to see if the passed field is populated, and if not, to check to see if it is mandatory. If the field contains "?", then it is handled as if it is blank. The routine uses a control table containing process name, field name, group name and exclusion flag to control whether the field is mandatory or not. The routine arguments are the field name, the field, the group key, whether this is
the first mandatory check for the record, and the process name when the first check
flag is "Y".

A variable kept in memory (Mandlist) is used to record the mandatory check failures. When the passed field name is "Getmand", no processing is performed except to return the Mandlist field.
* Routine to check whether the passed field is filled, and if not, whether it is mandatory.
** The routine uses a UniVerse file "MANDATORY_FIELD_HF" which contains the mandatory field controls
** Arg1 Field name to be checked (literal)
* Arg2 Field value
* Arg3 Group name
* Arg4 1st call for record
* Arg5 The process name on the first call (this is saved in storage for subsequent calls)
** Declare shared memory storage.

Then Mandlist = Arg1 Else Mandlist = Mandlist : "," : Arg1 End
End

Map: (Routine Name)
* FUNCTION Map(Value,FieldName,Format,Default,Msg,ErrorLogInd)
** Executes a lookup against a hashed file using a key
** Input Parameters : Arg1: Value = The Value to Be Mapped
*                     Arg2: FieldName = The Name of the field that is either the Target of the Derivation or the source Field that value is contained in
*                     Arg3: Format = The name of the Hash file containing the mapping data
*                     Arg4: Default = The Default value to return if value is not found
*                     Arg5: Msg = Any text you want stored against an error
*                     Arg6: SeverityInd = An Indicator of the Severity Level
*                     Arg7: ErrorLogInd = An Indicator to indicate if errors should be logged
*                     Arg8: HashfileLocation = An Indicator of whether the hash file is Generic ("G") or Local ("L") (Note this is not yet implemented)
** Return Values: If the Value is not found, return value is: -1. or

If Num(ColumnPosition) Then
   ColumnPosition = Int(ColumnPosition)
   If ColumnPosition > 0 And ColumnPosition < 99999 Then
      PositionReturn = 1
   End
End
* Attempt to find the table name in our cache.
Locate Format in FilesOpened Setting POS Then
   Read Rec From FileHandles(POS), Chk_Value Then
      If PositionReturn Then Ans = Rec<ColumnPosition> Else Ans = Rec
   End Else
      Ans = Default_Ans
   End
End Else
   * Table is not in cache of opened tables, so open it.
   Openpath PathFormat To FileHandles(POS) Then
      FilesOpened = Format
      Read Rec From FileHandles(POS), Chk_Value Else

For i = 1 To Var_Len
   If Num(Value[i,1]) Then
      Pattern[i,1] = "n"
   End Else
      If Alpha(Value[i,1]) Then
         Pattern[i,1] = "a"
      End Else
         Pattern[i,1] = Value[i,1]
      End
   End
Next i
Ans = Pattern

Checks a passed field to see if it matches the pattern which is also passed:
The input field is checked to see if it conforms to the format that is also passed as a second parameter. The result of the routine is True if the pattern matches the required format, and False if it does not. If the second parameter is empty, then True is returned.

Equate TransformName To "PatternMatchCheck"
Begin Case
   Case Arg2 = ""   ;* No pattern - so return true
      Ans = 1
   Case Arg3 = ""   ;* Only 1 pattern passed
      Ans = Arg1 Matches Arg2
   Case 1           ;* All other cases
      Ans = Arg1 Matches Arg2 : CHAR(253) : Arg3
End Case

PrepareJob:
$INCLUDE DSINCLUDE JOBCONTROL.H
Job_Handle = DSAttachJob (Job_Name, DSJ.ERRWARN)
ErrCode1=DSPrepareJob(Job_Handle)
ErrCode2 = DSDetachJob(Job_Handle)
Ans= ErrCode2

Arg2 = Arg1[Outer,1]
If NUM(Arg2) then
length2 = counter - 1
if length2 = 0
then length2 = counter
postNum = RIGHT(Arg1, length2)
END
else postNum = RIGHT(Arg1, counter)

END
END
counter = counter + 1
Next Outer
Ans = postNum

ReverseDate:
Function ReverseDate(DateVal)
* Function ReverseDate(DateVal)
* Date may be in the form of DDMMYYYY i.e. 01102003 or DMMYYYY 1102003
If Len(DateVal) = 7 then
   NDateVal = "0" : DateVal
End Else
   NDateVal = DateVal
End
Ans = NDateVal[5,4] : NDateVal[3,2] : NDateVal[1,2]

RunJob:
The routine runs a job. Job parameters may be supplied. The result is a dynamic
array containing the job status, and row count information for each link. The
routine UtilityGetRunJobInfo can be used to interpret this result.

As well as the job name and job parameters, the routine parameters allow the job warning limit and row count limit to be set. Format of returned dynamic array: Status<1>=Jobname=FinishStatus
Status<2>=Jobname
Status<3>=JobStartTimeStamp
Status<4>=JobStopTimeStamp
Status<5>=LinkNames (value mark @VM delimited)
Status<6>=RowCount (value mark @VM delimited)
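Individual entries can then be pulled out with standard dynamic array extraction, for example (an illustrative fragment, not part of the original routine):

FirstLinkName = Status<5,1>   ;* first @VM-delimited link name
FirstRowCount = Status<6,1>   ;* the matching row count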
Function RunJob(Arg1, Arg2, Arg3, Arg4)
* Demonstrate how to run a job within the GUI development environment. Arguments may
* be passed in. The result is a dynamic array with the resulting status and run
* statistics (row counts for every link on every stage in the job)
*$INCLUDE DSINCLUDE JOBCONTROL.H
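A minimal sketch of the attach/run/detach pattern such a routine follows (the job name "LoadFactTable" and the parameter name and value are assumed for illustration):

JobHandle = DSAttachJob("LoadFactTable", DSJ.ERRFATAL)
ErrCode = DSSetParam(JobHandle, "RunDate", "2005-01-31")
ErrCode = DSRunJob(JobHandle, DSJ.RUNNORMAL)
ErrCode = DSWaitForJob(JobHandle)
JobStatus = DSGetJobInfo(JobHandle, DSJ.JOBSTATUS)
ErrCode = DSDetachJob(JobHandle)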

As well as the job name and job parameters, the routine parameters allow the job warning limit and row count limit to be set.
Function RunDetachJob(Arg1, Arg2, Arg3, Arg4)
* Run a job, and detach from it so that this job can end
*$INCLUDE DSINCLUDE JOBCONTROL.H
Equate RoutineName To 'RunJobAndDetach'
Equate RunJobName To Arg1
Equate Params

Param(ParamNum,1), Param(ParamNum,2))
Next ParamNum
ErrCode = DSRunJob(JobHandle, DSJ.RUNNORMAL)
ErrCode = DSDetachJob(JobHandle)
End
Ans = 0

RunShellCommandReturnStatus:
Function RunShellCommandReturnStatus(Command)
Call DSLogInfo('Running command: ' : Command, 'RunShellCommandReturnStatus')
Call DSExecute('UNIX', Command, Ans, Ret)
Call DSLogInfo('Output from command: ' : Ans, 'RunShellCommandReturnStatus')
Return(Ret)

SegKey:
Segment_Num: An Integer number representing the order number of the
Segment in the IDoc
Segment_Parm: A Segment Parameter containing a string of Y's and N's in order of Segment_Num denoting if the segment should be written to in this Module
Key: The Value to Be Mapped
ErrorLogInd: An Indicator to indicate if errors should be logged (Note this is not yet implemented)
Function SegKey(Segment_Num, Segment_Parm, Key, ErrorLogInd)
* FUNCTION SegKey(Segment_Num,Segment_Parm,Key,ErrorLogInd)
** Executes a lookup against a hashed file using a key
** Input Parameters : Arg1: Segment_Num
*                     Arg2: Segment_Parm
*                     Arg3: Key = An ordered pipe-separated set of Segment Primary Key Fields
*                     Arg4: ErrorLogInd = An Indicator to indicate if errors should be logged (Note this is not yet implemented)
** Return Values: If the Value is not found, return value is: -1. or

End
End Else
Ans = "Invalid_Key"
End

SetDSParamsFromFile: A before-job subroutine to set Job parameters from an external flat file
Input Arg should be of the form: ParamDir,ParamFile
If ParamDir is not supplied, the routine assumes the Project directory
If ParamFile is not supplied, the routine assumes the Job Name (this
could be dangerous)
The routine will abort the job if anything doesn't go to plan

Note: a lock is placed to stop the same job from running another
instance of this routine. The second instance will have to wait for the
routine to finish before being allowed to proceed. The lock is released
however the routine terminates (normal, abort...)

The parameter file should contain non-blank lines of the form ParName = ParValue. White space is ignored.
The Routine may be invoked via the normal Before Job Subroutine
setting, or from within the 'Job Properties->Job Control' window by
entering "Call DSU.SetParams('MyDir,MyFile',ErrorCode)"

For Andrew Webb's eyes only - The routine could be made to work off a hashed file, or environment variables quite easily. It is not possible to create Job Parameters on-the-fly because they are referenced within a Job via an EQUATE of the form JobParam%%1 = STAGECOM.STATUS<7,1> JobParam%%2 = STAGECOM.STATUS<7,2> etc. This is then compiled up.... So forget it!
Subroutine SetDSParamsFromFile(InputArg, ErrorCode)
$INCLUDE DSINCLUDE DSD_STAGE.H
$INCLUDE DSINCLUDE JOBCONTROL.H
$INCLUDE DSINCLUDE DSD.H
$INCLUDE DSINCLUDE DSD_RTSTATUS.H

Locate(ParamName, ParamList, 1; ParamPos) Then
   If Index(UpCase(ParamName), 'PASSWORD', 1) = 0 Then
      Call DSLogInfo('Parameter "':ParamName:'" set to "':ParamValue:'"', SetParams)
   End Else
      Call DSLogInfo('Parameter "':ParamName:'" set but not displayed on log', SetParams)
   End
End Else
   Call DSLogWarn('Parameter ':ParamName:' does not exist in

setParamsForFileSplit:
Using values from a control file this routine will run a job multiple times, loading the specified number of rows for each job run.

Function setParamsForFileSplit(ControlFilename, Jobname)
***********************************************************************
* Nick Bond....
* This routine retrieves values from a control file and passes them as parameters to
* a job which is run once for each record in the control file.
**************************************************************************
$INCLUDE DSINCLUDE JOBCONTROL.H
Equate Routine TO 'setParamsForFileSplit'
Call DSLogInfo('Starting Routine ', Routine)
vFileName = ControlFileName
vJobName = JobName
vRecord = 1

** Input Parameters : Arg1: Mod_Run_ID = The unique number allocated to a run of a Module
*                     Arg2: Ticket_File_ID = The File ID assigned to the source of the Current Row
*                     Arg3: Ticket_Sequence = The Ticket Sequence Number of the Current Row
*                     Arg4: Ticket_Set_Key = Identifies a set of rows e.g. an Invoice number to a set of invoice lines
*                     Arg5: Job_Stage_Name = The Name of the Stage in the Job you want recorded in the error log
*                     Arg6: Mod_Root_Path = Root of the module - used for location of error hash file
** Don't Return Ans but need to keep the compiler happy

log of the controlling sequence as an informational message. The routine should be used sparingly in production jobs to avoid degrading the performance. The return value of the function is always 1.
Function UtilityMessageToControllerLog(Arg1)
* Write an informational message to the log of the controlling job.
** This function is mainly intended for development purposes, but can be used
* within a production environment for tracing data values. The user should

* use this function cautiously, as it will cause a decrease in performance
* if called often.
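A hedged usage sketch (the link and column names are illustrative): the function can be called from a Transformer derivation or stage variable so that each row's value is traced in the controlling sequence's log:

UtilityMessageToControllerLog('Customer key = ' : DSLink3.CUST_KEY)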

The routine gets the parameters of the child job and thereafter finds each parameter in the parent job, getting its value and setting the parameter value in the child job.
Input Argument : Job handle (set by using DSAttachJob in Job Control)
Output : If a parameter is not found the routine returns 3, otherwise 0.
Function UTLPropagateParms(Handle)

"UTLPropagateParms") oraveen@yahoo.com ☻Page 149 of 202☻ End Else*** If the parameter is not found in parent job: *** - write a warning to log file.
*** - return code changed to 3.
Call DSLogWarn ("Parameter : ":ThisParam:" does not

exist in ":ParentJobName, "UTLPropagateParms") Ans = 3 End While ChildParams # '' Do Repeat
End
Return(Ans)

ValidateField: Checks the length and data type of a value. Also checks the value is a valid date if the type is Date. Any errors are logged to the Error Hash File.
Field_Value: The value from the field being validated
Field_Name: The name of the field being validated
Length: The maximum length of the field being validated
Data_Type: The data type expected - possible values (Numeric, Alpha, Date, Char)
Date_Format: If Data_Type is 'Date' Then the format must be specified. The syntax
for this is the same as for the Iconv function. i.e "D/YMD[4,2,2]" for a date in the
format 2004/12/23
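As an illustration of the Date_Format argument (a hedged sketch, not the original routine body): Iconv returns the internal date only when the value matches the format, and Status() reports whether the conversion succeeded:

InternalDate = Iconv("2004/12/23", "D/YMD[4,2,2]")
If Status() = 0 Then
   Ans = 1   ;* valid date for this format
End Else
   Ans = 0   ;* not a valid date
End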

DataStage & DWH Interview Questions of Companies
COMPANY: TCS (DataStage)
1. Tell me about yourself?
2. Types of Stages? Examples
3. What are active stages and passive stages?
4. Can you filter data in hashed file? (No)
5. Difference between sequential and hashed file?
6. How do you populate time dimension?
7. Can we use target hashed file as lookup? (Yes)
8. What is Merge Stage?
9. What is your role?
10. What is Job Sequencer?
11. What are stages in sequences?
12. How do you pass parameters?
13. What parameters you used in your project?
14. What are log tables?
15. What is job controlling?
16. Facts and dimension tables?
17. Conformed dimensions?
18. Time dimension contains what data? (numeric data)
19. Difference between OLTP and OLAP?
20. Difference between star schema and snowflake schema?
21. What are hierarchies? Examples?
22. What are materialized views?
23. What is aggregation?
24. What is surrogate key? Is it used for both fact and dimension tables?
25. Why do you go for oracle sequence generator rather than datastage routine?
26. Flow of data in datastage?
27. Initial loading and incremental loading?
28. What is SCD? Types?
29. How do you develop SCD type2 in your project?
30. How do you load dimension data and fact data? Which is first?
31. Any idea about shell scripting and UNIX?
32. Difference between oracle function and procedure?
33. Difference between unique and primary key?
34. Difference between union and union all?
35. What is minus operator?

COMPANY: ACCENTURE (Datastage)
1. What is audit table?
2. If there is a large hash file and a smaller Oracle table, and you are looking up from a Transformer in different jobs, which will be faster?
3. Tell me about SCD’s?
4. How did you implement SCD in your project?
5. Do business people need to know the surrogate key?
6. What are derivations in transformer?
7. How do you use surrogate key in reporting?
8. Log views in DataStage and logs in Informatica: which is clearer?
9. Have you used audit table in your project?
10. What is keen? Have you used it in your project?
11. While developing your project, what are the considerations you take first, like performance or space?
12. What is job scheduler? Have you used it? How did you do?
13. Have you used datastage parallel extender?
14. What is the Link Partitioner and link collector stage?
15. How does pivot stage work?

16. What is surrogate key? What is the importance of it? How did you implement it in your project?
17. Totally how many jobs did you developed and how many lookups did you use totally?
18. How do constraints in a transformer work?
19. How will you declare a constraint in datastage?
20. How will you handle rejected data?
21. Where is the data stored in DataStage?
22. Give me some performance tips in datastage?
23. Can we use sequential file as a lookup?
24. How does a hashed file stage perform a lookup?
25. Why can’t we use sequential file as a lookup?

26. What is data warehouse?
27. What is ‘Star-Schema’?
28. What is ‘Snowflake-Schema’?
29. What is difference between Star-Schema and Snowflake-Schema?
30. What is mean by surrogate key?
31. What is ‘Conformed Dimension’?
32. What is Factless Fact Table?
33. When will we use connected and unconnected lookup?
34. Which cache supports connected and unconnected lookup?
35. What is the difference between SCD Type2 and SCD Type3?
36. Draw the ETL Architecture?
37. Draw the DWH Architecture?
38. What is materialized view?
39. What is procedure?
40. What is Function?
41. What is the difference between procedure and function?
42. What is trigger?
43. What are types of triggers?

COMPANY: SATHYAM (Datastage)
1. Tell me about yourself?
2. What are the client components?
3. About administrator? With this, what do you do in your project?
4. What is you project and explain the process?

5. Informational dimensions?
6. Measures?
7. What is data mart size and data warehouse size?
8. Fact table? Dimension table?
9. Data Mart?
10. How do you clear source files?
11. Pivot Stage?
12. How do you find a link, if not found?
13. Difference between transformer and routine?
14. How do you secure your project?
15. How do you handle errors? Exception handlers?
16. How do you know how many rows were rejected?
17. How do you manage surrogate key in datastage?
18. What is lookup?
19. Aggregator Stage?
20. Universe Stage?
21. How do you merge two tables in datastage?
22. What is export and import?
23. What are Integration testing, unit testing, performance testing?
24. UAT testing? (User Acceptance Testing)
25. Local, development, preproduction, production server?

COMPANY: SYNTEL, Mumbai (DataStage – Telephonic Interview)
Basic DWH:
1. Tell me about your current project?
2. What is your role or job profile in the project?
3. What is your Job profile?
4. What are a dimension and a fact?
5. What are types of dimensions?
6. What are conformed dimensions?
7. What are generated dimensions?
8. What are slowly changing dimensions?
9. How many data marts in your project?
10. What is data mart name in your project?
11. What is the size of your data mart?
12. What is factless fact table? Give example.
13. How many fact tables are used in the project?
14. What is your fact table name in your project?
15. How many dimension tables used in the project?
16. What are the names of the dimension tables?

17. What is a schema? Types? Explain star schema and snowflake schema and the difference between them. Which schema did you use in your project? Why?
18. Why is a star schema called a star schema? Give an example.
19. How frequently, and from where, do you get the source data?
20. What is the difference between a data mart and a data warehouse?
21. What is a composite key?
22. What is a surrogate key? When will you go for it?
23. What is dimensional modeling?
24. What are SCD and SGT? Difference between them? Example of SGT from your project.
25. How do you rate yourself in data warehouse?
26. What is the status of your current project?

DataStage:
27. How do you import your sources and targets? What are the types of sources and targets?
28. What do active stages and passive stages mean in DataStage?
29. What is the difference between Informatica and DataStage? Which do you think is better?
30. What are the stages you used in your project?
31. Whom do you report?
32. What is Orchestrate? Difference between Orchestrate and DataStage?
33. What is parallel extender? Have you worked on it?
34. What do you mean by parallel processing?
35. What is the difference between the Merge stage and the Join stage?
36. What is the difference between the Copy stage and the Transformer stage?
37. What is the difference between the ODBC stage and the OCI stage?
38. What is the difference between the Lookup stage and the Join stage?
39. What is the difference between the Change Capture stage and the Difference stage?
40. What is the difference between a hashed file and a sequential file?
41. What are the different joins used in the Join stage?
42. How do you decide when to go for a Join stage and when for a Lookup stage?
43. What is partition key? Which key is used in round robin partition?

44. How do you handle SCD in DataStage?
45. What are Change Capture Stage and Change Apply Stages?
46. How many input streams can you give to the transformer?
47. What is primary link and reference link?

48. What is a routine? What are before and after subroutines? Do these run before/after a job or a stage?
49. Have you written any subroutines in your project?
50. What is a config file? Does each job have its own config file, or is only one needed?
51. What is Node?
52. What is the IPC stage? How does it increase performance?
53. What is Sequential buffer?
54. What are Link Partitioner and Link Collector?
55. What performance tuning have you done in your project?

56. Did you do scheduling? How? Can you schedule a job at the end of every month? How?
57. What is a job sequence? Have you run any jobs?
58. What is the status view? Why do you clear it? If you clear the status view, what is done internally?
59. What is a hashed file? What are the types of hashed files? Which do you use? What is the default? What is the main advantage of a hashed file? Difference between them (static and dynamic)?
60. What are containers? Give example from your project.
61. Have you done any hardware configuration while running parallel jobs?
62. What are operators in parallel jobs?
63. What are parameters and parameter file?
64. Can you use variables? In which stages?

65. How do you convert columns to rows and rows to columns in DataStage? (Using the Pivot stage.)
66. What is the Pivot stage?
67. What is the execution order of constraints, derivations, and variables in the transformer stage? What are these?
68. How do you eliminate duplicates in datastage? Can you use hash file for it?
69. If the 1st and 8th records are duplicates, which one will be skipped? Can you configure this?

70. How do you import and export datastage jobs? What is the file extension? (See each component while importing and exporting).
71. How do you rate yourself in DataStage?
72. Explain DataStage Architecture?
73. What is repository? What are the repository items?
74. What is difference between routine and transform?

75. I have 10 tables with four key column values; in this situation a lookup is necessary, but which type of lookup is used: ODBC or hashed file lookup? Why?
76. When do you write the routines?
77. In one project how many shared containers are created?
78. How do you protect your project?
79. What is the complex situation you faced in DataStage?
80. How will you move a hashed file from one location to another?
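A) A dynamic (type 30) hashed file is, at the operating-system level, a directory containing the files DATA.30 and OVER.30, so one approach is to move that directory with ordinary UNIX commands while no job has the file open. A minimal sketch with placeholder paths (and note that if the file was created in the account rather than by pathname, its VOC pointer would also need repointing):

    # Move the hashed file directory; DATA.30 and OVER.30 travel with it.
    # Make sure no DataStage job has the file open first. Paths are examples.
    mv /data/project/MyHash /newdisk/project/MyHash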
81. How will you create a static hashed file?
82. How many jobs have you done in your project? Explain one complex job.

COMPANY: KANBAY, Pune (DataStage – Personal Interview)
1. All about company details, project details, client details, and sample data of your source?
2. DataStage Architecture?
3. System variables: which system variables are used in your project?
4. What are the different datastage functions used in your project?
5. Difference between star schema and snow flake schema?
6. What are conformed, degenerate, and junk dimensions?
7. What are conformed facts?
8. Different types of facts and their examples?
9. What are approaches in developing data warehouse?
10. Different types of hashed files?
11. What are routines and transforms? How did you use them in your project?
12. Difference between Data Mart and Data Warehouse?
13. What is surrogate key? How do you generate it?
14. What are environment variables and global variables?
15. How do you improve the performance of the job?
16. What is SCD? How did you develop SCD type 1 and SCD type 2?

17. Why would you go for an Oracle sequence to generate the surrogate key rather than DataStage routines?
18. How do you generate surrogate key in datastage?
19. What is job sequence?
20. What are plug-ins?
21. How much data do you get every day?
22. What is the biggest table and size in your schema or in your project?
23. What is the size of data warehouse (by loading data)?
24. How do you improve the performance of the hashed file?
25. What is IPC Stage?
26. What are the different types of stages, and which are used in your project?
27. What are the operations you can do in IPC Stage and transformer stage?
28. What is merge stage? How do you merge two flat files?

29. I have two tables: one contains 100 records and the other contains 1000 records. Which table is the master table? Why?
30. I have one job loading from one flat file to a database. There are 10 lakh records; after 9 lakh are loaded, the job aborts. How do you load the remaining records?
31. Which data your project contains?
32. What is the source in your project?

COMPANY: IBM, Bangalore (DataStage – Telephonic Interview)
1. Tell me about your educational and professional background?
2. What is the team size? What is your role in it?
3. What is a factless fact table? As it doesn't have facts, what is the purpose of using it? Have you used one in your project?
4. How many jobs have you done in your project?
5. Did you handle jobs with complex logic in your project or not?
6. Of all the jobs you have done, which do you feel is the most complex? Explain it.

7. Do you work out the complex logic yourself, or does someone give you the specifications which you convert into DataStage jobs?
8. What are the sources you used in your project?
9. What are the stages you used in your project?
10. What is the difference between the ODBC and Oracle OCI stages?
11. As you said, if your sources are flat files and Oracle OCI, then why do you need ODBC in your project rather than the Oracle OCI stage?
12. What is the difference between a sequential file and a hashed file?
13. Can you use a sequential file as the source for a hashed file? Have you done it? What error will it give?
14. Why does a hashed file improve performance?
15. How do you sort your data in jobs?
16. Have you used a Sort stage in your job? (The Sort stage is a parallel stage; be sure that you are using server jobs only, then he will ask Q.12.)
17. Can the aggregator and transformer stages be used for sorting data? How?
18. If I have two sources going to an aggregator stage and Oracle as the target, I can sort data in the aggregator; but if I don't want to use the aggregator to sort the data, how will you do it?
19. Why do we use a surrogate key in a data warehouse? How does it improve performance? Where is it stored? How do you handle your surrogate key in your project? Where is a surrogate key mostly used?
20. How many input links can you give to a transformer?
21. Can you give more than one source to a transformer? (If you say "No", he will ask what error it gives when you try to do this.)
22. Definition of slowly changing dimensions? Types?
23. If a company maintaining a type 1 SCD now decides to change its plan and maintain a type 2 SCD, e.g. for the customer table, what changes need to be made to the customer table? (Do you have to change the structure of the table, as you would under type 3? Or no changes? How do you implement this?)
24. How many dimensions in your project? What are they?
25. What are the facts in your fact table?
26. Are all these facts related to all dimensions?
27. How do you get system date in oracle?
28. What is a dual table in oracle?
29. What is the use of UNION in Oracle? If I write the query "select * from EMP UNION select * from dept", will it execute correctly?
30. I have the query "select * from EMP group by dept"; will this query execute? If not, what is the error?

MORE QUESTIONS ON DATASTAGE:
1. What are the difficulties faced in using DataStage?
2. What are the constraints in using DataStage?
3. How do you eliminate duplicate rows?
4. How do we automate dsjob runs?
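A) One common approach (a sketch, not the only way) is to wrap dsjob in a shell script and drive it from a scheduler such as cron. This assumes the usual dsjob behaviour that with -jobstatus the exit code mirrors the job's finishing status (1 = ran OK, 2 = ran with warnings); the project and job names are placeholders.

    #!/bin/sh
    # Run the job and wait for it to finish; the exit code is the job status
    dsjob -run -jobstatus MyProject MyJob
    rc=$?
    if [ $rc -ne 1 ] && [ $rc -ne 2 ]
    then
        echo "MyJob failed with status $rc" >&2
        exit 1
    fi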
5. What are XML files? How do you read data from XML files, and which stage is to be used?
6. How do you catch bad rows from OCI stage?
7. Why do you use SQL LOADER or OCI STAGE?
8. How do you populate source files?
9. How do you pass a filename as a parameter to a job?
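A) A minimal sketch, assuming the job defines a parameter named FileName (the parameter name and path here are made up): dsjob lets you set parameter values on the command line with -param.

    # Supply the file name to the job at run time
    dsjob -run -param FileName=/data/in/customers.dat -jobstatus MyProject MyJob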
10. How do you pass the parameter to the job sequence if the job is running at night?
11. What happens if the job fails at night?
12. What is SQL tuning? How do you do it?
13. What is project life cycle and how do you implement it?
14. How will you call external function or subroutine from datastage?
15. How do you track performance statistics and enhance it?
16. How do you do an Oracle 4-way inner join if there are 4 Oracle input files?
17. Explain your last project and your role in it?
18. What are the often used Stages or stages you worked with in your last project?
19. How many jobs have you created in your last project?
20. How do you merge two files in DS?
21. What is DS Manager used for? Did you use it?
22. What is DS Director used for? Did you use it?
23. What is DS Administrator used for? Did you use it?
24. What is DS Designer used for? Did you use it?
25. Explain the differences between Oracle8i/9i?
26. Do you know about INTEGRITY/QUALITY stage?
27. Do you know about METASTAGE?
28. Difference between Hashfile and Sequential File?
29. What are the Iconv and Oconv functions?
30. How can we join one Oracle source and Sequential file?
31. How can we implement Slowly Changing Dimensions in DataStage?
32. How can we implement Lookup in DataStage Server jobs?
33. What are all the third party tools used in DataStage?
34. What is the difference between routine and transform and function?
35. What are the Job parameters?
36. How can we improve the performance of DataStage jobs?
37. How can we create Containers?
38. What about System variables?
39. What is difference between operational data stage (ODS) & data warehouse?
40. How do you fix the error "OCI has fetched truncated data" in DataStage?
41. How to create batches in DataStage from the command prompt?
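A) One way (a sketch; the job names are placeholders) is a shell script that runs the jobs one after another with dsjob, stopping the batch if any job does not finish cleanly:

    #!/bin/sh
    # Run the jobs in order; abort the batch on the first failure
    for job in LoadDims LoadFacts BuildAggregates
    do
        dsjob -run -jobstatus MyProject "$job"
        rc=$?
        if [ $rc -ne 1 ] && [ $rc -ne 2 ]
        then
            echo "Batch stopped: $job returned status $rc" >&2
            exit 1
        fi
    done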
42. How do you eliminate duplicate rows?
43. Suppose there are a million records: did you use OCI? If not, what stage do you prefer?

44. What is the order of execution done internally in the transformer, with the stage editor having input links on the left-hand side and output links on the right?
45. I want to process 3 files sequentially, one by one. How can I do that? While processing, it should fetch the files automatically.

Datastage:
1. How to create a flat file job… (steps)
2. Is there any tool by Ascential to pull the metadata from various sources?
3. What if the definition of a table changes? What impact will it have on your job?
4. How do you use the debugger?
5. How do you schedule a DS job via a UNIX script?
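A) Assuming a wrapper script like the dsjob example earlier is saved as /opt/scripts/run_myjob.sh (a made-up path), a crontab entry can schedule it, e.g. nightly at 01:30:

    # m h dom mon dow command
    30 1 * * * /opt/scripts/run_myjob.sh >> /tmp/run_myjob.log 2>&1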
6. Any third-party tools for scheduling the jobs?
7. How do you use a hashed file? How do you create a hashed file?

8. Aggregator transformations?
9. Pre-SQL and post-SQL: how do you use these (e.g. to truncate a table)?
10. What is the Administrator used for?
11. What was the most complex mapping you have developed using DataStage?
12. How much experience do you have with DataStage?
13. If a table definition has been changed in Manager, will it automatically propagate into the job?
14. Can an output link from one active stage become an input link to another active stage?
15. Can you use a sequential file as a reference file? What is the difference between a sequential file and a hashed file? A) No.
16. What different options are there to see a table definition?
17. Which Ascential products are you aware of?
18. What is the advantage of using the OCI stage as compared to the ODBC stage?
19. Normalizer Transformation?
20. What steps will you take to increase performance in DataStage for large volumes of data?
21. What are bridge tables?
22. Types of Indexes
23. Table Partitioning
24. Types of schemas? Explain.
25. How do you do requirements gathering when the personnel are not available, and thereafter the project plan?
26. How do you take care of unknown values for the primary key for dimension?
27. Factless fact tables
28. Overview of Datastage projects
29. Link Partitioner/ Collector

Data stage:
1. What is ETL Architecture?
2. Explain your project Architecture?
3. How many Data marts and how many facts and dimensions Available in your project?
4. What is the size of your Data mart?
5. How many types of loading techniques are available?
6. Before designing jobs in DataStage, what are the preceding steps?
7. What is the Architecture of Data stage?
8. What is the main difference between the different client components in DataStage?
9. What are the different stages you have worked on?
10. Can I call procedures in DataStage? If so, how do you call stored procedures in DataStage?
11. What is the difference between a sequential file and a hashed file? Can we use a sequential file as a lookup? Can we put filter conditions on a sequential file?
12. Differences between the DRS stage and ODBC? Which one is best for performance?
13. What are the different performance tuning aspects in DataStage?
14. How do you remove duplicates in a flat file?
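A) At the UNIX level, one quick way for a plain text file (inside DataStage you would more typically land the rows in a hashed file keyed on the duplicate columns, so later rows overwrite earlier ones):

    # Sort the file and drop exact duplicate lines
    sort -u input.txt > deduped.txt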
15. What is the difference between inter-process and in-process? Which one is better?
16. What is CRC32? In which situations do you go for CRC32?
17. What is a Pivot stage? Can you explain a scenario in your project where it was used?
18. What are Row Splitter and Row Merger? Can I use them separately? Is it possible to do that?
19. If one user has locked a resource, how do you release the particular job?
20. What is version control in DataStage?
21. What is the difference between clearing the log file and clearing the status file?
22. How do you schedule jobs without using DataStage?
23. What is the difference between a static hashed file and a dynamic hashed file?
24. How to do error handling in data stage?
25. What is the difference between an active stage and a passive stage? What are the active and passive stages?
26. How do you set environment variables in DataStage?
27. What is a job control routine? How do you set job parameters in DataStage?
28. How to release a job?
29. How to do Auto-purge in Data stage?
30. What is the difference between DataStage 7.1 and 7.5?

Data Warehousing Questions
1. What are the different dimensional modeling techniques available?
2. What is the difference between a star schema and a snowflake schema? When do we go for star and when for snowflake?
3. What are the types of dimensions and facts in a DW?
4. What is the life cycle of Data warehousing project?
5. What is a Data-model?
6. What is the difference between the top-down approach and the bottom-up approach?
7. What is a factless-fact Table?
8. What is a conformed dimension?
9. What is a junk dimension?
10. What is cleansing?

11. Tell me about your current project?
12. What is your role or job profile in the project?
13. What is your Job profile?
14. What are a dimension and a fact?
15. What are types of dimensions?
16. What are conformed dimensions?
17. What are generated dimensions?
18. What are slowly changing dimensions?
19. How many data marts in your project?
20. What is data mart name in your project?
21. What is the size of your data mart?
22. What is factless fact table? Give example.
23. How many fact tables are used in the project?
24. What is your fact table name in your project?
25. How many dimension tables used in the project?
26. What are the names of the dimension tables?

27. What is a schema? Types? Explain star schema and snowflake schema and the difference between them. Which schema did you use in your project? Why?
28. Why is a star schema called a star schema? Give an example.
29. How frequently, and from where, do you get the source data?
30. What is the difference between a data mart and a data warehouse?
31. What is a composite key?
32. What is a surrogate key? When will you go for it?
33. What is dimensional modeling?
34. What are SCD and SGT? Difference between them? Example of SGT from your project.
35. How do you rate yourself in data warehouse?
36. What is the status of your current project?
37. What is data warehouse?
38. What is ‘Star-Schema’?
39. What is ‘Snowflake-Schema’?
40. What is the difference between Star-Schema and Snowflake-Schema?
41. What is meant by a surrogate key?
42. What is ‘Conformed Dimension’?
43. What is Factless Fact Table?
44. When will we use connected and unconnected lookup?

Locate(ParamName, ParamList, 1; ParamPos) Then
   If Index(UpCase(ParamName), 'PASSWORD', 1) = 0 Then
      Call DSLogInfo('Parameter "':ParamName:'" set to "':ParamValue:'"', SetParams)
   End Else
      Call DSLogInfo('Parameter "':ParamName:'" set but not displayed on log', SetParams)
   End
End Else
   Call DSLogWarn('Parameter ':ParamName:' does not exist in ':StatusFileName:'. Status = ':Status(), SetParams)
   Return
End

Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
STAGECOM.JOB.STATUS = ParamValues

The input Arg should be of the form: ParamDir,ParamFile
If ParamDir is not supplied, the routine assumes the Project directory.
If ParamFile is not supplied, the routine assumes the Job Name (this could be dangerous).
The routine will abort the job if anything doesn't go to plan.

Note: a lock is placed to stop the same job from running another instance of this routine. The second instance will have to wait for the routine to finish before being allowed to proceed. The lock is released however the routine terminates (normal, abort, ...).

The parameter file should contain non-blank lines of the form ParName = ParValue. White space is ignored. The routine may be invoked via the normal Before Job Subroutine setting, or from within the 'Job Properties -> Job Control' window by entering "Call DSU.SetParams('MyDir,MyFile',ErrorCode)".