Hi, you can do this if you know the workflow name and the start and end times. Since the session statistics are all stored in the repository, you can query the repository for these details along with rows inserted, rows rejected, etc.
Anand

Otherwise, you can create an output port with the workflow name in the
expression and get the start time from the OPB_SWIDGINST_LOG table.
Or get both the workflow name and the time from OPB_SESS_TASK_LOG,
OPB_WFLOW_RUN, and OPB_SWIDGINST_LOG by joining all of them on WORKFLOW_ID.
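As a rough sketch of that join (purely illustrative: the OPB_* table and column names vary across PowerCenter versions, so verify them against your own repository before relying on this):

```sql
-- Illustrative only: verify table/column names against your
-- repository version, and only ever SELECT from OPB_* tables.
SELECT r.workflow_id,
       r.start_time,
       r.end_time
FROM   opb_wflow_run     r,
       opb_sess_task_log s,
       opb_swidginst_log b
WHERE  r.workflow_id = s.workflow_id
  AND  s.workflow_id = b.workflow_id;
```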

Hi,
I have faced this before. One way to handle it is to have a load-control table: insert the session start time before your workflow starts, then run a post-SQL to update the load-control table after your session completes.

Load_cntrl -> your workflow -> post-SQL to update the load_cntrl table

Note: you are going to need to hard-code the workflow name into this database table (using this load_cntrl mapping), e.g.:
INSERT INTO load_cntrl (wf_name, SessStart, SessEnd)
VALUES ('wf_your_workflow', SYSDATE, NULL);

When running your update as a post-SQL, you can use:
UPDATE load_cntrl
SET SessEnd = SYSDATE
WHERE wf_name = 'wf_your_workflow'
AND SessStart = (SELECT MAX(SessStart)
                 FROM load_cntrl
                 WHERE wf_name = 'wf_your_workflow');
There can be more columns in this load_cntrl table and many more permutations; what I have cited is just an example.
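For reference, the load_cntrl table the steps above assume could be created along these lines (a minimal sketch: the column names come from the example, but the types are my assumptions):

```sql
-- Minimal sketch (Oracle syntax); column types are assumptions.
CREATE TABLE load_cntrl (
    wf_name   VARCHAR2(240),  -- hard-coded workflow name
    SessStart DATE,           -- written before the workflow runs
    SessEnd   DATE            -- updated by the post-session SQL
);
```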

I have used Informatica as a developer for many years, and also as an
administrator in large corporate environments, also for years. I have
always been frustrated that it is just way too difficult to get this
type of basic run-time information out of Informatica. It's not just
you. :-) Here's how I approached this, and I suspect for you to get any
flexible, useful results -- until Informatica wakes up and adds this
functionality -- you are going to need to do the same.

The metrics/run-time information that is available at the session level,
and through the session email options, I always found
insufficient. I also found the MX views (views starting
with REP_*) generally weak. Even more important, the Informatica definition of
success/failure never sat well with me.

After working around these shortcomings in Informatica for years with
hack/patch solutions, I finally got a chance recently and wrote a
somewhat lightweight set of denormalized Oracle views over the low-level
OPB_* Informatica metadata tables. This was the critical first step to
get this information out of Informatica in an easier, more practical
way. It took some time to figure out the mechanics of the OPB_* tables,
but it was well worth it. (Remember to only read from those tables.
Never alter their content. The GUI tools and the servers run off that
data.)

A simple set of denormalized views also became very useful for ad hoc
queries against the repository. From there I set up a reusable
metadata gathering session within my workflows. I run it after every ETL
process. This allows me to both gather the run-time metrics/statistics
and also make a sensible business-driven assessment of success/failure
on my terms, and store those metrics in a small set of history tables,
which themselves are incredibly handy for reporting.
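As a rough illustration of such a denormalized view (the OPB_WFLOW_RUN column names below are assumptions and differ between repository versions; treat this as a sketch, not a working definition):

```sql
-- Sketch only: column names are assumed and version-dependent.
-- Read-only: never write to the OPB_* tables.
CREATE OR REPLACE VIEW v_wf_run_history AS
SELECT r.workflow_id,
       r.workflow_run_id,
       r.start_time,
       r.end_time,
       r.run_err_code
FROM   opb_wflow_run r;
```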

As a final step, I also added a post-metrics session email
script/process that sends out detailed metrics in email along with an
assessment of problems in the event I determine that a failure or
warning is required.

This is all manageable Informatica code (one reusable session) but it
really requires that you take the time up front to analyze the OPB_* table
content. Painful, but it has paid off in a huge way. I'd like to be
able to offer up my code, but my current client, one of the largest
mutual funds companies in the world, owns it.

If you do take the plunge I will do what I can to offer some assistance
on the OPB_* table content though.

> A simple set of denormalized views also became very useful for ad hoc
> queries against the repository. From there I set up a reusable
> metadata gathering session within my workflows. I run it after every ETL
> process. This allows me to both gather the run-time metrics/statistics
> and also make a sensible business-driven assessment of success/failure
> on my terms, and store those metrics in a small set of history tables,
> which themselves are incredibly handy for reporting.

hi everyone,
thanks for the suggestions.
I was already examining using an expression to extract the run-time info with the WF name hardwired into it.
I wanted to avoid hardwiring the WF name, but I guess it is just too troublesome.

And the OPB table/view idea seems pretty good. Let me discuss it with my team, and if it seems to be a reasonable investment, maybe we will go for it.
As of now, though, I think the hardwiring has to happen :(
D