Archiving ODI logs

Have you ever had the problem of too many logs piling up in ODI Operator? What would you think if there were a method to archive those logs, so you could keep them as long as you want and query them anytime?

A word of caution: since you will be making changes in your work repository environment, make sure you have backups, exports, etc. These are my personal experiences, and I take no responsibility for any loss of development work or data.

ODI stores its logs, including variable history, in 12 tables in the work repository. First, create copies of these tables with the prefix “ARC_” in a different schema. I suggest you do not create any referential integrity constraints, foreign keys, or indexes on these tables.

1. Generate the CREATE TABLE scripts, rename the table and schema names, and delete everything except the CREATE TABLE statements. Do not create indexes, foreign keys, constraints, or other database objects for these tables.
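As a minimal sketch, each archive table can also be created with a simple CREATE TABLE AS SELECT per log table. The schema names below (ODIW for the work repository, ARCH for the archive schema) are illustrative assumptions; generate the full list of 12 tables from your own repository.

```sql
-- Create an empty archive copy of a repository log table in the ARCH schema.
-- "WHERE 1 = 0" copies the structure only, with no rows; CTAS does not
-- carry over indexes or foreign keys, which is exactly what we want here.
CREATE TABLE ARCH.ARC_SNP_SESSION AS
  SELECT * FROM ODIW.SNP_SESSION WHERE 1 = 0;

-- Repeat for the remaining log tables, e.g.:
CREATE TABLE ARCH.ARC_SNP_SESS_STEP AS
  SELECT * FROM ODIW.SNP_SESS_STEP WHERE 1 = 0;
```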

2. Reverse-engineer both the SOURCE and TARGET tables into an ODI model.

3. Create a variable to store how many days of logs you want to retain (this could come from a parameter table, or be static). In this example I will use a variable named #V_Purge_Log_Retention.

4. Create an interface for each table, with the filters and joins shown in the table below.
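On the source side of each interface, the filter that selects only sessions older than the retention window might look like the expression below (a sketch, assuming the driving table is filtered on the session end date column SESS_END):

```sql
-- Interface filter on the driving table: archive only sessions that
-- ended more than #V_Purge_Log_Retention days ago.
SNP_SESSION.SESS_END < SYSDATE - #V_Purge_Log_Retention
```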


5. Prepare an ODI procedure that deletes from the tables in order. Remember, because of the referential integrity in the ODI repository, you need to send the DELETE statements to the database in the right order (child tables first, then parents). Otherwise, you might receive a referential integrity error or lose data unexpectedly.
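A minimal sketch of the delete order, children first, is shown below. Only a few of the 12 tables appear here, and the exact table and column names depend on your repository version, so treat them as illustrative and derive the full ordered list from the foreign keys in your own work repository.

```sql
-- Delete child tables before their parents to avoid referential
-- integrity errors. Table/column names are illustrative.
DELETE FROM SNP_SESS_TXT_LOG
 WHERE SESS_NO IN (SELECT SESS_NO FROM SNP_SESSION
                    WHERE SESS_END < SYSDATE - #V_Purge_Log_Retention);

DELETE FROM SNP_SESS_STEP
 WHERE SESS_NO IN (SELECT SESS_NO FROM SNP_SESSION
                    WHERE SESS_END < SYSDATE - #V_Purge_Log_Retention);

-- ... remaining child tables, in dependency order ...

-- The parent table goes last.
DELETE FROM SNP_SESSION
 WHERE SESS_END < SYSDATE - #V_Purge_Log_Retention;
```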


6. Package all of this up. All the scenarios starting with “I_ARC” run in “Asynchronous Mode”. If any of the scenarios fails, an exception is raised. You can read about throwing exceptions in ODI in my other post, “ODI Alert Mechanism”.


That’s it. If you perform these steps in the right sequence, your ODI logs will be safely stored in your “ARC_” tables.

18 thoughts on “Archiving ODI logs”

  1. Gurcan Orhan, a very good analysis and implementation. It looks like you put a lot of effort into building this. I am calling this the “Orhan Process” at my work (please do not mind). I just cannot think of any better solution for archiving and purging logs. A quick question: over time, did you face any problems? I want to do a POC of the so-called “Orhan’s process” at my work. Cheers to you, and thanks a lot for sharing your appreciated piece of work.


    • Hi Laks,
      I’m glad you liked the post and found it useful. You may call it “Gurcan’s Process” or “Gurcan’s Approach”, since I prefer to use my first name instead of my surname. I dropped you an email about the details.


  2. Hi Gurcan,

    Thanks for providing us with a nice article.
    I have a very basic question. While archiving, why do we have to join system tables like SNP_SESS_STEP and SNP_SESSION? What is the benefit of joining them? Can we archive each table individually, as it is? I am doing a POC on archiving and purging ODI logs. Your response will be helpful.
    If possible, please share any issues you faced after implementing this process.


    • Check out my presentation at my SlideShare account (link is below). Because there are referential integrity constraints between repository tables, the joins are there just to filter the appropriate children to their parents (the filters are on date and context code). Source and target tables are identical while archiving. Have I faced any issues? No, it has worked (and I know it is still working) just fine every day, keeping 2 weeks of logs in production and the rest (since November 2010) in archive tables.

    • Purge Log had some errors in ODI 10g and 11g, which is why I preferred to delete everything myself. If it works fine, purging logs could be used instead of the deletes, but only after copying all the records to the ARC% tables. 🙂

  3. Hi Gurcan,
    Nice solution you built here.
    Have you already come across the same in ODI 12c?
    I read about the structure in 10g and 11g in a Metalink document (Doc ID 424663.1), but information about 12c isn’t provided there.
    As you mentioned above, we could use the purge command in a mapping, but first copy all the records to the ARC% tables. For that, we need to know which tables to copy first.
    Thank you for your reply, and regards.

    • Hi Pius,
      Thanks for the feedback. I haven’t needed to apply this algorithm to save logs in any of my ODI 12c implementations yet (logs are purged at t-15 days).
      But you can either download the repository definition for ODI 12c (I think it exists on MOS), or you can sort out the tables by looking at the referential integrity (parent and child) constraints in the repository. No need to dig through a bunch of tables: SNP_SESSION can be your starting point, since it is the master of all the child and grandchild tables.


      • Hi Gurcan,
        Thank you for your fast reply. I read on different blogs and forums that a lot of people just purge their execution logs. Does no one save or back up the execution times to keep some kind of statistics? I am just surprised that no one uses the SNP_SESSION information for this kind of statistics…
        We’ve already built some views around SNP_SESSION and SNP_SB_TASK (and _LOG) for daily runtime statistics, and if we purged all of this, all of that information would be lost.
        Thanks for the MOS doc; I found it, and as a reference it is quite good.


      • Hi Pius,
        This came to me as a business question: “what was the amount of data (count(*)) that was inserted into the table last year?” Since it was an alarm mechanism, there were active alarms and closed alarms, and active alarms were truncated when they were resolved. I solved this problem by archiving logs, instead of preparing many separate log tables and trying to maintain them.
        So, to me, it is better to start with SNP_SESSION, look at its child tables, copy the data to another schema (the ARC% tables), and then purge the log.
        Log purging had some problems in ODI 10g, which is why I preferred to delete table by table (child to parent).
        On the other hand, I suggest you do not purge all the logs; it is always better to leave at least one week of logs behind. No one knows when you’ll need them. 🙂


    • If something is deleted, it is deleted from the repository (and from the related tables); it is not marked as deleted.
      So it would be better to check your database audit for when, and by whom (the IP address will probably resolve your situation), the model was deleted.
