
DPA Repository Database Size Growing

FormerMember

Hi all,

We have been using DPA since March this year, and we have noticed that the repository database size keeps growing.

According to the sizing guide we followed, a high-activity database instance requires about 5 GB of disk space.

We are monitoring about 6 instances, and so far we have allocated about 200 GB for the repository.

Is there any purge job set up for this repository database?

  • Just did a quick check - for 13 monitored instances I'm currently using ~26 GB.

    Have you changed the default values for the number of days of data history, such as CLEAN_DAYS_OF_DETAIL?

    Otherwise you may need to open a ticket with support to investigate it.

    Edit - are you using SQL Server as the repository? Is the space being consumed by the log files or the database files?
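
    A quick way to see that split is to query sys.database_files in the repository database. This is a standard SQL Server system view, nothing DPA-specific is assumed here:

    ```sql
    -- Run in the DPA repository database.
    -- type_desc distinguishes ROWS (data files) from LOG (transaction log).
    SELECT name,
           type_desc,
           size * 8 / 1024 AS size_mb,   -- size is reported in 8 KB pages
           physical_name
    FROM sys.database_files
    ORDER BY size DESC;
    ```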

  • There is a purging job called the cleaner service that runs nightly. It removes data in the detail tables that is older than the CLEAN_DAYS_OF_DETAIL setting that gcp mentioned.

    The table CONDPRM should have a LR_CLEAN date for each monitored instance. Do open a case and send the logs to support.
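
    If you want to sanity-check those dates yourself before opening the case, a simple query against that table should do it (CONDPRM's full column list isn't documented in this thread, so SELECT * is the safe sketch):

    ```sql
    -- Run in the DPA repository database. LR_CLEAN should show a recent
    -- date for each monitored instance if the nightly cleaner is running.
    SELECT * FROM CONDPRM;
    ```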

  • FormerMember

    Thanks gcp and jaminsql for the reply!

    Yes gcp, I am using SQL Server as the repository. The space is consumed by the database files; the log space looks alright.

    This is a screenshot of the settings, and yes, the number of days of data history is set to the default value of 30 days.

    clean_details.jpg

    However, since I set it up in March this year, shouldn't the size have stopped growing by now? The cleaner purges anything older than 30 days, so the size should level off after the first month...?

    May I know how many days you have set CLEAN_DAYS_OF_DETAIL to?

  • Default 30 days here...  20 monitored instances, ~90 GB data file for the SQL Server repository

  • Run this query to see what objects in the repo are consuming the most space. That may point to something that's amiss.

    IF OBJECT_ID('tempdb..#t', 'U') IS NOT NULL
        DROP TABLE #t;

    CREATE TABLE #t (
        [name]     NVARCHAR(128),
        [rows]     CHAR(11),
        reserved   VARCHAR(18),
        data       VARCHAR(18),
        index_size VARCHAR(18),
        unused     VARCHAR(18)
    );

    -- get the space used for each table in this database
    INSERT #t EXEC sp_msForEachTable 'EXEC sp_spaceused ''?''';

    SELECT * FROM (
        SELECT [name] AS "Table",
            CONVERT(int, [rows]) AS "Rows",
            CONVERT(int, LEFT([reserved], LEN([reserved]) - 3)) / 1024 AS "Reserved MB",
            CONVERT(int, LEFT([data], LEN([data]) - 3)) / 1024 AS "Data MB",
            CONVERT(int, LEFT([index_size], LEN([index_size]) - 3)) / 1024 AS "Index MB",
            CONVERT(int, LEFT([unused], LEN([unused]) - 3)) / 1024 AS "Unused MB"
        FROM #t
    ) t
    ORDER BY [Reserved MB] DESC;

    DROP TABLE #t;

  • FormerMember in reply to mandevil

    Thanks all and sorry for the late reply.

    mandevil, thanks for the script.

    I have run the script and these are the top 3 tables.

    top.jpg

    Hmm... guess I shall log a ticket with DPA support.

  • Yeah, probably a good idea at this point. The CONSS tables hold SQL stats; you can adjust an option for limited stats, which could significantly reduce the size of those tables for instances 2 and 3. Those are likely just very active instances.

    CONSPT is the SQL plan text table. This can grow large if there are large plans or just a lot of different SQL statements being run (lots of plans). For that, I'd check the profile and see if the SQL is using literals (if so, you might look into using parameterization).
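
    For example, if the monitored instance is SQL Server and the workload is full of literal-only statements, forced parameterization is one option to test. This is a standard SQL Server database setting applied to the monitored database, not a DPA setting, and the database name below is a placeholder, so weigh the plan-reuse trade-offs for your workload first:

    ```sql
    -- Without parameterization, each distinct literal compiles its own plan:
    --   SELECT * FROM orders WHERE order_id = 1;
    --   SELECT * FROM orders WHERE order_id = 2;  -- a second cached plan
    -- Forced parameterization collapses these into one parameterized plan,
    -- which also reduces the number of distinct plans DPA has to store.
    ALTER DATABASE YourMonitoredDb SET PARAMETERIZATION FORCED;
    ```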