Version 1

    Below are answers to questions asked during the recent webinar. If you have additional questions, please contact Janis Griffin at janisgriffin@confio.com.

    Q: Tom Kyte from Oracle recommends using the default method_opt in 11gR2. Do you agree?
    A: Yes, I agree that the default for method_opt is sufficient, as in my first example of setting the deptno histogram. The default for method_opt is 'FOR ALL COLUMNS SIZE AUTO'.
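    To make this concrete, here is a minimal sketch of gathering statistics with the default method_opt (the SCOTT.EMP schema/table names are assumptions for illustration):

```sql
-- Gather table stats using the default method_opt
-- (SCOTT.EMP are assumed names for illustration)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SCOTT',
    tabname    => 'EMP',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');  -- the default
END;
/
```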

    Q: So is using a frequency histogram no better than no histogram?
    A: Quite the opposite: using a histogram on the deptno column was more efficient because the column was skewed; over half the records were for Dept. '40'. In general, the only reason you would use a histogram is when you have a column with data skew.

    Q: Do "free" graphical plans require the Epviewer file, or also Confio?
    A: The free graphical tool that I showed (Epviewer) requires the following:

    Q: How do you set it to level 12?
    A: There are several ways to turn on tracing at the different levels. To turn it on in your own session, you can use "ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';". Also, look up the dbms_system, dbms_support and dbms_monitor packages.
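    As a hedged sketch of both approaches (the SID and serial# values are placeholders you would look up in v$session):

```sql
-- Level 12 = waits + binds, for the current session
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the SQL to be traced ...

ALTER SESSION SET EVENTS '10046 trace name context off';

-- The same for another session via dbms_monitor
-- (123/456 are placeholder SID/serial# values)
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => 123,
    serial_num => 456,
    waits      => TRUE,   -- wait events
    binds      => TRUE);  -- bind values; together, the level-12 equivalent
END;
/
```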

    Q: Which new procedure of the dbms_xplan package is your favorite in 11g?
    A: My favorites are the two new functions: display_sql_plan_baseline and display_plan (with a type of 'HTML' or 'ACTIVE'). I use display_sql_plan_baseline when dealing with baselines, and I like the 'HTML' look for the execution plan.
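    A minimal sketch of the HTML output, assuming a hypothetical emp/deptno query (display_plan reads the plan table populated by EXPLAIN PLAN):

```sql
-- Explain a statement (emp/deptno are assumed names)
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 40;

-- Render the plan as HTML; spool or save the CLOB to a .html file
SELECT DBMS_XPLAN.DISPLAY_PLAN(type => 'HTML') FROM dual;
```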

    Q: What is the name of the free tool you just mentioned?
    A: Epviewer

    Q: Did you say Swingbench when talking about SELECT on slide #25?
    A: Yes, http://www.dominicgiles.com/swingbench.html is the actual URL for the download. I used it for my case study. It's a free load generator that mimics an Order Entry system.

    Q: Total execution of this query was 11 seconds. Is that the time shown in the execution plan?
    A: For the case study, the time spent originally was 11 seconds per average execution; after tuning, it was 0.01 seconds per average execution. The execution plan originally showed a total time of 37 seconds, which was reduced to 12 seconds total after tuning.

    Q: In a multi-tier system, for instance, a query is run from a client, goes through the middle tier, hits the database, gets the job done, and returns the result to the end user. Can Ignite visually display how much time is spent (and other metrics) on each tier to quickly identify the bottleneck in the event of an issue or slowdown?
    A: Ignite is focused on database response time and does not record the full end-to-end view. However, Ignite will quickly rule out the database if you are experiencing performance issues elsewhere. Also, it's worth noting that if you develop in-house, it's good practice to get the developers using the dbms_application_info package to call out what the application is doing; Ignite can display this information to give you more context. Ignite also has machine and O/S user dimensions that can give you an idea of where to start looking if the problem is the network and/or a specific application server, etc.
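    For developers instrumenting their code, a minimal sketch of the dbms_application_info call (the module and action names are assumptions):

```sql
-- Tag the current session's work so monitoring tools can display it
-- ('ORDER_ENTRY' / 'INSERT_ORDER' are assumed names)
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'ORDER_ENTRY',
    action_name => 'INSERT_ORDER');
END;
/
```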

    Q: What changes with the tuning do you see on exadata appliance?
    A: My experience with Exadata environments is very limited, so I'll defer to Kerry Osborne's paper: http://kerryosborne.oracle-guy.com/papers/Tuning%20Exadata.pdf

    Q: What is the name of the benchmark that Janis used that included product and inventory?
    A: I created a load using Swingbench and then monitored it with Ignite. The screenshots showing the amount of time spent in the database, as well as the throughput (executions), are Ignite screenshots.

    Q: Why would I use Ignite instead of dbconsole or oem grid?
    A: Ignite is solely focused on end user response time and is not a generic database management tool. With Ignite you can quickly find the root cause of an issue with as few as 3-4 key clicks. Also, Ignite is agentless, so no software runs on the monitored database server. This allows developers as well as other team members to utilize Ignite and collaborate quickly on finding a solution. There are several good OEM articles on our website; however, you may want to review this one: http://marketo.confio.com/WP_Ignite_vs_OEM_Comparison.html

    Q: How do you decide which histogram to use - a frequency or height-based histogram?
    A: It depends on the number of distinct values in the column. If there are 254 or fewer distinct values, Oracle will use a frequency histogram. If there are more than 254 distinct values, a height-balanced histogram will be used, because Oracle needs to store ranges of values in each bucket. My example of changing to a height-balanced histogram was to show how you can force the number of buckets and affect the density of the column. Normally, I would never do this during a tuning session! I usually allow Oracle to choose which type of histogram it thinks is best, but I also verify that it makes sense.
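    As a hedged sketch (SCOTT.EMP and deptno are assumed names), this is how a bucket count can be forced, and how to check which histogram type Oracle actually chose:

```sql
-- Force a specific bucket count on one column
-- (SCOTT.EMP / deptno are assumed names; normally let Oracle choose)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SCOTT',
    tabname    => 'EMP',
    method_opt => 'FOR COLUMNS deptno SIZE 10');  -- 10 buckets, forced
END;
/

-- Verify the histogram type and bucket count Oracle stored
SELECT column_name, histogram, num_buckets
  FROM user_tab_col_statistics
 WHERE table_name = 'EMP';
```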

    Q: If a query uses 3-4 tables, each having about 200K rows, and all have to do full table scans, do we first think of parallel hints or of something else first?
    A: I would first ask: why do they have to do full table scans? What is the filtering criteria? How many rows are going to be returned? If very few rows are being returned, then I would look for the table with the most specific filtering criteria and try to determine if I can add an index. This gives the optimizer more information when determining which table it should access first. It really depends on the amount of data being returned. If the query has to return all rows and all columns of all 4 tables, then I'd use parallel hints, but only after I've exhausted all other possible tuning efforts. In my opinion, turning on parallel query isn't really tuning; it's using more resources to get the job done, whereas when you tune something, it should take fewer resources.
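    A hedged sketch of both options (all table, column, and index names are assumptions): first give the optimizer a selective index, and only fall back to a parallel hint after other tuning is exhausted:

```sql
-- Preferred first step: index the most selective filter column
-- (orders / order_date are assumed names)
CREATE INDEX orders_date_ix ON orders (order_date);

-- Last resort after other tuning: request parallel scans
SELECT /*+ PARALLEL(o, 4) */ o.order_id, c.cust_name
  FROM orders o
  JOIN customers c ON c.cust_id = o.cust_id
 WHERE o.order_date > SYSDATE - 1;
```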

    Q: Is there any way to tune a query based on views on top of views?
    A: Yes, depending on how specific or generic the views are. An execution plan will show the detailed steps of working with views on views. If a view access seems expensive, you may have to first tune the underlying view before working on the query that uses it.

    Q: I have had too many issues with SQL Plan Management, bugs etc. Have you had any success with SQL Plan Management?
    A: While I don't know what issues you've encountered with SPM, I do know there are bugs around the auto capture feature and around allowing the tuning advisor to automatically verify and evolve a baseline. I have used baselines successfully, but I don't use auto capture; instead, I select specific queries that are having performance regression issues. Also, I don't let the tuning advisor decide: I evaluate and evolve a plan into a baseline after reviewing it myself.
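    A hedged sketch of that manual workflow (the sql_id and sql_handle values are placeholders you would look up in v$sql and dba_sql_plan_baselines):

```sql
-- Manually capture the plans for one specific query from the cursor cache
-- ('abcd1234efgh5' is a placeholder sql_id)
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abcd1234efgh5');
END;
/

-- After reviewing the plan yourself, evolve it into the baseline
-- ('SQL_1234567890abcdef' is a placeholder sql_handle)
SELECT DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
         sql_handle => 'SQL_1234567890abcdef',
         verify     => 'YES',
         commit     => 'YES')
  FROM dual;
```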

    Q: How do you define the correct access for a developer to use these tools? DBAs are very careful about granting access to statistics tables or DB data.
    A: If I understand your question correctly, you don't allow the developer into production, but you want him to see the execution plan. You can give him the actual plan out of the v$sql_plan table, create an HTML plan to view in a browser via dbms_xplan.display_cursor, or give him the graphical plan using display_plan ('ACTIVE') or the free tool that I list in my presentation. You can also download Ignite, as it has a central repository where the developer can view the actual execution plans without touching the production database.
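    A minimal sketch of pulling the actual plan from the cursor cache (the sql_id is a placeholder you would look up in v$sql):

```sql
-- Display the actual execution plan with runtime statistics
-- ('abcd1234efgh5' is a placeholder sql_id)
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
               sql_id => 'abcd1234efgh5',
               format => 'ALLSTATS LAST'));
```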

    Q: Does Confio software help in tuning large partitioned data warehouse fact tables?
    A: Confio Ignite will show you where Oracle is spending most of its time on those fact tables and/or other objects, as well as how much time is being spent at the file level.

    Q: Which would be preferred: a function-based index or extended statistics?
    A: I wouldn't compare these two features as a choice of one over the other. Extended statistics, a new feature in 11g, let you put a histogram on multiple columns where the data in those columns is somehow related and the combination of column values is skewed; it's a great feature if you have a table containing this type of data. A function-based index usually changes the content of a column or columns in some fashion (e.g. nvl(salary, 0), or first_name||' '||last_name). It's possible to gather extended statistics on a function-based index expression if the data is skewed, because it gives the optimizer more information.
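    A hedged sketch showing one of each, side by side (all table and column names are assumptions):

```sql
-- Extended statistics on a related column group
-- (customers / state / zip_code are assumed names)
DECLARE
  ext VARCHAR2(100);
BEGIN
  ext := DBMS_STATS.CREATE_EXTENDED_STATS(
           ownname   => 'SCOTT',
           tabname   => 'CUSTOMERS',
           extension => '(state, zip_code)');
END;
/

-- A function-based index, by contrast, indexes a transformed value
-- (emp / last_name are assumed names)
CREATE INDEX emp_name_ix ON emp (UPPER(last_name));
```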