Here are 10 scenario-based interview questions about database performance tuning, along with detailed answers.
Scenario-Based Questions and Answers
Scenario 1: Slow-Running Report
Question: A critical daily report that used to run in 5 minutes now takes over an hour. You haven't changed the query. What are the first three things you investigate?
Answer:
Check for data volume growth: The amount of data has likely increased significantly. A query that performed well on a small dataset might be inefficient on a large one, especially if it relies on full table scans or poor join strategies.
Look for index issues: The indexes might have become fragmented due to high data turnover (inserts, updates, deletes). You would check the fragmentation level and rebuild or reorganize them if necessary.
Investigate statistics freshness: Outdated statistics can lead the query optimizer to choose a bad execution plan. You would check the last update time of table and index statistics and update them to reflect the current data distribution.
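The statistics check above can be sketched with SQLite, whose ANALYZE command plays the same role as ANALYZE in PostgreSQL, UPDATE STATISTICS in SQL Server, or DBMS_STATS in Oracle. Table and index names below are made up for illustration:

```python
import sqlite3

# Minimal sketch (illustrative names): ANALYZE refreshes the optimizer
# statistics that the planner uses to choose execution plans.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT)")
conn.execute("CREATE INDEX idx_region ON sales (region)")
conn.executemany("INSERT INTO sales (region) VALUES (?)",
                 [("north",)] * 900 + [("south",)] * 100)

def stats_collected(conn):
    # sqlite_stat1 only exists after ANALYZE has run at least once
    return conn.execute("SELECT count(*) FROM sqlite_master "
                        "WHERE name = 'sqlite_stat1'").fetchone()[0] > 0

before = stats_collected(conn)   # False: the planner has no statistics yet
conn.execute("ANALYZE")
after = stats_collected(conn)    # True: sqlite_stat1 now holds row counts
stats_rows = conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(before, after, stats_rows)
```

On a real production system you would check the last statistics update time (e.g., STATS_DATE in SQL Server) rather than rerunning ANALYZE blindly.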
Scenario 2: High CPU Usage
Question: The database server's CPU usage is consistently at 95% during peak hours, causing slow application response times. How do you identify the root cause?
Answer:
You would use database performance monitoring tools to identify the top resource-consuming queries. You can query system views (like sys.dm_exec_query_stats in SQL Server or pg_stat_statements in PostgreSQL) to find the queries with the highest CPU time. Once identified, you'd analyze their execution plans to pinpoint inefficient operations, such as full table scans or complex sorts, and tune them by adding or modifying indexes.
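A toy version of what those system views accumulate can be sketched in Python: record elapsed time per query text, then rank. The table and queries are hypothetical, and a real DMV tracks far more (CPU time, reads, execution counts):

```python
import sqlite3
import time
from collections import defaultdict

# Sketch of the idea behind sys.dm_exec_query_stats / pg_stat_statements:
# accumulate elapsed time per query text and rank to find the top consumer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO t (val) VALUES (?)", [(i,) for i in range(100_000)])

query_stats = defaultdict(float)   # query text -> total elapsed seconds

def timed_execute(sql, params=()):
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    query_stats[sql] += time.perf_counter() - start
    return rows

timed_execute("SELECT * FROM t WHERE id = ?", (42,))    # indexed lookup: cheap
timed_execute("SELECT * FROM t WHERE val = ?", (42,))   # no index: full scan
top_query = max(query_stats, key=query_stats.get)
print(top_query)  # the unindexed full-scan query dominates elapsed time
```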
Scenario 3: Application Deadlocks
Question: The application team reports frequent deadlocks, causing transactions to fail randomly. What steps do you take to troubleshoot and fix this problem?
Answer:
First, you'd enable deadlock tracing or logging to capture a deadlock graph. The graph will show you the two transactions involved, the resources (tables or rows) they are trying to lock, and the order of their requests. The solution usually involves one of these strategies:
Modify transaction logic: Ensure all transactions acquire locks in the same, consistent order to prevent a circular wait.
Reduce transaction scope: Make transactions as short as possible to minimize the time locks are held.
Use a different isolation level: In some cases, changing the transaction isolation level (e.g., from SERIALIZABLE to READ COMMITTED) can reduce the frequency of locks.
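The lock-ordering fix can be sketched with ordinary threads and locks standing in for transactions and row locks. Because every "transaction" acquires A before B, a circular wait can never form:

```python
import threading

# Sketch: two "transactions" both need locks A and B. With a consistent
# acquisition order (always A first, then B), deadlock is impossible.
lock_a, lock_b = threading.Lock(), threading.Lock()
completed = []

def transaction(name):
    with lock_a:        # consistent order: A first...
        with lock_b:    # ...then B, in every transaction
            completed.append(name)

threads = [threading.Thread(target=transaction, args=(f"txn{i}",)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join(timeout=5)
print(sorted(completed))  # ['txn1', 'txn2']: both finish, no deadlock
```

If one thread instead acquired B before A, the two could each hold one lock and wait forever on the other, which is exactly the circular wait a deadlock graph shows.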
Scenario 4: Slow INSERT Operations
Question: A nightly batch job that inserts millions of rows is taking too long. What could be the issue, and how would you optimize it?
Answer:
The problem is often related to indexes and constraints. Every INSERT operation must update all indexes on the table, which can be very slow.
Drop and recreate indexes: For a large bulk load, it's often faster to drop all non-clustered indexes, perform the inserts, and then recreate them after the data is loaded.
Disable constraints: Temporarily disable foreign key and check constraints during the load and re-enable them after, as this saves time by not checking every inserted row.
Batching: If possible, insert the data in smaller batches instead of a single massive transaction to reduce the size of the transaction log and memory pressure.
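The batching strategy can be sketched with SQLite (table and batch size are illustrative): load rows in moderate batches, each committed as a single transaction, instead of one giant transaction or row-at-a-time autocommits:

```python
import sqlite3

# Sketch of batched bulk loading (illustrative names and sizes).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER, payload TEXT)")

rows = [(i, f"row-{i}") for i in range(100_000)]
BATCH_SIZE = 10_000
for start in range(0, len(rows), BATCH_SIZE):
    with conn:  # the context manager commits each batch as one transaction
        conn.executemany("INSERT INTO staging VALUES (?, ?)",
                         rows[start:start + BATCH_SIZE])

loaded = conn.execute("SELECT count(*) FROM staging").fetchone()[0]
print(loaded)  # 100000
```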
Scenario 5: High Disk I/O
Question: The database server's disk I/O is consistently high, slowing down the entire system. How do you find the source of the I/O bottleneck?
Answer:
You would use system monitoring tools and database performance views to identify the objects causing the high I/O.
Find I/O-intensive queries: Look for queries performing a high number of logical or physical reads. The WHERE and JOIN clauses of these queries are good candidates for index optimization.
Check for full table scans: High I/O is often a symptom of full table scans, where the database has to read a huge amount of data from disk.
Analyze index usage: Verify if the most I/O-intensive tables have proper indexes on the columns used in their WHERE and JOIN clauses. A missing index is a common culprit.
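The scan-vs-seek distinction shows up directly in an execution plan. A sketch using SQLite's EXPLAIN QUERY PLAN (names are illustrative; other engines expose the same information in their plan output):

```python
import sqlite3

# Sketch: EXPLAIN QUERY PLAN distinguishes a full-table SCAN from an
# index SEARCH.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")

def plan(sql):
    # the 4th column of each plan row is the human-readable detail
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 7"
before = plan(query)                                        # SCAN orders
conn.execute("CREATE INDEX idx_orders_cust ON orders (customer_id)")
after = plan(query)          # SEARCH orders USING INDEX idx_orders_cust (...)
print(before)
print(after)
```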
Scenario 6: Inefficient Joins
Question: A query joining five large tables is taking a long time. The execution plan shows a NESTED LOOPS join on two very large tables. Is this a good sign, and how would you fix it?
Answer:
A NESTED LOOPS join is efficient only when the outer table is small. In this case, with two large tables, it's a very poor choice, as the inner table is scanned for every row of the outer table.
You would investigate if there are proper indexes on the join columns of the inner table. A lack of an index forces a table scan. The query optimizer may have also picked the wrong join order. You could add an index to the inner table's join column, which would enable a more efficient join method like a MERGE or HASH join.
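The cost difference between the two join methods can be sketched in plain Python: nested loops rescans the inner input for every outer row (O(n·m) comparisons), while a hash join builds a hash table on the join key once and probes it (roughly O(n+m)). This is a simplified model, not a real optimizer:

```python
# Sketch contrasting join algorithms on toy row dictionaries.
def nested_loops_join(outer, inner, key):
    # rescan inner for every outer row: O(n*m) comparisons
    return [(o, i) for o in outer for i in inner if o[key] == i[key]]

def hash_join(outer, inner, key):
    buckets = {}
    for i in inner:                                  # build phase: one pass
        buckets.setdefault(i[key], []).append(i)
    return [(o, i) for o in outer for i in buckets.get(o[key], [])]  # probe

outer = [{"id": n} for n in range(1_000)]
inner = [{"id": n} for n in range(0, 1_000, 2)]
nl = nested_loops_join(outer, inner, "id")
hj = hash_join(outer, inner, "id")
print(len(nl), len(hj))  # 500 500: identical results, very different cost
```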
Scenario 7: The "SELECT *" Problem
Question: A developer is using SELECT * in many queries. Explain why this is a performance problem and what you would advise.
Answer:
Using SELECT * is problematic because:
Increased Network Traffic: It sends unnecessary data over the network, which can be a significant bottleneck for large result sets.
Inefficient I/O: The database has to read more data pages from disk than necessary.
Prevents Covering Indexes: The query optimizer cannot answer the query from a covering index (an index that contains all the required columns) because SELECT * forces it to fetch every column from the base table.
You would advise the developer to explicitly list only the columns they need in their SELECT statement.
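The covering-index point is easy to demonstrate with SQLite's plan output (names are illustrative). With an explicit column list the query is satisfied from the index alone; SELECT * forces a lookup into the table for the remaining columns:

```python
import sqlite3

# Sketch: explicit columns allow a COVERING INDEX; SELECT * does not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, bio TEXT)")
conn.execute("CREATE INDEX idx_users_id_name ON users (id, name)")

def plan(sql):
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

star_plan = plan("SELECT * FROM users WHERE id = 1")
cols_plan = plan("SELECT id, name FROM users WHERE id = 1")
print(star_plan)  # SEARCH users USING INDEX idx_users_id_name (id=?)
print(cols_plan)  # SEARCH users USING COVERING INDEX idx_users_id_name (id=?)
```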
Scenario 8: Parameter Sniffing Issue
Question: A stored procedure runs fast for some users but is very slow for others. You suspect a parameter sniffing problem. What is it, and how can you resolve it?
Answer:
Parameter sniffing is when the query optimizer creates an execution plan based on the very first parameter value it "sniffs" when the procedure is first executed. If subsequent parameter values are skewed (e.g., the first user queries a rare value, and the next user queries a very common value), the cached plan may be inefficient for the second user.
To resolve this, you can:
Use the RECOMPILE option on the stored procedure to force a new plan for every execution.
Use OPTION (RECOMPILE) within the query itself.
Use OPTIMIZE FOR UNKNOWN or declare local variables to copy parameter values, which prevents the optimizer from sniffing the initial value.
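The mechanism can be illustrated with a toy simulation (not a real optimizer; all names and numbers are hypothetical): the plan compiled for the first parameter value is cached and reused, even when later values have very different selectivity, and a recompile fixes the mismatch:

```python
# Toy parameter-sniffing simulation with a skewed data distribution.
value_counts = {"rare": 10, "common": 900_000}   # hypothetical skew

def compile_plan(param):
    # an index seek only pays off for selective (rare) values
    return "index seek" if value_counts[param] < 1_000 else "full scan"

plan_cache = {}

def execute(proc, param, recompile=False):
    if recompile or proc not in plan_cache:
        plan_cache[proc] = compile_plan(param)   # plan "sniffed" from param
    return plan_cache[proc]

first = execute("get_orders", "rare")                    # good for 'rare'
second = execute("get_orders", "common")                 # cached plan, bad fit
fixed = execute("get_orders", "common", recompile=True)  # correct plan
print(first, second, fixed)
```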
Scenario 9: Table with Many Columns
Question: You have a table with over 100 columns, and queries on it are slow. What's the potential issue?
Answer:
A table with a very high number of columns produces wide rows, so fewer rows fit on each data page and queries must read more pages. Updates that grow a row can also cause page splits or row forwarding, where the database moves the row to a new page to accommodate the change, further increasing I/O. A common solution is partitioning. Vertical partitioning splits the table into multiple smaller tables, with frequently accessed columns in one table and rarely accessed ones in another.
Scenario 10: Missing Foreign Key Indexes
Question: You notice a query joining two tables on their foreign key columns is slow. You check, and there is no index on the foreign key column in the child table. Why is this a problem?
Answer:
While a primary key is automatically indexed, a foreign key is not. A foreign key without an index can cause JOINs and DELETE operations on the parent table to perform a full table scan on the child table to check for related rows, which is extremely inefficient and a common performance bottleneck. Adding an index on the foreign key column is the correct solution.
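The child-side lookup that a JOIN, or the referential check behind a DELETE on the parent, must perform can be shown with SQLite (names are illustrative). Without an index on the foreign key column it is a full scan of the child table:

```python
import sqlite3

# Sketch: the child lookup is a SCAN without an FK index, a SEARCH with one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id));
""")

def plan(sql):
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

lookup = "SELECT * FROM child WHERE parent_id = 1"
before = plan(lookup)                                      # SCAN child
conn.execute("CREATE INDEX idx_child_parent ON child (parent_id)")
after = plan(lookup)           # SEARCH child USING INDEX idx_child_parent
print(before)
print(after)
```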
==========================================
MySQL Performance Tuning: A Comprehensive Guide
MySQL performance tuning is a critical aspect of database administration.
1. Database Design and Schema Optimization
- Normalize Your Data: Break down complex data into simpler, normalized tables to reduce redundancy and improve data integrity.
- Choose Appropriate Data Types: Select data types that match the data you're storing to minimize storage space and improve query performance.
- Index Strategically: Create indexes on frequently queried columns to speed up data retrieval.
However, avoid over-indexing as it can slow down insert, update, and delete operations.
2. Query Optimization
- Write Efficient Queries:
  - Minimize the number of queries executed.
  - Use EXPLAIN to analyze query execution plans.
  - Avoid SELECT * and specify only the necessary columns.
  - Use LIMIT and OFFSET clauses judiciously.
  - Optimize JOIN operations.
  - Leverage subqueries and common table expressions (CTEs) effectively.
- Indexing:
  - Create indexes on columns frequently used in WHERE, JOIN, and ORDER BY clauses.
  - Consider composite indexes for multiple columns.
  - Regularly analyze and optimize indexes.
3. Hardware and Configuration Optimization
- Hardware:
  - Ensure sufficient CPU, RAM, and disk I/O capacity.
  - Use solid-state drives (SSDs) for faster data access.
- MySQL Configuration:
  - Tune MySQL configuration parameters like innodb_buffer_pool_size and innodb_log_file_size.
  - Adjust connection pool settings to handle concurrent connections efficiently.
  - Optimize memory usage to reduce disk I/O.
4. Monitoring and Profiling
- Use MySQL's Built-in Tools:
  - SHOW STATUS and SHOW GLOBAL STATUS to monitor server status.
  - EXPLAIN to analyze query execution plans.
  - The slow query log to identify slow-running queries.
- Third-Party Tools:
  - Use tools like Percona Monitoring and Management (PMM) for advanced monitoring and analysis.
5. Caching
- Query Cache: On older MySQL versions, enable the query cache to store and reuse query results. Note that the query cache was deprecated in MySQL 5.7 and removed in MySQL 8.0, so on modern versions rely on application-level caching instead.
- Application-Level Caching: Use caching mechanisms in your application to reduce database load.
6. Regular Maintenance
- Database Backups: Regularly back up your database to protect against data loss.
- Optimize Tables: Periodically optimize tables to reclaim unused space.
- Monitor and Tune: Continuously monitor your database's performance and make adjustments as needed.
By following these guidelines and leveraging the tools available, you can significantly improve the performance of your MySQL database and ensure optimal application performance.
==============================================
MySQL performance tuning
MySQL performance tuning is the process of optimizing MySQL queries and database systems to improve performance and efficiency. Here are some tips for improving MySQL performance:
- Indexing: Use appropriate indexing to reduce fetch time. Choose the right data types for indexed columns.
- Query optimization: Optimize SELECT statements and avoid SELECT *. Specify only the columns you need. Use joins instead of subqueries.
- Database schema: Normalize your database schema.
- Resource utilization: Monitor and analyze resource utilization.
- Hardware: Tune MySQL for your hardware.
- Storage engine: Switch to the MySQL InnoDB storage engine instead of MyISAM.
- Version: Update MySQL to the latest version.
- Performance improvement tools: Use automatic performance improvement tools.
- EXPLAIN: Use the EXPLAIN command to understand query execution.
- GROUP BY: Use GROUP BY instead of SELECT DISTINCT.
- Predicates: Avoid functions in predicates and avoid wildcards (%) at the beginning of predicates.
- DISTINCT and UNION: Use DISTINCT and UNION only if necessary.
- SELECT clause: Avoid unnecessary columns in the SELECT clause.
===============================
Oracle Performance Tuning Interview Questions for Freshers
1. What is Performance Tuning?
Ans: Making optimal use of the system using existing resources is called performance tuning.
2. What are the different types of Tunings?
Ans:
- CPU Tuning
- Memory Tuning
- IO Tuning
- Application Tuning
- Database Tuning
3. What does Database Tuning mainly contain?
Ans:
- Hit Ratios
- Wait Events
4. What is an optimizer?
Ans: The optimizer is the mechanism that builds the execution plan for an SQL statement.
5. Types of Optimizers?
Ans:
- RBO (Rule-Based Optimizer)
- CBO (Cost-Based Optimizer)
6. Which init parameter is used to select the Optimizer?
Ans: optimizer_mode. Its values are: rule (use the RBO), cost (use the CBO), and choose (use the CBO if statistics exist, otherwise the RBO).
7. Which optimizer is the best one?
Ans: CBO
8. What are the prerequisites for making use of the Optimizer?
Ans:
- Set the optimizer mode
- Collect the statistics of an object
=====================================
What is SQL Performance Tuning?
SQL performance tuning is the process of optimizing SQL queries to improve the speed and efficiency of database operations. It involves various techniques to optimize the execution of queries, manage system resources more effectively, and ensure that the database responds quickly to user requests.
Optimizing SQL performance is crucial because poorly optimized queries can severely affect the speed of the database, increase CPU usage, and lead to system downtime. By improving query execution times and resource utilization, performance tuning enhances the overall performance of the SQL database.
Factors Affecting SQL Speed
Some of the major factors that influence the computation and execution time in SQL are:
- Table Size: Larger tables with millions of rows can slow down query performance if the query hits a large number of rows.
- Joins: The use of complex joins, especially when joining multiple tables, can significantly affect query execution time.
- Aggregations: Queries that aggregate large datasets require more processing time and resources.
- Concurrency: Simultaneous queries from multiple users can overwhelm the database, leading to slow performance.
- Indexes: Proper indexing speeds up data retrieval but, when misused, can lead to inefficiencies.
Ways to Find Slow SQL Queries in SQL Server
1. Creating an Execution Plan
SQL Server Management Studio allows users to view the execution plan, which details how SQL Server processes a query. This plan helps identify inefficiencies like missing indexes or unnecessary table scans. To create an execution plan:
- Start by selecting "Database Engine Query" from the toolbar of SQL Server Management Studio.
- Enter the query after that, and then select "Include Actual Execution Plan" from the Query option.
- It's time to run your query at this point. You can do that by pressing F5 or the "Execute" toolbar button.
- The execution plan will then be shown in the results pane, under the "Execution Plan" tab, in SQL Server Management Studio.
2. Monitor Resource Usage
SQL Server's performance is closely tied to resource usage (CPU, memory, and disk). Monitoring tools like Windows Performance Monitor can track these metrics and highlight performance bottlenecks. With it, you can view SQL Server objects, performance counters, and other object activity. Watch Windows and SQL Server counters simultaneously in System Monitor to see whether the two services' performance is correlated.
3. Use SQL DMVs to Find Slow Queries
SQL Server ships with a rich set of dynamic management views (DMVs) that help identify slow-running queries, execution plans, and resource consumption. DMVs such as sys.dm_exec_query_stats can be used to track query performance.
SQL Query Optimization Techniques
Inefficient queries or those containing errors can consume excessive resources in the production database, leading to slower performance or even disconnecting other users. It's important to optimize queries to minimize their impact on overall database performance.
In this section, we’ll discuss several effective SQL performance tuning techniques, along with practical examples, that can help optimize queries and improve database efficiency. These methods focus on reducing resource consumption and improving execution speed, ensuring a smoother and faster user experience.
1. SELECT fields instead of using SELECT *
Using SELECT * retrieves all columns from a table, but if you only need specific columns this unnecessarily increases processing time. Instead, specify only the columns you need, so the database reads and returns just the data the application actually requires. For example:
Inefficient:
SELECT * FROM GeeksTable;
Efficient:
SELECT FirstName, LastName,
Address, City, State, Zip FROM GeeksTable;
2. Avoid SELECT DISTINCT
SELECT DISTINCT is a convenient way to remove duplicates from a result set, but to produce distinct results the database effectively groups on every field in the select list, which requires significant computing power. Instead of using DISTINCT, refine your query so it returns unique results naturally by adjusting the selection criteria.
Inefficient:
SELECT DISTINCT FirstName, LastName,
State FROM GeeksTable;
Efficient:
SELECT FirstName, LastName,
State FROM GeeksTable WHERE State IS NOT NULL;
3. Use INNER JOIN Instead of WHERE for Joins
Joining tables using the WHERE clause can lead to inefficiencies and unnecessary computations. It's more efficient to use INNER JOIN or LEFT JOIN for combining tables.
Inefficient:
SELECT GFG1.CustomerID, GFG1.Name, GFG1.LastSaleDate
FROM GFG1, GFG2
WHERE GFG1.CustomerID = GFG2.CustomerID
Efficient:
SELECT GFG1.CustomerID, GFG1.Name, GFG1.LastSaleDate
FROM GFG1
INNER JOIN GFG2
ON GFG1.CustomerID = GFG2.CustomerID
4. Use WHERE Instead of HAVING
The HAVING clause is applied after aggregation, so filtering there forces the database to aggregate rows it will then discard. When the filter does not involve an aggregate, use WHERE to filter rows before aggregation to speed up the query. For instance, to find how many sales were made per customer in 2019, filter on the sale date with WHERE rather than HAVING.
Inefficient:
SELECT GFG1.CustomerID, GFG1.Name, GFG1.LastSaleDate
FROM GFG1 INNER JOIN GFG2
ON GFG1.CustomerID = GFG2.CustomerID
GROUP BY GFG1.CustomerID, GFG1.Name
HAVING GFG2.LastSaleDate BETWEEN '1/1/2019' AND '12/31/2019'
Efficient:
SELECT GFG1.CustomerID, GFG1.Name, GFG1.LastSaleDate
FROM GFG1 INNER JOIN GFG2
ON GFG1.CustomerID = GFG2.CustomerID
WHERE GFG2.LastSaleDate BETWEEN '1/1/2019' AND '12/31/2019'
GROUP BY GFG1.CustomerID, GFG1.Name
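That the two forms return the same rows for a non-aggregate filter can be verified with SQLite (table and column names below are illustrative, not the GFG tables above):

```python
import sqlite3

# Sketch: filtering a non-aggregate condition with WHERE (before grouping)
# returns the same rows as HAVING (after grouping) but aggregates less data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, sale_year INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("alice", 2019), ("alice", 2019), ("alice", 2018),
                  ("bob", 2019), ("bob", 2020)])

having = conn.execute("""
    SELECT customer, count(*) FROM sales
    GROUP BY customer, sale_year
    HAVING sale_year = 2019
""").fetchall()
where = conn.execute("""
    SELECT customer, count(*) FROM sales
    WHERE sale_year = 2019
    GROUP BY customer, sale_year
""").fetchall()
print(sorted(where))  # [('alice', 2), ('bob', 1)], same as the HAVING version
```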
5. Limit Wildcards to the End of a Search Term
Wildcards enable the broadest search when searching unencrypted material, such as names or cities. However, the most extensive search is also the least effective. Using wildcards like % at the beginning of a string makes it difficult for SQL to efficiently use indexes. It's better to place them at the end of the search term.
Inefficient:
SELECT City FROM GeekTable WHERE City LIKE '%No%'
Efficient:
SELECT City FROM GeekTable WHERE City LIKE 'No%'
6. Use LIMIT for Sampling Query Results
Limiting the results using LIMIT helps avoid querying the entire table when first testing or analyzing a query. The LIMIT clause returns only the specified number of records, so we can avoid stressing the production database with a big query only to discover that it needs to be edited or improved.
Query:
SELECT GFG1.CustomerID, GFG1.Name, GFG1.LastSaleDate
FROM GFG1
INNER JOIN GFG2
ON GFG1.CustomerID = GFG2.CustomerID
WHERE GFG2.LastSaleDate BETWEEN '1/1/2019' AND '12/31/2019'
GROUP BY GFG1.CustomerID, GFG1.Name
LIMIT 10
7. Run Queries During Off-Peak Hours
Running heavy queries during off-peak hours reduces the load on the database, minimizing the impact on other users. Plan analytical queries to run at a time when the database is least busy, which is typically overnight when the number of concurrent users is at its lowest.
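The wildcard guidance in technique 5 can be checked against a real planner. A sketch with SQLite (names are illustrative; the case_sensitive_like pragma is required for SQLite's LIKE prefix optimization to apply with a default BINARY-collated index):

```python
import sqlite3

# Sketch: a trailing wildcard can use an index range search; a leading
# wildcard cannot, so the planner falls back to a scan.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA case_sensitive_like = ON")  # enables the LIKE optimization
conn.execute("CREATE TABLE geek (city TEXT)")
conn.execute("CREATE INDEX idx_city ON geek (city)")

def plan(sql):
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

leading = plan("SELECT city FROM geek WHERE city LIKE '%No%'")
trailing = plan("SELECT city FROM geek WHERE city LIKE 'No%'")
print(leading)   # SCAN ... (wildcard at the start: index unusable for seeking)
print(trailing)  # SEARCH geek USING COVERING INDEX idx_city (...)
```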
Index Tuning
Index tuning is the part of database tuning concerned with choosing and building indexes; its objective is to speed up query processing. Employing indexes can be challenging in dynamic environments with many ad-hoc queries that cannot be planned in advance. Modern tuning tools can analyze the query workload and generate or adjust indexes automatically as needed, so database users do not need to take any specific action to tune them.
Advantages of Index Tuning
The performance of queries and databases can be improved by using the Index tuning wizard. It accomplishes this using the following methods:
- Recommendations for optimal index usage based on query optimizer analysis and workload
- Examination of changes in query distribution, index utilization, and performance to determine impact
- Suggestion of fine-tuning strategies for problematic queries
- Use of SQL Profiler to record activity traces and improve performance
Points to consider while creating indexes:
- Short indexes for reduced disk space and faster comparisons
- Distinct indexes with minimal duplicates for better selectivity
- Clustered indexes covering all row data for optimal performance
- Static data columns for clustered indexes to minimize shifting
SQL Performance Tuning Tools
Utilizing index tuning tools and following best practices is essential for maintaining high-performing SQL Server environments. Regular monitoring, proactive maintenance, and continuous improvement are key to optimizing database performance and supporting critical business applications.
Several SQL performance tuning tools can help identify and optimize database performance. Some of the popular tools include:
- SQL Sentry (SolarWinds)
- SQL Profiler (Microsoft)
- SQL Index Manager (Red Gate)
- SQL Diagnostic Manager (IDERA)
These tools assist with monitoring, identifying slow queries, and recommending optimization strategies for improving database performance.
======================================
Creating fast SQL queries is crucial for high-performing applications. Whether you are a developer creating web applications, a DBA, or a tester, SQL performance impacts everyone. It is a vital skill for both database programming and validation. To help you prepare, we’ve compiled 25 SQL performance interview questions to challenge and enhance your knowledge.
SQL Performance Interview Questions and Answers
SQL performance tuning is a tough task and key in handling the increased load on a web application. So the interviewer would certainly dig you in and check how well you know about the subject.

Therefore, we’ve selectively picked SQL performance-tuning interview questions that could give you adequate coverage of the SQL performance-tuning concept.

Q:-1. What is SQL Query Optimization?
Ans. Query Optimization is the process of writing the query in a way that it could execute quickly. It is a significant step for any standard application.
Q:-2. What are some tips to improve the performance of SQL queries?
Ans. Optimizing SQL queries can bring a substantial positive impact on performance. It also depends on the level of RDBMS knowledge you have. Let’s now go over some of the tips for tuning SQL queries.
1. Prefer views and stored procedures instead of writing long ad-hoc queries. This also helps minimize network load.
2. It’s better to introduce constraints instead of triggers. They are more efficient than triggers and can increase performance.
3. Make use of table variables instead of temporary tables.
4. For faster results, use UNION ALL. It combines data sets without removing duplicates, unlike UNION which filters them.
5. Avoid unnecessary use of the DISTINCT and HAVING clauses.
6. Avoid excessive use of SQL cursors.
7. Use SET NOCOUNT ON in stored procedures to suppress the "rows affected" message sent after each T-SQL statement. This reduces network traffic.
8. It’s a good practice to select only the columns you need from a table, rather than retrieving all of them.
9. Prefer not to use complex joins and avoid disproportionate use of triggers.
10. Create indexes for tables and adhere to the standards.
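Tip 4 above (UNION ALL vs UNION) is easy to verify: UNION ALL skips the duplicate-elimination step, with its sort or hash cost, that UNION performs. A sketch with SQLite (table names are illustrative):

```python
import sqlite3

# Sketch: UNION deduplicates (extra work); UNION ALL just concatenates.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (v INTEGER);
    CREATE TABLE b (v INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (2), (3);
""")
union = conn.execute("SELECT v FROM a UNION SELECT v FROM b").fetchall()
union_all = conn.execute("SELECT v FROM a UNION ALL SELECT v FROM b").fetchall()
print(len(union), len(union_all))  # 3 4: UNION removed the shared row
```

Use UNION ALL whenever you know the inputs are already disjoint, or duplicates are acceptable.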

Q:-3. What are the bottlenecks that affect the performance of a Database?
Ans. The database layer often becomes the final hurdle in achieving optimal scalability for a web application. Database performance leaks can act as bottlenecks, significantly impacting responsiveness. Here are some common performance issues to be aware of:
1. Abnormal CPU usage is the most obvious performance bottleneck. However, you can fix it by adding more CPU units or switching to an advanced CPU. It may look like a simple issue but abnormal CPU usage can lead to other problems.
2. Low memory (RAM) is the next most common bottleneck. If the server can't manage the peak load, it poses a big question mark on the performance. Memory is a critical resource for applications to run optimally, as it's far faster than persistent storage. When free RAM falls below a certain threshold, the OS starts using swap space, which makes the application run very slowly.

You can resolve it by expanding the physical RAM, but it won’t solve memory leaks if there are any. In such a case, you should profile the application to identify the potential leaks within its code.
3. Too much dependency on external storage like SATA disk could also make it a bottleneck. Its impact becomes visible while writing large data to the disk. For example, when you see the output operations are quite slow, it indicates the disk is becoming the bottleneck.
In such cases, you need to do scaling. Replace the existing drive with a faster one. Try upgrading to an SSD hard drive or something similar.

Q:-4. What are the steps involved in improving the SQL performance?
Ans.
Discover – First of all, find out the areas of improvement. Explore tools like Profiler, Query execution plans, SQL tuning advisor, dynamic views, and custom stored procedures.
Review – Brainstorm the data available to isolate the main issues.

Propose – Here is a standard approach one can adopt to boost performance. However, you can customize it further to maximize the benefits.
1. Identify fields and create indexes.
2. Modify large queries to make use of indexes created.
3. Refresh the table and views and update statistics.
4. Reset existing indexes and remove unused ones.
5. Look for deadlocks and blocking and resolve them.
Validate – Test the SQL performance tuning approach. Monitor the progress at regular intervals. Also, track if there is any adverse impact on other parts of the application.

Publish – Now, it’s time to share the working solution with everyone on the team. Let them know all the best practices so that they can apply when needed.
Q:-5. What is an “explain” plan?
Ans. EXPLAIN PLAN is an Oracle SQL statement that displays the execution plan the optimizer would use for a SELECT, UPDATE, INSERT, or DELETE statement.
Q:-6. How do you analyze an “explain” plan?
Ans. While analyzing the “explain” plan, check the following areas.

1. Driving table
2. Join order
3. Join method
4. Unintentional cartesian product
5. Nested loops, merge sort, and hash join
6. Full table scan
7. Unused indexes
8. Access paths
Q:-7. How do you tune a query using the “explain” plan?
Ans. The "explain" plan shows the full cost breakdown of the query, including each subquery; cost correlates with query execution time. The plan also reveals which steps of a query or subquery are problematic while fetching data, so you can target those steps for tuning.
Q:-8. What is a Summary Advisor and what type of information does it provide?
Ans. A summary advisor serves as a tool for analyzing and recommending materialized views. It can significantly boost SQL performance by selecting the optimal set for a given workload. Additionally, it offers insights into the recommended materialized views.

Q:-9. What is the most probable reason for a SQL query to run as slowly as 5 minutes?
Ans. One of the main reasons for a query taking over 5 minutes could be a sudden increase in data volume within a table it accesses. To diagnose further, gather statistics on the affected table and monitor any recent changes at the database or object level.
Q:-10. What is a Latch Free Event? And when does it occur? Also, how does the system handle it?
Ans. In Oracle, the Latch Free wait event occurs when a session needs a latch, attempts to get it, but fails because another session holds it.
The session then sleeps waiting for the latch to be freed, wakes up, and tries again; the time it spends inactive is the Latch Free wait time. There is no ordered queue of waiters on a latch, so whichever session requests it first after release acquires it.

Q:-11. What are Proactive tuning and Reactive tuning?
Ans.
Proactive tuning – The architect or the DBA determines which combination of system resources and available Oracle features fulfill the criteria during Design and Development.
Reactive tuning – This is the bottom-up approach to discover and eliminate bottlenecks. The objective is to make Oracle respond faster.

Q:-12. What are Rule-based Optimizer and Cost-based Optimizer?
Ans. Oracle determines how to get the required data for processing a valid SQL statement. It uses one of the following two methods to make this decision.
Rule-based Optimizer – When a server doesn’t have internal statistics supporting the objects referenced by the statement, the RBO method gets preference. However, Oracle will deprecate this method in future releases.
Cost-based Optimizer – When internal statistics are abundant, the CBO gets precedence. It selects an execution plan with the lowest cost based on system resources.
Q:-13. What are several SQL performance tuning enhancements in Oracle?
Ans. Oracle provides many performance enhancements, some of them are:
1. Automatic Performance Diagnostic and Tuning Features
2. Automatic Shared Memory Management – It gives Oracle control of allocating memory within the SGA.
3. Wait-model improvements – Several views have come to boost the Wait-model.
4. Automatic Optimizer Statistics Collection – Collects optimizer statistics using a scheduled job called GATHER_STATS_JOB.
5. Dynamic Sampling – Enables the server to enhance performance.
6. CPU Costing – The optimizer's basic cost model (CPU + I/O), with cost expressed in units of time.
7. Rule-Based Optimizer Obsolescence – No more used.
8. Tracing Enhancements – End-to-end tracing allows a client process to be identified via the Client Identifier while not using the typical Session ID.
Q:-14. What are the tuning indicators Oracle proposes?
Ans. The following high-level tuning indicators are available to establish if a database is experiencing bottlenecks or not:

1. Buffer Cache Hit Ratio.
It uses the following formula.
Hit Ratio = (Logical Reads - Physical Reads) / Logical Reads

Action: Advance the DB_CACHE_SIZE (DB_BLOCK_BUFFERS before 9i) that improves the hit ratio.
2. Library Cache Hit Ratio.
Action: Advance the SHARED_POOL_SIZE to increase the hit ratio.
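A worked example of the Buffer Cache Hit Ratio formula above, with hypothetical read counts:

```python
# Worked example of: Hit Ratio = (Logical Reads - Physical Reads) / Logical Reads
def hit_ratio(logical_reads, physical_reads):
    return (logical_reads - physical_reads) / logical_reads

# Hypothetical numbers: 100,000 logical reads, 8,000 of which hit disk.
ratio = hit_ratio(100_000, 8_000)
print(f"{ratio:.0%}")  # 92%
```

A low ratio suggests the buffer cache is too small for the working set, which is why the suggested action is to increase DB_CACHE_SIZE.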

Q:-15. What do you check first when the SYSTEM tablespace has multiple fragments?
Ans. Firstly, check if the users don’t have the SYSTEM tablespace as their TEMPORARY or DEFAULT tablespace assignment by verifying the DBA_USERS view.
Q:-16. When would you add more Copy Latches? What are the parameters that control the Copy Latches?
Ans. Check the "redo copy" latch hit ratio to see whether there is excessive contention for the Copy Latches.
If there is, add more Copy Latches via the initialization parameter LOG_SIMULTANEOUS_COPIES, up to twice the number of CPUs available.

Q:-17. How do you confirm if a tablespace has disproportionate fragmentation?
Ans. You can confirm it by running a SELECT against the dba_free_space view. If the number of free-space extents in a tablespace is greater than the count of its data files, the tablespace has excessive fragmentation.
Q:-18. What can you do to optimize the %XYZ% queries?
Ans. Firstly, set the optimizer to scan all the entries of the index instead of the table, which you can achieve by specifying hints.
Please note: scanning the smaller index takes less time than scanning the entire table.

Q:-19. Where do the I/O statistics per table exist in Oracle?
Ans. There is a report known as UTLESTAT which displays the I/O per tablespace. However, it doesn’t help pinpoint the table with the highest I/O activity.
Q:-20. When is the right time to rebuild an index?
Ans. Firstly, select the target index and run the ‘ANALYZE INDEX VALIDATE STRUCTURE’ command. Every time you run it, a single row will get created in the INDEX_STATS view.
But the row gets overwritten the next time you run the ANALYZE INDEX command. So move the contents of the view to a local table. After that, analyze the ratio of ‘DEL_LF_ROWS’ to ‘LF_ROWS’ and see if you need to rebuild the index.
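The decision rule described above can be sketched in a few lines. The 20% threshold used here is a common rule of thumb among DBAs, not an Oracle-mandated value:

```python
# Sketch of the rebuild decision: compare deleted leaf rows (DEL_LF_ROWS)
# to total leaf rows (LF_ROWS) from INDEX_STATS. Threshold is a rule of thumb.
def should_rebuild(del_lf_rows, lf_rows, threshold=0.20):
    if lf_rows == 0:
        return False
    return del_lf_rows / lf_rows > threshold

print(should_rebuild(5_000, 100_000))   # False: only 5% of leaf rows deleted
print(should_rebuild(30_000, 100_000))  # True: 30% deleted, rebuild candidate
```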
Q:-21. What exactly would you do to check the performance issue of SQL queries?
Ans. Often the database itself isn't slow; a particular worker session is dragging performance down, and it is abnormal session activity that causes the bottlenecks. To investigate:
1. Review the events that are in wait or listening mode.
2. Hunt down the locked objects in a particular session.
3. Check whether the SQL query is pointing to the right index or it is not.
4. Launch SQL Tuning Advisor and analyze the target SQL_ID for making any performance recommendation.
5. Run the “free” command to check the RAM usage. Also, use the TOP command to identify any process hogging the CPU.
Q:-22. What is the information you get from the STATSPACK Report?
Ans. We can get the following statistics from the STATSPACK report.

1. WAIT notifiers
2. Load profile
3. Instance Efficiency Hit Ratio
4. Latch Waits
5. Top SQL
6. Instance Action
7. File I/O and Segment Stats
8. Memory allocation
9. Buffer Waits
Q:-23. What are the factors to consider for creating an Index on the Table? Also, How do you select a column for the Index?
Ans. The creation of an index depends on the following factors.
1. Size of the table,
2. Volume of data

If the Table size is large and you need a smaller report, create an Index.
To select a column for indexing, follow the business rule: prefer the primary key column or, failing that, a unique key column.
Q:-24. What is the main difference between Redo, Rollback, and Undo?
Ans.
Redo – Log that records all changes made to data, including all uncommitted and committed changes.
Rollback – Segments to store the previous state of data before the changes.
Undo – Helpful in building a read-consistent view of data. The data gets stored in the undo tablespace.
Q-25. How do you identify shared memory/semaphores for a specific DB instance on a multi-server system?
Ans. Set the following parameters to distinguish between the in-memory resources of a DB instance.
1. SETMYPID
2. IPC
3. TRACEFILE_NAME
Use the ORADEBUG command to explore their underlying options.