This post is the second part of the Oracle Database Testing series. The first part can be found here. In this part, we will look at systematic ways of testing an Oracle database for memory, space, and CPU.
In the first part, we defined the term “database”, explained why Oracle is a pioneering database system, and discussed the intent behind testing a database as well as the methods of testing it.
In this post, we shall discuss systematic ways of testing the database with regard to Memory, Space, and CPU processing. In the next and final part of the series, we shall continue our focus on testing with Oracle Real Application Testing.
Testing Oracle Database
Let’s cover testing our application against a robust Oracle database system, considering the memory, CPU, and storage constraints that need to be addressed before deployment in a production environment.
#1) Memory Testing
First, let’s evaluate your memory requirements through testing. To tune memory, we first need to familiarize ourselves with how it is structured. We can classify it into three major memory structures, namely:
- System or Shared Global Area (SGA): This is a shared memory segment that every Oracle process will access.
- Process or Program Global Area (PGA): This is a private memory that is used by an Oracle process.
- User Global Area (UGA): This is memory linked to a user session. Depending on the connection mechanism used, the UGA is part of the PGA (with a dedicated server) or part of the SGA (with a shared server).
To evaluate the application’s performance and find the best memory sizing, we have to optimize these memory segments. To tune these memory segments, we need to understand the amount of memory that the application requires to function appropriately.
(Figure: a simple representation of the Memory Architecture.)
To check your current SGA values, log in to the SQL*Plus command prompt and run:
SQL> show sga
Total System Global Area 521936896 bytes
Fixed Size 2177328 bytes
Variable Size 465569488 bytes
Database Buffers 46137344 bytes
Redo Buffers 8052736 bytes
The components shown above constitute the total SGA size. Suppose the server has about 8 GB of RAM (physical memory). Set the SGA size to half of the available memory, i.e. 4 GB, and test whether these SGA values suit your application by running the application with users logged in and executing SQL queries against this database.
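As a minimal sketch of how the 4 GB sizing above could be applied (assuming Automatic Shared Memory Management and an spfile-based instance; the values are illustrative, not recommendations):
SQL> ALTER SYSTEM SET SGA_MAX_SIZE = 4G SCOPE = SPFILE;
SQL> ALTER SYSTEM SET SGA_TARGET = 4G SCOPE = SPFILE;
-- SGA_MAX_SIZE is static, so restart the instance for it to take effect.
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP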
Another important factor to consider when sizing your memory is paging. When the operating system doesn’t have enough physical/real memory available, it transfers existing pages to disk so that new pages can be loaded into memory. A high amount of paging usually indicates performance degradation.
A command commonly used to check paging activity is the “sar” command.
$ sar -g
00:00:00 pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
01:00:00 0.00 0.00 0.00 0.00 0.00
This is an idle system, and hence no paging has been observed. If you observe high values for pgfree/s and pgscan/s, review your memory settings, as this indicates that the configured memory isn’t sufficient to meet the application’s needs.
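Note that the -g flag shown above is Solaris-specific. On Linux, a roughly comparable view of paging activity comes from sar -B (a sketch; the exact columns vary by sysstat version):
$ sar -B 10 5
# Reports paging statistics such as pgpgin/s, pgpgout/s, fault/s, and majflt/s
# every 10 seconds, five times. Sustained high majflt/s (major page faults)
# suggests pages are being read back from disk, i.e. memory pressure.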
To tune the Program Global Area (PGA), we need to understand the complexity of the users’ SQL queries. We have work areas in the PGA where we perform in-memory sorting when sorting-based operators like GROUP BY, ORDER BY, Hash-join, and Bitmap Merge are used.
Usually, the PGA doesn’t need an enormous amount of memory; we ideally reserve about 20% of the SGA size for it. We also need to consider whether we are running an OLTP (Online Transaction Processing) system or a DSS (Decision Support System).
For an OLTP system, considering the earlier example of 8 GB total physical memory with 4 GB dedicated to the SGA: PGA = 20% of SGA = 0.8 GB.
For a DSS system, which usually runs more memory-consuming queries, we could allocate 50% of the SGA, so the PGA would now be 50% of SGA = 2 GB.
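A minimal sketch of applying the sizing above via the PGA_AGGREGATE_TARGET parameter (a dynamically modifiable parameter; the values are illustrative only):
SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 800M;  -- OLTP sizing: ~20% of SGA
-- or, for a DSS workload:
SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 2G;    -- DSS sizing: ~50% of SGA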
The above values aren’t realistic examples; we have used them only to get a basic understanding of the calculations. Real production systems would often have 100 to 500 GB of SGA allocated, and there are even systems with over 1 TB of physical memory configured.
#2) CPU Processing Testing
Any SQL that is run against a database and performs significant operations, such as sorting, querying data, and writing data, will consume some amount of your processor’s CPU cycles. How much of it is ideal, and how do we test whether we have enough processing power to fulfill our application’s needs?
How do we determine if there is a bottleneck in the CPU?
Most of the time, having your CPU operating at 100% is not a bad sign. It only means that your processors are executing at their maximum limit, which is usually what we desire. After all, we have incurred a huge expenditure on them.
A SQL statement usually goes through three stages when an Oracle instance processes it.
They are as follows:
- Parse phase
- Execute phase and
- Fetch phase
Parsing performs two main checks. One is the syntax check, which verifies that the SQL follows all the rules and is valid SQL that the Oracle database engine can comprehend. The other is the semantic check, which verifies that the objects referenced in the SQL are valid and that the user running the statement has the privilege to access them. Parsing is done to make sure that the statement can execute properly.
The execute phase comprises building an execution plan and traversing each of its steps to access the desired objects.
In the fetch phase, rows are fetched from database blocks based on the execution plan computed in the previous step.
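To see the execution plan that Oracle builds for a statement, you can use EXPLAIN PLAN together with the DBMS_XPLAN package (a sketch; the employees table and its department_id column are hypothetical names used for illustration):
SQL> EXPLAIN PLAN FOR SELECT * FROM employees WHERE department_id = 10;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);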
If you need to understand more about SQL statement processing, you could refer to this Oracle Documentation, which is free.
Each of these phases will consume some amount of CPU, and we usually observe high CPU utilization during the parse phase. We have a couple of performance views that help examine the processes/sessions consuming CPU:
- V$SYSSTAT: shows CPU usage across all Oracle user sessions.
- V$SESSTAT: shows CPU usage per Oracle user session.
For example:
SQL> select (a.value / b.value)*100 "% CPU for parsing"
     from V$SYSSTAT a, V$SYSSTAT b
     where a.name = 'parse time cpu'
     and b.name = 'CPU used by this session';
% CPU for parsing
-----------------
7.70263467
We have just about 7.7% of the CPU used for parsing our SQL statements. This measurement is meaningful only when we test the application with users actively running SQL queries against the database. If we find that the % CPU for parsing is high, check whether literals are being used in the SQL queries: literals force hard parses, which are costly operations that consume high amounts of CPU.
You can ask your application developers to change their code to use bind variables, so that the available cursors are reused and hard parses are avoided.
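As an illustrative sketch of the difference (the orders table and customer_id column are hypothetical names):
-- Literals: each distinct value forces a new hard parse.
SQL> SELECT order_total FROM orders WHERE customer_id = 1001;
-- Bind variable: the parsed cursor is reused across values.
SQL> VARIABLE cust_id NUMBER
SQL> EXEC :cust_id := 1001
SQL> SELECT order_total FROM orders WHERE customer_id = :cust_id;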
To know about the CPU being used by users accessing the database via their sessions, you can use the following query:
SQL> SELECT n.username, s.sid, s.value
     FROM v$sesstat s, v$statname t, v$session n
     WHERE s.statistic# = t.statistic# AND n.sid = s.sid
     AND t.name = 'CPU used by this session' ORDER BY s.value desc;
USERNAME SID VALUE
------------------------- ---------- ----------
SYS 191 125
190 64
134 50
66 45
192 4
4 3
133 3
126 2
72 1
67 0
125 0
We only have one user, SYS, logged in; the rest are background processes, which are hardly consuming any CPU.
From the OS perspective, among the many commands at our disposal, we shall use the “sar” command to determine CPU usage. If you need more details regarding this command, you can issue $ man sar on any Unix console (OS command prompt) or search the Linux documentation.
$sar -u 10 5
---- Reports CPU utilization at 10-second intervals; 5 samples are displayed.
Linux 3.8.13-26.2.1.el6uek.x86_64 (abcdefg) 08/07/2014 _x86_64_ (6 CPU)
05:17:58 PM CPU %user %nice %system %iowait %steal %idle
05:18:08 PM all 0.65 0.00 0.53 0.00 0.03 98.78
05:18:18 PM all 0.00 0.00 0.00 0.00 0.00 100.00
05:18:28 PM all 0.02 0.00 0.00 0.00 0.00 99.98
05:18:38 PM all 0.02 0.00 0.02 0.00 0.00 99.97
05:18:48 PM all 0.02 0.00 0.00 0.00 0.00 99.98
Average: all 0.14 0.00 0.11 0.00 0.01 99.74
We can see that the CPU consumption is nearly zero. Since this is a test system with hardly anything running at the moment, it shows as almost idle.
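If sar instead showed sustained high %user, a common next step is to identify the top CPU-consuming OS process (via top or ps, for example) and map its PID back to an Oracle session. A sketch using the standard v$process/v$session join, where &os_pid is a substitution variable you supply:
SQL> SELECT s.sid, s.username, p.spid
     FROM v$session s, v$process p
     WHERE s.paddr = p.addr AND p.spid = '&os_pid';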
Having tested our application with respect to Memory and CPU, we now need to test the amount of space that the database would require to fulfill user needs.
#3) Space/Storage Testing
To test the storage needs, we must first understand the storage structures in Oracle. We have logical database storage structures and physical database storage structures.
By physical structures, we mean structures that are visible at the operating system level, such as data files, control files, and online redo log files. We shall discuss the data files further in this post.
To explore more on these structures, you can refer to this Oracle Documentation.
By logical structures, we mean structures that are only recognized and dealt with by the Oracle database; nothing outside Oracle would know about their existence. These include data blocks, extents, segments, and tablespaces.
A high-level overview: data files (physical structures) are part of a logical structure called the tablespace. A single tablespace can have several data files, so the total size of an existing, fully functional database is the sum of all its tablespaces, which means summing up all the data files that belong to the Oracle database on the operating system.
Each application user would have his/her own schema to work with, and these schemas are nothing but segments that are part of a tablespace.
With this understanding of the logical and physical structures, to test an application’s needs we must know how many users the database is intended to have, how many of them would insert data, and the volume, frequency, and duration of those inserts.
In any database system, we can also truncate and delete data, not just add it. All such transactions contribute to data growth in the organization’s database. Growth also depends on the type of applications that connect to this database.
For example, consider a banking application wherein thousands of users perform transactions and every piece of information about each banking user must be stored in the database; we would require an enormously large storage system to handle all of this cumulative information.
If we have an application that we developed and would like to deploy on a production server, we would first need to know how many users this application would support, how many transactions each of these users would perform, and how much data we would deal with.
Since we first test the application on a test or UAT instance, we have a couple of procedures that help monitor growth trends and determine how big an object has eventually grown.
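One simple, hedged approach to tracking growth trends on a test instance is to capture the database size periodically into a small table (db_growth_log is a hypothetical name used for illustration):
SQL> CREATE TABLE db_growth_log (sample_date DATE, size_mb NUMBER);
SQL> INSERT INTO db_growth_log
     SELECT SYSDATE, SUM(bytes)/1024/1024 FROM dba_data_files;
SQL> COMMIT;
-- Re-run the INSERT on a schedule (e.g., via DBMS_SCHEDULER) and compare samples.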
We also have performance views that help determine the size of objects (logical structures) and of physical files, such as the data files that hold user data physically on the operating system:
- DBA_SEGMENTS, DBA_EXTENTS, DBA_TABLESPACES: to obtain information on logical structures.
- DBA_DATA_FILES: to obtain information on the size of data files (physical structures).
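For instance, a sketch that lists the largest segments owned by a given schema (APPUSER is a hypothetical schema name):
SQL> SELECT segment_name, segment_type, bytes/1024/1024 "Size in MB"
     FROM dba_segments WHERE owner = 'APPUSER'
     ORDER BY bytes DESC;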
For example, to know the size of each tablespace, we execute:
SQL> select tablespace_name "Tablespace", sum(bytes)/1024/1024 "Size in MB"
     from dba_data_files group by tablespace_name;
Tablespace Size in MB
------------------------- ----------
UNDOTBS1 100
SYSAUX 680
USERS 45
SYSTEM 830
EXAMPLE 100
Summing up each of these tablespaces would give you the total size of your database instance.
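A sketch of that summation (this counts data files only; redo logs, control files, and temp files are excluded):
SQL> SELECT SUM(bytes)/1024/1024/1024 "Total Size in GB"
     FROM dba_data_files;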
To achieve the best application performance, we have several storage products and engineered storage solutions at our disposal, such as NAS, SAN, Flash, and Solid State Devices.
Conclusion
To conclude, in this post we have dealt with testing our application against a robust Oracle database system with regard to Memory, CPU, and the storage constraints that application developers, architects, and administrators would have to consolidate before deploying to a production environment.
In the next and final piece of this Oracle Database Testing series, we will dig deeper into Oracle Real Application Testing.
What do you think of this post? Please post your opinion, feedback, or even any queries in the comment section below. We would love to hear from you.