Commands to check the load average on the Solaris platform
The following commands are present by default in the Solaris operating system (Solaris 10u11).
- prstat -a
- sar 1 5
- iostat -xnctz 5
- prstat -s cpu -n 5
- top (a third-party package on Solaris 10; included in the OS by default from Solaris 11 onwards)
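The load averages themselves (the one-, five-, and fifteen-minute values) appear in the output of several of these commands, and can also be read on their own with the standard uptime command:

oracle@server1 # uptime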
prstat -a ---> To monitor all activities running on the server.
The prstat utility iteratively examines all active processes on the system and reports statistics based on the selected output mode and sort order.
prstat provides options to examine only processes matching specified PIDs, UIDs, zone IDs, CPU IDs, and processor set IDs.
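For example, the report can be narrowed to a single user's processes with the -u option (the oracle user below matches the sample output that follows); the trailing operands are the sampling interval in seconds and the number of samples:

oracle@server1 # prstat -u oracle 5 3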
OUTPUT:
oracle@server1 # prstat -a
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
15480 oracle 10G 10G cpu37 0 0 98:21:42 0.8% oracle/1
4569 oracle 10G 10G cpu65 0 0 91:38:37 0.7% oracle/1
23579 oracle 10G 10G cpu74 0 0 0:00:19 0.4% oracle/2
28441 oracle 10G 10G sleep 10 0 0:04:32 0.4% oracle/2
27062 oracle 10G 10G sleep 59 0 2:12:17 0.1% oracle/2
1733 root 156M 72M sleep 110 - 20:55:28 0.1% ologgerd/11
2721 root 101M 62M sleep 59 0 12:36:04 0.0% orarootagent.bi/45
1691 root 86M 51M cpu52 100 - 10:46:55 0.0% osysmond.bin/13
1656 root 127M 94M sleep 59 0 11:37:19 0.0% orarootagent.bi/25
8247 oracle 10G 10G sleep 52 0 5:44:01 0.0% oracle/1
1657 grid 92M 56M sleep 59 0 7:21:54 0.0% gipcd.bin/10
25471 oracle 10G 10G sleep 59 0 2:49:20 0.0% oracle/1
2295 grid 1323M 1053M sleep 34 0 5:28:15 0.0% oracle/1
5 root 0K 0K sleep 0 -20 7:15:43 0.0% vmtasks/128
3095 oracle 107M 83M sleep 59 0 4:46:21 0.0% oraagent.bin/18
2443 root 175M 131M sleep 59 0 5:31:32 0.0% crsd.bin/50
2652 grid 126M 100M cpu82 39 0 2:46:33 0.0% oraagent.bin/26
74 oracle 4456K 4168K cpu84 59 0 0:00:00 0.0% prstat/1
7266 oracle 10G 10G cpu101 39 0 0:04:38 0.0% oracle/1
1873 root 77M 45M sleep 59 0 3:35:23 0.0% octssd.bin/13
1713 grid 103M 71M sleep 110 - 3:20:26 0.0% ocssd.bin/20
1372 root 143M 102M sleep 59 0 3:34:27 0.0% ohasd.bin/34
1604 grid 115M 90M sleep 59 0 2:53:32 0.0% oraagent.bin/29
102 root 9016K 7208K cpu120 19 0 0:00:00 0.0% perl/1
22267 oracle 10G 10G sleep 59 0 0:01:15 0.0% oracle/2
20401 oracle 10G 10G sleep 59 0 0:00:26 0.0% oracle/1
8256 oracle 10G 10G sleep 60 -20 1:14:45 0.0% oracle/1
8253 oracle 10G 10G sleep 60 -20 1:11:11 0.0% oracle/1
8250 oracle 10G 10G sleep 60 -20 1:15:31 0.0% oracle/1
8277 oracle 10G 10G sleep 59 0 1:00:01 0.0% oracle/1
NPROC USERNAME SWAP RSS MEMORY TIME CPU
374 oracle 15G 15G 23% 224:27:16 2.4%
62 root 751M 867M 1.3% 83:13:57 0.2%
48 grid 1862M 1997M 3.1% 31:55:16 0.1%
6 daemon 7864K 10M 0.0% 0:42:26 0.0%
1 smmsp 2952K 10M 0.0% 0:00:11 0.0%
1 noaccess 140M 108M 0.2% 0:21:44 0.0%
Total: 492 processes, 1435 lwps, load averages: 4.93, 4.98, 5.22
sar 1 5 ---> To monitor system activity.
Use the sar command to perform the following tasks:
- Organize and view data about system activity.
- Access system activity data on a special request basis.
- Generate automatic reports to measure and monitor system performance, as well as special request reports to pinpoint specific performance problems.
For information about how to set up the sar command to run on your system, as well as a description of these tools, see Collecting System Activity Data Automatically (sar).
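In a load-average context, the -q report is also worth knowing: sar -q shows the run-queue length (runq-sz) and its occupancy, the quantity from which the load average is derived. Using the same interval and count as below:

oracle@server1 # sar -q 1 5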
OUTPUT:
oracle@server1 # sar 1 5
SunOS server1 5.10 Generic_147147-26 sun4v 07/15/2016
18:56:53 %usr %sys %wio %idle
18:56:54 3 1 0 96
18:56:55 4 1 0 95
18:56:56 4 0 0 96
18:56:57 4 1 0 95
18:56:58 4 0 0 96
Average 4 1 0 96
iostat -xnctz 5 ---> To check per-device I/O activity and utilization.
The iostat utility iteratively reports terminal, disk, and tape I/O activity, as well as CPU utilization.
The first line of output is for all time since boot; each subsequent line is for the prior interval only.
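For reference, the flags used here are: -x (extended per-device statistics), -n (descriptive device names), -c (CPU utilization), -t (terminal activity), and -z (suppress lines that are all zeros); 5 is the interval in seconds. An optional count stops the report after a fixed number of samples:

oracle@server1 # iostat -xnctz 5 3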
OUTPUT:
oracle@server1 # iostat -xnctz 5
tty cpu
tin tout us sy wt id
0 4 7 1 0 93
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.4 3.0 5.1 8.2 0.0 0.0 1.3 13.3 0 1 md/d10
0.2 3.0 2.5 7.5 0.0 0.0 0.0 12.4 0 1 md/d11
0.2 3.0 2.5 7.5 0.0 0.0 0.0 11.7 0 1 md/d12
0.0 0.0 0.0 0.0 0.0 0.0 0.0 13.2 0 0 md/d20
0.0 0.0 0.0 0.0 0.0 0.0 0.0 15.0 0 0 md/d21
0.0 0.0 0.0 0.0 0.0 0.0 0.0 9.5 0 0 md/d22
0.6 3.0 16.5 43.6 0.0 0.0 2.5 12.1 1 2 md/d30
0.3 3.0 8.2 42.8 0.0 0.0 0.0 10.7 0 1 md/d31
0.3 3.0 8.3 42.8 0.0 0.0 0.0 10.5 0 1 md/d32
0.5 7.3 10.8 50.9 0.0 0.1 0.0 11.3 0 3 c1t0d0
0.5 7.3 10.8 50.9 0.0 0.1 0.0 10.9 0 3 c1t1d0
271.7 104.6 30567.0 2274.1 0.9 4.0 2.4 10.5 1 53 c6t60080E500017EF140000624A565942BEd0
337.0 6.8 2940.5 2705.3 0.0 0.2 0.0 0.6 0 9 c6t60080E500017F1B2000009FC5646E2DBd0
0.0 41.7 0.0 1335.1 0.0 0.0 0.0 0.3 0 1 rmt/0
0.0 42.3 0.0 1354.9 0.0 0.0 0.0 0.3 0 1 rmt/1
tty cpu
tin tout us sy wt id
0 259 3 1 0 96
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.4 0.0 3.2 0.0 0.0 9.4 6.9 0 0 md/d30
0.0 0.4 0.0 3.2 0.0 0.0 0.0 6.8 0 0 md/d31
0.0 0.4 0.0 3.2 0.0 0.0 0.0 6.4 0 0 md/d32
0.0 1.0 0.0 3.5 0.0 0.0 0.0 6.4 0 1 c1t0d0
0.0 1.0 0.0 3.5 0.0 0.0 0.0 7.2 0 1 c1t1d0
162.9 28.4 39476.4 5889.6 0.0 0.2 0.0 1.1 0 17 c6t60080E500017EF140000624A565942BEd0
tty cpu
tin tout us sy wt id
0 129 3 1 0 97
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.2 0.0 4.5 0.0 0.0 9.5 7.6 0 0 md/d10
0.0 0.2 0.0 4.5 0.0 0.0 0.0 4.8 0 0 md/d11
0.0 0.2 0.0 4.5 0.0 0.0 0.0 7.5 0 0 md/d12
0.0 1.0 0.0 10.1 0.0 0.0 3.9 8.2 0 1 md/d30
0.0 1.0 0.0 10.1 0.0 0.0 0.0 8.1 0 1 md/d31
0.0 1.0 0.0 10.1 0.0 0.0 0.0 6.6 0 1 md/d32
0.0 2.0 0.0 15.0 0.0 0.0 0.0 7.3 0 1 c1t0d0
0.0 2.0 0.0 15.0 0.0 0.0 0.0 6.2 0 1 c1t1d0
282.9 14.0 68544.6 1275.8 0.0 0.3 0.0 0.9 0 24 c6t60080E500017EF140000624A565942BEd0
tty cpu
tin tout us sy wt id
0 170 3 1 0 96
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.8 23.2 6.4 28.3 0.0 0.3 1.3 12.2 3 4 md/d10
0.4 23.2 3.2 28.3 0.0 0.3 0.0 11.8 0 4 md/d11
0.4 23.2 3.2 28.3 0.0 0.3 0.0 11.8 0 4 md/d12
0.2 0.2 1.6 1.6 0.0 0.0 3.8 7.3 0 0 md/d30
0.0 0.2 0.0 1.6 0.0 0.0 0.0 5.7 0 0 md/d31
0.2 0.2 1.6 1.6 0.0 0.0 0.0 7.2 0 0 md/d32
0.4 26.0 3.2 31.2 0.0 0.3 0.0 11.9 0 6 c1t0d0
0.6 26.0 4.8 31.2 0.0 0.3 0.0 11.7 0 6 c1t1d0
5.0 23.0 39.4 4819.6 0.0 0.0 0.0 1.6 0 5 c6t60080E500017EF140000624A565942BEd0
tty cpu
tin tout us sy wt id
0 170 3 1 0 96
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.2 0.0 0.6 0.0 0.0 0.0 9.1 0 0 md/d10
0.0 0.2 0.0 0.6 0.0 0.0 0.0 9.0 0 0 md/d11
0.0 0.2 0.0 0.6 0.0 0.0 0.0 5.5 0 0 md/d12
0.0 1.2 0.0 10.2 0.0 0.0 3.3 6.6 0 1 md/d30
0.0 1.2 0.0 10.2 0.0 0.0 0.0 6.2 0 1 md/d31
0.0 1.2 0.0 10.2 0.0 0.0 0.0 5.8 0 1 md/d32
0.0 2.0 0.0 11.1 0.0 0.0 0.0 6.0 0 1 c1t0d0
0.0 2.0 0.0 11.1 0.0 0.0 0.0 5.9 0 1 c1t1d0
5.2 30.6 45.8 6398.5 0.0 0.1 0.0 1.5 0 5 c6t60080E500017EF140000624A565942BEd0
prstat -s cpu -n 5 ---> To check the top CPU-consuming processes on the server.
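Here -s cpu sorts the listing in descending CPU order and -n 5 limits it to the top five processes. The same pattern works for memory: sorting on resident set size is just a different sort key:

oracle@server1 # prstat -s rss -n 5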
OUTPUT:
oracle@server1 # prstat -s cpu -n 5
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
15480 oracle 10G 10G cpu37 0 0 98:26:24 0.8% oracle/1
24149 oracle 10G 10G cpu66 0 0 0:01:26 0.8% oracle/2
4569 oracle 10G 10G cpu110 0 0 91:43:03 0.7% oracle/1
28441 oracle 10G 10G sleep 10 0 0:06:43 0.3% oracle/2
802 oracle 10G 10G sleep 0 0 0:00:07 0.2% oracle/2
Total: 501 processes, 1449 lwps, load averages: 5.93, 5.21, 5.23
top ---> To check memory, swap, load average, and paging activity.
The top program provides a dynamic real-time view of a running
system. It can display system summary information as well as a list
of the processes or threads currently being managed by the kernel.
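When a one-shot, non-interactive snapshot is wanted (for example from a script), the classic top build commonly packaged for Solaris 10 supports a batch mode; a minimal sketch, assuming that build, where -b selects batch output and -d 1 limits it to a single display:

oracle@server1 # top -b -d 1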
OUTPUT:
oracle@server1 # top
load averages: 5.83, 5.26, 5.22; up 9+00:03:32 19:09:39
490 processes: 480 sleeping, 10 on cpu
CPU states: 95.0% idle, 4.5% user, 0.5% kernel, 0.0% iowait, 0.0% swap
Kernel: 10137 ctxsw, 3627 trap, 8210 intr, 29003 syscall, 61 flt
Memory: 64G phys mem, 38G free mem, 50G total swap, 50G free swap
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
2135 oracle 1 0 0 0K 0K cpu/101 0:32 0.78% oracle
25485 oracle 2 20 0 0K 0K cpu/37 0:44 0.77% oracle
15480 oracle 1 20 0 0K 0K cpu/19 98.6H 0.76% oracle
4569 oracle 1 20 0 0K 0K cpu/110 91.9H 0.72% oracle
22136 oracle 1 0 0 0K 0K cpu/46 41:29 0.70% oracle
2193 oracle 2 51 0 0K 0K sleep 0:03 0.19% oracle
1723 oracle 2 33 0 0K 0K sleep 0:59 0.16% oracle
1733 root 11 110 -20 0K 0K cpu/62 20.9H 0.07% ologgerd
27062 oracle 2 59 0 0K 0K sleep 133:24 0.06% oracle
25471 oracle 1 59 0 0K 0K sleep 169:40 0.06% oracle
1691 root 13 100 -20 0K 0K sleep 647:38 0.05% osysmond.bin
1656 root 25 59 0 0K 0K sleep 698:01 0.04% orarootagent.bi
2721 root 13 59 0 0K 0K cpu/74 756:50 0.04% orarootagent.bi
8247 oracle 1 59 0 0K 0K sleep 344:35 0.03% oracle
8265 oracle 2 59 0 0K 0K sleep 32:50 0.02% oracle
1657 grid 10 59 0 0K 0K sleep 442:21 0.02% gipcd.bin
2295 grid 1 49 0 0K 0K cpu/56 328:36 0.02% oracle
8263 oracle 2 59 0 0K 0K sleep 33:05 0.02% oracle
8262 oracle 2 59 0 0K 0K sleep 33:06 0.02% oracle
2443 root 47 59 0 0K 0K sleep 331:50 0.02% crsd.bin
3095 oracle 17 59 0 0K 0K sleep 286:38 0.02% oraagent.bin
8264 oracle 2 59 0 0K 0K sleep 32:47 0.02% oracle
8256 oracle 1 60 -20 0K 0K sleep 74:50 0.01% oracle
1372 root 33 59 0 0K 0K sleep 214:39 0.01% ohasd.bin
1873 root 13 59 0 0K 0K sleep 215:35 0.01% octssd.bin
8250 oracle 1 60 -20 0K 0K sleep 75:36 0.01% oracle
8277 oracle 1 49 0 0K 0K sleep 60:08 0.01% oracle
1713 grid 20 110 -20 0K 0K sleep 200:38 0.01% ocssd.bin
8253 oracle 1 60 -20 0K 0K sleep 71:16 0.01% oracle
1604 grid 29 59 0 0K 0K sleep 173:43 0.01% oraagent.bin