Resource Controls
Looking into the Oracle database project running on a SuperCluster with Solaris 11.4:
user.oracle
        projid : 105
        comment: ""
        users  : (none)
        groups : (none)
        attribs: process.max-core-size=(privileged,1073741824,deny)
                 process.max-file-descriptor=(basic,65536,deny)
                 process.max-sem-nsems=(privileged,2048,deny)
                 process.max-sem-ops=(privileged,1024,deny)
                 process.max-stack-size=(basic,33554432,deny)
                 project.max-msg-ids=(privileged,4096,deny)
                 project.max-sem-ids=(privileged,65535,deny)
                 project.max-shm-ids=(privileged,4096,deny)
                 project.max-shm-memory=(privileged,2199023255552,deny)
Let's look at what these settings mean: the defaults, Solaris 10 vs. 11, the Oracle RDBMS documentation, the OSC (Oracle SuperCluster) recommendations, and the current usage:
max-file-descriptor
process.max-file-descriptor: Maximum number of open files per process
  OLD = rlim_fd_max / rlim_fd_cur
  Oracle RDBMS Installation Minimum Value = soft 1024 / hard 65536
  Solaris 10 Default = basic 256
  Solaris 11.4 Default = basic 256
  OSC Setting = basic 65536
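To persist that value in /etc/project, a minimal sketch using projmod (assuming the user.oracle project from the listing above already exists):

# projmod -s -K "process.max-file-descriptor=(basic,65536,deny)" user.oracle

A new login under that project then inherits the new basic limit.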
CHECK setting
root # prctl -n process.max-file-descriptor -i process $$
process: 21663: -bash
NAME    PRIVILEGE       VALUE    FLAG   ACTION          RECIPIENT
process.max-file-descriptor
        basic             256       -   deny                    -
        privileged      65.5K       -   deny                    -
        system          2.15G     max   deny                    -
root # ulimit -n
256
root #
CHECK usage
root # echo ::kmastat | mdb -k | grep file_cache
file_cache       72    26298    36848    2695168B    95753345327     0
root #
In this example, 26298 is the number of file descriptors in use and 36848 the number of allocated file descriptors. Note that in Solaris there is no system-wide limit on open file descriptors, only a per-process one; descriptors are allocated on demand as long as free RAM is available.
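For a single process, the usage can also be checked directly; a small sketch with PID as a placeholder (pfiles additionally prints the current rlimit of the process):

# ls /proc/PID/fd | wc -l
# pfiles PID | grep rlimit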
max-sem-nsems
process.max-sem-nsems: Maximum number of System V semaphores per semaphore set
  OLD = seminfo_semmsl
  Oracle RDBMS Installation Minimum Value = 256
  Solaris 10 Default = 25
  Solaris 11.4 Default = 512
  OSC Setting = 2048
CHECK setting
oracle:~$ prctl -n process.max-sem-nsems -i process $$
process: 22738: -bash
NAME    PRIVILEGE       VALUE    FLAG   ACTION          RECIPIENT
process.max-sem-nsems
        privileged      2.05K       -   deny                    -
        system          32.8K     max   deny                    -
oracle:~$
CHECK how many NSEMS are used:
# ipcs -sb | awk '/^s/ {SUM+=$NF}END{print SUM}'
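And a variant that also prints the ID and owner of each set, assuming the default ipcs -sb column layout (T ID KEY MODE OWNER GROUP NSEMS):

# ipcs -sb | awk '/^s/ {print $2, $5, $NF}'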
max-sem-ops
process.max-sem-ops: Maximum number of System V semaphore operations allowed per semop() call
  OLD = seminfo_semopm
  Oracle RDBMS Installation Minimum Value = N/A
  Solaris 10 Default = 10
  Solaris 11.4 Default = 512
  OSC Setting = 1024
CHECK? Good question. I could not find a way to check the current usage; the documentation says the application should get an error return code of E2BIG from a semop() call when it submits more operations than the limit allows.
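One way to catch this in the field is to trace the SysV semaphore syscall of a suspect process; a sketch, assuming the semaphore calls enter the kernel via semsys on this release (PID is a placeholder):

# truss -t semsys -p PID

A semop() exceeding the limit should then show up with an Err#7 E2BIG return.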
max-stack-size
process.max-stack-size: Maximum stack memory segment available to this process
  OLD = a combination of different kernel settings or the ulimit command (e.g. lwp_default_stksize, rlim_fd_cur)
  Oracle RDBMS Installation Minimum Value = soft 10240 / hard 32768 (KB, as used by ulimit -s)
  Solaris 10 Default = 8192
  Solaris 11.4 Default = 8192
  OSC Setting = 33554432 (bytes, i.e. 32 MB)
CHECK settings
oracle:~$ ulimit -s
32768
oracle:~$ prctl -n process.max-stack-size -i process $$
process: 24517: -bash
NAME    PRIVILEGE       VALUE    FLAG   ACTION          RECIPIENT
process.max-stack-size
        basic          32.0MB       -   deny                    -
        privileged     8.00EB     max   deny                    -
        system         8.00EB     max   deny                    -
CHECK per process: hitting the limit can be logged to syslog by enabling the global action on the rctl:
# rctladm -e syslog process.max-stack-size
The current stack segment of a process can be looked up with:
# pmap -sx PID
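To pick out just the stack segment, a small sketch (pmap tags it as [ stack ] in its output):

# pmap -sx PID | grep -i stack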
Or use a DTrace script found in MOS Doc ID 2275236.1:
# dtrace -qn '
    grow_internal:entry { self->trace = 1; self->addr = arg0 }
    grow_internal:return /self->trace && arg1 == 12/
    {
        printf("pid:%d %s stack_addr:%a fault_addr:%a\n",
            pid, execname, curthread->t_procp->p_usrstack, self->addr);
        self->trace = 0; self->addr = 0;
    }
    grow_internal:return /self->trace/ { self->trace = 0; self->addr = 0 }'
The example below shows a process that caused a page fault at 0x0. The size of the stack growth is the difference between the fault address and the current stack address, i.e. 0xffc00000 - 0x0 ≈ 3.99 GB, while process.max-stack-size is 8 MB.
# dtrace <... snip for brevity ...>
pid:6672 fwrxmldiff stack_addr:0xffc00000 fault_addr:0x0
# tail -1 /var/adm/messages
Jun  9 06:56:26 hostname genunix: [ID 500092 kern.notice] basic rctl process.max-stack-size (value 8388608) exceeded by process 6672.
max-msg-ids
project.max-msg-ids: Maximum number of message queues that can be created
  OLD = msgsys:msginfo_msgmni
  Oracle RDBMS Installation Minimum Value = 100
  Solaris 10 Default = 50
  Solaris 11.4 Default = 128
  OSC Setting = 4096
CHECK the number of active message queues:
# ipcs -q
Seen Errors
Failure to create a message queue: msgget: No space left on device
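If an application runs into this, the limit can be raised on the running project without a re-login; a sketch using the project from the top of this post (persist it afterwards with projmod):

# prctl -n project.max-msg-ids -v 4096 -r -i project user.oracle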
max-sem-ids
project.max-sem-ids: Maximum number of semaphore identifiers
  OLD = seminfo_semmni
  Oracle RDBMS Installation Minimum Value = 100
  Solaris 10 Default = 10
  Solaris 11.4 Default = 128
  OSC Setting = 65535
CHECK: You can see the identifier for each facility entry using ipcs -s; in my opinion, the current usage can be counted with:
# ipcs -sZ | grep -c ^s
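For comparison, the configured limit of the project can be read the same way as the process rctls above:

# prctl -n project.max-sem-ids -i project user.oracle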
max-shm-ids
project.max-shm-ids: Limit on the number of shared memory segments that can be created
  OLD = shminfo_shmmni
  Oracle RDBMS Installation Minimum Value = 100
  Solaris 10 Default = 100
  Solaris 11.4 Default = 128
  OSC Setting = 4096
CHECK
# ipcs -b
# ipcs -bZ | grep -c ^m
max-core-size
process.max-core-size: Maximum size of a core file created by this process. The default is unlimited! The OSC setting shown at the top caps it at 1073741824 bytes (1 GB).
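It can be checked the same way as the other process rctls; for example against the current shell:

# prctl -n process.max-core-size -i process $$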
max-shm-memory
Last but not least, the maximum shared memory itself (project.max-shm-memory). The default is 1/4 of physical memory in Solaris 11.4. I guess the best way to see the usage is mdb and its OSM (Optimized Shared Memory) line:
# echo "::memstat" | mdb -k Usage Type/Subtype Pages Bytes %Tot %Tot/%Subt ---------------------------- ---------------- -------- ----- ----------- Kernel 11919848 90.9g 7.4% Regular Kernel 10099567 77.0g 6.3%/84.7% Defdump prealloc 1820281 13.8g 1.1%/15.2% ZFS 2043159 15.5g 1.2% User/Anon 141962499 1.0t 89.2% Regular User/Anon 28686595 218.8g 18.0%/20.2% OSM 113275904 864.2g 71.2%/79.7% Exec and libs 327706 2.5g 0.2% Page Cache 109770 857.5m 0.0% Free (cachelist) 9321 72.8m 0.0% Free 2618033 19.9g 1.6% Total 158990336 1.1t 100% #