Solaris SMF and FMA Notifications

I never realized that there is a really easy way to “monitor” your Solaris system using the built-in SMF and FMA facilities and have it send a mail whenever a problem is diagnosed.

# svccfg setnotify problem-diagnosed mailto:pressy@solaris.wtf
# svcadm enable http:apache24
# mv /etc/apache2/2.4/httpd.conf /etc/apache2/2.4/httpd.conf_bu
# pkill httpd
# svcs -xv
svc:/network/http:apache24 (Apache 2.4 HTTP server)
State: maintenance since Fri May 24 11:51:19 2019
Reason: Method failed.
See: http://support.oracle.com/msg/SMF-8000-8Q
See: http://httpd.apache.org
See: man -M /usr/apache2/2.4/man -s 8 httpd
See: /var/svc/log/network-http:apache24.log
Impact: This service is not running.

uhhh… got mail:

SUNW-MSG-ID: SMF-8000-YX, TYPE: Defect, VER: 1, SEVERITY: Major
EVENT-TIME: Fri May 24 11:51:19 CEST 2019
PLATFORM: ORCL,SPARC-T4-1, CSN: AKBLABLA42, HOSTNAME: sparc-server
SOURCE: software-diagnosis, REV: 0.2
EVENT-ID: e0114186-cd70-4085-84aa-802b091a399e
DESC: Service svc:/network/http:apache24 failed - a start, stop or refresh method failed.
AUTO-RESPONSE: The service has been placed into the maintenance state.
IMPACT: svc:/network/http:apache24 is unavailable.
REC-ACTION: Run 'svcs -xv svc:/network/http:apache24' to determine the generic reason why the service failed, the location of any logfiles, and a list of other services impacted. Please refer to the associated reference document at http://support.oracle.com/msg/SMF-8000-YX for the latest service procedures and policies regarding this diagnosis.
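
To recover from my self-inflicted breakage I simply restore the config and clear the maintenance state; a quick sketch using the same service as above:

# mv /etc/apache2/2.4/httpd.conf_bu /etc/apache2/2.4/httpd.conf
# svcadm clear http:apache24
# svcs http:apache24

With problem-repaired or problem-resolved notifications configured (see below), clearing the service should trigger a corresponding mail as well.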

Nice… you can set notifications on several event tags such as problem-diagnosed, problem-updated, problem-repaired and problem-resolved, on state transitions using the to- or from- prefixes (to-maintenance, from-online, to-degraded, and so on), or on all to catch every transition.
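
For example, to get a mail for every service that drops into maintenance, and to review or remove what is configured, something like this should work (a sketch; check svccfg(8) for the exact syntax on your release):

# svccfg setnotify -g to-maintenance mailto:pressy@solaris.wtf
# svccfg listnotify -g to-maintenance
# svccfg delnotify -g to-maintenance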

And it would also work for specific services:

# svccfg -s application/myservice setnotify problem-diagnosed mailto:pressy@solaris.wtf

Easy, isn’t it?

Solaris – Extra Large Page Size Support

The last time I installed an Oracle DB on Solaris 11.4 SPARC I noticed that the extra large memory page sizes were missing. I wanted to see 16GB pages on SPARC, but the largest pages Oracle allocated were 2GB. Well, that is still bigger than on x86, where you get 4k/2M/1G, but having the database dynamically choose large pages as needed is a very nice feature on SPARC:

Multiple Page Size Support

The MPSS feature in Oracle Solaris allows an application to use different page sizes for different regions of virtual memory. Larger page sizes let the Translation Lookaside Buffer (TLB) map more physical memory with a fixed number of TLB entries. Larger pages can therefore reduce the cost of virtual-to-physical memory mapping and increase overall system performance.
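
To see which page sizes a domain offers and which sizes a process actually got, the usual tools are pagesize(1), pmap(1) and ppgsz(1); a quick sketch (./myapp is just a placeholder binary):

root@t7primary01:~# pagesize -a                        # page sizes supported in this domain
root@t7primary01:~# pmap -sx $$                        # page size column shows what backs each segment
root@t7primary01:~# ppgsz -o heap=4M,stack=4M ./myapp  # request preferred page sizes per segment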

First of all we need a domain that provides 16GB pages, which is controlled by the SPARC hypervisor (logical domains, aka LDOMs). This is reflected in the LDOM parameters “effective-max-pagesize” and “hardware-max-pagesize”.

To get an effective-max-pagesize of 16GB, the memory assigned to the LDOM must include at least one MBLOCK that contains 4 aligned, physically contiguous ranges of 16GB. That means at least one MBLOCK of 4x16GB (64GB), and this MBLOCK *must* start at a 16GB-aligned hardware address.

Alignment can be the tricky part, since LDOMs reserve a small amount of memory for internal use, which means the first available block might not be aligned to the 16GB boundary we need. For example, you could explicitly assign a physical address range like this:

root@t7primary01:~# ldm set-mem mblock=0x400000000:64g my64Gdomain
root@t7primary01:~# ldm list-constraints my64Gdomain | grep page
    effective-max-pagesize=16GB
    hardware-max-pagesize=16GB
root@t7primary01:~#
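
In the example above the base address works out nicely: 0x400000000 is exactly 16GB, so the 64GB MBLOCK starts on a 16GB boundary and contains four aligned, contiguous 16GB ranges. To find a suitable address on your own box, you can look at the physical memory layout first; a sketch (output omitted here):

root@t7primary01:~# ldm list-devices -a memory       # all memory blocks with their physical addresses
root@t7primary01:~# ldm list -o memory my64Gdomain   # memory currently bound to the domain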

Phew… ok… now it should work, but still… no 16GB pages…

After struggling with support for a while I got an answer from a kernel developer… extra large pages are disabled on “small” systems with less than 512GB of memory (the threshold is expressed in pages). There were some issues (an internal bug), but with the latest versions such systems actually run satisfactorily.

Anyhow, the feature is still disabled by this threshold (which might change again), but yes, it also might not really be necessary on smaller systems. If you still want to use extra large pages, you can adjust the threshold:

root@t7primary01:~# ldm list-constraints primary | grep -i 0x2
0x2000000000 256G
root@t7primary01:~# grep xlarge /etc/system
set xlarge_mem_threshold = 0x1900000
root@t7primary01:~# pagesize -a
8192
65536
4194304
268435456
2147483648
17179869184
root@t7primary01:~# pmap -sx $(ps -ef -o pid,comm | awk '/smon/ {print $1}') | grep osm

0000000380000000      8192      8192    -      8192   4M rwxsR--  [ osm shmid=0x0 ]
0000000380800000      4096      4096    -      4096    - rwxsR--  [ osm shmid=0x0 ]
00000003C0000000    262144    262144    -    262144 256M rwxsRi-  [ osm shmid=0x4 ]
0000000400000000 117440512 117440512    - 117440512  16G rwxsRi-  [ osm shmid=0x1 ]
0000002000000000   6291456   6291456    -   6291456   2G rwxsRi-  [ osm shmid=0x2 ]
0000002180000000   1835008   1835008    -   1835008 256M rwxsRi-  [ osm shmid=0x3 ]
root@t7primary01:~# 
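
Assuming the threshold really is counted in 8K base pages (which matches the wording above), the numbers line up: the default of 512GB corresponds to 512GB / 8KB = 0x4000000 pages, while the 0x1900000 pages set in /etc/system correspond to 0x1900000 * 8KB = 200GB, just below the 256GB assigned to this domain – which is why the 16GB page size now shows up in pagesize -a and the big SGA segment above is backed by 16G pages.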

Have fun showing your DBAs a possible platform performance feature…

ZombieLoad on SPARC

And again, after Meltdown, Spectre, and Foreshadow, current SPARC CPUs are also not affected by ZombieLoad…

@ZombieLoad
While programs usually only see their own data, a malicious program can exploit the fill buffers to gain secrets that are currently being processed by other running programs. These secrets can be user-level secrets, e.g. browsing history, site content, user keys and passwords, or system-level secrets such as disk encryption keys.

Oracle’s response to these MDS issues:
“Oracle has determined that Oracle SPARC servers are not affected by these MDS vulnerabilities.”

So these four distinct CVE identifiers only affect Intel implementations:

CVE-2019-11091: Microarchitectural Data Sampling Uncacheable Memory (MDSUM)
CVE-2018-12126: Microarchitectural Store Buffer Data Sampling (MSBDS)
CVE-2018-12127: Microarchitectural Load Port Data Sampling (MLPDS)
CVE-2018-12130: Microarchitectural Fill Buffer Data Sampling (MFBDS)

That’s another good reason to run your mission-critical workloads on Solaris SPARC and get the security, compliance, and all the other important features needed for an enterprise architecture.

Did I say welcome to cloud computing today? 😉

Solaris I/O Latency

Starting with Solaris 11.4 you get a new interface for device latency without having to use DTrace. The information was always there, but now there is a “human readable” command for it, which makes understanding and analyzing the disk subsystem much easier.

I ran an I/O calibration in an 18c database that resides on NVMe flash drives:

root@t7primary01:~# iostat -x -L c5t1d0 c1t1d0 1
                     extended device statistics
device     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b
blkdev2   29.9   13.1  185.7   81.4  0.0  0.0    0.0    0.1   0   0
latency          range         count      density distribution
                  <4us             0        0.00%        0.00%
                 4-8us          3527        0.02%        0.02%
                8-16us       1224166        5.74%        5.76%
               16-32us       4020858       18.85%       24.61%
               32-64us       1225365        5.75%       30.35%
              64-128us      13243355       62.10%       92.45%
             128-256us       1561222        7.32%       99.77%
             256-512us         34758        0.16%       99.93%
            512-1024us          4642        0.02%       99.96%
                 1-2ms          3624        0.02%       99.97%
                 2-4ms          5758        0.03%      100.00%
                 4-8ms            97        0.00%      100.00%
                8-16ms            33        0.00%      100.00%
                 >16ms             0        0.00%      100.00%
                 total      21327405
blkdev3   32.0   14.0  198.5   86.8  0.0  0.0    0.0    0.1   0   0
latency          range         count      density distribution
                  <4us             0        0.00%        0.00%
                 4-8us          2848        0.01%        0.01%
                8-16us       1280834        5.62%        5.64%
               16-32us       4197957       18.44%       24.07%
               32-64us       1355014        5.95%       30.02%
              64-128us      14167408       62.22%       92.24%
             128-256us       1734203        7.62%       99.86%
             256-512us         29629        0.13%       99.99%
            512-1024us           895        0.00%       99.99%
                 1-2ms           614        0.00%      100.00%
                 2-4ms           939        0.00%      100.00%
                 4-8ms           104        0.00%      100.00%
                8-16ms            27        0.00%      100.00%
                 >16ms             0        0.00%      100.00%
                     extended device statistics
device     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b
blkdev2 49234.1    1.0 307781.4   20.0  0.0  5.4    0.0    0.1   0  20
latency          range         count      density distribution
                 <16us             0        0.00%        0.00%
               16-32us             1        0.00%        0.00%
               32-64us            21        0.04%        0.05%
              64-128us         42324       88.06%       88.10%
             128-256us          5647       11.75%       99.85%
             256-512us            67        0.14%       99.99%
            512-1024us             1        0.00%       99.99%
                 1-2ms             1        0.00%      100.00%
                 2-4ms             2        0.00%      100.00%
                  >4ms             0        0.00%      100.00%
                 total         48064
blkdev3 52145.2    0.0 325741.9    0.0  0.0  5.7    0.0    0.1   0  18
latency          range         count      density distribution
                 <32us             0        0.00%        0.00%
               32-64us            28        0.05%        0.05%
              64-128us         44430       87.26%       87.31%
             128-256us          6374       12.52%       99.83%
             256-512us            84        0.16%       99.99%
            512-1024us             3        0.01%      100.00%
               >1024us             0        0.00%      100.00%
                 total         50919

That’s a nice overview…
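
For comparison, the pre-11.4 way of getting a similar picture was a DTrace one-liner against the io provider; a sketch (power-of-two buckets in nanoseconds instead of iostat’s ranges):

root@t7primary01:~# dtrace -n 'io:::start { ts[arg0] = timestamp; }
    io:::done /ts[arg0]/ { @[args[1]->dev_statname] = quantize(timestamp - ts[arg0]); ts[arg0] = 0; }'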