Solaris 11.next

Well, what do you think about the news that there will be no Solaris 12… A lot of rumours and bad gossip came up from competitors…

To be honest, looking back at how hard it was to get Solaris 11 re-certifications from ISVs and other software vendors, it might be a nice idea to skip this versioning fight and just improve the OS with updates… I do not care whether it is called Solaris 10/11/12/13 or just 11.4/11.5 and so on, as long as the feature set keeps growing and it stays as stable as it is… There are still a lot of Solaris 9 and even more Solaris 10 installations in the field, run by people who are simply afraid of 11…

Also, M$ told us that Win10 will be the last version, and by the way… HP-UX has been at version 11.00 since 1997 😉

Today Oracle promises Solaris support until 2031, and there is now a five-year roadmap for Solaris and SPARC…

I am really looking forward to seeing the Solaris 11.next releases and new SPARC+++ CPUs…

See what John says:

PCI on SPARC

what cards are in my box?

# ipmitool sunoem cli "show -level all -output table /system/pci_devices/add-on description"
Connected. Use ^D to exit.
-> show -level all -output table /system/pci_devices/add-on description
Target             | Property              | Value
-------------------+-----------------------+-----------------------------------
/System/           | description           | Sun Dual Port 10 GbE PCIe 2.0 Low
 PCI_Devices/Add-  |                       | Profile Adapter, Base-T
 on/Device_3       |                       |
/System/           | description           | Oracle Storage 12 Gb SAS PCIe
 PCI_Devices/Add-  |                       | RAID HBA, Internal
 on/Device_4       |                       |
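
The same information is visible from inside a running Solaris instance as well; a quick sketch (the exact layout of the output varies per platform):

# prtdiag -v

The "IO Devices" section of the output lists the populated slots and the card names.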


Forcing Solaris to look for changes

echo '#path_to_inst_bootstrap_1' > /etc/devices/path_to_inst
bootadm update-archive

Then shut down the computer, change the PCIe card (for example, replace a NIC with an HBA) and power it on again...

It is tempting here to modify /etc/devices/path_to_inst directly, replacing the instance numbers 8 and 9 with 10 and 11. But modifications to the path_to_inst file do not survive an upgrade; any change made to that file will be dropped. Bootstrapping the path_to_inst file is the right, persistent way, because it forces the box to rebuild path_to_inst on the next boot.
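
After the box is powered on again, you can verify that the replacement card picked up fresh instance numbers (a quick sketch, assuming the classic /etc/path_to_inst location; mpt_sas is just an example driver name, substitute the driver of your new card):

# grep mpt_sas /etc/path_to_inst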

Oracle Soft vs. Hard Partitioning

Partitioning, as defined in the “Oracle Partitioning Policy”, is when a server is separated into individual sections:

Soft partitioning examples:
VMware, Hyper-V, RHEV, KVM, Xen

Hard partitioning examples:
Solaris Zones (capped), SPARC LDOM, IBM LPAR, Fujitsu PPAR, OracleVM for x86

When hard partitioning is in place you only need to license the bound CPU cores. Live migration between two hosts is never covered and will require licensing all cores (except with Oracle’s Trusted Partitions in Exalogic, Exalytics, Exadata and PCA). With soft partitioning, on the other hand, you will need to license all cores in the VM cluster.
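
On SPARC LDOMs, for example, binding cores means allocating whole cores to the domain. A minimal sketch (the domain name appdom and the core count are examples; check Oracle’s current hard-partitioning paper for the exact requirements of your release):

# allocate four whole cores to the domain (whole-core constraint)
ldm set-core 4 appdom
# cap the domain so it can never grow beyond those cores
ldm set-domain max-cores=4 appdom
# verify the core binding
ldm list -o core appdom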

Special Cases in VMware

VMware up to 5.0
In these early VMware releases running VMs could be moved within one cluster, therefore you needed to license all cores within this VMware cluster.
Customers built their own dedicated Oracle cluster inside their VMware farm…

VMware 5.1 – 5.5
With these versions a VM could be moved across cluster boundaries within a vCenter, so you had to license all servers and cores within the vCenter.
Customers built their own dedicated Oracle vCenter installation.

VMware 6.0 and later
There is no longer a need for shared storage and you can migrate VMs across vCenter instances. That requires you to license all physical servers running VMware in your company. 🙂
There are rumours that some customers had a special agreement with Oracle to use VMware in a special setup (separated and non-routed VLANs, SAN zoning and so on)… but you will have to get in touch with Oracle to negotiate your own customer-specific definition, which might certify your setup, and I am sure that it will only be valid for exactly the version you are running now.

What I would recommend my customers: take a look at Oracle-on-Oracle solutions and run a separate VM software like OracleVM next to your VMware.

BTW, OVM is “for free”; you only need to pay for support. And if you use Oracle hardware, the support comes with the hardware support contract.

Please keep in mind that there are special setups for hard partitioning you will have to follow to be on the safe side…
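
For OracleVM on x86, for example, hard partitioning means pinning the guest’s virtual CPUs to fixed physical threads in its vm.cfg (a sketch; the CPU numbers are examples, and the current Oracle hard-partitioning paper describes the exact procedure):

# vm.cfg of the guest running Oracle software
vcpus = 4            # four virtual CPUs...
cpus = "0-3"         # ...pinned to physical threads 0-3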

Oracle Storage Cloud Software Appliance Installation

I have just been playing around with Oracle Storage Cloud and tried to install the appliance, which enables access to the cloud storage via NFS (otherwise you would have to use the REST APIs).

Prerequisites:

    Oracle Linux 7 with UEK Release 4 or later
    Docker 1.8.3 or later
    NFS version 4.0 or later

And yes, for sure, an active Oracle Storage Cloud subscription.
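
A quick sanity check of the kernel and NFS prerequisites on the target box (a sketch; Docker itself gets installed in the next step):

# uname -r          # expect a 4.x el7uek kernel (UEK R4 or later)
# rpm -q nfs-utils  # the NFSv4 bits used later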

OK, I installed an Oracle VM and gave it a try:

[root@OL7mpress01 ~]# uname -a
Linux OL7mpress01 4.1.12-61.1.6.el7uek.x86_64 #2 SMP Thu Aug 18 21:55:17 PDT 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@OL7mpress01 ~]# yum install docker-engine-1.8.3-1.0.2.el7.x86_64
Loaded plugins: langpacks, rhnplugin, ulninfo
This system is receiving updates from ULN.
[...]

Installed:
  docker-engine.x86_64 0:1.8.3-1.0.2.el7

Dependency Installed:
  audit-libs-python.x86_64 0:2.4.1-5.el7         checkpolicy.x86_64 0:2.1.12-6.el7                     docker-engine-selinux.noarch 0:1.12.0-1.0.2.el7
  libsemanage-python.x86_64 0:2.1.10-18.el7      policycoreutils-python.x86_64 0:2.2.5-20.0.1.el7      python-IPy.noarch 0:0.75-6.el7
  setools-libs.x86_64 0:3.3.7-46.el7

Complete!
[root@OL7mpress01 ~]#
[root@OL7mpress01 ~]# systemctl reboot
[...]
[root@OL7mpress01 ~]# groupadd docker
[root@OL7mpress01 ~]# useradd docker-test -m
[root@OL7mpress01 ~]# usermod -a -G docker docker-test
[root@OL7mpress01 ~]# passwd docker-test
Changing password for user docker-test.
[root@OL7mpress01 ~]# systemctl start docker
[root@OL7mpress01 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@OL7mpress01 ~]# yum install nfs-utils
[...]
[root@OL7mpress01 ~]# systemctl start rpcbind
[root@OL7mpress01 ~]# systemctl start nfs-server
[root@OL7mpress01 ~]# systemctl enable rpcbind
[root@OL7mpress01 ~]# systemctl enable nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@OL7mpress01 ~]# su - docker-test
[docker-test@OL7mpress01 ~]$ docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
 Pool Name: docker-251:0-551720-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 1.821 GB
 Data Space Total: 107.4 GB
 Data Space Available: 46.23 GB
 Metadata Space Used: 1.479 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.146 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.107-RHEL7 (2016-06-09)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.12-61.1.6.el7uek.x86_64
Operating System: Oracle Linux Server 7.2
CPUs: 2
Total Memory: 15.42 GiB
Name: OL7mpress01
ID: MROP:4OV3:WKNM:MQ3A:274N:EZ23:2SZQ:JHM5:GMQP:5EHC:BTS6:NMED
[docker-test@OL7mpress01 oscsa-onprem]$ df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             7.7G     0  7.7G   0% /dev
tmpfs                7.8G     0  7.8G   0% /dev/shm
tmpfs                7.8G   17M  7.7G   1% /run
tmpfs                7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root   50G  7.0G   44G  14% /
/dev/xvda1           497M  216M  281M  44% /boot
/dev/mapper/ol-home   42G   33M   42G   1% /home
tmpfs                1.6G     0  1.6G   0% /run/user/0
tmpfs                1.6G     0  1.6G   0% /run/user/1001
[docker-test@OL7mpress01 ~]$ cd oscsa-onprem
[docker-test@OL7mpress01 oscsa-onprem]$ sudo ./oscsa-install.sh  -p http://proxy.serverbla.at:3128 -a
data args: -v /oscsa/cache:/usr/share/oracle/ -v /oscsa/md:/usr/share/oracle/system/ -v /oscsa/logs:/var/log/gateway
*************************************
Imported temporary env vars from docker-test to this install session
*************************************
Checking that docker is installed and using the correct version
Pass found docker version Docker version 1.8.3, build aa9b234

*************************************
Checking host prerequisites
*************************************

Detected linux operating system
Checking kernel version
Pass kernel version 4.1.12-61.1.6.el7uek.x86_64 found
Checking NFS version
Pass found NFS version 4

*************************************
All prerequisites have been met
*************************************


*************************************
Begin install
*************************************

Enter the install location press enter for default (/opt/oscsa_gateway/) :

Installing to destination /opt/oscsa_gateway/
Copied install scripts
Copied OSCSA image
Starting configuration script
Enter the mount location for data cache
/oscsa/cache
Enter the mount location for meta data
/oscsa/md
Enter the mount location for log file information
/oscsa/logs
Enter the docker network mode (host or bridge), Hit <enter> for the default bridge mode.

Enter the host port to use for the Administrative Web Interface. Hit <enter> to use dynamic port mapping

Enter the host port to use for NFS access. Hit <enter> to use dynamic port mapping

Enter the host port to use for the optional HTTP REST service. Hit <enter> to use dynamic port mapping

Writing configuration
Importing image
Please run 'oscsa up' to start the software appliance

*************************************
For additional details, please see (/opt/oscsa_gateway/OSCSA_GATEWAY_README.txt) file
*************************************

[docker-test@OL7mpress01 oscsa-onprem]$ sudo firewall-cmd --state
[sudo] password for docker-test:
running
[docker-test@OL7mpress01 oscsa-onprem]$ sudo firewall-cmd --zone=public --add-port=32774/tcp --permanent
success
[docker-test@OL7mpress01 oscsa-onprem]$ sudo firewall-cmd --zone=public --add-port=32775/tcp --permanent
success
[docker-test@OL7mpress01 oscsa-onprem]$ sudo firewall-cmd --zone=public --add-port=32776/tcp --permanent
success
[docker-test@OL7mpress01 oscsa-onprem]$ sudo firewall-cmd --reload
success
[docker-test@OL7mpress01 oscsa-onprem]$ 
[docker-test@OL7mpress01 oscsa-onprem]$ time ./oscsa up
data args: -v /oscsa/cache:/usr/share/oracle/ -v /oscsa/md:/usr/share/oracle/system/ -v /oscsa/logs:/var/log/gateway
Creating OSCSA Volume
Applying configuration file to container
Starting OSCSA [oscsa_gw:1.0.11]
Setting up config file port with nfs
Setting up config file port with admin
Setting up config file port with rest
Management Console: https://OL7mpress01:32769
If you have already configured an OSCSA FileSystem via the Management Console,
you can access the NFS share using the following port.

NFS Port: 32770

Example: mount -t nfs -o vers=4,port=32770 OL7mpress01:/ /local_mount_point

real    0m19.945s
user    0m0.875s
sys     0m1.063s
[docker-test@OL7mpress01 oscsa-onprem]$

Now I could configure a cloud filesystem via the BUI (browser user interface).

Now let’s try it with a client:

root@psvsparc1:~ # uname -a
SunOS psvsparc1 5.11 11.3 sun4v sparc SUNW,SPARC-Enterprise-T5120
root@psvsparc1:~ # mkdir /oraclecloud
root@psvsparc1:~ # mount -F nfs -o vers=4,port=32770 10.52.72.82:/oraclecloud /oraclecloud/
root@psvsparc1:~ # df -h  /oraclecloud
Filesystem             Size   Used  Available Capacity  Mounted on
10.52.72.82:/oraclecloud
                       8.0T   4.0T       4.0T    50%    /oraclecloud
root@psvsparc1:~ #
root@psvsparc1:~ # ls -alh /downloads/EIS*iso
-rw-r--r--   1 root     root        7.4G Aug 29 12:28 /downloads/EIS-DVD-ONE-08JUN16.iso
-rw-r--r--   1 root     root        7.6G Aug 29 12:29 /downloads/EIS-DVD-TWO-08JUN16.iso
root@psvsparc1:~# time cp /downloads/EIS-DVD-* /oraclecloud/

real    27m54.958s
user    0m0.032s
sys     6m36.771s
root@psvsparc1:~# 
root@psvsparc1:~# bc
15*1024/28/60
9

So that’s around 9 MB/s for the two ISOs (≈15 GB in about 28 minutes), and that’s OK… I am not alone in the company 🙂

root@psvsparc1:~# speedtest-cli
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from next layer (92.60.12.82)...
Selecting best server based on latency...
Hosted by NEXT LAYER GmbH (Vienna) [1.07 km]: 1800000.0 ms
Testing download speed........................................
Download: 563.94 Mbit/s
Testing upload speed..................................................
Upload: 69.71 Mbit/s
root@psvsparc1:~#

Solaris Kernel Memory

From time to time I see Solaris servers using a lot of memory "just" for the kernel. You can take a deeper look to see where that kernel memory goes.
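
For a quick high-level split between kernel, ZFS and user memory, ::memstat is worth running first (a sketch; it can take quite a while on big machines):

# echo ::memstat | mdb -k

The detailed per-cache view comes from ::kmastat inside an interactive mdb session: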

root@server:~# mdb -k
Loading modules: [ unix genunix specfs dtrace zfs scsi_vhci sd mpt_sas mac px ldc ip hook neti ds arp usba kssl stmf stmf_sbd random sockfs md niumx idm
cpc crypto fcip fctl fcp mdesc vldc smbsrv nfs zvmm ufs logindmux ptm ii nsctl sppp nsmb rdc sdbc sv lofs ipc ]

>
> ::kmastat
cache                        buf    buf    buf    memory     alloc alloc
name                        size in use  total    in use   succeed  fail
------------------------- ------ ------ ------ ---------- --------- -----
kmem_magazine_1               16  13787  39546     638976B   2212036     0
kmem_magazine_3               32  13293 149017    4825088B  20882637     0
kmem_magazine_7               64  63818 687708   44711936B  74367950     0
kmem_magazine_15             128  81728 154728   20119552B  40689829     0
kmem_magazine_31             256      0      0          0B         0     0
kmem_magazine_47             384      0      0          0B         0     0
kmem_magazine_63             512      0      0          0B         0     0
kmem_magazine_95             768      0      0          0B         0     0
kmem_magazine_127           1024      0      0          0B         0     0
kmem_magazine_143           1152      0      0          0B         0     0
kmem_magazine_179           1440      0      0          0B         0     0
kmem_magazine_255           2048      0      0          0B         0     0
kmem_magazine_361           2896      0      0          0B         0     0
kmem_magazine_492           3944      0      0          0B         0     0
kmem_slab_cache               72  21964  41888    3063808B   1905292     0
kmem_bufctl_cache             24  60474  84162    2039808B   1966131     0
kmem_bufctl_audit_cache      128      0      0          0B         0     0
kmem_va_8192                8192   6617  15040  123207680B     92929     0
kmem_va_16384              16384     12     32     524288B        22     0
kmem_va_24576              24576     11     50    1310720B       147     0
kmem_va_32768              32768      6     64    2097152B      1152     0
kmem_va_40960              40960      9    192    8388608B      4001     0
kmem_va_49152              49152      9    200   10485760B      7542     0
kmem_va_57344              57344      1     48    3145728B      1143     0
kmem_va_65536              65536     18    144    9437184B      4846     0
kmem_alloc_8                   8 225785 229390    1851392B 1460654149     0
kmem_alloc_16                 16  70050  74022    1196032B 1802895553     0
kmem_alloc_24                 24 176486 244712    5931008B 4193514113     0
kmem_alloc_32                 32  40268  45034    1458176B 3716814702     0
kmem_alloc_40                 40  31285  47705    1925120B 1528348140     0
kmem_alloc_48                 48  67277 128102    6209536B 1790577690     0
kmem_alloc_56                 56  91174 141955    8019968B 858181106     0
kmem_alloc_64                 64 151639 323946   21061632B 1933863414     0
kmem_alloc_80                 80 187667 261994   21250048B 596556789     0
kmem_alloc_96                 96   9275  29064    2834432B 1763944514     0
kmem_alloc_112               112   8553  16056    1826816B 242740674     0
kmem_alloc_128               128 108755 111636   14516224B 241660346     0
kmem_alloc_160               160   4200  18000    2949120B 378366571     0
kmem_alloc_192               192  60314 130536   25460736B 516119668     0
kmem_alloc_224               224   1682   2232     507904B 498796577     0
kmem_alloc_256               256   2237  51305   13557760B 1199808091     0
kmem_alloc_320               320   4686  18475    6053888B 4005106805     0
kmem_alloc_384               384    664   2289     892928B 1743958062     0
kmem_alloc_448               448   1487   1890     860160B 156543851     0
kmem_alloc_512               512   6052   6645    3629056B 283536972     0
kmem_alloc_640               640  48054  63216   43155456B  49035869     0
kmem_alloc_768               768    181    510     417792B 533346028     0
kmem_alloc_896               896    106    423     385024B   2201063     0
kmem_alloc_1152             1152   3164   3563    4169728B 758869183     0
kmem_alloc_1344             1344    449    768    1048576B 134708639     0
kmem_alloc_1600             1600    130    490     802816B  12434972     0
kmem_alloc_2048             2048    862   1080    2211840B  49268582     0
kmem_alloc_2688             2688    147    471    1286144B 228164685     0
kmem_alloc_4096             4096   2311   2738   11214848B 599289299     0
kmem_alloc_8192             8192   6843   7355   60252160B 823767889     0
kmem_alloc_12288           12288     24    286    3514368B  94265288     0
kmem_alloc_16384           16384    448   4864   79691776B 372203661     0
kmem_alloc_24576           24576    247    358    8798208B  20182582     0
kmem_alloc_32768           32768    625    829   27164672B  67372957     0
kmem_alloc_40960           40960    292    497   20357120B  10594602     0
kmem_alloc_49152           49152    264    424   20840448B  18226119     0
kmem_alloc_57344           57344     57    140    8028160B   7752775     0
kmem_alloc_65536           65536     83    146    9568256B  68801457     0
kmem_alloc_73728           73728    254    318   23445504B   1224400     0
kmem_alloc_81920           81920     12     75    6144000B    855948     0
kmem_alloc_90112           90112     16     65    5857280B    772093     0
kmem_alloc_98304           98304     72    130   12779520B    673798     0
kmem_alloc_106496         106496      5     60    6389760B    649065     0
kmem_alloc_114688         114688      9     63    7225344B    796342     0
kmem_alloc_122880         122880      3     59    7249920B    388702     0
kmem_alloc_131072         131072     20    352   46137344B  46679183     0
kmem_alloc_262144         262144     25     89   23330816B     22305     0
kmem_alloc_524288         524288     10     91   47710208B    292417     0
kmem_alloc_1048576        1048576     14     83   87031808B    151526     0
streams_mblk                  64  11628  17892    1163264B 4222106588     0
streams_dblk_64              192   1880   3948     770048B 1560260262     0
streams_dblk_128             256      6    744     196608B 2094673871     0
streams_dblk_192             320      2    500     163840B 407759363     0
streams_dblk_256             384    206   1911     745472B 384014686     0
streams_dblk_320             448      0    360     163840B 916550415     0
streams_dblk_512             640      0    312     212992B 122220096     0
streams_dblk_1024           1152      0    322     376832B  60312175     0
streams_dblk_1536           1664      2    468     851968B 4237545508     0
streams_dblk_1920           2048      0    180     368640B   2582208     0
streams_dblk_2560           2688      0    276     753664B   9535544     0
streams_dblk_4096           4224      0     81     368640B  10214770     0
streams_dblk_8192            128      0    693      90112B 319193334     0
streams_dblk_12288         12416      0     98    1261568B  17628988     0
streams_dblk_16384           128      0    315      40960B   2089403     0
streams_dblk_20480         20608      0     56    1179648B   9157299     0
streams_dblk_24576           128      0    315      40960B     23840     0
streams_dblk_28672         28800      0     28     819200B     37109     0
streams_dblk_32768           128      0    567      73728B     72012     0
streams_dblk_36864         36992      0     56    2097152B   1752035     0
streams_dblk_40960           128      0    378      49152B     89538     0
streams_dblk_45056         45184      0     21     958464B    122283     0
streams_dblk_49152           128      0    378      49152B     72414     0
streams_dblk_53248         53376      0     49    2637824B  36207829     0
streams_dblk_57344           128      0     63       8192B        80     0
streams_dblk_61440         61568      0     21    1302528B     30352     0
streams_dblk_65536           128      0     63       8192B       136     0
streams_dblk_69632         69760      0      0          0B         0     0
streams_dblk_73728           128      0      0          0B         0     0
streams_dblk_esb             128   2048   2709     352256B 310293725     0
streams_dblk_mdc             128      0      0          0B         0     0
streams_fthdr                408      0      0          0B         0     0
streams_ftblk                376      0      0          0B         0     0
multidata                    248      0      0          0B         0     0
multidata_pdslab            7112      0      0          0B         0     0
multidata_pattbl              32      0      0          0B         0     0
log_cons_cache                48     29   1352      65536B    943715     0
taskq_ent_cache               56  17219  36685    2072576B 105416892     0
taskq_cache                  280    307    348      98304B       417     0
id32_cache                    32      7    759      24576B  61202888     0
One_wallet_cache              68   3257   4480     327680B  93172593     0
Pac_nopredictor_pool      505536      1      7    3538944B         1     0
Mo_cache                     288      0      0          0B         0     0
Monode_prealloc_cache        104      0      0          0B         0     0
Mo_policy_cache               72      0      0          0B         0     0
Mo_resident_cache             72      0      0          0B         0     0
Mo_capture_cache             376      0      0          0B         0     0
Mo_caphead_cache              64      0      0          0B         0     0
Mw_later_cache               128      0      0          0B         0     0
Mw_cache                     128      0      0          0B         0     0
fakemw                        88      0    460      40960B 306059268     0
mvec_tracking                  8      0   2030      16384B 1104075934     0
mvec_tag                      48      0      0          0B         0     0
Memseg_cache                  64      0      0          0B         0     0
bp_map_8192                 8192      0      0          0B         0     0
bp_map_16384               16384      0     80    1310720B      1823     0
bp_map_24576               24576      0     80    2097152B      2443     0
bp_map_32768               32768      0     80    2621440B      3048     0
bp_map_40960               40960      0      0          0B         0     0
bp_map_49152               49152      0      0          0B         0     0
bp_map_57344               57344      0      0          0B         0     0
bp_map_65536               65536      0      0          0B         0     0
mod_hash_entries              24   1635   3042      73728B  11631688     0
ipp_mod                      304      0      0          0B         0     0
ipp_action                   368      0      0          0B         0     0
ipp_packet                    64      0      0          0B         0     0
mmuctxdom_cache              696      8     11       8192B         8     0
sfmmuid_cache               1176    370    636     868352B 132755208     0
sfmmu_tsbinfo_cache           64    384   1638     106496B 277806748     0
sfmmu_tsb8k_cache           8192      0      0          0B         0     0
sfmmu_tsb_cache             8192    104    218    1785856B 130478770     0
sfmmu8_cache                 320 359954 380900  124813312B 230714082     0
sfmmu1_cache                  96 1160768 1277472  124583936B 975785584     0
pa_hment_cache                64    384   1764     114688B 152854640     0
ism_blk_cache                336      0      0          0B         0     0
ism_ment_cache                32      0      0          0B         0     0
srd_cache                   2192    120    407     909312B  36959896     0
region_cache                 144    186    840     122880B  38085849     0
scd_cache                   2192      0      0          0B         0     0
seg_cache                    112  27575  33264    3784704B 2966412882     0
seg_pcache                   104      0    624      65536B     52128     0
vfs_cache                    240    100    561     139264B     24676     0
vn_cache                     216  99646 140492   37126144B 346774691     0
shadow_cache                  72      0      0          0B         0     0
vsk_anchor_cache              40     39    203       8192B       138     0
nep_cache                    384      5    294     114688B      1429     0
dev_info_node_cache          760    283    310     253952B       830     0
ndi_fm_entry_cache            32   5392   6831     221184B 170484658     0
kcf_sreq_cache                56      0    126       8192B   1542463     0
kcf_areq_cache               296      0     25       8192B        25     0
kcf_context_cache            112      0      0          0B         0     0
object_handle                 80 2798905 3152109  255664128B 931834012     0
object_debug_handle          216      0      0          0B         0     0
object_event                  40      0      0          0B         0     0
segkmem_ppa_262144        262144      0     12    3145728B        16     0
segkp_8192                  8192    375    512    4194304B  73688442     0
segkp_16384                16384      0      0          0B         0     0
segkp_24576                24576      0      0          0B         0     0
segkp_32768                32768      0      0          0B         0     0
segkp_40960                40960   3323   3363  146931712B    507168     0
umem_np_8192                8192      0    128    1048576B    555389     0
umem_np_16384              16384      0     80    1310720B     24633     0
umem_np_24576              24576      0      0          0B         0     0
umem_np_32768              32768      0    104    3407872B    484312     0
umem_np_40960              40960      0     90    3932160B    459456     0
umem_np_49152              49152      0      0          0B         0     0
umem_np_57344              57344      0      0          0B         0     0
umem_np_65536              65536      0     68    4456448B     24633     0
thread_cache                1040    915   1545    1687552B 107833858     0
wbuf32_cache                 512    741    900     491520B  82343977     0
wbuf64_cache                1024    775   1141    1335296B   1121736     0
lwp_cache                   1048   1516   1785    1949696B   5360544     0
turnstile_cache               64   3245   4788     311296B  87013637     0
rw_reentrd_cache             136   3245   4130     573440B  93172717     0
tslabel_cache                 48      2    169       8192B         2     0
cred_cache                   184   1137   3520     655360B 111662850     0
proc_ac_cache                 64    301   1512      98304B  81421934     0
rctl_cache                    48   6392   8957     434176B 1288617008     0
rctl_val_cache                64  13614  18018    1171456B 2940692401     0
task_cache                   160    160    800     131072B   1083893     0
kmem_defrag_cache            224      2     36       8192B         2     0
kmem_move_cache               56      0  18705    1056768B   6862115     0
i_dmahdl                    2648      0      0          0B         0     0
timeout_request              128      0      0          0B         0     0
cyclic_id_cache               80    263    303      24576B       329     0
callout_cachebabecafe         80   3239   3276     425984B      3239     0
callout_lcachebabecafe        48  10163  10206     663552B     10163     0
bounds_predictor          505536      3      7    3538944B         3     0
dnlc_space_cache              24      0      0          0B         0     0
file_cache                    72   5236   7392     540672B 2852162077     0
stream_head_cache            376    518    798     311296B 122978665     0
queue_cache                  664   1176   1536    1048576B 128578727     0
syncq_cache                  168     81    432      73728B     20512     0
qband_cache                   64      2    126       8192B         2     0
linkinfo_cache                48     50    507      24576B      5064     0
ciputctrl_cache             1024      0      0          0B         0     0
serializer_cache              64     55   1008      65536B    163607     0
as_cache                     352    371    667     237568B 132754981     0
marker_cache                 128      0    378      49152B    715850     0
anon_cache                    48 164399 212771   10313728B 2491485009     0
anonmap_cache                120  15546  19363    2367488B 3655840909     0
segvn_cache                  224  27575  32724    7446528B 2756160318     0
segvn_szc_cache1              64      0    882      57344B 772856386     0
segvn_szc_cache2             512      0      0          0B         0     0
segvn_szc_cache3            4096      0     88     360448B  35897056     0
segvn_szc_cache4           32768      0      0          0B         0     0
segvn_szc_cache5          262144      0      0          0B         0     0
segvn_szc_cache6          2097152      0      0          0B         0     0
flk_edges                     48      0    169       8192B       409     0
fdb_cache                    104      0      0          0B         0     0
timer_cache                  176      4     46       8192B        31     0
vmu_bound_cache               56   6650   7540     425984B     15900     0
vmu_object_cache              88   2305   2484     221184B      3234     0
physio_buf_cache             248      0    416     106496B     74677     0
process_cache               4168    383    468    2129920B  92796816     0
numaio_obj_cache             328    112    192      65536B       425     0
numaio_grp_cache             144      9     56       8192B        27     0
mac_impl_cache             13488      6      9     122880B         8     0
mac_ring_cache               480      8     30      16384B        12     0
mac_block_cache              152      0      0          0B         0     0
mac_descriptor_cache          64      0      0          0B         0     0
mac_packet_pool_cache       1184      0      0          0B         0     0
mac_magazine_cache           552      0      0          0B         0     0
flow_tab_cache_0             184      5     42       8192B         7     0
flow_entry_cache_0         22440     12     20     450560B        26     0
mac_bcast_grp_cache           80      7    101       8192B        19     0
mac_client_impl_cache       2064      6     11      24576B         8     0
mac_promisc_impl_cache       120      0      0          0B         0     0
ip_minor_arena_sa_1            1     46    384        384B   1721432     0
ip_minor_arena_la_1            1     65   1088       1088B    959164     0
ip_conn_cache                744      3     60      49152B      1193     0
tcp_conn_cache              2120    314    693    1548288B    737336     0
udp_conn_cache              1256     69    324     442368B   2592397     0
rawip_conn_cache            1096      0    140     163840B     17159     0
rts_conn_cache               816      8     27      24576B        30     0
ire_cache                    352    153    168      65536B       202     0
ncec_cache                   200    110    279      73728B      6158     0
nce_cache                    112    117    378      49152B      6454     0
rt_entry                     152    133    168      32768B       180     0
radix_mask                    32      8    253       8192B        15     0
radix_node                   120      5     67       8192B         5     0
ipsec_actions                 88      0      0          0B         0     0
ipsec_selectors               80      0      0          0B         0     0
ipsec_policy                  80      0      0          0B         0     0
tcp_timercache                88   1239   2024     180224B    726650     0
tcp_notsack_blk_cache         24      1   1690      40960B   2083719     0
squeue_cache                 168    100    126      24576B       100     0
sctp_conn_cache             2608      0      0          0B         0     0
sctp_faddr_cache             472      0      0          0B         0     0
sctp_set_cache                24      0      0          0B         0     0
sctp_ftsn_set_cache           16      0      0          0B         0     0
dce_cache                    152    179    265      40960B       239     0
ire_gw_secattr_cache          24      0      0          0B         0     0
ldc_memhdl_cache              48      0      0          0B         0     0
ldc_memseg_cache              64      0      0          0B         0     0
fnode_cache                  176      9     84      16384B    121213     0
pipe_cache                   320     66    350     114688B  57668150     0
snode_cache                  152    858   1590     245760B 208309593     0
clnt_clts_endpnt_cache        88      0      0          0B         0     0
bpmap_cache                  200      0      0          0B         0     0
zio_cache                    912     26  76848   78692352B 2230982187     0
zio_link_cache                48      0  82472    3997696B 3836987868     0
sa_cache                      56  87949 132675    7495680B 197787650     0
dnode_t                      696 954000 954371  710746112B 104337805     0
dmu_buf_impl_t               216 1769940 2097530  464404480B 206659141     0
arc_elink_t                   32 967948 1389982   45006848B 3414386709     0
arc_buf_t                    168 992393 1401744  239230976B 559040333     0
arc_ref_t                     72 1769981 2097760  153436160B  54845411     0
arc_ghost_t                   64 314633 317646   20652032B 208588997     0
arc_meta                     184 956283 1262932  235134976B  57748154     0
arc_data                     184 915092 924264  172081152B 109836972     0
arc_data_512                  16 743953 756951   12230656B 122178970     0
arc_meta_512                  16 843315 1134159   18325504B  82077912     0
arc_data_1024                 16   3656  12168     196608B   1870065     0
arc_meta_1024                 16    380   2535      40960B    291201     0
arc_data_1536                 16   2409   8112     131072B   1191893     0
arc_meta_1536                 16     72   1521      24576B     43830     0
arc_data_2048                 16   1994   8112     131072B   1125267     0
arc_meta_2048                 16    307   2028      32768B     76657     0
arc_data_3072                 16   2210  11154     180224B   3490145     0
arc_meta_3072                 16     50   1521      24576B     20353     0
arc_data_4096                 16   1369  15210     245760B   5126190     0
arc_meta_4096                 16   5662   7098     114688B   1601065     0
arc_data_6144                 16   1864  10647     172032B   2382865     0
arc_meta_6144                 16     20   1521      24576B     40232     0
arc_data_8192                 16    998   7605     122880B   1823731     0
arc_meta_8192                 16     15   1014      16384B      3434     0
arc_data_12288                16    958   7605     122880B   1551591     0
arc_meta_12288                16     32   1014      16384B      8738     0
arc_data_16384                16    513   5577      90112B   1210128     0
arc_meta_16384                16  90237 305214    4931584B  15250828     0
arc_data_24576                16    458  20787     335872B  10999937     0
arc_meta_24576                16     19   1014      16384B      5337     0
arc_data_32768                16    428  25857     417792B   7619052     0
arc_meta_32768                16     10   1014      16384B      2976     0
arc_data_40960                16    163  22308     360448B  15904627     0
arc_meta_40960                16     13   1521      24576B     22600     0
arc_data_49152                16    227  19266     311296B   7580554     0
arc_meta_49152                16      1    507       8192B       205     0
arc_data_57344                16   2956  13182     212992B   3697718     0
arc_meta_57344                16      0    507       8192B       175     0
arc_data_65536                16   1638   3042      49152B    967486     0
arc_meta_65536                16      0    507       8192B       359     0
arc_data_73728                16     94   2028      32768B    569502     0
arc_meta_73728                16      4    507       8192B       356     0
arc_data_81920                16    448   1521      24576B    528592     0
arc_meta_81920                16      3    507       8192B       542     0
arc_data_90112                16   1492   2535      40960B    675906     0
arc_meta_90112                16      1    507       8192B       204     0
arc_data_98304                16     68   1521      24576B    428205     0
arc_meta_98304                16      1    507       8192B       151     0
arc_data_106496               16     47   1521      24576B    381247     0
arc_meta_106496               16      1    507       8192B       248     0
arc_data_114688               16     32   1521      24576B    387053     0
arc_meta_114688               16      1   1014      16384B      5162     0
arc_data_122880               16     23   1521      24576B    396002     0
arc_meta_122880               16      1    507       8192B       263     0
arc_data_131072               16 126094 142467    2301952B 3965382018     0
arc_meta_131072               16     29   1521      24576B    316793     0
arc_data_139264               16      0      0          0B         0     0
arc_meta_139264               16      0      0          0B         0     0
arc_data_262144               16      0      0          0B         0     0
arc_meta_262144               16      0      0          0B         0     0
arc_data_524288               16      0      0          0B         0     0
arc_meta_524288               16      0      0          0B         0     0
arc_data_1048576              16      0      0          0B         0     0
arc_meta_1048576              16      0      0          0B         0     0
l2arc_seg_t                   96      0      0          0B         0     0
l2arc_buf_t                   80      0      0          0B         0     0
zfetch_trigger_t              80    189   2525     204800B  61215486     0
space_seg_cache               64  35260 482076   31342592B 2637153477     0
dsl_share_t                  328      1     24       8192B         1     0
dsl_share_state_t             48      0      0          0B         0     0
zil_lwb_cache                208      6    624     131072B    736315     0
zil_train_cache               64      2   1386      90112B   2295082     0
zil_car_cache                 56      2   1885     106496B   3394521     0
zil_ian_cache                 80     32   1818     147456B  63267405     0
vdev_disk_cache              256      0    341      90112B 882654057     0
zfs_znode_cache              328  87949  91200   31129600B 243237064     0
dls_link_cache               344      6     23       8192B         9     0
dls_devnet_cache             368      6     22       8192B         8     0
px0_px0_0_cache1            8192     12     32     262144B 112535534     0
px0_px0_0_cache2           16384      3      8     131072B      9249     0
px0_px0_0_cache8           65536      2      2     131072B         2     0
dv_node_cache                184    664    748     139264B      1211     0
px0_mpt_sas0_2_cache1       8192      2     32     262144B 296721559     0
px0_mpt_sas0_2_cache2      16384      0     16     262144B  99973488     0
pkt_cache_mpt_sas_0          720      0    110      81920B 290898182     0
px0_mpt_sas1_3_cache1       8192      4     48     393216B 737561670     0
px0_mpt_sas1_3_cache2      16384      0     32     524288B 281684410     0
pkt_cache_mpt_sas_1          720      2    132      98304B 763554977     0
sdev_node_cache              248    905    992     253952B     51623     0
audit_proc                    48    385   1521      73728B  80880005     0
drv_secobj_cache             296      0      0          0B         0     0
dld_str_cache                320     11    150      49152B    147472     0
exacct_object_cache           40      0      0          0B         0     0
rw_numa_cache                128  19911  20853    2711552B   2551080     0
kssl_cache                  1624      0      0          0B         0     0
stmf_task_event_cache         64      0      0          0B         0     0
stmf_task_cache             3296      0      0          0B         0     0
stmf_ref_node_cache           16      2    507       8192B         6     0
sbd_task_cache              1304      0      0          0B         0     0
namefs_inodes_1                1     47   1152       1152B     74469     0
port_cache                    80     13    101       8192B        51     0
socket_cache                 792    321    720     589824B   1700687     0
socktpi_cache               1096      0      7       8192B         6     0
socktpi_unix_cache          1096     32    350     409600B    170699     0
sock_sod_cache               656      0      0          0B         0     0
tl_cache                     448     78    396     180224B    171292     0
keysock_1                      1      0     64         64B         1     0
spdsock_1                      1      0     64         64B         5     0
rds_alloc_cache               88      0      0          0B         0     0
dtrace_state_cache        262144      0     14    3670016B        29     0
idm_buf_cache                256      0      0          0B         0     0
idm_task_cache              1408      0      0          0B         0     0
idm_tx_pdu_cache             400      0      0          0B         0     0
idm_rx_pdu_cache             596      0      0          0B         0     0
softmac_cache                568      5     14       8192B         7     0
softmac_upper_cache          232      0      0          0B         0     0
fctl_cache                   112      0      0          0B         0     0
vldc_cookie_buf_cache     262144      0     65   17039360B  33963567     0
authkern_cache                72      0    784      57344B 205205946     0
authnone_cache                72      0      0          0B         0     0
authloopback_cache            72      0      0          0B         0     0
authdes_cache_handle          80      0      0          0B         0     0
rnode_cache                  680    982    990     737280B     61171     0
nfs_access_cache              56    182   3190     180224B    145036     0
client_handle_cache           32     17    506      16384B      1850     0
rnode4_cache                1032      0      0          0B         0     0
svnode_cache                  40      0      0          0B         0     0
nfs4_access_cache             56      0      0          0B         0     0
client_handle4_cache          32      0      0          0B         0     0
nfs4_ace4vals_cache           48      0      0          0B         0     0
nfs4_ace4_list_cache         264      0      0          0B         0     0
NFS_idmap_cache               56      0      0          0B         0     0
lm_xprt_10003c62cf40          32      0      0          0B         0     0
lm_vnode_10003c62cf40        184      0      0          0B         0     0
lm_sysid_10003c62cf40        160      0     50       8192B         1     0
lm_client_10003c62cf40       128      0     63       8192B         1     0
lm_async_10003c62cf40         32      0      0          0B         0     0
lm_sleep_10003c62cf40         96      0      0          0B         0     0
lm_config_10003c62cf40        80      2    101       8192B         2     0
uvfs_uvnode_cache            392      0      0          0B         0     0
uvfs_task_sync_cache          16      0      0          0B         0     0
uvfs_task_rootvp_cache        16      0      0          0B         0     0
uvfsvfs_cache                280      0      0          0B         0     0
ufs_inode_cache              368      0      0          0B         0     0
directio_buf_cache           272      0      0          0B         0     0
lufs_save                     24      0      0          0B         0     0
lufs_bufs                    256      0      0          0B         0     0
lufs_mapentry_cache          112      0      0          0B         0     0
pty_map                       64     54    756      49152B     10729     0
sppptun_map                  440      0      0          0B         0     0
Hex0x100030bf3428_minor_1      1      0      0          0B         0     0
Hex0x100030bf3430_minor_1      1      0      0          0B         0     0
px0_igb3_4_cache1           8192   1793   1840   15073280B 169264446     0
px0_igb3_4_cache2          16384      1     24     393216B 194088248     0
px0_igb0_5_cache1           8192   1793   2000   16384000B 3804956026     0
px0_igb0_5_cache2          16384      1     24     393216B 653588623     0
iscsit_status_pdu_cache      400      0      0          0B         0     0
stp_2_0_987                 1712      0      0          0B         0     0
stp_m2_0_987                  56      0      0          0B         0     0
audit_buffer                 152      0    212      32768B       690     0
lnode_cache                   32     12   1265      40960B 247872400     0
flow_tab_cache_1             184      1     42       8192B         1     0
flow_entry_cache_1         22440      1      4      90112B         1     0
lm_xprt_10003c62fa40          32      0      0          0B         0     0
lm_vnode_10003c62fa40        184      0      0          0B         0     0
lm_sysid_10003c62fa40        160      0      0          0B         0     0
lm_client_10003c62fa40       128      0      0          0B         0     0
lm_async_10003c62fa40         32      0      0          0B         0     0
lm_sleep_10003c62fa40         96      0      0          0B         0     0
lm_config_10003c62fa40        80      1    101       8192B         1     0
vnic_cache                  1544      1      5       8192B         1     0
crypto_session_cache         104      0      0          0B         0     0
sdp_generic_table             32      0      0          0B         0     0
sdp_advt_cache                80      0      0          0B         0     0
sdp_advt_table                24      0      0          0B         0     0
sdp_conn_cache              1944      0      0          0B         0     0
Hex0x10004c7d1428_minor_1      1      0      0          0B         0     0
Hex0x10004c7d1430_minor_1      1      0      0          0B         0     0
stp_3_1_3841                1712      0      0          0B         0     0
stp_m3_1_3841                 56      0      0          0B         0     0
stp_2_1_3866                1712      0      0          0B         0     0
stp_m2_1_3866                 56      0      0          0B         0     0
fcsm_job_cache               104      0      0          0B         0     0
aggr_port_cache             1032      0      0          0B         0     0
aggr_grp_cache              1008      0      0          0B         0     0
iptun_cache                  288      0      0          0B         0     0
smb_shr_notify_cache          72      0      0          0B         0     0
smb_share_cache              168      1     48       8192B         1     0
smb_vfs_cache                 48      0      0          0B         0     0
smb_mc_cache                  96      0      0          0B         0     0
smb_uio_cache                752      0      0          0B         0     0
smb_node_cache               752      0      0          0B         0     0
smb_txreq                  66592      0      0          0B         0     0
vxlan_grp_cache              200      0      0          0B         0     0
vxlan_cache                  904      0      0          0B         0     0
zvsdir_zvnode_cache           96      0      0          0B         0     0
px0_igb1_6_cache1           8192      0     32     262144B      3584     0
px0_igb1_6_cache2          16384      0      8     131072B         1     0
px0_igb2_7_cache1           8192      0     32     262144B      3584     0
px0_igb2_7_cache2          16384      0      8     131072B         1     0
------------------------- ------ ------ ------ ---------- --------- -----
Total [hat_memload]                             124813312B 230714082     0
Total [kmem_msb]                                 76464128B 148885992     0
Total [kmem_va]                                 158597120B    111782     0
Total [kmem_default]                           3346472960B 2897004240     0
Total [bp_map]                                    6029312B      7314     0
Total [kmem_tsb_default]                          1785856B 130478770     0
Total [hat_memload1]                            124583936B 975785584     0
Total [segkmem_ppa]                               3145728B        16     0
Total [umem_np]                                  14155776B   1548423     0
Total [id32]                                        24576B  61202888     0
Total [segkp]                                   151126016B  74195610     0
Total [ip_minor_arena_sa]                             384B   1721432     0
Total [ip_minor_arena_la]                            1088B    959164     0
Total [px0_px0_0_vmem_top]                          524288B 112544785     0
Total [px0_mpt_sas0_2_vmem_top]                          524288B 396695047     0
Total [px0_mpt_sas1_3_vmem_top]                          917504B 1019246080     0
Total [namefs_inodes]                                1152B     74469     0
Total [keysock]                                        64B         1     0
Total [spdsock]                                        64B         5     0
Total [px0_igb3_4_vmem_top]                        15466496B 363352694     0
Total [px0_igb0_5_vmem_top]                        16777216B 163577353     0
Total [px0_igb1_6_vmem_top]                          393216B      3585     0
Total [px0_igb2_7_vmem_top]                          393216B      3585     0
------------------------- ------ ------ ------ ---------- --------- -----

vmem                         memory     memory    memory     alloc alloc
name                         in use      total    import   succeed  fail
------------------------- ---------- ----------- ---------- --------- -----
heap                      4418913714176B 17592186044416B         0B    445761     0
    vmem_metadata          14204928B   14417920B  14417920B      1625     0
        vmem_seg           42156032B   42156032B  42156032B      5141     0
        vmem_hash           5945344B    5955584B   5955584B       378     0
        vmem_vmem            443992B     513592B    475136B       171     0
    heap_alloc                61760B      65536B     65536B       198     0
    hat_memload           124813312B  124813312B 124813312B     17940     0
    kstat                   1317944B    1351680B   1286144B   1091062     0
    kmem_metadata          61620224B   97779712B  97779712B    115733     0
        kmem_msb           76464128B   76464128B  76464128B    130218     0
        kmem_audit                0B          0B         0B         0     0
        kmem_cache          2236176B    3907584B   3907584B       552     0
        kmem_hash            530944B     540672B    540672B      1038     0
    kmem_log                4204640B    4210688B   4210688B         6     0
    kmem_firewall_va              0B          0B         0B         0     0
        kmem_firewall             0B          0B         0B         0     0
    mod_sysfile                   8B       8192B      8192B         1     0
    kmem_oversize         20114661376B 20122435584B 20122435584B    333208     0
    kmem_va               210501632B  210501632B 210501632B     43084     0
        kmem_default      3346472960B 3925639168B 3925639168B   5790429     0
    little_endian           1162048B    1228800B   1228800B 104879936     0
    big_endian             36849813B   58744832B  58744832B  60100613     0
    bp_map                  6029312B    6029312B   6029312B      6402     0
    ksyms                   3267768B    3317760B   3317760B       418     0
    ctf                      278772B     327680B    327680B       420     0
    kmem_bigtsb                   0B          0B         0B         0     0
        kmem_bigtsb_default         0B          0B         0B         0     0
    kmem_tsb               12582912B   12582912B  12582912B       610     0
        kmem_tsb_default   10174464B   12582912B  12582912B  28176218   579
    hat_memload1          124583936B  124583936B 124583936B     15214     0
    KOM firewall                  0B          0B         0B         0     0
    segkmem_ppa             3145728B    3145728B   3145728B         3     0
    umem_np                14155776B   14155776B  14155776B     24604     0
    contig_mem_arena       30666816B  222298112B         0B 133475242     0
    contig_mem_arena_le           0B          0B         0B         0     0
    defdump_arena         1147854848B 1147854848B 1147854848B         6     0
        defdump_metadata_arena 1147854848B 1147854848B 1147854848B         6     0
lppool                    154394624B  154394624B         0B     31840 138056
heap32                     10772480B  134217728B         0B       126     0
    id32                      24576B      24576B     24576B         3     0
    module_data             8638543B    8814592B   8290304B       596     0
    promplat                      0B          0B         0B        79     0
    trapstat                      0B          0B         0B         0     0
heaptext                   33562624B  134217728B         0B        17     0
    module_text            33551680B   37863424B         0B       421     0
logminor_space                   61B     262137B         0B    942755     0
taskq_id_arena                  146B 2147483647B         0B       240     0
heap_lp                   4026531840B 4397241204736B         0B        16     0
    kmem_lp               4026531840B 4026531840B 4026531840B      7466  4150
segkp                     151420928B 2147483648B         0B      4905     0
rctl_ids                         44B      32767B         0B        44     0
zoneid_space                      1B       9998B         0B         1     0
taskid_space                    160B     999999B         0B   1048209     0
pool_ids                          0B     999998B         0B         0     0
contracts                       159B 2147483646B         0B    853997     0
regspec                     9175040B 5368709120B         0B        31     0
mac_minor_ids                   116B     130070B         0B       792     0
ip_minor_arena_sa               384B     262140B         0B         6     0
ip_minor_arena_la              1088B 4294705152B         0B        17     0
px0_px0_0_vmem_top           655360B 1878917120B         0B      2135     8
    px0_px0_0_vmem_16             0B          0B         0B      2125     0
px0_px0_0_vmem_c                  0B  268435456B         0B         0     9
px0_mpt_sas0_2_vmem_top     1441792B  939393024B         0B  41273125     8
    px0_mpt_sas0_2_vmem_16    262144B     262144B    262144B  35835132     0
px0_mpt_sas0_2_vmem_c             0B  134217728B         0B         0     9
px0_mpt_sas1_3_vmem_top     1835008B  939393024B         0B 110855090     8
    px0_mpt_sas1_3_vmem_16    262144B     262144B    262144B  94440293     0
px0_mpt_sas1_3_vmem_c             0B  134217728B         0B         0     9
lib_va_32                   7954432B 2031599616B         0B        20     0
lib_va_64                 283328512B 2251793356234752B         0B       211     0
namefs_inodes                  1152B      65536B         0B        18     0
tl_minor_space                   78B     262138B         0B    163787     0
keysock                          64B 4294967295B         0B         1     0
spdsock                          64B 4294967295B         0B         1     0
dtrace                       104429B 4294967295B         0B   1219732     0
dtrace_minor                      0B 4294967293B         0B        28     0
syseventd_channel                15B        101B         0B     24474     0
syseventd_channel                 1B          2B         0B         1     0
idm_taskid_space                  0B      65536B         0B         0     0
module_text_holesrc_2             0B    4194304B         0B         0     0
    ktext_hole_2            1862280B    4194304B         0B       156     0
module_text_holesrc_0             0B    4194304B         0B         0     0
    ktext_hole_0            2155704B    4194304B         0B        20     0
ibcm_local_sid                    0B 4294967295B         0B         0     0
ibcm_ip_sid                       0B      65535B         0B         0     0
lmsysid_space                     1B      16383B         0B         3     0
module_text_holesrc_3             0B    4194304B         0B         0     0
    ktext_hole_3            1924624B    4194304B         0B       154     0
module_text_holesrc_4             0B    4194304B         0B         0     0
    ktext_hole_4            2109924B    4194304B         0B        72     0
module_text_holesrc_1             0B    4194304B         0B         0     0
    ktext_hole_1             236992B    4194304B         0B        32     0
logdmux_minor                    34B        256B         0B      5028     0
ptms_minor                       54B        128B         0B     10703     3
sppptun_minor                     0B         16B         0B         0     0
syseventconfd_door                1B        101B         0B         1     0
syseventconfd_door                1B          2B         0B         1     0
devfsadm_event_channel            1B        101B         0B         1     0
devfsadm_event_channel            1B          2B         0B         1     0
Hex0x100030bf3428_minor           0B 4294967294B         0B         0     0
Hex0x100030bf3430_minor           0B 4294967294B         0B         0     0
px0_igb3_4_vmem_top        15597568B  939393024B         0B  12335257     8
    px0_igb3_4_vmem_16            0B          0B         0B  12334856     0
px0_igb3_4_vmem_c                 0B  134217728B         0B         0     9
px0_igb0_5_vmem_top        16908288B  939393024B         0B    362737     8
    px0_igb0_5_vmem_16            0B          0B         0B    362449     0
px0_igb0_5_vmem_c                 0B  134217728B         0B         0     9
iscsit_tsih_pool                  0B      65535B         0B         0     0
ipnet_minor_space                 6B     262141B         0B         6     0
crypto                            0B         16B         0B    136281     0
lofi_id                           0B      16383B         0B         0     0
ds_minors                         0B     262140B         0B         0     0
Hex0x10004c7d1428_minor           0B 4294967294B         0B         0     0
Hex0x10004c7d1430_minor           0B 4294967294B         0B         0     0
semids                           90B        128B         0B        90     0
mdesc_minor                       0B        256B         0B    120953     0
aggr_portids                      0B      65534B         0B         0     0
aggr_key_ids                      0B      64535B         0B         0     0
zvmm_minor_space                  0B     262142B         0B         0     0
px0_igb1_6_vmem_top          524288B  939393024B         0B       115     8
    px0_igb1_6_vmem_16            0B          0B         0B         0     0
px0_igb1_6_vmem_c                 0B  134217728B         0B         0     9
px0_igb2_7_vmem_top          524288B  939393024B         0B       115     8
    px0_igb2_7_vmem_16            0B          0B         0B         0     0
px0_igb2_7_vmem_c                 0B  134217728B         0B         0     9
msqids                            0B        128B         0B         0     0
shmids                            0B        128B         0B         0     0
------------------------- ---------- ----------- ---------- --------- -----
>
>
> ::kmastat !awk '!/Total/ {print $4 " " $1}' | sort -n | tail
76464128B kmem_msb
97779712B kmem_metadata
124583936B hat_memload1
124813312B hat_memload
210501632B kmem_va
1147854848B defdump_arena
1147854848B defdump_metadata_arena
3925639168B kmem_default
4026531840B kmem_lp
20122435584B kmem_oversize
>
>
> ::vmem ! grep kmem_oversize
000003000008a000   kmem_oversize        20114661376  20122435584    333212     0
> 000003000008a000::print vmem_t vm_kstat.vk_free.value.l
vm_kstat.vk_free.value.l = 0x50304
> 000003000008a000::print vmem_t vm_kstat.vk_alloc.value.l
vm_kstat.vk_alloc.value.l = 0x5159d
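The difference between the two counters is the number of outstanding segments: 0x5159d - 0x50304 = 0x1299, i.e. 4,761 allocations in kmem_oversize that have not been freed yet.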

In newer Solaris releases mdb shows the ZFS buffers as separate lines... they are part of the kernel memory:

> ::memstat
Page Summary                 Pages             Bytes  %Tot
----------------- ----------------  ----------------  ----
Kernel                     3129843             23.8G   76%
Defdump prealloc            140119              1.0G    3%
ZFS Metadata                292857              2.2G    7%
ZFS File Data              2251900             17.1G   55%
Anon                        158898              1.2G    4%
Exec and libs                 6226             48.6M    0%
Page cache                    9878             77.1M    0%
failed to read 'mrp_svc'; module not present
Free (cachelist)                52              416k    0%
Free (freelist)             121576            949.8M    3%
Total                      4128768             31.5G
>


What is happening there? Well... IO IO IO 🙂

root@server:~#  dtrace -n 'fbt::vmem_alloc:entry { @[args[0]->vm_name] = sum(arg1); }'
dtrace: description 'fbt::vmem_alloc:entry ' matched 1 probe
^C

  ip_minor_arena_la                                                 1
  namefs_inodes                                                     1
  ip_minor_arena_sa                                                 2
  logminor_space                                                    2
  little_endian                                                119168
  segkp                                                        188416
  big_endian                                                  1397837
  heap                                                        3145728
  kmem_oversize                                               3145728
  px0_mpt_sas0_2_vmem_16                                      5767168
  px0_mpt_sas0_2_vmem_top                                     5767168
  px0_mpt_sas1_3_vmem_16                                     15990784
  px0_mpt_sas1_3_vmem_top                                    15990784
root@server:~#
root@server:~# dtrace -n 'fbt::vmem_alloc:entry /args[0]->vm_name == "px0_mpt_sas1_3_vmem_top"/ { @[stack()] = count(); }'
dtrace: description 'fbt::vmem_alloc:entry ' matched 1 probe
^C


              genunix`vmem_xalloc+0x670
              genunix`vmem_alloc+0x21c
              px`px_dvma_pool_default_dvma_alloc+0x180
              px`px_atu_dvma_alloc+0x94
              px`px_dvma_map+0x54
              px`px_dma_bindhdl+0xbc
              genunix`ddi_dma_buf_bind_handle+0x54
              scsi`scsi_cache_bind+0x24
              scsi`scsi_cache_init_pkt+0x2d4
              scsi`scsi_init_pkt+0x4c
              scsi_vhci`vhci_bind_transport+0x9ac
              scsi_vhci`vhci_scsi_start+0x350
              sd`sd_start_cmds+0x3a4
              sd`sd_core_iostart+0x228
              sd`sd_mapblockaddr_iostart+0x210
              sd`xbuf_iostart+0x20c
              zfs`vdev_disk_strategy+0x30
              zfs`vdev_disk_io_start+0x26c
              zfs`zio_execute+0xf4
              zfs`vdev_queue_io_done+0xb4
                9

              genunix`vmem_xalloc+0x670
              genunix`vmem_alloc+0x21c
              px`px_dvma_pool_default_dvma_alloc+0x180
              px`px_atu_dvma_alloc+0x94
              px`px_dvma_map+0x54
              px`px_dma_bindhdl+0xbc
              genunix`ddi_dma_buf_bind_handle+0x54
              scsi`scsi_cache_bind+0x24
              scsi`scsi_cache_init_pkt+0x2d4
              scsi`scsi_init_pkt+0x4c
              scsi_vhci`vhci_bind_transport+0x9ac
              scsi_vhci`vhci_scsi_start+0x350
              sd`sd_start_cmds+0x3a4
              sd`sd_core_iostart+0x228
              sd`sd_mapblockaddr_iostart+0x210
              sd`xbuf_iostart+0x20c
              zfs`vdev_disk_strategy+0x30
              zfs`vdev_disk_io_start+0x26c
              zfs`zio_execute+0xf4
              zfs`vdev_raidz_io_start+0x26c
               16

              genunix`vmem_xalloc+0x670
              genunix`vmem_alloc+0x21c
              px`px_dvma_pool_default_dvma_alloc+0x180
              px`px_atu_dvma_alloc+0x94
              px`px_dvma_map+0x54
              px`px_dma_bindhdl+0xbc
              genunix`ddi_dma_buf_bind_handle+0x54
              scsi`scsi_cache_bind+0x24
              scsi`scsi_cache_init_pkt+0x2d4
              scsi`scsi_init_pkt+0x4c
              scsi_vhci`vhci_bind_transport+0x9ac
              scsi_vhci`vhci_scsi_start+0x350
              sd`sd_start_cmds+0x3a4
              sd`sd_core_iostart+0x228
              sd`sd_mapblockaddr_iostart+0x210
              sd`xbuf_iostart+0x20c
              zfs`vdev_disk_strategy+0x30
              zfs`vdev_disk_io_start+0x26c
              zfs`zio_execute+0xf4
              zfs`vdev_queue_io_done+0x94
              110
root@server:~# dtrace -n 'fbt::vmem_alloc:entry /args[0]->vm_name == "kmem_oversize"/ { @[stack()] = count(); }'
dtrace: description 'fbt::vmem_alloc:entry ' matched 1 probe
^C


              genunix`kmem_alloc+0x160
              genunix`kmem_zalloc+0x120
              autofs`auto_calldaemon+0x1d0
              autofs`auto_null_request+0x24
              autofs`unmount_tree+0x60
              autofs`unmount_zone_tree+0xc
              unix`thread_start+0x4
                1
root@server:~#

View solaris page sizes

Solaris supports different memory page sizes, on SPARC up to 2 GB. It does not always make sense to use the 2 GB pages, also known as huge pages; the Oracle database chooses the size depending on its needs:

root@server:/# pagesize -a
8192
65536
4194304
268435456
2147483648
root@server:/# ps -ef | grep smon | wc -l
      25
root@server:/# prctl -n zone.max-shm-memory -i zone global
zone: 0: global
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
zone.max-shm-memory
        usage           47.5GB
        system          16.0EB    max   deny                                 -
root@server:/#
root@server:/# mdb -k
Loading modules: [ unix genunix specfs dtrace zfs scsi_vhci ldc mac ip hook neti ds arp kssl sockfs ipc random idm mdesc cpc crypto fcip fctl fcp ufs logindmux ptm sppp nsmb nfs ]
> ::memstat
Page Summary                 Pages             Bytes  %Tot
----------------- ----------------  ----------------  ----
Kernel                     1531886             11.6G    9%
Guest                            0                 0    0%
ZFS Metadata                357116              2.7G    2%
ZFS File Data               229109              1.7G    1%
Anon                       9240216             70.4G   55%
Exec and libs               125346            979.2M    1%
Page cache                   61769            482.5M    0%
In temporary use               512                4M    0%
Free (cachelist)           4395237             33.5G   26%
Free (freelist)             836025              6.3G    5%
Total                     16777216              128G
> ::tile -s
TILE  MN  SZC   TOTAL   PCT    USER   PCT   KCAGE   PCT
           8k   52.4g 40.9%   43.1g 33.7%    9.2g 7.22%
          64k   19.0g 14.9%   19.0g 14.9%       -     -
           4m   18.2g 14.2%   18.2g 14.2%       -     -
         256m   30.2g 23.6%   24.7g 19.3%    5.5g 4.29%
           2g      8g 6.25%      8g 6.25%       -     -
   total         128g  100%  113.2g 88.4%   14.7g 11.5%
> ::quit
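To see which page sizes a given process actually received, pmap -s prints the page size per mapping. A quick check against an Oracle SGA; the pgrep pattern is an assumption, adjust it to your instance:

# the Pgsz column shows the page size per mapping; [ism shmid=...] lines are the SGA
pmap -s $(pgrep -o -u oracle oracle) | grep ism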

zdqueue – script to find deleted files still using space on ZFS

It is always a strange situation when ZFS shows you a full filesystem and you know there should be enough free space. One reason could be a big file you did not think about; I wrote a small script to find the biggest files on a ZFS, see the zfsize script in my previous post.
Another cause can be a deleted file which is still held open by a process. The file is gone and you no longer see it in the filesystem with ls/du/find and so on… but you only get your free space back when the process stops using the file or you kill the process.
I wrote a small script to find such processes and the old files which are still sitting in the ZFS delete queue.
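For reference, the manual way to look at the delete queue with zdb; the script below automates exactly this (object 1 is the dataset's master node, which references the DELETE_QUEUE object):

# object 1 is the master node; it holds the id of the DELETE_QUEUE object
zdb -dddd oracle/u01 1 | grep DELETE_QUEUE
# dump that object to list the znode ids still waiting to be freed
zdb -dddd oracle/u01 <queue-object-id>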

root@solaris:~/scripts# ./zdqueue.sh -h
ZFS Delete Queue Analyzing
Usage:
                ./zdqueue.sh -z <ZFS> [-o tempdir]
root@solaris:~/scripts#
root@solaris:~/scripts# ./zdqueue.sh -z oracle/u01
ZFS = oracle/u01
Mountpoint = /u01
TempDir = /tmp
This may take a while ...
I will wait at least 1 minute before analyzing
............
  PID TTY         TIME CMD
 2703 pts/10      0:43 pfiles
Still analyzing process list...
Do you want to wait another minute or work with the data we have? (y/n) n
OK, I will kill process 2703 and work with gathered information
---------------------------------------
Process: 709    /u01/app/12.1.0.2/grid/bin/oraagent.bin
The file was:   /u01/app/grid/diag/crs/orasid/crs/trace/crsd_oraagent_oracle.trc

Process: 595    /u01/app/12.1.0.2/grid/bin/orarootagent.bin
The file was:   /u01/app/grid/diag/crs/orasid/crs/trace/crsd_orarootagent_root.trc





#!/usr/bin/bash
#set -x 
###################################################
#
# zdqueue v0.1
#
# ZFS Delete Queue Analyzing
#
# small script to find open files on ZFS which 
# should be deleted but are still using space.
#
# 16.09.2016, written by Martin Presslaber
#
###################################################
help ()
{
		print "ZFS Delete Queue Analyzing"
                print "Usage:"
                print "\t\t$0 -z <ZFS> [-o tempdir]"
}
########## preTESTS #############
OS=`uname -s`
RELEASE=`uname -r`
VERS=`uname -v`
ZONE=`zonename`
if [[ $OS != SunOS ]]
then
        print "This script will only work on Solaris"
        exit 1
fi
[[ $ZONE == global ]] || { print "This script will only work in the global zone"; exit 1; }
[[ $VERS == 1[1-9].[1-9] ]] && SOLARIS=new
if [ ${RELEASE#*.} -gt 10 ] ;
then
        ID=$(/usr/bin/whoami)
else
        ID=$(/usr/ucb/whoami)
fi
if [ $ID != "root" ]; then
        echo "$ID, you must be root to run this program."
        exit 1
fi
if [ $# -lt 1 ]
        then
                help && exit 1
        fi
########## Options ###########
TEMPDIR="/tmp"
while getopts "z:o:h" args
do
	case $args in
	z)
		ZFS=$OPTARG
		ZFSlist=`zfs list $ZFS 2>/dev/null | nawk -v ZFS=$ZFS '$1~ZFS {print $0}'`
		[[ $ZFSlist == "" ]] && print "$ZFS does not seem to be a ZFS" && exit 1
		ZFSmountpoint=`zfs list $ZFS 2>/dev/null | nawk -v ZFS=$ZFS '$1~ZFS {print $NF}'`
	;;

	o)
	TEMPDIR=$OPTARG
	[[ -d $TEMPDIR ]] || { print "$TEMPDIR does not exist!"; exit 1; }
	;;

	h|*)
		help && exit 1
	;;
	esac
done
shift $(($OPTIND -1))
sleeping ()
{
SLEEP=1;  while [[ SLEEP -ne 12 ]]; do sleep 5 ; print ".\c" ; let SLEEP=$SLEEP+1; done ; print "."
}
######### Let's go #########
print "ZFS = $ZFS"
print "Mountpoint = $ZFSmountpoint"
print "TempDir = $TEMPDIR"
print "This may take a while ... "
print "I will wait at least 1 minute before analyzing"
######## Create File with open delete queue
zdb -dddd $ZFS $(zdb -dddd $ZFS 1 | nawk '/DELETE_QUEUE/ {print $NF}') > $TEMPDIR/zdqueue-open.tmp
######## Find processes with files from delete queue
OPENFILES=$(nawk '/\= / {print $NF}' $TEMPDIR/zdqueue-open.tmp | while read DQi; do echo "$DQi|\c"; done | nawk '{print $4 $NF}')

[[ $OPENFILES == "" ]] && print "No files in delete queue for $ZFS" && exit 0

pfiles `fuser -c $ZFSmountpoint 2>/dev/null` 2>/dev/null > $TEMPDIR/zdqueue-procs.tmp &
PIDpfiles=$!
sleeping 
ps -p $PIDpfiles && \
WAIT=yes
while [[ $WAIT == yes ]]
do 
	print "Still analyzing process list..."
	read -r -p "Do you want to wait another minute or work with the data we have? (y/n) " A
	case $A in
	[yY][eE][sS]|[yY])
	sleeping
	ps -p $PIDpfiles && \
	WAIT=yes
	;;
	[nN][oO]|[nN])
	print "OK, I will kill process $PIDpfiles and work with gathered information"
	kill $PIDpfiles
	WAIT=n
	;;	
	esac
done
print "---------------------------------------"
egrep $OPENFILES $TEMPDIR/zdqueue-procs.tmp | tr ':' ' ' | awk '$7 ~ /ino/ {print $8}' |\
while read INO
do 
	print "Process: \c"
	awk '/Current/ {print PROC};{PROC=$0} /ino/ {print $5}' $TEMPDIR/zdqueue-procs.tmp |\
	tr ':' ' ' | nawk -v INO=$INO '$1 ~ /^[0-9]/ {print $0} $2 ~ INO {print $0}' |\
	nawk '$1 ~ /ino/ {print INO};{INO=$0}'
	ZID=`nawk -v INO=$INO '$3 ~ INO {print $1}' $TEMPDIR/zdqueue-open.tmp`
	if [[ $SOLARIS == new ]]
	then
		print "The file was:   \c"
		echo "::walk zfs_znode_cache | ::if znode_t z_id = $ZID and z_unlinked = 1 | ::print znode_t z_vnode->v_path" |\
		mdb -k | awk '/\// {print $NF}' | sed 's/\"//g'
	else
		print "The file was:   \c"
		echo "::walk zfs_znode_cache z | ::print znode_t z_id | ::grep ".==$ZID" | ::map <z | ::print znode_t z_vnode->v_path z_unlinked" |\
		mdb -k | awk '/\// {print $NF}' | sed 's/\"//g'
	fi
	print "\n"
done

#### Clean up ####
rm $TEMPDIR/zdqueue-procs.tmp
rm $TEMPDIR/zdqueue-open.tmp
#################### EOF ####################

zfsize – small script to find the biggest files on ZFS

I found some time for scripting and was looking into zdb and what could be done with it. It is a nice way to ask a filesystem "what is your biggest file?" (rather than running a mega find command). You can also find files which were deleted but still occupy space in the ZFS because a process is using them; I wrote a script for that as well, you will find it in my next post.
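For comparison, the conventional way has to stat every file through the mountpoint, while zdb reads the sizes straight from the object metadata. The classic one-liner, using /downloads as an example mountpoint:

# walk the filesystem, sort by the size column of ls -l, show the two biggest files
find /downloads -mount -type f -exec ls -l {} + | sort -nk 5 | tail -2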

root@server:~# ./zfsize.sh -h
small script to find the biggest files on ZFS
Usage:
                ./zfsize.sh -z <ZFS> [-o tempdir] [-c count]
root@server:~# ./zfsize.sh -z rpool/downloads -c 2
ZFS = rpool/downloads
Mountpoint = /downloads
TempDir = /tmp
This may take a while ...
/downloads/sol-10-u11-ga-sparc-dvd.iso  2207.50 MB
/downloads/sol-11_1-repo-full.iso       2896.00 MB
root@server:~#

#!/usr/bin/bash
#set -x
###################################################
#
# zfsize v0.1
#
# ZFS file sizes
#
# small script to find the biggest files on ZFS
#
# 16.09.2016, written by Martin Presslaber
#
###################################################
help ()
{
		print "small script to find the biggest files on ZFS"
                print "Usage:"
                print "\t\t$0 -z <ZFS> [-o tempdir] [-c count]"
}
########## preTESTS #############
OS=`uname -s`
RELEASE=`uname -r`
VERS=`uname -v`
ZONE=`zonename`
if [[ $OS != SunOS ]]
then
        print "This script will only work on Solaris"
        exit 1
fi
[[ $ZONE == global ]] || { print "This script will only work in the global zone"; exit 1; }
[[ $VERS == 1[1-9].[1-9] ]] && SOLARIS=new
if [ ${RELEASE#*.} -gt 10 ] ;
then
        ID=$(/usr/bin/whoami)
else
        ID=$(/usr/ucb/whoami)
fi
if [ $ID != "root" ]; then
        echo "$ID, you must be root to run this program."
        exit 1
fi
if [ $# -lt 1 ]
        then
                help && exit 1
        fi
#[[ $1 != "-[az]" ]] && help && exit 1
########## Options ###########
TEMPDIR="/tmp"
while getopts "z:o:c:h" args
do
        case $args in
        z)
                ZFS=$OPTARG
                ZFSlist=`zfs list $ZFS 2>/dev/null | nawk -v ZFS=$ZFS '$1~ZFS {print $0}'`
                [[ $ZFSlist == "" ]] && print "$ZFS does not seem to be a ZFS" && exit 1
                ZFSmountpoint=`zfs list $ZFS 2>/dev/null | nawk -v ZFS=$ZFS '$1~ZFS {print $NF}'`
        ;;

        o)
        TEMPDIR=$OPTARG
        [[ -d $TEMPDIR ]] || { print "$TEMPDIR does not exist!"; exit 1; }
        ;;

	c)
	COUNT="-$OPTARG"
	;;

        h|*)
                help && exit 1
        ;;
        esac
done
shift $(($OPTIND -1))

######### Let's go #########
print "ZFS = $ZFS"
print "Mountpoint = $ZFSmountpoint"
print "TempDir = $TEMPDIR"
print "This may take a while ... "

zdb -dddd $ZFS |\
nawk -v MP=$ZFSmountpoint 'BEGIN { printf("FILE\tSIZE\n"); }
$0 ~/ZFS plain file$/ { interested = 1; }
interested && $1 == "path" { printf(MP"%s", $2); }
interested && $1 == "size" { printf("\t%.2f MB\n", $2/1024/1024); }
interested && $1 == "Object" { interested = 0; }'  > $TEMPDIR/zfsize.tmp
sort -nk 2,2 $TEMPDIR/zfsize.tmp > $TEMPDIR/zfsize-sorted.tmp
tail $COUNT $TEMPDIR/zfsize-sorted.tmp
# clean up
rm $TEMPDIR/zfsize.tmp
rm $TEMPDIR/zfsize-sorted.tmp
##################### EOF #####################

Using different HW Features in a Box

I wrote a small article for my company about how you could use Oracle's new SPARC hardware for different layers in your datacentre… the original is in German and can be found at SPARC T7-1 testing In-Memory, DAX and Crypto Engines.
Some findings and interesting points translated for my blog:

So what I thought about are the classic tasks normally spread across several servers, built into one box. All of them could benefit from different features which come with the M7 or S7 chips.
The database in the backend will profit from the big memory bandwidth and the SQL offload engines called DAX, data analytics accelerators. Oracle's slides claim that, in combination, the database can scan up to 170 billion rows per second with those streaming engines, at a measured bandwidth of 160 GB/s per socket. Wow… and that is a measurement; the M7 hardware facts list 4 memory controller units per socket which can handle 333 GB/s of raw memory bandwidth per processor (it seems that DDR4 is the "bottleneck", not the CPU), compared to the latest Xeon E7 88xx v4 (Q2/16) with 102 GB/s mentioned on Intel's ARK technical details pages.

The next layer could be the application itself. With 8 threads per core it is a perfect fit for a high user load, and with the critical-threads feature a process gets more exclusive access to the hardware. Perfect for running a wide mix of workloads, some designed for throughput, others for low latency.

The third level could be something like a reverse proxy with an SSO backend. The proxy could terminate the application sessions, if not already encrypted, and use the built-in cryptographic accelerators on the processor for the encryption. Solaris itself and several standard applications already use these engines, for example Apache, IPsec, Java, KSSL, OpenSSL and ZFS crypto. And it is not only Oracle software like the database and WebLogic that supports Solaris' Crypto Framework; IBM's DB2, Informix, IBM HTTP Server and WebSphere are certified with the IBM Global Security Kit to use SPARC's hardware encryption (IBM GSKit v8).

Oracle SPARC processors can handle 15 industry-standard algorithms plus a bunch of random number generators (AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512). (BTW: Xeons have 7 crypto instructions and 5 on-chip accelerated algorithms; IBM POWER8 has 6 instructions and 8 accelerated algorithms.)
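Whether your userland actually sees those engines is easy to verify; a quick check (output varies by platform):

# list the instruction set extensions; on M7/S7 you should see aes, camellia, sha512 and friends
isainfo -v
# show the kernel and userland crypto providers currently available
cryptoadm list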

The last level could be the way to the internet, separated from the other domains. Solaris offers a built-in firewall, load balancer and other web utilities to handle the connections. Having Solaris on SPARC in front also makes life harder for so-called script kiddies and their canned exploits: on one side SPARC is big endian, so standard attacks run in the "wrong direction" compared to little-endian x86; on the other side the new SPARC processors are protected by Silicon Secured Memory. When an application requests new memory via malloc(), the operating system tags the block of memory with a version number and gives the app a pointer to that memory. Whenever a pointer is used to access a block of memory, the pointer's version number must match the memory block's version number, or an exception is triggered. The version numbers are checked in real time by the processor with a tiny overhead, an extra one percent of execution time according to Oracle's benchmarks (more info at theregister).
So imagine using all of these features: a whole datacentre could be hosted on a single server, or, if it comes down to availability, on a cluster with failover or live migration between the servers.

t7datacenter

solaris repo

Local Repository

Hopefully you have a lot of Solaris systems, so it might make sense to create your own local mirror repository for Solaris and other packages.

Let's start creating the repo with the GA build available from the download sites for everyone:

root@psvsparc1:/downloads/11.3repo# ls
install-repo.ksh           sol-11_3-repo_2of5.zip     sol-11_3-repo_4of5.zip     sol-11_3-repo_md5sums.txt
sol-11_3-repo_1of5.zip     sol-11_3-repo_3of5.zip     sol-11_3-repo_5of5.zip
root@psvsparc1:/downloads/11.3repo# chmod +x install-repo.ksh
root@psvsparc1:/downloads/11.3repo# ./install-repo.ksh -d /ai/repo/
Using sol-11_3-repo download.
Uncompressing sol-11_3-repo_1of5.zip...done.
Uncompressing sol-11_3-repo_2of5.zip...done.
Uncompressing sol-11_3-repo_3of5.zip...done.
Uncompressing sol-11_3-repo_4of5.zip...done.
Uncompressing sol-11_3-repo_5of5.zip...done.
Repository can be found in /ai/repo/.
root@psvsparc1:/downloads/11.3repo# pkgrepo rebuild -s /ai/repo/
Initiating repository rebuild.
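A quick sanity check before serving it:

# verify publisher, package count and status of the freshly built repo
pkgrepo info -s /ai/repo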

OK, now we could use this local repo but we also want other servers to use it. We need a new service for that:

root@psvsparc1:~# svccfg -s application/pkg/server setprop pkg/inst_root=/ai/repo
root@psvsparc1:~# svccfg -s application/pkg/server setprop pkg/readonly=true
root@psvsparc1:~# svccfg -s application/pkg/server setprop pkg/port=8080
root@psvsparc1:~# svcadm refresh application/pkg/server
root@psvsparc1:~# svcadm enable application/pkg/server
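Before pointing clients at it, check that the depot really came up:

# the service instance should be online...
svcs application/pkg/server
# ...and the depot should answer over HTTP
pkgrepo info -s http://localhost:8080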

That's it... now let's connect the client:

root@client:~# pkg unset-publisher solaris
Updating package cache                           1/1
root@client:~# pkg set-publisher -O http://psvsparc1:8080 solaris
root@client:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://psvsparc1:8080/

OK… at the end of the day we want updates… you will need an active support contract, then you can connect to the support repository. This example updates the local solaris repository to the newest patchset / SRU available at Oracle. The SSL files can be obtained from http://pkg-register.oracle.com/

root@psvsparc1:~# pkgrecv -s https://pkg.oracle.com/solaris/support -d /ai/repo \
> --key /var/pkg/ssl/pkg.oracle.com.key.pem \
> --cert /var/pkg/ssl/pkg.oracle.com.certificate.pem -m latest '*'
Processing packages for publisher solaris ...
Retrieving and evaluating 6949 package(s)...
PROCESS                                         ITEMS    GET (MB)   SEND (MB)
Completed                                   1191/1191   2836/2836   4383/4383

root@psvsparc1:~#
root@psvsparc1:~# pkgrepo -s /ai/repo refresh
Initiating repository refresh.
root@psvsparc1:~#
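To confirm the new SRU really landed in the local repo, list the versions of the entire package it now carries:

# the newest 'entire' version marks the SRU level of the repo
pkgrepo list -s /ai/repo entire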

The client now sees the GA version, its own installed version and the newest one:

root@client:~# pkg list -af entire
NAME (PUBLISHER)                                  VERSION                    IFO
entire (solaris)                                  0.5.11-0.175.3.11.0.6.0    ---
entire (solaris)                                  0.5.11-0.175.3.10.0.7.0    i--
entire (solaris)                                  0.5.11-0.175.3.1.0.5.0     ---
root@client:~# 
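From here, updating the client is the usual pkg workflow; a minimal sketch:

# dry run first to see what would be updated
pkg update -nv
# then update for real; pkg creates a new boot environment when required
pkg update --accept
# afterwards reboot into the new BE (beadm list shows it)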
other repositories

You might want to use other Oracle software repositories... no problem:

root@psvsparc1:~#  zfs create -o atime=off rpool/ai/repo-ss
root@psvsparc1:~#  pkgrepo create /ai/repo-ss
root@psvsparc1:~#  pkgrecv --key /var/pkg/ssl/pkg.oracle.com.key.pem --cert /var/pkg/ssl/pkg.oracle.com.certificate.pem -s https://pkg.oracle.com/solarisstudio/support/ -d  /ai/repo-ss '*'
Retrieving and evaluating 347 package(s)...
PROCESS                                         ITEMS    GET (MB)   SEND (MB)
Completed                                     130/130   4667/4667 14730/14730

root@psvsparc1:~# pkgrepo -s /ai/repo-ss/ refresh
root@psvsparc1:~# pkgrepo get -s /ai/repo-ss
SECTION    PROPERTY                     VALUE
publisher  prefix                       ""
repository check-certificate-revocation False
repository signature-required-names     ()
repository trust-anchor-directory       /etc/certs/CA/
repository version                      4
root@psvsparc1:~# svccfg -s pkg/server add solarisstudio
root@psvsparc1:~# svccfg -s pkg/server:solarisstudio addpg pkg application
root@psvsparc1:~# svccfg -s pkg/server:solarisstudio setprop pkg/port=8082
root@psvsparc1:~# svccfg -s pkg/server:solarisstudio setprop pkg/inst_root=/ai/repo-ss
root@psvsparc1:~# svccfg -s pkg/server:solarisstudio addpg general framework
root@psvsparc1:~# svccfg -s pkg/server:solarisstudio addpropvalue general/enabled boolean: true
root@psvsparc1:~# svccfg -s pkg/server list
:properties
default
solarisstudio
root@psvsparc1:~# svcadm enable application/pkg/server:solarisstudio
root@psvsparc1:~#
root@psvsparc1:~#
root@psvsparc1:~#

And another one:

root@psvsparc1:~# svccfg -s pkg/server add ha-cluster
root@psvsparc1:~# svccfg -s pkg/server:ha-cluster addpg pkg application
root@psvsparc1:~# svccfg -s pkg/server:ha-cluster setprop pkg/port=8081
root@psvsparc1:~# svccfg -s pkg/server:ha-cluster setprop pkg/inst_root=/ai/repo-sc
root@psvsparc1:~# svccfg -s pkg/server:ha-cluster addpg general framework
root@psvsparc1:~# svccfg -s pkg/server:ha-cluster addpropvalue general/enabled boolean: true
root@psvsparc1:~# svccfg -s pkg/server list
:properties
default
solarisstudio
ha-cluster
root@psvsparc1:~#
root@psvsparc1:~# svcadm enable application/pkg/server:ha-cluster
root@psvsparc1:~# svcs -a | grep ha-cluster
online         14:34:23 svc:/application/pkg/server:ha-cluster
root@psvsparc1:~#
root@psvsparc1:~# netstat -aun | grep 808
      *.8082               *.*            root       4414 pkg.depotd          0      0  128000      0 LISTEN
      *.8081               *.*            root       3940 pkg.depotd          0      0  128000      0 LISTEN
      *.8080               *.*            root       2081 pkg.depotd          0      0  128000      0 LISTEN
root@psvsparc1:~#

Client:

root@client:~# pkg set-publisher -O http://psvsparc1:8081 ha-cluster
root@client:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://psvsparc1:8080/
ha-cluster                  origin   online F http://psvsparc1:8081/
root@client:~# pkg list -af ha-cluster/group-package/ha-cluster-framework-full
NAME (PUBLISHER)                                  VERSION                    IFO
ha-cluster/group-package/ha-cluster-framework-full (ha-cluster) 4.3-0.24.0                 ---
ha-cluster/group-package/ha-cluster-framework-full (ha-cluster) 4.2-0.30.0                 ---
ha-cluster/group-package/ha-cluster-framework-full (ha-cluster) 4.1-0.18.2                 ---
ha-cluster/group-package/ha-cluster-framework-full (ha-cluster) 4.0.0-0.22.1               ---
root@client:~# pkg set-publisher -O http://psvsparc1:8082 solarisstudio
root@client:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://psvsparc1:8080/
ha-cluster                  origin   online F http://psvsparc1:8081/
solarisstudio               origin   online F http://psvsparc1:8082/
root@client:~#

Solaris AI and IPs

Solaris AI profile with multiple IPs

If you want more than one IP address configured after an AI installation, you will need the following XML configuration in your AI profile:


  <service version="1" type="service" name="network/install">
    <instance enabled="true" name="default">
      <property_group type="application" name="install_ipv4_interface">
        <propval type="net_address_v4" name="static_address" value="10.11.12.13/24"/>
        <propval type="astring" name="name" value="net0/pd0"/>
        <propval type="astring" name="address_type" value="static"/>
        <propval type="net_address_v4" name="default_route" value="10.11.12.1"/>
      </property_group>
      <property_group type="ipv4_interface" name="install_ipv4_interface_0">
        <propval type="net_address_v4" name="static_address" value="10.11.13.14/24"/>
        <propval type="astring" name="name" value="net1/st0"/>
        <propval type="astring" name="address_type" value="static"/>
      </property_group>
      <property_group type="ipv4_interface" name="install_ipv4_interface_1">
        <propval type="net_address_v4" name="static_address" value="10.11.14.15/24"/>
        <propval type="astring" name="name" value="net2/bkp0"/>
        <propval type="astring" name="address_type" value="static"/>
      </property_group>
    </instance>
  </service>

Update the profile:

# installadm update-profile -p server.profile -n sol-ai -f /ai/config/server.profile
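You can also let installadm check the profile against the service before clients pick it up:

# validate the profile syntax against the sol-ai install service
installadm validate -n sol-ai -P /ai/config/server.profile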

Summer vs. Temperatures

Summer is coming, so check that your servers are not getting burned 🙂

A very bad example: a T4-1 that ran for two years in a laboratory while we were wondering why the CPU kept telling us it was too hot…

SC Alert: [ID 425519 daemon.notice] Sensor | minor: Temperature : /SYS/MB/CMP0/T_TCORE : Upper Non-critical going high : reading 90 >= threshold 90 degrees C

Dust_on_T4

ILOM list Temperatures

Ever wondered what the temperatures inside your server look like?

Two examples, the first from a T7-1 and the second from a T4-1:

->  show -level all -output table /SYS type==Temperature value
Target                                                                           | Property                                                                                      | Value
---------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------
/SYS/MB/1V05_IOH_OBPS/T_INT                                                      | value                                                                                         | 46.312 degree C
/SYS/MB/1V6_IOH_OBPS/T_INT                                                       | value                                                                                         | 47.875 degree C
/SYS/MB/3V3_MAIN_OBPS/T_INT                                                      | value                                                                                         | 40.062 degree C
/SYS/MB/CM/CMP/BOB01/CH0/DIMM/T_AMB                                              | value                                                                                         | 37.000 degree C
/SYS/MB/CM/CMP/BOB01/CH1/DIMM/T_AMB                                              | value                                                                                         | 38.000 degree C
/SYS/MB/CM/CMP/BOB01/T_CORE                                                      | value                                                                                         | 35.250 degree C
/SYS/MB/CM/CMP/BOB11/CH0/DIMM/T_AMB                                              | value                                                                                         | 32.000 degree C
/SYS/MB/CM/CMP/BOB11/CH1/DIMM/T_AMB                                              | value                                                                                         | 33.000 degree C
/SYS/MB/CM/CMP/BOB11/T_CORE                                                      | value                                                                                         | 33.438 degree C
/SYS/MB/CM/CMP/BOB21/CH0/DIMM/T_AMB                                              | value                                                                                         | 34.000 degree C
/SYS/MB/CM/CMP/BOB21/CH1/DIMM/T_AMB                                              | value                                                                                         | 31.000 degree C
/SYS/MB/CM/CMP/BOB21/T_CORE                                                      | value                                                                                         | 31.812 degree C
/SYS/MB/CM/CMP/BOB31/CH0/DIMM/T_AMB                                              | value                                                                                         | 31.000 degree C
/SYS/MB/CM/CMP/BOB31/CH1/DIMM/T_AMB                                              | value                                                                                         | 31.000 degree C
/SYS/MB/CM/CMP/BOB31/T_CORE                                                      | value                                                                                         | 36.062 degree C
/SYS/MB/CM/CMP/MR0/BOB20/CH0/DIMM/T_AMB                                          | value                                                                                         | 36.000 degree C
/SYS/MB/CM/CMP/MR0/BOB20/CH1/DIMM/T_AMB                                          | value                                                                                         | 35.000 degree C
/SYS/MB/CM/CMP/MR0/BOB20/T_CORE                                                  | value                                                                                         | 32.500 degree C
/SYS/MB/CM/CMP/MR0/BOB30/CH0/DIMM/T_AMB                                          | value                                                                                         | 34.000 degree C
/SYS/MB/CM/CMP/MR0/BOB30/CH1/DIMM/T_AMB                                          | value                                                                                         | 33.000 degree C
/SYS/MB/CM/CMP/MR0/BOB30/T_CORE                                                  | value                                                                                         | 32.500 degree C
/SYS/MB/CM/CMP/MR0/T_AMB_FRONT                                                   | value                                                                                         | 27.750 degree C
/SYS/MB/CM/CMP/MR0/T_AMB_REAR                                                    | value                                                                                         | 30.000 degree C
/SYS/MB/CM/CMP/MR1/BOB00/CH0/DIMM/T_AMB                                          | value                                                                                         | 32.000 degree C
/SYS/MB/CM/CMP/MR1/BOB00/CH1/DIMM/T_AMB                                          | value                                                                                         | 34.000 degree C
/SYS/MB/CM/CMP/MR1/BOB00/T_CORE                                                  | value                                                                                         | 36.000 degree C
/SYS/MB/CM/CMP/MR1/BOB10/CH0/DIMM/T_AMB                                          | value                                                                                         | 32.000 degree C
/SYS/MB/CM/CMP/MR1/BOB10/CH1/DIMM/T_AMB                                          | value                                                                                         | 30.000 degree C
/SYS/MB/CM/CMP/MR1/BOB10/T_CORE                                                  | value                                                                                         | 31.250 degree C
/SYS/MB/CM/CMP/MR1/T_AMB_FRONT                                                   | value                                                                                         | 23.750 degree C
/SYS/MB/CM/CMP/MR1/T_AMB_REAR                                                    | value                                                                                         | 29.500 degree C
/SYS/MB/CM/CMP/T_CORE_INT                                                        | value                                                                                         | 42.000 degree C
/SYS/MB/CM/T_AMB                                                                 | value                                                                                         | 24.938 degree C
/SYS/MB/CM/T_BUSBAR                                                              | value                                                                                         | 32.562 degree C
/SYS/MB/CM/T_CORE                                                                | value                                                                                         | 39.750 degree C
/SYS/MB/CM/T_INLET                                                               | value                                                                                         | 39.438 degree C
/SYS/MB/CM/VDDR_OBPS/T_INT                                                       | value                                                                                         | 41.250 degree C
/SYS/MB/CM/VDDSOC_OBPS0/T_INT                                                    | value                                                                                         | 37.062 degree C
/SYS/MB/CM/VDDSOC_OBPS1/T_INT                                                    | value                                                                                         | 40.500 degree C
/SYS/MB/CM/VDDSOC_OBPS2/T_INT                                                    | value                                                                                         | 41.125 degree C
/SYS/MB/CM/VDDT_OBPS0/T_INT                                                      | value                                                                                         | 40.875 degree C
/SYS/MB/CM/VDDT_OBPS1/T_INT                                                      | value                                                                                         | 40.000 degree C
/SYS/MB/IOH/T_AMB                                                                | value                                                                                         | 43.438 degree C
/SYS/MB/IOH/T_CORE                                                               | value                                                                                         | 59.410 degree C
/SYS/MB/SAS/T_AMB                                                                | value                                                                                         | 35.062 degree C
/SYS/MB/T_0V9_SAS_OBPS                                                           | value                                                                                         | 41.562 degree C
/SYS/MB/T_OUTLET0                                                                | value                                                                                         | 35.000 degree C
/SYS/MB/T_OUTLET1                                                                | value                                                                                         | 37.500 degree C
/SYS/MB/VCORE_IOH_OBPS0/T_INT                                                    | value                                                                                         | 44.812 degree C
/SYS/MB/VCORE_IOH_OBPS1/T_INT                                                    | value                                                                                         | 42.250 degree C
/SYS/MB/XGBE0/T_AMB                                                              | value                                                                                         | 46.250 degree C
/SYS/MB/XGBE1/T_AMB                                                              | value                                                                                         | 44.500 degree C
/SYS/PS0/T_OUT                                                                   | value                                                                                         | 25.000 degree C
/SYS/PS1/T_OUT                                                                   | value                                                                                         | 26.000 degree C
/SYS/T_AMB                                                                       | value                                                                                         | 21.250 degree C
-> show -level all -output table /SYS type==Temperature value
Target                                                                           | Property                                                                                      | Value
---------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------
/SYS/MB/CMP0/BOB0/CH1/D0/T_AMB                                                   | value                                                                                         | 32.000 degree C
/SYS/MB/CMP0/BOB1/CH1/D0/T_AMB                                                   | value                                                                                         | 28.000 degree C
/SYS/MB/CMP0/BOB2/CH1/D0/T_AMB                                                   | value                                                                                         | 28.000 degree C
/SYS/MB/CMP0/BOB3/CH1/D0/T_AMB                                                   | value                                                                                         | 29.000 degree C
/SYS/MB/CMP0/T_BCORE                                                             | value                                                                                         | 40.000 degree C
/SYS/MB/CMP0/T_TCORE                                                             | value                                                                                         | 43.000 degree C
/SYS/MB/DVRM_CMP0/TEMP_FAULT                                                     | value                                                                                         | State Deasserted
/SYS/MB/DVRM_CMP0/T_EXT                                                          | value                                                                                         | 43.000 degree C
/SYS/MB/DVRM_CMP0/T_INT                                                          | value                                                                                         | 44.000 degree C
/SYS/MB/DVRM_M0/TEMP_FAULT                                                       | value                                                                                         | State Deasserted
/SYS/MB/DVRM_M0/T_EXT                                                            | value                                                                                         | 39.000 degree C
/SYS/MB/DVRM_M0/T_INT                                                            | value                                                                                         | 42.000 degree C
/SYS/MB/DVRM_M1/TEMP_FAULT                                                       | value                                                                                         | State Deasserted
/SYS/MB/DVRM_M1/T_EXT                                                            | value                                                                                         | 34.000 degree C
/SYS/MB/DVRM_M1/T_INT                                                            | value                                                                                         | 39.000 degree C
/SYS/MB/RISER0/T_RISER0                                                          | value                                                                                         | 27.000 degree C
/SYS/MB/RISER0/T_RISER1                                                          | value                                                                                         | 31.000 degree C
/SYS/MB/RISER1/T_RISER0                                                          | value                                                                                         | 32.000 degree C
/SYS/MB/RISER1/T_RISER1                                                          | value                                                                                         | 38.000 degree C
/SYS/MB/RISER2/T_RISER0                                                          | value                                                                                         | 30.000 degree C
/SYS/MB/RISER2/T_RISER1                                                          | value                                                                                         | 35.000 degree C
/SYS/MB/T_BUS_BAR0                                                               | value                                                                                         | 31.000 degree C
/SYS/MB/T_BUS_BAR1                                                               | value                                                                                         | 31.000 degree C
/SYS/MB/T_OUTLET0                                                                | value                                                                                         | 41.000 degree C
/SYS/MB/T_OUTLET1                                                                | value                                                                                         | 45.000 degree C
/SYS/T_AMB                                                                       | value                                                                                         | 25.000 degree C


Warning: HTTPS certificate is set to factory default.

HTTPS SSL certificate

Starting with some newer versions of Oracle's ILOM you will get a warning when no custom HTTPS certificate is in use, next to the "default password" warning, when logging into the BUI for the first time… This was on my new T7 with Sun System Firmware 9.7.1.c // ILOM v3.2.6.2.c

Warning: HTTPS certificate is set to factory default.

To get rid of it you will have to create a custom certificate and a custom private key and upload the files.

Just use your Solaris box:

root@svr01:/downloads/certs# openssl genrsa -out ilom-svr01.key 2048
Generating RSA private key, 2048 bit long modulus
...............................................................+++
......................................+++
e is 65537 (0x10001)
root@svr01:/downloads/certs# openssl req -new -key ilom-svr01.key -out ilom-svr01.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:AT
State or Province Name (full name) []:Vienna
Locality Name (eg, city) []:Vienna
Organization Name (eg, company) []:PRESSY
Organizational Unit Name (eg, section) []:IT
Common Name (e.g. server FQDN or YOUR name) []:ilom-svr01.domain.narf
Email Address []:mymail@mail.narf

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:asdfasdf
An optional company name []:asdfasdf
root@svr01:/downloads/certs# openssl x509 -req -days 3650 -in ilom-svr01.csr -signkey ilom-svr01.key -out ilom-svr01.cert
Signature ok
subject=/C=AT/ST=Vienna/L=Vienna/O=PRESSY/OU=IT/CN=ilom-svr01.domain.narf/emailAddress=mymail@mail.narf
Getting Private key
root@svr01:/downloads/certs#
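Before uploading, it does not hurt to double-check what was created:

# confirm subject and validity period of the self-signed certificate
openssl x509 -in ilom-svr01.cert -noout -subject -dates
# key and certificate must belong together: both modulus hashes have to match
openssl rsa -in ilom-svr01.key -noout -modulus | openssl md5
openssl x509 -in ilom-svr01.cert -noout -modulus | openssl md5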

You can upload the *.cert and *.key files using the BUI: ILOM Administration -> Management Access -> SSL certificate

You will lose your current web connection and the browser will show a warning on reload because it is a self-signed certificate. If you have a trusted certificate provider you can use their files as well.

Zones@UAR

How to deploy a Solaris 11 Zone from a unified archive

create archive:

# archiveadm create --exclude-media -z <zone> <uar-name>

let's install it:

# zonecfg -z <new zone> create -a <archive> -z <zone name in the archive>
# zoneadm -z <new zone> install -a <archive> -z <zone name in the archive>
# zoneadm -z <new zone> move <new path>
# zoneadm -z <new zone> boot
# zlogin -C <new zone>

Answer the setup questions (e.g. hostname, IP, DNS, search domain, region, location, language (-> no default language support), …).
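A worked example with hypothetical names (myzone on the source box, newzone and /zones/newzone on the target):

# on the source system: archive only the zone, without install media
archiveadm create --exclude-media -z myzone /ai/uar/myzone.uar
# on the target: configure, install, relocate and boot from the archive
zonecfg -z newzone create -a /ai/uar/myzone.uar -z myzone
zoneadm -z newzone install -a /ai/uar/myzone.uar -z myzone
zoneadm -z newzone move /zones/newzone
zoneadm -z newzone boot
zlogin -C newzone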

Cheat Sheet

Thanks to Jörg Möllenkamp (c0t0d0s0) for providing a great document, the "Solaris 11 cheat sheet".
Good work Jörg 😉
Cheat Sheet

Kernel Zone vs. SolarisCluster

Solaris Kernel Zone @ Solaris Cluster @ LDOM

First of all, prepare your LDOMs to allow more vNICs; a kernel zone's anet consumes one of the vnet's alternate MAC addresses.

root@t5ldlzc03:~# prtdiag | head -1
System Configuration:  Oracle Corporation  sun4v SPARC T5-8
root@t5ldvsvc01:~# ldm stop t5ldlzc03
Remote graceful shutdown or reboot capability is not available on t5ldlzc03
LDom t5ldlzc03 stopped
root@t5ldvsvc01:~# ldm set-vnet alt-mac-addrs=auto,auto,auto,auto,auto,auto,auto,auto net0  t5ldlzc03
root@t5ldvsvc01:~# ldm set-vnet alt-mac-addrs=auto,auto,auto,auto,auto,auto,auto,auto net1  t5ldlzc03
root@t5ldvsvc01:~# ldm start  t5ldlzc03
LDom t5ldlzc03 started
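You can verify the alternate MACs from the control domain:

# the vnet devices of the domain should now list eight alternate MAC addresses each
ldm list -o network t5ldlzc03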

I will create a ZFS volume on a shared SAN LUN as a base/boot device for my kernel zone.

root@t5ldlzc03:~# uname -a
SunOS t5ldlzc03 5.11 11.3 sun4v sparc sun4v
root@t5ldlzc03:~# clnode show-rev
4.3_2.3.0

root@t5ldlzc03:~# pkg list entire
NAME (PUBLISHER)                                  VERSION                    IFO
entire                                            0.5.11-0.175.3.6.0.5.0     i--

root@t5ldlzc03:~# zfs set compression=lz4 t5ldzmg03
root@t5ldlzc03:~# zfs create -V 20g t5ldzmg03/t5ldzmg03
root@t5ldlzc03:~# zfs get compression  t5ldzmg03/t5ldzmg03
NAME               PROPERTY     VALUE  SOURCE
t5ldzmg03/t5ldzmg03  compression  lz4    inherited from t5ldzmg03
root@t5ldlzc03:~#
root@t5ldlzc03:~# zfs create -V 20g t5ldzmg03/t5ldzmg03-suspend
root@t5ldlzc03:~#
root@t5ldlzc03:~# virtinfo
NAME            CLASS
logical-domain  current
non-global-zone supported
kernel-zone     supported
root@t5ldlzc03:~#
root@t5ldlzc03:~# suriadm lookup-uri /dev/zvol/dsk/t5ldzmg03/t5ldzmg03
dev:zvol/dsk/t5ldzmg03/t5ldzmg03
root@t5ldlzc03:~# suriadm lookup-uri /dev/zvol/dsk/t5ldzmg03/t5ldzmg03-suspend
dev:zvol/dsk/t5ldzmg03/t5ldzmg03-suspend
root@t5ldlzc03:~#
root@t5ldlzc03:~# zonecfg -z t5ldzmg03
Use 'create' to begin configuring a new zone.
zonecfg:t5ldzmg03> create -b
zonecfg:t5ldzmg03> set brand=solaris-kz
zonecfg:t5ldzmg03> add capped-memory
zonecfg:t5ldzmg03:capped-memory> set physical=16g
zonecfg:t5ldzmg03:capped-memory> end
zonecfg:t5ldzmg03> add device
zonecfg:t5ldzmg03:device> set storage=dev:zvol/dsk/t5ldzmg03/t5ldzmg03
zonecfg:t5ldzmg03:device> set bootpri=1
zonecfg:t5ldzmg03:device> end
zonecfg:t5ldzmg03> add suspend
zonecfg:t5ldzmg03:suspend> set storage=dev:zvol/dsk/t5ldzmg03/t5ldzmg03-suspend
zonecfg:t5ldzmg03:suspend> end
zonecfg:t5ldzmg03> add anet
zonecfg:t5ldzmg03:anet> set lower-link=net0
zonecfg:t5ldzmg03:anet> end
zonecfg:t5ldzmg03> add anet
zonecfg:t5ldzmg03:anet> set lower-link=net1
zonecfg:t5ldzmg03:anet> end
zonecfg:t5ldzmg03> set autoboot=false
zonecfg:t5ldzmg03> add attr
zonecfg:t5ldzmg03:attr> set name=osc-ha-zone
zonecfg:t5ldzmg03:attr> set type=boolean
zonecfg:t5ldzmg03:attr> set value=true
zonecfg:t5ldzmg03:attr> end
zonecfg:t5ldzmg03> commit
zonecfg:t5ldzmg03> exit
root@t5ldlzc03:~#
root@t5ldlzc03:~# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - t5ldzmg03         configured  -                            solaris-kz excl
root@t5ldlzc03:~#

OK, the configuration is done; let's install the zone and try to boot it on the second cluster node. I will also need the zone configuration on that node, and to test the failover I will switch the zpool, which is configured in a cluster resource group.

root@t5ldlzc03:~#
root@t5ldlzc03:~# zoneadm -z t5ldzmg03 install
Progress being logged to /var/log/zones/zoneadm.20160412T093941Z.t5ldzmg03.install
pkg cache: Using /var/pkg/publisher.
 Install Log: /system/volatile/install.11071/install_log
 AI Manifest: /tmp/zoneadm10552.76aG8u/devel-ai-manifest.xml
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
Installation: Starting ...

        Creating IPS image
        Installing packages from:
            solaris
                origin:  http://t4repo02.testlan.at/solaris_11_3/
            ha-cluster
                origin:  http://t4repo02.testlan.at/ha-cluster/
            solarisstudio
                origin:  http://t4repo02.testlan.at/solarisstudio/
        The following licenses have been accepted and not displayed.
        Please review the licenses for the following packages post-install:
          consolidation/osnet/osnet-incorporation
        Package licenses may be viewed using the command:
          pkg info --license <pkg_fmri>

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            470/470   65349/65349  636.8/636.8  4.7M/s

PHASE                                          ITEMS
Installing new actions                   89538/89538
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Installation: Succeeded
        Done: Installation completed in 356.049 seconds.

root@t5ldlzc03:~# zoneadm -z t5ldzmg03 boot
root@t5ldlzc03:~# zlogin -C t5ldzmg03
[...]
root@t5ldlzc03:~# zoneadm -z t5ldzmg03 shutdown
zone 't5ldzmg03': updating /platform/sun4v/boot_archive
root@t5ldlzc03:~# zoneadm -z t5ldzmg03 detach -F
root@t5ldlzc03:~#
root@t5ldlzc03:~#
root@t5ldlzc03:~# zonecfg -z t5ldzmg03 export -f /tmp/t5ldzmg03.export
root@t5ldlzc03:~# scp  /tmp/t5ldzmg03.export root@t5ldlzc04:/tmp/.
root@t5ldlzc03:~#
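
One step I did not paste here: before the attach can work on the second node, the exported configuration has to be imported there, presumably along the lines of:

root@t5ldlzc04:~# zonecfg -z t5ldzmg03 -f /tmp/t5ldzmg03.export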
root@t5ldlzc03:~# clrg create t5ldzmg03-rg
root@t5ldlzc03:~# clrt register SUNW.HAStoragePlus
root@t5ldlzc03:~# clrs create -t SUNW.HAStoragePlus -g t5ldzmg03-rg -p Zpools=t5ldzmg03 t5ldzmg03-stor-res
root@t5ldlzc03:~# clrg online -emM -n t5ldlzc03 t5ldzmg03-rg
root@t5ldlzc03:~# clrg switch -n t5ldlzc04 t5ldzmg03-rg
root@t5ldlzc04:~# zoneadm -z t5ldzmg03 attach -x force-takeover
root@t5ldlzc04:~# zoneadm -z  t5ldzmg03 boot
root@t5ldlzc04:~# zlogin -C  t5ldzmg03
[Connected to zone 't5ldzmg03' console]
Hostname: t5ldzmg03

t5ldzmg03 console login: root
Password:
Last login: Tue Apr 12 13:52:41 2016 on console
Apr 12 14:00:56 t5ldzmg03 login: ROOT LOGIN /dev/console
Oracle Corporation      SunOS 5.11      11.3    February 2016
root@t5ldzmg03:~# svcs -xv 
[...]
root@t5ldlzc04:~# zoneadm -z t5ldzmg03 shutdown
root@t5ldlzc04:~# zoneadm -z t5ldzmg03 detach -F
root@t5ldlzc04:~# clrg switch -n t5ldlzc03 t5ldzmg03-rg


Now I will register the zone as a cluster resource and make the zone highly available. Oracle provides a dedicated agent for that, which I will use.

root@t5ldlzc03:~# pkg list ha-cluster/data-service/ha-zones
NAME (PUBLISHER)                                  VERSION                    IFO
ha-cluster/data-service/ha-zones (ha-cluster)     4.3-0.24.0                 i--
root@t5ldlzc03:~#
root@t5ldlzc03:~# clrt register SUNW.gds
root@t5ldlzc03:~# cd /opt/SUNWsczone/sczbt/util
root@t5ldlzc03:/opt/SUNWsczone/sczbt/util# ls
ha-solaris-zone-boot-env-id  sczbt_config                 sczbt_register
root@t5ldlzc03:/opt/SUNWsczone/sczbt/util# vi [..]
root@t5ldlzc03:/opt/SUNWsczone/sczbt/util# cat t5ldzmg03.sczbt.config
RS=t5ldzmg03-zone-res
RG=t5ldzmg03-rg
PARAMETERDIR=
SC_NETWORK=false
SC_LH=
FAILOVER=true
HAS_RS=t5ldzmg03-stor-res
Zonename="t5ldzmg03"
Zonebrand="solaris-kz"
Zonebootopt=""
Milestone="svc:/milestone/multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""
Migrationtype="warm"
root@t5ldlzc03:/opt/SUNWsczone/sczbt/util#
root@t5ldlzc03:/opt/SUNWsczone/sczbt/util# ./sczbt_register -f ./t5ldzmg03.sczbt.config
sourcing ./t5ldzmg03.sczbt.config
Registration of resource type ORCL.ha-zone_sczbt succeeded.
Registration of resource t5ldzmg03-zone-res succeeded.
root@t5ldlzc03:/opt/SUNWsczone/sczbt/util# clrs enable t5ldzmg03-zone-res
root@t5ldlzc03:/opt/SUNWsczone/sczbt/util# clrs status

=== Cluster Resources ===

Resource Name         Node Name     State       Status Message
-------------         ---------     -----       --------------
t5ldzmg03-zone-res     t5ldlzc03      Online      Online - Service is online.
                      t5ldlzc04      Offline     Offline

t5ldzmg03-stor-res     t5ldlzc03      Online      Online
                      t5ldlzc04      Offline     Offline

root@t5ldlzc03:/opt/SUNWsczone/sczbt/util#
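
Between these two status outputs the resource group was switched over to the second node again (the command is not shown, but it would be the same clrg switch as before):

root@t5ldlzc03:/opt/SUNWsczone/sczbt/util# clrg switch -n t5ldlzc04 t5ldzmg03-rg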
root@t5ldlzc03:/opt/SUNWsczone/sczbt/util# clrs status

=== Cluster Resources ===

Resource Name         Node Name     State       Status Message
-------------         ---------     -----       --------------
t5ldzmg03-zone-res     t5ldlzc03      Offline     Offline
                      t5ldlzc04      Online      Online - Service is online.

t5ldzmg03-stor-res     t5ldlzc03      Offline     Offline
                      t5ldlzc04      Online      Online

root@t5ldlzc03:/opt/SUNWsczone/sczbt/util#

That's it... we have now a HA kernel zone 🙂

Mixed Martial Zone Arts

Oh yeah… mixing different zone types works perfectly on Solaris Cluster 4.3 🙂

root@DBClzc01:~# clnode show-rev
4.3_1.2.0
root@DBClzc01:~#
root@DBClzc01:~# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
  19 DBCztsm1         running     /zones/DBCztsm1/DBCztsm1     solaris10  shared
  21 DBCztsm2         running     /zones/DBCztsm2/DBCztsm2     solaris10  shared
  25 DBCzdb04         running     /zones/DBCzdb04              solaris    shared
  26 DBCzmg03         running     -                            solaris-kz excl
  27 DBCdbs51         running     /zones/DBCdbs51              solaris10  shared
  28 DBCsapc2         running     /zones/DBCsapc2              solaris10  shared
  29 DBCdbs21         running     /zones/DBCdbs21              solaris10  shared
  30 DBCztsm3         running     /zones/DBCztsm3              solaris    shared
root@DBClzc01:~# 

T7-1… Sun System Firmware 9.5.2.g

Arrg…. don’t forget to follow the readme during the upgrade… or you will lose your network configuration (like I did)…
Patch 22078903 -> Sun System Firmware 9.5.2.g

root@t7primary:~# ipmitool sunoem cli 'show /SP/network'
Connected. Use ^D to exit.
-> show /SP/network

 /SP/network
    Targets:
        interconnect
        ipv6
        test

    Properties:
        commitpending = (Cannot show property)
        dhcp_clientid = none
        dhcp_server_ip = none
        ipaddress = 0.0.0.0
        ipdiscovery = dhcp
        ipgateway = 0.0.0.0
        ipnetmask = 0.0.0.0
        macaddress = 00:10:E0:89:BB:15
        managementport = MGMT
        outofbandmacaddress = 00:10:E0:89:BB:15
        pendingipaddress = 0.0.0.0
        pendingipdiscovery = dhcp
        pendingipgateway = 0.0.0.0
        pendingipnetmask = 0.0.0.0
        pendingmanagementport = MGMT
        pendingvlan_id = (none)
        sidebandmacaddress = 00:10:E0:89:BB:14
        state = enabled
        vlan_id = (none)

    Commands:
        cd
        set
        show

-> Session closed
Disconnected
root@t7primary:~#
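
If the SP network config is lost like that, it can be set again through the same ipmitool channel — a sketch with placeholder addresses (adjust to your network):

root@t7primary:~# ipmitool sunoem cli 'set /SP/network pendingipdiscovery=static pendingipaddress=192.0.2.10 pendingipnetmask=255.255.255.0 pendingipgateway=192.0.2.1 commitpending=true'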

Solaris Zone Clone

I installed a zone on my T7 box and prepared it for an Oracle DB installation. But I will need the same setup for a second installation... In this example I will use the "clone" function from solaris11/zones/zfs to get my preconfigured zone cloned:

Halt the zone/container:

root@t7primary:~/config# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   1 t7zone01         running     /zones/t7zone01              solaris    excl
root@t7primary:~/config# zoneadm -z t7zone01 halt

Now I will copy the configuration from the first zone and just change the zonepath in the exported file:

root@t7primary:~/config# zonecfg -z t7zone01 export -f ./t7zone01.export
root@t7primary:~/config# vi t7zone01.export
[...] -> changed zonepath
root@t7primary:~/config# zonecfg -z t7zone02 -f ./t7zone01.export
root@t7primary:~/config# zonecfg -z t7zone02 info
zonename: t7zone02
zonepath: /zones/t7zone02
brand: solaris
autoboot: false
autoshutdown: shutdown
bootargs:
file-mac-profile:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
tenant:
fs-allowed:
[max-shm-memory: 512G]
[max-shm-ids: 2048]
[max-msg-ids: 2048]
[max-sem-ids: 2048]
fs:
        dir: /shared
        special: /shared
        raw not specified
        type: lofs
        options: [rw,nodevices]
anet:
        linkname: net0
        lower-link: auto
        allowed-address not specified
        configure-allowed-address: true
        defrouter not specified
        allowed-dhcp-cids not specified
        link-protection: mac-nospoof
        mac-address: auto
        mac-prefix not specified
        mac-slot not specified
        vlan-id not specified
        priority not specified
        rxrings not specified
        txrings not specified
        mtu not specified
        maxbw not specified
        bwshare not specified
        rxfanout not specified
        vsi-typeid not specified
        vsi-vers not specified
        vsi-mgrid not specified
        etsbw-lcl not specified
        cos not specified
        pkey not specified
        linkmode not specified
        evs not specified
        vport not specified
rctl:
        name: zone.max-sem-ids
        value: (priv=privileged,limit=2048,action=deny)
rctl:
        name: zone.max-msg-ids
        value: (priv=privileged,limit=2048,action=deny)
rctl:
        name: zone.max-shm-ids
        value: (priv=privileged,limit=2048,action=deny)
rctl:
        name: zone.max-shm-memory
        value: (priv=privileged,limit=549755813888,action=deny)
root@t7primary:~/config# 
root@t7primary:~/config# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - t7zone01         installed   /zones/t7zone01              solaris    excl
   - t7zone02         configured  /zones/t7zone02              solaris    excl

As you can see, the configuration from t7zone01 was taken for the new t7zone02; the resource controls and the shared directory, for example, come from the t7zone01 zone. Now I will clone the first zone; this took around 2-3 seconds 🙂

root@t7primary:~/config# zoneadm -z t7zone02 clone  t7zone01
WARNING: read-write lofs file system on '/shared' is configured in both zones.
The following ZFS file system(s) have been created:
    rpool/zones/t7zone02
Progress being logged to /var/log/zones/zoneadm.20151221T094156Z.t7zone02.clone
Log saved in non-global zone as /zones/t7zone02/root/var/log/zones/zoneadm.20151221T094156Z.t7zone02.clone
root@t7primary:~/config#

And yes, it worked... both zones are now "installed". But as you will see, the new t7zone02 takes up far less space.

root@t7primary:~/config# zfs list | grep t7zone
rpool/zones/t7zone01                                  1.80G   829G    32K  /zones/t7zone01
rpool/zones/t7zone01/rpool                            1.80G   829G    31K  /rpool
rpool/zones/t7zone01/rpool/ROOT                       1.80G   829G    31K  legacy
rpool/zones/t7zone01/rpool/ROOT/solaris               1.80G   829G  1.73G  /zones/t7zone01/root
rpool/zones/t7zone01/rpool/ROOT/solaris/var           61.3M   829G  52.6M  /zones/t7zone01/root/var
rpool/zones/t7zone01/rpool/VARSHARE                   1.09M   829G  1.03M  /var/share
rpool/zones/t7zone01/rpool/VARSHARE/pkg                 63K   829G    32K  /var/share/pkg
rpool/zones/t7zone01/rpool/VARSHARE/pkg/repositories    31K   829G    31K  /var/share/pkg/repositories
rpool/zones/t7zone01/rpool/export                      102K   829G    32K  /export
rpool/zones/t7zone01/rpool/export/home                69.5K   829G    31K  /export/home
rpool/zones/t7zone01/rpool/export/home/oracle         38.5K   829G  38.5K  /oracle
rpool/zones/t7zone02                                  4.91M   829G    34K  /zones/t7zone02
rpool/zones/t7zone02/rpool                            4.87M   829G    31K  /rpool
rpool/zones/t7zone02/rpool/ROOT                       4.85M   829G    31K  legacy
rpool/zones/t7zone02/rpool/ROOT/solaris-0             4.85M   829G  1.73G  /zones/t7zone02/root
rpool/zones/t7zone02/rpool/ROOT/solaris-0/var         48.5K   829G  52.7M  /zones/t7zone02/root/var
rpool/zones/t7zone02/rpool/VARSHARE                      3K   829G  1.03M  /var/share
rpool/zones/t7zone02/rpool/VARSHARE/pkg                  2K   829G    32K  /var/share/pkg
rpool/zones/t7zone02/rpool/VARSHARE/pkg/repositories     1K   829G    31K  /var/share/pkg/repositories
rpool/zones/t7zone02/rpool/export                        3K   829G    32K  /export
rpool/zones/t7zone02/rpool/export/home                   2K   829G    31K  /export/home
rpool/zones/t7zone02/rpool/export/home/oracle            1K   829G  38.5K  /oracle
root@t7primary:~/config#
root@t7primary:~/config# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - t7zone01         installed   /zones/t7zone01              solaris    excl
   - t7zone02         installed   /zones/t7zone02              solaris    excl
root@t7primary:~/config#

So where does this come from? Well, it is a clone... so let's call it a writable snapshot; only the differences use space. You can see where it comes from in the following outputs:

root@t7primary:~/config# zfs get origin | awk '$3 !~ /-/ {print}'
NAME                                                                  PROPERTY  VALUE                                                                 SOURCE
rpool/zones/t7zone02/rpool                                            origin    rpool/zones/t7zone01/rpool@t7zone02_snap00                            -
rpool/zones/t7zone02/rpool/ROOT                                       origin    rpool/zones/t7zone01/rpool/ROOT@t7zone02_snap00                       -
rpool/zones/t7zone02/rpool/ROOT/solaris-0                             origin    rpool/zones/t7zone01/rpool/ROOT/solaris@t7zone02_snap00               -
rpool/zones/t7zone02/rpool/ROOT/solaris-0/var                         origin    rpool/zones/t7zone01/rpool/ROOT/solaris/var@t7zone02_snap00           -
rpool/zones/t7zone02/rpool/VARSHARE                                   origin    rpool/zones/t7zone01/rpool/VARSHARE@t7zone02_snap00                   -
rpool/zones/t7zone02/rpool/VARSHARE/pkg                               origin    rpool/zones/t7zone01/rpool/VARSHARE/pkg@t7zone02_snap00               -
rpool/zones/t7zone02/rpool/VARSHARE/pkg/repositories                  origin    rpool/zones/t7zone01/rpool/VARSHARE/pkg/repositories@t7zone02_snap00  -
rpool/zones/t7zone02/rpool/export                                     origin    rpool/zones/t7zone01/rpool/export@t7zone02_snap00                     -
rpool/zones/t7zone02/rpool/export/home                                origin    rpool/zones/t7zone01/rpool/export/home@t7zone02_snap00                -
rpool/zones/t7zone02/rpool/export/home/oracle                         origin    rpool/zones/t7zone01/rpool/export/home/oracle@t7zone02_snap00         -
root@t7primary:~/config#
root@t7primary:~/config# zfs list -t snapshot
NAME                                                                   USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/11.3.3.0.6.0@install                                        110M      -  2.59G  -
rpool/ROOT/11.3.3.0.6.0@2015-12-16-18:11:00                            148M      -  2.68G  -
rpool/ROOT/11.3.3.0.6.0/var@install                                   95.0M      -   133M  -
rpool/ROOT/11.3.3.0.6.0/var@2015-12-16-18:11:00                        327M      -   440M  -
rpool/zones/t7zone01/rpool@t7zone02_snap00                                0      -    31K  -
rpool/zones/t7zone01/rpool/ROOT@t7zone02_snap00                           0      -    31K  -
rpool/zones/t7zone01/rpool/ROOT/solaris@install                       7.41M      -   605M  -
rpool/zones/t7zone01/rpool/ROOT/solaris@t7zone02_snap00               4.40M      -  1.73G  -
rpool/zones/t7zone01/rpool/ROOT/solaris/var@install                   8.68M      -  38.2M  -
rpool/zones/t7zone01/rpool/ROOT/solaris/var@t7zone02_snap00            200K      -  52.6M  -
rpool/zones/t7zone01/rpool/VARSHARE@t7zone02_snap00                     22K      -  1.03M  -
rpool/zones/t7zone01/rpool/VARSHARE/pkg@t7zone02_snap00                   0      -    32K  -
rpool/zones/t7zone01/rpool/VARSHARE/pkg/repositories@t7zone02_snap00      0      -    31K  -
rpool/zones/t7zone01/rpool/export@t7zone02_snap00                        1K      -    32K  -
rpool/zones/t7zone01/rpool/export/home@t7zone02_snap00                    0      -    31K  -
rpool/zones/t7zone01/rpool/export/home/oracle@t7zone02_snap00            1K      -  38.5K  -
root@t7primary:~/config#

OK, let's boot the zones. The first one boots up like before. With the new one, you will need to go through the installation configuration:

root@t7primary:~/config# zoneadm -z t7zone01 boot
root@t7primary:~/config# zoneadm -z t7zone02 boot
root@t7primary:~/config# zlogin -C  t7zone02


                           System Configuration Tool

     System Configuration Tool enables you to specify the following
     configuration parameters for your newly-installed Oracle Solaris 11
     system:
     - system hostname, network, time zone and locale, user and root
       accounts, name services, support

     System Configuration Tool produces an SMF profile file in
     etc/svc/profile/sysconfig/sysconfig-20151221-104209.

     How to navigate through this tool:
     - Use the function keys listed at the bottom of each screen to move
       from screen to screen and to perform other operations.
     - Use the up/down arrow keys to change the selection or to move
       between input fields.
     - If your keyboard does not have function keys, or they do not
       respond, press ESC; the legend at the bottom of the screen will
       change to show the ESC keys for navigation and other functions.



  F2_Continue  F6_Help  F9_Quit


[...] -> skipped

root@t7primary:~/config# 
root@t7primary:~/config# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   3 t7zone01         running     /zones/t7zone01              solaris    excl
   4 t7zone02         running     /zones/t7zone02              solaris    excl
root@t7primary:~/config#

And that's it... it took about two minutes and you have a new installation 🙂 It is now a clone... with ZFS clones there are no performance penalties; you can only win and use less space. But if you really want your own independent filesystem you can also "promote" the clone, which makes the clone file system no longer dependent on its origin snapshot. You will need this if you ever plan to destroy t7zone01...
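
A minimal sketch of such a promote, using one of the cloned datasets from the output above (you would repeat it for every cloned dataset):

root@t7primary:~/config# zfs promote rpool/zones/t7zone02/rpool/ROOT/solaris-0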

Open Files


Ever wondered how many open files each process has on a running Solaris system?

root@server# ls -d /proc/*/fd/* | sed  -e's|/proc/|open files for PID: |'  -e's|/fd.*$||' | uniq -c | sort -n
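
And if you need the details for one single process, pfiles(1) lists every open descriptor; the PID here is just an example:

root@server# pfiles 1234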

virtual domain service device (vlds)

Just ran into this issue....

root@psvsparc2:~# svcadm enable vntsd
root@psvsparc2:~# svcs -xv
svc:/ldoms/ldmd:default (Logical Domains Manager)
 State: maintenance since Fri Dec 04 19:04:05 2015
Reason: Start method exited with $SMF_EXIT_ERR_CONFIG.
   See: http://support.oracle.com/msg/SMF-8000-KS
   See: /var/svc/log/ldoms-ldmd:default.log
Impact: This service is not running.
root@psvsparc2:~# tail -3  /var/svc/log/ldoms-ldmd:default.log
[ Dec  4 18:13:47 Executing start method ("/opt/SUNWldm/bin/ldmd_start"). ]
SMF service 'svc:/ldoms/agents:default' is not online.
[ Dec  4 18:13:48 Method "start" exited with status 96. ]
root@psvsparc2:~# 
root@psvsparc2:~# svcs -l /ldoms/agents
fmri         svc:/ldoms/agents:default
name         Logical Domains agents service
enabled      false (temporary)
state        disabled
next_state   none
state_time   Fri Dec 04 19:03:33 2015
logfile      /var/svc/log/ldoms-agents:default.log
restarter    svc:/system/svc/restarter:default
contract_id  34
manifest     /lib/svc/manifest/platform/sun4v/ldoms-agents.xml
dependency   require_all/none svc:/system/filesystem/minimal (online)
root@psvsparc2:~# cat  /var/svc/log/ldoms-agents:default.log
[ Dec  4 19:03:32 "start" method requested temporary disable: "The Logical Domains agents service has been disabled because the system has no virtual domain service (vlds) device"
 ]

hmmm....

After some searching I found an explanation on MOS: my old box had been running LDoms 1.1 before, and the LDoms 3.x manager that comes with Solaris 11.3 looks for a virtual domain service (vlds) device that does not exist in the old SP config.

That workaround helped me:
Comment out the following line in /opt/SUNWldm/bin/ldmd_start:

# check_service_is_online "svc:/ldoms/agents:default"

Start the service svc:/ldoms/ldmd:default:

# svcadm clear svc:/ldoms/ldmd:default

Save a new "ldom3" config on your SP:

# ldm add-config <config>

Uncomment the line again in /opt/SUNWldm/bin/ldmd_start:

check_service_is_online "svc:/ldoms/agents:default"

Shut down everything and power-cycle the system:

-> stop /SYS
-> start /SYS

Boot up again; for me it worked:
root@psvsparc2:~# ldm add-vconscon port-range=5000-5100 primary-console primary
root@psvsparc2:~# svcadm enable ldoms/vntsd
root@psvsparc2:~# svcs -a | grep ldom
online         19:29:51 svc:/ldoms/agents:default
online         19:30:19 svc:/ldoms/ldmd_dir:default
online         19:30:22 svc:/ldoms/ldmd:default
online         19:34:02 svc:/ldoms/vntsd:default
root@psvsparc2:~#


locales@smf


In Solaris 10 you had to edit the /etc/default/init file to change the default locale; with Solaris 11 it is now defined in SMF:

root@psvsparc1:~# locale
LANG=de_DE.UTF-8
LC_CTYPE="de_DE.UTF-8"
LC_NUMERIC="de_DE.UTF-8"
LC_TIME="de_DE.UTF-8"
LC_COLLATE="de_DE.UTF-8"
LC_MONETARY="de_DE.UTF-8"
LC_MESSAGES="de_DE.UTF-8"
LC_ALL=
root@psvsparc1:~# svccfg -s svc:/system/environment:init \
> setprop environment/LANG = astring: C
root@psvsparc1:~# svcadm refresh svc:/system/environment
root@psvsparc1:~# init 6
root@psvsparc1:~#
Using username "root".
Server refused our key
Using keyboard-interactive authentication.
Password:
Last login: Mon Nov 23 11:48:02 2015 from 10.51.10.107
Oracle Corporation      SunOS 5.11      11.2    July 2015
root@psvsparc1:~#
root@psvsparc1:~#
root@psvsparc1:~# locale
LANG=C
LC_CTYPE="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_COLLATE="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_ALL=
root@psvsparc1:~#
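
To verify the stored value without rebooting, svcprop should do; a small sketch:

root@psvsparc1:~# svcprop -p environment/LANG svc:/system/environment:init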

SMF is taking over more and more; the times of a quick edit in a file are over… to be honest, I am still not sure which I like more…

ZFS monitor @ 11.3

That’s a nice new feature… always thought that it could not be sooooo difficult to get these results 🙂

root@server-bu:~# zfs recv rpool/server/data < data@151115-030007.dump
[2nd terminal]
root@server-bu:~# zpool monitor -t receive \
> -o done,pool,provider,speed,starttime,timestmp 1
done   pool   provider  speed  starttime  timestmp
32.9G  rpool  receive   44.2M  19:29:48   19:42:31
32.9G  rpool  receive   44.1M  19:29:48   19:42:32
33.0G  rpool  receive   44.1M  19:29:48   19:42:33
33.0G  rpool  receive   44.1M  19:29:48   19:42:34
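
The same should work for other long-running pool operations, e.g. watching a scrub (provider name as documented for zpool monitor; untested here):

root@server-bu:~# zpool monitor -t scrub -o done,pool,provider,speed 1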

HW@OOW

It’s about time… and yes, we got new system products… 😉

T7 and M7 systems are here: 1/2/4-socket T servers and 8/16-socket M servers, where at first look the M series has more RAS features, like dual SPs and PCIe hot-plugging… All servers come with the M7 4.13 GHz 32-core CPU and 512 GB RAM per socket, using 16x 32GB DDR4 DIMMs…

Based on the M hardware a new SuperCluster combination was announced. Interesting could be the 1-CPU-per-server version… that will depend on the price…

Today Solaris 11.3 was released, and following this also a new Solaris Cluster 4.3.

Next step; I need such a box 🙂

Standard Edition 2

Some words about the new Standard Edition 2 (SE2) …
SE2 will replace the current SE One and SE, based on the same pricing as SE (list price € 15.194,- per socket), and will also be licensed per system socket (not per core like EE). RAC is included, but no other options or packs are available. The minimum is 10 Named User Plus licenses (NUPs) per server.
SE2 may be installed on servers with a maximum of two sockets. When running as RAC, each server may only have one socket installed, and the maximum is again two sockets, so two servers. On two-socket servers you “may” remove the second CPU, or you can bind it with a certified hardware partitioning method like OracleVM for SPARC and x86, Solaris Container or AIX LPARs… (no VMware, no HyperV or KVM).
Technically it is a 12c (>=12.1.0.2) with a new resource cap: a maximum of 16 threads per server/DB installation, or 8 threads per node in a RAC.
All of this means no more SE setups on four-socket machines and no more two-socket RAC nodes. No more cheap SEO, which was at 1/3 of the SE price, but yes, in return you got a thread limitation. And be careful with NUPs: SE/SEO NUPs were per company, now they are per server.
SE/SEO support will end on 1 September 2016 and will then move directly to Sustaining Support.
A lot of customers will not be very happy with this new product….

Solaris p2v

Just had fun again with a p2v conversion, migrating an old E2900 with 12 US-IV+ CPUs to a small LDOM on a T5-8 with 2 cores… and guess what, it runs perfectly and is much faster than before on the big iron…

And with the application data on SAN LUNs it only took about 5 minutes of downtime to map the disks to the guest and start the application again…

My notes:

LDOM p2v

A short summary of how to migrate a Solaris server to an LDOM.

First, create a default configuration file on the target server:

target # more /etc/ldmp2v.conf
# Virtual switch to use
VSW="primary-switch"
# Virtual disk service to use
VDS="primary-vds"
# Virtual console concentrator to use
VCC="primary-console"
# Location where vdisk backend devices are stored
BACKEND_PREFIX=""
# Default backend type: "zvol" or "file".
BACKEND_TYPE="file"
# Create sparse backend devices: "yes" or "no"
BACKEND_SPARSE="no"
# Timeout for Solaris boot in seconds
BOOT_TIMEOUT=60

Now copy the "ldmp2v" script from the target server to the Solaris source system and start the collect phase. The rest is done on the target server; that's all:

source # ldmp2v collect -d /mnt/src-svr1
 
target # ldmp2v prepare -b disk -B /dev/dsk/c0t60060E801653CE00000153CE00001520d0s2:src-svr1-vol0:src-svr1-hdd0 -c 8 -M 32g -m /:40g -m swap:4g -m /var:8g -o keep-hostid -p primary-vds -d /export/collect/src-svr1 src-svr1
 
target # ldmp2v convert -i /downloads/new/sol-10-u11-ga-sparc-dvd.iso -d /export/collect/src-svr1 -x skip-ping-test src-svr1
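
For reference, my reading of the prepare flags used above — check ldmp2v(1M) before relying on it:

# -b disk                 use a physical disk/LUN as the vdisk backend
# -B dev:volume:vdisk     backend device, VDS volume name and guest disk name
# -c 8 -M 32g             size the guest with 8 virtual CPUs and 32 GB of RAM
# -m /:40g -m swap:4g     resize the listed file systems during the copy
# -o keep-hostid          keep the hostid of the source server
# -p primary-vds          presumably the virtual disk service to use
# -d /export/collect/...  the directory with the data from the collect phase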

Oracle’s x86 beast – X5-8

Just read an announcement about the brand new Oracle X5-8 server.

Oracle will be one of the few vendors with an 8 socket Intel based server. This beast runs up to 144 Xeon cores based on E7-8895 v3 CPUs with 6 TB memory and 16 PCIe Gen3 slots.

Read more about it at Josh Rosen’s blog.

Standard Edition 2 ?!?

Oracle published a new MOS announcement saying that, beginning with Oracle Database 12.1.0.2, Standard Edition (SE) and Standard Edition One (SE1) are replaced by Standard Edition 2.

SE2 will support 2-socket systems and is RAC-aware. So it seems that it will be allowed to run on two 2-socket servers (which reflects 4 sockets) and not more.

That would affect customers running SE on 4-socket systems…
And it seems that SEO and SE installations will need to be migrated to SE2, whatever that will mean in $€’s… Costs are expected between SEO and SE, but as usual at the higher end…

Let’s see what will happen….

Link to MOS:
2027072.1 Oracle Database 12c Standard Edition 2 (12.1.0.2)

Oracle Sun HW EOL/EOSL?

I was looking for a while to find official answers on how long Sun or Oracle hardware can stay under a valid service contract. There is no more “End of Service Life” (EOSL) like there was at Sun… in Oracle terms:

“With Lifetime Support for Oracle Hardware, Oracle hardware systems will be supported at the Premier support level for an indefinite period. Support levels will remain the same.

Beginning June 1, 2014, Oracle implemented a small surcharge on aged systems. The surcharge on Premier Support for Systems is in effect for systems that are over 5 years from LSD.”

You can find the official information on the Oracle Hardware and Systems Support Policies.

The surcharge is an additional 5% on the previous year’s support price, and it does not include the yearly IAR. So for this example take a V445: its LSD was 04/08, which is more than 5 years before 2014… Assuming Premier Support @ 2.000,- this would mean for 2015 (with e.g. 3% as IAR):
2.000,- + 5% surcharge + 3% IAR == 2.163,-
In 2016: 2.163,- * 1,05 * 1,03 == 2.339,28
And so on…
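
In general, assuming the 5% surcharge and a constant 3% IAR compound every year, the price n years after the first surcharge is simply 2.000,- × (1,05 × 1,03)^n: n=1 gives 2.163,-, n=2 gives 2.339,28.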

LSDs can be found at MOS in the Oracle System Handbook.

More details can found at Aged Hardware Surcharge – Process & Sales Messaging document. This document is Internal Use Only, you can share the link with your local Partner Manager, in order for him to share with you the information he finds suitable.