VLAN on Virtual Functions (SPARC LDOM)

This example comes from a SuperCluster M7 running an SVA version that is not able to apply VLANs via the GUI:

root@primary:~# ldm stop ssccn1-io-io-dom
root@primary:~# ldm set-io vid=1620,1621,1680,1690 alt-mac-addrs=auto,auto,auto,auto,auto /SYS/CMIOU3/PCIE2/IOVNET.PF0.VF0
root@primary:~# ldm set-io vid=1620,1621,1680,1690 alt-mac-addrs=auto,auto,auto,auto,auto /SYS/CMIOU3/PCIE2/IOVNET.PF1.VF0
root@primary:~# ldm start ssccn1-io-io-dom
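
If the GUI cannot show it either, the new VF settings can be verified from the control domain with ldm; as a quick check (using the same VF paths as above), ldm list-io -l should display the VLAN IDs and alternate MAC addresses that were just set:

root@primary:~# ldm list-io -l /SYS/CMIOU3/PCIE2/IOVNET.PF0.VF0
root@primary:~# ldm list-io -l /SYS/CMIOU3/PCIE2/IOVNET.PF1.VF0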

I/O Domain (Guest-Domain):

 
root@io-dom:~# ipadm show-addr
root@io-dom:~# ipadm delete-ipmp -f sc_ipmp0
root@io-dom:~# ipadm delete-ip net0
root@io-dom:~# ipadm delete-ip net1
root@io-dom:~# dladm create-vlan -l net0 -v 1680 net0_1680
root@io-dom:~# dladm create-vlan -l net1 -v 1680 net1_1680
root@io-dom:~# ipadm create-ip net0_1680
root@io-dom:~# ipadm create-ip net1_1680
root@io-dom:~# ipadm create-ipmp -i net0_1680 -i net1_1680 sc_ipmp0   
root@io-dom:~# ipadm set-ifprop -p standby=off -m ip net0_1680
root@io-dom:~# ipadm set-ifprop -p standby=on -m ip net1_1680
root@io-dom:~# ipadm create-addr -T static -a 192.168.180.101/24 sc_ipmp0/v4
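
To double-check the result inside the I/O domain, the VLAN links, the IPMP group and the new address can be listed with the usual tools, for example:

root@io-dom:~# dladm show-vlan
root@io-dom:~# ipmpstat -g
root@io-dom:~# ipadm show-addr sc_ipmp0/v4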

@SPECTRE

As already mentioned and updated in my previous post Meltdown and Spectre on SPARC, we got some patches from Oracle for Solaris:
Oracle Support Document 2349278.1 (Oracle Solaris on SPARC — Spectre (CVE-2017-5753, CVE-2017-5715) and Meltdown (CVE-2017-5754) Vulnerabilities)

I will post an update on performance impacts if I see any… please share your experiences with performance issues caused by those patches…

[Update]
Oracle published a new MOS article about the impact:
Oracle Support Document 2386271.1 (Performance impact of technical mitigation measure against vulnerability CVE-2017-5715 (Spectre v2) on SPARC Servers)

As on other architectures, the impact is around 2-10%… I have heard some very bad news from customers running older Intel boxes with up to 70% I/O loss… real-world examples will be interesting…

DAX usage on OSC?

We enabled the Oracle Database In-Memory option on an Oracle SuperCluster M7 and wanted to see whether our SPARC DAX engines are actually used. With newer Solaris releases you can use daxstat to get more information in addition to busstat or DTrace, but issuing daxstat threw an error:

root@OSC:~# daxstat
Traceback (most recent call last):
  File "/usr/bin/daxstat", line 969, in <module>
    sys.exit(main())
  File "/usr/bin/daxstat", line 962, in main
    return process_opts()
  File "/usr/bin/daxstat", line 905, in process_opts
    dax_ids, dax_queue_ids = derive_dax_opts(args, parser)
  File "/usr/bin/daxstat", line 844, in derive_dax_opts
    dax_ids = find_ids(query, parser, None)
  File "/usr/bin/daxstat", line 683, in find_ids
    all_dax_kstats = RCU.list_objects(kbind.Kstat(), query)
  File "/usr/lib/python2.7/vendor-packages/rad/connect.py", line 391, in list_objects
    a RADInterface object
  File "/usr/lib/python2.7/vendor-packages/rad/client.py", line 213, in _raise_error
    packer.pack_int((timestamp % 1000000) * 1000)
rad.client.NotFoundError: Error listing com.oracle.solaris.rad.kstat:type=Kstat: not found (3)

In my installation, the following was not installed:

root@OSC:~# pkg list -a | grep kstat
library/python-2/python-kstat                     5.11-0.175.2.0.0.27.0      --o
system/management/rad/module/rad-kstat            0.5.11-0.175.3.17.0.1.0    ---


root@OSC:~# pkg install system/management/rad/module/rad-kstat
[...]
root@OSC:~# pkg list | grep rad
system/management/rad                             0.5.11-0.175.3.21.0.4.0    i--
system/management/rad/client/rad-c                0.5.11-0.175.3.21.0.3.0    i--
system/management/rad/client/rad-python           0.5.11-0.175.3.17.0.1.0    i--
system/management/rad/module/rad-kstat            0.5.11-0.175.3.17.0.1.0    i--
system/management/rad/module/rad-smf              0.5.11-0.175.3.17.0.1.0    i--
root@OSC:~#
root@OSC:~# svcadm disable svc:/system/rad:local svc:/system/rad:local-http
root@OSC:~# svcadm enable svc:/system/rad:local svc:/system/rad:local-http
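
Before re-running daxstat it is worth checking that all rad instances came back online (nothing SuperCluster specific, just the service state):

root@OSC:~# svcs -a | grep rad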

And yes, that was it 😉

root@OSC:~# daxstat -ad 60
DAX    commands fallbacks    input    output %busy
ALL    32541246     15222     1.0G     78.0M     0
ALL        5760         0     6.0M      0.0M     0
ALL        2240         0     6.0M      0.0M     0
root@OSC:~# daxstat 1 1
DAX    commands fallbacks    input    output %busy
  0        7062         0     0.0M      0.0M     0
  1        7071         0     0.0M      0.0M     0
  2        7071         0     0.0M      0.0M     0
  3        7067         0     0.0M      0.0M     0
  4        7073         0     0.0M      0.0M     0
  5        7071         0     0.0M      0.0M     0
  6        7066         0     0.0M      0.0M     0
  7        7067         0     0.0M      0.0M     0
  8     4078650      1878     0.0M      0.0M     0
  9     4078651      1941     0.0M      0.0M     0
 10     4078699      1870     0.0M      0.0M     0
 11     4078674      1914     0.0M      0.0M     0
 12     4078720      1923     0.0M      0.0M     0
 13     4078723      1929     0.0M      0.0M     0
 14     4078706      1897     0.0M      0.0M     0
 15     4078721      1871     0.0M      0.0M     0
 16        5696         0     0.0M      0.0M     0
 17        5705         0     0.0M      0.0M     0
 18        5704         0     0.0M      0.0M     0
 19        5704         0     0.0M      0.0M     0
 20        5702         0     0.0M      0.0M     0
 21        5703         0     0.0M      0.0M     0
 22        5700         0     0.0M      0.0M     0
 23        5702         0     0.0M      0.0M     0


As you can see, the DAX engines are working…

In this example I reassigned some cores to the IM zone to get more DAX pipelines and to be able to use DAX engines from more than one chip, because we are also using more memory than a single socket owns (around 1.2 TB). I read that “DAX units and pipelines are not hardwired to certain cores and you can submit work to any DAX unit on a CPU” in this article, which explains DAX very well. So I thought it makes sense to spread the zone's cores across the sockets. After changing the core pinning from one socket to three sockets I saw three times 8 units in use. That could be the reason why only the middle eight engines (DAX 8-15) are busier than the rest… it might be a NUMA effect; I will soon test repopulating the IM store to see whether that spreads the load…
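
If you want to check how the zone's cores map to the sockets (and to the locality groups), the generic Solaris tools are enough; this is just a quick sanity check, not anything DAX specific:

root@OSC:~# psrinfo -pv
root@OSC:~# lgrpinfo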

 

SPARC Roadmap 2018

A new roadmap is on the web… Solaris 11.5 as Solaris.next and new SPARC M8+ chips are planned for 2020/21…

Good news for the best operating system in the world 😉

http://www.oracle.com/us/products/servers-storage/servers/sparc/oracle-sparc/sparc-roadmap-slide-2076743.pdf