ZFS Storage Appliance Infiniband

To attach your ZFS Storage Appliance (ZFSSA) to an Exadata via InfiniBand, for example, you might want more than one "virtual" datalink on your HCA, i.e. multiple IB partitions with the same pkey. It is not (yet?) possible to create two IB partitions (datalinks) with the same pkey on the same IB device using the BUI, and the corresponding CLI property is hidden; I don't know why. But I found a workaround in Oracle Support Document 2087231.1 (Guidelines When Using ZFS Storage in an Exadata Environment): https://support.oracle.com/epmos/faces/DocumentDisplay?id=2087231.1

In my example I will create a bunch of datalinks to enable an active-active IPMP failover configuration on both controllers. (Note that you have to start with ibpart1; "0" does not work.)

zsexa0101a:configuration net datalinks>partition
zsexa0101a:configuration net datalinks partition (uncommitted)> set li    <-- tab tab
linkmode  links
zsexa0101a:configuration net datalinks partition (uncommitted)> set link  <-- tab tab
linkmode  links       <-- it is not there ;-| 
zsexa0101a:configuration net datalinks partition (uncommitted)> set linkname=ibpart5
                      linkname = ibpart5 (uncommitted)
zsexa0101a:configuration net datalinks partition (uncommitted)> show
                         class = partition
                         label = Untitled Datalink
                         links = (unset)
                          pkey = (unset)
                      linkmode = cm
zsexa0101a:configuration net datalinks partition (uncommitted)> set links=ibp0
                         links = ibp0 (uncommitted)
zsexa0101a:configuration net datalinks partition (uncommitted)> set pkey=ffff
                          pkey = ffff (uncommitted)
zsexa0101a:configuration net datalinks partition (uncommitted)> show
                         class = partition
                         label = Untitled Datalink
                         links = ibp0 (uncommitted)
                          pkey = ffff (uncommitted)
                      linkmode = cm
zsexa0101a:configuration net datalinks partition (uncommitted)> commit
zsexa0101a:configuration net datalinks> show
DATALINK       CLASS       LINKS       STATE   ID      LABEL
aggr1          aggregation i40e0       up      -        zsexa01-LACP
ibpart1        partition   ibp0        up      -        zsexa01-IB0
ibpart2        partition   ibp2        up      -        zsexa01-IB1
ibpart3        partition   ibp0        up      -        zsexa01-IB0
ibpart4        partition   ibp2        up      -        zsexa01-IB1
ibpart5        partition   ibp0        up      -        Untitled Datalink
igb0           device      igb0        up      -        Motherboard-igb0
pffff_ibp0     partition   ibp0        up      -        zsexa01-IB0
pffff_ibp2     partition   ibp2        up      -        zsexa01-IB1
vnic1          vnic        igb0        up      -        zsexa0101a-VNIC
vnic2          vnic        igb0        up      -        zsexa0102a-VNIC
vnic3          vnic        aggr1       up      -        zsexa0101c-VNIC
vnic4          vnic        aggr1       up      -        zsexa0102c-VNIC
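To finish the active-active IPMP setup, the new partition datalinks still need IP interfaces on top, which are then grouped into an IPMP group. A rough sketch of the remaining CLI steps follows; the labels, the 0.0.0.0/8 test addresses, the data address, and the group membership are placeholders of mine, not values from the appliance above, so double-check the properties in your own `configuration net interfaces` context:

```
zsexa0101a:configuration net interfaces> ip
zsexa0101a:configuration net interfaces ip (uncommitted)> set links=ibpart1
zsexa0101a:configuration net interfaces ip (uncommitted)> set v4addrs=0.0.0.0/8
zsexa0101a:configuration net interfaces ip (uncommitted)> commit
(repeat for the other partition datalinks, e.g. ibpart3)
zsexa0101a:configuration net interfaces> ipmp
zsexa0101a:configuration net interfaces ipmp (uncommitted)> set links=ibpart1,ibpart3
zsexa0101a:configuration net interfaces ipmp (uncommitted)> set v4addrs=192.168.28.1/24
zsexa0101a:configuration net interfaces ipmp (uncommitted)> commit
```

With both member interfaces active, traffic is spread across both IB links and survives the loss of either one.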

SPARC performance still rulez!

Since Oracle Java, like much other software, is licensed per core, it becomes more and more interesting where you get the most performance per license. Remember the announcement three years ago when Oracle brought out its SPARC M8 chips: double the Java performance of x86 and POWER. Three years later that is still true, proven by public SPEC benchmarks showing amazing results for M8-based servers. Compared to the latest and greatest x86 chips from Intel and AMD, the latest SPARC chip still has 100% more jOPS capacity and leads in critical-jOPS per core! From a commercial point of view, that means you would need double the licenses for the same workload on other platforms.

And don’t forget the incredible performance when it comes down to in-memory analytics. When the M8 was announced, the chip delivered results 10x faster than any other vendor using its built-in DAX engines. Comparisons nowadays still show queries running 8x faster than the rest of the market.

And in the end, everyone talks about security, but almost no one encrypts their workloads, most of the time because it ruins the critical performance. That’s not true on SPARC: encrypt all your business and lose only 2 to 4 % with end-to-end encryption, while keeping all the performance features mentioned before.

So, by Moore’s law, x86 might catch up to that performance in maybe 2 to 4 years. But looking back over the last 10 years, Intel’s single-core performance grew only by 10 to 15%, especially in multi-core environments where Intel’s chips lose the ability to raise turbo frequencies on specific cores. Modern Platinum chips scale very well up to 14-16 cores, but with 24 or even more cores you are thrown back to the per-core performance you got 10 years ago at 2 GHz. It’s still true: if you need enterprise-class systems with predictable performance and linear scaling, you have to go with enterprise CPUs like SPARC.
(or Power – did I really say that?)

See the whole presentation from Bill Nesheim, SVP Oracle Solaris Engineering:
Oracle SPARC & Solaris Consistent, Simple, Secure

Oracle Hard Partitioning with Oracle Linux KVM

Just found the official note, the “Oracle Partitioning Policy”, stating that Oracle KVM is now supported as hard partitioning (!).
“For sure” it is only supported with Oracle Linux KVM in a “special” documented setup:
Hard Partitioning Implementation with Oracle Linux KVM and Oracle Linux Virtualization Manager

A setup similar to OVM with Xen: a management server called “Oracle Linux Virtualization Manager” plus an Oracle script called “olvm-vmcontrol” to set the core pinning on the KVM compute nodes.
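I have not tried olvm-vmcontrol myself yet, but under the hood, CPU pinning on a plain KVM/libvirt node is what `virsh vcpupin` does per vCPU. A minimal sketch, assuming a hypothetical guest named "myvm" and a simple one-core-per-vCPU mapping; the commands are echoed as a dry run, so drop the `echo` to actually apply them on a libvirt host:

```shell
#!/bin/sh
# Dry-run sketch: pin vCPUs 0-3 of the (hypothetical) guest "myvm"
# to physical cores 0-3, one core per vCPU.
VM=myvm
for vcpu in 0 1 2 3; do
  # Remove "echo" to execute against a live libvirt host.
  echo virsh vcpupin "$VM" "$vcpu" "$vcpu"
done
```

For hard-partitioning license purposes, the point is that each vCPU stays bound to a fixed physical core; olvm-vmcontrol automates the equivalent binding through the OLVM engine.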

As it seems, it is still free to use, like the Xen environment was; but if you want support on 3rd-party hardware, you will have to buy Oracle Premier Support (not Basic). The KVM manager is listed as a Premier-supported feature; with OVM Xen you had to buy a separate contract for OL and OVM.

This is good news, because the Xen implementation will run out of support soon: Premier Support for OVM 3.x ends in March 2021.
And as you could see around OOW19, all new products came out with KVM rather than Xen. Oracle Cloud is not running on Xen, and the EXA-X8M and ODA-X8 were announced running KVM virtualization (only the on-prem PCA X8 lags behind, which might change soon…)

I would say Xen was OK, but the OVM Manager was horrible… hopefully that will change now with the new “OLVM”.

Hope we will see Solaris 11 x86 support as a guest soon!