Freeing Memory

ZFS frees its cache in a way that avoids causing a memory shortage: the system can operate with low freemem without suffering a performance penalty, because ZFS returns memory from the ARC only when there is memory pressure.
However, there are occasions when ZFS fails to evict memory from the ARC quickly enough, which can lead to application startup failures due to a memory shortage, or, for example, to less free memory being available for kernel zones. Reaping memory from the ARC can also drive system utilization up at the expense of performance. You can limit ZFS memory usage with "zfs_arc_max" or "user_reserve_hint_pct"; see MOS Doc ID 1005367.1 for details.
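For reference, the zfs_arc_max cap is set in /etc/system and takes effect after a reboot. A minimal sketch, assuming you want to cap the ARC at 8 GiB (the value is in bytes and purely an example; pick what suits your machine):

```
* /etc/system fragment: cap the ZFS ARC at 8 GiB (example value)
set zfs:zfs_arc_max = 8589934592
```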
But as mentioned before, limiting the ARC does not guarantee that enough memory is actually free. There is a small but handy hook you can use:

root@solaris:~# echo "::memstat" | mdb -k
Usage Type/Subtype                      Pages    Bytes  %Tot  %Tot/%Subt
---------------------------- ---------------- -------- ----- -----------
Kernel                               10583129    80.7g  6.6%
  Regular Kernel                      8800037    67.1g        5.5%/83.1%
  Defdump prealloc                    1783092    13.6g        1.1%/16.8%
ZFS                                  25064230   191.2g 15.7%  <---- high usage
User/Anon                            87434765   667.0g 54.9%
  Regular User/Anon                  10150413    77.4g        6.3%/11.6%
  OSM                                77284352   589.6g       48.6%/88.3%
Exec and libs                          284069     2.1g  0.1%
Page Cache                            5677442    43.3g  3.5%
Free (cachelist)                       311034     2.3g  0.1%
Free                                 29635667   226.1g 18.6%
Total                               158990336     1.1t  100%
root@solaris:~# echo "needfree/Z 0x40000000"|mdb -kw ; sleep 1 ; echo "needfree/Z 0"|mdb -kw
needfree:       0                       =       0x40000000
needfree:       0x40000000              =       0x0
root@solaris:~# echo "::memstat" | mdb -k
Usage Type/Subtype                      Pages    Bytes  %Tot  %Tot/%Subt
---------------------------- ---------------- -------- ----- -----------
Kernel                               10585952    80.7g  6.6%
  Regular Kernel                      8802860    67.1g        5.5%/83.1%
  Defdump prealloc                    1783092    13.6g        1.1%/16.8%
ZFS                                   2976204    22.7g  1.8%  <---- it's gone
User/Anon                            87441852   667.1g 54.9%
  Regular User/Anon                  10157500    77.4g        6.3%/11.6%
  OSM                                77284352   589.6g       48.6%/88.3%
Exec and libs                          284067     2.1g  0.1%
Page Cache                            5676347    43.3g  3.5%
Free (cachelist)                       312849     2.3g  0.1%
Free                                 51713065   394.5g 32.5%  <---- free again
Total                               158990336     1.1t  100%
root@solaris:~#

BTW, it takes a few seconds for the ZFS usage to shrink; in the output above it took maybe 5 seconds on an M7-8 server running Solaris 11.4.28.82.3.
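If you poke needfree regularly, it can be handy to wrap the one-liner in a tiny script. This is only a sketch: needfree_cmd is a hypothetical helper that merely formats the mdb command string with the same 0x40000000 value used above (it does not run mdb itself, which requires root and a Solaris kernel); adjust the value for your system.

```shell
#!/bin/sh
# Hypothetical helper: build the needfree poke as a single command
# string. It only prints the one-liner -- running it requires root
# and "mdb -kw", exactly as shown in the example above.
needfree_cmd() {
  val=$1   # value to write into needfree, e.g. 0x40000000
  printf 'echo "needfree/Z %s" | mdb -kw ; sleep 1 ; echo "needfree/Z 0" | mdb -kw\n' "$val"
}

needfree_cmd 0x40000000
```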

CVE-2021-3156 sudo @ Solaris

Ohhh – base score 7.8 – but only because it is a local issue; still, are your users your friends? It affects sudo legacy versions 1.8.2 through 1.8.31p2 and stable versions 1.9.0 through 1.9.5p1.
This heap-based buffer overflow is quite easy to exploit, and I hope it will be fixed in the upcoming SRU30 for 11.4. Until SRU30 comes out, which may still take some days, you can use an IDR patch for Solaris – I don't know how long a new LSU build will take for 11.3 extended support.
Solaris 11.4 SRU29 -> idr4690.1
Solaris 11.3 LSU 36.24.0 -> idr4691.1
Solaris 10 ? -> Oracle says “pending resolution”

Oracle Support Document 2052590.1 (Reference Index of CVE IDs and Solaris Security IDRs) can be found at: https://support.oracle.com/epmos/faces/DocumentDisplay?id=2052590.1

Another reason why you should use pfexec on Solaris 😉

Happy patching

[UPDATE] 17-FEB-2021
Oracle released a Solaris 11.4 update including the fix for this sudo vulnerability -> Solaris 11.4 SRU 30 (11.4.30.88.3)