WLS + Oracle benchmark

Today Oracle announced this press release. Cisco Systems tested WebLogic SE 10.3.3 (not available yet) plus a single-node Oracle 11gR2 on a Cisco UCS C250 M2 rackmount server with 2×6-core Intel Xeon X5680 CPUs (not available yet) – here are the results. I want to highlight some specifics of this test bed – after all, it’s interesting to know what’s behind the scenes.

  • Application
    • This is a distributed application, i.e. there are calls to external systems. See the configuration diagram.
    • It is relatively old: it uses EJB2 CMP (try to google that combination and look at the first title – strange, huh?), MDBs and session beans. And, of course, servlets in the form of JSPs (this is old too, but not a problem, since all web-based frameworks rely on servlets as their foundation).
  • WebLogic Configuration
    • This is not a cluster. The application hardware was shared among 8 WLS instances, and those instances were not joined in a WLS cluster – effectively, there were 8 separate applications working against the same database and schema simultaneously.
    • The NIO socket muxer was in use. Strangely, NIOSocketMuxer is not documented anywhere.
    • Logging Last Resource was in use. LLR is a way to support 2PC in certain environments, namely when exactly one participant in the transaction is non-XA. This is not the same as using an XA driver in a fully distributed setup: with XA enabled, every database request is treated as a distributed call (so even a read-only transaction generates a small amount of undo), while with LLR and a non-XA driver WLS acts as a plain JDBC thin client to the Oracle database. (The application-side view of this is sketched after the list.)
    • HugePages in use
    • 64-bit JRockit in use. It’s strange that Compressed References weren’t used (the equivalent HotSpot feature is called compressed oops).
  • Database and schema configuration
    • HugePages in use
    • compatible=11.1.0.1. WTF?
    • cursor_space_for_time=true
    • db_cache_advice=off, query_rewrite_enabled=false, statistics_level=basic, timed_statistics = false
    • 2K default db block size plus 4K and 8K tablespaces. I guess this was done to spread the DB objects across separate buffer caches (a sketch of such a setup follows the list).
    • Redo log size ~136GB
    • Some tables have INITRANS 100
    • All tablespaces are ASSM, but there are tables with explicit FREELISTS/FREELIST GROUPS specified. WTF?
    • Single-table hash clusters are in use
    • Hash partitioning is in use (illustrative DDL for these schema features is sketched after the list)
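
A quick illustration of the LLR point above: the application code is identical whether the data source uses full XA or LLR – the difference lives entirely in the data source configuration (transaction protocol set to Logging Last Resource plus a non-XA driver). The minimal sketch below shows a JTA transaction mixing a JMS send (an ordinary XA participant) with JDBC work on an LLR data source; all JNDI names, the table and the SQL are invented for illustration, not taken from the benchmark kit.

```java
// Sketch only: jdbc/LLRDataSource, jms/XACF, jms/RequestQueue and the ORDERS table
// are hypothetical names used for illustration.
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class LlrSketch {

    public void placeOrder(String orderId) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource llrDs = (DataSource) ctx.lookup("jdbc/LLRDataSource");    // non-XA driver, LLR protocol
        ConnectionFactory xaCf = (ConnectionFactory) ctx.lookup("jms/XACF"); // XA-capable JMS
        Queue queue = (Queue) ctx.lookup("jms/RequestQueue");

        utx.begin();
        java.sql.Connection db = null;
        javax.jms.Connection jms = null;
        try {
            // JDBC work runs as a plain local transaction on the LLR data source;
            // WLS commits it last and records the 2PC outcome in its per-server
            // LLR table in the same database.
            db = llrDs.getConnection();
            java.sql.PreparedStatement ps =
                    db.prepareStatement("UPDATE orders SET status = 'PLACED' WHERE id = ?");
            ps.setString(1, orderId);
            ps.executeUpdate();
            ps.close();

            // The JMS send enlists as an ordinary XA participant in the same JTA transaction.
            jms = xaCf.createConnection();
            Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(orderId));

            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        } finally {
            if (jms != null) jms.close();
            if (db != null) db.close();
        }
    }
}
```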
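On the multiple block sizes: Oracle needs a dedicated buffer cache (db_Nk_cache_size) for every block size other than the default before a tablespace with that block size can be created, so the result really is a set of separate caches with objects spread between them. A rough sketch of what that setup might look like – cache sizes, tablespace names, file paths and credentials are all invented, and in practice this would be run from SQL*Plus rather than wrapped in JDBC as here:

```java
// Hypothetical setup script, not the benchmark's actual scripts.
// Assumes the Oracle JDBC (ojdbc) driver on the classpath and a privileged account.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MultiBlockSizeSetup {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/bench", "system", "change_me");
             Statement stmt = conn.createStatement()) {

            // Dedicated buffer caches for the non-default block sizes
            // (the default cache already serves the 2K blocks).
            stmt.execute("ALTER SYSTEM SET db_4k_cache_size = 256M");
            stmt.execute("ALTER SYSTEM SET db_8k_cache_size = 1G");

            // Tablespaces whose blocks land in those separate caches.
            stmt.execute("CREATE TABLESPACE ts_data_4k DATAFILE '/u01/oradata/ts_data_4k_01.dbf' "
                       + "SIZE 10G BLOCKSIZE 4K");
            stmt.execute("CREATE TABLESPACE ts_data_8k DATAFILE '/u01/oradata/ts_data_8k_01.dbf' "
                       + "SIZE 10G BLOCKSIZE 8K");
        }
    }
}
```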
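And for the schema features (INITRANS 100, single-table hash clusters, hash partitioning), here is purely illustrative DDL, again issued via JDBC only to stay in one language; the table names, columns, HASHKEYS/SIZE values and partition count are made up and have nothing to do with the real benchmark schema:

```java
// Illustrative DDL only – every identifier and sizing value below is invented.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaFeaturesSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/bench", "bench_owner", "change_me");
             Statement stmt = conn.createStatement()) {

            // Single-table hash cluster: rows are located by hashing the key,
            // so a lookup by item_id needs no index block reads.
            stmt.execute("CREATE CLUSTER item_cluster (item_id NUMBER) "
                       + "SIZE 256 SINGLE TABLE HASHKEYS 1000000");
            stmt.execute("CREATE TABLE item (item_id NUMBER PRIMARY KEY, "
                       + "item_name VARCHAR2(30), price NUMBER) "
                       + "CLUSTER item_cluster (item_id)");

            // High INITRANS pre-allocates ITL slots in each block, which helps
            // avoid ITL waits when many sessions hit the same hot blocks.
            stmt.execute("CREATE TABLE order_status (order_id NUMBER, status VARCHAR2(10)) "
                       + "INITRANS 100");

            // Hash partitioning spreads inserts and index hot spots across segments.
            stmt.execute("CREATE TABLE order_line (order_id NUMBER, line_no NUMBER, qty NUMBER) "
                       + "PARTITION BY HASH (order_id) PARTITIONS 64");
        }
    }
}
```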

PS. I’d love to see a Statspack/AWR/ASH snapshot of the FinalRun from that system, as well as a GC log from an application node.
