Tag: performance

Swingbench Analysis: Power versus Intel

We aim to answer the question: How many Intel CPUs are needed to match the performance of my current Power CPUs while considering Oracle licensing implications?

Previously, there were issues capturing CPU and I/O metrics in Swingbench via vmstat on AIX. After we discussed this with the developer, a fix was announced, and we had the opportunity to test it. We are thrilled: the separate SSH connection that Swingbench needed to collect OS statistics is no longer required.

CPU Settings Preparation:
This is essential to prevent automatic scaling of CPU resources during the tests. A query to verify the resulting core and thread counts follows the two configuration lists below.

A. IBM POWER LPAR Configuration

  1. Use a capped LPAR to ensure that no additional resources can be activated.
  2. Activate SMT-8 (8 threads per core).

B. Intel Platform Configuration

  1. Activate Hyper-Threading (2 threads per core).
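
Once both platforms are configured, it is worth double-checking the core and thread counts that the Oracle instance actually sees before starting any runs. This is only a sketch and assumes SELECT access to the dynamic performance views; the exact statistic rows returned by V$OSSTAT can vary by platform and release.

-- CPU topology as reported by the database
SELECT stat_name, value
FROM   v$osstat
WHERE  stat_name IN ('NUM_CPUS', 'NUM_CPU_CORES', 'NUM_CPU_SOCKETS');

On the AIX LPAR of CORE Scenario 1 below we would expect NUM_CPU_CORES = 2 and NUM_CPUS = 16 (SMT-8); on the Intel machine NUM_CPU_CORES = 8 and NUM_CPUS = 16 (Hyper-Threading).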


CORE Scenario 1:
Purpose for Scenario 1: What’s the actual value of a thread?
AIX with 2 Cores (equivalent to 16 Threads)
INTEL with 8 Cores (equivalent to 16 Threads)

CORE Scenario 2:
Purpose for Scenario 2: Conversion of core performance (license relevant)
Equal number of Cores on AIX and INTEL


Comparison Table:

An Excel comparison table between AIX and Linux for "Transactions Per Minute" (TPM) at different numbers of active users (4, 8, 16, 32, 64) is provided to put the utilization of the two platforms in relation.

Calculating the relation between AIX and INTEL using the TPM values:

Efficiency calculation of the systems (average value from all test runs SH/OE separated):

Efficiency ratio calculation:

The performance ratio we have calculated can serve as a rough indicator of how many Intel CPUs are required to match the performance of an AIX POWER CPU.
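
To make the efficiency ratio explicit, here is one way to write the calculation down; this is a sketch based on the description above, not the exact Excel formulas. The TPM average is taken over all user counts of a series (SH and OE evaluated separately), and C denotes the number of cores used on each platform:

\[
E_{\mathrm{AIX}} = \frac{\overline{\mathrm{TPM}}_{\mathrm{AIX}}}{C_{\mathrm{AIX}}}, \qquad
E_{\mathrm{Intel}} = \frac{\overline{\mathrm{TPM}}_{\mathrm{Intel}}}{C_{\mathrm{Intel}}}, \qquad
R = \frac{E_{\mathrm{AIX}}}{E_{\mathrm{Intel}}}, \qquad
\text{Intel cores required} \approx R \cdot C_{\mathrm{AIX}}
\]

In CORE Scenario 2 both platforms run on the same number of cores, so R reduces to the plain ratio of the averaged TPM values.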

Test runs to be recorded:
Tip: the Oracle software should be at least the same release on both platforms, ideally with the same patch level.

During each test, the CPU utilization should not exceed 80%!
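
One simple way to keep an eye on this from inside the database is the host CPU metric in V$SYSMETRIC; this is only a sketch and assumes SELECT access to the view (vmstat on AIX or the Swingbench overview charts work just as well):

-- Current host CPU utilization as seen by the instance;
-- V$SYSMETRIC keeps one row per sampling interval, so more than one row may be returned
SELECT begin_time, end_time, ROUND(value, 1) AS host_cpu_pct
FROM   v$sysmetric
WHERE  metric_name = 'Host CPU Utilization (%)';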

We use the Sales History (SH) schema that ships with Swingbench, but we install only a small dataset of approximately 100 MB.

We load the configuration and start a test run every 5 minutes with a different number of active users.

Sales History SETTINGS (DEFAULT), every 5 min with 4, 8, 16, 32, 64 users

We call it: MEMORY ONLY QUERY TEST
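
Since this is meant to be a memory-only test, the SH data should fit entirely into the buffer cache. A quick sanity check, assuming the schema was installed under the default owner SH (adjust the owner if the wizard was run with a different user):

-- Total size of the Sales History segments; this should stay well below the buffer cache size
SELECT ROUND(SUM(bytes) / 1024 / 1024) AS sh_mb
FROM   dba_segments
WHERE  owner = 'SH';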

CORE Scenario 1:
SH Test run 1-6:
AIX 13:00-13:30
LINUX 13:30-14:00

CORE Scenario 2:
SH Test run 6-12:
AIX 14:00-14:30
LINUX 14:30-15:00

This works fine.

To get another memory-only scenario, we repeat the test with a small Sales Order Entry (SOE) schema: OE settings (without DML), every 5 min with 4, 8, 16, 32, 64 users.

Settings SOE_V2

CORE Scenario 1:
SOE_V2 Test run 1-6:
AIX 15:00-15:30
LINUX 15:30-16:00

CORE Scenario 2:
SOE_V2 Test run 1-6:
AIX 16:00-16:30
LINUX 16:30-17:00

Now we enter the results into our Excel sheet to get a good comparison:

Our customer is now happy!

Feel free to contact us!

Exasol Virtual Schema Performance Test

In today’s world, where companies have to deal with huge amounts of data, the issue of data management efficiency becomes more and more important. One of the solutions to optimize data management and improve performance is to use Virtual Schemas in Exasol, a high-performance in-memory database.

In the following, we present a performance test of an Exasol Virtual Schema used to reduce the amount of data on the main cluster in an Exasol environment. The idea behind this technique is to use a second, smaller Exasol cluster with a storage (memory) license and offload data to it.

Offloading data to a smaller cluster can lead to better performance by reducing the amount of active data that needs to be stored on the main Exasol instance. It also allows for greater flexibility by not having to keep the data on the main cluster all the time, resulting in better use of resources.

To perform this test, we use the Virtual Schema Adapter, which is written in Java. This adapter allows us to interact with Exasol via JDBC.

First we create the schema and the adapter:

CREATE SCHEMA X_SCHEMA_FOR_VS_SCRIPT;
CREATE OR REPLACE JAVA ADAPTER SCRIPT X_SCHEMA_FOR_VS_SCRIPT.ADAPTER_SCRIPT_EXASOL AS
    %scriptclass com.exasol.adapter.RequestDispatcher;
    %jar /buckets/bfsdefault/vschema/virtual-schema-dist-10.5.0-exasol-7.1.1.jar;
/

We then define two connections to the smaller DEV1 instance, one JDBC connection and one native Exasol connection:

CREATE OR REPLACE CONNECTION JDBC_CONNECTION_EXA_DEV1 
TO 'jdbc:exa:1.112.32.331..333/FINGERPRINT:8565'
USER 'SYS'
IDENTIFIED BY 'xx';
CREATE OR REPLACE CONNECTION EXA_CONNECTION_DEV1
TO '1.112.32.331..333/FINGERPRINT:8565'
USER 'SYS'
IDENTIFIED BY 'xx';

After that we create the Virtual Schema, using the JDBC connection for metadata and pushdown and the native Exasol connection for the actual data transfer (IMPORT FROM EXA):

CREATE VIRTUAL SCHEMA VIRTUAL_EXASOL_DEV1 
USING  X_SCHEMA_FOR_VS_SCRIPT.ADAPTER_SCRIPT_EXASOL WITH
    CONNECTION_NAME = 'JDBC_CONNECTION_EXA_DEV1'
    SCHEMA_NAME     = 'HISTORY_ARCHIVE'  
    IMPORT_FROM_EXA = 'true'
    EXA_CONNECTION  = 'EXA_CONNECTION_DEV1'
    MAX_TABLE_COUNT = '10000';
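
At this point the remote tables can already be queried through the virtual schema. As a quick check we count the rows of the archive table that we will use below (any table from the remote HISTORY_ARCHIVE schema would do):

-- Verify that the remote table is reachable through the virtual schema
SELECT COUNT(*) FROM VIRTUAL_EXASOL_DEV1.ARCHIVE_DATA;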

After that, the data from the virtual schema is merged with the data from the local source. We do not want the application to need a different query than before; everything should remain transparent for our customer. We therefore create a view that combines both sources:

CREATE OR REPLACE VIEW FULL_DATA
AS SELECT * FROM ACTUAL_DATA
WHERE datadate > SYSDATE - INTERVAL '1' MONTH
UNION ALL
SELECT * FROM VIRTUAL_EXASOL_DEV1.ARCHIVE_DATA
WHERE datadate < SYSDATE - INTERVAL '1' MONTH;

When we run a performance test, we see that there is a clear difference between querying one large database and querying two databases linked via UNION ALL and a Virtual Schema.
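
The test queries themselves are not part of this post, so the following is only a hypothetical example of the kind of statement we mean: an aggregation that touches mostly archived rows, i.e. data that lives on the remote DEV1 cluster (DATADATE is the only column known from the view definition above):

-- Hypothetical test query: most of the matching rows come from DEV1 via the virtual schema
SELECT COUNT(*)
FROM   FULL_DATA
WHERE  datadate < SYSDATE - INTERVAL '1' MONTH;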

DEV1 is really slow – but cheap!

The optimizer does push the WHERE clause down to the remote cluster, but all the required rows still have to be fetched over the network, and that takes time. Note: runtimes are given in seconds.

Conclusion: The impact on query performance depends on the query. If you only need a small amount of data for a calculation, it is of course not so dramatic, but in our tests we measured a slowdown factor between 11 and 157. Since we only tested simple queries, this is still important know-how for us.
