MIPS reduction strategies
Once we have identified the workload we want to reduce, several options are available to shrink the MSU/MIPS footprint.
Rescheduling workload
We can attempt to move non-business-critical workload to off-peak hours (through transaction scheduling or batch scheduling).
Alternatively, we can de-prioritise or inhibit non-critical workload during peak consumption periods, so that the peak is reserved for highly critical workload only. We can configure our WLM environment to be more restrictive during those periods.
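The effect of rescheduling can be sketched numerically. Monthly software charges are typically driven by the peak rolling four-hour average (R4HA) of MSU consumption, so moving a batch job out of the online peak lowers the chargeable peak. The MSU figures below are hypothetical, purely to illustrate the mechanism:

```python
def rolling_4h_peak(msu_per_hour):
    """Peak of the rolling 4-hour average over hourly MSU samples."""
    window = 4
    averages = [
        sum(msu_per_hour[i:i + window]) / window
        for i in range(len(msu_per_hour) - window + 1)
    ]
    return max(averages)

# Hypothetical 24 hourly MSU samples: online peak roughly 09:00-17:00.
baseline = [100] * 9 + [500, 700, 500, 500, 500, 500, 500, 500] + [100] * 7
baseline[10] += 200          # a 200-MSU batch job runs inside the online peak

rescheduled = list(baseline)
rescheduled[10] -= 200       # move the batch job ...
rescheduled[2] += 200        # ... to 02:00, an off-peak hour

print(rolling_4h_peak(baseline), rolling_4h_peak(rescheduled))
```

Total consumption is unchanged; only the peak R4HA, and therefore the bill, goes down.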
Code optimization
As we discussed in earlier articles, once we have identified the workload that consumes most of the capacity, we want to take a closer look at those transactions, batch jobs, or database queries. Once we have identified the entry programs of these objects, we can optimize them by:
- Leveraging more efficient language functions in C/370, PL/I, or COBOL, sometimes replacing older routines with newer compiler functions that have a smaller footprint.
- Recompiling certain code with a newer ARCH level and a newer compiler version, such as COBOL V6.4, which generates newer, more efficient instructions.
- Identifying expensive instructions used in the code, such as MVC, and eliminating redundant or inefficient code.
- Refactoring selected code paths, optimizing routines that run during the peak.
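The "eliminate redundancies" point above can be illustrated in any language. The Python sketch below is an analogy for a COBOL or PL/I routine that repeats an expensive, loop-invariant operation (a conversion, a table lookup) on every iteration of a hot loop; hoisting it out removes the redundant work without changing the result:

```python
def total_slow(amounts, rate_table):
    total = 0.0
    for amount in amounts:
        # redundant: the lookup and conversion are loop-invariant,
        # yet they are re-executed on every iteration
        rate = float(rate_table["EUR"])
        total += amount * rate
    return total

def total_fast(amounts, rate_table):
    rate = float(rate_table["EUR"])   # computed once, outside the loop
    return sum(amount * rate for amount in amounts)

rates = {"EUR": "1.08"}
data = [100, 250, 40]
print(total_slow(data, rates), total_fast(data, rates))
```

The same principle, moving invariant work out of the path that runs millions of times during the peak, is what a profiler-driven code review on the mainframe is looking for.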
Database optimization
In many cases, capacity attributed to expensive workload is caused by database functions or queries. Database queries are attributed to the TCB of the transaction or batch job, and do not show up under the database subsystem itself as the consumer. However, our DBMS environments usually come with utilities that allow us to create reports and statistics for specific calls, such as "DB2 explain". ADABAS and IDMS have similar features (IMS probably does not).
We can use "DB2 explain" on expensive SQL statements and optimize them, potentially also further "normalizing" the database model. DB2 explain shows how "expensive" an SQL statement is. It makes sense to revisit the database model for high-velocity queries that run in large volumes, but also for heavy queries used for business intelligence or data warehouse functions. We want to eliminate inefficient database calls: both very heavy DB2 calls and very small ones that are individually cheap but executed millions of times in a short period.
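The last point, why a "tiny" statement can matter more than a heavy one, comes down to total cost being cost per execution times execution count. The statement texts and timings below are made-up illustration data, not real accounting output:

```python
# Hypothetical per-statement figures, as one might extract from
# accounting or monitor reports.
queries = [
    {"stmt": "point lookup on ACCOUNT by key",   "ms_per_exec": 0.4,     "execs": 12_000_000},
    {"stmt": "warehouse GROUP BY over REGION",   "ms_per_exec": 90_000.0, "execs": 20},
    {"stmt": "single-row UPDATE on BALANCE",     "ms_per_exec": 1.1,     "execs": 800_000},
]

for q in queries:
    q["total_cpu_s"] = q["ms_per_exec"] * q["execs"] / 1000

# Rank by total CPU: the high-frequency lookup tops the list,
# ahead of the single heavy warehouse query.
ranked = sorted(queries, key=lambda q: q["total_cpu_s"], reverse=True)
for q in ranked:
    print(f'{q["total_cpu_s"]:>8.0f} s  {q["stmt"]}')
```

Ranking by total cost rather than per-execution cost is what surfaces the small, hot statements worth sending through DB2 explain first.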
Leveraging cheaper processors
The IBM CPC comes with several different processor types, such as zAAP, zIIP, IFL, and the normal general-purpose processors (GPs).
zAAP and zIIP are incentivized and cheaper to use, sometimes free, but they do come with some limitations.
We can evaluate refactoring parts of the code to run decentrally in Python or Java, for example connecting to DB2 remotely via JDBC or ODBC, thus leveraging the zIIP processor; or potentially recode certain aspects so they run in SRB mode.
zIIP-eligible workload consists of zCX (container workload) and programs running in SRB mode, as well as Java. Link to zIIP eligibility
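Why this matters financially: GP consumption drives the chargeable peak, while zIIP capacity typically does not count toward software charges. The toy calculation below uses entirely made-up rates and shares, just to show the shape of the saving:

```python
GP_COST_PER_MSU = 1000.0    # hypothetical $/MSU/month; real rates vary by contract
peak_msu = 800              # hypothetical chargeable R4HA peak on GPs
ziip_eligible_share = 0.25  # assumed fraction of the peak that is zIIP-eligible

offloaded = peak_msu * ziip_eligible_share   # MSUs shifted to the zIIP
new_peak = peak_msu - offloaded              # remaining chargeable GP peak
savings = offloaded * GP_COST_PER_MSU        # monthly software-charge reduction

print(f"new chargeable peak: {new_peak:.0f} MSU, monthly savings: ${savings:,.0f}")
```

The work still runs; it simply runs on capacity that is priced (or licensed) differently.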
Rehosting & ISV replacements
For some workload, it makes sense to rehost it. Decentralized migration platforms are cheaper in many cases, such as Micro Focus Enterprise Server or LzLabs' SDM, which can allow significant MSU savings.
In addition, we can move products that do not have to run on z/OS to a decentralized location, such as MQ or the scheduler (like Control-M, CA Workload Automation, or IWS); most schedulers offer z/OS agents. Other candidates include "BI" products or file-transfer products.
Feel free to download the "How to save MIPS" whitepaper on the right or below; no subscription or registration required!