User Key Common Flags in EasySMF

August 15, 2019 by Andrew

As you have probably heard, user key common storage will not be allowed in z/OS 2.4.

For more information on z/OS 2.4 User Key Common removal, see Marna Walle's article: Reminder to take a look: z/OS V2.4 user key common removal

IBM added some flags to the type 30 SMF record to audit usage of user key common storage, allowing you to see which jobs used it. The flags are:

  • SMF30_UserKeyCsaUsage
  • SMF30_UserKeyCadsUsage
  • SMF30_UserKeyChangKeyUsage

There is another flag for Restricted Use Common Service Area:

  • SMF30_UserKeyRuCsaUsage

Restricted use CSA is a relatively new function that is not going away in z/OS 2.4, but IBM discourages its use and it is becoming a priced feature.

EasySMF provides reports to show these flags and help you find any jobs or address spaces using user key common storage. The common storage flags are shown in the following reports:

  • Job Memory Information – Shows information from jobs and address spaces after they have ended.
  • Step Completions – Shows information from ended steps. This report shows the program name.
  • Running Jobs – Shows information including jobs that are still running (from SMF type 30 interval records).

You will need to scroll right to find the User Key Common columns.

EasySMF User Key Common Report
  • User Key Audit indicates whether the SMF30_UserKeyCommonAuditEnabled flag is set. This must be on; otherwise, the information in the other fields is not valid.
  • User Key CSA shows the value of SMF30_UserKeyCsaUsage.
  • User Key CADS shows the value of SMF30_UserKeyCadsUsage.
  • User Key CHANGKEY shows the value of SMF30_UserKeyChangKeyUsage.
  • Restricted Use CSA shows the value of SMF30_UserKeyRuCsaUsage.

Click the column headers to sort the values and find any jobs where the flags are set (click twice to sort descending).

User Key Common reports on z/OS using Java

If you don’t want to download SMF data to a PC, you can run a report on z/OS using Java. EasySMF:JE (a set of Java classes to map SMF records) provides a sample report to show User Key Common information.

Quickstart installation information for EasySMF:JE can be found here: EasySMF:JE Java Quickstart

Change the class name in IVP4 from com/blackhillsoftware/samples/RecordCount to com/blackhillsoftware/samples/UserKeyCommon to run the User Key Common sample report.

30 Day Trial

Both EasySMF and EasySMF:JE can be downloaded for 30 day trials.

Information about the trial is available here.

Filed Under: EasySMF News

Understanding z/OS Unix Work with EasySMF

June 20, 2019 by Andrew

Unix work running under z/OS can be difficult to track.

A Unix process can create thousands of child processes, each running in another address space. They may only exist for fractions of a second and produce no job output, so you don’t see them in SDSF. Child processes create their own SMF type 30 records for job accounting, so their resource usage doesn’t appear in the parent’s accounting records. The child processes may have a different jobname and can even run in different service and report classes to the parent.

I have seen a job running BPXBATCH where the job itself used virtually no CPU time, but it spawned over 10,000 sub tasks. More than 99% of the CPU time used by that batch job appeared in child process SMF records with different job names, service and report classes.

EasySMF has reports to help you understand your z/OS Unix work.

EasySMF uses the Unix process and parent process id information from the type 30 SMF records to build a tree view of your Unix work.

  • You can see the parent – child relationships between different address spaces.
  • You can see whether the child service and report classes are the same as the original job.
  • When you collapse the tree, usage information from all the children is rolled up into the parent job. When you expand the tree, usage information is shown for the individual entries.
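
If you want to experiment with the same idea outside EasySMF, the sketch below shows one way parent/child relationships can be rolled up from process id and parent process id pairs. ProcessInfo is a hypothetical stand-in for the values taken from the type 30 records; it is not a class from the EasySMF:JE API, and the numbers are made up.

import java.util.*;

// Hypothetical stand-in for the process information taken from type 30 records.
class ProcessInfo
{
    ProcessInfo(long pid, long parentPid, String jobname, double cpuSeconds)
    {
        this.pid = pid;
        this.parentPid = parentPid;
        this.jobname = jobname;
        this.cpuSeconds = cpuSeconds;
    }
    long pid;
    long parentPid;
    String jobname;
    double cpuSeconds;
}

public class ProcessRollup
{
    public static void main(String[] args)
    {
        List<ProcessInfo> processes = Arrays.asList(
                new ProcessInfo(100, 1,   "SSHD",     0.02),
                new ProcessInfo(200, 100, "ANDREWR1", 3.5),
                new ProcessInfo(201, 100, "ANDREWR2", 7.1),
                new ProcessInfo(300, 201, "ANDREWR3", 1.4));

        // Index children by parent process id, and collect the pids we have seen
        Map<Long, List<ProcessInfo>> childrenByParent = new HashMap<>();
        Set<Long> pids = new HashSet<>();
        for (ProcessInfo p : processes)
        {
            childrenByParent
                .computeIfAbsent(p.parentPid, k -> new ArrayList<>())
                .add(p);
            pids.add(p.pid);
        }

        // A process whose parent is not in the data is the root of a tree;
        // roll the CPU time of all its descendants up into it
        for (ProcessInfo p : processes)
        {
            if (!pids.contains(p.parentPid))
            {
                System.out.format("%-8s total CPU: %.2fs%n",
                        p.jobname, totalCpu(p, childrenByParent));
            }
        }
    }

    // CPU time of a process plus all of its descendants
    private static double totalCpu(ProcessInfo p,
            Map<Long, List<ProcessInfo>> childrenByParent)
    {
        double total = p.cpuSeconds;
        for (ProcessInfo child :
                childrenByParent.getOrDefault(p.pid, Collections.<ProcessInfo>emptyList()))
        {
            total += totalCpu(child, childrenByParent);
        }
        return total;
    }
}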

The expanded view of Unix work, showing the relationship between work running in different address spaces:

The collapsed view of the same work. Resources like CPU time show the total for all the related address spaces:

Related Processes

EasySMF can also help you find where work came from. If Related Processes is selected, EasySMF will search for and show parent and child tasks that do not match the main selection criteria. Here we can see a task with the job name ANDREWR; Related Processes shows that the task came in through SSH, and that it had a number of Unix sub tasks.

Related Processes can help you find out where a Unix process came from.

More Detailed Information about Unix Work

The Unix Work report shows you more detail about work that uses z/OS Unix. As the name suggests, it only shows work that had a Unix component – other batch jobs and started tasks are ignored. This report shows information from the step and substep end records, which include some information about the program that was executed. This gives clues to the processing taking place, although the information included in SMF is limited and may not show every program.

The Unix Work report shows information from Step End records for work with a Unix component, including the program information recorded in SMF.

Unix processes don’t always run in a different address space under z/OS. Sometimes processes will share an address space. This gives multiple Unix Process sections in SMF, with most of the statistics reported at the shared address space level. You can see this in the Unix Process report where you have processes that do not report the address space level information.

When multiple processes share an address space only one will show address space level information.

You can filter the Unix Work report to find a particular process ID – in this case the selected process was one of multiple processes that shared one address space.

Detailed Unix information has been included in EasySMF since version 3.2.

Upgrades are free for existing customers. If you are not a customer, a 30 day trial is available.

Filed Under: EasySMF News

Java mapping for CICS SMF records

August 3, 2017 by Andrew

The EasySMF Java API now has experimental support for CICS records.

Experimental, because I want to get some feedback from CICS users about class names, usage etc. before locking down the design. In particular:

  • Do the class names and organization make sense to a CICS person? Would other names or a different organization make more sense?
  • Are the examples of how to process data clear and useful?
  • Are there areas where terminology is used incorrectly?

The complete Javadoc is available here, with an overview of the CICS functionality here.

If you have any comments, you can leave feedback in the comments box below, send it to support@blackhillsoftware.com, or give feedback in person at booth 323 at SHARE in Providence, Rhode Island.

You can try out the API using the 30 day trial available here: 30 Day Trial.

Installation information is available here: EasySMF:JE Java Quickstart

Using the API

EasySMF:JE aims to provide a consistent interface across different SMF record types and sections, and converts values to standard Java types for simple programming.

Dates and Times

Dates and times are converted to java.time classes, which can represent dates and times with a precision of 1 nanosecond.

Times representing a duration, e.g. CPU or elapsed time, are converted to Duration.
Dates and times of day are converted to LocalDate, LocalTime, LocalDateTime or ZonedDateTime depending on exactly what information is in the field. Typically, times based on UTC (GMT) are converted to ZonedDateTime with ZoneOffset.UTC. Other dates and times are converted to LocalDate/Times.
Java has time zone rules, so it is possible to apply a ZoneId to a LocalDateTime and perform date-aware conversions between time zones.
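
A minimal standalone sketch of the kinds of values described above, using only standard java.time classes (no EasySMF:JE calls):

import java.time.*;

public class TimeExamples
{
    public static void main(String[] args)
    {
        // A UTC based timestamp, as a ZonedDateTime with ZoneOffset.UTC
        ZonedDateTime utcTime =
                ZonedDateTime.of(2019, 8, 15, 2, 30, 0, 0, ZoneOffset.UTC);

        // CPU / elapsed style values as Durations; nanosecond precision is available
        Duration cpuTime = Duration.ofSeconds(12).plusNanos(345_678_900);

        // A local (non-UTC) date and time
        LocalDateTime localTime = LocalDateTime.of(2019, 8, 15, 12, 30, 0);

        // Apply a ZoneId to a LocalDateTime and convert between time zones
        // using the time zone rules built into Java
        ZonedDateTime melbourne = localTime.atZone(ZoneId.of("Australia/Melbourne"));
        ZonedDateTime asUtc = melbourne.withZoneSameInstant(ZoneOffset.UTC);

        System.out.println(utcTime);
        System.out.println(cpuTime);
        System.out.println(asUtc);
    }
}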

Numeric Values

1, 2 and 3 byte integer values and 4 byte signed integer values are converted to int (32 bit signed) values.
4-7 byte integer values and 8 byte signed values are converted to long (64 bit signed).

8 byte unsigned values are available as both long (64 bit signed) and as a BigInteger. The long value may provide better performance if the value will not exceed the maximum value for a long. If a value does exceed the maximum value (i.e. the high order bit is set) an exception will be thrown. If the field value might exceed the maximum value for a long, use the BigInteger version.

Integer values greater than 8 bytes are converted to BigInteger.

Floating point values are converted to Java double.
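
As a standalone illustration of the 8 byte unsigned case (plain Java, not EasySMF:JE code), the same bytes can always be represented as a BigInteger, while a signed long only works when the high order bit is clear:

import java.math.BigInteger;

public class UnsignedValues
{
    public static void main(String[] args)
    {
        // An 8 byte unsigned value with the high order bit set
        byte[] raw = { (byte) 0x80, 0, 0, 0, 0, 0, 0, 1 };

        // BigInteger can represent the full unsigned range
        BigInteger big = new BigInteger(1, raw);
        System.out.println("BigInteger value: " + big);

        // The same bytes interpreted as a signed long go negative,
        // which is why an API returning long has to throw instead
        long asLong = 0;
        for (byte b : raw)
        {
            asLong = (asLong << 8) | (b & 0xFF);
        }
        if (asLong < 0)
        {
            System.out.println("Value exceeds Long.MAX_VALUE; use the BigInteger form");
        }
        else
        {
            System.out.println("long value: " + asLong);
        }
    }
}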

String Values

EBCDIC and UTF8 string/character values are converted to String. Java uses Unicode internally – values are converted from EBCDIC or UTF8.
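
The same kind of conversion can be illustrated with standard Java alone, assuming the IBM1047 EBCDIC charset is available in the JVM (it is on z/OS and in most full JDK installations):

import java.nio.charset.Charset;

public class EbcdicToString
{
    public static void main(String[] args)
    {
        // "HELLO" in EBCDIC (code page 1047)
        byte[] ebcdic = { (byte) 0xC8, (byte) 0xC5, (byte) 0xD3, (byte) 0xD3, (byte) 0xD6 };

        // Decode to a (Unicode) Java String using the JVM's built in EBCDIC charset
        String text = new String(ebcdic, Charset.forName("IBM1047"));
        System.out.println(text);
    }
}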

Flags

Flag bits within a byte are converted to a boolean value indicating whether the bit is set.
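
Conceptually that is just a bit test; a trivial plain-Java sketch (not the EasySMF:JE implementation):

public class FlagBits
{
    public static void main(String[] args)
    {
        byte flags = (byte) 0b1010_0000; // example flag byte from a record

        // Test individual bits and expose them as booleans
        boolean bit0 = (flags & 0x80) != 0; // high order bit
        boolean bit1 = (flags & 0x40) != 0;

        System.out.println("bit 0: " + bit0 + ", bit 1: " + bit1);
    }
}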

CICS Statistics

Reading CICS statistics is done in much the same way as reading sections from other records using the API. Sections of a specific type are returned in a List<E> of that type. If there are no sections of the type in the record an empty List is returned. This allows you to iterate over the sections without explicitly checking whether the sections exist in the record – an empty list will iterate 0 times.

Example

The following code reads all FileControlStatistics sections from type 110 SMF records from the DD INPUT.

 try (SmfRecordReader reader = 
         SmfRecordReader
             .fromDD("INPUT")
             .include(110, Smf110Record.SMFSTSTY))
 {
     for (SmfRecord record : reader)
     {
         Smf110Record r110 = new Smf110Record(record);
         for (FileControlStatistics fc : 
             r110.fileControlStatistics())
         {
             //...   process FileControlStatistics sections here
         }
     }
 }

CICS Performance Monitoring

Accessing data from CICS monitoring performance records is slightly different to other SMF records because the data needs to be accessed using a Dictionary.

Dictionary records are handled automatically, however you cannot access the data from a record before a related dictionary record has been seen. You can check whether a dictionary record is available using Smf110Record.haveDictionary() or simply concatenate all required dictionary records ahead of the data records in the input data.

Specific fields are defined by name and type. Performance records are then read from the SMF record, and the field values are accessed using getField(…) methods or variations.

Example

 ByteStringField transactionField = ByteStringField.define("DFHTASK","C001");
 TimestampField startField = TimestampField.define("DFHCICS","T005");
 TimestampField stopField = TimestampField.define("DFHCICS","T006");
 ClockField dispatchField = ClockField.define("DFHTASK","S007");

 try (SmfRecordReader reader = 
         SmfRecordReader
             .fromDD("INPUT")
             .include(110, Smf110Record.SMFMNSTY))
 {
     for (SmfRecord record : reader)
     {
         Smf110Record r110 = new Smf110Record(record); 
         if (r110.haveDictionary())
         {
             for (PerformanceRecord perfdata :
                 r110.performanceRecords())
             {
                 String txName = perfdata.getField(transactionField);
                 ZonedDateTime start = perfdata.getField(startField);
                 ZonedDateTime stop = perfdata.getField(stopField);
                 double dispatch = perfdata.getFieldTimerSeconds(dispatchField);

                 //...  process data
             }
         }
     }
 }

Complete CICS Statistics reporting sample

These samples are designed to show how to use the API, not to suggest items that you should specifically be reporting. However comments about their relevance are welcome.

import java.io.*;
import java.util.*;
import static java.util.Comparator.comparing;

import com.blackhillsoftware.smf.*;
import com.blackhillsoftware.smf.cics.*;
import com.blackhillsoftware.smf.cics.statistics.FileControlStatistics;

public class CicsFileStatistics 
{
    public static void main(String[] args) throws IOException 
    {
        Map<String, Map<String, FileData>> applids = 
                new HashMap<String, Map<String, FileData>>();

        try (SmfRecordReader reader = 
                args.length == 0 ? 
                SmfRecordReader.fromDD("INPUT") :
                SmfRecordReader.fromStream(new FileInputStream(args[0]))) 
        {
            reader.include(110, Smf110Record.SMFSTSTY);
            for (SmfRecord record : reader) 
            {
                Smf110Record r110 = new Smf110Record(record);

                Map<String, FileData> applidFiles = 
                        applids.computeIfAbsent(r110.stProductSection().smfstprn(),
                        files -> new HashMap<String, FileData>());

                for (FileControlStatistics fileStats : r110.fileControlStatistics()) 
                {
                    String entryName = fileStats.a17fnam();
                    applidFiles.computeIfAbsent(entryName, 
                            x -> new FileData(entryName)).add(fileStats);
                }
            }
        }
        writeReport(applids);
    }

    private static void writeReport(Map<String, Map<String, FileData>> applidFiles) 
    {

        applidFiles.entrySet().stream()
            .filter(applid -> !applid.getValue().isEmpty())
            .sorted((a, b) -> a.getKey().compareTo(b.getKey()))
            .forEachOrdered(applid -> 
            {
                // Headings
                System.out.format("%n%-8s", applid.getKey());

                System.out.format("%n%-8s %12s %12s %12s %12s %12s %12s %12s %12s%n%n", 
                        "ID", 
                        "Gets", 
                        "Get Upd",
                        "Browse", 
                        "Adds", 
                        "Updates", 
                        "Deletes", 
                        "Data EXCP", 
                        "Index EXCP");

                applid.getValue().entrySet().stream()
                    .map(x -> x.getValue())
                    .sorted(comparing(FileData::getTotalExcps)
                            .reversed())
                    .forEachOrdered(fileInfo -> 
                    {
                        // write detail line
                        System.out.format("%-8s %12d %12d %12d %12d %12d %12d %12d %12d%n", 
                                fileInfo.getId(),
                                fileInfo.getGets(), 
                                fileInfo.getGetUpd(), 
                                fileInfo.getBrowse(),
                                fileInfo.getAdds(), 
                                fileInfo.getUpdates(), 
                                fileInfo.getDeletes(),
                                fileInfo.getDataExcps(), 
                                fileInfo.getIndexExcps());
                    });
                });

    }

    private static class FileData 
    {
        public FileData(String fileId)
        {
            this.id = fileId;
        }

        public void add(FileControlStatistics fileStatistics) 
        {
            gets += fileStatistics.a17dsrd();
            getupd += fileStatistics.a17dsgu();
            browse += fileStatistics.a17dsbr();
            add += fileStatistics.a17dswra();
            update += fileStatistics.a17dswru();
            delete += fileStatistics.a17dsdel();
            dataexcp += fileStatistics.a17dsxcp();
            indexexcp += fileStatistics.a17dsixp();
            totalexcp += fileStatistics.a17dsxcp() 
                    + fileStatistics.a17dsixp();
        }

        public String getId() 
        {
            return id;
        }

        public long getGets() 
        {
            return gets;
        }

        public long getGetUpd() 
        {
            return getupd;
        }

        public long getBrowse() 
        {
            return browse;
        }

        public long getAdds() 
        {
            return add;
        }

        public long getUpdates() 
        {
            return update;
        }

        public long getDeletes() 
        {
            return delete;
        }

        public long getDataExcps() 
        {
            return dataexcp;
        }

        public long getIndexExcps() 
        {
            return indexexcp;
        }

        public long getTotalExcps() 
        {
            return totalexcp;
        }

        private String id;
        private long gets = 0;
        private long getupd = 0;
        private long browse = 0;
        private long add = 0;
        private long update = 0;
        private long delete = 0;
        private long dataexcp = 0;
        private long indexexcp = 0;
        private long totalexcp = 0;
    }
}

Complete CICS Transaction Monitoring reporting sample

import java.io.*;
import java.time.*;
import java.util.*;
import static java.util.Collections.reverseOrder;
import static java.util.Comparator.comparing;

import com.blackhillsoftware.smf.*;
import com.blackhillsoftware.smf.cics.*;
import com.blackhillsoftware.smf.cics.monitoring.*;
import com.blackhillsoftware.smf.cics.monitoring.fields.*;

public class CicsTransactionSummary 
{

    public static void main(String[] args) throws IOException 
    {
        Map<String, Map<String, TransactionData>> applids = 
                new HashMap<String, Map<String, TransactionData>>();

        ByteStringField transaction = ByteStringField.define("DFHTASK", "C001");

        int noDictionary = 0;

        try (SmfRecordReader reader = 
                args.length == 0 ? 
                SmfRecordReader.fromDD("INPUT") :
                SmfRecordReader.fromStream(new FileInputStream(args[0]))) 
        {     
            reader.include(110, Smf110Record.SMFMNSTY);
            for (SmfRecord record : reader) 
            {
                Smf110Record r110 = new Smf110Record(record);

                if (r110.haveDictionary()) 
                {
                    Map<String, TransactionData> applidTransactions = 
                        applids.computeIfAbsent(
                            r110.mnProductSection().smfmnprn(), 
                            transactions -> new HashMap<String, TransactionData>());

                    for (PerformanceRecord mn : r110.performanceRecords()) 
                    {
                        String txName = mn.getField(transaction);
                        applidTransactions.computeIfAbsent(
                                txName, 
                                x -> new TransactionData(txName)).add(mn);
                    }
                } else 
                {
                    noDictionary++;
                }
            }
        }

        writeReport(applids);

        if (noDictionary > 0) 
        {
            System.out.format(
                    "%n%nSkipped %s records because no applicable dictionary was found.", 
                    noDictionary);
        }

    }

    private static void writeReport(Map<String, Map<String, TransactionData>> transactions) 
    {
        transactions.entrySet().stream()
            .sorted((a, b) -> a.getKey().compareTo(b.getKey()))
            .forEachOrdered(applid -> 
            {
                // Headings
                System.out.format("%n%-8s", applid.getKey());

                System.out.format("%n%-4s %15s %15s %15s %15s %15s %15s %15s %15s %15s%n%n", 
                        "Name", 
                        "Count", 
                        "Elapsed",
                        "Avg Elapsed", 
                        "CPU", 
                        "Avg CPU", 
                        "Dispatch", 
                        "Avg Disp.", 
                        "Disp Wait", ""
                                + "Avg Disp Wait");

                applid.getValue().entrySet().stream()
                    .map(x -> x.getValue())
                    .sorted(comparing(TransactionData::getCpu, reverseOrder())
                            .thenComparing(TransactionData::getCount, reverseOrder()))
                    .forEachOrdered(txInfo -> 
                    {
                        // write detail line
                        System.out.format("%-4s %15d %15f %15f %15f %15f %15f %15f %15f %15f%n", 
                                txInfo.getName(),
                                txInfo.getCount(), 
                                txInfo.getElapsed(), 
                                txInfo.getAvgElapsed(), 
                                txInfo.getCpu(),
                                txInfo.getAvgCpu(), 
                                txInfo.getDispatch(), 
                                txInfo.getAvgDispatch(),
                                txInfo.getDispatchWait(), 
                                txInfo.getAvgDispatchWait());

                    });
            });

    }

    private static class TransactionData 
    {
        public TransactionData(String name) 
        {
            this.name = name;
        }

        public void add(PerformanceRecord perfdata) 
        {
            count++;
            elapsed += Utils.ToSeconds(
                    Duration.between(perfdata.getField(start), perfdata.getField(stop)));
            dispatch += perfdata.getFieldTimerSeconds(dispatchField);
            dispatchWait += perfdata.getFieldTimerSeconds(dispatchWaitField);
            cpu += perfdata.getFieldTimerSeconds(cpuField);
        }

        public String getName() 
        {
            return name;
        }

        public int getCount() 
        {
            return count;
        }

        public double getElapsed() 
        {
            return elapsed;
        }

        public double getDispatch() 
        {
            return dispatch;
        }

        public double getDispatchWait() 
        {
            return dispatchWait;
        }

        public double getCpu() 
        {
            return cpu;
        }

        public Double getAvgElapsed() 
        {
            return count != 0 ? elapsed / count : null;
        }

        public Double getAvgDispatch() 
        {
            return count != 0 ? dispatch / count : null;
        }

        public Double getAvgDispatchWait() 
        {
            return count != 0 ? dispatchWait / count : null;
        }

        public Double getAvgCpu() 
        {
            return count != 0 ? cpu / count : null;
        }

        static TimestampField start = TimestampField.define("DFHCICS", "T005");
        static TimestampField stop = TimestampField.define("DFHCICS", "T006");
        static ClockField dispatchField = ClockField.define("DFHTASK", "S007");
        static ClockField dispatchWaitField = ClockField.define("DFHTASK", "S102");
        static ClockField cpuField = ClockField.define("DFHTASK", "S008");

        private String name;
        private int count = 0;
        private double elapsed = 0;
        private double dispatch = 0;
        private double dispatchWait = 0;
        private double cpu = 0;
    }
}

Filed Under: EasySMF News, Java, Java SMF

EasySMF News: April 2014

April 15, 2014 by Andrew

In this issue:

  1. Compressing data for transfer
  2. Loading data from a z/OS batch job
  3. Read-only Repository
  4. Extended wild card support
  5. New Time Selection Controls
  6. Performance Improvement
  7. Request for data

It has been quite some time since the last EasySMF News, and there have been some significant updates in that time. Version 2.0.4 in March included a large number of new features and bug fixes. Version 2.0.5 includes improved support for loading compressed data and sample JCL to load data using a z/OS batch job.

Feedback on the various JCL samples included with version 2.0.5 would be welcome. Every z/OS site is slightly different, so sometimes samples need a bit of tweaking to get them working. If there was something you had to change that would be worth noting in the documentation (or just plain errors!), please let me know.

Compressing Data for Transfer

EasySMF now supports loading compressed data in Gzip format using the built in FTP function. SMF data is very compressible – typical compression is about 10:1 – so this could significantly reduce the time and network traffic to load the data.

Sample jobs are supplied to compress data using Gzip from the IBM Ported Tools with Dovetailed Technologies Co:Z Toolkit, and using a Java program with the IBM JZOS Batch Toolkit. Any other software that produces Gzip format output should also work – as long as the compressed data includes the RDW with the SMF data. Including the RDW proved to be the difficult bit when testing various tools.
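
To make the RDW point concrete, here is a rough sketch of that kind of Java approach. It is not the supplied sample job; it assumes the standard JZOS record-mode ZFile API, reads SMF records from a DD named SMFIN (a name chosen for this sketch), rebuilds a 4 byte RDW in front of each record, and gzips the result to a Unix file.

import java.io.*;
import java.util.zip.GZIPOutputStream;
import com.ibm.jzos.ZFile;

// Sketch: copy SMF records from DD SMFIN to a gzipped Unix file,
// reconstructing the RDW in front of each record so tools that need
// the variable length record boundaries (like EasySMF) can read the data.
public class GzipSmfWithRdw
{
    public static void main(String[] args) throws Exception
    {
        try (GZIPOutputStream out = new GZIPOutputStream(
                new BufferedOutputStream(new FileOutputStream("/tmp/smfdata.gz"))))
        {
            ZFile in = new ZFile("//DD:SMFIN", "rb,type=record,noseek");
            try
            {
                byte[] record = new byte[in.getLrecl()];
                int length;
                while ((length = in.read(record)) >= 0)
                {
                    // RDW: 2 byte length including the 4 byte RDW itself,
                    // followed by 2 bytes of zeros
                    int rdwLength = length + 4;
                    out.write((rdwLength >> 8) & 0xFF);
                    out.write(rdwLength & 0xFF);
                    out.write(0);
                    out.write(0);
                    out.write(record, 0, length);
                }
            }
            finally
            {
                in.close();
            }
        }
    }
}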

Zip format is also supported if the data is loaded from a file on the PC e.g. using EasySMFLoad. That means that Zip is one of the supported formats for loading from a z/OS batch job…

Loading data from a z/OS batch job

Another sample job demonstrates loading data into EasySMF using a z/OS batch job. The process uses the Dovetailed Technologies Hybrid Batch products (Co:Z Launcher and Co:Z Dataset Pipes). The messages and return code from the EasySMFLoad command line program appear in the z/OS batch job output.

EasySMFLoad is invoked using SSH. The data can be transferred over SSH and optionally compressed in transit.

Read only repository

Read only access to the SMF data repository is now supported. This could be useful if you share the repository between multiple people.

Read-write access is still required for some functions, e.g. managing the data and if repository changes are required for a new version of EasySMF.

If repository changes are required for a new version of EasySMF they are normally made the first time the repository is opened with the new version. (You will receive a warning if the changes are not compatible with older versions.) If the repository is read only the new version of EasySMF will not be able to use it until it is opened by someone with write access.

Extended wildcard support

Wildcard and regular expressions are now supported for all the text based report parameters. This is particularly useful for service and report class reports, where you can now use regular expressions to exclude particular service or report classes. For example, the regular expression:

-/TEST/

will exclude anything matching *TEST*. To exclude only TEST but not TESTA, MYTEST etc. you need to add anchors to the start and end of the string:

-/^TEST$/

New time selection controls

The time selection controls have been rewritten. A few bugs have been fixed and some new features added:

  • A button to zoom out to a wider view of the data. This is particularly useful if you have used the mouse to zoom in to a chart, and want to return to a wider view, but not the whole time range you zoomed in from. The button extends the time range to 3 times the current range.
  • Additional predefined times. In addition to Today, This Week, This Month you can now select Yesterday, Last Week, Last Month.

Performance improvement

There was a bug prior to version 2.0.4 that resulted in unnecessary data being read when creating reports. This particularly affected reports where the reporting period was small compared with the period in the SMF dataset e.g. if you had monthly SMF data but ran a report for a day, or if you ran a report for a short running job.

Version 2.0.4 fixes that bug. Updating from 2.0.3 or earlier requires write access to the repository the first time EasySMF runs, and will scan the data to update information in the repository catalog. If you installed EasySMFLoad on another PC to load data it also needs to be updated.

Request for data

I’m looking for some type 30 data that includes the new Counter section. That is from z/OS 2.1, with Hardware Instrumentation Services active and SMF30COUNT specified in SMFPRMxx. Some z/OS 2.1 type 113 records would also be useful for some new reports under development. At this stage I don’t need a lot of data – a few hours from a test LPAR would be very helpful.

EasySMF Evaluations

EasySMF has a 30 day evaluation period. You can download it and start using it immediately.

Trial Extensions

New releases of EasySMF automatically provide a 7 day “re-evaluation” period if you have already used the 30 day evaluation so you can try out new features. Trial extensions can also be arranged – please contact info@blackhillsoftware.com if required.

Get the current version of EasySMF

Filed Under: EasySMF News

EasySMF News: September 2013

September 25, 2013 by Andrew

In this issue:

  1. EasySMF Version 2 Released
  2. Report Spotlight: Job Status During Interval

Filed Under: EasySMF News

