Mainframe Infrastructure Plan for Open Mainframe Project

Why we are doing this project

A common request from open-source projects that want to support the mainframe is access to mainframe infrastructure. While such infrastructure is readily available in many architecture ecosystems, for the mainframe it rarely is. Reasons include:

  • Cost of acquiring and hosting the infrastructure and software
  • Infrastructure owned by vendors is limited to those vendors' employees
  • Infrastructure owned by end users is in production use and often disconnected from direct Internet access
  • Some companies do not allow their employees to use their mainframes for open-source projects

There have been efforts in the past, such as the LinuxONE Community Cloud, but that effort was limited to s390x Linux and did not include z/OS.

As a vendor-neutral organization that provides a home for open source on the mainframe, having such hardware available for any open-source project is critical to the growth and sustainability of the open-source ecosystem on the mainframe. Furthermore, having an environment for testing and development helps identify security vulnerabilities that may be specific to the s390x architecture or z/OS operating system.

Infrastructure being acquired

Broadcom has agreed to donate the following hardware at no cost to the Open Mainframe Project. The hardware will be configured to support all current Open Mainframe Project hosted projects and to allow for growth; hardware upgrade options are outlined below.

IBM Z15-T02 model A02

Details are in the IBM documentation for the z15 8562-T02.

Product	Description	Qty	
8562-T02	IBM z15	1

137	Fanout Airflow PCIe	10
175	PCIe Fanout	2
421	PCIe Interconnect	2
426	OSA-Express6S 1000BASE-T 2 ports	4
427	FICON Express16S+ LX 2 ports	3
451	zHyperLink Express1.1 2 ports	1
505	Model T02	1
629	200-208V 30/60A, 3 Ph PDU	2
631	Ethernet Switch	2
649	Max4	1
666	CPC PSU	2
1500	64 GB Memory	1
1643	64 GB Mem DIMM (5/feat)	1
2271	CPC1 Reserve	1
3863	CPACF Enablement	1
4021	PCIe+ I/O Drawer	1
4039	A Frame Air	1
4800	CP-A	2
4853	2-Way Processor A02	1
5010	A02 Capacity Marker	1
6835	OPO Sales Flag	1
7899	Bottom Exit Cabling	1
7928	Top Exit Cabling w/o Tophat	1
7952	30A/250V 3Ph w/Hubbell	2
Serial: 0200472F8

DS8K Storage System

Product details for the DS8910:

Description: MACHINE TYPE 5334 - Model NEW
Type-model: 5334-993
Serial number: 75LAV50
System type-number: 5334-9A0D49K
Category: Hardware
Status: Installed
Install date: 23 Jul 2020
Warranty expiration date: 22 Jul 2024
Using customer number: 0935393
Maintenance customer number: 0935393
Owning customer number: 6040257
Contract status: On warranty without MA
Installed features

Qty	Feature	Description
1	0798	25.1 TB to 50 TB capacity
1	0934	IBM System z Indicator
1	0939	Customer Rack Field Merge
1	0991	Remote code load
1	1038	SPP208v30ANEMAL6-30P
1	1103	Kick Step
1	1303	I/O enclosure pair PCIe 3
8	1420	9 um Fibre cable LC
2	1450	40 m zHyperLink cable
1	1604	HPFE Gen 2 adapter card
1	1605	HPFE Gen 2 enclosure pair R9
1	1622	1.92 TB High capacity flash
2	1699	Flash enclosure filler set
1	1765	1U Keyboard-Display
1	1890	DS8000 LMC R9.0
2	3453	16 Gb 4-port LW FCP/FICON
2	3500	zHyperLink adapter
1	4341	8-core processor pair ind
1	4450	192 GB Sys Cache (8 core)
4	8151	BF - Up to 100 TB capacity
1	8300	zsS - Active
4	8351	zsS - Up to 100 TB capacity
1	AGAJ	Shipping and handling 993

18 TB usable storage; expandable to 62 TB without an expansion frame.

Virtual Tape System

VTS 3957 VEB. Details at

Activation Plan

To accommodate the Open Mainframe Project's current budget constraints, and to accept the hardware donation before Broadcom needs to move the hardware out of its current data center by the end of the calendar year, the plan is split into the following steps:

  1. Move current hardware from Broadcom to Marist College.
  2. Announce hardware donation and intention to activate.
  3. Install and activate the hardware.
  4. Define security requirements and implementation.
  5. Plan for the migration of the CBT, and any other, environments to the new box.
  6. Plan for opening infrastructure for community use.

Phase 1: Move current hardware from Broadcom to Marist College - COMPLETE

Moving the hardware was done as a service through Vicom Infinity, on behalf of IBM. This was completed in Q2 2022.

Phase 2: Announce hardware donation and intention to activate - COMPLETE

This was done at Open Mainframe Summit 2023, with the intent of driving more interest in funding.

The Open Mainframe Project Announces A New Mainframe Resource to Advance Mainframe Talent and Innovative Technologies - Open Mainframe Project

Phase 3: Install and activate the hardware.

Upon the Open Mainframe Project’s request, IBM will provide activation services and a Shop Z customer number, including unpacking and installing the hardware. Once that is complete, Vicom Infinity will configure the infrastructure for use by the Open Mainframe Project community.

Hardware Configuration

  • z/VM LPAR
    • z/OS Guest(s) - for individual projects
    • VSEn Guest(s) - for individual projects - FUTURE
    • Linux Guest(s) - for individual projects
    • z/VM Guest(s) - for individual projects
    • z/OS Guest - Sandbox ( maintenance and updates )
    • z/OS Guest - COBOL Programming Course
      • OMP COBOL system being used today
        • z/OS V2.4 running as a guest of z/VM on a z14
        • 1 CP assigned
        • 8 GB processing memory
        • 50 GB of participant usable disk storage
        • Enterprise COBOL V6.4
        • Db2 V12
        • CICS V5.5
        • z/OSMF for VSCode Zowe CLI connections
        • Public IP connection
      • Project to maintain
        • Needs registration system
        • Needs predefined user IDs to be assigned by registration system
        • Needs on-going system administration
    • z/OS Guest - Mainframe Open Education - FUTURE
    • z/OS Guest - CBT Tape
      • Current Configuration ( mirror here )
        • Model T02 with 2 general purpose processors (no zIIPs)
        • High Real Storage 8GB
        • Approx 1TB of DASD (2 EAV, 2 3390-3, 28 3390-9)
          • This does not include any SMP/E volumes used by Vicom
        • Current access is via TN3270 TLS (port 992) and non-TLS (port 2023) with SSH on port 2022
          • These ports can be changed for this implementation
        • Enable access for z/OSMF and support for Zowe and zExplorer
      • Add Virtual Tape
    • z/OS Guest - Zowe (ON HOLD)
    • Linux Guest(s) - for individual projects as requested


  • 30 combined z/OS or Linux guests
    • Based on A02 configuration with 2 IFLs and 4 processors
    • Proposed Z02 configuration
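Several of the z/OS guests above are expected to expose z/OSMF for Zowe CLI and editor connections. As a sketch of what a per-project connection profile could look like, assuming a Zowe CLI v2 team configuration (the host name, port, and profile name below are hypothetical placeholders; actual endpoints will be assigned at provisioning):

```json
{
  "$schema": "./zowe.schema.json",
  "profiles": {
    "omp-guest": {
      "type": "zosmf",
      "properties": {
        "host": "zos-guest.example.org",
        "port": 443,
        "rejectUnauthorized": true
      },
      "secure": ["user", "password"]
    }
  },
  "defaults": {
    "zosmf": "omp-guest"
  }
}
```

With something like this in a project's zowe.config.json, contributors could run commands such as `zowe zos-files list data-set "HLQ.*"` against their assigned guest.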

Other considerations

  • zIIPs (specialty engines) - FUTURE
    • Is there a project that requires or could exploit zIIPs? Wait for demand before enabling them
    • Consider if we get more than 4 CPUs enabled
    • Also might be helpful for Python and z/OSMF-related work
    • Can designate at a later time
  • Root passwords stored in a secure vault
  • Carving up storage array across LPARs/Guests
    • Aim for a consistent solution across everything
  • Each LPAR/guest will have its own dedicated IP address
    • For some, a DNS name may be worthwhile - is that a possibility?
      • e.g., if there are two CBT guests at some point, the second can be given its own name
  • Security needs to be factored into the environments.
    • local security
    • network security

Software Availability

When new versions of software come out, we will stop provisioning the previous version after six months, unless approved by the TAC on a case-by-case basis. Existing instances with previous versions will be required to update to the latest version after 18 months unless approved by the TAC on a case-by-case basis.
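The policy above implies two dates per software release. A small sketch of the arithmetic (GNU date assumed; the release date is a hypothetical placeholder):

```shell
# Hypothetical GA date of a new software version.
release_date="2024-01-15"

# Per the policy: stop provisioning the previous version 6 months after GA,
# and require existing instances to upgrade within 18 months.
stop_provisioning=$(date -d "$release_date + 6 months" +%Y-%m-%d)
must_upgrade_by=$(date -d "$release_date + 18 months" +%Y-%m-%d)

echo "Stop provisioning previous version: $stop_provisioning"
echo "Existing instances must upgrade by: $must_upgrade_by"
```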

Phase 4: Plan for opening infrastructure for community use.

This hardware is intended to be used for the development and testing of open-source projects for the mainframe. At launch, the infrastructure will support z/OS and s390x Linux but could include other mainframe operating systems over time.


Open source projects will be onboarded in three phases:

  1. Current Open Mainframe Project hosted projects ( e.g., Zowe, Feilong, COBOL Programming Course ) that either have no current infrastructure or use other third-party infrastructure.
  2. Select broader open source projects ( suggested no more than 10 ) to ensure the hardware properly supports their needs.
  3. Any open source project that expresses interest in leveraging the infrastructure.

The TAC can determine the specific timeline for each phase of onboarding.

Requirements for projects to leverage the hardware

Because of the nature of the hardware and licensing requirements for the software being used, the TAC will establish a program for open-source projects to leverage the hardware.

Standard Resources

Projects will be allotted a standard configuration as follows (exceptions: CBT Tape, COBOL Programming Course, and Zowe, which are detailed above):

  • Guest under z/VM or KVM LPAR ( z/OS, z/VM, Linux )
    • Linux distros: SUSE/openSUSE, Red Hat/Alma Linux, Ubuntu - latest versions
  • 2 Virtual CPUs
    • Enable SMT for Linux
  • 8GB RAM
  • SSH/SCP/SFTP access for z/OS and Linux ( via VPN? )
    • Prefer SSH without VPN on a unique port
    • The CBTTape LPAR uses port 2022 without VPN
  • TN3270 access for z/OS and z/VM ( via VPN? )
    • Prefer TN3270 without VPN on a unique port
    • The CBTTape LPAR uses port 992 with TLS without VPN
  • Disk Space: 128GB
    • Can scale up as needed and approved by the TAC
    • It is recommended that DASD not be shared between systems for user data, but sharing system volumes (e.g., sysres) may be desirable.
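If SSH without a VPN on a unique port is adopted, access can be kept simple for contributors with a client-side SSH config entry. A sketch (the host name and alias are hypothetical placeholders; the port matches the current CBT Tape setup):

```
# ~/.ssh/config fragment -- host name is hypothetical; port 2022 matches
# the current CBT Tape LPAR's SSH setup without VPN.
Host omp-cbttape
    HostName cbttape.example.org
    Port 2022
    User youruserid
```

With this in place, `ssh omp-cbttape` (and `scp`/`sftp` with the same alias) connects directly; TN3270 clients in the x3270 family can similarly reach the TLS TN3270 service with `c3270 L:cbttape.example.org:992`.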

Note: Requiring a VPN would require administrators to distribute the VPN client, troubleshoot VPN connections and clients, and maintain VPN credentials (user ID/password). This will come with a cost.

Projects may be able to have additional resources upon request and approval by the TAC.

Application requirements for open source projects

To join the program, open-source projects must complete an application to be considered. Applications will be submitted at

  • Name of project
  • URL to code repository
  • Project License
  • Names of primary maintainers/project leads
  • Architectures/operating systems currently supported
  • Why are you interested in supporting the mainframe?
  • Support z/OS, Linux on s390x, or both?
  • Resources requested outside of the standard resources provided ( to be defined ).
  • Will the infrastructure be needed for only a fixed period of time, or ongoing?
  • Any other comments or questions


The Service Provider will oversee the infrastructure, managing the overall resources and providing limited support to the Projects leveraging the infrastructure. Expectations of the service provider are:

  • Provision/de-provision guests
  • Overall systems resource management
  • Infrastructure maintenance
  • Documentation of infrastructure ( will be hosted in this site )

Develop a service level agreement with expectations for each of the above. Specifically:

  • Time frame from request to completion
  • Levels of communications, including when and how often, to whom
  • Overall availability of services
  • Return to service expectations
  • Currency of software maintenance and releases

Projects will generally have latitude to use the provided guest instances as makes sense for their community. Expectations of the individual project:

  • Define who specifically will have access to the mainframe. Credentials are to be shared via a secure vault.
  • Management of their own guest instances. The only support that can be provided by the Open Mainframe Project is to re-provision, but there may be community members who can assist with guest instance issues on a case-by-case basis.
  • Annual report of how the infrastructure is being used and any feedback.
  • Ensure the infrastructure is only being used for open-source development ( no commercial software development can occur on the instance ).

Projects using the infrastructure can submit support requests to the Service Provider, which will respond to issues within 1 business day.

APPENDIX: Future Expansion Possibilities ( not part of the current plan )

The current system as delivered supports four processors (Feature A02). Depending on initial and anticipated workloads, the system will likely require additional processors to support multiple workloads by different projects. We would want to do some analysis of the system over time to come up with better recommendations.

An initial SWAG at potential considerations for additional processing resources:

2 additional GCPs

Rationale: This will allow 4 GCPs to support workloads across the various z/OS partitions. The additional GCPs would enable the creation of z/OS instances and coupling facilities for true SYSPLEX environments, so that services like API ML from Zowe can be configured and tested in high-availability modes.

The need could also be alleviated with faster processors; this is not an immediate issue but something to consider in the future. A hardware upgrade to Max13 would be required.

Activate zIIP processors

Rationale: These processors are used in z/OS LPARs for the execution of Java and other zIIP-eligible workloads, which in general reduces the MSUs rated for the box. They will be needed when configuring zCX partitions to run workloads inside z/OS and to support Java-based workloads in z/OS.

Only critical for OpenShift workloads; this is a microcode upgrade.

8 IFLs

Rationale: Given the importance of Linux on Z as a workload, and that the workloads run there are generally more cloud-like, 8 IFLs would allow for a variety of Linux instances that could run K8s clusters with OCP as well as general/traditional Linux instances.

This is a nice-to-have and would be a microcode upgrade.


Additional memory will be sized based on the required workloads and initial estimates. We need to weigh the cost of additional memory at initial startup against requesting additional resources later, which can incur higher installation costs from service personnel.


The DS8K appears to have 18 TB of storage across 16 drives. The available storage will depend on the base setup and RAID options.

For reference, the CBT LPAR currently uses 365 GB and needs an estimated 20-30 GB more.

FCP is something to look at in the future.