IBM Parallel Sysplex
In computing, a Parallel Sysplex is a cluster of IBM mainframes acting together as a single system image with z/OS. Used for disaster recovery, Parallel Sysplex combines data sharing and parallel computing to allow a cluster of up to 32 systems to share a workload for high performance and high availability.
Sysplex
In 1990, IBM introduced the concept of a Systems Complex, commonly called a Sysplex, with MVS/ESA SP V4.1. A Sysplex allows authorized components in up to eight logical partitions (LPARs) to communicate and cooperate with each other using the XCF protocol.
Components of a Sysplex include:
- A common time source to synchronize all member systems' clocks. This can involve either a Sysplex Timer (model 9037) or the Server Time Protocol (STP)
- Global Resource Serialization (GRS), which allows multiple systems to access the same resources concurrently, serializing where necessary to ensure exclusive access
- Cross System Coupling Facility (XCF), which allows systems to communicate peer-to-peer
- Couple Data Sets (CDS)
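The XCF group-services model described above, in which members join a named group and exchange point-to-point or broadcast messages, can be illustrated with a small simulation. The following Python sketch uses invented class and method names purely for illustration; the real z/OS interfaces are assembler macro services such as IXCJOIN and IXCMSGO.

```python
# Illustrative simulation of XCF-style group messaging between Sysplex
# members. All names here are invented; the real z/OS interfaces are
# assembler macro services (e.g. IXCJOIN to join a group, IXCMSGO to send).

class SysplexGroup:
    """A named XCF group that member systems join to exchange messages."""
    def __init__(self, name):
        self.name = name
        self.members = {}          # member name -> inbox (list of messages)

    def join(self, member_name):
        self.members[member_name] = []

    def send(self, sender, target, payload):
        # Point-to-point message from one group member to another.
        self.members[target].append((sender, payload))

    def broadcast(self, sender, payload):
        # Deliver to every member of the group except the sender.
        for name, inbox in self.members.items():
            if name != sender:
                inbox.append((sender, payload))

group = SysplexGroup("GRSGROUP")
group.join("SYSA")
group.join("SYSB")
group.broadcast("SYSA", "status: healthy")
print(group.members["SYSB"])   # [('SYSA', 'status: healthy')]
```

Services such as ARM and SFM build on exactly this kind of group membership and messaging to detect and react to the failure of a member.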
Users of a (base) Sysplex include:
- Console services – allowing one to merge multiple MCS consoles from the different members of the Sysplex, providing a single system image for Operations
- Automatic Restart Manager (ARM) – Policy-driven automatic restart of failed jobs or started tasks, either on the same system (if it is available) or on another LPAR in the Sysplex
- Sysplex Failure Manager (SFM) – Policy that specifies automated actions to take on certain failures, such as the loss of a Sysplex member, or when reconfiguring systems
- Workload Manager (WLM) – Policy based performance management of heterogeneous workloads across one or more z/OS images or even on AIX
- Global Resource Serialization (GRS) - Communication – allows use of XCF links instead of dedicated channels for GRS, and Dynamic RNLs
- Tivoli OPC – Hot standby support for the controller
- RACF (IBM's mainframe security software product) – Sysplex-wide RVARY and SETROPTS commands
- PDSE file sharing
- Multisystem VLFNOTE, SDUMP, SLIP, DAE
- Resource Measurement Facility (RMF) – Sysplex-wide reporting
- CICS – uses XCF to provide better performance and response time than using VTAM for transaction routing and function shipping.
- zFS – Using XCF communication to access data across multiple LPARs
Parallel Sysplex
IBM introduced[1] the Parallel Sysplex in April 1994, with the addition of the 9674[2] Coupling Facility (CF), new S/390 models,[3][4][5] upgrades to existing models, coupling links for high-speed communication, and MVS/ESA SP V5.1[6] operating system support.[7]
The Coupling Facility (CF) may reside on a dedicated stand-alone server configured with processors that can run Coupling Facility control code (CFCC), as integral processors on the mainframes themselves configured as ICFs (Internal Coupling Facilities), or, less commonly, as normal LPARs. The CF contains Lock, List, and Cache structures that provide serialization, message passing, and buffer consistency across multiple LPARs.[8]
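As a rough illustration of how a CF Lock structure serializes access between LPARs, the Python sketch below models shared and exclusive lock requests. All names are hypothetical; real CF structures are managed by CFCC and accessed through z/OS cross-system extended services (XES), not through application code like this.

```python
# Toy model of a Coupling Facility lock structure: LPARs request named
# locks; the CF grants shared or exclusive access and rejects conflicting
# requests (real requesters would then wait and retry, or be suspended).

class LockStructure:
    def __init__(self):
        self.holders = {}     # resource name -> (mode, set of holder LPARs)

    def request(self, lpar, resource, mode):
        """mode is 'shared' or 'exclusive'; returns True if granted."""
        entry = self.holders.get(resource)
        if entry is None:
            self.holders[resource] = (mode, {lpar})
            return True
        held_mode, holders = entry
        if mode == "shared" and held_mode == "shared":
            holders.add(lpar)     # readers can coexist
            return True
        return False              # conflict with an existing holder

    def release(self, lpar, resource):
        mode, holders = self.holders[resource]
        holders.discard(lpar)
        if not holders:
            del self.holders[resource]

cf = LockStructure()
assert cf.request("LPAR1", "DB2.TABLESPACE.A", "shared")
assert cf.request("LPAR2", "DB2.TABLESPACE.A", "shared")
assert not cf.request("LPAR3", "DB2.TABLESPACE.A", "exclusive")
```

The List and Cache structures play analogous roles for shared queues and for keeping locally cached database buffers consistent across LPARs.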
The primary goal of a Parallel Sysplex is to provide data sharing capabilities, allowing multiple database instances direct read and write access to shared data. This can provide benefits such as:
- Help remove single points of failure within the server, LPAR, or subsystems
- Application Availability
- Single System Image
- Dynamic Session Balancing
- Dynamic Transaction Routing
- Scalable capacity
Databases running on the System z server that can take advantage of this include:
- IBM Db2
- IBM Information Management System (IMS).
- VSAM (with record-level sharing, VSAM/RLS)
- IDMS
- Adabas
- DataCom
- Oracle
Other components can use the Coupling Facility to help with system management, performance, or reduced hardware requirements. Collectively called "Resource Sharing", these uses include:
- Catalog – shared catalogs to improve performance by reducing I/O to a catalog data set on disk
- CICS – Using the CF to provide sharing and recovery capabilities for named counters, data tables, or transient data
- DFSMShsm – Workload balancing for data migration workload
- GRS Star – Reduced CPU usage and improved response time for data set allocation. Automatic tape switching uses the GRS structure to share tape units between z/OS images
- Dynamic CHPID Management (DCM), and I/O priority management
- JES2 Checkpoint – Provides improved access to a multisystem checkpoint
- Operlog / Logrec – Merged multisystem logs for system management
- RACF – shared data set to simplify security management across the Parallel Sysplex
- WebSphere MQ – Shared message queues for availability and flexibility
- WLM – Supports the Intelligent Resource Director (IRD), which extends the z/OS Workload Manager to help manage CPU and I/O resources across multiple LPARs within the Parallel Sysplex. Functions include LPAR CPU management and multi-system enclave management for improved performance
- XCF Star – Reduced hardware requirements and simplified management of XCF communication paths
Major components of a Parallel Sysplex include:
- Coupling Facility (CF or ICF) hardware, allowing multiple processors to share, cache, update, and balance data access;
- Sysplex Timers (more recently, Server Time Protocol) to synchronize the clocks of all member systems;
- High speed, high quality, redundant cabling;
- Software (operating system services and, usually, middleware such as IBM Db2).
The Coupling Facility may be either a dedicated external system (a small mainframe, such as a System z9 BC, specially configured with only coupling facility processors) or integral processors on the mainframes themselves configured as ICFs (Internal Coupling Facilities).[9] For redundancy, especially in a production data sharing environment, it is recommended that a Parallel Sysplex have at least two CFs and/or ICFs, at least one of them external.[10] Server Time Protocol (STP) replaced the Sysplex Timers beginning in 2005 for System z mainframe models z990 and newer.[11] A Sysplex Timer is a physically separate piece of hardware from the mainframe,[12] whereas STP is an integral facility within the mainframe's microcode.[13] With STP and ICFs it is possible to construct a complete Parallel Sysplex installation with just two connected mainframes. Moreover, a single mainframe can contain the internal equivalent of a complete physical Parallel Sysplex, which is useful for application testing and development.[14]
The IBM Systems Journal dedicated a full issue to all the technology components.[15]
Server Time Protocol
Maintaining accurate time is important in computer systems. For example, in a transaction-processing system the recovery process reconstructs transaction data from log files. If time stamps are used for transaction logging, and the time stamps of two related transactions are transposed from their actual sequence, the reconstructed database may not match its state before the failure. Server Time Protocol (STP) provides a single time source across multiple servers. Based on Network Time Protocol concepts, one of the System z servers is designated by the HMC as the primary time source (Stratum 1); it sends timing signals to Stratum 2 servers over coupling links, and the Stratum 2 servers in turn send timing signals to Stratum 3 servers. For availability, a second server can be designated as a backup time source, and a third as an Arbiter that helps the Backup Time Server determine whether it should take over the Primary role during exception conditions.
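The ordering hazard that motivates a single time source can be shown in a few lines of Python. This is an illustrative toy with made-up log records, not STP itself:

```python
# Why a single time source matters for log-based recovery: if two
# systems' clocks disagree, replaying a merged log in timestamp order
# can apply related updates in the wrong sequence.

# Log records as (timestamp, operation). SYSB's clock runs a few units
# slow, so an operation that really happened LATER gets an EARLIER stamp.
log_sysa = [(100, "debit account X")]
log_sysb = [(98, "credit account Y")]   # actually happened after the debit

merged = sorted(log_sysa + log_sysb)    # recovery replays in stamp order
replay = [op for _, op in merged]
print(replay)   # ['credit account Y', 'debit account X'] -- wrong order
```

With synchronized clocks the stamps would reflect the true sequence, and replay would reconstruct the database state that existed before the failure.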
STP has been available on System z servers since 2005.
More information on STP is available in “Server Time Protocol Planning Guide”.[16]
Geographically Dispersed Parallel Sysplex
Geographically Dispersed Parallel Sysplex (GDPS) is an extension of Parallel Sysplex in which the member systems may be located in different cities. GDPS offers both single-site and multi-site configurations:[17]
- GDPS HyperSwap Manager: This is based on synchronous Peer to Peer Remote Copy (PPRC) technology for use within a single data center. Data is copied from the primary storage device to a secondary storage device. In the event of a failure on the primary storage device, the system automatically makes the secondary storage device the primary, usually without disrupting running applications.
- GDPS Metro: This is based on synchronous data mirroring technology (PPRC) that can be used on mainframes 200 kilometres (120 mi) apart. In a two-system model, both sites can be administered as if they were one system. In the event of a failure of a system or storage device, recovery can occur automatically, with limited or no data loss.
- GDPS Global - XRC: This is based on asynchronous Extended Remote Copy (XRC) technology with no restrictions on distance. XRC copies data on storage devices between two sites such that only a few seconds of data may be lost in the event of a failure. If a failure does occur, a user must initiate the recovery process. Once initiated, the process is automatic in recovering from secondary storage devices and reconfiguring systems.
- GDPS Global - GM: This is based on asynchronous IBM Global Mirror technology with no restrictions on distance. It is designed for recovery from a total failure at one site. It will activate secondary storage devices and backup systems.
- GDPS Metro Global - GM: This is a configuration for systems with more than two systems/sites, for purposes of disaster recovery. It is based on GDPS Metro together with GDPS Global - GM.
- GDPS Metro Global - XRC: This is a configuration for systems with more than two systems/sites for purposes of disaster recovery. It is based on GDPS Metro together with GDPS Global - XRC.
- GDPS Continuous Availability: This is a disaster recovery / continuous availability solution, based on two or more sites, separated by unlimited distances, running the same applications and having the same data to provide cross-site workload balancing. IBM Multi-site Workload Lifeline, through its monitoring and workload routing, plays an integral role in the GDPS Continuous Availability solution.
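The trade-off between the synchronous (PPRC/Metro) and asynchronous (XRC/Global Mirror) families above can be sketched as follows. This is an illustrative Python model with invented names; the real products mirror data at the storage-subsystem level, not in application software.

```python
# Illustrative contrast between synchronous (PPRC/Metro-style) and
# asynchronous (XRC/Global Mirror-style) disk mirroring. Simplified toy.

class SyncMirror:
    """Write completes only after both primary and secondary are updated:
    zero data loss, but every write pays the round trip to the remote
    site, which is why synchronous mirroring is distance-limited."""
    def __init__(self):
        self.primary, self.secondary = [], []

    def write(self, record):
        self.primary.append(record)
        self.secondary.append(record)   # acknowledged before returning

class AsyncMirror:
    """Write completes locally; the secondary catches up later, so a
    primary-site failure can lose whatever is still queued (a few
    seconds of data in XRC's case), but distance is unrestricted."""
    def __init__(self):
        self.primary, self.secondary, self.queue = [], [], []

    def write(self, record):
        self.primary.append(record)
        self.queue.append(record)       # drained asynchronously

    def drain(self):
        self.secondary.extend(self.queue)
        self.queue.clear()

m = AsyncMirror()
m.write("txn1")
assert m.secondary == []     # a primary failure here would lose txn1
m.drain()
assert m.secondary == ["txn1"]
```

This is why the Metro configurations advertise "limited or no data loss" while the Global configurations accept a small loss window in exchange for unlimited distance.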
References
[edit]- ^ "S/390 Parallel Sysplex Overview". Announcement Letters. IBM. April 6, 1994. 194-080.
- ^ "IBM S/390 Coupling Facility 9674 Model C01". Announcement Letters. IBM. April 6, 1994. 194-082.
- ^ "S/390 Parallel Sysplex Offering". Announcement Letters. IBM. April 6, 1994. 194-081.
- ^ "IBM ES/9000 Water-Cooled Processor Enhancements: New Ten-Way Processor, Parallel Sysplex Capability, and Additional Functions". Announcement Letters. IBM. April 6, 1994. 194-084.
- ^ "IBM Enterprise System/9000 Air-Cooled Processors Enhanced with Additional Functions and Parallel Sysplex Capability". Announcement Letters. IBM. April 6, 1994. 194-084.
- ^ "IBM MVS/ESA SP Version 5 Release 1 and OpenEdition Enhancements". Announcement Letters. IBM. April 6, 1994. 294-152.
- ^ System/390 Parallel Sysplex Performance (PDF) (Fourth ed.). International Business Machines Corporation. December 1998. SG24-4356-03. Archived from the original (PDF) on 2011-05-18. Retrieved 2007-09-17.
- ^ David Raften (November 2019). "Coupling Facility Configuration Options". Positioning paper. IBM. ZSW01971USEN.
- ^ "Coupling Facility Definition". PC Magazine.com. Archived from the original on December 2, 2008. Retrieved April 13, 2009.
- ^ "Coupling Facility" (PDF). Archived from the original (PDF) on July 17, 2011. Retrieved April 13, 2009.
- ^ "Migrate from a Sysplex Timer to STP". IBM. Retrieved April 15, 2009.
- ^ "Sysplex Timer". Symmetricom. Retrieved April 15, 2009.
- ^ "IBM Server Time Protocol (STP)". IBM. Archived from the original on June 13, 2008. Retrieved April 15, 2009.
- ^ Johnson, John E. "MVS Boot Camp: IBM Health Checker". z/Journal. Retrieved April 15, 2009.[permanent dead link ]
- ^ "IBM's System Journal on S/390 Parallel Sysplex Clusters". Archived from the original on 9 March 2012. Retrieved 24 April 2017.
- ^ Server Time Protocol Planning Guide (PDF) (Fourth ed.). International Business Machines Corporation. June 2013. SG24-7280-03.
- ^ Ahmad, Riaz (March 5, 2009). GDPS 3.6 Update & Implementation. Austin, TX: SHARE. Retrieved April 17, 2009.[permanent dead link]