MySQL Cluster

MySQL Cluster
Developer(s): Oracle
Initial release: November 2004
Stable release: 8.4.3 (October 16, 2024)[1]
Operating system: Cross-platform
Available in: English
Type: RDBMS
License: GNU General Public License (version 2, with linking exception) or commercial EULA
Website: [2]

MySQL Cluster, also known as MySQL NDB Cluster, is a technology providing shared-nothing clustering and auto-sharding for the MySQL database management system. It is designed to provide high availability and high throughput with low latency, while allowing for near-linear scalability.[3] MySQL Cluster is implemented through the NDB or NDBCLUSTER storage engine for MySQL ("NDB" stands for Network Database).

Architecture

MySQL Cluster is designed around a distributed, multi-master, ACID-compliant architecture with no single point of failure. It uses automatic sharding (partitioning) to scale out read and write operations on commodity hardware, and can be accessed via SQL and NoSQL APIs.

Replication

Internally, MySQL Cluster uses synchronous replication through a two-phase commit mechanism to guarantee that data is written to multiple nodes when a transaction commits. Two copies (known as replicas) of the data are required to guarantee availability. MySQL Cluster automatically creates “node groups” from the number of replicas and data nodes specified by the user. Updates are synchronously replicated between members of a node group to protect against data loss and to support fast failover between nodes. This intra-cluster replication differs from "MySQL Replication", which is asynchronous.

It is also possible to replicate asynchronously between clusters; this is sometimes referred to as "MySQL Cluster Replication" or "geographical replication". This is typically used to replicate clusters between data centers for IT disaster recovery or to reduce the effects of network latency by locating data physically closer to a set of users. Unlike standard MySQL replication, MySQL Cluster's geographic replication uses optimistic concurrency control and the concept of Epochs to provide a mechanism for conflict detection and resolution,[4] enabling active/active clustering between data centers.
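
As an illustration of how conflict handling can be configured for geographic replication, the sketch below uses the mysql.ndb_replication control table and an epoch-based conflict function as described in the MySQL NDB Cluster replication documentation; the database, table and values shown ('mydb', 't1', binlog_type 7) are illustrative assumptions, and the surrounding replication-channel setup is omitted:

    -- Control table read by NDB SQL nodes: decides how each table is
    -- binary-logged and which conflict function (if any) applies to it.
    CREATE TABLE IF NOT EXISTS mysql.ndb_replication (
        db          VARBINARY(63),
        table_name  VARBINARY(63),
        server_id   INT UNSIGNED,
        binlog_type INT UNSIGNED,
        conflict_fn VARBINARY(128),
        PRIMARY KEY USING HASH (db, table_name, server_id)
    ) ENGINE=NDB PARTITION BY KEY (db, table_name);

    -- Hypothetical table mydb.t1: log full rows as updates (binlog_type 7)
    -- and resolve conflicting writes with the epoch-based NDB$EPOCH()
    -- function, under which the primary cluster's version of a row wins.
    INSERT INTO mysql.ndb_replication
        (db, table_name, server_id, binlog_type, conflict_fn)
    VALUES
        ('mydb', 't1', 0, 7, 'NDB$EPOCH()');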

Starting with MySQL Cluster 7.2, synchronous replication between data centers is supported via the Multi-Site Clustering feature.[5]

Horizontal data partitioning (auto-sharding)

MySQL Cluster is implemented as a fully distributed multi-master database ensuring updates made by any application or SQL node are instantly available to all of the other nodes accessing the cluster, and each data node can accept write operations.

Data within MySQL Cluster (NDB) tables is automatically partitioned across all of the data nodes in the system. Partitioning is based on a hash of each table's primary key and is transparent to the end application. Clients can connect to any node in the cluster, and queries are automatically routed to the shards needed to satisfy the query or commit the transaction. MySQL Cluster supports cross-shard queries and transactions.

Users can define their own partitioning schemes. This allows developers to add “distribution awareness” to applications by partitioning on a sub-key that is common to all rows accessed by high-volume transactions, as in the sketch below. This ensures that the data used to complete a transaction is localized on the same shard, thereby reducing network hops.
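
A minimal sketch of both behaviours (the orders table and its columns are hypothetical): the first statement relies on the default hashing of the whole primary key, while the second narrows the partitioning key to customer_id so that all rows for one customer stay on the same shard.

    -- Default behaviour: rows are spread across the data nodes by hashing
    -- the full primary key.
    CREATE TABLE orders (
        order_id    BIGINT UNSIGNED NOT NULL,
        customer_id BIGINT UNSIGNED NOT NULL,
        amount      DECIMAL(10,2),
        PRIMARY KEY (order_id, customer_id)
    ) ENGINE=NDBCLUSTER;

    -- Distribution awareness: partition by a sub-key of the primary key so
    -- that a transaction touching one customer's rows stays on one shard.
    ALTER TABLE orders PARTITION BY KEY (customer_id);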

Hybrid storage

MySQL Cluster allows datasets larger than the capacity of a single machine to be stored and accessed across multiple machines.

MySQL Cluster maintains all indexed columns in distributed memory. Non-indexed columns can also be maintained in distributed memory or can be maintained on disk with an in-memory page cache. Storing non-indexed columns on disk allows MySQL Cluster to store datasets larger than the aggregate memory of the clustered machines.
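
For example, disk-based storage for non-indexed columns is declared per table after creating an undo log group and a tablespace; a minimal sketch follows (file names, sizes and the customer_notes table are hypothetical):

    -- Undo log group and tablespace backing the disk-resident columns.
    CREATE LOGFILE GROUP lg1
        ADD UNDOFILE 'undo_1.log'
        INITIAL_SIZE 128M
        ENGINE NDBCLUSTER;

    CREATE TABLESPACE ts1
        ADD DATAFILE 'data_1.dat'
        USE LOGFILE GROUP lg1
        INITIAL_SIZE 512M
        ENGINE NDBCLUSTER;

    -- The indexed primary key stays in distributed memory; the non-indexed
    -- notes column is stored on disk through the tablespace and served via
    -- an in-memory page cache.
    CREATE TABLE customer_notes (
        customer_id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
        notes       VARCHAR(1000)
    ) TABLESPACE ts1 STORAGE DISK ENGINE NDBCLUSTER;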

MySQL Cluster writes redo logs to disk for all data changes, and also checkpoints data to disk regularly. This allows the cluster to recover consistently from disk after a full cluster outage. Because the redo logs are written asynchronously with respect to transaction commit, a small number of transactions can be lost if the full cluster fails; however, this can be mitigated by using the geographic replication or multi-site clustering discussed above. The default asynchronous write delay is 2 seconds and is configurable. Normal single-point-of-failure scenarios do not result in any data loss, due to the synchronous data replication within the cluster.

When a MySQL Cluster table is maintained in memory, the cluster accesses disk storage only to write redo records and checkpoints. Because these writes are sequential and involve few random-access patterns, MySQL Cluster can achieve higher write throughput with limited disk hardware than a traditional disk-based caching RDBMS. This checkpointing of in-memory table data to disk can be disabled on a per-table basis if disk-based persistence is not needed, as in the sketch below.
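
Disabling checkpointing and redo logging for a given in-memory table is expressed through an NDB_TABLE option in the table comment; a minimal sketch (the session_cache table is hypothetical, and its contents would not survive a full cluster restart):

    -- Pure in-memory NDB table: no redo logging or local checkpoints are
    -- written to disk for its data.
    CREATE TABLE session_cache (
        session_id VARBINARY(64)   NOT NULL PRIMARY KEY,
        payload    VARBINARY(4096)
    ) ENGINE=NDBCLUSTER COMMENT='NDB_TABLE=NOLOGGING=1';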

Shared nothing

MySQL Cluster is designed to have no single point of failure. Provided that the cluster is set up correctly, any single node, system, or piece of hardware can fail without the entire cluster failing. Shared disk (SAN) is not required. Nodes can be interconnected via standard Ethernet, Gigabit Ethernet, InfiniBand, or SCI.

SQL and NoSQL APIs

As MySQL Cluster stores tables in data nodes rather than in the MySQL Server, the database can be accessed through multiple interfaces: SQL via a MySQL Server, and NoSQL APIs such as the native NDB API (C++), ClusterJ (Java), JPA, Memcached, Node.js/JavaScript and HTTP/REST.

MySQL Cluster Manager

Part of the commercial MySQL Cluster CGE, MySQL Cluster Manager is a tool designed to simplify the creation and administration of the MySQL Cluster CGE database by automating common management tasks, including online scaling, upgrades, backup/restore and reconfiguration. MySQL Cluster Manager also monitors and automatically recovers MySQL Server application nodes and management nodes, as well as the MySQL Cluster data nodes.

MySQL Ndb Operator

The open source MySQL Ndb Operator simplifies the deployment and operation of MySQL Cluster on a Kubernetes cluster. Ndb Operator deploys containerized MySQL Cluster Data, Management and SQL nodes in a number of StatefulSets with data stored in Persistent Volumes. Kubernetes mechanisms extend the high availability features of MySQL Cluster, for example automatically restoring HA redundancy after hardware failures by migrating pods to new hardware. Operating MySQL Cluster on Kubernetes allows a full stack of cloud native software to be operated in the same way on private or public clouds.

NDB Cluster

NDB Cluster is the distributed database system underlying MySQL Cluster. It can be used independently of a MySQL Server with users accessing the Cluster via the NDB API (C++). "NDB" stands for Network Database.

From the MySQL Server perspective, NDB Cluster is a storage engine for storing tables of rows.

From the NDB Cluster perspective, a MySQL Server instance is an API process connected to the Cluster. NDB Cluster can concurrently support access from other types of API processes, including Memcached, JavaScript/Node.js, Java, JPA and HTTP/REST. All API processes can operate on the same tables and data stored in the NDB Cluster.

MySQL Cluster uses the MySQL Server to provide the following capabilities on top of NDB Cluster:

  • SQL parsing / optimising / execution capability
    • Connectors to applications via JDBC, ODBC etc.
  • Cross-table join mechanism
  • User authentication and authorisation
  • Asynchronous data replication to other systems

All API processes including the MySQL Server use the NDBAPI[6] C++ client library to connect to the NDB Cluster and perform operations.

Implementation

MySQL Cluster uses three different types of nodes (processes):

  • Data node (ndbd/ndbmtd process): These nodes store the data. Tables are automatically sharded across the data nodes which also transparently handle load balancing, replication, failover and self-healing.
  • Management node (ndb_mgmd process): Used for configuration and monitoring of the cluster. They are required only to start or restart a cluster node. They can also be configured as arbitrators, but this is not mandatory (MySQL Servers can be configured as arbitrators instead).[7]
  • Application node or SQL node (mysqld process): A MySQL server (mysqld) that connects to all of the data nodes in order to perform data storage and retrieval. This node type is optional; it is possible to query data nodes directly via the NDB API, either natively using the C++ API or one of the additional NoSQL APIs described above.

Generally, it is expected that each node will run on a separate physical host, VM or cloud instance (although it is very common to co-locate Management Nodes with MySQL Servers). For best practice, it is recommended not to co-locate nodes within the same node group on a single physical host (as that would represent a single point of failure).
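
Once the processes are running, any SQL node can inspect the cluster through the ndbinfo information database; a small sketch (the exact columns returned vary by MySQL Cluster version):

    -- Status of the data nodes, as reported to the SQL node.
    SELECT * FROM ndbinfo.nodes;

    -- All processes (data, management and API/SQL nodes) attached to the
    -- cluster, in recent versions.
    SELECT * FROM ndbinfo.processes;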

Versions

From the 8.0 release onwards, MySQL Cluster is based directly on the corresponding releases of the MySQL Server. Previously, MySQL Cluster version numbers were decoupled from those of MySQL Server; for example, MySQL Cluster 7.6 was based on, and contained the server component from, MySQL 5.7.

Higher versions of MySQL Cluster include all of the features of lower versions, plus some new features. Currently available versions:

  • MySQL Cluster 9.X Innovation Release series
  • MySQL Cluster 8.4 LTS based on MySQL 8.4 LTS release
Adds support for TLS on cluster-internal connections.
  • MySQL Cluster 8.0 based on MySQL 8.0
Increase in max row size to 30kB, Support for up to 144 data nodes, Improved distributed filtering and joining, Support for parallel outer joins and semi joins, Improved schema and ACL handling, Online column rename, Simplified configuration, Multithreaded parallel backup and restore, Disk data performance improvements, Enhanced support for 3 and 4 replica configurations, Support for multi socket mesh networking, Support for restore transformations, Improved Blob write performance, Backup encryption, Support for IPv6, Threading autoconfiguration, Improved recovery performance, Improved query multithreading.[8]
  • MySQL Cluster 7.6 based on MySQL 5.7
Improved restart and recovery times, reduced disk space usage, improved join performance, new import tool, shared memory communication, improved topology awareness for cloud.[9]

Older versions (no longer in development):

  • MySQL Cluster 8.1 based on MySQL 8.1 Innovation release
  • MySQL Cluster 8.2 based on MySQL 8.2 Innovation release
  • MySQL Cluster 8.3 based on MySQL 8.3 Innovation release
  • MySQL Cluster 7.5 based on MySQL 5.7
Includes support for bigger datasets (more than 128 TB per node), improved read scalability through read-optimized tables, and improved SQL support.[10]
  • MySQL Cluster 7.4 based on MySQL 5.6
Includes enhanced conflict detection and resolution, improved node restart times, new Event API.[11]
  • MySQL Cluster 7.3 based on MySQL 5.6
Includes support for foreign key constraints, Node.js / JavaScript API and an auto-installer.[12]
  • MySQL Cluster 7.2 based on MySQL 5.5
Includes Adaptive Query Localization (pushes JOIN operations down to the data nodes), Memcached API, simplified Active/Active Geographic replication, multi-site clustering, data node scalability enhancements, consolidated user privileges.[13]
  • MySQL Cluster 7.1 based on MySQL 5.1.D
Includes ClusterJ and ClusterJPA connectors.
  • MySQL Cluster 7.0 based on MySQL 5.1.C
Includes multi-threaded data nodes (ndbmtd), Transactional DDL, Windows support.
  • MySQL Cluster 6.3 based on MySQL 5.1.B
Includes compressed backup + LCP, circular replication support, conflict detection/resolution, table optimization etc.
  • MySQL Cluster 6.2 based on MySQL 5.1.A
First 'telco' or 'carrier grade edition' release. Supports 255 nodes, online table alter, replication latency and throughput enhancements etc.
  • Ndb included in MySQL 5.1.X source tree

Requirements

For evaluation purposes, it is possible to run MySQL Cluster on a single physical server. For production deployments, the minimum system requirement is three instances/hosts:

  • 2 × Data Nodes
  • 1 × Application / Management Node

or

  • 2 × Data Node + Application
  • 1 × Management Node

The minimum configuration is as follows:

  • OS: Linux, Solaris, Windows, macOS (for development only)
  • CPU: Intel/AMD x86/x86-64, UltraSPARC
  • Memory: 1 GB
  • HDD: 3 GB
  • Network: 1+ nodes (standard Ethernet, TCP/IP)

Tips and recommendations on deploying high-performance, production-grade clusters can be found in the MySQL Cluster Evaluation Guide and the Guide to Optimizing Performance of the MySQL Cluster Database.

History

MySQL AB acquired the technology behind MySQL Cluster from Alzato, a small venture company started by Ericsson. NDB was originally designed for the telecom market, with its high availability and high performance requirements.[14]

MySQL Cluster based on the NDB storage engine has since been integrated into the MySQL product, with its first release being in MySQL 4.1.

Books

MySQL Cluster 7.5 inside and out,[15] a book written by Mikael Ronström, the founder of the NDB technology.

Pro MySQL NDB Cluster,[16] a book written by Jesper Wisborg Krogh and Mikiya Okuno, support engineers at MySQL.

Support

MySQL Cluster is licensed under the GPLv2 license. Commercial support is available as part of MySQL Cluster CGE, which also includes non-open-source add-ons such as MySQL Cluster Manager and MySQL Enterprise Monitor, in addition to MySQL Enterprise Security and MySQL Enterprise Audit.

See also

  • Galera Cluster, a generic synchronous multi-master replication library for transactional databases, available for MySQL and MariaDB.
  • Percona XtraDB Cluster, a combination of the Galera replication library and MySQL that also supports multi-master operation.
  • RonDB, a fork of MySQL Cluster maintained by Hopsworks.[17]

References

  1. ^ "MySQL NDB Cluster 8.4 Release Notes". mysql.com.
  2. ^ Cluster CGE. MySQL. Retrieved on 2013-09-18.
  3. ^ Oracle Corporation. "MySQL Cluster Benchmarks: Oracle and Intel Achieve 1 Billion Writes per Minute". mysql.com. Retrieved June 24, 2013.
  4. ^ MySQL :: MySQL 5.6 Reference Manual :: 17.6.11 MySQL Cluster Replication Conflict Resolution. Dev.mysql.com. Retrieved on 2013-09-18.
  5. ^ Synchronously Replicating Databases Across Data Centers – Are you Insane? (Oracle's MySQL Blog). Blogs.oracle.com (2011-10-03). Retrieved on 2013-09-18.
  6. ^ The MySQL Cluster API Developer Guide.
  7. ^ Jon Stephens, Mike Kruckenberg, Roland Bouman (2007): "MySQL 5.1 Cluster DBA Certification Study Guide", pp. 86
  8. ^ "MySQL :: MySQL 8.0 Reference Manual :: 22.1.4 What is New in NDB Cluster". dev.mysql.com.
  9. ^ MySQL Cluster 7.6 is now Generally Available, Oracle's MySQL Blog, 1 June 2018
  10. ^ MySQL Cluster 7.5 Is Now GA!, Oracle's MySQL Blog, 18 October 2016
  11. ^ MySQL Cluster 7.4 GA: 200 Million QPS, Active-Active Geographic Replication and more, MySQL Cluster 7.4 Summary
  12. ^ MySQL Cluster 7.3 is now Generally Available – an overview, MySQL Cluster 7.3 Summary
  13. ^ New features of MySQL Cluster 7.2, MySQL Developer Zone
  14. ^ Todd R. Weiss (October 14, 2003). "MySQL acquiring data management system vendor Alzato". Computerworld.com. Retrieved November 5, 2012.
  15. ^ Mikael Ronström (February 12, 2018). "MySQL Cluster 7.5 inside and out". bod.se. Retrieved August 1, 2022.
  16. ^ Jesper Wisborg Krogh, Mikiya Okuno (2017). Pro MySQL NDB Cluster. apress.com. doi:10.1007/978-1-4842-2982-8. ISBN 978-1-4842-2981-1. S2CID 10326666. Retrieved August 1, 2022.
  17. ^ "RonDB Announcement". Retrieved August 1, 2022.