We’ve seen a few people in the community struggling with the new management node features in MySQL Cluster 7.0. To be honest, we in MySQL Support sometimes scratch our heads as well.
This simple how-to explains how to start a MySQL Cluster 7.0.7 (or later) from scratch, with 2 management nodes (and 2 data nodes). This is not a rolling upgrade from MySQL Cluster 6.3.
Here is the basic config.ini we are going to use. Note the Hostname parameters as we are going to use them often:
[NDBD DEFAULT]
Datadir=/data2/users/geert/cluster/master
NoOfReplicas=2
DataMemory=80M
IndexMemory=10M

[NDB_MGMD DEFAULT]
Datadir=/data2/users/geert/cluster/master

[NDB_MGMD]
Id=1
Hostname = machine-1

[NDB_MGMD]
Id=2
Hostname = machine-2

[NDBD]
Id=3
Hostname = machine-3

[NDBD]
Id=4
Hostname = machine-4

[API]
[API]
[API]
[API]
We start with an empty MySQL Cluster: no data, no logs. The config.ini is needed only on machine-1 (however, it’s good to keep a copy on machine-2 as well, making the configuration itself HA!).
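Keeping that second copy in sync is a one-liner; this is just a sketch, and the destination directory is an assumption (we reuse the Datadir from config.ini, but any path you remember will do):

```shell
# Hypothetical destination path: copy the configuration to machine-2
# so a copy survives the loss of machine-1. Only the first ndb_mgmd
# actually reads it, but having it on both machines eases recovery.
scp config.ini machine-2:/data2/users/geert/cluster/master/
```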
Start the first ndb_mgmd process on host machine-1:
shell> ndb_mgmd -f config.ini \
         --configdir=/path/to/empty/dir --initial
..NDB Cluster Management Server. mysql-5.1.35 ndb-7.0.7
..Reading cluster configuration from 'config.ini'
On machine-2 you start the second management node as follows:
shell> ndb_mgmd -c machine-1 --ndb-nodeid=2 \
         --configdir=/path/to/empty/dir
..NDB Cluster Management Server. mysql-5.1.35 ndb-7.0.7
..Trying to get configuration from other mgmd(s) using 'nodeid=2,machine-1'...
..Connected to 'machine-1'...
Note that starting the second ndb_mgmd is quite different:
- It does not read config.ini, but gets configuration from machine-1.
- It needs to know what node ID it has, because it doesn’t read a configuration file.
- There is no --initial option.
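You can see what happened behind the scenes by looking in the configdir: each ndb_mgmd in 7.0 caches the configuration there as a binary file named ndb_&lt;nodeid&gt;_config.bin.&lt;generation&gt;, and the second node’s copy was fetched over the wire from machine-1 rather than read from config.ini. A sketch (the directory is whatever you passed with --configdir):

```shell
# On machine-2: the second ndb_mgmd never read config.ini, yet it
# caches the configuration it fetched from machine-1 as a binary
# file in its configdir.
ls /path/to/empty/dir
# expect something like: ndb_2_config.bin.1
```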
At this point, you should have a cluster that looks like this (running SHOW while connected to the first management node on machine-1):
ndb_mgm> SHOW
..
[ndbd(NDB)]     2 node(s)
id=3 (not connected, accepting connect from machine-3)
id=4 (not connected, accepting connect from machine-4)

[ndb_mgmd(MGM)] 2 node(s)
id=1    @machine-1  (mysql-5.1.35 ndb-7.0.7)
id=2 (not connected, accepting connect from machine-2)
..
Important: you don’t see the second management node as connected because no data nodes have been started yet. Management nodes ‘see’ each other through connected data nodes!
Now start the data nodes; just for fun, point the second one at the second management node.
On machine-3 you do:
shell> ndbd -c machine-1
..Configuration fetched from 'machine-1', generation: 1
Do the same on machine-4, but connect to the second management node:
shell> ndbd -c machine-2
..Configuration fetched from 'machine-2', generation: 1
Your MySQL Cluster is now up and running with 2 management nodes and 2 data nodes.
ndb_mgm> SHOW
..
[ndbd(NDB)]     2 node(s)
id=3    @machine-3  (mysql-5.1.35 ndb-7.0.7, Nodegroup: 0, Master)
id=4    @machine-4  (mysql-5.1.35 ndb-7.0.7, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1    @machine-1  (mysql-5.1.35 ndb-7.0.7)
id=2    @machine-2  (mysql-5.1.35 ndb-7.0.7)
..
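To convince yourself that both management nodes now hold the same view of the cluster, you can run SHOW against machine-2 non-interactively with ndb_mgm’s -c (connectstring) and -e (execute) options:

```shell
# Ask the second management node for cluster status; the output
# should match the SHOW above, since both management nodes now see
# each other through the connected data nodes.
ndb_mgm -c machine-2 -e SHOW
```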
Next we’ll experiment with changing some parameters and doing a rolling restart, but that’s for another blog post.