How to start MySQL Cluster 7.0 with 2 management nodes?

We’ve seen a few people in the community struggling with the new management node features in MySQL Cluster 7.0. To be honest, we in MySQL Support sometimes scratch our heads as well.

This simple how-to explains how to start a MySQL Cluster 7.0.7 (or later) from scratch with 2 management nodes (and 2 data nodes). This is not a rolling upgrade from MySQL Cluster 6.3.

Here is the basic config.ini we are going to use. Note the Hostname parameters as we are going to use them often:



[ndb_mgmd]
Hostname = machine-1

[ndb_mgmd]
Hostname = machine-2

[ndbd]
Hostname = machine-3

[ndbd]
Hostname = machine-4


We start with an empty MySQL Cluster: no data, no logs. The config.ini is only on machine-1 (however, it’s good to have it on both anyway, making it HA!).
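For reference, a fuller config.ini along these lines might look as follows. The NodeId, NoOfReplicas and DataDir values are illustrative additions, not from the original setup (the node IDs do match the SHOW output further down):

```ini
[ndbd default]
NoOfReplicas = 2

[ndb_mgmd]
NodeId = 1
Hostname = machine-1
DataDir = /var/lib/mysql-cluster

[ndb_mgmd]
NodeId = 2
Hostname = machine-2
DataDir = /var/lib/mysql-cluster

[ndbd]
NodeId = 3
Hostname = machine-3
DataDir = /var/lib/mysql-cluster

[ndbd]
NodeId = 4
Hostname = machine-4
DataDir = /var/lib/mysql-cluster

# Add [mysqld] sections here if you also run SQL nodes
# (none appear in this article's SHOW output).
```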

Start the first ndb_mgmd process on host machine-1:

 shell> ndb_mgmd -f config.ini \
    --configdir=/path/to/empty/dir --initial
 ..NDB Cluster Management Server. mysql-5.1.35 ndb-7.0.7
 ..Reading cluster configuration from 'config.ini'
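Two options matter here: --initial makes ndb_mgmd ignore any existing binary configuration cache and read config.ini afresh, and --configdir is the directory where that cache is written. A small illustrative preparation step (the directory name below is my own example, not from the article):

```shell
# Make sure the configuration cache directory exists before starting
# ndb_mgmd (the path here is illustrative).
CONFIGDIR="$PWD/mgmd-configdir"
mkdir -p "$CONFIGDIR"

# The actual start on machine-1 (requires the MySQL Cluster binaries;
# not run here):
#   ndb_mgmd -f config.ini --configdir="$CONFIGDIR" --initial
```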

On machine-2, start the 2nd management node as follows:

 shell> ndb_mgmd -c machine-1 --ndb-nodeid=2
 ..NDB Cluster Management Server. mysql-5.1.35 ndb-7.0.7
 ..Trying to get configuration from other mgmd(s) using
 ..Connected to 'machine-1'...

Note that starting the second ndb_mgmd is quite different:

  • It does not read config.ini, but gets configuration from machine-1.
  • It needs to know what node ID it has, because it doesn’t read a configuration file.
  • There is no --initial option.

At this point your cluster should look like this (run SHOW in ndb_mgm connected to the first management node on machine-1):

 ndb_mgm> SHOW
 [ndbd(NDB)]     2 node(s)
 id=3 (not connected, accepting connect from machine-3)
 id=4 (not connected, accepting connect from machine-4)

 [ndb_mgmd(MGM)] 2 node(s)
 id=1    @machine-1  (mysql-5.1.35 ndb-7.0.7)
 id=2 (not connected, accepting connect from machine-2)

Important: you don’t see the second management node as connected because no data nodes have been started yet. Management nodes ‘see’ each other through connected data nodes!

Now start the data nodes, but for fun, point the 2nd one to the 2nd management node.

On machine-3 you do:

 shell> ndbd -c machine-1
 ..Configuration fetched from 'machine-1', generation: 1

Same on machine-4 but connect to the 2nd management node:

 shell> ndbd -c machine-2
 ..Configuration fetched from 'machine-2', generation: 1

Your MySQL Cluster is now up and running with 2 management and 2 data nodes.

 ndb_mgm> SHOW
 [ndbd(NDB)]     2 node(s)
 id=3    @machine-3  (mysql-5.1.35 ndb-7.0.7, Nodegroup: 0, Master)
 id=4    @machine-4  (mysql-5.1.35 ndb-7.0.7, Nodegroup: 0)

 [ndb_mgmd(MGM)] 2 node(s)
 id=1    @machine-1  (mysql-5.1.35 ndb-7.0.7)
 id=2    @machine-2  (mysql-5.1.35 ndb-7.0.7)
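If you want to script this check, ndb_mgm can run a single command non-interactively with -e (for example `ndb_mgm -c machine-1 -e SHOW`). Here is a minimal sketch of a connectivity check built on that output; the `all_nodes_up` helper is my own, not part of the MySQL tools:

```shell
# Hypothetical helper: succeeds only when no line of the SHOW output
# reports "not connected".
all_nodes_up() {
  ! grep -q "not connected"
}

# Real use (not run here): ndb_mgm -c machine-1 -e SHOW | all_nodes_up
# Demo with a line from the earlier, incomplete cluster state:
printf 'id=3 (not connected, accepting connect from machine-3)\n' \
  | all_nodes_up || echo "cluster incomplete"   # prints "cluster incomplete"
```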

Next we’ll experiment with changing some parameters and doing a rolling restart, but that’s for another article/blog post.

Traveling by train to Kraków, Poland from Germany

Today I traveled by train for 17 hours from Aschaffenburg (Germany) to Kraków (Poland). Crazy? In love! But yes, also a bit crazy. I’ve done the journey by car before, stopping for the night, so it took me 2 days; the risk is also higher on the road than on rails. Flying is the other option, but then there is the ‘green’ factor. However, somebody reminded me that the plane flies anyway.

Here is the schedule (times are CEST):
* 06:51 Aschaffenburg – Hanau (regional train)
* 07:29 Hanau – Berlin (ICE train)
* 12:20 Berlin – Warszawa (EC train)
* 20:12 Warszawa – Kraków (PKP Intercity, 1 hour delay)

Except for the delay it was an easy ride. I had a good book, helped people get their luggage on and off the racks, and even worked a bit on a presentation.
It’s more expensive than flying: in total I paid about 125 EUR (with my reduction card: 50% off in Germany and, I think, 25% off to Warszawa).