Configuration change and rolling restart with MySQL Cluster 7.0

In a previous post we discussed how to start MySQL Cluster 7.0 with two management nodes. A nice new feature is that the second ndb_mgmd does not need the configuration file: it fetches it from the other management node. In this article I will describe how to start this cluster again after a shutdown, change the configuration, and do a rolling restart.
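
As a quick reminder from that post (the configuration cache directory below is just a placeholder), the second management node only needs a connect string pointing at the first one:

# On machine-2 (sketch; it fetches the configuration from machine-1)
$ ndb_mgmd --ndb-connectstring=machine-1 --configdir=/path/to/configcache/dir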

The configuration

The config.ini stored on the first management node:

[NDBD DEFAULT]
Datadir=/data2/users/geert/cluster/master
NoOfReplicas=2
DataMemory=80M
IndexMemory=10M

[NDB_MGMD DEFAULT]
Datadir=/data2/users/geert/cluster/master

[NDB_MGMD]
Id=1
Hostname = machine-1

[NDB_MGMD]
Id=2
Hostname = machine-2

[NDBD]
Id=3
Hostname = machine-3

[NDBD]
Id=4
Hostname = machine-4

[API]
[API]
[API]
[API]
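
The four empty [API] sections are free slots for SQL nodes or other NDB API clients. A minimal my.cnf for such an SQL node could look like the sketch below; the hostnames match this example, everything else is an assumption:

# Hypothetical my.cnf for an SQL node filling one of the [API] slots
[mysqld]
ndbcluster
ndb-connectstring = "machine-1,machine-2"

[mysql_cluster]
ndb-connectstring = "machine-1,machine-2"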

Starting MySQL Cluster with 2 management nodes

We assume that the cluster from the previous post was shut down and needs to be restarted. Here are the instructions to do so. We start both management node processes the same way, without the options --initial or --reload!

# On machine-1
$ ndb_mgmd --configdir=/path/to/configcache/dir
 ..NDB Cluster Management Server. mysql-5.1.35 ndb-7.0.7
 ..Loaded config from '/path/to/configcache/dir/ndb_1_config.bin.1'

# On machine-2
$ ndb_mgmd --configdir=/path/to/configcache/dir
..NDB Cluster Management Server. mysql-5.1.35 ndb-7.0.7
..Loaded config from '/path/to/configcache/dir/ndb_2_config.bin.1'

Data nodes are started on machine-3 and machine-4:

# On machine-3
$ ndbd -c machine-1
..Configuration fetched from 'machine-1', generation: 1

# On machine-4
$ ndbd -c machine-2
..Configuration fetched from 'machine-2', generation: 1

That should bring your cluster back up, ready for some experiments!
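
To verify that everything is indeed back, you can ask either management node for an overview (just a sketch; the output of SHOW lists all node IDs and their status):

# On machine-1 or machine-2
$ ndb_mgm -e "SHOW"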

Rolling restart after configuration change

Let's assume we want more memory to store data and index information. We change the following in the configuration file config.ini, which you will find on the first management node, machine-1:

[NDBD DEFAULT]
DataMemory=160M
IndexMemory=20M

Save the new config.ini, kill the ndb_mgmd process on machine-1, and start it again with the --reload option:

# On machine-1
$ killall ndb_mgmd
$ ndb_mgmd -f config.ini --reload --configdir=/path/to/configcache/dir
 ..NDB Cluster Management Server. mysql-5.1.35 ndb-7.0.7
 ..Loaded config from '/path/to/configcache/dir/ndb_1_config.bin.1'

The above output might be a bit confusing: we started with a changed config.ini, yet it says the configuration was loaded from the previously cached version. This is normal: the old configuration is read first so the management node can determine what changed in the new one. The real magic is shown in the cluster log ndb_1_cluster.log on machine-1 (simplified for this blog post):

.. Detected change of config.ini on disk, will try to set it
when all ndb_mgmd(s) started. This is the actual diff:
[ndbd(DB)]
NodeId=3
-IndexMemory=10485760
+IndexMemory=20971520

[ndbd(DB)]
NodeId=4
-IndexMemory=10485760
+IndexMemory=20971520
..
Node 2 connected
Starting configuration change, generation: 1
Configuration 2 commited
Config change completed! New generation: 2

Note that MySQL Cluster 7.0.7 currently has a bug: when two parameters are changed, only one of them shows up in the diff written to the logs.

On the second management node you'll find something like this in ndb_2_cluster.log:

..Node 2: Node 1 Connected
..
..Configuration 2 commited

Both management nodes have agreed on the same configuration, and both have it cached in binary files named like ndb_<nodeid>_config.bin.*.
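
You can also see the new generation on disk: the configuration cache directory now holds one binary file per generation (the path below is the placeholder used throughout this post):

# On machine-1; filenames follow the pattern ndb_<nodeid>_config.bin.<generation>
$ ls -1 /path/to/configcache/dir/ndb_1_config.bin.*
/path/to/configcache/dir/ndb_1_config.bin.1
/path/to/configcache/dir/ndb_1_config.bin.2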

We now continue with restarting the data nodes. Connect to either management node and do the following:

# On machine-1 or machine-2
$ ndb_mgm

ndb_mgm> ALL REPORT MEMORY USAGE
Node 3: Data usage is 0%(4 32K pages of total 2560)
Node 3: Index usage is 0%(8 8K pages of total 1312)
Node 4: Data usage is 0%(4 32K pages of total 2560)
Node 4: Index usage is 0%(8 8K pages of total 1312)

ndb_mgm> 3 RESTART
Node 3: Node shutdown initiated
Node 3: Node shutdown completed, restarting, no start.
Node 3 is being restarted
Node 3: Started (version 7.0.7)

ndb_mgm> 4 RESTART
Node 4: Node shutdown initiated
Node 4: Node shutdown completed, restarting, no start.
Node 4 is being restarted
Node 4: Data usage decreased to 0%(0 32K pages of total 5120)
Node 4: Started (version 7.0.7)

ndb_mgm> ALL REPORT MEMORY USAGE
Node 3: Data usage is 0%(6 32K pages of total 5120)
Node 3: Index usage is 0%(8 8K pages of total 2592)
Node 4: Data usage is 0%(6 32K pages of total 5120)
Node 4: Index usage is 0%(8 8K pages of total 2592)

The ALL REPORT MEMORY USAGE output shows that the new configuration took effect and the rolling restart was successful: 5120 pages of 32 KB correspond to the new 160 MB of DataMemory, and 2592 pages of 8 KB roughly match the 20 MB of IndexMemory.
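
If you prefer to script such checks instead of using the interactive client, the same commands can be passed to ndb_mgm with the --execute (-e) option, for example:

# On machine-1 or machine-2
$ ndb_mgm -e "ALL STATUS"
$ ndb_mgm -e "ALL REPORT MEMORY USAGE"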

Comments

val-ufo
Thank you very very very very much.
iCafeMinds
How about the API nodes? For example, I have more than 2; is there anything I need to check in my.cnf?
Geert JM Vanderkelen
API/SQL nodes don't have to be restarted when memory is increased. Some other parameters might need a restart. It's good to restart all the API nodes anyway, but it is not always needed.
kbenton
It is worth mentioning that the bug shown above in 7.0.7 is against the log and not against the logs communicated between nodes. Geert - would you mind updating your article to reflect that? :-)
kbenton
Correction - the bug is against what is logged, not against the configuration communicated between nodes.
Geert JM Vanderkelen
@kbenton: thanks for your comment. In my blog post I say “..shown in the logs.”, that should be clear enough, no? But I'll update it to reflect the fix too.
rav3n2010
Hi Geert, I encountered this problem on my live server: 'Out of fragment records (increase MaxNoOfOrderedIndexes)'. I've set up a local cluster here in our office to try changing the value of MaxNoOfOrderedIndexes so I can ALTER the table. In my production setup I have 6 servers: 2 data nodes, 2 management nodes, 2 API nodes… how can I apply new settings on my management nodes without losing data and without downtime? Does the same apply here: shell> ndb_mgmd -f config.ini --reload --configdir=/path/to/configcache/dir?
rav3n2010
By the way, I'm using MySQL Cluster 7.1.3.
rav3n2010
I did try to kill ndb_mgmd on management node 1 and start it again with this command: /usr/sbin/ndb_mgmd -f config.ini --configdir=/var/lib/mysql-cluster --initial
The error in my logs: 2010-05-25 08:26:55 [MgmtSrvr] ERROR -- This node was started --initial with a config which is not equal to the one node 2 is using. Refusing to start with different configurations, diff:
How do I fix this?
Geert JM Vanderkelen
@rav3n2010: killall ndb_mgmd, then use --reload like in my blog post.
Maybe it's also better to get help through our lists: http://lists.mysql.com/
rav3n2010
Hi Geert, I did follow your guide but I got the same error.
2010-05-25 18:27:15 [MgmtSrvr] INFO -- Id: 1, Command port: *:1186
2010-05-25 18:27:15 [MgmtSrvr] INFO -- Node 1: Node 4 Connected
2010-05-25 18:27:15 [MgmtSrvr] INFO -- Node 1: Node 2 Connected
2010-05-25 18:27:15 [MgmtSrvr] INFO -- Node 1: Node 3 Connected
2010-05-25 18:27:15 [MgmtSrvr] INFO -- Node 2 connected
2010-05-25 18:27:15 [MgmtSrvr] ERROR -- This node was started --initial with a config which is not equal to the one node 2 is using. Refusing to start with different configurations, diff:
rav3n2010
Geert, I've already solved the problem, thanks a lot; I just misread your blog. By the way, for an e-commerce site, how much memory do I need for the data nodes and API nodes? I believe the management node can run on 2 GB. In config.ini, do DataMemory and IndexMemory refer to the physical memory of the data node?
iCafeMinds
Hi Geert, I was wondering why my 2 API nodes do not have the same data and tables. When I tried to set up load balancing for the API nodes I found out that the tables on the 2nd API node are incomplete. Every time I make changes on my 1st API node they are not replicated to the 2nd API node. When I check the management server, all MySQL Cluster nodes are connected.
iCafeMinds
One more question, Geert: every time I create users or tables, should that be manually executed on both SQL nodes (API)?
iCafeMinds
Hi Geert, I can't figure out how to post a question on lists.mysql.com… I almost forgot: thanks for your guide, it helped me a lot when I set up my cluster…
ambi

Hi Geert, I tried setting up cluster on EC2. My config files are like this:

config.ini:
[ndbd default]
noofreplicas=2
datadir=/usr/tarmysql/my_cluster/ndb_data

[ndb_mgmd]
hostname=localhost
datadir=/usr/tarmysql/my_cluster/ndb_data
nodeid=1

[ndbd]
hostname=localhost
datadir=/usr/tarmysql/my_cluster/ndb_data
nodeid=3

[ndbd]
hostname=localhost
datadir=/usr/tarmysql/my_cluster/ndb_data
nodeid=4

[mysqld]
nodeid=50

my.cnf:
[mysqld]
ndbcluster
ndb-connectstring=localhost
datadir=c:\Users\user1\\my_cluster\mysqld_data
basedir=c:\Users\user1\mysqlc
port=5000

[mysql_cluster]
ndb-connectstring=localhost

Whenever I try to start the management server, it throws an error that it is not able to open the ndb_1_config.bin file as no such file exists (but this is the first time I'm starting it). How do I fix this?

ambi

Below is the exact log I got:

2010-11-20 12:13:33 [MgmtSrvr] INFO -- Got initial configuration from 'conf/config.ini', will try to set it when all ndb_mgmd(s) started
2010-11-20 12:13:33 [MgmtSrvr] INFO -- Mgmt server state: nodeid 1 reserved for ip 127.0.0.1, m_reserved_nodes 1.
2010-11-20 12:13:33 [MgmtSrvr] INFO -- Id: 1, Command port: *:1186
2010-11-20 12:13:33 [MgmtSrvr] INFO -- Starting initial configuration change
2010-11-20 12:13:33 [MgmtSrvr] ERROR -- Failed to open file '../my_cluster/conf/ndb_1_config.bin.1.tmp' while preparing, errno: 2
2010-11-20 12:13:33 [MgmtSrvr] WARNING -- Node 1 refused configuration change, error: 6
2010-11-20 12:13:33 [MgmtSrvr] INFO -- MySQL Cluster Management Server mysql-5.1.51 ndb-7.1.9 started

geert

Looks like you are missing the ndb_mgmd --configdir option in my.cnf:

[ndb_mgmd]
configdir = /opt/mysql/clusters/machA/ndb

Please use MySQL Cluster mailinglist if problem persists.

geert
@iCafeMinds Heya, this reply is way overdue, I hope you solved it already. If you create users in the MySQL server, they will not propagate to the other SQL nodes in the MySQL Cluster. The grant tables are local MyISAM tables. You have to create the user on each SQL node manually. When you create NDB tables in a MySQL Cluster, all the other SQL nodes will detect them and make them available. That, of course, only works with NDB tables, not with InnoDB or MyISAM.
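For example (hostnames and credentials below are just placeholders), you would run the same statement against every SQL node:
# Run once per SQL node, e.g. on sqlnode-1 and sqlnode-2
$ mysql -h sqlnode-1 -u root -p -e "CREATE USER 'app'@'%' IDENTIFIED BY 'secret'"
$ mysql -h sqlnode-2 -u root -p -e "CREATE USER 'app'@'%' IDENTIFIED BY 'secret'"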
rav3n
Hi Geert, I have this question: in our production environment I have 2 management, 2 API, and 2 data nodes… I read in other blogs that to fully use MySQL Cluster with high availability you need at least 4 data nodes… is this correct? That way I could shut down 1 data node without losing any data? Also, what is best when creating tables: partitioned or unpartitioned?
Geert Vanderkelen
@rav3n No, you need at least 2 data nodes with NoOfReplicas=2. You need at least 3 physical machines to reach high availability, two of them each running one data node (ndbd process). You can shut down all data nodes without losing data, and if you shut down only 1 data node, your data will still be accessible. NDB tables are, and should always be, partitioned.
rav3n
Hi Geert, thanks for the explanation… one more question: I'm having problems with queries being very slow at fetching data… we use JOIN, LEFT JOIN and INNER JOIN to get the data… we also tried replacing the joins with subqueries, which gives a much better query… When we created the tables we didn't partition them, so I was wondering if we could alter them so that all tables are partitioned, without losing any data? Is it OK to have 2 API and 4 data nodes? Would that give much faster queries?
Geert Vanderkelen
@rav3n Please use http://lists.mysql.com/cluster for posting (new) questions. This is a blog post about configuration changes and rolling restarts.