
2016. 6. 22. 09:57 Brain Trainning/DataBase



Graphing MySQL performance with Prometheus and Grafana


Source: https://www.percona.com/blog/2016/02/29/graphing-mysql-performance-with-prometheus-and-grafana/


February 29, 2016 | Posted in: Monitoring, MySQL, Prometheus

This post explains how you can quickly start using trending tools like Prometheus and Grafana to monitor and graph MySQL and system performance.

First of all, let me mention that the Percona Monitoring and Management beta was released recently; it is an easy way to get all of this.

I will try to keep this post as short as possible, so you can set things up quickly before getting bored. I plan to cover the details in the next few posts; here I will walk through the installation process so that we end up with some really useful, good-looking graphs.

Overview

Prometheus is an open-source service monitoring system and time-series database. In short, this quite efficient daemon scrapes metrics from remote machines over HTTP and stores the data in its local time-series database. Prometheus provides a simple web interface, a very powerful query language, an HTTP API, and more. However, the storage is not designed to be durable for the time being.

The remote machines need to run exporters to expose metrics to Prometheus. We will be using the following two: node_exporter for OS-level metrics and mysqld_exporter for MySQL metrics.

Grafana is an open-source, feature-rich metrics dashboard and graph editor for Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB. It is a powerful tool for visualizing large-scale measurement data and is designed to work with time series. Grafana supports different types of graphs, allows custom representation of individual metrics on a graph, and offers various methods of authentication, including LDAP.

Diagram

Here is a diagram of the setup we are going to use:
Prometheus + Grafana diagram

Prometheus setup

Install on the monitor host.

Get the latest tarball from GitHub.

Create a simple config:
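A minimal sketch of such a config, assuming the exporters' default ports (9100 for node_exporter, 9104 for mysqld_exporter). Note that the syntax below is for recent Prometheus releases; the 0.x versions current when this was written used `target_groups` in place of `static_configs`:

```shell
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: linux
    static_configs:
      - targets: ['192.168.56.107:9100']
        labels:
          alias: db1
  - job_name: mysql
    static_configs:
      - targets: ['192.168.56.107:9104']
        labels:
          alias: db1
EOF
```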

where 192.168.56.107 is the IP address of the db host we are going to monitor and db1 is its short name. Note, the “alias” label is important here because we rely on it in the predefined dashboards below to get per host graphs.

Start Prometheus in foreground:
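For example, from the unpacked tarball directory (flag spellings have changed across Prometheus versions; the double-dash form below is the modern one):

```shell
# Run in the foreground, reading the config created above and
# keeping its local time-series data under ./data.
./prometheus --config.file=prometheus.yml --storage.tsdb.path=./data
```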

Now we can access Prometheus' built-in web interface at http://monitor_host:9090

Prometheus web interface
If you look at the Status page in the top menu, you will see that our monitoring targets are down so far. Now let's set them up: the Prometheus exporters.

Prometheus exporters setup

Install these on the db host. Of course, you can use the same monitor host for the experiment. Obviously, this node must be running MySQL.

Download the node_exporter and mysqld_exporter release binaries from their respective GitHub release pages.

Start node_exporter in foreground:
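node_exporter needs no configuration; started from its unpacked directory, it serves metrics on port 9100 by default:

```shell
./node_exporter
```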

Unlike node_exporter, mysqld_exporter needs MySQL credentials. Privileges like these should be sufficient:
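A grant along these lines should work; the user name `prom` and the password are placeholders, not values from the original post:

```shell
mysql -u root -p <<'EOF'
CREATE USER 'prom'@'localhost' IDENTIFIED BY 'secret';
-- PROCESS and REPLICATION CLIENT cover the status/variables the
-- exporter scrapes; SELECT lets it read performance_schema.
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'prom'@'localhost';
EOF
```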

Create .my.cnf and start mysqld_exporter in foreground:
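A sketch of both steps; the credentials must match the MySQL user you granted privileges to, and the single-dash `-config.my-cnf` flag matches the mysqld_exporter releases of that era (newer releases use double-dash flags):

```shell
cat > .my.cnf <<'EOF'
[client]
user=prom
password=secret
EOF
./mysqld_exporter -config.my-cnf=".my.cnf"
```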

At this point we should see our endpoints up and running on the Prometheus Status page:
Prometheus status page

Grafana setup

Install on the monitor host.

Grafana ships RPM and DEB packages, so the installation is as simple as installing one package.
RPM-based system:

or APT-based one:
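For example (the bare package name `grafana` is an assumption; at the time one would typically install a downloaded .rpm/.deb from the Grafana site instead):

```shell
# RPM-based:
sudo yum install -y grafana
# APT-based:
sudo apt-get install -y grafana
```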

Open /etc/grafana/grafana.ini and edit the last section so that the file ends as follows:
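A sketch of the kind of ending meant here: it enables Grafana 2.x file-based JSON dashboards (the exact section name and path are assumptions about what the author's config contained):

```shell
sudo tee -a /etc/grafana/grafana.ini <<'EOF'
[dashboards.json]
enabled = true
path = /var/lib/grafana/dashboards
EOF
```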

Percona has built predefined Grafana dashboards for Prometheus for you.

Let’s get them deployed:
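One plausible way to deploy them, cloning Percona's public dashboard repository (the target path is an assumption, matching Grafana's conventional dashboards location):

```shell
git clone https://github.com/percona/grafana-dashboards.git
sudo cp -r grafana-dashboards/dashboards /var/lib/grafana/
```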

On Grafana 2.6 it is important to apply a minor patch in order to use the interval template variable and get nicely zoomable graphs. The fix is simply to allow a variable in the Step field on the Grafana graph editor page. For more information, take a look at PR#3757 and PR#4257; we hope the latter will be released with the next Grafana version.

Those changes are idempotent, so they are safe to apply more than once.

Finally, start Grafana:
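On a SysV-init host this would be:

```shell
sudo service grafana-server start
# on systemd hosts: sudo systemctl start grafana-server
```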

At this point, we are one step away from being done. Log into the Grafana web interface at http://monitor_host:3000 (admin/admin).

Go to Data Sources and add one for Prometheus:
Grafana datasource

Now check out the dashboards and graphs. For example, choose "System Overview" and the period "Last 5 minutes" at the top right. You should see something similar:
Grafana screen
If your graphs are not populating, ensure the system time is correct on the monitor host.

Samples

Here are some real-world samples.

Enjoy!

Conclusion

Prometheus and Grafana make a great tandem for enabling monitoring and graphing capabilities for MySQL. The tools are quite easy to deploy, and they are designed for time series with high efficiency in mind. In the next blog posts I will talk more about the technical aspects, problems, and related topics.


posted by LifeisSimple



2011. 8. 26. 14:23 Brain Trainning/DataBase
When you connect SSIS to MySQL and transfer data, a few problems come up.

One is a syntax error, and the other is a shared memory error.

The first one is resolved like this:

select @@global.sql_mode;
set global sql_mode='ANSI';

As for the shared memory error, it is resolved by updating SQL Server 2008 to SP2 or later, or SQL Server 2008 R2 to SP1 or later.

Below is the original article, which is worth a read.

by Jayaram Krishnaswamy | September 2009 | Microsoft PHP

There are a large number of posts on various difficulties experienced while transferring data from MySQL using Microsoft SQL Server Integration Services. While the transfer of data from MySQL to Microsoft SQL Server 2008 is not fraught with any blocking issues, transfer of data from SQL Server 2008 to MySQL has presented various problems. There are some workarounds suggested. In this article by Dr. Jay Krishnaswamy, data transfer to MySQL using SQL Server Integration Services will be described.


If you are new to SQL Server Integration Services (SSIS), you may want to read the book by the same author, Beginners Guide to SQL Server Integration Services Using Visual Studio 2005, published by Packt.

Connectivity with MySQL

For data interchange with MySQL there are two options, one of which can be accessed in the connection wizards of SQL Server Integration Services, assuming you have installed the programs. The other can be used to set up an ODBC DSN, as described further down. The two connection options are the ODBC driver (MySQL Connector/ODBC) and the MySQL Data Provider.

In this article we will be using the ODBC connector for MySQL which can be downloaded from the MySQL Site. The connector will be used to create an ODBC DSN.

Transferring a table from SQL Server 2008 to MySQL

We will transfer a table in the TestNorthwind database on SQL Server 2008 (Enterprise & Evaluation) to MySQL server database. The MySQL database we are using is described in the article on Exporting data from MS Access 2003 to MySQL. In another article, MySQL Linked Server on SQL Server 2008, creating an ODBC DSN for MySQL was described. We will be using the DSN created in that article.

Creating an Integration Services project in Visual Studio 2008

Start the Visual Studio 2008 program from its shortcut. Click File | New | Project... to open the New Project window, select an Integration Services template from the Business Intelligence projects, and provide a suitable name. The project folder will have a file called Package.dtsx, which can be given a custom name.

Add and configure an ADO.NET Source

The project's package designer will open, displaying the Control Flow tab. Drag and drop a Data Flow Task onto the Control Flow tabbed page. Next, click the Data Flow tab in the designer to display the Data Flow page. Read the instructions on this page. Drag and drop an ADO.NET Source from the Data Flow Sources items in the Toolbox.

It is assumed that you can set up a connection manager to the resident SQL Server 2008 on your machine. The next figure shows the configured connection manager for SQL Server 2008. The table (PrincetonTemp) that will be transferred is in the TestNorthwind database. The authentication is Windows, and a .NET provider is used to access the data. You may also test the connection by clicking the Test Connection button. If you are not sure about this, you can review the free chapter from the book, available here.

If the connection shown above is correctly configured, the test should indicate a successful connection. Right click the ADO.NET source and from the drop-down click Edit. The ADO.NET Source Editor gets displayed. As mentioned earlier you should be able to access the table and view objects on the database as shown in the next figure.


We have chosen to transfer a simple table, PrincetonTemp, from the TestNorthwind database on SQL Server 2008. It has only a couple of columns, as shown in the Columns page of the ADO.NET Source Editor.

The default Error page setting is assumed; that is, if there is an error or truncation of data, the task will fail.

Add an ADO.NET destination and port the data from the source

Drag and drop an ADO.NET destination item from under Data Flow Destinations in the Toolbox onto the Data Flow page of the designer. There are two ways to arrange for the data to flow from the source to the destination. The easy way is to drag the green dangling line from the source with your mouse and release it on the ADO.NET destination. A solid line will connect the source and the destination as shown.


Configure a connection manager to connect to MySQL

In the Connection Managers pane under the package designer, right-click to display a pop-up menu that allows you to make a new connection. When you choose to make a new ADO.NET connection, the Configure ADO.NET Connection Manager window shows up; click the New... button on this page.

The connection manager's page gets displayed as shown. In the Providers drop-down you will see a number of providers. There are two providers you can use: ODBC through the connector, and the MySQL Data Provider.

Click on the Odbc Data Provider. As mentioned previously, we will be using the System DSN MySQL_Link created earlier for the other article, shown in the drop-down list of available ODBC DSNs.

Provide the user ID and password, then click the Test Connection button. If all the information is correct, you should get a success message as shown.

Close out of the message as well as the Configure ADO.NET Connection Manager windows. Right-click the ADO.NET Destination to display its editor window. In the connection manager drop-down you should be able to pick the connection manager you created in the previous step (MySQL_Link.root) as shown.

Click on the New... button to create a table or view. You will get a warning message about the mapping to SSIS not being known, as shown.

Click OK. The Create Table window gets displayed as shown. Notice that it lists all the columns that the source is sending out.

If you were to click OK, you would get an error that the syntax is not correct as shown.


Modify the statement as shown to change the destination table name (your choice) and the data types.

CREATE TABLE From2k8(
"Id" INT,
"Month" VARCHAR(10),
"Temperature" DOUBLE PRECISION,
"RecordHigh" DOUBLE PRECISION
)

Click OK. Again you get the same error about incorrect syntax. Modify the CREATE TABLE statement further as shown.

CREATE TABLE From2k8 (
Id INT,
Month VARCHAR(10),
Temperature DOUBLE PRECISION,
RecordHigh DOUBLE PRECISION
)

Click OK after the above modification. The table gets added in the ADO.NET Destination Editor as shown.

Click Mappings on the left side of the ADO.NET Destination Editor. The column mappings page gets displayed as shown. We accept the default settings on the Error Output page.

Click OK. Build the project and execute the package by right-clicking the package and choosing Execute Package. The program runs, processes the package, and ends up unsuccessful, with the error message shown in the Progress tab of the project (only the relevant part is reproduced here):

....
.....
[SSIS.Pipeline] Information: Execute phase is beginning.
[ADO NET Destination 1 [165]] Error: An exception has occurred during data insertion, 
the message returned from the provider is: ERROR [42000] [MySQL][ODBC 5.1 Driver]
[mysqld-5.1.30-community]You have an error in your SQL syntax; check the manual that 
corresponds to your MySQL server version for the right syntax to use near '"Id", 
"Month", "Temperature", "RecordHigh") VALUES (1, 'Jan ', 4.000000000' at line 1
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput 
method on component "ADO NET Destination 1" (165) failed with error code 0xC020844B 
while processing input "ADO NET Destination Input" (168). The identified component 
returned an error from the ProcessInput method. The error is specific to the component,
but the error is fatal and will cause the Data Flow task to stop running. There may 
be error messages posted before this with more information about the failure.
[SSIS.Pipeline] Information: Post Execute phase is beginning.
......
....
Task Data Flow Task failed
....

Start the MySQL server and log into it. Run the following commands as shown in the next figure. Setting the mode to 'ANSI' makes MySQL accept more standard, ANSI-style syntax, since MySQL can cater to clients using different SQL modes; by default MySQL does not treat double-quoted names as identifiers, which is why the error above is returned although the syntax itself appears correct. In fact, a CREATE statement that could be run on the command line directly against MySQL returned an error when SSIS was used to create the same table.


After running the above statements, build the BI project and execute the package. This time the execution will be successful, and you can query the MySQL server as in the following:

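A quick check from the MySQL command line might look like this (the schema name `mytest` is a placeholder; the article does not name the MySQL database it uses):

```shell
mysql -u root -p mytest -e "SELECT * FROM From2k8;"
```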

Summary

This article described, step by step, transferring a table from SQL Server 2008 to MySQL using ODBC connectivity. For a successful transfer, the data type differences between SQL Server 2008 and the MySQL version in use must be properly taken into consideration, as must correctly setting the sql_mode property of the MySQL server.


2010. 9. 29. 11:09 Brain Trainning/DataBase

MySQL Clustering Config

Posted October 6, 2008 | Filed under: Uncategorized | Tags: cluster, Clustering, MySQL

I’ve had two positive comments on my previous post, so I figured it was time to write an update on how my work has been going.

The MySQL clustered database is part of a large project I’ve been working on for the last two months. The basic server setup is two clustered JBoss 4.2.3 application servers running on top of two clustered MySQL 5.0.67 servers. A third server acts as a backup and currently only runs the MySQL management console. I’ve noticed that even when there is high load on the two NDB data nodes, the management console does not do much.

The four main servers each run two dual-core AMD Opteron processors with 8 GB of RAM.

Even though a lot of my work has been reworking my own code to optimize and improve it, a lot of time has been spent looking for a MySQL cluster configuration that would cope with the load, as I found it was quite easy to kill the cluster in the beginning.

The config I am currently using is below, followed by some results of the load testing I’ve been doing.

[TCP DEFAULT]
SendBufferMemory=2M
ReceiveBufferMemory=2M

[NDBD DEFAULT]
NoOfReplicas=2

# Avoid using the swap
LockPagesInMainMemory=1

# DataMemory (memory for records and ordered indexes)
DataMemory=3072M

# IndexMemory (memory for primary key hash index and unique hash index)
IndexMemory=384M

# Redo log
# (2*DataMemory)/64MB, which is the LogFileSize for 5.0
NoOfFragmentLogFiles=96

# RedoBuffer of 32M. If you get "out of redobuffer" you can increase it,
# but it is more likely a result of slow disks.
RedoBuffer=32M

MaxNoOfTables=4096
MaxNoOfAttributes=24756
MaxNoOfOrderedIndexes=2048
MaxNoOfUniqueHashIndexes=512

MaxNoOfConcurrentOperations=1000000

TimeBetweenGlobalCheckpoints=1000

# The default value for TimeBetweenLocalCheckpoints is very good
TimeBetweenLocalCheckpoints=20

# The default of 1200 was too low for initial tests. But the code has been
# improved a lot, so 12000 may be too high now.
TransactionDeadlockDetectionTimeout=12000

DataDir=/var/lib/mysql-cluster
BackupDataDir=/var/lib/mysql-cluster/backup

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=10.30.28.10

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=10.30.28.11

[NDBD]
# IP address of the second storage node
HostName=10.30.28.12

# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]

Here are some numbers for what I’ve been able to get out of the MySQL cluster. One iteration of my code results in:
12 selects
10 inserts
10 updates
4 deletes
This might sound like a lot, but it is a service-oriented application that relies on persistent JBoss queues and demands 100% redundancy: even if the server dies, it picks up where it left off and does not lose any data.

My first benchmark was 10 000 iterations, which is the current load I can expect on the application over the course of an hour, i.e.:
120 000 selects
100 000 inserts
100 000 updates
40 000 deletes
This took a total of 93 seconds, producing just over 100 iterations per second.

The second test, to really put the application and environment through their paces, was to run 100 000 iterations. This test completed in roughly 15 minutes, producing around 110 iterations per second. This is about 10x the load we’d expect to see during the course of one hour, but it is nice to see the setup has the ability to grow quite a bit before we need more hardware. :)

I am currently working on setting up a third test, which will run 100 000 iterations every hour for 10 hours, producing one million rows of data.



Source: http://bieg.wordpress.com/2008/10/06/mysql-clustering-config/


2010. 9. 29. 10:56 Brain Trainning/DataBase

MySQL Clustering on Ubuntu

Posted August 3, 2008 | Filed under: Clustering, MySQL, Uncategorized

I spent some time getting MySQL clustering working on Ubuntu after reading a guide on Howto Forge. That guide, however, went into the details of compiling and installing MySQL from source, so I’m creating this to show the steps needed to get it set up on a fresh Ubuntu installation.

For a correct setup you will need 3 machines. The first machine will serve as the management node, and the other two will be storage nodes.

At the time of writing, the current stable version of Ubuntu is 8.04.1, and the MySQL version installed is 5.0.51.

During the configuration I log onto the machines and use the command

sudo su -

to gain permanent root access, saving myself from having to type sudo in front of every command. Use your own discretion.

Installing MySQL

Using apt this is straightforward. Just type the following command on all three machines to install the MySQL server.

apt-get install mysql-server

When asked, set the root password for the MySQL database. You’ll need to remember this one. Once the MySQL server is installed, we’ll proceed to configure the management node.

Configuring the Management Node

Create and edit the file /etc/mysql/ndb_mgmd.cnf. Copy and paste the text below, changing the IP addresses to match your setup as necessary.

[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=80M    # How much memory to allocate for data storage
IndexMemory=18M   # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=192.168.1.5

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.1.6
DataDir=/var/lib/mysql-cluster
BackupDataDir=/var/lib/mysql-cluster/backup
DataMemory=512M

[NDBD]
# IP address of the second storage node
HostName=192.168.1.7
DataDir=/var/lib/mysql-cluster
BackupDataDir=/var/lib/mysql-cluster/backup
DataMemory=512M

# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]

Configuring the Storage Nodes

As you can see in the file we created in the previous step, the cluster will be using /var/lib/mysql-cluster on the storage machines. This path is created when you install the MySQL server, but it is owned by root. We want to create the backup directory and change ownership to mysql.

mkdir /var/lib/mysql-cluster/backup

chown -R mysql:mysql /var/lib/mysql-cluster

Now we’ll need to edit the MySQL configuration so that the storage nodes will communicate with the Management Node.

Edit /etc/mysql/my.cnf

Search for [mysqld] and add the following.

[mysqld]

ndbcluster

# IP address of the cluster management node

ndb-connectstring=192.168.1.5

Then scroll down to the bottom until you see [MYSQL_CLUSTER]. Uncomment the line and edit it so it looks like:

[MYSQL_CLUSTER]

ndb-connectstring=192.168.1.5

The reason the connect string is found twice in the MySQL config file is that one is used by the MySQL server and the other is used by the ndb data node daemon. Save the changes to the file.

Make sure you complete the changes on both data nodes.

Start the Management Node

Start the Management Node using

/etc/init.d/mysql-ndb-mgm restart

The process shouldn’t be running yet, but using restart doesn’t hurt. Once it is started, we can access the management console using the command ndb_mgm. At the prompt, type show; and you will see:

ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]    2 node(s)
id=2 (not connected, accepting connect from 192.168.1.6)
id=3 (not connected, accepting connect from 192.168.1.7)

[ndb_mgmd(MGM)]    1 node(s)
id=1    @192.168.1.5  (Version: 5.0.51)

[mysqld(API)]    2 node(s)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)

As you can see the management node is waiting for connections from the data nodes.

Start the Data Nodes

On the data nodes, issue the commands

/etc/init.d/mysql restart

/etc/init.d/mysql-ndb restart

Go back to the management node and type show; again; now you should see something similar to:

id=2    @192.168.1.6  (Version: 5.0.51, starting, Nodegroup: 0)
id=3    @192.168.1.7  (Version: 5.0.51, starting, Nodegroup: 0)

Once they have started properly, the show command should display

ndb_mgm> show;
Cluster Configuration
---------------------
[ndbd(NDB)]    2 node(s)
id=2    @192.168.1.6  (Version: 5.0.51, Nodegroup: 0, Master)
id=3    @192.168.1.7  (Version: 5.0.51, Nodegroup: 0)

[ndb_mgmd(MGM)]    1 node(s)
id=1    @192.168.1.5  (Version: 5.0.51)

[mysqld(API)]    2 node(s)
id=4    @192.168.1.7  (Version: 5.0.51)
id=5    @192.168.1.6  (Version: 5.0.51)

Congratulations, your cluster is now set up.

Testing the cluster

Issue the following on both data nodes to create the test database. Since clustering is done on a per-table basis in MySQL, we have to create the database manually on both data nodes.

$> mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.0.51a-3ubuntu5.1 (Ubuntu)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database clustertest;
Query OK, 1 row affected (0.00 sec)

Once this is done, on ONE of the data nodes create a test table and add an entry.

mysql> use clustertest;
Database changed
mysql> create table test (i int) engine=ndbcluster;
Query OK, 0 rows affected (0.71 sec)

mysql> insert into test values (1);
Query OK, 1 row affected (0.05 sec)

mysql> select * from test;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.03 sec)

We’ve just created a table test, added a value to it, and made sure the table contains one entry. Note that engine=ndbcluster must be used to let MySQL know this table should be clustered among the data nodes. Let’s make sure the table is in fact created on the other data node and contains one entry.

mysql> use clustertest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+-----------------------+
| Tables_in_clustertest |
+-----------------------+
| test                  |
+-----------------------+
1 row in set (0.01 sec)

mysql> select * from test;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.04 sec)

As you can see, the cluster is working.

Moving an existing database to the cluster

Now that we have the cluster working, we can easily change an existing database to be clustered. All you need to do is run the following command on each of the tables.

alter table my_test_table engine=ndbcluster;

The table and all its data will be copied to the data nodes, and you can now access and change them through any node in the cluster. Very simple.



