Posted
Filed under Mysql
ALTER TABLE `tb_webstat_info`
 CHANGE COLUMN `wi_code` `wi_code` VARCHAR(10) binary DEFAULT NULL COMMENT '모바일명세표_코드' AFTER `wi_rnum`;

Either define the column as VARCHAR(10) BINARY, as above,
or give the string column a binary collation such as utf8_bin, so that comparisons become case-sensitive.
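If you go the collation route instead, the same column can be given a binary collation explicitly. A minimal sketch, assuming the column uses the utf8 character set (adjust the charset to whatever the table actually uses):

ALTER TABLE `tb_webstat_info`
  MODIFY COLUMN `wi_code` VARCHAR(10)
  CHARACTER SET utf8 COLLATE utf8_bin
  DEFAULT NULL COMMENT '모바일명세표_코드';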
2018/09/28 18:12
Posted
Filed under Mysql
Plenty of good material on installing a MariaDB Galera cluster can be found with a Google search.
Once you actually operate one, however, you also have to handle the failures that occur in the DB.
The percona.com article below walks through the failures that can occur in different situations and explains the cases you can run into during operation quite well.
When a server goes down forcibly (or has to be forced down), or when the server will not start because of a power outage during testing, some awkward problems arise.
In the case of a sudden power outage, the first thing to check is the following.

[When the server was forcibly shut down / went down and will not start]
grastate.dat
safe_to_bootstrap: 0      // change it to 1 on the node you bootstrap from
Making that change (and then bootstrapping from that node) resolves most of these cases.

Other references for failure recovery

[Original]
https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/

Galera cluster recovery methods, scenario by scenario

Galera replication for MySQL brings not only new, great features to our ecosystem, but also introduces completely new maintenance techniques. Are you concerned about adding such new complexity to your MySQL environment? Perhaps that concern is unnecessary.

I am going to present here some simple tips that hopefully will let fresh Galera users prevent headaches when there is a need to recover part of or a whole cluster in certain situations. I used Percona XtraDB Cluster (a project based on Percona Server and the Galera library + MySQL extensions from Codership) to prepare this post, but most if not all of the scenarios should also apply to any solution based on the MySQL+Galera tandem you actually chose, whether that is binaries from Codership, MariaDB Galera Cluster, or maybe your own builds.

Unlike standard MySQL replication, a PXC cluster acts like one logical entity, which takes care of each node's status and consistency as well as the status of the cluster as a whole. This allows it to maintain much better data integrity than you may expect from traditional asynchronous replication, while allowing safe writes on multiple nodes at the same time. This comes, though, at the price of more possible scenarios where the database service is stopped with no node able to serve requests.

Let's assume the simplest case: a cluster of nodes A, B and C, and a few possible scenarios where some or all nodes are out of service. What may happen, and what do we have to do to bring them (or the whole cluster) back up?

Scenario 1

Node A is gracefully stopped. Likely for the purpose of maintenance, a configuration change, etc.
In this case the other nodes receive a "good bye" message from that node, hence the cluster size is reduced and some properties like quorum calculation or auto increment are automatically changed. Once we start node A again, it will join the cluster based on its wsrep_cluster_address setting in my.cnf. This process is much different from normal replication – the joiner node won't serve any requests until it is again fully synchronized with the cluster, so connecting to its peers isn't enough; state transfer must succeed first. If the writeset cache (gcache.size) on nodes B and/or C still has all the transactions that were executed during the time this node was down, joining will be possible via (usually fast and light) IST. Otherwise, a full SST will be needed, which in fact is a full binary data snapshot copy. Hence it may be important here to determine the best donor, as shown in this article. If IST is impossible due to missing transactions in the donor's gcache, the fallback decision is made by the donor and SST is started automatically instead.
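To watch a joiner work through IST/SST, the standard Galera status variables can be queried on any node. A small sketch (these are generic wsrep status variables, not anything specific to this setup):

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';          -- number of nodes the cluster currently sees
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';   -- e.g. Joining, Joined, Donor/Desynced, Synced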

Scenario 2

Nodes A and B are gracefully stopped. Similar to the previous case, the cluster size is reduced to 1, hence even the single remaining node C forms the primary component and serves client requests. To get the nodes back into the cluster, you just need to start them. However, node C will be switched to the "Donor/Desynced" state, as it will have to provide state transfer to at least the first joining node. It is still possible to read/write to it during that process, but it may be much slower, depending on how large a state transfer it needs to send. Also, some load balancers may consider the donor node not operational and remove it from the pool. So it is best to avoid a situation where only one node is up.

Note though, if you restart A and then B in that order, you may want to make sure B won't use A as the state transfer donor, as A may not have all the needed writesets in its gcache. So just specify node C as the donor this way ("nodeC" is the name you specify with the wsrep_node_name variable):

service mysql start --wsrep_sst_donor=nodeC

Scenario 3

All three nodes are gracefully stopped. The cluster is deformed. In this case, the problem is how to initialize it again. Here it is important to know that during a clean shutdown, a PXC node writes its last executed position into the grastate.dat file. By comparing the seqno number inside, you will see which node is the most advanced one (most likely the last one stopped). The cluster must be bootstrapped using this node, otherwise nodes that had a more advanced position will have to perform a full SST to join the cluster initialized from the less advanced one (and some transactions will be lost). To bootstrap the first node, invoke the startup script like this:

/etc/init.d/mysql bootstrap-pxc

or

service mysql bootstrap-pxc

or

service mysql start --wsrep_new_cluster

or

service mysql start --wsrep-cluster-address="gcomm://"

or, in packages using the systemd service manager (CentOS 7 at the moment):

systemctl start mysql@bootstrap.service

In older PXC versions, to bootstrap the cluster you had to edit my.cnf and replace the previous wsrep_cluster_address line with an empty value like this: wsrep_cluster_address=gcomm:// and start mysql normally. More details can be found here.

Please note that even if you bootstrap from the most advanced node, so that the other nodes have a lower sequence number, they will still have to join via a full SST, because the Galera Cache is not retained on restart. For that reason, it is recommended to stop writes to the cluster *before* its full shutdown, so that all nodes stop at the same position. Edit: this changed in Galera 3.19 thanks to the gcache.recover option.

Scenario 4

Node A disappears from the cluster. By disappear I mean power outage, hardware failure, kernel panic, mysqld crash, kill -9 on the mysqld pid, OOM killer, etc. The two remaining nodes notice that the connection to node A is down and will be trying to re-connect to it. After some timeouts, both agree that node A is really down and remove it "officially" from the cluster. Quorum is maintained (2 out of 3 nodes are up), so no service disruption happens. After restarting, A will join automatically the same way as in scenario 1.

Scenario 5

Nodes A and B disappear. Node C is not able to form a quorum alone, so the cluster switches into non-primary mode, in which MySQL refuses to serve any SQL query. In this state, the mysqld process on C is still running and you can connect to it, but any statement related to data fails with:

mysql> select * from test.t1;
ERROR 1047 (08S01): Unknown command

Actually, reads will be possible for a moment, until C decides that it cannot reach A and B, but no new writes will be allowed, thanks to the certification-based replication in Galera. This is what we are going to see in the remaining node's log:

140814 0:42:13 [Note] WSREP: commit failed for reason: 3
140814 0:42:13 [Note] WSREP: conflict state: 0
140814 0:42:13 [Note] WSREP: cluster conflict due to certification failure for threads:
140814 0:42:13 [Note] WSREP: Victim thread:
THD: 7, mode: local, state: executing, conflict: cert failure, seqno: -1
SQL: insert into t values (1)

The single node C is then waiting for its peers to show up again, and in some cases, if that happens – for example when there was a network outage and those nodes were up all the time – the cluster will be formed again automatically. Also, if nodes B and C were just network-severed from the first node but can still reach each other, they will keep functioning, as they still form the quorum. If A and B crashed (due to data inconsistency, a bug, etc.) or are off due to a power outage, you need to take manual action to enable the primary component on node C before you can bring A and B back. This way, we tell node C: "Hey, you can now form a new cluster alone, forget A and B!". The command to do this is:

SET GLOBAL wsrep_provider_options='pc.bootstrap=true';

However, you should double check in order to be very sure the other nodes are really down before doing that! Otherwise, you will most likely end up with two clusters having different data.
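One way to double-check from the SQL side, both before and after forcing the primary component, is to look at the standard Galera status variables (a generic sketch):

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';   -- 'non-Primary' while isolated, 'Primary' after bootstrap
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';     -- how many members this node can currently see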

Scenario 6

All nodes went down without a proper shutdown procedure. Such a situation may happen in case of a datacenter power failure, when hitting some MySQL or Galera bug that leads to a crash on all nodes, but also as a result of data consistency being compromised, where the cluster detects that each node has different data. In each of those cases, the grastate.dat file is not updated and does not contain a valid sequence number (seqno). It may look like this:

cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 220dcdcb-1629-11e4-add3-aec059ad3734
seqno: -1
cert_index:

In this case, we are not sure whether all nodes were consistent with each other, hence it is crucial to find the most advanced one in order to bootstrap the cluster using it. Before starting the mysql daemon on any node, you have to extract the last sequence number by checking its transactional state. You can do it this way:

[root@percona3 ~]# mysqld_safe --wsrep-recover
140821 15:57:15 mysqld_safe Logging to '/var/lib/mysql/percona3_error.log'.
140821 15:57:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
140821 15:57:15 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.6bUIqM' --pid-file='/var/lib/mysql/percona3-recover.pid'
140821 15:57:17 mysqld_safe WSREP: Recovered position 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:2
140821 15:57:19 mysqld_safe mysqld from pid file /var/lib/mysql/percona3.pid ended

So the last committed transaction sequence number on this node was 2. Now you just need to bootstrap from the latest node first and then start the others.

However, the above procedure won’t be needed in the recent Galera versions (3.6+?), available since PXC 5.6.19. There is a new option – pc.recovery (enabled by default), which saves the cluster state into a file named gvwstate.dat on each member node. As the variable name says (pc – primary component), it saves only a cluster being in PRIMARY state. An example content of that file may look like this:

cat /var/lib/mysql/gvwstate.dat
my_uuid: 76de8ad9-2aac-11e4-8089-d27fd06893b9
#vwbeg
view_id: 3 6c821ecc-2aac-11e4-85a5-56fe513c651f 3
bootstrap: 0
member: 6c821ecc-2aac-11e4-85a5-56fe513c651f 0
member: 6d80ec1b-2aac-11e4-8d1e-b2b2f6caf018 0
member: 76de8ad9-2aac-11e4-8089-d27fd06893b9 0
#vwend

We can see a three-node cluster above with all members up. Thanks to this new feature, in the case of a power outage in our datacenter, once power is back the nodes will read the last state on startup and will try to restore the primary component once all the members start to see each other again. This makes the PXC cluster recover automatically from being powered down, without any manual intervention! In the logs we will see:

140823 15:28:55 [Note] WSREP: restore pc from disk successfully
(...)
140823 15:29:59 [Note] WSREP: declaring 6c821ecc at tcp://192.168.90.3:4567 stable
140823 15:29:59 [Note] WSREP: declaring 6d80ec1b at tcp://192.168.90.4:4567 stable
140823 15:29:59 [Warning] WSREP: no nodes coming from prim view, prim not possible
140823 15:29:59 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 2, memb_num = 3
140823 15:29:59 [Note] WSREP: Flow-control interval: [28, 28]
140823 15:29:59 [Note] WSREP: Received NON-PRIMARY.
140823 15:29:59 [Note] WSREP: New cluster view: global state: 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:11, view# -1: non-Primary, number of nodes: 3, my index: 2, protocol version -1
140823 15:29:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
140823 15:29:59 [Note] WSREP: promote to primary component
140823 15:29:59 [Note] WSREP: save pc into disk
140823 15:29:59 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = yes, my_idx = 2, memb_num = 3
140823 15:29:59 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
140823 15:29:59 [Note] WSREP: clear restored view
(...)
140823 15:29:59 [Note] WSREP: Bootstrapped primary 00000000-0000-0000-0000-000000000000 found: 3.
140823 15:29:59 [Note] WSREP: Quorum results:
version = 3,
component = PRIMARY,
conf_id = -1,
members = 3/3 (joined/total),
act_id = 11,
last_appl. = -1,
protocols = 0/6/2 (gcs/repl/appl),
group UUID = 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a
140823 15:29:59 [Note] WSREP: Flow-control interval: [28, 28]
140823 15:29:59 [Note] WSREP: Restored state OPEN -> JOINED (11)
140823 15:29:59 [Note] WSREP: New cluster view: global state: 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:11, view# 0: Primary, number of nodes: 3, my index: 2, protocol version 2
140823 15:29:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
140823 15:29:59 [Note] WSREP: REPL Protocols: 6 (3, 2)
140823 15:29:59 [Note] WSREP: Service thread queue flushed.
140823 15:29:59 [Note] WSREP: Assign initial position for certification: 11, protocol version: 3
140823 15:29:59 [Note] WSREP: Service thread queue flushed.
140823 15:29:59 [Note] WSREP: Member 1.0 (percona3) synced with group.
140823 15:29:59 [Note] WSREP: Member 2.0 (percona1) synced with group.
140823 15:29:59 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 11)
140823 15:29:59 [Note] WSREP: Synchronized with group, ready for connections

Scenario 7

The cluster lost its primary state due to a split-brain situation. For the purpose of this example, let's assume we have a cluster formed from an even number of nodes – six – where three of them are in one location, the other three in a second location (datacenter), and network connectivity is broken between them. Of course the best practice is to avoid such a topology: if you can't have an odd number of real nodes, you can at least use an additional arbitrator (garbd) node or set a higher pc.weight on some nodes. But when a split brain happens anyway, and none of the separated groups can maintain the quorum, all nodes stop serving requests and both parts of the cluster just keep trying to re-connect. If you want to restore the service even before the network link is restored, you can make one of the groups primary again using the same command as in scenario 5:

SET GLOBAL wsrep_provider_options='pc.bootstrap=true';

After that, you are able to work on the manually restored part of the cluster, and the second half should be able to re-join automatically using incremental state transfer (IST) once the network link is restored. But beware: if you set the bootstrap option on both of the separated parts, you will end up with two living cluster instances, with data likely diverging from each other. Restoring the network link in that case won't make them re-join until the nodes are restarted and try to re-connect to the members specified in the configuration file. Then, as the Galera replication model truly cares about data consistency, once an inconsistency is detected, nodes that cannot apply a row change due to different data will perform an emergency shutdown, and the only way to bring them back into the cluster will be via a full SST.

I hope I covered most of the possible failure scenarios of Galera-based clusters, and made the recovery procedures a bit clearer.

2018/04/03 11:06
Posted
Filed under Mysql
How to use the mysqlcheck utility (analyze, optimize, repair)

This utility analyzes, optimizes, and repairs an entire database (or all databases) in one pass from the command line; keeping statistics fresh also helps the cost-based optimizer.
File location: mysql/bin
----------------------------------------------------
Check and auto-repair everything (check and repair)
mysqlcheck -Aa --auto-repair -u root -p
mysqlcheck -Ao --auto-repair -u root -p
----------------------------------------------------
-A, --all-databases
-a, --analyze
-o, --optimize
-P  port number
----------------------------------------------------
Check and auto-repair a specific database only
mysqlcheck --auto-repair -u root -p database_name
----------------------------------------------------
Check and auto-repair a specific table only
mysqlcheck --auto-repair -u root -p database_name table_name
----------------------------------------------------
Alternate command names (equivalent to mysqlcheck with the given option)
mysqlrepair = mysqlcheck --repair
mysqlanalyze = mysqlcheck --analyze
mysqloptimize = mysqlcheck --optimize
----------------------------------------------------
Running the statements individually from inside the mysql client
mysql> use database_name;
mysql> show tables;
mysql> check table table_name;
mysql> repair table table_name;
mysql> analyze table table_name;
mysql> optimize table table_name;
----------------------------------------------------
##########################
OPTIMIZE TABLE script (PHP)
##########################
// Legacy mysql_* API: optimize every table in the given database.
$db = 'database_name';
$tables = mysql_list_tables($db);                         // result set of table names
while (list($table_name) = mysql_fetch_array($tables)) {  // first column is the table name
    $sql = "OPTIMIZE TABLE $table_name";
    mysql_query($sql) or exit(mysql_error());             // stop on the first error
}
2018/03/31 13:10
Posted
Filed under Mysql

METHOD
CHECK, OPTIMIZE, ANALYZE, or REPAIR (the partition-level counterparts of CHECK TABLE, OPTIMIZE TABLE, ANALYZE TABLE, and REPAIR TABLE)

HOWTO

ALTER TABLE tb_cycle_data CHECK PARTITION tb_cycle_data2013;

ALTER TABLE {TABLE-NAME} {METHOD} PARTITION {PARTITION-NAME};


22.3.4 Maintenance of Partitions

A number of table and partition maintenance tasks can be carried out using SQL statements intended for such purposes on partitioned tables in MySQL 5.7.

Table maintenance of partitioned tables can be accomplished using the statements CHECK TABLE, OPTIMIZE TABLE, ANALYZE TABLE, and REPAIR TABLE, which are supported for partitioned tables.

You can use a number of extensions to ALTER TABLE for performing operations of this type on one or more partitions directly, as described in the following list:

  • Rebuilding partitions.  Rebuilds the partition; this has the same effect as dropping all records stored in the partition, then reinserting them. This can be useful for purposes of defragmentation.

    Example:

    ALTER TABLE t1 REBUILD PARTITION p0, p1;
  • Optimizing partitions.  If you have deleted a large number of rows from a partition or if you have made many changes to a partitioned table with variable-length rows (that is, having VARCHAR, BLOB, or TEXT columns), you can use ALTER TABLE ... OPTIMIZE PARTITION to reclaim any unused space and to defragment the partition data file.

    Example:

    ALTER TABLE t1 OPTIMIZE PARTITION p0, p1;

    Using OPTIMIZE PARTITION on a given partition is equivalent to running CHECK PARTITION, ANALYZE PARTITION, and REPAIR PARTITION on that partition.

    Some MySQL storage engines, including InnoDB, do not support per-partition optimization; in these cases, ALTER TABLE ... OPTIMIZE PARTITION analyzes and rebuilds the entire table, and causes an appropriate warning to be issued. (Bug #11751825, Bug #42822) Use ALTER TABLE ... REBUILD PARTITION and ALTER TABLE ... ANALYZE PARTITION instead, to avoid this issue.

  • Analyzing partitions.  This reads and stores the key distributions for partitions.

    Example:

    ALTER TABLE t1 ANALYZE PARTITION p3;
  • Repairing partitions.  This repairs corrupted partitions.

    Example:

    ALTER TABLE t1 REPAIR PARTITION p0,p1;

    Normally, REPAIR PARTITION fails when the partition contains duplicate key errors. In MySQL 5.7.2 and later, you can use ALTER IGNORE TABLE with this option, in which case all rows that cannot be moved due to the presence of duplicate keys are removed from the partition (Bug #16900947).

  • Checking partitions.  You can check partitions for errors in much the same way that you can use CHECK TABLE with nonpartitioned tables.

    Example:

    ALTER TABLE trb3 CHECK PARTITION p1;

    This command will tell you if the data or indexes in partition p1 of table trb3 are corrupted. If this is the case, use ALTER TABLE ... REPAIR PARTITION to repair the partition.

    Normally, CHECK PARTITION fails when the partition contains duplicate key errors. In MySQL 5.7.2 and later, you can use ALTER IGNORE TABLE with this option, in which case the statement returns the contents of each row in the partition where a duplicate key violation is found. Only the values for the columns in the partitioning expression for the table are reported. (Bug #16900947)

Each of the statements in the list just shown also supports the keyword ALL in place of the list of partition names. Using ALL causes the statement to act on all partitions in the table.
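For example, to repair every partition of the t1 table used in the examples above with a single statement:

ALTER TABLE t1 REPAIR PARTITION ALL;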

The use of mysqlcheck and myisamchk is not supported with partitioned tables.

In MySQL 5.7, you can also truncate partitions using ALTER TABLE ... TRUNCATE PARTITION. This statement can be used to delete all rows from one or more partitions in much the same way that TRUNCATE TABLE deletes all rows from a table.
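For instance, reusing the t1/p0 names from the examples above, emptying a single partition looks like this:

ALTER TABLE t1 TRUNCATE PARTITION p0;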

ALTER TABLE ... TRUNCATE PARTITION ALL truncates all partitions in the table.

Prior to MySQL 5.7.2, ANALYZE, CHECK, OPTIMIZE, REBUILD, REPAIR, and TRUNCATE operations were not permitted on subpartitions (Bug #14028340, Bug #65184).

[Original]
https://dev.mysql.com/doc/refman/5.7/en/partitioning-maintenance.html

2018/01/09 18:23
Posted
Filed under Mysql
Turning vertical data into horizontal data (rows → columns)

This post explains how to turn vertically structured data into horizontal data. First, compare the data before and after.

Data before processing

+-----+------+-------+-------+
| idx | name | class | score |
+-----+------+-------+-------+
|   7 | choi | kor   |    90 |
|   8 | choi | eng   |    80 |
|   9 | choi | math  |    70 |
|  10 | kim  | kor   |    60 |
|  11 | kim  | eng   |    85 |
|  12 | kim  | math  |   100 |
+-----+------+-------+-------+

Data after processing

+------+------+------+------+
| name | kor  | eng  | math |
+------+------+------+------+
| choi |   90 |   80 |   70 |
| kim  |   60 |   85 |  100 |
+------+------+------+------+

Looking at the processed data, you can see that the kor, eng, and math items are now laid out horizontally, keyed by name.

Creating sample vertical data

Let's create some fictitious sample data to use as the vertical data.

CREATE TABLE `score` (
  `idx` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(32) NOT NULL,
  `class` varchar(32) NOT NULL,
  `score` int(11) NOT NULL,
  PRIMARY KEY (`idx`)
);
 
INSERT INTO score (name, class, score) VALUES ('choi', 'kor', 90);
INSERT INTO score (name, class, score) VALUES ('choi', 'eng', 80);
INSERT INTO score (name, class, score) VALUES ('choi', 'math', 70);
 
INSERT INTO score (name, class, score) VALUES ('kim', 'kor', 60);
INSERT INTO score (name, class, score) VALUES ('kim', 'eng', 85);
INSERT INTO score (name, class, score) VALUES ('kim', 'math', 100);

The vertical data has now been created.

mysql> SELECT * FROM score;
+-----+------+-------+-------+
| idx | name | class | score |
+-----+------+-------+-------+
|   7 | choi | kor   |    90 |
|   8 | choi | eng   |    80 |
|   9 | choi | math  |    70 |
|  10 | kim  | kor   |    60 |
|  11 | kim  | eng   |    85 |
|  12 | kim  | math  |   100 |
+-----+------+-------+-------+

Creating columns using aliases

To turn the vertical data into horizontal data made up of kor, eng, and math, we create K, E, and M columns using aliases: the kor rows go into K, the eng rows into E, and the math rows into M.

Columns for which no value is stored end up as NULL.

SELECT
      name,
      CASE WHEN class = 'kor' THEN score END AS K,
      CASE WHEN class = 'eng' THEN score END AS E,
      CASE WHEN class = 'math' THEN score END AS M
FROM score;
 
+------+------+------+------+
| name | K    | E    | M    |
+------+------+------+------+
| choi |   90 | NULL | NULL |
| choi | NULL |   80 | NULL |
| choi | NULL | NULL |   70 |
| kim  |   60 | NULL | NULL |
| kim  | NULL |   85 | NULL |
| kim  | NULL | NULL |  100 |
+------+------+------+------+

The data is still in vertical form, but we are now ready to turn it into horizontal data.

Creating the horizontal data

Using the query we built above, we create a derived table.
We GROUP BY the derived table on name, and use the aliased K, E, and M columns to build the horizontal data.
The K, E, and M columns are aggregated with SUM, and aliases give them the final column names kor, eng, and math.

SELECT name, SUM(K) as kor, SUM(E) as eng, SUM(M) as math FROM
(
  SELECT
        name,
        CASE WHEN class = 'kor' THEN score END AS K,
        CASE WHEN class = 'eng' THEN score END AS E,
        CASE WHEN class = 'math' THEN score END AS M
  FROM score
)AS T GROUP BY name;
 
+------+------+------+------+
| name | kor  | eng  | math |
+------+------+------+------+
| choi |   90 |   80 |   70 |
| kim  |   60 |   85 |  100 |
+------+------+------+------+
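The same pivot can also be written without the derived table by folding the CASE expressions directly into the aggregates. A sketch against the same score table (since each name/class pair occurs only once here, MAX would work just as well as SUM):

SELECT name,
       SUM(CASE WHEN class = 'kor'  THEN score END) AS kor,
       SUM(CASE WHEN class = 'eng'  THEN score END) AS eng,
       SUM(CASE WHEN class = 'math' THEN score END) AS math
FROM score
GROUP BY name;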


[Original] http://blog.devez.net/307

Reference URL

http://stackoverflow.com/questions/1241178/mysql-rows-to-columns

2018/01/09 10:57
Posted
Filed under Mysql

[Reference]: https://dev.mysql.com/doc/refman/5.7/en/alter-table-partition-operations.html

========================================================================================================
1. Check partition support

SHOW VARIABLES LIKE '%partition%';

SHOW PLUGINS;

 

2. Add partitions

CREATE TABLE `table_name` (
  ....
)
PARTITION BY RANGE(partition_function(`column_name`)) (
  PARTITION `partition1` VALUES LESS THAN (value),
  PARTITION `partition2` VALUES LESS THAN (value),
  PARTITION `partition3` VALUES LESS THAN (MAXVALUE)
);

 

Adding partitioning to an existing table (after CREATE TABLE)

ALTER TABLE `table_name` PARTITION BY RANGE(partition_function(`column_name`)) (
  PARTITION `partition1` VALUES LESS THAN (value),
  PARTITION `partition2` VALUES LESS THAN (value),
  PARTITION `partition3` VALUES LESS THAN (MAXVALUE)
);

 

# THAN (expression) can also be written as THAN (function(value))
# See the link below for the functions that can be used in the expression and for the available partition types
http://wyseburn.tistory.com/entry/MySQL-%ED%8C%8C%ED%8B%B0%EC%85%94%EB%8B%9D
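A concrete sketch of the template above, using a hypothetical log_data table partitioned by year (note that with partitioning, every unique key, including the primary key, must contain the partitioning column, hence the composite primary key):

CREATE TABLE `log_data` (
  `idx` INT NOT NULL AUTO_INCREMENT,
  `reg_date` DATETIME NOT NULL,
  PRIMARY KEY (`idx`, `reg_date`)
)
PARTITION BY RANGE (YEAR(`reg_date`)) (
  PARTITION p2016 VALUES LESS THAN (2017),
  PARTITION p2017 VALUES LESS THAN (2018),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);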

 

3. Splitting a partition

ALTER TABLE `table_name` REORGANIZE PARTITION `partition3` INTO (
  PARTITION `partition3` VALUES LESS THAN (value),
  PARTITION `partition4` VALUES LESS THAN (MAXVALUE)
);
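Continuing the hypothetical log_data example above, splitting the catch-all pmax partition into a 2018 partition plus a new catch-all:

ALTER TABLE `log_data` REORGANIZE PARTITION pmax INTO (
  PARTITION p2018 VALUES LESS THAN (2019),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);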

 

4. Dropping a partition

ALTER TABLE `table_name` DROP PARTITION `partition1`;

 

5. Partition maintenance

ALTER TABLE `table_name` REBUILD PARTITION `partition_name`;
ALTER TABLE `table_name` OPTIMIZE PARTITION `partition_name`;
ALTER TABLE `table_name` ANALYZE PARTITION `partition_name`;
ALTER TABLE `table_name` REPAIR PARTITION `partition_name`;
ALTER TABLE `table_name` CHECK PARTITION `partition_name`;

 

6. Removing partitioning entirely

ALTER TABLE `table_name` REMOVE PARTITIONING;



Source: http://wyseburn.tistory.com/entry/MySql-파티셔닝-정리
2018/01/08 15:31
Posted
Filed under Mysql


For production, use = 2 (stability).
For large bulk-insert work on the DB, temporarily set it to 0.

Run from the MariaDB console:
set global innodb_flush_log_at_trx_commit = 0;


innodb_flush_log_at_trx_commit = 0
innodb_flush_log_at_trx_commit = 1
innodb_flush_log_at_trx_commit = 2

  • If 0, a sudden crash of MySQL or the OS can lose up to the last second of transactions.
  • If 1, it is safe.
  • If 2, a sudden crash of the OS can lose up to the last second of transactions; but if only MySQL fails, the data has already been handed off to the OS, so it can survive.
  • Depending on the value, the performance difference can be enormous.
    • A team that used MySQL for a huge volume of logs once got a 7x speedup just by changing the value from 1 to 0.
    • For a read-only slave, for a service that can tolerate losing up to about one second of transactions, or for a server that only stores logs, the value can be lowered from 1 to 0.
    • It is one of the quickest and easiest parameters for tuning MySQL.
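A small sketch of the temporary toggle around a bulk load, restoring the operational value afterwards (using 2 as the "normal" value, per the note at the top of this post):

SET GLOBAL innodb_flush_log_at_trx_commit = 0;   -- relax flushing before the bulk insert
-- ... run the bulk INSERTs ...
SET GLOBAL innodb_flush_log_at_trx_commit = 2;   -- restore the operational value
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';   -- confirm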
2017/09/27 09:15
Posted
Filed under Mysql

[Original] https://sites.google.com/site/dist777study/home/db/mysql/wait_timeout
SET @@GLOBAL.wait_timeout=300;
Or add the following to my.cnf:
wait_timeout=300
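To confirm the value new connections will pick up (sessions that were already open keep the session value they started with):

SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SHOW SESSION VARIABLES LIKE 'wait_timeout';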

2017/09/20 14:21
Posted
Filed under Mysql

Checking whether date ranges overlap:

(('user_start' <= db_start AND db_start <= 'user_end') OR ('user_start' <= db_end AND db_end <= 'user_end') OR (db_start <= 'user_start' AND 'user_end' <= db_end))
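As a query, the same check can be written against a hypothetical reservation table with start_date/end_date columns and the user input in @s/@e; for well-formed ranges, the three OR'd conditions above collapse to the standard two-sided overlap test:

SET @s = '2017-07-01', @e = '2017-07-31';

SELECT *
FROM reservation
WHERE start_date <= @e    -- the stored range starts before the user range ends
  AND end_date   >= @s;   -- and it ends after the user range starts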

2017/07/18 13:38
Posted
Filed under Mysql


-- Check the MariaDB max connection setting

SHOW VARIABLES LIKE '%max_connection%';

-- List the clients currently connected to MariaDB

SHOW PROCESSLIST;

-- Adjust the MariaDB max connection setting

SET GLOBAL max_connections = 10000;

-- Status values
show status like '%CONNECT%';

Threads_connected      <-- number of connections currently open
Max_used_connections   <-- peak number of simultaneous connections since the server started
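Both counters can also be pulled in one statement, since SHOW STATUS accepts a WHERE clause on Variable_name:

SHOW GLOBAL STATUS WHERE Variable_name IN ('Threads_connected', 'Max_used_connections');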

2016/10/13 09:45