Posted
Filed under Mysql
Installation guides for a MariaDB Galera Cluster are easy to find with a Google search.
Once you actually run one in production, however, you also have to handle the failures
that occur in the DB. The percona.com article below describes the failures that can
arise in different situations and explains them well from an operational point of view.
The awkward problems come up when a server goes down (or has to be taken down) forcibly,
or when the servers will not start after a power outage during testing.
The first thing to check after a sudden power outage is the following.

[Server forcibly shut down / will not start after going down]
grastate.dat
safe_to_bootstrap: 0      // change to 1 on the node you will bootstrap from
Changing this value as above resolves the problem in most cases.
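A minimal sketch of the usual recovery flow after an unclean shutdown of the whole cluster (it assumes the default datadir /var/lib/mysql; the bootstrap command differs by distribution: galera_new_cluster on systemd-based MariaDB, or service mysql bootstrap-pxc on PXC):

# 1. On each node, check the saved state and pick the node with the highest seqno.
cat /var/lib/mysql/grastate.dat

# 2. On that node only, mark it safe to bootstrap (0 -> 1) and start it as the first node.
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
galera_new_cluster

# 3. Start the remaining nodes normally; they rejoin via IST or SST.
service mysql start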

Other reference material for failure recovery

[Original]
https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/

Galera Cluster recovery methods by scenario

Galera replication for MySQL brings not only new, great features to our ecosystem, but also introduces completely new maintenance techniques. Are you concerned about adding such new complexity to your MySQL environment? Perhaps that concern is unnecessary.

I am going to present here some simple tips that hopefully will let fresh Galera users avoid headaches when they need to recover part of, or a whole, cluster in certain situations. I used Percona XtraDB Cluster (a project based on Percona Server and the Galera library + MySQL extensions from Codership) to prepare this post, but most if not all of the scenarios should also apply to any solution based on the MySQL+Galera tandem you actually chose, whether these are binaries from Codership, MariaDB Galera Cluster or maybe your own builds.

Unlike standard MySQL replication, a PXC cluster acts like one logical entity, which takes care of each node's status and consistency as well as the cluster status as a whole. This allows it to maintain much better data integrity than you may expect from traditional asynchronous replication, while allowing safe writes on multiple nodes at the same time. This comes, though, at the price of more possible scenarios where the database service will be stopped with no node able to serve requests.

Let's assume the simplest case: a cluster of nodes A, B and C, and a few possible scenarios where some or all of the nodes are out of service. What may happen, and what do we have to do to bring them (or the whole cluster) back up?

Scenario 1

Node A is gracefully stopped, likely for the purpose of maintenance, a configuration change, etc.
In this case the other nodes receive a “good bye” message from that node, hence the cluster size is reduced and some properties like the quorum calculation or auto increment are automatically adjusted. Once we start the A node again, it will join the cluster based on its wsrep_cluster_address setting in my.cnf. This process is much different from normal replication – the joiner node won’t serve any requests until it is again fully synchronized with the cluster, so connecting to its peers isn’t enough; state transfer must succeed first. If the writeset cache (gcache.size) on nodes B and/or C still has all the transactions that were executed during the time this node was down, joining will be possible via (usually fast and light) IST. Otherwise, a full SST will be needed, which in fact is a full binary data snapshot copy. Hence it may be important here to determine the best donor, as shown in this article. If IST is impossible due to missing transactions in the donor’s gcache, the fallback decision is made by the donor and SST is started automatically instead.
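While a node is joining, its progress can be watched from any MySQL client; a quick check using the standard wsrep status variables:

# "Joined", "Donor/Desynced" or "Synced" for the local node, plus the current cluster size
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"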

Scenario 2

Nodes A and B are gracefully stopped. Similar to the previous case, the cluster size is reduced to 1, hence even the single remaining node C forms the primary component and serves client requests. To get the nodes back into the cluster, you just need to start them. However, node C will be switched to the “Donor/Desynced” state, as it will have to provide a state transfer to at least the first joining node. It is still possible to read/write to it during that process, but it may be much slower, depending on how large the state transfers it needs to send are. Also, some load balancers may consider the donor node as not operational and remove it from the pool. So it is best to avoid the situation where only one node is up.

Note though, if you restart A and then B in that order, you may want to make sure B won’t use A as the state transfer donor, as A may not have all the needed writesets in its gcache. So just specify the C node as the donor this way (“nodeC” is the name you specify with the wsrep_node_name variable):

service mysql start --wsrep_sst_donor=nodeC

Scenario 3

All three nodes are gracefully stopped. The cluster is deformed. In this case, the problem is how to initialize it again. Here it is important to know that during a clean shutdown, a PXC node writes its last executed position into the grastate.dat file. By comparing the seqno number inside, you can see which node is the most advanced one (most likely the last one stopped). The cluster must be bootstrapped using this node, otherwise nodes that had a more advanced position will have to perform a full SST to join the cluster initialized from the less advanced one (and some transactions will be lost). To bootstrap the first node, invoke the startup script like this:

/etc/init.d/mysql bootstrap-pxc

or

service mysql bootstrap-pxc

or

service mysql start --wsrep_new_cluster

or

service mysql start --wsrep-cluster-address="gcomm://"

or in packages using the systemd service manager (CentOS 7 at the moment):

systemctl start mysql@bootstrap.service

In older PXC versions, to bootstrap the cluster you had to edit my.cnf and replace the previous wsrep_cluster_address line with an empty value like this: wsrep_cluster_address=gcomm:// and start mysql normally. More details can be found here.

Please note that even if you bootstrap from the most advanced node, so that the other nodes have a lower sequence number, they will still have to join via a full SST, because the Galera cache is not retained on restart. For that reason, it is recommended to stop writes to the cluster *before* its full shutdown, so that all nodes stop at the same position. Edit: This changes since Galera 3.19 thanks to the gcache.recover option.
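A quick way to compare the saved positions before picking the bootstrap node (a sketch only; it assumes root SSH access, the default datadir, and nodeA/nodeB/nodeC as placeholder hostnames):

for h in nodeA nodeB nodeC; do
  echo -n "$h: "; ssh root@$h "grep seqno /var/lib/mysql/grastate.dat"
done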

Scenario 4

Node A disappears from the cluster. By disappear I mean a power outage, hardware failure, kernel panic, mysqld crash, kill -9 on the mysqld pid, OOM killer, etc. The two remaining nodes notice that the connection to node A is down and will keep trying to re-connect to it. After some timeouts, both agree that node A is really down and remove it “officially” from the cluster. Quorum is preserved (2 out of 3 nodes are up), so no service disruption happens. After restarting, A will join automatically, the same way as in scenario 1.
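The timeouts mentioned above are Galera provider options (evs.suspect_timeout and evs.inactive_timeout); their current values can be listed like this:

# print the provider options one per line and keep only the evs.* entries
mysql -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options';" | tr ';' '\n' | grep evs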

Scenario 5

Nodes A and B disappear. Node C is not able to form the quorum alone, so the cluster switches into non-primary mode, in which MySQL refuses to serve any SQL query. In this state, the mysqld process on C is still running and you can connect to it, but any statement related to data fails with:

mysql> select * from test.t1;
ERROR 1047 (08S01): Unknown command

Actually, reads will be possible for a moment, until C decides that it cannot reach A and B, but no new writes will be allowed from that point on, thanks to the certification-based replication in Galera. This is what we are going to see in the remaining node’s log:

140814 0:42:13 [Note] WSREP: commit failed for reason: 3
140814 0:42:13 [Note] WSREP: conflict state: 0
140814 0:42:13 [Note] WSREP: cluster conflict due to certification failure for threads:
140814 0:42:13 [Note] WSREP: Victim thread:
THD: 7, mode: local, state: executing, conflict: cert failure, seqno: -1
SQL: insert into t values (1)

The single node C then waits for its peers to show up again, and in some cases, if that happens – like when there was a network outage and those nodes were up all the time – the cluster will be formed again automatically. Also, if nodes B and C were just network-severed from the first node but can still reach each other, they will keep functioning, as they still form the quorum. If A and B crashed (due to data inconsistency, a bug, etc.) or are off due to a power outage, you need to take manual action to enable the primary component on the C node before you can bring A and B back. This way, we tell the C node: “Hey, you can now form a new cluster alone, forget A and B!” The command to do this is:

SET GLOBAL wsrep_provider_options='pc.bootstrap=true';

However, you should double check in order to be very sure the other nodes are really down before doing that! Otherwise, you will most likely end up with two clusters having different data.
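After forcing the primary component, it is worth confirming that the node really reports itself as primary again:

mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';"   # should show "Primary"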

Scenario 6

All nodes went down without a proper shutdown procedure. Such a situation may happen in the case of a datacenter power failure, or after hitting some MySQL or Galera bug leading to a crash on all nodes, but also as a result of data consistency being compromised, where the cluster detects that each node has different data. In each of those cases, the grastate.dat file is not updated and does not contain a valid sequence number (seqno). It may look like this:

cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 220dcdcb-1629-11e4-add3-aec059ad3734
seqno: -1
cert_index:

In this case, we are not sure if all nodes were consistent with each other, hence it is crucial to find the most advanced one in order to bootstrap the cluster using it. Before starting the mysql daemon on any node, you have to extract the last sequence number by checking its transactional state. You can do it this way:

[root@percona3 ~]# mysqld_safe --wsrep-recover
140821 15:57:15 mysqld_safe Logging to '/var/lib/mysql/percona3_error.log'.
140821 15:57:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
140821 15:57:15 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.6bUIqM' --pid-file='/var/lib/mysql/percona3-recover.pid'
140821 15:57:17 mysqld_safe WSREP: Recovered position 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:2
140821 15:57:19 mysqld_safe mysqld from pid file /var/lib/mysql/percona3.pid ended

So the last committed transaction sequence number on this node was 2. Now you just need to bootstrap from the most advanced node first and then start the others.
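Putting it together, a small sketch of the whole flow for this scenario (the bootstrap command is the PXC one used earlier; adjust it for your distribution):

# On every node: recover and note the number after the ':' in "Recovered position"
mysqld_safe --wsrep-recover

# On the node with the highest position only:
service mysql bootstrap-pxc

# Then start the remaining nodes normally:
service mysql start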

However, the above procedure won’t be needed in recent Galera versions (3.6+?), available since PXC 5.6.19. There is a new option – pc.recovery (enabled by default) – which saves the cluster state into a file named gvwstate.dat on each member node. As the variable name suggests (pc – primary component), it saves the state only when the cluster is in the PRIMARY state. An example of that file’s content may look like this:

cat /var/lib/mysql/gvwstate.dat
my_uuid: 76de8ad9-2aac-11e4-8089-d27fd06893b9
#vwbeg
view_id: 3 6c821ecc-2aac-11e4-85a5-56fe513c651f 3
bootstrap: 0
member: 6c821ecc-2aac-11e4-85a5-56fe513c651f 0
member: 6d80ec1b-2aac-11e4-8d1e-b2b2f6caf018 0
member: 76de8ad9-2aac-11e4-8089-d27fd06893b9 0
#vwend

We can see a three-node cluster above, with all members being up. Thanks to this new feature, in the case of a power outage in our datacenter, the nodes will read the last state on startup once power is back and will try to restore the primary component as soon as all the members see each other again. This makes the PXC cluster recover automatically from being powered down, without any manual intervention! In the logs we will see:

140823 15:28:55 [Note] WSREP: restore pc from disk successfully
(...)
140823 15:29:59 [Note] WSREP: declaring 6c821ecc at tcp://192.168.90.3:4567 stable
140823 15:29:59 [Note] WSREP: declaring 6d80ec1b at tcp://192.168.90.4:4567 stable
140823 15:29:59 [Warning] WSREP: no nodes coming from prim view, prim not possible
140823 15:29:59 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 2, memb_num = 3
140823 15:29:59 [Note] WSREP: Flow-control interval: [28, 28]
140823 15:29:59 [Note] WSREP: Received NON-PRIMARY.
140823 15:29:59 [Note] WSREP: New cluster view: global state: 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:11, view# -1: non-Primary, number of nodes: 3, my index: 2, protocol version -1
140823 15:29:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
140823 15:29:59 [Note] WSREP: promote to primary component
140823 15:29:59 [Note] WSREP: save pc into disk
140823 15:29:59 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = yes, my_idx = 2, memb_num = 3
140823 15:29:59 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
140823 15:29:59 [Note] WSREP: clear restored view
(...)
140823 15:29:59 [Note] WSREP: Bootstrapped primary 00000000-0000-0000-0000-000000000000 found: 3.
140823 15:29:59 [Note] WSREP: Quorum results:
version = 3,
component = PRIMARY,
conf_id = -1,
members = 3/3 (joined/total),
act_id = 11,
last_appl. = -1,
protocols = 0/6/2 (gcs/repl/appl),
group UUID = 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a
140823 15:29:59 [Note] WSREP: Flow-control interval: [28, 28]
140823 15:29:59 [Note] WSREP: Restored state OPEN -> JOINED (11)
140823 15:29:59 [Note] WSREP: New cluster view: global state: 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:11, view# 0: Primary, number of nodes: 3, my index: 2, protocol version 2
140823 15:29:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
140823 15:29:59 [Note] WSREP: REPL Protocols: 6 (3, 2)
140823 15:29:59 [Note] WSREP: Service thread queue flushed.
140823 15:29:59 [Note] WSREP: Assign initial position for certification: 11, protocol version: 3
140823 15:29:59 [Note] WSREP: Service thread queue flushed.
140823 15:29:59 [Note] WSREP: Member 1.0 (percona3) synced with group.
140823 15:29:59 [Note] WSREP: Member 2.0 (percona1) synced with group.
140823 15:29:59 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 11)
140823 15:29:59 [Note] WSREP: Synchronized with group, ready for connections

Scenario 7

The cluster lost its primary state due to a split-brain situation. For the purpose of this example, let’s assume we have a cluster formed from an even number of nodes – six – with three of them in one location and the other three in a second location (datacenter), and network connectivity is broken between them. Of course, the best practice is to avoid such a topology: if you can’t have an odd number of real nodes, you can at least use an additional arbitrator (garbd) node or set a higher pc.weight on some nodes (see the garbd example after this scenario). But when a split brain happens anyway, so that none of the separated groups can maintain the quorum, all nodes must stop serving requests and both parts of the cluster just keep trying to re-connect. If you want to restore the service even before the network link is restored, you can make one of the groups primary again using the same command as in scenario 5:

SET GLOBAL wsrep_provider_options='pc.bootstrap=true';

After that, you will be able to work on the manually restored part of the cluster, and the second half should be able to automatically re-join using incremental state transfer (IST) once the network link is restored. But beware: if you set the bootstrap option on both of the separated parts, you will end up with two living cluster instances, with data likely diverging from each other. Restoring the network link in that case won’t make them re-join until the nodes are restarted and try to re-connect to the members specified in the configuration file. Then, as the Galera replication model truly cares about data consistency, once the inconsistency is detected, nodes that cannot execute a row change statement due to different data will perform an emergency shutdown, and the only way to bring them back into the cluster will be via a full SST.
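If a two-datacenter layout cannot be avoided, the arbitrator mentioned above (garbd) running in a third location gives the cluster an odd number of votes without storing any data. A rough sketch (the cluster name, node addresses and log path are placeholders for your environment):

garbd --group=my_galera_cluster \
      --address="gcomm://192.168.90.3:4567,192.168.90.4:4567" \
      --log=/var/log/garbd.log --daemon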

I hope I covered most of the possible failure scenarios of Galera-based clusters, and made the recovery procedures a bit clearer.

2018/04/03 11:06 2018/04/03 11:06
Posted
Filed under Mysql
mysqlcheck utility usage (analyze, optimize, repair)

mysqlcheck utility usage

For a cost-based DB, this utility lets you analyze, optimize, and repair the entire database in one pass.
File location: mysql/bin
----------------------------------------------------
Check and auto-repair all databases (check and repair)
mysqlcheck -Aa --auto-repair -u root -p
mysqlcheck -Ao --auto-repair -u root -p
----------------------------------------------------
-A, --all-databases
-a, --analyze
-o, --optimize
-P  port number
----------------------------------------------------
Check and auto-repair a specific database only
mysqlcheck --auto-repair -u root -p database_name
----------------------------------------------------
Check and auto-repair a specific table only
mysqlcheck --auto-repair -u root -p database_name table_name
----------------------------------------------------
Using the command under different names
mysqlrepair = mysqlcheck --repair
mysqlanalyze = mysqlcheck --analyze
mysqloptimize = mysqlcheck --optimize
----------------------------------------------------
Doing it table by table from the mysql client
mysql> use database_name;
mysql> show tables;
mysql> check table table_name;
mysql> repair table table_name;
mysql> analyze table table_name;
mysql> optimize table table_name;
----------------------------------------------------
##########################
OPTIMIZE tables script (PHP)
##########################
<?php
// Optimize every table in a database (legacy mysql_* API).
$db = 'database_name';
// Placeholder connection credentials; adjust for your server.
mysql_connect('localhost', 'root', 'password') or exit(mysql_error());
$tables = mysql_list_tables($db);
while (list($table_name) = mysql_fetch_array($tables)) {
    $sql = "OPTIMIZE TABLE $table_name";
    mysql_query($sql) or exit(mysql_error());
}
2018/03/31 13:10 2018/03/31 13:10
Posted
Filed under Linux
Redis can't set maximum open files to 10064 because of OS error: Operation not permitted.
Increased maximum number of open files to 10032 (it was originally set to 1024).

[CentOS 7]
ulimit -n 32768

Change the maximum number of open files from 1024 to 32768.
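ulimit -n only raises the limit for the current shell session. To make it permanent for the Redis service under systemd on CentOS 7 (assuming the unit is named redis), one option is a drop-in override:

mkdir -p /etc/systemd/system/redis.service.d
cat > /etc/systemd/system/redis.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=32768
EOF
systemctl daemon-reload
systemctl restart redis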

[Reference]
http://myblog.opendocs.co.kr/archives/tag/redis-cant-set-maximum-open-files-to-10064-because-of-os-error-operation-not-permitted
2018/03/30 21:25 2018/03/30 21:25
Posted
Filed under JSP, JAVA
select FLOOR((DayOfMonth(20180221)-1)/7)+1
or

select FLOOR((DayOfMonth(now())-1)/7)+1
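The expression returns the week of the month that a date falls in: days 1-7 map to 1, days 8-14 to 2, and so on. For example, 2018-02-21 is the 21st day of the month, so it lands in week 3:

mysql -e "SELECT FLOOR((DAYOFMONTH('2018-02-21')-1)/7)+1 AS week_of_month;"   # prints 3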
2018/02/23 15:17 2018/02/23 15:17
Posted
Filed under JSP, JAVA

CREATE OR REPLACE FUNCTION is_number(v_str_number IN varchar2)
RETURN NUMBER
IS  /* Checks whether the value is a NUMBER. Returns 1 if it is */
     V_NUM NUMBER;
BEGIN
  V_NUM := TO_NUMBER(v_str_number);
  RETURN 1;
EXCEPTION
  WHEN OTHERS THEN RETURN 0;
END;


CREATE OR REPLACE FUNCTION is_date(v_str_date IN varchar2, V_FORMAT IN VARCHAR2 DEFAULT 'YYYYMMDD')
RETURN NUMBER
IS   /* Checks whether the value is a DATE. Returns 1 if it is */
     V_DATE DATE;
BEGIN
  V_DATE := TO_DATE(v_str_date, V_FORMAT);
  RETURN 1;
EXCEPTION
  WHEN OTHERS THEN RETURN 0;
END;

Source: http://scidb.tistory.com/entry/오라클에서-isnumber-isdate-함수-사용하기 [Science of Database]

2018/02/21 10:58 2018/02/21 10:58
Posted
Filed under JSP, JAVA
<build>
  <resources>
    <resource>
      <directory>src/main/java</directory>
      <includes>
        <include>**/maps/*.xml</include>
      </includes>
    </resource>
  </resources>
</build>

Add the above to the build element in pom.xml so that the XML files under src/main/java (the **/maps/*.xml files) are copied into the build output.
2018/02/20 10:11 2018/02/20 10:11
Posted
Filed under JSP, JAVA
Add the following to Tomcat's web.xml or to the project's web.xml:


<filter>
 <filter-name>httpHeaderSecurity</filter-name>
 <filter-class>org.apache.catalina.filters.HttpHeaderSecurityFilter</filter-class>
 <init-param>
 <param-name>antiClickJackingOption</param-name>
 <param-value>SAMEORIGIN</param-value>
 </init-param>
</filter>
<filter-mapping>
 <filter-name>httpHeaderSecurity</filter-name>
 <url-pattern>/*</url-pattern>
</filter-mapping>
2018/01/26 17:40 2018/01/26 17:40
Posted
Filed under Linux

Basic usage

# rsync [options] [user]@[source-host]:[source-directory] [target-directory]

-a : archive mode (preserves symlinks, attributes, permissions, ownership, etc.)
-v : verbose mode (print detailed information)
-z : compress data during transfer
-r : recurse into subdirectories
-u : --update, update only (don't overwrite newer files)
-e ssh : synchronize over ssh
--stats : report transfer statistics
--delete : delete files on the backup server that no longer exist on the source server
--progress : show rsync progress
--exclude : specify files to exclude
--bwlimit : limit bandwidth (copy speed)
--max-size : skip files larger than a given size (added in rsync 2.6.4)
--min-size : skip files smaller than a given size (added in rsync 2.6.7)

Examples

1) Simple sync (copy the source files)
ex) rsync -avz root@sourcepc:/data/backup  /data/backup

2) Sync including deletions (files removed on the source are removed on the target)
ex) rsync -avz --delete root@sourcepc:/data/backup  /data/backup

3) Bandwidth throttling
ex) rsync -avz --bwlimit=1024 root@sourcepc:/data/backup  /data/backup

4) -e ssh
ex) rsync -avz -e "ssh -i /home/test/rsync-key" root@sourcepc:/data/backup  /data/backup

5) Excluding specific file types
ex) rsync -avz --exclude='*.txt' --exclude='*.sh' root@sourcepc:/data/backup  /data/backup/

6) Excluding specific directories (aaa and bbb are directory names)
ex) rsync -avz --exclude=aaa --exclude=bbb root@sourcepc:/data/backup  /data/backup/

7) By date (copy only files modified within the last 3 days)
ex) find . -type f -mtime -3  | rsync -avz --files-from=- /sourcepc /data/backup
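Before a destructive sync (especially with --delete), a dry run shows what would be transferred or deleted without changing anything; -n (--dry-run) is a standard rsync option:

ex) rsync -avzn --delete root@sourcepc:/data/backup  /data/backup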



[Original] http://algo79.tistory.com/entry/rsync-%EC%82%AC%EC%9A%A9%EB%B2%95%EB%8C%80%EC%97%AD%ED%8F%AD%EB%82%A0%EC%A7%9C%ED%8F%B4%EB%8D%94%ED%8C%8C%EC%9D%BC%ED%81%AC%EA%B8%B0%EB%93%B1
2018/01/24 11:04 2018/01/24 11:04
Posted
Filed under Linux
How to fix the Tomcat manager (admin) app not being accessible after installing Apache Tomcat
Version: Tomcat 8

File: tomcat-users.xml
Path: <tomcat-install-dir>/conf

<!-- add the following -->
<role rolename="manager-gui"/>
<user username="tomcat" password="your-password" roles="tomcat,manager-gui"/>



File: server.xml
Path: <tomcat-install-dir>/conf

<Service name="Catalina">
..........
<!-- add the following -->
<Connector port="9009" protocol="AJP/1.3" redirectPort="9443" address="192.168.0.52" useIPVHosts="true" />

File: context.xml
Path: <tomcat-install-dir>/webapps/manager/META-INF

Add the IP address of the client PC that will connect to the allow attribute.

<Context antiResourceLocking="false" privileged="true" >
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="192\.168\.0\.52|127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
</Context>


Restart the server after applying the three changes above.

 

[Original]
http://blog.naver.com/PostView.nhn?blogId=ses1030&logNo=220823127740

2018/01/12 09:43 2018/01/12 09:43
Posted
Filed under Mysql

METHOD
CHECK TABLE, OPTIMIZE TABLE, ANALYZE TABLE, and REPAIR TABLE

HOWTO

ALTER TABLE tb_cycle_data CHECK   PARTITION tb_cycle_data2013;

ALTER TABLE {TABLE-NAME} {METHOD}   PARTITION {PARTITION-TABLE-NAME};


22.3.4 Maintenance of Partitions

A number of table and partition maintenance tasks can be carried out using SQL statements intended for such purposes on partitioned tables in MySQL 5.7.

Table maintenance of partitioned tables can be accomplished using the statements CHECK TABLE, OPTIMIZE TABLE, ANALYZE TABLE, and REPAIR TABLE, which are supported for partitioned tables.

You can use a number of extensions to ALTER TABLE for performing operations of this type on one or more partitions directly, as described in the following list:

  • Rebuilding partitions.  Rebuilds the partition; this has the same effect as dropping all records stored in the partition, then reinserting them. This can be useful for purposes of defragmentation.

    Example:

    ALTER TABLE t1 REBUILD PARTITION p0, p1;
  • Optimizing partitions.  If you have deleted a large number of rows from a partition or if you have made many changes to a partitioned table with variable-length rows (that is, having VARCHAR, BLOB, or TEXT columns), you can use ALTER TABLE ... OPTIMIZE PARTITION to reclaim any unused space and to defragment the partition data file.

    Example:

    ALTER TABLE t1 OPTIMIZE PARTITION p0, p1;

    Using OPTIMIZE PARTITION on a given partition is equivalent to running CHECK PARTITION, ANALYZE PARTITION, and REPAIR PARTITION on that partition.

    Some MySQL storage engines, including InnoDB, do not support per-partition optimization; in these cases, ALTER TABLE ... OPTIMIZE PARTITION analyzes and rebuilds the entire table, and causes an appropriate warning to be issued. (Bug #11751825, Bug #42822) Use ALTER TABLE ... REBUILD PARTITION and ALTER TABLE ... ANALYZE PARTITION instead, to avoid this issue.

  • Analyzing partitions.  This reads and stores the key distributions for partitions.

    Example:

    ALTER TABLE t1 ANALYZE PARTITION p3;
  • Repairing partitions.  This repairs corrupted partitions.

    Example:

    ALTER TABLE t1 REPAIR PARTITION p0,p1;

    Normally, REPAIR PARTITION fails when the partition contains duplicate key errors. In MySQL 5.7.2 and later, you can use ALTER IGNORE TABLE with this option, in which case all rows that cannot be moved due to the presence of duplicate keys are removed from the partition (Bug #16900947).

  • Checking partitions.  You can check partitions for errors in much the same way that you can use CHECK TABLE with nonpartitioned tables.

    Example:

    ALTER TABLE trb3 CHECK PARTITION p1;

    This command will tell you if the data or indexes in partition p1 of table trb3 are corrupted. If this is the case, use ALTER TABLE ... REPAIR PARTITION to repair the partition.

    Normally, CHECK PARTITION fails when the partition contains duplicate key errors. In MySQL 5.7.2 and later, you can use ALTER IGNORE TABLE with this option, in which case the statement returns the contents of each row in the partition where a duplicate key violation is found. Only the values for the columns in the partitioning expression for the table are reported. (Bug #16900947)

Each of the statements in the list just shown also supports the keyword ALL in place of the list of partition names. Using ALL causes the statement to act on all partitions in the table.

The use of mysqlcheck and myisamchk is not supported with partitioned tables.

In MySQL 5.7, you can also truncate partitions using ALTER TABLE ... TRUNCATE PARTITION. This statement can be used to delete all rows from one or more partitions in much the same way that TRUNCATE TABLE deletes all rows from a table.

ALTER TABLE ... TRUNCATE PARTITION ALL truncates all partitions in the table.
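For example (a minimal sketch; t1, p0 and p1 are placeholder names), truncating partitions from the shell:

mysql -e "ALTER TABLE t1 TRUNCATE PARTITION p0, p1;"
mysql -e "ALTER TABLE t1 TRUNCATE PARTITION ALL;"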

Prior to MySQL 5.7.2, ANALYZE, CHECK, OPTIMIZE, REBUILD, REPAIR, and TRUNCATE operations were not permitted on subpartitions (Bug #14028340, Bug #65184).

[Original]
https://dev.mysql.com/doc/refman/5.7/en/partitioning-maintenance.html

2018/01/09 18:23 2018/01/09 18:23