Posted
Filed under PHP
In SQL Server, you can use the CONVERT function to convert a string in a specified format to a DATETIME value. In MySQL, you can use the STR_TO_DATE function if you need a specific format, or CONVERT if you need the default format.

Note that the order of parameters in SQL Server and MySQL CONVERT functions is different.

SQL Server:

  -- 3rd parameter specifies 121 style (ODBC 'YYYY-MM-DD HH:MI:SS.FFF' format with milliseconds)
  SELECT CONVERT(DATETIME, '2012-11-29 18:21:11.123', 121);
  # 2012-11-29 18:21:11.123
 
  SELECT CONVERT(DATETIME, GETDATE());
  # 2017-04-07 09:55:40.550

MySQL:

  -- Specify string format using format specifiers
  SELECT STR_TO_DATE('2012-11-29 18:21:11.123', '%Y-%m-%d %T.%f');
  # 2012-11-29 18:21:11.123000
 
   SELECT CONVERT(NOW(), DATETIME);
   # 2017-04-07 09:55:40

Mapping SQL Server Datetime Style to MySQL Format

When you convert the CONVERT function to STR_TO_DATE, you have to map the SQL Server style to the appropriate format string in MySQL:

SQL Server Style                       | MySQL Format String | String Example
101 (US: MM/DD/YYYY)                   | '%m/%d/%Y'          | '11/29/2012'
121 (ODBC: YYYY-MM-DD HH:MI:SS.FFF)    | '%Y-%m-%d %T.%f'    | '2012-11-29 18:21:11.123'

Conversion examples:

SQL Server                                         | MySQL
CONVERT(DATETIME, '11/29/2012', 101)               | STR_TO_DATE('11/29/2012', '%m/%d/%Y')
CONVERT(DATETIME, '2012-11-29 18:21:11.123', 121)  | STR_TO_DATE('2012-11-29 18:21:11.123', '%Y-%m-%d %T.%f')
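
If you want to sanity-check a mapping from the shell while porting queries, a quick hedged example (assuming the mysql client is on the PATH):

  mysql -N -e "SELECT STR_TO_DATE('11/29/2012', '%m/%d/%Y')"
  # 2012-11-29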

SQL Server CONVERT for SMALLDATETIME in MySQL

In SQL Server, the SMALLDATETIME data type stores a datetime value with the seconds set to 00. You can use the expression below to keep 00 seconds after the conversion in MySQL:

SQL Server:

  -- SMALLDATETIME is always with 00 seconds
  SELECT CONVERT(SMALLDATETIME, GETDATE());
  # 2017-04-07 10:05:00

MySQL:

  SELECT CONVERT(DATE_FORMAT(NOW(), '%Y-%m-%d %H:%i:00'), DATETIME);
  # 2017-04-07 10:05:00
2018/04/27 22:47
Posted
Filed under PHP
[Original]
lib/shop.lib.php

#1
Before:

if($row['io_price'] >= 0)
    $price = '  + '.number_format($row['io_price']).'원';
else
    $price = '   '.number_format($row['io_price']).'원';

After:

if($row['io_price'] > 0)
    $price = '  + '.number_format($row['io_price']).'원';
else if ($row['io_price'] == 0)
    $price = '';
else
    $price = '   '.number_format($row['io_price']).'원';


#2
Before:

if($row['io_price'] >= 0)
    $price = '  + '.number_format($row['io_price']).'원';
else
    $price = '   '.number_format($row['io_price']).'원';

After:

if($row['io_price'] > 0)
    $price = '  + '.number_format($row['io_price']).'원';
else if ($row['io_price'] == 0)
    $price = '';
else
    $price = '   '.number_format($row['io_price']).'원';

js/shop.js

#1
Add the following inside add_sel_option:

var priceHide = '';
if (parseInt(price) == 0) priceHide = ' sound_only';

#2
Before:

opt += "<span class=\"sit_opt_prc\">"+opt_prc+"</span>";

After:

opt += "<span class=\"sit_opt_prc"+priceHide+"\">"+opt_prc+"</span>";

2018/04/09 17:50
Posted
Filed under JSP, JAVA

tomcat8
/webapp/manager/META-INF/context.xml

<!--Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" /-->
If this Valve is commented out as above,
you can block access from outside by removing the comment.
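
With the comment markers removed, the same valve reads as below; only clients whose address matches the allow pattern (localhost) can then reach the manager application.

<Valve className="org.apache.catalina.valves.RemoteAddrValve"
       allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />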


2018/04/09 14:28
Posted
Filed under Linux
cat /proc/$(pgrep maxscale)/limits
If you run the command above, you will see that the open files limit
for the maxscale process is set to 1024.
To increase it, edit the service unit
vi /usr/lib/systemd/system/maxscale.service
and add the setting
LimitNOFILE=65536
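
A minimal follow-up sketch, assuming the stock unit file above on a systemd host: LimitNOFILE belongs in the [Service] section, and systemd must reload the unit before the restart for the new limit to apply.

systemctl daemon-reload
systemctl restart maxscale
cat /proc/$(pgrep maxscale)/limits | grep "Max open files"   # should now report 65536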

2018/04/09 13:45
Posted
Filed under Linux

Change the hostname:
hostnamectl set-hostname blog.visualp.com

Check the hostname:
hostname

The current session (shell) does not pick up the change,
but after reconnecting you can see that it has been applied.
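
A small verification sketch, assuming a systemd host: hostnamectl with no arguments prints the static hostname that set-hostname just wrote, so the change can be confirmed without reconnecting.

hostnamectl          # the "Static hostname:" line should show blog.visualp.com
cat /etc/hostname    # set-hostname updates this file as well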



2018/04/05 14:48
Posted
Filed under Mysql
Installing a MariaDB Galera cluster is well covered elsewhere; a Google search turns up
plenty of good material.
Once you actually operate it, however, you have to handle the failures that occur in the DB.
The failures that can occur in different situations
are described in the percona.com article below,
which explains the situations that can come up in operation very well.
Awkward problems arise when a server goes down unexpectedly (or has to be forcibly taken down),
or when a server will not start after a power outage during testing.
The first thing to check, in the case of a sudden power outage, is the following:

[A server was forcibly shut down / went down and will not start]
grastate.dat
safe_to_bootstrap: 0      // change this to 1 on the node you bootstrap from
Making that change resolves the problem in most cases.
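
A minimal sketch of that recovery, assuming the default /var/lib/mysql data directory and a systemd-based MariaDB install; do this only on the node with the highest seqno in grastate.dat, then start the remaining nodes normally.

grep -E 'seqno|safe_to_bootstrap' /var/lib/mysql/grastate.dat
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
galera_new_cluster    # MariaDB's bootstrap wrapper for the first node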

Other failure-recovery references

[Original article]
https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/

Galera cluster recovery methods by scenario

Galera replication for MySQL brings not only new, great features to our ecosystem, but also introduces completely new maintenance techniques. Are you concerned about adding such new complexity to your MySQL environment? Perhaps that concern is unnecessary.

I am going to present here some simple tips that will hopefully let fresh Galera users prevent headaches when there is a need to recover part of a cluster, or a whole cluster, in certain situations. I used Percona XtraDB Cluster (a project based on Percona Server and the Galera library + MySQL extensions from Codership) to prepare this post, but most if not all of the scenarios should also apply to any solution based on the MySQL + Galera tandem you actually chose, whether these are binaries from Codership, MariaDB Galera Cluster, or maybe your own builds.

Unlike standard MySQL replication, a PXC cluster acts like one logical entity, which takes care of each node's status and consistency as well as the status of the cluster as a whole. This allows it to maintain much better data integrity than you may expect from traditional asynchronous replication, while allowing safe writes on multiple nodes at the same time. This, however, comes at the price of more possible scenarios where the database service is stopped with no node able to serve requests.

Let's assume the simplest case: a cluster of nodes A, B and C, and a few possible scenarios where some or all of the nodes are out of service. What may happen, and what do we have to do to bring them (or the whole cluster) back up?

Scenario 1

Node A is gracefully stopped, likely for the purpose of maintenance, a configuration change, etc.
In this case the other nodes receive a "good bye" message from that node, hence the cluster size is reduced and some properties like quorum calculation or auto increment are automatically changed. Once we start node A again, it will join the cluster based on its wsrep_cluster_address setting in my.cnf. This process is much different from normal replication – the joiner node won't serve any requests until it is again fully synchronized with the cluster, so connecting to its peers isn't enough, state transfer must succeed first. If the writeset cache (gcache.size) on nodes B and/or C still has all the transactions that were executed during the time this node was down, joining will be possible via (usually fast and light) IST. Otherwise, a full SST will be needed, which in fact is a full binary data snapshot copy. Hence it may be important here to determine the best donor, as shown in this article. If IST is impossible due to missing transactions in the donor's gcache, the fallback decision is made by the donor and SST is started automatically instead.
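
As a hedged aside, gcache.size is set inside wsrep_provider_options, so you can check what a prospective donor currently uses from the shell, assuming the mysql client is available:

mysql -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G" | tr ';' '\n' | grep gcache.size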

Scenario 2

Nodes A and B are gracefully stopped. Similar to the previous case, the cluster size is reduced to 1, hence even the single remaining node C forms the primary component and is serving client requests. To get the nodes back into the cluster, you just need to start them. However, node C will be switched to the "Donor/Desynced" state, as it will have to provide a state transfer to at least the first joining node. It is still possible to read/write to it during that process, but it may be much slower, depending on how large the state transfers it needs to send are. Also, some load balancers may consider the donor node as not operational and remove it from the pool. So it is best to avoid a situation where only one node is up.
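
A quick, hedged way to watch the donor's state from the shell while the joiners sync (wsrep_local_state_comment reports values such as Synced, Donor/Desynced and Joined):

mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"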

Note though, if you restart A and then B in that order, you may want to make sure B won't use A as the state transfer donor, as A may not have all the needed writesets in its gcache. So just specify node C as the donor this way (the "nodeC" name is the one you specify with the wsrep_node_name variable):

service mysql start --wsrep_sst_donor=nodeC

Scenario 3

All three nodes are gracefully stopped. The cluster is deformed. In this case, the problem is how to initialize it again. Here, it is important to know that during a clean shutdown, a PXC node writes its last executed position into the grastate.dat file. By comparing the seqno number inside, you will see which node is the most advanced one (most likely the last one stopped); a small script for comparing these positions across the nodes is sketched after the bootstrap commands below. The cluster must be bootstrapped using this node, otherwise nodes that had a more advanced position will have to perform a full SST to join the cluster initialized from the less advanced one (and some transactions will be lost). To bootstrap the first node, invoke the startup script like this:

/etc/init.d/mysql bootstrap-pxc

or

service mysql bootstrap-pxc

or

service mysql start --wsrep_new_cluster

or

service mysql start --wsrep-cluster-address="gcomm://"

or in packages using the systemd service manager (CentOS 7 at the moment):

systemctl start mysql@bootstrap.service

In older PXC versions, to bootstrap the cluster, you had to edit my.cnf and replace the previous wsrep_cluster_address line with an empty value like this: wsrep_cluster_address=gcomm:// and start mysql normally. More details are to be found here.
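
The comparison of saved positions mentioned above can be scripted; a minimal sketch, assuming SSH access to the nodes and the default /var/lib/mysql data directory (the hostnames are placeholders):

for h in nodeA nodeB nodeC; do
    printf '%s: ' "$h"
    ssh "$h" "grep seqno /var/lib/mysql/grastate.dat"
done
# bootstrap first from the host that reports the highest seqno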

Please note that even if you bootstrap from the most advanced node, so the other nodes have a lower sequence number, they will still have to join via a full SST, because the Galera Cache is not retained on restart. For that reason, it is recommended to stop writes to the cluster *before* its full shutdown, so that all nodes stop at the same position. Edit: this changed as of Galera 3.19, thanks to the gcache.recover option.

Scenario 4

Node A disappears from the cluster. By disappear I mean a power outage, hardware failure, kernel panic, mysqld crash, kill -9 on the mysqld pid, the OOM killer, etc. The two remaining nodes notice that the connection to node A is down and will be trying to re-connect to it. After some timeouts, both agree that node A is really down and remove it "officially" from the cluster. Quorum is preserved (2 out of 3 nodes are up), so no service disruption happens. After restarting, A will join automatically, the same way as in scenario 1.

Scenario 5

Nodes A and B disappear. Node C is not able to form a quorum alone, so the cluster switches into non-primary mode, in which MySQL refuses to serve any SQL query. In this state, the mysqld process on C is still running and you can connect to it, but any statement related to data fails with:

mysql> select * from test.t1;
ERROR 1047 (08S01): Unknown command

Actually, reads will be possible for a moment, until C decides that it cannot reach A and B, but immediately no new writes will be allowed, thanks to the certification-based replication in Galera. This is what we are going to see in the remaining node's log:

140814 0:42:13 [Note] WSREP: commit failed for reason: 3
140814 0:42:13 [Note] WSREP: conflict state: 0
140814 0:42:13 [Note] WSREP: cluster conflict due to certification failure for threads:
140814 0:42:13 [Note] WSREP: Victim thread:
THD: 7, mode: local, state: executing, conflict: cert failure, seqno: -1
SQL: insert into t values (1)

The single node C is then waiting for its peers to show up again, and in some cases, if that happens (for example when there was a network outage and those nodes were up all the time), the cluster will be formed again automatically. Also, if nodes B and C were just network-severed from the first node, but they can still reach each other, they will keep functioning as they still form the quorum. If A and B crashed (due to data inconsistency, a bug, etc.) or are off due to a power outage, you need to take manual action to enable the primary component on node C before you can bring A and B back. This way, we tell node C: "Hey, you can now form a new cluster alone, forget A and B!". The command to do this is:

SET GLOBAL wsrep_provider_options='pc.bootstrap=true';

However, you should double-check in order to be very sure the other nodes are really down before doing that! Otherwise, you will most likely end up with two clusters having different data.
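
As a hedged sanity check from the shell before forcing the bootstrap, the wsrep status counters on C should show a non-Primary cluster with a size of 1, confirming it really is alone:

mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_%'"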

Scenario 6

All nodes went down without a proper shutdown procedure. Such a situation may happen in the case of a datacenter power failure, hitting some MySQL or Galera bug leading to a crash on all nodes, but also as a result of data consistency being compromised, where the cluster detects that each node has different data. In each of those cases, the grastate.dat file is not updated and does not contain a valid sequence number (seqno). It may look like this:

cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 220dcdcb-1629-11e4-add3-aec059ad3734
seqno: -1
cert_index:

In this case, we are not sure if all nodes were consistent with each other, hence it is crucial to find the most advanced one in order to bootstrap the cluster using it. Before starting the mysql daemon on any node, you have to extract the last sequence number by checking its transactional state. You can do it this way:

[root@percona3 ~]# mysqld_safe --wsrep-recover
140821 15:57:15 mysqld_safe Logging to '/var/lib/mysql/percona3_error.log'.
140821 15:57:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
140821 15:57:15 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.6bUIqM' --pid-file='/var/lib/mysql/percona3-recover.pid'
140821 15:57:17 mysqld_safe WSREP: Recovered position 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:2
140821 15:57:19 mysqld_safe mysqld from pid file /var/lib/mysql/percona3.pid ended

So the last committed transaction sequence number on this node was 2. Now you just need to bootstrap from the most advanced node first and then start the others.

However, the above procedure won't be needed in recent Galera versions (3.6+?), available since PXC 5.6.19. There is a new option, pc.recovery (enabled by default), which saves the cluster state into a file named gvwstate.dat on each member node. As the variable name says (pc stands for primary component), it saves only a cluster being in the PRIMARY state. An example of that file's content may look like this:

cat /var/lib/mysql/gvwstate.dat
my_uuid: 76de8ad9-2aac-11e4-8089-d27fd06893b9
#vwbeg
view_id: 3 6c821ecc-2aac-11e4-85a5-56fe513c651f 3
bootstrap: 0
member: 6c821ecc-2aac-11e4-85a5-56fe513c651f 0
member: 6d80ec1b-2aac-11e4-8d1e-b2b2f6caf018 0
member: 76de8ad9-2aac-11e4-8089-d27fd06893b9 0
#vwend

We can see a three-node cluster above with all members up. Thanks to this new feature, in the case of a power outage in our datacenter, after power is back the nodes will read the last state on startup and will try to restore the primary component once all the members start to see each other again. This makes the PXC cluster recover automatically from being powered down, without any manual intervention! In the logs we will see:

140823 15:28:55 [Note] WSREP: restore pc from disk successfully
(...)
140823 15:29:59 [Note] WSREP: declaring 6c821ecc at tcp://192.168.90.3:4567 stable
140823 15:29:59 [Note] WSREP: declaring 6d80ec1b at tcp://192.168.90.4:4567 stable
140823 15:29:59 [Warning] WSREP: no nodes coming from prim view, prim not possible
140823 15:29:59 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 2, memb_num = 3
140823 15:29:59 [Note] WSREP: Flow-control interval: [28, 28]
140823 15:29:59 [Note] WSREP: Received NON-PRIMARY.
140823 15:29:59 [Note] WSREP: New cluster view: global state: 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:11, view# -1: non-Primary, number of nodes: 3, my index: 2, protocol version -1
140823 15:29:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
140823 15:29:59 [Note] WSREP: promote to primary component
140823 15:29:59 [Note] WSREP: save pc into disk
140823 15:29:59 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = yes, my_idx = 2, memb_num = 3
140823 15:29:59 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
140823 15:29:59 [Note] WSREP: clear restored view
(...)
140823 15:29:59 [Note] WSREP: Bootstrapped primary 00000000-0000-0000-0000-000000000000 found: 3.
140823 15:29:59 [Note] WSREP: Quorum results:
version = 3,
component = PRIMARY,
conf_id = -1,
members = 3/3 (joined/total),
act_id = 11,
last_appl. = -1,
protocols = 0/6/2 (gcs/repl/appl),
group UUID = 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a
140823 15:29:59 [Note] WSREP: Flow-control interval: [28, 28]
140823 15:29:59 [Note] WSREP: Restored state OPEN -> JOINED (11)
140823 15:29:59 [Note] WSREP: New cluster view: global state: 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:11, view# 0: Primary, number of nodes: 3, my index: 2, protocol version 2
140823 15:29:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
140823 15:29:59 [Note] WSREP: REPL Protocols: 6 (3, 2)
140823 15:29:59 [Note] WSREP: Service thread queue flushed.
140823 15:29:59 [Note] WSREP: Assign initial position for certification: 11, protocol version: 3
140823 15:29:59 [Note] WSREP: Service thread queue flushed.
140823 15:29:59 [Note] WSREP: Member 1.0 (percona3) synced with group.
140823 15:29:59 [Note] WSREP: Member 2.0 (percona1) synced with group.
140823 15:29:59 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 11)
140823 15:29:59 [Note] WSREP: Synchronized with group, ready for connections

Scenario 7

The cluster lost its primary state due to a split-brain situation. For the purpose of this example, let's assume we have a cluster formed from an even number of nodes: six, where three of them are in one location and the other three in a second location (datacenter), and network connectivity is broken between them. Of course the best practice is to avoid such a topology: if you can't have an odd number of real nodes, you can at least use an additional arbitrator (garbd) node or set a higher pc.weight on some nodes. But when split brain happens anyway, so that none of the separated groups can maintain the quorum, all nodes must stop serving requests and both parts of the cluster just keep trying to re-connect. If you want to restore the service even before the network link is restored, you can make one of the groups primary again using the same command as in scenario 5:

SET GLOBAL wsrep_provider_options='pc.bootstrap=true';

After that, you are able to work on the manually restored part of the cluster, and the second half should be able to automatically re-join using incremental state transfer (IST) once the network link is restored. But beware: if you set the bootstrap option on both of the separated parts, you will end up with two living cluster instances, with data likely diverging from each other. Restoring the network link in that case won't make them re-join until the nodes are restarted and try to re-connect to the members specified in the configuration file. Then, as the Galera replication model truly cares about data consistency, once the inconsistency is detected, nodes that cannot execute a row change statement due to different data will perform an emergency shutdown, and the only way to bring them back into the cluster will be via a full SST.

I hope I covered most of the possible failure scenarios of Galera-based clusters, and made the recovery procedures a bit clearer.

2018/04/03 11:06