Troubleshooting ZooKeeper connection timeouts: the ZooKeeper client keeps flooding the log with KeeperErrorCode

Today, while starting the application, an exception like the following appeared:

org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
        at org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:198) [curator-client-2.5.0.jar:na]
        at org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:88) [curator-client-2.5.0.jar:na]
        at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:115) [curator-client-2.5.0.jar:na]

The situation was this: in the test environment ZooKeeper normally works fine, but there is a round of intensive testing before every release, and every so often the log would get flooded with these errors.
The ZK client framework in use is Curator, and kafka, dubbo and elastic-job in the project all rely on ZK. With so many components involved, and since the problem could not be reproduced on demand, I spent a long time troubleshooting without finding the cause.

First, I used telnet to test the IP and port and found the connection could be established; ping latency was also low. At that point I checked the ZooKeeper server log, which showed:
(image 1: ZooKeeper server log excerpt showing the connection-limit warning for the client IP)
The log shows that this IP had exceeded the maximum connection limit. ZooKeeper limits the number of connections per client IP; the default is 60. You can raise the maxClientCnxns parameter, or set it to 0 to disable the limit entirely. If the service is deployed on Alibaba Cloud and is accessed from the office network, having many applications connect can easily trigger this, because they all go out through the same public IP.
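
For reference, the limit lives in the server-side zoo.cfg; a minimal sketch, with 200 chosen purely as an illustrative value:

    # zoo.cfg (ZooKeeper server configuration)
    # Maximum number of concurrent connections a single client IP may open
    # to this server; the default is 60, and 0 removes the limit entirely.
    maxClientCnxns=200

The ZooKeeper server needs to be restarted for a zoo.cfg change like this to take effect.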

The client log looked like this:

[2016-12-07 21:11:04,435] [525133fa-24f2-44bf-beab-58d6ff36b9ea] [main-EventThread] [INFO] [ConnectionStateManager.java:228] State change: SUSPENDED

[2016-12-07 21:11:04,435] [5c4a12ce-7e81-435f-80c7-420996812c78] [main-EventThread] [WARN] [ConnectionStateManager.java:235] ConnectionStateManager queue full - dropping events to make room

[2016-12-07 21:11:05,015] [c6425e13-048a-486d-a9f1-211566fc8221] [Curator-Framework-0] [INFO] [ConnectionStateManager.java:228] State change: LOST

[2016-12-07 21:11:05,015] [7bc4a190-6f0f-4206-b6b9-9a02c793619d] [Curator-Framework-0] [WARN] [ConnectionStateManager.java:235] ConnectionStateManager queue full - dropping events to make room

[2016-12-07 21:11:05,017] [54fcf829-4285-4fd2-a889-b67c28e1e50a] [Curator-Framework-0] [ERROR] [CuratorFrameworkImpl.java:537] Background operation retry gave up

org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:708) [curator-framework-2.8.0.jar:na]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:826) [curator-framework-2.8.0.jar:na]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:792) [curator-framework-2.8.0.jar:na]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:62) [curator-framework-2.8.0.jar:na]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:257) [curator-framework-2.8.0.jar:na]
        at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_51]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
        at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]

[2016-12-07 21:11:05,019] [6ced4c75-20cf-4fdc-a90d-ac303d6abc11] [Curator-Framework-0] [ERROR] [CuratorFrameworkImpl.java:537] Background retry gave up

org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:809) [curator-framework-2.8.0.jar:na]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:792) [curator-framework-2.8.0.jar:na]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:62) [curator-framework-2.8.0.jar:na]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:257) [curator-framework-2.8.0.jar:na]
        at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_51]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
        at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]

Checking the dependencies, the project was pulling in more than one version of zookeeper and of the ZK client. Having also seen this blog post (http://blog.csdn.net/azhao_dn/article/details/8469680), I excluded the lower-version zookeeper and ZK client (Curator) artifacts one by one in the pom (a sketch of such an exclusion follows this paragraph). The problem still recurred afterwards, though, so there was nothing for it but to read the source. Going through the Curator source showed that the KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss is thrown because ZK initialization waits for a connection using the default timeout of 15 seconds.
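
For completeness, the kind of exclusion added to the pom looked roughly like this; the enclosing dependency below is only a placeholder, and the real offender has to be identified with mvn dependency:tree:

    <!-- pom.xml: exclude the transitive, lower-version zookeeper pulled in
         by some other dependency (the artifact below is a placeholder) -->
    <dependency>
        <groupId>com.example</groupId>
        <artifactId>some-library-using-zk</artifactId>
        <version>1.0.0</version>
        <exclusions>
            <exclusion>
                <groupId>org.apache.zookeeper</groupId>
                <artifactId>zookeeper</artifactId>
            </exclusion>
        </exclusions>
    </dependency>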

(image 2: the Curator source where the default 15-second connection wait is defined)

So in the initialization code I increased this timeout somewhat (roughly as in the sketch below). But the problem still came back later, and with no other option I went back over the code.
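
A minimal sketch of setting the timeouts when building the Curator client; the connect string and values are placeholders, not the project's actual configuration:

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class ZkClientFactory {
        public static CuratorFramework create() {
            CuratorFramework client = CuratorFrameworkFactory.builder()
                    .connectString("zk1.example.com:2181,zk2.example.com:2181")
                    .connectionTimeoutMs(30000)    // default is 15 s; raised here
                    .sessionTimeoutMs(60000)
                    .retryPolicy(new ExponentialBackoffRetry(1000, 3))
                    .build();
            client.start();
            return client;
        }
    }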

CuratorFrameworkImpl.java

    private void backgroundOperationsLoop()
    {
        while ( !Thread.currentThread().isInterrupted() )
        {
            OperationAndData operationAndData;
            try
            {
                operationAndData = backgroundOperations.take();
                if ( debugListener != null )
                {
                    debugListener.listen(operationAndData);
                }
            }
            catch ( InterruptedException e )
            {
                Thread.currentThread().interrupt();
                break;
            }

            performBackgroundOperation(operationAndData);
        }
    }

    private void performBackgroundOperation(OperationAndData operationAndData)
    {
        try
        {
            if ( client.isConnected() )
            {
                operationAndData.callPerformBackgroundOperation();
            }
            else
            {
                client.getZooKeeper();  // important - allow connection resets, timeouts, etc. to occur
                if ( operationAndData.getElapsedTimeMs() >= client.getConnectionTimeoutMs() )
                {
                    throw new CuratorConnectionLossException();
                }
                operationAndData.sleepFor(1, TimeUnit.SECONDS);
                queueOperation(operationAndData);
            }
        }
        catch ( Throwable e )
        {
            /**
             * Fix edge case reported as CURATOR-52. ConnectionState.checkTimeouts() throws
             * KeeperException.ConnectionLossException when the initial (or previously failed)
             * connection cannot be re-established. This needs to be run through the retry policy
             * and callbacks need to get invoked, etc.
             */
            if ( e instanceof CuratorConnectionLossException )
            {
                WatchedEvent watchedEvent = new WatchedEvent(Watcher.Event.EventType.None, Watcher.Event.KeeperState.Disconnected, null);
                CuratorEvent event = new CuratorEventImpl(this, CuratorEventType.WATCHED, KeeperException.Code.CONNECTIONLOSS.intValue(), null, null, operationAndData.getContext(), null, null, null, watchedEvent, null);
                if ( checkBackgroundRetry(operationAndData, event) )
                {
                    queueOperation(operationAndData);
                }
                else
                {
                    logError("Background retry gave up", e);
                }
            }
            else
            {
                handleBackgroundOperationException(operationAndData, e);
            }
        }
    }

OperationAndData.java

    long getElapsedTimeMs()
    {
        return System.currentTimeMillis() - startTimeMs;
    }

DelayQueue.java:

    public E take() throws InterruptedException {
        final ReentrantLock lock = this.lock;
        lock.lockInterruptibly();
        try {
            for (;;) {
                E first = q.peek(); // if the queue is empty, first is null
                if (first == null)
                    available.await();
                else {
                    long delay = first.getDelay(NANOSECONDS);
                    if (delay <= 0)
                        return q.poll();
                    first = null; // don't retain ref while waiting
                    if (leader != null)
                        available.await();
                    else {
                        Thread thisThread = Thread.currentThread();
                        leader = thisThread;
                        try {
                            available.awaitNanos(delay);
                        } finally {
                            if (leader == thisThread)
                                leader = null;
                        }
                    }
                }
            }
        } finally {
            if (leader == null && q.peek() != null)
                available.signal();
            lock.unlock();
        }
    }

Looking at this, my guess is that the element taken from the queue is null (the queue is empty), so the thread blocks here, and that is what ultimately leads to the timeout. The question therefore turns into: why is the priority queue (PriorityQueue) empty, so that no element can be obtained and the thread blocks?
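
To make the blocking behaviour concrete, here is a minimal standalone sketch (not Curator code, just java.util.concurrent) showing that take() on a DelayQueue parks the calling thread until an element whose delay has expired is available; on a permanently empty queue it would wait forever:

    import java.util.concurrent.DelayQueue;
    import java.util.concurrent.Delayed;
    import java.util.concurrent.TimeUnit;

    // Standalone illustration: DelayQueue.take() blocks the caller until an
    // expired element is available; with nothing queued it blocks indefinitely.
    public class DelayQueueTakeDemo {

        static class Task implements Delayed {
            private final long readyAtMs;

            Task(long delayMs) {
                this.readyAtMs = System.currentTimeMillis() + delayMs;
            }

            @Override
            public long getDelay(TimeUnit unit) {
                return unit.convert(readyAtMs - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
            }

            @Override
            public int compareTo(Delayed other) {
                return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
            }
        }

        public static void main(String[] args) throws InterruptedException {
            DelayQueue<Task> queue = new DelayQueue<Task>();
            queue.put(new Task(3000));          // element becomes available after 3 s
            long start = System.currentTimeMillis();
            queue.take();                       // blocks until the delay has expired
            System.out.println("took element after " + (System.currentTimeMillis() - start) + " ms");
            // If nothing had been queued, queue.take() would never return.
        }
    }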