Source-code analysis of Quartz's recovery and misfire mechanisms


As a mature job-scheduling framework, Quartz is carefully designed to cope with runtime exceptions and crashes, so that the whole scheduling process forms a closed logical loop: problems that arise at any stage can be compensated for, as far as possible, by mechanisms built into the framework, steering the system back onto a normal track.

One point should be made clear up front: if an exception is thrown from inside a job while Quartz is executing it, Quartz does not handle it in any way. That is a problem of the user's own program, not something the framework needs to deal with.
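To make that concrete, here is a minimal, hypothetical sketch of the usual pattern on the application side (the class name and body are illustrative, not taken from Quartz): left alone, a job exception is merely logged by Quartz, so the job catches its own exceptions, wraps them in JobExecutionException, the only checked exception execute() may throw, and decides for itself whether to ask the scheduler for an immediate refire.

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// Illustrative job: failure handling is the application's responsibility.
public class SelfHealingJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            doWork();
        } catch (Exception e) {
            JobExecutionException jee = new JobExecutionException(e);
            jee.setRefireImmediately(true); // ask Quartz to run this job again right away
            throw jee;
        }
    }

    private void doWork() throws Exception {
        // application logic goes here
    }
}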

The two broad classes of abnormal situations Quartz deals with are described below:

misfire

fail-over

1. Misfire

A misfire, as the name suggests, happens when Quartz fails to fire a trigger at the moment it should have been fired. The trigger's next fire time then lies in the past, and under the normal scheduling flow that trigger would never get another chance to be scheduled. Because each scheduler instance sleeps for a while between scheduling passes, there can be windows in which every instance is asleep and nothing fires triggers. The scheduler therefore periodically (roughly every 15 to 60 seconds) inspects each trigger's nextFireTime and checks whether it has fallen far enough behind the current time. Quartz defines a threshold for this (misfireThreshold), 60 seconds by default: when a trigger's next fire time is more than 60 seconds earlier than now, the scheduler declares that trigger misfired and starts the corresponding procedure to restore it to a normal state. All of this runs in MisfireHandler, a thread class that is started during scheduler initialization alongside the main scheduling thread class, QuartzSchedulerThread.
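The 60-second threshold is configurable. Below is a minimal, self-contained sketch (property values and the instance name are illustrative; RAMJobStore is used only so the snippet runs without a database, and the same org.quartz.jobStore.misfireThreshold key applies to the JDBC job stores discussed in this article):

import java.util.Properties;

import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class MisfireThresholdConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("org.quartz.scheduler.instanceName", "demo");   // illustrative name
        props.put("org.quartz.threadPool.threadCount", "3");
        // RAMJobStore keeps the sketch runnable without a database.
        props.put("org.quartz.jobStore.class", "org.quartz.simpl.RAMJobStore");
        // A trigger whose next fire time lies more than this many milliseconds
        // in the past is treated as misfired (60000 ms is also the default).
        props.put("org.quartz.jobStore.misfireThreshold", "60000");

        Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
        scheduler.start();
        scheduler.shutdown();
    }
}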

Below is the logic Quartz uses to detect misfires:

protected RecoverMisfiredJobsResult doRecoverMisfires() throws JobPersistenceException {
    boolean transOwner = false;
    Connection conn = getNonManagedTXConnection();
    try {
        RecoverMisfiredJobsResult result = RecoverMisfiredJobsResult.NO_OP;
         
        // Before we make the potentially expensive call to acquire the
        // trigger lock, peek ahead to see if it is likely we would find
        // misfired triggers requiring recovery.
        // count the triggers that have misfired
        int misfireCount = (getDoubleCheckLockMisfireHandler()) ?
            getDelegate().countMisfiredTriggersInState(
                conn, STATE_WAITING, getMisfireTime()) :
            Integer.MAX_VALUE;
        // no misfired triggers: do nothing
        if (misfireCount == 0) {
            getLog().debug(
                "Found 0 triggers that missed their scheduled fire-time.");
        } else {
            transOwner = getLockHandler().obtainLock(conn, LOCK_TRIGGER_ACCESS);
            // misfired triggers were found: run the recovery routine to handle them
            result = recoverMisfiredJobs(conn, false);
        }
         
        commitConnection(conn);
        return result;
    } catch (JobPersistenceException e) {
        rollbackConnection(conn);
        throw e;
    } catch (SQLException e) {
        rollbackConnection(conn);
        throw new JobPersistenceException("Database error recovering from misfires.", e);
    } catch (RuntimeException e) {
        rollbackConnection(conn);
        throw new JobPersistenceException("Unexpected runtime exception: "
                + e.getMessage(), e);
    } finally {
        try {
            releaseLock(LOCK_TRIGGER_ACCESS, transOwner);
        } finally {
            cleanupConnection(conn);
        }
    }
}

 

Below is the key code with which Quartz handles misfires:

protected RecoverMisfiredJobsResult recoverMisfiredJobs(
        Connection conn, boolean recovering)
        throws JobPersistenceException, SQLException {
        // If recovering, we want to handle all of the misfired
        // triggers right away.
        int maxMisfiresToHandleAtATime =
            (recovering) ? -1 : getMaxMisfiresToHandleAtATime();
        // list that will collect the keys of the misfired triggers
        List<TriggerKey> misfiredTriggers = new LinkedList<TriggerKey>();
        long earliestNewTime = Long.MAX_VALUE;
        
        // the list is passed in by reference; the delegate fills it with the keys of misfired triggers
        boolean hasMoreMisfiredTriggers =
            getDelegate().hasMisfiredTriggersInState(
                conn, STATE_WAITING, getMisfireTime(),
                maxMisfiresToHandleAtATime, misfiredTriggers);
        if (hasMoreMisfiredTriggers) {
            getLog().info(
                "Handling the first " + misfiredTriggers.size() +
                " triggers that missed their scheduled fire-time.  " +
                "More misfired triggers remain to be processed.");
        } else if (misfiredTriggers.size() > 0) {
            getLog().info(
                "Handling " + misfiredTriggers.size() +
                " trigger(s) that missed their scheduled fire-time.");
        } else {
            getLog().debug(
                "Found 0 triggers that missed their scheduled fire-time.");
            return RecoverMisfiredJobsResult.NO_OP;
        }
        // iterate over the misfired triggers
        for (TriggerKey triggerKey: misfiredTriggers) {
            // load the full trigger
            OperableTrigger trig =
                retrieveTrigger(conn, triggerKey);
            if (trig == null) {
                continue;
            }
            // apply the misfire policy for this trigger type to compute a new next fire time and persist it to the database
            doUpdateOfMisfiredTrigger(conn, trig, false, STATE_WAITING, recovering);
            if(trig.getNextFireTime() != null && trig.getNextFireTime().getTime() < earliestNewTime)
                earliestNewTime = trig.getNextFireTime().getTime();
        }
        return new RecoverMisfiredJobsResult(
                hasMoreMisfiredTriggers, misfiredTriggers.size(), earliestNewTime);
    }

The crux of misfire handling is how the trigger's nextFireTime gets recomputed, and each trigger type offers its own set of misfire-instruction policies for this.

Below are the misfire policies available for the different trigger types (summarized from material found online); a short builder-API sketch follows the list.
CronTrigger

withMisfireHandlingInstructionDoNothing
- does not fire the missed executions immediately
- waits for the next time the cron expression matches, then continues firing on the normal cron schedule

withMisfireHandlingInstructionIgnoreMisfires
- fires immediately, starting from the first missed fire time
- replays every missed fire time
- once the next computed fire time lies in the future, continues on the normal cron schedule

withMisfireHandlingInstructionFireAndProceed (default)
- fires once immediately, treating the current time as the fire time
- then continues on the normal cron schedule


SimpleTrigger

withMisfireHandlingInstructionFireNow
- fires immediately, treating the current time as the fire time
- executes the repetitions remaining up to the finalFireTime
- the interval is re-based on the moment scheduling resumes, and the finalFireTime is recomputed from the remaining count and the current time
- the adjusted finalFireTime is therefore slightly later than the one originally computed from the startTime

withMisfireHandlingInstructionIgnoreMisfires
- fires immediately, starting from the first missed fire time
- replays every missed repetition
- once the next computed fire time lies in the future, executes the remaining repetitions at the configured interval
- executes repeatCount + 1 times in total

withMisfireHandlingInstructionNextWithExistingCount
- does not fire immediately
- waits for the next scheduled fire time and executes the repetitions remaining up to the finalFireTime
- the interval and the finalFireTime are still computed from the original startTime
- even if the trigger is paused and later resumed, the finalFireTime stays unchanged


withMisfireHandlingInstructionNowWithExistingCount (default)
- fires immediately, treating the current time as the fire time
- executes the repetitions remaining up to the finalFireTime
- the interval is re-based on the moment scheduling resumes, and the finalFireTime is recomputed from the remaining count and the current time
- the adjusted finalFireTime is therefore slightly later than the one originally computed from the startTime

withMisfireHandlingInstructionNextWithRemainingCount
- does not fire immediately
- waits for the next scheduled fire time and executes the repetitions remaining up to the finalFireTime
- the interval and the finalFireTime are still computed from the original startTime
- even if the trigger is paused and later resumed, the finalFireTime stays unchanged

withMisfireHandlingInstructionNowWithRemainingCount
- fires immediately, treating the current time as the fire time
- executes the repetitions remaining up to the finalFireTime
- the interval is re-based on the moment scheduling resumes, and the finalFireTime is recomputed from the remaining count and the current time
- the adjusted finalFireTime is therefore slightly later than the one originally computed from the startTime

MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_REMAINING_REPEAT_COUNT
- this instruction makes the trigger forget its originally configured startTime and repeat-count
- the trigger's repeat-count is reset to the number of repetitions remaining
- as a result, the original startTime and repeat-count values can no longer be recovered afterwards
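For reference, here is a short sketch of how these policies are selected with the builder API (trigger identities and schedules are illustrative); the chosen instruction is stored with the trigger and is what gets applied when the misfire-recovery code recomputes the next fire time:

import static org.quartz.CronScheduleBuilder.cronSchedule;
import static org.quartz.SimpleScheduleBuilder.simpleSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.Trigger;

public class MisfirePolicyExamples {
    public static void main(String[] args) {
        // CronTrigger: after a misfire, fire once right away, then return to the cron schedule.
        Trigger cron = newTrigger()
                .withIdentity("cronDemo", "demo")                  // illustrative identity
                .withSchedule(cronSchedule("0 0/5 * * * ?")        // illustrative expression
                        .withMisfireHandlingInstructionFireAndProceed())
                .build();

        // SimpleTrigger: after a misfire, fire now and keep the originally configured repeat count.
        Trigger simple = newTrigger()
                .withIdentity("simpleDemo", "demo")                // illustrative identity
                .withSchedule(simpleSchedule()
                        .withIntervalInSeconds(10)
                        .withRepeatCount(5)
                        .withMisfireHandlingInstructionNowWithExistingCount())
                .build();

        System.out.println(cron.getMisfireInstruction() + " / " + simple.getMisfireInstruction());
    }
}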

 

2. Fail-over

The other problem Quartz has to consider is a crash of a running node. In a cluster, if one node suddenly goes down, the work it was executing is taken over and re-run by whichever node first notices the failure; in other words, the failed node's work is transferred to other nodes, hence the name fail-over. This recovery mechanism operates in a clustered environment, and the thread class that performs the recovery work is called ClusterManager; like MisfireHandler, it is started when the scheduler is initialized. While running, this thread checks in roughly every 15 seconds. Checking in means updating this scheduler's LAST_CHECKIN_TIME column in the QRTZ2_SCHEDULER_STATE table to the current time, and also looking at whether any other scheduler instance has stopped updating its own column. If some scheduler's check-in time is older than the current time by about 15 seconds or more (the exact amount depends on the configured check-in interval), that instance is judged to need recovery and its recovery procedure is started: the triggers that the failed instance was in the middle of firing are retrieved, and for each of them a temporary, fire-once SimpleTrigger is added. When the normal scheduling loop next scans for triggers, these triggers are fired, so the interrupted executions are folded back into the ordinary scheduling flow in the form of these special triggers; as long as scheduling keeps running normally, the recovered triggers are picked up and executed very quickly.
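Two things are worth keeping in mind before reading the code. First, this mechanism only operates in a clustered configuration (typically org.quartz.jobStore.isClustered = true together with a clusterCheckinInterval and a unique instanceId per node). Second, as the clusterRecover code below shows, only jobs that were marked as requesting recovery get the temporary one-shot trigger. Here is a minimal, hypothetical sketch of that marking (job class and identity are illustrative):

import static org.quartz.JobBuilder.newJob;

import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;

public class RecoverableJobExample {

    // Illustrative job: if the node dies mid-execution, a row remains in the fired-triggers table.
    public static class ReportJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // long-running work
        }
    }

    public static void main(String[] args) {
        // requestRecovery(true) is what ftRec.isJobRequestsRecovery() tests in clusterRecover():
        // only such jobs get a one-shot recovery SimpleTrigger scheduled on a surviving node.
        JobDetail job = newJob(ReportJob.class)
                .withIdentity("reportJob", "demo")   // illustrative identity
                .requestRecovery(true)
                .storeDurably()
                .build();

        System.out.println(job.requestsRecovery());
    }
}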

The code below is the run method of the ClusterManager thread class. As you can see, it keeps calling the manage method, which contains the check-in and recovery logic.

public void run() {
        while (!shutdown) {
            if (!shutdown) {
                long timeToSleep = getClusterCheckinInterval();
                long transpiredTime = (System.currentTimeMillis() - lastCheckin);
                timeToSleep = timeToSleep - transpiredTime;
                if (timeToSleep <= 0) {
                    timeToSleep = 100L;
                }
                if(numFails > 0) {
                    // each pass sleeps for at least DbRetryInterval (15s by default)
                    timeToSleep = Math.max(getDbRetryInterval(), timeToSleep);
                }
                 
                try {
                    Thread.sleep(timeToSleep);
                } catch (Exception ignore) {
                }
            }
            // call manage(), which contains the main check-in and recovery logic
            if (!shutdown && this.manage()) {
                signalSchedulingChangeImmediately(0L);
            }
        }//while !shutdown
    }

manage mainly calls the doCheckin method, which carries the detailed check-in and recovery logic:

protected boolean doCheckin() throws JobPersistenceException {
    boolean transOwner = false;
    boolean transStateOwner = false;
    boolean recovered = false;
    Connection conn = getNonManagedTXConnection();
    try {
        // Other than the first time, always checkin first to make sure there is
        // work to be done before we acquire the lock (since that is expensive,
        // and is almost never necessary).  This must be done in a separate
        // transaction to prevent a deadlock under recovery conditions.
        List<SchedulerStateRecord> failedRecords = null;
        // On the very first check-in the database has no row for this scheduler yet, which needs
        // special handling; otherwise clusterCheckIn is called first and the work is committed.
        // clusterCheckIn itself first calls findFailedInstances to see whether the database holds
        // triggers that need recovery, and then refreshes this scheduler's check-in time.
        if (!firstCheckIn) {
            failedRecords = clusterCheckIn(conn);
            commitConnection(conn);
        }
         
        if (firstCheckIn || (failedRecords.size() > 0)) {
            getLockHandler().obtainLock(conn, LOCK_STATE_ACCESS);
            transStateOwner = true;
 
            // Now that we own the lock, make sure we still have work to do.
            // The first time through, we also need to make sure we update/create our state record
            // on the first check-in run clusterCheckIn; afterwards only findFailedInstances
            failedRecords = (firstCheckIn) ? clusterCheckIn(conn) : findFailedInstances(conn);
 
            if (failedRecords.size() > 0) {
                getLockHandler().obtainLock(conn, LOCK_TRIGGER_ACCESS);
                //getLockHandler().obtainLock(conn, LOCK_JOB_ACCESS);
                transOwner = true;
                // start the recovery procedure for the scheduler instances that need it
                clusterRecover(conn, failedRecords);
                recovered = true;
            }
        }
         
        commitConnection(conn);
    } catch (JobPersistenceException e) {
        rollbackConnection(conn);
        throw e;
    } finally {
        try {
            releaseLock(LOCK_TRIGGER_ACCESS, transOwner);
        } finally {
            try {
                releaseLock(LOCK_STATE_ACCESS, transStateOwner);
            } finally {
                cleanupConnection(conn);
            }
        }
    }
    firstCheckIn = false;
    return recovered;
}

The handling of the first check-in in this code is a little confusing: whether or not it is the first check-in, clusterCheckIn always ends up being called, and that method in turn calls findFailedInstances, as shown below:

protected List<SchedulerStateRecord> clusterCheckIn(Connection conn)
    throws JobPersistenceException {
    List<SchedulerStateRecord> failedInstances = findFailedInstances(conn);
     
    try {
        // FUTURE_TODO: handle self-failed-out
        // check in...
        lastCheckin = System.currentTimeMillis();
        if(getDelegate().updateSchedulerState(conn, getInstanceId(), lastCheckin) == 0) {
            getDelegate().insertSchedulerState(conn, getInstanceId(),
                    lastCheckin, getClusterCheckinInterval());
        }
         
    } catch (Exception e) {
        throw new JobPersistenceException("Failure updating scheduler state when checking-in: "
                + e.getMessage(), e);
    }
    return failedInstances;
}

findFailedInstances also contains logic for handling "orphaned" fired triggers. An orphaned fired trigger is a row in QRTZ2_FIRED_TRIGGERS whose scheduler instance id has no corresponding row in QRTZ2_SCHEDULER_STATE. For such a row only the instance id that fired it is known, not the other columns that would normally be available from QRTZ2_SCHEDULER_STATE, so the system has to treat it specially (TODO: how exactly).

This check has to happen whenever a new QRTZ2_SCHEDULER_STATE row is about to be inserted, so the arrangement in doCheckin simply ensures that, before this scheduler adds its own row to QRTZ2_SCHEDULER_STATE, it looks once for such orphaned data and deals with it.

Back in doCheckin: once the list of scheduler instances that need recovery has been obtained, the recovery procedure is started. The code of clusterRecover is as follows:

protected void clusterRecover(Connection conn, List<SchedulerStateRecord> failedInstances)
    throws JobPersistenceException {
    if (failedInstances.size() > 0) {
        long recoverIds = System.currentTimeMillis();
        logWarnIfNonZero(failedInstances.size(),
                "ClusterManager: detected " + failedInstances.size()
                        + " failed or restarted instances.");
        try {
            // iterate over the SchedulerStateRecords that need recovery
            for (SchedulerStateRecord rec : failedInstances) {
                getLog().info(
                        "ClusterManager: Scanning for instance \""
                                + rec.getSchedulerInstanceId()
                                + "\"'s failed in-progress jobs.");
                // read the fired-trigger records this instance left behind
                List<FiredTriggerRecord> firedTriggerRecs = getDelegate()
                        .selectInstancesFiredTriggerRecords(conn,
                                rec.getSchedulerInstanceId());
                int acquiredCount = 0;
                int recoveredCount = 0;
                int otherCount = 0;
                Set<TriggerKey> triggerKeys = new HashSet<TriggerKey>();
                // iterate over the fired-trigger records
                for (FiredTriggerRecord ftRec : firedTriggerRecs) {
                    TriggerKey tKey = ftRec.getTriggerKey();
                    JobKey jKey = ftRec.getJobKey();
                    triggerKeys.add(tKey);
                    // release blocked triggers..
                    if (ftRec.getFireInstanceState().equals(STATE_BLOCKED)) {
                        getDelegate()
                                .updateTriggerStatesForJobFromOtherState(
                                        conn, jKey,
                                        STATE_WAITING, STATE_BLOCKED);
                    } else if (ftRec.getFireInstanceState().equals(STATE_PAUSED_BLOCKED)) {
                        getDelegate()
                                .updateTriggerStatesForJobFromOtherState(
                                        conn, jKey,
                                        STATE_PAUSED, STATE_PAUSED_BLOCKED);
                    }
                    // release acquired triggers..
                    if (ftRec.getFireInstanceState().equals(STATE_ACQUIRED)) {
                        getDelegate().updateTriggerStateFromOtherState(
                                conn, tKey, STATE_WAITING,
                                STATE_ACQUIRED);
                        acquiredCount++;
                    // if the job bound to this trigger requests recovery, run the recovery path
                    } else if (ftRec.isJobRequestsRecovery()) {
                        // handle jobs marked for recovery that were not fully
                        // executed..
                        if (jobExists(conn, jKey)) {
                            @SuppressWarnings("deprecation")
                            // create a one-shot SimpleTrigger to re-run the trigger that needs recovery
                            SimpleTriggerImpl rcvryTrig = new SimpleTriggerImpl(
                                    "recover_"
                                            + rec.getSchedulerInstanceId()
                                            + "_"
                                            + String.valueOf(recoverIds++),
                                    Scheduler.DEFAULT_RECOVERY_GROUP,
                                    new Date(ftRec.getScheduleTimestamp()));
                            rcvryTrig.setJobName(jKey.getName());
                            rcvryTrig.setJobGroup(jKey.getGroup());
                            rcvryTrig.setMisfireInstruction(SimpleTrigger.MISFIRE_INSTRUCTION_IGNORE_MISFIRE_POLICY);
                            rcvryTrig.setPriority(ftRec.getPriority());
                            // load the trigger's JobDataMap
                            JobDataMap jd = getDelegate().selectTriggerJobDataMap(conn, tKey.getName(), tKey.getGroup());
                            jd.put(Scheduler.FAILED_JOB_ORIGINAL_TRIGGER_NAME, tKey.getName());
                            jd.put(Scheduler.FAILED_JOB_ORIGINAL_TRIGGER_GROUP, tKey.getGroup());
                            jd.put(Scheduler.FAILED_JOB_ORIGINAL_TRIGGER_FIRETIME_IN_MILLISECONDS, String.valueOf(ftRec.getFireTimestamp()));
                            jd.put(Scheduler.FAILED_JOB_ORIGINAL_TRIGGER_SCHEDULED_FIRETIME_IN_MILLISECONDS, String.valueOf(ftRec.getScheduleTimestamp()));
                            rcvryTrig.setJobDataMap(jd);
                            rcvryTrig.computeFirstFireTime(null);
                            // persist the one-shot SimpleTrigger so the normal scheduling flow can fire it
                            storeTrigger(conn, rcvryTrig, null, false,
                                    STATE_WAITING, false, true);
                            recoveredCount++;
                        } else {
                            getLog()
                                    .warn(
                                            "ClusterManager: failed job '"
                                                    + jKey
                                                    + "' no longer exists, cannot schedule recovery.");
                            otherCount++;
                        }
                    } else {
                        otherCount++;
                    }
                    // free up stateful job's triggers
                    if (ftRec.isJobDisallowsConcurrentExecution()) {
                        getDelegate()
                                .updateTriggerStatesForJobFromOtherState(
                                        conn, jKey,
                                        STATE_WAITING, STATE_BLOCKED);
                        getDelegate()
                                .updateTriggerStatesForJobFromOtherState(
                                        conn, jKey,
                                        STATE_PAUSED, STATE_PAUSED_BLOCKED);
                    }
                }
                // delete this instance's rows from the fired-triggers table
                getDelegate().deleteFiredTriggers(conn,
                        rec.getSchedulerInstanceId());
                // Check if any of the fired triggers we just deleted were the last fired trigger
                // records of a COMPLETE trigger.
                int completeCount = 0;
                for (TriggerKey triggerKey : triggerKeys) {
                    if (getDelegate().selectTriggerState(conn, triggerKey).
                            equals(STATE_COMPLETE)) {
                        List<FiredTriggerRecord> firedTriggers =
                                getDelegate().selectFiredTriggerRecords(conn, triggerKey.getName(), triggerKey.getGroup());
                        if (firedTriggers.isEmpty()) {
                            if (removeTrigger(conn, triggerKey)) {
                                completeCount++;
                            }
                        }
                    }
                }
                logWarnIfNonZero(acquiredCount,
                        "ClusterManager: ......Freed " + acquiredCount
                                + " acquired trigger(s).");
                logWarnIfNonZero(completeCount,
                        "ClusterManager: ......Deleted " + completeCount
                                + " complete triggers(s).");
                logWarnIfNonZero(recoveredCount,
                        "ClusterManager: ......Scheduled " + recoveredCount
                                + " recoverable job(s) for recovery.");
                logWarnIfNonZero(otherCount,
                        "ClusterManager: ......Cleaned-up " + otherCount
                                + " other failed job(s).");
                if (!rec.getSchedulerInstanceId().equals(getInstanceId())) {
                    getDelegate().deleteSchedulerState(conn,
                            rec.getSchedulerInstanceId());
                }
            }
        } catch (Throwable e) {
            throw new JobPersistenceException("Failure recovering jobs: "
                    + e.getMessage(), e);
        }
    }
}

As we have seen, when a node needs recovery, the jobs it did not finish executing are eventually re-run by other nodes in the form of newly created temporary triggers. This set of mechanisms is precisely what makes Quartz's scheduling dependable.
