RECO & Job Queue Processes [Daily Translation] -- 2012-10-13

Recoverer Process (RECO)
The recoverer process (RECO) is a background process used with the distributed database configuration that automatically resolves failures involving distributed transactions. The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction. When the RECO process reestablishes a connection between involved database servers, it automatically resolves all in-doubt transactions, removing from each database’s pending transaction table any rows that correspond to the resolved in-doubt transactions.
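The clean-up step described above can be illustrated with a minimal Python sketch. The dict-of-lists "pending transaction tables", the `resolve_in_doubt` helper, and the `txn-42`/`txn-7` identifiers are all hypothetical stand-ins, not Oracle structures:

```python
def resolve_in_doubt(pending_tables, resolved_txn_ids):
    """Remove the rows for resolved in-doubt transactions from every
    node's pending-transaction table, mirroring what RECO does after it
    reconnects the involved database servers."""
    for node, rows in pending_tables.items():
        pending_tables[node] = [r for r in rows
                                if r["txn_id"] not in resolved_txn_ids]
    return pending_tables

# Two nodes, each still holding rows for an in-doubt transaction "txn-42".
tables = {
    "db_a": [{"txn_id": "txn-42", "state": "prepared"}],
    "db_b": [{"txn_id": "txn-42", "state": "prepared"},
             {"txn_id": "txn-7", "state": "prepared"}],
}
resolve_in_doubt(tables, {"txn-42"})
# db_a is now empty; db_b keeps only the unrelated txn-7 row
```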

If the RECO process fails to connect with a remote server, RECO automatically tries to connect again after a timed interval. However, RECO waits an increasing amount of time (growing exponentially) before it attempts another connection. The RECO process is present only if the instance permits distributed transactions. The number of concurrent distributed transactions is not limited.
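The exponentially growing retry interval is a classic backoff loop. A minimal Python sketch follows; the `base`, `factor`, and `cap` values are illustrative assumptions, not documented RECO parameters:

```python
import itertools
import time

def reconnect_with_backoff(connect, base=1.0, factor=2.0, cap=300.0,
                           sleep=time.sleep):
    """Retry `connect` until it succeeds, waiting exponentially longer
    between attempts (base, base*factor, base*factor^2, ... seconds,
    capped at `cap`), as RECO does after a failed connection."""
    for attempt in itertools.count():
        try:
            return connect()
        except ConnectionError:
            sleep(min(base * factor ** attempt, cap))

# Simulate a remote server that comes back on the third attempt;
# record the delays instead of actually sleeping.
attempts, delays = [], []
def flaky_connect():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("remote server unreachable")
    return "connected"

result = reconnect_with_backoff(flaky_connect, sleep=delays.append)
# delays == [1.0, 2.0]: each wait doubles the previous one
```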

Job Queue Processes
Job queue processes are used for batch processing. They run user jobs. They can be viewed as a scheduler service that can be used to schedule jobs as PL/SQL statements or procedures on an Oracle instance. Given a start date and an interval, the job queue processes try to run the job at the next occurrence of the interval. Job queue processes are managed dynamically. This allows job queue clients to use more job queue processes when required. The resources used by the new processes are released when they are idle.
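The "start date plus interval" scheduling rule can be sketched in a few lines of Python. The `next_run` helper and the example dates are assumptions for illustration only:

```python
import datetime as dt
import math

def next_run(start, interval, now):
    """Next scheduled occurrence at or after `now`, given a job's start
    date and a fixed interval -- the rule the job queue processes follow
    when deciding when a job runs next."""
    if now <= start:
        return start
    periods = math.ceil((now - start) / interval)
    return start + periods * interval

start = dt.datetime(2012, 10, 13, 9, 0)       # hypothetical start date
interval = dt.timedelta(minutes=30)           # hypothetical interval
now = dt.datetime(2012, 10, 13, 10, 10)
next_run(start, interval, now)                # -> 2012-10-13 10:30
```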

Dynamic job queue processes can run a large number of jobs concurrently at a given interval. The job queue processes run user jobs as they are assigned by the CJQ process. Here’s what happens:

1. The coordinator process, named CJQ0, periodically selects jobs that need to be run from the system JOB$ table. New jobs selected are ordered by time.

2. The CJQ0 process dynamically spawns job queue slave processes (J000…J999) to run the jobs.

3. The job queue process runs one of the jobs that was selected by the CJQ process for execution. The processes run one job at a time.

4. After the process finishes execution of a single job, it polls for more jobs. If no jobs are scheduled for execution, then it enters a sleep state, from which it wakes up at periodic intervals and polls for more jobs. If the process does not find any new jobs, then it aborts after a preset interval.
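The four steps above follow a coordinator/worker pattern that can be sketched with a queue and threads. This is a loose analogy, not Oracle's implementation; the `coordinator` and `slave` functions and the toy job table are assumptions for illustration:

```python
import queue
import threading

def coordinator(job_table, job_q):
    """CJQ0-style coordinator: select the jobs that need to run,
    ordered by time, and hand them to the slaves via a queue."""
    for run_at, job in sorted(job_table, key=lambda row: row[0]):
        job_q.put(job)

def slave(job_q, results, idle_timeout=0.2):
    """J000-style slave: run one job at a time, poll for more, and give
    up (abort) after a preset idle interval with no new work."""
    while True:
        try:
            job = job_q.get(timeout=idle_timeout)
        except queue.Empty:
            return  # no new jobs within the interval: terminate
        results.append(job())

job_q, results = queue.Queue(), []
coordinator([(2, lambda: "second"), (1, lambda: "first")], job_q)
slaves = [threading.Thread(target=slave, args=(job_q, results))
          for _ in range(2)]
for s in slaves:
    s.start()
for s in slaves:
    s.join()
# both jobs ran exactly once, each on some slave
```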

The initialization parameter JOB_QUEUE_PROCESSES represents the maximum
number of job queue processes that can concurrently run on an instance. However,
clients should not assume that all job queue processes are available for job execution.
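The effect of such a cap on concurrency can be modeled with a bounded semaphore. The `JobQueue` class below is a hypothetical sketch of the idea behind JOB_QUEUE_PROCESSES, not how the parameter is implemented:

```python
import threading
import time

class JobQueue:
    """Sketch of a JOB_QUEUE_PROCESSES-style cap: at most `max_processes`
    jobs execute concurrently; further submissions wait for a free slot."""
    def __init__(self, max_processes):
        self._slots = threading.BoundedSemaphore(max_processes)
        self._lock = threading.Lock()
        self._running = 0
        self.peak = 0  # highest concurrency actually observed

    def run(self, job):
        with self._slots:          # blocks while all slots are busy
            with self._lock:
                self._running += 1
                self.peak = max(self.peak, self._running)
            try:
                return job()
            finally:
                with self._lock:
                    self._running -= 1

jq = JobQueue(max_processes=2)
threads = [threading.Thread(target=jq.run, args=(lambda: time.sleep(0.05),))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# jq.peak never exceeds 2, however many jobs were submitted
```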

Note: The coordinator process is not started if the initialization
parameter JOB_QUEUE_PROCESSES is set to 0.
