Swift Source Code Analysis ---- swift-account-replicator (2)


Thanks to everyone who supports this blog; discussion and exchange are always welcome. Since my ability and time are limited, mistakes are unavoidable, and corrections are welcome!

If you repost this article, please keep the author information.
Blog: http://blog.csdn.net/gaoxingnengjisuan
Email: dong.liu@siat.ac.cn

PS: I have not logged into the blog recently and missed many readers' comments; my apologies! Also, I am rarely on QQ, so feel free to reach me by email.


Continuing from the previous post:

Turning to 2.3, let's look at the implementation of the method _repl_to_node:

def _repl_to_node(self, node, broker, partition, info):
    """
    Replicate the database file to the given node:
    establish a connection to the target partition;
    issue an HTTP REPLICATE request;
    read the response to that request;
    compare sync points and hashes to decide whether the two replicas are
    in sync after replication, i.e. whether replication succeeded;
    if replication succeeded, return True directly.
    """
    with ConnectionTimeout(self.conn_timeout):
        http = self._http_connect(node, partition, broker.db_file)
    if not http:
        self.logger.error(
            _('ERROR Unable to connect to remote server: %s'), node)
        return False
    # Issue an HTTP REPLICATE request
    with Timeout(self.node_timeout):
        response = http.replicate(
            'sync', info['max_row'], info['hash'], info['id'],
            info['created_at'], info['put_timestamp'],
            info['delete_timestamp'], info['metadata'])
    # Handle the error responses
    if not response:
        return False
    elif response.status == HTTP_NOT_FOUND:  # completely missing, rsync
        self.stats['rsync'] += 1
        self.logger.increment('rsyncs')
        return self._rsync_db(broker, node, http, info['id'])
    elif response.status == HTTP_INSUFFICIENT_STORAGE:
        raise DriveNotMounted()
    # A 2xx status means the REPLICATE call completed; compare sync points
    # and hashes to decide whether the two replicas are in sync, i.e.
    # whether replication succeeded, and return True if so.
    elif response.status >= 200 and response.status < 300:
        rinfo = simplejson.loads(response.data)
        local_sync = broker.get_sync(rinfo['id'], incoming=False)
        # Compare the sync points and hashes of rinfo (remote replica info)
        # and info (local replica info) to decide whether the two replicas
        # are already in sync.
        if self._in_sync(rinfo, info, broker, local_sync):
            return True
        # if the difference in rowids between the two differs by
        # more than 50%, rsync then do a remote merge.
        if rinfo['max_row'] / float(info['max_row']) < 0.5:
            self.stats['remote_merge'] += 1
            self.logger.increment('remote_merges')
            return self._rsync_db(broker, node, http, info['id'],
                                  replicate_method='rsync_then_merge',
                                  replicate_timeout=(info['count'] / 2000))
        # else send diffs over to the remote server
        return self._usync_db(max(rinfo['point'], local_sync),
                              broker, http, rinfo['id'], info['id'])
2.3.1 The method replicate issues a REPLICATE call over HTTP, synchronizing the specified local file to the specified remote node and returning the response;
      Note: there are further details here that are not analyzed in this post;
2.3.2 The response is used to decide whether the REPLICATE call completed; if it did, the sync points and hashes of rinfo (remote replica info) and info (local replica info) are compared to determine whether the two replicas are already in sync; if they are, replication succeeded and True is returned directly;
2.3.3 If they are not in sync, the ratio rinfo['max_row'] / float(info['max_row']) is checked; if the remote data differs from the local data by more than 50%, the gap is large, so _rsync_db is called to synchronize the entire database via the rsync command;
2.3.4 If the ratio rinfo['max_row'] / float(info['max_row']) shows the gap is no more than 50%, the difference is small, so _usync_db is called to synchronize only the changed rows (a small sketch of this threshold check follows this list).
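To make the 50% threshold concrete, here is a minimal illustrative sketch (not the Swift code itself; the row counts are made-up values) of the decision between 2.3.3 and 2.3.4:

# Illustrative only: made-up row counts showing which branch _repl_to_node takes.
def choose_sync_strategy(remote_max_row, local_max_row):
    # Same ratio test as _repl_to_node: remote rows / local rows.
    if remote_max_row / float(local_max_row) < 0.5:
        return 'rsync_then_merge'   # remote is missing more than half the rows
    return 'usync'                  # small gap: send only the row diffs

print(choose_sync_strategy(400, 1000))   # -> rsync_then_merge (remote has 40%)
print(choose_sync_strategy(900, 1000))   # -> usync (remote has 90%)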


Turning to 2.3.3, let's look at the implementation of the method _rsync_db:

def _rsync_db(self, broker, device, http, local_id,
              replicate_method='complete_rsync', replicate_timeout=None):
    """
    Synchronize the entire database between nodes via the rsync command.
    """
    device_ip = rsync_ip(device['replication_ip'])
    if self.vm_test_mode:
        remote_file = '%s::%s%s/%s/tmp/%s' % (
            device_ip, self.server_type, device['replication_port'],
            device['device'], local_id)
    else:
        remote_file = '%s::%s/%s/tmp/%s' % (
            device_ip, self.server_type, device['device'], local_id)
    mtime = os.path.getmtime(broker.db_file)
    if not self._rsync_file(broker.db_file, remote_file):
        return False
    # perform block-level sync if the db was modified during the first sync
    if os.path.exists(broker.db_file + '-journal') or \
            os.path.getmtime(broker.db_file) > mtime:
        # grab a lock so nobody else can modify it
        with broker.lock():
            if not self._rsync_file(broker.db_file, remote_file, False):
                return False
    with Timeout(replicate_timeout or self.node_timeout):
        response = http.replicate(replicate_method, local_id)
    return response and response.status >= 200 and response.status < 300
2.3.3.1 _rsync_file is called a first time to synchronize the data between the two nodes via the rsync command; whole_file=True here, so the entire database file is copied (an illustrative sketch of the rsync destination path follows this list);
2.3.3.2 If a file with the -journal suffix exists next to the database file, or its mtime has changed, the database was modified during the first sync; so _rsync_file is called a second time, this time with the broker lock held so that the database cannot be modified again while it is being synchronized; whole_file=False here, so instead of copying the whole file again only the differing parts are transferred;
2.3.3.3 After the data sync completes, the method replicate issues a REPLICATE request over HTTP, which in turn invokes the requested rsync_then_merge method; that method merges the relevant rows from the existing container table into the freshly copied database batch by batch (some details here deserve closer study);
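For reference, the rsync destination built at the top of _rsync_db points at the tmp directory of the target device. With assumed example values (the IP, port, device name, and database id below are placeholders, not taken from the post), the two path formats look roughly like this:

# Assumed example values, for illustration only.
device_ip = '10.0.0.2'
server_type = 'account'
replication_port = 6012      # placeholder port
device = 'sdb1'
local_id = 'a1b2c3d4'        # placeholder database id

# vm_test_mode=True: the port is folded into the rsync module name.
print('%s::%s%s/%s/tmp/%s' % (device_ip, server_type, replication_port,
                              device, local_id))
# -> 10.0.0.2::account6012/sdb1/tmp/a1b2c3d4

# vm_test_mode=False: one rsync module per server type.
print('%s::%s/%s/tmp/%s' % (device_ip, server_type, device, local_id))
# -> 10.0.0.2::account/sdb1/tmp/a1b2c3d4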


Now let's look at the implementation of the method _rsync_file:

def _rsync_file(self, db_file, remote_file, whole_file=True):
    """
    Synchronize a file to another node by running the rsync command.
    """
    popen_args = ['rsync', '--quiet', '--no-motd',
                  '--timeout=%s' % int(math.ceil(self.node_timeout)),
                  '--contimeout=%s' % int(math.ceil(self.conn_timeout))]
    if whole_file:
        popen_args.append('--whole-file')
    popen_args.extend([db_file, remote_file])
    proc = subprocess.Popen(popen_args)
    proc.communicate()
    if proc.returncode != 0:
        self.logger.error(_('ERROR rsync failed with %(code)s: %(args)s'),
                          {'code': proc.returncode, 'args': popen_args})
    return proc.returncode == 0
Notes: (1) Once the argument list has been assembled, Popen starts the rsync process and communicate simply waits for it to finish; rsync itself performs the transfer to the remote node;
    (2) The rsync command is commonly used for data synchronization and backup between nodes; its key property is that the first sync copies everything, while later syncs transfer only what has changed, which makes it efficient;
    (3) The whole_file parameter decides whether the --whole-file flag is added, i.e. whether the file is copied in full rather than using rsync's delta-transfer algorithm (an illustrative command line is sketched below).
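As an illustration (the paths, timeouts, and id below are assumed placeholder values; only the option layout mirrors _rsync_file), the argument list built for a whole-file copy comes out roughly as:

# Placeholder values; only the option layout mirrors _rsync_file.
node_timeout, conn_timeout = 10, 5
db_file = '/srv/node/sdb1/accounts/1296/ab3/placeholder.db'   # placeholder path
remote_file = '10.0.0.2::account/sdb1/tmp/a1b2c3d4'           # placeholder target

popen_args = ['rsync', '--quiet', '--no-motd',
              '--timeout=%s' % node_timeout,
              '--contimeout=%s' % conn_timeout,
              '--whole-file',              # only appended when whole_file=True
              db_file, remote_file]
print(' '.join(popen_args))
# -> rsync --quiet --no-motd --timeout=10 --contimeout=5 --whole-file \
#    /srv/node/sdb1/accounts/1296/ab3/placeholder.db 10.0.0.2::account/sdb1/tmp/a1b2c3d4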


Next, let's look at the implementation of the method rsync_then_merge:

def rsync_then_merge(self, drive, db_file, args):
    old_filename = os.path.join(self.root, drive, 'tmp', args[0])
    if not os.path.exists(db_file) or not os.path.exists(old_filename):
        return HTTPNotFound()
    new_broker = self.broker_class(old_filename)
    existing_broker = self.broker_class(db_file)
    point = -1
    objects = existing_broker.get_items_since(point, 1000)
    while len(objects):
        new_broker.merge_items(objects)
        point = objects[-1]['ROWID']
        objects = existing_broker.get_items_since(point, 1000)
        sleep()
    new_broker.newid(args[0])
    renamer(old_filename, db_file)
    return HTTPNoContent()

def get_items_since(self, start, count):
    self._commit_puts_stale_ok()
    with self.get() as conn:
        curs = conn.execute('''
            SELECT * FROM %s WHERE ROWID > ? ORDER BY ROWID ASC LIMIT ?
        ''' % self.db_contains_type, (start, count))
        curs.row_factory = dict_factory
        return [r for r in curs]

def merge_items(self, item_list, source=None):
    """
    Merge the given container records (for the specified account) into the
    container table of the target database.
    """
    with self.get() as conn:
        max_rowid = -1
        for rec in item_list:
            record = [rec['name'], rec['put_timestamp'],
                      rec['delete_timestamp'], rec['object_count'],
                      rec['bytes_used'], rec['deleted']]
            query = '''
                SELECT name, put_timestamp, delete_timestamp,
                       object_count, bytes_used, deleted
                FROM container WHERE name = ?'''
            if self.get_db_version(conn) >= 1:
                query += ' AND deleted IN (0, 1)'
            curs = conn.execute(query, (rec['name'],))
            curs.row_factory = None
            row = curs.fetchone()
            if row:
                row = list(row)
                for i in xrange(5):
                    if record[i] is None and row[i] is not None:
                        record[i] = row[i]
                if row[1] > record[1]:  # Keep newest put_timestamp
                    record[1] = row[1]
                if row[2] > record[2]:  # Keep newest delete_timestamp
                    record[2] = row[2]
                # If deleted, mark as such
                if record[2] > record[1] and \
                        record[3] in (None, '', 0, '0'):
                    record[5] = 1
                else:
                    record[5] = 0
            conn.execute('''
                DELETE FROM container WHERE name = ? AND
                                            deleted IN (0, 1)
            ''', (record[0],))
            conn.execute('''
                INSERT INTO container (name, put_timestamp,
                    delete_timestamp, object_count, bytes_used, deleted)
                VALUES (?, ?, ?, ?, ?, ?)
            ''', record)
            if source:
                max_rowid = max(max_rowid, rec['ROWID'])
        if source:
            try:
                conn.execute('''
                    INSERT INTO incoming_sync (sync_point, remote_id)
                    VALUES (?, ?)
                ''', (max_rowid, source))
            except sqlite3.IntegrityError:
                conn.execute('''
                    UPDATE incoming_sync
                    SET sync_point=max(?, sync_point)
                    WHERE remote_id=?
                ''', (max_rowid, source))
        conn.commit()
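To see what merge_items consumes, here is a made-up example of a single container record as returned by get_items_since on an account database; only the keys are meaningful, the values are placeholders:

# Made-up example of one row handed to merge_items.
rec = {
    'ROWID': 42,
    'name': 'my-container',
    'put_timestamp': '1400000000.00000',
    'delete_timestamp': '0',
    'object_count': 12,
    'bytes_used': 4096,
    'deleted': 0,
}
# For an existing row with the same name, merge_items keeps the newest
# put_timestamp and delete_timestamp of the two, and then re-derives the
# deleted flag: it is set to 1 only when delete_timestamp > put_timestamp
# and the container holds no objects.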

Turning to 2.3.4, let's look at the implementation of the method _usync_db:

def _usync_db(self, point, broker, http, remote_id, local_id):
    """
    Since the difference between the two replicas is under 50%, bring the
    remote node up to date by sending all of the change records made since
    the last sync point instead of copying the whole database.
    """
    self.stats['diff'] += 1
    self.logger.increment('diffs')
    self.logger.debug(_('Syncing chunks with %s'), http.host)
    sync_table = broker.get_syncs()
    objects = broker.get_items_since(point, self.per_diff)
    diffs = 0
    while len(objects) and diffs < self.max_diffs:
        diffs += 1
        with Timeout(self.node_timeout):
            response = http.replicate('merge_items', objects, local_id)
        if not response or response.status >= 300 or response.status < 200:
            if response:
                self.logger.error(_('ERROR Bad response %(status)s from '
                                    '%(host)s'),
                                  {'status': response.status,
                                   'host': http.host})
            return False
        point = objects[-1]['ROWID']
        objects = broker.get_items_since(point, self.per_diff)
    if objects:
        self.logger.debug(_(
            'Synchronization for %s has fallen more than '
            '%s rows behind; moving on and will try again next pass.'),
            broker, self.max_diffs * self.per_diff)
        self.stats['diff_capped'] += 1
        self.logger.increment('diff_caps')
    else:
        with Timeout(self.node_timeout):
            response = http.replicate('merge_syncs', sync_table)
        if response and response.status >= 200 and response.status < 300:
            broker.merge_syncs([{'remote_id': remote_id,
                                 'sync_point': point}],
                               incoming=False)
            return True
    return False
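Note the cap on how far a single pass will push: assuming the common defaults of per_diff=1000 and max_diffs=100 (both are configuration options, so your values may differ), one call to _usync_db sends at most per_diff * max_diffs rows; anything beyond that is counted as diff_capped and left for the next replication pass.

# Assumed defaults; both values come from the replicator configuration.
per_diff, max_diffs = 1000, 100
print(per_diff * max_diffs)   # -> 100000 rows at most in one _usync_db call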
