Reading the Kubernetes Source: controller-manager (Part 3)
Source: Internet · Published: 2024/04/30 07:13
The previous post covered the controller manager's watch mechanism, but one point was left unexplained: how the callbacks registered with AddEventHandler actually get invoked. The callback path differs slightly from the kubelet store callbacks described earlier: here, the callbacks are driven through a FIFO queue. Let's look at the code:
```go
func (s *sharedIndexInformer) Run(stopCh <-chan struct{}) {
	defer utilruntime.HandleCrash()

	fifo := NewDeltaFIFO(MetaNamespaceKeyFunc, nil, s.indexer)

	cfg := &Config{
		Queue:            fifo,
		ListerWatcher:    s.listerWatcher,
		ObjectType:       s.objectType,
		FullResyncPeriod: s.resyncCheckPeriod,
		RetryOnError:     false,
		ShouldResync:     s.processor.shouldResync,

		Process: s.HandleDeltas,
	}

	func() {
		s.startedLock.Lock()
		defer s.startedLock.Unlock()

		s.controller = New(cfg)
		s.controller.(*controller).clock = s.clock
		s.started = true
	}()

	s.stopCh = stopCh
	s.cacheMutationDetector.Run(stopCh)
	s.processor.run(stopCh)
	s.controller.Run(stopCh)
}
```
This creates a DeltaFIFO queue and then runs the controller's Run method:
```go
func (c *controller) Run(stopCh <-chan struct{}) {
	defer utilruntime.HandleCrash()
	go func() {
		<-stopCh
		c.config.Queue.Close()
	}()
	r := NewReflector(
		c.config.ListerWatcher,
		c.config.ObjectType,
		c.config.Queue,
		c.config.FullResyncPeriod,
	)
	r.ShouldResync = c.config.ShouldResync
	r.clock = c.clock

	c.reflectorMutex.Lock()
	c.reflector = r
	c.reflectorMutex.Unlock()

	r.RunUntil(stopCh)

	wait.Until(c.processLoop, time.Second, stopCh)
}
```
RunUntil drives the list-watch loop (listwatch → watchHandler → loop), placing each watched event into the store — and that store is in fact the DeltaFIFO queue. Unlike the kubelet, which triggers its callbacks directly, the controller consumes the queue through wait.Until(c.processLoop, time.Second, stopCh):
```go
func (c *controller) processLoop() {
	for {
		obj, err := c.config.Queue.Pop(PopProcessFunc(c.config.Process))
		if err != nil {
			if err == FIFOClosedError {
				return
			}
			if c.config.RetryOnError {
				// This is the safe way to re-enqueue.
				c.config.Queue.AddIfNotPresent(obj)
			}
		}
	}
}
```
processLoop pops events off the queue one at a time, and for each one invokes the controller's config.Process callback — which is sharedIndexInformer.HandleDeltas (vendor/k8s.io/client-go/tools/cache/shared_informer.go):
```go
func (s *sharedIndexInformer) HandleDeltas(obj interface{}) error {
	s.blockDeltas.Lock()
	defer s.blockDeltas.Unlock()

	// from oldest to newest
	for _, d := range obj.(Deltas) {
		switch d.Type {
		case Sync, Added, Updated:
			isSync := d.Type == Sync
			s.cacheMutationDetector.AddObject(d.Object)
			if old, exists, err := s.indexer.Get(d.Object); err == nil && exists {
				if err := s.indexer.Update(d.Object); err != nil {
					return err
				}
				s.processor.distribute(updateNotification{oldObj: old, newObj: d.Object}, isSync)
			} else {
				if err := s.indexer.Add(d.Object); err != nil {
					return err
				}
				s.processor.distribute(addNotification{newObj: d.Object}, isSync)
			}
		case Deleted:
			if err := s.indexer.Delete(d.Object); err != nil {
				return err
			}
			s.processor.distribute(deleteNotification{oldObj: d.Object}, false)
		}
	}
	return nil
}
```
The handlers are actually triggered inside distribute:
```go
func (p *sharedProcessor) distribute(obj interface{}, sync bool) {
	p.listenersLock.RLock()
	defer p.listenersLock.RUnlock()

	if sync {
		for _, listener := range p.syncingListeners {
			listener.add(obj)
		}
	} else {
		for _, listener := range p.listeners {
			listener.add(obj)
		}
	}
}
```
This brings us back to the s.processor.run(stopCh) call in the first function above:
```go
func (p *sharedProcessor) run(stopCh <-chan struct{}) {
	p.listenersLock.RLock()
	defer p.listenersLock.RUnlock()

	for _, listener := range p.listeners {
		go listener.run(stopCh)
		go listener.pop(stopCh)
	}
}
```
listeners holds all the registered listeners. Look at the run method:
```go
func (p *processorListener) run(stopCh <-chan struct{}) {
	defer utilruntime.HandleCrash()

	for {
		var next interface{}
		select {
		case <-stopCh:
			func() {
				p.lock.Lock()
				defer p.lock.Unlock()
				p.cond.Broadcast()
			}()
			return
		case next = <-p.nextCh:
		}

		switch notification := next.(type) {
		case updateNotification:
			p.handler.OnUpdate(notification.oldObj, notification.newObj)
		case addNotification:
			p.handler.OnAdd(notification.newObj)
		case deleteNotification:
			p.handler.OnDelete(notification.oldObj)
		default:
			utilruntime.HandleError(fmt.Errorf("unrecognized notification: %#v", next))
		}
	}
}
```
Here, at last, is where the registered event handler callbacks are invoked. A careful reader may wonder: where does the data on the p.nextCh channel come from? The answer is listener.pop(stopCh), also started above:
```go
func (p *processorListener) pop(stopCh <-chan struct{}) {
	defer utilruntime.HandleCrash()

	for {
		blockingGet := func() (interface{}, bool) {
			p.lock.Lock()
			defer p.lock.Unlock()

			for len(p.pendingNotifications) == 0 {
				// check if we're shutdown
				select {
				case <-stopCh:
					return nil, true
				default:
				}
				p.cond.Wait()
			}

			nt := p.pendingNotifications[0]
			p.pendingNotifications = p.pendingNotifications[1:]
			return nt, false
		}

		notification, stopped := blockingGet()
		if stopped {
			return
		}

		select {
		case <-stopCh:
			return
		case p.nextCh <- notification:
		}
	}
}
```
pop takes the notifications that distribute appended to the pendingNotifications []interface{} slice and feeds them, one by one, into the nextCh channel. And with that, the entire controller manager watch flow is complete!