Reading notes on Broadcast Mode


Broadcast Mode

On some architectures, clock event devices go to sleep when certain
power-saving modes are active. Thankfully, systems do not have only
a single clock event device, so another device that still works can replace
the stopped devices. The global variable tick_broadcast_device contains
the tick_device instance for the broadcast device.
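For orientation, the two structures involved can be sketched as follows. This is a simplified, hypothetical rendering that keeps only the members exercised by the code below; the real kernel definitions contain many more fields.

```c
#include <assert.h>

struct cpumask;  /* opaque here; the kernel defines it as a CPU bitmap */

/* Simplified sketch of struct clock_event_device: only the two methods
 * used by the broadcast code are shown. */
struct clock_event_device {
    void (*event_handler)(struct clock_event_device *);
    void (*broadcast)(const struct cpumask *mask);
};

/* Simplified sketch of struct tick_device: the per-CPU wrapper around
 * a clock event device. */
struct tick_device {
    struct clock_event_device *evtdev;
};
```

tick_broadcast_device is one such tick_device instance; the per-CPU devices live in the per-CPU variable tick_cpu_device, which the code excerpts below access via per_cpu().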
 
The APIC devices are then not functional, but the broadcast event device still is.
tick_handle_periodic_broadcast is used as its event handler.
It deals with both the periodic and the one-shot mode of the broadcast device, so
this distinction need not concern us any further. The handler is activated after
each tick_period.
 
The broadcast handler relies on tick_do_periodic_broadcast. The function invokes
the event_handler method of the nonfunctional device on the current CPU.
The handler cannot distinguish whether it was invoked from a clock interrupt or
from the broadcast device, and is thus executed as if the underlying
event device were functional.
 
If further nonfunctional local tick devices remain, tick_do_broadcast employs
the broadcast method of the first device in the list. For local APICs, the broadcast
method is lapic_timer_broadcast. It is responsible for sending the inter-processor
interrupt (IPI) LOCAL_TIMER_VECTOR to all CPUs that are associated with
nonfunctional tick devices. The vector has been set up by the kernel to call
local_apic_timer_interrupt(). The result is that the clock event device cannot
distinguish between IPIs and real interrupts, so the effect is the same as if
the device were still functional.
 
Because inter-processor interrupts are slow, the kernel always switches to
low-resolution mode if broadcasting is required.
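To make the dispatch concrete, here is a minimal user-space simulation of the tick_do_broadcast logic (an illustrative sketch, not kernel code): CPUs are represented by a plain bitmask, do_broadcast stands in for tick_do_broadcast, and the broadcast callback plays the role of lapic_timer_broadcast by "delivering an IPI" to each remaining CPU.

```c
#include <assert.h>

#define NR_CPUS 4

static int handled[NR_CPUS];  /* records which CPUs saw a tick event */
static int current_cpu;       /* stand-in for smp_processor_id() */

/* Stand-in for the per-CPU event_handler: note that this CPU ticked. */
static void event_handler(int cpu)
{
    handled[cpu] = 1;
}

/* Stand-in for the broadcast method (lapic_timer_broadcast): "send an
 * IPI" to every CPU still set in the mask; each IPI ends up invoking
 * that CPU's local event handler. */
static void broadcast(unsigned int mask)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (mask & (1u << cpu))
            event_handler(cpu);
}

/* Mirrors the control flow of tick_do_broadcast(). */
static void do_broadcast(unsigned int mask)
{
    if (mask & (1u << current_cpu)) {
        mask &= ~(1u << current_cpu);  /* remove current CPU from the mask */
        event_handler(current_cpu);    /* call its handler directly */
    }
    if (mask)                          /* more CPUs in the broadcast mask? */
        broadcast(mask);
}
```

Calling do_broadcast(0xF) on CPU 0 ticks CPU 0 directly and reaches CPUs 1-3 through the broadcast path, mirroring how the kernel serves every CPU whose local device is asleep.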
 
 Figure 15-21


Calling tree:

tick_do_periodic_broadcast
    /* Determine affected CPUs */
    tick_do_broadcast
        /* Remove current CPU from the mask */
        /* Call event_handler for the current CPU */
        /* More CPUs in broadcast mask? */
            /* Call broadcast method */
                 

/*
 * Periodic broadcast:
 * - invoke the broadcast handlers
 */
static void tick_do_periodic_broadcast(void)
{
    raw_spin_lock(&tick_broadcast_lock);

    cpumask_and(to_cpumask(tmpmask),
            cpu_online_mask, tick_get_broadcast_mask());
    tick_do_broadcast(to_cpumask(tmpmask));

    raw_spin_unlock(&tick_broadcast_lock);
}


/*
 * Broadcast the event to the cpus, which are set in the mask (mangled).
 */
static void tick_do_broadcast(struct cpumask *mask)
{
    int cpu = smp_processor_id();
    struct tick_device *td;

    /*
     * Check, if the current cpu is in the mask
     */
    if (cpumask_test_cpu(cpu, mask)) {
        cpumask_clear_cpu(cpu, mask);
        td = &per_cpu(tick_cpu_device, cpu);
        td->evtdev->event_handler(td->evtdev);
    }

    if (!cpumask_empty(mask)) {
        /*
         * It might be necessary to actually check whether the devices
         * have different broadcast functions. For now, just use the
         * one of the first device. This works as long as we have this
         * misfeature only on x86 (lapic)
         */
        td = &per_cpu(tick_cpu_device, cpumask_first(mask));
        td->evtdev->broadcast(mask);
    }
}



/*
 * Local APIC timer broadcast function
 */
static void lapic_timer_broadcast(const struct cpumask *mask)
{
#ifdef CONFIG_SMP
    apic->send_IPI_mask(mask, LOCAL_TIMER_VECTOR);
#endif
}
 
/*
 * The guts of the apic timer interrupt
 */
static void local_apic_timer_interrupt(void)
{
    int cpu = smp_processor_id();
    struct clock_event_device *evt = &per_cpu(lapic_events, cpu);

    /*
     * Normally we should not be here till LAPIC has been initialized but
     * in some cases like kdump, its possible that there is a pending LAPIC
     * timer interrupt from previous kernel's context and is delivered in
     * new kernel the moment interrupts are enabled.
     *
     * Interrupts are enabled early and LAPIC is setup much later, hence
     * its possible that when we get here evt->event_handler is NULL.
     * Check for event_handler being NULL and discard the interrupt as
     * spurious.
     */
    if (!evt->event_handler) {
        pr_warning("Spurious LAPIC timer interrupt on cpu %d\n", cpu);
        /* Switch it off */
        lapic_timer_setup(CLOCK_EVT_MODE_SHUTDOWN, evt);
        return;
    }

    /*
     * the NMI deadlock-detector uses this.
     */
    inc_irq_stat(apic_timer_irqs);

    evt->event_handler(evt);
}