An example of RGB-to-gray conversion with ARM NEON


First, confirm that the processor supports NEON:

cat /proc/cpuinfo | grep neon 

and check whether the output contains a line like this:

Features : swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt


Here is a straightforward C reference implementation:

void reference_convert (uint8_t * __restrict dest, uint8_t * __restrict src, int n)
{
  int i;
  for (i=0; i<n; i++)
  {
    int r = *src++; // load red
    int g = *src++; // load green
    int b = *src++; // load blue

    // build weighted average:
    int y = (r*77)+(g*151)+(b*28);

    // undo the scale by 256 and write to memory:
    *dest++ = (y>>8);
  }
}

Optimizing the code with NEON
Since NEON works in 64- or 128-bit registers, it's best to process eight pixels in parallel. That way we can exploit the parallel nature of the SIMD unit. Here is what I came up with:

#include <arm_neon.h>

void neon_convert (uint8_t * __restrict dest, uint8_t * __restrict src, int n)
{
  int i;
  uint8x8_t rfac = vdup_n_u8 (77);
  uint8x8_t gfac = vdup_n_u8 (151);
  uint8x8_t bfac = vdup_n_u8 (28);
  n/=8;

  for (i=0; i<n; i++)
  {
    uint16x8_t  temp;
    uint8x8x3_t rgb = vld3_u8 (src);
    uint8x8_t   result;

    temp = vmull_u8 (rgb.val[0], rfac);
    temp = vmlal_u8 (temp, rgb.val[1], gfac);
    temp = vmlal_u8 (temp, rgb.val[2], bfac);
    result = vshrn_n_u16 (temp, 8);
    vst1_u8 (dest, result);

    src  += 8*3;
    dest += 8;
  }
}

Let's take a look at it step by step:

First off I load my weight factors into three NEON registers. The vdup.8 instruction does this and also replicates the byte into all 8 bytes of the NEON register.

    uint8x8_t rfac = vdup_n_u8 (77);
    uint8x8_t gfac = vdup_n_u8 (151);
    uint8x8_t bfac = vdup_n_u8 (28);

Now I load 8 pixels at once into three registers.

    uint8x8x3_t rgb  = vld3_u8 (src);

The vld3.8 instruction is a specialty of the NEON instruction set. With NEON you can not only do loads and stores of multiple registers at once, you can de-interleave the data on the fly as well. Since I expect my pixel data to be interleaved the vld3.8 instruction is a perfect fit for a tight loop.

After the load, I have all the red components of 8 pixels in the first loaded register. The green components end up in the second and blue in the third.

Now calculate the weighted average:

    temp = vmull_u8 (rgb.val[0], rfac);
    temp = vmlal_u8 (temp, rgb.val[1], gfac);
    temp = vmlal_u8 (temp, rgb.val[2], bfac);

vmull.u8 multiplies each byte of the first argument with each corresponding byte of the second argument. Each result becomes a 16 bit unsigned integer, so no overflow can happen. The entire result is returned as a 128 bit NEON register pair.

vmlal.u8 does the same thing as vmull.u8 but also adds the content of another register to the result.

So we end up with just three instructions for weighted average of eight pixels. Nice.

Now it’s time to undo the scaling of the weight factors. To do so I shift each 16-bit result to the right by 8 bits, which is equivalent to dividing by 256. ARM NEON has lots of shift instructions, and among them a “narrow” variant that does two things at once: it performs the shift and then converts the 16-bit integers back to 8 bit by discarding the high byte of each result. That takes us back from the 128-bit register pair to a single 64-bit register.

    result = vshrn_n_u16 (temp, 8);

And finally store the result.

    vst1_u8 (dest, result);

First Results:

How do the reference C function and the NEON-optimized version compare? I did a test on the OMAP3 Cortex-A8 CPU of my BeagleBoard and got the following timings:

  C-version:       15.1 cycles per pixel.
  NEON-version:     9.9 cycles per pixel.

That’s a speed-up of only 1.5x. I expected much more from the NEON implementation. After all, it processes 8 pixels with just 6 instructions. What’s going on here? A look at the assembler output explained it all. Here is the inner-loop part of the convert function:

 160:   f46a040f        vld3.8  {d16-d18}, [sl]
 164:   e1a0c005        mov     ip, r5
 168:   ecc80b06        vstmia  r8, {d16-d18}
 16c:   e1a04007        mov     r4, r7
 170:   e2866001        add     r6, r6, #1      ; 0x1
 174:   e28aa018        add     sl, sl, #24     ; 0x18
 178:   e8bc000f        ldm     ip!, {r0, r1, r2, r3}
 17c:   e15b0006        cmp     fp, r6
 180:   e1a08005        mov     r8, r5
 184:   e8a4000f        stmia   r4!, {r0, r1, r2, r3}
 188:   eddd0b06        vldr    d16, [sp, #24]
 18c:   e89c0003        ldm     ip, {r0, r1}
 190:   eddd2b08        vldr    d18, [sp, #32]
 194:   f3c00ca6        vmull.u8        q8, d16, d22
 198:   f3c208a5        vmlal.u8        q8, d18, d21
 19c:   e8840003        stm     r4, {r0, r1}
 1a0:   eddd3b0a        vldr    d19, [sp, #40]
 1a4:   f3c308a4        vmlal.u8        q8, d19, d20
 1a8:   f2c80830        vshrn.i16       d16, q8, #8
 1ac:   f449070f        vst1.8  {d16}, [r9]
 1b0:   e2899008        add     r9, r9, #8      ; 0x8
 1b4:   caffffe9        bgt     160

Note the store at offset 168? The compiler decides to write the three registers onto the stack. After a handful of useless memory accesses on the integer side, it reloads them (offsets 188, 190 and 1a0) into exactly the same physical NEON registers.

What do all the ordinary integer instructions do? I have no idea. Lots of memory accesses target the stack for no good reason; there is definitely no shortage of registers anywhere. For reference: I used the GCC 4.3.3 compiler (CodeSourcery 2009q1 lite).

NEON and assembler

Since the compiler can’t generate good code I wrote the same loop in assembler. In a nutshell I just took the intrinsic based loop and converted the instructions one by one. The loop-control is a bit different, but that’s all.

convert_asm_neon:

    # r0: Ptr to destination data
    # r1: Ptr to source data
    # r2: Iteration count:

    push        {r4-r5,lr}
    lsr         r2, r2, #3

    # build the three constants:
    mov         r3, #77
    mov         r4, #151
    mov         r5, #28
    vdup.8      d3, r3
    vdup.8      d4, r4
    vdup.8      d5, r5

.loop:
    # load 8 pixels:
    vld3.8      {d0-d2}, [r1]!

    # do the weight average:
    vmull.u8    q3, d0, d3
    vmlal.u8    q3, d1, d4
    vmlal.u8    q3, d2, d5

    # shift and store:
    vshrn.u16   d6, q3, #8
    vst1.8      {d6}, [r0]!

    subs        r2, r2, #1
    bne         .loop

    pop         {r4-r5, pc}

Final Results:

Time for some benchmarking again. How does the hand-written assembler version compare? Well, here are the results:

  C-version:       15.1 cycles per pixel.
  NEON-version:     9.9 cycles per pixel.
  Assembler:        2.0 cycles per pixel.

That’s roughly a factor of five over the intrinsic version, and 7.5 times faster than my not-so-bad C implementation. And keep in mind: I didn’t even optimize the assembler loop.

My conclusion: If you want performance out of your NEON unit stay away from the intrinsics. They are nice as a prototyping tool. Use them to get your algorithm working and then rewrite the NEON-parts of it in assembler.


Original article: http://hilbert-space.de/?p=22
