Let's assume that we have a function that multiplies two arrays of 1000000 doubles each. In C/C++ the function looks like this:

void mul_c(double* a, double* b)
{
    for (int i = 0; i != 1000000; ++i)
    {
        a[i] = a[i] * b[i];
    }
}

The compiler produces the following assembly with -O2:

mul_c(double*, double*):
        xor     eax, eax
.L2:
        movsd   xmm0, QWORD PTR [rdi+rax]
        mulsd   xmm0, QWORD PTR [rsi+rax]
        movsd   QWORD PTR [rdi+rax], xmm0
        add     rax, 8
        cmp     rax, 8000000
        jne     .L2
        rep ret

From the above assembly it seems that the compiler uses SSE registers, but it only multiplies one double per iteration (movsd and mulsd are the scalar variants). So I decided to write the same function in inline assembly instead, where I make full use of the xmm0 register and multiply two doubles in one go:

void mul_asm(double* a, double* b)
{
    asm volatile
    (
        ".intel_syntax noprefix             \n"
        "xor    rax, rax                    \n"
        "0:                                 \n"
        "movupd xmm0, xmmword ptr [rdi+rax] \n"
        "mulpd  xmm0, xmmword ptr [rsi+rax] \n"
        "movupd xmmword ptr [rdi+rax], xmm0 \n"
        "add    rax, 16                     \n"
        "cmp    rax, 8000000                \n"
        "jne    0b                          \n"
        ".att_syntax noprefix               \n"

        :
        : "D" (a), "S" (b)
        // rax and xmm0 are modified by the asm, so they must be clobbered
        : "memory", "cc", "rax", "xmm0"
    );
}

After measuring the execution time individually for both of these functions, it seems that both of them take 1 ms to complete:

> gcc -O2 main.cpp
> ./a.out < input

mul_c: 1 ms
mul_asm: 1 ms

[a lot of doubles...]

I expected the SIMD implementation to be at least twice as fast (0 ms) as there is only half the amount of multiplications/memory instructions.

So my question is: Why isn't the SIMD implementation faster than the ordinary C/C++ implementation when the SIMD implementation only does half the amount of multiplications/memory instructions?

Here's the full program:

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

void mul_c(double* a, double* b)
{
    for (int i = 0; i != 1000000; ++i)
    {
        a[i] = a[i] * b[i];
    }
}

void mul_asm(double* a, double* b)
{
    asm volatile
    (
        ".intel_syntax noprefix             \n"
        "xor    rax, rax                    \n"
        "0:                                 \n"
        "movupd xmm0, xmmword ptr [rdi+rax] \n"
        "mulpd  xmm0, xmmword ptr [rsi+rax] \n"
        "movupd xmmword ptr [rdi+rax], xmm0 \n"
        "add    rax, 16                     \n"
        "cmp    rax, 8000000                \n"
        "jne    0b                          \n"
        ".att_syntax noprefix               \n"

        :
        : "D" (a), "S" (b)
        // rax and xmm0 are modified by the asm, so they must be clobbered
        : "memory", "cc", "rax", "xmm0"
    );
}

int main()
{
    struct timeval t1;
    struct timeval t2;
    unsigned long long time;

    double* a = (double*)malloc(sizeof(double) * 1000000);
    double* b = (double*)malloc(sizeof(double) * 1000000);
    double* c = (double*)malloc(sizeof(double) * 1000000);

    for (int i = 0; i != 1000000; ++i)
    {
        double v;
        scanf("%lf", &v);
        a[i] = v;
        b[i] = v;
        c[i] = v;
    }

    gettimeofday(&t1, NULL);
    mul_c(a, b);
    gettimeofday(&t2, NULL);
    time = 1000 * (t2.tv_sec - t1.tv_sec) + (t2.tv_usec - t1.tv_usec) / 1000;
    printf("mul_c: %llu ms
", time);

    gettimeofday(&t1, NULL);
    mul_asm(b, c);
    gettimeofday(&t2, NULL);
    time = 1000 * (t2.tv_sec - t1.tv_sec) + (t2.tv_usec - t1.tv_usec) / 1000;
    printf("mul_asm: %llu ms

", time);

    for (int i = 0; i != 1000000; ++i)
    {
        printf("%lf%lf
", a[i], b[i]);
    }

    return 0;
}

I also tried to make use of all xmm registers (0-7) and remove instruction dependencies to get better parallel computation:

void mul_asm(double* a, double* b)
{
    asm volatile
    (
        ".intel_syntax noprefix                 \n"
        "xor    rax, rax                        \n"
        "0:                                     \n"
        "movupd xmm0, xmmword ptr [rdi+rax]     \n"
        "movupd xmm1, xmmword ptr [rdi+rax+16]  \n"
        "movupd xmm2, xmmword ptr [rdi+rax+32]  \n"
        "movupd xmm3, xmmword ptr [rdi+rax+48]  \n"
        "movupd xmm4, xmmword ptr [rdi+rax+64]  \n"
        "movupd xmm5, xmmword ptr [rdi+rax+80]  \n"
        "movupd xmm6, xmmword ptr [rdi+rax+96]  \n"
        "movupd xmm7, xmmword ptr [rdi+rax+112] \n"
        "mulpd  xmm0, xmmword ptr [rsi+rax]     \n"
        "mulpd  xmm1, xmmword ptr [rsi+rax+16]  \n"
        "mulpd  xmm2, xmmword ptr [rsi+rax+32]  \n"
        "mulpd  xmm3, xmmword ptr [rsi+rax+48]  \n"
        "mulpd  xmm4, xmmword ptr [rsi+rax+64]  \n"
        "mulpd  xmm5, xmmword ptr [rsi+rax+80]  \n"
        "mulpd  xmm6, xmmword ptr [rsi+rax+96]  \n"
        "mulpd  xmm7, xmmword ptr [rsi+rax+112] \n"
        "movupd xmmword ptr [rdi+rax], xmm0     \n"
        "movupd xmmword ptr [rdi+rax+16], xmm1  \n"
        "movupd xmmword ptr [rdi+rax+32], xmm2  \n"
        "movupd xmmword ptr [rdi+rax+48], xmm3  \n"
        "movupd xmmword ptr [rdi+rax+64], xmm4  \n"
        "movupd xmmword ptr [rdi+rax+80], xmm5  \n"
        "movupd xmmword ptr [rdi+rax+96], xmm6  \n"
        "movupd xmmword ptr [rdi+rax+112], xmm7 \n"
        "add    rax, 128                        \n"
        "cmp    rax, 8000000                    \n"
        "jne    0b                              \n"
        ".att_syntax noprefix                   \n"

        :
        : "D" (a), "S" (b)
        // rax and xmm0-xmm7 are modified by the asm, so they must be clobbered
        : "memory", "cc", "rax",
          "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
    );
}

But it still runs at 1 ms, the same speed as the ordinary C/C++ implementation.


UPDATES

As suggested by answers/comments, I've implemented another way of measuring the execution time:

#include <stdio.h>
#include <stdlib.h>

void mul_c(double* a, double* b)
{
    for (int i = 0; i != 1000000; ++i)
    {
        a[i] = a[i] * b[i];
    }
}

void mul_asm(double* a, double* b)
{
    asm volatile
    (
        ".intel_syntax noprefix             \n"
        "xor    rax, rax                    \n"
        "0:                                 \n"
        "movupd xmm0, xmmword ptr [rdi+rax] \n"
        "mulpd  xmm0, xmmword ptr [rsi+rax] \n"
        "movupd xmmword ptr [rdi+rax], xmm0 \n"
        "add    rax, 16                     \n"
        "cmp    rax, 8000000                \n"
        "jne    0b                          \n"
        ".att_syntax noprefix               \n"

        :
        : "D" (a), "S" (b)
        // rax and xmm0 are modified by the asm, so they must be clobbered
        : "memory", "cc", "rax", "xmm0"
    );
}

void mul_asm2(double* a, double* b)
{
    asm volatile
    (
        ".intel_syntax noprefix                 \n"
        "xor    rax, rax                        \n"
        "0:                                     \n"
        "movupd xmm0, xmmword ptr [rdi+rax]     \n"
        "movupd xmm1, xmmword ptr [rdi+rax+16]  \n"
        "movupd xmm2, xmmword ptr [rdi+rax+32]  \n"
        "movupd xmm3, xmmword ptr [rdi+rax+48]  \n"
        "movupd xmm4, xmmword ptr [rdi+rax+64]  \n"
        "movupd xmm5, xmmword ptr [rdi+rax+80]  \n"
        "movupd xmm6, xmmword ptr [rdi+rax+96]  \n"
        "movupd xmm7, xmmword ptr [rdi+rax+112] \n"
        "mulpd  xmm0, xmmword ptr [rsi+rax]     \n"
        "mulpd  xmm1, xmmword ptr [rsi+rax+16]  \n"
        "mulpd  xmm2, xmmword ptr [rsi+rax+32]  \n"
        "mulpd  xmm3, xmmword ptr [rsi+rax+48]  \n"
        "mulpd  xmm4, xmmword ptr [rsi+rax+64]  \n"
        "mulpd  xmm5, xmmword ptr [rsi+rax+80]  \n"
        "mulpd  xmm6, xmmword ptr [rsi+rax+96]  \n"
        "mulpd  xmm7, xmmword ptr [rsi+rax+112] \n"
        "movupd xmmword ptr [rdi+rax], xmm0     \n"
        "movupd xmmword ptr [rdi+rax+16], xmm1  \n"
        "movupd xmmword ptr [rdi+rax+32], xmm2  \n"
        "movupd xmmword ptr [rdi+rax+48], xmm3  \n"
        "movupd xmmword ptr [rdi+rax+64], xmm4  \n"
        "movupd xmmword ptr [rdi+rax+80], xmm5  \n"
        "movupd xmmword ptr [rdi+rax+96], xmm6  \n"
        "movupd xmmword ptr [rdi+rax+112], xmm7 \n"
        "add    rax, 128                        \n"
        "cmp    rax, 8000000                    \n"
        "jne    0b                              \n"
        ".att_syntax noprefix                   \n"

        :
        : "D" (a), "S" (b)
        // rax and xmm0-xmm7 are modified by the asm, so they must be clobbered
        : "memory", "cc", "rax",
          "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
    );
}

unsigned long timestamp()
{
    unsigned long a;

    asm volatile
    (
        ".intel_syntax noprefix \n"
        "xor   rax, rax         \n"
        "xor   rdx, rdx         \n"
        "RDTSCP                 \n"
        "shl   rdx, 32          \n"
        "or    rax, rdx         \n"
        ".att_syntax noprefix   \n"

        : "=a" (a)
        :
        // RDTSCP also writes rdx and ecx, so they must be clobbered
        : "memory", "cc", "rdx", "rcx"
    );

    return a;
}

int main()
{
    unsigned long t1;
    unsigned long t2;

    double* a;
    double* b;

    a = (double*)malloc(sizeof(double) * 1000000);
    b = (double*)malloc(sizeof(double) * 1000000);

    for (int i = 0; i != 1000000; ++i)
    {
        double v;
        scanf("%lf", &v);
        a[i] = v;
        b[i] = v;
    }

    t1 = timestamp();
    mul_c(a, b);
    //mul_asm(a, b);
    //mul_asm2(a, b);
    t2 = timestamp();
    printf("mul_c: %lu cycles

", t2 - t1);

    for (int i = 0; i != 1000000; ++i)
    {
        printf("%lf%lf
", a[i], b[i]);
    }

    return 0;
}

When I run the program with this measurement, I get this result:

mul_c:    ~2163971628 cycles
mul_asm:  ~2532045184 cycles
mul_asm2: ~5230488    cycles <-- what???

Two things are worth noting here. First of all, the cycle counts vary a LOT, and I assume that's because the operating system allows other processes to run in between. Is there any way to prevent that, or to count only the cycles while my program is executing? Also, mul_asm2 produces output identical to the other two, but it is so much faster, how?


I tried Z boson's program on my system together with my two implementations and got the following result:

> g++ -O2 -fopenmp main.cpp
> ./a.out
mul         time 1.33, 18.08 GB/s
mul_SSE     time 1.13, 21.24 GB/s
mul_SSE_NT  time 1.51, 15.88 GB/s
mul_SSE_OMP time 0.79, 30.28 GB/s
mul_SSE_v2  time 1.12, 21.49 GB/s
mul_v2      time 1.26, 18.99 GB/s
mul_asm     time 1.12, 21.50 GB/s
mul_asm2    time 1.09, 22.08 GB/s
See Question&Answers more detail:os

与恶龙缠斗过久,自身亦成为恶龙;凝视深渊过久,深渊将回以凝视…
thumb_up_alt 0 like thumb_down_alt 0 dislike
181 views
Welcome To Ask or Share your Answers For Others

1 Answer

There was a major bug in the timing function I used for previous benchmarks. This grossly underestimated the bandwidth without vectorization, as well as other measurements. Additionally, there was another problem that was overestimating the bandwidth due to COW (copy-on-write) on the array that was read but not written to. Finally, the maximum bandwidth I used was incorrect. I have updated my answer with the corrections, and I have left the old answer at the end of this answer.


Your operation is memory bandwidth bound. This means the CPU is spending most of its time waiting on slow memory reads and writes. An excellent explanation for this can be found here: Why vectorizing the loop does not have performance improvement.
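
A back-of-the-envelope estimate with the numbers from the question makes this concrete: each call reads a, reads b, and writes a, i.e. three streams of 1000000 doubles each.

3 * 1000000 * 8 bytes = 24 MB moved per call
24 MB / 1 ms = 24 GB/s

That is already in the neighborhood of the DRAM bandwidth of a typical desktop system, so the multiplier spends most of its time waiting on memory no matter how many doubles each instruction processes.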

However, I have to disagree slightly with one statement in that answer.

So regardless of how it's optimized, (vectorized, unrolled, etc...) it isn't gonna get much faster.

In fact, vectorization, unrolling, and multiple threads can significantly increase the bandwidth even in memory bandwidth bound operations. The reason is that it is difficult to obtain the maximum memory bandwidth. A good explanation for this can be found here: https://stackoverflow.com/a/25187492/2542702.

The rest of my answer will show how vectorization and multiple threads can get closer to the maximum memory bandwidth.

My test system: Ubuntu 16.10, Skylake (i7-6700HQ @ 2.60 GHz), 32 GB RAM, dual-channel DDR4-2400. The maximum bandwidth from my system is 38.4 GB/s.
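
That maximum follows directly from the memory configuration (the same arithmetic appears as maxbw in the code below):

2400 MT/s * 2 channels * 8 bytes/transfer = 38.4 GB/s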

From the code below I produce the following tables. I set the number of threads using OMP_NUM_THREADS, e.g. export OMP_NUM_THREADS=4. The efficiency is bandwidth/max_bandwidth.
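
For example, the 4-thread row of the first table corresponds to something like this (assuming the benchmark below is saved as main.cpp):

g++ -O2 -march=native -fopenmp main.cpp
export OMP_NUM_THREADS=4
./a.out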

-O2 -march=native -fopenmp
Threads Efficiency
1       59.2%
2       76.6%
4       74.3%
8       70.7%

-O2 -march=native -fopenmp -funroll-loops
Threads Efficiency
1       55.8%
2       76.5%
4       72.1%
8       72.2%

-O3 -march=native -fopenmp
Threads Efficiency
1       63.9%
2       74.6%
4       63.9%
8       63.2%

-O3 -march=native -fopenmp -mprefer-avx128
Threads Efficiency
1       67.8%
2       76.0%
4       63.9%
8       63.2%

-O3 -march=native -fopenmp -mprefer-avx128 -funroll-loops
Threads Efficiency
1       68.8%
2       73.9%
4       69.0%
8       66.8%

After several iterations of running, due to uncertainties in the measurements, I have formed the following conclusions:

  • single-threaded scalar operations get more than 50% of the bandwidth.
  • two-threaded scalar operations get the highest bandwidth.
  • single-threaded vector operations are faster than single-threaded scalar operations.
  • single-threaded SSE operations are faster than single-threaded AVX operations.
  • unrolling is not helpful.
  • unrolling single-threaded operations is slower than without unrolling.
  • more threads than cores (Hyper-Threading) gives a lower bandwidth.

The solution that gives the best bandwidth is scalar operations with two threads.

The code I used to benchmark:

#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <omp.h>

#define N 10000000
#define R 100

void mul(double *a, double *b) {
  #pragma omp parallel for
  for (int i = 0; i<N; i++) a[i] *= b[i];
}

int main() {
  double maxbw = 2.4*2*8; // GB/s: 2400 MT/s * 2 channels * 8 bytes per transfer = 38.4 GB/s
  double mem = 3*sizeof(double)*N*R*1E-9; // GB

  double *a = (double*)malloc(sizeof *a * N);
  double *b = (double*)malloc(sizeof *b * N);

  //due to copy-on-write b must be initialized to get the correct bandwidth
  //also, GCC will convert malloc + memset(0) to calloc so use memset(1)
  memset(b, 1, sizeof *b * N);

  double dtime = -omp_get_wtime();
  for(int i=0; i<R; i++) mul(a,b);
  dtime += omp_get_wtime();
  printf("%.2f s, %.1f GB/s, %.1f%%
", dtime, mem/dtime, 100*mem/dtime/maxbw);

  free(a), free(b);
}

The old solution with the timing bug

The modern solution for inline assembly is to use intrinsics. There are still cases where one needs inline assembly, but this is not one of them.

One intrinsics solution for your inline assembly approach is simply:

void mul_SSE(double*  a, double*  b) {
  for (int i = 0; i<N/2; i++) 
      _mm_store_pd(&a[2*i], _mm_mul_pd(_mm_load_pd(&a[2*i]),_mm_load_pd(&b[2*i])));
}

Let me define some test code

#include <x86intrin.h>
#include <string.h>
#include <stdio.h>
#include <omp.h>

#define N 1000000
#define R 1000

typedef __attribute__(( aligned(32)))  double aligned_double;
void  (*fp)(aligned_double *a, aligned_double *b);

void mul(aligned_double* __restrict a, aligned_double* __restrict b) {
  for (int i = 0; i<N; i++) a[i] *= b[i];
}

void mul_SSE(double*  a, double*  b) {
  for (int i = 0; i<N/2; i++) _mm_store_pd(&a[2*i], _mm_mul_pd(_mm_load_pd(&a[2*i]),_mm_load_pd(&b[2*i])));
}

void mul_SSE_NT(double*  a, double*  b) {
  for (int i = 0; i<N/2; i++) _mm_stream_pd(&a[2*i], _mm_mul_pd(_mm_load_pd(&a[2*i]),_mm_load_pd(&b[2*i])));
}

void mul_SSE_OMP(double*  a, double*  b) {
  #pragma omp parallel for
  for (int i = 0; i<N; i++) a[i] *= b[i];
}

void test(aligned_double *a, aligned_double *b, const char *name) {
  double dtime;
  const double mem = 3.0*sizeof(double)*N*R/1024/1024/1024; // GiB moved: a read + b read + a write (3.0 forces floating-point division)
  const double maxbw = 34.1;
  dtime = -omp_get_wtime();
  for(int i=0; i<R; i++) fp(a,b);
  dtime += omp_get_wtime();
  printf("%s  time %.2f s, %.1f GB/s, efficency %.1f%%
", name, dtime, mem/dtime, 100*mem/dtime/maxbw);
}

int main() {
  double *a = (double*)_mm_malloc(sizeof *a * N, 32);
  double *b = (double*)_mm_malloc(sizeof *b * N, 32);

  //b must be initialized to get the correct bandwidth!!!
  memset(a, 1, sizeof *a * N);
  memset(b, 1, sizeof *a * N);

  fp = mul,         test(a,b, "mul        ");
  fp = mul_SSE,     test(a,b, "mul_SSE    ");
  fp = mul_SSE_NT,  test(a,b, "mul_SSE_NT ");
  fp = mul_SSE_OMP, test(a,b, "mul_SSE_OMP");

  _mm_free(a), _mm_free(b);
}

Now the first test

g++ -O2 -fopenmp test.cpp
./a.out
mul              time 1.67 s, 13.1 GB/s, efficiency 38.5%
mul_SSE          time 1.00 s, 21.9 GB/s, efficiency 64.3%
mul_SSE_NT       time 1.05 s, 20.9 GB/s, efficiency 61.4%
mul_SSE_OMP      time 0.74 s, 29.7 GB/s, efficiency 87.0%

So with -O2, which does not vectorize loops, we see that the intrinsic SSE version is much faster than the plain C solution mul. efficiency = bandwidth_measured/max_bandwidth, where the max is 34.1 GB/s for my system.
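
As a quick check of that formula on the mul_SSE row: 21.9 / 34.1 ≈ 0.64, which matches the printed efficiency of 64.3% (computed from the unrounded measurements).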

Second test

g++ -O3 -fopenmp test.cpp
./a.out
mul              time 1.05 s, 20.9 GB/s, efficiency 61.2%
mul_SSE          time 0.99 s, 22.3 GB/s, efficiency 65.3%
mul_SSE_NT       time 1.01 s, 21.7 GB/s, efficiency 63.7%
mul_SSE_OMP      time 0.68 s, 32.5 GB/s, efficiency 95.2%

With -O3, GCC vectorizes the loop and the intrinsic function offers essentially no advantage.

Third test

g++ -O3 -fopenmp -funroll-loops test.cpp
./a.out
mul              time 0.85 s, 25.9 GB/s, efficiency 76.1%
mul_SSE          time 0.84 s, 26.2 GB/s, efficiency 76.7%
mul_SSE_NT       time 1.06 s, 20.8 GB/s, efficiency 61.0%
mul_SSE_OMP      time 0.76 s, 29.0 GB/s, efficiency 85.0%

With -funroll-loops GCC unrolls the loop eight times, and we see a significant improvement for everything except the non-temporal store solution, and no real advantage for the OpenMP solution.

Before unrolling the loop, the assembly for mul with -O3 is

    xor     eax, eax
.L2:
    movupd  xmm0, XMMWORD PTR [rsi+rax]
    mulpd   xmm0, XMMWORD PTR [rdi+rax]
    movaps  XMMWORD PTR [rdi+rax], xmm0
    add     rax, 16
    cmp     rax, 8000000
    jne     .L2
    rep ret

With -O3 -funroll-loops the assembly for mul is:

   xor     eax, eax
.L2:
    movupd  xmm0, XMMWORD PTR [rsi+rax]
    movupd  xmm1, XMMWORD PTR [rsi+16+rax]
    mulpd   xmm0, XMMWORD PTR [rdi+rax]
    movupd  xmm2, XMMWORD PTR [rsi+32+rax]
    mulpd   xmm1, XMMWORD PTR [rdi+16+rax]
    movupd  xmm3, XMMWORD PTR [rsi+48+rax]
    mulpd   xmm2, XMMWORD PTR [rdi+32+rax]
    movupd  xmm4, XMMWORD PTR [rsi+64+rax]
    mulpd   xmm3, XMMWORD PTR [rdi+48+rax]
    movupd  xmm5, XMMWORD PTR [rsi+80+rax]
    mulpd   xmm4, XMMWORD PTR [rdi+64+rax]
    movupd  xmm6, XMMWORD PTR [rsi+96+rax]
    mulpd   xmm5, XMMWORD PTR [rdi+80+rax]
    movupd  xmm7, XMMWORD PTR [rsi+112+rax]
    mulpd   xmm6, XMMWORD PTR [rdi+96+rax]
    movaps  XMMWORD PTR [rdi+rax], xmm0
    mulpd   xmm7, XMMWORD PTR [rdi+112+rax]
    movaps  XMMWORD PTR [rdi+16+rax], xmm1
    movaps  XMMWORD PTR [rdi+32+rax], xmm2
    movaps  XMMWORD PTR [rdi+48+rax], xmm3
    movaps  XMMWORD PTR [rdi+64+rax], xmm4
    movaps  XMMWORD PTR [rdi+80+rax], xmm5
    movaps  XMMWORD PTR [rdi+96+rax], xmm6
    movaps  XMMWORD PTR [rdi+112+rax], xmm7
    sub     rax, -128
    cmp     rax, 8000000
    jne     .L2
    rep ret

Fourth test

g++ -O3 -fopenmp -mavx test.cpp
./a.out
mul              time 0.87 s, 25.3 GB/s, efficiency 74.3%
mul_SSE          time 0.88 s, 24.9 GB/s, efficiency 73.0%
mul_SSE_NT       time 1.07 s, 20.6 GB/s, efficiency 60.5%
mul_SSE_OMP      time 0.76 s, 29.0 GB/s, efficiency 85.2%

Now the non-intrinsic function is the fastest (excluding the OpenMP version).

So there is no reason to use intrinsics or inline assembly in this case because we can get the best performance with appropriate compiler options (e.g. -O3, -funroll-loops, -mavx).
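
If you would rather have the compiler report what it vectorized than read the assembly yourself, GCC can print its vectorization decisions; a possible invocation (the -fopt-info-vec flag exists in GCC 4.9 and later):

g++ -O3 -funroll-loops -mavx -fopt-info-vec -fopenmp test.cpp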

Test system: Ubuntu 16.10, Skylake (i7-6700HQ @ 2.60 GHz), 32 GB RAM. Maximum memory bandwidth (34.1 GB/s): https://ark.intel.com/products/88967/Intel-Core-i7-6700HQ-Processor-6M-Cache-up-to-3_50-GHz


Here is another solution worth considering. The cmp instruction is not necessary if we count from -N up to zero and access the arrays as N+i. GCC should have fixed this a long time ago. It eliminates one instruction (though due to macro-op fusion the cmp and jne often count as one micro-op).

void mul_SSE_v2(double*  a, double*  b) {
  for (ptrdiff_t i = -N; i<0; i+=2)
    _mm_store_pd(&a[N + i], _mm_mul_pd(_mm_load_pd(&a[N + i]),_mm_load_pd(&b[N + i])));
}

Assembly with -O3

mul_SSE_v2(double*, double*):
    mov     rax, -1000000
.L9:
    movapd  xmm0, XMMWORD PTR [rdi+8000000+rax*8]
    mulpd   xmm0, XMMWORD PTR [rsi+8000000+rax*8]
    movaps  XMMWORD PTR [rdi+8000000+rax*8], xmm0
    add     rax, 2
    jne     .L9
    rep ret

This optimization will only possibly be helpful if the arrays fit in, e.g., the L1 cache, i.e. when not reading from main memory (each array here is 8 MB, so it does not apply in this benchmark).


I finally found a way to get the plain C solution to not generate the cmp instruction.

