Why is numpy's einsum faster than numpy's built-in functions?

Let's start with a few arrays of dtype=np.double. Timings were performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc without MKL was also used to verify the timings. Please note that the timings scale nearly linearly with system size and are not due to the small overhead incurred by the if statements inside the numpy functions; those differences would show up in microseconds, not milliseconds:

    arr_1D=np.arange(500,dtype=np.double)
    large_arr_1D=np.arange(100000,dtype=np.double)
    arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500)
    arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500)

First, let's look at the np.sum function:

    np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D))
    True

    %timeit np.sum(arr_3D)
    10 loops, best of 3: 142 ms per loop

    %timeit np.einsum('ijk->', arr_3D)
    10 loops, best of 3: 70.2 ms per loop

Powers:

    np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D))
    True

    %timeit arr_3D*arr_3D*arr_3D
    1 loops, best of 3: 1.32 s per loop

    %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D)
    1 loops, best of 3: 694 ms per loop

Outer product:

    np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D))
    True

    %timeit np.outer(arr_1D, arr_1D)
    1000 loops, best of 3: 411 us per loop

    %timeit np.einsum('i,k->ik', arr_1D, arr_1D)
    1000 loops, best of 3: 245 us per loop

All of the above are roughly twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect a speed-up in an operation like this:

    np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D))
    True

    %timeit np.sum(arr_2D*arr_3D)
    1 loops, best of 3: 813 ms per loop

    %timeit np.einsum('ij,oij->', arr_2D, arr_3D)
    10 loops, best of 3: 85.1 ms per loop

Einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent?
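
For reference, the einsum spellings behind the np.inner and np.kron comparisons might look like the following (a sketch; these particular subscript strings and the 1D-only np.kron equivalence are my own illustration, not benchmarks from above):

    a = np.arange(500, dtype=np.double)
    b = np.arange(500, dtype=np.double)

    # inner product of 1D arrays: multiply and sum over the shared index
    np.allclose(np.inner(a, b), np.einsum('i,i->', a, b))
    True

    # Kronecker product of 1D arrays: outer product, then flatten
    np.allclose(np.kron(a, b), np.einsum('i,j->ij', a, b).ravel())
    True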

The DGEMM case for completeness:

    np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D))
    True

    %timeit np.einsum('ij,jk',arr_2D,arr_2D)
    10 loops, best of 3: 56.1 ms per loop

    %timeit np.dot(arr_2D,arr_2D)
    100 loops, best of 3: 5.17 ms per loop
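
As an aside, the implicit form 'ij,jk' used above is just shorthand for 'ij,jk->ik'; when no output is given, einsum sums over repeated indices and orders the remaining ones alphabetically:

    np.allclose(np.einsum('ij,jk', arr_2D, arr_2D),
                np.einsum('ij,jk->ik', arr_2D, arr_2D))
    True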

The leading theory is from @seberg's comment that np.einsum can make use of SSE2, while numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited proof can be found by changing the dtype of the input arrays and observing the speed difference, and by the fact that not everyone observes the same trends in timings.
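
A minimal sketch of that dtype experiment (the specific dtypes chosen here are my own; the point is only that the sum/einsum gap shifts as the dtype, and hence the available vectorized loop, changes):

    import numpy as np
    import timeit

    # Time np.sum against np.einsum for several dtypes; if SSE2 is the
    # cause, the gap should change when the dtype changes.
    for dtype in (np.float64, np.float32, np.int64):
        arr = np.arange(500**3).astype(dtype).reshape(500, 500, 500)
        setup = 'from __main__ import np, arr'
        t_sum = timeit.timeit('np.sum(arr)', setup, number=5)/5
        t_ein = timeit.timeit('np.einsum("ijk->", arr)', setup, number=5)/5
        print dtype.__name__, t_sum, t_ein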

Now that numpy 1.8 is released and, according to the docs, all ufuncs should use SSE2, I wanted to double-check that Seberg's comment about SSE2 was valid.

To perform the test, a new Python 2.7 install was created; numpy 1.7 and 1.8 were compiled with icc using standard options on an AMD Opteron core running Ubuntu.
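
Before each run it is worth confirming which build is actually active; a quick sanity check using standard numpy introspection (nothing here is specific to this question):

    import numpy as np

    # Confirm the version and the BLAS/LAPACK the build was linked against
    print np.__version__
    np.show_config()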

This is the test run both before and after the 1.8 upgrade:

    import numpy as np
    import timeit

    arr_1D=np.arange(5000,dtype=np.double)
    arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500)
    arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500)

    print 'Summation test:'
    print timeit.timeit('np.sum(arr_3D)',
                          'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                          number=5)/5
    print timeit.timeit('np.einsum("ijk->", arr_3D)',
                          'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                          number=5)/5
    print '----------------------\n'


    print 'Power test:'
    print timeit.timeit('arr_3D*arr_3D*arr_3D',
                          'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                          number=5)/5
    print timeit.timeit('np.einsum("ijk,ijk,ijk->ijk", arr_3D, arr_3D, arr_3D)',
                          'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                          number=5)/5
    print '----------------------\n'


    print 'Outer test:'
    print timeit.timeit('np.outer(arr_1D, arr_1D)',
                          'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                          number=5)/5
    print timeit.timeit('np.einsum("i,k->ik", arr_1D, arr_1D)',
                          'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                          number=5)/5
    print '----------------------\n'


    print 'Einsum test:'
    print timeit.timeit('np.sum(arr_2D*arr_3D)',
                          'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                          number=5)/5
    print timeit.timeit('np.einsum("ij,oij->", arr_2D, arr_3D)',
                          'import numpy as np; from __main__ import arr_1D, arr_2D, arr_3D',
                          number=5)/5
    print '----------------------\n'

Numpy 1.7.1:

    Summation test:
    0.172988510132
    0.0934836149216
    ----------------------

    Power test:
    1.93524689674
    0.839519000053
    ----------------------

    Outer test:
    0.130380821228
    0.121401786804
    ----------------------

    Einsum test:
    0.979052495956
    0.126066613197

Numpy 1.8:

    Summation test:
    0.116551589966
    0.0920487880707
    ----------------------

    Power test:
    1.23683619499
    0.815982818604
    ----------------------

    Outer test:
    0.131808176041
    0.127472200394
    ----------------------

    Einsum test:
    0.781750011444
    0.129271841049

I think this is fairly conclusive evidence that SSE plays a large role in the timing differences. It should be noted that, on repeated runs, these timings vary by only ~0.003s. The remaining difference should be covered in the other answers to this question.
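
For anyone who wants to check the variance claim, timeit.repeat reports the spread directly (a sketch; the repeat and number counts here are arbitrary choices of mine):

    import numpy as np
    import timeit

    arr_3D = np.arange(500**3, dtype=np.double).reshape(500, 500, 500)

    # Repeat the summation benchmark and report the spread, to confirm
    # the run-to-run variation is only a few milliseconds.
    times = [t/5 for t in timeit.repeat('np.einsum("ijk->", arr_3D)',
                                        'from __main__ import np, arr_3D',
                                        repeat=5, number=5)]
    print min(times), max(times), max(times) - min(times)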

From: stackoverflow.com/q/18365073
