
High Performance Computation with Python - Part 03

Saturday, September 24, 2011

Hi all,


This article is the third one in the series about High Performance Computation with Python.  For anyone who missed the first and the second parts, check this link and this one.  The goal is to present approaches to make CPU-demanding tasks in Python run much faster.


The techniques that are being covered:


  1.  Python Profiling - How to find bottlenecks
  2.  Cython -  Annotate your code and compile to C
  3.  Numpy Vectors - Fast vector operations using numpy arrays
  4.  Numpy integration with Cython - fast numerical Python library wrapped by Cython
  5.  PyPy - Python's new Just in Time  Compiler

In this post I will talk about Numpy Vectors and how you can wrap them with Cython!


The Problem

In this series we will analyze how to optimize the Spearman rank correlation coefficient, which is a measure used to compute the similarity between items in recommender systems and assesses how well the relationship between two variables can be described using a monotonic function. The source code for this metric can be found in the first post.


Numpy


Numpy is a powerful extension to Python, adding support for large, multi-dimensional arrays and matrices, along with several mathematical functions to manipulate these arrays. To install it, you can type one of the following commands at your terminal:

$ sudo easy_install numpy

Or

$ pip install numpy

In our example we will change spearman.py: import the numpy library and change spearman_correlation to look like the one below. If you run and test it, you will get the same output as before.
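
Here is a minimal sketch of the vectorized version, assuming the rankings come in as sequences of (key, rank) tuples with short string keys (the exact code may differ):

import numpy as np

def spearman_correlation(ranks1, ranks2):
    """Spearman correlation for two rankings given as sequences of
    (key, rank) tuples; only keys present in both rankings are used."""
    # A list of (key, rank) tuples becomes a structured numpy array.
    dtype = [('key', 'U64'), ('rank', float)]
    r1 = np.array(list(ranks1), dtype=dtype)
    r2 = np.array(list(ranks2), dtype=dtype)
    # numpy.sort with order= sorts the records by the 'rank' field;
    # reversing gives descending order, so the position becomes the rank.
    r1 = np.sort(r1, order='rank')[::-1]
    r2 = np.sort(r2, order='rank')[::-1]
    # numpy.in1d keeps only the keys that appear in both rankings.
    keys1 = r1['key'][np.in1d(r1['key'], r2['key'])]
    keys2 = r2['key'][np.in1d(r2['key'], r1['key'])]
    # Rank position differences between the two rankings for each common key.
    pos2 = dict((k, ix) for ix, k in enumerate(keys2))
    diffs = np.array([ix - pos2[k] for ix, k in enumerate(keys1)], dtype=float)
    n = diffs.size
    if n <= 1:
        return 0.0  # undefined when only one item is ranked
    # diffs * diffs is an element-wise product: the squared rank differences.
    return 1 - (6.0 * (diffs * diffs).sum() / (n * (n * n - 1)))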



Numpy's strength is that it can simplify lots of operations on vectors or matrices of numbers, since it works directly on whole arrays rather than on individual elements one at a time.  Where before we had nested for loops over individual terms in a list, with numpy you can do the same job in a faster and simpler way.
Some notes:

  • You define an array with numpy.array; in our case, from a list of tuples of keys and ranks.
  • Lots of operations are already implemented in numpy, such as numpy.in1d, which finds which elements of the first vector are present in the second, returning an array of booleans.
  • numpy.sort sorts the elements based on a key, in this example the ranks.
  • diffs * diffs does an element-wise multiplication; think of it as diff[0] = diff[0] * diff[0]; diff[1] = diff[1] * diff[1]; ...; diff[n-1] = diff[n-1] * diff[n-1].
  • size is an attribute of a numpy array that gives the total number of elements (m*n for a two-dimensional array).
If it is still unclear, I suggest you try it at the interactive prompt, step by step, and look over the results. Put a small number of elements in the array and see it in action.
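
For example, a quick session at the prompt (the exact output formatting may vary with your numpy version):

>>> import numpy as np
>>> a = np.array([1, 2, 3, 4])
>>> b = np.array([3, 4, 5, 6])
>>> np.in1d(a, b)          # which elements of a also appear in b?
array([False, False,  True,  True], dtype=bool)
>>> diffs = a - b          # element-wise subtraction
>>> diffs * diffs          # element-wise multiplication
array([4, 4, 4, 4])
>>> diffs.size
4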

Numpy with Cython


Numpy is a powerful library and uses very fast, C-optimized math routines to perform these calculations quickly. You can also wrap this numpy code with Cython. The main difference is the annotation of the numpy arrays. You can see the tutorial for further details.  The differences are how we import (cimport numpy as np) and the signature of the function _rank_dists.
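
As an illustration only (the body and types below are assumptions, not the original listing), the annotated helper could look something like this:

cimport numpy as np
import numpy as np

def _rank_dists(np.ndarray[np.double_t, ndim=1] ranks1,
                np.ndarray[np.double_t, ndim=1] ranks2):
    """Element-wise rank differences over typed numpy buffers."""
    cdef int i
    cdef np.ndarray[np.double_t, ndim=1] diffs = np.empty(ranks1.shape[0])
    for i in range(ranks1.shape[0]):
        diffs[i] = ranks1[i] - ranks2[i]
    return diffs

Also note that a numpy-aware extension typically needs the numpy headers, so the setup.py should import numpy and pass include_dirs=[numpy.get_include()] to the Extension.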




Special Notes - Meeting Scipy

Another powerful library is Scipy, a package for Python that brings several linear algebra techniques for dealing with matrices and vectors.  One special module is scipy.stats, which comes with the spearmanr function. It receives two arrays of observations and returns the Spearman coefficient. Amazing! Let's see our code below:
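
Something along these lines (a minimal sketch with made-up observations, not the original listing):

from scipy import stats

observations1 = [1.0, 2.5, 3.0, 4.5, 5.0]
observations2 = [0.9, 2.9, 2.7, 4.4, 5.1]

# spearmanr returns the coefficient and the associated p-value
coefficient, p_value = stats.spearmanr(observations1, observations2)
print(coefficient)   # roughly 0.9 -- the two rankings mostly agree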


In the next post we will study PyPy, a JIT compiler which can speed up your code with minimal changes!

I hope you enjoyed this article,

Marcel Caraciolo

High Performance Computation with Python - Part 02

Saturday, September 17, 2011

Hi all,

Let's continue our series of posts about High Performance with Python. In the last post I presented how to analyze your code using Python profiling.  If you missed the first part, please check this link.  To sum up, our goal is to present several techniques to make CPU-demanding tasks in Python run much faster.

The techniques that will be covered:

  1.  Python Profiling - How to find bottlenecks
  2.  Cython -  Annotate your code and compile to C
  3.  Numpy Vectors - Fast vector operations using numpy arrays
  4.  Numpy integration with Cython - fast numerical Python library wrapped by Cython
  5.  PyPy - Python's new Just in Time  Compiler

In this post I will talk about Cython and how to compile your code to C with this powerful tool!

The Problem

In this series we will analyze how to optimize the Spearman rank correlation coefficient, which is a measure used to compute the similarity between items in recommender systems and assesses how well the relationship between two variables can be described using a monotonic function. The source code for this metric can be found in the last post.

Cython

Cython is a Python extension that lets developers annotate functions so they can be compiled to C. It takes a little time to develop but typically gives a nice speed-up.  If you're just starting with Cython, I recommend checking this tutorial; it is quite useful for beginners.

In our previous example we decided to optimize the function spearman_correlation. So first we will create a new module called spearman_correlation_cython.py and move the spearman_correlation function into it. In the original source you will have to import spearman_correlation_cython and replace the reference to spearman_correlation(...) with spearman_correlation_cython.spearman_correlation(...).
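
In other words, the calling module ends up with something like this (a sketch; the ranking data here is made up):

import spearman_correlation_cython

ranks1 = [('a', 3.0), ('b', 2.0), ('c', 1.0)]
ranks2 = [('a', 2.5), ('b', 2.0), ('c', 1.5)]

# before: coeff = spearman_correlation(ranks1, ranks2)
coeff = spearman_correlation_cython.spearman_correlation(ranks1, ranks2)
print(coeff)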

So the code for your spearman_correlation_cython.py now is:

def spearman_correlation(ranks1, ranks2):
    """Returns the Spearman correlation coefficient for two rankings, which
should be dicts or sequences of (key, rank). The coefficient ranges from
-1.0 (ranks are opposite) to 1.0 (ranks are identical), and is only
calculated for keys in both rankings (for meaningful results, remove keys
present in only one list before ranking)."""
    n = 0
    res = 0
    ranks1 = sorted(ranks1, key=lambda k: -k[1])
    ranks2 = sorted(ranks2, key=lambda k: -k[1])
    ranks1 = [(t[0], ix) for ix, t in enumerate(ranks1)]
    ranks2 = [(t[0], ix) for ix, t in enumerate(ranks2)]
    for k, d in _rank_dists(ranks1, ranks2):
        res += d * d
        n += 1
    try:
        return 1 - (6 * float(res) / (n * (n * n - 1)))
    except ZeroDivisionError:
        # Result is undefined if only one item is ranked
        return 0.0
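
Note that spearman_correlation relies on the _rank_dists helper from the first post, so either move that function into the new module too or import it there. Judging from how it is used above, it looks roughly like this (a sketch, not necessarily the original):

def _rank_dists(ranks1, ranks2):
    """Yield (key, rank difference) for keys present in both rankings,
    given as sequences of (key, rank) tuples."""
    ranks2 = dict(ranks2)
    for key, rank in ranks1:
        if key in ranks2:
            yield key, rank - ranks2[key]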

Next we will have to rename spearman_correlation_cython.py to spearman_correlation_cython.pyx.  Cython uses the .pyx extension to indicate a file that will be compiled to C.  Also add a new setup.py with the following contents:

# setup.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

# for notes on compiler flags see:
# http://docs.python.org/install/index.html
setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=[Extension("spearman_correlation_cython",
                           ["spearman_correlation_cython.pyx"])]
)

Now run the command:

$ python setup.py build_ext --inplace

This command runs the setup.py script that we just created, invoking the build_ext command. The new module is built in place in the directory: you will see a new spearman_correlation_cython.so generated there.

Run the new code using python cython_spearman 199999 and you will see a slight improvement in the speed of the calculation (still very minor!).  You can check how well the slower Python calls are being replaced with faster Cython calls by using:

$ cython -a spearman_correlation_cython.pyx

It will generate a spearman_correlation_cython.html file. If you open it in your browser, you will see something like:

Result of cython -a spearman_correlation_cython

The workflow from here on is progressive. Each time you add a type with Cython, it may improve the resulting code. When it does so successfully, you will see the dark yellow lines turn lighter and eventually white (meaning there is nothing left to improve there: it is as fast as it gets!).  If you want to dig deeper, you can expand a line of yellow code by double-clicking on it. It will show the C/Python API calls that it is making.

Double-clicking one of the yellow lines in the HTML shows the C/Python API calls it makes

You can also add annotations: if you add type definitions (such as cdef int or cdef double...) and run the cython -a ... command again, you can monitor the reduction in yellow in your browser.  Don't forget to recompile using the setup.py command and confirm that the result is slightly faster!

Added some Cython types at the source code.
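
For example, the annotated function might look roughly like this (a sketch; the exact types added in the original may differ):

def spearman_correlation(ranks1, ranks2):
    cdef int n = 0
    cdef double res = 0.0
    cdef double d
    cdef double nn
    ranks1 = sorted(ranks1, key=lambda k: -k[1])
    ranks2 = sorted(ranks2, key=lambda k: -k[1])
    ranks1 = [(t[0], ix) for ix, t in enumerate(ranks1)]
    ranks2 = [(t[0], ix) for ix, t in enumerate(ranks2)]
    for k, d in _rank_dists(ranks1, ranks2):
        res += d * d
        n += 1
    if n <= 1:
        # res and n are plain C values now, so guard the zero division
        # by hand instead of relying on ZeroDivisionError
        return 0.0
    nn = n  # compute the denominator as a double to avoid C int overflow
    return 1 - (6 * res / (nn * (nn * nn - 1)))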


Cython Compiler Directives

You can also set several compiler directives that come with Cython.  To enable them, you can use a comment at the top of the file, change the setup.py, or even decorate functions individually.

Using a comment at the top of the file:
#cython: boundscheck=False

Using a decorator on the function:
import cython
...
@cython.boundscheck(False) # turn off boundscheck for this function
def f():
    ...
    with cython.boundscheck(True): # turn it temporarily on again for this block
        ...

Using the setup.py

ext_modules = [Extension("spam", ["spam.pyx"]),
               Extension("ham", ["ham.pyx"])]
for e in ext_modules:
    e.pyrex_directives = {"boundscheck": False}

One of the most used is profile (as in # cython: profile=True), which is useful for profiling Cython code: it gives you exactly the same output as running cProfile on a normal Python module. Another common directive is boundscheck. Setting it to False disables out-of-bounds index checking on buffered arrays (such as numpy arrays), and since it no longer checks for IndexError it will run faster. Remember that with the check disabled, any indexing mistake can turn into a segmentation fault. So be careful when you decide to use boundscheck=False: be sure the code is working correctly as planned first.  There is also infer_types, which tells Cython to guess the types of untyped variables.
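
For instance, with # cython: profile=True at the top of the .pyx file (and the extension rebuilt with setup.py), you can profile the compiled function from plain Python exactly as before. A small sketch, with made-up rankings:

import cProfile
import spearman_correlation_cython

ranks1 = [('item%d' % i, float(i)) for i in range(1000)]
ranks2 = [('item%d' % i, float(1000 - i)) for i in range(1000)]

# cProfile can now also see the time spent inside the Cython function
cProfile.run('spearman_correlation_cython.spearman_correlation(ranks1, ranks2)')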


Cython directly with C

But you may be asking yourself whether it is possible to wrap your existing C libraries with Cython. Yes, it is possible!  Cython uses external declarations to declare the C functions and variables from the library that you want to use. So let's see a quick example:

Let's consider a simple factorial function (called fatorial in the files below) written in C that we want to wrap and call from Python/Cython:
 
fatorialEx.c 

#include <stdio.h>

int fatorial(int n){
    if (n <= 1) {
        return 1;
    }
    return fatorial(n - 1) * n;
}
Now you have to wrap it in your fatorial.pyx module:

fatorial.pyx

cdef extern from "fatorialEx.c":
    int fatorial(int n)

def fat(n):
    return fatorial(n)

See the line cdef extern (it's how Cython knows how to include external libraries). Finally create your setup.py module to build the extension:

# setup.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

# for notes on compiler flags see:
# http://docs.python.org/install/index.html
setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=[Extension("fatorial", ["fatorial.pyx"])]
)
Build it:

$ python setup.py build_ext --inplace
You will see the module fatorial.so in the directory; this is the file that Python will now import. So in the Python console, type the following commands to test it:

>>> from fatorial import fat
>>> fat(5)
120

It is working!  I am providing the source for this example. For further information about writing your own extensions, check the Cython docs.

That's all,   I hope  you enjoyed!

Regards,

Marcel Caraciolo