# Numeric Computation

Symbolic computer algebra systems like SymPy facilitate the construction and manipulation of mathematical expressions. Unfortunately, when it comes time to evaluate these expressions on numerical data, symbolic systems often have poor performance.

Fortunately SymPy offers a number of easy-to-use hooks into other numeric systems, allowing you to create mathematical expressions in SymPy and then ship them off to the numeric system of your choice. This page documents many of the options available including the `math` library, the popular array computing package `numpy`, code generation in `Fortran` or `C`, and the use of the array compiler `Aesara`.

## Subs/evalf

Substitution with `subs` is the slowest but simplest option: it runs at SymPy speeds. The `.subs(...).evalf()` method substitutes a numeric value for a symbolic one and then evaluates the result within SymPy.

```
>>> from sympy import *
>>> from sympy.abc import x
>>> expr = sin(x)/x
>>> expr.evalf(subs={x: 3.14})
0.000507214304613640
```

This method is slow. You should use it in production only if performance is not an issue. You can expect `.subs` to take tens of microseconds per call. It can be useful while prototyping, or if you just want to see a value once.
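The two-step form mentioned above behaves the same way as passing `subs` to `evalf`; a minimal sketch:

```python
from sympy import sin, symbols

x = symbols("x")
expr = sin(x) / x

# Substitute first, producing a constant symbolic expression,
# then force numeric evaluation with evalf().
val = expr.subs(x, 3.14).evalf()
print(val)  # 0.000507214304613640
```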

## Lambdify

The `lambdify` function translates SymPy expressions into Python functions, leveraging a variety of numerical libraries. It is used as follows:

```
>>> from sympy import *
>>> from sympy.abc import x
>>> expr = sin(x)/x
>>> f = lambdify(x, expr)
>>> f(3.14)
0.000507214304614
```

Here lambdify makes a function that computes `f(x) = sin(x)/x`. By default lambdify relies on implementations in the `math` standard library. This numerical evaluation takes on the order of hundreds of nanoseconds, roughly two orders of magnitude faster than the `.subs` method. This is the speed difference between SymPy and raw Python.
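`lambdify` also handles expressions of several variables: pass the argument symbols as a sequence, and the generated function takes them positionally in the same order. A minimal sketch, using `sin(x)*cos(y)` as an illustrative expression:

```python
import math

from sympy import cos, lambdify, sin, symbols

x, y = symbols("x y")
expr = sin(x) * cos(y)

# The first argument is a sequence of symbols; the generated
# function accepts the corresponding values positionally.
g = lambdify((x, y), expr)

print(g(1.0, 2.0))                      # numeric value of sin(1)*cos(2)
print(math.sin(1.0) * math.cos(2.0))    # same computation by hand
```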

Lambdify can leverage a variety of numerical backends. By default it uses the `math` library. However it also supports `mpmath` and most notably, `numpy`. Using the `numpy` library gives the generated function access to powerful vectorized ufuncs that are backed by compiled C code.

```
>>> from sympy import *
>>> from sympy.abc import x
>>> expr = sin(x)/x
>>> f = lambdify(x, expr, "numpy")
```

```
>>> import numpy
>>> data = numpy.linspace(1, 10, 10000)
>>> f(data)
[ 0.84147098  0.84119981  0.84092844 ... -0.05426074 -0.05433146
 -0.05440211]
```

If you have array-based data this can confer a considerable speedup, on the order of 10 nanoseconds per element. Unfortunately NumPy incurs some start-up time and introduces an overhead of a few microseconds per call.
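The function returned by `lambdify` records the source code it was generated from in its docstring, which is a convenient way to inspect what a given backend produced (shown here for the `numpy` backend):

```python
from sympy import lambdify, sin, symbols

x = symbols("x")
f = lambdify(x, sin(x) / x, "numpy")

# The docstring of the generated wrapper includes its signature,
# the original expression, and the generated source code.
print(f.__doc__)
```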

CuPy is a NumPy-compatible array library that mainly runs on CUDA, but has increasing support for other GPU manufacturers. It can in many cases be used as a drop-in replacement for numpy.

```
>>> f = lambdify(x, expr, "cupy")
>>> import cupy as cp
>>> data = cp.linspace(1, 10, 10000)
>>> y = f(data)  # perform the computation
>>> cp.asnumpy(y)  # explicitly copy from GPU to CPU / numpy array
[ 0.84147098  0.84119981  0.84092844 ... -0.05426074 -0.05433146
 -0.05440211]
```

## uFuncify

The `autowrap` module contains methods that help with efficient computation:

* `autowrap` compiles code generated by the `codegen` module and wraps the binary for use in Python.

* `binary_function` automates the steps needed to autowrap a SymPy expression and attach it to a `Function` object with `implemented_function()`.

* `ufuncify` generates a binary function that supports broadcasting on numpy arrays, using backends that are faster than `subs/evalf` and `lambdify`.

The API reference for all of the above is listed under `sympy.utilities.autowrap`.

## Aesara

SymPy has a strong connection with Aesara, a mathematical array compiler. SymPy expressions can be easily translated to Aesara graphs and then compiled using the Aesara compiler chain.

```
>>> from sympy import *
>>> from sympy.abc import x
>>> expr = sin(x)/x
```

```
>>> from sympy.printing.aesaracode import aesara_function
>>> f = aesara_function([x], [expr])
```

If array broadcasting or types are desired, then Aesara requires this extra information:

```
>>> f = aesara_function([x], [expr], dims={x: 1}, dtypes={x: 'float64'})
```

Aesara has a more sophisticated code generation system than SymPy’s C/Fortran code printers. Among other things it handles common sub-expressions and compilation onto the GPU. Aesara also supports SymPy Matrix and Matrix Expression objects.

## So Which Should I Use?

The options above are listed in order from slowest with the fewest dependencies to fastest with the most. For example, if you have Aesara installed then that will often be the best choice. If you don't have Aesara but do have `f2py`, then you should use `ufuncify`. If you have been comfortable using `lambdify` with the `numpy` backend but have a GPU, CuPy can provide substantial speedups with little effort.

| Tool           | Speed  | Qualities                  | Dependencies |
|----------------|--------|----------------------------|--------------|
| subs/evalf     | 50 µs  | Simple                     | None         |
| lambdify       | 1 µs   | Scalar functions           | math         |
| lambdify-numpy | 10 ns  | Vector functions           | numpy        |
| ufuncify       | 10 ns  | Complex vector expressions | f2py, Cython |
| lambdify-cupy  | 10 ns  | Vector functions on GPUs   | cupy         |
| Aesara         | 10 ns  | Many outputs, CSE, GPUs    | Aesara       |
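Whichever backend you choose, it is worth a quick sanity check that the fast path agrees with SymPy's own arbitrary-precision evaluation; a minimal sketch comparing `evalf` against a `lambdify`-generated function at a few sample points:

```python
from sympy import lambdify, sin, symbols

x = symbols("x")
expr = sin(x) / x

# Fast scalar function using the default math-library backend.
f = lambdify(x, expr)

# Compare against SymPy's slow-but-precise evaluation.
for pt in (0.5, 1.0, 3.14):
    slow = float(expr.evalf(subs={x: pt}))
    fast = f(pt)
    assert abs(slow - fast) < 1e-12
```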