Series Expansions#

Limits#

The main purpose of this module is the computation of limits.

sympy.series.limits.limit(e, z, z0, dir='+')[source]#

Computes the limit of e(z) at the point z0.

Parameters:

e : expression, the limit of which is to be taken

z : symbol representing the variable in the limit.

Other symbols are treated as constants. Multivariate limits are not supported.

z0 : the value toward which z tends. Can be any expression, including oo and -oo.

dir : string, optional (default: “+”)

The limit is bi-directional if dir="+-", from the right (z->z0+) if dir="+", and from the left (z->z0-) if dir="-". For infinite z0 (oo or -oo), the dir argument is determined from the direction of the infinity (i.e., dir="-" for oo).

Examples

>>> from sympy import limit, sin, oo
>>> from sympy.abc import x
>>> limit(sin(x)/x, x, 0)
1
>>> limit(1/x, x, 0) # default dir='+'
oo
>>> limit(1/x, x, 0, dir="-")
-oo
>>> limit(1/x, x, 0, dir='+-')
zoo
>>> limit(1/x, x, oo)
0

Notes

First we try some heuristics for easy and frequent cases like “x”, “1/x”, “x**2” and similar, so that it’s fast. For all other cases, we use the Gruntz algorithm (see the gruntz() function).
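For instance, a limit involving nested exponentials where naive cancellation fails and which is handled by the Gruntz algorithm (an illustrative case; the output shown is the expected mathematical value):

>>> from sympy import exp, limit, oo
>>> from sympy.abc import x
>>> limit(exp(x)*(exp(1/x + exp(-x)) - exp(1/x)), x, oo)
1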

See also

limit_seq

returns the limit of a sequence.

class sympy.series.limits.Limit(e, z, z0, dir='+')[source]#

Represents an unevaluated limit.

Examples

>>> from sympy import Limit, sin
>>> from sympy.abc import x
>>> Limit(sin(x)/x, x, 0)
Limit(sin(x)/x, x, 0, dir='+')
>>> Limit(1/x, x, 0, dir="-")
Limit(1/x, x, 0, dir='-')

doit(**hints)[source]#

Evaluates the limit.

Parameters:

deep : bool, optional (default: True)

Invoke the doit method of the expressions involved before taking the limit.

hints : optional keyword arguments

To be passed to doit methods; only used if deep is True.
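A minimal usage sketch:

>>> from sympy import Limit, sin
>>> from sympy.abc import x
>>> Limit(sin(x)/x, x, 0).doit()
1
>>> Limit(sin(x)/x, x, 0).doit(deep=False)
1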

As is explained above, the workhorse for limit computations is the function gruntz() which implements Gruntz’ algorithm for computing limits.

The Gruntz Algorithm#

This section explains the basics of the algorithm used for computing limits. Most of the time the limit() function should just work. However, it is still useful to keep in mind how it is implemented, in case something does not work as expected.

First we define an ordering on functions. Suppose \(f(x)\) and \(g(x)\) are two real-valued functions such that \(\lim_{x \to \infty} f(x) = \infty\) and similarly \(\lim_{x \to \infty} g(x) = \infty\). We shall say that \(f(x)\) dominates \(g(x)\), written \(f(x) \succ g(x)\), if for all \(a, b \in \mathbb{R}_{>0}\) we have \(\lim_{x \to \infty} \frac{f(x)^a}{g(x)^b} = \infty\). We also say that \(f(x)\) and \(g(x)\) are of the same comparability class if neither \(f(x) \succ g(x)\) nor \(g(x) \succ f(x)\) and shall denote it as \(f(x) \asymp g(x)\).

Note that whenever \(a, b \in \mathbb{R}_{>0}\) then \(a f(x)^b \asymp f(x)\), and we shall use this to extend the definition of \(\succ\) to all functions which tend to \(0\) or \(\pm \infty\) as \(x \to \infty\). Thus we declare that \(f(x) \asymp 1/f(x)\) and \(f(x) \asymp -f(x)\).

It is easy to show the following examples:

  • \(e^x \succ x^m\)

  • \(e^{x^2} \succ e^{mx}\)

  • \(e^{e^x} \succ e^{x^m}\)

  • \(x^m \asymp x^n\)

  • \(e^{x + \frac{1}{x}} \asymp e^{x + \log{x}} \asymp e^x\).

From the above definition, it is possible to prove the following property:

Suppose \(\omega\), \(g_1, g_2, \ldots\) are functions of \(x\), \(\lim_{x \to \infty} \omega = 0\) and \(\omega \succ g_i\) for all \(i\). Let \(c_1, c_2, \ldots \in \mathbb{R}\) with \(c_1 < c_2 < \cdots\).

Then \(\lim_{x \to \infty} \sum_i g_i \omega^{c_i} = \lim_{x \to \infty} g_1 \omega^{c_1}\).

For \(g_1 = g\) and \(\omega\) as above we also have the following easy result:

  • \(\lim_{x \to \infty} g \omega^c = 0\) for \(c > 0\)

  • \(\lim_{x \to \infty} g \omega^c = \pm \infty\) for \(c < 0\), where the sign is determined by the (eventual) sign of \(g\)

  • \(\lim_{x \to \infty} g \omega^0 = \lim_{x \to \infty} g\).
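For example, take \(\omega = e^{-x}\), \(g_1 = 1/x\), \(g_2 = x\) and \(c_1 = -1 < c_2 = 2\), so that \(\lim_{x \to \infty} \omega = 0\) and \(\omega \succ g_i\). The theorem then gives

\[\lim_{x \to \infty} \left( \frac{e^x}{x} + x e^{-2x} \right) = \lim_{x \to \infty} \frac{e^x}{x} = \infty,\]

in agreement with the second of the easy results above (\(c_1 < 0\) and \(g_1\) eventually positive).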

Using these results yields the following strategy for computing \(\lim_{x \to \infty} f(x)\):

  1. Find the set of most rapidly varying subexpressions (MRV set) of \(f(x)\). That is, from the set of all subexpressions of \(f(x)\), find the elements that are maximal under the relation \(\succ\).

  2. Choose a function \(\omega\) that is in the same comparability class as the elements in the MRV set, such that \(\lim_{x \to \infty} \omega = 0\).

  3. Expand \(f(x)\) as a series in \(\omega\) in such a way that the antecedents of the above theorem are satisfied.

  4. Apply the theorem and conclude the computation of \(\lim_{x \to \infty} f(x)\), possibly by recursively working on \(g_1(x)\).
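As a short worked example of this strategy, consider \(\lim_{x \to \infty} \left( e^{x + 1/x} - e^x \right)\). The MRV set is \(\{e^{x + 1/x}, e^x\}\), since these two subexpressions dominate all others and lie in the same comparability class. Choosing \(\omega = e^{-x}\), we rewrite \(e^{x + 1/x} = e^{1/x} \omega^{-1}\) and \(e^x = \omega^{-1}\), so that \(f(x) = \left( e^{1/x} - 1 \right) \omega^{-1}\). Here \(g_1 = e^{1/x} - 1\) is eventually positive and \(c_1 = -1 < 0\), hence \(\lim_{x \to \infty} \left( e^{x + 1/x} - e^x \right) = +\infty\).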

Notes#

This exposition glossed over several details. Many are described in the file gruntz.py, and all can be found in Gruntz’ very readable thesis. The most important points that have not been explained are:

  1. Given f(x) and g(x), how do we determine if \(f(x) \succ g(x)\), \(g(x) \succ f(x)\) or \(g(x) \asymp f(x)\)?

  2. How do we find the MRV set of an expression?

  3. How do we compute series expansions?

  4. Why does the algorithm terminate?

If you are interested, be sure to take a look at Gruntz's thesis.

Reference#

sympy.series.gruntz.gruntz(e, z, z0, dir='+')[source]#

Compute the limit of e(z) at the point z0 using the Gruntz algorithm.

Explanation

z0 can be any expression, including oo and -oo.

For dir="+" (default) it calculates the limit from the right (z->z0+) and for dir="-" the limit from the left (z->z0-). For infinite z0 (oo or -oo), the dir argument does not matter.

This algorithm is fully described in the module docstring in the gruntz.py file. It relies heavily on the series expansion. Most frequently, gruntz() is only used if the faster limit() function (which uses heuristics) fails.
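A brief usage sketch (the outputs shown are the expected values):

>>> from sympy.series.gruntz import gruntz
>>> from sympy import exp, log, oo
>>> from sympy.abc import x
>>> gruntz(exp(x)/x**5, x, oo)
oo
>>> gruntz(x*log(x), x, 0, dir='+')
0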

sympy.series.gruntz.compare(a, b, x)[source]#

Returns “<” if a is in a lower comparability class than b, “=” if a and b are in the same comparability class, and “>” if a is in a higher comparability class (i.e. a varies more rapidly than b as x -> oo).
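For example (illustrative; the expected outputs are shown):

>>> from sympy.series.gruntz import compare
>>> from sympy import exp
>>> from sympy.abc import x
>>> compare(x**2, exp(x), x)
'<'
>>> compare(exp(x), exp(x**2), x)
'<'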

sympy.series.gruntz.rewrite(e, Omega, x, wsym)[source]#

e(x) … the function
Omega … the mrv set
wsym … the symbol which is going to be used for w

Returns the rewritten e in terms of w and log(w). See test_rewrite1() for examples and correct results.

sympy.series.gruntz.build_expression_tree(Omega, rewrites)[source]#

Helper function for rewrite.

We need to sort Omega (mrv set) so that we replace an expression before we replace any expression in terms of which it has to be rewritten:

e1 ---> e2 ---> e3
         \
          -> e4

Here we can do e1, e2, e3, e4 or e1, e2, e4, e3. To do this we assemble the nodes into a tree, and sort them by height.

This function builds the tree, rewrites then sorts the nodes.

sympy.series.gruntz.mrv_leadterm(e, x)[source]#

Returns (c0, e0) for e, i.e. the coefficient and exponent of the leading term of e when expanded in terms of the most rapidly varying subexpression w, so that e behaves like c0*w**e0.

sympy.series.gruntz.calculate_series(e, x, logx=None)[source]#

Calculates at least one term of the series of e in x.

This is a place that fails most often, so it is in its own function.

sympy.series.gruntz.limitinf(e, x)[source]#

Limit e(x) for x-> oo.

sympy.series.gruntz.sign(e, x)[source]#

Returns a sign of an expression e(x) for x->oo.

e >  0 for x sufficiently large ...  1
e == 0 for x sufficiently large ...  0
e <  0 for x sufficiently large ... -1

The result of this function is currently undefined if e changes sign arbitrarily often for arbitrarily large x (e.g. sin(x)).

Note that this returns zero only if e is constantly zero for x sufficiently large. [If e is constant, of course, this is just the same thing as the sign of e.]
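For example (expected outputs shown):

>>> from sympy.series.gruntz import sign
>>> from sympy import exp
>>> from sympy.abc import x
>>> sign(exp(x), x)
1
>>> sign(-exp(x), x)
-1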

sympy.series.gruntz.mrv(e, x)[source]#

Returns a SubsSet of most rapidly varying (mrv) subexpressions of ‘e’, and e rewritten in terms of these

sympy.series.gruntz.mrv_max1(f, g, exps, x)[source]#

Computes the maximum of two sets of expressions f and g, each of which consists of expressions from a single comparability class: mrv_max1() compares (two elements of) f and g and returns the set in the higher comparability class, or the union of both if they have the same order of variation. Also returns exps, with the appropriate substitutions made.

sympy.series.gruntz.mrv_max3(f, expsf, g, expsg, union, expsboth, x)[source]#

Computes the maximum of two sets of expressions f and g, each of which consists of expressions from a single comparability class: mrv_max3() compares (two elements of) f and g and returns either (f, expsf) [if f is larger], (g, expsg) [if g is larger], or (union, expsboth) [if f and g are in the same class].

class sympy.series.gruntz.SubsSet[source]#

Stores (expr, dummy) pairs, and how to rewrite expr-s.

Explanation

The gruntz algorithm needs to rewrite certain expressions in terms of a new variable w. We cannot use subs, because it is just too smart for us. For example:

> Omega=[exp(exp(_p - exp(-_p))/(1 - 1/_p)), exp(exp(_p))]
> O2=[exp(-exp(_p) + exp(-exp(-_p))*exp(_p)/(1 - 1/_p))/_w, 1/_w]
> e = exp(exp(_p - exp(-_p))/(1 - 1/_p)) - exp(exp(_p))
> e.subs(Omega[0],O2[0]).subs(Omega[1],O2[1])
-1/w + exp(exp(p)*exp(-exp(-p))/(1 - 1/p))

is really not what we want!

So we do it the hard way and keep track of all the things we potentially want to substitute by dummy variables. Consider the expression:

exp(x - exp(-x)) + exp(x) + x.

The mrv set is {exp(x), exp(-x), exp(x - exp(-x))}. We introduce corresponding dummy variables d1, d2, d3 and rewrite:

d3 + d1 + x.

This class first of all keeps track of the mapping expr->variable, i.e. at this stage it will be the dictionary:

{exp(x): d1, exp(-x): d2, exp(x - exp(-x)): d3}.

[It turns out to be more convenient this way round.] But sometimes expressions in the mrv set have other expressions from the mrv set as subexpressions, and we need to keep track of that as well. In this case, d3 is really exp(x - d2), so rewrites at this stage is:

{d3: exp(x-d2)}.

The function rewrite uses all this information to correctly rewrite our expression in terms of w. In this case w can be chosen to be exp(-x), i.e. d2. The correct rewriting then is:

exp(-w)/w + 1/w + x.

copy()[source]#

Create a shallow copy of SubsSet

do_subs(e)[source]#

Substitute the variables with expressions

meets(s2)[source]#

Tell whether or not self and s2 have non-empty intersection

union(s2, exps=None)[source]#

Compute the union of self and s2, adjusting exps

More Intuitive Series Expansion#

More intuitive series expansion is achieved by creating a wrapper around Basic.series(). This allows for the use of series(x*cos(x), x), which is possibly more intuitive than (x*cos(x)).series(x).

Examples#

>>> from sympy import Symbol, cos, series
>>> x = Symbol('x')
>>> series(cos(x),x)
1 - x**2/2 + x**4/24 + O(x**6)

Reference#

sympy.series.series.series(expr, x=None, x0=0, n=6, dir='+')[source]#

Series expansion of expr around point \(x = x0\).

Parameters:

expr : Expression

The expression whose series is to be expanded.

x : Symbol

The variable with respect to which the series expansion is computed.

x0 : Value

The point around which the expansion is performed. Can be any value from -oo to oo.

n : Value

The number of terms up to which the series is to be expanded.

dir : String, optional

The series-expansion can be bi-directional. If dir="+", then (x->x0+). If dir="-", then (x->x0-). For infinite x0 (oo or -oo), the dir argument is determined from the direction of the infinity (i.e., dir="-" for oo).

Returns:

Expr

Series expansion of the expression about x0

Examples

>>> from sympy import series, tan, oo
>>> from sympy.abc import x
>>> f = tan(x)
>>> series(f, x, 2, 6, "+")
tan(2) + (1 + tan(2)**2)*(x - 2) + (x - 2)**2*(tan(2)**3 + tan(2)) +
(x - 2)**3*(1/3 + 4*tan(2)**2/3 + tan(2)**4) + (x - 2)**4*(tan(2)**5 +
5*tan(2)**3/3 + 2*tan(2)/3) + (x - 2)**5*(2/15 + 17*tan(2)**2/15 +
2*tan(2)**4 + tan(2)**6) + O((x - 2)**6, (x, 2))
>>> series(f, x, 2, 3, "-")
tan(2) + (2 - x)*(-tan(2)**2 - 1) + (2 - x)**2*(tan(2)**3 + tan(2))
+ O((x - 2)**3, (x, 2))
>>> series(f, x, 2, oo, "+")
Traceback (most recent call last):
...
TypeError: 'Infinity' object cannot be interpreted as an integer

See also

sympy.core.expr.Expr.series

See the docstring of Expr.series() for complete details of this wrapper.

Order Terms#

This module also implements automatic tracking of the order of your expansion.

Examples#

>>> from sympy import Symbol, Order
>>> x = Symbol('x')
>>> Order(x) + x**2
O(x)
>>> Order(x) + 1
1 + O(x)

Reference#

class sympy.series.order.Order(expr, *args, **kwargs)[source]#

Represents the limiting behavior of some function.

Explanation

The order of a function characterizes the function based on the limiting behavior of the function as it goes to some limit. Only taking the limit point to be a number is currently supported. This is expressed in big O notation [R835].

The formal definition for the order of a function \(g(x)\) about a point \(a\) is such that \(g(x) = O(f(x))\) as \(x \rightarrow a\) if and only if there exists a \(\delta > 0\) and an \(M > 0\) such that \(|g(x)| \leq M|f(x)|\) for \(|x-a| < \delta\). This is equivalent to \(\limsup_{x \rightarrow a} |g(x)/f(x)| < \infty\).

Let’s illustrate it on the following example by taking the expansion of \(\sin(x)\) about 0:

\[\sin(x) = x - x^3/3! + O(x^5)\]

where in this case \(O(x^5) = x^5/5! - x^7/7! + \cdots\). By the definition of \(O\), there is a \(\delta > 0\) and an \(M\) such that:

\[\left|x^5/5! - x^7/7! + \cdots\right| \leq M|x^5| \text{ for } |x| < \delta\]

or by the alternate definition:

\[\lim_{x \rightarrow 0} \left| (x^5/5! - x^7/7! + \cdots) / x^5 \right| < \infty\]

which surely is true, because

\[\lim_{x \rightarrow 0} \left| (x^5/5! - x^7/7! + \cdots) / x^5 \right| = 1/5!\]

As it is usually used, the order of a function can be intuitively thought of as representing all terms of powers greater than or equal to the one specified. For example, \(O(x^3)\) corresponds to any terms proportional to \(x^3, x^4,\ldots\) and any higher power. For a polynomial, this leaves terms proportional to \(x^2\), \(x\) and constants.
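For instance, adding an order term to a polynomial absorbs all terms of equal or higher power:

>>> from sympy import O
>>> from sympy.abc import x
>>> 1 + x + x**2 + x**3 + x**4 + O(x**3)
1 + x + x**2 + O(x**3)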

Examples

>>> from sympy import O, oo, cos, pi
>>> from sympy.abc import x, y
>>> O(x + x**2)
O(x)
>>> O(x + x**2, (x, 0))
O(x)
>>> O(x + x**2, (x, oo))
O(x**2, (x, oo))
>>> O(1 + x*y)
O(1, x, y)
>>> O(1 + x*y, (x, 0), (y, 0))
O(1, x, y)
>>> O(1 + x*y, (x, oo), (y, oo))
O(x*y, (x, oo), (y, oo))
>>> O(1) in O(1, x)
True
>>> O(1, x) in O(1)
False
>>> O(x) in O(1, x)
True
>>> O(x**2) in O(x)
True
>>> O(x)*x
O(x**2)
>>> O(x) - O(x)
O(x)
>>> O(cos(x))
O(1)
>>> O(cos(x), (x, pi/2))
O(x - pi/2, (x, pi/2))

Notes

In O(f(x), x) the expression f(x) is assumed to have a leading term. O(f(x), x) is automatically transformed to O(f(x).as_leading_term(x),x).

O(expr*f(x), x) is O(f(x), x)

O(expr, x) is O(1)

O(0, x) is 0.

Multivariate O is also supported:

O(f(x, y), x, y) is transformed to O(f(x, y).as_leading_term(x,y).as_leading_term(y), x, y)

In the multivariate case, it is assumed the limits w.r.t. the various symbols commute.

If no symbols are passed then all symbols in the expression are used and the limit point is assumed to be zero.
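The rules above can be checked directly:

>>> from sympy import O
>>> from sympy.abc import x
>>> O(2*x)
O(x)
>>> O(0, x)
0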

References

[R835] Big O notation, https://en.wikipedia.org/wiki/Big_O_notation

contains(expr)[source]#

Return True if expr belongs to Order(self.expr, *self.variables). Return False if self belongs to expr. Return None if the inclusion relation cannot be determined (e.g. when self and expr have different symbols).
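For example (expected outputs shown):

>>> from sympy import O
>>> from sympy.abc import x
>>> O(x).contains(x**2)
True
>>> O(x**2).contains(x)
False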

Series Acceleration#

TODO

Reference#

sympy.series.acceleration.richardson(A, k, n, N)[source]#

Calculate an approximation for lim k->oo A(k) using Richardson extrapolation with the terms A(n), A(n+1), …, A(n+N+1). Choosing N ~= 2*n often gives good results.

Examples

A simple example is to calculate exp(1) using the limit definition. This limit converges slowly; n = 100 only produces two accurate digits:

>>> from sympy.abc import n
>>> e = (1 + 1/n)**n
>>> print(round(e.subs(n, 100).evalf(), 10))
2.7048138294

Richardson extrapolation with 11 appropriately chosen terms gives a value that is accurate to the indicated precision:

>>> from sympy import E
>>> from sympy.series.acceleration import richardson
>>> print(round(richardson(e, n, 10, 20).evalf(), 10))
2.7182818285
>>> print(round(E.evalf(), 10))
2.7182818285

Another useful application is to speed up convergence of series. Computing 100 terms of the zeta(2) series 1/k**2 yields only two accurate digits:

>>> from sympy.abc import k, n
>>> from sympy import Sum
>>> A = Sum(k**-2, (k, 1, n))
>>> print(round(A.subs(n, 100).evalf(), 10))
1.6349839002

Richardson extrapolation performs much better:

>>> from sympy import pi
>>> print(round(richardson(A, n, 10, 20).evalf(), 10))
1.6449340668
>>> print(round(((pi**2)/6).evalf(), 10))     # Exact value
1.6449340668

sympy.series.acceleration.shanks(A, k, n, m=1)[source]#

Calculate an approximation for lim k->oo A(k) using the n-term Shanks transformation S(A)(n). With m > 1, calculate the m-fold recursive Shanks transformation S(S(…S(A)…))(n).

The Shanks transformation is useful for summing Taylor series that converge slowly near a pole or singularity, e.g. for log(2):

>>> from sympy.abc import k, n
>>> from sympy import Sum, Integer
>>> from sympy.series.acceleration import shanks
>>> A = Sum(Integer(-1)**(k+1) / k, (k, 1, n))
>>> print(round(A.subs(n, 100).doit().evalf(), 10))
0.6881721793
>>> print(round(shanks(A, n, 25).evalf(), 10))
0.6931396564
>>> print(round(shanks(A, n, 25, 5).evalf(), 10))
0.6931471806

The correct value is 0.6931471805599453094172321215.

Residues#

TODO

Reference#

sympy.series.residues.residue(expr, x, x0)[source]#

Finds the residue of expr at the point x=x0.

The residue is defined as the coefficient of 1/(x-x0) in the power series expansion about x=x0.

Examples

>>> from sympy import Symbol, residue, sin
>>> x = Symbol("x")
>>> residue(1/x, x, 0)
1
>>> residue(1/x**2, x, 0)
0
>>> residue(2/sin(x), x, 0)
2

This function is essential for the Residue Theorem [1].

References