This page describes how the function `hyperexpand()` and related code
work. For usage, see the documentation of the simplify module.

This section describes the algorithm used to expand hypergeometric functions. Most of it is based on the papers [Roach1996] and [Roach1997].

Recall that the hypergeometric function is (initially) defined as

\[\begin{split}{}_pF_q\left.\left(\begin{matrix} a_1, \dots, a_p \\ b_1, \dots, b_q \end{matrix}
\right| z \right)
= \sum_{n=0}^\infty \frac{(a_1)_n \dots (a_p)_n}{(b_1)_n \dots (b_q)_n}
\frac{z^n}{n!}.\end{split}\]

It turns out that there are certain differential operators that can change the \(a_p\) and \(b_q\) parameters by integers. If a sequence of such operators is known that converts the set of indices \(a_r^0\) and \(b_s^0\) into \(a_p\) and \(b_q\), then we shall say the pair \(a_p, b_q\) is reachable from \(a_r^0, b_s^0\). Our general strategy is thus as follows: given a set \(a_p, b_q\) of parameters, try to look up an origin \(a_r^0, b_s^0\) for which we know an expression, and then apply the sequence of differential operators to the known expression to find an expression for the Hypergeometric function we are interested in.
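This strategy is what a call to `hyperexpand()` carries out. For example (the expected closed forms below appear in the table of known representations at the end of this page):

```python
from sympy import hyper, hyperexpand, symbols, exp, log, simplify

z = symbols('z')

# 0F0(;; z) is itself a known origin:
assert hyperexpand(hyper([], [], z)) == exp(z)

# 2F1(1, 1; 2 | z) expands to -log(1 - z)/z:
assert simplify(hyperexpand(hyper([1, 1], [2], z)) + log(1 - z)/z) == 0
```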

In the following, the symbol \(a\) will always denote a numerator parameter and the symbol \(b\) will always denote a denominator parameter. The subscripts \(p, q, r, s\) denote vectors of that length, so e.g. \(a_p\) denotes a vector of \(p\) numerator parameters. The subscripts \(i\) and \(j\) denote “running indices”, so they should usually be used in conjunction with a “for all \(i\)”. E.g. \(a_i < 4\) for all \(i\). Uppercase subscripts \(I\) and \(J\) denote a chosen, fixed index. So for example \(a_I > 0\) is true if the inequality holds for the one index \(I\) we are currently interested in.

Suppose \(a_i \ne 0\). Set \(A(a_i) = \frac{z}{a_i}\frac{\mathrm{d}}{dz}+1\). It is then easy to show that \(A(a_i) {}_p F_q\left.\left({a_p \atop b_q} \right| z \right) = {}_p F_q\left.\left({a_p + e_i \atop b_q} \right| z \right)\), where \(e_i\) is the i-th unit vector. Similarly for \(b_j \ne 1\) we set \(B(b_j) = \frac{z}{b_j-1} \frac{\mathrm{d}}{dz}+1\) and find \(B(b_j) {}_p F_q\left.\left({a_p \atop b_q} \right| z \right) = {}_p F_q\left.\left({a_p \atop b_q - e_j} \right| z \right)\). Thus we can increment upper and decrement lower indices at will, as long as we don’t go through zero. The \(A(a_i)\) and \(B(b_j)\) are called shift operators.
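The shift property can be checked term by term on a truncated series. In the following sketch, the helper `F` is an ad-hoc truncation of the defining series (not SymPy API); the identity holds exactly because \(A(a_1)\) acts on each term:

```python
from sympy import symbols, Rational, rf, factorial, diff, expand

z = symbols('z')
N = 8  # truncation order

def F(ap, bq):
    """Ad-hoc truncated hypergeometric series: sum over n < N."""
    s = 0
    for n in range(N):
        term = z**n / factorial(n)
        for a in ap:
            term *= rf(a, n)
        for b in bq:
            term /= rf(b, n)
        s += term
    return s

a1, a2, b1 = Rational(1, 3), Rational(1, 2), Rational(5, 4)
f = F([a1, a2], [b1])
shifted = (z/a1)*diff(f, z) + f          # A(a1) applied to f
assert expand(shifted - F([a1 + 1, a2], [b1])) == 0
```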

It is also easy to show that \(\frac{\mathrm{d}}{dz} {}_p F_q\left.\left({a_p \atop b_q} \right| z \right) = \frac{a_1 \dots a_p}{b_1 \dots b_q} {}_p F_q\left.\left({a_p + 1 \atop b_q + 1} \right| z \right)\), where \(a_p + 1\) is the vector \(a_1 + 1, a_2 + 1, \dots\) and similarly for \(b_q + 1\). Combining this with the shift operators, we arrive at one form of the Hypergeometric differential equation: \(\left[ \frac{\mathrm{d}}{dz} \prod_{j=1}^q B(b_j) - \frac{a_1 \dots a_p}{(b_1-1) \dots (b_q-1)} \prod_{i=1}^p A(a_i) \right] {}_p F_q\left.\left({a_p \atop b_q} \right| z \right) = 0\). This holds if all shift operators are defined, i.e. if no \(a_i = 0\) and no \(b_j = 1\). Clearing denominators and multiplying through by z we arrive at the following equation: \(\left[ z\frac{\mathrm{d}}{dz} \prod_{j=1}^q \left(z\frac{\mathrm{d}}{dz} + b_j-1 \right) - z \prod_{i=1}^p \left( z\frac{\mathrm{d}}{\mathrm{d}z} + a_i \right) \right] {}_p F_q\left.\left({a_p \atop b_q} \right| z\right) = 0\). Even though our derivation does not show it, it can be checked that this equation holds whenever the \({}_p F_q\) is defined.
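The second form of the differential equation can likewise be verified on a truncated series: applied to the partial sum, the operator telescopes and only the truncation term of order \(z^N\) survives. Again the helper `F` below is ad hoc, not SymPy API:

```python
from sympy import symbols, Rational, rf, factorial, diff, expand

z = symbols('z')
N = 7  # truncation order

def F(ap, bq):
    """Ad-hoc truncated hypergeometric series: sum over n < N."""
    s = 0
    for n in range(N):
        term = z**n / factorial(n)
        for a in ap:
            term *= rf(a, n)
        for b in bq:
            term /= rf(b, n)
        s += term
    return s

def theta(f):
    return z*diff(f, z)   # the Euler operator z d/dz

def apply_factors(cs, f):
    # apply prod_i (z d/dz + c_i) to f
    for c in cs:
        f = theta(f) + c*f
    return f

a1, a2, b1 = Rational(1, 2), Rational(2, 3), Rational(7, 5)
f = F([a1, a2], [b1])
residual = expand(theta(apply_factors([b1 - 1], f)) - z*apply_factors([a1, a2], f))
# everything cancels except the single term of order z**N
assert residual == residual.coeff(z, N)*z**N
```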

Notice that, under suitable conditions on \(a_I, b_J\), each of the operators \(A(a_i)\), \(B(b_j)\) and \(z\frac{\mathrm{d}}{\mathrm{d}z}\) can be expressed in terms of \(A(a_I)\) or \(B(b_J)\). Our next aim is to write the Hypergeometric differential equation as follows: \([X A(a_I) - r] {}_p F_q\left.\left({a_p \atop b_q} \right| z\right) = 0\), for some operator \(X\) and some constant \(r\) to be determined. If \(r \ne 0\), then we can write this as \(\frac{-1}{r} X {}_p F_q\left.\left({a_p + e_I \atop b_q} \right| z\right) = {}_p F_q\left.\left({a_p \atop b_q} \right| z\right)\), and so \(\frac{-1}{r}X\) undoes the shifting of \(A(a_I)\), whence it will be called an inverse-shift operator.

Now \(A(a_I)\) exists if \(a_I \ne 0\), and then \(z\frac{\mathrm{d}}{\mathrm{d}z} = a_I A(a_I) - a_I\). Observe also that all the operators \(A(a_i)\), \(B(b_j)\) and \(z\frac{\mathrm{d}}{\mathrm{d}z}\) commute. We have \(\prod_{i=1}^p \left( z\frac{\mathrm{d}}{\mathrm{d}z} + a_i \right) = \left(\prod_{i=1, i \ne I}^p \left( z\frac{\mathrm{d}}{\mathrm{d}z} + a_i \right)\right) a_I A(a_I)\), so this gives us the first half of \(X\). The other half does not have such a nice expression. We find \(z\frac{\mathrm{d}}{dz} \prod_{j=1}^q \left(z\frac{\mathrm{d}}{dz} + b_j-1 \right) = \left(a_I A(a_I) - a_I\right) \prod_{j=1}^q \left(a_I A(a_I) - a_I + b_j - 1\right)\). Since the first half had no constant term, we infer \(r = -a_I\prod_{j=1}^q(b_j - 1 -a_I)\).

This tells us under which conditions we can “un-shift” \(A(a_I)\), namely when \(a_I \ne 0\) and \(r \ne 0\). Substituting \(a_I - 1\) for \(a_I\) then tells us under what conditions we can decrement the index \(a_I\). Doing a similar analysis for \(B(b_J)\), we arrive at the following rules:

- An index \(a_I\) can be decremented if \(a_I \ne 1\) and \(a_I \ne b_j\) for all \(b_j\).
- An index \(b_J\) can be incremented if \(b_J \ne -1\) and \(b_J \ne a_i\) for all \(a_i\).

Combined with the conditions (stated above) for the existence of shift operators, we have thus established the rules of the game!

Notice that, quite trivially, if \(a_I = b_J\), we have \({}_p F_q\left.\left({a_p \atop b_q} \right| z \right) = {}_{p-1} F_{q-1}\left.\left({a_p^* \atop b_q^*} \right| z \right)\), where \(a_p^*\) means \(a_p\) with \(a_I\) omitted, and similarly for \(b_q^*\). We call this reduction of order.
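`hyperexpand()` performs this reduction of order automatically; for example, a matching pair of parameters cancels and leaves \({}_1F_0(1;;z) = \frac{1}{1-z}\):

```python
from sympy import hyper, hyperexpand, symbols, simplify

z = symbols('z')
# a_I = b_J = 2 cancels, leaving 1F0(1;; z) = 1/(1 - z)
assert simplify(hyperexpand(hyper([1, 2], [2], z)) - 1/(1 - z)) == 0
```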

In fact, we can do even better. If \(a_I - b_J \in \mathbb{Z}_{>0}\), then it is easy to see that \(\frac{(a_I)_n}{(b_J)_n}\) is actually a polynomial in \(n\). It is also easy to see that \((z\frac{\mathrm{d}}{\mathrm{d}z})^k z^n = n^k z^n\). Combining these two remarks we find:

If \(a_I - b_J \in \mathbb{Z}_{>0}\), then there exists a polynomial \(p(n) = p_0 + p_1 n + \dots\) (of degree \(a_I - b_J\)) such that \(\frac{(a_I)_n}{(b_J)_n} = p(n)\) and \({}_p F_q\left.\left({a_p \atop b_q} \right| z \right) = \left(p_0 + p_1 z\frac{\mathrm{d}}{\mathrm{d}z} + p_2 \left(z\frac{\mathrm{d}}{\mathrm{d}z}\right)^2 + \dots \right) {}_{p-1} F_{q-1}\left.\left({a_p^* \atop b_q^*} \right| z \right)\).
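For instance, with \(a_I = b_J + 2\) one finds \(\frac{(a_I)_n}{(b_J)_n} = \frac{(b_J+n)(b_J+n+1)}{b_J(b_J+1)}\), a polynomial of degree 2 in \(n\). A quick exact check at a sample value of \(b_J\):

```python
from sympy import Rational, rf

b = Rational(3, 7)  # sample b_J; take a_I = b + 2
for n in range(6):
    p = (b + n)*(b + n + 1)/(b*(b + 1))   # candidate polynomial p(n)
    assert rf(b + 2, n) / rf(b, n) == p
```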

Thus any set of parameters \(a_p, b_q\) is reachable from a set of parameters \(c_r, d_s\) where \(c_i - d_j \in \mathbb{Z}\) implies \(c_i < d_j\). Such a set of parameters \(c_r, d_s\) is called suitable. Our database of known formulae should only contain suitable origins. The reasons are twofold: firstly, working from suitable origins is easier, and secondly, a formula for a non-suitable origin can be deduced from a lower order formula, and we should put this one into the database instead.

It remains to investigate the following question: suppose \(a_p, b_q\) and \(a_p^0, b_q^0\) are both suitable, and also \(a_i - a_i^0 \in \mathbb{Z}\), \(b_j - b_j^0 \in \mathbb{Z}\). When is \(a_p, b_q\) reachable from \(a_p^0, b_q^0\)? It is clear that we can treat all parameters independently that are incongruent mod 1. So assume that \(a_i\) and \(b_j\) are congruent to \(r\) mod 1, for all \(i\) and \(j\). The same then follows for \(a_i^0\) and \(b_j^0\).

If \(r \ne 0\), then any such \(a_p, b_q\) is reachable from any \(a_p^0, b_q^0\). To see this notice that there exist constants \(c, c^0\), congruent mod 1, such that \(a_i < c < b_j\) for all \(i\) and \(j\), and similarly \(a_i^0 < c^0 < b_j^0\). If \(n = c - c^0 > 0\) then we first inverse-shift all the \(b_j^0\) \(n\) times up, and then similarly shift up all the \(a_i^0\) \(n\) times. If \(n < 0\) then we first inverse-shift down the \(a_i^0\) and then shift down the \(b_j^0\). This reduces to the case \(c = c^0\). But evidently we can now shift or inverse-shift around the \(a_i^0\) arbitrarily so long as we keep them less than \(c\), and similarly for the \(b_j^0\) so long as we keep them bigger than \(c\). Thus \(a_p, b_q\) is reachable from \(a_p^0, b_q^0\).

If \(r = 0\) then the problem is slightly more involved. WLOG no parameter is zero. We now have one additional complication: no parameter can ever move through zero. Hence \(a_p, b_q\) is reachable from \(a_p^0, b_q^0\) if and only if the number of \(a_i < 0\) equals the number of \(a_i^0 < 0\), and similarly for the \(b_j\) and \(b_j^0\). But in a suitable set of parameters, all \(b_j > 0\)! This is because the Hypergeometric function is undefined if one of the \(b_j\) is a non-positive integer and all \(a_i\) are smaller than the \(b_j\). Hence the number of \(b_j \le 0\) is always zero.

We can thus associate to every suitable set of parameters \(a_p, b_q\), where no \(a_i = 0\), the following invariants:

- For every \(r \in [0, 1)\) the number \(\alpha_r\) of parameters \(a_i \equiv r \pmod{1}\), and similarly the number \(\beta_r\) of parameters \(b_i \equiv r \pmod{1}\).
- The number \(\gamma\) of integers \(a_i\) with \(a_i < 0\).

The above reasoning shows that \(a_p, b_q\) is reachable from \(a_p^0, b_q^0\) if and only if the invariants \(\alpha_r, \beta_r, \gamma\) all agree. Thus in particular “being reachable from” is a symmetric relation on suitable parameters without zeros.
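The invariants are straightforward to compute. The helper below is hypothetical, written only to illustrate the definition (SymPy's internal representation differs):

```python
from collections import Counter
from sympy import Rational, Integer

def invariants(ap, bq):
    """Hypothetical helper: the invariants (alpha_r, beta_r, gamma) of a
    suitable parameter set with no zero numerator parameters."""
    alpha = Counter(a % 1 for a in ap)                    # alpha_r, keyed by r
    beta = Counter(b % 1 for b in bq)                     # beta_r, keyed by r
    gamma = sum(1 for a in ap if a.is_integer and a < 0)  # negative integer a_i
    return alpha, beta, gamma

# Two suitable sets sharing all invariants, hence mutually reachable:
s1 = invariants([Rational(1, 2), Rational(-3, 2)], [Integer(3), Rational(5, 2)])
s2 = invariants([Rational(1, 2), Rational(-1, 2)], [Integer(1), Rational(3, 2)])
assert s1 == s2
```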

If all goes well then for a given set of parameters we find an origin in our database for which we have a nice formula. We now have to apply (potentially) many differential operators to it. If we do this blindly then the result will be very messy. This is because with Hypergeometric type functions, the derivative is usually expressed as a sum of two contiguous functions. Hence if we compute \(N\) derivatives, then the answer will involve \(2^N\) contiguous functions! This is clearly undesirable. In fact we know from the Hypergeometric differential equation that we need at most \(\max(p, q+1)\) contiguous functions to express all derivatives.

Hence instead of differentiating blindly, we will work with a \(\mathbb{C}(z)\)-module basis: for an origin \(a_r^0, b_s^0\) we either store (for particularly pretty answers) or compute a set of \(N\) functions (typically \(N = \max(r, s+1)\)) with the property that the derivative of any of them is a \(\mathbb{C}(z)\)-linear combination of them. In formulae, we store a vector \(B\) of \(N\) functions, a matrix \(M\) and a vector \(C\) (the latter two with entries in \(\mathbb{C}(z)\)), with the following properties:

- \({}_r F_s\left.\left({a_r^0 \atop b_s^0} \right| z \right) = C B\)
- \(z\frac{\mathrm{d}}{\mathrm{d}z} B = M B\).

Then we can compute as many derivatives as we want and we will always end up with \(\mathbb{C}(z)\)-linear combination of at most \(N\) special functions.
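For instance, the origin \({}_0F_1(; \tfrac{1}{2}; z) = \cosh(2\sqrt{z})\) (from the table below) admits a two-element basis. The particular \(B\), \(C\), \(M\) shown here are derived by hand for illustration and are not necessarily the ones stored in SymPy, but the two defining properties are easy to verify:

```python
from sympy import symbols, Matrix, Rational, sqrt, sinh, cosh, simplify

z = symbols('z', positive=True)

# A possible basis for the origin 0F1(; 1/2; z) = cosh(2*sqrt(z))
B = Matrix([cosh(2*sqrt(z)), sqrt(z)*sinh(2*sqrt(z))])
C = Matrix([[1, 0]])                       # the function itself is C*B
M = Matrix([[0, 1], [z, Rational(1, 2)]])  # z d/dz B = M*B

assert simplify((C*B)[0] - cosh(2*sqrt(z))) == 0
assert (z*B.diff(z) - M*B).applyfunc(simplify) == Matrix([0, 0])
```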

As hinted above, \(B\), \(M\) and \(C\) can either all be stored (for particularly pretty answers) or computed from a single \({}_p F_q\) formula.

This describes the bulk of the hypergeometric function algorithm. There are a few further tricks, described in the hyperexpand.py source file. The extension to Meijer G-functions is also described there.

Slater’s theorem essentially evaluates a \(G\)-function as a sum of residues. If all poles are simple, the resulting series can be recognised as hypergeometric series. Thus a \(G\)-function can be evaluated as a sum of Hypergeometric functions.
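`hyperexpand()` also accepts Meijer G-functions. A case where all poles are simple, so Slater's theorem applies directly:

```python
from sympy import meijerg, hyperexpand, exp, symbols, simplify

z = symbols('z', positive=True)
# G^{1,0}_{0,1}(z | -; 0) is a single residue series summing to exp(-z)
assert simplify(hyperexpand(meijerg([], [], [0], [], z)) - exp(-z)) == 0
```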

If the poles are not simple, the resulting series are not hypergeometric. This is known as the “confluent” or “logarithmic” case (the latter because the resulting series tend to contain logarithms). The answer depends in a complicated way on the multiplicities of various poles, and there is no accepted notation for representing it (as far as I know). However if there are only finitely many multiple poles, we can evaluate the \(G\) function as a sum of hypergeometric functions, plus finitely many extra terms. I could not find any good reference for this, which is why I work it out here.

Recall the general setup. We define

\[G(z) = \frac{1}{2\pi i} \int_L \frac{\prod_{j=1}^m \Gamma(b_j - s)
\prod_{j=1}^n \Gamma(1 - a_j + s)}{\prod_{j=m+1}^q \Gamma(1 - b_j + s)
\prod_{j=n+1}^p \Gamma(a_j - s)} z^s \mathrm{d}s,\]

where \(L\) is a contour starting and ending at \(+\infty\), enclosing all of the poles of \(\Gamma(b_j - s)\) for \(j = 1, \dots, m\) once in the negative direction, and no other poles. Also the integral is assumed absolutely convergent.

In what follows, for any complex numbers \(a, b\), we write \(a \equiv b \pmod{1}\) if and only if there exists an integer \(k\) such that \(a - b = k\). Thus there are double poles iff \(b_i \equiv b_j \pmod{1}\) for some \(i \ne j \le m\).

We now assume that whenever \(b_i \equiv a_j \pmod{1}\) for \(i \le m\), \(j > n\), then \(b_i < a_j\). This means that no quotient of the relevant gamma functions is a polynomial, and it can always be achieved by “reduction of order”. Fix a complex number \(c\) such that \(\{b_i | b_i \equiv c \pmod{1}, i \le m\}\) is not empty. Enumerate this set as \(b, b+k_1, \dots, b+k_u\), with \(k_i\) non-negative integers. Enumerate similarly \(\{a_j | a_j \equiv c \pmod{1}, j > n\}\) as \(b + l_1, \dots, b + l_v\). Then \(l_i > k_j\) for all \(i, j\). For finite confluence, we need to assume \(v \ge u\) for all such \(c\).

Let \(c_1, \dots, c_w\) be distinct \(\pmod{1}\) and exhaust the congruence classes of the \(b_i\). I claim

\[G(z) = -\sum_{j=1}^w (F_j(z) + R_j(z)),\]

where \(F_j(z)\) is a hypergeometric function and \(R_j(z)\) is a finite sum, both to be specified later. Indeed, corresponding to every \(c_j\) there is a sequence of poles, at most finitely many of which are multiple. This is where the \(j\)-th term comes from.

Hence fix again \(c\), enumerate the relevant \(b_i\) as \(b, b + k_1, \dots, b + k_u\). We will look at the \(a_j\) corresponding to \(b + l_1, \dots, b + l_u\). The other \(a_i\) are not treated specially. The corresponding gamma functions have poles at (potentially) \(s = b + r\) for \(r = 0, 1, \dots\). For \(r \ge l_u\), the pole of the integrand is simple. We thus set

\[R(z) = \sum_{r=0}^{l_u - 1} res_{s = r + b}.\]

We finally need to investigate the other poles. Set \(r = l_u + t\), \(t \ge 0\). A computation shows

\[\frac{\Gamma(k_i - l_u - t)}{\Gamma(l_i - l_u - t)}
= \frac{1}{(k_i - l_u - t)_{l_i - k_i}}
= \frac{(-1)^{\delta_i}}{(l_u - l_i + 1)_{\delta_i}}
\frac{(l_u - l_i + 1)_t}{(l_u - k_i + 1)_t},\]

where \(\delta_i = l_i - k_i\).

Also

\[\Gamma(b_j - l_u - b - t) =
\frac{\Gamma(b_j - l_u - b)}{(-1)^t(l_u + b + 1 - b_j)_t},\]

\[\Gamma(1 - a_j + l_u + b + t) =
\Gamma(1 - a_j + l_u + b) (1 - a_j + l_u + b)_t\]

and

\[res_{s = b + l_u + t} \Gamma(b - s) = -\frac{(-1)^{l_u + t}}{(l_u + t)!}
= -\frac{(-1)^{l_u}}{l_u!} \frac{(-1)^t}{(l_u+1)_t}.\]

Hence

\[\begin{split}res_{s = b + l_u + t} =& -z^{b + l_u}
\frac{(-1)^{l_u}}{l_u!}
\prod_{i=1}^{u} \frac{(-1)^{\delta_i}}{(l_u - k_i + 1)_{\delta_i}}
\frac{\prod_{j=1}^n \Gamma(1 - a_j + l_u + b)
\prod_{j=1}^m \Gamma(b_j - l_u - b)^*}
{\prod_{j=n+1}^p \Gamma(a_j - l_u - b)^* \prod_{j=m+1}^q
\Gamma(1 - b_j + l_u + b)}
\\ &\times
z^t
\frac{(-1)^t}{(l_u+1)_t}
\prod_{i=1}^{u} \frac{(l_u - l_i + 1)_t}{(l_u - k_i + 1)_t}
\frac{\prod_{j=1}^n (1 - a_j + l_u + b)_t
\prod_{j=n+1}^p (-1)^t (l_u + b + 1 - a_j)_t^*}
{\prod_{j=1}^m (-1)^t (l_u + b + 1 - b_j)_t^*
\prod_{j=m+1}^q (1 - b_j + l_u + b)_t},\end{split}\]

where the \(*\) means to omit the terms we treated specially.

We thus arrive at

\[\begin{split}F(z) = C \times {}_{p+1}F_{q}\left.\left(
\begin{matrix} 1, (1 + l_u - l_i), (1 + l_u + b - a_i)^* \\
1 + l_u, (1 + l_u - k_i), (1 + l_u + b - b_i)^*
\end{matrix} \right| (-1)^{p-m-n} z\right),\end{split}\]

where \(C\) designates the factor in the residue independent of \(t\). (This result can also be written in slightly simpler form by converting all the \(l_u\) etc back to \(a_* - b_*\), but doing so is going to require more notation still and is not helpful for computation.)

Adding new formulae to the tables is straightforward. At the top of the file
`sympy/simplify/hyperexpand.py`, there is a function called
`add_formulae()`. Nested in it are defined two helpers,
`add(ap, bq, res)` and `addb(ap, bq, B, C, M)`, as well as dummies
`a`, `b`, `c`, and `z`.

The first step in adding a new formula is to use `add(ap, bq, res)`. This
declares `hyper(ap, bq, z) == res`. Here `ap` and `bq` may use the
dummies `a`, `b`, and `c` as free symbols. For example the well-known formula
\(\sum_0^\infty \frac{(-a)_n z^n}{n!} = (1-z)^a\) is declared by the following
line: `add((-a, ), (), (1-z)**a)`.
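Once such a formula is in the table, it (and everything reachable from it) becomes available through `hyperexpand()`:

```python
from sympy import hyper, hyperexpand, symbols, simplify

a, z = symbols('a z')
# the declared origin 1F0(-a;; z) = (1 - z)**a
assert simplify(hyperexpand(hyper([-a], [], z)) - (1 - z)**a) == 0
```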

From the information provided, the vectors \(B\) and \(C\) and the matrix \(M\) will be computed,
and the formula is now available when expanding hypergeometric functions.
Next the test file `sympy/simplify/tests/test_hyperexpand.py` should be run,
in particular the test `test_formulae()`. This will test the newly added
formula numerically. If it fails, there is (presumably) a typo in what was
entered.

Since all newly-added formulae are probably relatively complicated, chances
are that the automatically computed basis is rather suboptimal (there is no
good way of testing this, other than observing very messy output). In this
case \(B\), \(C\) and \(M\) should be computed by hand. Then the helper
`addb` can be used to declare a hypergeometric formula with hand-computed
basis.
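Before declaring a hand-computed basis with `addb`, it is worth verifying the two defining properties. As an illustration (this particular basis is derived here for exposition and is not necessarily the one stored in SymPy), the origin \({}_2F_1(1,1;2|z) = -\log(1-z)/z\) admits the basis \(B = (1, -\log(1-z)/z)\):

```python
from sympy import symbols, Matrix, log, simplify

z = symbols('z')

B = Matrix([1, -log(1 - z)/z])
C = Matrix([[0, 1]])                   # the formula is F = C*B
M = Matrix([[0, 0], [1/(1 - z), -1]])  # z d/dz B = M*B

assert simplify((C*B)[0] + log(1 - z)/z) == 0
assert (z*B.diff(z) - M*B).applyfunc(simplify) == Matrix([0, 0])
```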

A vital part of the algorithm is a relatively large table of hypergeometric function representations. The following automatically generated list contains all the representations implemented in SymPy (of course many more are derived from them). These formulae are mostly taken from [Luke1969] and [Prudnikov1990]. They are all tested numerically.

\[\begin{split}{{}_{0}F_{0}\left.\left(\begin{matrix} \\ \end{matrix}\right| {z} \right)} = e^{z}\end{split}\]

\[\begin{split}{{}_{1}F_{0}\left.\left(\begin{matrix} - a \\ \end{matrix}\right| {z} \right)} = \left(- z + 1\right)^{a}\end{split}\]

\[\begin{split}{{}_{2}F_{1}\left.\left(\begin{matrix} a, a - \frac{1}{2} \\ 2 a \end{matrix}\right| {z} \right)} = 2^{2 a -1} \left(\sqrt{- z + 1} + 1\right)^{- 2 a + 1}\end{split}\]

\[\begin{split}{{}_{2}F_{1}\left.\left(\begin{matrix} 1, 1 \\ 2 \end{matrix}\right| {z} \right)} = - \frac{\operatorname{log}\left(- z + 1\right)}{z}\end{split}\]

\[\begin{split}{{}_{2}F_{1}\left.\left(\begin{matrix} \frac{1}{2}, 1 \\ \frac{3}{2} \end{matrix}\right| {z} \right)} = \frac{\operatorname{log}\left(\frac{\sqrt{z} + 1}{- \sqrt{z} + 1}\right)}{2 \sqrt{z}}\end{split}\]

\[\begin{split}{{}_{2}F_{1}\left.\left(\begin{matrix} \frac{1}{2}, \frac{1}{2} \\ \frac{3}{2} \end{matrix}\right| {z} \right)} = \frac{\operatorname{asin}\left(\sqrt{z}\right)}{\sqrt{z}}\end{split}\]

\[\begin{split}{{}_{2}F_{1}\left.\left(\begin{matrix} - a, - a + \frac{1}{2} \\ \frac{1}{2} \end{matrix}\right| {z} \right)} = \frac{1}{2} \left(- \sqrt{z} + 1\right)^{2 a} + \frac{1}{2} \left(\sqrt{z} + 1\right)^{2 a}\end{split}\]

\[\begin{split}{{}_{2}F_{1}\left.\left(\begin{matrix} a, - a \\ \frac{1}{2} \end{matrix}\right| {z} \right)} = \operatorname{cos}\left(2 a \operatorname{asin}\left(\sqrt{z}\right)\right)\end{split}\]

\[\begin{split}{{}_{2}F_{1}\left.\left(\begin{matrix} 1, 1 \\ \frac{3}{2} \end{matrix}\right| {z} \right)} = \frac{\operatorname{asin}\left(\sqrt{z}\right)}{\sqrt{z \left(- z + 1\right)}}\end{split}\]

\[\begin{split}{{}_{3}F_{2}\left.\left(\begin{matrix} - \frac{1}{2}, 1, 1 \\ \frac{1}{2}, 2 \end{matrix}\right| {z} \right)} = - \frac{2}{3} \sqrt{z} \operatorname{atanh}\left(\sqrt{z}\right) + \frac{2}{3} - \frac{\operatorname{log}\left(- z + 1\right)}{3 z}\end{split}\]

\[\begin{split}{{}_{3}F_{2}\left.\left(\begin{matrix} - \frac{1}{2}, 1, 1 \\ 2, 2 \end{matrix}\right| {z} \right)} = \left(\frac{4}{9} - \frac{16}{9 z}\right) \sqrt{- z + 1} + \frac{4}{3} \frac{\operatorname{log}\left(\frac{1}{2} \sqrt{- z + 1} + \frac{1}{2}\right)}{z} + \frac{16}{9 z}\end{split}\]

\[\begin{split}{{}_{1}F_{1}\left.\left(\begin{matrix} 1 \\ b \end{matrix}\right| {z} \right)} = z^{- b + 1} \left(b -1\right) e^{z} \operatorname{\gamma}\left(b -1, z\right)\end{split}\]

\[\begin{split}{{}_{1}F_{1}\left.\left(\begin{matrix} a \\ 2 a \end{matrix}\right| {z} \right)} = 4^{a - \frac{1}{2}} z^{- a + \frac{1}{2}} e^{\frac{1}{2} z} I_{a - \frac{1}{2}}\left(\frac{1}{2} z\right) \operatorname{\Gamma}\left(a + \frac{1}{2}\right)\end{split}\]

\[\begin{split}{{}_{1}F_{1}\left.\left(\begin{matrix} - \frac{1}{2} \\ \frac{1}{2} \end{matrix}\right| {z} \right)} = \sqrt{z} \mathbf{\imath} \sqrt{\pi} \operatorname{erf}\left(\sqrt{z} \mathbf{\imath}\right) + e^{z}\end{split}\]

\[\begin{split}{{}_{2}F_{2}\left.\left(\begin{matrix} \frac{1}{2}, a \\ \frac{3}{2}, a + 1 \end{matrix}\right| {z} \right)} = - \frac{a \mathbf{\imath} \sqrt{\pi} \sqrt{\frac{1}{z}} \operatorname{erf}\left(\sqrt{z} \mathbf{\imath}\right)}{2 a -1} - \frac{a \operatorname{\gamma}\left(a, - z\right)}{\left(- z\right)^{a} \left(2 a -1\right)}\end{split}\]

\[\begin{split}{{}_{0}F_{1}\left.\left(\begin{matrix} \\ \frac{1}{2} \end{matrix}\right| {z} \right)} = \operatorname{cosh}\left(2 \sqrt{z}\right)\end{split}\]

\[\begin{split}{{}_{0}F_{1}\left.\left(\begin{matrix} \\ b \end{matrix}\right| {z} \right)} = z^{- \frac{1}{2} b + \frac{1}{2}} I_{b -1}\left(2 \sqrt{z}\right) \operatorname{\Gamma}\left(b\right)\end{split}\]

\[\begin{split}{{}_{0}F_{3}\left.\left(\begin{matrix} \\ \frac{1}{2}, a, a + \frac{1}{2} \end{matrix}\right| {z} \right)} = \frac{z^{- \frac{1}{2} a + \frac{1}{4}} \left(I_{2 a -1}\left(4 \sqrt[4]{z}\right) + J_{2 a -1}\left(4 \sqrt[4]{z}\right)\right) \operatorname{\Gamma}\left(2 a\right)}{2^{2 a}}\end{split}\]

\[\begin{split}{{}_{0}F_{3}\left.\left(\begin{matrix} \\ a, a + \frac{1}{2}, 2 a \end{matrix}\right| {z} \right)} = \left(2 \sqrt{- z}\right)^{- 2 a + 1} I_{2 a -1}\left(2 \sqrt{2} \sqrt[4]{- z}\right) J_{2 a -1}\left(2 \sqrt{2} \sqrt[4]{- z}\right) \operatorname{\Gamma}^{2}\left(2 a\right)\end{split}\]

\[\begin{split}{{}_{1}F_{2}\left.\left(\begin{matrix} a \\ a - \frac{1}{2}, 2 a \end{matrix}\right| {z} \right)} = 2 \times 4^{a -1} z^{- a + 1} I_{a - \frac{3}{2}}\left(\sqrt{z}\right) I_{a - \frac{1}{2}}\left(\sqrt{z}\right) \operatorname{\Gamma}\left(a - \frac{1}{2}\right) \operatorname{\Gamma}\left(a + \frac{1}{2}\right) - 4^{a - \frac{1}{2}} z^{- a + \frac{1}{2}} I^{2}_{a - \frac{1}{2}}\left(\sqrt{z}\right) \operatorname{\Gamma}^{2}\left(a + \frac{1}{2}\right)\end{split}\]

\[\begin{split}{{}_{1}F_{2}\left.\left(\begin{matrix} \frac{1}{2} \\ b, - b + 2 \end{matrix}\right| {z} \right)} = \frac{\pi \left(- b + 1\right) I_{- b + 1}\left(\sqrt{z}\right) I_{b -1}\left(\sqrt{z}\right)}{\operatorname{sin}\left(b \pi\right)}\end{split}\]

\[\begin{split}{{}_{2}F_{3}\left.\left(\begin{matrix} a, a + \frac{1}{2} \\ 2 a, b, 2 a - b + 1 \end{matrix}\right| {z} \right)} = \left(\frac{1}{2} \sqrt{z}\right)^{- 2 a + 1} I_{2 a - b}\left(\sqrt{z}\right) I_{b -1}\left(\sqrt{z}\right) \operatorname{\Gamma}\left(b\right) \operatorname{\Gamma}\left(2 a - b + 1\right)\end{split}\]

[Roach1996] Kelly B. Roach. Hypergeometric Function Representations. In: Proceedings of the 1996 International Symposium on Symbolic and Algebraic Computation, pages 301-308, New York, 1996. ACM.

[Roach1997] Kelly B. Roach. Meijer G Function Representations. In: Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation, pages 205-211, New York, 1997. ACM.

[Luke1969] Luke, Y. L. (1969), The Special Functions and Their Approximations, Volume 1.

[Prudnikov1990] A. P. Prudnikov, Yu. A. Brychkov and O. I. Marichev (1990). Integrals and Series: More Special Functions, Vol. 3, Gordon and Breach Science Publisher.