If a function always has a given assumption regardless of its inputs, you can
define `is_*assumption*` directly on the class.
For example, our [example `divides`
function](custom-functions-divides-definition) is always an integer, because
its value is always either 0 or 1:
```{sidebar} Note
From here on out in this guide, in the interest of space, we will omit the
previous method definitions in the examples unless they are needed for the
given example to work. There are [complete
examples](custom-functions-complete-examples) at the end of this guide with
all the methods.
```
```py
>>> class divides(Function):
...     is_integer = True
...     is_negative = False
```
```py
>>> divides(m, n).is_integer
True
>>> divides(m, n).is_nonnegative
True
```
In general, however, the assumptions of a function depend on the assumptions
of its inputs. In this case, you should define an `_eval_is_*assumption*`
method.
For our [$\operatorname{versin}(x)$
example](custom-functions-versine-definition), the function is always in $[0,
2]$ when $x$ is real, and it is 0 exactly when $x$ is an even multiple of
$\pi$. So `versin(x)` should be *nonnegative* whenever `x` is *real* and
*positive* whenever `x` is *real* and not an *even* multiple of π. Remember
that by default, a function's domain is all of $\mathbb{C}$, and indeed
`versin(x)` makes perfect sense with non-real `x`.
To see if `x` is an even multiple of `pi`, we can use {meth}`~.as_independent`
to match `x` structurally as `coeff*pi`. Pulling apart subexpressions
structurally like this in assumptions handlers is preferable to using
something like `(x/pi).is_even`, because that will create a new expression
`x/pi`. The creation of a new expression is much slower. Furthermore, whenever
an expression is created, the constructors that are called when creating the
expression will often themselves cause assumptions to be queried. If you are
not careful, this can lead to infinite recursion. So a good general rule for
assumptions handlers is, **never create a new expression in an assumptions
handler**. Always pull apart the args of the function using structural methods
like `as_independent`.
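To illustrate how this structural splitting works (using hypothetical symbols `n` and `y` for the example), `as_independent` separates a `Mul` into its pi-free and pi-dependent factors without constructing any new quotient expressions:

```python
from sympy import pi, symbols

n = symbols('n', integer=True)
x = 2*n*pi

# Split x as coeff*pi without creating a new expression like x/pi.
coeff, pi_part = x.as_independent(pi, as_Add=False)
print(coeff, pi_part)    # 2*n pi

# A plain symbol has no pi factor, so the pi-dependent part is 1.
y = symbols('y')
print(y.as_independent(pi, as_Add=False))    # (y, 1)
```

Because the second element is `pi` only when `x` is structurally of the form `coeff*pi`, checking it against `pi` tells us whether the split succeeded.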
Note that $\operatorname{versin}(x)$ can be
nonnegative for nonreal $x$, for example:
```py
>>> from sympy import I
>>> 1 - cos(pi + I*pi)
1 + cosh(pi)
>>> (1 - cos(pi + I*pi)).evalf()
12.5919532755215
```
So for the `_eval_is_nonnegative` handler, we want to return `True` if
`x.is_real` is `True` but `None` if `x.is_real` is either `False` or `None`.
It is left as an exercise for the reader to handle the cases of nonreal `x`
that make `versin(x)` nonnegative, using logic similar to that in the
`_eval_is_positive` handler.
In the assumptions handler methods, as in all methods, we can access the
arguments of the function using `self.args`.
```py
>>> from sympy.core.logic import fuzzy_and, fuzzy_not
>>> class versin(Function):
...     def _eval_is_nonnegative(self):
...         # versin(x) is nonnegative if x is real
...         x = self.args[0]
...         if x.is_real is True:
...             return True
...
...     def _eval_is_positive(self):
...         # versin(x) is positive if x is real and not an even multiple of pi
...         x = self.args[0]
...
...         # x.as_independent(pi, as_Add=False) will split x as a Mul of the
...         # form coeff*pi
...         coeff, pi_ = x.as_independent(pi, as_Add=False)
...         # If pi_ == pi, x = coeff*pi. Otherwise x is not (structurally) of
...         # the form coeff*pi.
...         if pi_ == pi:
...             return fuzzy_and([x.is_real, fuzzy_not(coeff.is_even)])
...         elif x.is_real is False:
...             return False
...         # else: return None. We do not know for sure whether x is an even
...         # multiple of pi
```
```py
>>> versin(1).is_nonnegative
True
>>> versin(2*pi).is_positive
False
>>> versin(3*pi).is_positive
True
```
Note the use of `fuzzy_` functions in the more complicated
`_eval_is_positive()` handler, and the careful handling of the `if`/`elif`. It
is important when working with assumptions to always be careful about
[handling three-valued logic correctly](booleans-guide). This ensures that the
method returns the correct answer when `x.is_real` or `coeff.is_even` are
`None`.
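For illustration, here is how the fuzzy functions treat `None` ("unknown"); this is exactly the behavior the handlers above rely on:

```python
from sympy.core.logic import fuzzy_and, fuzzy_not

# None means "unknown"; it must not be conflated with False.
print(fuzzy_and([True, None]))    # None: the result could still be either
print(fuzzy_and([False, None]))   # False: one definite False decides it
print(fuzzy_not(None))            # None: the negation of unknown is unknown
print(fuzzy_not(False))           # True
```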
```{warning}
Never define `is_*assumption*` as a `@property` method. Doing so will break
the automatic deduction of other assumptions. `is_*assumption*` should only
ever be defined as a class variable equal to `True` or `False`. If the
assumption depends on the `.args` of the function somehow, define the
`_eval_is_*assumption*` method instead.
```
In this example, it is not necessary to define `_eval_is_real()` because it is
deduced automatically from the other assumptions, since `nonnegative -> real`.
In general, you should avoid defining assumptions that the assumptions system
can deduce automatically given its [known
facts](assumptions-guide-predicates).
```py
>>> versin(1).is_real
True
```
The assumptions system is often able to deduce more than you might think.
For example, from the above, it can deduce that `versin(2*n*pi)` is zero when
`n` is an integer.
```py
>>> n = symbols('n', integer=True)
>>> versin(2*n*pi).is_zero
True
```
It's always worth checking if the assumptions system can deduce something
automatically before manually coding it.
Finally, a word of warning: be very careful about correctness when coding
assumptions. Make sure to use the exact
[definitions](assumptions-guide-predicates) of the various assumptions, and
always check that you're handling `None` cases correctly with the fuzzy
three-valued logic functions. Incorrect or inconsistent assumptions can lead
to subtle bugs. It's recommended to use unit tests to check all the various
cases whenever your function has a nontrivial assumption handler. All
functions defined in SymPy itself are required to be extensively tested.
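As a sketch, such a test could exercise each branch of the three-valued logic. The test name and cases below are illustrative (not from SymPy's test suite), and the `versin` class from above is repeated so the example is self-contained:

```python
from sympy import Function, pi, symbols
from sympy.core.logic import fuzzy_and, fuzzy_not

class versin(Function):
    def _eval_is_nonnegative(self):
        x = self.args[0]
        if x.is_real is True:
            return True

    def _eval_is_positive(self):
        x = self.args[0]
        coeff, pi_ = x.as_independent(pi, as_Add=False)
        if pi_ == pi:
            return fuzzy_and([x.is_real, fuzzy_not(coeff.is_even)])
        elif x.is_real is False:
            return False

def test_versin_assumptions():
    x = symbols('x')                 # nothing known about x
    r = symbols('r', real=True)
    n = symbols('n', integer=True)
    assert versin(x).is_nonnegative is None    # unknown stays unknown
    assert versin(r).is_nonnegative is True
    assert versin(2*n*pi).is_positive is False
    assert versin((2*n + 1)*pi).is_positive is True

test_versin_assumptions()
```

Checking the `None` cases explicitly (with `is None`, not just falsiness) catches the most common three-valued logic mistakes.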
(custom-functions-evalf)=
### Numerical Evaluation with `evalf()`
This section describes how to define numerical evaluation of a function to a
floating-point {class}`~.Float` value, for instance via `evalf()`.
Implementing numerical evaluation enables several behaviors in SymPy. For
example, once `evalf()` is defined, you can plot your function, and things
like inequalities can evaluate to explicit values.
If your function has the same name as a function in
[mpmath](https://mpmath.org/doc/current/), which is the case for most
functions included with SymPy, numerical evaluation will happen automatically
and you do not need to do anything.
If this is not the case, numerical evaluation can be specified by defining the
method `_eval_evalf(self, prec)`, where `prec` is the binary precision of the
input. The method should return the expression evaluated to the given
precision, or `None` if this is not possible.
```{note}
The `prec` argument to `_eval_evalf()` is the *binary* precision, that is, the
number of bits in the floating-point representation. This differs from the
first argument to the `evalf()` method, which is the *decimal* precision, or
`dps`. For example, the default binary precision of `Float` is 53,
corresponding to a decimal precision of 15. Therefore, if your `_eval_evalf()`
method recursively calls evalf on another expression, it should call
`expr._eval_evalf(prec)` rather than `expr.evalf(prec)`, as the latter will
incorrectly use `prec` as the decimal precision.
```
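For reference, mpmath provides helpers that convert between the two precision conventions:

```python
from mpmath.libmp import dps_to_prec, prec_to_dps

# 15 decimal digits correspond to 53 bits, the default Float precision.
print(dps_to_prec(15))   # 53
print(prec_to_dps(53))   # 15
```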
We can define numerical evaluation for [our example $\operatorname{versin}(x)$
function](custom-functions-versine-definition) by recursively evaluating
$2\sin^2\left(\frac{x}{2}\right)$, which is a more numerically stable way of writing $1 -
\cos(x)$.
```py
>>> from sympy import sin
>>> class versin(Function):
...     def _eval_evalf(self, prec):
...         return (2*sin(self.args[0]/2)**2)._eval_evalf(prec)
```
```py
>>> versin(1).evalf()
0.459697694131860
```
Once `_eval_evalf()` is defined, this enables the automatic evaluation of
floating-point inputs. It is not required to implement this manually in
[`eval()`](custom-functions-eval).
```py
>>> versin(1.)
0.459697694131860
```
Note that `evalf()` may be passed any expression, not just one that can be
evaluated numerically. In this case, it is expected that the numerical parts
of an expression will be evaluated. A general pattern to follow is to
recursively call `_eval_evalf(prec)` on the arguments of the function.
Whenever possible, it's best to reuse the evalf functionality defined in
existing SymPy functions. However, in some cases it will be necessary to use
mpmath directly.
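As a sketch of calling mpmath directly, the pattern below mirrors what SymPy's built-in functions do internally: convert the argument with the internal helper `Expr._to_mpmath()`, compute at the requested binary precision, and convert back with `Expr._from_mpmath()`. Note that both helpers are private SymPy APIs, so this may need adjusting across versions:

```python
import mpmath
from sympy import Expr, Function

class versin(Function):
    def _eval_evalf(self, prec):
        x = self.args[0]
        if not x.is_number:
            # Can't evaluate numerically; leave the expression unchanged.
            return None
        with mpmath.workprec(prec):
            # Compute 2*sin(x/2)**2, the numerically stable form of 1 - cos(x).
            v = 2*mpmath.sin(x._to_mpmath(prec)/2)**2
        return Expr._from_mpmath(v, prec)

print(versin(1).evalf())   # ≈ 0.459697694131860
```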
(custom-functions-rewriting-and-simplification)=
### Rewriting and Simplification
Various simplification functions and methods allow specifying their behavior
on custom subclasses. Not every function in SymPy has such hooks. See the
documentation of each individual function for details.
(custom-functions-rewrite)=
#### `rewrite()`
The {meth}`~.rewrite` method allows rewriting an expression in terms of a
specific function or rule. For example,
```py
>>> sin(x).rewrite(cos)
cos(x - pi/2)
```
To implement rewriting, define a method `_eval_rewrite(self, rule, args,
**hints)`, where
- `rule` is the *rule* passed to the `rewrite()` method. Typically `rule` will
be the class of the object to be rewritten to, although for more complex
rewrites, it can be anything. Each object that defines `_eval_rewrite()`
defines what rule(s) it supports. Many SymPy functions rewrite to common
classes, like `expr.rewrite(Add)`, to perform simplifications or other
computations.
- `args` are the arguments of the function to be used for rewriting. This
should be used instead of `self.args` because any recursive expressions in
the args will be rewritten in `args` (assuming the caller used
`rewrite(deep=True)`, which is the default).
- `**hints` are additional keyword arguments which may be used to specify the
behavior of the rewrite. Unknown hints should be ignored as they may be
passed to other `_eval_rewrite()` methods. If you recursively call rewrite,
you should pass the `**hints` through.
The method should return a rewritten expression, using `args` as the
arguments to the function, or `None` if the expression should be unchanged.
For our [`versin` example](custom-functions-versine-definition), an obvious
rewrite we can implement is rewriting `versin(x)` as `1 - cos(x)`:
```py
>>> class versin(Function):
...     def _eval_rewrite(self, rule, args, **hints):
...         if rule == cos:
...             return 1 - cos(*args)
>>> versin(x).rewrite(cos)
1 - cos(x)
```
Once we've defined this, {func}`~.simplify` is now able to simplify some
expressions containing `versin`:
```py
>>> from sympy import simplify
>>> simplify(versin(x) + cos(x))
1
```
(custom-functions-doit)=
#### `doit()`
The {meth}`doit() <sympy.core.basic.Basic.doit>` method is used to evaluate
unevaluated expressions. To support it, define a method `doit(self, deep=True,
**hints)`; if `deep=True`, it should recursively call `doit()` on the
function's arguments.
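For example (a sketch; whether `doit()` should fully evaluate the function is a design choice for the author), a `doit()` method for our `versin` function could evaluate it to its defining expression:

```python
from sympy import Function, cos, symbols

class versin(Function):
    def doit(self, deep=True, **hints):
        x = self.args[0]
        if deep:
            # Evaluate any unevaluated subexpressions in the argument first.
            x = x.doit(deep=deep, **hints)
        return 1 - cos(x)

x = symbols('x')
print(versin(x).doit())   # 1 - cos(x)
```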
(custom-functions-expand)=
#### `expand()`
The {meth}`~.expand` method expands an expression using *hints*. To define
expansion on a custom function, define a method
`_eval_expand_*hint*(self, **hints)`. See the documentation of
{func}`~.expand` for details on which hints are defined and the documentation
for each specific `expand_*hint*()` function (e.g., {func}`~.expand_trig`) for
details on what each hint is designed to do.
The `**hints` keyword arguments are additional hints that may be passed to the
expand function to specify additional behavior (these are separate from the
predefined *hints* described in the previous paragraph). Unknown hints should
be ignored as they may apply to other functions' custom `expand()` methods. A
common hint to define is `force`, where `force=True` would force an expansion
that might not be mathematically valid for all the given input assumptions.
For example, `expand_log(log(x*y), force=True)` produces `log(x) + log(y)`
even though this identity is not true for all complex `x` and `y`
(`force=False` is the default).
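The `force` behavior of {func}`~.expand_log` can be seen directly:

```python
from sympy import expand_log, log, symbols

x, y = symbols('x y', positive=True)
a, b = symbols('a b')

# With positive symbols the identity log(x*y) = log(x) + log(y) holds,
# so no forcing is needed.
print(expand_log(log(x*y)))               # log(x) + log(y)

# For general complex symbols the identity can fail, so expand_log
# leaves the expression alone unless force=True overrides the check.
print(expand_log(log(a*b)))               # log(a*b)
print(expand_log(log(a*b), force=True))   # log(a) + log(b)
```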
Note that `expand()` automatically takes care of recursively expanding
expressions using its own `deep` flag, so `_eval_expand_*` methods should not
recursively call expand on the arguments of the function.
For our [`versin` example](custom-functions-versine-definition), we can define
rudimentary `trig` expansion by defining an `_eval_expand_trig` method,
which recursively calls `expand_trig()` on `1 - cos(x)`:
```py
>>> from sympy import expand_trig
>>> y = symbols('y')
>>> class versin(Function):
...     def _eval_expand_trig(self, **hints):
...         x = self.args[0]
...         return expand_trig(1 - cos(x))
>>> versin(x + y).expand(trig=True)
sin(x)*sin(y) - cos(x)*cos(y) + 1
```
A more sophisticated implementation might attempt to rewrite the result of
`expand_trig(1 - cos(x))` back into `versin` functions. This is left as an
exercise for the reader.
(custom-functions-differentiation)=
### Differentiation
To define differentiation via {func}`~.diff`, define a method `fdiff(self,
argindex)`. `fdiff()` should return the derivative of the function, without
considering the chain rule, with respect to the `argindex`-th variable.
`argindex` is indexed starting at `1`.
That is, `f(x1, ..., xi, ..., xn).fdiff(i)` should return $\frac{d}{d x_i}
f(x_1, \ldots, x_i, \ldots, x_n)$, where $x_k$ are independent of one another.
`diff()` will automatically apply the chain rule using the result of
`fdiff()`. User code should use `diff()` and not call `fdiff()` directly.
```{note}
`Function` subclasses should define differentiation using `fdiff()`. Subclasses
of {class}`~.Expr` that aren't `Function` subclasses will need to define
`_eval_derivative()` instead. It is not recommended to redefine
`_eval_derivative()` on a `Function` subclass.
```
For our [$\operatorname{versin}$ example
function](custom-functions-versine-definition), the derivative is $\sin(x)$.
```py
>>> class versin(Function):
...     def fdiff(self, argindex=1):
...         # argindex indexes the args, starting at 1
...         return sin(self.args[0])
```
(custom-functions-differentiation-examples)=
```py
>>> versin(x).diff(x)
sin(x)
>>> versin(x**2).diff(x)
2*x*sin(x**2)
>>> versin(x + y).diff(x)
sin(x + y)
```
As an example of a function that has multiple arguments, consider the [fused
multiply-add (FMA) example](custom-functions-fma-definition) defined above
($\operatorname{FMA}(x, y, z) = xy + z$).
We have
$$\frac{d}{dx} \operatorname{FMA}(x, y, z) = y,$$
$$\frac{d}{dy} \operatorname{FMA}(x, y, z) = x,$$
$$\frac{d}{dz} \operatorname{FMA}(x, y, z) = 1.$$
So the `fdiff()` method for `FMA` would look like this:
```py
>>> from sympy import symbols
>>> x, y, z = symbols('x y z')
>>> class FMA(Function):
...     """
...     FMA(x, y, z) = x*y + z
...     """
...     def fdiff(self, argindex):
...         # argindex indexes the args, starting at 1
...         x, y, z = self.args
...         if argindex == 1:
...             return y
...         elif argindex == 2:
...             return x
...         elif argindex == 3:
...             return 1
```
```py
>>> FMA(x, y, z).diff(x)
y
>>> FMA(x, y, z).diff(y)
x
>>> FMA(x, y, z).diff(z)
1
>>> FMA(x**2, x + 1, y).diff(x)
x**2 + 2*x*(x + 1)
```
To leave a derivative unevaluated, raise
`sympy.core.function.ArgumentIndexError(self, argindex)`. This is the default
behavior if `fdiff()` is not defined. Here is an example function $f(x, y)$ that
is linear in the first argument and has an unevaluated derivative on the
second argument.
```py
>>> from sympy.core.function import ArgumentIndexError
>>> class f(Function):
...     @classmethod
...     def eval(cls, x, y):
...         pass
...
...     def fdiff(self, argindex):
...         if argindex == 1:
...             return 1
...         raise ArgumentIndexError(self, argindex)
```
```py
>>> f(x, y).diff(x)
1
>>> f(x, y).diff(y)
Derivative(f(x, y), y)
```
### Printing
You can define how a function prints itself with the various
[printers](module-printing) such as the {class}`string printer