The linear algebra module is designed to be as simple as possible. First, we import and declare our first Matrix object:
>>> from sympy.interactive.printing import init_printing
>>> init_printing(use_unicode=False, wrap_line=False, no_global=True)
>>> from sympy.matrices import *
>>> Matrix([[1,0], [0,1]])
[1 0]
[ ]
[0 1]
>>> Matrix((
... Matrix((
... (1, 0, 0),
... (0, 0, 0)
... )),
... (0, 0, 1)
... ))
[1 0 0 ]
[ ]
[0 0 0 ]
[ ]
[0 0 1]
>>> Matrix([[1, 2, 3]])
[1 2 3]
>>> Matrix([1, 2, 3])
[1]
[ ]
[2]
[ ]
[3]
This is the standard manner in which one creates a matrix, i.e. with a list of appropriately-sized lists and/or matrices. SymPy also supports more advanced methods of matrix creation, including a single list of values and dimension inputs:
>>> Matrix(2, 3, [1, 2, 3, 4, 5, 6])
[1 2 3]
[ ]
[4 5 6]
More interestingly (and usefully), we can use a 2-variable function (or lambda) to make one. Here we create an indicator function which is 1 on the diagonal and then use it to make the identity matrix:
>>> def f(i,j):
... if i == j:
... return 1
... else:
... return 0
...
>>> Matrix(4, 4, f)
[1 0 0 0]
[ ]
[0 1 0 0]
[ ]
[0 0 1 0]
[ ]
[0 0 0 1]
Finally let’s use lambda to create a 1-line matrix with 1’s in the even permutation entries:
>>> Matrix(3, 4, lambda i,j: 1 - (i+j) % 2)
[1 0 1 0]
[ ]
[0 1 0 1]
[ ]
[1 0 1 0]
There are also a couple of special constructors for quick matrix construction: eye is the identity matrix, and zeros and ones make matrices of all zeros and ones, respectively:
>>> eye(4)
[1 0 0 0]
[ ]
[0 1 0 0]
[ ]
[0 0 1 0]
[ ]
[0 0 0 1]
>>> zeros(2)
[0 0]
[ ]
[0 0]
>>> zeros(2, 5)
[0 0 0 0 0]
[ ]
[0 0 0 0 0]
>>> ones(3)
[1 1 1]
[ ]
[1 1 1]
[ ]
[1 1 1]
>>> ones(1, 3)
[1 1 1]
While learning to work with matrices, let’s choose one where the entries are readily identifiable. One useful thing to know is that while matrices are 2-dimensional, the storage is not, and so it is allowable - though one should be careful - to access the entries as if they were a 1-d list.
>>> M = Matrix(2, 3, [1, 2, 3, 4, 5, 6])
>>> M[4]
5
Now, the more standard entry access is a pair of indices:
>>> M[1,2]
6
>>> M[0,0]
1
>>> M[1,1]
5
Since this is Python we’re also able to slice submatrices:
>>> M[0:2,0:2]
[1 2]
[ ]
[4 5]
>>> M[1:2,2]
[6]
>>> M[:,2]
[3]
[ ]
[6]
Remember in the 2nd example above that slicing 2:2 gives an empty range and that, as in Python, a 4-column list is indexed from 0 to 3. In particular, this means a quick way to create a copy of the matrix is:
>>> M2 = M[:,:]
>>> M2[0,0] = 100
>>> M
[1 2 3]
[ ]
[4 5 6]
See? Changing M2 didn’t change M. Since we can slice, we can also assign entries:
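To sketch what the full-slice copy is doing: plain assignment merely binds a second name to the same mutable Matrix object, while a full slice (or the equivalent copy() method) yields an independent matrix:

```python
from sympy import Matrix

M = Matrix(2, 3, [1, 2, 3, 4, 5, 6])

alias = M          # same object: writes through alias are visible via M
copy1 = M[:, :]    # full slice: an independent copy
copy2 = M.copy()   # explicit copy, equivalent result

alias[0, 0] = 100  # changes M as well
copy1[0, 1] = 200  # does not change M

print(M[0, 0])  # 100
print(M[0, 1])  # 2
```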
>>> M = Matrix(([1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]))
>>> M
[1 2 3 4 ]
[ ]
[5 6 7 8 ]
[ ]
[9 10 11 12]
[ ]
[13 14 15 16]
>>> M[2,2] = M[0,3] = 0
>>> M
[1 2 3 0 ]
[ ]
[5 6 7 8 ]
[ ]
[9 10 0 12]
[ ]
[13 14 15 16]
as well as assign slices:
>>> M = Matrix(([1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]))
>>> M[2:,2:] = Matrix(2,2,lambda i,j: 0)
>>> M
[1 2 3 4]
[ ]
[5 6 7 8]
[ ]
[9 10 0 0]
[ ]
[13 14 0 0]
All the standard arithmetic operations are supported:
>>> M = Matrix(([1,2,3],[4,5,6],[7,8,9]))
>>> M - M
[0 0 0]
[ ]
[0 0 0]
[ ]
[0 0 0]
>>> M + M
[2 4 6 ]
[ ]
[8 10 12]
[ ]
[14 16 18]
>>> M * M
[30 36 42 ]
[ ]
[66 81 96 ]
[ ]
[102 126 150]
>>> M2 = Matrix(3,1,[1,5,0])
>>> M*M2
[11]
[ ]
[29]
[ ]
[47]
>>> M**2
[30 36 42 ]
[ ]
[66 81 96 ]
[ ]
[102 126 150]
As well as some useful vector operations:
>>> M.row_del(0)
>>> M
[4 5 6]
[ ]
[7 8 9]
>>> M.col_del(1)
>>> M
[4 6]
[ ]
[7 9]
>>> v1 = Matrix([1,2,3])
>>> v2 = Matrix([4,5,6])
>>> v3 = v1.cross(v2)
>>> v1.dot(v2)
32
>>> v2.dot(v3)
0
>>> v1.dot(v3)
0
Recall that the row_del() and col_del() operations don’t return a value - they simply change the matrix object. We can also ``glue’’ together matrices of the appropriate size:
>>> M1 = eye(3)
>>> M2 = zeros(3, 4)
>>> M1.row_join(M2)
[1 0 0 0 0 0 0]
[ ]
[0 1 0 0 0 0 0]
[ ]
[0 0 1 0 0 0 0]
>>> M3 = zeros(4, 3)
>>> M1.col_join(M3)
[1 0 0]
[ ]
[0 1 0]
[ ]
[0 0 1]
[ ]
[0 0 0]
[ ]
[0 0 0]
[ ]
[0 0 0]
[ ]
[0 0 0]
We are not restricted to having multiplication between two matrices:
>>> M = eye(3)
>>> 2*M
[2 0 0]
[ ]
[0 2 0]
[ ]
[0 0 2]
>>> 3*M
[3 0 0]
[ ]
[0 3 0]
[ ]
[0 0 3]
but we can also apply functions to our matrix entries using applyfunc(). Here we’ll declare a function that doubles any input number. Then we apply it to the 3x3 identity matrix:
>>> f = lambda x: 2*x
>>> eye(3).applyfunc(f)
[2 0 0]
[ ]
[0 2 0]
[ ]
[0 0 2]
One more useful matrix-wide entry application function is the substitution function. Let’s declare a matrix with symbolic entries then substitute a value. Remember we can substitute anything - even another symbol!:
>>> from sympy import Symbol
>>> x = Symbol('x')
>>> M = eye(3) * x
>>> M
[x 0 0]
[ ]
[0 x 0]
[ ]
[0 0 x]
>>> M.subs(x, 4)
[4 0 0]
[ ]
[0 4 0]
[ ]
[0 0 4]
>>> y = Symbol('y')
>>> M.subs(x, y)
[y 0 0]
[ ]
[0 y 0]
[ ]
[0 0 y]
Now that we have the basics out of the way, let’s see what we can do with the actual matrices. Of course the first things that come to mind are the basics like the determinant:
>>> M = Matrix(( [1, 2, 3], [3, 6, 2], [2, 0, 1] ))
>>> M.det()
-28
>>> M2 = eye(3)
>>> M2.det()
1
>>> M3 = Matrix(( [1, 0, 0], [1, 0, 0], [1, 0, 0] ))
>>> M3.det()
0
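As a quick sketch of two determinant identities that are handy for checking results: the determinant is multiplicative and invariant under transposition. Both are easy to verify with small matrices:

```python
from sympy import Matrix

A = Matrix([[1, 2], [3, 4]])
B = Matrix([[0, 1], [1, 1]])

# det(A*B) == det(A)*det(B), and transposition preserves the determinant.
assert (A * B).det() == A.det() * B.det()
assert A.T.det() == A.det()
print(A.det())  # -2
```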
Next comes the inverse. In SymPy the inverse is computed by Gaussian elimination by default, but we can specify that it be done by LU decomposition as well:
>>> M2.inv()
[1 0 0]
[ ]
[0 1 0]
[ ]
[0 0 1]
>>> M2.inv("LU")
[1 0 0]
[ ]
[0 1 0]
[ ]
[0 0 1]
>>> M.inv("LU")
[-3/14  1/14   1/2 ]
[                  ]
[-1/28  5/28  -1/4 ]
[                  ]
[ 3/7   -1/7    0  ]
>>> M * M.inv("LU")
[1 0 0]
[ ]
[0 1 0]
[ ]
[0 0 1]
We can perform a QR factorization which is handy for solving systems:
>>> A = Matrix([[1,1,1],[1,1,3],[2,3,4]])
>>> Q, R = A.QRdecomposition()
>>> Q
[sqrt(6)/6, -sqrt(3)/3, -sqrt(2)/2]
[sqrt(6)/6, -sqrt(3)/3,  sqrt(2)/2]
[sqrt(6)/3,  sqrt(3)/3,          0]
>>> R
[sqrt(6), 4*sqrt(6)/3, 2*sqrt(6)]
[      0,   sqrt(3)/3,         0]
[      0,           0,   sqrt(2)]
>>> Q*R
[1 1 1]
[ ]
[1 1 3]
[ ]
[2 3 4]
In addition to the solvers in the solvers module, we can solve the system Ax=b by passing the b vector to the matrix A’s LUsolve function. Here we’ll cheat a little and choose A and x, then multiply to get b. Then we can solve for x and check that it’s correct:
>>> A = Matrix([ [2, 3, 5], [3, 6, 2], [8, 3, 6] ])
>>> x = Matrix(3,1,[3,7,5])
>>> b = A*x
>>> soln = A.LUsolve(b)
>>> soln
[3]
[ ]
[7]
[ ]
[5]
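A minimal sketch of using LUsolve without knowing the answer in advance: the solution can be checked by substituting back, and by comparison with the explicit (and more expensive) inverse:

```python
from sympy import Matrix

A = Matrix([[2, 3, 5], [3, 6, 2], [8, 3, 6]])
b = Matrix(3, 1, [1, 2, 3])

x = A.LUsolve(b)

# Exact rational arithmetic, so both checks hold exactly.
assert A * x == b
assert x == A.inv() * b
```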
There’s also a nice Gram-Schmidt orthogonalizer which will take a set of vectors and orthogonalize them with respect to each other. There is an optional argument which specifies whether or not the output should also be normalized; it defaults to False. Let’s take some vectors and orthogonalize them - one set normalized and one not:
>>> L = [Matrix([2,3,5]), Matrix([3,6,2]), Matrix([8,3,6])]
>>> out1 = GramSchmidt(L)
>>> out2 = GramSchmidt(L, True)
Let’s take a look at the vectors:
>>> for i in out1:
... print(i)
...
[2]
[3]
[5]
[ 23/19]
[ 63/19]
[-47/19]
[ 1692/353]
[-1551/706]
[ -423/706]
>>> for i in out2:
... print(i)
...
[  sqrt(38)/19]
[3*sqrt(38)/38]
[5*sqrt(38)/38]
[ 23*sqrt(6707)/6707]
[ 63*sqrt(6707)/6707]
[-47*sqrt(6707)/6707]
[ 12*sqrt(706)/353]
[-11*sqrt(706)/706]
[ -3*sqrt(706)/706]
We can spot-check their orthogonality with dot() and their normality with norm():
>>> out1[0].dot(out1[1])
0
>>> out1[0].dot(out1[2])
0
>>> out1[1].dot(out1[2])
0
>>> out2[0].norm()
1
>>> out2[1].norm()
1
>>> out2[2].norm()
1
So there is quite a bit that can be done with the module including eigenvalues, eigenvectors, nullspace calculation, cofactor expansion tools, and so on. From here one might want to look over the matrices.py file for all functionality.
By-element conjugation.
Hermite conjugation.
>>> from sympy import Matrix, I
>>> m=Matrix(((1,2+I),(3,4)))
>>> m
[1, 2 + I]
[3, 4]
>>> m.H
[ 1, 3]
[2 - I, 4]
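The two operations are related: Hermite conjugation (the H property) is by-element conjugation followed by transposition, in either order. A small sketch:

```python
from sympy import Matrix, I

m = Matrix([[1, 2 + I], [3, 4]])

# m.H == conjugate-then-transpose == transpose-then-conjugate
assert m.H == m.conjugate().T
assert m.H == m.T.conjugate()
assert m.H[1, 0] == 2 - I  # the (0, 1) entry, conjugated
```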
Returns the LDL decomposition (L, D) of matrix A, such that L * D * L.T == A. This method eliminates the use of square roots. Further, this ensures that all the diagonal entries of L are 1. A must be a square, symmetric, positive-definite and non-singular matrix.
>>> from sympy.matrices import Matrix, eye
>>> A = Matrix(((25, 15, -5), (15, 18, 0), (-5, 0, 11)))
>>> L, D = A.LDLdecomposition()
>>> L
[ 1, 0, 0]
[ 3/5, 1, 0]
[-1/5, 1/3, 1]
>>> D
[25, 0, 0]
[ 0, 9, 0]
[ 0, 0, 9]
>>> L * D * L.T * A.inv() == eye(A.rows)
True
See also
Solves Ax = B using LDL decomposition, for a general square and non-singular matrix.
For a non-square matrix with rows > cols, the least squares solution is returned.
Returns the decomposition LU and the row swaps p.
See also
cholesky, LDLdecomposition, QRdecomposition, LUdecomposition_Simple, LUdecompositionFF, LUsolve
Examples
>>> from sympy import Matrix
>>> a = Matrix([[4, 3], [6, 3]])
>>> L, U, _ = a.LUdecomposition()
>>> L
[ 1, 0]
[3/2, 1]
>>> U
[4, 3]
[0, -3/2]
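A sketch of checking the factorization: with no pivoting needed here, the returned row-swap list is empty, and L*U reproduces the original matrix exactly:

```python
from sympy import Matrix, Rational

a = Matrix([[4, 3], [6, 3]])
L, U, perm = a.LUdecomposition()

assert perm == []                 # no row swaps were required
assert L * U == a                 # the factorization reproduces a
assert L[1, 0] == Rational(3, 2)  # the elimination multiplier 6/4
```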
Compute a fraction-free LU decomposition.
Returns 4 matrices P, L, D, U such that PA = L D**-1 U. If the elements of the matrix belong to some integral domain I, then all elements of L, D and U are guaranteed to belong to I.
See also
Returns A comprised of L, U (L’s diagonal entries are 1) and p which is the list of the row swaps (in order).
See also
Solve the linear system Ax = b for x. self is the coefficient matrix A and rhs is the right side b.
This is for symbolic matrices, for real or complex ones use sympy.mpmath.lu_solve or sympy.mpmath.qr_solve.
Return Q,R where A = Q*R, Q is orthogonal and R is upper triangular.
See also
Examples
This is the example from wikipedia:
>>> from sympy import Matrix, eye
>>> A = Matrix([[12,-51,4],[6,167,-68],[-4,24,-41]])
>>> Q, R = A.QRdecomposition()
>>> Q
[ 6/7, -69/175, -58/175]
[ 3/7, 158/175,   6/175]
[-2/7,    6/35,  -33/35]
>>> R
[14,  21, -14]
[ 0, 175, -70]
[ 0,   0,  35]
>>> A == Q*R
True
QR factorization of an identity matrix:
>>> A = Matrix([[1,0,0],[0,1,0],[0,0,1]])
>>> Q, R = A.QRdecomposition()
>>> Q
[1, 0, 0]
[0, 1, 0]
[0, 0, 1]
>>> R
[1, 0, 0]
[0, 1, 0]
[0, 0, 1]
Solve the linear system ‘Ax = b’.
‘self’ is the matrix ‘A’, the method argument is the vector ‘b’. The method returns the solution vector ‘x’. If ‘b’ is a matrix, the system is solved for each column of ‘b’ and the return value is a matrix of the same shape as ‘b’.
This method is slower (approximately by a factor of 2) but more stable for floating-point arithmetic than the LUsolve method. However, LUsolve usually uses exact arithmetic, so you don’t need to use QRsolve.
This is mainly for educational purposes and symbolic matrices, for real (or complex) matrices use sympy.mpmath.qr_solve.
Matrix transposition.
Returns the adjugate matrix.
Adjugate matrix is the transpose of the cofactor matrix.
http://en.wikipedia.org/wiki/Adjugate
See also
Apply a function to each element of the matrix.
>>> from sympy import Matrix
>>> m = Matrix(2,2,lambda i,j: i*2+j)
>>> m
[0, 1]
[2, 3]
>>> m.applyfunc(lambda i: 2*i)
[0, 2]
[4, 6]
Returns a Mutable version of this Matrix
>>> from sympy import ImmutableMatrix
>>> X = ImmutableMatrix([[1,2],[3,4]])
>>> Y = X.as_mutable()
>>> Y[1,1] = 5 # Can set values in Y
>>> Y
[1, 2]
[3, 5]
The Berkowitz algorithm.
Given an N x N matrix with symbolic content, compute efficiently the coefficients of the characteristic polynomials of ‘self’ and all its square submatrices composed by removing both the i-th row and column, without division in the ground domain.
This method is particularly useful for computing determinant, principal minors and characteristic polynomial, when ‘self’ has complicated coefficients e.g. polynomials. Semidirect usage of this algorithm is also important in computing efficiently subresultant PRS.
Assuming that M is a square matrix of dimension N x N and I is the N x N identity matrix, then the following definition of the characteristic polynomial is being used:
charpoly(M) = det(t*I - M)
As a consequence, all polynomials generated by the Berkowitz algorithm are monic.
>>> from sympy import Matrix
>>> from sympy.abc import x, y, z
>>> M = Matrix([[x, y, z], [1, 0, 0], [y, z, x]])
>>> p, q, r = M.berkowitz()
>>> p # 1 x 1 M's submatrix
(1, -x)
>>> q # 2 x 2 M's submatrix
(1, -x, -y)
>>> r # 3 x 3 M's submatrix
(1, -2*x, x**2 - y*z - y, x*y - z**2)
For more information on the implemented algorithm refer to:
[1] S.J. Berkowitz, On computing the determinant in small parallel time using a small number of processors, ACM, Information Processing Letters 18, 1984, pp. 147-150
[2] M. Keber, Division-Free computation of subresultants using Bezout matrices, Tech. Report MPI-I-2006-1-006, Saarbrucken, 2006
Computes characteristic polynomial minors using Berkowitz method.
A PurePoly is returned so using different variables for x does not affect the comparison or the polynomials:
>>> from sympy import Matrix
>>> from sympy.abc import x, y
>>> A = Matrix([[1, 3], [2, 0]])
>>> A.berkowitz_charpoly(x) == A.berkowitz_charpoly(y)
True
Specifying x is optional; a Dummy with name lambda is used by default (which looks good when pretty-printed in unicode):
>>> A.berkowitz_charpoly().as_expr()
_lambda**2 - _lambda - 6
No test is done to see that x doesn’t clash with an existing symbol, so using the default (lambda) or your own Dummy symbol is the safest option:
>>> A = Matrix([[1, 2], [x, 0]])
>>> A.charpoly().as_expr()
_lambda**2 - _lambda - 2*x
>>> A.charpoly(x).as_expr()
x**2 - 3*x
See also
Computes eigenvalues of a Matrix using Berkowitz method.
See also
Computes characteristic polynomial minors using Berkowitz method.
A PurePoly is returned so using different variables for x does not affect the comparison or the polynomials:
>>> from sympy import Matrix
>>> from sympy.abc import x, y
>>> A = Matrix([[1, 3], [2, 0]])
>>> A.berkowitz_charpoly(x) == A.berkowitz_charpoly(y)
True
Specifying x is optional; a Dummy with name lambda is used by default (which looks good when pretty-printed in unicode):
>>> A.berkowitz_charpoly().as_expr()
_lambda**2 - _lambda - 6
No test is done to see that x doesn’t clash with an existing symbol, so using the default (lambda) or your own Dummy symbol is the safest option:
>>> A = Matrix([[1, 2], [x, 0]])
>>> A.charpoly().as_expr()
_lambda**2 - _lambda - 2*x
>>> A.charpoly(x).as_expr()
x**2 - 3*x
See also
Returns the Cholesky decomposition L of a matrix A such that L * L.T = A.
A must be a square, symmetric, positive-definite and non-singular matrix.
>>> from sympy.matrices import Matrix
>>> A = Matrix(((25, 15, -5), (15, 18, 0), (-5, 0, 11)))
>>> A.cholesky()
[ 5, 0, 0]
[ 3, 3, 0]
[-1, 1, 3]
>>> A.cholesky() * A.cholesky().T
[25, 15, -5]
[15, 18,  0]
[-5,  0, 11]
See also
Solves Ax = B using Cholesky decomposition, for a general square non-singular matrix. For a non-square matrix with rows > cols, the least squares solution is returned.
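A minimal sketch of cholesky_solve, reusing the positive-definite matrix from the cholesky() example above:

```python
from sympy import Matrix

A = Matrix([[25, 15, -5], [15, 18, 0], [-5, 0, 11]])
b = Matrix(3, 1, [1, 2, 3])

x = A.cholesky_solve(b)
assert A * x == b  # exact solution in rational arithmetic
```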
Return a matrix containing the cofactor of each element.
See also
Insert a column at the given position.
>>> from sympy import Matrix, zeros
>>> M = Matrix(3,3,lambda i,j: i+j)
>>> M
[0, 1, 2]
[1, 2, 3]
[2, 3, 4]
>>> V = zeros(3, 1)
>>> V
[0]
[0]
[0]
>>> M.col_insert(1,V)
[0, 0, 1, 2]
[1, 0, 2, 3]
[2, 0, 3, 4]
See also
col, row_insert
Concatenates two matrices along self’s last row and bott’s first row.
>>> from sympy import Matrix, ones
>>> M = ones(3, 3)
>>> V = Matrix([[7,7,7]])
>>> M.col_join(V)
[1, 1, 1]
[1, 1, 1]
[1, 1, 1]
[7, 7, 7]
See also
col, row_join
Returns the condition number of a matrix.
This is the maximum singular value divided by the minimum singular value
>>> from sympy import Matrix, S
>>> A = Matrix([[1, 0, 0], [0, 10, 0], [0,0,S.One/10]])
>>> A.condition_number()
100
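The relationship to singular_values() can be sketched directly:

```python
from sympy import Matrix, S

A = Matrix([[1, 0, 0], [0, 10, 0], [0, 0, S.One/10]])
sigma = A.singular_values()

# condition_number() is max(sigma) / min(sigma)
assert max(sigma) / min(sigma) == A.condition_number()
assert A.condition_number() == 100
```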
See also
Creates a copy of the matrix with the given row and column deleted.
See also
row_del, col_del
Examples
>>> import sympy
>>> I = sympy.matrices.eye(4)
>>> I.delRowCol(1,2)
[1, 0, 0]
[0, 0, 0]
[0, 0, 1]
Computes the matrix determinant using the method “method”.
See also
det_bareis, berkowitz_det, det_LU
Compute matrix determinant using LU decomposition
Note that this method fails if the LU decomposition itself fails. In particular, if the matrix has no inverse this method will fail.
TODO: Implement algorithm for sparse matrices (SFF).
See also
Compute the matrix determinant using Bareis’ fraction-free algorithm, which is an extension of the well-known Gaussian elimination method. This approach is best suited for dense symbolic matrices and will result in a determinant with a minimal number of fractions. This means that less term rewriting is needed on the resulting formulae.
TODO: Implement algorithm for sparse matrices (SFF).
See also
Solves Ax = B efficiently, where A is a diagonal matrix with non-zero diagonal entries.
Return the diagonalized matrix D and transformation P such that
D = P**-1 * M * P
where M is the current matrix.
See also
Examples
>>> from sympy import Matrix
>>> m = Matrix(3,3,[1, 2, 0, 0, 3, 0, 2, -4, 2])
>>> m
[1, 2, 0]
[0, 3, 0]
[2, -4, 2]
>>> (P, D) = m.diagonalize()
>>> D
[1, 0, 0]
[0, 2, 0]
[0, 0, 3]
>>> P
[-1, 0, -1]
[ 0, 0, -1]
[ 2, 1,  2]
>>> P.inv() * m * P
[1, 0, 0]
[0, 2, 0]
[0, 0, 3]
Calculate the derivative of each element in the matrix.
Examples
>>> import sympy
>>> from sympy.abc import x, y
>>> M = sympy.matrices.Matrix([[x, y], [1, 0]])
>>> M.diff(x)
[1, 0]
[0, 0]
Return the dot product of Matrix self and b relaxing the condition of compatible dimensions: if either the number of rows or columns are the same as the length of b then the dot product is returned. If self is a row or column vector, a scalar is returned. Otherwise, a list of results is returned (and in that case the number of columns in self must match the length of b).
>>> from sympy import Matrix
>>> M = Matrix([[1,2,3], [4,5,6], [7,8,9]])
>>> v = [1, 1, 1]
>>> M.row(0).dot(v)
6
>>> M.col(0).dot(v)
12
>>> M.dot(v)
[6, 15, 24]
See also
Returns the dual of a matrix, which is:
\((1/2)*levicivita(i,j,k,l)*M(k,l)\) summed over indices \(k\) and \(l\)
Since the levicivita method is anti_symmetric for any pairwise exchange of indices, the dual of a symmetric matrix is the zero matrix. Strictly speaking the dual defined here assumes that the ‘matrix’ \(M\) is a contravariant anti_symmetric second rank tensor, so that the dual is a covariant second rank tensor.
Return eigenvalues using the berkowitz_eigenvals routine.
Since the roots routine doesn’t always work well with Floats, they will be replaced with Rationals before calling that routine. If this is not desired, set flag rational to False.
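A minimal sketch of the eigenvals() entry point; it returns a dict mapping each eigenvalue to its algebraic multiplicity:

```python
from sympy import Matrix

M = Matrix([[2, 1], [1, 2]])

# Characteristic polynomial t**2 - 4*t + 3 has simple roots 1 and 3.
assert M.eigenvals() == {1: 1, 3: 1}
```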
Return list of triples (eigenval, multiplicity, basis).
If the matrix contains any Floats, they will be changed to Rationals for computation purposes, but the answers will be returned after being evaluated with evalf. If it is desired to remove small imaginary portions during the evalf step, pass a value for the chop flag.
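A minimal sketch of the triples returned by eigenvects():

```python
from sympy import Matrix

M = Matrix([[2, 0], [0, 3]])
vects = M.eigenvects()

# Each entry is (eigenvalue, multiplicity, [basis vectors]).
(l1, m1, b1), (l2, m2, b2) = vects
assert {l1, l2} == {2, 3}
assert m1 == m2 == 1
assert M * b1[0] == l1 * b1[0]  # basis vectors are genuine eigenvectors
```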
Extract a submatrix by specifying a list of rows and columns. Negative indices can be given. All indices must be in the range -n <= i < n where n is the number of rows or columns.
See also
Examples
>>> from sympy import Matrix
>>> m = Matrix(4, 3, list(range(12)))
>>> m
[0, 1, 2]
[3, 4, 5]
[6, 7, 8]
[9, 10, 11]
>>> m.extract([0,1,3],[0,1])
[0, 1]
[3, 4]
[9, 10]
Rows or columns can be repeated:
>>> m.extract([0,0,1], [-1])
[2]
[2]
[5]
Every other row can be taken by using range to provide the indices:
>>> m.extract(list(range(0, m.rows, 2)), [-1])
[2]
[8]
Obtains the square submatrices on the main diagonal of a square matrix.
Useful for inverting symbolic matrices or solving systems of linear equations which may be decoupled by having a block diagonal structure.
Examples
>>> from sympy import Matrix, symbols
>>> from sympy.abc import x, y, z
>>> A = Matrix([[1, 3, 0, 0], [y, z*z, 0, 0], [0, 0, x, 0], [0, 0, 0, 0]])
>>> a1, a2, a3 = A.get_diag_blocks()
>>> a1
[1, 3]
[y, z**2]
>>> a2
[x]
>>> a3
[0]
Test whether any subexpression matches any of the patterns.
Examples
>>> from sympy import Matrix, Float
>>> from sympy.abc import x, y
>>> A = Matrix(((1, x), (0.2, 3)))
>>> A.has(x)
True
>>> A.has(y)
False
>>> A.has(Float)
True
Integrate each element of the matrix.
Examples
>>> import sympy
>>> from sympy.abc import x, y
>>> M = sympy.matrices.Matrix([[x, y], [1, 0]])
>>> M.integrate((x,))
[x**2/2, x*y]
[ x, 0]
>>> M.integrate((x, 0, 2))
[2, 2*y]
[2, 0]
Calculates the matrix inverse.
According to the “method” parameter, it calls the appropriate method:
GE .... inverse_GE()
LU .... inverse_LU()
ADJ ... inverse_ADJ()
According to the “try_block_diag” parameter, it will try to form block diagonal matrices using the method get_diag_blocks(), invert these individually, and then reconstruct the full inverse matrix.
Note, the GE and LU methods may require the matrix to be simplified before it is inverted in order to properly detect zeros during pivoting. In difficult cases a custom zero detection function can be provided by setting the iszerofunc argument to a function that should return True if its argument is zero. The ADJ routine computes the determinant and uses that to detect singular matrices in addition to testing for zeros on the diagonal.
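On exact rational input all three routines agree; a quick sketch:

```python
from sympy import Matrix, eye

A = Matrix([[1, 2], [3, 7]])  # det = 1, so the inverse is integer-valued

assert A.inv("GE") == A.inv("LU") == A.inv("ADJ")
assert A * A.inv() == eye(2)
```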
See also
Calculates the inverse using the adjugate matrix and a determinant.
See also
Calculates the inverse using Gaussian elimination.
See also
Calculates the inverse using LU decomposition.
See also
Check if matrix M is an antisymmetric matrix, that is, M is a square matrix with all M[i, j] == -M[j, i].
When simplify=True (default), the sum M[i, j] + M[j, i] is simplified before testing to see if it is zero. By default, the SymPy simplify function is used. To use a custom function set simplify to a function that accepts a single argument which returns a simplified expression. To skip simplification, set simplify to False but note that although this will be faster, it may induce false negatives.
Examples
>>> from sympy import Matrix, symbols
>>> m = Matrix(2,2,[0, 1, -1, 0])
>>> m
[ 0, 1]
[-1, 0]
>>> m.is_anti_symmetric()
True
>>> x, y = symbols('x y')
>>> m = Matrix(2,3,[0, 0, x, -y, 0, 0])
>>> m
[ 0, 0, x]
[-y, 0, 0]
>>> m.is_anti_symmetric()
False
>>> from sympy.abc import x, y
>>> m = Matrix(3, 3, [0, x**2 + 2*x + 1, y,
...                   -(x + 1)**2, 0, x*y,
...                   -y, -x*y, 0])
Simplification of matrix elements is done by default so even though two elements which should be equal and opposite wouldn’t pass an equality test, the matrix is still reported as antisymmetric:
>>> m[0, 1] == -m[1, 0]
False
>>> m.is_anti_symmetric()
True
If ‘simplify=False’ is used for the case when a Matrix is already simplified, this will speed things up. Here, we see that without simplification the matrix does not appear antisymmetric:
>>> m.is_anti_symmetric(simplify=False)
False
But if the matrix were already expanded, then it would appear antisymmetric and simplification in the is_anti_symmetric routine is not needed:
>>> m = m.expand()
>>> m.is_anti_symmetric(simplify=False)
True
Check if matrix is diagonal, that is, a matrix in which the entries outside the main diagonal are all zero.
See also
Examples
>>> from sympy import Matrix, diag
>>> m = Matrix(2,2,[1, 0, 0, 2])
>>> m
[1, 0]
[0, 2]
>>> m.is_diagonal()
True
>>> m = Matrix(2,2,[1, 1, 0, 2])
>>> m
[1, 1]
[0, 2]
>>> m.is_diagonal()
False
>>> m = diag(1, 2, 3)
>>> m
[1, 0, 0]
[0, 2, 0]
[0, 0, 3]
>>> m.is_diagonal()
True
Check if matrix is diagonalizable.
If reals_only==True then check that the diagonalized matrix consists only of real values.
Some subproducts could be used further in other methods to avoid double calculations. By default (if clear_subproducts==True) they will be deleted.
See also
Examples
>>> from sympy import Matrix
>>> m = Matrix(3,3,[1, 2, 0, 0, 3, 0, 2, -4, 2])
>>> m
[1, 2, 0]
[0, 3, 0]
[2, -4, 2]
>>> m.is_diagonalizable()
True
>>> m = Matrix(2,2,[0, 1, 0, 0])
>>> m
[0, 1]
[0, 0]
>>> m.is_diagonalizable()
False
>>> m = Matrix(2,2,[0, 1, -1, 0])
>>> m
[ 0, 1]
[-1, 0]
>>> m.is_diagonalizable()
True
>>> m.is_diagonalizable(True)
False
Check if matrix is a lower triangular matrix.
See also
Examples
>>> from sympy import Matrix
>>> m = Matrix(2,2,[1, 0, 0, 1])
>>> m
[1, 0]
[0, 1]
>>> m.is_lower()
True
>>> m = Matrix(3,3,[2, 0, 0, 1, 4 , 0, 6, 6, 5])
>>> m
[2, 0, 0]
[1, 4, 0]
[6, 6, 5]
>>> m.is_lower()
True
>>> from sympy.abc import x, y
>>> m = Matrix(2,2,[x**2 + y, y**2 + x, 0, x + y])
>>> m
[x**2 + y, x + y**2]
[ 0, x + y]
>>> m.is_lower()
False
Checks if the matrix is in the lower Hessenberg form.
The lower Hessenberg matrix has zero entries above the first superdiagonal.
See also
Examples
>>> from sympy.matrices import Matrix
>>> a = Matrix([[1,2,0,0],[5,2,3,0],[3,4,3,7],[5,6,1,1]])
>>> a
[1, 2, 0, 0]
[5, 2, 3, 0]
[3, 4, 3, 7]
[5, 6, 1, 1]
>>> a.is_lower_hessenberg()
True
Checks if a matrix is nilpotent.
A matrix B is nilpotent if for some integer k, B**k is a zero matrix.
Examples
>>> from sympy import Matrix
>>> a = Matrix([[0,0,0],[1,0,0],[1,1,0]])
>>> a.is_nilpotent()
True
>>> a = Matrix([[1,0,1],[1,0,0],[1,1,0]])
>>> a.is_nilpotent()
False
Checks if a matrix is square.
A matrix is square if the number of rows equals the number of columns. The empty matrix is square by definition, since the number of rows and the number of columns are both zero.
Examples
>>> from sympy import Matrix
>>> a = Matrix([[1, 2, 3], [4, 5, 6]])
>>> b = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> c = Matrix([])
>>> a.is_square
False
>>> b.is_square
True
>>> c.is_square
True
Checks if any elements are symbols.
Examples
>>> import sympy
>>> from sympy.abc import x, y
>>> M = sympy.matrices.Matrix([[x, y], [1, 0]])
>>> M.is_symbolic()
True
Check if matrix is a symmetric matrix, that is, a square matrix that is equal to its transpose.
By default, simplifications occur before testing symmetry. They can be skipped using ‘simplify=False’; while speeding things up a bit, this may however induce false negatives.
Examples
>>> from sympy import Matrix
>>> m = Matrix(2,2,[0, 1, 1, 2])
>>> m
[0, 1]
[1, 2]
>>> m.is_symmetric()
True
>>> m = Matrix(2,2,[0, 1, 2, 0])
>>> m
[0, 1]
[2, 0]
>>> m.is_symmetric()
False
>>> m = Matrix(2,3,[0, 0, 0, 0, 0, 0])
>>> m
[0, 0, 0]
[0, 0, 0]
>>> m.is_symmetric()
False
>>> from sympy.abc import x, y
>>> m = Matrix(3,3,[1, x**2 + 2*x + 1, y, (x + 1)**2 , 2, 0, y, 0, 3])
>>> m
[ 1, x**2 + 2*x + 1, y]
[(x + 1)**2, 2, 0]
[ y, 0, 3]
>>> m.is_symmetric()
True
If the matrix is already simplified, you may speed up the is_symmetric() test by using ‘simplify=False’.
>>> m.is_symmetric(simplify=False)
False
>>> m1 = m.expand()
>>> m1.is_symmetric(simplify=False)
True
Check if matrix is an upper triangular matrix.
See also
Examples
>>> from sympy import Matrix
>>> m = Matrix(2,2,[1, 0, 0, 1])
>>> m
[1, 0]
[0, 1]
>>> m.is_upper()
True
>>> m = Matrix(3,3,[5, 1, 9, 0, 4 , 6, 0, 0, 5])
>>> m
[5, 1, 9]
[0, 4, 6]
[0, 0, 5]
>>> m.is_upper()
True
>>> m = Matrix(2,3,[4, 2, 5, 6, 1, 1])
>>> m
[4, 2, 5]
[6, 1, 1]
>>> m.is_upper()
False
Checks if the matrix is in the upper Hessenberg form.
The upper Hessenberg matrix has zero entries below the first subdiagonal.
See also
Examples
>>> from sympy.matrices import Matrix
>>> a = Matrix([[1,4,2,3],[3,4,1,7],[0,2,3,4],[0,0,1,3]])
>>> a
[1, 4, 2, 3]
[3, 4, 1, 7]
[0, 2, 3, 4]
[0, 0, 1, 3]
>>> a.is_upper_hessenberg()
True
Checks if a matrix is a zero matrix.
A matrix is zero if every element is zero. A matrix need not be square to be considered zero. The empty matrix is zero by the principle of vacuous truth.
Examples
>>> from sympy import Matrix, zeros
>>> a = Matrix([[0, 0], [0, 0]])
>>> b = zeros(3, 4)
>>> c = Matrix([[0, 1], [0, 0]])
>>> d = Matrix([])
>>> a.is_zero
True
>>> b.is_zero
True
>>> c.is_zero
False
>>> d.is_zero
True
Calculates the Jacobian matrix (derivative of a vectorial function).
Both self and X can be a row or a column matrix in any order (jacobian() should always work).
Examples
>>> from sympy import sin, cos, Matrix
>>> from sympy.abc import rho, phi
>>> X = Matrix([rho*cos(phi), rho*sin(phi), rho**2])
>>> Y = Matrix([rho, phi])
>>> X.jacobian(Y)
[cos(phi), -rho*sin(phi)]
[sin(phi), rho*cos(phi)]
[ 2*rho, 0]
>>> X = Matrix([rho*cos(phi), rho*sin(phi)])
>>> X.jacobian(Y)
[cos(phi), -rho*sin(phi)]
[sin(phi), rho*cos(phi)]
Return a list of Jordan cells of the current matrix. These cells form the Jordan matrix J.
If calc_transformation is specified as False, then transformation P such that
J = P**-1 * M * P
will not be calculated.
See also
Notes
Calculation of transformation P is not implemented yet.
Examples
>>> from sympy import Matrix
>>> m = Matrix(4, 4, [6, 5, -2, -3, -3, -1, 3, 3, 2, 1, -2, -3, -1, 1, 5, 5])
>>> m
[ 6,  5, -2, -3]
[-3, -1,  3,  3]
[ 2,  1, -2, -3]
[-1,  1,  5,  5]
>>> (P, Jcells) = m.jordan_cells()
>>> Jcells[0]
[2, 1]
[0, 2]
>>> Jcells[1]
[2, 1]
[0, 2]
Return the Jordan form J of the current matrix.
If calc_transformation is specified as False, then transformation P such that
J = P**-1 * M * P
will not be calculated.
See also
Notes
Calculation of transformation P is not implemented yet.
Examples
>>> from sympy import Matrix
>>> m = Matrix(4, 4, [6, 5, -2, -3, -3, -1, 3, 3, 2, 1, -2, -3, -1, 1, 5, 5])
>>> m
[ 6,  5, -2, -3]
[-3, -1,  3,  3]
[ 2,  1, -2, -3]
[-1,  1,  5,  5]
>>> (P, J) = m.jordan_form()
>>> J
[2, 1, 0, 0]
[0, 2, 0, 0]
[0, 0, 2, 1]
[0, 0, 0, 2]
Converts a key with potentially mixed types of keys (integer and slice) into a tuple of ranges and raises an error if any index is out of self’s range.
See also
Converts key=(4,6) to 4,6 and ensures the key is correct. Negative indices are also supported and are remapped to positives provided they are valid indices.
See also
Calculate the limit of each element in the matrix.
Examples
>>> import sympy
>>> from sympy.abc import x, y
>>> M = sympy.matrices.Matrix([[x, y], [1, 0]])
>>> M.limit(x, 2)
[2, y]
[1, 0]
Solves Ax = B, where A is a lower triangular matrix.
See also
upper_triangular_solve, cholesky_solve, diagonal_solve, LDLsolve, LUsolve, QRsolve
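A minimal sketch of forward substitution via lower_triangular_solve:

```python
from sympy import Matrix, Rational

A = Matrix([[2, 0], [1, 3]])
b = Matrix(2, 1, [4, 7])

x = A.lower_triangular_solve(b)
# x1 = 4/2 = 2, then x2 = (7 - 1*2)/3 = 5/3
assert x == Matrix(2, 1, [2, Rational(5, 3)])
```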
Return the Hadamard product (elementwise product) of A and B
>>> from sympy.matrices.matrices import Matrix
>>> A = Matrix([[0, 1, 2], [3, 4, 5]])
>>> B = Matrix([[1, 10, 100], [100, 10, 1]])
>>> A.multiply_elementwise(B)
[ 0, 10, 200]
[300, 40, 5]
Evaluate each element of the matrix as a float.
Return the norm of a Matrix or Vector. In the simplest case this is the geometric size of the vector. Other norms can be specified by the ord parameter.
=====  ============================  ==========================
ord    norm for matrices             norm for vectors
=====  ============================  ==========================
None   Frobenius norm                2-norm
'fro'  Frobenius norm                - does not exist
inf    --                            max(abs(x))
-inf   --                            min(abs(x))
1      --                            as below
-1     --                            as below
2      2-norm (largest sing. value)  as below
-2     smallest singular value       as below
other  - does not exist              sum(abs(x)**ord)**(1./ord)
=====  ============================  ==========================
>>> from sympy import Matrix, Symbol, trigsimp, cos, sin
>>> x = Symbol('x', real=True)
>>> v = Matrix([cos(x), sin(x)])
>>> trigsimp( v.norm() )
1
>>> v.norm(10)
(sin(x)**10 + cos(x)**10)**(1/10)
>>> A = Matrix([[1,1], [1,1]])
>>> A.norm(2) # Spectral norm (max of |Ax|/|x| under 2-vector-norm)
2
>>> A.norm(-2) # Inverse spectral norm (smallest singular value)
0
>>> A.norm() # Frobenius Norm
2
See also
Returns a list of vectors (Matrix objects) that span the nullspace of self.
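A minimal sketch: for a rank-1 matrix with 3 columns, the nullspace has dimension 2, and every returned basis vector is annihilated by the matrix:

```python
from sympy import Matrix, zeros

M = Matrix([[1, 2, 3], [2, 4, 6]])  # second row is twice the first: rank 1
basis = M.nullspace()

assert len(basis) == 2
for v in basis:
    assert M * v == zeros(2, 1)
```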
Permute the rows of the matrix with the given permutation in reverse.
>>> import sympy
>>> M = sympy.matrices.eye(3)
>>> M.permuteBkwd([[0,1],[0,2]])
[0, 1, 0]
[0, 0, 1]
[1, 0, 0]
See also
Permute the rows of the matrix with the given permutation.
>>> import sympy
>>> M = sympy.matrices.eye(3)
>>> M.permuteFwd([[0,1],[0,2]])
[0, 0, 1]
[1, 0, 0]
[0, 1, 0]
See also
Shows location of nonzero entries for fast shape lookup.
Examples
>>> from sympy import Matrix, matrices
>>> m = Matrix(2,3,lambda i,j: i*3+j)
>>> m
[0, 1, 2]
[3, 4, 5]
>>> m.print_nonzero()
[ XX]
[XXX]
>>> m = matrices.eye(4)
>>> m.print_nonzero("x")
[x ]
[ x ]
[ x ]
[ x]
Return the projection of self onto the line containing v.
>>> from sympy import Matrix, S, sqrt
>>> V = Matrix([sqrt(3)/2,S.Half])
>>> x = Matrix([[1, 0]])
>>> V.project(x)
[sqrt(3)/2, 0]
>>> V.project(-x)
[sqrt(3)/2, 0]
Reshape the matrix. Total number of elements must remain the same.
>>> from sympy import Matrix
>>> m = Matrix(2,3,lambda i,j: 1)
>>> m
[1, 1, 1]
[1, 1, 1]
>>> m.reshape(1,6)
[1, 1, 1, 1, 1, 1]
>>> m.reshape(3,2)
[1, 1]
[1, 1]
[1, 1]
Insert a row at the given position.
>>> from sympy import Matrix, zeros
>>> M = Matrix(3,3,lambda i,j: i+j)
>>> M
[0, 1, 2]
[1, 2, 3]
[2, 3, 4]
>>> V = zeros(1, 3)
>>> V
[0, 0, 0]
>>> M.row_insert(1,V)
[0, 1, 2]
[0, 0, 0]
[1, 2, 3]
[2, 3, 4]
See also
row, col_insert
Concatenates two matrices along self’s last and rhs’s first column
>>> from sympy import Matrix
>>> M = Matrix(3,3,lambda i,j: i+j)
>>> V = Matrix(3,1,lambda i,j: 3+i+j)
>>> M.row_join(V)
[0, 1, 2, 3]
[1, 2, 3, 4]
[2, 3, 4, 5]
See also
row, col_join
Return reduced row-echelon form of matrix and indices of pivot vars.
To simplify elements before finding nonzero pivots set simplify=True (to use the default SymPy simplify function) or pass a custom simplify function.
>>> from sympy import Matrix
>>> from sympy.abc import x
>>> m = Matrix([[1, 2], [x, 1 - 1/x]])
>>> m.rref()
([1, 0]
[0, 1], [0, 1])
Compute the singular values of a Matrix
>>> from sympy import Matrix, Symbol, eye
>>> x = Symbol('x', real=True)
>>> A = Matrix([[0, 1, 0], [0, x, 0], [1, 0, 0]])
>>> A.singular_values()
[sqrt(x**2 + 1), 1, 0]
See also
Takes a slice or a number and returns (min, max) for iteration. Takes a default maxval to deal with the slice ':', which is (None, None).
See also
Get a slice/submatrix of the matrix using the given slice.
>>> from sympy import Matrix
>>> m = Matrix(4,4,lambda i,j: i+j)
>>> m
[0, 1, 2, 3]
[1, 2, 3, 4]
[2, 3, 4, 5]
[3, 4, 5, 6]
>>> m[:1, 1]
[1]
>>> m[:2, :1]
[0]
[1]
>>> m[2:4, 2:4]
[4, 5]
[5, 6]
See also
Return the Matrix converted into a python list.
>>> from sympy import Matrix
>>> m=Matrix(3, 3, list(range(9)))
>>> m
[0, 1, 2]
[3, 4, 5]
[6, 7, 8]
>>> m.tolist()
[[0, 1, 2], [3, 4, 5], [6, 7, 8]]
Calculate the trace of a (square) matrix.
>>> import sympy
>>> M = sympy.matrices.eye(3)
>>> M.trace()
3
Matrix transposition.
>>> from sympy import Matrix, I
>>> m=Matrix(((1,2+I),(3,4)))
>>> m
[1, 2 + I]
[3, 4]
>>> m.transpose()
[ 1, 3]
[2 + I, 4]
>>> m.T == m.transpose()
True
See also
Solves Ax = B, where A is an upper triangular matrix.
See also
lower_triangular_solve, cholesky_solve, diagonal_solve, LDLsolve, LUsolve, QRsolve
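No example accompanies this entry; a small sketch with illustrative values of our own (not from the original docs):

```python
from sympy import Matrix

# An upper triangular system A*x = B; solved by back substitution.
A = Matrix([[2, 1],
            [0, 3]])
B = Matrix([4, 6])

x = A.upper_triangular_solve(B)
assert A * x == B  # here x == Matrix([1, 2])
```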
Return the Matrix converted into a one-column matrix by stacking columns.
>>> from sympy import Matrix
>>> m=Matrix([[1,3], [2,4]])
>>> m
[1, 3]
[2, 4]
>>> m.vec()
[1]
[2]
[3]
[4]
See also
Return the unique elements of a symmetric Matrix as a one column matrix by stacking the elements in the lower triangle.
Arguments:
diagonal -- include the diagonal cells of self or not
check_symmetry -- checks symmetry of self but not completely reliably
>>> from sympy import Matrix
>>> m=Matrix([[1,2], [2,3]])
>>> m
[1, 2]
[2, 3]
>>> m.vech()
[1]
[2]
[3]
>>> m.vech(diagonal=False)
[2]
See also
alias of MutableMatrix
Elementary column selector (default) or operation using functor, which is a function of two args interpreted as (self[i, j], i).
>>> from sympy import ones
>>> I = ones(3)
>>> I.col(0, lambda v, i: v*3)
>>> I
[3, 1, 1]
[3, 1, 1]
[3, 1, 1]
>>> I.col(0)
[3]
[3]
[3]
Delete the given column.
>>> import sympy
>>> M = sympy.matrices.eye(3)
>>> M.col_del(1)
>>> M
[1, 0]
[0, 0]
[0, 1]
Swap the two given columns of the matrix inplace.
>>> from sympy.matrices import Matrix
>>> M = Matrix([[1,0],[1,0]])
>>> M
[1, 0]
[1, 0]
>>> M.col_swap(0, 1)
>>> M
[0, 1]
[0, 1]
Copy in elements from a list.
Parameters:
    key : slice
    value : iterable
See also
Examples
>>> import sympy
>>> I = sympy.matrices.eye(5)
>>> I.copyin_list((slice(0,2), slice(0,1)), [1,2])
>>> I
[1, 0, 0, 0, 0]
[2, 1, 0, 0, 0]
[0, 0, 1, 0, 0]
[0, 0, 0, 1, 0]
[0, 0, 0, 0, 1]
Copy in values from a matrix into the given bounds.
Parameters:
    key : slice
    value : Matrix
See also
Examples
>>> import sympy
>>> M = sympy.matrices.Matrix([[0,1],[2,3]])
>>> I = sympy.matrices.eye(5)
>>> I.copyin_matrix((slice(0,2), slice(0,2)),M)
>>> I
[0, 1, 0, 0, 0]
[2, 3, 0, 0, 0]
[0, 0, 1, 0, 0]
[0, 0, 0, 1, 0]
[0, 0, 0, 0, 1]
Elementary row selector (default) or operation using functor, which is a function of two args interpreted as (self[i, j], j).
>>> from sympy import ones
>>> I = ones(3)
>>> I.row(1, lambda v,i: v*3)
>>> I
[1, 1, 1]
[3, 3, 3]
[1, 1, 1]
>>> I.row(1)
[3, 3, 3]
Delete the given row.
>>> import sympy
>>> M = sympy.matrices.eye(3)
>>> M.row_del(1)
>>> M
[1, 0, 0]
[0, 0, 1]
Swap the two given rows of the matrix inplace.
>>> from sympy.matrices import Matrix
>>> M = Matrix([[0,1],[1,0]])
>>> M
[0, 1]
[1, 0]
>>> M.row_swap(0, 1)
>>> M
[1, 0]
[0, 1]
A sparse matrix (a matrix with a large number of zero elements).
See also
Alternate faster representation
Alternate faster representation
Matrix transposition.
Add two sparse matrices with dictionary representation.
>>> from sympy.matrices.matrices import SparseMatrix
>>> A = SparseMatrix(5, 5, lambda i, j: i * j + i)
>>> A
[0, 0, 0, 0, 0]
[1, 2, 3, 4, 5]
[2, 4, 6, 8, 10]
[3, 6, 9, 12, 15]
[4, 8, 12, 16, 20]
>>> B = SparseMatrix(5, 5, lambda i, j: i + 2 * j)
>>> B
[0, 2, 4, 6, 8]
[1, 3, 5, 7, 9]
[2, 4, 6, 8, 10]
[3, 5, 7, 9, 11]
[4, 6, 8, 10, 12]
>>> A + B
[0, 2, 4, 6, 8]
[2, 5, 8, 11, 14]
[4, 8, 12, 16, 20]
[6, 11, 16, 21, 26]
[8, 14, 20, 26, 32]
See also
Delete the given column of the matrix.
See also
Examples
>>> import sympy
>>> M = sympy.matrices.SparseMatrix([[0,0],[0,1]])
>>> M
[0, 0]
[0, 1]
>>> M.col_del(0)
>>> M
[0]
[1]
Returns a column-sorted list of nonzero elements of the matrix.
>>> from sympy.matrices import SparseMatrix
>>> a=SparseMatrix(((1,2),(3,4)))
>>> a
[1, 2]
[3, 4]
>>> a.CL
[(0, 0, 1), (1, 0, 3), (0, 1, 2), (1, 1, 4)]
See also
Delete the given row of the matrix.
See also
Examples
>>> import sympy
>>> M = sympy.matrices.SparseMatrix([[0,0],[0,1]])
>>> M
[0, 0]
[0, 1]
>>> M.row_del(0)
>>> M
[0, 1]
Returns a row-sorted list of nonzero elements of the matrix.
>>> from sympy.matrices import SparseMatrix
>>> a=SparseMatrix(((1,2),(3,4)))
>>> a
[1, 2]
[3, 4]
>>> a.RL
[(0, 0, 1), (0, 1, 2), (1, 0, 3), (1, 1, 4)]
See also
Matrix product A*B.
A and B must be of appropriate dimensions. If A is an m x k matrix, and B is a k x n matrix, the product will be an m x n matrix.
See also
Examples
>>> from sympy import Matrix
>>> A = Matrix([[1, 2, 3], [4, 5, 6]])
>>> B = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> A*B
[30, 36, 42]
[66, 81, 96]
>>> B*A
Traceback (most recent call last):
...
ShapeError: Matrices size mismatch.
Return the Hadamard product (elementwise product) of A and B.
>>> from sympy.matrices.matrices import Matrix, matrix_multiply_elementwise
>>> A = Matrix([[0, 1, 2], [3, 4, 5]])
>>> B = Matrix([[1, 10, 100], [100, 10, 1]])
>>> matrix_multiply_elementwise(A, B)
[ 0, 10, 200]
[300, 40, 5]
See also
Returns a matrix of zeros with r rows and c columns; if c is omitted a square matrix will be returned.
Returns a matrix of ones with r rows and c columns; if c is omitted a square matrix will be returned.
Create a square n x n identity matrix.
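Since zeros, ones, and eye carry no doctests in this section, a brief sketch:

```python
from sympy import zeros, ones, eye

Z = zeros(2, 3)  # 2 x 3 matrix of zeros
J = ones(2)      # square 2 x 2 matrix of ones
I = eye(3)       # 3 x 3 identity matrix

assert Z.shape == (2, 3)
assert all(e == 1 for e in J)
assert I * I == I  # the identity is idempotent under multiplication
```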
Create a diagonal matrix from a list of diagonal values.
Arguments may also be matrices, in which case they are fitted into the result matrix.
See also
Examples
>>> from sympy.matrices import diag, Matrix
>>> diag(1, 2, 3)
[1, 0, 0]
[0, 2, 0]
[0, 0, 3]
>>> from sympy.abc import x, y, z
>>> a = Matrix([x, y, z])
>>> b = Matrix([[1, 2], [3, 4]])
>>> c = Matrix([[5, 6]])
>>> diag(a, 7, b, c)
[x, 0, 0, 0, 0, 0]
[y, 0, 0, 0, 0, 0]
[z, 0, 0, 0, 0, 0]
[0, 7, 0, 0, 0, 0]
[0, 0, 1, 2, 0, 0]
[0, 0, 3, 4, 0, 0]
[0, 0, 0, 0, 5, 6]
Create random matrix with dimensions r x c. If c is omitted the matrix will be square. If symmetric is True the matrix must be square.
Examples
>>> from sympy.matrices import randMatrix
>>> randMatrix(3)
[25, 45, 27]
[44, 54, 9]
[23, 96, 46]
>>> randMatrix(3, 2)
[87, 29]
[23, 37]
[90, 26]
>>> randMatrix(3, 3, 0, 2)
[0, 2, 0]
[2, 0, 1]
[0, 0, 1]
>>> randMatrix(3, symmetric=True)
[85, 26, 29]
[26, 71, 43]
[29, 43, 57]
>>> A = randMatrix(3, seed=1)
>>> B = randMatrix(3, seed=2)
>>> A == B
False
>>> A == randMatrix(3, seed=1)
True
Compute the Hessian matrix for a function f.
see: http://en.wikipedia.org/wiki/Hessian_matrix
See also
sympy.matrices.matrices.Matrix.jacobian, wronskian
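The hessian entry lacks a doctest; a short sketch with a function of our own choosing:

```python
from sympy import hessian, symbols, Matrix

x, y = symbols('x y')
f = x**2 * y  # an arbitrary scalar function of two variables

# hessian(f, vars) builds the matrix of second partial derivatives.
H = hessian(f, (x, y))
assert H == Matrix([[2*y, 2*x],
                    [2*x, 0]])
```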
Apply the Gram-Schmidt process to a set of vectors.
see: http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process
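No example is given for GramSchmidt here; a minimal sketch (vector values are illustrative):

```python
from sympy import GramSchmidt, Matrix

# Two linearly independent vectors in the plane.
vecs = [Matrix([3, 1]), Matrix([2, 2])]

# With the second argument True the resulting basis is also normalized.
basis = GramSchmidt(vecs, True)

assert basis[0].dot(basis[1]) == 0  # orthogonal
assert basis[0].norm() == 1         # unit length
```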
Compute the Wronskian for a list of functions:

                 | f1       f2        ...   fn      |
                 | f1'      f2'       ...   fn'     |
                 |  .        .        .      .      |
W(f1, ..., fn) = |  .        .         .     .      |
                 |  .        .          .    .      |
                 |  (n)      (n)            (n)     |
                 | D  (f1)  D  (f2)   ...  D  (fn)  |
see: http://en.wikipedia.org/wiki/Wronskian
See also
sympy.matrices.matrices.Matrix.jacobian, hessian
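A quick sketch of wronskian in use (the functions are chosen for illustration):

```python
from sympy import wronskian, symbols, exp, simplify

x = symbols('x')

# exp(x) and x*exp(x) are linearly independent solutions of
# y'' - 2y' + y = 0, so their Wronskian is nonzero.
w = wronskian([exp(x), x * exp(x)], x)
assert simplify(w - exp(2 * x)) == 0  # W = exp(2*x)
```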
Given a linear difference operator L of order 'k' and the homogeneous equation Ly = 0, we want to compute the kernel of L, which is a set of 'k' sequences: a(n), b(n), ..., z(n).
Solutions of L are linearly independent iff their Casoratian, denoted as C(a, b, ..., z), does not vanish for n = 0.
The Casoratian is defined by the k x k determinant:

|  a(n)      b(n)      . . .  z(n)     |
|  a(n+1)    b(n+1)    . . .  z(n+1)   |
|    .         .       .        .      |
|    .         .         .      .      |
|    .         .           .    .      |
|  a(n+k-1)  b(n+k-1)  . . .  z(n+k-1) |
It proves very useful in rsolve_hyper() where it is applied to a generating set of a recurrence to factor out linearly dependent solutions and return a basis:
>>> from sympy import Symbol, casoratian, factorial
>>> n = Symbol('n', integer=True)
Exponential and factorial are linearly independent:
>>> casoratian([2**n, factorial(n)], n) != 0
True
Converts python list of SymPy expressions to a NumPy array.
See also
Tries to convert “a” to an index, returns None on failure.
The result of a2idx() (if not None) can be safely used as an index to arrays/matrices.
Create a numpy ndarray of symbols (as an object array).
The created symbols are named prefix_i1_i2_... You should thus provide a nonempty prefix if you want your symbols to be unique for different output arrays, as SymPy symbols with identical names are the same object.
Parameters:
    prefix : string
    shape : int or tuple
Examples
These doctests require numpy.
>>> from sympy import symarray
>>> symarray('', 3)
[_0, _1, _2]
If you want multiple symarrays to contain distinct symbols, you must provide unique prefixes:
>>> a = symarray('', 3)
>>> b = symarray('', 3)
>>> a[0] is b[0]
True
>>> a = symarray('a', 3)
>>> b = symarray('b', 3)
>>> a[0] is b[0]
False
Creating symarrays with a prefix:
>>> symarray('a', 3)
[a_0, a_1, a_2]
For more than one dimension, the shape must be given as a tuple:
>>> symarray('a', (2, 3))
[[a_0_0, a_0_1, a_0_2],
[a_1_0, a_1_1, a_1_2]]
>>> symarray('a', (2, 3, 2))
[[[a_0_0_0, a_0_0_1],
[a_0_1_0, a_0_1_1],
[a_0_2_0, a_0_2_1]],
[[a_1_0_0, a_1_0_1],
[a_1_1_0, a_1_1_1],
[a_1_2_0, a_1_2_1]]]
Returns a rotation matrix for a rotation of theta (in radians) about the 1axis.
See also
Examples
>>> from sympy import pi
>>> from sympy.matrices import rot_axis1
A rotation of pi/3 (60 degrees):
>>> theta = pi/3
>>> rot_axis1(theta)
[1,          0,         0]
[0,        1/2, sqrt(3)/2]
[0, -sqrt(3)/2,       1/2]
If we rotate by pi/2 (90 degrees):
>>> rot_axis1(pi/2)
[1,  0, 0]
[0,  0, 1]
[0, -1, 0]
Returns a rotation matrix for a rotation of theta (in radians) about the 2axis.
See also
Examples
>>> from sympy import pi
>>> from sympy.matrices import rot_axis2
A rotation of pi/3 (60 degrees):
>>> theta = pi/3
>>> rot_axis2(theta)
[      1/2, 0, -sqrt(3)/2]
[        0, 1,          0]
[sqrt(3)/2, 0,        1/2]
If we rotate by pi/2 (90 degrees):
>>> rot_axis2(pi/2)
[0, 0, -1]
[0, 1,  0]
[1, 0,  0]
Returns a rotation matrix for a rotation of theta (in radians) about the 3axis.
See also
Examples
>>> from sympy import pi
>>> from sympy.matrices import rot_axis3
A rotation of pi/3 (60 degrees):
>>> theta = pi/3
>>> rot_axis3(theta)
[       1/2, sqrt(3)/2, 0]
[-sqrt(3)/2,       1/2, 0]
[         0,         0, 1]
If we rotate by pi/2 (90 degrees):
>>> rot_axis3(pi/2)
[ 0, 1, 0]
[-1, 0, 0]
[ 0, 0, 1]