sympy#

Tools that facilitate building SymPy expressions.

@unevaluated(cls: type[ExprClass]) type[ExprClass][source]#
@unevaluated(*, implement_doit: bool = True, **assumptions: Unpack[SymPyAssumptions]) Callable[[type[ExprClass]], type[ExprClass]]

Decorator for defining ‘unevaluated’ SymPy expressions.

Unevaluated expressions are handy for defining large expressions that consist of several sub-definitions. They are ‘unfolded’ to their definition once you call their doit() method. For example:

>>> @unevaluated
... class MyExpr(sp.Expr):
...     x: sp.Symbol
...     y: sp.Symbol
...     _latex_repr_ = R"z\left({x}, {y}\right)"
...
...     def evaluate(self) -> sp.Expr:
...         x, y = self.args
...         return x**2 + y**2
>>> a, b = sp.symbols("a b")
>>> expr = MyExpr(a, b**2)
>>> sp.latex(expr)
'z\\left(a, b^{2}\\right)'
>>> expr.doit()
a**2 + b**4
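The core mechanism can be sketched in pure Python: a decorator that reads the class annotations and derives a constructor from them. This is a hypothetical toy (toy_unevaluated, ToyExpr, and the use of __init__ instead of Expr.__new__ are all assumptions for illustration), not the actual implementation:

```python
# Hypothetical sketch, not the real implementation: derive a constructor from
# the class annotations and store the arguments on the instance, mimicking
# how Expr.args would hold them.
def toy_unevaluated(cls):
    fields = list(cls.__annotations__)  # annotated attribute names, in order

    def __init__(self, *args, **kwargs):
        values = dict(zip(fields, args))
        values.update(kwargs)
        for name in fields:
            setattr(self, name, values[name])
        self.args = tuple(values[name] for name in fields)

    cls.__init__ = __init__
    return cls

@toy_unevaluated
class ToyExpr:
    x: int
    y: int

    def evaluate(self) -> int:
        x, y = self.args
        return x**2 + y**2

expr = ToyExpr(3, y=4)
print(expr.evaluate())  # 25
```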

A LaTeX representation for the unevaluated state can be provided by defining a format string or a method called _latex_repr_:

>>> @unevaluated
... class Function(sp.Expr):
...     x: sp.Symbol
...     _latex_repr_ = R"f\left({x}\right)"  # not an f-string!
...
...     def evaluate(self) -> sp.Expr:
...         return sp.sqrt(self.x)
>>> y = sp.Symbol("y", nonnegative=True)
>>> expr = Function(x=y**2)
>>> sp.latex(expr)
'f\\left(y^{2}\\right)'
>>> expr.doit()
y
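The braces in the class body survive because the string is a template, not an f-string: the printed arguments are presumably substituted later via str.format. The mechanism in plain Python (the hard-coded printed argument "y^{2}" is an assumption about what the printer produces):

```python
# The template from the class body; {x} is a placeholder, not an f-string field
template = R"f\left({x}\right)"
# Fill it with an already-printed sub-expression, as a printer presumably would
rendered = template.format(x="y^{2}")
print(rendered)  # f\left(y^{2}\right)
```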

Or, as a method:

>>> from sympy.printing.latex import LatexPrinter
>>> @unevaluated
... class Function(sp.Expr):
...     x: sp.Symbol
...
...     def evaluate(self) -> sp.Expr:
...         return self.x**2
...
...     def _latex_repr_(self, printer: LatexPrinter, *args) -> str:
...         x = printer._print(self.x)  # important to convert to string first
...         x, *_ = map(printer._print, self.args)  # also possible via its args
...         return Rf"g\left({x}\right)"  # this is an f-string
>>> expr = Function(y)
>>> sp.latex(expr)
'g\\left(y\\right)'

Attributes of the class are fed to the __new__ constructor of the Expr class and are therefore also called “arguments”. Just like in the Expr class, these arguments are automatically sympified. Mark attributes/arguments that should not be sympified with argument():

>>> class Transformation:
...     def __call__(self, x: sp.Basic, y: sp.Basic) -> sp.Expr: ...
>>> @unevaluated
... class MyExpr(sp.Expr):
...     x: Any
...     y: Any
...     functor: Callable = argument(sympify=False)
...
...     def evaluate(self) -> sp.Expr:
...         return self.functor(self.x, self.y)
>>> expr = MyExpr(0, y=3.14, functor=Transformation)
>>> isinstance(expr.x, sp.Integer)
True
>>> isinstance(expr.y, sp.Float)
True
>>> expr.functor is Transformation
True

Added in version 0.14.8.

Changed in version 0.14.7: Renamed from @unevaluated_expression() to @unevaluated().

argument(*, default: T = MISSING, sympify: bool = True) T[source]#
argument(*, default_factory: Callable[[], T] = MISSING, sympify: bool = True) T

Add qualifiers to fields of unevaluated SymPy expression classes.

Creates a dataclasses.Field with additional metadata for unevaluated() by wrapping dataclasses.field().

Added in version 0.14.8.
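Since argument() wraps dataclasses.field(), its effect can be sketched with the standard library alone. This is a hypothetical re-implementation (toy_argument and Example are illustrative names), showing how the sympify flag could travel in the field metadata:

```python
import dataclasses

MISSING = dataclasses.MISSING

# Hypothetical sketch: record the sympify flag in the dataclasses.Field metadata
def toy_argument(*, default=MISSING, sympify: bool = True):
    return dataclasses.field(default=default, metadata={"sympify": sympify})

@dataclasses.dataclass
class Example:
    functor: object = toy_argument(default=None, sympify=False)

(field_info,) = dataclasses.fields(Example)
print(field_info.metadata["sympify"])  # False
```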

SymPy assumptions
ExprClass = ~ExprClass#

Type variable.


class SymPyAssumptions[source]#

Bases: TypedDict

See https://docs.sympy.org/latest/guides/assumptions.html#predicates.

algebraic: bool[source]#
commutative: bool[source]#
complex: bool[source]#
extended_negative: bool[source]#
extended_nonnegative: bool[source]#
extended_nonpositive: bool[source]#
extended_nonzero: bool[source]#
extended_positive: bool[source]#
extended_real: bool[source]#
finite: bool[source]#
hermitian: bool[source]#
imaginary: bool[source]#
infinite: bool[source]#
integer: bool[source]#
irrational: bool[source]#
negative: bool[source]#
noninteger: bool[source]#
nonnegative: bool[source]#
nonpositive: bool[source]#
nonzero: bool[source]#
positive: bool[source]#
rational: bool[source]#
real: bool[source]#
transcendental: bool[source]#
zero: bool[source]#
partial_doit(expr: T, types: type[Basic] | tuple[type[Basic], ...], recursive: bool = False) T[source]#
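The signature suggests that doit() is applied only to sub-nodes of the given types, leaving the rest of the tree intact. A toy sketch on a minimal tree (Node, Add, Square, and toy_partial_doit are hypothetical stand-ins, not sympy classes):

```python
# Toy expression tree: only Square knows how to 'unfold' itself
class Node:
    def __init__(self, *args):
        self.args = args

class Add(Node):
    pass

class Square(Node):
    def doit(self):
        return self.args[0] ** 2  # unfold to the square of the argument

def toy_partial_doit(node, types):
    if isinstance(node, types):
        return node.doit()
    if isinstance(node, Node):
        # Rebuild the node with partially evaluated arguments
        return type(node)(*(toy_partial_doit(arg, types) for arg in node.args))
    return node

result = toy_partial_doit(Add(Square(3), Square(4)), Square)
print(result.args)  # (9, 16) -- the Add node itself is left untouched
```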
class NumPyPrintable(*args)[source]#

Bases: Expr

Expr class that can lambdify to NumPy code.

This interface is for classes that derive from sympy.Expr and that require a _numpycode() method in case the class does not correctly lambdify() to NumPy code. For more info on SymPy printers, see Printing.

Several computational frameworks try to converge their interface to that of NumPy. See for instance TensorFlow’s NumPy API and jax.numpy. This fact is used in TensorWaves to lambdify() SymPy expressions to these different backends with the same lambdification code.

Warning

If you decorate this class with unevaluated(), you usually want to do so with implement_doit=False, because you do not want the class to be ‘unfolded’ with doit() before lambdification.

Warning

The implemented _numpycode() method should contain as few SymPy computations as possible. Instead, it should get most information from its constructor arguments, so that SymPy can apply printer tricks like cse(), expand with doit() beforehand, and perform other simplifications that can make the generated code shorter. An example is the BoostZMatrix class, which takes \(\beta\) as input instead of the FourMomentumSymbol from which \(\beta\) is computed.

abstractmethod _numpycode(printer: NumPyPrinter, *args) str[source]#

Lambdify this NumPyPrintable class to NumPy code.
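The general shape of such a method can be illustrated with a stub printer (StubPrinter and ToySqrt are hypothetical; in real code the printer is sympy's NumPyPrinter and the arguments are sympy expressions):

```python
# Stub that mimics the printer interface used inside _numpycode()
class StubPrinter:
    def _print(self, obj) -> str:
        return str(obj)

class ToySqrt:
    def __init__(self, *args):
        self.args = args

    def _numpycode(self, printer, *args) -> str:
        x = printer._print(self.args[0])  # print the argument to code first
        return f"numpy.sqrt({x})"

code = ToySqrt("p_norm")._numpycode(StubPrinter())
print(code)  # numpy.sqrt(p_norm)
```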

create_symbol_matrix(name: str, m: int, n: int) MutableDenseMatrix[source]#

Create a Matrix with symbols as elements.

The MatrixSymbol has some issues when one is interested in the elements of the matrix. This function instead creates a Matrix where the elements are Indexed instances.

To convert these Indexed instances to a Symbol, use symplot.substitute_indexed_symbols().

>>> create_symbol_matrix("A", m=2, n=3)
Matrix([
[A[0, 0], A[0, 1], A[0, 2]],
[A[1, 0], A[1, 1], A[1, 2]]])
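The element layout can be mimicked with plain strings as a toy stand-in for the Indexed elements (symbol_grid is an illustrative helper, not part of the API):

```python
# Toy stand-in: name the m-by-n elements the way the matrix above does
def symbol_grid(name: str, m: int, n: int) -> list[list[str]]:
    return [[f"{name}[{i}, {j}]" for j in range(n)] for i in range(m)]

grid = symbol_grid("A", m=2, n=3)
print(grid[1])  # ['A[1, 0]', 'A[1, 1]', 'A[1, 2]']
```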
class PoolSum(expression, *indices: tuple[Symbol, Iterable[Basic]], evaluate: bool = False, **hints)[source]#

Bases: Expr

Sum over indices where the values are taken from a domain set.

>>> i, j, m, n = sp.symbols("i j m n")
>>> expr = PoolSum(i**m + j**n, (i, (-1, 0, +1)), (j, (2, 4, 5)))
>>> expr
PoolSum(i**m + j**n, (i, (-1, 0, 1)), (j, (2, 4, 5)))
>>> print(sp.latex(expr))
\sum_{i=-1}^{1} \sum_{j\in\left\{2,4,5\right\}}{i^{m} + j^{n}}
>>> expr.doit()
3*(-1)**m + 3*0**m + 3*2**n + 3*4**n + 3*5**n + 3
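The expanded result can be cross-checked numerically for concrete exponents, e.g. m=2, n=3 (chosen here for illustration):

```python
from itertools import product

# Direct double sum over both index domains, versus the expanded formula above
m, n = 2, 3
direct = sum(i**m + j**n for i, j in product((-1, 0, 1), (2, 4, 5)))
formula = 3 * (-1) ** m + 3 * 0**m + 3 * 2**n + 3 * 4**n + 3 * 5**n + 3
print(direct, formula)  # 597 597
```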
property expression: Expr[source]#
property indices: list[tuple[Symbol, tuple[Float, ...]]][source]#
property free_symbols: set[Basic][source]#

Return from the atoms of self those which are free symbols.

Not all free symbols are Symbol (see examples)

For most expressions, all symbols are free symbols. For some classes this is not true. e.g. Integrals use Symbols for the dummy variables which are bound variables, so Integral has a method to return all symbols except those. Derivative keeps track of symbols with respect to which it will perform a derivative; those are bound variables, too, so it has its own free_symbols method.

Any other method that uses bound variables should implement a free_symbols method.

Examples

>>> from sympy import Derivative, Integral, IndexedBase
>>> from sympy.abc import x, y, n
>>> (x + 1).free_symbols
{x}
>>> Integral(x, y).free_symbols
{x, y}

Not all free symbols are actually symbols:

>>> IndexedBase('F')[0].free_symbols
{F, F[0]}

The symbols of differentiation are not included unless they appear in the expression being differentiated.

>>> Derivative(x + y, y).free_symbols
{x, y}
>>> Derivative(x, y).free_symbols
{x}
>>> Derivative(x, (y, n)).free_symbols
{n, x}

If you want to know if a symbol is in the variables of the Derivative you can do so as follows:

>>> Derivative(x, y).has_free(y)
True
cleanup() Expr | PoolSum[source]#

Remove redundant summations, like indices with one or no value.

>>> x, i = sp.symbols("x i")
>>> PoolSum(x**i, (i, [0, 1, 2])).cleanup().doit()
x**2 + x + 1
>>> PoolSum(x, (i, [0, 1, 2])).cleanup()
x
>>> PoolSum(x).cleanup()
x
>>> PoolSum(x**i, (i, [0])).cleanup()
1
determine_indices(symbol: Basic) list[int][source]#

Extract indices from a Symbol, if available.

>>> determine_indices(sp.Symbol("m1"))
[1]
>>> determine_indices(sp.Symbol("m_12"))
[12]
>>> determine_indices(sp.Symbol("m_a2"))
[2]
>>> determine_indices(sp.Symbol(R"\alpha_{i2, 5}"))
[2, 5]
>>> determine_indices(sp.Symbol("m"))
[]

Indexed instances can also be handled:

>>> m_a = sp.IndexedBase("m_a")
>>> determine_indices(m_a[0])
[0]
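The name-based cases above can be sketched with a regular expression (guess_indices is a hypothetical re-implementation, not the actual source): take the subscript, i.e. the part after the last underscore with braces stripped, and read the trailing digit run of each comma-separated entry.

```python
import re

def guess_indices(name: str) -> list[int]:
    # Subscript = text after the last underscore, with LaTeX braces stripped
    _, _, subscript = name.rpartition("_")
    subscript = subscript.strip("{}")
    indices = []
    for entry in subscript.split(","):
        # Keep only the trailing digits, so "i2" yields 2 and "a2" yields 2
        match = re.search(r"\d+$", entry.strip())
        if match:
            indices.append(int(match.group()))
    return indices

print(guess_indices(R"\alpha_{i2, 5}"))  # [2, 5]
```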

class UnevaluatableIntegral(function, *symbols, **assumptions)[source]#

Bases: Integral

See Numerical integrals.

Added in version 0.14.10.

abs_tolerance = 1e-05[source]#
rel_tolerance = 1e-05[source]#
limit = 50[source]#
dummify = True[source]#

Submodules and Subpackages