# 7 Built-in Prefix Operators

In the following subsections are descriptions of the most useful prefix operators built into REDUCE that are not defined in other sections (such as substitution operators). Some are fully defined internally as procedures; others are more nearly abstract operators, with only some of their properties known to the system.

In many cases, an operator is described by a prototypical header line as follows. Each formal parameter is given a name and followed by its allowed type. The names of classes referred to in the definition are printed in lower case, and parameter names in upper case. If a parameter type is not commonly used, it may be a specific set enclosed in brackets `{ … }`. Operators that accept formal parameter lists of arbitrary length have the parameter and type class enclosed in square brackets, indicating that zero or more occurrences of that argument are permitted. Optional parameters and their type classes are enclosed in angle brackets.

- 7 Built-in Prefix Operators
- 7.1 Numerical Operators
- 7.2 Mathematical Functions
- 7.3 Bernoulli Numbers and Euler Numbers
- 7.4 Fibonacci Numbers and Fibonacci Polynomials
- 7.5 Motzkin numbers
- 7.6 CHANGEVAR Operator
- 7.7 CONTINUED_FRACTION Operator
- 7.8 DF Operator
- 7.9 INT Operator
- 7.10 LENGTH Operator
- 7.11 MAP Operator
- 7.12 MKID Operator
- 7.13 The Pochhammer Notation
- 7.14 PF Operator
- 7.15 SELECT Operator
- 7.16 SOLVE Operator
- 7.17 Even and Odd Operators
- 7.18 Linear Operators
- 7.19 Non-Commuting Operators
- 7.20 Symmetric and Antisymmetric Operators
- 7.21 Declaring New Prefix Operators
- 7.22 Declaring New Infix Operators
- 7.23 Creating/Removing Variable Dependency

## 7.1 Numerical Operators

REDUCE includes a number of functions that are analogs of those found in most numerical systems. With numerical arguments, such functions return the expected result. However, they may also be called with non-numerical arguments. In such cases, except where noted, the system attempts to simplify the expression as far as it can; a residual expression involving the original operator usually remains. These operators are as follows:

`AbstractTensors.abs`

— Function`abs(r)`

`abs`

returns the absolute value of its single argument, if that argument has a numerical value. A non-numerical argument is returned as an absolute value, with an overall numerical coefficient taken outside the absolute value operator. For example:

```
julia> Algebra.abs(-3/4)
0.75
julia> Algebra.abs(:(2a))
:(2 * abs(a))
julia> Algebra.abs(im)
1.0
julia> Algebra.abs(:(-x))
:(abs(x))
```

`Reduce.Algebra.ceiling`

— Function`ceiling(r)`

This operator returns the ceiling (i.e., the least integer greater than or equal to the given argument) if its single argument has a numerical value. A non-numerical argument is returned as an expression in the original operator. For example:

```
julia> Algebra.ceiling(-5/4)
-1
julia> Algebra.ceiling(:(-a))
:(ceiling(-a))
```

`AbstractTensors.conj`

— Function`conj(r)`

This operator returns the complex conjugate of an expression, if that argument has a numerical value. A non-numerical argument is returned as an expression in the operators `repart` and `impart`. For example:

```
julia> Algebra.conj(1+im)
1 - 1im
julia> Algebra.conj(:(a+im*b))
:(repart(a) - ((impart(a) + repart(b)) * im + impart(b)))
```

`Reduce.Algebra.factorial`

— Function`factorial(r)`

If the single argument of `factorial`

evaluates to a non-negative integer, its factorial is returned. Otherwise an expression involving `factorial`

is returned. For example:

```
julia> Algebra.factorial(5)
120
julia> Algebra.factorial(:a)
:(factorial(a))
```

`Reduce.Algebra.fix`

— Function`fix(r)`

This operator returns the fixed value (i.e., the integer part of the given argument) if its single argument has a numerical value. A non-numerical argument is returned as an expression in the original operator. For example:

```
julia> Algebra.fix(-5/4)
-1
julia> Algebra.fix(:a)
:(fix(a))
```

`Reduce.Algebra.floor`

— Function`floor(r)`

This operator returns the floor (i.e., the greatest integer less than or equal to the given argument) if its single argument has a numerical value. A non-numerical argument is returned as an expression in the original operator. For example:

```
julia> Algebra.floor(-5/4)
-2.0
julia> Algebra.floor(:a)
:(floor(a))
```

`Reduce.Algebra.impart`

— Function`impart(r)`

This operator returns the imaginary part of an expression, if that argument has a numerical value. A non-numerical argument is returned as an expression in the operators `repart`

and `impart`

. For example:

```
julia> Algebra.impart(1+im)
1
julia> Algebra.impart(:(a+im*b))
:(impart(a) + repart(b))
```

`Reduce.Algebra.max`

— Function`max(r...)`

`max`

can take an arbitrary number of expressions as its arguments. If all arguments evaluate to numerical values, the maximum of the argument list is returned. If any argument is non-numeric, an appropriately reduced expression is returned. For example:

```
julia> Algebra.max(2,-3,4,5)
5
julia> Algebra.max(:a,2,3)
:(max(3, a))
```

`max`

of an empty list returns 0.

`Reduce.Algebra.min`

— Function`min(r...)`

`min`

can take an arbitrary number of expressions as its arguments. If all arguments evaluate to numerical values, the minimum of the argument list is returned. If any argument is non-numeric, an appropriately reduced expression is returned. For example:

```
julia> Algebra.min(2,-2)
-2
julia> Algebra.min(:x)
:x
```

`min`

of an empty list returns 0.

`Reduce.Algebra.nextprime`

— Function`nextprime(r)`

`nextprime`

returns the next prime greater than its integer argument, using a probabilistic algorithm. A type error occurs if the value of the argument is not an integer. For example:

```
julia> Algebra.nextprime(5)
7
julia> Algebra.nextprime(-2)
2
julia> Algebra.nextprime(-7)
-5
julia> Algebra.nextprime(1000000)
1000003
```

whereas `Algebra.nextprime(:a)`

gives a type error.

`Reduce.Algebra.random`

— Function`random(r)`

`random(n)`

returns a random number $r$ in the range $0 ≤ r < n$. A type error occurs if the value of the argument is not a positive integer in algebraic mode, or positive number in symbolic mode. For example:

```
julia> Algebra.random(5)
3
julia> Algebra.random(1000)
191
```

whereas `Algebra.random(:a)`

gives a type error.

`Reduce.Algebra.random_new_seed`

— Function`random_new_seed(r)`

`random_new_seed(n)`

reseeds the random number generator to a sequence determined by the integer argument `n`

. It can be used to ensure that a repeatable pseudo-random sequence will be delivered regardless of any previous use of `random`

, or can be called early in a run with an argument derived from something variable (such as the time of day) to arrange that different runs of a REDUCE program will use different random sequences. When a fresh copy of REDUCE is first created it is as if `random_new_seed(1)`

has been obeyed.

A type error occurs if the value of the argument is not a positive integer.
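
The effect of seeding can be illustrated outside REDUCE with Python's `random` module, whose `seed` function plays an analogous role (a sketch of the concept, not Reduce.jl code):

```python
import random

# Seeding the generator makes the pseudo-random sequence repeatable,
# just as random_new_seed does for REDUCE's random operator.
random.seed(1)
first_run = [random.randrange(1000) for _ in range(5)]

random.seed(1)   # reseed with the same integer ...
second_run = [random.randrange(1000) for _ in range(5)]

assert first_run == second_run   # ... and the sequence repeats
```

Seeding with something variable (e.g. the time of day) instead gives a different sequence on each run.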

`Reduce.Algebra.repart`

— Function`repart(r)`

This returns the real part of an expression, if that argument has a numerical value. A non-numerical argument is returned as an expression in the operators `repart`

and `impart`

. For example:

```
julia> Algebra.repart(1+im)
1
julia> Algebra.repart(:(a+im*b))
:(repart(a) - impart(b))
```

`Reduce.Algebra.round`

— Function`round(r)`

This operator returns the rounded value (i.e., the nearest integer) of its single argument if that argument has a numerical value. A non-numeric argument is returned as an expression in the original operator. For example:

```
julia> Algebra.round(-5/4)
-1.0
julia> Algebra.round(:a)
:(round(a))
```

`Reduce.Algebra.sign`

— Function`sign(r)`

`sign`

tries to evaluate the sign of its argument. If this is possible `sign`

returns one of `1`

, `0`

or `-1`

. Otherwise, the result is the original form or a simplified variant. For example:

```
julia> Algebra.sign(-5)
-1
julia> Algebra.sign(:(-a^2*b))
:(-(sign(b)))
```

Note that even powers of formal expressions are assumed to be positive only as long as the switch `complex`

is off.

## 7.2 Mathematical Functions

REDUCE knows that the following represent mathematical functions that can take arbitrary scalar expressions as their argument(s):

```
acos acosh acot acoth acsc acsch asec asech asin
asinh atan atanh atan2 beta ci cos cosh cot coth csc
csch dilog ei exp gamma hypot ibeta igamma ln log
logb log10 sec sech si sin sinh sqrt tan tanh
airy_ai airy_aiprime airy_bi airy_biprime
besseli besselj besselk bessely
hankel1 hankel2 kummerm kummeru lommel1 lommel2
struveh struvel whittakerm whittakeru
polygamma psi zeta
solidharmonicy sphericalharmonicy
```

where `log`

is the natural logarithm (and equivalent to `ln`

), and `logb`

has two arguments of which the second is the logarithmic base.

The derivatives of all these functions are also known to the system.

REDUCE knows various elementary identities and properties of these functions. For example:

```
cos(-x) = cos(x)          sin(-x) = -sin(x)
cos(n*pi) = (-1)^n        sin(n*pi) = 0
log(e) = 1                e^(i*pi/2) = i
log(1) = 0                e^(i*pi) = -1
log(e^x) = x              e^(3*i*pi/2) = -i
```

Beside these identities, there are a lot of simplifications for elementary functions defined in the REDUCE system as rulelists. In order to view these, the `showrules`

operator can be used, e.g.

```
reduce> showrules tan;
{tan(~n*arbint(~i)*pi + ~(~x)) => tan(x) when fixp(n),
 tan(~x)
  => trigquot(sin(x),cos(x)) when knowledge_about(sin,x,tan),
 tan((~x + ~(~k)*pi)/~d)
  => - cot(x/d) + i*pi*impart(k/d) when abs(repart(k/d))=1/2,
 tan((~(~w) + ~(~k)*pi)/~(~d))
  => tan(w/d + (k/d - fix(repart(k/d)))*pi)
     when whereexp({rp => repart(k/d)},bool-eval(ratnump(rp) and abs(rp)>=1)),
 tan(atan(~x)) => x,
 df(tan(~x),~x) => 1 + tan(x)^2}
```

For further simplification, especially of expressions involving trigonometric functions, see the TRIGSIMP package (chapter 16.72) documentation.

Functions not listed above may be defined in the special functions package SPECFN.

The user can add further rules for the reduction of expressions involving these operators by using the `let`

command.

In many cases it is desirable to expand product arguments of logarithms, or collect a sum of logarithms into a single logarithm. Since these are inverse operations, it is not possible to provide rules for doing both at the same time and preserve the REDUCE concept of idempotent evaluation. As an alternative, REDUCE provides two switches `expandlogs`

and `combinelogs`

to carry out these operations. Both are off by default, and are subject to the value of the switch `precise`

. This switch is on by default and prevents modifications that may be false in a complex domain. Thus to expand `log(3*y)`

into a sum of logs, one can say

```
julia> Algebra.on(:expandlogs);
julia> Algebra.log(:(3*y))
```

whereas to expand `log(x*y)`

into a sum of logs, one needs to say

```
julia> Algebra.off(:precise); Algebra.on(:expandlogs);
julia> Algebra.log(:(x*y))
```

To combine this sum into a single log:

```
julia> Algebra.off(:precise); Algebra.on(:combinelogs);
julia> Algebra.:+(Algebra.log(:x),Algebra.log(:y))
```

These switches affect the logarithmic functions `log10`

(base 10) and `logb`

(arbitrary base) as well.

At the present time, it is possible to have both switches on at once, which could lead to infinite recursion; in practice, an expression is simply switched from one form to the other in this case. Users should not rely on this behavior, since it may change in the next release.

The current version of REDUCE does a poor job of simplifying surds. In particular, expressions involving the product of variables raised to non-integer powers do not usually have their powers combined internally, even though they are printed as if those powers were combined. For example, the expression

`reduce> x^(1/3)*x^(1/6)`

will print as

`sqrt(x)`

but will have an internal form containing the two exponentiated terms. If you now subtract `sqrt(x)`

from this expression, you will not get zero. Instead, the confusing form

`sqrt(x) - sqrt(x)`

will result. To combine such exponentiated terms, the switch `combineexpt`

should be turned on.
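
What `combineexpt` does for this example amounts to adding the rational exponents exactly; a quick check of the arithmetic (plain Python, not Reduce.jl code):

```python
from fractions import Fraction

# x^(1/3) * x^(1/6) combines to x^(1/3 + 1/6) = x^(1/2), i.e. sqrt(x):
assert Fraction(1, 3) + Fraction(1, 6) == Fraction(1, 2)

# Numerical spot check at x = 2:
x = 2.0
assert abs(x ** (1 / 3) * x ** (1 / 6) - x ** 0.5) < 1e-12
```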

The square root function can be input using the name `sqrt`

, or the power operation `^(1/2)`

. On output, unsimplified square roots are normally represented by the operator `sqrt`

rather than a fractional power. With the default system switch settings, the argument of a square root is first simplified, and any divisors of the expression that are perfect squares taken outside the square root argument. The remaining expression is left under the square root. Thus the expression

`julia> Algebra.sqrt(:(-8a^2*b))`

becomes

`:(2 * sqrt(b) * sqrt(2) * a * im)`

Note that such simplifications can cause trouble if `a` is eventually given a value that is a negative number. If it is important that the positive property of the square root and higher even roots always be preserved, the switch `precise` should be set on (the default value). This causes any non-numerical factors taken out of surds to be represented by their absolute value form. With `precise` on, the above example would become

`:(2 * sqrt(-2b) * abs(a))`

However, this is incorrect in the complex domain, where $\sqrt{x^2}$ is not identical to $|x|$. To avoid the above simplification, the switch `precise_complex`

should be set on (default is off). For example:

`julia> Algebra.on(:precise_complex); Algebra.sqrt(:(-8a^2*b))`

yields the output

`:(2 * sqrt(-2 * a ^ 2 * b))`

The statement that REDUCE knows very little about these functions applies only in the mathematically exact mode, i.e. with the `rounded` switch off. If `rounded`

is on, any of the functions

```
acos acosh acot acoth acsc acsch asec asech asin
asinh atan atanh atan2 cos cosh cot coth csc csch
exp hypot ibeta igamma ln log logb log10 psi sec
sech sin sinh sqrt tan tanh
```

when given a numerical argument, has its value calculated to the current degree of floating point precision. In addition, real (non-integer valued) powers of numbers will also be evaluated.

If the `complex`

switch is turned on in addition to `rounded`

, these functions will also calculate a real or complex result, again to the current degree of floating point precision, if given complex arguments. For example,

```
julia> @rounded @complex 2.3^(5.6im)
:(-0.0480793490914 - 0.998843519372im)
julia> @rounded @complex cos(2+3im)
:(-4.18962569097 - 9.10922789376im)
```

## 7.3 Bernoulli Numbers and Euler Numbers

`Reduce.Algebra.bernoulli`

— Function`bernoulli(n)`

The unary operator `bernoulli`

provides notation and computation for Bernoulli numbers. `bernoulli(n)`

evaluates to the `n`

th Bernoulli number; all of the odd Bernoulli numbers, except `bernoulli(1)`

, are zero.

The algorithms are based upon those by Herbert Wilf, presented by Sandra Fillebrown [?]. If the `rounded`

switch is off, the algorithms are exactly those; if it is on, some further rounding may be done to prevent computation of redundant digits. Hence, these functions are particularly fast when used to approximate the Bernoulli numbers in rounded mode.

Euler numbers are computed by the unary operator `euler`, which returns the `n`th Euler number. The computation is derived directly from Pascal’s triangle of binomial coefficients.
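
For checking small values, both sequences can be generated from elementary binomial recurrences. The sketch below (plain Python; `bernoulli` and `euler_number` are hypothetical helper names, and this is the classical textbook recurrence rather than the Wilf/Fillebrown algorithms REDUCE uses):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    # B_0 = 1; thereafter solve sum_{k=0}^{m} C(m+1, k)*B_k = 0 for B_m.
    # This convention gives bernoulli(1) = -1/2, matching REDUCE.
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B[n]

def euler_number(n):
    # Odd-index Euler numbers vanish; the even ones satisfy
    # sum_{k=0}^{m} C(2m, 2k)*E_{2k} = 0 for m >= 1.
    if n % 2:
        return 0
    E = [1]                                    # E_0
    for m in range(1, n // 2 + 1):
        E.append(-sum(comb(2 * m, 2 * k) * E[k] for k in range(m)))
    return E[n // 2]

assert bernoulli(2) == Fraction(1, 6)
assert euler_number(6) == -61
```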

## 7.4 Fibonacci Numbers and Fibonacci Polynomials

`Reduce.Algebra.fibonacci`

— Function`fibonacci(n)`

The unary operator `fibonacci`

provides notation and computation for Fibonacci numbers. `fibonacci(n)`

evaluates to the `n`

th Fibonacci number. If `n`

is a positive or negative integer, it will be evaluated following the definition $F_1 = F_2 = 1$, $F_n = F_{n-1} + F_{n-2}$, extended to negative indices by $F_{-n} = (-1)^{n+1}F_n$.

`Reduce.Algebra.fibonaccip`

— Function`fibonacci(n,x)`

Fibonacci Polynomials are computed by the binary operator `fibonaccip`

. `fibonaccip(n,x)`

returns the `n`

th Fibonacci polynomial in the variable `x`

. If `n`

is a positive or negative integer, it will be evaluated following the definition $F_1(x) = 1$, $F_2(x) = x$, $F_n(x) = x\,F_{n-1}(x) + F_{n-2}(x)$.
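
A minimal sketch of the standard Fibonacci recurrences (plain Python; the helper names mirror the REDUCE operators but this is not the REDUCE implementation, and the polynomial is evaluated at a numeric point rather than symbolically):

```python
def fibonacci(n):
    # F_1 = F_2 = 1, F_n = F_{n-1} + F_{n-2};
    # negative indices follow F_{-n} = (-1)^(n+1) * F_n.
    if n < 0:
        return (-1) ** (-n + 1) * fibonacci(-n)
    a, b = 0, 1          # F_0, F_1
    for _ in range(n):
        a, b = b, a + b
    return a

def fibonaccip(n, x):
    # Fibonacci polynomial evaluated at x:
    # F_1(x) = 1, F_2(x) = x, F_n(x) = x*F_{n-1}(x) + F_{n-2}(x).
    a, b = 0, 1          # F_0(x), F_1(x)
    for _ in range(n - 1):
        a, b = b, x * b + a
    return b

assert fibonacci(10) == 55
assert fibonaccip(4, 2) == 12    # F_4(x) = x^3 + 2x, so F_4(2) = 12
```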

## 7.5 Motzkin numbers

`Reduce.Algebra.motzkin`

— Function`motzkin(n)`

A Motzkin number $M_n$ (named after Theodore Motzkin) is the number of different ways of drawing non-intersecting chords on a circle between n points. For a non-negative integer `n`

, the operator `motzkin(n)`

returns the `n`

th Motzkin number, according to the recursion formula $M_0 = M_1 = 1$, $M_{n+1} = \frac{2n+3}{n+3}\,M_n + \frac{3n}{n+3}\,M_{n-1}$.
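
A short sketch of that recursion (plain Python, not the REDUCE implementation; `motzkin` is a hypothetical helper name):

```python
def motzkin(n):
    # M_0 = M_1 = 1, then M_{k+1} = ((2k+3)*M_k + 3k*M_{k-1}) / (k+3).
    # The numerator is always divisible by k+3, so // is exact.
    m = [1, 1]
    for k in range(1, n):
        m.append(((2 * k + 3) * m[k] + 3 * k * m[k - 1]) // (k + 3))
    return m[n]

# The sequence starts 1, 1, 2, 4, 9, 21, 51, ...
assert motzkin(4) == 9
```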

## 7.6 CHANGEVAR Operator

Author: G. Üçoluk.

`Reduce.Algebra.changevar`

— Function`changevar(depvars,newvars,eqlist,diffeq)`

The operator `changevar`

does a variable transformation in a set of differential equations. Syntax:

`changevar(⟨depvars⟩,⟨newvars⟩,⟨eqlist⟩,⟨diffeq⟩)`

`⟨diffeq⟩`

is either a single differential equation or a list of differential equations, `⟨depvars⟩`

are the dependent variables to be substituted, `⟨newvars⟩`

are the new depend variables, and `⟨eqlist⟩`

is a list of equations of the form `⟨depvar⟩=⟨expression⟩`

where `⟨expression⟩`

is some function in the new dependent variables.

The three lists `⟨depvars⟩`

, `⟨newvars⟩`

, and `⟨eqlist⟩`

must be of the same length. If there is only one variable to be substituted, then it can be given instead of the list. The same applies to the list of differential equations, i.e., the following two commands are equivalent

```
Algebra.operator(:u)
Algebra.changevar(:u,:y,:(x==e^y),:(df(u(x),x) - log(x)))
Algebra.changevar((:u,),(:y,),(:(x==e^y),),(:(df(u(x),x) - log(x)),))
```

except for one difference: the first command returns the transformed differential equation, the second one a list with a single element.

The switch `dispjacobian`

governs the display of the entries of the inverse Jacobian; it is `off`

by default.

The mathematics behind the change of independent variable(s) in differential equations is quite straightforward. It is basically the application of the chain rule. If the dependent variable of the differential equation is $F$, the independent variables are $x_i$ and the new independent variables are $u_i$ (where $i=1…n$) then the first derivatives are: $\frac{\partial F}{\partial x_i} = \frac{\partial F}{\partial u_j} \frac{\partial u_j}{\partial x_i}$

We assumed Einstein’s summation convention. Here the problem is to calculate the $∂u_j∕∂x_i$ terms if the change of variables is given by $x_i = f_i (u_1 ,...,u_n ).$

The first thought might be solving the above given equations for $u_j$ and then differentiating them with respect to $x_i$, then again making use of the equations above, substituting new variables for the old ones in the calculated derivatives. This is not always a preferable way to proceed, mainly because the functions $f_i$ may not always be easily invertible. Another approach that makes use of the Jacobian is better. Consider the above given equations which relate the old variables to the new ones. Let us differentiate them: $\delta_{jl} = \frac{\partial x_j}{\partial x_l} = \frac{\partial f_j}{\partial u_k}\,\frac{\partial u_k}{\partial x_l}$

The first derivative, $\partial f_j / \partial u_k$, is nothing but the $(j,k)$th entry of the Jacobian matrix.

So if we speak in matrix language $1 = J ⋅ D$ where we defined the Jacobian $J_{ij} = \frac{\partial f_i}{\partial u_j}$ and the matrix of the derivatives we wanted to obtain as $D_{ij} = \frac{\partial u_i}{\partial x_j}.$ If the Jacobian has a non-vanishing determinant then it is invertible and we are able to write from the matrix equation above: $D = J^{-1}$ so finally we have what we want $\frac{\partial u_i}{\partial x_j} = [ J^{-1} ]_{ij}$

The higher derivatives are obtained by the successive application of the chain rule and using the definitions of the old variables in terms of the new ones. It can be easily verified that the only derivatives that need to be calculated are the first order ones, which are obtained above.
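
The matrix identity $D = J^{-1}$ can be checked numerically for the polar change of variables $x = r\cos θ$, $y = r\sin θ$ used in the next subsection (a plain Python sketch, independent of REDUCE):

```python
import math

r, theta = 2.0, 0.7

# Jacobian J_ij = d f_i / d u_j with f = (x, y), u = (r, theta):
J = [[math.cos(theta), -r * math.sin(theta)],
     [math.sin(theta),  r * math.cos(theta)]]

# Invert the 2x2 matrix by hand: D_ij = d u_i / d x_j.
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
D = [[ J[1][1] / det, -J[0][1] / det],
     [-J[1][0] / det,  J[0][0] / det]]

# Compare with the closed forms dr/dx = cos(theta), dtheta/dx = -sin(theta)/r:
assert abs(D[0][0] - math.cos(theta)) < 1e-12
assert abs(D[1][0] + math.sin(theta) / r) < 1e-12
```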

### 7.6.1 CHANGEVAR example: The 2-dim. Laplace Equation

The 2-dimensional Laplace equation in cartesian coordinates is: $\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$ Now assume we want to obtain the polar coordinate form of Laplace equation. The change of variables is: $x = r\cos θ, y = r\sin θ$ The solution using `changevar`

is as follows

```
Algebra.changevar((:u,),(:r,:θ),(:(x==r*cos(θ)),:(y==r*sin(θ))),
(:(df(u(x,y),x,2)+df(u(x,y),y,2)),) )
```

Here we could omit the list parentheses in the first and last arguments (because those lists have only one member) and the list parentheses in the third argument (because they are optional), but you cannot leave off the list parentheses in the second argument. So one could equivalently write

```
Algebra.changevar(:u,(:r,:θ),:(x==r*cos(θ)),:(y==r*sin(θ)),
:(df(u(x,y),x,2)+df(u(x,y),y,2)) )
```

If you have tried out the above example, you will notice that the denominator contains a $\cos^2θ + \sin^2θ$ which is actually equal to 1. This has of course nothing to do with `changevar`

. One has to overcome such pattern matching problems by the conventional methods REDUCE provides (a rule, for example, will fix it).

Secondly you will notice that your `u(x,y)`

operator has changed to `u(r,θ)`

in the result. Nothing magical about this. That is just what we do with pencil and paper. `u(r,θ)`

represents the transformed dependent variable.

### 7.6.2 Another CHANGEVAR example: An Euler Equation

Consider a differential equation which is of Euler type, for instance: $x^3y ′′′ - 3x^2y′′ + 6xy ′ - 6y = 0$ where prime denotes differentiation with respect to $x$. As is well known, Euler type of equations are solved by a change of variable: $x = e^u.$ So our call to `changevar`

reads as follows:

```
Algebra.changevar(:y, :u, :(x==e^u), :(x^3*df(y(x),x,3)-
3*x^2*df(y(x),x,2)+6*x*df(y(x),x)-6*y(x)))
```

and returns the result

`:(((11 * df(y(u), u) - 6 * y(u)) - 6 * df(y(u), u, 2)) + df(y(u), u, 3))`

## 7.7 CONTINUED_FRACTION Operator

Not initially supported by Reduce.jl parser, see upstream docs for more information.

## 7.8 DF Operator

`Reduce.Algebra.df`

— Function`df(r...)`

The operator `df`

is used to represent partial differentiation with respect to one or more variables. It is used with the syntax:

`R"df(⟨EXPRN:algebraic⟩[,⟨VAR:kernel⟩<,⟨NUM:integer⟩>])"`

The first argument is the expression to be differentiated. The remaining arguments specify the differentiation variables and the number of times they are applied.

The number `num`

may be omitted if it is `1`

. For example,

```
reduce> df(y,x)
reduce> df(y,x,2)
reduce> df(y,x1,2,x2,x3,2)
```

The evaluation of `df(y,x)`

proceeds as follows: first, the values of `y`

and `x`

are found. Let us assume that `x`

has no assigned value, so its value is `x`

. Each term or other part of the value of `y`

that contains the variable `x`

is differentiated by the standard rules. If `z`

is another variable, not `x`

itself, then its derivative with respect to `x`

is taken to be `0`

, unless `z`

has previously been declared to `depend`

on `x`

, in which case the derivative is reported as the symbol `df(z,x)`

.

### 7.8.1 Switches influencing differentiation

Consider `df(u,x,y,z)`

. If none of `x`

,`y`

,`z`

are equal to `u`

then the order of differentiation is commuted into a canonical form, unless the switch `nocommutedf`

is turned on (default is off). If at least one of `x`

,`y`

,`z`

is equal to `u`

then the order of differentiation is not commuted and the derivative is not simplified to zero, unless the switch `commutedf`

is turned on. It is off by default.

If `commutedf`

is off and the switch `simpnoncomdf`

is on then simplify as follows:

```
df(u,x,u) -> df(u,x,2) / df(u,x)
df(u,x,n,u) -> df(u,x,n+1) / df(u,x)
```

provided `u`

depends only on the one variable `x`

. This simplification removes the non-commutative aspect of the derivative.

If the switch `expanddf`

is turned on then REDUCE uses the chain rule to expand symbolic derivatives of indirectly dependent variables provided the result is unambiguous, i.e. provided there is no direct dependence. It is off by default. Thus, for example, given

```
julia> Algebra.depend(:f,:u,:v); Algebra.depend((:u,:v),:x)
julia> Algebra.on(:expanddf)
julia> Algebra.df(:f,:x)
:(df(f, u) * df(u, x) + df(f, v) * df(v, x))
```

whereas after

`julia> Algebra.depend(:f,:x)`

`df(:f,:x)`

does not expand at all (since the result would be ambiguous and the algorithm would loop).

Turning on the switch `allowdfint`

allows “differentiation under the integral sign”, i.e.

`df(int(y, x), v) -> int(df(y, v), x)`

if this results in a simplification. If the switch `dfint`

is also turned on then this happens regardless of whether the result simplifies. Both switches are off by default.

### 7.8.2 Adding Differentiation Rules

The `let`

statement can be used to introduce rules for differentiation of user-defined operators. Its general form is

`R"for all ⟨var1⟩,…,⟨varn⟩ let df(⟨operator⟩⟨varlist⟩,⟨vari⟩)=⟨expression⟩"`

where `⟨varlist⟩ ::= (⟨var1⟩,…,⟨varn⟩)`

, and `⟨var1⟩,…,⟨varn⟩`

are the dummy variable arguments of `⟨operator⟩`

.

An analogous form applies to infix operators.

*Examples:*

`R"for all x let df(tan x,x)= 1 + tan(x)^2"`

(This is how the tan differentiation rule appears in the REDUCE source.)

`R"for all x,y let df(f(x,y),x)=2*f(x,y), df(f(x,y),y)=x*f(x,y)"`

Notice that all dummy arguments of the relevant operator must be declared arbitrary by the `for all`

command, and that rules may be supplied for operators with any number of arguments. If no differentiation rule appears for an argument in an operator, the differentiation routines will return as result an expression in terms of `df`

. For example, if the rule for the differentiation with respect to the second argument of `f`

is not supplied, the evaluation of `df(f(x,z),z)`

would leave this expression unchanged. (No `depend`

declaration is needed here, since `f(x,z)`

obviously “depends on” `z`

.)

Once such a rule has been defined for a given operator, any future differentiation rules for that operator must be defined with the same number of arguments for that operator, otherwise we get the error message

```
ERROR: Reduce:
Incompatible df rule argument length for <operator>
```

## 7.9 INT Operator

`Reduce.Algebra.int`

— Function`int(r...)`

`int`

is an operator in REDUCE for indefinite integration using a combination of the Risch-Norman algorithm and pattern matching. It is used with the syntax:

`R"int(⟨EXPRN:algebraic⟩,⟨VAR:kernel⟩)"`

This will correctly return the indefinite integral for expressions comprising polynomials, log functions, exponential functions and tan and atan. The arbitrary constant is not represented. If the integral cannot be done in closed terms, it returns a formal integral for the answer in one of two ways:

- It returns the input, `int(…,…)`, unchanged.
- It returns an expression involving `int`s of some other functions (sometimes more complicated than the original one, unfortunately).

Rational functions can be integrated when the denominator is factorizable by the program. In addition it will attempt to integrate expressions involving error functions, dilogarithms and other trigonometric expressions. In these cases it might not always succeed in finding the solution, even if one exists.

*Examples:*

```
julia> Algebra.int(:(log(x)),:x)
:((log(x) - 1) * x)
julia> Algebra.int(:(e^x),:x)
:(e ^ x)
```

The program checks that the second argument is a variable and gives an error if it is not.

*Note:* If the `int`

operator is called with 4 arguments, REDUCE will implicitly call the definite integration package (DEFINT) and this package will interpret the third and fourth arguments as the lower and upper limit of integration, respectively. For details, consult the documentation on the DEFINT package.

### 7.9.1 Options

The switch `trint`

when on will trace the operation of the algorithm. It produces a great deal of output in a somewhat illegible form, and is not of much interest to the general user. It is normally off.

The switch `trintsubst`

when on will trace the heuristic attempts to solve the integral by substitution. It is normally off.

If the switch `failhard`

is on the algorithm will terminate with an error if the integral cannot be done in closed terms, rather than return a formal integration form. `failhard`

is normally off.

The switch `nolnr`

suppresses the use of the linear properties of integration in cases when the integral cannot be found in closed terms. It is normally off.

The switch `nointsubst`

disables the heuristic attempts to solve the integral by substitution. It is normally off.

### 7.9.2 Advanced Use

If a function appears in the integrand that is not one of the functions `exp`

, `erf`

, `tan`

, `atan`

, `log`

, `dilog`

then the algorithm will make an attempt to integrate the argument if it can, differentiate it and reach a known function. However the answer cannot be guaranteed in this case. If a function is known to be algebraically independent of this set it can be flagged transcendental by

`R"flag('(trilog),'transcendental)"`

in which case this function will be added to the permitted field descriptors for a genuine decision procedure. If this is done the user is responsible for the mathematical correctness of his actions.

The standard version does not deal with algebraic extensions. Thus integration of expressions involving square roots and other like things can lead to trouble. A contributed package that supports integration of functions involving square roots is available, however (ALGINT, chapter 16.1). In addition there is a definite integration package, DEFINT (chapter 16.18).

### 7.9.3 References

A. C. Norman & P. M. A. Moore, “Implementing the New Risch Algorithm”, Proc. 4th International Symposium on Advanced Comp. Methods in Theor. Phys., CNRS, Marseilles, 1977.

S. J. Harrington, “A New Symbolic Integration System in Reduce”, Comp. Journ. 22 (1979) 2.

A. C. Norman & J. H. Davenport, “Symbolic Integration — The Dust Settles?”, Proc. EUROSAM 79, Lecture Notes in Computer Science 72, Springer-Verlag, Berlin Heidelberg New York (1979) 398-407.

## 7.10 LENGTH Operator

`Reduce.Algebra.length`

— Function`length(r)`

`length`

is a generic operator for finding the length of various objects in the system. The meaning depends on the type of the object. In particular, the length of an algebraic expression is the number of additive top-level terms in its expanded representation.

*Examples:*

```
julia> length(:(a+b))
2
julia> length(2)
1
```

Other objects that support a length operator include arrays, lists and matrices. The explicit meaning in these cases is included in the description of these objects.

## 7.11 MAP Operator

`Reduce.Algebra.map`

— Function`map(fnc,obj)`

The `map`

operator applies a uniform evaluation pattern to all members of a composite structure: a matrix, a list, or the arguments of an operator expression. The evaluation pattern can be a unary procedure, an operator, or an algebraic expression with one free variable.

It is used with the syntax:

` map(FNC:function,OBJ:object)`

Here `OBJ`

is a list, a matrix or an operator expression. `FNC`

can be one of the following:

- the name of an operator with a single argument: the operator is evaluated once with each element of `OBJ` as its single argument;
- an algebraic expression with exactly one free variable, i.e. a variable preceded by the tilde symbol: the expression is evaluated for each element of `OBJ`, with the element substituted for the free variable;
- a replacement rule of the form `var => rep` where `var` is a variable (a kernel without a subscript) and `rep` is an expression that contains `var`: the replacement expression `rep` is evaluated for each element of `OBJ` with the element substituted for `var`. The variable `var` may be optionally preceded by a tilde.

The rule form for `FNC`

is needed when more than one free variable occurs.

*Examples:*

```
julia> Algebra.map(:abs, (1,-2,:a,:(-a)))
(1, 2, :(abs(a)), :(abs(a)))
julia> Algebra.map(:(int(~w,x)), [:(x^2) :(x^5); :(x^4) :(x^5)])
2×2 Array{Any,2}:
:(x ^ 3 // 3) :(x ^ 6 // 6)
:(x ^ 5 // 5) :(x ^ 6 // 6)
julia> Algebra.map(:(~w*6), :(x^2/3 == y^3/2 -1))
:(2 * x ^ 2 = 3 * (y ^ 3 - 2))
```

You can use `map` in nested expressions. However, you cannot apply `map` to a non-composite object, e.g. an identifier or a number.

## 7.12 MKID Operator

`Reduce.Algebra.mkid`

— Function`mkid(u,v)`

In many applications, it is useful to create a set of identifiers for naming objects in a consistent manner. In most cases, it is sufficient to create such names from two components. The operator `mkid` is provided for this purpose. Its syntax is:

`R"mkid(U:id,V:id|non-negative integer)"`

for example

```
julia> Algebra.mkid(:a,3)
:a3
julia> Algebra.mkid(:apple,:s)
:apples
```

while `mkid(:(a+b),2)` gives an error.

The `set` statement can be used to give a value to the identifiers created by `mkid`, for example

`julia> Algebra.set(Algebra.mkid(:a,3),2)`

will give `a3` the value 2. Similarly, the `unset` statement can be used to remove the value from these identifiers, for example

`julia> Algebra.unset(Algebra.mkid(:a,3))`

## 7.13 The Pochhammer Notation

`Reduce.Algebra.pochhammer`

— Function`pochhammer(a,k)`

The Pochhammer notation $(a)_k$ (also called Pochhammer’s symbol) is supported by the binary operator `pochhammer(a,k)`. For a non-negative integer `k`, it is defined as (http://dlmf.nist.gov/5.2.iii)

$(a)_0 = 1,$

$(a)_k = a(a + 1)(a + 2)\cdots(a + k - 1).$

For $a \neq 0, -1, -2, \ldots$, this is equivalent to

$(a)_k = \frac{\Gamma(a+k)}{\Gamma(a)}.$

With `rounded` off, this expression is evaluated numerically if `a` and `k` are both integral, and otherwise may be simplified where appropriate. The simplification rules are based upon algorithms supplied by Wolfram Koepf.
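The defining product is easy to check numerically. The following pure-Julia helper (`poch` is a hypothetical name, not part of Reduce.jl) is a minimal sketch of the definition above:

```julia
# Rising factorial: (a)_0 = 1, (a)_k = a(a+1)...(a+k-1).
function poch(a, k::Integer)
    r = one(a)
    for j in 0:k-1
        r *= a + j
    end
    return r
end

poch(2, 3)   # 2*3*4 = 24
poch(5, 0)   # empty product, so 1
```

For integral arguments this agrees with what `pochhammer(a,k)` evaluates to under `rounded` off.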

## 7.14 PF Operator

`Reduce.Algebra.pf`

— Function`pf(expr,var)`

`R"pf(⟨exp⟩,⟨var⟩)"` transforms the expression `⟨exp⟩` into a list of partial fractions with respect to the main variable, `⟨var⟩`. `pf` does a complete partial fraction decomposition, and as the algorithms used are fairly unsophisticated (factorization and the extended Euclidean algorithm), the code may be unacceptably slow in complicated cases.

*Example:* Given `R"2/((x+1)^2*(x+2))"` in the workspace, `R"pf(ws,x)"` gives the result

```
    2      - 2         2
{-------,-------,--------------}
 x + 2    x + 1    2
                  x  + 2*x + 1
```

If you want the denominators in factored form, use `off(:exp)`. Thus, with `R"2/((x+1)^2*(x+2))"` in the workspace, the commands `R"off(exp); pf(ws,x)"` give the result

```
    2      - 2       2
{-------,-------,----------}
 x + 2    x + 1          2
                  (x + 1)
```

To recombine the terms, `for each … sum` can be used. So with the above list in the workspace, `R"for each j in ws sum j"` returns the result

```
        2
------------------
                2
 (x + 2)*(x + 1)
```

Alternatively, one can use the operations on lists to extract any desired term.
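The same decomposition can be requested through the Julia interface; a minimal sketch (assuming the REDUCE back end is installed):

```julia
using Reduce

# Partial fraction decomposition of 2/((x+1)^2*(x+2)) with respect to x,
# corresponding to the list shown above (sketch; requires REDUCE).
Algebra.pf(:(2 / ((x + 1)^2 * (x + 2))), :x)
```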

## 7.15 SELECT Operator

`Reduce.Algebra.select`

— Function`select(fnc,lst)`

The `select` operator extracts from a list, or from the arguments of an n-ary operator, elements corresponding to a boolean predicate. It is used with the syntax:

`select(⟨FNC:function⟩,⟨LST:list⟩)`

`FNC` can be one of the following forms:

- the name of an operator with a single argument: the operator is evaluated once on each element of `LST`;
- an algebraic expression with exactly one free variable, i.e. a variable preceded by the tilde symbol. The expression is evaluated for each element of `LST`, with the element substituted for the free variable;
- a replacement rule of the form `var => rep`, where `var` is a variable (a kernel without a subscript) and `rep` is an expression that contains `var`. `rep` is evaluated for each element of `LST` with the element substituted for `var`. `var` may optionally be preceded by a tilde.

The rule form for `FNC` is needed when more than one free variable occurs.

The result of evaluating `FNC` is interpreted as a boolean value corresponding to the conventions of REDUCE. These values are composed with the leading operator of the input expression.

*Examples:*

```
julia> Algebra.select(:(~w>0), (1,-1,2,-3,3))
(1, 2, 3)
```

```
select(evenp deg(~w,y),part((x+y)^5,0):=list)
-> {X^5 ,10*X^3*Y^2 ,5*X*Y^4}
select(evenp deg(~w,x),2x^2+3x^3+4x^4) -> 4X^4 + 2X^2
```

## 7.16 SOLVE Operator

`Reduce.Algebra.solve`

— Function`solve(r...)`

`solve` is an operator for solving one or more simultaneous algebraic equations. It is used with the syntax:

`R"SOLVE(⟨EXPRN:algebraic⟩[,⟨VAR:kernel⟩∣,⟨VARLIST:list of kernels⟩])"`

`EXPRN` is of the form `⟨expression⟩` or `{⟨expression1⟩,⟨expression2⟩, …}`. Each expression is an algebraic equation, or is the difference of the two sides of the equation. The second argument is either a kernel or a list of kernels representing the unknowns in the system. This argument may be omitted if the number of distinct, non-constant, top-level kernels equals the number of unknowns, in which case these kernels are presumed to be the unknowns.

For one equation, `solve` recursively uses factorization and decomposition, together with the known inverses of `log`, `sin`, `cos`, `^`, `acos`, `asin`, and linear, quadratic, cubic, quartic, or binomial factors. Solutions of equations built with exponentials or logarithms are often expressed in terms of Lambert’s `W` function, which is (partially) implemented in the special functions package.

Linear equations are solved by the multi-step elimination method due to Bareiss, unless the switch `cramer` is on, in which case Cramer’s method is used. The Bareiss method is usually more efficient unless the system is large and dense.

Non-linear equations are solved using the Groebner basis package (chapter 16.28). Users should note that this can be quite a time consuming process.

*Examples:*

```
Algebra.solve(:(log(sin(x+3))^5 == 8),:x)
Algebra.solve(:(a*log(sin(x+3))^5 - b), :(sin(x+3)))
Algebra.solve((:(a*x+y==3),:(y==-2)),(:x,:y))
```

`solve` returns a list of solutions. If there is one unknown, each solution is an equation for the unknown. If a complete solution was found, the unknown will appear by itself on the left-hand side of the equation. On the other hand, if the solve package could not find a solution, the “solution” will be an equation for the unknown in terms of the operator `root_of`. If there are several unknowns, each solution will be a list of equations for the unknowns. For example,

```
julia> Algebra.solve(:(x^2==1),:x)
(:(x = 1), :(x = -1))
julia> Algebra.solve(:(x^7-x^6+x^2==1),:x)
(:(x = root_of(x_ ^ 6 + x_ + 1, x_, tag_1)), :(x = 1))
julia> Algebra.solve((:(x+3y==7),:(y-x==1)),(:x,:y))
(:(x = 1), :(y = 2))
```

The `tag` argument is used to uniquely identify those particular solutions.

`Reduce.Algebra.root_multiplicities`

— Function`root_multiplicities()`

Solution multiplicities are stored in the global variable `root_multiplicities` rather than the solution list. The value of this variable is a list of the multiplicities of the solutions for the last call of `solve`. For example,

`julia> Algebra.solve(:(x^2==2x-1),:x); Algebra.root_multiplicities()`

gives the results

```
(:(x = 1),)
(2,)
```

`Reduce.multiplicities`

— Function`multiplicities(::Bool)`

If you want the multiplicities explicitly displayed, the switch `multiplicities` can be turned on. For example

`julia> Algebra.on(:multiplicities); Algebra.solve(:(x^2==2x-1),:x)`

yields the result

`(:(x = 1), :(x = 1))`

### 7.16.1 Handling of Undetermined Solutions

When `solve` cannot find a solution to an equation, it normally returns an equation for the relevant indeterminates in terms of the operator `root_of`. For example, the expression

`julia> Algebra.solve(:(cos(x)+log(x)),:x)`

returns the result

`(:(x = root_of(cos(x_) + log(x_), x_, tag_1)),)`

An expression with a top-level `root_of` operator is implicitly a list with an unknown number of elements (since we don’t always know how many solutions an equation has). If a substitution is made into such an expression, closed form solutions can emerge. If this occurs, the `root_of` construct is replaced by an operator `one_of`. At this point it is of course possible to transform the result of the original `solve` operator expression into a standard `solve` solution. To effect this, the operator `expand_cases` can be used.

The following example shows the use of these facilities:

```
julia> Algebra.solve(:(-a*x^3+a*x^2+x^4-x^3-4*x^2+4),:x)
(:(x = root_of((a * x_ ^ 2 - x_ ^ 3) + 4x_ + 4, x_, tag_2)), :(x = 1))
julia> Algebra.sub(:a=-1,ans)
(:(x=one_of((2, -1, -2), tag_2)), :(x=1))
julia> Algebra.expand_cases(ans)
(:(x=2), :(x=-1), :(x=-2), :(x=1))
```

### 7.16.2 Solutions of Equations Involving Cubics and Quartics

Since roots of cubics and quartics can often be very messy, a switch `fullroots` is available. When it is off (the default), results are not produced in closed form; the `root_of` construct is used instead.

In constructing the solutions of cubics and quartics, trigonometric forms are used where appropriate. This option is under the control of a switch `trigform`, which is normally on.

The following example illustrates the use of these facilities:

```
julia> Algebra.rlet(:xx => :(solve(x^3+x+1,x)))
julia> rcall(:xx)
(:(x = root_of(x_ ^ 3 + x_ + 1, x_, tag_1)),)
julia> Algebra.on(:fullroots)
julia> collect(rcall(:xx))
3-element Array{Expr,1}:
:(x = -((sqrt(3) * cosh(asinh((3 * sqrt(3)) // 2) // 3) * im - sinh(asinh((3 * sqrt(3)) // 2) // 3))) // sqrt(3))
:(x = (sqrt(3) * cosh(asinh((3 * sqrt(3)) // 2) // 3) * im + sinh(asinh((3 * sqrt(3)) // 2) // 3)) // sqrt(3))
:(x = (-2 * sinh(asinh((3 * sqrt(3)) // 2) // 3)) // sqrt(3))
julia> Algebra.off(:trigform)
julia> collect(rcall(:xx))
3-element Array{Expr,1}:
:(x = -(((sqrt(31) - 3 * sqrt(3)) ^ (2 / 3) * (sqrt(3) * im + 1) + 2 ^ (2 / 3) * (sqrt(3) * im - 1))) / (2 * (sqrt(31) - 3 * sqrt(3)) ^ (1 / 3) * 6 ^ (1 / 3) * 3 ^ (1 / 6)))
:(x = (2 ^ (2 / 3) * (sqrt(3) * im + 1) + (sqrt(31) - 3 * sqrt(3)) ^ (2 / 3) * (sqrt(3) * im - 1)) / (2 * (sqrt(31) - 3 * sqrt(3)) ^ (1 / 3) * 6 ^ (1 / 3) * 3 ^ (1 / 6)))
:(x = ((sqrt(31) - 3 * sqrt(3)) ^ (2 / 3) - 2 ^ (2 / 3)) / ((sqrt(31) - 3 * sqrt(3)) ^ (1 / 3) * 6 ^ (1 / 3) * 3 ^ (1 / 6)))
```

### 7.16.3 Other Options

If `solvesingular` is on (the default setting), degenerate systems such as `x+y=0`, `2x+2y=0` will be solved by introducing appropriate arbitrary constants. The consistent singular equation `0=0` or equations involving functions with multiple inverses may introduce unique new indeterminate kernels `arbcomplex(j)` or `arbint(j)` ($j=1,2,\ldots$), representing arbitrary complex or integer numbers respectively. To automatically select the principal branches, do `off(:allbranch)`. To avoid the introduction of new indeterminate kernels, do `off(:arbvars)` – then no equations are generated for the free variables and their original names are used to express the solution forms. To suppress solutions of consistent singular equations, do `off(:solvesingular)`.

To incorporate additional inverse functions do, for example:

```
R"put('sinh,'inverse,'asinh)"
R"put('asinh,'inverse,'sinh)"
```

together with any desired simplification rules such as

`R"for all x let sinh(asinh(x))=x, asinh(sinh(x))=x"`

For completeness, functions with non-unique inverses should be treated as `^`, `sin`, and `cos` are in the `solve` module source.

Arguments of `asin` and `acos` are not checked to ensure that the absolute value of the real part does not exceed 1; and arguments of `log` are not checked to ensure that the absolute value of the imaginary part does not exceed `π`; but checks (perhaps involving user response for non-numerical arguments) could be introduced using `let` statements for these operators.

### 7.16.4 Parameters and Variable Dependency

`Reduce.Algebra.requirements`

— Function`requirements()`

The proper design of a variable sequence supplied as a second argument to `solve` is important for the structure of the solution of an equation system. Any unknown in the system not in this list is considered totally free. E.g. the call

`Algebra.solve((:(x==2z),:(z==2y)),(:z,))`

produces an empty list as a result because there is no function `z = z(x,y)` which fulfills both equations for arbitrary `x` and `y` values. In such a case the share variable `requirements` displays a set of restrictions for the parameters of the system:

```
julia> Algebra.requirements()
(:(x - 4y),)
```

The non-existence of a formal solution is caused by a contradiction which disappears only if the parameters of the initial system are set such that all members of the requirements list take the value zero. For a linear system the set is complete: a solution of the requirements list makes the initial system solvable. E.g. in the above case a substitution $x = 4y$ makes the equation set consistent. For a non-linear system only one inconsistency is detected; if such a system has more than one inconsistency, you must reduce them one after the other. The set also shows you the dependency among the parameters: here one of $x$ and $y$ is free, and a formal solution of the system can be computed by adding it to the variable list of `solve`. The requirement set is not unique – there may be other such sets.

`Reduce.Algebra.assumptions`

— Function`assumptions()`

A system with parameters may have a formal solution, e.g.

```
julia> Algebra.solve((:(x==a*z+1),:(0==b*z-y)),(:z,:x))
(:(z = y // b), :(x = (a * y + b) // b))
```

which is not valid for all possible values of the parameters. The variable `assumptions` then contains a list of restrictions: the solutions are valid only as long as none of these expressions vanishes. Any zero of one of them represents a special case that is not covered by the formal solution. In the above case the value is

```
julia> Algebra.assumptions()
(:b,)
```

which formally excludes the case $b = 0$; obviously this special parameter value makes the system singular. The set of assumptions is complete for both linear and non-linear systems.

`solve` rearranges the variable sequence to reduce the (expected) computing time. This behavior is controlled by the switch `varopt`, which is on by default. If it is turned off, the supplied variable sequence is used, or the system kernel ordering is taken if the variable list is omitted. The effect is demonstrated by an example:

```
julia> @rcall s=(y^3+3x=0,x^2+y^2=1);
julia> Algebra.solve(:s,(:y,:x)) |> collect
2-element Array{Expr,1}:
:(y = root_of((y_ ^ 6 + 9 * y_ ^ 2) - 9, y_, tag_2))
:(x = -(y ^ 3) // 3)
julia> Algebra.off(:varopt); Algebra.solve(:s,(:y,:x)) |> collect
2-element Array{Expr,1}:
:(y = (-(((x ^ 4 - 2 * x ^ 2) + 10)) * x) // 3)
:(x = root_of(((x_ ^ 6 - 3 * x_ ^ 4) + 12 * x_ ^ 2) - 1, x_, tag_3))
```

In the first case, `solve` forms the solution as a set of pairs $(y_i, x(y_i))$ because the degree of $x$ is higher – such a rearrangement generally makes the internal computation of the Gröbner basis faster. In the second case the explicitly given variable sequence is used, so that the solution now has the form $(x_i, y(x_i))$. Controlling the variable sequence is especially important if the system has one or more free variables. As an alternative to turning off `varopt`, a partial dependency among the variables can be declared using the `depend` statement: `solve` then rearranges the variable sequence but keeps any variable ahead of those on which it depends.

```
julia> Algebra.on(:varopt)
julia> @rcall s=(a^3+b,b^2+c);
julia> Algebra.solve(:s,(:a,:b,:c))
(:(a = arbcomplex(1)), :(b = -(a ^ 3)), :(c = -(a ^ 6)))
julia> Algebra.depend(:a,:c); Algebra.depend(:b,:c)
julia> Algebra.solve(:s,(:a,:b,:c))
3-element Array{Expr,1}:
:(c = arbcomplex(2))
:(a = root_of(a_ ^ 6 + c, a_, tag_3))
:(b = -(a ^ 3))
```

Here `solve` is forced to put $c$ after $a$ and after $b$, but there is no obstacle to interchanging $a$ and $b$.

## 7.17 Even and Odd Operators

`Reduce.Algebra.even`

— Function`even(r...)`

An operator can be declared to be even in its first argument by the declaration `even`. Expressions involving an operator declared in this manner are transformed if the first argument contains a minus sign. Any other arguments are not affected. For example, the declaration

`julia> Algebra.even(:f1)`

means that

```
f1(-a) -> f1(a)
f1(-a,-b) -> f1(a,-b)
```

`Reduce.Algebra.odd`

— Function`odd(r...)`

An operator can be declared to be odd in its first argument by the declaration `odd`. Expressions involving an operator declared in this manner are transformed if the first argument contains a minus sign. Any other arguments are not affected. In addition, if say `f` is declared odd, then `f(0)` is replaced by zero unless `f` is also declared nonzero by the declaration `nonzero`. For example, the declaration

`julia> Algebra.odd(:f2)`

means that

```
f2(-a) -> -f2(a)
f2(0) -> 0
```

To inhibit the last transformation, say `nonzero(:f2)`.

## 7.18 Linear Operators

`Reduce.Algebra.linear`

— Function`linear(r...)`

An operator can be declared to be linear in its first argument over powers of its second argument. If an operator `f` is so declared, `f` of any sum is broken up into sums of `f`s, and any factors that are not powers of the variable are taken outside. This means that `f` must have (at least) two arguments. In addition, the second argument must be an identifier (or more generally a kernel), not an expression.

*Example:* If `f` were declared linear, then

`f(a*x^5+b*x+c,x) -> f(x^5,x)*a + f(x,x)*b + f(1,x)*c`

More precisely, not only will the variable and its powers remain within the scope of the `f` operator, but so will any variable and its powers that had been declared to `depend` on the prescribed variable; and so would any expression that contains that variable or a dependent variable at any level, e.g. `cos(sin(x))`.

To declare operators `f` and `g` to be linear operators, use:

`julia> Algebra.linear(:f,:g)`

The analysis is done of the first argument with respect to the second; any other arguments are ignored. The following rules of evaluation are used:

```
f(0) -> 0
f(-y,x) -> -f(y,x)
f(y+z,x) -> f(y,x)+f(z,x)
f(y*z,x) -> z*f(y,x) if z does not depend on x
f(y/z,x) -> f(y,x)/z if z does not depend on x
```

To summarize, `y` “depends” on the indeterminate `x` in the above if either of the following holds:

- `y` is an expression that contains `x` at any level as a variable, e.g. `cos(sin(x))`;
- any variable in the expression `y` has been declared dependent on `x` by use of the declaration `depend`.
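A brief sketch of how these rules interact with `depend` (assuming the REDUCE back end is installed; `f` here is a freshly declared operator):

```julia
using Reduce

Algebra.operator(:f)     # declare f before giving it properties
Algebra.linear(:f)       # f is linear in its first argument w.r.t. the second
Algebra.depend(:y, :x)   # y is now treated as depending on x
rcall(:(f(a * y, x)))    # a is independent of x and factors out; y stays inside f
```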

The use of such linear operators can be seen in the paper Fox, J.A. and A. C. Hearn, “Analytic Computation of Some Integrals in Fourth Order Quantum Electrodynamics” Journ. Comp. Phys. 14 (1974) 301-317, which contains a complete listing of a program for definite integration of some expressions that arise in fourth order quantum electrodynamics.

## 7.19 Non-Commuting Operators

`Reduce.Algebra.noncom`

— Function`noncom(r...)`

An operator can be declared to be non-commutative under multiplication by the declaration `noncom`.

*Example:* After the declaration

`julia> Algebra.noncom(:u,:v);`

the expressions `u(x)*u(y)-u(y)*u(x)` and `u(x)*v(y)-v(y)*u(x)` will remain unchanged on simplification, and in particular will not simplify to zero.

Note that it is the operator (`u` and `v` in the above example) and not the variable that has the non-commutative property.

The `let` statement may be used to introduce rules of evaluation for such operators. In particular, the boolean operator `ordp` is useful for introducing an ordering on such expressions.

*Example:* The rule

`R"for all x,y such that x neq y and ordp(x,y) let u(x)*u(y)= u(y)*u(x)+comm(x,y)"`

would introduce the commutator of `u(x)` and `u(y)` for all `x` and `y`. Note that since `ordp(x,x)` is true, the equality check is necessary in the degenerate case to avoid a circular loop in the rule.

## 7.20 Symmetric and Antisymmetric Operators

`Reduce.Algebra.symmetric`

— Function`symmetric(r...)`

An operator can be declared to be symmetric with respect to its arguments by the declaration `symmetric`. For example

`julia> Algebra.symmetric(:u,:v);`

means that any expression involving the top level operators `u` or `v` will have its arguments reordered to conform to the internal order used by REDUCE. The user can change this order for kernels by the command `korder`. For example, `u(x,v(1,2))` would become `u(v(2,1),x)`, since numbers are ordered in decreasing order, and expressions are ordered in decreasing order of complexity.

`Reduce.Algebra.antisymmetric`

— Function`antisymmetric(r...)`

The declaration `antisymmetric` declares an operator to be antisymmetric. For example,

`julia> Algebra.antisymmetric(:l,:m);`

means that any expression involving the top level operators `l` or `m` will have its arguments reordered to conform to the internal order of the system, and the sign of the expression changed if there are an odd number of argument interchanges necessary to bring about the new order.

For example, `l(x,m(1,2))` would become `-l(-m(2,1),x)` since one interchange occurs with each operator. An expression like `l(x,x)` would also be replaced by `0`.

## 7.21 Declaring New Prefix Operators

`Reduce.Algebra.operator`

— Function`operator(r...)`

The user may add new prefix operators to the system by using the declaration `operator`. For example:

`julia> Algebra.operator(:h,:g1,:arctan)`

adds the prefix operators `h`, `g1` and `arctan` to the system.

This allows symbols like `h(w)`, `h(x,y,z)`, `g1(p+q)`, `arctan(u/v)` to be used in expressions, but no meaning or properties of the operator are implied. The same operator symbol can be used equally well as a 0-, 1-, 2-, 3-, etc.-place operator.

To give a meaning to an operator symbol, or to express some of its properties, `let` statements can be used, or the operator can be given a definition as a procedure.
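As a sketch (assuming the REDUCE back end is installed; the operator name `h` and the rule are arbitrary), an operator can be declared and then given a property with a rule:

```julia
using Reduce

Algebra.operator(:h)        # h is now a prefix operator with no properties
rcall(:(h(u, v) + h(0)))    # left unevaluated: no meaning has been assigned
Algebra.rlet(:(h(0)) => 0)  # give h a value at zero via a let-style rule
rcall(:(h(u, v) + h(0)))    # now h(0) is replaced by 0
```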

If the user forgets to declare an identifier as an operator, the system will prompt the user to do so in interactive mode, or do it automatically in non-interactive mode. A diagnostic message will also be printed if an identifier is declared `operator` more than once.

Operators once declared are global in scope, and so can then be referenced anywhere in the program. In other words, a declaration within a block (or a procedure) does not limit the scope of the operator to that block, nor does the operator go away on exiting the block (use `clear` instead for this purpose).

## 7.22 Declaring New Infix Operators

`Reduce.Algebra.infix`

— Function`infix(r...)`

Users can add new infix operators by using the declarations `infix` and `precedence`. For example,

`julia> Algebra.infix(:mm)`

The declaration `infix(:mm)` would allow one to use the symbol `mm` as an infix operator: `R"a mm b"` instead of `R"mm(a,b)"`.

`Reduce.Algebra.precedence`

— Function`precedence(a,b)`

Users can add new infix operators by using the declarations `infix` and `precedence`. For example,

`julia> Algebra.precedence(:mm,:-)`

The declaration `precedence(:mm,:-)` says that `mm` should be inserted into the infix operator precedence list just after the `-` operator. This gives it higher precedence than `-` and lower precedence than `*`. Thus `R"a - b mm c - d"` means `R"a - (b mm c) - d"`, while `R"a * b mm c * d"` means `R"(a * b) mm (c * d)"`.

Both infix and prefix operators have no transformation properties unless `let` statements or procedure declarations are used to assign a meaning.

We should note here that infix operators so defined are always binary: `R"a mm b mm c"` means `R"(a mm b) mm c"`.

## 7.23 Creating/Removing Variable Dependency

`Reduce.Algebra.depend`

— Function`depend(r...)`

There are several facilities in REDUCE, such as the differentiation operator and the linear operator facility, that can utilize knowledge of the dependency between various variables, or kernels. Such dependency may be expressed by the command `depend`. This takes an arbitrary number of arguments and sets up a dependency of the first argument on the remaining arguments. For example,

`julia> Algebra.depend(:x,:y,:z)`

says that `x` is dependent on both `y` and `z`.

`julia> Algebra.depend(:z,:(cos(x)),:y)`

says that `z` is dependent on `cos(x)` and `y`.

`Reduce.Algebra.nodepend`

— Function`nodepend(r...)`

Dependencies introduced by `depend` can be removed by `nodepend`. The arguments of this are the same as for `depend`. For example, given the above dependencies,

`julia> Algebra.nodepend(:z,:(cos(x)))`

says that `z` is no longer dependent on `cos(x)`, although it remains dependent on `y`.
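The practical effect of dependency declarations is most visible with differentiation. A minimal sketch (assuming the REDUCE back end is installed):

```julia
using Reduce

Algebra.depend(:y, :x)     # y is now treated as an implicit function of x
Algebra.df(:(y^2), :x)     # chain rule applies: a multiple of df(y,x)
Algebra.nodepend(:y, :x)   # remove the dependency again
Algebra.df(:(y^2), :x)     # y no longer depends on x, so the derivative is 0
```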