In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a higher-order function in computer science).
This article considers mainly linear differential operators, which are the most common type. However, non-linear differential operators also exist, such as the Schwarzian derivative.
Given a nonnegative integer m, an order-$m$ linear differential operator is a map $P$ from one function space to another that can be written as:

$$P = \sum_{|\alpha| \le m} a_\alpha(x) D^\alpha,$$

where $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$ is a multi-index of non-negative integers, $|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n$, and for each $\alpha$, $a_\alpha(x)$ is a function on some open domain in n-dimensional space. The operator $D^\alpha$ is interpreted as

$$D^\alpha = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.$$

Thus for a function $f$:

$$Pf = \sum_{|\alpha| \le m} a_\alpha(x) \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.$$

The notation $D^\alpha$ is justified (i.e., independent of the order of differentiation) because of the symmetry of second derivatives.
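As a quick numerical illustration (a sketch, not part of the standard development), central differences confirm that the two orders of mixed differentiation agree for a smooth function such as $f(x, y) = x^3 y^2$, whose exact mixed partial is $6x^2 y$:

```python
# Illustration: D^alpha is well-defined because mixed partial
# derivatives of a smooth function commute.

H = 1e-4  # finite-difference step

def d_dx(f):
    """Central-difference partial derivative in x."""
    return lambda x, y: (f(x + H, y) - f(x - H, y)) / (2 * H)

def d_dy(f):
    """Central-difference partial derivative in y."""
    return lambda x, y: (f(x, y + H) - f(x, y - H)) / (2 * H)

f = lambda x, y: x**3 * y**2

f_xy = d_dx(d_dy(f))(1.0, 2.0)  # differentiate in y, then in x
f_yx = d_dy(d_dx(f))(1.0, 2.0)  # differentiate in x, then in y
# both approximate the exact mixed partial 6 * x^2 * y = 12 at (1, 2)
```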
The polynomial p obtained by replacing D by variables $\xi$ in P is called the total symbol of P; i.e., the total symbol of P above is:

$$p(x, \xi) = \sum_{|\alpha| \le m} a_\alpha(x) \xi^\alpha,$$

where $\xi^\alpha = \xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}$. The highest homogeneous component of the symbol, namely,

$$\sigma(x, \xi) = \sum_{|\alpha| = m} a_\alpha(x) \xi^\alpha,$$

is called the principal symbol of P. While the total symbol is not intrinsically defined, the principal symbol is intrinsically defined (i.e., it is a function on the cotangent bundle).[1]
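For constant coefficients, the passage from operator to symbol is purely combinatorial. The sketch below (with a hypothetical example operator $D_1^2 + D_2^2 + D_1$) reads off both symbols from a table of multi-indices, substituting $\xi$ directly for $D$; note that some conventions substitute $i\xi$ instead:

```python
from math import prod

def total_symbol(op, xi):
    """p(xi) = sum_alpha a_alpha * xi^alpha for a constant-coefficient
    operator, given as a dict mapping multi-index tuples to coefficients."""
    return sum(a * prod(x**k for x, k in zip(xi, alpha))
               for alpha, a in op.items())

def principal_symbol(op, xi):
    """Keep only the highest-order terms |alpha| = m."""
    m = max(sum(alpha) for alpha in op)
    return total_symbol({a_: c for a_, c in op.items() if sum(a_) == m}, xi)

# Example operator: P = D1^2 + D2^2 + D1 (Laplacian plus a first-order term)
P = {(2, 0): 1.0, (0, 2): 1.0, (1, 0): 1.0}

p_total = total_symbol(P, (2.0, 3.0))      # 4 + 9 + 2 = 15
p_princ = principal_symbol(P, (2.0, 3.0))  # 4 + 9 = 13
```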
More generally, let E and F be vector bundles over a manifold X. Then the linear operator

$$P : C^\infty(E) \to C^\infty(F)$$

is a differential operator of order $k$ if, in local coordinates on X, we have

$$Pu(x) = \sum_{|\alpha| = k} P^\alpha(x) \frac{\partial^\alpha u}{\partial x^\alpha} + \text{lower-order terms},$$

where, for each multi-index $\alpha$, $P^\alpha(x) : E \to F$ is a bundle map, symmetric on the indices $\alpha$. The $k$th-order coefficients of P transform as a symmetric tensor

$$\sigma_P : S^k(T^*X) \otimes E \to F,$$

whose domain is the tensor product of the kth symmetric power of the cotangent bundle of X with E, and whose codomain is F. This symmetric tensor is known as the principal symbol (or just the symbol) of P.
The coordinate system $x^i$ permits a local trivialization of the cotangent bundle by the coordinate differentials $dx^i$, which determine fiber coordinates $\xi_i$. In terms of a basis of frames $e_\mu$, $f_\nu$ of E and F, respectively, the differential operator P decomposes into components

$$(Pu)_\nu = \sum_\mu P_{\nu\mu} u_\mu$$

on each section u of E. Here $P_{\nu\mu}$ is the scalar differential operator defined by

$$P_{\nu\mu} = \sum_\alpha P^\alpha_{\nu\mu} \frac{\partial^{|\alpha|}}{\partial x^\alpha}.$$

With this trivialization, the principal symbol can now be written

$$(\sigma_P(\xi)u)_\nu = \sum_{|\alpha| = k} \sum_\mu P^\alpha_{\nu\mu} \xi_\alpha u_\mu.$$

In the cotangent space over a fixed point x of X, the symbol $\sigma_P$ defines a homogeneous polynomial of degree k in $T^*_x X$ with values in $\operatorname{Hom}(E_x, F_x)$.
A differential operator P and its symbol appear naturally in connection with the Fourier transform as follows. Let ƒ be a Schwartz function. Then by the inverse Fourier transform,

$$Pf(x) = \int_{\mathbf{R}^n} e^{2\pi i x \cdot \xi} p(x, 2\pi i \xi) \hat{f}(\xi) \, d\xi.$$

This exhibits P as a Fourier multiplier. A more general class of functions p(x,ξ), satisfying at most polynomial growth conditions in ξ under which this integral is well-behaved, comprises the pseudo-differential operators.
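The multiplier picture can be tried out on a periodic grid. The sketch below is illustrative only: it uses a naive $O(n^2)$ discrete Fourier transform rather than an FFT, and the multiplier is $ik$ under the $e^{ikx}$ convention (conventions differ by factors of $2\pi$):

```python
import cmath
import math

def dft(f):
    """Naive discrete Fourier transform (O(n^2), fine for a demo)."""
    n = len(f)
    return [sum(f[j] * cmath.exp(-2j * math.pi * j * k / n) for j in range(n))
            for k in range(n)]

def idft(F):
    n = len(F)
    return [sum(F[k] * cmath.exp(2j * math.pi * j * k / n) for k in range(n)) / n
            for j in range(n)]

def spectral_derivative(samples):
    """Differentiate a periodic sample on [0, 2*pi) by applying the
    Fourier multiplier i*k to each mode (Nyquist mode zeroed)."""
    n = len(samples)
    F = dft(samples)
    mult = [0 if k == n // 2 else 1j * (k if k < n / 2 else k - n)
            for k in range(n)]
    return [z.real for z in idft([m * c for m, c in zip(mult, F)])]

n = 16
xs = [2 * math.pi * j / n for j in range(n)]
df = spectral_derivative([math.sin(x) for x in xs])
# df[j] approximates cos(xs[j]); exact up to roundoff since sin is band-limited
```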
The differential operator P is elliptic if its symbol is invertible; that is, for each nonzero $\xi \in T^*X$ the bundle map $\sigma_P(\xi)$ is invertible. On a compact manifold, it follows from the elliptic theory that P is a Fredholm operator: it has finite-dimensional kernel and cokernel.
In the development of holomorphic functions of a complex variable z = x + iy, sometimes a complex function is considered to be a function of two real variables x and y. Use is made of the Wirtinger derivatives, which are the partial differential operators:

$$\frac{\partial}{\partial z} = \frac{1}{2} \left( \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right), \qquad \frac{\partial}{\partial \bar{z}} = \frac{1}{2} \left( \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right).$$
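A finite-difference sketch (illustrative, not canonical) of the Wirtinger derivatives applied to the holomorphic function $f(z) = z^2$, for which $\partial f / \partial \bar{z}$ should vanish by the Cauchy–Riemann equations and $\partial f / \partial z$ should equal $2z$:

```python
H = 1e-6  # finite-difference step

def wirtinger_dz(f, x, y):
    """d/dz = (1/2)(d/dx - i d/dy), approximated by central differences."""
    fx = (f(x + H, y) - f(x - H, y)) / (2 * H)
    fy = (f(x, y + H) - f(x, y - H)) / (2 * H)
    return 0.5 * (fx - 1j * fy)

def wirtinger_dzbar(f, x, y):
    """d/dzbar = (1/2)(d/dx + i d/dy)."""
    fx = (f(x + H, y) - f(x - H, y)) / (2 * H)
    fy = (f(x, y + H) - f(x, y - H)) / (2 * H)
    return 0.5 * (fx + 1j * fy)

f = lambda x, y: (x + 1j * y) ** 2  # f(z) = z^2, holomorphic

dz = wirtinger_dz(f, 1.0, 2.0)        # approximately 2z = 2 + 4i
dzbar = wirtinger_dzbar(f, 1.0, 2.0)  # approximately 0 (Cauchy-Riemann)
```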
The differential operator del, also called nabla, is an important vector differential operator. It appears frequently in physics in places like the differential form of Maxwell's equations. In three-dimensional Cartesian coordinates, del is defined as

$$\nabla = \hat{x} \frac{\partial}{\partial x} + \hat{y} \frac{\partial}{\partial y} + \hat{z} \frac{\partial}{\partial z}.$$
The most common differential operator is the action of taking the derivative. Common notations for taking the first derivative with respect to a variable x include:

$$\frac{d}{dx}, \quad D, \quad D_x, \quad \text{and} \quad \partial_x.$$

When taking higher, nth order derivatives, the operator may be written:

$$\frac{d^n}{dx^n}, \quad D^n, \quad D_x^n, \quad \text{or} \quad \partial_x^n.$$

The derivative of a function f of an argument x is sometimes given as either of the following:

$$f'(x), \qquad \frac{df(x)}{dx}.$$
The D notation's use and creation is credited to Oliver Heaviside, who considered differential operators of the form

$$\sum_{k=0}^{n} c_k D^k$$

in his study of differential equations.
In writing, following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator itself. Sometimes an alternative notation is used: the result of applying the operator to the function on the left side of the operator, the result of applying it to the function on the right side, and the difference obtained when applying the differential operator to the functions on both sides are denoted by arrows as follows:

$$f \overleftarrow{\partial_x} g = g \cdot \partial_x f,$$
$$f \overrightarrow{\partial_x} g = f \cdot \partial_x g,$$
$$f \overleftrightarrow{\partial_x} g = f \cdot \partial_x g - g \cdot \partial_x f.$$
Such a bidirectional-arrow notation is frequently used for describing the probability current of quantum mechanics.
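For concreteness, a small numerical sketch of the bidirectional combination $f \,\overleftrightarrow{\partial_x}\, g = f \cdot \partial_x g - g \cdot \partial_x f$; the functions chosen here are arbitrary:

```python
H = 1e-6

def d(f, x):
    """Central-difference derivative."""
    return (f(x + H) - f(x - H)) / (2 * H)

def bidirectional(f, g, x):
    """f <-> g = f * g' - g * f', the antisymmetric combination that
    appears in the quantum-mechanical probability current."""
    return f(x) * d(g, x) - g(x) * d(f, x)

f = lambda x: x ** 2
g = lambda x: x ** 3

val = bidirectional(f, g, 1.0)  # 3x^4 - 2x^4 = x^4 = 1 at x = 1
```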
Given a linear differential operator T,

$$Tu = \sum_{k=0}^{n} a_k(x) D^k u,$$

the adjoint of this operator is defined as the operator $T^*$ such that

$$\langle Tu, v \rangle = \langle u, T^* v \rangle,$$

where the notation $\langle \cdot, \cdot \rangle$ is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product (or inner product).
In the functional space of square-integrable functions on a real interval (a, b), the scalar product is defined by

$$\langle f, g \rangle = \int_a^b \overline{f(x)} \, g(x) \, dx,$$

where the line over f(x) denotes the complex conjugate of f(x). If one moreover adds the condition that f or g vanishes as $x \to a$ and $x \to b$, one can also define the adjoint of T by

$$T^* u = \sum_{k=0}^{n} (-1)^k D^k \left[ \overline{a_k(x)} u \right].$$

This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When $T^*$ is defined according to this formula, it is called the formal adjoint of T.
A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.
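As an illustrative check of the formal-adjoint formula in the simplest case $T = D$ (so $T^* = -D$, the coefficient being a real constant), numerical integration confirms $\langle Tf, g \rangle = \langle f, T^* g \rangle$ once $f$ vanishes at the endpoints; the functions below are arbitrary choices:

```python
import math

def simpson(fn, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = fn(a) + fn(b) + sum((4 if i % 2 else 2) * fn(a + i * h)
                            for i in range(1, n))
    return s * h / 3

# T = d/dx with constant coefficient 1; its formal adjoint is T* = -d/dx.
f  = lambda x: math.sin(math.pi * x)         # vanishes at both endpoints
df = lambda x: math.pi * math.cos(math.pi * x)
g  = lambda x: x * x
dg = lambda x: 2 * x

lhs = simpson(lambda x: df(x) * g(x), 0.0, 1.0)     # <Tf, g>
rhs = simpson(lambda x: f(x) * (-dg(x)), 0.0, 1.0)  # <f, T*g>
```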
If Ω is a domain in $\mathbf{R}^n$, and P a differential operator on Ω, then the adjoint of P is defined in $L^2(\Omega)$ by duality in the analogous manner:

$$\langle f, P^* g \rangle_{L^2(\Omega)} = \langle P f, g \rangle_{L^2(\Omega)}$$
for all smooth L2 functions f, g. Since smooth functions are dense in L2, this defines the adjoint on a dense subset of L2: P* is a densely defined operator.
The Sturm–Liouville operator is a well-known example of a formal self-adjoint operator. This second-order linear differential operator L can be written in the form

$$Lu = -(pu')' + qu = -(pu'' + p'u') + qu = -pu'' - p'u' + qu.$$
This property can be proven using the formal adjoint definition above.[4]
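The formal self-adjointness $\langle Lf, g \rangle = \langle f, Lg \rangle$ can also be spot-checked numerically; the coefficient functions $p(x) = 1 + x$, $q(x) = x$ and the trial functions below are arbitrary choices for illustration, with both trial functions vanishing at the endpoints:

```python
import math

def simpson(fn, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = fn(a) + fn(b) + sum((4 if i % 2 else 2) * fn(a + i * h)
                            for i in range(1, n))
    return s * h / 3

p  = lambda x: 1.0 + x  # p > 0 on [0, 1]
dp = lambda x: 1.0
q  = lambda x: x

def apply_L(u, du, d2u):
    """L u = -(p u')' + q u = -p u'' - p' u' + q u, as a function of x."""
    return lambda x: -p(x) * d2u(x) - dp(x) * du(x) + q(x) * u(x)

pi = math.pi
f, df, d2f = (lambda x: math.sin(pi * x),
              lambda x: pi * math.cos(pi * x),
              lambda x: -pi * pi * math.sin(pi * x))
g, dg, d2g = (lambda x: math.sin(2 * pi * x),
              lambda x: 2 * pi * math.cos(2 * pi * x),
              lambda x: -4 * pi * pi * math.sin(2 * pi * x))

Lf, Lg = apply_L(f, df, d2f), apply_L(g, dg, d2g)
lhs = simpson(lambda x: Lf(x) * g(x), 0.0, 1.0)  # <Lf, g>
rhs = simpson(lambda x: f(x) * Lg(x), 0.0, 1.0)  # <f, Lg>
```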
Any polynomial in D with function coefficients is also a differential operator. We may also compose differential operators by the rule

$$(D_1 \circ D_2)(f) = D_1(D_2(f)).$$

Some care is then required: firstly, any function coefficients in the operator $D_2$ must be differentiable as many times as the application of $D_1$ requires. To get a ring of such operators we must assume derivatives of all orders of the coefficients used. Secondly, this ring will not be commutative: an operator $gD$ is not in general the same as $Dg$. For example, we have the relation basic in quantum mechanics:

$$Dx - xD = 1.$$
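The commutation relation $Dx - xD = 1$ can be checked by applying both sides to a sample function (here an arbitrary cubic), since $D(xf) - xDf = f + xf' - xf' = f$:

```python
H = 1e-6

def D(f):
    """d/dx by central differences, as an operator on functions."""
    return lambda x: (f(x + H) - f(x - H)) / (2 * H)

def mul_x(f):
    """Multiplication operator: (x . f)(x) = x * f(x)."""
    return lambda x: x * f(x)

f = lambda x: x ** 3 + 2.0

# (Dx - xD) f = f: the commutator acts as the identity
val = D(mul_x(f))(1.5) - mul_x(D(f))(1.5)  # should be f(1.5) = 5.375
```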
The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.
The differential operators also obey the shift theorem:

$$P(D)\left(e^{ax} y\right) = e^{ax} P(D + a)\, y.$$
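A numerical spot-check of the shift theorem $P(D)(e^{ax} y) = e^{ax} P(D+a)\, y$, with the hypothetical choices $P(D) = D^2 + 3D + 2$, $a = 1$, and $y = \sin$, so that $P(D+1) = D^2 + 5D + 6$:

```python
import math

H = 1e-4

def D1(f, x):
    """First derivative by central differences."""
    return (f(x + H) - f(x - H)) / (2 * H)

def D2(f, x):
    """Second derivative by central differences."""
    return (f(x + H) - 2 * f(x) + f(x - H)) / (H * H)

a = 1.0
y = math.sin
h = lambda x: math.exp(a * x) * y(x)  # e^{ax} y

x0 = 0.7
# Left side: P(D)(e^{ax} y) with P(D) = D^2 + 3D + 2
lhs = D2(h, x0) + 3 * D1(h, x0) + 2 * h(x0)
# Right side: e^{ax} P(D + a) y; here P(D + 1) y = y'' + 5 y' + 6 y
rhs = math.exp(a * x0) * (-math.sin(x0) + 5 * math.cos(x0) + 6 * math.sin(x0))
```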
If R is a ring, let $R\langle D, X \rangle$ be the non-commutative polynomial ring over R in the variables D and X, and I the two-sided ideal generated by $DX - XD - 1$. Then the ring of univariate polynomial differential operators over R is the quotient ring $R\langle D, X \rangle / I$. This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form $X^a D^b \bmod I$. It supports an analogue of Euclidean division of polynomials.
Differential modules over $R[X]$ (for the standard derivation) can be identified with modules over $R\langle D, X \rangle / I$.
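The normal form $X^a D^b$ makes this ring easy to model. The sketch below (an illustration, not a library API) multiplies elements by applying the rewrite rule $D^b X^c = \sum_k k! \binom{b}{k} \binom{c}{k} X^{c-k} D^{b-k}$, the closed form of repeatedly using $DX = XD + 1$:

```python
from math import comb, factorial

def weyl_mul(u, v):
    """Multiply two elements of the Weyl algebra R<D, X>/(DX - XD - 1).
    Elements are dicts mapping (a, b) to the coefficient of X^a D^b."""
    out = {}
    for (a, b), cu in u.items():
        for (c, d), cv in v.items():
            # normal-ordering: D^b X^c = sum_k k! C(b,k) C(c,k) X^{c-k} D^{b-k}
            for k in range(min(b, c) + 1):
                key = (a + c - k, b + d - k)
                coef = cu * cv * factorial(k) * comb(b, k) * comb(c, k)
                out[key] = out.get(key, 0) + coef
    return {m: c for m, c in out.items() if c != 0}

D = {(0, 1): 1}
X = {(1, 0): 1}

dx = weyl_mul(D, X)  # D*X = XD + 1, i.e. {(1, 1): 1, (0, 0): 1}
xd = weyl_mul(X, D)  # X*D is already in normal form: {(1, 1): 1}
```

Subtracting the two products recovers the defining relation $DX - XD = 1$.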
Ring of multivariate polynomial differential operators
If R is a ring, let $R\langle D_1, \ldots, D_n, X_1, \ldots, X_n \rangle$ be the non-commutative polynomial ring over R in the variables $D_1, \ldots, D_n, X_1, \ldots, X_n$, and I the two-sided ideal generated by the elements

$$(D_i X_j - X_j D_i) - \delta_{ij}, \qquad D_i D_j - D_j D_i, \qquad X_i X_j - X_j X_i$$

for all $1 \le i, j \le n$, where $\delta$ is the Kronecker delta. Then the ring of multivariate polynomial differential operators over R is the quotient ring $R\langle D_1, \ldots, D_n, X_1, \ldots, X_n \rangle / I$.

This is a non-commutative simple ring.

Every element can be written in a unique way as an R-linear combination of monomials of the form $X_1^{a_1} \cdots X_n^{a_n} D_1^{b_1} \cdots D_n^{b_n}$.
In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let E and F be two vector bundles over a differentiable manifold M. An R-linear mapping of sections P : Γ(E) → Γ(F) is said to be a kth-order linear differential operator if it factors through the jet bundle $J^k(E)$.
In other words, there exists a linear mapping of vector bundles

$$i_P : J^k(E) \to F$$

such that

$$P = i_P \circ j^k,$$

where $j^k : \Gamma(E) \to \Gamma(J^k(E))$ is the prolongation that associates to any section of E its k-jet.
This just means that for a given section s of E, the value of P(s) at a point x ∈ M is fully determined by the kth-order infinitesimal behavior of s in x. In particular this implies that P(s)(x) is determined by the germ of s in x, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem showing that the converse is also true: any (linear) local operator is differential.
An equivalent, but purely algebraic description of linear differential operators is as follows: an R-linear map P is a kth-order linear differential operator, if for any k + 1 smooth functions $f_0, \ldots, f_k \in C^\infty(M)$ we have

$$[f_k, [f_{k-1}, [\cdots [f_0, P] \cdots]]] = 0.$$

Here the bracket $[f, P] : \Gamma(E) \to \Gamma(F)$ is defined as the commutator

$$[f, P](s) = P(f \cdot s) - f \cdot P(s).$$
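This characterization can be spot-checked for the first-order operator $P = d/dx$ (so $k = 1$): one bracket with a function leaves a multiplication operator (by the derivative of that function), and a second bracket annihilates it. The functions below are arbitrary choices:

```python
import math

H = 1e-4

def P(s):
    """A first-order operator: d/dx by central differences."""
    return lambda x: (s(x + H) - s(x - H)) / (2 * H)

def bracket(f, Q):
    """[f, Q](s) = Q(f * s) - f * Q(s), again an operator on sections."""
    def br(s):
        return lambda x: Q(lambda t: f(t) * s(t))(x) - f(x) * Q(s)(x)
    return br

f0 = lambda x: x ** 2
f1 = lambda x: math.sin(x)
s  = lambda x: math.exp(x)

once_val  = bracket(f0, P)(s)(0.8)               # [f0, P] = multiplication by f0'
twice_val = bracket(f1, bracket(f0, P))(s)(0.8)  # vanishes: P has order 1 = k
```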
This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
A differential operator acting on two functions is called a bidifferential operator. The notion appears, for instance, in an associative algebra structure on a deformation quantization of a Poisson algebra.[5]
A microdifferential operator is a type of operator on an open subset of a cotangent bundle, as opposed to an open subset of a manifold. It is obtained by extending the notion of a differential operator to the cotangent bundle.[6]