Carleman matrix
From Wikipedia, the free encyclopedia
The Carleman matrix of an infinitely differentiable function $f(x)$ is defined as:

$$M[f]_{jk}=\frac{1}{k!}\left[\frac{d^{k}}{dx^{k}}(f(x))^{j}\right]_{x=0}~,$$

so as to satisfy the (Taylor series) equation:

$$(f(x))^{j}=\sum_{k=0}^{\infty}M[f]_{jk}x^{k}.$$

For instance, the computation of $f(x)$ by

$$f(x)=\sum_{k=0}^{\infty}M[f]_{1,k}x^{k}$$

simply amounts to the dot-product of row 1 of $M[f]$ with a column vector $\left[1,x,x^{2},x^{3},...\right]^{\tau}$.

The entries of $M[f]$ in the next row give the 2nd power of $f(x)$:

$$f(x)^{2}=\sum_{k=0}^{\infty}M[f]_{2,k}x^{k}~,$$

and also, in order to have the zeroth power of $f(x)$ in $M[f]$, we adopt the row 0 containing zeros everywhere except the first position, such that

$$f(x)^{0}=1=\sum_{k=0}^{\infty}M[f]_{0,k}x^{k}=1+\sum_{k=1}^{\infty}0\cdot x^{k}~.$$

Thus, the dot product of $M[f]$ with the column vector $\left[1,x,x^{2},...\right]^{T}$ yields the column vector $\left[1,f(x),f(x)^{2},...\right]^{T}$, i.e.,

$$M[f]\begin{bmatrix}1\\x\\x^{2}\\x^{3}\\\vdots\end{bmatrix}=\begin{bmatrix}1\\f(x)\\(f(x))^{2}\\(f(x))^{3}\\\vdots\end{bmatrix}.$$
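The definition above translates directly into code. The following sketch (the helper name `carleman` and the truncation size `N` are illustrative choices, not from the article) builds an N×N truncation of $M[f]$ with sympy and checks that each row, dotted with $[1,x,x^{2},\dots]^{T}$, reproduces the truncated Taylor series of the corresponding power of $f$:

```python
# Build a truncated Carleman matrix from the definition
# M[f]_{jk} = (1/k!) [d^k/dx^k (f(x))^j]_{x=0}, then verify that
# M[f] * [1, x, x^2, ...]^T gives the powers of f(x) as Taylor polynomials.
import sympy as sp

x = sp.symbols('x')

def carleman(f, N):
    """N x N truncation of the Carleman matrix of f (rows/cols 0..N-1)."""
    return sp.Matrix(N, N,
                     lambda j, k: sp.diff(f**j, x, k).subs(x, 0) / sp.factorial(k))

N = 6
f = sp.sin(x)                       # example function
M = carleman(f, N)

# Row j dotted with [1, x, x^2, ...] is the Taylor polynomial of f(x)^j.
powers = sp.Matrix([x**k for k in range(N)])
approx = M * powers
for j in range(N):
    taylor = sp.series(f**j, x, 0, N).removeO()
    assert sp.expand(approx[j] - taylor) == 0
```

Since only finitely many terms can be stored, the matrix is necessarily a truncation; row $j$ then agrees with the expansion of $f(x)^{j}$ only up to order $N-1$.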
Generalization
A generalization of the Carleman matrix of a function can be defined around any point, such as:

$$M[f]_{x_{0}}=M_{x}[x-x_{0}]\,M[f]\,M_{x}[x+x_{0}]$$

or

$$M[f]_{x_{0}}=M[g]$$

where $g(x)=f(x+x_{0})-x_{0}$. This allows the matrix power to be related as:

$$(M[f]_{x_{0}})^{n}=M_{x}[x-x_{0}]\,M[f]^{n}\,M_{x}[x+x_{0}]$$
General Series
Another way to generalize the idea even further is to think in terms of a general series. Let

$$h(x)=\sum_{n}c_{n}(h)\cdot\psi_{n}(x)$$

be a series approximation of $h(x)$, where $\{\psi_{n}(x)\}_{n}$ is a basis of the space containing $h(x)$. Assuming that $\{\psi_{n}(x)\}_{n}$ is also a basis for $f(x)$, we can define

$$G[f]_{mn}=c_{n}(\psi_{m}\circ f),$$

so that we have

$$\psi_{m}\circ f=\sum_{n}c_{n}(\psi_{m}\circ f)\cdot\psi_{n}=\sum_{n}G[f]_{mn}\cdot\psi_{n}.$$

We can now prove that $G[g\circ f]=G[g]\cdot G[f]$, if we assume that $\{\psi_{n}(x)\}_{n}$ is also a basis for $g(x)$ and $g(f(x))$.
Let $g(x)$ be such that

$$\psi_{l}\circ g=\sum_{m}G[g]_{lm}\cdot\psi_{m}$$

where $G[g]_{lm}=c_{m}(\psi_{l}\circ g)$.
Now

$$\begin{aligned}\sum_{n}G[g\circ f]_{ln}\psi_{n}=\psi_{l}\circ(g\circ f)&=(\psi_{l}\circ g)\circ f\\&=\sum_{m}G[g]_{lm}(\psi_{m}\circ f)\\&=\sum_{m}G[g]_{lm}\sum_{n}G[f]_{mn}\psi_{n}\\&=\sum_{n,m}G[g]_{lm}G[f]_{mn}\psi_{n}\\&=\sum_{n}\left(\sum_{m}G[g]_{lm}G[f]_{mn}\right)\psi_{n}\end{aligned}$$
Comparing the first and the last terms, and since $\{\psi_{n}(x)\}_{n}$ is a basis for $f(x)$, $g(x)$ and $g(f(x))$, it follows that

$$G[g\circ f]_{ln}=\sum_{m}G[g]_{lm}G[f]_{mn},$$

that is, $G[g\circ f]=G[g]\cdot G[f]$.
Examples
Rederive (Taylor) Carleman Matrix
If we set $\psi_{n}(x)=x^{n}$ we obtain the Carleman matrix. Because

$$h(x)=\sum_{n}c_{n}(h)\cdot\psi_{n}(x)=\sum_{n}c_{n}(h)\cdot x^{n},$$

the coefficient $c_{n}(h)$ must be the $n$-th coefficient of the Taylor series of $h$, so

$$c_{n}(h)=\frac{1}{n!}h^{(n)}(0).$$

Therefore

$$G[f]_{mn}=c_{n}(\psi_{m}\circ f)=c_{n}(f(x)^{m})=\frac{1}{n!}\left[\frac{d^{n}}{dx^{n}}(f(x))^{m}\right]_{x=0},$$

which is the Carleman matrix given above. (It is important to note that $\{x^{n}\}_{n}$ is not an orthonormal basis.)
Carleman Matrix For Orthonormal Basis
If $\{e_{n}(x)\}_{n}$ is an orthonormal basis for a Hilbert space with a defined inner product $\langle f,g\rangle$, we can set $\psi_{n}=e_{n}$, and $c_{n}(h)$ will be $\langle h,e_{n}\rangle$. Then

$$G[f]_{mn}=c_{n}(e_{m}\circ f)=\langle e_{m}\circ f,e_{n}\rangle.$$
Carleman Matrix for Fourier Series
If $e_{n}(x)=e^{inx}$ we have the analogue for Fourier series. Let $\hat{c}_{n}$ and $\hat{G}$ represent the Carleman coefficients and matrix in the Fourier basis. Because the basis is orthogonal, we have

$$\hat{c}_{n}(h)=\langle h,e_{n}\rangle=\frac{1}{2\pi}\int_{-\pi}^{\pi}h(x)\cdot e^{-inx}\,dx.$$

Therefore

$$\hat{G}[f]_{mn}=\hat{c}_{n}(e_{m}\circ f)=\langle e_{m}\circ f,e_{n}\rangle,$$

which is

$$\hat{G}[f]_{mn}=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{imf(x)}\cdot e^{-inx}\,dx.$$
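This integral form is easy to approximate numerically. The sketch below is illustrative, not from the article: the helper name `fourier_carleman`, the grid size, and the restriction to nonnegative indices $m,n\geq 0$ are all simplifying assumptions (a full Fourier-basis matrix would be indexed over all integers). On a uniform grid over one period, the mean of the samples approximates $\frac{1}{2\pi}\int_{-\pi}^{\pi}$; for the identity $f(x)=x$ the entries reduce to the orthogonality relation, so the result should be the identity matrix:

```python
# Approximate G^[f]_{mn} = (1/2pi) * integral of e^{i m f(x)} e^{-i n x}
# over [-pi, pi] by averaging samples on a uniform grid.
import numpy as np

def fourier_carleman(f, size, samples=4096):
    """Truncated Fourier-basis Carleman matrix, indices m, n = 0..size-1."""
    xs = np.linspace(-np.pi, np.pi, samples, endpoint=False)
    G = np.empty((size, size), dtype=complex)
    for m in range(size):
        for n in range(size):
            integrand = np.exp(1j * m * f(xs)) * np.exp(-1j * n * xs)
            G[m, n] = integrand.mean()   # (1/2pi) * integral over one period
    return G

# f(x) = x: entries become (1/2pi) * integral of e^{i(m-n)x} dx = delta_{mn}.
G = fourier_carleman(lambda x: x, 4)
assert np.allclose(G, np.eye(4), atol=1e-10)
```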
Properties
Carleman matrices satisfy the fundamental relationship

$$M[f\circ g]=M[f]M[g]~,$$

which makes the Carleman matrix M a (direct) representation of $f(x)$. Here the term $f\circ g$ denotes the composition of functions $f(g(x))$.
Other properties include:

● $M[f^{n}]=M[f]^{n}$, where $f^{n}$ is an iterated function, and
● $M[f^{-1}]=M[f]^{-1}$, where $f^{-1}$ is the inverse function (if the Carleman matrix is invertible).
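The composition property can be checked on truncations. One caveat, stated here as a working assumption rather than a claim from the article: for arbitrary functions the product of two infinite Carleman matrices involves infinite sums, so truncated products only approximate $M[f\circ g]$; but when both functions fix the origin ($f(0)=g(0)=0$), row $j$ of each matrix starts at column $j$, the sums become finite, and the truncated identity is exact. A sketch (helper name, test functions, and size are illustrative):

```python
# Verify M[f o g] = M[f] M[g] on N x N truncations, for functions
# with f(0) = g(0) = 0 so that the truncated identity is exact.
import sympy as sp

x = sp.symbols('x')

def carleman(f, N):
    """N x N truncation of the Carleman matrix of f."""
    return sp.Matrix(N, N,
                     lambda j, k: sp.diff(f**j, x, k).subs(x, 0) / sp.factorial(k))

N = 5
f = sp.sin(x)          # f(0) = 0
g = x + x**2           # g(0) = 0

lhs = carleman(f.subs(x, g), N)          # M[f o g]
rhs = carleman(f, N) * carleman(g, N)    # M[f] M[g]
assert lhs == rhs
```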
Examples
The Carleman matrix of a constant is:

$$M[a]=\left(\begin{array}{cccc}1&0&0&\cdots\\a&0&0&\cdots\\a^{2}&0&0&\cdots\\\vdots&\vdots&\vdots&\ddots\end{array}\right)$$
The Carleman matrix of the identity function is:

$$M_{x}[x]=\left(\begin{array}{cccc}1&0&0&\cdots\\0&1&0&\cdots\\0&0&1&\cdots\\\vdots&\vdots&\vdots&\ddots\end{array}\right)$$
The Carleman matrix of a constant addition is:

$$M_{x}[a+x]=\left(\begin{array}{cccc}1&0&0&\cdots\\a&1&0&\cdots\\a^{2}&2a&1&\cdots\\\vdots&\vdots&\vdots&\ddots\end{array}\right)$$
The Carleman matrix of the successor function is equivalent to the binomial coefficient:

$$M_{x}[1+x]=\left(\begin{array}{ccccc}1&0&0&0&\cdots\\1&1&0&0&\cdots\\1&2&1&0&\cdots\\1&3&3&1&\cdots\\\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right)$$

$$M_{x}[1+x]_{jk}=\binom{j}{k}$$
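This identity can be spot-checked directly from the derivative definition of the matrix entries; the helper `carleman_entry` and the range `N` below are illustrative choices:

```python
# Check M_x[1+x]_{jk} = C(j, k) entry by entry from the definition
# (1/k!) [d^k/dx^k (1+x)^j]_{x=0}.
import math
import sympy as sp

x = sp.symbols('x')

def carleman_entry(f, j, k):
    """(1/k!) * [d^k/dx^k f(x)^j] evaluated at x = 0."""
    return sp.diff(f**j, x, k).subs(x, 0) / sp.factorial(k)

N = 6
assert all(carleman_entry(1 + x, j, k) == math.comb(j, k)
           for j in range(N) for k in range(N))
```

Note that `math.comb(j, k)` returns 0 for `k > j`, matching the vanishing derivatives of a degree-`j` polynomial.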
The Carleman matrix of the logarithm is related to the (signed) Stirling numbers of the first kind scaled by factorials:

$$M_{x}[\log(1+x)]=\left(\begin{array}{cccccc}1&0&0&0&0&\cdots\\0&1&-\frac{1}{2}&\frac{1}{3}&-\frac{1}{4}&\cdots\\0&0&1&-1&\frac{11}{12}&\cdots\\0&0&0&1&-\frac{3}{2}&\cdots\\0&0&0&0&1&\cdots\\\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right)$$

$$M_{x}[\log(1+x)]_{jk}=s(k,j)\frac{j!}{k!}$$
The Carleman matrix of the logarithm is related to the (unsigned) Stirling numbers of the first kind scaled by factorials:

$$M_{x}[-\log(1-x)]=\left(\begin{array}{cccccc}1&0&0&0&0&\cdots\\0&1&\frac{1}{2}&\frac{1}{3}&\frac{1}{4}&\cdots\\0&0&1&1&\frac{11}{12}&\cdots\\0&0&0&1&\frac{3}{2}&\cdots\\0&0&0&0&1&\cdots\\\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right)$$

$$M_{x}[-\log(1-x)]_{jk}=|s(k,j)|\frac{j!}{k!}$$
The Carleman matrix of the exponential function is related to the Stirling numbers of the second kind scaled by factorials:

$$M_{x}[\exp(x)-1]=\left(\begin{array}{cccccc}1&0&0&0&0&\cdots\\0&1&\frac{1}{2}&\frac{1}{6}&\frac{1}{24}&\cdots\\0&0&1&1&\frac{7}{12}&\cdots\\0&0&0&1&\frac{3}{2}&\cdots\\0&0&0&0&1&\cdots\\\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right)$$

$$M_{x}[\exp(x)-1]_{jk}=S(k,j)\frac{j!}{k!}$$
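The stated relation to Stirling numbers of the second kind can likewise be checked entry by entry against sympy's `stirling`; the helper name and truncation size are illustrative:

```python
# Check M_x[exp(x)-1]_{jk} = S(k, j) * j!/k! against sympy's Stirling
# numbers of the second kind.
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
f = sp.exp(x) - 1

def carleman_entry(g, j, k):
    """(1/k!) * [d^k/dx^k g(x)^j] evaluated at x = 0."""
    return sp.diff(g**j, x, k).subs(x, 0) / sp.factorial(k)

N = 6
assert all(
    carleman_entry(f, j, k) == stirling(k, j, kind=2) * sp.factorial(j) / sp.factorial(k)
    for j in range(N) for k in range(N)
)
```

This works because $(e^{x}-1)^{j}=j!\sum_{k}S(k,j)\,x^{k}/k!$, the exponential generating function of the Stirling numbers of the second kind.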
The Carleman matrix of exponential functions is:

$$M_{x}[\exp(ax)]=\left(\begin{array}{ccccc}1&0&0&0&\cdots\\1&a&\frac{a^{2}}{2}&\frac{a^{3}}{6}&\cdots\\1&2a&2a^{2}&\frac{4a^{3}}{3}&\cdots\\1&3a&\frac{9a^{2}}{2}&\frac{9a^{3}}{2}&\cdots\\\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right)$$

$$M_{x}[\exp(ax)]_{jk}=\frac{(ja)^{k}}{k!}$$
The Carleman matrix of a constant multiple is:

$$M_{x}[cx]=\left(\begin{array}{cccc}1&0&0&\cdots\\0&c&0&\cdots\\0&0&c^{2}&\cdots\\\vdots&\vdots&\vdots&\ddots\end{array}\right)$$
The Carleman matrix of a linear function is:

$$M_{x}[a+cx]=\left(\begin{array}{cccc}1&0&0&\cdots\\a&c&0&\cdots\\a^{2}&2ac&c^{2}&\cdots\\\vdots&\vdots&\vdots&\ddots\end{array}\right)$$
The Carleman matrix of a function $f(x)=\sum_{k=1}^{\infty}f_{k}x^{k}$ is:

$$M[f]=\left(\begin{array}{cccc}1&0&0&\cdots\\0&f_{1}&f_{2}&\cdots\\0&0&f_{1}^{2}&\cdots\\\vdots&\vdots&\vdots&\ddots\end{array}\right)$$
The Carleman matrix of a function $f(x)=\sum_{k=0}^{\infty}f_{k}x^{k}$ is:

$$M[f]=\left(\begin{array}{cccc}1&0&0&\cdots\\f_{0}&f_{1}&f_{2}&\cdots\\f_{0}^{2}&2f_{0}f_{1}&f_{1}^{2}+2f_{0}f_{2}&\cdots\\\vdots&\vdots&\vdots&\ddots\end{array}\right)$$
Related matrices
The Bell matrix or the Jabotinsky matrix of a function $f(x)$ is defined as[1] [2] [3]

$$B[f]_{jk}=\frac{1}{j!}\left[\frac{d^{j}}{dx^{j}}(f(x))^{k}\right]_{x=0}~,$$

so as to satisfy the equation

$$(f(x))^{k}=\sum_{j=0}^{\infty}B[f]_{jk}x^{j}~.$$

These matrices were developed in 1947 by Eri Jabotinsky to represent convolutions of polynomials.[4] The Bell matrix is the transpose of the Carleman matrix and satisfies

$$B[f\circ g]=B[g]B[f]~,$$

which makes the Bell matrix B an anti-representation of $f(x)$.
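Both claims, transposition and anti-representation, can be verified on truncations. As with the Carleman composition property, the truncated identity is exact here because both test functions fix the origin; helper names and sizes are illustrative:

```python
# Verify B[f] = M[f]^T and the anti-representation B[f o g] = B[g] B[f]
# on N x N truncations, for functions with f(0) = g(0) = 0.
import sympy as sp

x = sp.symbols('x')

def carleman(f, N):
    """N x N truncation of the Carleman matrix of f."""
    return sp.Matrix(N, N,
                     lambda j, k: sp.diff(f**j, x, k).subs(x, 0) / sp.factorial(k))

def bell(f, N):
    """Bell matrix as the transpose: B[f]_{jk} = M[f]_{kj}."""
    return carleman(f, N).T

N = 5
f = sp.sin(x)
g = x + x**2

# Composition order reverses relative to the Carleman case.
assert bell(f.subs(x, g), N) == bell(g, N) * bell(f, N)
```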
See also
Notes
^ Lang, W. (2000). "On generalizations of the stirling number triangles". Journal of Integer Sequences . 3 (2.4): 1–19. Bibcode :2000JIntS...3...24L .
^ Jabotinsky, Eri (1947). "Sur la représentation de la composition de fonctions par un produit de matrices. Application à l'itération de e^x et de e^x-1". Comptes rendus de l'Académie des Sciences . 224 : 323–324.
References
R Aldrovandi, Special Matrices of Mathematical Physics : Stochastic, Circulant and Bell Matrices, World Scientific, 2001. (preview )
R. Aldrovandi, L. P. Freitas, Continuous Iteration of Dynamical Maps , online preprint, 1997.
P. Gralewicz, K. Kowalski, Continuous time evolution from iterated maps and Carleman linearization , online preprint, 2000.
K Kowalski and W-H Steeb, Nonlinear Dynamical Systems and Carleman Linearization , World Scientific, 1991. (preview )
Retrieved from "https://en.wikipedia.org/w/index.php?title=Carleman_matrix&oldid=1235680348"
This page was last edited on 20 July 2024, at 16:16 (UTC).