From Wikipedia, the free encyclopedia
Clenshaw algorithm
In full generality, the Clenshaw algorithm computes the weighted sum of a finite series of functions $\phi_k(x)$:

$$S(x) = \sum_{k=0}^{n} a_k \phi_k(x)$$
where $\phi_k,\ k = 0, 1, \ldots$ is a sequence of functions that satisfy the linear recurrence relation

$$\phi_{k+1}(x) = \alpha_k(x)\,\phi_k(x) + \beta_k(x)\,\phi_{k-1}(x),$$
where the coefficients $\alpha_k(x)$ and $\beta_k(x)$ are known in advance.
The algorithm is most useful when $\phi_k(x)$ are functions that are complicated to compute directly, but $\alpha_k(x)$ and $\beta_k(x)$ are particularly simple. In the most common applications, $\alpha(x)$ does not depend on $k$, and $\beta$ is a constant that depends on neither $x$ nor $k$.
To perform the summation for a given series of coefficients $a_0, \ldots, a_n$, compute the values $b_k(x)$ by the "reverse" recurrence formula:

$$b_{n+1}(x) = b_{n+2}(x) = 0,$$
$$b_k(x) = a_k + \alpha_k(x)\,b_{k+1}(x) + \beta_{k+1}(x)\,b_{k+2}(x).$$
Note that this computation makes no direct reference to the functions $\phi_k(x)$. After computing $b_2(x)$ and $b_1(x)$, the desired sum can be expressed in terms of them and the simplest functions $\phi_0(x)$ and $\phi_1(x)$:

$$S(x) = \phi_0(x)\,a_0 + \phi_1(x)\,b_1(x) + \beta_1(x)\,\phi_0(x)\,b_2(x).$$
See Fox and Parker[4] for more information and stability analyses.
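The reverse recurrence and the closing step above can be sketched in Python. The function and parameter names here are illustrative, not from any particular library:

```python
def clenshaw(x, a, alpha, beta, phi0, phi1):
    """Evaluate S(x) = sum_{k=0}^{n} a[k] * phi_k(x), where the phi_k
    satisfy phi_{k+1}(x) = alpha_k(x) phi_k(x) + beta_k(x) phi_{k-1}(x).

    alpha(k, x) and beta(k, x) give the recurrence coefficients;
    only phi_0 and phi_1 are ever evaluated directly.
    """
    n = len(a) - 1
    b1 = b2 = 0.0                      # b_{n+1}(x) = b_{n+2}(x) = 0
    for k in range(n, 0, -1):          # b_k = a_k + alpha_k b_{k+1} + beta_{k+1} b_{k+2}
        b1, b2 = a[k] + alpha(k, x) * b1 + beta(k + 1, x) * b2, b1
    # S(x) = phi_0(x) a_0 + phi_1(x) b_1(x) + beta_1(x) phi_0(x) b_2(x)
    return phi0(x) * a[0] + phi1(x) * b1 + beta(1, x) * phi0(x) * b2
```

With $\alpha(x) = x$, $\beta = 0$, $\phi_0 = 1$, $\phi_1 = x$ this reduces to ordinary polynomial evaluation, as in the Horner example below.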
Examples
Horner as a special case of Clenshaw
A particularly simple case occurs when evaluating a polynomial of the form

$$S(x) = \sum_{k=0}^{n} a_k x^k.$$
The functions are simply

$$\phi_0(x) = 1, \qquad \phi_k(x) = x^k = x\,\phi_{k-1}(x)$$

and are produced by the recurrence coefficients $\alpha(x) = x$ and $\beta = 0$.
In this case, the recurrence formula to compute the sum is

$$b_k(x) = a_k + x\,b_{k+1}(x)$$

and, in this case, the sum is simply

$$S(x) = a_0 + x\,b_1(x) = b_0(x),$$
which is exactly the usual Horner's method.
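Run one extra step down to $b_0$, as above, and the specialized recurrence is just the Horner loop; a minimal sketch:

```python
def horner(x, a):
    """Evaluate sum_{k} a[k] * x**k as Clenshaw with alpha = x, beta = 0:
    b_k = a_k + x * b_{k+1}, continued one extra step so S(x) = b_0."""
    b = 0.0
    for k in range(len(a) - 1, -1, -1):
        b = a[k] + x * b
    return b
```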
Special case for Chebyshev series
Consider a truncated Chebyshev series

$$p_n(x) = a_0 + a_1 T_1(x) + a_2 T_2(x) + \cdots + a_n T_n(x).$$
The coefficients in the recursion relation for the Chebyshev polynomials are

$$\alpha(x) = 2x, \qquad \beta = -1,$$

with the initial conditions

$$T_0(x) = 1, \qquad T_1(x) = x.$$
Thus, the recurrence is

$$b_k(x) = a_k + 2x\,b_{k+1}(x) - b_{k+2}(x)$$

and the final sum is

$$p_n(x) = a_0 + x\,b_1(x) - b_2(x).$$
One way to evaluate this is to continue the recurrence one more step, and compute

$$b_0(x) = a_0 + 2x\,b_1(x) - b_2(x),$$

(note the doubled $a_0$ coefficient) followed by

$$p_n(x) = \tfrac{1}{2}\left[a_0 + b_0(x) - b_2(x)\right].$$
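The Chebyshev case can be sketched directly from the recurrence $b_k = a_k + 2x\,b_{k+1} - b_{k+2}$ and the closing formula $p_n(x) = a_0 + x\,b_1 - b_2$ (the function name is illustrative):

```python
def chebyshev_clenshaw(x, a):
    """Evaluate p_n(x) = a[0] + sum_{k=1}^{n} a[k] * T_k(x)
    via Clenshaw with alpha = 2x, beta = -1."""
    b1 = b2 = 0.0
    for k in range(len(a) - 1, 0, -1):   # b_k = a_k + 2x b_{k+1} - b_{k+2}
        b1, b2 = a[k] + 2.0 * x * b1 - b2, b1
    return a[0] + x * b1 - b2            # p_n(x) = a_0 + x b_1 - b_2
```

On $[-1, 1]$ the result can be checked against $T_k(x) = \cos(k \arccos x)$.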
Meridian arc length on the ellipsoid
Clenshaw summation is extensively used in geodetic applications.[2] A simple application is summing the trigonometric series to compute the meridian arc distance on the surface of an ellipsoid. These have the form
$$m(\theta) = C_0\,\theta + C_1 \sin\theta + C_2 \sin 2\theta + \cdots + C_n \sin n\theta.$$
Leaving off the initial $C_0\,\theta$ term, the remainder is a summation of the appropriate form. There is no leading term because $\phi_0(\theta) = \sin 0\theta = \sin 0 = 0$.
The recurrence relation for $\sin k\theta$ is

$$\sin(k+1)\theta = 2\cos\theta \sin k\theta - \sin(k-1)\theta,$$

making the coefficients in the recursion relation

$$\alpha_k(\theta) = 2\cos\theta, \qquad \beta_k = -1,$$
and the evaluation of the series is given by
$$b_{n+1}(\theta) = b_{n+2}(\theta) = 0,$$
$$b_k(\theta) = C_k + 2\cos\theta\,b_{k+1}(\theta) - b_{k+2}(\theta), \qquad \text{for } n \geq k \geq 1.$$
The final step is made particularly simple because $\phi_0(\theta) = \sin 0 = 0$, so the end of the recurrence is simply $b_1(\theta)\sin\theta$; the $C_0\,\theta$ term is added separately:

$$m(\theta) = C_0\,\theta + b_1(\theta)\sin\theta.$$

Note that the algorithm requires only the evaluation of two trigonometric quantities, $\cos\theta$ and $\sin\theta$.
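The sine-series case can be sketched as follows; note that $\cos\theta$ and $\sin\theta$ are each evaluated exactly once (the function name is illustrative):

```python
import math

def sine_series(theta, C):
    """Evaluate m(theta) = C[0]*theta + sum_{k=1}^{n} C[k]*sin(k*theta)
    via Clenshaw with alpha = 2 cos(theta), beta = -1."""
    two_cos = 2.0 * math.cos(theta)
    b1 = b2 = 0.0
    for k in range(len(C) - 1, 0, -1):   # b_k = C_k + 2 cos(theta) b_{k+1} - b_{k+2}
        b1, b2 = C[k] + two_cos * b1 - b2, b1
    return C[0] * theta + b1 * math.sin(theta)
```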
Difference in meridian arc lengths
Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write
$$m(\theta_1) - m(\theta_2) = C_0(\theta_1 - \theta_2) + \sum_{k=1}^{n} 2 C_k \sin\bigl(\tfrac{1}{2}k(\theta_1 - \theta_2)\bigr) \cos\bigl(\tfrac{1}{2}k(\theta_1 + \theta_2)\bigr).$$
Clenshaw summation can be applied in this case,[5] provided we simultaneously compute $m(\theta_1) + m(\theta_2)$ and perform a matrix summation,

$$\mathsf{M}(\theta_1, \theta_2) = \begin{bmatrix} (m(\theta_1) + m(\theta_2))/2 \\ (m(\theta_1) - m(\theta_2))/(\theta_1 - \theta_2) \end{bmatrix} = C_0 \begin{bmatrix} \mu \\ 1 \end{bmatrix} + \sum_{k=1}^{n} C_k \mathsf{F}_k(\theta_1, \theta_2),$$
where

$$\delta = \tfrac{1}{2}(\theta_1 - \theta_2), \qquad \mu = \tfrac{1}{2}(\theta_1 + \theta_2), \qquad \mathsf{F}_k(\theta_1, \theta_2) = \begin{bmatrix} \cos k\delta \sin k\mu \\ \dfrac{\sin k\delta}{\delta} \cos k\mu \end{bmatrix}.$$
The first element of $\mathsf{M}(\theta_1, \theta_2)$ is the average value of $m$ and the second element is the average slope. $\mathsf{F}_k(\theta_1, \theta_2)$ satisfies the recurrence relation

$$\mathsf{F}_{k+1}(\theta_1, \theta_2) = \mathsf{A}(\theta_1, \theta_2)\,\mathsf{F}_k(\theta_1, \theta_2) - \mathsf{F}_{k-1}(\theta_1, \theta_2),$$
where

$$\mathsf{A}(\theta_1, \theta_2) = 2\begin{bmatrix} \cos\delta\cos\mu & -\delta\sin\delta\sin\mu \\ -\dfrac{\sin\delta}{\delta}\sin\mu & \cos\delta\cos\mu \end{bmatrix}$$

takes the place of $\alpha$ in the recurrence relation, and $\beta = -1$.
The standard Clenshaw algorithm can now be applied to yield
$$\mathsf{B}_{n+1} = \mathsf{B}_{n+2} = \mathsf{0},$$
$$\mathsf{B}_k = C_k \mathsf{I} + \mathsf{A}\,\mathsf{B}_{k+1} - \mathsf{B}_{k+2}, \qquad \text{for } n \geq k \geq 1,$$
$$\mathsf{M}(\theta_1, \theta_2) = C_0 \begin{bmatrix} \mu \\ 1 \end{bmatrix} + \mathsf{B}_1 \mathsf{F}_1(\theta_1, \theta_2),$$
where $\mathsf{B}_k$ are 2×2 matrices. Finally we have

$$\frac{m(\theta_1) - m(\theta_2)}{\theta_1 - \theta_2} = \mathsf{M}_2(\theta_1, \theta_2).$$
This technique can be used in the limit $\theta_2 = \theta_1 = \mu$ and $\delta = 0$ to simultaneously compute $m(\mu)$ and the derivative $dm(\mu)/d\mu$, provided that, in evaluating $\mathsf{F}_1$ and $\mathsf{A}$, we take $\lim_{\delta \to 0}(\sin k\delta)/\delta = k$.
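The matrix form can be sketched as below, with the 2×2 products written out explicitly and the $\delta \to 0$ limit handled by $\sin\delta/\delta \to 1$. This follows the recurrences above, not any published code; the function name is illustrative:

```python
import math

def diff_meridian(theta1, theta2, C):
    """For m(theta) = C[0]*theta + sum_{k>=1} C[k]*sin(k*theta), return
    (mean, slope) = ((m(t1)+m(t2))/2, (m(t1)-m(t2))/(t1-t2)),
    computed by matrix Clenshaw summation; slope -> dm/dmu as t1 -> t2."""
    d = 0.5 * (theta1 - theta2)                      # delta
    mu = 0.5 * (theta1 + theta2)                     # mu
    sinc = math.sin(d) / d if d != 0.0 else 1.0      # sin(delta)/delta
    cd, sd = math.cos(d), math.sin(d)
    cm, sm = math.cos(mu), math.sin(mu)
    # A = 2 [[cos d cos mu, -d sin d sin mu], [-(sin d / d) sin mu, cos d cos mu]]
    A = [[2*cd*cm, -2*d*sd*sm],
         [-2*sinc*sm, 2*cd*cm]]
    B1 = [[0.0, 0.0], [0.0, 0.0]]                    # B_{n+1} = B_{n+2} = 0
    B2 = [[0.0, 0.0], [0.0, 0.0]]
    for k in range(len(C) - 1, 0, -1):               # B_k = C_k I + A B_{k+1} - B_{k+2}
        Bk = [[C[k] + A[0][0]*B1[0][0] + A[0][1]*B1[1][0] - B2[0][0],
               A[0][0]*B1[0][1] + A[0][1]*B1[1][1] - B2[0][1]],
              [A[1][0]*B1[0][0] + A[1][1]*B1[1][0] - B2[1][0],
               C[k] + A[1][0]*B1[0][1] + A[1][1]*B1[1][1] - B2[1][1]]]
        B1, B2 = Bk, B1
    F1 = [cd * sm, sinc * cm]                        # F_1(theta1, theta2)
    mean = C[0]*mu + B1[0][0]*F1[0] + B1[0][1]*F1[1]
    slope = C[0] + B1[1][0]*F1[0] + B1[1][1]*F1[1]
    return mean, slope
```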
See also
[ edit ]
References
- Tscherning, C. C.; Poder, K. (1982), "Some Geodetic Applications of Clenshaw Summation" (PDF), Bollettino di Geodesia e Scienze Affini, 41 (4): 349–375, archived from the original on 2007-06-12, retrieved 2012-08-02.
- Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), "Section 5.4.2. Clenshaw's Recurrence Formula", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8.
- Fox, Leslie; Parker, Ian B. (1968), Chebyshev Polynomials in Numerical Analysis, Oxford University Press, ISBN 0-19-859614-6.
- Karney, C. F. F. (2014), Clenshaw evaluation of differenced sums.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Clenshaw_algorithm&oldid=1222415280"
This page was last edited on 5 May 2024, at 21:01 (UTC).