Math / Useful Calculus Identities
The following are widely used calculus identities.
Scalar to Scalar
\begin{aligned}
{d \over dx}cx^n &= cnx^{n-1} \\
{d \over dx} \log(x) &= 1/x \\
{d \over dx} \exp(x) &= \exp(x) \\
{d \over dx}[f(x) + g(x)] &= {df(x) \over dx} + {dg(x) \over dx} \\
{d \over dx}[f(x)g(x)] &= f(x){dg(x) \over dx} + g(x){df(x) \over dx} \\
{d \over dx}f(u(x)) &= {du \over dx}{df(u) \over du}
\end{aligned}
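These rules are easy to sanity-check numerically with a central difference; the sketch below picks arbitrary test functions and an arbitrary point x0 (c=2, n=3, f=sin, u(x)=x² are illustrative choices, not from the source).

```python
import math

# Central-difference approximation of d/dx f(x).
def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 1.3

# d/dx c*x^n = c*n*x^(n-1), with c=2, n=3
power_err = abs(deriv(lambda x: 2 * x**3, x0) - 2 * 3 * x0**2)
# d/dx log(x) = 1/x
log_err = abs(deriv(math.log, x0) - 1 / x0)
# d/dx exp(x) = exp(x)
exp_err = abs(deriv(math.exp, x0) - math.exp(x0))
# chain rule: d/dx f(u(x)) = u'(x) f'(u(x)), with f=sin, u(x)=x^2
chain_err = abs(deriv(lambda x: math.sin(x**2), x0)
                - 2 * x0 * math.cos(x0**2))
# all four errors should be close to zero
```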
Vector to Scalar
\begin{aligned}
{\partial (\bold{a}^\top\bold{x}) \over \partial \bold{x}} &= \bold{a} \\
{\partial (\bold{b}^\top\bold{Ax}) \over \partial \bold{x}} &= \bold{A}^\top\bold{b} \\
{\partial (\bold{x}^\top\bold{Ax}) \over \partial \bold{x}} &= (\bold{A} + \bold{A}^\top)\bold{x}
\end{aligned}
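These identities can also be verified numerically; the sketch below (assuming NumPy is available) uses arbitrary random choices for A, a, b and the evaluation point x, and approximates the gradient entry-wise by central differences.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
a, b, x = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)

# Entry-wise central-difference gradient of a scalar function of a vector.
def num_grad(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# ∂(aᵀx)/∂x = a
err_linear = np.max(np.abs(num_grad(lambda v: a @ v, x) - a))
# ∂(bᵀAx)/∂x = Aᵀb
err_bilinear = np.max(np.abs(num_grad(lambda v: b @ A @ v, x) - A.T @ b))
# ∂(xᵀAx)/∂x = (A + Aᵀ)x
err_quad = np.max(np.abs(num_grad(lambda v: v @ A @ v, x) - (A + A.T) @ x))
```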
Matrix to Scalar
{\partial f \over \partial \bold{X}} = \left( \begin{matrix} {\partial f \over \partial x_{11}} & \cdots & {\partial f \over \partial x_{1n}} \\ \vdots & \ddots & \vdots \\ {\partial f \over \partial x_{m1}} & \cdots & {\partial f \over \partial x_{mn}} \end{matrix} \right)
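The matrix gradient is just the m×n array of entry-wise partial derivatives, so the definition translates directly into a central-difference approximation; f(X) = Σ x_ij² below is an arbitrary example function whose exact gradient is 2X.

```python
import numpy as np

# Entry-wise central-difference gradient of a scalar function of a matrix,
# following the definition above.
def num_grad_matrix(f, X, h=1e-6):
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = h
            G[i, j] = (f(X + E) - f(X - E)) / (2 * h)
    return G

X0 = np.arange(6.0).reshape(2, 3)
# f(X) = sum of squared entries, so ∂f/∂x_ij = 2 x_ij
G = num_grad_matrix(lambda X: np.sum(X**2), X0)
```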
• Identities involving quadratic forms
\begin{aligned}
{\partial \over \partial \bold{X}}(\bold{a}^\top{\bold{Xb}}) &= \bold{ab}^\top \\
{\partial \over \partial \bold{X}}(\bold{a}^\top{\bold{X}^\top\bold{b}}) &= \bold{ba}^\top
\end{aligned}
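Both identities can be checked entry-wise with central differences; the sketch below (assuming NumPy) takes X square so a single arbitrary random pair a, b works for both forms.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
a, b = rng.normal(size=n), rng.normal(size=n)
X = rng.normal(size=(n, n))

# Entry-wise central-difference gradient of a scalar function of a matrix.
def num_grad_matrix(f, X, h=1e-6):
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = h
            G[i, j] = (f(X + E) - f(X - E)) / (2 * h)
    return G

# ∂(aᵀXb)/∂X = abᵀ
err_Xb = np.max(np.abs(num_grad_matrix(lambda X: a @ X @ b, X)
                       - np.outer(a, b)))
# ∂(aᵀXᵀb)/∂X = baᵀ
err_XTb = np.max(np.abs(num_grad_matrix(lambda X: a @ X.T @ b, X)
                        - np.outer(b, a)))
```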
• Identities involving the trace
\begin{aligned}
{\partial \over \partial \bold{X}}\text{tr}(\bold{AXB}) &= \bold{A}^\top\bold{B}^\top \\
{\partial \over \partial \bold{X}}\text{tr}(\bold{X}^\top{\bold{A}}) &= \bold{A} \\
{\partial \over \partial \bold{X}}\text{tr}(\bold{X}^{-1}\bold{A}) &= -\bold{X}^{-\top}\bold{A}^\top\bold{X}^{-\top} \\
{\partial \over \partial \bold{X}}\text{tr}(\bold{X}^\top\bold{AX}) &= (\bold{A+A}^\top)\bold{X}
\end{aligned}
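A numerical spot check of the trace identities (assuming NumPy); A, B, X are arbitrary random matrices, with X shifted toward a multiple of the identity so that X⁻¹ is well conditioned.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))
X = rng.normal(size=(n, n)) + 5 * np.eye(n)

# Entry-wise central-difference gradient of a scalar function of a matrix.
def num_grad_matrix(f, X, h=1e-6):
    G = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            E = np.zeros_like(X)
            E[i, j] = h
            G[i, j] = (f(X + E) - f(X - E)) / (2 * h)
    return G

Xinv = np.linalg.inv(X)
# ∂ tr(AXB)/∂X = AᵀBᵀ
err_axb = np.max(np.abs(num_grad_matrix(lambda X: np.trace(A @ X @ B), X)
                        - A.T @ B.T))
# ∂ tr(XᵀA)/∂X = A
err_xta = np.max(np.abs(num_grad_matrix(lambda X: np.trace(X.T @ A), X) - A))
# ∂ tr(X⁻¹A)/∂X = -X⁻ᵀAᵀX⁻ᵀ
err_inv = np.max(np.abs(
    num_grad_matrix(lambda X: np.trace(np.linalg.inv(X) @ A), X)
    - (-Xinv.T @ A.T @ Xinv.T)))
# ∂ tr(XᵀAX)/∂X = (A + Aᵀ)X
err_xax = np.max(np.abs(num_grad_matrix(lambda X: np.trace(X.T @ A @ X), X)
                        - (A + A.T) @ X))
```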
• Identities involving the determinant
\begin{aligned}
{\partial \over \partial \bold{X}}\det(\bold{AXB}) &= \bold{A}^\top\text{adj}(\bold{AXB})^\top\bold{B}^\top = \det(\bold{AXB})\bold{X}^{-\top} \\
{\partial \over \partial \bold{X}} \log(\det(\bold{X})) &= \bold{X}^{-\top}
\end{aligned}
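The determinant identities can be checked the same way (assuming NumPy); X is again shifted toward a multiple of the identity so that det(X) > 0 and log det(X) is defined.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))
X = rng.normal(size=(n, n)) + 5 * np.eye(n)

# Entry-wise central-difference gradient of a scalar function of a matrix.
def num_grad_matrix(f, X, h=1e-6):
    G = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            E = np.zeros_like(X)
            E[i, j] = h
            G[i, j] = (f(X + E) - f(X - E)) / (2 * h)
    return G

Xinv_T = np.linalg.inv(X).T
# ∂ det(AXB)/∂X = det(AXB) X⁻ᵀ
err_det = np.max(np.abs(
    num_grad_matrix(lambda X: np.linalg.det(A @ X @ B), X)
    - np.linalg.det(A @ X @ B) * Xinv_T))
# ∂ log(det(X))/∂X = X⁻ᵀ
err_logdet = np.max(np.abs(
    num_grad_matrix(lambda X: np.log(np.linalg.det(X)), X) - Xinv_T))
```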
References
•
Probabilistic Machine Learning: An Introduction