We need to think about the last rotation: $`U`$. We need to rotate it back via $`U^*`$.
We are ready to explore the math behind the curtains. The first step is to work out the last rotation, $`U`$; here we are after the angle $`θ`$. If you look at the 2D example above, it is easy to see that after these transformations the data is oriented along the direction of maximum variance (this is the objective of PCA: maximize the variance). In other words, we are looking for the angle that gives the maximum variance, so we need to formulate how the variance changes as we vary $`θ`$. In the 2D example above:

```math
\sigma(θ) = \sum_{n=1}^{N} \left( \begin{bmatrix} x_1(n) & x_2(n) \end{bmatrix} \begin{bmatrix} \cos(θ) \\ \sin(θ) \end{bmatrix} \right)^2
```
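To make this concrete, here is a minimal numerical sketch of $`\sigma(θ)`$ on synthetic, zero-mean 2D data (an assumed example, not data from this article): each sample is projected onto the unit vector at angle $`θ`$, and a scan over $`θ`$ locates the maximum and minimum of the sum of squared projections.

```python
import numpy as np

# Synthetic zero-mean 2D signals (an assumed example): x2 is correlated with x1.
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = 0.5 * x1 + 0.1 * rng.normal(size=500)
X = np.column_stack([x1 - x1.mean(), x2 - x2.mean()])  # centered, shape (N, 2)

def sigma(theta):
    """Sum of squared projections onto the unit vector [cos(theta), sin(theta)]."""
    u = np.array([np.cos(theta), np.sin(theta)])
    return np.sum((X @ u) ** 2)

# sigma is pi-periodic, so scanning a half circle covers every direction.
thetas = np.linspace(0.0, np.pi, 1000)
values = np.array([sigma(t) for t in thetas])
theta_max = thetas[values.argmax()]  # direction of the first principal component
theta_min = thetas[values.argmin()]  # direction of the second principal component
```

As expected, `theta_max` and `theta_min` come out roughly 90 degrees apart: the two principal directions are orthogonal.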

where $`x_1`$ and $`x_2`$ are the measured signals and $`n`$ runs over the $`N`$ elements in $`X`$. In the next step, we take the derivative with respect to $`θ`$ and set it equal to zero. The maximum gives us the first principal component, while the minimum gives the second principal component; the two are orthogonal. In practice, it does not matter whether the derivative lands on the minimum or the maximum, since they differ by 90 degrees here. After a couple of calculation steps, we get the angle $`θ`$:
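Those calculation steps can be sketched as follows (abbreviating $`x_j(n)`$ as $`x_j`$ inside the sums): differentiating the squared-projection sum and setting it to zero gives

```math
\frac{d\sigma}{dθ} = \sum_{n=1}^{N} 2 \left( x_1 \cos(θ) + x_2 \sin(θ) \right) \left( x_2 \cos(θ) - x_1 \sin(θ) \right) = \sin(2θ) \sum_{n} \left( x_2^2 - x_1^2 \right) + 2 \cos(2θ) \sum_{n} x_1 x_2 = 0
```

so that $`\tan(2θ) = -2 \sum x_1 x_2 \big/ \sum \left( x_2^2 - x_1^2 \right)`$.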

```math
θ = \frac{1}{2} \tan^{-1}\left( \frac{-2 \sum_{n=1}^{N} x_1(n)\, x_2(n)}{\sum_{n=1}^{N} \left( x_2(n)^2 - x_1(n)^2 \right)} \right)
```
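The closed-form angle can be cross-checked against a standard eigendecomposition of the covariance matrix (again on assumed synthetic data, centered explicitly since the formula presumes zero-mean signals); `np.arctan2` is used so the quadrant of $`2θ`$ is resolved automatically.

```python
import numpy as np

# Synthetic zero-mean 2D signals (an assumed example), centered explicitly.
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = 0.5 * x1 + 0.1 * rng.normal(size=500)
x1, x2 = x1 - x1.mean(), x2 - x2.mean()

# Closed-form angle; arctan2(y, x) evaluates tan^{-1}(y / x) in the right quadrant.
theta = 0.5 * np.arctan2(-2.0 * np.sum(x1 * x2), np.sum(x2**2 - x1**2))
u = np.array([np.cos(theta), np.sin(theta)])

# Cross-check: u must point along an eigenvector of the covariance matrix,
# i.e. along the first or the second principal component.
C = np.cov(np.vstack([x1, x2]))
eigvecs = np.linalg.eigh(C)[1]             # columns are unit eigenvectors
alignment = np.max(np.abs(eigvecs.T @ u))  # |cos| of angle to nearest eigenvector
```

Here `alignment` comes out at essentially 1: the closed-form angle lands exactly on one of the two principal directions, and whichever one it is (max or min), the other lies 90 degrees away.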

The argument of the arctangent relates the covariance of $`x_1`$ and $`x_2`$ (the numerator) to the difference between the variances of $`x_2`$ and $`x_1`$ (the denominator).
...
to be continued