In order to claim that $`i`$ and $`j`$ are independent, the general form of the equation must hold for all powers $`p`$ and $`q`$:

```math
\rho(i^p, j^q) = 0
```

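As a quick numerical illustration (a minimal sketch using NumPy, not part of the original text): for independent samples, the sample correlation of every pair of powers stays near zero, while variables that are merely uncorrelated fail the test at some higher power.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_corr(i, j, p, q):
    """Sample correlation between i**p and j**q."""
    return np.corrcoef(i**p, j**q)[0, 1]

# Independent signals: the correlation of every pair of powers stays near 0.
i = rng.uniform(-1, 1, 100_000)
j = rng.uniform(-1, 1, 100_000)
print([round(power_corr(i, j, p, q), 3) for p in (1, 2) for q in (1, 2)])

# Uncorrelated but dependent: y = x**2 has zero plain correlation with x
# (x is symmetric around 0), yet rho(x^2, y) is large, so the pair fails.
x = rng.uniform(-1, 1, 100_000)
y = x**2
print(round(power_corr(x, y, 1, 1), 3))  # ~0: ordinary correlation misses it
print(round(power_corr(x, y, 2, 1), 3))  # ~1: clearly not independent
```
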
An implementation of this strategy can be seen in the [projection pursuit](https://en.wikipedia.org/wiki/Projection_pursuit) approach. Another way is to use entropy as the measure: signals that have maximum joint entropy are mutually independent, and this is the quantity that goes into the optimizer. This approach is called [Infomax](https://en.wikipedia.org/wiki/Infomax). [This interpretation](http://www.inf.fu-berlin.de/lehre/WS05/Mustererkennung/infomax/infomax.pdf) is relatively new.

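For context, here is a minimal sketch of blind source separation using scikit-learn's `FastICA`, a negentropy-based contrast estimator related to (though not identical to) Infomax; the choice of estimator and signals is mine, not something the text prescribes.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# Two independent, non-Gaussian sources: a square wave and a sawtooth.
sources = np.column_stack([np.sign(np.sin(3 * t)),
                           2 * (t % 1) - 1])

# Mix them with an arbitrary full-rank matrix.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
mixed = sources @ A.T

# Unmix; the sources come back up to permutation, sign, and scale.
recovered = FastICA(n_components=2, random_state=0).fit_transform(mixed)

# Each recovered component should correlate strongly with exactly one source.
for k in range(2):
    c = [abs(np.corrcoef(recovered[:, k], sources[:, m])[0, 1])
         for m in range(2)]
    print(f"component {k}: |corr| with sources = {np.round(c, 2)}")
```
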
If the probability of observing an outcome $`X_t`$ is $`p_X(X_t)`$, then the average entropy over $`N`$ observations can be estimated as:

```math
H(X) \approx -\frac{1}{N} \sum_{n=1}^{N} \ln p_X(X_n)
```
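
As a sanity check (a small sketch of my own, assuming NumPy and SciPy are available), this sample-average estimate can be compared against the closed-form entropy of a known distribution:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 100_000

# N observations from a standard normal, whose differential entropy is
# 0.5 * ln(2 * pi * e) ~ 1.4189 nats.
samples = rng.standard_normal(N)

# H(X) ~ -(1/N) * sum_n ln p_X(X_n), here using the known density p_X.
h_estimate = -np.mean(norm.logpdf(samples))
h_true = 0.5 * np.log(2 * np.pi * np.e)
print(f"estimate: {h_estimate:.4f}   analytic: {h_true:.4f}")
```

In practice $`p_X`$ is unknown and must itself be estimated from the data, e.g. with a histogram or a kernel density estimate.
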
...
----------------------
Note: ICA assumes that the source signals are non-Gaussian; if more than one source is Gaussian, the mixing cannot be uniquely recovered.

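A brief numerical illustration of why (my own sketch, assuming NumPy): independent Gaussian sources remain jointly Gaussian with the same covariance under any rotation, so the data carries no information about the mixing rotation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent standard-normal sources.
s = rng.standard_normal((200_000, 2))

# Mix with a pure rotation (an orthogonal mixing matrix).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
mixed = s @ R.T

# Both the sources and the mixtures have (numerically) identity covariance,
# so the rotated data is statistically indistinguishable from the original
# and no algorithm can recover R from it.
print(np.round(np.cov(s.T), 2))
print(np.round(np.cov(mixed.T), 2))
```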