Dimensionality reduction · Changes

Page history
Update Dimensionality reduction authored Nov 30, 2021 by cihan.ates's avatar cihan.ates
DDE-1/Dimensionality-reduction.md
View page @ 06cbf997
@@ -240,10 +240,10 @@ We need to think about the last rotation: U. We need to rotate it back via $`U^*`$
<img src="uploads/1b71bb1a15e8c1a2e1849e8ea14a9b55/ica_3.png" width="600">
</div>
We are ready to explore the math behind the curtain. The first step is to figure out the last rotation, U; here we are after the angle $`\theta`$. If you look at the 2D example above, it is easy to see that after these transformations the data should be oriented with respect to variance (this was the objective in PCA: maximize the variance). In other words, we are looking for the angle that gives the maximum variance, so we need to formulate how the variance changes as we look at different $`\theta`$ values. In the above 2D example:
```math
\sigma(\theta) = \sum_{i=1}^{N} \begin{bmatrix} x_1(i) & x_2(i) \end{bmatrix} \begin{bmatrix} \cos(\theta) \\ \sin(\theta) \end{bmatrix}
```
where $`x_j(i)`$ is the $`i`$-th sample of measured signal $`j`$.
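The search for the variance-maximizing angle can be sketched numerically. The snippet below is a minimal illustration, not the lecture's code: the two-signal dataset, sample size, and variable names are invented for the example. It scans $`\theta`$, computes the variance of the centered data projected onto $`[\cos(\theta), \sin(\theta)]^T`$ (i.e., the mean squared projection), and cross-checks the winning angle against the first right singular vector from the SVD, which is exactly the principal direction PCA would return:

```python
import numpy as np

# Hypothetical 2D dataset: two correlated "measured signals" x1, x2
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=500)
X = np.column_stack([x1, x2])   # rows are samples x(i) = [x1(i), x2(i)]
X = X - X.mean(axis=0)          # center, so projections measure variance

# Scan theta and record the variance of the projected data
thetas = np.linspace(0.0, np.pi, 360, endpoint=False)
variances = np.array(
    [np.var(X @ np.array([np.cos(t), np.sin(t)])) for t in thetas]
)
best_theta = thetas[variances.argmax()]

# Cross-check: the first right singular vector of X spans the same direction
_, _, Vt = np.linalg.svd(X, full_matrices=False)
svd_theta = np.arctan2(Vt[0, 1], Vt[0, 0]) % np.pi
```

The brute-force scan and the SVD agree up to the grid resolution, which is the point of the derivation: the rotation U recovered by the SVD is precisely the one that aligns the axes with the directions of maximal variance.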