Linear Regression

Creator: Seonglae Cho
Created: 2023 Mar 9 1:44
Edited: 2025 Feb 13 11:56

Linear Predictor (minimal Perceptron)

h(x) = \Sigma_{i=0}^{d}\theta_ix_i = \theta^Tx
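As a minimal sketch (with assumed example values), the predictor is just a dot product between the parameter vector and the input, with x_0 = 1 acting as the bias term:

```python
# h(x) = theta^T x, with x_0 = 1 so theta_0 is the intercept (values assumed)
theta = [1.0, 2.0, 3.0]   # theta_0 (bias), theta_1, theta_2
x = [1.0, 4.0, 5.0]       # x_0 = 1, then the d = 2 features

h = sum(t * xi for t, xi in zip(theta, x))
print(h)  # 1*1 + 2*4 + 3*5 = 24.0
```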
In linear regression the cost function J is convex, so there is always a single global optimum.
  • Iterative: update θ with Gradient Descent-like algorithms
    • move θ against the gradient of the cost function J
J(\theta) = \frac{1}{2}\Sigma_{i=1}^n(h_\theta(x^{(i)}) - y^{(i)})^2 = \frac{1}{2}(X\theta - \vec{y})^T(X\theta - \vec{y})
\theta := \theta + \alpha \Sigma_{i=1}^n(y^{(i)} - h_\theta(x^{(i)}))x^{(i)}
  • Closed form: solve the normal equation directly, assuming X^TX is invertible
\theta = (X^TX)^{-1}X^T\vec{y}
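The two fitting routes above can be sketched in pure Python for a single feature plus bias (data, learning rate, and iteration count are assumed for illustration); both should recover the same θ:

```python
# Training data generated from y = 1 + 2x, so theta should approach [1, 2].
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
X = [[1.0, x] for x in xs]   # prepend x_0 = 1 for the intercept theta_0

def h(theta, row):
    """Linear predictor h(x) = theta^T x."""
    return sum(t * x for t, x in zip(theta, row))

# --- 1. Batch gradient descent ---
# dJ/dtheta_j = sum_i (h(x_i) - y_i) * x_ij, so the update below
# (adding alpha * sum (y_i - h(x_i)) * x_ij) moves against the gradient.
theta = [0.0, 0.0]
alpha = 0.05                 # assumed learning rate, small enough to converge
for _ in range(5000):
    theta = [
        t + alpha * sum((y - h(theta, row)) * row[j] for row, y in zip(X, ys))
        for j, t in enumerate(theta)   # old theta is used for every j
    ]

# --- 2. Closed form: theta = (X^T X)^{-1} X^T y ---
# X^T X is 2x2 here, so it can be inverted by hand.
n = len(xs)
sx, sxx = sum(xs), sum(x * x for x in xs)
sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
det = n * sxx - sx * sx      # nonzero iff X^T X is invertible
theta_closed = [(sxx * sy - sx * sxy) / det, (n * sxy - sx * sy) / det]

print(theta)         # ≈ [1.0, 2.0]
print(theta_closed)  # [1.0, 2.0] up to float error
```

On this noiseless data the closed form is exact, while gradient descent converges to the same point; in practice the closed form is preferred only when X^TX is small and well conditioned.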
Linear Regression Notion
