Kernel Clustering

Created: 2019 Nov 5 5:18
Edited: 2024 Oct 17 10:17
Refs: case based learning (reasoning)
 
Kernel trick: we can substitute any similarity function in place of the dot product
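A minimal sketch of that idea (my own illustration, not from the note): a learner that only ever consumes pairwise similarities can be handed any kernel instead of the plain dot product. The RBF kernel and its `gamma` value below are assumed example choices.

```python
import numpy as np

def linear_kernel(x, z):
    # the ordinary dot product as a similarity function
    return np.dot(x, z)

def rbf_kernel(x, z, gamma=0.5):
    # any other similarity function can be swapped in (gamma is an assumed parameter)
    return np.exp(-gamma * np.sum((x - z) ** 2))

def gram_matrix(X, kernel):
    # pairwise similarities; a kernelized learner only ever sees this matrix
    n = len(X)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = kernel(X[i], X[j])
    return K

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
print(gram_matrix(X, linear_kernel))
print(gram_matrix(X, rbf_kernel))
```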
 

Classification by similarity

A similarity score can be scaled to lie between 0 and 1; the vector dot product is itself a similarity function.

Similarity can also be measured by distance.

kNN: non-parametric; it gives a local answer based on the nearest stored examples.

Trade-offs: small k keeps only the most relevant neighbors (risk of overfitting); large k gives a smoother function (a more moderate fit).
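A small sketch of that trade-off (the `knn_predict` helper is mine, not from the note): with k=1 the answer depends entirely on the single closest example, while a larger k averages over more neighbors.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k):
    # distance-based similarity: smaller distance = more similar
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]           # indices of the k closest examples
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]         # majority label among the neighbors

X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.5, 0.5]), k=1))  # very local answer
print(knn_predict(X_train, y_train, np.array([0.5, 0.5]), k=5))  # smoother, uses more neighbors
```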
K-means (sketched in code below):
  1. Pick k random points as the initial centers.
  2. Assign each point to its nearest center (the initial membership).
  3. Move each center to the middle of its cluster (the coordinate average).
  4. Reassign each point to its now-nearest center.
  5. Repeat.
Stop when the memberships no longer change.
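A minimal NumPy sketch of those steps (the `kmeans` function and its defaults are mine, not from the note):

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    # 1. pick k random points as the initial centers
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # 2./4. assign each point to its nearest center (cluster membership)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # 5. stop: memberships did not change
        labels = new_labels
        # 3. move each center to the mean (coordinate average) of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

centers, labels = kmeans([[0, 0], [0, 1], [5, 5], [5, 6]], k=2)
print(centers, labels)
```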
 
 
Agglomerative clustering
Every pair of points has a gap (distance) between them.
Merge the pair with the smallest gap first;
a merged cluster is then itself treated as a single point.
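A sketch of that merge loop (my own illustration; using the centroid as the "single point" for a merged cluster is an assumption consistent with the note, not something it specifies):

```python
import numpy as np

def agglomerate(X, n_clusters):
    # start with every point as its own cluster
    clusters = [[i] for i in range(len(X))]
    X = np.asarray(X, dtype=float)
    while len(clusters) > n_clusters:
        # a cluster is treated as a single point: its centroid
        centroids = np.array([X[c].mean(axis=0) for c in clusters])
        # find the pair of clusters with the smallest gap
        best, best_dist = None, np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(centroids[a] - centroids[b])
                if d < best_dist:
                    best, best_dist = (a, b), d
        a, b = best
        # merge the closest pair and drop the absorbed cluster
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

print(agglomerate([[0, 0], [0, 1], [5, 5], [5, 6]], n_clusters=2))
```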
 
 
Parametric models: a fixed set of parameters. Non-parametric models: often no fixed limit; the classifier grows as the data grows.
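One way to see the contrast concretely (illustrative only, not from the note): a parametric linear model is summarized by a fixed number of weights no matter how much data it sees, while a non-parametric learner like kNN stores the training set itself, so its size grows with the data.

```python
import numpy as np

X = np.random.default_rng(0).normal(size=(1000, 3))
y = (X[:, 0] > 0).astype(int)

# parametric: a linear model keeps a fixed set of parameters (here 3 weights + 1 bias)
w, b = np.zeros(3), 0.0
print(w.size + 1)                         # 4 numbers, regardless of the number of examples

# non-parametric: kNN keeps the whole training set, so memory grows with the data
stored_X, stored_y = X.copy(), y.copy()
print(stored_X.size + stored_y.size)      # 4000 numbers for 1000 examples
```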
 

Kernelization

We can compute weight vectors (the primal representation) from update counts (the dual representation).
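A sketch of that primal/dual relationship, assuming the context is a kernelized perceptron (the function names and the RBF kernel are mine): the learner only keeps a count of how often each training example triggered an update, and with a linear kernel the primal weight vector is recoverable as w = sum_i alpha_i * y_i * x_i.

```python
import numpy as np

def rbf(x, z, gamma=0.5):
    # any similarity function can stand in for the dot product
    return np.exp(-gamma * np.sum((x - z) ** 2))

def train_dual_perceptron(X, y, kernel, epochs=10):
    # dual representation: alpha[i] counts how often example i triggered an update
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i, (x, label) in enumerate(zip(X, y)):
            score = sum(alpha[j] * y[j] * kernel(X[j], x) for j in range(len(X)))
            if label * score <= 0:
                alpha[i] += 1
    return alpha

X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([-1, -1, 1, 1])
alpha = train_dual_perceptron(X, y, rbf)

# with a linear kernel the primal weights are recoverable: w = sum_i alpha_i * y_i * x_i
w = ((alpha * y)[:, None] * X).sum(axis=0)
print(alpha, w)
```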
 
 

Recommendations