Dear Dr. Raschka,
I am a material scientist who started to learn Python a few months ago. Now I am studying machine learning thanks to your excellent textbook!
I need some help understanding Chapter 5.3 of Python Machine Learning 3rd ed.
When we project a new data point onto the principal component axis in KPCA, the dot product of 'lowercase k' and (α / λ) seems to take the place of the eigenvector projection we use in standard PCA. I see that it gives the correct result, but the reason why we have to normalize α by λ is unclear to me.
In Chapter 5.3 of Python Machine Learning 3rd ed.:
- lowercase k = the similarities between each training example x(i) and the new example x'
- uppercase K = the pairwise similarities between the training examples x(i) (the kernel matrix)
- alphas = the eigenvectors of uppercase K
- lambdas = the eigenvalues corresponding to alphas
Since there is no projection matrix in kernel PCA, and all of the samples used to obtain the kernel matrix are already projected onto the principal component axis, we have to calculate Φ(x')v ourselves to project new data.
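For reference, here is roughly how I understand the relevant code from the chapter (my own paraphrase of the rbf_kernel_pca and project_x functions, so the details may differ slightly from the book):

```python
from scipy.spatial.distance import pdist, squareform
from scipy.linalg import eigh
import numpy as np

def rbf_kernel_pca(X, gamma, n_components):
    """RBF kernel PCA that also returns the eigenvalues of the kernel matrix."""
    # Pairwise squared Euclidean distances and the RBF kernel matrix (uppercase K)
    sq_dists = squareform(pdist(X, 'sqeuclidean'))
    K = np.exp(-gamma * sq_dists)

    # Center the kernel matrix
    N = K.shape[0]
    one_n = np.ones((N, N)) / N
    K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)

    # Eigendecomposition; eigh returns eigenvalues in ascending order, so reverse
    eigvals, eigvecs = eigh(K)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

    # Top eigenvectors (the projected training samples) and their eigenvalues
    alphas = np.column_stack([eigvecs[:, i] for i in range(n_components)])
    lambdas = [eigvals[i] for i in range(n_components)]
    return alphas, lambdas

def project_x(x_new, X, gamma, alphas, lambdas):
    """Project a new point x' using lowercase k and the normalized eigenvectors."""
    pair_dist = np.array([np.sum((x_new - row) ** 2) for row in X])
    k = np.exp(-gamma * pair_dist)    # lowercase k: similarities between x' and each x(i)
    return k.dot(alphas / lambdas)    # the normalization by lambda is what my question is about
```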
When we normalize kα with λ and substitute α = Kα / λ (from Kα = λα), we get kα / λ = kKα / λ².
In the book, you used the training sample X[25] (whose projection is alphas[25]) as the new data point. Since this point belongs to the original data, k is simply K[25], the corresponding row of K, so kα equals the corresponding entry of Kα. Thus, if we normalize kα with λ, we get α back:
kα / λ = (Kα)[25] / λ = λα[25] / λ = α[25]
So the result is identical to alphas[25].
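As a concrete check, this is my own experiment, assuming the half-moon data and the gamma value from the chapter (as far as I remember them), and using the sketch of rbf_kernel_pca and project_x above:

```python
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)

x_new = X[25]                 # a sample that is already part of the training data
x_proj = alphas[25]           # its stored projection
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)

print(x_proj, x_reproj)       # these come out essentially the same, as argued above
```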
But does it also work for a data point that does not belong to the original dataset?
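To make this concrete, the following is the kind of call I have in mind; x_unseen is just a made-up point that is not part of X:

```python
x_unseen = np.array([0.5, -0.1])   # an arbitrary point that is not in the dataset
x_unseen_proj = project_x(x_unseen, X, gamma=15, alphas=alphas, lambdas=lambdas)
print(x_unseen_proj)               # is this a valid projection onto the first principal component?
```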
In summary,
(1) I have trouble understanding why we have to normalize the eigenvectors by their eigenvalues when projecting a new data point onto the PC axis in kernel PCA.
(2) I am not sure whether new data that is not included in the original dataset can be projected in the same way.
Best regards,
Dongwoo Hahn