
Neuron Coloring #2

Open
qweazxsd opened this issue May 17, 2023 · 0 comments


qweazxsd commented May 17, 2023

If I'm not mistaken, the neuron is colored by the value of the squished bias (as in line 89 of gym.c). Usually, though, a neuron is colored by its activation, i.e. sigmoid(wx+b). The value of the bias by itself doesn't hold much information; in the context of visualization, whether the neuron fired is far more informative.

The transformation z=wx+b takes the vector x to another vector z, but because the two vectors usually have different dimensions, z lives in an entirely different vector space. Hence the new vector holds no meaningful information to us (the NN can interpret it, that's exactly why it exists). And if z holds no meaningful information, then b certainly does not.

NOTE: It might be useful to think of NNs as tools for reducing the dimensionality of the problem. Like in the video of 3blue1brown, a 28*28 vector which holds the information about the image is transformed by the NN into a 10-element vector which holds very different information: the digit in the image. We as humans can't draw the connection between the two vectors (if we only saw a series of numbers between 0 and 1; of course, if we saw the image we could, and vice versa).

I think another common visualization worth adding is the activation of the neurons in the first hidden layer for every input. For example, in the case of the 3blue1brown video this would show a 2D activation pattern for each neuron. As I mentioned before, after the first transformation the vector no longer holds much information for us, so this visualization is usually only relevant for the first hidden layer.

You mentioned in the video that it is interesting how the NN learns in large steps. I think that if the neurons were colored as I suggested, you could see that in those large steps a neuron suddenly fires or turns off; but because the bias is what is colored, the color changes continuously rather than abruptly, as a neuron would.
