
How do I encourage risk-taking in a feedforward neural network?

I am doing my first real dive into neural networks, and I am trying to construct the classic "number-identification network" without any outside databases. After some initial testing, I have run into the issue that my network drives all of its weights incredibly low, so that every output is nearly zero. I can see why this happens (with a one-hot target, an all-zero output vector already gets 9 of the 10 outputs correct!), but this is obviously behavior I need to discourage.
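
To make the failure mode concrete, here is a toy check (simplified NumPy, not my actual code; the target digit and output value are arbitrary) showing that an all-near-zero output vector already matches 9 of the 10 one-hot targets under thresholding:

```python
import numpy as np

# Toy illustration of the failure mode: with a one-hot target,
# an all-near-zero output vector already matches 9 of the 10 outputs.
target = np.zeros(10)
target[3] = 1.0                      # true digit is 3 (arbitrary choice)

output = np.full(10, 0.01)           # outputs after the weights collapse

matches = np.sum((output > 0.5) == (target > 0.5))
print(f"{matches} of 10 outputs 'correct' by thresholding")   # -> 9 of 10
```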

Does anybody have tips on how I could fix this? I am using a sigmoid activation function and a cross-entropy cost function in a feedforward network, and I am wondering whether there are better choices that would weight the error on the correct digit more heavily.
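
For reference, this is roughly the cost computation I am using (a simplified NumPy sketch rather than my actual code; the collapsed pre-activation value of -4 and the target digit are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(a, y, eps=1e-12):
    # Per-output (Bernoulli) cross-entropy, summed over the 10 outputs.
    a = np.clip(a, eps, 1.0 - eps)   # guard against log(0)
    return -np.sum(y * np.log(a) + (1.0 - y) * np.log(1.0 - a))

z = np.full(10, -4.0)                # collapsed pre-activations -> a ~ 0.018
y = np.zeros(10)
y[3] = 1.0                           # one-hot target, digit 3 (arbitrary)

a = sigmoid(z)
print(cross_entropy(a, y))           # the -log(a) term on the true digit dominates
```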

