Abstract
When humans learn a new concept, they might ignore examples that they cannot make sense of at first, and only later focus on such examples, when they are more useful for learning. We propose incorporating this idea of tunable sensitivity for hard examples in neural network learning, using a new generalization of the cross-entropy gradient step, which can be used in place of the gradient in any gradient-based training method. The generalized gradient is parameterized by a value that controls the sensitivity of the training process to harder training examples. We tested our method on several benchmark datasets. We propose, and corroborate in our experiments, that the optimal level of sensitivity to hard examples is positively correlated with the depth of the network. Moreover, the test prediction error obtained by our method is generally lower than that of the vanilla cross-entropy gradient learner. We therefore conclude that tunable sensitivity can be helpful for neural network learning.
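The abstract does not give the paper's exact parameterization, but the general idea of a sensitivity-controlled gradient can be illustrated with a minimal sketch: scale each example's softmax cross-entropy gradient by its loss raised to a power `k` (a hypothetical knob, not the paper's actual formula). With `k = 0` every example is weighted equally and the vanilla averaged gradient is recovered; larger `k` shifts weight toward harder examples.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def sensitivity_weighted_grad(logits, labels, k=0.0):
    """Illustrative sketch only: per-example cross-entropy gradients,
    reweighted by loss**k (k is a hypothetical sensitivity parameter,
    not the parameterization used in the paper)."""
    n = logits.shape[0]
    p = softmax(logits)
    losses = -np.log(p[np.arange(n), labels] + 1e-12)
    grad = p.copy()
    grad[np.arange(n), labels] -= 1.0      # standard softmax-CE gradient
    w = losses ** k
    w = w / (w.sum() + 1e-12)              # normalized example weights
    return grad * w[:, None]
```

For example, `sensitivity_weighted_grad(logits, labels, k=0.0)` equals the mean vanilla cross-entropy gradient over the batch, while increasing `k` amplifies the contribution of examples the network currently gets wrong.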
| Original language | English |
| --- | --- |
| Pages | 2087-2093 |
| Number of pages | 7 |
| State | Published - 1 Jan 2017 |
| Event | 31st AAAI Conference on Artificial Intelligence, AAAI 2017 - San Francisco, United States. Duration: 4 Feb 2017 → 10 Feb 2017 |
Conference
| Conference | 31st AAAI Conference on Artificial Intelligence, AAAI 2017 |
| --- | --- |
| Country/Territory | United States |
| City | San Francisco |
| Period | 4/02/17 → 10/02/17 |
ASJC Scopus subject areas
- Artificial Intelligence