Abstract
The averaged-perceptron learning algorithm is simple, versatile, and effective. However, in NLP settings it tends to produce very dense solutions, even though much sparser ones are possible. We present a simple modification to the perceptron algorithm that allows it to produce sparser models while remaining accurate and computationally efficient. We test the method on a multiclass classification task, a structured prediction task, and a guided learning task. In all experiments the method produced models about 4-5 times smaller than the averaged perceptron, while remaining just as accurate.
| Original language | English |
| --- | --- |
| Journal | ACL |
| State | Published - 2011 |
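
The abstract does not spell out the sparsifying modification itself, so the following is only a minimal Python sketch of the baseline multiclass averaged perceptron it starts from. The `prune_below` magnitude-truncation step is a generic, hypothetical sparsifier added for illustration, not the authors' method; the function name and parameters are likewise assumptions.

```python
# Minimal sketch: multiclass averaged perceptron with sparse feature dicts.
# The final truncation of small averaged weights is a hypothetical
# sparsification stand-in, NOT the method proposed in the paper.
from collections import defaultdict

def averaged_perceptron(examples, labels, epochs=5, prune_below=0.0):
    """examples: list of feature dicts {feature: value}; labels: gold labels."""
    classes = sorted(set(labels))
    weights = {c: defaultdict(float) for c in classes}  # current weights
    totals = {c: defaultdict(float) for c in classes}   # running sums for averaging
    step = 0
    for _ in range(epochs):
        for feats, gold in zip(examples, labels):
            step += 1
            # Predict the highest-scoring class under the current weights.
            pred = max(classes, key=lambda c: sum(
                weights[c][f] * v for f, v in feats.items()))
            if pred != gold:
                # Standard perceptron update: promote gold, demote prediction.
                for f, v in feats.items():
                    weights[gold][f] += v
                    weights[pred][f] -= v
            # Accumulate current weights for averaging (naive version).
            for c in classes:
                for f, w in weights[c].items():
                    totals[c][f] += w
    # Averaged weights; drop near-zero ones (illustrative pruning only).
    avg = {}
    for c in classes:
        avg[c] = {}
        for f, t in totals[c].items():
            w = t / step
            if abs(w) > prune_below:
                avg[c][f] = w
    return avg
```

Production implementations avoid the per-example accumulation loop by averaging lazily with per-feature timestamps. Note how averaging tends to leave a nonzero weight on every feature ever touched during training; that is the density the abstract refers to.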