Abstract
Dead time losses in neutron detection, caused by both detector and electronics dead time, are a highly nonlinear effect, known to introduce a large bias in physical experiments once the power exceeds a certain threshold. Analytic modeling of dead time losses is a highly complicated task, due to the different nature of the dead time in the different components of the monitoring system (paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. The most basic analytic models for a paralyzing dead time correction assume an uncorrelated source, resulting in an exponential model for the dead time correction. While this model is widely used and very useful for correcting the average count rate at low count rates, it is totally impractical in noise experiments and the so-called Feynman-α experiments. In the present study, a new technique is introduced for dead time corrections, based on imposing increasingly large artificial dead times on the data and extrapolating the resulting losses backward to zero dead time. The method is implemented on neutron noise measurements carried out in the MINERVE reactor, demonstrating high accuracy in restoring the corrected values of the Feynman-Y variance-to-mean ratio.
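For context, the exponential model referred to in the abstract is the classical paralyzing dead time relation for an uncorrelated (Poisson) source; it connects the measured count rate $m$ to the true rate $n$ through the dead time $\tau$, alongside its non-paralyzing counterpart:

$$
m = n\,e^{-n\tau} \quad\text{(paralyzing)}, \qquad m = \frac{n}{1 + n\tau} \quad\text{(non-paralyzing)}.
$$

Inverting the paralyzing relation recovers the true average rate at low count rates, but, as the abstract notes, it does not correct correlated-count statistics such as the Feynman-Y ratio.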
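The following is a minimal sketch of the backward extrapolation idea on synthetic data: increasingly large artificial dead times are imposed on a list of event timestamps, the Feynman-Y value is computed for each, and the trend is extrapolated back to zero dead time. The function names, the Poisson test data, the gate width, and the quadratic fit are illustrative assumptions, not the implementation used in the paper.

```python
# Illustrative sketch of the backward extrapolation method on synthetic data.
# The Poisson source, gate width, and quadratic fit are assumptions for the
# demo; the paper applies the idea to real MINERVE noise measurements.
import numpy as np

rng = np.random.default_rng(0)

def apply_paralyzing_dead_time(times, tau):
    # Paralyzing (extendable) dead time: an event is recorded only if it
    # arrives at least `tau` after the previous *incoming* event, since any
    # event, recorded or not, restarts the dead period.
    gaps = np.diff(times, prepend=-np.inf)
    return times[gaps >= tau]

def feynman_y(times, gate, t_max):
    # Feynman-Y: variance-to-mean ratio of counts in fixed gates, minus one.
    counts, _ = np.histogram(times, bins=np.arange(0.0, t_max, gate))
    return counts.var() / counts.mean() - 1.0

# Synthetic detection timestamps (an uncorrelated source, for simplicity).
t_max = 100.0                                  # measurement length [s]
times = np.sort(rng.uniform(0.0, t_max, 200_000))

# Impose increasing artificial dead times and record the distorted Y values.
taus = np.linspace(1e-6, 20e-6, 10)            # artificial dead times [s]
ys = [feynman_y(apply_paralyzing_dead_time(times, tau), 1e-3, t_max)
      for tau in taus]

# Extrapolate the trend backward to tau = 0 (quadratic fit assumed).
y_at_zero = np.polyval(np.polyfit(taus, ys, deg=2), 0.0)
print(f"Feynman-Y extrapolated to zero dead time: {y_at_zero:.4f}")
```

On real measurements the system's intrinsic dead time is already present in the data, so the extrapolation target and the fitting form are precisely the design questions the paper addresses; the sketch only shows the mechanics of imposing artificial dead times and extrapolating backward.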
| Original language | English |
| --- | --- |
| Pages (from-to) | 229-237 |
| Number of pages | 9 |
| Journal | Journal of Nuclear Science and Technology |
| Volume | 55 |
| Issue number | 2 |
| DOIs | |
| State | Published - 1 Feb 2018 |
Keywords
- Dead time correction
- Feynman-Y method
- backward extrapolation method
- reactor physics
- subcriticality
ASJC Scopus subject areas
- Nuclear and High Energy Physics
- Nuclear Energy and Engineering