Auditory systems of humanoid robots usually acquire the surrounding sound field by means of microphone arrays. These arrays can undergo motion related to the robot's activity. The conventional approach to dealing with this motion is to stop the robot during sound acquisition. Stopping avoids changing the microphone positions during the acquisition and reduces the robot's ego-noise. However, it can interfere with the naturalness of the robot's behaviour. Moreover, the potential performance improvement offered by motion of the sound-acquiring system cannot be attained. This potential is analysed in the present paper. The analysis considers two types of motion: (i) rotation of the robot's head and (ii) limb gestures. The study combines theoretical and numerical-simulation approaches. The results show that head rotation improves the high-frequency performance of the microphone array mounted on the robot's head. This is complemented by limb gestures, which improve the low-frequency performance of the array mounted on the torso and limbs of the robot.
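The intuition behind the motion-based improvement can be illustrated with a minimal sketch (not taken from the paper; the head radius and sampling are assumed values): a microphone mounted at a fixed radius on a rotating head visits many distinct positions during a single acquisition, effectively synthesizing a larger virtual aperture than the static array provides.

```python
import numpy as np

# Assumed illustration: one microphone on a rotating head.
r = 0.1  # head radius in metres (assumed value)

# Quarter-turn of the head, sampled at 8 instants during acquisition.
angles = np.linspace(0.0, np.pi / 2, 8)

# Positions the microphone occupies over the rotation (x, y pairs).
virtual_positions = np.stack([r * np.cos(angles), r * np.sin(angles)], axis=1)

# Static head: 1 spatial sample; rotating head: 8 distinct spatial samples.
print(virtual_positions.shape)  # → (8, 2)
```

The denser spatial sampling of a synthesized aperture is what can benefit array performance, and a larger effective aperture on the torso and limbs plays the analogous role for the low-frequency case.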