Many musical interfaces have adopted the conductor metaphor, allowing users to control the expressive aspects of a performance by imitating a conductor's gestures. In most of them, the rules governing these expressive aspects are predefined, and users must adapt to them. Other works have instead studied conductors' gestures in relation to the orchestra's performance. Following this latter line of work, the goal of this study is to analyze how simple motion capture descriptors can explain the relationship between the loudness of a given performance and the way different subjects move when asked to impersonate its conductor. Twenty-five subjects were asked to impersonate the conductor of three classical music fragments while listening to them. The results of linear regression models with motion capture descriptors as explanatory variables show that, by studying how descriptors correlate with loudness differently across subjects, distinct tendencies can be identified and exploited to design models that better match users' expectations.
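The analysis described above can be sketched in miniature: fit, for each subject separately, a linear regression of loudness on motion descriptors and compare which descriptors carry the explanatory weight. The descriptor names (hand speed, acceleration) and the synthetic data below are illustrative assumptions, not the study's actual pipeline or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 200


def fit_subject(speed, accel, loudness):
    """Fit loudness ~ b0 + b1*speed + b2*accel by ordinary least squares."""
    X = np.column_stack([np.ones(len(speed)), speed, accel])
    coefs, *_ = np.linalg.lstsq(X, loudness, rcond=None)
    pred = X @ coefs
    ss_res = np.sum((loudness - pred) ** 2)
    ss_tot = np.sum((loudness - loudness.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return coefs, r2


# Two synthetic "subjects" whose loudness tracks different descriptors,
# mirroring the finding that descriptor-loudness correlations vary by subject.
speed = rng.random(n_frames)
accel = rng.random(n_frames)
loud_a = 0.2 + 0.8 * speed + 0.05 * rng.standard_normal(n_frames)  # speed-driven
loud_b = 0.1 + 0.7 * accel + 0.05 * rng.standard_normal(n_frames)  # accel-driven

for name, loud in [("subject A", loud_a), ("subject B", loud_b)]:
    coefs, r2 = fit_subject(speed, accel, loud)
    print(name, np.round(coefs, 2), round(r2, 2))
```

Comparing the fitted coefficients across subjects reveals which descriptor each subject's movement ties to loudness, which is the kind of per-subject tendency the study exploits.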