This paper describes a generalized motion-based framework for the generation of large musical control fields from imaging data. The framework is general in the sense that it does not depend on a particular source of sensing data. Real-time images of stage performers, pre-recorded and live video, as well as more exotic data from imaging systems such as thermography, pressure-sensor arrays, etc., can be used as a source of control. Feature points are extracted from the candidate images, from which motion vector fields are calculated. After some processing, these motion vectors are mapped individually to sound synthesis parameters. Suitable synthesis techniques include granular and microsonic algorithms, additive synthesis, and micro-polyphonic orchestration. Implementation details of this framework are discussed, as well as suitable creative and artistic uses and approaches.
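The pipeline summarized above (image → motion vector field → per-vector synthesis parameters) can be sketched in miniature. This is not the paper's implementation: exhaustive block matching stands in for feature-point tracking, and `vector_to_grain_params` is a hypothetical mapping from one motion vector to granular-synthesis controls (amplitude from magnitude, stereo pan from angle), chosen purely for illustration.

```python
import numpy as np

def block_motion_vectors(prev, curr, block=8, search=4):
    """Estimate one motion vector per block of `prev` by exhaustive
    block matching against `curr` (a stand-in for feature-point flow)."""
    prev = prev.astype(np.int32)
    curr = curr.astype(np.int32)
    h, w = prev.shape
    vectors = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = prev[y:y + block, x:x + block]
            best_err, best = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    err = np.abs(curr[yy:yy + block, xx:xx + block] - ref).sum()
                    if err < best_err:
                        best_err, best = err, (dx, dy)
            vectors[(x, y)] = best
    return vectors

def vector_to_grain_params(dx, dy, max_mag=8.0):
    """Hypothetical per-vector mapping: magnitude -> grain amplitude,
    direction -> stereo pan in [0, 1]."""
    mag = float(np.hypot(dx, dy))
    amp = min(1.0, mag / max_mag)
    pan = (np.arctan2(dy, dx) / np.pi + 1.0) / 2.0
    return {"amp": amp, "pan": pan}
```

For example, two frames in which a bright square moves two pixels to the right yield a (2, 0) vector for the block covering the square; that vector would then drive one grain stream. A real deployment would substitute optical flow on tracked feature points and route the parameters to an actual granular synthesizer.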