Controlling the timbre generated by an audio synthesizer in a goal-oriented way requires a profound understanding of the synthesizer's manifold structural parameters. Shaping timbre expressively to communicate emotional affect, in particular, requires expertise. Novices may therefore be unable to control timbre adequately to articulate a wealth of affects musically. In this context, the focus of this paper is the development of a model that can represent a relationship between timbre and an expected emotional affect. The results of the evaluation of the presented model are encouraging, which supports its use in steering or augmenting the control of audio synthesis. We explicitly envision this paper as a contribution to the field of Synthesis by Analysis in the broader sense, albeit being potentially applicable to other related domains.