In this paper an emotionally justified approach to controlling sound with physiology is presented. Measurements of listeners’ physiology, taken while they listen to recorded music of their own choosing, are used to build a regression model that predicts features extracted from the music from the listeners’ physiological response patterns. These predictions can serve as a control signal to drive musical composition and the synthesis of new sounds; an approach based on concatenative sound synthesis is suggested. An evaluation study was conducted to test the feasibility of the model: a multiple linear regression model and an artificial neural network model were compared against a constant regressor, or dummy model. The dummy model outperformed the other models in prediction accuracy, but the artificial neural network model achieved significant correlations between predictions and target values for many acoustic features.