Real-time audio analysis has great potential for creating musically responsive applications in live performance. There are many examples of such use, including sound-responsive visualisations, adaptive audio effects and machine musicianship. However, at present, using audio analysis algorithms in live performance requires detailed knowledge of the algorithms themselves, programming skill, or both. Those wishing to use audio analysis in live performance may have neither of these as their strengths; rather, they may wish to focus upon systems that respond to audio analysis data, such as visual projections or sound generators. In response, this paper introduces the Sound Analyser, an audio plug-in that allows users to a) select a custom set of audio analyses to be performed in real-time and b) send that information via OSC so that it can easily be used by other systems to develop responsive applications for live performances and installations. A description of the system architecture and the audio analysis algorithms implemented in the plug-in is presented, before moving on to two case studies where the plug-in has been used in the field with artists.