Speech is the most natural mode of communication, and yet attempts to build systems that support robust, habitable conversations between a human and a machine have so far had only limited success. A key reason is that current systems treat speech input as equivalent to keyboard or mouse input, with behaviour controlled by predefined scripts that try to anticipate what the user will say and act accordingly. But speech recognisers make many errors and humans are not predictable; the result is systems which are difficult to design and fragile in use.
Statistical methods for spoken dialogue management take a radically different view. Dialogue is treated as the problem of inferring a user's intentions from what is said. The dialogue is modelled as a probabilistic network, and the input speech acts are observations that provide evidence for Bayesian inference. The result is a system which is much more robust to speech recognition errors and for which a dialogue strategy can be learned automatically using reinforcement learning. The thesis describes the architecture, the algorithms needed for fast real-time inference over very large networks, model parameter estimation, and policy optimisation.
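The core idea of maintaining a belief over user intentions, rather than committing to a single recognition hypothesis, can be sketched as a simple Bayesian update. The sketch below is illustrative only and is not taken from the thesis: the intent names and likelihood values are invented, and a real system would operate over a large factored probabilistic network rather than a flat distribution.

```python
def update_belief(belief, likelihood):
    """One Bayesian update: b'(s) is proportional to P(o | s) * b(s).

    belief:     prior distribution over hypothesised user intentions
    likelihood: P(observation | intention) for the latest noisy
                speech-act observation from the recogniser
    """
    posterior = {s: likelihood.get(s, 0.0) * p for s, p in belief.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# Uniform prior over two hypothetical user goals (illustrative).
belief = {"book_flight": 0.5, "book_hotel": 0.5}

# A noisy recogniser output that weakly supports "book_flight":
# these likelihood values are assumptions for the example.
obs_likelihood = {"book_flight": 0.7, "book_hotel": 0.3}

belief = update_belief(belief, obs_likelihood)
# The belief now leans toward "book_flight" without discarding the
# alternative, so a misrecognition can be corrected by later evidence.
```

Because each observation only shifts probability mass rather than replacing the state outright, recognition errors degrade the belief gracefully instead of derailing the dialogue, which is the robustness property the statistical approach exploits.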
This ground-breaking work will be of interest both to practitioners in spoken dialogue systems and to cognitive scientists interested in models of human behaviour.