English for Spoken Programming
Existing commercial and open source speech recognition engines do not come with pre-built models that lend themselves to natural input of programming languages. Prior approaches to this problem have largely concentrated on developing spoken syntax for existing programming languages. In this paper, we instead describe a new programming language and environment that is being developed to use “closer to English” syntax. In addition to providing a more intuitive spoken syntax for users, this allows existing speech recognizers to achieve improved accuracy using their pre-built English models. Our basic recognizer is built from a standard context-free grammar together with the CMU Sphinx pre-trained English models. To improve its accuracy, we modify the language model during runtime by factoring in additional context derived from the program text, such as variable scoping and type inference. While still a work in progress, we anticipate that this will yield measurable improvements in speed and accuracy of spoken program dictation.
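To make the runtime context idea concrete, consider a minimal sketch of re-ranking recognizer hypotheses against the identifiers currently in scope. This is purely illustrative: the `Hypothesis` class, `rescore` function, and flat multiplicative `boost` are assumptions for exposition, not the actual recognizer interface or the paper's scoring model.

```python
# Illustrative sketch: bias recognizer output toward in-scope identifiers.
# Hypothesis, rescore, and boost are hypothetical names, not a real API.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str       # candidate transcription of a spoken token
    score: float    # combined acoustic/language-model score

def rescore(hypotheses, in_scope, boost=2.0):
    """Multiply the score of any hypothesis that names a visible identifier."""
    return sorted(
        (Hypothesis(h.text, h.score * (boost if h.text in in_scope else 1.0))
         for h in hypotheses),
        key=lambda h: h.score,
        reverse=True,
    )

# "count" is a declared variable, so it outranks the homophone "Kant",
# even though the raw recognizer preferred "Kant".
hyps = [Hypothesis("Kant", 0.6), Hypothesis("count", 0.5)]
best = rescore(hyps, in_scope={"count", "total"})[0]
```

In the system described above, this kind of contextual bias would be folded into the language model itself rather than applied as a post-hoc re-ranking pass, but the effect on ambiguous tokens is the same.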
The dominant paradigm for programming a computer today is text entry via keyboard and mouse. Keyboard-based entry has served us well for decades, but it is not ideal in all situations. People may wish for usable alternative input methods for many reasons, ranging from disability or injury to a preference for more natural input. For example, a person with a wrist or hand injury may find herself entirely unable to type, yet with no impairment to her thinking abilities or desire to program. What a frustrating combination!