Large language models (LLMs) are an artificial intelligence (AI) technology that promises to revolutionize programming by translating a user's informal intent, expressed in natural language, into computer code. This technology has the potential to democratize programming and allow anyone, regardless of skill, to generate code from a simple task description. However, LLMs offer no guarantees about the quality of the code they generate, or about whether the generated code actually does what the user intended. As LLMs grow in popularity, it is thus crucial to build formal techniques that can produce code that provably matches the user's intent and that can convince the user the code will do what is expected of it. Recent work has proposed grammar-constrained decoding as a way to enforce that the output generated by an LLM belongs to the language of a user-provided formal grammar. This project will contribute new grammar-decoding techniques that can align LLMs with formal specifications and enable the efficient generation of high-quality code.

To this end, the project will integrate program analysis and synthesis techniques from formal methods with structured-prediction methods from natural-language processing. Concretely, the project will (1) develop grammar-aligned decoding, a suite of decoding algorithms that capture the LLM's underlying distribution more faithfully than existing grammar-constrained decoding methods; (2) adapt program analysis and synthesis techniques to encode a variety of formal specifications as grammars that grammar-aligned decoding can handle; and (3) develop interactive techniques that help users formalize their intent as specifications.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
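
To make the notion of grammar-constrained decoding above concrete, the following is a minimal, self-contained sketch, not an artifact of this project: a stand-in next-token distribution plays the role of the LLM, and an invented toy grammar (balanced parentheses) plays the role of the user-provided formal grammar. At each step, tokens the grammar forbids are masked out of the model's next-token distribution, the remainder is renormalized, and a token is sampled.

```python
import random

random.seed(0)

def lm_next_token_probs(prefix):
    # Stand-in for an LLM's next-token distribution over a tiny vocabulary.
    # A real model would condition on `prefix`; this sketch keeps it fixed
    # so the example runs without any model dependencies.
    return {"(": 0.5, ")": 0.3, "<eos>": 0.2}

def grammar_allows(prefix, token):
    # Toy grammar for illustration: the language of balanced parentheses.
    depth = prefix.count("(") - prefix.count(")")
    if token == "(":
        return True
    if token == ")":
        return depth > 0
    return depth == 0  # <eos> is legal only once every "(" is closed

def constrained_decode(max_steps=20):
    prefix = []
    for _ in range(max_steps):
        probs = lm_next_token_probs(prefix)
        # Grammar-constrained decoding: mask tokens the grammar forbids,
        # renormalize what remains, and sample the next token from that.
        masked = {t: p for t, p in probs.items() if grammar_allows(prefix, t)}
        total = sum(masked.values())
        tokens = list(masked)
        weights = [masked[t] / total for t in tokens]
        token = random.choices(tokens, weights=weights)[0]
        if token == "<eos>":
            break
        prefix.append(token)
    # Sketch caveat: the step cap can cut off an unfinished string; a real
    # implementation ensures the grammar can always reach completion.
    return "".join(prefix)

print(constrained_decode())  # prints a string of balanced parentheses
```

Note that masking and renormalizing locally at each step, as this sketch does, changes the distribution over complete outputs relative to the LLM's own conditional distribution over grammatical strings; designing decoding algorithms that avoid this distortion is precisely the aim of objective (1), grammar-aligned decoding.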