The accelerating demand for efficient and scalable software development has catalyzed the exploration of AI-driven solutions for automating complex programming tasks. This chapter presents a comprehensive study of transformer-based frameworks for automated code generation and software optimization. We examine the ability of these models to translate high-level natural language descriptions and formal specifications into executable, high-quality code. The chapter introduces a novel transformer-based methodology that integrates a structure-aware encoder with a dedicated optimization module to enhance both code generation accuracy and runtime performance. We evaluate the proposed model on several leading benchmarks, including HumanEval, MBPP, and CodeXGLUE, demonstrating significant improvements over state-of-the-art baselines such as CodeBERT, GraphCodeBERT, and AlphaCode. Our findings reveal that the proposed framework excels at capturing programming intent, generating context-aware code, and performing automated refactoring that optimizes for execution speed and memory efficiency. The results and discussion section provides an in-depth analysis of performance metrics, error distributions, and the trade-offs between model size and accuracy. By synthesizing current advancements and addressing existing limitations, this work contributes to the evolving field of code intelligence and highlights future directions for developing more robust, generalizable, and trustworthy AI systems for software development.
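To make the notion of a "structure-aware encoder" concrete, the toy sketch below augments plain token features of a code snippet with features derived from its abstract syntax tree. This is purely illustrative: the class name `StructureAwareEncoder`, the `encode` method, and the count-based features are assumptions of this sketch, not the chapter's actual model, which would use learned transformer embeddings rather than frequency counts.

```python
# Illustrative toy only: pairs a token-level view with an AST-level view
# of the same source code, mimicking the idea of a structure-aware encoder.
import ast
from collections import Counter


class StructureAwareEncoder:
    """Combine token-level and AST-level features for a Python snippet.

    Hypothetical stand-in for a learned encoder: real models would embed
    subword tokens and AST nodes into continuous vectors instead of counting.
    """

    def encode(self, source: str) -> dict:
        # Token view: bag of whitespace-separated tokens
        # (stand-in for subword embeddings).
        token_counts = Counter(source.split())
        # Structure view: counts of AST node types
        # (stand-in for a tree/graph encoder over the parse structure).
        tree = ast.parse(source)
        node_counts = Counter(type(node).__name__ for node in ast.walk(tree))
        return {"tokens": token_counts, "structure": node_counts}


encoder = StructureAwareEncoder()
features = encoder.encode("def add(a, b):\n    return a + b")
```

The structure view distinguishes programs that share surface tokens but differ in control flow, which is the kind of signal a structure-aware encoder is meant to exploit.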
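Benchmarks such as HumanEval are conventionally scored with the pass@k metric: the probability that at least one of k sampled completions passes the problem's unit tests. The standard unbiased estimator, given n samples of which c are correct, is 1 - C(n-c, k) / C(n, k); a minimal implementation follows (the function name is ours, but the formula is the standard one).

```python
# Unbiased pass@k estimator commonly used with HumanEval-style benchmarks:
# pass@k = 1 - C(n - c, k) / C(n, k), where n is the number of sampled
# completions and c is the number that pass the unit tests.
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k draws (without replacement)
    from n samples, c of them correct, is a correct sample."""
    if n - c < k:
        # Fewer incorrect samples than k: a correct one is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 10 samples of which 5 pass, pass@1 is 0.5, while pass@10 is 1.0, since drawing all samples guarantees hitting a correct one.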