LLM-Powered Declarative Blueprint Synthesis for Enterprise Back-End Workflows

Authors

  • Vinopriya Vijayaboopathy, CVS Health, USA
  • Tejas Dhanorkar, Capgemini, USA

Keywords

LLM, code generation, blueprint synthesis, Kubernetes, Terraform

Abstract

This research explores an LLM-powered declarative blueprint synthesis architecture for enterprise back-end infrastructure and automation. A code-generation agent, trained on domain-specific corpora and refined with reinforcement learning from human feedback (RLHF), translates high-level business rules into production-ready Kubernetes-native manifests, Terraform modules, and policy-as-code constraints.
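The pipeline the abstract describes can be sketched in miniature: a structured business rule goes in, a Kubernetes-native manifest comes out, and a policy-as-code check gates the result before deployment. The sketch below is illustrative only; a deterministic template stands in for the RLHF-tuned code-generation model, and the rule fields (`service`, `image`, `min_instances`, `cpu_limit`) and the high-availability policy are hypothetical examples, not part of the paper's system.

```python
import json

def synthesize_deployment(rule: dict) -> dict:
    """Map a high-level business rule to a Kubernetes Deployment manifest.

    In the described system an LLM generates this structure; this fixed
    template is a placeholder so the pipeline shape stays visible.
    """
    name = rule["service"]
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": rule.get("min_instances", 1),
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": rule["image"],
                        "resources": {
                            "limits": {"cpu": rule.get("cpu_limit", "500m")},
                        },
                    }],
                },
            },
        },
    }

def check_policy(manifest: dict) -> list:
    """Policy-as-code style gate: return violations instead of deploying them."""
    violations = []
    if manifest["spec"]["replicas"] < 2:
        violations.append("high-availability policy requires >= 2 replicas")
    return violations

if __name__ == "__main__":
    rule = {
        "service": "claims-api",
        "image": "registry.example.com/claims-api:1.4",
        "min_instances": 3,
    }
    manifest = synthesize_deployment(rule)
    print(json.dumps(manifest, indent=2))
    print("violations:", check_policy(manifest))  # violations: []
```

The same shape generalizes to the other targets the abstract lists: swapping the template for a Terraform module emitter, or the replica check for an OPA/Sentinel-style rule set, changes only the synthesis and gating functions, not the rule-in, artifact-out flow.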




Published

19-05-2021

How to Cite

[1]
Vinopriya Vijayaboopathy and Tejas Dhanorkar, “LLM-Powered Declarative Blueprint Synthesis for Enterprise Back-End Workflows”, American J Auton Syst Robot Eng, vol. 1, pp. 617–655, May 2021, Accessed: Dec. 12, 2025. [Online]. Available: https://ajasre.org/index.php/publication/article/view/74