Large Language Models have opened up a fascinating new avenue of development work, one that's becoming increasingly popular as businesses experiment with LLM integrations and work out what benefits they can bring. It's an exciting space where traditional software engineering meets cutting-edge AI capabilities, creating opportunities that simply didn't exist a few years ago.
I worked with a client to create a GPT-based RAG system operating across their internal wikis, codebases, release notes and commit comments, for use by their dev team. The system gives developers immediate answers about parts of the system they're unfamiliar with, without their having to interrupt other developers. They can ask natural language questions like "Before I change this service, what other systems might depend on it?" or "Which version of our product was the workflow module introduced in?" and get contextual answers drawn from the organisation's collective knowledge base. This helps with onboarding, reduces the bottleneck of overstretched domain experts and has had the unexpected upside of improving story sizing and sprint planning for the organisation.
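The core of a system like this is the retrieval step: score the indexed documents against the question, then hand the best matches to the model as context. A minimal sketch of that shape, with naive bag-of-words scoring standing in for real vector embeddings, and all document names and text invented for illustration:

```python
import math
from collections import Counter

def bow_vector(text):
    """Naive bag-of-words term counts; a real system would use embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(count * b.get(term, 0) for term, count in a.items())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question, documents, k=2):
    """Rank wiki pages, release notes etc. by similarity to the question."""
    q = bow_vector(question)
    ranked = sorted(documents,
                    key=lambda d: cosine(q, bow_vector(d["text"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(question, documents, k=2):
    """Assemble the context block the LLM is asked to answer from."""
    context = "\n---\n".join(d["text"] for d in retrieve(question, documents, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Toy corpus standing in for the client's wikis and release notes.
docs = [
    {"source": "release-notes", "text": "Version 4.2 introduced the workflow module"},
    {"source": "wiki", "text": "The billing service depends on the auth service"},
]
print(build_prompt("Which version introduced the workflow module?", docs, k=1))
```

In production the moving parts change (an embedding model replaces `bow_vector`, a vector store replaces the sort), but the retrieve-then-prompt shape stays the same.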
In my own freelance work, I've implemented agentic coders and make good use of them. Each runs in a Dockerised Linux environment, authenticated against my Azure DevOps organisation, with a terminal-based Claude AI client. I can spin up a new container and feed it a spec, and it will clone a repo, create a feature branch, implement the change, commit and push the changes and create a PR for my review. Whilst I'm yet to be convinced by what some of the big tech CEOs are claiming about the oncoming onslaught of AGI and superintelligence, I've got to say I find agentic coding pretty nifty.
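The workflow inside the container boils down to a fixed sequence of commands wrapped around the AI step. A sketch of that plan, composed but deliberately not executed here; the repo URL, branch name, spec file and the exact Claude and Azure DevOps CLI invocations are illustrative assumptions, not my actual setup:

```python
def agent_plan(repo_url, branch, spec_file):
    """Compose the shell steps a containerised coding agent would run.

    Returns the commands as strings rather than executing them, so the
    sequence can be inspected. CLI flags shown are illustrative.
    """
    return [
        f"git clone {repo_url} workspace",
        f"git -C workspace checkout -b {branch}",
        # Hypothetical invocation of a terminal-based Claude client.
        f"claude --print 'Implement the change described in {spec_file}'",
        "git -C workspace add -A",
        "git -C workspace commit -m 'Implement spec'",
        f"git -C workspace push -u origin {branch}",
        # Azure DevOps CLI extension; target branch assumed to be main.
        f"az repos pr create --source-branch {branch} --target-branch main",
    ]

plan = agent_plan("https://dev.azure.com/example/project/_git/app",
                  "feature/spec-123", "spec.md")
for step in plan:
    print(step)
```

Everything except the Claude step is deterministic plumbing, which is what makes the pattern practical: the AI does the implementation, and ordinary git and CLI tooling does the rest.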
This is very new technology that seems to change weekly. Some of it is hype, but there's undeniable value in what we're already seeing from its application.