How OpenAI is using AI agents internally to build software faster

Sia
0 replies
Tinkerers and reverse engineers are starting to piece together how OpenAI uses AI internally to significantly improve engineering throughput. Here is what I think will eventually be revealed:

1. OpenAI, like any other company, likes to eat its own dog food, and software engineering is one of the best proof-of-concept cases for LLMs. So in the first step, OpenAI trains its most advanced models to understand the sub-technologies of its systems, such as Python, C++, and CUDA.

2. In the second step, they train the LLMs on the subcomponents of their system, i.e. the systems they have designed themselves.

3. In the third step, they create a trial-and-error pipeline to test the software that the LLM agents from steps 1 and 2 produce.

This means they can build and test systems much faster: not only do they remove the manual labour of typing the changes, they also significantly reduce the need to explain the system to the model. When I give Anthropic's Claude a task, I have to be very meticulous in my description, and where I have significant legacy code, I have to find a way of generalising my request to Claude. If I had a RAG pipeline built over the legacy code, though, I could just tell it: "In the invoicing system, could you add issue date to the output PDF?"
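To make step 3 concrete, here is a minimal sketch of what such a trial-and-error pipeline could look like. Everything here is hypothetical: `propose_fix` is a stand-in for an LLM call, and the toy `add` function stands in for real generated code. The only point is the loop shape: generate, run the tests in isolation, feed failures back.

```python
import subprocess
import sys
import tempfile

def propose_fix(task, feedback):
    """Stand-in for an LLM call. A real pipeline would prompt a model
    with the task plus the last test failure; this toy version just
    returns a buggy attempt first, then a fixed one."""
    if feedback is None:
        return "def add(a, b):\n    return a - b\n"  # deliberately wrong
    return "def add(a, b):\n    return a + b\n"

def run_tests(candidate):
    """Run the candidate against a tiny assertion in a subprocess;
    return (passed, stderr) so failures can be fed back to the model."""
    test = candidate + "\nassert add(2, 3) == 5\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(test)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stderr

def trial_and_error(task, max_rounds=3):
    """Propose code, test it, and retry with the error as feedback."""
    feedback = None
    for _ in range(max_rounds):
        candidate = propose_fix(task, feedback)
        passed, feedback = run_tests(candidate)
        if passed:
            return candidate
    return None
```

The key design point is that the failure output becomes the next prompt, so the human never has to read or retype the intermediate broken versions.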
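And a toy sketch of the RAG idea at the end: index the legacy codebase, then route a plain-English request to the relevant files before the model ever sees it. The file names and contents below are invented for illustration, and the scoring is naive keyword overlap where a real system would use embeddings.

```python
import re
from collections import Counter

# Hypothetical "legacy codebase": paths mapped to source text.
LEGACY = {
    "invoicing/pdf_renderer.py": "def render_invoice_pdf(invoice): ...",
    "invoicing/models.py": "class Invoice: issue_date = None",
    "shipping/labels.py": "def print_shipping_label(order): ...",
}

def tokenize(text):
    # Lowercase word split; underscores separate tokens too.
    return re.findall(r"[a-z]+", text.lower())

def retrieve(query, corpus, k=2):
    """Rank files by how many query terms they share with the file's
    path and contents; embeddings would replace this in practice."""
    q = Counter(tokenize(query))
    scores = {}
    for path, src in corpus.items():
        doc = Counter(tokenize(path + " " + src))
        scores[path] = sum(min(q[t], doc[t]) for t in q)
    return sorted(corpus, key=scores.get, reverse=True)[:k]
```

With this in place, "In the invoicing system, could you add issue date to the output PDF?" pulls up the invoicing files and skips the shipping code, so the request needs no hand-written context about where the relevant code lives.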