Software Engineer and AI
Ally or adversary? Large language models, ChatGPT Plugins, GitHub Copilot and Replit Ghostwriter
If you are new to Behind the Mutex, please check out the intro post BEGIN;.
After the launch of GPT-4, the latest large language model from OpenAI, there was an avalanche of news and tweets covering all the new capabilities and use cases people were coming up with. As with the previous release of GPT-3.5-Turbo, a.k.a. ChatGPT, it was quite overwhelming to separate the signal from the noise.
But some people, realizing the current limitations of LLMs, started contemplating advanced applications driven by those models. With neither memory nor online learning, LLMs on their own have limited uses.
Like many others, I was imagining an execution engine that would be driven by an LLM, have long- and short-term memory and a queue, and would to some extent resemble relatively modern computer architectures. This topic deserves a separate long post covering the most recent experiments and attempts to implement such engines, and I will definitely write one soon.
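To make the idea a bit more concrete, here is a minimal, purely illustrative sketch of such a loop in Python. Everything in it is hypothetical: call_llm is a stub standing in for a real model call, and the "memories" are plain Python containers rather than an actual vector store or database.

```python
from collections import deque

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g. a chat completion request).
    # Swap in an actual client here; this stub just echoes the prompt tail.
    return f"(model output for: {prompt[-40:]!r})"

class Agent:
    """Toy LLM-driven execution loop: a task queue plus two kinds of memory."""

    def __init__(self, goal: str):
        self.goal = goal
        self.tasks = deque([goal])   # work queue of pending tasks
        self.short_term = []         # recent results fed back into each prompt
        self.long_term = {}          # stand-in for a persistent store (e.g. a vector DB)

    def step(self) -> None:
        task = self.tasks.popleft()
        context = "\n".join(self.short_term[-5:])  # only the freshest notes fit the prompt
        result = call_llm(f"Goal: {self.goal}\nContext: {context}\nTask: {task}")
        self.short_term.append(f"{task} -> {result}")
        self.long_term[task] = result
        # A fuller version would parse `result` for follow-up tasks and push them:
        # self.tasks.extend(parse_new_tasks(result))

    def run(self) -> None:
        while self.tasks:
            self.step()

Agent("summarize yesterday's error logs").run()
```

The interesting design questions all hide in the comments: how to decide what goes into long-term memory, how to squeeze the relevant slice of it back into a prompt, and how to turn free-form model output into new tasks on the queue.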
And just a few moments later, OpenAI announced their ChatGPT Plugins. For me it was a bit intimidating at the time. You just had an idea of how one could potentially build an execution engine driven by an LLM, and here it is. The smartest and brightest minds, backed by substantial funding, roll out something this profound while others are only starting to think about it. Moreover, by that time OpenAI was probably way ahead and might have been close to finishing training their next big model, considering the likely lag between finishing their previous products and releasing them to the public. You might have noticed that many companies announced integration of their products with GPT-4 on the day the model was released.
While reading the announcement, one paragraph in particular caught my attention:
Plugins will likely have wide-ranging societal implications. For example, we recently released a working paper which found that language models with access to tools will likely have much greater economic impacts than those without, and more generally, in line with other researchers’ findings, we expect the current wave of AI technologies to have a big effect on the pace of job transformation, displacement, and creation. We are eager to collaborate with external researchers and our customers to study these impacts.
It felt a bit surreal to read, as the paragraph claimed that the so-called Plugins would have significant effects on jobs and society in general. Usually you think of a plugin as a small add-on that makes a given product slightly more useful. But here Plugins are meant to change our lives in a big way.
Now, with Plugins, partially inspired by LangChain, and many other experimental projects like AutoGPT, it is becoming apparent that we will indeed witness those transformations soon. Many product teams are working hard to add these new capabilities to their solutions, whether that is semantic search over their own data or more sophisticated LLM-based workflows.
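The semantic-search flavor of these integrations usually boils down to embeddings plus a similarity ranking. Below is a rough, self-contained sketch; the embed function is a deliberately naive stand-in (letter counts, not a real embedding model), so treat it as an illustration of the shape of the code rather than a working search engine.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model (e.g. a hosted embeddings API).
    # This toy version just counts letter frequencies so the sketch runs;
    # it is NOT a semantic embedding.
    counts = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            counts[ord(ch) - ord("a")] += 1
    norm = np.linalg.norm(counts)
    return counts / norm if norm else counts

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_search(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Rank documents by how close their embeddings are to the query embedding."""
    query_vec = embed(query)
    scored = [(cosine_similarity(query_vec, embed(doc)), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

docs = ["How to rotate API keys", "Deploying with Docker", "Rotating log files"]
print(semantic_search("api key rotation", docs, top_k=2))
```

In a production setup the documents would be embedded once, stored in a vector index, and only the query embedded at request time; the ranking logic stays essentially the same.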
Here come the big questions: How will all this affect software engineering jobs? Should you be worried?
Of course, there are active experiments to build LLM-driven agents that write code and tests, then debug and improve them, but for now they are about as limited as their underlying LLMs. There are lots of very intricate projects and codebases implementing complex distributed systems that even humans currently have a hard time understanding without really deep analysis and knowledge of the underlying technologies, algorithms and protocols.
A more realistic, relatively short-term scenario is a substantial acceleration of software development by applying AI, and LLMs in particular, to your favorite daily engineering practices. There is a notion of 10x engineers, who may be an order of magnitude more productive than other engineers. Well, it seems fair to assume that AI will be able to boost engineers' productivity to similar new levels. A 1x engineer who decides to apply AI in their daily work can now compete with a fellow 10xer who ignores it.


So, the short answer to the big question: well, no, you shouldn't, really. Worrying never helps.
Instead, let’s embrace these new opportunities. You don’t need to toss everything you’ve been working on right away and start experimenting with LLM prompts. A possible immediate step is to incorporate existing productivity-boosting products like GitHub Copilot and Replit Ghostwriter into your tooling.
I’ve been using GitHub Copilot for months now, and I must say it was a really good decision. I was skeptical at first, but the product turned out to be so great that I don’t really want to go back to not using such a tool. I highly recommend it to everyone. For the price of two cups of coffee you get, at the very least, the ultimate boilerplate killer. In a separate post I will share how exactly Copilot helps me and how it integrates with my old rusty Vim.
The recent announcements from GitHub were even more exciting. Copilot X, with its conversational interface to your codebase, semantic search over the documentation for your tools and packages, automatic pull request descriptions, and innovations around the CLI, looks like the next major step toward the order-of-magnitude productivity boost mentioned above. If you haven’t gotten access yet, consider joining the waitlists for the features that interest you most.