Last month the company I work for purchased a subscription to Augment Code for all its developers. Augment Code is an AI coding engine that is broadly similar to Claude Code, which I played around with a few months ago. You can read the linked post, but the summary is that I came away mostly skeptical about AI coding. However, I am not a luddite, and am willing to learn new tools and try things again, so I installed the Visual Studio Code Augment plugin and have been giving AI coding another shot.
What I've learned is that AI coding agents are more useful than I previously gave them credit for, provided you give them small, well-defined jobs. Asking one to do too much, which is perhaps what I did in my earlier blog post, is not (yet) what it's good at. Augment Code has a few modes. There is a chat box in which you can converse with the AI, asking it questions or giving it tasks to complete. The agent will also offer autocomplete suggestions as you type new code, which I'd say is mostly helpful, but not always. Sometimes the suggestions are just plain wrong, but a few times they have been subtly wrong, which is more dangerous.
Here are some example tasks that Augment Code has been at least 90% successful at:
- In one of my Python projects, I asked it to create a file that tries to import all the packages used in the project, and output which packages do and do not import successfully. This project uses a few custom & private packages I keep elsewhere, so a requirements.txt file doesn't work with pip. Having a file I can run to quickly check that I've installed all the packages is useful. It did a pretty good job of this, but it did miss one import from one file, probably because the import wasn't near the top of the file (which is an anti-pattern, but it's there for reasons)
- It is quite good at adding type hinting to Python projects. You can ask it to "add type hinting to all functions in all Python files in this directory" and it does it
- I have a PyO3 project that it successfully threaded/parallelized using Rayon. I had to be very specific about how the inputs and outputs would change, but with that it did a good job. It wasn't perfect. Instead of using lightweight vector slices, it was creating new vectors for each chunk of parallel work, which involves copying memory. When I suggested a change, it did a good job fixing that oversight
- I am an unapologetic user of Mercurial. The rest of the world uses Git. Kind of like Mac and Windows, Mercurial and Git are 99% the same in what you can do with them, but they differ in methods and style. In fact, there is a Mercurial plugin that allows perfect 1:1 interoperability between the two, which I use. Sometimes I don't want to bother installing Mercurial on a temporary virtual instance that already has Git installed, and I want to do something quick in Git. I've asked the AI agent to translate a Mercurial command to Git and it's done a fine job
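The import-checker task from the first bullet is simple enough to sketch. This is not the file the agent generated, just a minimal version of the idea; the `PACKAGES` list is a hypothetical stand-in for whatever the agent scraped out of the project's import statements.

```python
import importlib

# Hypothetical dependency list; in the real project this was gathered
# from the import statements across all the source files.
PACKAGES = ["json", "math", "some_private_package"]

def check_imports(packages):
    """Try to import each package; return (ok, failed) lists of names."""
    ok, failed = [], []
    for name in packages:
        try:
            importlib.import_module(name)
            ok.append(name)
        except ImportError:
            failed.append(name)
    return ok, failed

if __name__ == "__main__":
    ok, failed = check_imports(PACKAGES)
    print("Imported OK:   ", ", ".join(ok) or "(none)")
    print("Failed to import:", ", ".join(failed) or "(none)")
```

Running this in a fresh virtual environment tells you immediately which private packages still need installing, which is exactly what pip can't do for dependencies it can't see.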
It appears that my company has a (grandfathered) $50/mo/user plan, which has jumped to $60/mo/user for new purchases. I would say it has saved me enough time to justify that price. The real question is how much the service actually costs to provide. The AI industry is spending so much money that "to recoup their existing and announced investments, AI companies will have to bring in $2 trillion (every 2-3 years), more than the combined revenue of Amazon, Google, Microsoft, Apple, Nvidia and Meta." It feels like the early days of Uber, where fares were subsidized by venture capital, and once most competitors were vanquished, prices went up. To reach $2 trillion in revenue, how high would prices have to go? There are currently a bit over 8 billion humans on earth, so bringing in $2 trillion over 3 years means the whole AI industry would need to take in about $80 per year from each and every person on earth. That is a ridiculous number and will not be reached any time soon.
My opinion is that AI does have some value, but not nearly as much as it costs in real terms. I'll use it if it works for me, but I won't rely on it.