0x01D - AI Coding ⌨️
We're almost at 4k subscribers! Thanks so much to everyone who's been engaging and to our sponsors for their support - ❤️.
- PostHog: want to improve your product skills? PostHog has created a newsletter for you; check it out here.
- WorkOS - the modern identity platform for B2B SaaS. Ship SSO, SCIM, RBAC in minutes.
AI Coding
I also like to call them AI IDEs or AI assistants.
- Hands-on developers
TL;DR:
- Problem: Coding manually is painstakingly slow and inefficient.
- Solution: Innovative ways to integrate AI during development.
- In Sum: Auto-completions and code generation in the IDE are awesome quality-of-life improvements, and they are just getting better.
How does it work?💡
At the most basic level, AI coding tools generate code: they send your code or prompt to an LLM, which returns new code.
Unless your codebase is small enough to fit in the context window, most of these AI IDEs use retrieval-augmented generation (RAG), indexing your codebase and adding only the relevant code segments to the context. Recently I’ve been seeing new, innovative ways to integrate LLMs even further, like:
- Full codebase editing (multi-file edits basically)
- Better code generation from scratch (planning and testing)
- Faster code generation
- Voice controlled changes
- Integration into the CI/CD stage
- Testing, learning to code…
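The retrieval step described above can be sketched in a few lines. This is a toy illustration, not any specific tool's pipeline: the bag-of-words `embed` below is a hypothetical stand-in for a real embedding model, chosen so the example stays dependency-free.

```python
# Toy sketch of the retrieval step an AI IDE might run before calling
# an LLM. Real tools use learned embeddings; a bag-of-words vector
# stands in here so the example has no dependencies.
import math
import re
from collections import Counter

def embed(text):
    # Hypothetical stand-in for a real embedding model.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank code chunks by similarity to the task and keep the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, chunks):
    # Only the retrieved chunks go into the context window.
    context = "\n\n".join(retrieve(query, chunks))
    return f"Relevant code:\n{context}\n\nTask: {query}"

chunks = [
    "def parse_config(path): ...  # loads YAML settings",
    "def render_chart(data): ...  # draws a bar chart",
    "def send_email(to, body): ...  # SMTP helper",
]
prompt = build_prompt("add retry logic to send_email", chunks)
```

Real assistants swap in proper embeddings and smarter (often AST-aware) chunking, but the shape of the pipeline - embed, rank, stuff the context - is the same.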
Questions ❔
- How do you use it, Agam? Well, I kind of went all-in on AI coding tools. They've made my work crazy fast. I'm really glad I learned coding beforehand – it helps me understand and improve the AI-generated code rather than blindly follow it. I do get into some spots where I need to code myself, but that's about 20% of the time, which is pretty crazy when you think about it.
- What are the 20% of cases where it doesn’t work? In my experience, obscure APIs (like WebGPU), genuinely novel solutions, or web frameworks on their latest version (looking at you, Next 14) tend to get subpar (or often simply outdated) code. I mitigate that by adding the latest documentation relevant to my codebase usage to the context.
Why? 🤔
- 10X Development: AI Coding tools can be a huge boost in productivity. Sorry for the "10x" buzzword, you get the point.
- Focus: Instead of dealing with the mundane parts of the codebase, developers can focus on the big picture: the architecture and the program flow.
- Fun: It just feels so much freaking better to be able to only code the hard/interesting stuff.
Why not? 🙅
- Sensitive codebases: Using AI IDEs can be a no-go in sensitive industries (government, financial, security…). You are essentially sending your IP to a 3rd party (Caveat: see forecast about offline LLMs).
- Unique needs: If you are dealing with libraries or unique code scenarios that aren’t well represented on the web, LLMs might still generate crap because they weren’t trained on your data or the scenario you’re dealing with.
- Bugs: Sometimes the generated code looks fine but isn’t top-notch; you still need to double-check the output in most cases.
- Time waste: Some people are pretty adamant that these AI coding tools aren’t helpful, meaning you’ll end up replacing the AI’s code most of the time and, in the best case, merely replacing googling for a snippet. Right now these tools aren’t helpful to everyone; try them out and see if they fit your workflow. Also, trying to force the model to give you a 100% right answer is a waste of time: take the 90% and continue manually.
Tools & players 🛠️
- Cursor - A VS Code fork built from the ground up with AI.
- GitHub Copilot - The “OG” AI IDE everyone has heard about. Still feels a bit behind, but it integrates nicely with GitHub.
- Supermaven - Heard really good things, especially around speed and context length.
- Repl.it AI - Repl.it’s code IDE using AI (also has a limited free plan).
- JetBrains AI - JetBrains’ AI offering, integrated into most of their IDEs.
- Fine - Pull-request oriented AI agent (Disclosure: I’m an investor ✨).
- IntelliCode - Visual Studio’s AI offering from Microsoft.
- Codium - Focusing on confident code generation (focused on testing well).
- Continue - AI coding assistant for VS Code and JetBrains.
- Tabnine - One of the OG players in the AI code assistant space.
- Tabby - Self-hosted open-source coding assistant.
- Cody - Sourcegraph’s coding assistant.
- CodeSquire - Browser extension for AI generation in Jupyter/BigQuery etc…
- Amazon Q - Amazon’s AI assistant.
- Claude Dev - An open source VS Code plugin for Claude as a coding assistant.
- Aider - Open-source terminal AI assistant.
- Marbelism - Create a full stack app with AI.
- Warp - Fully integrated AI in your terminal.
- CodeParrot - Frontend component AI assistant for VS Code.
- OpenHands - Formerly OpenDevin, an open-source Devin alternative.
Forecast 🧞
- Offline AI: Offline AI tooling can be a great mitigation to sensitive industries that still want to use AI in their code tooling.
- Testing-first: I love the direction Codium (not sponsored) is taking. One of the major problems of generating code without tests first is that you don’t really know if the code is good. Anchoring generation with tests is the right step forward in my opinion. One thought, though: how do you check that your test generation is good? (Meta, I know.)
- Saturation: I believe most AI coding tools will die out, while usage will grow and consolidate into a few state-of-the-art players. Careful who you invest in ;)
- 2x’ing seniors: Generating code is easy, but being able to reason about it and guide the AI is something that (🔥 hot take) I’d argue benefits seniors more than juniors. LLMs magnify your existing skills. In some cases they will uplift junior developers too, if they know how to use these tools correctly.
- Integrations: AI will integrate with most of our stack, from Sentry to Slack to anything, really. Working with these solutions will feel like working with another colleague (that lives very, very far away).
- Smaller teams: I might start a startup soon (emphasis on might (!) - hence fewer posts recently). I was asked to estimate how many people we’d need on the R&D team, and my estimates are way lower than what I used to think for a dev team - like, drastically lower. The catch: everyone needs to be great at using AI tooling for it to work.
- Code understanding > code writing: I find myself appreciating the understanding of code much more than the writing of it nowadays. I’m the guide and checker for the generated code (which is written by a machine), so this skill just became the more critical one of the two. Security researchers are, to me, the best “understanders of code”, so try to learn from them or from open source.
- It’s all about trust: The better LLMs become, the more we trust them, and the scarier things get. I hope we learn how to balance the efficiency with the dangers of blindly approving things.
- Copycats: Is there really a tech moat nowadays? I'm not sure. Try to focus on data angles, unique architectures, human-in-the-loop guidance, strong QA pipelines, and strategic partnerships - because code will become a commodity eventually (if it isn't already).
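The testing-first idea above can be sketched as a generate-and-verify loop: ask for tests before the implementation, then only accept code that passes them. Everything below is a toy illustration under loud assumptions: `fake_llm` is a hypothetical stand-in returning canned strings so the sketch runs offline, and a real tool would sandbox execution rather than calling `exec` directly.

```python
# Toy sketch of test-first generation: tests come first and anchor
# what "correct" means; candidate implementations must pass them.
def generate_with_tests(spec, llm, max_attempts=3):
    # 1) Generate tests first.
    test_src = llm(f"Write assert-based tests for: {spec}")
    for _ in range(max_attempts):
        # 2) Generate a candidate implementation against those tests.
        impl_src = llm(f"Implement: {spec}\nMust pass:\n{test_src}")
        scope = {}
        try:
            exec(impl_src, scope)  # load the candidate (unsandboxed: toy only!)
            exec(test_src, scope)  # run the generated tests against it
            return impl_src        # all assertions passed
        except Exception:
            continue               # discard and retry with a fresh generation
    raise RuntimeError("no candidate passed the generated tests")

# Stub "LLM" with canned replies so the sketch runs offline.
def fake_llm(prompt):
    if prompt.startswith("Write assert"):
        return "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
    return "def add(a, b):\n    return a + b"

code = generate_with_tests("an add(a, b) function", fake_llm)
```

Of course, the loop is only as good as the generated tests, which is exactly the meta-question raised in the forecast above.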
Extra ✨
Additional information that is related:
- Why Kite (AI coding extension, 2014-2021) failed.
- WiseGPT - (Waitlist) promptless AI coding.
- DevHunt AI - Like ProductHunt but for devs - I linked the AI section (mostly AI coding tooling).
- ai-digest - A CLI tool to aggregate your codebase into a single Markdown file for use with AI (great for long context LLMs).
- 8-year-old coding with Cursor.
Thanks 🙏
I wanted to thank Tom Granot (who edits every issue and runs a TPM (Technical Product Marketing) consultancy for deep-tech startups). Thanks also to Elie (the creator of Inbox Zero and Learn from open source).
EOF
(Where I tend to share unrelated things).
If you ever used Fine, I'd love to hear about it :)