0x01f - AI and Startup Moats
- WorkOS - the modern identity platform for B2B SaaS. Ship SSO, SCIM, RBAC in minutes.
- Sema - Enterprise SaaS? Prepare for GenAI compliance with a free pilot.
This is one of my spicier thought experiments here on Unzip - not your usual issue, but I'm sure you'll either find it intriguing or at least have a good time poking holes in it.
Whether you’re a CTO, VP of R&D, or a forward-thinking developer, you're probably focused on building a business, securing a key role, or investing in companies (choose one or all of the above).
For all of these, a key requirement is focusing on things that stand the test of time - and, ideally, getting better at them faster than the market does (i.e. faster than AI can catch up).
A good way to do so is to find an unfair advantage - and more specifically, a moat. This article is my attempt to enumerate all the possible moats you can count on that will still be relevant in the age of AI (and the ones that, I think, will not fare so well).
📈 The curve
I think we can boil the possible futures down into two outcomes when it comes to AI:
The blue line indicates AI eventually plateauing - becoming great at regurgitating outputs based on its training - while the red line indicates AI being able to improve itself and create results it hasn't been trained on. Most people believe in one of these two futures, and each camp is often very emotional about its stance (since it touches on sensitive things like job security, business resilience, the fight between humans and machines, etc.).
Because of all these feelings flying around, I think it's important to try to be objective and face reality head-on - things can progress faster than you might allow yourself to believe, and you'd better be ready.
I lean towards the red camp, but I'm not here to convince you of that - the ARC Prize results should be sufficient on their own (tl;dr: o3 managed to solve problems it wasn't trained on, with orders of magnitude better performance than other state-of-the-art models).
🦥 Let’s be conservative
The reason I'm showing you this graph is to say that even in the blue scenario (slowing growth), things are about to change drastically. Even if we're being super conservative, the current capabilities of AI - models like Claude 3.5 Sonnet and OpenAI's o1 - are already powerful enough to disrupt nearly every industry we know.
Let’s assume for a minute that reality lies with the blue curve. Even in that case, I can still confidently say that AI will:
- Become cheaper
- Become faster
- Be better understood and used more optimally
This means that all the tasks we already see AI making strides in will get even better, including many creative professions like design, writing, coding, and the like.
What if we instructed an LLM to improve its own reasoning skills, gave it access to a GPU farm and web search, seeded it with some source code, and let it run indefinitely? Would this result in something like the monkeys typing Shakespeare? I'm really simplifying here, but you get my point - eventually, it might break out with some brilliant architecture and improve in ways we aren't seeing today. After all, transformers appeared just 8 years ago.
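At its core, that thought experiment is just a search loop: propose a change, score it, keep improvements. Here's a deliberately toy sketch of the idea (all names are hypothetical; `mutate` and `fitness` stand in for an LLM call and an evaluation harness, which is where all the real difficulty lives):

```python
import random

def mutate(code: str) -> str:
    # Stand-in for an LLM proposing a revision of its own "source code".
    # A real system would call a model here; we just append a random tweak.
    return code + f"\n# tweak {random.randint(0, 9)}"

def fitness(code: str) -> float:
    # Stand-in for an evaluation harness (benchmarks, test suites, etc.).
    # Toy metric: longer "code" scores higher.
    return float(len(code))

def self_improve(seed: str, iterations: int = 5) -> str:
    # Hill-climb: keep a candidate only if the harness scores it higher.
    best, best_score = seed, fitness(seed)
    for _ in range(iterations):
        candidate = mutate(best)
        score = fitness(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

improved = self_improve("# seed program")
print(len(improved) > len("# seed program"))  # True: only improvements were kept
```

The loop itself is trivial; the open question is whether the "mutate" and "fitness" steps can be made powerful enough that running this indefinitely produces something genuinely new.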
Whether you’re riding the red curve or the blue curve, we all need to prepare for what is already headed our way. Let’s start defining some terms and assumptions before we kick things off more properly.
🤖 AGI?
I’ll use the term “AGI” (Artificial General Intelligence) here and there. I want to clarify that when it comes to defining AGI, we’re pretty much ants trying to debate quantum physics - it all seems a bit pointless since we know so little about what it actually means in practice.
So, every time I mention "AGI," just think of it as an LLM that is much better at reasoning than what we have today (5 stages of AGI).
This is also the direction many companies are headed, like Devin ($500/mo instead of a “junior developer”) - replacing humans is a recurring theme that VCs seem to like throwing money at.
⛳ Some Assumptions
We can’t make any predictions without agreeing on a few base assumptions. I think Bezos nailed it on this topic:
“I very frequently get the question: 'What's going to change in the next 10 years?' And that is a very interesting question; it's a very common one. I almost never get the question: 'What's not going to change in the next 10 years?' And I submit to you that that second question is actually the more important of the two – because you can build a business strategy around the things that are stable in time. ... [I]n our retail business, we know that customers want low prices, and I know that's going to be true 10 years from now. They want fast delivery; they want vast selection. It's impossible to imagine a future 10 years from now where a customer comes up and says, 'Jeff I love Amazon; I just wish the prices were a little higher,' [or] 'I love Amazon; I just wish you'd deliver a little more slowly.' Impossible. And so the effort we put into those things, spinning those things up, we know the energy we put into it today will still be paying off dividends for our customers 10 years from now. When you have something that you know is true, even over the long term, you can afford to put a lot of energy into it.”
You should consider what won’t change, and the following is a (non-exhaustive) list of things that I think won’t change:
AI assumptions
- I believe AI is gaining, and will continue to gain, intelligence, even if we don’t consider it traditional human intelligence. Wait But Why wrote a great piece on this topic circa 2015, and if you haven’t already read it, you're in for a treat: Wait but why - AI revolution.
Economics, the world, and human nature
- We will still want cheaper, better, faster products.
- AGI will not eliminate economics and capitalism - we can have a big philosophical discussion here, but honestly, this is a bit over my current grasp of what is possible - so let’s limit this to something tangible.
- Resources are finite and limited.
- Accountability - we will need to blame someone when things don’t go well, and blaming AI will not work, at least not right out of the box.
- We'll still be here - no apocalyptic scenarios in this thought experiment.
And a few things that are already changing:
- R&D: With AI taking over more and more programming work, and with AGI lurking around the corner, many traditional moats around R&D seem to be in question.
- Remember those 6+ person ML teams a few years back, working full-time on outcomes that one LLM call can achieve today? Who says it won’t continue?
- What if that SaaS product that took you 10 developers and 2 years to build could be copied in 2 months by a 2-person team?
- Traditional costs: Much traditional human (computer-facing) work will be replaced with LLMs, at a fraction of the cost and in a fraction of the time.
- New costs: There was an assumption that with this new wave of AI, hardware and inference costs would go down, but what we’re seeing with o3 suggests the opposite: we might end up paying for “more intelligence” (see the chart's x-axis), constrained by hardware that isn’t yet widely available.
⚰️ Dead Moats
⚠️ If your business relies heavily on one of these moats, I’d strongly suggest re-evaluating your strategy to mitigate potential vulnerabilities.
- “Better product”: We need to define "better" clearly, but if you're basing this on your R&D efforts, I would very much fear the competition coming my way. If someone can use enough compute to copy you and use AGI to make a product better than what you currently have, is it still "better"?
- R&D Team Size: Traditionally, big corporations had the advantage of capital, which was used on R&D labor to produce more and better products. Today that might mean you are slowing yourself down: the bigger the team, the slower you are.
- Superior Customer Support: What if agents could provide customer support that's 99.9% as good as - or better than - what you currently have, 24/7, at a fraction of the cost? We aren’t there yet, but I don’t see a reason we can’t get there eventually.
- Superior UI/UX: With tools like v0, Figma AI, etc., your competitors could copy many of your UI/UX components pretty rapidly - unless those components require heavy backend R&D work tied to the UX (architecture and scalability decisions that aren’t copied easily).
- Personalization: I would argue there aren’t many successful personalization products out there, but personalization will now become a commodity with things like GenUI.
⌛ Short-term Moats
Sorted from weakest to strongest:
- AI-First: I define this as a company that embraces AI, learns about the latest tech and integrates it into their workflow and product. This is a short-term moat, as eventually, most companies will integrate AI the same way you could be doing right now. But at the time of writing this, it's still a big moat you can utilize and get ahead.
- Planning Skills: Planning and building a system correctly from the ground up will matter more than the act of coding - as coding becomes commoditized, the architecture you design will matter much more (think better scale, speed, and DX). This will also be a “short-term” moat: eventually, I see it being commoditized too, the way the Herokus of the world made spinning up an app easy and the AWSes of the world made datacenter-grade infrastructure cheap up front and simple to deploy.
- Reviewing Skills: Being able to review AI-generated outputs, like code, will be more important than creating said outputs (which we agreed is becoming a commodity).
- Contracts: Contracts can be a strong short-term moat (as long as your contracts don’t expire soon).
- Distribution: Companies with cheaper, more focused distribution channels will have an advantage. Still, this is a relatively weak moat, because given more resources and creative AI, your distribution moat could be overtaken.
- Reputation / Brand: Building a strong reputation often directly boosts sales, and AI is likely to make the brand-building process easier in surprising ways. Having a brand with a rich history can also be an advantage, given you consistently keep working on it and maintain its value over time.
- Secret Technology: If you have an algorithm or hardware that no one knows about, and that LLMs couldn’t train on or easily infer, you might have a time-sensitive moat - think some crazy algo-trading or compression algorithm. This might work right now, but given enough compute and a good enough fitness function, I believe AI will be able to approximate such solutions - after all, someone thought up the idea, so a solution is lurking somewhere.
- Existing market penetration: Having a strong foothold in an industry can make it harder for newcomers. But incumbents are almost always on the defense against innovative startups, so this too is a short-term moat unless the incumbent avoids resting on its laurels and continues to innovate.
🏰 Strong Moats
Sorted from weakest to strongest:
- Physical world: Anything digital will be replaced much faster, but physical industries like robots, defense, biology, construction, and the like will take a bit more time to disrupt (as translating LLM outputs to the real world is harder than moving bits). I think this is going to be one of the next big frontiers to tackle.
- Business operations: Being able to codify your business with strong processes, specifically with textual documentation, will make your company more suitable for AI "handover." You are laying the groundwork to make parts of your business automated. Just make sure documentation and those processes don't slow you down too much - those who fail to act now risk falling far behind.
- Access to capital: Having more cash to spend on compute, operations, and distribution seems like a no-brainer of a strong moat. Being a scrappy startup used to be a plus, but over time AI could deploy those resources to optimize operations, lower costs, and dominate the market in smarter, more creative ways. Throwing more money into compute is going to be a big one: if I can run a team of 100 agents and you can only pay for one, I’ll have the advantage. The reason this isn’t further down the list is that many players have capital, so it isn't unique.
- Physical resource dependent: If you are in an industry that relies on finite physical resources, that will be a moat - think land in real estate, satellite lanes, or lithium for electric cars. If you have access to those resources and others don’t, that’s an advantage.
- Partnerships: Having big players helping you with resources, data, and distribution will give you a leg up. Note: this is a great time to solidify these relationships while those big incumbents are looking to not lose their crown.
- Regulation: If you have the regulator on your side, you still have a strong moat. This has a people and bureaucracy bottleneck and takes years and years to change.
- Data: Having valuable data that others don’t have will be a strong moat - I don’t see this changing any time soon. Bonus points for data that you have exclusive rights to, that is private, and that can’t be obtained after the fact. Don’t confuse this with cleaning and processing data - that will become a commodity.
- Supply chain control: This can be a strong moat due to its reliance on agreements and processes.
- Accountability: Industries that require accountability will create a moat - will an AI bot be accountable for a mistake? Find places where accountability plays a role, like legal, insurance, governance, healthcare, and defense - those industries will have higher barriers to entry even if others can use AI.
- Network effects: Think about how adding more users/data/partners makes your business better and would make it harder for others to compete. Exclusivity rights play a big role here.
❓Uncertain Moats
- IP / Patents: From my understanding (IANAL), patents are a deterrent but don't hold up well in many cases. They might still help as a moat (side note: I still really like Elon’s take on them).
- Customer Culture: Being a part of or fluent in the culture of your customers was and always will be an advantage. I'm not sure AI could fill this cultural gap unless we are talking about fully 100% synthetic teams that perform the same or better than local teams.
- Compute dominance: (A contrarian point to the earlier one.) Training and running models today takes vast amounts of energy and capital. With more and more open-source models - and considering that transformers are a relatively new technology - we might be on the cusp of a different architecture that won’t be limited by those constraints.
✅ Next Steps
So, what can you do right now?
- Evaluate your moats: Are you holding onto a dying moat? Identify a better one and move there.
- Use and learn about AI: Don’t stick to your ego, try new tools, and stay ahead.
- Systematize your business: Document and add proper processes to everything you can - it’s a quick win that sets you up for automation, improvement, and scalability down the line.
- Move quickly
- Smaller teams (fewer meetings, fewer things to be agreed upon)
- Better tools (debugging, easy to use, etc.)
- Faster response times (to new models and techniques)
- Better teams (smarter people, good at getting things done as a team)
- Less technical debt (that slows you down)
- Strong decision-making: Try to operate from first principles to increase your odds when everything changes so quickly.
- Forecasting skills: Work on your ability to predict the future - being proactive rather than reactive gives you a huge advantage, which ultimately improves your speed.
Conclusion
I hope this gives you some food for thought and helps you prepare.
Try to see what moats your business has and how you can fortify your castle, because we need some lava shit instead of water with what’s coming up.
I’ll continue to think and process these new trends for you, and probably change my mind quite often.
P.S., For more ramblings about things like this, you’re welcome to follow me on LinkedIn - I’ll post a few more thoughts that didn’t make the cut here (jobs going away, what AI still sucks at, who are the best-positioned undervalued players right now, etc.).
Thank you notes!
- Tom Granot - edits every issue, and owns a technical content consultancy for deep-tech startups.
- Roy Feldman - building the next big thing in construction robotics (any robotics investors in the crowd?).
- Leeam - my sister, studying for her PhD on the intersection of human rights and technology.
- Amit Eliav - technical founder currently selling his AI consulting firm and hunting for the right CEO co-founder in the AI space (could that be you?).
- Andy Katz - the last time I needed serious deep-tech/ML consulting, Andy was invaluable.
- Brandon Docusen - the most prolific AI bootstrapper I know.
- My girlfriend Jillian for editing this mess.