Software Synthesis analyses the evolution of software companies in the age of AI – from how they’re built and scaled, to how they go to market and create enduring value. You can reach me on LinkedIn and X.

Join readers from OpenAI, Databricks, Stripe, Figma and more


Roundtables

March 5: The Future of Software Engineering with Anthropic


Last week, we hosted a roundtable on the future of AI compute with Michael, CEO of Spectral Compute, whose team has built a compiler that translates CUDA source code directly into native machine instructions for non-NVIDIA GPUs. We were joined by attendees from Graphcore, Cerebras, DeepMind and more.


1. Nvidia’s Monopoly Position

Michael opened by framing the current state of AI compute as a “monopoly moment.” Key points:


2. Why Vendor Optionality Matters

Michael argued that having alternatives to Nvidia is critical for multiple reasons beyond just cost.

Cost Leverage

Supply Chain Resilience

Geopolitics


3. Current Challengers and the Competitive Landscape

Michael surveyed the emerging alternatives to Nvidia:

AMD Instinct Series

Cerebras

Google TPUs

ASIC-Based Startups

Other Notable Mentions

Microsoft Maia


4. The Depreciation Problem and Programmability

A substantial debate emerged around chip depreciation and what it means for investment decisions.

The Financial Reality

Cerebras

If CUDA is truly so powerful and versatile, why does hardware depreciate so fast? In theory, CUDA’s universality should let you keep running older chips productively (e.g., multiple A100s in place of one H100).

Michael’s response acknowledged the tension:

The Core Argument

Michael’s central thesis: programmability and general-purpose capability are the most important chip design attributes because they preserve long-term value as workloads evolve. Application-specific chips face existential risk from architectural shifts in AI.

With tape-out costs at $20–30M minimum and 2–3 year chip lifecycles, a startup raising $100M gets essentially one generation of silicon. If the AI landscape shifts (e.g., mixture-of-experts changing distributed compute patterns), an architecture-specific chip is stranded.
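The one-generation constraint can be sketched as simple arithmetic. This is an illustrative calculation only: the tape-out figures come from the discussion above, while the additional per-generation cost (team, tooling, mask respins) is a hypothetical assumption.

```python
# Illustrative sketch: how many silicon generations can a chip startup
# fund from a single raise? Tape-out figures ($20-30M minimum) and the
# $100M raise are from the roundtable; other_costs_per_gen is hypothetical.

def generations_funded(raise_usd: float,
                       tape_out_usd: float,
                       other_costs_per_gen_usd: float) -> int:
    """Whole chip generations a raise can cover at a given cost per generation."""
    per_generation = tape_out_usd + other_costs_per_gen_usd
    return int(raise_usd // per_generation)

# With team, tooling and respins plausibly adding tens of millions per
# generation, a $100M raise buys essentially one shot at silicon.
print(generations_funded(100e6, 30e6, 50e6))  # → 1
```

If the architecture bet embedded in that one generation is invalidated by a workload shift, there is no budget left for a second attempt, which is the stranding risk Michael described.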


5. Spectral Compute

The CPU Analogy

Michael explained the problem by contrasting CPUs and GPUs:

The Spectral Approach

Why Direct Compilation Matters

Performance Results

Coverage Status

Business Traction

Team and Hiring

Open Source Strategy


6. AI-Assisted Compiler Optimisation

A notable exchange occurred around AI’s role in GPU code optimisation.


7. The Cloud Hyperscaler Lock-In Dynamic

Discussion turned to how cloud providers are compounding the vendor lock-in problem:


8. Training vs. Inference Shift

A significant data point emerged in discussion:


9. Data Center and Infrastructure Constraints

Cooling Capacity

Multi-Data-Center Training


10. Key Debates and Open Questions

Will Nvidia’s Dominance Erode Through Interoperability?

Groq and Nvidia’s Acquisition

GPU Utilization Gaps

The Cisco Networking Analogy


Signals


What I’m Reading

A Level Headed Look at State of Software

The path to ubiquitous AI

Strategies for learning


Earnings Commentary

Agents also need to love MongoDB. That requires us to ensure that we have all the right integration with the right places, how we auto scale, how we ought to perform during the peaks and valleys. All of that truly needs to be autonomous and driven by machines. And that requires absolutely the focus from the engineering team that how would machines look at this if they want to provision an additional node or if they want to manage cluster because of resiliency across multiple clouds. So that will be the North Star for us that our agents will love MongoDB as much as today, human developers love MongoDB.

Chirantan Jitendra Desai, President & CEO, MongoDB, Q4 FY2026 Earnings


Have any feedback? Reach out on LinkedIn or X.

