If you’ve been following us, you know that we’ve been exploring the rise of AI agents and apps over the last few months. So over the long weekend, we decided to dive in and experiment with some of the latest models. A major inspiration for this experiment was DeepSeek’s recent v3 release. Many entrepreneurs we work with have been raving about how seamless it is to download and start experimenting with, so we figured – why not try it ourselves? We wanted to see firsthand just how accessible it really is.
As non-technical users, we leaned heavily on ChatGPT to navigate debugging, configuration, and making sense of everything under the hood. While we’ve used plenty of AI tools before, this was our first time running models on-device. It was fun, and frustrating! This week’s post is more of a musing as we reflect on the different ways to build AI agents and applications, the tools that make it possible, and the friction points that still remain. It’s exhilarating to witness how quickly AI development is evolving and how much you can do as a non-technical user. Let’s dig in!
Choosing the Right AI Setup (Ollama, OpenAI, Claude)
To run AI models locally, you can use tools like Ollama (CLI-based) or LM Studio (GUI-based). For this experiment, we primarily used Ollama, a tool that lets you run open-source LLMs (e.g., Mistral, DeepSeek, Llama) locally on your device. Once installed, it runs in the background, and you interact with it via your Terminal. LM Studio has a neat interface that is also worth trying.
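To make this concrete, here’s a minimal sketch (the kind of thing ChatGPT helped us piece together) of querying a locally running model through Ollama’s local HTTP API, which listens on port 11434 by default. It assumes Ollama is running and that the model has already been pulled (e.g., `ollama pull mistral`); the model name is just an example:

```python
import json
import urllib.request

# Ollama exposes a local HTTP API on this port by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of a token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the locally running Ollama server and return its reply."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

For a quick chat without any code at all, `ollama run mistral` in the Terminal gives you the same interaction directly.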
Running AI models locally has its advantages and challenges. One of the biggest benefits is privacy and security, as all processing happens on your Mac without sending data to external servers. This makes it ideal for personal or confidential projects. Additionally, there are no API costs – once a model is downloaded, it runs offline indefinitely without per-request fees. For smaller models, response times are fast, with no network latency, making local AI useful for real-time applications like personal assistants. Another key advantage is full control over models, allowing users to choose from a variety of options like Mistral, LLaMA 2, or Gemma and even fine-tune models on their own data.
However, running AI locally comes with challenges. Technical knowledge is required, as setting up models involves installations and using the Terminal, which may be difficult for no-code users. The process is also time-consuming, especially when building an AI agent, as it requires additional tools like Streamlit, LangChain, and ChromaDB. Hardware limitations can also be an issue – larger models can slow down performance, and even smaller models may struggle depending on system specs. Finally, fine-tuning models with custom data requires extra setup and resources, making updates and improvements more complex.
We also experimented with OpenAI and Claude models to compare them to open-source alternatives. These models are accessed via API calls, making them significantly easier to use with minimal setup. As non-technical users, we found that getting started was much simpler, and integrating them into existing web apps was more seamless than with local models.
One of the biggest advantages is that almost no setup is required. You simply call the API with a key from OpenAI or Anthropic – no local installation needed. Additionally, hardware limitations are not a concern, as the models run on cloud infrastructure, making them accessible even on older devices. OpenAI and Anthropic also offer some of the most advanced models, which generally produce higher-quality responses out of the box.
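As a sketch of what “just call the API” looks like in practice, here’s a stdlib-only Python example that posts to OpenAI’s hosted Chat Completions endpoint. It assumes your key is in an `OPENAI_API_KEY` environment variable; the model name is illustrative:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(model: str, system: str, user: str) -> dict:
    """Build the JSON body the Chat Completions endpoint expects."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }

def ask(model: str, prompt: str) -> str:
    """Call the hosted API; no local model, just a key and an HTTP request."""
    payload = json.dumps(
        build_payload(model, "You are a helpful assistant.", prompt)
    ).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]
```

Notice the contrast with the local setup: the request shape is nearly identical, but the heavy lifting happens on someone else’s hardware.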
However, there are trade-offs. API costs can add up over time, as each request incurs a fee, though model pricing has dropped significantly. Privacy is also a consideration, as your data is sent to OpenAI’s or Anthropic’s servers and is subject to their retention and usage policies. Lastly, customization options, such as fine-tuning or retrieval-augmented generation (RAG), are more limited compared to open-source models, making it harder to tailor these models to specific needs.
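To illustrate what RAG actually does, here’s a deliberately tiny, dependency-free sketch: pick the stored document that best overlaps with the question, then fold it into the prompt as context. Real systems replace the word-overlap scoring below with embeddings and a vector store such as ChromaDB; everything here is purely illustrative:

```python
def retrieve(question: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the question (toy scoring)."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_rag_prompt(question: str, documents: list[str]) -> str:
    """Prepend the most relevant document so the model answers from your data."""
    context = retrieve(question, documents)
    return (
        f"Context: {context}\n\n"
        f"Question: {question}\n"
        "Answer using only the context above."
    )

# A two-document "knowledge base" for demonstration.
docs = [
    "Ollama runs open-source models locally on your machine.",
    "Bolt.new is a no-code builder for deploying web apps.",
]
prompt = build_rag_prompt("How do I run models locally?", docs)
```

The appeal of open-source models here is that this augmented prompt never has to leave your machine.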
No-Code AI Deployment (Bolt.new, Replit, Lovable, etc.)
Building an application from start to finish on the open-source models was actually quite challenging as non-technical users. So after some frustration and a lot of syntax errors, we tried Bolt.new to deploy a fitness tracking app. We were able to combine the ease of no-code deployment with the intelligence of GPT-4, creating a more advanced AI-driven fitness assistant.
How we used Bolt.new:
- Built a Fitness Tracking AI App with Bolt.new’s no-code setup.
- Integrated OpenAI’s API to enable advanced reasoning and personalized fitness insights.
- Leveraged OpenAI’s GPT-4 to provide tailored workout suggestions based on user preferences and history.
- Used real-time AI-generated recommendations to create adaptive fitness plans.
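To give a flavor of how the personalization works under the hood: it largely amounts to folding the user’s preferences and workout history into the prompt sent to GPT-4. A minimal sketch (the field names here are our own invention, not Bolt.new’s):

```python
def build_workout_prompt(profile: dict, history: list[str]) -> str:
    """Fold user preferences and recent workouts into a single model prompt."""
    prefs = ", ".join(f"{k}: {v}" for k, v in profile.items())
    recent = "; ".join(history[-3:]) or "none logged yet"  # last 3 sessions
    return (
        f"User preferences -> {prefs}\n"
        f"Recent workouts -> {recent}\n"
        "Suggest tomorrow's workout, adapted to this history."
    )

# Example user data (entirely made up for illustration).
prompt = build_workout_prompt(
    {"goal": "endurance", "level": "beginner"},
    ["30 min jog", "rest day", "5k run"],
)
```

Every time the user logs a session, the app rebuilds this prompt and asks the model again – that is what makes the plan feel adaptive.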
No-code builders like Bolt.new make AI integration accessible to non-technical users, but we quickly realized their limitations when it came to fine-tuning, debugging, and long-term scalability. At one point, we hit some bugs and got stuck on deployment. While you can keep interacting with the system in natural language (a huge benefit), at some point having some technical knowledge of what’s happening under the hood becomes helpful. Still, it’s pretty incredible what we were able to spin up with no coding knowledge in an hour. We believe there is a huge opportunity with these types of natural-language AI app builders.
Exploring Other No-Code AI Solutions
We also started exploring other solutions in the no-code AI deployment space. We have heard a lot about companies like Lovable, Replit with their new no-code AI agent builder, and Superblocks with their AI agentic builder. Superblocks recently developed an approach for building AI agents using Anthropic’s Claude, making AI deployment even more flexible for business applications. They call it Superblocks AI Agentic Builder. We imagine many more vertical-specific no-code AI solutions will emerge to solve specific pain points.
As we continue experimenting, we’re excited to see how no-code and low-code AI development evolves, especially in making AI more accessible while still allowing for deeper customization when needed.
Final Thoughts – What These Experiments Made Us Think About
Building AI applications is more accessible than ever, even for non-technical users, but challenges remain – particularly around setup, understanding which models to use and for what tasks, debugging, fine-tuning, and making beautiful and delightful user experiences. While the future of AI deployment may be as intuitive as using a web app, we’re not quite there yet. Here are some key areas we’re exploring:
- Unlocking AI for Non-Technical Users: A vast market of non-technical users (content creators, founders, business owners, etc.) is eager to build AI-powered applications but lacks coding experience. While tools like OpenAI’s API paired with ChatGPT have lowered the barrier, truly seamless, no-code AI development is still a work in progress. How can we empower curious, non-technical users (like us) to create AI applications that fit their workflows effortlessly?
- Debugging Challenges: As more code is AI-generated, we imagine more poor-quality code will be written – and a growing need for QA testing as more no-code apps are built.
- Fine-Tuning: More accessible fine-tuning tools would enable users to train self-hosted AI models on custom datasets, making them more robust and personalized for specific needs.
- A Centralized AI Workspace: A unified platform for managing projects, models, configurations, and training data would significantly improve experimentation and iteration. Imagine an environment where switching between models, routing them to specific tasks, and managing AI workflows is as seamless as working within a well-designed web app.
- UI/UX: Once an AI application is built, the next challenge is making it intuitive, user-friendly, and visually aligned with a brand’s identity.
We’d love to hear from others building and hacking in this space. What AI tools have you tried? What worked, what didn’t? What’s the coolest thing you’ve built?