I vibe coded Gumloop in a weekend


I realised just how many companies offer no-code platforms that let users automate workflows, and the numbers behind that market were kinda shocking.

I spent a week deep-diving into Gumloop and other no-code platforms. They’re well-designed, but here’s the problem: they’re not built for agents. They’re built for workflows. There’s a difference.

Agents need customisation. They need to make decisions, route dynamically, and handle complex tool orchestration. Most platforms treat these as afterthoughts. I wanted to fix that.

Although it’s not production-ready and nowhere close to the maturity of companies like Gumloop, this project is meant to showcase how easily you can build sophisticated apps in a matter of days. Full respect to all the builders.

Picking my tech stack

Next.js was the obvious choice for the vibe coding stack. Could I have used FastAPI with a React frontend? Sure. But just thinking about coordinating deployments, managing CORS, and syncing types made me tired.

For adding a near-unlimited suite of SaaS app integrations, Composio was the obvious choice. Its JS SDK makes it easy to wire integrations into agents.

When it comes to agent frameworks, JS has far fewer options than Python.

It boiled down to two frameworks: LangGraph and AI SDK (I had heard about Mastra AI, but I didn’t want to spend the weekend getting familiar with it). I chose LangGraph over AI SDK, and here’s why: LangGraph’s entire mental model is nodes and edges, exactly how a visual agent builder should work. Every agent is just a graph; every workflow is just a path through that graph. The mapping is clean. I love AI SDK, but it’s not convenient for building graph-based agents.
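To make that concrete, here’s a minimal sketch of a LangGraph JS graph using the prebuilt MessagesAnnotation state. The node is a stub instead of a real model call, and exact API details may shift between versions:

import { StateGraph, MessagesAnnotation, START, END } from "@langchain/langgraph";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

// A single node: in a real agent this would invoke an LLM
const echoNode = async (state: typeof MessagesAnnotation.State) => ({
  messages: [new AIMessage("stub reply")], // appended via the messages reducer
});

// The whole agent is literally nodes and edges
const graph = new StateGraph(MessagesAnnotation)
  .addNode("agent", echoNode)
  .addEdge(START, "agent")
  .addEdge("agent", END)
  .compile();

const result = await graph.invoke({ messages: [new HumanMessage("hi")] });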

Coding with Vibes

If you’re a vibe code hater, you’re going to want to skip ahead. The frontend is entirely vibe coded. I didn’t use Lovable or Bolt.new because it’s much easier to write the code in Cursor and make changes there than to use Lovable, connect it to my GitHub, and then open it in Cursor. My setup:

  • GPT-4.1: The sniper. It does exactly what you ask, nothing more, nothing less. Perfect for precise component tweaks.
  • Gemini 2.5 Pro: The machine gun. It rewrites entire components and understands context across files. Great for major refactors.
  • 21st Dev’s MCP Server: Lets the Cursor Agent build beautiful shadcn components. Instead of copying and pasting from the docs, I described what I wanted.

The canvas where users drag and drop nodes? Built with ReactFlow + a moving grid background from 21st Dev. Took around 30 minutes. Manually doing this would’ve exhausted me.

Building the Components

Strip away all the marketing fluff, and an AI agent is two things:

  1. An LLM that makes decisions
  2. The tools it can use to take action

That’s it. So I built exactly four fundamental nodes:

  • Input Node: Where data enters the system
  • Output Node: Where results emerge
  • LLM Node: Makes decisions
  • Tool Node: Takes actions

Then I added an Agent Node that combines LLM + Tool nodes for convenience. Every complex agent workflow is just a combination of these primitives.
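In code, these primitives boil down to a small type union. This is a hypothetical sketch; the customOutput name and the data fields mirror the JSON graph and execution snippet shown later in the post:

// Hypothetical node/edge types matching the JSON graph format below
type NodeType = "customInput" | "customOutput" | "llm" | "tool" | "agent";

interface FlowNode {
  id: string;
  type: NodeType;
  position: { x: number; y: number };
  data: {
    label?: string;   // input/output nodes
    apiKey?: string;  // llm nodes: which model/key to use
    action?: string;  // tool nodes: the Composio action to run
    tools?: string[]; // agent nodes: the tools the agent may call
  };
}

interface FlowEdge {
  source: string;
  target: string;
}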

Composio for unlimited tool integrations

Writing tool integrations is painful. Managing authentication for those tools? That’s where developers go to die.

Every tool has different auth flows. Some want API keys, some need OAuth, and some require webhook URLs. Multiply that by 100+ tools, and you have a maintenance nightmare.

Composio solves this. One SDK, hundreds of pre-built tools, and auth handled automatically. It’s the difference between shipping in a weekend vs spending months on OAuth flows.
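From the developer’s side, the surface area is tiny. The calls below are hypothetical, mirroring the composio client used in the execution snippet later in this post rather than the SDK’s exact method names:

// Hypothetical calls — auth for the underlying app is already handled by Composio
const tool = await composio.getTool("GMAIL_SEND_EMAIL"); // illustrative action slug
const result = await tool.execute({
  to: "someone@example.com",
  subject: "Hello",
  body: "Sent from a weekend-built agent",
});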

API Routes

Each workflow is represented as a JSON graph. Here’s how it looks:

{
  "nodes": [
    {
      "id": "input_1",
      "type": "customInput", 
      "position": { "x": 100, "y": 100 },
      "data": { "label": "User Query" }
    }
  ],
  "edges": [
    {
      "source": "input_1",
      "target": "agent_1"
    }
  ]
}

I wanted one API route that takes the entire graph and executes it.
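With Next.js, that’s a single route handler. A minimal sketch, assuming an App Router file at app/api/execute/route.ts and a hypothetical executeGraph helper that wraps the steps below:

import { NextResponse } from "next/server";

export async function POST(req: Request) {
  // The request body is the JSON graph shown above
  const { nodes, edges } = await req.json();

  // Validate, sort, execute, and thread state (steps 1–4 below)
  const result = await executeGraph(nodes, edges); // hypothetical helper

  return NextResponse.json({ result });
}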

When a user hits “Run”, here’s what happens:

  1. Graph Validation: Find the Input node, verify all edges connect, check for cycles
  2. Topological Sort: Determine execution order (LangGraph handles this beautifully; a minimal version is sketched after the snippet below)
  3. Node Execution: Each node type has its own execution logic
  4. State Management: Pass data between nodes while maintaining context
// Sample snippet: per-node execution logic
switch (node.type) {
  case 'llm': {
    // Resolve the model from the user's API key, then run it on the upstream output
    const model = getModelFromApiKey(node.data.apiKey);
    result = await model.invoke(previousOutput);
    break;
  }
  case 'tool': {
    // Fetch the Composio tool bound to this node and execute it
    const tool = await composio.getTool(node.data.action);
    result = await tool.execute(previousOutput);
    break;
  }
  case 'agent': {
    // Agent nodes bundle an LLM with tools in a ReAct loop
    const model = getModelFromApiKey(node.data.apiKey);
    const agentTools = await composio.getTools(node.data.tools);
    const agent = createReActAgent(model, agentTools);
    result = await agent.invoke(previousOutput);
    break;
  }
}
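And the ordering step promised above: a minimal topological sort over the graph (Kahn’s algorithm, using the FlowNode/FlowEdge types sketched earlier). LangGraph does the equivalent when it compiles the graph, and it doubles as the cycle check:

function topoSort(nodes: FlowNode[], edges: FlowEdge[]): string[] {
  // Count incoming edges per node
  const indegree = new Map(nodes.map((n) => [n.id, 0]));
  for (const e of edges) indegree.set(e.target, (indegree.get(e.target) ?? 0) + 1);

  // Seed with nodes that have no inputs (the Input nodes)
  const queue = nodes.filter((n) => indegree.get(n.id) === 0).map((n) => n.id);
  const order: string[] = [];

  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    for (const e of edges) {
      if (e.source !== id) continue;
      const remaining = (indegree.get(e.target) ?? 0) - 1;
      indegree.set(e.target, remaining);
      if (remaining === 0) queue.push(e.target);
    }
  }

  // Anything left unvisited sits on a cycle — fail validation
  if (order.length !== nodes.length) throw new Error("Cycle detected in workflow graph");
  return order;
}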

Managing Authentication with Tools

Authentication was my personal nightmare. Composio solved the technical part, but the UX? That took three complete rewrites. In the first version, users had to:

  1. Manually type action names (spelt perfectly)
  2. Leave my app to authenticate on Composio’s dashboard
  3. Come back and hope it worked

I implemented a drop-down listing every action you could select and add to a node, but authentication was still a problem: users shouldn’t have to rely on connections made in the Composio dashboard.

So I pulled every available tool from Composio’s API and cached the list in a JSON file, then built a modal that displays all available toolkits and their tools, along with each connection’s status. When a user selected a tool, the UI adapted to its auth type:

  • API Keys: Password input with a link to obtain the key
  • OAuth2 (Hosted): Connect button that handles the flow in a pop-up
  • OAuth2 (Custom): Form for client credentials
  • Other Types: Dynamic forms based on required fields

Once authenticated, the same modal allows you to add available tools with a search function and one-click addition to the workflow.
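A sketch of how the modal picks a form, with illustrative scheme names and descriptors rather than Composio’s exact identifiers:

// Hypothetical scheme names and form descriptors for the auth modal
type AuthScheme = "API_KEY" | "OAUTH2_HOSTED" | "OAUTH2_CUSTOM" | "OTHER";

type AuthForm =
  | { kind: "apiKey"; docsUrl: string }          // password input + link to get the key
  | { kind: "connectButton" }                    // hosted OAuth flow in a pop-up
  | { kind: "clientCredentials" }                // client ID / secret form
  | { kind: "dynamicFields"; fields: string[] }; // generated from required fields

function authFormFor(
  scheme: AuthScheme,
  meta: { docsUrl?: string; fields?: string[] } = {}
): AuthForm {
  switch (scheme) {
    case "API_KEY":
      return { kind: "apiKey", docsUrl: meta.docsUrl ?? "" };
    case "OAUTH2_HOSTED":
      return { kind: "connectButton" };
    case "OAUTH2_CUSTOM":
      return { kind: "clientCredentials" };
    default:
      return { kind: "dynamicFields", fields: meta.fields ?? [] };
  }
}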

Agent Orchestration Patterns

Anthropic released a guide called “Building Effective Agents” that describes several patterns for building highly effective AI agents. Any serious agent builder will want to reproduce these patterns instantly on a platform like this, so I created nodes that scaffold these structures instantly. Some of them:

1. Prompt Chaining

  • Pattern: Sequential steps where the output of one agent is used as input for the next.
  • Node Example:
    • customInput → agent_1 → agent_2 → customOutput

2. Parallelisation

  • Pattern: Multiple agents or actions run in parallel, then their results are aggregated.
  • Node Example (serialised as a JSON graph after this list):
    • customInput → agent_1 (parallel)
    • customInput → agent_2 (parallel)
    • Both → aggregator (could be an agent or llm node) → customOutput

3. Routing

  • Pattern: A router agent decides which branch or agent to send the input to, based on logic or input.
  • Node Example:
    • customInput → router_agent
    • router_agent → agent_1 or agent_2
    • Both → customOutput

4. Evaluator-Optimiser

  • Pattern: A generator agent produces solutions, an evaluator agent checks them, and the process loops until a good solution is found.
  • Node Example:
    • customInput → generator_agent → solution_output
    • generator_agent → evaluator_agent → (loops back to) generator_agent

5. Augmented LLM

  • Pattern: An agent node is augmented with tool calls or external data fetching.
  • Node Example:
    • customInput → agent (with tools/actions) → customOutput
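To make one of these concrete, here’s the parallelisation pattern serialised into the same JSON graph format from earlier. IDs, positions, tool lists, and the customOutput type name are illustrative:

{
  "nodes": [
    { "id": "input_1", "type": "customInput", "position": { "x": 0, "y": 100 }, "data": { "label": "User Query" } },
    { "id": "agent_1", "type": "agent", "position": { "x": 200, "y": 0 }, "data": { "tools": ["..."] } },
    { "id": "agent_2", "type": "agent", "position": { "x": 200, "y": 200 }, "data": { "tools": ["..."] } },
    { "id": "llm_1", "type": "llm", "position": { "x": 400, "y": 100 }, "data": { "label": "Aggregator" } },
    { "id": "output_1", "type": "customOutput", "position": { "x": 600, "y": 100 }, "data": { "label": "Result" } }
  ],
  "edges": [
    { "source": "input_1", "target": "agent_1" },
    { "source": "input_1", "target": "agent_2" },
    { "source": "agent_1", "target": "llm_1" },
    { "source": "agent_2", "target": "llm_1" },
    { "source": "llm_1", "target": "output_1" }
  ]
}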

After 48 hours of rapid development, I had a working agent platform.

The barrier to building agents has collapsed. You don’t need a 20-person team and six months anymore. You need clear thinking about what agents are (decision-makers with tools), the right abstractions (everything is a graph), and the wisdom to use existing solutions instead of rebuilding them.

The irony? I spent more time getting the authentication modal to feel right than building the entire execution engine. Maybe that’s the real insight here. In the age of vibe coding, the hard problems are no longer technical. They’re about understanding what users need and having the taste to build it well.

The code is on GitHub. Fork it, break it, make it better.

Finally, the fruits of 48 hours of vibe coding
