
Introducing Val Town MCP

On Val Town, you deploy JavaScript in 100ms. Now with the Val Town MCP server, you can do that from Claude, ChatGPT, Cursor, VSCode, or wherever you do your AI coding.

If you've been following my tweets recently – "I've gotta rant about LLMs, MCP, and tool-calling for a second", "MCP is mostly nonsense", "MCP is overhyped" – you might be surprised by this announcement. Well, how did you think I got those salty takes except by building an MCP server?

Yes, I think MCP is dubious as a protocol. But for now, MCP is the right way for Val Town to meet developers where they are. In Cursor or Claude Code or Zed or wherever. For example, here we use Claude Code to make a blog. Every edit is immediately live and deployed on Val Town.

We have guides for some of the popular LLM tools, but the Val Town MCP server should work with any MCP client. If you'd like a hand with setup, ask in our Discord server or send us an email.
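
For orientation, most MCP clients are configured with a small JSON block that tells them how to launch or reach the server. Here's a sketch in the common `mcpServers` shape used by clients like Claude Desktop and Cursor – the command, package name, and env var below are placeholders, not the official values; check the guides for the real setup.

```json
{
  "mcpServers": {
    "val-town": {
      "command": "npx",
      "args": ["-y", "<val-town-mcp-package>"],
      "env": { "VAL_TOWN_API_KEY": "<your API token>" }
    }
  }
}
```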

Why MCP

MCP is not perfect (again, see tweets), but it has a few things going for it:

  1. Cheaper – Don't pay us for credits. Pay your inference provider directly.
  2. Better – Use whatever state-of-the-art LLM you want. We at Val Town don't have to fast-follow it.
  3. Val Town everywhere – Get the best parts of Val Town – instant deployments, built-in SQLite, etc. – in your favorite LLM coding tool (see the SQLite sketch after this list).
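
To make the "built-in SQLite" point concrete, here's a minimal sketch of a val using Val Town's std SQLite library. The import path follows Val Town's published std/sqlite convention; the table and data are made up for illustration.

```ts
import { sqlite } from "https://esm.town/v/std/sqlite";

// Every val gets a SQLite database with zero setup.
await sqlite.execute(`CREATE TABLE IF NOT EXISTS guestbook (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  message TEXT NOT NULL
)`);

// Parameterized statements take a { sql, args } object.
await sqlite.execute({
  sql: "INSERT INTO guestbook (message) VALUES (?)",
  args: ["hello from an LLM"],
});

const { rows } = await sqlite.execute("SELECT * FROM guestbook");
console.log(rows);
```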

MCP also allows us to ship faster. Traditional APIs require careful versioning to prevent breaking changes, but an MCP server can change continuously: clients discover the current tool definitions at runtime, and the LLM adapts to them through inference.
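
To see why, here's a sketch of runtime tool discovery using the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`); the server command and package name are placeholders. Because the client fetches the tool list fresh on every connection, the server can add, remove, or reshape tools without a versioned API.

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the MCP server as a subprocess (command is a placeholder).
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "<val-town-mcp-package>"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// The client learns the current tool surface at connect time,
// so the server is free to evolve between sessions.
const { tools } = await client.listTools();
for (const tool of tools) {
  console.log(tool.name, "-", tool.description);
}
```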

Fast feedback loops

There's a common thread running through every feature we build – AI or otherwise: enabling fast feedback loops.

Creators need an immediate connection to what they're creating.

If you make a change, you need to see the effect of that immediately.

- Bret Victor, Inventing on Principle

When you – or your LLM – make an edit on Val Town, your code is deployed in 100ms. This allows you to have insanely fast feedback loops in your production environment. No need to wait a minute or two to see how it'll actually look when deployed. Every change is immediately live, at a public URL.
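
For instance, an HTTP val is just a default-exported fetch handler – the standard Request-in, Response-out shape. Save this, and it's serving traffic at its public URL right away:

```ts
// A minimal HTTP val. Every save is an instant deploy:
// the updated handler is live at the val's public URL immediately.
export default async function (req: Request): Promise<Response> {
  const name = new URL(req.url).searchParams.get("name") ?? "world";
  return new Response(`Hello, ${name}!`, {
    headers: { "content-type": "text/plain" },
  });
}
```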

Val Town isn't an AI company – we're a developer tools company – but this always-deployed model works quite well with LLMs. Just give your favorite LLM a branch, and the code it writes will be alive and sharable by default.

Bring Val Town MCP to your favorite LLM, and let us know what you think.

We’re hiring! Join our team and help build the future of programming.

View open positions →