Ask HN: Any enterprises experimenting with AI agents / MCP-style infra?

2 points by schappim 11 hours ago

Hi HN,

I've been building Ninja.ai solo for the past few months - a platform for deploying and observing MCP (Model Context Protocol) servers, which are essentially open-protocol endpoints that AI assistants like ChatGPT or Claude can call to trigger real actions: APIs, workflows, database updates, etc.
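
For context, an MCP server is just a process that speaks the protocol and exposes named tools. A minimal one, following the public TypeScript SDK's quickstart pattern, looks roughly like this - a generic sketch, not Ninja's code; the server name and the "add" tool are placeholders:

    // Minimal MCP server, following the official TypeScript SDK's
    // quickstart pattern (generic example, not Ninja.ai's code; the
    // "add" tool is just a placeholder).
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    const server = new McpServer({ name: "demo-tools", version: "1.0.0" });

    // Register one callable tool; an assistant discovers it via
    // tools/list and invokes it via tools/call.
    server.tool(
      "add",
      { a: z.number(), b: z.number() },
      async ({ a, b }) => ({
        content: [{ type: "text", text: String(a + b) }],
      }),
    );

    // stdio transport here; hosted/remote servers typically use an
    // HTTP transport instead.
    await server.connect(new StdioServerTransport());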

It's all early, but I've shipped:

  • A basic MCP hosting and deployment platform (Rails, with Deno Deploy for isolated execution)

  • A gateway that aggregates multiple MCP tools under a single server, so you add Ninja to your AI assistant once and can then one-click install further tools (rough sketch of the idea after this list)

  • A CLI that lets developers package existing APIs and tools as callable MCP tools

  • A live app store with installable tools, plus observability/logging (logging GUI not exposed yet)

  • A chat interface for talking to agents that can use these remote MCP servers, even without a paid OpenAI or Claude (Max-tier) account
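
To make the gateway idea concrete, here's a rough sketch of the general pattern: connect to several upstream MCP servers as a client, merge their tool lists, and route each tools/call request to whichever upstream owns the tool. The upstream URLs are placeholders and this is not Ninja's actual implementation:

    // Sketch of an aggregating MCP gateway: one server that fronts
    // several upstream MCP servers. Placeholder URLs; illustrative only.
    import { Server } from "@modelcontextprotocol/sdk/server/index.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
    import { ListToolsRequestSchema, CallToolRequestSchema } from "@modelcontextprotocol/sdk/types.js";

    const UPSTREAM_URLS = [
      "https://example.com/mcp/crm",      // hypothetical upstream servers
      "https://example.com/mcp/billing",
    ];

    // Connect to each upstream as an MCP client and record which
    // upstream owns which tool name.
    const toolOwner = new Map<string, Client>();
    const tools: any[] = [];
    for (const url of UPSTREAM_URLS) {
      const upstream = new Client(
        { name: "gateway-client", version: "0.1.0" },
        { capabilities: {} },
      );
      await upstream.connect(new StreamableHTTPClientTransport(new URL(url)));
      for (const tool of (await upstream.listTools()).tools) {
        toolOwner.set(tool.name, upstream);
        tools.push(tool);
      }
    }

    // Expose the merged tool list and forward each call to its owner.
    const gateway = new Server(
      { name: "aggregating-gateway", version: "0.1.0" },
      { capabilities: { tools: {} } },
    );
    gateway.setRequestHandler(ListToolsRequestSchema, async () => ({ tools }));
    gateway.setRequestHandler(CallToolRequestSchema, async (req) => {
      const upstream = toolOwner.get(req.params.name);
      if (!upstream) throw new Error(`unknown tool: ${req.params.name}`);
      return upstream.callTool({
        name: req.params.name,
        arguments: req.params.arguments,
      });
    });

    await gateway.connect(new StdioServerTransport());

A real gateway also has to namespace colliding tool names, handle auth to each upstream, and run over an HTTP transport so hosted assistants can reach it, but the shape is the same.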

I'm now trying to understand the real-world pain points inside enterprises that are experimenting with agents, tool-use, or "AI infra" more broadly.

If you're working at a company doing this - or know someone who is - I'd love to talk. Not to pitch, just to learn:

  • What's breaking?

  • What's hacky?

  • What's needed to make this stuff production-grade?

If you've shipped something similar internally (or even ruled it out), I'd really appreciate your perspective. Comment here, or my email is in my profile.

Happy to share what I've built so far or help troubleshoot agent infra if that's helpful too.

Thanks,

Marcus