I Built a Remote Control for My Mac (and It Runs Through Claude)
I was sitting on my couch the other night, phone in hand, talking to Claude about a project. I needed to check something on my Mac across the room. A file in a repo. Nothing complicated. But the AI I was talking to had no way to get to it. I’d have to get up, walk over, open a terminal, run the command, come back, and type the result into the chat.
That’s stupid. The whole promise of AI assistants is that they can do things for you, not just talk about doing things. And yet here I was, acting as the middleware between an LLM and my own computer.
So I built the bridge.
The gap, and why OpenClaw isn’t my answer
MCP has evolved fast. Six months ago it was almost entirely a local protocol. That’s changed. Remote MCP servers have gone mainstream. Projects like Basic Memory Cloud are bringing local-first tools to cloud clients. FastMCP makes it trivial to deploy MCP servers anywhere. The ecosystem is moving toward remote-first at a pace that would’ve seemed unlikely even at the end of last year.
But there’s a specific gap none of that solves: getting a cloud AI client to execute commands on your specific machine. Not a server somewhere. Not a container. Your Mac, with your repos, your configs, your running processes.
If you’ve been anywhere near the AI conversation lately, you’ve seen OpenClaw try to answer this. It’s an open source AI agent that runs on your machine with full system access, connects to your messaging apps, manages your email and calendar, and operates autonomously while you sleep. 116,000 GitHub stars. Scientific American coverage. Security researchers losing their minds. Depending on who you ask, it’s either the future of personal computing or a security nightmare in a lobster costume.
I’m not ready for that. Giving an AI agent high-privilege access to my machine, my accounts, my credentials, and then letting it act independently? That’s a trust leap I haven’t been willing to make. The security concerns around OpenClaw aren’t hypothetical. But I also don’t want to sit on the sidelines while the world figures out human-AI collaboration. I wanted a measured step. Something that gives AI the ability to act on my machine, but only when I’m in the loop.
What I built
Code Remote is an open source MCP server that lets any AI client run commands on your machine from anywhere. The architecture is simple: your AI client connects to a relay server over SSE (standard MCP transport), and that server holds a WebSocket connection to an agent running on your Mac. When the AI calls a tool like run_shell_command or read_file, the command goes through the relay and the agent executes it locally. Results flow back the same way.
Three components. AI on one end, your computer on the other, a relay in the middle. That’s it.
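The relay's job is easy to sketch. Here's a hypothetical wire format showing the idea, not Code Remote's actual protocol: the relay wraps an MCP tool call in an envelope with a request ID, forwards it over the agent's WebSocket, and matches the result coming back by that same ID. All names here are illustrative.

```python
import json
import uuid
from dataclasses import dataclass, field

# Hypothetical envelope for relaying a tool call to the agent.
# Code Remote's real wire format may differ.
@dataclass
class ToolCall:
    tool: str   # e.g. "run_shell_command" or "read_file"
    args: dict  # tool arguments from the MCP client
    request_id: str = field(default_factory=lambda: uuid.uuid4().hex)

    def to_wire(self) -> str:
        """Serialize for the relay -> agent WebSocket hop."""
        return json.dumps({"id": self.request_id, "tool": self.tool, "args": self.args})

def handle_on_agent(wire_msg: str) -> str:
    """Agent side: execute the tool locally, echo the id back so the
    relay can route the result to the MCP call that's waiting on it."""
    msg = json.loads(wire_msg)
    if msg["tool"] == "run_shell_command":
        # A real agent would shell out via subprocess here; stubbed for the sketch.
        result = {"stdout": f"ran: {msg['args']['command']}", "exit_code": 0}
    else:
        result = {"error": f"unknown tool {msg['tool']}"}
    return json.dumps({"id": msg["id"], "result": result})

call = ToolCall(tool="run_shell_command", args={"command": "git status"})
reply = json.loads(handle_on_agent(call.to_wire()))
assert reply["id"] == call.request_id  # relay matches result to request
```

The request ID is what keeps the relay stateless about meaning: it never interprets commands, it just correlates envelopes going out with results coming back.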
The key difference from something like OpenClaw: Code Remote is a tool, not an agent. It doesn't do anything on its own. Every action is initiated by you, through your AI client, in the context of a conversation you're actively having. No autonomy. No background behavior. It's a remote control, not a robot: the difference between giving someone the keys to your house and letting them in while you're standing at the door.
Will I eventually move further toward autonomous agents? Probably. The trajectory is clear. But I’d rather walk in that direction with my eyes open than jump in at the deep end because the hype cycle told me to.
Security
Even with the human-in-the-loop design, this deserves careful thought. Code Remote uses token-based authentication between all three components. The agent restricts file access to your home directory and temp directories. Every command is logged to SQLite for audit. You can run the agent in a Docker container for full sandboxing.
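Two of those guards are simple enough to sketch in a few lines. This is my illustration of the design described above, not Code Remote's actual code, and the function names are mine:

```python
import sqlite3
import tempfile
import time
from pathlib import Path

# Directories the agent is allowed to touch: home and temp, per the design above.
ALLOWED_ROOTS = [Path.home().resolve(), Path(tempfile.gettempdir()).resolve()]

def is_path_allowed(candidate: str) -> bool:
    """Resolve symlinks and '..' first, THEN check ancestry, so a path
    like '~/../../etc/passwd' can't sneak past the restriction."""
    resolved = Path(candidate).resolve()
    return any(resolved.is_relative_to(root) for root in ALLOWED_ROOTS)

def audit_log(db: sqlite3.Connection, tool: str, detail: str) -> None:
    """Append-only audit trail: one row per executed command."""
    db.execute("CREATE TABLE IF NOT EXISTS audit (ts REAL, tool TEXT, detail TEXT)")
    db.execute("INSERT INTO audit VALUES (?, ?, ?)", (time.time(), tool, detail))
    db.commit()

db = sqlite3.connect(":memory:")  # a real agent would use a file on disk
audit_log(db, "read_file", str(Path.home() / "notes.txt"))
assert is_path_allowed(str(Path.home() / "repo" / "file.py"))
assert not is_path_allowed("/etc/passwd")
```

The resolve-before-compare order is the important part: checking the raw string against a prefix is exactly the bug that lets traversal attacks through.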
The piece I’m most happy with is the Tailscale integration. Configure the server to only accept agent connections from your private network, and even if someone gets your auth token, they can’t connect an agent unless they’re on your tailnet. The MCP endpoint stays publicly accessible for your AI client, but the execution path is locked to your network. Is it bulletproof? No. Is it more considered than most things people do with their development environments? I think so.
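The check itself is cheap, because Tailscale hands out addresses from known ranges: the IPv4 CGNAT block 100.64.0.0/10 and the IPv6 prefix fd7a:115c:a1e0::/48. A relay can gate agent registration on the peer address falling inside those ranges. Again, a sketch of the idea rather than Code Remote's implementation:

```python
import ipaddress

# Tailscale assigns IPv4 addresses from the CGNAT block 100.64.0.0/10
# and IPv6 addresses from fd7a:115c:a1e0::/48. Refuse agent
# registrations whose peer address falls outside these ranges.
TAILNET_RANGES = [
    ipaddress.ip_network("100.64.0.0/10"),
    ipaddress.ip_network("fd7a:115c:a1e0::/48"),
]

def peer_is_on_tailnet(peer_ip: str) -> bool:
    addr = ipaddress.ip_address(peer_ip)
    return any(addr in net for net in TAILNET_RANGES)

# Only the agent path is gated; the client-facing MCP endpoint stays public.
assert peer_is_on_tailnet("100.101.102.103")  # typical tailnet address
assert not peer_is_on_tailnet("203.0.113.7")  # arbitrary public address
```

One caveat worth knowing: if the relay sits behind a reverse proxy, the peer address it sees is the proxy's, so the check has to happen at whatever layer actually terminates the agent's connection.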
How I actually use it
I open Claude.ai on my phone. Code Remote is configured as an MCP connector. I say “check if the Basic Memory tests are passing” and Claude runs the test suite on my Mac and reports back. Or “show me what’s in the logs for the last hour.” Or “create a new branch and commit the changes I described.”
I also use it alongside Basic Memory, and the combination is where it gets powerful. Basic Memory gives Claude persistent knowledge about my projects, notes, and decisions. Code Remote gives Claude hands to act on that knowledge. Together, the AI knows what I’m working on AND can do things about it. That’s a meaningful jump from “chatbot” to “collaborator.”
The post you’re reading right now? I drafted it in a conversation where Claude had access to both. It read the Code Remote repo on my Mac, checked the existing blog post template, and we wrote this together. Not a hypothetical use case. Literally what just happened.
What’s next
Code Remote is open source (AGPL-3.0) and on GitHub. It works today. I use it daily. There are things I want to improve: better multi-agent support, a cleaner setup experience. But the core is solid and the architecture is simple enough that it shouldn’t break in ways that are hard to debug.
If you’re an MCP user who’s felt the gap between cloud AI and your local machine, give it a try. This is what building in public looks like: you have a problem, you build a solution, you put it on the internet, and you write about what you learned.