2026-03-29
Telegram Bridges for Gemini CLI and Codex After Hitting Claude Code Limits
I hit Claude Code usage limits and immediately noticed that the real loss was not just model access. It was the workflow.
Once you get used to sending work to an agent from your phone, waiting for the reply, then sending the next step while you are away from your desk, normal local CLI usage feels slow. So I built a stopgap: a small Telegram bridge for Gemini CLI and Codex.
This is not a polished product. It is a local-first workaround. But it brought back most of the async loop I actually cared about.
The Workflow Loss Was the Real Problem
Claude Code Channels changed how I work on side projects.
I could be in a cafe in Kuala Lumpur, out walking, or traveling, and still move a repo forward from Telegram. That changes the shape of the day. You stop thinking in terms of “sit down for a coding session” and start thinking in terms of “send the next instruction now, read the result later.”
When I hit usage limits, the obvious replacement was not obvious at all. Gemini CLI and Codex already existed on my machine, but both were still tied to the terminal. Good models were not enough. I wanted the same remote, async, mobile-friendly loop.
So I built a bridge.
The Architecture Is Intentionally Small
The setup is simple:
- Telegram receives the message
- a local Node.js process polls the Bot API
- the bridge runs Gemini CLI or Codex against a local repo
- stdout and stderr get sent back to Telegram
That is basically it.
I ended up with two integrations:
- codex-telegram-integration
- gemini-telegram-integration_v2
I published the repo here: github.com/0xkaz/codex-gemini-telegram-bridge.
I like this shape because it stays inspectable. There is no hosted backend, no extra dashboard, and no mystery service in the middle. The machine that already has the repo also has the bridge and the CLI tools. That makes debugging much easier.
Session Continuity Is the Part That Matters
The first version could have been a dumb “Telegram to shell” relay. That would have been enough for a demo. It would not have been enough for real use.
The useful part is session continuity.
If I send a follow-up prompt, I do not want the tool to forget what happened two messages ago. I want it to keep the current session, reuse the recent context, and continue from there. Without that, the whole thing collapses into repetitive setup prompts and wasted tokens.
This turned out to be the main reason the bridge felt usable instead of annoying.
I also made normal non-command messages behave like prompts by default. I did not want to type a slash command every single time. That small choice made the Telegram chat feel much closer to an actual agent interface.
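The bookkeeping behind this is small: a per-chat map from chat id to session id, consulted before every run. This sketch assumes the CLI exposes some way to resume a prior session by id; the `--resume` flag here is a placeholder, since the exact mechanism differs between Gemini CLI and Codex.

```javascript
// Per-chat session tracking: one active session id per Telegram chat.
// The --resume flag is a stand-in; the real resume mechanism is tool-specific.
const sessions = new Map(); // chatId -> sessionId

// Build CLI args for a prompt, resuming the chat's session if one exists.
function argsFor(chatId, prompt) {
  const args = ["-p", prompt];
  const sessionId = sessions.get(chatId);
  if (sessionId) args.push("--resume", sessionId); // hypothetical flag
  return args;
}

// Record the session id captured from the CLI's output.
function rememberSession(chatId, sessionId) {
  sessions.set(chatId, sessionId);
}

// Backing logic for a /reset-session command: next prompt starts fresh.
function resetSession(chatId) {
  sessions.delete(chatId);
}
```

Because the map is keyed by chat id, two different Telegram chats never bleed context into each other, and `/reset-session` is just a `delete`.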
Telegram Formatting Is Not a Small Detail
Raw CLI output looks worse in Telegram than in a terminal.
That sounds minor, but it is not. If the reply is messy, wrapped badly, or full of unreadable markdown-style file links, the workflow gets worse very quickly.
So I had to add a few quality-of-life fixes:
- typing indicators while the command is running
- session commands like /sessions, /resume, and /reset-session
- cleaner error summaries
- cleanup for file-link formatting so replies are readable
These are not impressive engineering problems. They are still the difference between “nice demo” and “something I actually keep using.”
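Two of those fixes are easy to show concretely. Telegram caps a single message at 4096 characters, so long CLI output has to be chunked, and the typing indicator is a single `sendChatAction` call. This is a sketch; `api` is assumed to be the bot endpoint base defined elsewhere in the bridge.

```javascript
// Telegram caps messages at 4096 characters; split long CLI output.
function chunkReply(text, limit = 4096) {
  const chunks = [];
  for (let i = 0; i < text.length; i += limit) {
    chunks.push(text.slice(i, i + limit));
  }
  return chunks.length ? chunks : [""];
}

// Show "typing..." in the chat while the command runs. Telegram only
// displays the action for a few seconds, so long runs would need to
// re-send it periodically; this sketch skips that.
async function withTyping(api, chatId, work) {
  await fetch(`${api}/sendChatAction`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, action: "typing" }),
  });
  return work();
}
```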
Gemini Quotas Forced Better Error Handling
One of the more useful failures happened on the Gemini side.
While testing, I hit quota exhaustion. That exposed a bad assumption in the bridge: terminal-grade errors are often acceptable in a shell, but they are terrible in chat. A quota stack trace dumped into Telegram is just noise.
So I added error summarization instead of blindly forwarding everything.
That changed how I thought about the bridge. Once a CLI tool starts replying inside Telegram, the output itself becomes part of the UX, and it has to be edited like UX rather than just forwarded.
What This Restores
This bridge does not recreate Claude Code Channels exactly. I do not think that is realistic.
What it does restore is the part I cared about most:
- sending a prompt from my phone
- letting the machine at home do the work
- coming back later to a usable reply
- continuing the same thread without restarting context
That is enough to keep a side project moving while I am away from the laptop.
For me, that matters more than feature parity.
What I Would Tell Anyone Copying This
There are obvious limits.
- This is a stopgap, not a final platform.
- You should restrict which Telegram chat can talk to it.
- You should be careful with destructive actions like commit, push, or deploy.
- Model quotas still apply, especially on the Gemini side.
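The first two guardrails are cheap to implement. A sketch, assuming an allowlist of chat ids and a keyword blocklist for destructive actions; the example chat id and keyword list are placeholders to tune for your own setup:

```javascript
// Guardrails: only listed chats may drive the bridge, and prompts that
// look destructive get flagged for explicit confirmation first.
const ALLOWED_CHATS = new Set([123456789]); // placeholder: your chat id

function isAllowed(chatId) {
  return ALLOWED_CHATS.has(chatId);
}

// Crude keyword match for actions that change the repo or the world.
const DESTRUCTIVE = /\b(git\s+push|git\s+commit|deploy|rm\s+-rf)\b/i;

function needsConfirmation(prompt) {
  return DESTRUCTIVE.test(prompt);
}
```

Ignoring updates from unknown chats matters more than it sounds: anyone who discovers the bot's username can message it, and without the allowlist those messages become shell activity on your machine.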
I would also say this: local-first is a real advantage here. If you already work out of local repos and already use Telegram heavily, this kind of bridge is easier to trust than some half-remote stack you cannot inspect.
Verdict
I do think Gemini CLI and Codex will eventually get better official remote workflows. That feels inevitable.
But “eventually” is not useful when the workflow break happens now.
This bridge is the kind of engineering work I like: not the final form, not especially elegant, but good enough to unlock the missing behavior immediately.
If you liked Claude Code Channels and want something similar for Gemini CLI or Codex right now, this approach is worth trying. I would use an official version later. Until then, I would still rather have this than go back to being stuck at the terminal.