A Go-optimized coding agent

codalotl is a coding agent for Go codebases. It is about twice as fast and twice as token-efficient as general-purpose agents, with deep integration into standard Go tooling and conventions.

Features

2x faster, with about half the tokens

  • In side-by-side tests with the same model, codalotl finished in about half the time while using about half the tokens. See the benchmark.
  • On gpt-5.2-high: 7 minutes per task with codalotl vs 15 minutes with Codex.
  • Average cost: $0.38 per task with codalotl vs $0.65 with Codex.

Starts with the Go context your model needs

  • Prompts are preloaded with Go-specific context for the package you are working in.
  • Relevant files, identifiers, and the package graph are included up front, so the model can begin editing immediately (see the sketch after this list).
  • Test, build, and lint status is available from the get-go.
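
The sketch below is illustrative only, not codalotl's actual code: it shows one way this kind of per-package context can be gathered with standard tooling, here golang.org/x/tools/go/packages. The "./..." pattern and the fields printed are assumptions for the example, not documented codalotl behavior.

    // Hypothetical sketch (not codalotl's code): collect the per-package context
    // an agent could preload: files, direct imports, and top-level identifiers.
    package main

    import (
        "fmt"
        "log"

        "golang.org/x/tools/go/packages"
    )

    func main() {
        cfg := &packages.Config{
            Mode: packages.NeedName | packages.NeedFiles |
                packages.NeedImports | packages.NeedTypes,
        }
        pkgs, err := packages.Load(cfg, "./...") // example pattern: every package in the module
        if err != nil {
            log.Fatal(err)
        }
        for _, pkg := range pkgs {
            fmt.Println("package:", pkg.PkgPath)
            fmt.Println("files:", pkg.GoFiles)
            for importPath := range pkg.Imports { // direct edges of the package graph
                fmt.Println("imports:", importPath)
            }
            if pkg.Types != nil {
                // Top-level identifiers from the type-checked package scope.
                fmt.Println("identifiers:", pkg.Types.Scope().Names())
            }
        }
    }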

Stays precise in large repositories

  • Edits are scoped to one package at a time to let the model focus.
  • It explores the rest of the repo the way you do: by reading godoc-style API docs (a sketch follows this list).
  • For cross-package work, subagents are launched automatically to coordinate changes.
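
To make "godoc-style API docs" concrete, here is a hedged sketch, again not codalotl's implementation, that extracts a package's documented API surface with the standard library's go/parser and go/doc packages. The directory "./internal/store" and the import path are placeholders.

    // Hypothetical sketch: read a package as godoc-style API docs (exported
    // types and functions with their doc comments) instead of raw source.
    package main

    import (
        "fmt"
        "go/ast"
        "go/doc"
        "go/parser"
        "go/token"
        "log"
    )

    func main() {
        fset := token.NewFileSet()
        // Placeholder directory; in practice this would be the package being explored.
        dirPkgs, err := parser.ParseDir(fset, "./internal/store", nil, parser.ParseComments)
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range dirPkgs {
            var files []*ast.File
            for _, f := range p.Files {
                files = append(files, f)
            }
            api, err := doc.NewFromFiles(fset, files, "example.com/internal/store") // placeholder import path
            if err != nil {
                log.Fatal(err)
            }
            for _, t := range api.Types {
                fmt.Printf("type %s\n%s\n", t.Name, t.Doc)
            }
            for _, fn := range api.Funcs {
                fmt.Printf("func %s\n%s\n", fn.Name, fn.Doc)
            }
        }
    }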

Validates every code edit automatically

  • Every patch is run through gofmt automatically as part of the edit.
  • Build and lint checks run immediately after each patch; a sketch of this loop follows the list.
  • Lints are configurable: staticcheck, golangci-lint, or your own lint pipeline.
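
A minimal sketch of that edit-validate loop, written under assumptions rather than taken from codalotl's source: the patched file is reformatted with go/format (which applies gofmt's formatting), then go build and go vet run against the tree. The file name is a placeholder, and the vet step stands in for whichever lint pipeline is configured.

    // Hypothetical sketch of validating a patch: gofmt the edited file, then
    // run build and lint checks, surfacing any failure immediately.
    package main

    import (
        "go/format"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const path = "store.go" // placeholder: the file the agent just edited

        src, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        formatted, err := format.Source(src) // same formatting gofmt applies
        if err != nil {
            log.Fatalf("patch does not parse: %v", err) // reject the edit early
        }
        if err := os.WriteFile(path, formatted, 0o644); err != nil {
            log.Fatal(err)
        }

        // Build and lint right after the edit. A configured linter such as
        // staticcheck or golangci-lint could replace or follow go vet here.
        for _, args := range [][]string{{"build", "./..."}, {"vet", "./..."}} {
            cmd := exec.Command("go", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatalf("go %s failed: %v", args[0], err)
            }
        }
    }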

Frequently Asked Questions

What is codalotl?

codalotl is a Go-focused coding agent built to work in large (or small) Go codebases. It is primarily a TUI, but also comes with noninteractive CLI modes. It's like Codex/Claude Code/OpenCode (and can use the same models), but the harness itself just works better because it's purpose-built for Go.

Why just Go?

Answer one: because I mostly write Go, and I primarily write codalotl for myself.

Answer two: there is a lot you can do when you build an agent for just one language. To take a small example: codalotl automatically runs gofmt on every edit the LLM makes. The LLM never needs to think about this or make separate tool calls. It just works, 100% of the time.

Is codalotl free?

Yes. codalotl is free to use, and there are no plans to add monetization of any kind. You are also free to fork the repository and adapt it for your own workflows.

Is codalotl open source?

Yes. The source is available at github.com/codalotl/codalotl, so you can inspect how it works, run it yourself, and build on top of it.

Do I need an account?

No. There are no codalotl accounts.

You just need API keys for your LLM provider, and codalotl calls the provider directly from your environment.

What models does it support?

Right now, codalotl supports OpenAI models. That starting point was chosen because OpenAI currently shows the best performance in goagentbench for the workflows codalotl is targeting.

Will my code stay private?

Yes. codalotl servers do not see your code and do not proxy your LLM requests. Requests go directly from your environment to your configured model provider.

[codalotl visualization]

Install codalotl

Visit the GitHub repo to install codalotl:

github.com/codalotl/codalotl