You commit dozens of times a day.
The message shouldn't slow you down.
Your diff becomes multiple proposals — right in the terminal.
Tweak until it reads like you wrote it.
npm i -g ultrahope

Free — no account needed. Open source.
Need unlimited requests?
See Pro plan →

Judge by the output, not the model name.
A commit message is a single line. How much model do you need for one line?
Take commit messages. Here's what different models generate from a single diff.
| Model | Message | Latency | Cost |
|---|---|---|---|
| Llama 3.1 8B (cerebras/llama3.1-8b) | fix(core): support Vitest's expect variants | 859ms (1.00x) | $0.00026 |
| Ministral 3B (mistral/ministral-3b) | fix(lint): allow `assert`, `expectTypeOf`, `assertType` in test assertions for Vitest | 887ms (1.03x) | $0.00011 |
| Codestral (mistral/codestral) | fix(analyze): add support for vitest expect variants in useExpect rule | 958ms (1.12x) | $0.00087 |
| GPT-5.3 Codex (openai/gpt-5.3-codex) | fix(js-analyze): treat Vitest assert and type assertions as expect variants | 1532ms (1.78x) | $0.00487 |
| Claude Sonnet 4.6 (anthropic/claude-sonnet-4.6) | fix(nursery/useExpect): recognize assert, expectTypeOf, and assertType as valid assertions | 1577ms (1.84x) | $0.00990 |
| Claude Opus 4.5 (anthropic/claude-opus-4.5) | fix(lint): recognize assert, expectTypeOf, and assertType as valid test assertions | 2148ms (2.50x) | $0.01640 |
| GPT-5.2 (openai/gpt-5.2) | fix(analyze): treat Vitest assert/expectTypeOf/assertType as assertions | 2206ms (2.57x) | $0.00488 |
| Gemini 3 Pro (google/gemini-3-pro-preview) | feat(l | 3549ms (4.13x) | $0.00753 |
| Grok Code Fast (xai/grok-code-fast-1) | feat(lint): extend useExpect rule to recognize assert, expectTypeOf, and assertType assertions | 13688ms (15.93x) | $0.00102 |
Quality is assessed by human review. Latency and cost are captured from the same AI Gateway execution.
What if the result isn't quite right?
You stay in control. The model just gets you started.
The model drafts. You have the final word.