Usage
Behavior
- load and validate config,
- plan tasks from groups and buckets,
- skip tasks already in .hyperlocalise.lock.json,
- execute remaining tasks,
- persist successful tasks to lock state.
Supported local file formats
run can read source and target files with these extensions:
.json, .xlf and .xliff, .po, .md, .mdx, .strings, .csv
For JSON (.json), run supports:
- standard nested key/value JSON objects
- FormatJS message JSON when the root strictly matches:
{"[id]": {"defaultMessage": "[message]", "description": "[description]"}}
defaultMessage is translated. Keys (message IDs), description, and other non-message metadata are preserved.
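For illustration, a file in the detected FormatJS shape might look like this (the IDs and strings are made up):

```json
{
  "checkout.title": {
    "defaultMessage": "Review your order",
    "description": "Heading shown on the checkout page"
  },
  "checkout.submit": {
    "defaultMessage": "Place order",
    "description": "Label on the submit button"
  }
}
```

Only the two defaultMessage values would be rewritten; the message IDs and description fields pass through unchanged.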
For Markdown and MDX (.md, .mdx), run translates extracted prose and preserves non-translatable structure:
- frontmatter blocks (---)
- fenced code blocks (``` and ~~~)
- inline code spans
- Markdown anchors such as link destinations
- MDX import and export lines
- JSX/MDX component tags and attribute values
For strings files (.strings), run preserves comments and key/value formatting from the template while replacing value literals with translated text.
For CSV (.csv), run supports two layouts:
- key/value layout (for example: key,value)
- per-locale column layout (for example: id,en,fr,de)
run preserves the existing header and non-target columns, updates matching keys in place, and appends new keys in deterministic sorted order.
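As a sketch, a key/value layout file could look like this (keys and strings are made up):

```csv
key,value
checkout.title,Review your order
checkout.submit,Place order
```

and a per-locale column layout like this, where run fills only the target locale's column:

```csv
id,en,fr,de
checkout.title,Review your order,,
checkout.submit,Place order,,
```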
Flags
- --config: path to config file (default i18n.jsonc in current directory)
- --group: run only tasks for the given group name
- --bucket: run only tasks for the given bucket name
- --dry-run: print plan only, do not translate or write files
- --force: rerun all planned tasks and ignore lockfile skip state
- --workers: number of parallel translation workers (defaults to CPU cores)
- --progress: progress rendering mode (auto|on|off, default: auto)
- --output: write machine-readable JSON run report to the given path
- --experimental-context-memory: enable two-stage context memory generation before translating each scope
- --context-memory-scope: context sharing scope (file|bucket|group, default file)
- --context-memory-max-chars: maximum context memory length injected into each translation request (default 1200)
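Assuming the tool is installed as hyperlocalise (the command name here is an assumption), a scoped run combining these flags might look like:

```shell
# Preview the plan for one group without translating or writing files
hyperlocalise run --config ./i18n.jsonc --group marketing --dry-run

# Execute with fewer workers and save a machine-readable report
hyperlocalise run --workers 2 --output ./run-report.json
```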
Progress debug logging (optional)
To troubleshoot progress rendering, you can enable debug logs without changing CLI flags:
- HYPERLOCALISE_PROGRESS_DEBUG=1 enables progress debug logging.
- HYPERLOCALISE_PROGRESS_DEBUG_FILE=<path> overrides the log file location.
By default, debug logs are written to .hyperlocalise/logs/run.log.
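For example, to capture progress debug logs to a custom path for a single run (the hyperlocalise command name is an assumption):

```shell
HYPERLOCALISE_PROGRESS_DEBUG=1 \
HYPERLOCALISE_PROGRESS_DEBUG_FILE=./progress-debug.log \
hyperlocalise run --progress on
```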
Experimental context memory flow
When --experimental-context-memory is enabled, run builds shared memory once per scope (default: per source file), then reuses it for all entries in that scope.
If memory generation fails or times out, run logs a warning and continues translation without shared memory for that scope.
Why it can appear to wait
- First entry in a new scope waits for memory generation to finish.
- Later entries in the same scope reuse cached memory and proceed without rebuilding.
- Progress UI now shows context-memory steps in the file list so you can see active scope-level work.
Scope runs to one group
Use --group when you want to run only one configured group.
If the named group does not exist in the config, run fails with an unknown group planning error.
Scope runs to one bucket
Use --bucket when you want to run only one configured bucket. This is useful for focused updates, CI partitioning, or validating a single area before a full run.
If the named bucket does not exist in the config, run fails with an unknown bucket planning error.
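For example, a CI job that translates only one bucket and emits a report might look like this (the command and bucket names are illustrative):

```shell
hyperlocalise run --bucket checkout --output ./checkout-report.json
```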
Force rerun all planned tasks
Use --force to ignore lockfile skip state and execute every planned task again.
Output fields
- planned_total
- skipped_by_lock
- executable_total
- succeeded
- failed
- persisted_to_lock
- prompt_tokens
- completion_tokens
- total_tokens
Per-locale usage is reported as locale_usage locale=<locale> prompt_tokens=<...> completion_tokens=<...> total_tokens=<...>.
When you pass --output, the JSON report includes run metadata (generatedAt, configPath), aggregate token usage, per-locale usage, and per-entry batch usage.
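As a rough sketch of the report shape: apart from generatedAt and configPath, which the text above names, the field names and nesting here are assumptions, not the tool's actual schema:

```json
{
  "generatedAt": "2024-01-01T12:00:00Z",
  "configPath": "./i18n.jsonc",
  "usage": {
    "prompt_tokens": 1200,
    "completion_tokens": 900,
    "total_tokens": 2100
  },
  "locales": {
    "fr": {
      "prompt_tokens": 600,
      "completion_tokens": 450,
      "total_tokens": 1050
    }
  }
}
```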
Failure output
On task failure, output includes failure target=<...> key=<...> reason=<...>.
Worker tuning guidance
Lower --workers when you hit provider rate limits or run in constrained CI environments. Start with 1 to stabilize retries and then increase gradually.
Raise --workers when your provider quota and machine resources allow more throughput. Increase in small steps and watch API error rates plus local CPU and memory usage.
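A cautious tuning sequence might look like this (the hyperlocalise command name is an assumption):

```shell
# Stabilize first with a single worker
hyperlocalise run --workers 1

# Then step up gradually while watching provider error rates
# and local CPU and memory usage
hyperlocalise run --workers 4
```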