Handlers and entrypoints
Each function under functions: declares one of two fields:
- `handler:` — string, mode A only. A Python import path that resolves to a callable. MecaPy's runner imports the module and calls it.
- `entrypoint:` — list of strings, modes B/C only. The argv that the worker runs inside your container after staging `/workspace/in/`.

Declaring both, neither, or `entrypoint: []` is rejected at parse time.
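As an illustration of that rule, a manifest like the following sketch would fail validation (the function names here are made up):

```yaml
functions:
  size:
    handler: bolts:size                    # valid: exactly one of the two fields
  broken:
    handler: bolts:size                    # rejected: both fields declared
    entrypoint: ["python3", "run.py"]
  empty:
    entrypoint: []                         # rejected: empty argv
```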
Mode A — handler:
The handler is a single string of the form `module:function`. The
module is imported relative to your repo root; the function is
fetched by attribute lookup on that module. Only module-level
functions are supported — `module:Class.method`, callable classes,
or partial application aren't accepted by the parser.
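Conceptually, resolving such a string takes one import and one attribute lookup. A minimal sketch of the idea (not MecaPy's actual runner code):

```python
import importlib

def resolve_handler(handler: str):
    # "module:function" is a dotted import path, a colon, and an attribute name.
    module_path, func_name = handler.split(":")
    module = importlib.import_module(module_path)
    func = getattr(module, func_name)
    if not callable(func):
        raise TypeError(f"{handler} does not resolve to a callable")
    return func
```

Note that `getattr(module, "Class.method")` raises `AttributeError`, which is consistent with `module:Class.method` not being accepted.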
Plain function
```yaml
functions:
  size:
    handler: bolts:size
```

```python
def size(diameter: float, load: float) -> dict:
    return {"stress": load / area(diameter)}
```

The runner calls `size(**inputs)` where `inputs` is the merge of
`in/data.json` and the File inputs (each File arrives as a
`pathlib.Path`).
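That merge can be sketched as follows. This is a simplified illustration, not the runner's implementation; in particular, keying file inputs by filename stem is an assumption made here:

```python
import json
from pathlib import Path

def call_handler(func, workspace: Path):
    # Scalar inputs come from in/data.json...
    inputs = json.loads((workspace / "in" / "data.json").read_text())
    # ...and each staged file becomes a pathlib.Path input
    # (keyed by stem here, purely for illustration).
    files_dir = workspace / "in" / "files"
    if files_dir.is_dir():
        for path in sorted(files_dir.iterdir()):
            inputs[path.stem] = path
    return func(**inputs)
```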
Submodule
```yaml
handler: pkg.subpkg.module:my_function
```

The dotted prefix is the import path; the `:` splits it from the
attribute name. The runner adds `/workspace/` to `sys.path`, so
anything you ship is importable from there.
Wrapping stateful behaviour
If the underlying logic lives in a class, expose a module-level function that instantiates and dispatches:

```python
class Vis:
    def __init__(self, designation: str): ...
    def compute_stress(self, force: float) -> float: ...

# This is what the manifest references:
def stress_for(designation: str, force: float) -> float:
    return Vis(designation).compute_stress(force)
```

```yaml
handler: bolts:stress_for
```

This keeps the function-vs-method distinction out of the platform — the manifest only ever sees module-level functions, and the class remains an internal implementation detail.
Signature → ports
In mode A, the handler's type-annotated parameters become input
ports and the return type becomes output ports — without you
writing an `inputs:` / `outputs:` section. The introspector reads:

- Plain parameters → typed inputs (one port per parameter, name = parameter name).
- `pathlib.Path` parameters → File inputs.
- Single non-dict return type → one output called `result`.
- `dict[str, T]` return → multiple typed outputs (one per dict key, inferred from a `TypedDict` if you provide one).
- `TypedDict` return → multiple typed outputs (one per field, type from the field).
If the introspection fails (no annotations, ambiguous return type,
etc.) the schemas fall back to a permissive {"type": "object"} and a
warning is logged. Always prefer to add type hints — they drive
the workflow editor’s typing.
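As a worked example of the rules above (the names here are illustrative), a handler annotated like this would yield two typed inputs, one File input, and two typed outputs:

```python
from pathlib import Path
from typing import TypedDict

class SizeResult(TypedDict):
    stress: float        # → output port "stress"
    utilisation: float   # → output port "utilisation"

def size(diameter: float, load: float, geometry: Path) -> SizeResult:
    # diameter, load → typed input ports; geometry → File input port
    ...
```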
You can override or supplement the auto-inferred ports by declaring
inputs: / outputs: explicitly — see
typed I/O.
Modes B/C — entrypoint:
In modes B/C the worker doesn't import any user Python — it just
launches the entrypoint inside your container with `/workspace/`
already staged.

```yaml
functions:
  static_support:
    entrypoint: ["python3", "/workspace/_runner/wrapper.py"]
```

The list is the argv: element 0 is the program; the rest are CLI args.
The list must be non-empty; the worker errors out at validation
otherwise. The image's own `ENTRYPOINT` and `CMD` are ignored — the
worker invokes the argv directly via `docker exec`.
What your entrypoint must do
The entrypoint is responsible for the full runtime contract:

- Read `/workspace/in/data.json` and `/workspace/in/files/*`.
- Run your computation.
- Write `/workspace/out/data.json` (a JSON object).
- Write any declared File outputs to `/workspace/out/files/<name>.<ext>`.
- Optionally drop free-form artifacts in `/workspace/out/artifacts/`.
- Optionally append progress lines to `/workspace/out/progress.jsonl`.
- On failure, write `/workspace/out/_error.json` and exit non-zero.
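Put together, a minimal Python entrypoint honouring this contract might look like the following sketch. The computation is a placeholder, the `_error.json` fields shown are an assumption, and the `workspace` parameter exists only to make the script testable:

```python
#!/usr/bin/env python3
import json
import sys
import traceback
from pathlib import Path

def run(workspace: Path = Path("/workspace")) -> None:
    inputs = json.loads((workspace / "in" / "data.json").read_text())
    # Placeholder computation: replace with your solver call.
    result = {"total": float(sum(inputs.get("values", [])))}
    out = workspace / "out"
    out.mkdir(parents=True, exist_ok=True)
    (out / "data.json").write_text(json.dumps(result))

if __name__ == "__main__":
    try:
        run()
    except Exception as exc:
        out = Path("/workspace/out")
        out.mkdir(parents=True, exist_ok=True)
        (out / "_error.json").write_text(json.dumps({
            "type": type(exc).__name__,
            "message": str(exc),
            "traceback": traceback.format_exc(),
        }))
        sys.exit(1)  # non-zero exit signals failure to the worker
```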
Multiple functions, one image
Several functions in the same package share one image (built or pulled once). They distinguish themselves by their entrypoint argv:

```yaml
runtime:
  kind: image
  image: ghcr.io/me/solver:1.4

functions:
  thermal:
    entrypoint: ["/opt/run.sh", "--mode=thermal"]
  mechanical:
    entrypoint: ["/opt/run.sh", "--mode=mechanical"]
  fatigue:
    entrypoint: ["/opt/run.sh", "--mode=fatigue"]
```

The image is pulled (mode C) or built (mode B) once and cached; each function reuses it with different argv at exec time.
Per-function resources still apply
`resources:` overrides on each function work in modes B/C just as in
mode A — they change the cgroup limits and timeout for that
function's runs:

```yaml
functions:
  thermal:
    entrypoint: ["/opt/run.sh", "--mode=thermal"]
    resources:
      tier: small
  mechanical:
    entrypoint: ["/opt/run.sh", "--mode=mechanical"]
    resources:
      tier: large  # mechanical needs more memory
```

Choosing between modes
| Question | Answer |
|---|---|
| Pure Python with PyPI deps? | Mode A. Skip the Dockerfile boilerplate. |
| Native solver / system package / GPU? | Mode B. Ship a Dockerfile. |
| Image already published, can’t change it? | Mode C. Reference by digest or tag. |
| Multiple functions sharing native deps? | One package, multiple functions, mode B or C. |
| Multiple unrelated functions? | One package per function — versioning is independent. |
Workflow-side typing works identically across all three modes — the worker just routes the inputs through a different staging flow.