
Handlers and entrypoints

Each function under `functions:` declares exactly one of two fields:

  • `handler:` (string, mode A only): a Python import path that resolves to a callable. MecaPy's runner imports the module and calls it.
  • `entrypoint:` (list of strings, modes B/C only): the argv that the worker runs inside your container after staging /workspace/in/.

Declaring both, declaring neither, or declaring `entrypoint: []` is rejected at parse time.
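A minimal sketch of that parse-time rule, assuming a plain dict per function spec; the field names come from the manifest, but the function and error messages are illustrative, not MecaPy's actual implementation:

```python
# Hypothetical validator for one entry under functions:.
# Enforces: exactly one of handler/entrypoint, and entrypoint non-empty.
def validate_function_spec(spec: dict) -> None:
    has_handler = "handler" in spec
    has_entrypoint = "entrypoint" in spec
    if has_handler == has_entrypoint:  # both present, or both absent
        raise ValueError("declare exactly one of 'handler' or 'entrypoint'")
    if has_entrypoint and not spec["entrypoint"]:
        raise ValueError("'entrypoint' must be a non-empty argv list")
```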

The handler is a single string of the form `module:function`. The module is imported relative to your repo root; the function is fetched by attribute lookup on that module. Only module-level functions are supported; `module:Class.method`, callable classes, and partial application aren't accepted by the parser.
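The import-then-lookup behaviour can be sketched with the standard library; this is an illustration of the mechanism, not the runner's own code:

```python
import importlib

def resolve_handler(handler: str):
    """Resolve a 'module:function' string to a module-level callable."""
    module_path, _, attr = handler.partition(":")
    module = importlib.import_module(module_path)  # e.g. pkg.subpkg.module
    fn = getattr(module, attr)                     # attribute lookup
    if not callable(fn):
        raise TypeError(f"{handler} does not resolve to a callable")
    return fn
```

For example, `resolve_handler("math:sqrt")` returns the stdlib `sqrt` function.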

```yaml
functions:
  size:
    handler: bolts:size
```

```python
# bolts.py
def size(diameter: float, load: float) -> dict:
    return {"stress": load / area(diameter)}
```

The runner calls size(**inputs) where inputs is the merge of in/data.json and the File inputs (each File arrives as a pathlib.Path).
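An illustrative reconstruction of that merge, assuming scalar fields come from data.json and each staged file becomes a `pathlib.Path` keyed by its stem (the keying convention is an assumption of this sketch):

```python
import json
from pathlib import Path

def load_inputs(workspace: Path) -> dict:
    """Merge in/data.json with one Path per staged file under in/files/."""
    inputs = json.loads((workspace / "in" / "data.json").read_text())
    for f in sorted((workspace / "in" / "files").iterdir()):
        inputs[f.stem] = f  # File inputs arrive as pathlib.Path objects
    return inputs
```

The handler call would then look like `size(**load_inputs(workspace))`.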

```yaml
handler: pkg.subpkg.module:my_function
```

The dotted prefix is the import path; the `:` separates it from the attribute name. The runner adds /workspace/ to sys.path, so anything you ship is importable from there.

If the underlying logic lives in a class, expose a module-level function that instantiates and dispatches:

```python
# bolts.py
class Vis:
    def __init__(self, designation: str): ...
    def compute_stress(self, force: float) -> float: ...

# This is what the manifest references:
def stress_for(designation: str, force: float) -> float:
    return Vis(designation).compute_stress(force)
```

```yaml
handler: bolts:stress_for
```

This keeps the function-vs-method distinction out of the platform — the manifest only ever sees module-level functions, and the class remains an internal implementation detail.

In mode A, the handler’s type-annotated parameters become input ports and the return type becomes output ports — without you writing an inputs: / outputs: section. The introspector reads:

  • Plain parameters → typed inputs (one port per parameter, name = parameter name).
  • pathlib.Path parameters → File inputs.
  • Single non-dict return type → one output called result.
  • dict[str, T] return → multiple typed outputs (one per dict key, inferred from a TypedDict if you provide one).
  • TypedDict return → multiple typed outputs (one per field, type from the field).

If the introspection fails (no annotations, ambiguous return type, etc.) the schemas fall back to a permissive {"type": "object"} and a warning is logged. Always prefer to add type hints — they drive the workflow editor’s typing.

You can override or supplement the auto-inferred ports by declaring inputs: / outputs: explicitly — see typed I/O.

In modes B/C the worker doesn’t import any user Python — it just launches the entrypoint inside your container with /workspace/ already staged.

```yaml
functions:
  static_support:
    entrypoint: ["python3", "/workspace/_runner/wrapper.py"]
```

The list is the argv. Element 0 is the program; the rest are CLI args. The list must be non-empty; the worker errors out at validation otherwise. The image’s own ENTRYPOINT and CMD are ignored — the worker invokes the argv directly via docker exec.

The entrypoint is responsible for the full runtime contract:

  1. Read /workspace/in/data.json and /workspace/in/files/*.
  2. Run your computation.
  3. Write /workspace/out/data.json (a JSON object).
  4. Write any declared File outputs to /workspace/out/files/<name>.<ext>.
  5. Optionally drop free-form artifacts in /workspace/out/artifacts/.
  6. Optionally append progress lines to /workspace/out/progress.jsonl.
  7. On failure, write /workspace/out/_error.json and exit non-zero.
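A minimal entrypoint following the steps above, as a sketch: the compute step is placeholder arithmetic and the `_error.json` payload shape is an assumption, not a MecaPy-defined schema (steps 4-6 are optional and omitted here):

```python
import json
import traceback
from pathlib import Path

def main(workspace: Path = Path("/workspace")) -> int:
    out = workspace / "out"
    try:
        # 1. Read the staged inputs.
        data = json.loads((workspace / "in" / "data.json").read_text())
        # 2. Run the computation (placeholder arithmetic).
        result = {"stress": data["load"] / data["area"]}
        # 3. Write the JSON outputs.
        (out / "data.json").write_text(json.dumps(result))
        return 0
    except Exception as exc:
        # 7. On failure, report the error and exit non-zero.
        (out / "_error.json").write_text(json.dumps(
            {"error": str(exc), "traceback": traceback.format_exc()}))
        return 1
```

The script would be wired up through the manifest's `entrypoint:` argv, with the process exit code set from `main()`'s return value.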

Several functions in the same package share one image (built or pulled once). They distinguish themselves by their entrypoint argv:

```yaml
runtime:
  kind: image
  image: ghcr.io/me/solver:1.4
functions:
  thermal:
    entrypoint: ["/opt/run.sh", "--mode=thermal"]
  mechanical:
    entrypoint: ["/opt/run.sh", "--mode=mechanical"]
  fatigue:
    entrypoint: ["/opt/run.sh", "--mode=fatigue"]
```

The image is pulled (mode C) or built (mode B) once and cached; each function reuses it with different argv at exec time.

`resources:` overrides on each function work in modes B/C just as in mode A, changing the cgroup limits and timeout for that function's runs:

```yaml
functions:
  thermal:
    entrypoint: ["/opt/run.sh", "--mode=thermal"]
    resources:
      tier: small
  mechanical:
    entrypoint: ["/opt/run.sh", "--mode=mechanical"]
    resources:
      tier: large  # mechanical needs more memory
```
| Question | Answer |
| --- | --- |
| Pure Python with PyPI deps? | Mode A. Skip the Dockerfile boilerplate. |
| Native solver / system package / GPU? | Mode B. Ship a Dockerfile. |
| Image already published, can't change it? | Mode C. Reference by digest or tag. |
| Multiple functions sharing native deps? | One package, multiple functions, mode B or C. |
| Multiple unrelated functions? | One package per function; versioning is independent. |

Workflow-side typing works identically across all three modes — the worker just routes the inputs through a different staging flow.