---
name: jit-overview
description: Orientation to facet-format JIT deserialization (tiering, fallbacks, key types/entry points) and where to look when changing or debugging JIT code
---
# JIT deserialization overview (facet-format)
Facet’s JIT lives in `facet-format` and is used by format crates like `facet-json`.
## When to read this

- You’re touching anything under `facet-format/src/jit/` or enabling the `jit` feature.
- You’re investigating performance changes in deserialization (especially “tier 2”/JIT benchmarks).
- You’re debugging a JIT crash (SIGSEGV/UB-ish symptoms).
## Mental model

`facet-format` defines a format-neutral deserialization pipeline (sketched in code below):

- A `FormatParser` (implemented by each format crate) produces a stream of `ParseEvent`s.
- A shape-driven layer consumes those events and writes to an output value (often via `facet-reflect`).
The JIT accelerates this by compiling deserialization code specialized for:

- the target type (`T` / its `Shape`), and sometimes
- the format parser (`P`).
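To make that division of labor concrete, here is a deliberately simplified sketch. `MiniEvent`, `MiniParser`, and `drive_one_u64` are illustrative stand-ins, not the real `ParseEvent`/`FormatParser` API:

```rust
// Illustrative stand-ins only; the real ParseEvent / FormatParser live in
// facet-format and cover far more cases (maps, sequences, nulls, errors, ...).
enum MiniEvent<'input> {
    U64(u64),
    Str(&'input str),
}

trait MiniParser<'input> {
    /// Produce the next event from the input, or None at end of input.
    fn next_event(&mut self) -> Option<MiniEvent<'input>>;
}

/// The shape-driven side: consume events and turn them into output values.
/// Tier 1 compiles a specialization of this loop for one concrete target
/// type; tier 2 additionally specializes for one concrete parser and skips
/// the event stream altogether.
fn drive_one_u64<'i, P: MiniParser<'i>>(parser: &mut P) -> Option<u64> {
    match parser.next_event()? {
        MiniEvent::U64(n) => Some(n),
        MiniEvent::Str(_) => None,
    }
}
```

The only point of the sketch is the split: the format crate owns producing events, and the shape-driven layer owns consuming them and writing the output.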
## Two-tier architecture (high level)

### Tier 1 (shape JIT)

- Compiles code that consumes `ParseEvent`s and writes directly into the output’s memory at known offsets (illustrated below).
- Works with any format that implements `FormatParser` (JSON/YAML/TOML/…).
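To show what “writes directly into the output’s memory at known offsets” means, here is a hand-written analogue for one hypothetical struct. This is not the generated code: in the real JIT the offsets come from the type’s `Shape` and the stores are emitted as Cranelift IR rather than written in Rust.

```rust
use core::mem::{offset_of, MaybeUninit};

struct Point {
    x: u64,
    y: u64,
}

/// Hand-written analogue of a tier 1 specialization for `Point`: each parsed
/// value is stored at a field offset known ahead of time, with no per-field
/// lookup at runtime.
unsafe fn write_fields(out: *mut u8, x: u64, y: u64) {
    unsafe {
        out.add(offset_of!(Point, x)).cast::<u64>().write(x);
        out.add(offset_of!(Point, y)).cast::<u64>().write(y);
    }
}

fn main() {
    let mut slot = MaybeUninit::<Point>::uninit();
    // Safe only because write_fields initializes every field of Point.
    unsafe { write_fields(slot.as_mut_ptr().cast::<u8>(), 1, 2) };
    let p = unsafe { slot.assume_init() };
    assert_eq!((p.x, p.y), (1, 2));
}
```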
### Tier 2 (format JIT)

- For the “entire input slice is available” case, a format crate can provide a `FormatJitParser` + `JitFormat` implementation.
- Tier 2 emits Cranelift IR to parse bytes directly, bypassing the `ParseEvent` stream for maximum throughput.
## Fallbacks are part of the design

- Tier 2 may return “unsupported” for shapes/input it can’t handle, and must be side-effect-free in that case.
- Callers typically try tier 2, then tier 1, then reflection (sketched below).
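A rough sketch of that fallback chain, with hypothetical function names (`try_tier2`, `try_tier1`, `reflect_fallback`) standing in for the real entry points in `facet-format/src/jit/mod.rs`:

```rust
/// Hypothetical outcome type: the key property is that `Unsupported` means
/// tier 2 touched neither the parser cursor nor the output.
enum Tier2Outcome<T> {
    Done(T),
    Unsupported,
}

fn deserialize<T>(input: &[u8]) -> Result<T, String> {
    // Fastest path first; each tier falls through to the next on "unsupported".
    if let Tier2Outcome::Done(v) = try_tier2::<T>(input) {
        return Ok(v);
    }
    if let Some(v) = try_tier1::<T>(input) {
        return Ok(v);
    }
    reflect_fallback::<T>(input)
}

// Stubs standing in for the real tiered implementations.
fn try_tier2<T>(_input: &[u8]) -> Tier2Outcome<T> {
    Tier2Outcome::Unsupported
}
fn try_tier1<T>(_input: &[u8]) -> Option<T> {
    None
}
fn reflect_fallback<T>(_input: &[u8]) -> Result<T, String> {
    Err("reflection fallback elided in this sketch".into())
}
```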
## Entry points & where to look

- Main docs and contracts: `facet-format/src/jit/mod.rs`
- JIT-enabled parser trait: `facet-format/src/parser.rs` (`FormatJitParser`)
- JIT usage in a format crate: `facet-json/Cargo.toml`, feature `jit = ["facet-format/jit"]`
- Example: `facet-json/examples/profile_jit_vec` (requires `jit`)
- Windows crash debugging notes: `.claude/skills/windbg-jit.md`
- Memory debugging: `.claude/skills/debug-with-valgrind/SKILL.md` (uses nextest profiles)
## Debugging checklist (practical)

- Reproduce with a minimal type + input (see `.claude/skills/reproduce-reduce-regress/SKILL.md` and the sketch after this checklist).
- Run the failing test under:
  - valgrind: `cargo nextest run --profile valgrind …` (configured in `.config/nextest.toml`)
  - or Miri when applicable (`just miri`) for UB/provenance issues outside the JIT itself.
- If the crash is in JIT codegen/execution:
  - Prefer isolating the smallest shape that triggers tier selection and failure.
  - Look for tier selection diagnostics and caching behavior in `facet-format/src/jit/mod.rs`.
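A minimal reproduction is usually a tiny `Facet` type plus one test. The snippet below assumes `facet-json` with the `jit` feature enabled and a `facet_json::from_str`-style entry point; adjust to whichever format crate and entry point you are actually debugging, and shrink the struct until the failure disappears.

```rust
use facet::Facet;

// Smallest shape that still triggers the failing tier: remove fields until
// the crash goes away, then add the last one back.
#[derive(Facet, Debug, PartialEq)]
struct Repro {
    id: u64,
    name: String,
}

#[test]
fn jit_repro_minimal() {
    let input = r#"{"id": 1, "name": "x"}"#;
    let got: Repro = facet_json::from_str(input).expect("deserialization failed");
    assert_eq!(
        got,
        Repro {
            id: 1,
            name: "x".to_string()
        }
    );
}
```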
## Common pitfalls

- Assuming tier 2 supports “all shapes”: it intentionally supports a performance-focused subset.
- Forgetting that “unsupported” must not advance the parser cursor or partially initialize the output.
- Introducing new unsafe paths without tests that exercise drop/cleanup on error paths (see the test sketch below).
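For that last pitfall, a cheap guard is a test that fails mid-value on a type with heap-allocating fields, run under the valgrind/Miri setup from the checklist above so a skipped drop shows up as a leak (again assuming a `facet_json::from_str`-style entry point):

```rust
use facet::Facet;

#[derive(Facet, Debug)]
struct Owner {
    // Heap-allocating fields make a missed drop on the error path visible
    // to valgrind / Miri as a leak.
    first: String,
    second: Vec<u64>,
}

#[test]
fn error_path_cleans_up_partial_output() {
    // `first` parses successfully, then the input breaks: the already
    // initialized String must be dropped, not leaked.
    let broken = r#"{"first": "ok", "second": [1, 2, oops"#;
    let result: Result<Owner, _> = facet_json::from_str(broken);
    assert!(result.is_err());
}
```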
