
4.4 Programming & Tuning

Golden samples, library discipline, and fast iteration with Pareto feedback—so inspection is sharp, not shouty.

Programming and tuning inspection systems is about turning a known-good “Golden Board” into a consistent, scalable decision-making recipe. That means starting with verified examples—including borderline passes and real defect images—then building lean, reusable library items for each package so the same rules work across products. Clear risk classes define which defects get tight limits and which get looser monitoring, preventing unnecessary slowdowns while guarding against critical escapes. Fast iteration comes from Pareto analysis: fix the few sources that cause most false calls, verify with a small recheck, and only then lock in program revisions. Crucially, persistent defect patterns often signal process issues in printing or reflow—not problems with the inspection program—so tuning should complement, not replace, process control. With disciplined libraries, documented changes, and a regular review loop, AOI and AXI stay sharp, predictable, and low-noise.

4.4.1 What “programming & tuning” really means

You’re turning a Golden Board into a repeatable judgment. That’s:

  1. Good lighting/slices.
  2. Lean, reusable library items.
  3. Sane limits per risk class.
  4. A feedback loop that trims false calls without letting escapes slip through.




4.4.2 Build the right golden set (don’t teach from randoms)

  • Golden board (do not ship): known-good after AOI/AXI/ICT/FCT.
  • Near-limit board: same build, but with “barely OK” features (smallest fillets, highest allowed voids). Teaches the tool what still passes.
  • Bad examples pack: cropped images of real defects (bridges, tombstones, HIP, void clusters) labeled by type.
  • Store top/bottom AOI lighting profiles, AXI slice heights/ROIs, and a one-pager of acceptance limits with each product rev.
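A minimal sketch of how the golden set could be tracked as one manifest per product rev, so lighting profiles, slice heights, and the limits sheet travel together. The `GoldenSetManifest` structure and every field name here are illustrative assumptions, not any vendor's format:

```python
from dataclasses import dataclass, field

# Illustrative manifest for one product rev's golden set. Field names are
# hypothetical; the point is that images, lighting, slices, and limits are
# stored together and versioned together.
@dataclass
class GoldenSetManifest:
    product: str
    rev: str
    golden_board_id: str          # known-good, never shipped
    near_limit_board_id: str      # "barely OK": smallest fillets, highest allowed voids
    defect_image_dir: str         # cropped, labeled real-defect images
    aoi_lighting_profiles: dict = field(default_factory=dict)  # side -> profile name
    axi_slice_heights_mm: list = field(default_factory=list)
    limits_sheet: str = ""        # path to the one-page acceptance limits

manifest = GoldenSetManifest(
    product="CTRL-100", rev="B2",
    golden_board_id="GB-0417",
    near_limit_board_id="NL-0418",
    defect_image_dir="defects/CTRL-100_B2/",
    aoi_lighting_profiles={"top": "top_v3", "bottom": "bot_v2"},
    axi_slice_heights_mm=[0.05, 0.15, 0.30],
    limits_sheet="limits/CTRL-100_B2.pdf",
)
print(manifest.product, manifest.rev, manifest.aoi_lighting_profiles)
```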




4.4.3 Library discipline (the stuff that scales)

  • One package = one library item, reused across products; keep per-finish lighting variants if needed (e.g., ENIG vs OSP).
  • For AOI items: define presence window, pin-1/polarity region, and pad ROIs (solder checks). Avoid using logos/lot text as truth.
  • For AXI items: lock ball count/pitch, slice plane(s), and void % rules (per ball vs total).
  • Naming: <Pkg>_<Pitch/Size>_<Finish if special>_v#. No “temp_final_new2”.
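The naming rule is cheap to enforce automatically. A small sketch, assuming the pattern above and a made-up finish list (ENIG/OSP/ImAg/HASL); adjust the tokens to your real package and finish sets:

```python
import re

# Rough check for the rule above: <Pkg>_<Pitch/Size>[_<Finish>]_v<#>.
# The allowed finish tokens are assumptions; edit to match your library.
NAME_RE = re.compile(r"^[A-Za-z0-9]+_[A-Za-z0-9.\-]+(?:_(?:ENIG|OSP|ImAg|HASL))?_v\d+$")

def is_valid_library_name(name: str) -> bool:
    return bool(NAME_RE.match(name))

for candidate in ["QFN32_0.5mm_ENIG_v3", "BGA256_1.0mm_v1", "temp_final_new2"]:
    print(candidate, "->", "OK" if is_valid_library_name(candidate) else "REJECT")
```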




4.4.4 Risk classes & limits (so tuning has guardrails)

  • Class A (must never escape): polarity, missing parts, BGA bridges, critical net shorts → tight limits, review if unsure.
  • Class B (quality/cosmetics): small wetting defects, fillet cosmetics → wider band, bias toward pass with trend tracking.
  • Class C (informational): smudges, silkscreen nicks → chart only; never block WIP.

Put the class on every rule. When in doubt, promote to a stricter class for NPI, then relax after data.
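One way "put the class on every rule" might look as data, with the NPI promotion applied in one place. The rule names and the one-class-stricter policy are illustrative, not a prescribed scheme:

```python
from enum import Enum

class RiskClass(Enum):
    A = "must never escape"   # tight limits, review if unsure
    B = "quality/cosmetic"    # wider band, bias to pass, trend-tracked
    C = "informational"       # chart only, never blocks WIP

# Hypothetical rule table: every rule carries its class explicitly.
RULES = {
    "polarity_check":   RiskClass.A,
    "bga_bridge":       RiskClass.A,
    "fillet_cosmetic":  RiskClass.B,
    "silkscreen_nick":  RiskClass.C,
}

def effective_class(rule: str, npi: bool) -> RiskClass:
    """Sketch: promote one class stricter during NPI, relax after data."""
    cls = RULES[rule]
    if npi and cls is RiskClass.C:
        return RiskClass.B
    if npi and cls is RiskClass.B:
        return RiskClass.A
    return cls

print(effective_class("fillet_cosmetic", npi=True))   # RiskClass.A during NPI
print(effective_class("fillet_cosmetic", npi=False))  # back to RiskClass.B
```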




4.4.5 The fast iteration loop (Pareto → change → recheck)

Run 50–200 boards, then:

  1. Pareto false calls by refdes & reason, separate from true defects.
  2. Fix top 3 false-call sources with the smallest change (lighting gain, ROI size, one threshold notch). Avoid global relax.
  3. Re-run a mini-lot (10 boards) → confirm false calls drop and escapes stay flat.
  4. Version-bump the program; add a 1-line “why” in the changelog.

Repeat weekly until false calls settle under your target (e.g., ≤0.5–1.0/board), then switch to monthly trims.
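The Pareto step itself is a few lines once calls are exported. A sketch assuming records of the form (refdes, reason, is_true_defect); real AOI/AXI exports differ, but the key move is counting false calls separately from true defects:

```python
from collections import Counter

# Assumed record shape: (refdes, reason, is_true_defect). Pareto the
# *false calls only*, kept separate from true defects.
calls = [
    ("R12", "presence", False), ("R12", "presence", False),
    ("U3", "bridge", True),     ("C7", "fillet", False),
    ("R12", "presence", False), ("C7", "fillet", False),
]

false_calls = Counter((r, why) for r, why, true_defect in calls if not true_defect)
total_boards = 50  # size of the sampled run

print("Top 3 false-call sources (fix these first, smallest change wins):")
for (refdes, reason), n in false_calls.most_common(3):
    print(f"  {refdes:>4} {reason:<10} {n} calls")

rate = sum(false_calls.values()) / total_boards
print(f"False-call rate: {rate:.2f}/board (target, e.g., <= 0.5-1.0)")
```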




4.4.6 When to touch process, not code

Use inspection as a thermometer, not a hammer:

  • Bridging calls cluster → check SPI area/cleaning and separation (7.5) before widening AOI thresholds.
  • BGA voids high across a lot → revisit QFN/BGA windowing (7.4) or soak/TAL (9.2), not AXI limits.
  • Tombstones → re-balance chip apertures (7.4) and confirm ramp rate (9.1).

If the Pareto smells like a process drift, fix the line and keep inspection tight.
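The routing above can even live as a lookup table next to the Pareto script, so the "process, not program" answer is written down rather than re-argued. A toy sketch mirroring the bullets; the mapping is illustrative:

```python
# Toy routing table mirroring the bullets above: clustered defect patterns
# point at a process owner before anyone loosens an inspection limit.
PROCESS_SIGNATURES = {
    "bridge":    "Check SPI area/cleaning and separation (7.5) before widening AOI thresholds",
    "bga_void":  "Revisit QFN/BGA windowing (7.4) or soak/TAL (9.2), not AXI limits",
    "tombstone": "Re-balance chip apertures (7.4) and confirm ramp rate (9.1)",
}

def route(defect_type: str) -> str:
    """Return the upstream action for a clustered defect type, if known."""
    return PROCESS_SIGNATURES.get(defect_type, "No known process signature; review the program")

print(route("bridge"))
print(route("polarity"))
```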




4.4.7 ML/auto-learn without regret

  • Train on diverse lots (mask colors, finishes, vendors).
  • Keep a holdout set (never trained on) for spot checks every rev.
  • Save example images with pass/fail scores; if a model decision isn’t explainable, don’t use it on Class A items.
  • Re-train only on a schedule (e.g., monthly), not mid-shift.
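The holdout spot check reduces to two counts per model rev. A minimal sketch; `model_predict` is a hypothetical stand-in for whatever verdict interface your vendor exposes, and the canned data is illustrative:

```python
# Holdout spot check: compare model verdicts to human labels on a set the
# model was never trained on. The metrics are the part that matters.
holdout = [  # (image_id, human_label)  True = real defect
    ("img001", True), ("img002", False), ("img003", True), ("img004", False),
]

def model_predict(image_id: str) -> bool:
    """Hypothetical stand-in: True = model flags a defect."""
    return image_id in {"img001", "img002"}  # canned verdicts for the sketch

escapes = sum(1 for i, label in holdout if label and not model_predict(i))
false_calls = sum(1 for i, label in holdout if not label and model_predict(i))

print(f"Escapes on holdout: {escapes} (any Class A escape blocks the rev)")
print(f"False calls on holdout: {false_calls}")
```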




4.4.8 Change control (tie to ECNs so docs match the floor)

  • AOI/AXI program rev increments with any ECN that changes land patterns, silks, or components.
  • Bundle: program file, lighting/slice settings, limits sheet, golden images.
  • Store under the product’s Golden Recipe so stations load by ID, not memory.
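"Load by ID, not memory" can be made concrete with a content hash over the bundle manifest, so a station can verify it has exactly what the ECN approved. A sketch under assumed file names and record shape; none of this is a vendor format:

```python
import hashlib, json

# Hypothetical Golden Recipe bundle record: everything that must travel
# together when the program rev bumps with an ECN. The hash gives each
# bundle a verifiable ID.
bundle = {
    "product": "CTRL-100", "rev": "B3", "ecn": "ECN-2141",
    "files": ["program_v12.aoi", "lighting_top_v3.cfg",
              "axi_slices_v4.cfg", "limits_B3.pdf", "golden_images/"],
}
bundle_id = hashlib.sha256(json.dumps(bundle, sort_keys=True).encode()).hexdigest()[:12]
print(f"Load by ID: {bundle['product']}_{bundle['rev']}@{bundle_id}")
```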




4.4.9 Roles & cadence (who does what)

  • AOI/AXI programmer (R): builds/edits libraries, runs Pareto loops.
  • QE (A): sets limits, classes, and approves changes.
  • PE/ME (C): decides when issues are process, not program, and triggers stencil/profile fixes.
  • Operators (I): log nuisance calls accurately; don’t “teach around” defects.

Daily: clear review queue.
Weekly: Pareto + top-3 trims.
Monthly: program health (false calls/escapes trend) + ML retrain (if used).




4.4.10 Pocket checklists

Before first lot

  • Golden/near-limit/defect packs ready
  • Libraries reused (no one-offs); risk classes set
  • Limits sheet posted; lighting/slices saved in recipe

After 50–200 boards

  • False-call Pareto built (by refdes/reason)
  • Smallest fixes applied; mini-lot verified
  • Program rev’d with a clear “why”

Ongoing

  • Escapes cross-checked at ICT/FCT (with image backlinks)
  • Process drift flagged upstream (SPI/oven) before loosening rules
  • ECN ties intact; Golden Recipe bundle current




Bottom line: teach from good boards, keep libraries lean, tune with small changes driven by a Pareto, and route real trends back to printing/profile instead of widening limits. Do that, and inspection becomes a quiet, reliable gate—not a bottleneck, and not a debate.