Premium masterclass
ColorLoop AI: Predictive Setup for Modern Offset
Rutherford’s own software: the new generation.
Course syllabus
- What "AI-guided makeready" actually means (and what it doesn’t)
- Training the model on your jobs: first 30, 90, 365 days
- Predictive ink-key positioning vs reactive correction
- ColorLoop’s data layer: connecting press, measurement, MIS
- From operator decision to autonomous correction: staged adoption
Course content
The full lesson, module by module
The video is the introduction. The complete written course is below, structured to match the syllabus. Read it in one sitting or come back module by module.
ColorLoop's AI is not a generative model and is not the same technology that powers ChatGPT. It is a supervised learning system trained on the historical measurement data of your pressroom. It learns the relationship between job inputs (substrate, ink set, coverage profile, ambient conditions) and the ink-key positions that achieved good color on previous similar jobs.
What it does: predict starting ink-key positions for a new job based on the closest historical match. The prediction is then refined by real-time closed-loop correction during makeready. The combined effect is fewer correction cycles before reaching target color.
What it does not do: replace the closed-loop layer beneath it. The AI is a starting-point predictor; the closed-loop system is the actual color controller. If you remove the closed-loop, the AI prediction alone is a guess. If you remove the AI, the closed-loop still works, just with a generic CIP3-based starting position instead of a learned one.
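The division of labor described above can be sketched as a minimal similarity lookup. This is an illustrative sketch only: the field names (`substrate`, `coverage`, `ink_keys`) and the matching rule are assumptions for the example, not ColorLoop's actual schema or algorithm.

```python
from dataclasses import dataclass

@dataclass
class HistoricalJob:
    substrate: str           # e.g. "coated-gloss-135gsm" (hypothetical label)
    coverage: float          # average ink coverage of the job, 0-1
    ink_keys: list[float]    # key openings that achieved good color

def predict_start(substrate: str, coverage: float,
                  history: list[HistoricalJob],
                  cip3_default: list[float]) -> list[float]:
    """Return starting ink-key positions: the closest historical match,
    or the generic CIP3 preset when no similar job exists (cold start)."""
    candidates = [j for j in history if j.substrate == substrate]
    if not candidates:
        return cip3_default  # no history for this substrate: fall back to CIP3
    best = min(candidates, key=lambda j: abs(j.coverage - coverage))
    return best.ink_keys
```

Note that the function only picks a starting point; in either branch, the closed-loop layer still does the actual correcting during makeready.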
The honest framing: AI-guided makeready typically saves an additional 15-30 % of time and waste on top of vanilla closed-loop. Vanilla closed-loop already saved 30-55 % over open-loop. The AI is a refinement, not a revolution.
For pressrooms not yet running closed-loop, the AI is the wrong place to start. Get closed-loop in production first, accumulate 6-12 months of measurement history, then enable the AI layer. The AI needs that history to be useful.
Day zero of ColorLoop AI is the cold start. The model has no history of your shop; it falls back to a generic CIP3 starting position and the closed-loop layer does the rest. Performance is roughly equivalent to standard closed-loop without AI.
After 30 days (typically 90-120 makereadies), the model has seen enough of your common substrates and ink combinations to make useful starting predictions on familiar work. Expected gain over generic CIP3: 10-15 % faster to target on jobs similar to recent history.
After 90 days (300+ makereadies), the model has covered most of your routine production. It has also learned the substrate-specific quirks: how this paper batch behaves versus the previous one, how this ink set drifts versus that one. Gain over generic CIP3: 20-25 %.
After 365 days (1000+ makereadies), the model has seen the seasonal effects (humidity changes, ink supplier batches over time, operator rotation) and edge cases. Gain over generic CIP3: 25-35 %. The model continues to improve beyond a year, but the curve flattens.
New press? New substrate? New ink supplier? The model needs additional training data for the new variable. Plan for 30-60 days of slightly degraded prediction after any major input change, then back to the previous learning rate.
Reactive correction is what every closed-loop system does: measure the sheet, compare to target, move the ink keys, measure the next sheet. The cycle is fast (5-10 seconds per iteration) and usually converges within 10-30 sheets. Those 10-30 sheets matter: they are the waste sheets between "press started" and "press on target".
Predictive positioning aims to eliminate part of that waste. If the AI predicts the right ink-key opening before the press starts, the first sheet is already close to target. The closed-loop layer then has less correction to apply, fewer iterations to run, and fewer waste sheets to produce.
The math is multiplicative. If the AI gets you 70 % of the way to target instead of 40 %, the closed-loop layer runs fewer correction cycles. Fewer cycles mean fewer sheets; fewer sheets mean lower waste cost per makeready.
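A toy model makes the multiplication concrete. Assume each correction cycle closes a fixed fraction of the remaining color error and burns a few sheets; all the specific numbers below are illustrative assumptions, not measured ColorLoop figures.

```python
def sheets_to_target(start_fraction: float, gain_per_cycle: float = 0.5,
                     tolerance: float = 0.05, sheets_per_cycle: int = 3) -> int:
    """Count waste sheets until the remaining color error falls within
    tolerance. Each correction cycle closes a fixed fraction of the
    remaining error (illustrative convergence model)."""
    error = 1.0 - start_fraction     # how far the first sheet is from target
    sheets = 0
    while error > tolerance:
        error *= (1.0 - gain_per_cycle)
        sheets += sheets_per_cycle
    return sheets

# Generic CIP3 start (40 % of the way) vs AI start (70 % of the way):
print(sheets_to_target(0.4), sheets_to_target(0.7))  # 12 vs 9 waste sheets
```

Under these assumptions the better starting point removes a full correction cycle, and the saving repeats on every makeready.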
Where predictive shines: jobs that are similar to recent history. Where it stalls: genuinely new work the model has not seen before. The system knows the difference; it reports confidence with each prediction, so the operator knows when to trust the predicted starting position versus when to fall back to a manual approach.
Practical operator experience: jobs that used to take 20 minutes of makeready now take 12-15. The press operators notice the saved time before they notice the saved paper, but both effects compound across a year.
AI is only as good as its training data. ColorLoop's data layer is the connective tissue that feeds the model: press telemetry from the OEM console, measurements from IntelliTrax2 or MeasureColor, job metadata from the MIS, and environmental data (temperature, humidity) from in-room sensors.
Press telemetry includes ink-key positions, fountain solution chemistry, blanket pressure, plate temperature, register data, and impressions per minute. Most modern presses expose this via standard protocols (JDF/JMF, OPC UA, or OEM-specific XML). ColorLoop reads it continuously during the run.
Measurement data is the ground truth. Every measurement event becomes a training example: given inputs X, the resulting color was Y. The model learns the mapping from inputs to outputs across thousands of examples.
MIS metadata adds the business context: customer, substrate, ink batch, deadline pressure. The model uses this for similarity matching (find me the historical jobs that look most like this one) and for tracking outcomes (which customers achieve consistent results, which substrates trigger more variance).
Environmental data closes the loop on physical effects. A 5 °C temperature swing between morning and afternoon shifts changes ink viscosity and substrate behavior; the model needs to know it.
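Joined together, the four feeds produce one training example per measurement event. The record below is a sketch of that join; every field name and value is an illustrative assumption, not ColorLoop's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingExample:
    """One measurement event joined across the four data-layer feeds."""
    # press telemetry (OEM console)
    ink_key_positions: tuple[float, ...]
    impressions_per_minute: float
    # MIS metadata (business context)
    substrate: str
    ink_batch: str
    # environmental sensors (in-room)
    temperature_c: float
    humidity_pct: float
    # measurement system: the ground truth, color error vs target
    delta_e: float

# A single (hypothetical) example: given these inputs, this was the result.
example = TrainingExample(
    ink_key_positions=(0.42, 0.55, 0.48),
    impressions_per_minute=12000,
    substrate="coated-gloss-135gsm",
    ink_batch="K-2024-07",
    temperature_c=22.5,
    humidity_pct=48.0,
    delta_e=1.8,
)
```

A missing feed simply leaves columns empty, which is why patchy telemetry slows the model down: fewer complete examples per day means a longer climb to useful prediction.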
The data layer is the single biggest determinant of ColorLoop's effectiveness. A pressroom with rich press telemetry and clean MIS data will see the AI deliver visible results within 30 days. A pressroom with patchy telemetry will take 90+ days to reach the same level of useful prediction.
Autonomy is not a switch you flip. It is a continuum, and the right place on the continuum depends on the job, the operator confidence, and the brand-owner risk profile. ColorLoop supports four stages, and most pressrooms cycle through all four during adoption.
Stage one: advisory. The system displays its prediction and recommended actions, but the operator executes every adjustment manually. Use stage one for the first 4-8 weeks of operation while operators learn what the system is telling them.
Stage two: assisted. The operator approves each recommendation with a single click before the system executes. This is the "one human in the loop" mode that satisfies most quality-management policies. Stay here for 2-4 months while building data and trust.
Stage three: supervised autonomous. The system executes corrections automatically; the operator monitors a live feed and intervenes only on alerts. This is the production mode for routine work after 6+ months of accumulated history.
Stage four: fully autonomous. The system runs the makeready end-to-end, including correction decisions, with the operator handling exceptions and physical interventions (paper jams, plate changes). Reach this stage cautiously and only on well-understood job categories; high-stakes brand-owner work often stays at stage three indefinitely.
Each stage transition is a deliberate decision, not a feature toggle. Document the criteria for moving to the next stage (e.g., 30 consecutive jobs in stage two with no escalations) and review them quarterly with operations and quality leadership.
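The staged model above can be written down as a simple promotion gate. The thresholds here are illustrative assumptions (the 30-job gate echoes the example criterion in the text; the others are invented for the sketch) and should be set with operations and quality leadership, not copied from this example.

```python
from enum import Enum

class Stage(Enum):
    ADVISORY = 1               # operator executes every adjustment
    ASSISTED = 2               # one-click approval per recommendation
    SUPERVISED_AUTONOMOUS = 3  # auto-execute, operator monitors alerts
    FULLY_AUTONOMOUS = 4       # end-to-end, operator handles exceptions

# Illustrative promotion gates: consecutive escalation-free jobs required
# before leaving each stage. There is deliberately no gate out of stage
# four, and high-stakes work may never leave stage three.
PROMOTION_GATES = {
    Stage.ADVISORY: 20,
    Stage.ASSISTED: 30,
    Stage.SUPERVISED_AUTONOMOUS: 50,
}

def ready_to_advance(stage: Stage, clean_streak: int) -> bool:
    """A transition is a deliberate, documented decision: advance only
    after an unbroken run of escalation-free jobs at the current stage."""
    gate = PROMOTION_GATES.get(stage)
    return gate is not None and clean_streak >= gate
```

Encoding the criteria this way also gives quality leadership something concrete to review each quarter: the gates themselves, not just the outcomes.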