Pre-season build sessions are where forecast accuracy is either won or lost. Most planning teams lose it before they ever get to market.
Here is what that session typically looks like. You have 200 choices to build. You know half of them have real selling history behind them: a predecessor item, a carry-forward style, a silhouette that ran for three seasons and left behind a clean performance record. But when you open the plan, the choice is blank. You are starting from scratch.
So you do what planners do. You pull last year's ROS. You apply a growth assumption that feels reasonable. You grab a sales curve from a template. Then you move on to the next choice and do it again.
That translation process is where the error enters. Every step from historical behavior to planning input requires a judgment call. Judgment calls compound. By the time you are done building, the forecast is already drifting from the reality the data tells you.
The Cost of Manual Demand Forecasting
Most conversations about pre-season build efficiency focus on hours. That matters. A mid-size planning team building hundreds of choices per season can spend weeks of their pre-season capacity just establishing initial demand on choices that have plenty of data behind them.
The bigger cost is the accuracy you leave behind.
Standard rate-of-sale forecasting treats price as a static input. If your choice has a promotional calendar, a planned markdown, or a tiered pricing strategy, a flat ROS assumption will be wrong almost every week of the selling period. Not dramatically wrong. Just wrong enough to flow downstream into receipt plans, open-to-buy positions, and sell-through projections that are all slightly off from the start.
You often do not catch it until the season is underway. Sell-through is tracking below plan. You are sitting on units you did not expect to own. The markdown budget you planned to protect starts moving. And the cause traces back to a forecast that was built without price sensitivity in the first place.
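To see why a flat ROS assumption drifts during promotional and markdown weeks, here is a minimal sketch. It assumes a simple constant-elasticity price response; the elasticity value, prices, and promo calendar are all illustrative, not any specific product's model.

```python
# Illustrative only: why a flat rate-of-sale plan diverges from a
# price-aware plan in any week where the planned price moves.

BASE_ROS = 20.0     # units/week observed at full price last year
FULL_PRICE = 80.0
ELASTICITY = -1.8   # hypothetical price elasticity of demand

def price_aware_ros(planned_price: float) -> float:
    """Scale the baseline rate of sale by a constant-elasticity price response."""
    return BASE_ROS * (planned_price / FULL_PRICE) ** ELASTICITY

# Planned price by week: full price, a week-3 promo, and a week-8 markdown.
planned_prices = [80, 80, 64, 80, 80, 80, 80, 56]

flat_plan = [BASE_ROS for _ in planned_prices]
price_aware_plan = [price_aware_ros(p) for p in planned_prices]

for week, (flat, aware) in enumerate(zip(flat_plan, price_aware_plan), start=1):
    print(f"week {week}: flat {flat:.0f} vs price-aware {aware:.0f}")
```

In the full-price weeks the two plans agree; in the promo and markdown weeks the flat plan undershoots demand, which is exactly the error that flows downstream into receipts and open-to-buy.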
How to Build a More Accurate Demand Forecast
The fix starts with a simple reframe: generating a forecast and judging a forecast are two different jobs, and most planning teams are doing both manually when they should only be doing one.
Think about what that looks like in practice. A planner sits down with a carry-forward style that has 18 months of clean selling history, a known promotional calendar, and a planned markdown in week eight. They still build the demand from a blank ROS field. The history is in the system. The pricing plan exists. None of it feeds the forecast automatically. The planner does the translation by hand, introduces judgment error at every step, and moves on.
Historical selling behavior should generate the starting point. Planner judgment should evaluate and override it. Right now, most teams have that backwards.
A better process enforces a few disciplines:
Analog linkage should be explicit and structured
If a new choice has a meaningful predecessor, that relationship should be documented and usable. It should not live in someone's memory, recalled on a good day and skipped on a busy one.
Price and allowance inputs need to be planned before demand is locked
This is not an optional detail. Forecast models that incorporate planned discount and markdown will behave differently than models that ignore them. Teams that enforce pricing inputs early in the planning cycle will see structurally more accurate first passes. Teams that treat pricing as downstream will keep correcting forecasts that were wrong from the start.
Data maturity should determine the method
A choice with 18 months of channel-level selling history is a different forecasting problem than a choice with 60 days. Using the same template for both introduces noise where the data could be doing more work. Applying different methods based on how much you actually know about an item will improve accuracy without adding planner effort.
System-generated results should be a starting point, not a black box
Planners should be able to see what a model produced, compare it against their own expectations, and override it where the business context demands it. The goal is informed judgment, not automated acceptance.
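The disciplines above can be sketched as a small decision routine: data maturity picks the method, and the system's output is the default that an explicit, auditable planner override can replace. The thresholds, method names, and `Choice` structure here are hypothetical, not drawn from any particular planning system.

```python
# Hypothetical sketch of two of the disciplines above: method selection
# driven by data maturity, and planner override layered on a model output.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Choice:
    name: str
    weeks_of_history: int
    planner_override_ros: Optional[float] = None  # explicit, documented override

def select_method(weeks_of_history: int) -> str:
    """More history supports a richer model; thin history falls back to analogs."""
    if weeks_of_history >= 52:
        return "history-driven model with planned price inputs"
    if weeks_of_history >= 8:
        return "blend of own history and linked analog"
    return "analog-only starting point"

def starting_ros(choice: Choice, model_ros: float) -> float:
    """The system output is the default; planner judgment overrides it where needed."""
    if choice.planner_override_ros is not None:
        return choice.planner_override_ros
    return model_ros

carry_forward = Choice("carry-forward style", weeks_of_history=78)
new_style = Choice("new silhouette", weeks_of_history=0)

print(select_method(carry_forward.weeks_of_history))
print(select_method(new_style.weeks_of_history))
print(starting_ros(carry_forward, model_ros=20.0))
```

The point of the `planner_override_ros` field is that the override lives in the data, not in someone's memory: the model's number and the planner's number are both visible, which is what "informed judgment, not automated acceptance" looks like in practice.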
How AI Turns Selling History Into a Demand Starting Point
AI reads the selling history, applies the planned price and markdown inputs, and hands the planner a demand curve that is already grounded in how that item actually behaved. The planner's job shifts from building the forecast to stress-testing it.
A bad first forecast becomes an inventory problem, and inventory problems cost margin. The same is true anywhere seasonal carry-overs dominate the assortment and price sensitivity around promotional events is high. The history is there. The data exists. If your first-pass demand still starts with a blank field and a judgment call, you are paying for the gap between what your data knows and what your process uses.
Toolio's AI Forecasting closes that gap. For eligible choices with more than a year of channel-level selling history, it generates a demand starting point directly from historical behavior, with a model that accounts for planned price, discount, and markdown inputs, so the first number a planner sees is already doing more work than a template ever could.




