Putting the labor market back inside the workforce plan

This whitepaper describes methodology and patterns from PeopleAnalytics.AI's engagement work on strategic workforce planning. Engagement details are anonymised by design; no specific client outcomes are claimed. Published figures (FRED, BLS, SHRM, industry reports) are cited where load-bearing. Numbers drawn from the demo environment are labelled as demo measurements against synthetic data.


Summary

Workforce plans at most organisations are built from two inputs: internal headcount and a business-plan growth target. That gives the planner half the picture. The other half — what the labour market is actually doing — usually lives in a deck nobody updates. This whitepaper describes the methodology for a workforce-forecasting system that combines internal headcount and attrition with live feeds from FRED and BLS, so external labour-market context is inside the model rather than adjacent to it.


The problem

The classic workforce-planning spreadsheet takes current headcount, applies department-level growth assumptions from the business plan, subtracts assumed attrition at the prior year's observed rate, and produces a hiring-demand curve. Finance tends to love it because it reconciles cleanly with the operating plan. Talent Acquisition tends to resent it because it has no connection to whether the people the plan demands actually exist in the labour market.

The failure mode that recurs in engagement work: the business plan calls for significant growth in a specific function. The spreadsheet produces a hiring target. TA is asked to deliver. They can't — the open-role-to-qualified-applicant ratio in the relevant market has deteriorated, and nobody caught it in the planning process. By the time the gap is visible in the hiring funnel, the organisation is months behind and a product or business launch is slipping.

The bigger problem is that most planners know this happens, know the corrective data exists (FRED, BLS, industry reports publish it for free), and don't have a way to bring it into the forecast on a meaningful cadence. So the plan goes to the board with internal data only, TA does what it can, and reality does what reality does.

Why this is hard

Signal multiplicity. "Include market data" is not a footnote. Adding a single BLS unemployment number to a slide is not a forecast. The labour market has multiple signals that matter for workforce planning — sector quit rate, job-opening rate, hire rate, layoff rate, unemployment, GDP growth, yield-curve spread — and they point in different directions at different points in a cycle. A single "labour market is tight" badge doesn't let a planner do anything actionable.

Update cadence. Most workforce plans refresh quarterly; the labour market moves faster than that. A plan that was right in Q2 can be wrong by Q3. The external data has to live in the model continuously, not be re-pulled when someone remembers.

Black-box temptation. Prophet and ARIMA are both reasonable tools; a trained neural net is more accurate in the abstract. But the workforce planner defending the forecast to the CFO cannot use a model she can't explain. Interpretability beats raw accuracy when the consumer has the power to reject the output.

The approach

The system is implemented in src/app/demos/workforce-forecast/ with five tabs: Market Context, Supply Forecast, Demand Forecast, Gap Analysis, and Scenarios. A setup guide lives at src/app/demos/workforce-forecast/SETUP.md.

Data inputs come from four sources.

  • Internal headcount and attrition are scanned from DynamoDB (peopleanalytics-employees), with department-level breakdowns.
  • FRED pulls eight series tuned for financial services: JTU5200QUR (sector quit rate), JTU5200JOL (sector job openings), JTU5200HIR (sector hires), JTU5200LDR (sector layoffs), JTSQUR (all-nonfarm quits for context), UNRATE (unemployment), A191RL1Q225SBEA (GDP growth), and T10Y2Y (10Y–2Y yield spread as a leading indicator). FRED series IDs are publicly documented by the Federal Reserve Bank of St. Louis; this is a question of which series to pull for which industry, not of proprietary data access.
  • BLS pulls employment for the relevant NAICS code (55 for management, as a demo default).
  • BigQuery holds industry benchmark tables for cross-industry comparison.
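The sector-specific series selection can be captured as a small typed config, so changing industries means swapping a list rather than rewriting the pipeline. The sketch below uses the financial-services series IDs listed above; the type and helper names are illustrative, not the demo's actual code.

```typescript
// Illustrative sector series map. Series IDs are the financial-services
// set named in the text; "role" marks how each feeds the forecast.
type FredSeries = { id: string; label: string; role: "sector" | "macro" | "leading" };

const FINANCIAL_SERVICES_SERIES: FredSeries[] = [
  { id: "JTU5200QUR", label: "Sector quit rate", role: "sector" },
  { id: "JTU5200JOL", label: "Sector job openings", role: "sector" },
  { id: "JTU5200HIR", label: "Sector hires", role: "sector" },
  { id: "JTU5200LDR", label: "Sector layoffs", role: "sector" },
  { id: "JTSQUR", label: "All-nonfarm quits (context)", role: "sector" },
  { id: "UNRATE", label: "Unemployment rate", role: "macro" },
  { id: "A191RL1Q225SBEA", label: "Real GDP growth", role: "macro" },
  { id: "T10Y2Y", label: "10Y-2Y yield spread", role: "leading" },
];

// Swapping industries means swapping the series list, not the pipeline.
function seriesIds(series: FredSeries[]): string[] {
  return series.map((s) => s.id);
}
```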

FRED and BLS are cached server-side for 24 hours via /api/workforce/fred to control rate limits and cost. When an upstream fetch fails, the route degrades gracefully to static fallback data so the demo never fails to load — a design choice that matters when the tool is being shown live.

The forecasting engine (src/app/demos/workforce-forecast/lib/forecastEngine.ts) runs a Prophet-style trend-plus-seasonality projection with configurable growth assumptions. An optional Cloud Run service (forecast-api/main.py) runs the real Prophet library for clients who want it; the demo falls back to the JavaScript engine when Cloud Run is unavailable. Prophet's components (trend, seasonality, holidays) are inspectable, and a planner can argue with each piece independently. That's worth more than marginal accuracy.
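A trend-plus-seasonality projection of this kind reduces to a least-squares trend plus per-period average residuals. The sketch below illustrates the idea and its inspectability; it is not the forecastEngine.ts source, and the signature is an assumption.

```typescript
// Minimal trend-plus-seasonality projection: fit a linear trend by
// least squares, average the detrended residuals by season position,
// then project both components forward. Each piece is separately
// inspectable, which is what makes the forecast arguable.
function forecast(history: number[], periods: number, season = 12): number[] {
  const n = history.length;
  // Least-squares linear trend over t = 0..n-1.
  const xBar = (n - 1) / 2;
  const yBar = history.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - xBar) * (history[i] - yBar);
    den += (i - xBar) ** 2;
  }
  const slope = den === 0 ? 0 : num / den;
  const intercept = yBar - slope * xBar;
  // Average detrended residual at each season position.
  const seasonal = new Array(season).fill(0);
  const counts = new Array(season).fill(0);
  for (let i = 0; i < n; i++) {
    seasonal[i % season] += history[i] - (intercept + slope * i);
    counts[i % season]++;
  }
  for (let s = 0; s < season; s++) seasonal[s] = counts[s] ? seasonal[s] / counts[s] : 0;
  // Projection = trend component + seasonal component.
  return Array.from({ length: periods }, (_, k) => {
    const t = n + k;
    return intercept + slope * t + seasonal[t % season];
  });
}
```

A planner can challenge the slope ("headcount isn't growing that fast any more") or the seasonal term ("Q4 hiring freezes ended last year") independently, which is exactly the conversation a black-box model forecloses.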

The scenario engine is the piece that gets used in planning sessions. It's not a slider that changes a chart — it's a genuine recomputation of supply, demand, and gap under adjustable assumptions. "What if growth is 12% instead of 8%?" "What if attrition returns to pre-pandemic levels?" "What if we add a regulatory-driven headcount increment?" Each scenario re-runs the full forecast; each output shows the gap in heads.
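The recomputation idea can be made concrete: a scenario is a set of assumption overrides, and each run re-derives supply, demand, and gap from those assumptions rather than scaling a chart. A deliberately simplified one-period sketch; field names and formulas are illustrative, and the real engine projects over time rather than a single year.

```typescript
// A scenario is just a partial override of the baseline assumptions.
// Every run recomputes the full supply/demand/gap triple from scratch.
interface Assumptions {
  headcount: number;     // current heads
  attritionRate: number; // annual, e.g. 0.12
  growthRate: number;    // annual demand growth, e.g. 0.08
  extraHeads: number;    // e.g. a regulatory-driven increment
}

interface GapResult { supply: number; demand: number; gapHeads: number }

function runScenario(base: Assumptions, overrides: Partial<Assumptions>): GapResult {
  const a = { ...base, ...overrides };
  const supply = Math.round(a.headcount * (1 - a.attritionRate));
  const demand = Math.round(a.headcount * (1 + a.growthRate)) + a.extraHeads;
  return { supply, demand, gapHeads: demand - supply };
}

const base: Assumptions = { headcount: 1000, attritionRate: 0.12, growthRate: 0.08, extraHeads: 0 };
// "What if growth is 12% instead of 8%?"
const whatIf = runScenario(base, { growthRate: 0.12 });
```

Because every scenario flows through the same function, "what if attrition returns to pre-pandemic levels?" is just another `overrides` object, and its output is directly comparable to the baseline in heads.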

Dollar figures cite their assumptions inline — revenue-per-employee, vacancy multiplier, cost-per-hire, time-to-fill, training ramp — as editable inputs. Defaults are set to cited industry benchmarks (SHRM, BLS, ADP); a planner who disagrees can override any of them and watch the model update. That's what makes the numbers defensible.
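The editable-assumptions pattern looks roughly like this: defaults stand in for the cited benchmarks, and any override flows through the whole calculation. The default values and the cost formula below are illustrative placeholders, not the demo's cited figures.

```typescript
// Every dollar figure is a function of named, overridable assumptions.
// The DEFAULTS values here are placeholders, not cited benchmarks.
interface CostAssumptions {
  revenuePerEmployee: number; // annual, $
  vacancyMultiplier: number;  // share of output lost per vacant seat
  costPerHire: number;        // $
  timeToFillMonths: number;   // months a seat sits open
  rampMonths: number;         // months to full productivity after start
}

const DEFAULTS: CostAssumptions = {
  revenuePerEmployee: 250_000,
  vacancyMultiplier: 0.5,
  costPerHire: 4_700,
  timeToFillMonths: 2,
  rampMonths: 3,
};

function gapCost(gapHeads: number, overrides: Partial<CostAssumptions> = {}): number {
  const a = { ...DEFAULTS, ...overrides };
  const monthlyRevenue = a.revenuePerEmployee / 12;
  const lostMonths = a.timeToFillMonths + a.rampMonths;
  const lostOutput = gapHeads * monthlyRevenue * lostMonths * a.vacancyMultiplier;
  const hiringCost = gapHeads * a.costPerHire;
  return lostOutput + hiringCost;
}
```

A planner who thinks the vacancy multiplier is too aggressive passes `{ vacancyMultiplier: 0.3 }` and sees the dollar figure move; the argument is about a named input, not a mystery total.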

What the system produces

  • A supply-side forecast built from internal headcount and attrition, with explicit department-level breakdowns.
  • A demand-side forecast built from business-plan growth assumptions, adjustable per department.
  • A gap analysis showing where supply falls short of demand, in heads and in configurable dollar terms.
  • A scenario engine that recomputes all three under user-specified changes to any driver.
  • Live labour-market context from FRED and BLS, cached server-side and rendered alongside the internal forecast so the external picture is visible, not footnoted.

What the system does not produce:

  • A hiring decision. The forecast is upstream of hiring authorisation; the organisation decides what to act on.
  • A certain number. The forecast is a projection with assumptions; all assumptions are user-editable to reflect the planner's judgment.

Patterns from engagement work

Start with the planner's current spreadsheet and reproduce its outputs first. Trust has to be earned. If the new system's forecast diverges from the existing spreadsheet on day one — even by a defensibly correct amount — the planner's instinct is to trust the spreadsheet. Reproduce, then diverge with evidence.

Sector-specific FRED series selection is the judgment call. FRED publishes hundreds of labour-market series; picking the eight that matter for a specific industry is work that doesn't generalise. Financial services, healthcare, manufacturing, retail, and tech each need a different set. Allow time for this in scoping.

Internal attrition usually deserves more weight than sector attrition. Our default weighting leans toward internal data with sector data as a check, rather than letting a sector spike pull the whole forecast. Sector signals matter for inflection points; they shouldn't drive the baseline. That's a tuning decision a planner should make explicitly, not a default hardcoded by the vendor.
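One way to express that weighting as a sketch in TypeScript. The 80/20 split and the two-point clamp are illustrative tuning choices, not the system's defaults, and the point of surfacing them as parameters is precisely that a planner should set them explicitly.

```typescript
// Blend internal and sector attrition, leaning toward internal data,
// and clamp sector influence so a sector spike cannot drag the
// baseline more than two points from observed internal attrition.
function blendedAttrition(
  internalRate: number,
  sectorRate: number,
  internalWeight = 0.8, // illustrative: lean toward internal data
): number {
  const blended = internalWeight * internalRate + (1 - internalWeight) * sectorRate;
  const cap = 0.02; // illustrative clamp on sector influence
  return Math.min(Math.max(blended, internalRate - cap), internalRate + cap);
}
```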

The scenario engine is what sells the system. Build it early. Stakeholders engage with "what if" far more readily than with baseline forecasts, and the scenario engine is what they'll actually use in a planning meeting.

Where this applies

This pattern works for industries with well-published FRED and BLS data — financial services, healthcare, manufacturing, retail, professional services, tech — and for organisations with at least several hundred employees, where department-level breakdowns give enough resolution to matter. It works best where labour-market conditions change faster than planning cycles.

It does not work where industry data is sparse, where headcount is too small for department-level forecasting to be meaningful, or where leadership will override the forecast regardless of what it says. A forecast is only as valuable as the willingness to act on it.