Public cloud bills per second. A depreciating server bills once and never again. To allocate fairly across a hybrid estate you have to manufacture an OpEx-equivalent for every CapEx asset — kW-month, rack-unit-month, and amortized-asset-month — and feed it into the same chargeback table your cloud workloads already use. The mechanics are unromantic, but they hold up under audit and they make the dashboards stop lying.
- CapEx assets do not produce billing data. You synthesize it from depreciation schedules, environmental telemetry, and rack inventory.
- The right denominators are kW-month + rack-unit-month + amortized-asset-month. Server-hours are noise on owned hardware.
- Depreciation should mirror the policy finance already uses — never invent a curve for the chargeback model.
- Reconcile quarterly against the fixed-asset register and the colo invoice. Anything that does not reconcile gets a confidence flag, not a hidden assumption.
- The output is one row per workload per month with the same shape as your AWS, Azure, and GCP allocation rows.
Why a server-hour is the wrong unit on owned hardware
The first instinct on a hybrid estate is to mirror what the cloud does and bill by server-hour. That instinct is wrong, for one reason: utilization does not change the run-rate of an owned asset. A depreciating server costs the same whether it runs at 5% CPU or 95% CPU. A colocation cabinet draws roughly the same power whether half the U is empty or fully populated. Charging tenants by server-hour creates a model where idle workloads look free and saturated workloads look expensive, when in reality both consume the same fixed capacity.
The cloud unit economics work because the cloud is metered and elastic: stop the workload, stop the bill. Owned hardware is neither. If you want a chargeback model that survives the first time a finance partner asks "why did this number go up when nothing changed," you have to bill against the things that actually drive cost: capacity occupied, power drawn, and time elapsed against a depreciating asset.
The three primitives
Three units carry almost every owned-hardware allocation I have built:
- amortized-asset-month — the monthly straight-line depreciation of the physical asset. Purchase price ÷ useful life in months.
- kW-month — the asset's allocated share of metered facility power, normalized to a month. Data: PDU readings, branch-circuit telemetry, or rack-level monitoring.
- rack-unit-month — the U the asset occupies times the months it occupies them. Data: CMDB, DCIM, or a maintained rack inventory.
Together those three give you a per-asset, per-month run-rate. Multiply by the asset's share of a workload and you get a workload-level OpEx line item. That line item drops into the same allocation table your cloud workloads already use, with the same shape, the same denominators (workload, environment, owner, cost-center), and the same downstream dashboards.
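The combination above can be sketched in a few lines. This is a minimal illustration, not a reference implementation: the asset fields, rate constants, and dollar figures are assumptions chosen to match the worked example later in the piece, and in practice the rates come from your colo invoice and metered facility load.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """One owned asset for one month. All field names are illustrative."""
    purchase_price: float    # USD, from the fixed-asset register
    useful_life_months: int  # from the fixed-asset register
    kw_allocated: float      # metered or modeled share of facility power
    rack_units: int          # U occupied this month

# Illustrative unit rates -- in practice derived from the colo invoice
# and the metered facility load, recalibrated quarterly.
KW_MONTH_RATE = 118.50        # USD per kW-month
RACK_UNIT_MONTH_RATE = 36.00  # USD per rack-unit-month

def monthly_run_rate(asset: Asset) -> dict:
    """Combine the three primitives into a per-asset monthly run-rate."""
    amortized = asset.purchase_price / asset.useful_life_months
    power = asset.kw_allocated * KW_MONTH_RATE
    rack = asset.rack_units * RACK_UNIT_MONTH_RATE
    return {
        "amortized-asset-month": round(amortized, 2),
        "kW-month": round(power, 2),
        "rack-unit-month": round(rack, 2),
        "total": round(amortized + power + rack, 2),
    }

# A hypothetical 2U server: $13,440 purchase, 48-month life, 0.42 kW draw
server = Asset(13_440.00, 48, 0.42, 2)
print(monthly_run_rate(server))
```

Multiply the total by the asset's share of a workload and the result is the workload-level line item described above.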
Where the data actually comes from
Nobody has all of this in one system. Expect to wire three sources:
The fixed-asset register from finance gives you the depreciation policy — useful life, salvage value, method (almost always straight-line for IT). Anything not on the register either is not yours or is being expensed; either way it is not your line item to allocate.
The facility power feed gives kW. Best case: branch-circuit-level metering exporting to OpenTelemetry or directly to your time-series database. Acceptable case: rack-level PDU readings, polled hourly. Survivable case: a single facility-level reading allocated by rack count, with a confidence flag of "modeled" rather than "metered" so consumers know the rate is approximate.
The rack inventory — DCIM if you have one, the CMDB if you do not, a maintained spreadsheet if you have neither — gives U-occupied per asset per month. The spreadsheet is more common than people admit; it works as long as it reconciles quarterly.
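Wiring the three sources can look something like the sketch below. The source dictionaries stand in for the real exports (fixed-asset register, PDU telemetry, DCIM); the asset ID, figures, and field names are invented for illustration. The point is the fallback logic: a missing power reading degrades to the facility-level allocation with a "modeled" flag instead of a hidden assumption.

```python
# Stand-ins for the three real sources. All sample data is illustrative.
register = {  # asset_id -> (purchase_price, useful_life_months), from finance
    "srv-0142": (13_440.00, 48),
}
power_kw = {  # asset_id -> allocated kW; absence means no branch-circuit meter
    "srv-0142": 0.42,
}
rack_inventory = {  # asset_id -> U occupied, from DCIM/CMDB/spreadsheet
    "srv-0142": 2,
}

def asset_record(asset_id: str, facility_kw_per_asset: float) -> dict:
    # Not on the register -> not yours (or expensed); raise rather than guess.
    price, life = register[asset_id]
    metered = asset_id in power_kw
    return {
        "asset_id": asset_id,
        "amortization": round(price / life, 2),
        # Survivable case: facility-level reading allocated by asset count
        "kw": power_kw.get(asset_id, facility_kw_per_asset),
        "rack_units": rack_inventory[asset_id],
        "power_confidence": "metered" if metered else "modeled",
    }

print(asset_record("srv-0142", facility_kw_per_asset=0.5))
```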
How the chargeback table looks
The output, for one workload, one month, looks like this:
| Source | Quantity | Unit rate | Cost | Confidence |
|---|---|---|---|---|
| amortized-asset-month | 1.0 | $280.00 | $280.00 | metered |
| kW-month | 0.42 | $118.50 | $49.77 | metered |
| rack-unit-month | 2.0 | $36.00 | $72.00 | metered |
| cooling overhead (allocated) | 0.42 | $24.30 | $10.21 | modeled |
| Workload total | | | $411.98 | |
That row carries the same fields a cloud allocation row carries: workload, owner, cost-center, environment, month, cost. The only differences are the source label and the confidence flag. Your dashboard does not need to know whether the workload runs in AWS or in cabinet B14 — it just needs the row.
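Producing those rows is mechanical once the rates exist. The sketch below emits one allocation row per cost source in the shape described above; the workload name, cost center, and rates are assumptions matching the example table.

```python
# Sketch: one allocation row per cost source, same shape as a cloud row.
# The (source, quantity, unit_rate, confidence) tuples mirror the table above.

def chargeback_rows(workload: str, owner: str, cost_center: str,
                    environment: str, month: str) -> list[dict]:
    sources = [
        ("amortized-asset-month", 1.0, 280.00, "metered"),
        ("kW-month", 0.42, 118.50, "metered"),
        ("rack-unit-month", 2.0, 36.00, "metered"),
        ("cooling overhead (allocated)", 0.42, 24.30, "modeled"),
    ]
    return [
        {
            "workload": workload, "owner": owner, "cost_center": cost_center,
            "environment": environment, "month": month,
            "source": src, "quantity": qty, "unit_rate": rate,
            "cost": round(qty * rate, 2), "confidence": conf,
        }
        for src, qty, rate, conf in sources
    ]

rows = chargeback_rows("billing-api", "platform", "CC-1120", "prod", "2025-06")
total = round(sum(r["cost"] for r in rows), 2)
print(total)  # 411.98, matching the table's workload total
```

A cloud allocation row for the same workload differs only in the `source` label, which is what lets one dashboard serve both venues.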
What this stops being an argument about
Done well, this method ends three recurring arguments:
"The cloud is more expensive." Maybe. Maybe not. Once you have a per-workload OpEx line item on owned hardware, you can compare against the actual cloud-equivalent rate, not against a hand-wavy "it would be way more in AWS" claim. Most of the time the on-prem rate is lower for steady-state workloads and higher for spiky ones — which is exactly what the published research has shown for years.
"We can't allocate this fairly." The model is fair when the inputs reconcile. Power reconciles to the colo invoice. Depreciation reconciles to the fixed-asset register. Rack-units reconcile to the inventory. Anything that does not reconcile gets a confidence flag. Tenants stop arguing about the rate and start arguing about the inputs, which is a much shorter conversation.
"FinOps is just for cloud." Once owned hardware shows up in the same chargeback table with the same fields, the practice stops being venue-specific. The same governance conversations work for a cabinet in Equinix DC11 and an EC2 fleet in us-east-1.
Where this falls down
Two failure modes worth flagging.
First, shared services. A core network switch serves dozens of workloads and does not belong to any of them. Same with shared storage, shared management appliances, shared backup infrastructure. You will end up with a pool that gets allocated by some heuristic — port-month, gigabyte-month, or a flat per-tenant fee. Heuristics are fine as long as they are documented and consistent. The mistake is to hide the heuristic in a magic number.
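Documenting the heuristic can be as simple as making it a named function rather than a constant buried in a spreadsheet. A minimal sketch, assuming a pro-rata split by port-months on a shared switch — the tenants, port counts, and pool cost are invented:

```python
# Sketch: allocating a shared-service pool by a documented heuristic.
# Here the unit is port-months; gigabyte-months or a flat per-tenant
# fee slot into the same shape.

def allocate_pool(pool_cost: float, usage: dict[str, float]) -> dict[str, float]:
    """Split a shared monthly pool pro rata by the chosen unit.
    The heuristic lives here, in code, not in a magic number."""
    total = sum(usage.values())
    return {tenant: round(pool_cost * units / total, 2)
            for tenant, units in usage.items()}

# A hypothetical $1,200/month core switch, split by ports in use
shares = allocate_pool(1_200.00, {"billing-api": 8, "search": 16, "ml-train": 24})
print(shares)
```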
Second, pre-allocated reservations on the cloud side. Reserved Instances, Savings Plans, and Compute Engine Committed Use Discounts all distort the cloud allocation rate, which makes hybrid comparisons noisy. The clean answer is to allocate the reservation cost against actual reserved usage at the workload level — which is exactly what the major cloud cost tools already do. If you are still allocating commitments at the account level, fix that before you worry about hybrid parity.
The unit economics conversation finally works
Hybrid FinOps stops being a slide deck the moment a Tuesday-morning standup can answer "what does this workload cost?" with a single number that includes both venues. The method above is unromantic — three primitives, three data sources, quarterly reconciliation — but that is the point. Cost models that survive the first audit are the ones that match the data finance already trusts. Build to that bar, and the rest of the practice falls into place.
Frequently asked questions
How do I convert CapEx to OpEx-style metrics for FinOps reporting?
Manufacture a per-unit OpEx equivalent for every CapEx asset by combining its monthly amortization (purchase price ÷ useful life), its allocated power draw in kW-months, and the rack-unit-months it occupies. Push that synthetic unit price into the same allocation table your cloud workloads use. The output is one row per workload per month, with a rate column that carries a confidence flag indicating which inputs are metered versus modeled.
What FinOps metrics work for owned datacenter hardware?
Three primitives carry the load: amortized-asset-month (depreciation as a flat monthly rate), kW-month (allocated power draw against the metered facility load), and rack-unit-month (physical density against the colo or hall inventory). Server-hours are not useful — owned hardware does not bill by the hour and varying utilization does not change the run-rate.
Should I include depreciation in the chargeback rate?
Yes. Depreciation is the cleanest proxy for the time-value of an owned asset and is already on the balance sheet. Use straight-line over the useful-life policy your finance team applies (typically 36–60 months for compute, 60–84 for storage, 84+ for network). If your finance team uses accelerated methods, mirror them — never invent a depreciation curve for the chargeback model.
How often should the per-unit rate be recalibrated?
Quarterly is the right cadence for most estates. Power costs and rack density shift slowly. Recalibrate against (a) the fixed-asset register, (b) the colocation invoice, and (c) the metered facility load. Annual recalibration drifts too far; monthly creates noise that obscures real signal.
What is hybrid FinOps?
Hybrid FinOps is the discipline of applying unit economics and continuous optimization across every venue where value is metered — public cloud, private cloud, colocation, SaaS, and AI infrastructure — using a common allocation model rather than parallel, venue-specific practices.