Analysis & Trends

The Analysis page is CostPilot’s dedicated environment for time-series cost investigation. Where Cost Explorer answers “what did we spend?”, Analysis answers “what is changing, why, and when did it start?”.

Analysis vs Cost Explorer

|                  | Cost Explorer                  | Analysis                           |
| ---------------- | ------------------------------ | ---------------------------------- |
| Primary use      | Current spend breakdown        | Trend investigation                |
| Best for         | Allocation and attribution     | Spotting changes over time         |
| Data shape       | Aggregated totals by dimension | Time-series charts and projections |
| Typical question | “Which team spent the most this month?” | “Why did our costs jump last Tuesday?” |

Tabs

Analysis is split into six tabs, each offering a different lens on your cost data.

Overview

The Overview tab shows your total cost as a time-series chart with a 15-minute moving-average smoothing applied to reduce collection noise. The chart makes sustained increases and decreases visible without the jaggedness of raw per-interval data.
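CostPilot does not document the exact smoothing algorithm, but the idea can be sketched with a simple trailing moving average, assuming one cost sample per minute so that 15 points cover the 15-minute window:

```python
def moving_average(samples, window=15):
    """Smooth per-minute cost samples with a trailing moving average.

    With one sample per minute, window=15 approximates a 15-minute
    smoothing window. This is an illustrative sketch, not CostPilot's
    documented implementation.
    """
    smoothed = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)       # trailing window, clipped at the series start
        chunk = samples[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A one-off collection spike is pulled toward the baseline, so only
# sustained changes move the smoothed line appreciably.
raw = [1.0] * 10 + [5.0] + [1.0] * 10
smoothed = moving_average(raw)
```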

Below the chart, burn rate projections show your current hourly, daily, monthly, and yearly spend rates extrapolated from recent data. Use the projections to quickly answer “if we keep spending at this rate, what will the bill be at month end?”.

Tip

The burn rate projection is calculated from recent actual cost, not from requests or reserved capacity. It updates as new metric data arrives.
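As a sketch of that extrapolation, assuming a list of recent hourly cost samples (the window size, simple averaging, and the 30-day month are assumptions, not CostPilot's documented behaviour):

```python
def burn_rates(recent_hourly_costs):
    """Project spend rates from recent actual cost samples, one per hour.

    Returns hourly/daily/monthly/yearly extrapolations. Illustrative
    only: the real window and any weighting are not specified here.
    """
    if not recent_hourly_costs:
        raise ValueError("need at least one sample")
    hourly = sum(recent_hourly_costs) / len(recent_hourly_costs)
    return {
        "hourly": hourly,
        "daily": hourly * 24,
        "monthly": hourly * 24 * 30,   # 30-day month assumption
        "yearly": hourly * 24 * 365,
    }

# "If we keep spending at this rate, what will the bill be?"
rates = burn_rates([0.42, 0.40, 0.44])
```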

Velocity

The Velocity tab focuses on the rate of change in your spending — how fast your costs are growing or shrinking, and whether that rate is accelerating or decelerating.

Interpret velocity signals:

  • Positive velocity, increasing — spend is growing and the growth is accelerating. Investigate new deployments or autoscaling events.
  • Positive velocity, decreasing — spend is growing but the growth rate is slowing. Optimisation work may be having an effect.
  • Negative velocity — spend is falling. Confirm with the Efficiency tab that workloads are still healthy.
  • Near-zero velocity — spend is stable. Expected for mature, well-provisioned clusters.
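The categories above reduce to the first difference of the cost series (velocity) and the change in that difference (acceleration). A minimal sketch, with an illustrative `eps` threshold standing in for "near-zero":

```python
def classify_velocity(costs, eps=0.01):
    """Classify a cost series using first differences (velocity) and the
    change between the last two differences (acceleration).

    The threshold and the use of only the latest differences are
    illustrative assumptions.
    """
    deltas = [b - a for a, b in zip(costs, costs[1:])]
    velocity = deltas[-1]
    acceleration = deltas[-1] - deltas[-2] if len(deltas) > 1 else 0.0
    if abs(velocity) < eps:
        return "near-zero velocity: spend is stable"
    if velocity > 0:
        trend = "increasing" if acceleration > 0 else "decreasing"
        return f"positive velocity, {trend} growth"
    return "negative velocity: spend is falling"

print(classify_velocity([100, 102, 105, 109]))  # growth is accelerating
```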

Breakdown

The Breakdown tab decomposes cost across a chosen dimension — namespace, workload, label, node pool — and plots each segment as a time series. This makes it easy to see which segments are growing, which are stable, and which are shrinking.

Switch dimensions using the selector at the top. You can:

  • Compare multiple namespaces on the same axis to see relative growth rates
  • Identify which label or team is driving a spike by checking which line changes at the relevant point in time
  • Spot a dimension that consistently consumes a disproportionate share of total cost
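Conceptually, a breakdown is a pivot of flat cost records into one time series per segment of the chosen dimension. The record fields below (`ts`, `namespace`, `cost`) are illustrative, not CostPilot's actual export schema:

```python
def breakdown(records, dimension):
    """Pivot flat cost records into one sorted time series per segment
    of the chosen dimension. Field names are illustrative assumptions."""
    series = {}
    for rec in records:
        series.setdefault(rec[dimension], []).append((rec["ts"], rec["cost"]))
    return {segment: sorted(points) for segment, points in series.items()}

records = [
    {"ts": 1, "namespace": "payments", "cost": 4.0},
    {"ts": 1, "namespace": "search", "cost": 2.0},
    {"ts": 2, "namespace": "payments", "cost": 9.0},  # payments is growing
]
by_namespace = breakdown(records, "namespace")
```

Plotting each resulting series on the same axis is what makes the relative growth rates visible.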

Signals

The Signals tab surfaces automatically detected cost anomalies. When CostPilot’s analysis detects a data point that deviates significantly from the expected pattern, it appears here as a signal.

The tab label shows a count badge when active signals are present — making it easy to spot from any other tab.

Each signal shows:

  • The affected dimension (namespace, workload, or label)
  • The magnitude of the deviation
  • The time window it occurred in
  • Whether it is still active or has resolved

Note

Signals feed the anomaly section of Insights. When CostPilot generates an insight about an unusual cost pattern, the underlying signal that triggered it is visible here.

Use Signals as your starting point when something “looks off” in a chart — navigate here to see whether CostPilot has already detected and quantified the anomaly.
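CostPilot does not specify its detection method, but a common baseline approach, flagging points that sit more than a few standard deviations from a trailing window, captures the idea of "deviates significantly from the expected pattern":

```python
import statistics

def detect_signals(series, window=12, threshold=3.0):
    """Flag points deviating from the trailing baseline by more than
    `threshold` standard deviations.

    A generic anomaly-detection sketch; the window size, threshold,
    and method are assumptions, not CostPilot's actual detector.
    """
    signals = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev == 0:
            if series[i] != mean:               # any change from a flat baseline stands out
                signals.append((i, float("inf")))
            continue
        deviation = (series[i] - mean) / stdev  # signed magnitude, in sigmas
        if abs(deviation) > threshold:
            signals.append((i, round(deviation, 1)))
    return signals

flat = [10.0] * 20
flat[15] = 30.0                                 # a sudden spike
print(detect_signals(flat))                     # the spike index is flagged
```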

Efficiency

The Efficiency tab shows how efficiently your cluster is using the resources it is paying for, tracked over time. Where the Dashboard shows a snapshot efficiency score, this tab shows the trend — whether efficiency is improving, degrading, or stable across your selected period.

See Efficiency Scoring for how the score is calculated.

Infrastructure

The Infrastructure tab shows cost broken down by infrastructure layer — node types, instance families, pricing tier (spot vs on-demand), and availability zone. This complements the workload-oriented views in Breakdown and Cost Explorer by grounding costs in the physical infrastructure that the workloads run on.

Useful for:

  • Identifying whether a cost increase was caused by workload growth or by node provisioning events
  • Checking whether your spot adoption rate is increasing or decreasing over time
  • Comparing the cost per node type over a period
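The spot adoption check above amounts to computing spot cost as a share of total node cost per period. A sketch over illustrative `(day, tier, cost)` records, not CostPilot's export format:

```python
def spot_adoption(records):
    """Compute the spot share of node cost per day from (day, tier, cost)
    records. Field layout is an illustrative assumption."""
    totals = {}
    for day, tier, cost in records:
        spot, total = totals.get(day, (0.0, 0.0))
        if tier == "spot":
            spot += cost
        totals[day] = (spot, total + cost)
    return {day: spot / total for day, (spot, total) in sorted(totals.items())}

records = [
    ("2024-05-01", "spot", 20.0), ("2024-05-01", "on-demand", 80.0),
    ("2024-05-02", "spot", 40.0), ("2024-05-02", "on-demand", 60.0),
]
print(spot_adoption(records))  # spot share rises from 0.2 to 0.4
```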

Practical examples

“Our bill was higher than expected last month — where did it come from?” Open Breakdown, set the dimension to namespace, and look for a line that rises sharply mid-month. Once you identify the namespace, switch to Cost Explorer for detailed workload-level attribution.

“We shipped a new service two weeks ago — has it meaningfully increased our spend?” Open Velocity and look at the period around the deployment date. A step change in velocity that aligns with the deployment date confirms the cost impact.

“Why does cost spike every Monday morning?” Open Breakdown with a 30-day window. The weekly pattern will be visible in the time-series chart. Common causes: batch jobs, report generation, or caches warming after weekend autoscaling.

“Something looks off this week — but I’m not sure what.” Check the Signals tab first. If CostPilot has detected an anomaly, the signal will identify the affected dimension and magnitude.

“We optimised our node pools last sprint — did it work?” Open Infrastructure and compare the spot-to-on-demand ratio and total node cost before and after the sprint. Then confirm in Efficiency that workload efficiency scores improved alongside the node changes.


Spotting anomalies

A cost anomaly is a data point that does not fit the surrounding pattern. In the time-series views, anomalies appear as:

  • A sudden vertical spike on an otherwise flat line
  • A segment that detaches from the cluster of other segments
  • A burn rate projection that is significantly above your historical norm

Warning

A sudden cost drop is not always a good sign. It can indicate a workload crashing and not restarting, rather than a genuine efficiency gain. Confirm that availability metrics align before treating a drop as an optimisation win.

When you spot a suspicious pattern, cross-reference it with:

  1. The deployment history for that namespace or team around that date
  2. Node autoscaling events (check the Infrastructure tab and the Nodes page)
  3. Any spot interruptions that may have caused workloads to reschedule onto on-demand capacity