
How Counterwatch calculates hero counters and synergies


I posted this on r/Overwatch and it turned into a good methodology discussion, so I'm moving it here too. This is the deep-dive on exactly how Counterwatch computes its counter rates and synergy rates, why counters use binary weighting while synergies use playtime-weighted, and what the double-normalization pass actually does. If you have ever wondered why our numbers look different from raw-aggregate stats sites, this is the answer.


Source data

Counterwatch aggregates around 60,000 Overwatch matches per week from opted-in app users.

Every match generates hero sessions: one per hero per player. Each session records the hero, team side, match outcome, and match portion (a 0–1 value based on playtime). Data is processed weekly and sliced by rank division, game type, and stat category (5v5, 6v6, Stadium).

That is the raw material. Everything below is computed on top of these hero sessions.

Counter rate calculation

For every match, every hero on one team is paired with every hero on the opposite team. Each pair counts as 1.0 regardless of playtime. This is deliberate, and it is the single biggest difference between how Counterwatch calculates counters and how a naive aggregate would.

Here is why it matters. When players get countered, they swap to a counter hero mid-match. That means the original (countered) hero has low playtime in the matches where they lost hardest to a specific opponent. If we weighted pairs by playtime, we would downweight exactly the data that proves a counter works: the matches where someone got hard-countered and swapped. The counter data would get systematically flattened because the most informative matches would carry the least weight.

Binary weight (1.0 per pair) avoids this trap. A hero that appears for 20% of a match still contributes a full data point to every enemy-hero matchup, preserving the signal that the swap was forced.

Raw counter rate is then straightforward:

raw_counter_rate = wins / total_pairs   for that hero-vs-opponent combo
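The pairing and binary weighting can be sketched in a few lines. This is an illustration, not the app's actual code; the session dict shape and the function name are assumptions, and playtime is deliberately ignored because every pair counts as 1.0:

```python
from collections import defaultdict
from itertools import product

def raw_counter_rates(matches):
    # matches: list of matches; each match is a list of hypothetical
    # session dicts like {"hero": ..., "team": "A"/"B", "won": bool}
    wins = defaultdict(float)
    totals = defaultdict(float)
    for match in matches:
        team_a = [s for s in match if s["team"] == "A"]
        team_b = [s for s in match if s["team"] == "B"]
        # every hero on one team paired with every hero on the other,
        # 1.0 per pair regardless of how long either hero was played
        for a, b in product(team_a, team_b):
            totals[(a["hero"], b["hero"])] += 1.0
            totals[(b["hero"], a["hero"])] += 1.0
            if a["won"]:
                wins[(a["hero"], b["hero"])] += 1.0
            else:
                wins[(b["hero"], a["hero"])] += 1.0
    return {pair: wins[pair] / totals[pair] for pair in totals}
```

A hero who swapped out after 20% of the match still contributes a full 1.0 to every enemy matchup here, which is the point.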

Synergy rate calculation

Synergies are the opposite problem. Every hero on the same team is paired with every other allied hero. Pairs are weighted by both heroes' match portions multiplied: portion_a × portion_b.

The reason we diverge from the counter approach: hero swaps do not carry survivorship bias in the synergy direction. We want to measure how well two heroes work together when both are actually being played at the same time, not how well one works when the other has already left the match. Weighting by the product of their match portions correctly captures the time window both were on the roster together.

Raw synergy rate:

raw_synergy_rate = weighted_wins / weighted_total
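A matching sketch for the playtime-weighted side, under the same hypothetical session shape plus a `portion` field (the 0–1 match portion). Each ally pair contributes `portion_a × portion_b` to both the weighted total and, on a win, the weighted wins:

```python
from collections import defaultdict
from itertools import combinations

def raw_synergy_rates(matches):
    # matches: list of matches; each match is a list of hypothetical
    # session dicts like {"hero": ..., "team": ..., "won": bool, "portion": float}
    weighted_wins = defaultdict(float)
    weighted_total = defaultdict(float)
    for match in matches:
        by_team = defaultdict(list)
        for s in match:
            by_team[s["team"]].append(s)
        for sessions in by_team.values():
            # every hero paired with every other allied hero
            for a, b in combinations(sessions, 2):
                w = a["portion"] * b["portion"]  # overlap-time weight
                for pair in ((a["hero"], b["hero"]), (b["hero"], a["hero"])):
                    weighted_total[pair] += w
                    if a["won"]:
                        weighted_wins[pair] += w
    return {p: weighted_wins[p] / weighted_total[p] for p in weighted_total}
```

A match where one ally only played half the time contributes half the weight, so the rate reflects the window in which both heroes were actually on the roster together.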

Double normalization (both counters and synergies)

Raw rates are useful but biased. Two systematic effects distort them:

  1. Some heroes are just stronger than others. If Hero A wins 55% overall, every counter pair involving Hero A is going to look inflated on the "Hero A wins against..." side and deflated on the "losing to Hero A..." side.
  2. Our user base wins more than 50% of matches overall. That is a selection effect: people install an app like Counterwatch because they want to win more, and they do. Without correction, every hero looks artificially strong across every matchup.

Counterwatch runs two normalization passes to strip both effects out:

Pass 1: Opponent / ally normalization

For each opponent (or ally), subtract the average win rate that all heroes achieve against them (or alongside them). This removes the effect of facing or playing with a generally strong or weak hero. After this pass, a positive number means the hero is doing better than average against this specific opponent (or with this specific ally), controlling for the opponent's overall strength.

Pass 2: Hero popularity normalization

Subtract each hero's average already-normalized rate across all their matchups. Removes the bias from our user base winning above 50% globally. Without this, heroes that are popular among our users would get inflated rates across the entire opponent table.

The result: rates centered around 0, not 50%. Positive numbers mean “better than expected,” negative numbers mean “worse than expected,” and the magnitude tells you how strong the counter or synergy effect is.
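The two passes can be sketched on a matchup table. The function name and the dict layout (`{(hero, opponent_or_ally): raw_rate}`) are assumptions for illustration:

```python
def double_normalize(raw):
    # raw: {(hero, opponent_or_ally): raw_rate}
    heroes = {h for h, _ in raw}
    others = {o for _, o in raw}

    # Pass 1: for each opponent/ally, subtract the average rate all heroes
    # achieve against/with them -> strips their overall strength.
    opp_avg = {
        o: sum(raw[(h, o)] for h in heroes if (h, o) in raw)
           / sum(1 for h in heroes if (h, o) in raw)
        for o in others
    }
    pass1 = {(h, o): r - opp_avg[o] for (h, o), r in raw.items()}

    # Pass 2: subtract each hero's average pass-1 rate across all matchups
    # -> strips the above-50% selection bias of the user base.
    hero_avg = {
        h: sum(pass1[(h, o)] for o in others if (h, o) in pass1)
           / sum(1 for o in others if (h, o) in pass1)
        for h in heroes
    }
    return {(h, o): r - hero_avg[h] for (h, o), r in pass1.items()}
```

On a toy table where Hero A wins 70% into X but everything else sits at 50%, the output centers around 0 and surfaces A-vs-X as the only positive matchup signal.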

This is why the numbers on our site read differently from raw-aggregate sites. A raw aggregate will tell you “Cassidy wins 48% against Winston,” which looks like Cassidy is losing that matchup. After double normalization, you see that Cassidy wins ~3 percentage points more than expected against Winston, controlling for both heroes' overall strength. That is the real matchup signal.

Worked example: the Cassidy counter problem

Before I added the double normalization, an earlier iteration of the data showed Cassidy having lower win rates against all the heroes he is considered a hard counter for. That looked obviously wrong and is what kicked off the normalization work.

The explanation is simple in hindsight. Cassidy is played hard into dive comps: Tracer, Genji, Winston. Those dive comps are strong in the current meta, so matches where Cassidy faces them tend to have higher enemy-team win rates overall. Raw aggregate numbers attributed that to Cassidy losing the matchup, when the actual signal is “Cassidy does better than expected against these dive heroes, relative to how well the dive team was going to do anyway.”

Pass 1 strips the “how strong is the enemy hero overall” effect. Pass 2 strips the “how popular is the friendly hero among Counterwatch users” effect. What is left is the real matchup.

Live match win-chance scoring

The tier list and matchup numbers are aggregated weekly. But when you are actually in a match, the app combines multiple stats to produce a real-time prediction for each player and for the team as a whole.

Each player's score combines:

  • Counter impact: average counter rate of this hero vs every enemy hero. Enemy scores are the inverse (zero-sum).
  • Synergy score: average synergy rate of this hero with every same-team ally.
  • Map win rate: how well this hero performs on the current map, in the rank division of the average player rank in the match.

Each factor is converted to a delta from 50% and summed into a single number per player. If your Tracer has +2% counter, +1.5% synergy, and +0.7% map, her total effect is +4.2 percentage points.

The team-level win chance rolls each factor up across all five players and adds them to a 50% baseline:

winChance = 50 + counterAdvantage + synergyAdvantage + mapAdvantage

That is the number the in-game overlay shows live during your match, and the same logic runs the web team builder.
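The roll-up is simple enough to sketch directly; the function names are hypothetical, and the inputs are assumed to already be per-factor deltas in percentage points away from 50:

```python
def player_score(counter_delta, synergy_delta, map_delta):
    # one number per player: the sum of the three factor deltas
    return counter_delta + synergy_delta + map_delta

def team_win_chance(players):
    # players: list of (counter_delta, synergy_delta, map_delta) tuples,
    # one per teammate; deltas are summed onto a 50% baseline
    return 50.0 + sum(player_score(*p) for p in players)
```

With the Tracer example above (+2 counter, +1.5 synergy, +0.7 map) and four neutral teammates, the team baseline moves from 50 to 54.2.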

What the methodology cannot account for (yet)

Two limits worth being upfront about.

Map granularity

60,000 matches per week is enough to compute reliable hero-vs-hero numbers across the roster, but it is not enough to break those numbers down per map. Pharah vs hitscan on Gibraltar is a genuinely different matchup than Pharah vs hitscan on King's Row, and the map data dilutes some real situational counters.

I handle this partially in live match scoring by pulling the map win rate as a separate factor, but I do not currently slice the counter/synergy tables by map. Data volume gates this.

MMR spread within a rank

Even within a single rank tier, there is meaningful MMR variance between players. Higher-MMR players at the same tier tend to play around counters better and maximize synergies more efficiently, which can shift matchup reliability.

For Overwatch, I can only read the rank of players with public profiles, which means I almost never have a complete picture of MMR differences between the two teams in a given match. Marvel Rivals exposes rank information more completely. For OW the data simply is not there to weight by MMR spread, and the honest thing to do is leave the factor out rather than guess.

Why this matters

A counter rate of 48% on a raw aggregate site does not mean a hero is losing the matchup. It means the matchup looks that way before you correct for the strength of the opponent, the selection bias of your user base, and whatever other systematic effect is baked into the raw numbers.

Counterwatch's numbers are meant to be actionable at the draft screen: does picking this hero against that enemy hero actually help me win? The double normalization is what makes the answer honest.

For the full tier-list view this feeds into, see the Overwatch tier list or Marvel Rivals tier list. For the live match surface, the Counterwatch app runs the same logic on the data as it comes in. For the shorter-form methodology summary, the methodology page has the basics.

If you want to suggest improvements or argue with the approach, I read every message at oskar@counterwatch.gg and the Discord is always open.
