Starcraft Ranking

From Floor Pi Wiki

Obsolete

This article or the information contained in it is old and quite possibly faulty. Sort of like our cruft. Please update.

Charles is testing a ranking system for Starcraft that he hopes will later get implemented in some large hall Battle.net server or somesuch.

Individuals

Each player is represented by a pair of numbers (a, t) corresponding to the player's individual playing strength and his teamwork capabilities respectively. If a player is alone on his side, then his team strength evaluates to his individual strength.

By Our Powers Combined

Suppose, then, that there are at least two people on a team. The strength of that team is then as follows:

Let each player <math>P_i = (a_i, t_i)</math>. Then the strength of a list of players (duplicates allowed, for reasons explained later) is

<math>\sigma(P_1, P_2, \ldots, P_n) = S \prod_{i = 1}^{n}{t_i^\frac{a_i}{S}},</math>

where <math>S = \sum_{i=1}^n{a_i}</math>. Informally, the team strength is the additively combined individual strengths of the players, multiplied by a coefficient determined by their teamwork capabilities. The coefficient is a weighted geometric average, where the weight assigned to each player is the fraction of the total team individual strength that he contributes.

For example, suppose I have four people on a team, with profiles (3, 1), (3, 2), (1, 3), (2, 1). (The (1, 3) player might represent someone who isn't that great a player, but is very supportive of his team: for example, a Terran player who just pumps out Medics to accompany his ally's Zealots, and comsats on command.) Then their strength is

<math>S = 3 + 3 + 1 + 2 = 9, \sigma = 9 \left(1^\frac{3}{9} 2^\frac{3}{9} 3^\frac{1}{9} 1^\frac{2}{9}\right) = 9 \sqrt[9]{24} \approx 12.8</math>
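The wiki's implementation is in Scheme (see below), but the σ formula itself is easy to sketch. Here is a hypothetical Python rendering (names are mine, not from the actual code), where each player is an (a, t) pair as above:

```python
from math import prod

def team_strength(players):
    """sigma for a list of (a, t) pairs: the combined individual strength S,
    times a geometric mean of the teamwork ratings weighted by a_i / S."""
    if len(players) == 1:
        return players[0][0]            # a lone player's strength is just his a
    S = sum(a for a, _ in players)
    return S * prod(t ** (a / S) for a, t in players)

team = [(3, 1), (3, 2), (1, 3), (2, 1)]
print(team_strength(team))              # 9 * 24**(1/9), roughly 12.81
```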

Balancing Act

The team balancing exercise, then, is to partition a list of players into two lists {P_1, ..., P_m} and {Q_1, ..., Q_n}, so that the "difference" between them is minimal. If m = n, then we simply evaluate both teams as they stand and have

<math>\Delta ( \{P_1, \ldots, P_n\}, \{Q_1, \ldots, Q_n\} ) = | \sigma (P_1, \ldots, P_n) - \sigma (Q_1, \ldots, Q_n) |.</math>

Otherwise, one of the teams has more players than the other -- without loss of generality m - n = k > 0. Suppose the Q_i have been labelled so that a_1 ≥ a_2 ≥ ... ≥ a_n. Then

<math>\Delta ( \{P_1, \ldots, P_m\}, \{Q_1, \ldots, Q_n\} ) = | \sigma (P_1, \ldots, P_m) - \sigma (Q_1, \ldots, Q_n, Q_1, \ldots, Q_k) |</math>

where the best k players on the second team have been duplicated before calculation.
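The Δ computation with this duplication rule can be sketched as follows (hypothetical Python, not the actual Scheme code; `team_strength` restates the σ formula defined earlier):

```python
from math import prod

def team_strength(players):
    # sigma from earlier: S times the weighted geometric mean of the t_i
    if len(players) == 1:
        return players[0][0]
    S = sum(a for a, _ in players)
    return S * prod(t ** (a / S) for a, t in players)

def strength_difference(team_p, team_q):
    """Delta between two teams; when the sizes differ, the smaller team's
    best k players (ranked by individual strength a) are duplicated first."""
    if len(team_p) < len(team_q):
        team_p, team_q = team_q, team_p    # make team_p the larger team
    k = len(team_p) - len(team_q)
    padded = list(team_q) + sorted(team_q, key=lambda p: p[0], reverse=True)[:k]
    return abs(team_strength(team_p) - team_strength(padded))
```

For instance, a two-player team facing a single copy of one of its own players ends up compared against two copies of that player, so identical material gives Δ = 0.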

Implementation

The Scheme implementation of the algorithm simply stores data about players, searches through all splits of a roster of players into a subset and its complement, and spits out the split where subset vs. subset-complement results in the smallest strength difference.
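The actual Scheme code isn't posted yet, but the brute-force search it describes might look like this in Python (a sketch under the formulas above; `team_strength` and `strength_difference` restate σ and Δ):

```python
from itertools import combinations
from math import prod

def team_strength(players):
    if len(players) == 1:
        return players[0][0]
    S = sum(a for a, _ in players)
    return S * prod(t ** (a / S) for a, t in players)

def strength_difference(p, q):
    if len(p) < len(q):
        p, q = q, p
    k = len(p) - len(q)
    q = list(q) + sorted(q, key=lambda x: x[0], reverse=True)[:k]
    return abs(team_strength(p) - team_strength(q))

def balance(roster):
    """Try every subset vs. its complement; return the split with minimal Delta."""
    best = None
    for r in range(1, len(roster) // 2 + 1):
        for picked in combinations(range(len(roster)), r):
            team = [roster[i] for i in picked]
            rest = [roster[i] for i in range(len(roster)) if i not in picked]
            d = strength_difference(team, rest)
            if best is None or d < best[0]:
                best = (d, team, rest)
    return best[1], best[2]
```

The search is exponential in roster size, which is fine for the handful of players in a typical game.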

The implementation uses the convention that a strength difference of 1 means the stronger team will consistently win repeated iterations of the same match.

Experience

The algorithm's data about players stays up to date by increasing the ratings of players on the winning side of a game, if that side was not already expected to win. To do this, it calculates a δ and an ε so that the teams (a_i + δ, t_i) and (a_i, t_i + ε) are both about 1 above the losing team. Then it gives each player (pδ, (1-p)ε), where p is the human-evaluated fraction of that player's success that came from his own power, as opposed to synergy with his teammates.
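This update rule could be sketched like so (hypothetical Python, not the Scheme code; `solve` is a simple bisection I'm assuming here, which presumes the winners were rated below the target and that σ grows with the adjustment, as in the upset case described above):

```python
from math import prod

def team_strength(players):
    if len(players) == 1:
        return players[0][0]
    S = sum(a for a, _ in players)
    return S * prod(t ** (a / S) for a, t in players)

def solve(f, lo=0.0, hi=1.0, iters=100):
    """Bisection root-find for f(x) = 0, assuming f is increasing and f(lo) <= 0."""
    while f(hi) < 0:
        hi *= 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def update_ratings(winners, losers, p):
    """p[i] is the human-evaluated fraction of winner i's success due to his
    own power rather than synergy (a hypothetical interface)."""
    target = team_strength(losers) + 1
    delta = solve(lambda d: team_strength([(a + d, t) for a, t in winners]) - target)
    eps = solve(lambda e: team_strength([(a, t + e) for a, t in winners]) - target)
    return [(a + p[i] * delta, t + (1 - p[i]) * eps)
            for i, (a, t) in enumerate(winners)]
```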

One issue with this is that player ratings only ever go up, never down. While the relative strengths of players should remain consistent, the numbers used in the ratings may eventually grow meaninglessly large, and the definite-win radius of 1 may have to be re-evaluated.

Download

You can try the Scheme implementation for yourself once I finish documenting the damn thing and upload it to the wiki.