In what turned out to be a surprising 2009 for Counter-Strike teams and fans, the year delivered plenty of interesting and exciting tournament results. With it all over, the question remains: in what order did the teams finish after a year of hard-fought matches?
Returning to the ranking system I first developed in 2006, we have the following 2009 Counter-Strike top 10:
Rank  Team              Change  Points  Winning %
1.    fnatic            0       —       0.942
2.    SK Gaming         0       3139    0.826
3.    Wicked            +1      2574    0.896
4.    mTw               -1      2209    0.778
5.    WeMade FOX        +1      960     0.527
6.    Alternate         +1      894     0.604
7.    TyLoo             +4      649     0.627
8.    Meet Your Makers  +3      648     0.785
9.    Evil Geniuses     -1      639     0.473
10.   Mousesports       -4      582     0.426
Outside looking in: k23, Virtus.Pro, compLexity, wNv.cn
Depending on the response, I can follow up with an explanation of the specific reasons behind each team's placement in this ranking.
Now, before everyone jumps on it, let me remind everyone how the ranking system works. As many published events as possible are gathered from eSports media websites. Once those are accounted for, I apply a simple point system.
For those that don't know, you get 100 points for first, 90 for second, and so on. Tournaments that list shared placements such as 3rd–4th or 5th–8th are given the median, the value in between the individual placements (3rd = 80, 4th = 70, thus 3rd–4th = 75). I'm aware of the arguments against keeping the points so close together, but I find that spreading them out skews the final rankings far more.
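The point scale above can be sketched as a small function. This is a hypothetical reconstruction, not the author's actual code; the floor of 10 points for deep placements is my assumption, since the article only spells out the top of the scale.

```python
def placement_points(best, worst=None):
    """Points for a single placement, or the median for a shared range.

    1st = 100, 2nd = 90, and so on down in steps of 10.
    Shared placements (e.g. 3rd-4th) get the median of the individual
    values: (80 + 70) / 2 = 75.
    """
    if worst is None:
        worst = best
    # Individual point values for every rank in the range.
    # The floor of 10 points is an assumption for ranks below 10th.
    values = [max(110 - 10 * rank, 10) for rank in range(best, worst + 1)]
    return sum(values) / len(values)

print(placement_points(1))      # 100.0
print(placement_points(3, 4))   # 75.0
print(placement_points(5, 8))   # 45.0
```

Because the individual values form an arithmetic sequence, the median of a range equals its average, which is why a simple mean suffices here.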
Now, after the initial top 10 is completed, the following criteria determine which of those teams' tournaments are counted: a tournament must include teams from at least two countries, or it must include at least two top-ten teams. This was changed from one top-ten team because that, too, skewed the rankings too much.
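The eligibility rule reduces to a simple predicate. This is a minimal sketch under my own representation of a tournament (sets of country and team names; all names below are made up for illustration):

```python
def tournament_counts(countries, teams, top10):
    """True if the tournament is eligible: it spans at least two
    countries, or fields at least two current top-10 teams."""
    return len(countries) >= 2 or len(set(teams) & set(top10)) >= 2

top10 = {"fnatic", "SK Gaming", "Wicked"}

# International event: counted regardless of who attends.
print(tournament_counts({"SE", "DE"}, {"localteam"}, top10))          # True
# Domestic event with two top-10 teams: counted.
print(tournament_counts({"SE"}, {"fnatic", "SK Gaming"}, top10))      # True
# Domestic event with only one top-10 team: not counted.
print(tournament_counts({"SE"}, {"fnatic", "localteam"}, top10))      # False
```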
The next step, after determining which tournaments meet the criteria, is to apply the weighting system. The first weighting takes the number of top-10 teams in a tournament and sets the multiplier to one less than that number. So if four top-10 teams attend a tournament, a multiplier of 3 is applied to those teams' point totals. In theory, teams outside the top 10 could benefit from a larger multiplier, but no team has actually done so.
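The multiplier step is a one-liner. One caveat the article doesn't address: with only one top-10 team present, "one less" would zero out that team's points, so the floor of 1 below is my assumption, not a stated rule.

```python
def top10_multiplier(num_top10_in_event):
    """Multiplier applied to top-10 teams' tournament points:
    one less than the number of top-10 teams present.
    The floor of 1 is an assumed safeguard for events with a
    single top-10 team, which the article leaves unspecified."""
    return max(num_top10_in_event - 1, 1)

print(top10_multiplier(4))  # 3  (the example from the text)
print(top10_multiplier(2))  # 1
```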
After that weighting is done, I calculate the winning percentage. For each team, I divide its multiplied point total by the maximum number of points it could have earned from the events it attended. Once that is done for every team, I work the winning percentage into the final point total that you see above.
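As described, the winning percentage is just a ratio. How it is then "worked into" the final point total isn't specified, so this sketch stops at the ratio itself; the numbers below are invented for illustration.

```python
def winning_percentage(earned_points, max_possible_points):
    """A team's multiplied point total divided by the maximum points
    it could have earned across the events it attended."""
    return earned_points / max_possible_points

# Hypothetical team: 900 weighted points out of a possible 1200.
print(round(winning_percentage(900, 1200), 3))  # 0.75
```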
Unfortunately, these rankings are only truly effective as a snapshot of the specific period you're trying to rank. I have a few ideas for keeping an up-to-date, rolling ranking to use throughout a year, but that will take time that I don't really have at the moment.