The overall score for a given server is calculated over a given subset of the data sets evaluated in the automated benchmark, for example the data sets submitted to the IEDB within a given week, or within the last 6 months. For each evaluated data set, a percentage rank score is assigned to each participating server. Rank scores lie between 0 and 100: the best-performing server scores 100, the worst-performing server scores 0, and the remaining servers receive scores evenly spaced between 0 and 100. Thus, for an evaluation data set where predictions from three servers are available, the scores 100, 50 and 0 are assigned; when predictions from four servers are available, the scores 100, 66.7, 33.3 and 0 are assigned, and so on. Each server receives one percentage rank score based on its AUC (area under the ROC curve) performance and one based on its SRCC (Spearman's rank correlation coefficient) performance. The overall score is calculated as the average of these percentage rank scores.
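The rank-score assignment described above can be sketched as follows. This is a minimal illustration, not the benchmark's actual implementation; the function name and the assumption that higher performance values are better (true for both AUC and SRCC) are ours, and tie handling is deliberately omitted for simplicity.

```python
def percentage_rank_scores(values):
    """Assign percentage rank scores evenly spaced between 0 and 100.

    `values` holds one performance value per server (e.g. AUC or SRCC),
    where higher is better. The best server gets 100, the worst gets 0,
    and the rest are spaced evenly in between. Ties are not handled.
    """
    n = len(values)
    # Sort server indices by performance, worst first.
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    for rank, idx in enumerate(order):
        scores[idx] = 100.0 * rank / (n - 1)
    return scores

# Three servers: best -> 100, middle -> 50, worst -> 0.
print(percentage_rank_scores([0.91, 0.74, 0.83]))  # [100.0, 0.0, 50.0]

# Four servers: scores fall at 100, 66.7, 33.3 and 0.
print(percentage_rank_scores([0.60, 0.91, 0.74, 0.83]))
```

A server's overall score would then be the average of the scores this function assigns for AUC and for SRCC, taken across the selected data sets.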