Citations are widely used to assess the performance of scientists, both manually and through algorithms. However, such success metrics reflect only a community’s reaction to a performance, e.g., to a paper. They are not a quality measure of the performance itself. Success metrics therefore have one crucial issue: they build on biased data produced by society. For example, given equal performance, women are cited less than men due to discriminatory citation cultures. Although the biased ranking problem is increasingly recognized by scientists, funding agencies, and policy makers, a mechanistic modelling framework to quantify the biases in success data is missing. To fill this gap, we will design mathematical models that explain the observed success inequalities. We will use scholarly datasets with hundreds of millions of papers, patents, and grants to validate the models. We will (1) untangle the bias effects from true performance, (2) quantify bias in performance indicators and ranking algorithms, and (3) redefine the way we use success measures to improve the fairness of indicators and algorithms. As an application, we will assess success biases in Danish research. This research is novel and truly transformative for addressing human inequality, as it will provide the first rigorous tool to answer questions such as: given equal performance, how will the career success of a white male researcher from a western country differ from that of a black female researcher from a different part of the world?