Measures

In Fairness.jl, measures are callable structs. Pre-built instances of these structs are also available and can be called directly.

An instance is called by passing it a fairness tensor. The general form is metric(ft::FairTensor; grp=nothing). Each instance has multiple aliases for convenience.

The measures are divided into CalcMetrics and BoolMetrics, depending on whether the metric returns a numerical or a boolean value.

Notes :

  • All these metrics can be used even when the sensitive attribute takes more than 2 values.
  • To use the CalcMetrics with the evaluate function from MLJ, you have to wrap them with MetricWrapper or MetricWrappers. To use disparity with the evaluate function, you have to use the struct Disparity.

CalcMetrics

These metrics return a numerical value.

CalcMetrics - Usage

These measures all have the common calling syntax

measure(ft)

or

measure(ft; grp)

where ft is the fairness tensor. Here grp is an optional, named, string parameter used to compute the fairness metric for a specific group. If grp is not specified, the overall value of the fairness metric is calculated.

julia> using Fairness

julia> ŷ = categorical([1, 0, 1, 1, 0]);

julia> y = categorical([0, 0, 1, 1, 1]);

julia> grp = categorical(["Asian", "African", "Asian", "American", "African"]);

julia> ft = fair_tensor(ŷ, y, grp);

julia> TruePositiveRate()(ft)
0.6666666666666667

julia> true_positive_rate(ft) # true_positive_rate is an instance of TruePositiveRate
0.6666666666666667

julia> true_positive_rate(ft; grp="Asian")
1.0
The following metrics (callable structs) are available through Fairness.jl:

TruePositive, TrueNegative, FalsePositive, FalseNegative, TruePositiveRate, TrueNegativeRate, FalsePositiveRate, FalseNegativeRate, FalseDiscoveryRate, Precision, NPV

Standard synonyms of the above metrics:

TPR, TNR, FPR, FNR, FDR, PPV

Instances of the above metrics and their synonyms:

truepositive, truenegative, falsepositive, falsenegative, true_positive, true_negative, false_positive, false_negative, truepositive_rate, truenegative_rate, falsepositive_rate, true_positive_rate, true_negative_rate, false_positive_rate, falsenegative_rate, negativepredictive_value, false_negative_rate, negative_predictive_value, positivepredictive_value, positive_predictive_value, tpr, tnr, fpr, fnr, falsediscovery_rate, false_discovery_rate, fdr, npv, ppv, recall, sensitivity, hit_rate, miss_rate, specificity, selectivity, fallout
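Since the aliases all refer to the same underlying metric, they return identical values for a given fairness tensor. A minimal sketch, reusing the toy data from the usage example above:

```julia
using Fairness

# Same toy data as in the usage example above
ŷ   = categorical([1, 0, 1, 1, 0])
y   = categorical([0, 0, 1, 1, 1])
grp = categorical(["Asian", "African", "Asian", "American", "African"])
ft  = fair_tensor(ŷ, y, grp)

# tpr, recall, sensitivity and true_positive_rate are all aliases
# of the same metric, so each call returns the same value
tpr(ft) == recall(ft) == sensitivity(ft) == true_positive_rate(ft)
```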

MetricWrapper(s)

To pass your metric or metrics to MLJ.evaluate for automatic evaluation, you are required to use MetricWrapper/MetricWrappers.

Fairness.MetricWrapperType
MetricWrapper

MetricWrapper wraps a fairness metric and carries the information about the protected attribute.

source
evaluate(model, X, y, measure=MetricWrapper(tpr, grp=:race))

You can also use MetricWrapper to calculate a metric value outside the evaluate function.

wrappedMetric = MetricWrapper(negative_predictive_value; grp=:sex)
wrappedMetric(ŷ, X, y)

You can wrap multiple metrics at once using MetricWrappers.

evaluate(wrappedModels, X, y,
	measures=MetricWrappers([tpr, ppv], grp=:sex))

evaluate(wrappedModels, X, y,
	measures=[accuracy, MetricWrappers([tpr, ppv], grp=:sex)...])

FairMetrics

These metrics build upon CalcMetrics and provide more insight into the data through a DataFrame. Disparity and parity values for several CalcMetrics can be computed in a single call.

Fairness.disparityFunction
disparity(M, ft; refGrp=nothing, func=/)

Computes disparity for fairness tensor ft with respect to an array of metrics M.

For any class A and a reference group B, disparity = func(metric(A), metric(B)). By default func is /.

A dataframe is returned with disparity values for all combinations of metrics and classes. It contains a column named labels for the classes and one disparity column for each metric in M; the column names are the metric names with _disparity appended.

Keywords

  • refGrp=nothing : The reference group
  • func=/ : The function used to evaluate disparity. It should take 2 arguments; the second argument corresponds to the reference group.

Please note that division by 0 will result in NaN.

source
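Because the default func is /, a reference group whose metric value is 0 can yield NaN or Inf disparity entries. This is just standard IEEE floating-point behaviour, sketched here in plain Julia:

```julia
# IEEE floating-point division-by-zero semantics in Julia
0.0 / 0.0          # NaN  (zero-valued metric over a zero-valued reference)
1.0 / 0.0          # Inf  (nonzero metric over a zero-valued reference)

isnan(0.0 / 0.0)   # true
isinf(1.0 / 0.0)   # true
```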
julia> ft = fair_tensor(ŷ, y, grp);
julia> M = [true_positive_rate, positive_predictive_value];
julia> df = disparity(M, ft; refGrp="Asian");

┌──────────┬────────────────────────────┬─────────────────────┐
│ labels   │ TruePositiveRate_disparity │ Precision_disparity │
│ String   │ Float64                    │ Float64             │
│ Textual  │ Continuous                 │ Continuous          │
├──────────┼────────────────────────────┼─────────────────────┤
│ African  │ 1.0e-15                    │ 0.0                 │
│ American │ 1.0                        │ 0.0                 │
│ Asian    │ 1.0                        │ 1.0                 │
└──────────┴────────────────────────────┴─────────────────────┘
julia> f(x, y) = x - y
f (generic function with 1 method)
julia> df_1 = disparity(M, ft; refGrp="Asian", func=f);

┌──────────┬────────────────────────────┬─────────────────────┐
│ labels   │ TruePositiveRate_disparity │ Precision_disparity │
│ String   │ Float64                    │ Float64             │
│ Textual  │ Continuous                 │ Continuous          │
├──────────┼────────────────────────────┼─────────────────────┤
│ African  │ -1.0                       │ -0.5                │
│ American │ 0.0                        │ -0.5                │
│ Asian    │ 0.0                        │ 0.0                 │
└──────────┴────────────────────────────┴─────────────────────┘

To use disparity with evaluate function, you have to use the struct Disparity

Fairness.DisparityType
Disparity

Disparity uses the disparity function and holds information about the protected attribute, the reference group, and the custom function. It enables automatic evaluation using MLJ.evaluate.

source
Fairness.DisparityMethod
Disparity(measure, grp=:class, refGrp=nothing, func=/)

Instantiates the struct Disparity.

source
Fairness.DisparitiesFunction
Disparities(measures, grp=:class, refGrp=nothing, func=/)

Creates instances of Disparity struct for each measure in measures.

source
evaluate(model, X, y,
	measure=Disparity(tpr, grp=:sex, refGrp="White", func=-))

evaluate(model, X, y,
	measures=[
	Disparities([tpr, fpr], grp=:sex, refGrp="White", func=-)...,
	MetricWrappers([tpr, ppv], grp=:sex)...])
Fairness.parityFunction
parity(df, ϵ=nothing; func=nothing)

Takes the dataframe df returned by the disparity function and adds a column of parity values for each disparity column, calculated for the measures that were passed to the disparity calculation.

Parity is a boolean value indicating whether a fairness constraint is satisfied by disparity values.

The default fairness criterion for a disparity value x and fairness threshold ϵ for a group is:

(1-ϵ) <= x <= 1/(1-ϵ)

Here ϵ is the fairness threshold, which is required whenever this default criterion is used.

Keywords

  • func=nothing : a single-argument function specifying a custom fairness constraint on disparity, used instead of the default criterion
source
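As a worked check of the default criterion (plain Julia, independent of the package): with ϵ = 0.2 the acceptance band is 0.8 <= x <= 1.25, which matches the parity columns in the example below, where disparity values of 1.0 pass and values near 0.0 fail.

```julia
# Default parity criterion for a disparity value x and threshold ϵ:
#   (1 - ϵ) <= x <= 1/(1 - ϵ)
ϵ = 0.2
satisfies(x) = (1 - ϵ) <= x <= 1 / (1 - ϵ)   # band is 0.8 <= x <= 1.25

satisfies(1.0)       # true:  within [0.8, 1.25]
satisfies(1.0e-15)   # false: below the lower bound
satisfies(0.0)       # false: below the lower bound
```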
julia> parity(df, 0.2);

┌──────────┬────────────────────────────┬─────────────────────┬─────────────────────────┬──────────────────┐
│ labels   │ TruePositiveRate_disparity │ Precision_disparity │ TruePositiveRate_parity │ Precision_parity │
│ String   │ Float64                    │ Float64             │ Bool                    │ Bool             │
│ Textual  │ Continuous                 │ Continuous          │ Count                   │ Count            │
├──────────┼────────────────────────────┼─────────────────────┼─────────────────────────┼──────────────────┤
│ African  │ 1.0e-15                    │ 0.0                 │ false                   │ false            │
│ American │ 1.0                        │ 0.0                 │ true                    │ false            │
│ Asian    │ 1.0                        │ 1.0                 │ true                    │ true             │
└──────────┴────────────────────────────┴─────────────────────┴─────────────────────────┴──────────────────┘
julia> parity(df; func= x-> x > 0.8);

┌──────────┬────────────────────────────┬─────────────────────┬─────────────────────────┬──────────────────┐
│ labels   │ TruePositiveRate_disparity │ Precision_disparity │ TruePositiveRate_parity │ Precision_parity │
│ String   │ Float64                    │ Float64             │ Bool                    │ Bool             │
│ Textual  │ Continuous                 │ Continuous          │ Count                   │ Count            │
├──────────┼────────────────────────────┼─────────────────────┼─────────────────────────┼──────────────────┤
│ African  │ 1.0e-15                    │ 0.0                 │ false                   │ false            │
│ American │ 1.0                        │ 0.0                 │ true                    │ false            │
│ Asian    │ 1.0                        │ 1.0                 │ true                    │ true             │
└──────────┴────────────────────────────┴─────────────────────┴─────────────────────────┴──────────────────┘

BoolMetrics

These metrics return a boolean value. Currently, only demographic parity has been implemented. DemographicParity is provided mainly to illustrate the use of FairTensor; users are instead recommended to use the parity function from FairMetrics, since every boolean metric can easily be obtained from parity.

These metrics are callable structs. The struct has fields for A and C. A corresponds to the matrix on the LHS of the equality-check equation A*z = 0 in Equation 3 of the paper on which this implementation is based. In the paper A is a 1D array, but a 2D matrix is used here to handle fairness across multiple groups.

Initially, the instantiated metric contains 0 and [] as the values of C and A. After the instance is called on a fairness tensor, the values of C and A change as shown below. This allows the same instance to be reused; on reuse, the matrix A need not be generated again because it remains the same, which makes repeated calls faster.

DemographicParity

julia> dp = DemographicParity()
DemographicParity(
    C = 0,
    A = Any[]) @540

julia> dp.A, dp.C # Initial values in struct DemographicParity
(Any[], 0)

julia> dp(ft)
false

julia> dp.A, dp.C # New values in dp (instance of DemographicParity)
([2 0 … -2 0; 1 0 … -2 0; 0 0 … 0 0], 3)

Helper Functions

Fairness._ftIdx is a utility function that is used internally to calculate metrics and can also be helpful for inspecting fairness tensor values.

Fairness._ftIdxFunction
_ftIdx(ft, grp)

Finds the index of grp (a string) in ft.labels, which corresponds to ft.mat. If i is the index returned for grp, then ft[i, :, :] gives the 2D array [[TP, FP], [FN, TN]] for that group.

source
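For example, continuing with the fairness tensor from the earlier examples. This is a sketch that assumes the indexing behaviour described in the docstring above:

```julia
using Fairness

# Same toy data as in the CalcMetrics usage example
ŷ   = categorical([1, 0, 1, 1, 0])
y   = categorical([0, 0, 1, 1, 1])
grp = categorical(["Asian", "African", "Asian", "American", "African"])
ft  = fair_tensor(ŷ, y, grp)

# Index of the "Asian" group in ft.labels
i = Fairness._ftIdx(ft, "Asian")

# 2x2 confusion counts [TP FP; FN TN] for the "Asian" group
ft[i, :, :]
```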