FairSense


This library allows computing global sensitivity indices in the context of fairness measurements. The paper Fairness seen as Global Sensitivity Analysis bridges the gap between global sensitivity analysis (GSA) and fairness: for each sensitivity index, there is a corresponding fairness measure, and vice versa.
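To make this bridge concrete, here is a minimal, self-contained illustration (a standalone sketch, not fairsense code; all names are local to the example): the first-order Sobol index of a binary sensitive attribute on a model's output is built from the same conditional expectations as a statistical-parity-style mean gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a binary sensitive attribute s and a feature x correlated with it.
n = 100_000
s = rng.integers(0, 2, size=n)
x = rng.normal(loc=0.5 * s, scale=1.0)

# A toy "model" that never sees s directly, yet inherits bias through x.
y = 2.0 * x + rng.normal(scale=0.1, size=n)

# First-order Sobol index of s on the model output:
#   S_s = Var(E[y | s]) / Var(y)
group_means = np.array([y[s == g].mean() for g in (0, 1)])
group_probs = np.array([(s == g).mean() for g in (0, 1)])
var_cond_mean = np.sum(group_probs * (group_means - y.mean()) ** 2)
sobol_s = var_cond_mean / y.var()

# The same conditional expectations also yield a classic fairness quantity:
# the mean outcome gap between the two groups (statistical-parity style).
mean_gap = group_means[1] - group_means[0]
```

A nonzero `sobol_s` flags that the sensitive attribute explains part of the output variance even though the model never used it as an input.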


This library is a toolbox which eases the computation of fairness and GSA indices.

πŸ‘‰ The problem

Each index has its own characteristics: some can be applied to continuous variables and some cannot; some handle regression problems while others handle classification problems; some can handle groups of variables and some cannot. Finally, some can only be applied to the predictions of a model, while others can be applied to the errors made by the model.
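As a standalone illustration of these constraints (not library code; all names are local to the example): disparate impact is defined on binary decisions, so a continuous model score must be thresholded before it can be computed, whereas a variance-based index applies to the raw continuous score directly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a binary sensitive attribute s and a continuous model score
# that is slightly shifted for one group.
n = 50_000
s = rng.integers(0, 2, size=n)
score = rng.normal(loc=0.2 * s, scale=1.0)

# Disparate impact needs a *binary* decision, so the continuous score
# must first be thresholded -- the index cannot use the raw score.
decision = (score > 0.0).astype(int)
p_pos = np.array([decision[s == g].mean() for g in (0, 1)])
disparate_impact = p_pos[0] / p_pos[1]

# A variance-based (Sobol-style) index works on the raw continuous score:
#   Var(E[score | s]) / Var(score)
group_means = np.array([score[s == g].mean() for g in (0, 1)])
group_probs = np.array([(s == g).mean() for g in (0, 1)])
var_explained = np.sum(group_probs * (group_means - score.mean()) ** 2)
sobol_like = var_explained / score.var()
```

The two indices quantify the same underlying disparity, but only one of them is applicable without discretizing the output first.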

The objective is therefore to provide a tool to investigate the fairness of an ML problem by computing GSA indices while accounting for these constraints.

πŸš€ The strategy

The library allows formulating a fairness problem, which is stated as follows:

  • a dataset describing the training distribution
  • a model which can be a function or a machine learning model
  • a fairness objective which indicates what should be studied: the intrinsic bias of the dataset, the bias of the model, or the bias of the model's errors

These elements are encapsulated in an object called IndicesInput.

Then it becomes possible to compute GSA indices (in an interchangeable way) using the functions provided in fairsense.indices.

These functions output IndicesOutput objects that encapsulate the values of the indices. These results can finally be visualized with the functions available in the fairsense.visualization module.
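Put together, the workflow described above might look like the following sketch. The IndicesInput / IndicesOutput objects and the fairsense.indices and fairsense.visualization modules are the names given on this page; the function names, import paths, and constructor arguments below are hypothetical placeholders, not the library's confirmed API:

```python
# Hypothetical sketch only -- import paths, function names, and arguments
# are placeholders; consult the fairsense API reference for the real ones.
from fairsense.indices import compute_indices          # placeholder name
from fairsense.visualization import plot_indices       # placeholder name

# 1. State the fairness problem: dataset, model, and fairness objective.
inputs = IndicesInput(                                 # object named on this page
    data=train_df,                                     # the training distribution
    model=trained_model,                               # a function or an ML model
    objective="model_bias",                            # dataset / model / error bias
)

# 2. Compute GSA indices; the index functions are interchangeable because
#    they all consume an IndicesInput and produce an IndicesOutput.
outputs = compute_indices(inputs)

# 3. Visualize the resulting index values.
plot_indices(outputs)
```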

πŸ’» install fairsense

for users

pip install fairsense

for developers

After cloning the repository:

pip install -e .[dev]

to format the code, at the root of the library:

black .

for docs

pip install -e .[docs]

To build the rst files, from the docs folder:

sphinx-apidoc ..\libfairness -o source

then generate the html docs:

make html
Warning: the library must be installed to generate the documentation.

πŸ‘ Contributing

Feel free to propose your ideas or come and contribute with us on the fairsense toolbox! We have a dedicated document that describes in a simple way how to make your first pull request.

πŸ‘€ See Also

More from the DEEL project:

  • Xplique, a Python library exclusively dedicated to explaining neural networks.
  • deel-lip, a Python library for training k-Lipschitz neural networks on TensorFlow.
  • Influenciae, a Python toolkit dedicated to computing influence values for the discovery of potentially problematic samples in a dataset.
  • deel-torchlip, a Python library for training k-Lipschitz neural networks on PyTorch.
  • the DEEL White paper, a summary by the DEEL team of the challenges of certifiable AI and the role of data quality, representativity, and explainability for this purpose.

πŸ™ Acknowledgments

This project received funding from the French "Investing for the Future – PIA3" program within the Artificial and Natural Intelligence Toulouse Institute (ANITI). The authors gratefully acknowledge the support of the DEEL project.

πŸ—žοΈ Citation

If you use fairsense as part of your workflow in a scientific publication, please consider citing our paper:

    @misc{https://doi.org/10.48550/arxiv.2103.04613,
      doi = {10.48550/ARXIV.2103.04613},
      url = {https://arxiv.org/abs/2103.04613},
      author = {Bénesse, Clément and Gamboa, Fabrice and Loubes, Jean-Michel and Boissin, Thibaut},
      keywords = {Statistics Theory (math.ST), Methodology (stat.ME), FOS: Mathematics, FOS: Computer and information sciences},
      title = {Fairness seen as Global Sensitivity Analysis},
      publisher = {arXiv},
      year = {2021},
    }

πŸ“ License

The package is released under the MIT license.

πŸ’£ Disclaimer

To the maximum extent permitted by applicable law, the authors of FairSense shall not be liable for any kind of tangible or intangible damages. In particular, the authors shall not be liable for any incorrect computation of the indices, nor for any biased interpretation of such indices.