Resources
Below is a (non-exhaustive) list of resources and fundamental papers we recommend to researchers and practitioners who want to learn more about Trustworthy ML. We categorize our resources as: (i) Introductory, gentle introductions to high-level concepts, including tutorials, textbooks, and course webpages; and (ii) Advanced, deeper dives into specific topics or concepts, with pointers to relevant influential papers.
While we try our best to ensure that this list of resources is up-to-date and comprehensive, it gets hard to keep up with all the great work out there. Did we miss a useful reference or an important paper? Email us at trustworthyml@gmail.com.
Introductory Resources
General
Tutorials & Talks:
Nicolas Papernot. "What does it mean for ML to be trustworthy?" https://www.youtube.com/watch?v=UpGgIqLhaqo
Himabindu Lakkaraju. “Machine Learning for High Stakes Decision Making: Challenges and Opportunities.” https://m.youtube.com/watch?v=nDJWbpXf2M0
Timnit Gebru and Emily Denton. "Tutorial on Fairness Accountability Transparency and Ethics in Computer Vision." CVPR, 2020. https://sites.google.com/view/fatecv-tutorial/schedule
Courses:
Nicolas Papernot. “Trustworthy Machine Learning.” University of Toronto. https://www.papernot.fr/teaching/f19-trustworthy-ml
Trevor Darrell, Dawn Song, and Jacob Steinhardt. “Trustworthy Deep Learning.” University of California, Berkeley. https://berkeley-deep-learning.github.io/cs294-131-s19/
Piotr Mardziel. “Security and Fairness of Deep Learning.” Carnegie Mellon University. https://course.ece.cmu.edu/~ece739/syllabus.html
Kamalika Chaudhuri. “Topics in Trustworthy Machine Learning.” University of California, San Diego. https://cseweb.ucsd.edu/classes/sp20/cse291-b/
Books:
Michael Kearns and Aaron Roth. "The Ethical Algorithm: The Science of Socially Aware Algorithm Design." https://www.amazon.com/Ethical-Algorithm-Science-Socially-Design/dp/0190948205
Kush R. Varshney. "Trustworthy Machine Learning." http://www.trustworthymachinelearning.com/
Articles:
Kush Varshney. "Trustworthy Machine Learning and Artificial Intelligence." XRDS: Crossroads, 2019. https://krvarshney.github.io/pubs/Varshney_xrds2019.pdf
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. "Concrete Problems in AI Safety." https://arxiv.org/abs/1606.06565
Interpretability and Explainability
Tutorials & Talks:
Finale Doshi-Velez. "A Roadmap for the Rigorous Science of Interpretability." https://www.youtube.com/watch?v=MMxZlr_L6YE
Been Kim. "Interpretability - now what?" https://www.youtube.com/watch?v=5w_rgBbwQHw
Zachary Lipton. "Interpretability: of what, for whom, why, and how?" https://www.youtube.com/watch?v=NWkicrRTupo
Courses:
Himabindu Lakkaraju. "Interpretability and Explainability in Machine Learning." Harvard University. https://interpretable-ml-class.github.io/
Books:
Christoph Molnar. "Interpretable Machine Learning - A Guide for Making Black Box Models Explainable." https://christophm.github.io/interpretable-ml-book/
Articles:
Adrian Weller. "Transparency: Motivations and Challenges." https://arxiv.org/abs/1708.01870
Finale Doshi-Velez and Been Kim. "Towards a Rigorous Science of Interpretable Machine Learning." https://arxiv.org/pdf/1702.08608.pdf
Zachary Lipton. "The Mythos of Model Interpretability." https://arxiv.org/pdf/1606.03490.pdf
Cynthia Rudin. "Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead." Nature Machine Intelligence, 2019. https://arxiv.org/abs/1811.10154
Fairness
Tutorials & Talks:
Arvind Narayanan. “Tutorial: 21 fairness definitions and their politics.” https://youtu.be/jIXIuYdnyyk
Solon Barocas and Moritz Hardt. "Tutorial on Fairness in Machine Learning." NeurIPS, 2017. https://fairmlbook.org/tutorial1.html
Sarah Bird, Ben Hutchinson, Sahin Geyik, Krishnaram Kenthapadi, Emre Kiciman, Margaret Mitchell, and Mehrnoosh Sameki. “Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned.” KDD, 2019. https://sites.google.com/view/kdd19-fairness-tutorial
Courses:
Moritz Hardt. "CS 294: Fairness in Machine Learning." University of California, Berkeley. https://fairmlclass.github.io/
Arvind Narayanan. “Fairness in Machine Learning.” Princeton University. https://docs.google.com/document/d/1XnbJXELA0L3CX41MxySdPsZ-HNECxPtAw4-kZRc7OPI/edit
Books:
Solon Barocas, Moritz Hardt, and Arvind Narayanan. “Fairness and machine learning: Limitations and Opportunities.” https://fairmlbook.org/
Articles:
Sam Corbett-Davies and Sharad Goel. "The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning." https://arxiv.org/pdf/1808.00023.pdf
Alexandra Chouldechova and Aaron Roth. "The Frontiers of Fairness in Machine Learning." https://arxiv.org/pdf/1810.08810.pdf
Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. "A Survey on Bias and Fairness in Machine Learning." https://arxiv.org/pdf/1908.09635.pdf
Mengnan Du, Fan Yang, Na Zou, and Xia Hu. "Fairness in Deep Learning: A Computational Perspective." https://arxiv.org/pdf/1908.08843.pdf
Adversarial Machine Learning
Tutorials & Talks:
Zico Kolter and Aleksander Madry. "Adversarial Robustness: Theory and Practice." NeurIPS, 2018. https://adversarial-ml-tutorial.org/
Ian Goodfellow. “Adversarial Examples and Adversarial Training.” Stanford University. https://youtu.be/CIfsB_EYsVI
Aleksander Mądry and Ludwig Schmidt. “A Brief Introduction to Adversarial Examples.” https://gradientscience.org/intro_adversarial/
Ian Goodfellow, Nicolas Papernot, Sandy Huang, Rocky Duan, Pieter Abbeel, and Jack Clark. "Attacking Machine Learning with Adversarial Examples." https://openai.com/blog/adversarial-example-research/
Bo Li, Dawn Song, and Yevgeniy Vorobeychik. "Adversarial Machine Learning Tutorial." AAAI, 2018. https://aaai18adversarial.github.io/index.html#syl
Nicholas Carlini. "Adversarial Machine Learning Reading List." https://nicholas.carlini.com/writing/2018/adversarial-machine-learning-reading-list.html
Articles:
Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. "Adversarial Attacks and Defences: A Survey." https://arxiv.org/abs/1810.00069
Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. "Adversarial Examples: Attacks and Defenses for Deep Learning." https://arxiv.org/abs/1712.07107
Differential Privacy
Tutorials & Talks:
Katrina Ligett. "Differential Privacy: The Tools, The Results, and The Frontier." NeurIPS, 2014. https://www.youtube.com/watch?v=hoEyvHCRRc8
Kamalika Chaudhuri and Anand Sarwate. "Differentially Private Machine Learning: Theory, Algorithms, and Applications." NeurIPS, 2017. https://neurips.cc/Conferences/2017/ScheduleMultitrack?event=8732
Katrina Ligett, Kobbi Nissim, Vitaly Shmatikov, Adam Smith, and Jon Ullman. "Differential Privacy: From Theory to Practice." The 7th BIU Winter School on Cryptography, 2017. http://cyber.biu.ac.il/event/the-7th-biu-winter-school/
Katrina Ligett. “Tutorial on Differential Privacy.” Big Data and Differential Privacy, 2013. https://simons.berkeley.edu/talks/katrina-ligett-2013-12-11
Damien Desfontaines. “A reading list on Differential Privacy.” https://desfontain.es/privacy/differential-privacy-reading-list.html
Courses:
Jonathan Ullman. "Rigorous Approaches to Data Privacy." Northeastern University. http://www.ccs.neu.edu/home/jullman/cs7880s17/syllabus.html
Ashwin Machanavajjhala. "Design of Stable Algorithms for Privacy and Learning." Duke University. https://www2.cs.duke.edu/courses/fall16/compsci590.3/
Aaron Roth. "Differential Privacy in Game Theory and Mechanism Design." University of Pennsylvania. https://www.cis.upenn.edu/~aaroth/courses/gametheoryprivacyS14.html
Salil Vadhan. "Mathematical Approaches to Data Privacy." Harvard University. http://people.seas.harvard.edu/~salil/diffprivcourse/spring13/
Gautam Kamath. "Algorithms for Private Data Analysis." University of Waterloo. http://www.gautamkamath.com/CS860-fa2020.html
Moni Naor. “Foundations of Privacy.” Weizmann Institute of Science. http://www.wisdom.weizmann.ac.il/~naor/COURSE/foundations_of_privacy.html
Articles:
Gautam Kamath and Jonathan Ullman. "A Primer on Private Statistics." https://arxiv.org/abs/2005.00010
Kobbi Nissim, Thomas Steinke, Alexandra Wood, Mark Bun, Marco Gaboardi, David R. O’Brien, and Salil Vadhan. "Differential Privacy: A Primer for a Non-Technical Audience." https://privacytools.seas.harvard.edu/files/privacytools/files/pedagogical-document-dp_0.pdf
Cynthia Dwork, Adam Smith, Thomas Steinke, and Jonathan Ullman. "Exposed! A Survey of Attacks on Private Data." Annual Review of Statistics and Its Application, 2017. https://privacytools.seas.harvard.edu/publications/exposed-survey-attacks-private-data
Other Forums:
DifferentialPrivacy.org. https://differentialprivacy.org/
Causality
Tutorials & Talks:
Ferenc Huszar. "Causal Inference in Everyday Machine Learning." MLSS, 2019. https://www.youtube.com/watch?v=HOgx_SBBzn0 (3 parts)
Amit Sharma and Emre Kiciman. "Tutorial on Causal Inference and Counterfactual Reasoning." ACM KDD, 2018. https://causalinference.gitlab.io/kdd-tutorial/
Jose Ramon Zubizarreta and Sharon-Lise Normand. "Introduction to Causal Inference." HDSI, 2019. https://www.youtube.com/watch?v=jSV052cE5n8
Susan Athey. "Machine Learning and Causal Inference for Policy Evaluation." Harvard CMSA Big Data Conference, 2015. https://www.youtube.com/watch?v=Yx6qXM_rfKQ
Courses:
Jonas Peters. “Lectures on Causality.” MIT. https://www.youtube.com/watch?v=zvrcyqcN9Wo (4 parts)
Robert Ness. "Causality in Machine Learning." Northeastern University. https://bookdown.org/robertness/causalml/docs/
Elena Zheleva. "Causal Inference and Learning." University of Illinois, Chicago. https://www.cs.uic.edu/~elena/courses/fall19/cs594cil.html
Books:
Judea Pearl and Dana Mackenzie. “The Book of Why: The New Science of Cause and Effect.” http://bayes.cs.ucla.edu/WHY/
Judea Pearl, Madelyn Glymour, and Nicholas Jewell. "Causal Inference in Statistics: A Primer." http://bayes.cs.ucla.edu/PRIMER/
Guido Imbens and Donald Rubin. "Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction." https://www.amazon.com/Causal-Inference-Statistics-Biomedical-Sciences/dp/0521885884
Miguel A. Hernán and James M. Robins. "Causal Inference: What If." Boca Raton: Chapman & Hall/CRC. https://cdn1.sph.harvard.edu/wp-content/uploads/sites/1268/2020/07/ci_hernanrobins_31july20.pdf
Emre Kiciman and Amit Sharma. "Causal Reasoning: Fundamentals and Machine Learning Applications." https://causalinference.gitlab.io/
Articles:
Donald Rubin. "Causal Inference Using Potential Outcomes: Design, Modeling, Decisions." American Statistical Association, 2005. https://5harad.com/mse331/papers/rubin_causal_inference.pdf
Judea Pearl. "An Introduction to Causal Inference." The International Journal of Biostatistics, 2010. https://ftp.cs.ucla.edu/pub/stat_ser/r354-corrected-reprint.pdf
Judea Pearl. "The Seven Pillars of Causal Reasoning with Reflections on Machine Learning." Communications of the ACM, 2019. https://ftp.cs.ucla.edu/pub/stat_ser/r481.pdf
Advanced Resources
Interpretability & Explainability
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “Why Should I Trust You? Explaining the Predictions of Any Classifier.” KDD, 2016. https://arxiv.org/abs/1602.04938
Scott M Lundberg and Su-In Lee. “A Unified Approach to Interpreting Model Predictions.” NeurIPS, 2017. https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf
Himabindu Lakkaraju, Stephen H. Bach, and Jure Leskovec. “Interpretable Decision Sets: A Joint Framework for Description and Prediction.” KDD, 2016. https://www-cs-faculty.stanford.edu/people/jure/pubs/interpretable-kdd16.pdf
Berk Ustun and Cynthia Rudin. "Optimized Risk Scores." KDD, 2017. https://canvas.harvard.edu/courses/68154/files/?preview=8630586
Been Kim, Rajiv Khanna, and Oluwasanmi Koyejo. "Examples are not enough, learn to criticize! Criticism for Interpretability." NeurIPS, 2016. https://papers.nips.cc/paper/6300-examples-are-not-enough-learn-to-criticize-criticism-for-interpretability
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. "A Benchmark for Interpretability Methods in Deep Neural Networks." NeurIPS, 2019. https://arxiv.org/abs/1806.10758
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. "Sanity Checks for Saliency Maps." NeurIPS, 2018. https://papers.nips.cc/paper/8160-sanity-checks-for-saliency-maps.pdf
Sandra Wachter, Brent Mittelstadt, and Chris Russell. "Counterfactual Explanations without Opening the Black Box." Harvard Journal of Law and Technology, 2018. https://arxiv.org/pdf/1711.00399.pdf
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)." ICML, 2018. https://arxiv.org/abs/1711.11279
Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. "Faithful and Customizable Explanations of Black Box Models." AIES, 2019. https://web.stanford.edu/~himalv/customizable.pdf
Pang Wei Koh and Percy Liang. "Understanding Black-box Predictions via Influence Functions." ICML, 2017. https://arxiv.org/pdf/1703.04730.pdf
Sarah Tan, Rich Caruana, Giles Hooker, and Yin Lou. "Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation." AIES, 2018. https://arxiv.org/abs/1710.06169
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. "Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods." AIES, 2020. https://arxiv.org/abs/1911.02508
I. Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, and Sorelle A. Friedler. "Problems with Shapley-value-based explanations as feature importance measures." ICML, 2020. https://arxiv.org/abs/2002.11097
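Several of the papers above (LIME and SHAP in particular) explain an individual prediction by fitting a simple surrogate model to the black box in a small neighborhood of the input. The snippet below is a minimal, illustrative sketch of that idea on synthetic data; the random-forest black box, the Gaussian perturbation scheme, and the kernel width are assumptions chosen for the example, not the exact procedure of any one paper.

```python
# LIME-style local surrogate (illustrative sketch, not the reference LIME/SHAP code):
# approximate a black-box model around one instance with a proximity-weighted
# linear model and read off its coefficients as local feature attributions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                                  # instance to explain
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.5, size=(1000, X.shape[1]))    # local perturbations
pz = black_box.predict_proba(Z)[:, 1]                      # black-box outputs

# Proximity weights: perturbations closer to x0 matter more (RBF kernel, width assumed).
dist = np.linalg.norm(Z - x0, axis=1)
weights = np.exp(-(dist ** 2) / 0.75)

surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=weights)
for i, coef in enumerate(surrogate.coef_):
    print(f"feature {i}: local attribution {coef:+.3f}")
```

The signs and magnitudes of the surrogate coefficients play the role of local feature attributions; the papers above differ mainly in how the perturbations, weights, and surrogate family are chosen and what guarantees (if any) they carry.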
Fairness
Solon Barocas and Andrew Selbst. "Big Data's Disparate Impact." California Law Review, 2016. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899
Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, and Aaron Roth. "Fairness in Criminal Justice Risk Assessments: The State of the Art." Sociological Methods and Research, 2018. https://arxiv.org/abs/1703.09207
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. "Fairness through Awareness." ITCS, 2012. https://dl.acm.org/doi/10.1145/2090236.2090255
Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. "Certifying and Removing Disparate Impact." KDD, 2015. https://arxiv.org/abs/1412.3756
Moritz Hardt, Eric Price, and Nathan Srebro. "Equality of Opportunity in Supervised Learning." NeurIPS, 2016. https://arxiv.org/abs/1610.02413
Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. "Inherent Trade-Offs in the Fair Determination of Risk Scores." ITCS, 2017. https://arxiv.org/abs/1609.05807
Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. "On the (im)possibility of fairness." https://arxiv.org/abs/1609.07236
Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. "Algorithmic decision making and the cost of fairness." KDD, 2017. https://arxiv.org/abs/1701.08230
Matt Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. "Counterfactual Fairness." NeurIPS, 2017. https://arxiv.org/abs/1703.06856
Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Scholkopf. "Avoiding Discrimination through Causal Reasoning." NeurIPS, 2017. https://arxiv.org/abs/1706.02744
Flavio Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney. "Optimized Pre-Processing for Discrimination Prevention." NeurIPS, 2017. https://arxiv.org/abs/1704.03354
Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings.” NeurIPS, 2016. https://arxiv.org/abs/1607.06520
Joy Buolamwini and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” FAT*, 2018. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
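Most of the group fairness criteria studied above (demographic parity, equalized odds, and their relatives) boil down to comparing conditional rates of a classifier's decisions across groups. As a rough illustration, the sketch below computes two such gaps on made-up predictions; the random labels and the binary sensitive attribute are purely hypothetical.

```python
# Demographic-parity and equal-opportunity gaps for a binary classifier.
# Toy data; `group` stands in for a hypothetical binary sensitive attribute.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)          # ground-truth labels (toy)
y_pred = rng.integers(0, 2, size=1000)          # model decisions (placeholder)
group = rng.integers(0, 2, size=1000)           # sensitive attribute (0 or 1)

def selection_rate(pred, mask):
    """P(Yhat = 1) within the masked group."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """P(Yhat = 1 | Y = 1) within the masked group."""
    pos = mask & (true == 1)
    return pred[pos].mean()

# Demographic parity: selection rates should match across groups.
dp_gap = abs(selection_rate(y_pred, group == 0) - selection_rate(y_pred, group == 1))

# Equal opportunity (in the sense of Hardt et al., 2016): true-positive rates should match.
eo_gap = abs(true_positive_rate(y_true, y_pred, group == 0)
             - true_positive_rate(y_true, y_pred, group == 1))

print(f"demographic parity gap: {dp_gap:.3f}")
print(f"equal opportunity gap:  {eo_gap:.3f}")
```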
Adversarial Machine Learning
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. “Explaining and Harnessing Adversarial Examples.” ICLR, 2015. https://arxiv.org/abs/1412.6572
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. “Intriguing properties of neural networks.” ICLR, 2014. https://arxiv.org/abs/1312.6199
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Mądry. "Adversarial Examples are Not Bugs, They are Features." NeurIPS, 2019. https://arxiv.org/abs/1905.02175
Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. "The Limitations of Deep Learning in Adversarial Settings." IEEE European Symposium on Security and Privacy, 2016. https://arxiv.org/abs/1511.07528
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. "DeepFool: a simple and accurate method to fool deep neural networks." CVPR, 2016. https://arxiv.org/abs/1511.04599
Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. “Evasion Attacks against Machine Learning at Test Time.” ECML PKDD, 2013. https://arxiv.org/abs/1708.06131
Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischoff. "On Detecting Adversarial Perturbations." ICLR, 2017. https://arxiv.org/abs/1702.04267
Aleksander Mądry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. “Towards Deep Learning Models Resistant to Adversarial Attacks.” ICLR, 2018. https://arxiv.org/abs/1706.06083
Nicolas Ford, Justin Gilmer, Nicholas Carlini, and Ekin D. Cubuk. "Adversarial Examples are a Natural Consequence of Test Error in Noise." ICML, 2019. https://arxiv.org/abs/1901.10513
Jeremy Cohen, Elan Rosenfeld, and J. Zico Kolter. "Certified Adversarial Robustness via Randomized Smoothing." ICML, 2019. https://arxiv.org/abs/1902.02918
Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. "Certified Defenses Against Adversarial Examples." ICLR, 2018. https://arxiv.org/pdf/1801.09344.pdf
Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. "Certified Robustness to Adversarial Examples with Differential Privacy." IEEE Symposium on Security and Privacy, 2019. https://arxiv.org/abs/1802.03471
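The fast gradient sign method (FGSM) from Goodfellow et al. above is the quickest way to see adversarial examples first-hand: move the input a small step in the direction of the sign of the loss gradient. Below is a minimal sketch of FGSM applied to a plain logistic regression on synthetic data; the model choice and the perturbation budget eps are illustrative assumptions, and a larger budget may be needed to actually flip a given prediction.

```python
# Fast Gradient Sign Method (FGSM): x_adv = x + eps * sign(d loss / d x).
# Toy NumPy sketch against a logistic regression on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, label = X[0], y[0]
# For a linear model, the gradient of the cross-entropy loss w.r.t. the input x
# is (sigmoid(w.x + b) - y) * w.
grad_x = (sigmoid(w @ x + b) - label) * w

eps = 0.3                                      # perturbation budget (assumed)
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", clf.predict(x.reshape(1, -1))[0])
print("adversarial prediction:", clf.predict(x_adv.reshape(1, -1))[0])
```

The attacks and defenses listed above differ in how this one-step idea is iterated, constrained, or certified against, but the underlying threat model (small perturbations that change the prediction) is the same.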
Differential Privacy
Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. “Calibrating Noise to Sensitivity in Private Data Analysis.” TCC, 2006. https://people.csail.mit.edu/asmith/PS/sensitivity-tcc-final.pdf
Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. “Deep Learning with Differential Privacy.” CCS, 2016. https://arxiv.org/abs/1607.00133
Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. “Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data.” ICLR, 2017. https://arxiv.org/abs/1610.05755
Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. “RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response.” CCS, 2014. https://arxiv.org/abs/1407.6981
Andrea Bittau, Úlfar Erlingsson, Petros Maniatis, Ilya Mironov, and Ananth Raghunathan. “Prochlo: Strong Privacy for Analytics in the Crowd.” SOSP, 2017. https://arxiv.org/abs/1710.00901
Royce J Wilson, Celia Yuxin Zhang, William Lam, Damien Desfontaines, Daniel Simmons-Marengo, and Bryant Gipson. “Differentially Private SQL with Bounded User Contribution.” PET, 2020. https://arxiv.org/abs/1909.01917
Simson L. Garfinkel, John M. Abowd, and Sarah Powazek. “Issues Encountered Deploying Differential Privacy.” WPES, 2018. https://arxiv.org/abs/1809.02201
David Sommer, Sebastian Meiser, and Esfandiar Mohammadi. “Privacy Loss Classes: The Central Limit Theorem in Differential Privacy.” PET, 2019. https://eprint.iacr.org/2018/820
Sebastian Meiser and Esfandiar Mohammadi. “Tight on Budget? Tight Bounds for r-Fold Approximate Differential Privacy.” CCS, 2018. https://eprint.iacr.org/2017/1034
Jinshuo Dong, Aaron Roth, and Weijie J. Su. “Gaussian Differential Privacy.” https://arxiv.org/abs/1905.02383
Michael Carl Tschantz, Shayak Sen, and Anupam Datta. “SoK: Differential Privacy as a Causal Property.” IEEE Symposium on Security and Privacy, 2020. https://arxiv.org/abs/1710.05899
Reza Shokri and Vitaly Shmatikov. “Privacy-Preserving Deep Learning.” CCS, 2015. https://www.comp.nus.edu.sg/~reza/files/Shokri-CCS2015.pdf
Ilya Mironov. “Rényi Differential Privacy.” CSF, 2017. https://arxiv.org/abs/1702.07476
Vitaly Feldman, Ilya Mironov, Kunal Talwar, and Abhradeep Thakurta. “Privacy Amplification by Iteration.” FOCS, 2018. https://arxiv.org/abs/1808.06651
Or Sheffet. “Differentially Private Ordinary Least Squares.” JPC, 2019. https://arxiv.org/abs/1507.02482
Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. “Differentially Private Empirical Risk Minimization.” JMLR, 2011. https://arxiv.org/abs/0912.0071
Raef Bassily, Adam Smith, and Abhradeep Thakurta. “Differentially Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds.” FOCS, 2014. https://arxiv.org/abs/1405.7085
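The Dwork-McSherry-Nissim-Smith paper above introduces the Laplace mechanism: to release a statistic with sensitivity Δ under ε-differential privacy, add Laplace noise with scale Δ/ε. The sketch below applies it to a counting query, whose sensitivity is 1; the toy dataset, the predicate, and the choice of ε values are assumptions for illustration only.

```python
# Laplace mechanism for a counting query. A count has sensitivity 1 (adding or
# removing one record changes it by at most 1), so Laplace noise with scale
# 1/epsilon yields epsilon-differential privacy.
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Return a noisy count of records satisfying `predicate`."""
    true_count = sum(1 for record in data if predicate(record))
    sensitivity = 1.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=10_000)       # toy dataset (assumed)

for epsilon in (0.1, 1.0, 10.0):
    noisy = laplace_count(ages, lambda a: a >= 65, epsilon, rng)
    print(f"epsilon={epsilon:>4}: noisy count of 65+ = {noisy:.1f}")
```

Smaller ε means stronger privacy and noisier answers; the DP-SGD, PATE, and RAPPOR papers above can be read as increasingly elaborate ways of spending such a privacy budget on learning tasks.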
Causality
Susan Athey and Guido Imbens. “The state of applied econometrics: Causality and policy evaluation.” Journal of Economic Perspectives, 2017. https://arxiv.org/abs/1607.00699
Hal R. Varian. “Causal inference in economics and marketing.” PNAS, 2016. https://www.pnas.org/content/pnas/113/27/7310.full.pdf
Joshua D. Angrist. “Treatment Effect Heterogeneity in Theory and Practice.” The Economic Journal, 2004. https://www.nber.org/papers/w9708.pdf
Elias Bareinboim and Judea Pearl. “Causal inference and the data fusion problem.” PNAS, 2016. https://www.pnas.org/content/pnas/113/27/7345.full.pdf
Elias Bareinboim, Jin Tian, and Judea Pearl. “Recovering from selection bias in causal and statistical inference.” AAAI, 2014. https://ftp.cs.ucla.edu/pub/stat_ser/r425.pdf
Bernhard Scholkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, and Joris Mooij. “On Causal and Anticausal Learning.” ICML, 2012. https://icml.cc/2012/papers/625.pdf
Krzysztof Chalupka, Pietro Perona, and Frederick Eberhardt. “Multi-level cause-effect systems.” AISTATS, 2016. https://arxiv.org/abs/1512.07942
Sander Beckers, Frederick Eberhardt, and Joseph Y. Halpern. “Approximate causal abstraction.” UAI, 2019. https://arxiv.org/abs/1906.11583
Elizabeth Stuart. “Matching methods for causal inference: a review and look forward.” Statistical Science, 2010. https://projecteuclid.org/download/pdfview_1/euclid.ss/1280841730
Sören R. Künzel, Jasjeet S. Sekhon, Peter J. Bickel, and Bin Yu. “Metalearners for estimating heterogeneous treatment effects using machine learning.” PNAS, 2019. https://arxiv.org/abs/1706.03461
Stefan Wager and Susan Athey. “Estimation and Inference of Heterogeneous Treatment Effects using Random Forests.” JASA, 2018. https://arxiv.org/abs/1510.04342
Peng Ding and Fan Li. “Causal inference: a missing data perspective.” Statistical Science, 2018. https://arxiv.org/abs/1712.06170
James M. Robins, Miguel A. Hernán, and Babette Brumback. “Marginal structural models and causal inference in epidemiology.” Epidemiology, 2000. https://www.stat.ubc.ca/~john/papers/RobinsEpi2000.pdf
Kosuke Imai, Luke Keele, and Dustin Tingley. “A general approach to causal mediation analysis.” Psychological Methods, 2010. https://imai.fas.harvard.edu/research/files/BaronKenny.pdf
Elizabeth L. Ogburn and Tyler J. VanderWeele. “Causal diagrams for interference.” Statistical Science, 2014. https://arxiv.org/abs/1403.1239
Alexander D'Amour. “On multi-cause causal inference with unobserved confounding: Counterexamples, impossibility, and alternatives.” AISTATS, 2019. https://arxiv.org/abs/1902.10286
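A theme running through several of the papers above is estimating an average treatment effect (ATE) from observational data in which treatment assignment depends on observed covariates. The sketch below contrasts a naive difference in means with an inverse propensity weighting (IPW) estimate on synthetic data whose true effect is 2.0; the data-generating process and the logistic propensity model are assumptions made purely for illustration.

```python
# Inverse propensity weighting (IPW) estimate of an average treatment effect
# on synthetic observational data with a known true effect of 2.0.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                          # observed confounder
p_treat = 1.0 / (1.0 + np.exp(-x))              # treatment probability depends on x
t = rng.binomial(1, p_treat)                    # treatment assignment
y = 2.0 * t + 1.5 * x + rng.normal(size=n)      # outcome; true ATE = 2.0

# Naive difference in means is confounded by x (biased upward here).
naive = y[t == 1].mean() - y[t == 0].mean()

# Fit a propensity model P(T = 1 | x), then reweight outcomes by 1/e(x).
e_hat = LogisticRegression().fit(x.reshape(-1, 1), t).predict_proba(x.reshape(-1, 1))[:, 1]
ipw = np.mean(t * y / e_hat) - np.mean((1 - t) * y / (1 - e_hat))

print(f"naive difference in means: {naive:.2f}")
print(f"IPW estimate of the ATE:   {ipw:.2f}")   # should land close to 2.0
```

The matching, metalearner, and random-forest papers above are alternative routes to the same target when the simple weighting estimator is too noisy or the confounding structure is more complex.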