# Notes on: Park, M., Jitkrittum, W., & Sejdinovic, D. (2016): K2-ABC: Approximate Bayesian Computation with Kernel Embeddings

## Overview

- Adopt the *[[file:~*org-blog*notes*mathematics*statistics.org::def:maximum-mean-discrepancy][Maximum Mean Discrepancy (MMD)]]* (gretton_2012) as a non-parametric distance between *empirical* distributions of *simulated* and *observed* data
- No need to select a summary statistic *first*, as the kernel embedding itself plays this role
- Apply an *additional* Gaussian smoothing kernel which operates on the corresponding RKHS

## Terminology

- ABC: Approximate Bayesian Computation, a paradigm which enables simulation-based posterior inference when the likelihood is intractable, by measuring the similarity between *simulated* and *observed* data in terms of a chosen statistic.

## Notation

- $\theta$: parameters
- $y = \{ y_i \}_{i=1}^{m}$: generated samples from model with parameters $\theta$
- $y^* = \{ y_i^* \}_{i=1}^{n}$: denotes observed data
- $\mathcal{Y}$: is the domain of the observations
- $\rho$: is a metric on $\mathcal{Y}$

For a probability distribution $P$ on a domain $\mathcal{Y}$, its kernel embedding is defined as

$$ \mu_P = \mathbb{E}_{Y \sim P}\big[ k(\cdot, Y) \big] = \int_{\mathcal{Y}} k(\cdot, y) \, \mathrm{d}P(y), $$

i.e. an element of an RKHS $\mathcal{H}_k$ with an associated kernel $k$

## Background

- Consider cases where computation of the *likelihood* $p(y \mid \theta)$ is *intractable*

## Kernel MMD

We can obtain an unbiased estimator for $\mathrm{MMD}^2$. Given samples $x = \{ x_i \}_{i=1}^{n} \sim P$ and $x' = \{ x'_j \}_{j=1}^{m} \sim Q$,

an *unbiased* estimator of $\mathrm{MMD}^2(P, Q) = \lVert \mu_P - \mu_Q \rVert_{\mathcal{H}_k}^2$ is given by

$$ \widehat{\mathrm{MMD}^2}(x, x') = \frac{1}{n(n-1)} \sum_{i \neq j} k(x_i, x_j) + \frac{1}{m(m-1)} \sum_{i \neq j} k(x'_i, x'_j) - \frac{2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} k(x_i, x'_j). $$
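A minimal sketch of this estimator for 1-D samples, assuming a Gaussian kernel (the bandwidth and helper names are illustrative choices of mine):

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """Gaussian kernel matrix k(a_i, b_j) between 1-D sample arrays."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of MMD^2(P, Q) from samples x ~ P and y ~ Q."""
    n, m = len(x), len(y)
    kxx, kyy, kxy = rbf(x, x, sigma), rbf(y, y, sigma), rbf(x, y, sigma)
    # drop the diagonal k(x_i, x_i) terms: this is what makes the estimator unbiased
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=500)
b = rng.normal(0.0, 1.0, size=500)   # same distribution as a
c = rng.normal(3.0, 1.0, size=500)   # shifted distribution
same = mmd2_unbiased(a, b)
diff = mmd2_unbiased(a, c)
print(same, diff)
```

Dropping the diagonal terms is exactly what distinguishes this from the biased V-statistic estimator; as a result the same-distribution estimate fluctuates around zero and can be slightly negative.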

## K2-ABC

- Given $y^* = \{ y_i^* \}_{i=1}^{n}$ and $y = \{ y_i \}_{i=1}^{m}$ of i.i.d. observations (can be relaxed in practice)
- Non-parametric distance between empirical distributions

Use $\widehat{\mathrm{MMD}^2}$ to measure the distance between $y$ and $y^*$:

$$ \rho(y, y^*) = \widehat{\mathrm{MMD}^2}(y, y^*), $$

i.e. $\rho(y, y^*)$ is an unbiased estimate of $\mathrm{MMD}^2$ between the probability distributions used to generate $y$ and $y^*$

A second kernel $\kappa_\epsilon$, which operates directly on the probability measures, is then used to compute the ABC posterior sample weights,

$$ \tilde{w}_i = \kappa_\epsilon\big( y^{(i)}, y^* \big) = \exp\!\left( - \frac{\widehat{\mathrm{MMD}^2}(y^{(i)}, y^*)}{\epsilon} \right), \qquad w_i = \frac{\tilde{w}_i}{\sum_{j} \tilde{w}_j}, $$

where $y^{(i)}$ is the dataset simulated from the $i$-th parameter draw, with a suitably chosen parameter $\epsilon > 0$.

- Compare datasets using the estimated similarity between their generating distributions
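Putting the pieces together, a minimal sketch of the K2-ABC weighting scheme on a toy model (inferring the mean of a unit-variance Gaussian; the prior, $\epsilon$, bandwidth, and all names are my own illustrative choices, not from the paper):

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """Gaussian kernel matrix between 1-D sample arrays a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of MMD^2 between the distributions generating x and y."""
    n, m = len(x), len(y)
    kxx, kyy, kxy = rbf(x, x, sigma), rbf(y, y, sigma), rbf(x, y, sigma)
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * kxy.mean()

def k2_abc(y_obs, simulate, prior_sample, n_particles=300, eps=0.1, sigma=1.0):
    """Weight prior draws theta_i by exp(-MMD^2_hat(y_i, y*) / eps)."""
    thetas = np.array([prior_sample() for _ in range(n_particles)])
    w = np.array([np.exp(-mmd2_unbiased(simulate(t), y_obs, sigma) / eps)
                  for t in thetas])
    return thetas, w / w.sum()

rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, size=100)   # observed data, true mean = 2
thetas, weights = k2_abc(
    y_obs,
    simulate=lambda t: rng.normal(t, 1.0, size=100),
    prior_sample=lambda: rng.uniform(-5.0, 5.0),
)
post_mean = float(np.sum(weights * thetas))   # weighted posterior mean estimate
print(post_mean)
```

No summary statistics are chosen anywhere: the MMD between the raw simulated and observed datasets drives the weights, and the second (exponential) kernel plays the role of the usual ABC acceptance kernel.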