THIS FUNCTION IS DEPRECATED. It will be removed after 2018-10-01.
Instructions for updating:
The tf.contrib.bayesflow library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). Use tfp.monte_carlo.expectation instead.

This function computes the Monte-Carlo approximation of an expectation, i.e.,

\(E_p[f(X)] \approx m^{-1} \sum_{j=1}^m f(x_j), \quad x_j \stackrel{iid}{\sim} p\)

However, if p is not reparameterized, TensorFlow's gradient will be incorrect
since the chain rule stops at samples of non-reparameterized distributions.
(The non-differentiated result, approx_expectation, is the same regardless
of use_reparametrization.) In this circumstance, using the Score-Gradient
trick results in an unbiased gradient, i.e.,

\(\nabla_\theta E_{p(X;\theta)}[f(X)] = E_{p(X;\theta)}[f(X)\,\nabla_\theta \log p(X;\theta)]\)
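As a hypothetical numerical check (plain Python, not part of this API), the Score-Gradient identity can be verified for a Normal location parameter, where the score is \(\nabla_\mu \log p(x;\mu) = x - \mu\):

```python
import random

random.seed(0)

# Hypothetical sketch: verify the score-gradient identity for
# p = Normal(mu, 1), where d/dmu log p(x; mu) = x - mu.
# With f(x) = x we have E_p[f(X)] = mu, so the true gradient wrt mu is 1.
mu = 0.5
n = 200_000
samples = [random.gauss(mu, 1.0) for _ in range(n)]

def f(x):
    return x

# Score-gradient (likelihood-ratio) estimator:
# average of f(x) * d/dmu log p(x; mu) over iid samples from p.
score_grad = sum(f(x) * (x - mu) for x in samples) / n
```

With 200,000 samples the estimator lands close to the true gradient of 1; this is the same quantity TensorFlow recovers by differentiating the score-surrogate when use_reparametrization=False.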

Args:

f: Python callable which can return f(samples).

samples: Tensor of samples used to form the Monte-Carlo approximation of
\(E_p[f(X)]\). A batch of samples should be indexed by axis
dimensions.

log_prob: Python callable which can return log_prob(samples). Must
correspond to the natural-logarithm of the pdf/pmf of each sample. Only
required/used if use_reparametrization=False.
Default value: None.

use_reparametrization: Python bool indicating that the approximation
should differentiate through the samples directly; this gradient is
unbiased only when samples are drawn from a reparameterized
distribution. Whether True or False, this arg only affects the gradient
of the resulting approx_expectation.
Default value: True.

axis: The dimensions to average. If None, averages all
dimensions.
Default value: 0 (the left-most dimension).
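The approximation itself can be sketched in plain Python (a hypothetical stand-in for the real API, which operates on Tensors and supports the log_prob and axis arguments documented above):

```python
import random

random.seed(0)

def expectation(f, samples):
    """Minimal sketch of the default axis=0 behavior: apply f to each
    sample drawn iid from p, then average. Hypothetical helper, not the
    TensorFlow function itself."""
    values = [f(x) for x in samples]
    return sum(values) / len(values)

# Estimate E[X^2] for X ~ Normal(0, 1); the true value is 1.
samples = [random.gauss(0.0, 1.0) for _ in range(200_000)]
approx = expectation(lambda x: x * x, samples)
```

Because f is differentiable through these samples only when the sampling path is reparameterized, this direct average corresponds to the use_reparametrization=True case.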