aepsych.acquisition

Submodules

aepsych.acquisition.bvn module

aepsych.acquisition.bvn.bvn_cdf(xu, yu, r)[source]

Evaluate the bivariate normal CDF.

WARNING: Implements only the routine for moderate levels of correlation. Will be inaccurate and should not be used for correlations larger than 0.925.

Standard (mean 0, var 1) bivariate normal distribution with correlation r, evaluated from -inf to xu and -inf to yu.

Based on the function developed by Alan Genz (http://www.math.wsu.edu/faculty/genz/software/matlab/bvn.m), which is in turn based on Drezner, Z. and G.O. Wesolowsky (1989), On the computation of the bivariate normal integral, Journal of Statist. Comput. Simul. 35, pp. 101-107.

Parameters
  • xu (torch.Tensor) – Upper limits for cdf evaluation in x

  • yu (torch.Tensor) – Upper limits for cdf evaluation in y

  • r (torch.Tensor) – BVN correlation

Return type

torch.Tensor

Returns: Tensor of cdf evaluations of same size as xu, yu, and r.
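
Example (an illustrative usage sketch, assuming aepsych and torch are installed; the tensor values are arbitrary). With r = 0 the bivariate CDF factorizes into a product of univariate normal CDFs, which gives a quick sanity check:

import torch
from aepsych.acquisition.bvn import bvn_cdf

xu = torch.tensor([0.0, 1.0, -0.5])
yu = torch.tensor([0.0, 0.5, 1.5])
r = torch.tensor([0.0, 0.3, 0.6])

p = bvn_cdf(xu, yu, r)  # same shape as the inputs

# Sanity check: at r = 0, P(X <= xu, Y <= yu) = Phi(xu) * Phi(yu).
phi = torch.distributions.Normal(0.0, 1.0).cdf
print(p[0].item(), (phi(xu[0]) * phi(yu[0])).item())  # both ~0.25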

aepsych.acquisition.lookahead module

aepsych.acquisition.lookahead.Hb(p)[source]

Binary entropy.

Parameters

p (torch.Tensor) – Tensor of probabilities.

Returns: Binary entropy for each probability.
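
For reference, binary entropy can be sketched directly in torch (shown here in bits, i.e. base-2 log; the log base used internally is an implementation detail):

import torch

def binary_entropy(p):
    # H(p) = -p*log2(p) - (1 - p)*log2(1 - p), with 0*log(0) treated as 0.
    return -torch.nan_to_num(p * torch.log2(p)) - torch.nan_to_num((1 - p) * torch.log2(1 - p))

p = torch.tensor([0.1, 0.5, 0.9])
print(binary_entropy(p))  # tensor([0.4690, 1.0000, 0.4690])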

aepsych.acquisition.lookahead.MI_fn(Px, P1, P0, py1)[source]

Average mutual information: H(p) - E_{y*}[H(p | y*)].

Parameters
  • Px (torch.Tensor) – (b x m) Level-set posterior before observation

  • P1 (torch.Tensor) – (b x m) Level-set posterior given observation of 1

  • P0 (torch.Tensor) – (b x m) Level-set posterior given observation of 0

  • py1 (torch.Tensor) – (b x 1) Probability of observing 1

Return type

torch.Tensor

Returns: (b) tensor of mutual information averaged over Xq.
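
A sketch of this computation in terms of the documented Hb helper; shapes follow the parameter descriptions, and this is an illustration of the formula rather than the exact implementation:

import torch
from aepsych.acquisition.lookahead import Hb

def mi_fn_sketch(Px, P1, P0, py1):
    # H(p) - E_{y*}[H(p | y*)], then averaged over the m points in Xq.
    H_prior = Hb(Px)                            # (b x m)
    H_post = py1 * Hb(P1) + (1 - py1) * Hb(P0)  # (b x m), py1 broadcasts from (b x 1)
    return (H_prior - H_post).mean(dim=-1)      # (b,)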

aepsych.acquisition.lookahead.ClassErr(p)[source]

Expected classification error, min(p, 1-p).

Parameters

p (torch.Tensor) – Tensor of probabilities.

Return type

torch.Tensor

aepsych.acquisition.lookahead.SUR_fn(Px, P1, P0, py1)[source]

Stepwise uncertainty reduction.

Expected reduction in expected classification error given observation at Xstar, averaged over Xq.

Parameters
  • Px (torch.Tensor) – (b x m) Level-set posterior before observation

  • P1 (torch.Tensor) – (b x m) Level-set posterior given observation of 1

  • P0 (torch.Tensor) – (b x m) Level-set posterior given observation of 0

  • py1 (torch.Tensor) – (b x 1) Probability of observing 1

Return type

torch.Tensor

Returns: (b) tensor of SUR values.
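
A sketch of the same quantity written with the documented ClassErr helper (an illustration of the formula, not necessarily the exact implementation):

import torch
from aepsych.acquisition.lookahead import ClassErr

def sur_fn_sketch(Px, P1, P0, py1):
    # Expected reduction in expected classification error, averaged over Xq.
    err_prior = ClassErr(Px)                                   # (b x m)
    err_post = py1 * ClassErr(P1) + (1 - py1) * ClassErr(P0)   # (b x m)
    return (err_prior - err_post).mean(dim=-1)                 # (b,)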

aepsych.acquisition.lookahead.EAVC_fn(Px, P1, P0, py1)[source]

Expected absolute volume change (EAVC).

Expected absolute change in expected level-set volume given observation at Xstar.

Parameters
  • Px (torch.Tensor) – (b x m) Level-set posterior before observation

  • P1 (torch.Tensor) – (b x m) Level-set posterior given observation of 1

  • P0 (torch.Tensor) – (b x m) Level-set posterior given observation of 0

  • py1 (torch.Tensor) – (b x 1) Probability of observing 1

Return type

torch.Tensor

Returns: (b) tensor of EAVC values.
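
A sketch of the formula, approximating level-set volume by the average of the level-set posterior over the m reference points (illustrative; the exact reduction over Xq may differ):

import torch

def eavc_fn_sketch(Px, P1, P0, py1):
    vol_prior = Px.mean(dim=-1)   # (b,) volume before observation
    vol_1 = P1.mean(dim=-1)       # (b,) volume if a 1 is observed
    vol_0 = P0.mean(dim=-1)       # (b,) volume if a 0 is observed
    py1 = py1.squeeze(-1)         # (b,)
    # Expected absolute change in volume under the two possible outcomes.
    return py1 * (vol_1 - vol_prior).abs() + (1 - py1) * (vol_0 - vol_prior).abs()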

class aepsych.acquisition.lookahead.GlobalLookaheadAcquisitionFunction(model, target, query_set_size=None, Xq=None)[source]

Bases: botorch.acquisition.acquisition.AcquisitionFunction

A global look-ahead acquisition function.

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

  • Xq (Optional[torch.Tensor]) – (m x d) global reference set.

  • query_set_size (Optional[int]) – Size of the global reference set to generate when Xq is not provided.

Return type

None

forward(X)[source]

Evaluate acquisition function at X.

Parameters

X (torch.Tensor) – (b x 1 x d) points at which to evaluate the acquisition function.

Return type

torch.Tensor

Returns: (b) tensor of acquisition values.

training: bool
class aepsych.acquisition.lookahead.GlobalMI(model, target, query_set_size=None, Xq=None)[source]

Bases: aepsych.acquisition.lookahead.GlobalLookaheadAcquisitionFunction

A global look-ahead acquisition function based on mutual information.

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

  • Xq (Optional[torch.Tensor]) – (m x d) global reference set.

  • query_set_size (Optional[int]) –

Return type

None

training: bool
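
Example (an illustrative sketch only). A botorch SingleTaskGP stands in for a fitted AEPsych model, and the reference set Xq is passed explicitly; sizes and values are arbitrary:

import torch
from botorch.models import SingleTaskGP
from aepsych.acquisition.lookahead import GlobalMI

# Toy fitted GP on random 2-d data, purely for illustration.
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = torch.rand(20, 1, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)

Xq = torch.rand(128, 2, dtype=torch.double)  # (m x d) global reference set
acqf = GlobalMI(model=model, target=0.75, Xq=Xq)

X = torch.rand(5, 1, 2, dtype=torch.double)  # (b x 1 x d) candidate points
values = acqf(X)                             # (b,) acquisition values, per the docs above
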
class aepsych.acquisition.lookahead.GlobalSUR(model, target, query_set_size=None, Xq=None)[source]

Bases: aepsych.acquisition.lookahead.GlobalLookaheadAcquisitionFunction

A global look-ahead acquisition function based on stepwise uncertainty reduction (SUR).

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

  • Xq (Optional[torch.Tensor]) – (m x d) global reference set.

  • query_set_size (Optional[int]) –

Return type

None

training: bool
class aepsych.acquisition.lookahead.ApproxGlobalSUR(model, target, query_set_size=None, Xq=None)[source]

Bases: aepsych.acquisition.lookahead.GlobalSUR

A global look-ahead acquisition function based on stepwise uncertainty reduction (SUR), computed with an approximate look-ahead posterior.

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

  • Xq (Optional[torch.Tensor]) – (m x d) global reference set.

  • query_set_size (Optional[int]) –

Return type

None

training: bool
class aepsych.acquisition.lookahead.EAVC(model, target, query_set_size=None, Xq=None)[source]

Bases: aepsych.acquisition.lookahead.GlobalLookaheadAcquisitionFunction

A global look-ahead acquisition function based on expected absolute volume change (EAVC).

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

  • Xq (Optional[torch.Tensor]) – (m x d) global reference set.

  • query_set_size (Optional[int]) –

Return type

None

training: bool
class aepsych.acquisition.lookahead.LocalLookaheadAcquisitionFunction(model, target)[source]

Bases: botorch.acquisition.acquisition.AcquisitionFunction

A localized look-ahead acquisition function.

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

Return type

None

forward(X)[source]

Evaluate acquisition function at X.

Parameters

X (torch.Tensor) – (b x 1 x d) points at which to evaluate the acquisition function.

Return type

torch.Tensor

Returns: (b) tensor of acquisition values.

training: bool
class aepsych.acquisition.lookahead.LocalMI(model, target)[source]

Bases: aepsych.acquisition.lookahead.LocalLookaheadAcquisitionFunction

A localized look-ahead acquisition function based on mutual information.

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

Return type

None

training: bool
class aepsych.acquisition.lookahead.LocalSUR(model, target)[source]

Bases: aepsych.acquisition.lookahead.LocalLookaheadAcquisitionFunction

A localized look-ahead acquisition function based on stepwise uncertainty reduction (SUR).

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

Return type

None

training: bool

aepsych.acquisition.lookahead_utils module

aepsych.acquisition.lookahead_utils.posterior_at_xstar_xq(model, Xstar, Xq)[source]

Evaluate the posteriors of f at single point Xstar and set of points Xq.

Parameters
  • model (gpytorch.models.gp.GP) – The model to evaluate.

  • Xstar (torch.Tensor) – (b x 1 x d) tensor.

  • Xq (torch.Tensor) – (b x m x d) tensor.

Returns

A tuple (Mu_s, Sigma2_s, Mu_q, Sigma2_q, Sigma_sq), where Mu_s is the (b x 1) mean at Xstar, Sigma2_s the (b x 1) variance at Xstar, Mu_q the (b x m) mean at Xq, Sigma2_q the (b x m) variance at Xq, and Sigma_sq the (b x m) covariance between Xstar and each point in Xq.

Return type

Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]

aepsych.acquisition.lookahead_utils.lookahead_at_xstar(model, Xstar, Xq, gamma)[source]

Evaluate the look-ahead level-set posterior at Xq given observation at xstar.

Parameters
  • model (gpytorch.models.gp.GP) – The model to evaluate.

  • Xstar (torch.Tensor) – (b x 1 x d) observation point.

  • Xq (torch.Tensor) – (b x m x d) reference points.

  • gamma (float) – Threshold in f-space.

Returns

A tuple (Px, P1, P0, py1), where Px is the (b x m) level-set posterior at Xq before observation at xstar, P1 the (b x m) level-set posterior at Xq given an observation of 1 at xstar, P0 the (b x m) level-set posterior at Xq given an observation of 0 at xstar, and py1 the (b x 1) probability of observing 1 at xstar.

Return type

Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]
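
These outputs plug directly into MI_fn, SUR_fn, and EAVC_fn from the lookahead module. An illustrative sketch, reusing the toy SingleTaskGP from the GlobalMI example above and converting a p-space target to an f-space threshold via a probit link (the conversion is an assumption; other links imply a different gamma):

import torch
from aepsych.acquisition.lookahead import SUR_fn
from aepsych.acquisition.lookahead_utils import lookahead_at_xstar

target = 0.75
gamma = torch.distributions.Normal(0.0, 1.0).icdf(torch.tensor(target)).item()

Xstar = torch.rand(5, 1, 2, dtype=torch.double)  # (b x 1 x d) candidate observation points
Xq = torch.rand(5, 64, 2, dtype=torch.double)    # (b x m x d) reference points

Px, P1, P0, py1 = lookahead_at_xstar(model, Xstar, Xq, gamma)
scores = SUR_fn(Px, P1, P0, py1)                 # (b,) look-ahead scores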

aepsych.acquisition.lookahead_utils.approximate_lookahead_at_xstar(model, Xstar, Xq, gamma)[source]

The look-ahead posterior approximation of Lyu et al.

Parameters
  • model (gpytorch.models.gp.GP) – The model to evaluate.

  • Xstar (torch.Tensor) – (b x 1 x d) observation point.

  • Xq (torch.Tensor) – (b x m x d) reference points.

  • gamma (float) – Threshold in f-space.

Returns

A tuple (Px, P1, P0, py1), where Px is the (b x m) level-set posterior at Xq before observation at xstar, P1 the (b x m) level-set posterior at Xq given an observation of 1 at xstar, P0 the (b x m) level-set posterior at Xq given an observation of 0 at xstar, and py1 the (b x 1) probability of observing 1 at xstar.

Return type

Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]

aepsych.acquisition.lse module

class aepsych.acquisition.lse.MCLevelSetEstimation(model, target, beta, objective=None, sampler=None)[source]

Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction

Monte Carlo level set estimation.

Parameters
  • model (botorch.models.model.Model) – A fitted model.

  • target (Union[float, torch.Tensor]) – The level set (after objective transform) to be estimated.

  • beta (Union[float, torch.Tensor]) – A parameter that governs the explore-exploit tradeoff.

  • objective (Optional[botorch.acquisition.objective.MCAcquisitionObjective]) – An MCAcquisitionObjective representing the link function (e.g., logistic or probit) applied to the samples. Can be implemented via GenericMCObjective.

  • sampler (Optional[botorch.sampling.samplers.MCSampler]) – The sampler used for drawing MC samples.

Return type

None

acquisition(obj_samples)[source]

Evaluate the acquisition based on objective samples.

Usually you should not call this directly unless you are subclassing this class and modifying how objective samples are generated.

Parameters

obj_samples (torch.Tensor) – Samples from the model, transformed by the objective. Should be samples x batch_shape.

Returns

Acquisition function at the sampled values.

Return type

torch.Tensor
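
For intuition, one common straddle-style form of this acquisition, computed directly from objective samples (a sketch of the idea, not necessarily the exact reduction this class implements):

import torch

def straddle_sketch(obj_samples, target, beta=3.84):
    # obj_samples: samples x batch_shape, already passed through the objective.
    mean = obj_samples.mean(dim=0)
    std = obj_samples.std(dim=0)
    # Large where the posterior is uncertain and its mean is close to the target level.
    return beta ** 0.5 * std - (mean - target).abs()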

forward(X)[source]

Evaluate the acquisition function.

Parameters

X (torch.Tensor) – Points at which to evaluate.

Returns

Value of the acquisition function at these points.

Return type

torch.Tensor

training: bool

aepsych.acquisition.mc_posterior_variance module

aepsych.acquisition.mc_posterior_variance.balv_acq(obj_samps)[source]

Evaluate BALV (posterior variance) on a set of objective samples.

Parameters

obj_samps (torch.Tensor) – Samples from the GP, transformed by the objective. Should be samples x batch_shape.

Returns

Acquisition function value.

Return type

torch.Tensor
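
BALV is essentially the sample variance of the objective samples at each point; a minimal sketch (the exact reduction over output dimensions may differ):

import torch

def balv_sketch(obj_samps):
    # obj_samps: samples x batch_shape; score is the posterior variance
    # of the objective-transformed GP at each batch point.
    return obj_samps.var(dim=0)

obj_samps = torch.rand(64, 10)        # 64 MC samples at 10 candidate points
print(balv_sketch(obj_samps).shape)   # torch.Size([10])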

class aepsych.acquisition.mc_posterior_variance.MCPosteriorVariance(model, objective=None, sampler=None)[source]

Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction

Posterior variance, computed from MC samples so that an objective/transform can be applied.

Posterior Variance of Link Function

Parameters
  • model (botorch.models.model.Model) – A fitted model.

  • objective (Optional[botorch.acquisition.objective.MCAcquisitionObjective]) – An MCAcquisitionObjective representing the link function (e.g., logistic or probit) applied to the difference of two (usually 1-d) samples. Can be implemented via GenericMCObjective.

  • sampler (Optional[botorch.sampling.samplers.MCSampler]) – The sampler used for drawing MC samples.

Return type

None

forward(X)[source]

Evaluate MCPosteriorVariance on the candidate set X.

Parameters

X (torch.Tensor) – A batch_size x q x d-dim Tensor

Returns

Posterior variance of link function at X that active learning hopes to maximize

Return type

torch.Tensor

acquisition(obj_samples)[source]
Parameters

obj_samples (torch.Tensor) –

Return type

torch.Tensor

training: bool
class aepsych.acquisition.mc_posterior_variance.MonotonicMCPosteriorVariance(model, deriv_constraint_points, num_samples=32, num_rejection_samples=1024, objective=None)[source]

Bases: aepsych.acquisition.monotonic_rejection.MonotonicMCAcquisition

Initialize MonotonicMCAcquisition

Parameters
  • model (Model) – Model to use, usually a MonotonicRejectionGP.

  • num_samples (int, optional) – Number of samples to keep from the rejection sampler. Defaults to 32.

  • num_rejection_samples (int, optional) – Number of rejection samples to draw. Defaults to 1024.

  • objective (Optional[MCAcquisitionObjective], optional) – Objective transform of the GP output before evaluating the acquisition. Defaults to identity transform.

  • deriv_constraint_points (torch.Tensor) –

Return type

None

acquisition(obj_samples)[source]
Parameters

obj_samples (torch.Tensor) –

Return type

torch.Tensor

training: bool

aepsych.acquisition.monotonic_rejection module

class aepsych.acquisition.monotonic_rejection.MonotonicMCAcquisition(model, deriv_constraint_points, num_samples=32, num_rejection_samples=1024, objective=None)[source]

Bases: botorch.acquisition.acquisition.AcquisitionFunction

Acquisition function base class for use with the rejection-sampling monotonic GP. This handles the bookkeeping of the derivative constraint points – implement specific monotonic MC acquisitions in subclasses.

Initialize MonotonicMCAcquisition

Parameters
  • model (Model) – Model to use, usually a MonotonicRejectionGP.

  • num_samples (int, optional) – Number of samples to keep from the rejection sampler. Defaults to 32.

  • num_rejection_samples (int, optional) – Number of rejection samples to draw. Defaults to 1024.

  • objective (Optional[MCAcquisitionObjective], optional) – Objective transform of the GP output before evaluating the acquisition. Defaults to identity transform.

  • deriv_constraint_points (torch.Tensor) –

Return type

None

forward(X)[source]

Evaluate the acquisition function at a set of points.

Parameters

X (Tensor) – Points at which to evaluate the acquisition function. Should be (b) x q x d, and q should be 1.

Returns

Acquisition function value at these points.

Return type

Tensor

acquisition(obj_samples)[source]
Parameters

obj_samples (torch.Tensor) –

Return type

torch.Tensor

training: bool
class aepsych.acquisition.monotonic_rejection.MonotonicMCLSE(model, deriv_constraint_points, target, num_samples=32, num_rejection_samples=1024, beta=3.84, objective=None)[source]

Bases: aepsych.acquisition.monotonic_rejection.MonotonicMCAcquisition

Level set estimation acquisition function for use with monotonic models.

Parameters
  • model (Model) – Underlying model object, usually should be MonotonicRejectionGP.

  • target (float) – Level set value to target (after the objective).

  • num_samples (int, optional) – Number of MC samples to draw in MC acquisition. Defaults to 32.

  • num_rejection_samples (int, optional) – Number of rejection samples from which to subsample monotonic ones. Defaults to 1024.

  • beta (float, optional) – Parameter of the LSE acquisition function that governs exploration vs exploitation (similarly to the same parameter in UCB). Defaults to 3.84, which maps to the straddle heuristic of Bryan et al. 2005.

  • objective (Optional[MCAcquisitionObjective], optional) – Objective transform. Defaults to identity transform.

  • deriv_constraint_points (torch.Tensor) –

Return type

None

acquisition(obj_samples)[source]
Parameters

obj_samples (torch.Tensor) –

Return type

torch.Tensor

training: bool

aepsych.acquisition.mutual_information module

aepsych.acquisition.mutual_information.bald_acq(obj_samples)[source]

Evaluate Mutual Information acquisition function.

With latent function F and X a hypothetical observation at a new point, I(F; X) = I(X; F) = H(X) - H(X | F), where H(X | F) = E_f[H(X | F = f)]. That is, we take the posterior entropy of the (Bernoulli) observation X given the current model posterior and subtract the conditional entropy on F, i.e., the mean entropy of X over the posterior for F. This is equivalent to the BALD acquisition function in Houlsby et al. NeurIPS 2012.

Parameters

obj_samples (torch.Tensor) – Objective samples from the GP, of shape num_samples x batch_shape x d_out

Returns

Value of acquisition at samples.

Return type

torch.Tensor
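
A sketch of this computation from Bernoulli probability samples (entropies in nats here; the log base used internally is an implementation detail):

import torch

def bald_sketch(obj_samples):
    # obj_samples: num_samples x batch_shape x d_out, values in (0, 1).
    def h(p):
        eps = 1e-12
        return -p * torch.log(p + eps) - (1 - p) * torch.log(1 - p + eps)
    posterior_entropy = h(obj_samples.mean(dim=0))    # H(X)
    conditional_entropy = h(obj_samples).mean(dim=0)  # E_f[H(X | F = f)]
    return posterior_entropy - conditional_entropy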

class aepsych.acquisition.mutual_information.BernoulliMCMutualInformation(model, objective, sampler=None)[source]

Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction

Mutual Information acquisition function for a Bernoulli outcome.

Given a model and an objective link function, calculate the mutual information of a trial at a new point and the distribution on the latent function.

Objective here should give values in (0, 1) (e.g., logistic or probit).

Single Bernoulli mutual information for active learning

Parameters
  • model (Model) – A fitted model.

  • objective (MCAcquisitionObjective) – An MCAcquisitionObjective representing the link function (e.g., logistic or probit)

  • sampler (MCSampler, optional) – The sampler used for drawing MC samples.

Return type

None

forward(X)[source]

Evaluate mutual information on the candidate set X.

Parameters

X (torch.Tensor) – A batch_size x q x d-dim Tensor.

Returns

Tensor of shape batch_size x q representing the mutual information of a hypothetical trial at X that active learning hopes to maximize.

Return type

torch.Tensor

acquisition(obj_samples)[source]

Evaluate the acquisition function value based on samples.

Parameters

obj_samples (torch.Tensor) – Samples from the model, transformed through the objective.

Returns

value of the acquisition function (BALD) at the input samples.

Return type

torch.Tensor

training: bool
class aepsych.acquisition.mutual_information.MonotonicBernoulliMCMutualInformation(model, deriv_constraint_points, num_samples=32, num_rejection_samples=1024, objective=None)[source]

Bases: aepsych.acquisition.monotonic_rejection.MonotonicMCAcquisition

Initialize MonotonicMCAcquisition

Parameters
  • model (Model) – Model to use, usually a MonotonicRejectionGP.

  • num_samples (int, optional) – Number of samples to keep from the rejection sampler. Defaults to 32.

  • num_rejection_samples (int, optional) – Number of rejection samples to draw. Defaults to 1024.

  • objective (Optional[MCAcquisitionObjective], optional) – Objective transform of the GP output before evaluating the acquisition. Defaults to identity transform.

  • deriv_constraint_points (torch.Tensor) –

Return type

None

acquisition(obj_samples)[source]

Evaluate the acquisition function value based on samples.

Parameters

obj_samples (torch.Tensor) – Samples from the model, transformed through the objective.

Returns

value of the acquisition function (BALD) at the input samples.

Return type

torch.Tensor

training: bool

aepsych.acquisition.objective module

class aepsych.acquisition.objective.ProbitObjective[source]

Bases: botorch.acquisition.objective.MCAcquisitionObjective

Probit objective

Transforms the input through the normal CDF (probit).

Initializes internal Module state, shared by both nn.Module and ScriptModule.

Return type

None

forward(samples, X=None)[source]

Evaluates the objective (normal CDF).

Parameters
  • samples (Tensor) – GP samples.

  • X (Optional[Tensor], optional) – ignored, here for compatibility with MCAcquisitionObjective.

Returns

The GP samples transformed through the normal CDF; values in (0, 1).

Return type

Tensor
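
Example (an illustrative usage sketch; sample shapes are arbitrary):

import torch
from aepsych.acquisition.objective import ProbitObjective

objective = ProbitObjective()
f_samples = torch.randn(16, 8, 1)  # MC samples of the latent GP
probs = objective(f_samples)       # normal CDF applied elementwise; values in (0, 1)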

class aepsych.acquisition.objective.FloorLinkObjective(floor=0.5)[source]

Bases: botorch.acquisition.objective.MCAcquisitionObjective

Wrapper for objectives to add a floor, when the probability is known not to go below it.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(samples, X=None)[source]

Evaluates the objective for input x and floor f

Parameters
  • samples (Tensor) – GP samples.

  • X (Optional[Tensor], optional) – ignored, here for compatibility with MCAcquisitionObjective.

Returns

outcome probability.

Return type

Tensor

classmethod from_config(config)[source]
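
The floor construction rescales a base link so that its output lies in (floor, 1.0). A standalone sketch of the idea with a probit base link (the exact form used inside the library is an assumption here):

import torch

def floor_probit(samples, floor=0.5):
    # p = floor + (1 - floor) * Phi(f): outputs are bounded below by floor.
    base = torch.distributions.Normal(0.0, 1.0).cdf(samples)
    return floor + (1.0 - floor) * base
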
class aepsych.acquisition.objective.FloorLogitObjective(floor=0.5)[source]

Bases: aepsych.acquisition.objective.FloorLinkObjective

Logistic sigmoid (aka expit, aka logistic CDF), but with a floor so that its output is between floor and 1.0.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

class aepsych.acquisition.objective.FloorGumbelObjective(floor=0.5)[source]

Bases: aepsych.acquisition.objective.FloorLinkObjective

Gumbel CDF but with a floor so that its output is between floor and 1.0. Note that this is not the standard Gumbel distribution, but rather the left-skewed Gumbel that arises as the distribution of the log of a Weibull random variable, e.g. Treutwein 1995, doi:10.1016/0042-6989(95)00016-X.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

class aepsych.acquisition.objective.FloorProbitObjective(floor=0.5)[source]

Bases: aepsych.acquisition.objective.FloorLinkObjective

Probit (aka Gaussian CDF), but with a floor so that its output is between floor and 1.0.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

aepsych.acquisition.rejection_sampler module

class aepsych.acquisition.rejection_sampler.RejectionSampler(num_samples, num_rejection_samples, constrained_idx)[source]

Bases: botorch.sampling.samplers.MCSampler

Samples from a posterior subject to the constraint that samples in constrained_idx should be >= 0.

If not enough feasible samples are generated, will return the least violating samples.

Initialize RejectionSampler

Parameters
  • num_samples (int) – Number of samples to return. Note that if fewer samples than this number are positive in the required dimension, the remaining samples returned will be the “least violating”, i.e. closest to 0.

  • num_rejection_samples (int) – Number of samples to draw before rejecting.

  • constrained_idx (Tensor) – Indices of input dimensions that should be constrained positive.

forward(posterior)[source]

Run the rejection sampler.

Parameters

posterior (Posterior) – The unconstrained GP posterior object on which to perform rejection sampling.

Returns

Kept samples.

Return type

Tensor

training: bool
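
A standalone sketch of the selection rule described above (keep feasible draws, otherwise fall back to the least-violating ones); illustrative, not the class's exact implementation:

import torch

def rejection_select(draws, constrained_idx, num_samples):
    # draws: num_rejection_samples x d posterior samples.
    feasible = (draws[:, constrained_idx] >= 0).all(dim=-1)
    if int(feasible.sum()) >= num_samples:
        return draws[feasible][:num_samples]
    # Not enough feasible draws: rank by total constraint violation
    # (how far below zero the constrained dimensions fall) and keep the best.
    violation = (-draws[:, constrained_idx]).clamp(min=0).sum(dim=-1)
    keep = torch.argsort(violation)[:num_samples]
    return draws[keep]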

Module contents

class aepsych.acquisition.BernoulliMCMutualInformation(model, objective, sampler=None)[source]

Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction

Mutual Information acquisition function for a Bernoulli outcome.

Given a model and an objective link function, calculate the mutual information of a trial at a new point and the distribution on the latent function.

Objective here should give values in (0, 1) (e.g., logistic or probit).

Single Bernoulli mutual information for active learning

Parameters
  • model (Model) – A fitted model.

  • objective (MCAcquisitionObjective) – An MCAcquisitionObjective representing the link function (e.g., logistic or probit)

  • sampler (MCSampler, optional) – The sampler used for drawing MC samples.

Return type

None

forward(X)[source]

Evaluate mutual information on the candidate set X.

Parameters

X (torch.Tensor) – A batch_size x q x d-dim Tensor.

Returns

Tensor of shape batch_size x q representing the mutual information of a hypothetical trial at X that active learning hopes to maximize.

Return type

torch.Tensor

acquisition(obj_samples)[source]

Evaluate the acquisition function value based on samples.

Parameters

obj_samples (torch.Tensor) – Samples from the model, transformed through the objective.

Returns

value of the acquisition function (BALD) at the input samples.

Return type

torch.Tensor

training: bool
class aepsych.acquisition.MonotonicBernoulliMCMutualInformation(model, deriv_constraint_points, num_samples=32, num_rejection_samples=1024, objective=None)[source]

Bases: aepsych.acquisition.monotonic_rejection.MonotonicMCAcquisition

Initialize MonotonicMCAcquisition

Parameters
  • model (Model) – Model to use, usually a MonotonicRejectionGP.

  • num_samples (int, optional) – Number of samples to keep from the rejection sampler. Defaults to 32.

  • num_rejection_samples (int, optional) – Number of rejection samples to draw. Defaults to 1024.

  • objective (Optional[MCAcquisitionObjective], optional) – Objective transform of the GP output before evaluating the acquisition. Defaults to identity transform.

  • deriv_constraint_points (torch.Tensor) –

Return type

None

acquisition(obj_samples)[source]

Evaluate the acquisition function value based on samples.

Parameters

obj_samples (torch.Tensor) – Samples from the model, transformed through the objective.

Returns

value of the acquisition function (BALD) at the input samples.

Return type

torch.Tensor

training: bool
class aepsych.acquisition.MonotonicMCLSE(model, deriv_constraint_points, target, num_samples=32, num_rejection_samples=1024, beta=3.84, objective=None)[source]

Bases: aepsych.acquisition.monotonic_rejection.MonotonicMCAcquisition

Level set estimation acquisition function for use with monotonic models.

Parameters
  • model (Model) – Underlying model object, usually should be MonotonicRejectionGP.

  • target (float) – Level set value to target (after the objective).

  • num_samples (int, optional) – Number of MC samples to draw in MC acquisition. Defaults to 32.

  • num_rejection_samples (int, optional) – Number of rejection samples from which to subsample monotonic ones. Defaults to 1024.

  • beta (float, optional) – Parameter of the LSE acquisition function that governs exploration vs exploitation (similarly to the same parameter in UCB). Defaults to 3.84, which maps to the straddle heuristic of Bryan et al. 2005.

  • objective (Optional[MCAcquisitionObjective], optional) – Objective transform. Defaults to identity transform.

  • deriv_constraint_points (torch.Tensor) –

Return type

None

acquisition(obj_samples)[source]
Parameters

obj_samples (torch.Tensor) –

Return type

torch.Tensor

training: bool
class aepsych.acquisition.MCPosteriorVariance(model, objective=None, sampler=None)[source]

Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction

Posterior variance, computed from MC samples so that an objective/transform can be applied.

Posterior Variance of Link Function

Parameters
  • model (botorch.models.model.Model) – A fitted model.

  • objective (Optional[botorch.acquisition.objective.MCAcquisitionObjective]) – An MCAcquisitionObjective representing the link function (e.g., logistic or probit) applied to the difference of two (usually 1-d) samples. Can be implemented via GenericMCObjective.

  • sampler (Optional[botorch.sampling.samplers.MCSampler]) – The sampler used for drawing MC samples.

Return type

None

forward(X)[source]

Evaluate MCPosteriorVariance on the candidate set X.

Parameters

X (torch.Tensor) – A batch_size x q x d-dim Tensor

Returns

Posterior variance of link function at X that active learning hopes to maximize

Return type

torch.Tensor

acquisition(obj_samples)[source]
Parameters

obj_samples (torch.Tensor) –

Return type

torch.Tensor

training: bool
class aepsych.acquisition.MonotonicMCPosteriorVariance(model, deriv_constraint_points, num_samples=32, num_rejection_samples=1024, objective=None)[source]

Bases: aepsych.acquisition.monotonic_rejection.MonotonicMCAcquisition

Initialize MonotonicMCAcquisition

Parameters
  • model (Model) – Model to use, usually a MonotonicRejectionGP.

  • num_samples (int, optional) – Number of samples to keep from the rejection sampler. Defaults to 32.

  • num_rejection_samples (int, optional) – Number of rejection samples to draw. Defaults to 1024.

  • objective (Optional[MCAcquisitionObjective], optional) – Objective transform of the GP output before evaluating the acquisition. Defaults to identity transform.

  • deriv_constraint_points (torch.Tensor) –

Return type

None

acquisition(obj_samples)[source]
Parameters

obj_samples (torch.Tensor) –

Return type

torch.Tensor

training: bool
class aepsych.acquisition.MCLevelSetEstimation(model, target, beta, objective=None, sampler=None)[source]

Bases: botorch.acquisition.monte_carlo.MCAcquisitionFunction

Monte Carlo level set estimation.

Parameters
  • model (botorch.models.model.Model) – A fitted model.

  • target (Union[float, torch.Tensor]) – The level set (after objective transform) to be estimated.

  • beta (Union[float, torch.Tensor]) – A parameter that governs the explore-exploit tradeoff.

  • objective (Optional[botorch.acquisition.objective.MCAcquisitionObjective]) – An MCAcquisitionObjective representing the link function (e.g., logistic or probit) applied to the samples. Can be implemented via GenericMCObjective.

  • sampler (Optional[botorch.sampling.samplers.MCSampler]) – The sampler used for drawing MC samples.

Return type

None

acquisition(obj_samples)[source]

Evaluate the acquisition based on objective samples.

Usually you should not call this directly unless you are subclassing this class and modifying how objective samples are generated.

Parameters

obj_samples (torch.Tensor) – Samples from the model, transformed by the objective. Should be samples x batch_shape.

Returns

Acquisition function at the sampled values.

Return type

torch.Tensor

forward(X)[source]

Evaluate the acquisition function.

Parameters

X (torch.Tensor) – Points at which to evaluate.

Returns

Value of the acquisition function at these points.

Return type

torch.Tensor

training: bool
class aepsych.acquisition.ProbitObjective[source]

Bases: botorch.acquisition.objective.MCAcquisitionObjective

Probit objective

Transforms the input through the normal CDF (probit).

Initializes internal Module state, shared by both nn.Module and ScriptModule.

Return type

None

forward(samples, X=None)[source]

Evaluates the objective (normal CDF).

Parameters
  • samples (Tensor) – GP samples.

  • X (Optional[Tensor], optional) – ignored, here for compatibility with MCAcquisitionObjective.

Returns

The GP samples transformed through the normal CDF; values in (0, 1).

Return type

Tensor

class aepsych.acquisition.FloorProbitObjective(floor=0.5)[source]

Bases: aepsych.acquisition.objective.FloorLinkObjective

Probit (aka Gaussian CDF), but with a floor so that its output is between floor and 1.0.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

class aepsych.acquisition.FloorLogitObjective(floor=0.5)[source]

Bases: aepsych.acquisition.objective.FloorLinkObjective

Logistic sigmoid (aka expit, aka logistic CDF), but with a floor so that its output is between floor and 1.0.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

class aepsych.acquisition.FloorGumbelObjective(floor=0.5)[source]

Bases: aepsych.acquisition.objective.FloorLinkObjective

Gumbel CDF but with a floor so that its output is between floor and 1.0. Note that this is not the standard Gumbel distribution, but rather the left-skewed Gumbel that arises as the distribution of the log of a Weibull random variable, e.g. Treutwein 1995, doi:10.1016/0042-6989(95)00016-X.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

class aepsych.acquisition.GlobalMI(model, target, query_set_size=None, Xq=None)[source]

Bases: aepsych.acquisition.lookahead.GlobalLookaheadAcquisitionFunction

A global look-ahead acquisition function based on mutual information.

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

  • Xq (Optional[torch.Tensor]) – (m x d) global reference set.

  • query_set_size (Optional[int]) –

Return type

None

training: bool
class aepsych.acquisition.GlobalSUR(model, target, query_set_size=None, Xq=None)[source]

Bases: aepsych.acquisition.lookahead.GlobalLookaheadAcquisitionFunction

A global look-ahead acquisition function based on stepwise uncertainty reduction (SUR).

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

  • Xq (Optional[torch.Tensor]) – (m x d) global reference set.

  • query_set_size (Optional[int]) –

Return type

None

training: bool
class aepsych.acquisition.ApproxGlobalSUR(model, target, query_set_size=None, Xq=None)[source]

Bases: aepsych.acquisition.lookahead.GlobalSUR

A global look-ahead acquisition function based on stepwise uncertainty reduction (SUR), computed with an approximate look-ahead posterior.

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

  • Xq (Optional[torch.Tensor]) – (m x d) global reference set.

  • query_set_size (Optional[int]) –

Return type

None

training: bool
class aepsych.acquisition.EAVC(model, target, query_set_size=None, Xq=None)[source]

Bases: aepsych.acquisition.lookahead.GlobalLookaheadAcquisitionFunction

A global look-ahead acquisition function based on expected absolute volume change (EAVC).

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

  • Xq (Optional[torch.Tensor]) – (m x d) global reference set.

  • query_set_size (Optional[int]) –

Return type

None

training: bool
class aepsych.acquisition.LocalMI(model, target)[source]

Bases: aepsych.acquisition.lookahead.LocalLookaheadAcquisitionFunction

A localized look-ahead acquisition function based on mutual information.

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

Return type

None

training: bool
class aepsych.acquisition.LocalSUR(model, target)[source]

Bases: aepsych.acquisition.lookahead.LocalLookaheadAcquisitionFunction

A localized look-ahead acquisition function based on stepwise uncertainty reduction (SUR).

Parameters
  • model (botorch.models.gpytorch.GPyTorchModel) – The gpytorch model.

  • target (float) – Threshold value to target in p-space.

Return type

None

training: bool