aepsych package

Subpackages

Submodules

aepsych.config module

class aepsych.config.Config(config_dict=None, config_fnames=None, config_str=None)[source]

Bases: ConfigParser

Initialize the AEPsych config object. This can be used to instantiate most objects in AEPsych by calling object.from_config(config).

Parameters
  • config_dict (Mapping[str, str], optional) – Mapping to build configuration from. Keys are section names, values are dictionaries with keys and values that should be present in the section. Defaults to None.

  • config_fnames (Sequence[str], optional) – List of INI filenames to load configuration from. Defaults to None.

  • config_str (str, optional) – String formatted as an INI file to load configuration from. Defaults to None.
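
For illustration, a minimal sketch of building a Config from an INI-formatted string and instantiating a strategy from it. The option names follow AEPsych's config conventions and may vary slightly across versions:

    from aepsych.config import Config
    from aepsych.strategy import SequentialStrategy

    # INI-formatted configuration; section and option names follow
    # AEPsych's config conventions and may vary across versions.
    config_str = """
    [common]
    lb = [0]
    ub = [1]
    parnames = [intensity]
    stimuli_per_trial = 1
    outcome_types = [binary]
    strategy_names = [init_strat]

    [init_strat]
    generator = SobolGenerator
    min_asks = 10
    """

    config = Config(config_str=config_str)
    strat = SequentialStrategy.from_config(config)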

registered_names: Dict[str, object]

Mapping from string names, as used in config files, to the classes and functions they resolve to. By default the registry includes:

  • AEPsych acquisition functions and objectives: ApproxGlobalSUR, BernoulliMCMutualInformation, EAVC, FloorGumbelObjective, FloorLogitObjective, FloorProbitObjective, GlobalMI, GlobalSUR, LocalMI, LocalSUR, MCLevelSetEstimation, MCPosteriorVariance, MonotonicBernoulliMCMutualInformation, MonotonicMCLSE, MonotonicMCPosteriorVariance, ProbitObjective

  • AEPsych generators: EpsilonGreedyGenerator, ManualGenerator, MonotonicRejectionGenerator, MonotonicThompsonSamplerGenerator, OptimizeAcqfGenerator, PairwiseOptimizeAcqfGenerator, PairwiseSobolGenerator, RandomGenerator, SobolGenerator

  • AEPsych models, strategies, and likelihoods: BernoulliObjectiveLikelihood, GPClassificationModel, GPRegressionModel, MonotonicRejectionGP, PairwiseProbitModel, SequentialStrategy, Strategy

  • AEPsych factory functions: default_mean_covar_factory, monotonic_mean_covar_factory, song_mean_covar_factory

  • BoTorch acquisition functions, objectives, and utilities: AcquisitionFunction, AnalyticAcquisitionFunction, AnalyticExpectedUtilityOfBestOption, ConstrainedExpectedImprovement, ConstrainedMCObjective, ExpectedImprovement, FixedFeatureAcquisitionFunction, GenericCostAwareUtility, GenericMCObjective, IdentityMCObjective, InverseCostWeightedUtility, LearnedObjective, LinearMCObjective, MCAcquisitionFunction, MCAcquisitionObjective, MaxValueBase, NoisyExpectedImprovement, OneShotAcquisitionFunction, PairwiseMCPosteriorVariance, PosteriorMean, ProbabilityOfImprovement, ProximalAcquisitionFunction, ScalarizedObjective, ScalarizedPosteriorTransform, UpperConfidenceBound, get_acqf_input_constructor, get_acquisition_function, qExpectedImprovement, qKnowledgeGradient, qLowerBoundMaxValueEntropy, qMaxValueEntropy, qMultiFidelityKnowledgeGradient, qMultiFidelityLowerBoundMaxValueEntropy, qMultiFidelityMaxValueEntropy, qMultiStepLookahead, qNegIntegratedPosteriorVariance, qNoisyExpectedImprovement, qProbabilityOfImprovement, qSimpleRegret, qUpperConfidenceBound

  • GPyTorch kernels: AdditiveKernel, AdditiveStructureKernel, ArcKernel, CosineKernel, CylindricalKernel, DistributionalInputKernel, GaussianSymmetrizedKLKernel, GridInterpolationKernel, GridKernel, IndexKernel, InducingPointKernel, Kernel, LCMKernel, LinearKernel, MaternKernel, MultiDeviceKernel, MultitaskKernel, NewtonGirardAdditiveKernel, PeriodicKernel, PiecewisePolynomialKernel, PolynomialKernel, PolynomialKernelGrad, ProductKernel, ProductStructureKernel, RBFKernel, RBFKernelGrad, RFFKernel, RQKernel, ScaleKernel, SpectralDeltaKernel, SpectralMixtureKernel

  • GPyTorch likelihoods: BernoulliLikelihood, GaussianLikelihood
to_dict()[source]
jsonifyMetadata()[source]
jsonifyAll()[source]
update(config_dict=None, config_fnames=None, config_str=None)[source]

Update this object with a new configuration.

Parameters
  • config_dict (Mapping[str, str], optional) – Mapping to build configuration from. Keys are section names, values are dictionaries with keys and values that should be present in the section. Defaults to None.

  • config_fnames (Sequence[str], optional) – List of INI filenames to load configuration from. Defaults to None.

  • config_str (str, optional) – String formatted as an INI file to load configuration from. Defaults to None.

classmethod register_module(module)[source]
Register a module with Config so that objects in it can be referred to by their string name in config files.

Parameters

module (ModuleType) – Module to register.

classmethod register_object(obj)[source]
Register an object with Config so that it can be referred to by its string name in config files.

Parameters

obj (object) – Object to register.
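
As a sketch, registering a hypothetical user-defined class so that config files can refer to it by name (register_module works analogously for a whole module of such classes):

    from aepsych.config import Config

    class MyCustomGenerator:
        """Hypothetical user-defined generator."""

        @classmethod
        def from_config(cls, config):
            return cls()

    # After registration, "MyCustomGenerator" can appear as a value
    # (e.g., generator = MyCustomGenerator) in a config file.
    Config.register_object(MyCustomGenerator)
    assert "MyCustomGenerator" in Config.registered_names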

convert_to_latest()[source]
convert(from_version, to_version)[source]

Converts a config from an older version to a newer version.

Parameters
  • from_version (str) – The version of the config to be converted.

  • to_version (str) – The version the config should be converted to.

Return type

None

property version: str

Returns the version number of the config.

aepsych.likelihoods module

class aepsych.likelihoods.BernoulliObjectiveLikelihood(objective)[source]

Bases: _OneDimensionalLikelihood

Bernoulli likelihood with a flexible link (objective) defined by a callable (which can be a botorch objective).

Initializes internal Module state, shared by both nn.Module and ScriptModule.

Parameters

objective (Callable) –
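
For illustration, a minimal sketch of constructing the likelihood with a probit link and conditioning it on latent function samples (assumes ProbitObjective as the link):

    import torch
    from aepsych.likelihoods import BernoulliObjectiveLikelihood
    from aepsych.acquisition.objective import ProbitObjective

    # Bernoulli likelihood whose link is the standard normal CDF.
    likelihood = BernoulliObjectiveLikelihood(objective=ProbitObjective())

    # forward() maps latent samples f to a Bernoulli distribution whose
    # success probabilities are objective(f).
    f_samples = torch.randn(100, 5)
    response_dist = likelihood.forward(f_samples)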

forward(function_samples, **kwargs)[source]

Computes the conditional distribution \(p(\mathbf y \mid \mathbf f, \ldots)\) that defines the likelihood.

Parameters
  • function_samples (torch.Tensor) – Samples from the function (\(\mathbf f\))

  • data (dict {str: torch.Tensor}, optional - Pyro integration only) – Additional variables that the likelihood needs to condition on. The keys of the dictionary will correspond to Pyro sample sites in the likelihood’s model/guide.

  • args – Additional args

  • kwargs – Additional kwargs

Return type

Distribution (with same shape as function_samples)

classmethod from_config(config)[source]
Parameters

config (Config) –

training: bool

aepsych.plotting module

aepsych.plotting.plot_strat(strat, ax=None, true_testfun=None, cred_level=0.95, target_level=0.75, xlabel=None, ylabel=None, yes_label='Yes trial', no_label='No trial', flipx=False, logx=False, gridsize=30, title='', save_path=None, show=True, include_legend=True, include_colorbar=True)[source]

Creates a plot of a strategy, showing participants' responses on each trial, the estimated response function and threshold, and optionally a ground-truth response threshold.

Parameters
  • strat (Strategy) – Strategy object to be plotted. Must have a dimensionality of 2 or less.

  • ax (plt.Axes, optional) – Matplotlib axis to plot on (if None, creates a new axis). Default: None.

  • true_testfun (Callable, optional) – Ground truth response function. Should take an n_samples x n_parameters tensor as input and produce the response probability at each sample as output. Default: None.

  • cred_level (float) – Percentage of posterior mass around the mean to be shaded. Default: 0.95.

  • target_level (float) – Response probability to estimate the threshold of. Default: 0.75.

  • xlabel (str) – Label of the x-axis. Default: “Context (abstract)”.

  • ylabel (str) – Label of the y-axis (if None, defaults to “Response Probability” for 1-d plots or “Intensity (Abstract)” for 2-d plots). Default: None.

  • yes_label (str) – Label of trials with response of 1. Default: “Yes trial”.

  • no_label (str) – Label of trials with response of 0. Default: “No trial”.

  • flipx (bool) – Whether the values of the x-axis should be flipped such that the min becomes the max and vice versa. (Only valid for 2-d plots.) Default: False.

  • logx (bool) – Whether the x-axis should be log-transformed. (Only valid for 2-d plots.) Default: False.

  • gridsize (int) – The number of points to sample each dimension at. Default: 30.

  • title (str) – Title of the plot. Default: ‘’.

  • save_path (str, optional) – File name to save the plot to. Default: None.

  • show (bool) – Whether the plot should be shown in an interactive window. Default: True.

  • include_legend (bool) – Whether to include the legend in the figure. Default: True.

  • include_colorbar (bool) – Whether to include the colorbar indicating the probability of “Yes” trials. Default: True.

Return type

None
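
A hypothetical usage sketch, assuming strat is a completed one- or two-dimensional Strategy:

    from aepsych.plotting import plot_strat

    # `strat` is assumed to be a fitted Strategy with dim <= 2.
    plot_strat(
        strat,
        target_level=0.75,
        xlabel="Stimulus intensity",
        title="Estimated psychometric function",
        save_path="strat_plot.png",
        show=False,
    )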

aepsych.plotting.plot_strat_3d(strat, parnames=None, outcome_label='Yes Trial', slice_dim=0, slice_vals=5, contour_levels=None, probability_space=False, gridsize=30, extent_multiplier=None, save_path=None, show=True)[source]

Creates a plot of 2-d slices of a 3-d strategy, showing the estimated model or probability response and contours.

Parameters
  • strat (Strategy) – Strategy object to be plotted. Must have a dimensionality of 3.

  • parnames (List[str], optional) – List of the parameter names. Default: None.

  • outcome_label (str) – The label of the outcome variable. Default: “Yes Trial”.

  • slice_dim (int) – Dimension to slice on. Default: 0.

  • slice_vals (List[float] or int) – Values at which to take slices, or the number of evenly spaced slices to take. Default: 5.

  • contour_levels (Iterable[float] or bool, optional) – List of contour values to plot; if True, all integer levels are plotted. Default: None.

  • probability_space (bool) – Whether to plot in probability space. Default: False.

  • gridsize (int) – The number of points to sample each dimension at. Default: 30.

  • extent_multiplier (List[float], optional) – Multipliers for each of the dimensions when plotting. Default: None.

  • save_path (str, optional) – File name to save the plot to. Default: None.

  • show (bool) – Whether the plot should be shown in an interactive window. Default: True.
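
A hypothetical usage sketch, assuming strat is a fitted three-dimensional Strategy:

    from aepsych.plotting import plot_strat_3d

    plot_strat_3d(
        strat,
        parnames=["frequency", "duration", "intensity"],
        slice_dim=2,       # slice along the third parameter
        slice_vals=5,      # five evenly spaced slices
        probability_space=True,
    )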

aepsych.plotting.plot_slice(ax, strat, parnames, slice_dim, slice_val, vmin, vmax, gridsize=30, contour_levels=None, lse=False, extent_multiplier=None)[source]

Creates a plot of a single 2-d slice of a 3-d strategy, showing the estimated model or probability response and contours.

Parameters
  • ax (plt.Axes) – Matplotlib axis to plot on.

  • strat (Strategy) – Strategy object to be plotted. Must have a dimensionality of 3.

  • parnames (List[str]) – List of the parameter names.

  • slice_dim (int) – Dimension to slice on.

  • slice_val (float) – Value at which to take the slice along that dimension.

  • vmin (float) – Global model minimum to use for plotting.

  • vmax (float) – Global model maximum to use for plotting.

  • gridsize (int) – The number of points to sample each dimension at. Default: 30.

  • contour_levels (list of floats, optional) – Contours to plot. Default: None.

  • lse (bool) – Whether to plot in probability space. Default: False.

  • extent_multiplier (List[float], optional) – Multipliers for each of the dimensions when plotting. Default: None.

aepsych.strategy module

aepsych.strategy.ensure_model_is_fresh(f)[source]
class aepsych.strategy.Strategy(generator, lb, ub, stimuli_per_trial, outcome_types, dim=None, min_total_tells=0, min_asks=0, model=None, refit_every=1, min_total_outcome_occurrences=1, max_asks=None, keep_most_recent=None, min_post_range=None)[source]

Bases: object

Object that combines models and generators to generate points to sample.

Initialize the strategy object.

Parameters
  • generator (AEPsychGenerator) – The generator object that determines how points are sampled.

  • lb (Union[numpy.ndarray, torch.Tensor]) – Lower bounds of the parameters.

  • ub (Union[numpy.ndarray, torch.Tensor]) – Upper bounds of the parameters.

  • dim (int, optional) – The number of dimensions in the parameter space. If None, it is inferred from the size of lb and ub.

  • min_total_tells (int) – The minimum number of total observations needed to complete this strategy.

  • min_asks (int) – The minimum number of points that should be generated from this strategy.

  • model (ModelProtocol, optional) – The AEPsych model of the data.

  • refit_every (int) – How often to refit the model from scratch.

  • min_total_outcome_occurrences (int) – The minimum number of total observations needed for each outcome before the strategy will finish. Defaults to 1 (i.e., for binary outcomes, there must be at least one “yes” trial and one “no” trial).

  • max_asks (int, optional) – The maximum number of trials to generate using this strategy. If None, there is no upper bound (default).

  • keep_most_recent (int, optional) – Experimental. The number of most recent data points that the model will be fitted on. This may be useful for discarding noisy data from trials early in the experiment that are not as informative as data collected from later trials. When None, the model is fitted on all data.

  • min_post_range (float, optional) – Experimental. The required difference between the posterior’s minimum and maximum value in probability space before the strategy will finish. Ignored if None (default).

  • stimuli_per_trial (int) –

  • outcome_types (Sequence[Type[str]]) –
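
A minimal construction sketch, assuming model-free Sobol exploration (parameter values are illustrative):

    import numpy as np
    from aepsych.generators import SobolGenerator
    from aepsych.strategy import Strategy

    lb, ub = np.array([0.0]), np.array([1.0])

    strat = Strategy(
        generator=SobolGenerator(lb=lb, ub=ub),
        lb=lb,
        ub=ub,
        stimuli_per_trial=1,
        outcome_types=["binary"],
        min_asks=10,  # generate at least 10 points from this strategy
    )

    # Ask for the next point, run the trial, then tell the strategy the outcome.
    next_x = strat.gen()
    strat.add_data(next_x, [1])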

normalize_inputs(x, y)[source]

Converts inputs into a normalized format for this strategy.

Parameters
  • x (np.ndarray) – training inputs

  • y (np.ndarray) – training outputs

Returns

  • x (np.ndarray) – Training inputs, normalized.

  • y (np.ndarray) – Training outputs, normalized.

  • n (int) – Number of observations.

gen(*args, **kwargs)[source]
get_max(*args, **kwargs)[source]
get_min(*args, **kwargs)[source]
inv_query(*args, **kwargs)[source]
predict(*args, **kwargs)[source]
get_jnd(*args, **kwargs)[source]
sample(*args, **kwargs)[source]
property finished
property can_fit
property n_trials
add_data(x, y)[source]
fit()[source]
update()[source]
classmethod from_config(config, name)[source]
Parameters
  • config (Config) –

  • name (str) –

class aepsych.strategy.SequentialStrategy(strat_list)[source]

Bases: object

Runs a sequence of strategies defined by its config

All getter methods defer to the current strat

Parameters

strat_list (list[Strategy]) – The list of strategies to run in sequence.

gen(num_points=1, **kwargs)[source]
Parameters

num_points (int) –

property finished
add_data(x, y)[source]
classmethod from_config(config)[source]
Parameters

config (Config) –
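
A sketch of the ask/tell loop that typically drives a SequentialStrategy (run_trial is an experiment-specific stand-in):

    # `strat` is a SequentialStrategy, e.g. built via from_config above.
    while not strat.finished:
        x = strat.gen()       # ask: next stimulus to present
        y = run_trial(x)      # hypothetical experiment callback
        strat.add_data(x, y)  # tell: record the observed outcome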

aepsych.utils module

aepsych.utils.make_scaled_sobol(lb, ub, size, seed=None)[source]
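
A small usage sketch of make_scaled_sobol:

    import numpy as np
    from aepsych.utils import make_scaled_sobol

    # 10 quasi-random Sobol points, scaled to the box [0, 1] x [0, 2].
    points = make_scaled_sobol(
        lb=np.array([0.0, 0.0]), ub=np.array([1.0, 2.0]), size=10, seed=0
    )
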
aepsych.utils.promote_0d(x)[source]
aepsych.utils.dim_grid(lower, upper, dim, gridsize=30, slice_dims=None)[source]

Create a grid based on lower, upper, and dim.

Parameters
  • lower (Tensor) – Lower bound.

  • upper (Tensor) – Upper bound.

  • dim (int) – Dimension of the grid.

  • gridsize (int) – Number of points to sample each dimension at. Default: 30.

  • slice_dims (Mapping[int, float], optional) – Values to use for slicing axes, as an {index: value} dict. Default: None.

Returns

grid – Tensor

Return type

torch.FloatTensor
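
A usage sketch, holding the third dimension fixed at 0.5:

    import torch
    from aepsych.utils import dim_grid

    grid = dim_grid(
        lower=torch.tensor([0.0, 0.0, 0.0]),
        upper=torch.tensor([1.0, 1.0, 1.0]),
        dim=3,
        gridsize=30,
        slice_dims={2: 0.5},  # hold dimension 2 at 0.5
    )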

aepsych.utils.interpolate_monotonic(x, y, z, min_x=-inf, max_x=inf)[source]
aepsych.utils.get_lse_interval(model, mono_grid, target_level, cred_level=None, mono_dim=-1, n_samps=500, lb=-inf, ub=inf, gridsize=30, **kwargs)[source]
aepsych.utils.get_lse_contour(post_mean, mono_grid, level, mono_dim=-1, lb=-inf, ub=inf)[source]
aepsych.utils.get_jnd_1d(post_mean, mono_grid, df=1, mono_dim=-1, lb=-inf, ub=inf)[source]
aepsych.utils.get_jnd_multid(post_mean, mono_grid, df=1, mono_dim=-1, lb=-inf, ub=inf)[source]

aepsych.utils_logging module

aepsych.utils_logging.getLogger(level=20, log_path='logs')[source]
Return type

Logger
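
A usage sketch (the default level 20 corresponds to logging.INFO):

    import logging
    from aepsych.utils_logging import getLogger

    logger = getLogger(level=logging.INFO, log_path="logs")
    logger.info("Experiment started")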

Module contents

class aepsych.GPClassificationModel(lb, ub, dim=None, mean_module=None, covar_module=None, likelihood=None, inducing_size=100, max_fit_time=None, inducing_point_method='auto')[source]

Bases: AEPsychMixin, ApproximateGP

Probit-GP model with variational inference.

From a conventional ML perspective this is a GP Classification model, though in the psychophysics context it can also be thought of as a nonlinear generalization of the standard linear model for 1AFC or yes/no trials.

For more on variational inference, see e.g. https://docs.gpytorch.ai/en/v1.1.1/examples/04_Variational_and_Approximate_GPs/

Initialize the GP Classification model

Parameters
  • lb (Union[numpy.ndarray, torch.Tensor]) – Lower bounds of the parameters.

  • ub (Union[numpy.ndarray, torch.Tensor]) – Upper bounds of the parameters.

  • dim (int, optional) – The number of dimensions in the parameter space. If None, it is inferred from the size of lb and ub.

  • mean_module (gpytorch.means.Mean, optional) – GP mean class. Defaults to a constant with a normal prior.

  • covar_module (gpytorch.kernels.Kernel, optional) – GP covariance kernel class. Defaults to scaled RBF with a gamma prior.

  • likelihood (gpytorch.likelihood.Likelihood, optional) – The likelihood function to use. If None, defaults to a Bernoulli likelihood.

  • inducing_size (int) – Number of inducing points. Defaults to 100.

  • max_fit_time (float, optional) – The maximum amount of time, in seconds, to spend fitting the model. If None, there is no limit to the fitting time.

  • inducing_point_method (str) – The method to use to select the inducing points. Defaults to “auto”. If “sobol”, a number of Sobol points equal to inducing_size will be selected. If “pivoted_chol”, selects points based on the pivoted Cholesky heuristic. If “kmeans++”, selects points by performing kmeans++ clustering on the training data. If “auto”, tries to determine the best method automatically.
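
A minimal sketch of fitting the model on synthetic binary data and querying it in probability space (data and bounds are illustrative):

    import torch
    from aepsych import GPClassificationModel

    # Toy 1-d data: responses flip from 0 to 1 around x = 0.5.
    train_x = torch.rand(20, 1)
    train_y = (train_x.squeeze() > 0.5).long()

    model = GPClassificationModel(lb=torch.tensor([0.0]), ub=torch.tensor([1.0]))
    model.fit(train_x, train_y)

    # Posterior mean and variance in response-probability units.
    test_x = torch.linspace(0, 1, 5).unsqueeze(-1)
    mean, var = model.predict(test_x, probability_space=True)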

stimuli_per_trial = 1
outcome_type = 'binary'
classmethod from_config(config)[source]

Alternate constructor for GPClassification model.

This is used when we recursively build a full sampling strategy from a configuration.

Parameters

config (Config) – A configuration containing keys/values matching this class

Returns

Configured class instance.

Return type

GPClassificationModel

fit(train_x, train_y, warmstart_hyperparams=False, warmstart_induc=False, **kwargs)[source]

Fit underlying model.

Parameters
  • train_x (torch.Tensor) – Inputs.

  • train_y (torch.LongTensor) – Responses.

  • warmstart_hyperparams (bool) – Whether to reuse the previous hyperparameters (True) or fit from scratch (False). Defaults to False.

  • warmstart_induc (bool) – Whether to reuse the previous inducing points (True) or fit from scratch (False). Defaults to False.

Return type

None

sample(x, num_samples)[source]

Sample from underlying model.

Parameters
  • x (torch.Tensor) – Points at which to sample.

  • num_samples (int) – Number of samples to return.

  • kwargs – Ignored.

Returns

Posterior samples [num_samples x dim]

Return type

torch.Tensor

predict(x, probability_space=False)[source]

Query the model for posterior mean and variance.

Parameters
  • x (torch.Tensor) – Points at which to predict from the model.

  • probability_space (bool, optional) – Return outputs in units of response probability instead of latent function value. Defaults to False.

Returns

Posterior mean and variance at the queried points.

Return type

Tuple[np.ndarray, np.ndarray]

update(train_x, train_y)[source]

Perform a warm-start update of the model from previous fit.

Parameters
  • train_x (Tensor) –

  • train_y (Tensor) –

class aepsych.Strategy(generator, lb, ub, stimuli_per_trial, outcome_types, dim=None, min_total_tells=0, min_asks=0, model=None, refit_every=1, min_total_outcome_occurrences=1, max_asks=None, keep_most_recent=None, min_post_range=None)[source]

Bases: object

Object that combines models and generators to generate points to sample.

Initialize the strategy object.

Parameters
  • generator (AEPsychGenerator) – The generator object that determines how points are sampled.

  • lb (Union[numpy.ndarray, torch.Tensor]) – Lower bounds of the parameters.

  • ub (Union[numpy.ndarray, torch.Tensor]) – Upper bounds of the parameters.

  • dim (int, optional) – The number of dimensions in the parameter space. If None, it is inferred from the size of lb and ub.

  • min_total_tells (int) – The minimum number of total observations needed to complete this strategy.

  • min_asks (int) – The minimum number of points that should be generated from this strategy.

  • model (ModelProtocol, optional) – The AEPsych model of the data.

  • refit_every (int) – How often to refit the model from scratch.

  • min_total_outcome_occurrences (int) – The minimum number of total observations needed for each outcome before the strategy will finish. Defaults to 1 (i.e., for binary outcomes, there must be at least one “yes” trial and one “no” trial).

  • max_asks (int, optional) – The maximum number of trials to generate using this strategy. If None, there is no upper bound (default).

  • keep_most_recent (int, optional) – Experimental. The number of most recent data points that the model will be fitted on. This may be useful for discarding noisy data from trials early in the experiment that are not as informative as data collected from later trials. When None, the model is fitted on all data.

  • min_post_range (float, optional) – Experimental. The required difference between the posterior’s minimum and maximum value in probability space before the strategy will finish. Ignored if None (default).

  • stimuli_per_trial (int) –

  • outcome_types (Sequence[Type[str]]) –

normalize_inputs(x, y)[source]

Converts inputs into a normalized format for this strategy.

Parameters
  • x (np.ndarray) – training inputs

  • y (np.ndarray) – training outputs

Returns

  • x (np.ndarray) – Training inputs, normalized.

  • y (np.ndarray) – Training outputs, normalized.

  • n (int) – Number of observations.

gen(*args, **kwargs)[source]
get_max(*args, **kwargs)[source]
get_min(*args, **kwargs)[source]
inv_query(*args, **kwargs)[source]
predict(*args, **kwargs)[source]
get_jnd(*args, **kwargs)[source]
sample(*args, **kwargs)[source]
property finished
property can_fit
property n_trials
add_data(x, y)[source]
fit()[source]
update()[source]
classmethod from_config(config, name)[source]
Parameters
  • config (Config) –

  • name (str) –

class aepsych.SequentialStrategy(strat_list)[source]

Bases: object

Runs a sequence of strategies defined by its config

All getter methods defer to the current strat

Parameters

strat_list (list[Strategy]) – The list of strategies to run in sequence.

gen(num_points=1, **kwargs)[source]
Parameters

num_points (int) –

property finished
add_data(x, y)[source]
classmethod from_config(config)[source]
Parameters

config (Config) –

class aepsych.BernoulliObjectiveLikelihood(objective)[source]

Bases: _OneDimensionalLikelihood

Bernoulli likelihood with a flexible link (objective) defined by a callable (which can be a botorch objective).

Initializes internal Module state, shared by both nn.Module and ScriptModule.

Parameters

objective (Callable) –

forward(function_samples, **kwargs)[source]

Computes the conditional distribution \(p(\mathbf y \mid \mathbf f, \ldots)\) that defines the likelihood.

Parameters
  • function_samples (torch.Tensor) – Samples from the function (\(\mathbf f\))

  • data (dict {str: torch.Tensor}, optional - Pyro integration only) – Additional variables that the likelihood needs to condition on. The keys of the dictionary will correspond to Pyro sample sites in the likelihood’s model/guide.

  • args – Additional args

  • kwargs – Additional kwargs

Return type

Distribution (with same shape as function_samples)

classmethod from_config(config)[source]
Parameters

config (Config) –

training: bool
class aepsych.BernoulliLikelihood(*args, **kwargs)[source]

Bases: _OneDimensionalLikelihood

Implements the Bernoulli likelihood used for GP classification, using Probit regression (i.e., the latent function is warped to be in [0,1] using the standard Normal CDF \(\Phi(x)\)). Given the identity \(\Phi(-x) = 1-\Phi(x)\), we can write the likelihood compactly as:

\[p(Y=y \mid f) = \Phi(yf)\]
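
The compact form relies on the identity \(\Phi(-x) = 1 - \Phi(x)\); a quick numerical check with torch's standard normal CDF:

    import torch

    standard_normal = torch.distributions.Normal(0.0, 1.0)
    x = torch.linspace(-3.0, 3.0, 7)
    assert torch.allclose(standard_normal.cdf(-x), 1.0 - standard_normal.cdf(x))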

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(function_samples, **kwargs)[source]

Computes the conditional distribution \(p(\mathbf y \mid \mathbf f, \ldots)\) that defines the likelihood.

Parameters
  • function_samples (torch.Tensor) – Samples from the function (\(\mathbf f\))

  • data (dict {str: torch.Tensor}, optional - Pyro integration only) – Additional variables that the likelihood needs to condition on. The keys of the dictionary will correspond to Pyro sample sites in the likelihood’s model/guide.

  • args – Additional args

  • kwargs – Additional kwargs

Return type

Distribution (with same shape as function_samples)

log_marginal(observations, function_dist, *args, **kwargs)[source]

(Used by PredictiveLogLikelihood for approximate inference.)

Computes the log marginal likelihood of the approximate predictive distribution

\[\sum_{\mathbf x, y} \log \mathbb{E}_{q\left( f(\mathbf x) \right)} \left[ p \left( y \mid f(\mathbf x) \right) \right]\]

Note that this differs from expected_log_prob() because the \(\log\) is on the outside of the expectation.

Parameters
  • observations (torch.Tensor) – Values of \(y\).

  • function_dist (MultivariateNormal) – Distribution for \(f(x)\).

  • args – Additional args (passed to the forward function).

  • kwargs – Additional kwargs (passed to the forward function).

Return type

torch.Tensor

marginal(function_dist, **kwargs)[source]

Computes a predictive distribution \(p(y^* | \mathbf x^*)\) given either a posterior distribution \(p(\mathbf f | \mathcal D, \mathbf x)\) or a prior distribution \(p(\mathbf f|\mathbf x)\) as input.

With both exact inference and variational inference, the form of \(p(\mathbf f|\mathcal D, \mathbf x)\) or \(p(\mathbf f| \mathbf x)\) should usually be Gaussian. As a result, function_dist should usually be a MultivariateNormal specified by the mean and (co)variance of \(p(\mathbf f|...)\).

Parameters
  • function_dist (MultivariateNormal) – Distribution for \(f(x)\).

  • args – Additional args (passed to the forward function).

  • kwargs – Additional kwargs (passed to the forward function).

Returns

The marginal distribution, or samples from it.

Return type

Distribution

expected_log_prob(observations, function_dist, *params, **kwargs)[source]

(Used by VariationalELBO for variational inference.)

Computes the expected log likelihood, where the expectation is over the GP variational distribution.

\[\sum_{\mathbf x, y} \mathbb{E}_{q\left( f(\mathbf x) \right)} \left[ \log p \left( y \mid f(\mathbf x) \right) \right]\]
Parameters
  • observations (torch.Tensor) – Values of \(y\).

  • function_dist (MultivariateNormal) – Distribution for \(f(x)\).

  • args – Additional args (passed to the forward function).

  • kwargs – Additional kwargs (passed to the forward function).

Return type

torch.Tensor

training: bool
class aepsych.GaussianLikelihood(noise_prior=None, noise_constraint=None, batch_shape=torch.Size([]), **kwargs)[source]

Bases: _GaussianLikelihoodBase

The standard likelihood for regression. Assumes a standard homoskedastic noise model:

\[y = f + \epsilon, \quad \epsilon \sim \mathcal N(0, \sigma^2)\]

where \(\sigma^2\) is a noise parameter.

Note

This likelihood can be used for exact or approximate inference.

Parameters
  • noise_prior (Prior, optional) – Prior for noise parameter \(\sigma^2\).

  • noise_constraint (Interval, optional) – Constraint for noise parameter \(\sigma^2\).

  • batch_shape (torch.Size, optional) – The batch shape of the learned noise parameter (default: []).

Variables

noise (torch.Tensor) – \(\sigma^2\) parameter (noise)

Initializes internal Module state, shared by both nn.Module and ScriptModule.

property noise: Tensor
property raw_noise: Tensor
training: bool
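
A construction sketch with a Gamma prior on the noise variance (prior parameters are illustrative):

    from aepsych import GaussianLikelihood
    from gpytorch.priors import GammaPrior

    # Homoskedastic Gaussian noise; the prior regularizes sigma^2.
    likelihood = GaussianLikelihood(noise_prior=GammaPrior(2.0, 0.15))
    print(likelihood.noise)  # current value of the sigma^2 parameter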