aepsych.models¶
Submodules¶
aepsych.models.base module¶
- class aepsych.models.base.ModelProtocol(*args, **kwargs)[source]¶
Bases: Protocol
- property outcome_type: str¶
- property extremum_solver: str¶
- property train_inputs: Tensor¶
- property lb: Tensor¶
- property ub: Tensor¶
- property dim: int¶
- sample(points, num_samples)[source]¶
- Parameters
points (Tensor) –
num_samples (int) –
- Return type
Tensor
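Example (a sketch): ModelProtocol is a structural type, so any model exposing these attributes and methods can be used where the protocol is expected. The helper below is hypothetical and only illustrates typing against the protocol.

import torch
from aepsych.models.base import ModelProtocol

def posterior_sd(model: ModelProtocol, points: torch.Tensor) -> torch.Tensor:
    # Works with any conforming model, e.g. GPClassificationModel.
    samples = model.sample(points, num_samples=100)
    return samples.std(dim=0)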
- class aepsych.models.base.AEPsychMixin[source]¶
Bases: GPyTorchModel
Mixin class that provides AEPsych-specific utility methods.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- extremum_solver = 'Nelder-Mead'¶
- get_max(locked_dims=None)[source]¶
Return the maximum of the modeled function, subject to constraints.
- Parameters
self (ModelProtocol) –
locked_dims (Optional[Mapping[int, List[float]]]) – Dimensions to fix, so that the maximum is along a slice of the full surface.
- Returns
Tuple containing the max and its location (argmax).
- Return type
Tuple[float, np.ndarray]
- get_min(locked_dims=None)[source]¶
Return the minimum of the modeled function, subject to constraints.
- Parameters
self (ModelProtocol) –
locked_dims (Optional[Mapping[int, List[float]]]) – Dimensions to fix, so that the minimum is along a slice of the full surface.
- Returns
Tuple containing the min and its location (argmin).
- Return type
Tuple[float, np.ndarray]
- inv_query(y, locked_dims=None, probability_space=False, n_samples=1000)[source]¶
Query the model inverse. Return the nearest x such that f(x) = queried y, and also return the value of f at that point.
- Parameters
y (float) – Value of f at which to find the inverse.
locked_dims (Mapping[int, List[float]]) – Dimensions to fix, so that the inverse is along a slice of the full surface.
probability_space (bool, optional) – Is y (and therefore the returned nearest_y) in probability space instead of latent function space? Defaults to False.
self (ModelProtocol) –
n_samples (int) –
- Returns
Tuple containing the value of f nearest to the queried y and the x position of this value.
- Return type
Tuple[float, np.ndarray]
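Example (a sketch, assuming model is an already-fit AEPsych model such as GPClassificationModel; the numeric values are illustrative):

# Maximum of the modeled function, holding dimension 0 fixed at 0.5.
fmax, argmax = model.get_max(locked_dims={0: [0.5]})

# Nearest x where the response probability is approximately 0.75.
fval, x_at_y = model.inv_query(y=0.75, probability_space=True, n_samples=1000)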
- get_jnd(grid=None, cred_level=None, intensity_dim=-1, confsamps=500, method='step')[source]¶
Calculate the JND.
Note that the JND can have multiple plausible definitions outside of the linear case, so we provide options for how to compute it. For method="step", we report how far one needs to move in stimulus space to move 1 unit up in latent space (the conventional understanding of the JND). For method="taylor", we report the local derivative, which maps to a first-order Taylor expansion of the latent function; this is a formal generalization of the JND as defined in Weber's law. Both definitions are equivalent for linear psychometric functions.
- Parameters
grid (Optional[np.ndarray], optional) – Mesh grid over which to find the JND. Defaults to a square grid whose size is determined by aepsych.utils.dim_grid.
cred_level (float, optional) – Credible level for computing an interval. Defaults to None, computing no interval.
intensity_dim (int, optional) – Dimension over which to compute the JND. Defaults to -1.
confsamps (int, optional) – Number of posterior samples to use for computing the credible interval. Defaults to 500.
method (str, optional) – “taylor” or “step” method (see docstring). Defaults to “step”.
self (ModelProtocol) –
- Raises
RuntimeError – for passing an unknown method.
- Returns
Either the mean JND, or a (median, lower, upper) tuple of the JND posterior.
- Return type
Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor, torch.Tensor]]
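Example (a sketch, assuming model is already fit):

# Point estimate of the JND over the default grid, using the "step" method.
jnd = model.get_jnd(method="step")

# Median with a 95% credible interval, using the "taylor" method.
median, lower, upper = model.get_jnd(cred_level=0.95, confsamps=500, method="taylor")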
- dim_grid(gridsize=30)[source]¶
- Parameters
self (ModelProtocol) –
gridsize (int) –
- Return type
Tensor
- set_train_data(inputs=None, targets=None, strict=False)[source]¶
- Parameters
inputs (torch.Tensor) – The new training inputs.
targets (torch.Tensor) – The new training targets.
strict (bool) – Ignored; present for compatibility with input transformers. Defaults to False. TODO: actually use this arg or change input transforms to not require it.
aepsych.models.derivative_gp module¶
- class aepsych.models.derivative_gp.MixedDerivativeVariationalGP(train_x, train_y, inducing_points, scales=1.0, mean_module=None, covar_module=None, fixed_prior_mean=None)[source]¶
Bases: ApproximateGP, GPyTorchModel
A variational GP with mixed derivative observations.
For more on GPs with derivative observations, see e.g. Riihimäki & Vehtari (2010).
References
- Riihimäki, J., & Vehtari, A. (2010). Gaussian processes with monotonicity information. Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), JMLR W&CP 9, 645–652.
Initialize MixedDerivativeVariationalGP
- Parameters
train_x (torch.Tensor) – Training x points. The last column of x is the derivative indicator: 0 if it is an observation of f(x), and i if it is an observation of df/dx_i.
train_y (torch.Tensor) – Training y points
inducing_points (torch.Tensor) – Inducing points to use
scales (Union[torch.Tensor, float], optional) – Typical scale of each dimension of input space (this is used to set the lengthscale prior). Defaults to 1.0.
mean_module (Mean, optional) – A mean class that supports derivative indexes as the final dim. Defaults to a constant mean.
covar_module (Kernel, optional) – A covariance kernel class that supports derivative indexes as the final dim. Defaults to RBF kernel.
fixed_prior_mean (float, optional) – A prior mean value to use with the constant mean. Often setting this to the target threshold speeds up experiments. Defaults to None, in which case the mean will be inferred.
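Example (a construction sketch; shapes and data are illustrative): with d input dimensions, each row of train_x carries d features plus the final derivative-indicator column.

import torch
from aepsych.models.derivative_gp import MixedDerivativeVariationalGP

d = 2
x = torch.rand(20, d)
deriv_ind = torch.zeros(20, 1)      # 0: observation of f(x)
deriv_ind[10:] = 1.0                # 1: observation of df/dx_1
train_x = torch.cat([x, deriv_ind], dim=1)
train_y = torch.randn(20)

model = MixedDerivativeVariationalGP(
    train_x=train_x,
    train_y=train_y,
    inducing_points=train_x[:10].clone(),
    scales=torch.ones(d),
)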
aepsych.models.gp_classification module¶
- class aepsych.models.gp_classification.GPClassificationModel(lb, ub, dim=None, mean_module=None, covar_module=None, likelihood=None, inducing_size=100, max_fit_time=None, inducing_point_method='auto')[source]¶
Bases: AEPsychMixin, ApproximateGP
Probit-GP model with variational inference.
From a conventional ML perspective this is a GP Classification model, though in the psychophysics context it can also be thought of as a nonlinear generalization of the standard linear model for 1AFC or yes/no trials.
For more on variational inference, see e.g. https://docs.gpytorch.ai/en/v1.1.1/examples/04_Variational_and_Approximate_GPs/
Initialize the GP Classification model
- Parameters
lb (Union[numpy.ndarray, torch.Tensor]) – Lower bounds of the parameters.
ub (Union[numpy.ndarray, torch.Tensor]) – Upper bounds of the parameters.
dim (int, optional) – The number of dimensions in the parameter space. If None, it is inferred from the size of lb and ub.
mean_module (gpytorch.means.Mean, optional) – GP mean class. Defaults to a constant with a normal prior.
covar_module (gpytorch.kernels.Kernel, optional) – GP covariance kernel class. Defaults to scaled RBF with a gamma prior.
likelihood (gpytorch.likelihood.Likelihood, optional) – The likelihood function to use. If None, defaults to a Bernoulli likelihood.
inducing_size (int) – Number of inducing points. Defaults to 100.
max_fit_time (float, optional) – The maximum amount of time, in seconds, to spend fitting the model. If None, there is no limit to the fitting time.
inducing_point_method (string) – The method to use to select the inducing points. Defaults to “auto”. If “sobol”, a number of Sobol points equal to inducing_size will be selected. If “pivoted_chol”, selects points based on the pivoted Cholesky heuristic. If “kmeans++”, selects points by performing kmeans++ clustering on the training data. If “auto”, tries to determine the best method automatically.
- outcome_type = 'single_probit'¶
- classmethod from_config(config)[source]¶
Alternate constructor for GPClassification model.
This is used when we recursively build a full sampling strategy from a configuration. TODO: document how this works in some tutorial.
- Parameters
config (Config) – A configuration containing keys/values matching this class
- Returns
Configured class instance.
- Return type
GPClassificationModel
- fit(train_x, train_y, warmstart_hyperparams=False, warmstart_induc=False, **kwargs)[source]¶
Fit underlying model.
- Parameters
train_x (torch.Tensor) – Inputs.
train_y (torch.LongTensor) – Responses.
warmstart_hyperparams (bool) – Whether to reuse the previous hyperparameters (True) or fit from scratch (False). Defaults to False.
warmstart_induc (bool) – Whether to reuse the previous inducing points or fit from scratch (False). Defaults to False.
- Return type
None
- sample(x, num_samples)[source]¶
Sample from underlying model.
- Parameters
x (torch.Tensor) – Points at which to sample.
num_samples (int) – Number of samples to return.
Additional keyword arguments are ignored.
- Returns
Posterior samples [num_samples x dim]
- Return type
torch.Tensor
- predict(x, probability_space=False)[source]¶
Query the model for posterior mean and variance.
- Parameters
x (torch.Tensor) – Points at which to predict from the model.
probability_space (bool, optional) – Return outputs in units of response probability instead of latent function value. Defaults to False.
- Returns
Posterior mean and variance at the queried points.
- Return type
Tuple[np.ndarray, np.ndarray]
- update(train_x, train_y)[source]¶
Perform a warm-start update of the model from previous fit.
- Parameters
train_x (Tensor) –
train_y (Tensor) –
- training: bool¶
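Example (a minimal fit/predict sketch; the binary responses here are random placeholders):

import torch
from aepsych.models import GPClassificationModel

model = GPClassificationModel(
    lb=torch.tensor([0.0, 0.0]),
    ub=torch.tensor([1.0, 1.0]),
    inducing_size=50,
)

train_x = torch.rand(40, 2)
train_y = torch.bernoulli(torch.full((40,), 0.5))
model.fit(train_x, train_y)

x_new = torch.rand(5, 2)
mean, var = model.predict(x_new, probability_space=True)
samples = model.sample(x_new, num_samples=100)  # shape [100, 5]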
aepsych.models.monotonic_rejection_gp module¶
- class aepsych.models.monotonic_rejection_gp.MonotonicRejectionGP(monotonic_idxs, lb, ub, dim=None, mean_module=None, covar_module=None, likelihood=None, fixed_prior_mean=None, num_induc=25, num_samples=250, num_rejection_samples=5000)[source]¶
Bases: AEPsychMixin, ApproximateGP
A monotonic GP using rejection sampling.
This takes the same insight as in e.g. Riihimäki & Vehtari 2010 (that the derivative of a GP is likewise a GP) but instead of approximately optimizing the likelihood of the model using EP, we optimize an unconstrained model by VI and then draw monotonic samples by rejection sampling.
References
- Riihimäki, J., & Vehtari, A. (2010). Gaussian processes with monotonicity information.
Journal of Machine Learning Research, 9, 645–652.
Initialize MonotonicRejectionGP.
- Parameters
likelihood (str) – Link function and likelihood. Can be 'probit-bernoulli' or 'identity-gaussian'.
monotonic_idxs (List[int]) – List of which columns of x should be given monotonicity constraints.
fixed_prior_mean (Optional[float], optional) – Fixed prior mean. If classification, should be the prior classification probability.
covar_module (Optional[Kernel], optional) – Covariance kernel to use (default: scaled RBF).
mean_module (Optional[Mean], optional) – Mean module to use (default: constant mean).
num_induc (int, optional) – Number of inducing points for variational GP. Defaults to 25.
num_samples (int, optional) – Number of samples for estimating posterior on predict or acquisition function evaluation. Defaults to 250.
num_rejection_samples (int, optional) – Number of samples used for rejection sampling. Defaults to 5000.
acqf (MonotonicMCAcquisition, optional) – Acquisition function to use for querying points. Defaults to MonotonicMCLSE.
objective (Optional[MCAcquisitionObjective], optional) – Transformation of GP to apply before computing acquisition function. Defaults to identity transform for gaussian likelihood, probit transform for probit-bernoulli.
extra_acqf_args (Optional[Dict[str, object]], optional) – Additional arguments to pass into the acquisition function. Defaults to None.
lb (Union[np.ndarray, torch.Tensor]) –
ub (Union[np.ndarray, torch.Tensor]) –
dim (Optional[int]) –
- outcome_type = 'single_probit'¶
- fit(train_x, train_y, **kwargs)[source]¶
Fit the model
- Parameters
train_x (Tensor) – Training x points
train_y (Tensor) – Training y points. Should be (n x 1).
- Return type
None
- update(train_x, train_y, warmstart=True)[source]¶
Update the model with new data.
Expects the full set of data, not the incremental new data.
- Parameters
train_x (Tensor) – Train X.
train_y (Tensor) – Train Y. Should be (n x 1).
warmstart (bool) – If True, warm-start model fitting with current parameters.
- Return type
None
- sample(X, num_samples=None, num_rejection_samples=None)[source]¶
Sample from the monotonic GP.
- Parameters
X (Tensor) – Tensor of n points at which to sample.
num_samples (int, optional) – How many samples to draw (default: self.num_samples).
num_rejection_samples (int, optional) – How many rejection samples to use (default: self.num_rejection_samples).
- Returns
A Tensor of shape [n_samp, n].
- Return type
Tensor
- predict(X, probability_space=False)[source]¶
Predict.
- Parameters
X (Tensor) – Tensor of n points at which to predict.
probability_space (bool) – Return outputs in probability space instead of latent function space. Defaults to False.
- Returns
Tuple (f, var) where f is (n,) and var is (n,).
- Return type
Tuple[Tensor, Tensor]
- forward(x)[source]¶
Evaluate the GP.
- Parameters
x (torch.Tensor) – Tensor of points at which the GP should be evaluated.
- Returns
Distribution object holding the mean and covariance at x.
- Return type
gpytorch.distributions.MultivariateNormal
- training: bool¶
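Example (a sketch; the data are random placeholders, and the string likelihood follows the docstring above):

import torch
from aepsych.models import MonotonicRejectionGP

# Response assumed monotonic in input dimension 1.
model = MonotonicRejectionGP(
    monotonic_idxs=[1],
    lb=torch.tensor([0.0, 0.0]),
    ub=torch.tensor([1.0, 1.0]),
    likelihood="probit-bernoulli",
)

train_x = torch.rand(30, 2)
train_y = torch.bernoulli(torch.full((30, 1), 0.5))  # (n x 1), as required
model.fit(train_x, train_y)

f, var = model.predict(torch.rand(5, 2))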
Module contents¶
- class aepsych.models.GPClassificationModel¶
Alias of aepsych.models.gp_classification.GPClassificationModel, documented above.
- class aepsych.models.MonotonicRejectionGP¶
Alias of aepsych.models.monotonic_rejection_gp.MonotonicRejectionGP, documented above.
- class aepsych.models.GPRegressionModel(lb, ub, dim=None, mean_module=None, covar_module=None, likelihood=None, max_fit_time=None)[source]¶
Bases: AEPsychMixin, ExactGP
GP Regression model for continuous outcomes, using exact inference.
Initialize the GP regression model
- Parameters
lb (Union[numpy.ndarray, torch.Tensor]) – Lower bounds of the parameters.
ub (Union[numpy.ndarray, torch.Tensor]) – Upper bounds of the parameters.
dim (int, optional) – The number of dimensions in the parameter space. If None, it is inferred from the size of lb and ub.
mean_module (gpytorch.means.Mean, optional) – GP mean class. Defaults to a constant with a normal prior.
covar_module (gpytorch.kernels.Kernel, optional) – GP covariance kernel class. Defaults to scaled RBF with a gamma prior.
likelihood (gpytorch.likelihood.Likelihood, optional) – The likelihood function to use. If None, defaults to a Gaussian likelihood.
max_fit_time (float, optional) – The maximum amount of time, in seconds, to spend fitting the model. If None, there is no limit to the fitting time.
- num_inputs = 1¶
- outcome_type = 'continuous'¶
- classmethod from_config(config)[source]¶
Alternate constructor for GP regression model.
This is used when we recursively build a full sampling strategy from a configuration. TODO: document how this works in some tutorial.
- Parameters
config (Config) – A configuration containing keys/values matching this class
- Returns
Configured class instance.
- Return type
GPRegressionModel
- fit(train_x, train_y, **kwargs)[source]¶
Fit underlying model.
- Parameters
train_x (torch.Tensor) – Inputs.
train_y (torch.Tensor) – Responses (continuous outcomes).
- Return type
None
- sample(x, num_samples)[source]¶
Sample from underlying model.
- Parameters
x (torch.Tensor) – Points at which to sample.
num_samples (int) – Number of samples to return.
Additional keyword arguments are ignored.
- Returns
Posterior samples [num_samples x dim]
- Return type
torch.Tensor
- update(train_x, train_y)[source]¶
Perform a warm-start update of the model from previous fit.
- Parameters
train_x (Tensor) –
train_y (Tensor) –
- predict(x, **kwargs)[source]¶
Query the model for posterior mean and variance.
- Parameters
x (torch.Tensor) – Points at which to predict from the model.
- Returns
Posterior mean and variance at the queried points.
- Return type
Tuple[np.ndarray, np.ndarray]
- training: bool¶
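Example (a sketch with synthetic 1-D data):

import torch
from aepsych.models import GPRegressionModel

model = GPRegressionModel(lb=torch.tensor([0.0]), ub=torch.tensor([1.0]))

train_x = torch.rand(25, 1)
train_y = torch.sin(6 * train_x).squeeze(-1) + 0.1 * torch.randn(25)
model.fit(train_x, train_y)

x_new = torch.linspace(0, 1, 10).unsqueeze(-1)
mean, var = model.predict(x_new)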
- class aepsych.models.PairwiseProbitModel(lb, ub, dim=None, covar_module=None, max_fit_time=None)[source]¶
Bases: PairwiseGP, AEPsychMixin
A probit-likelihood GP with a Laplace approximation that learns from pairwise comparison data. By default it uses a scaled RBF kernel.
- Parameters
datapoints – A batch_shape x n x d tensor of training features.
comparisons – A batch_shape x m x 2 tensor of training comparisons; comparisons[i] is a noisy indicator that the utility of datapoint comparisons[i, 0] is greater than that of datapoint comparisons[i, 1].
covar_module (Optional[Kernel]) – Covariance module.
input_transform – An input transform that is applied in the model’s forward pass.
lb (Union[ndarray, Tensor]) –
ub (Union[ndarray, Tensor]) –
dim (Optional[int]) –
max_fit_time (Optional[float]) –
- outcome_type = 'pairwise_probit'¶
- update(train_x, train_y, warmstart=True)[source]¶
Perform a warm-start update of the model from previous fit.
- Parameters
train_x (Tensor) –
train_y (Tensor) –
warmstart (bool) –
- training: bool¶
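Example (a construction-and-update sketch; the data shapes below are assumptions based on the PairwiseGP conventions quoted above, not a documented recipe):

import torch
from aepsych.models import PairwiseProbitModel

model = PairwiseProbitModel(lb=torch.tensor([0.0]), ub=torch.tensor([1.0]))

# Assumed shapes: n x d stimulus features and one binary choice per comparison.
train_x = torch.rand(20, 1)
train_y = torch.randint(0, 2, (10,)).float()
model.update(train_x, train_y, warmstart=True)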