aepsych.strategy module

class aepsych.strategy.Strategy(generator, lb, ub, stimuli_per_trial, outcome_types, dim=None, min_total_tells=0, min_asks=0, model=None, refit_every=1, min_total_outcome_occurrences=1, max_asks=None, keep_most_recent=None, min_post_range=None, name='', run_indefinitely=False)[source]

Bases: object

Object that combines models and generators to generate points to sample.

Initialize the strategy object.

  • generator (AEPsychGenerator) – The generator object that determines how points are sampled.

  • lb (Union[numpy.ndarray, torch.Tensor]) – Lower bounds of the parameters.

  • ub (Union[numpy.ndarray, torch.Tensor]) – Upper bounds of the parameters.

  • dim (int, optional) – The number of dimensions in the parameter space. If None, it is inferred from the size of lb and ub.

  • min_total_tells (int) – The minimum number of total observations needed to complete this strategy.

  • min_asks (int) – The minimum number of points that should be generated from this strategy.

  • model (ModelProtocol, optional) – The AEPsych model of the data.

  • refit_every (int) – How often to refit the model from scratch.

  • min_total_outcome_occurrences (int) – The minimum number of total observations needed for each outcome before the strategy will finish. Defaults to 1 (i.e., for binary outcomes, there must be at least one “yes” trial and one “no” trial).

  • max_asks (int, optional) – The maximum number of trials to generate using this strategy. If None, there is no upper bound (default).

  • keep_most_recent (int, optional) – Experimental. The number of most recent data points that the model will be fitted on. This may be useful for discarding noisy data from trials early in the experiment that are not as informative as data collected from later trials. When None, the model is fitted on all data.

  • min_post_range (float, optional) – Experimental. The required difference between the posterior’s minimum and maximum value in probability space before the strategy will finish. Ignored if None (default).

  • name (str) – The name of the strategy. Defaults to the empty string.

  • run_indefinitely (bool) – If true, the strategy will run indefinitely until finish() is explicitly called. Other stopping criteria will be ignored. Defaults to False.

  • stimuli_per_trial (int) – The number of stimuli shown per trial (e.g., 1 for single-stimulus tasks, 2 for pairwise tasks).

  • outcome_types (Sequence[Type[str]]) – The types of outcomes recorded by this strategy (e.g., “binary”).

normalize_inputs(x, y)[source]

Converts inputs into the normalized format used by this strategy.

  • x (np.ndarray) – Training inputs.

  • y (np.ndarray) – Training outputs.


Returns:

  • x (np.ndarray) – training inputs, normalized

  • y (np.ndarray) – training outputs, normalized

  • n (int) – number of observations
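The exact implementation lives in aepsych; a minimal sketch of the documented behavior (coercing new observations into consistent array shapes and appending them to previously collected data) might look like the following. The `dim`, `prev_x`, and `prev_y` parameter names here are hypothetical, added only to make the sketch self-contained:

```python
import numpy as np

def normalize_inputs(x, y, dim=2, prev_x=None, prev_y=None):
    """Sketch: coerce new observations into (n, dim) inputs and (n,)
    outputs, then append them to any previously collected data."""
    x = np.atleast_2d(np.asarray(x, dtype=float)).reshape(-1, dim)
    y = np.asarray(y, dtype=float).reshape(-1)
    if prev_x is not None:
        x = np.vstack([prev_x, x])
        y = np.concatenate([prev_y, y])
    n = len(y)
    return x, y, n
```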

gen(*args, **kwargs)[source]
get_max(*args, **kwargs)[source]
get_min(*args, **kwargs)[source]
inv_query(*args, **kwargs)[source]
predict(*args, **kwargs)[source]
get_jnd(*args, **kwargs)[source]
sample(*args, **kwargs)[source]
property finished
property can_fit
property n_trials
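The finished property combines the stopping criteria described in the constructor parameters. A simplified sketch of that logic, assuming only the ask/tell counts documented above (the real property additionally checks min_total_outcome_occurrences and min_post_range):

```python
def is_finished(n_asks, n_tells, min_asks, min_total_tells,
                max_asks=None, run_indefinitely=False, finish_called=False):
    """Sketch of the documented stopping rules: run_indefinitely defers
    everything to an explicit finish() call; max_asks is a hard cap;
    otherwise both minimum-ask and minimum-tell counts must be met."""
    if run_indefinitely:
        return finish_called
    if max_asks is not None and n_asks >= max_asks:
        return True
    return n_asks >= min_asks and n_tells >= min_total_tells
```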
add_data(x, y)[source]
classmethod from_config(config, name)[source]
  • config (Config) –

  • name (str) –

class aepsych.strategy.SequentialStrategy(strat_list)[source]

Bases: object

Runs a sequence of strategies defined by its config

All getter methods defer to the currently active strategy.


strat_list (list[Strategy]) – The ordered list of strategies to run.
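A toy sketch of the deferral behavior, assuming only the finished/gen interface documented above: advance to the next strategy in the list once the current one reports finished. Both classes here are hypothetical stand-ins, not the aepsych implementation:

```python
class ToyStrategy:
    """Hypothetical stand-in exposing a finished/gen interface."""
    def __init__(self, min_asks):
        self.min_asks = min_asks
        self.asks = 0

    @property
    def finished(self):
        return self.asks >= self.min_asks

    def gen(self, num_points=1):
        self.asks += num_points
        return [0.5] * num_points


class SequentialSketch:
    """Sketch of SequentialStrategy's deferral: all calls go to the
    current strategy, moving past finished ones first."""
    def __init__(self, strat_list):
        self.strat_list = strat_list
        self._idx = 0

    @property
    def _current(self):
        return self.strat_list[self._idx]

    def gen(self, num_points=1):
        # Skip over finished strategies before generating.
        while self._current.finished and self._idx + 1 < len(self.strat_list):
            self._idx += 1
        return self._current.gen(num_points)
```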

gen(num_points=1, **kwargs)[source]

num_points (int) – The number of points to generate. Defaults to 1.

property finished
add_data(x, y)[source]
classmethod from_config(config)[source]

config (Config) –
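A representative config for from_config might look like the following INI fragment, which defines a Sobol initialization strategy followed by a model-based optimization strategy. The section and option names are illustrative and should be checked against the aepsych version in use:

```ini
[common]
lb = [0, 0]
ub = [1, 1]
stimuli_per_trial = 1
outcome_types = [binary]
strategy_names = [init_strat, opt_strat]

[init_strat]
min_asks = 10
generator = SobolGenerator

[opt_strat]
min_asks = 20
generator = OptimizeAcqfGenerator
model = GPClassificationModel
```

SequentialStrategy.from_config would then build one Strategy per entry in strategy_names and run them in order.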