aepsych.kernels

Submodules

aepsych.kernels.rbf_partial_grad module

class aepsych.kernels.rbf_partial_grad.RBFKernelPartialObsGrad(ard_num_dims=None, batch_shape=torch.Size([]), active_dims=None, lengthscale_prior=None, lengthscale_constraint=None, eps=1e-06, **kwargs)[source]

Bases: gpytorch.kernels.rbf_kernel_grad.RBFKernelGrad

An RBF kernel over observations of f, and partial/non-overlapping observations of the gradient of f.

gpytorch.kernels.rbf_kernel_grad assumes a block structure in which every partial derivative is observed at the same set of points at which f is observed. This class generalizes that by allowing f and any subset of the derivatives of f to be observed at different sets of points.

The final column of x1 and x2 needs to be an index that identifies what is observed at that point. It should be 0 if this observation is of f, and i if it is of df/dxi.
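For illustration, here is a minimal sketch of this index-column convention. The data, shapes, and variable names are hypothetical, not part of the API, and the shape comment simply follows the forward() size conventions documented below:

    import torch

    from aepsych.kernels.rbf_partial_grad import RBFKernelPartialObsGrad

    # Hypothetical data: f observed at 3 points in 2-d, df/dx1 observed at 2 other points.
    x_f = torch.rand(3, 2)  # locations where f is observed
    x_g = torch.rand(2, 2)  # locations where df/dx1 is observed

    # Append the index column: 0 marks an observation of f, i marks an observation of df/dxi.
    x1 = torch.cat(
        [
            torch.cat([x_f, torch.zeros(3, 1)], dim=-1),  # index 0 -> f
            torch.cat([x_g, torch.ones(2, 1)], dim=-1),   # index 1 -> df/dx1
        ],
        dim=0,
    )

    # Assumption: ard_num_dims counts only the data dimensions, not the trailing index column.
    kernel = RBFKernelPartialObsGrad(ard_num_dims=2)

    # Covariance across the 5 mixed observations (f at 3 points, df/dx1 at 2 points).
    K = kernel.forward(x1, x1)
    print(K.shape)  # expected: torch.Size([5, 5])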

Initializes internal Module state, shared by both nn.Module and ScriptModule.

Parameters
  • ard_num_dims (Optional[int]) –

  • batch_shape (Optional[torch.Size]) –

  • active_dims (Optional[Tuple[int, ...]]) –

  • lengthscale_prior (Optional[gpytorch.priors.prior.Prior]) –

  • lengthscale_constraint (Optional[gpytorch.constraints.constraints.Interval]) –

  • eps (Optional[float]) –

forward(x1, x2, diag=False, **params)[source]

Computes the covariance between x1 and x2. This method should be implemented by all Kernel subclasses.

Parameters
  • x1 (torch.Tensor) – First set of data; n x d or b x n x d.

  • x2 (torch.Tensor) – Second set of data; m x d or b x m x d.

  • diag (bool) – If True, compute only the diagonal of the kernel rather than the full kernel matrix. Default: False.

  • last_dim_is_batch (bool, optional) – If True, treat the last dimension of the data as another batch dimension (useful for additive structure over the dimensions). Default: False.

  • params (Any) –

Returns

Tensor or gpytorch.lazy.LazyTensor.

The exact size depends on the kernel’s evaluation mode:

  • full_covar: n x m or b x n x m

  • full_covar with last_dim_is_batch=True: k x n x m or b x k x n x m

  • diag: n or b x n

  • diag with last_dim_is_batch=True: k x n or b x k x n

Return type

torch.Tensor
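A short, hedged illustration of these size conventions (the tensors, shapes, and the index values in the last column are hypothetical):

    import torch

    from aepsych.kernels.rbf_partial_grad import RBFKernelPartialObsGrad

    kernel = RBFKernelPartialObsGrad(ard_num_dims=2)

    # Last column is the derivative index: 0 -> f, 1 -> df/dx1.
    x1 = torch.cat([torch.rand(4, 2), torch.zeros(4, 1)], dim=-1)  # 4 observations of f
    x2 = torch.cat([torch.rand(3, 2), torch.ones(3, 1)], dim=-1)   # 3 observations of df/dx1

    full = kernel.forward(x1, x2)             # full_covar: n x m -> shape (4, 3)
    diag = kernel.forward(x1, x1, diag=True)  # diag: n -> shape (4,)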

num_outputs_per_input(x1, x2)[source]

Returns the number of outputs produced per input (default 1). If x1 is of size n x d and x2 is of size m x d, the resulting kernel will be of size (n * num_outputs_per_input) x (m * num_outputs_per_input). Default: 1.

Parameters
  • x1 (torch.Tensor) –

  • x2 (torch.Tensor) –

Return type

int
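Because the derivative index travels with each row of the input, each row corresponds to a single observation here. A small hedged check (inputs are hypothetical):

    import torch

    from aepsych.kernels.rbf_partial_grad import RBFKernelPartialObsGrad

    kernel = RBFKernelPartialObsGrad(ard_num_dims=2)
    x1 = torch.cat([torch.rand(5, 2), torch.zeros(5, 1)], dim=-1)
    x2 = torch.cat([torch.rand(4, 2), torch.ones(4, 1)], dim=-1)

    # With one output per indexed observation, the kernel matrix is 5 x 4.
    print(kernel.num_outputs_per_input(x1, x2))  # expected: 1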

training: bool

Module contents