# Tensor Ops

## `log_signatures_pytorch.tensor_ops.add_tensor_product(x, y, z)`

Affine tensor product x + y ⊗ z in the tensor algebra.

Computes the sum of tensor x and the tensor product of y and z.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | First tensor operand. | required |
| y | Tensor | Second tensor operand (first factor of the product). | required |
| z | Tensor | Third tensor operand (second factor of the product). | required |

Returns:

| Type | Description |
|---|---|
| Tensor | Result of x + y ⊗ z. |
Examples:
>>> import torch
>>> from log_signatures_pytorch.tensor_ops import add_tensor_product
>>>
>>> x = torch.tensor([1.0, 2.0])
>>> y = torch.tensor([3.0, 4.0])
>>> z = torch.tensor([5.0, 6.0])
>>> result = add_tensor_product(x, y, z)
>>> result.shape
torch.Size([2, 2])
Source code in src/log_signatures_pytorch/tensor_ops.py
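For intuition, the degree-1 case can be sketched with plain PyTorch ops (a hypothetical re-implementation for illustration, not the library's actual source):

```python
import torch

def add_tensor_product_sketch(x, y, z):
    # Hypothetical sketch: form y ⊗ z as an outer product, then add x,
    # relying on broadcasting of x against the (width, width) result.
    return x + torch.outer(y, z)

x = torch.tensor([1.0, 2.0])
y = torch.tensor([3.0, 4.0])
z = torch.tensor([5.0, 6.0])
result = add_tensor_product_sketch(x, y, z)
print(result.shape)  # torch.Size([2, 2])
```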
## `log_signatures_pytorch.tensor_ops.batch_add_tensor_product(x, y, z)`

Batched version of x + y ⊗ z preserving the leading batch axis.

Computes the affine tensor product while preserving the first (batch) dimension.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Tensor shaped `(batch, width)` (see example). | required |
| y | Tensor | Tensor shaped `(batch, width)` (first factor of the product). | required |
| z | Tensor | Tensor shaped `(batch, width)` (second factor of the product). | required |

Returns:

| Type | Description |
|---|---|
| Tensor | Tensor shaped `(batch, width, width)` holding x + y ⊗ z per batch element. |
Examples:
>>> import torch
>>> from log_signatures_pytorch.tensor_ops import batch_add_tensor_product
>>>
>>> x = torch.tensor([[1.0, 2.0], [3.0, 4.0]]) # (batch=2, width=2)
>>> y = torch.tensor([[5.0, 6.0], [7.0, 8.0]]) # (batch=2, width=2)
>>> z = torch.tensor([[9.0, 10.0], [11.0, 12.0]]) # (batch=2, width=2)
>>> result = batch_add_tensor_product(x, y, z)
>>> result.shape
torch.Size([2, 2, 2])
Source code in src/log_signatures_pytorch/tensor_ops.py
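A hypothetical sketch of the batched variant, assuming x already carries the product's `(batch, width, width)` shape (the hedged einsum below is for illustration only):

```python
import torch

def batch_add_tensor_product_sketch(x, y, z):
    # Hypothetical sketch: per-batch outer product y ⊗ z via einsum,
    # added to an x assumed to be shaped (batch, width, width).
    return x + torch.einsum("bi,bj->bij", y, z)

x = torch.zeros(2, 2, 2)
y = torch.tensor([[5.0, 6.0], [7.0, 8.0]])
z = torch.tensor([[9.0, 10.0], [11.0, 12.0]])
result = batch_add_tensor_product_sketch(x, y, z)
print(result.shape)  # torch.Size([2, 2, 2])
```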
## `log_signatures_pytorch.tensor_ops.batch_bch_formula(a, b, depth)`

Truncated Baker-Campbell-Hausdorff merge for batched inputs.

Computes a truncated version of the Baker-Campbell-Hausdorff formula BCH(a, b) for batched inputs in tensor-algebra coordinates.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| a | list[Tensor] | List of tensors where element i has shape `(batch, width, ..., width)` with i + 1 trailing width axes. | required |
| b | list[Tensor] | List of tensors with the same structure as `a`. | required |
| depth | int | Truncation depth for the BCH series. | required |

Returns:

| Type | Description |
|---|---|
| list[Tensor] | List of tensors matching the shapes of `a` and `b`. |
Notes:

This implementation includes only the series terms it explicitly writes: a + b for all depths, plus 1/2 [a, b] when depth >= 2. Higher-order BCH terms are not included. For richer Hall-basis truncations up to depth 4, use `HallBCH.bch`.
Examples:
>>> import torch
>>> from log_signatures_pytorch.tensor_ops import batch_bch_formula
>>>
>>> a = [
... torch.tensor([[1.0, 2.0]]), # depth 1
... torch.tensor([[[0.5, 0.3], [0.2, 0.1]]]), # depth 2
... ]
>>> b = [
... torch.tensor([[3.0, 4.0]]), # depth 1
... torch.tensor([[[0.6, 0.4], [0.3, 0.2]]]), # depth 2
... ]
>>> result = batch_bch_formula(a, b, depth=2)
>>> len(result)
2
>>> result[0].shape
torch.Size([1, 2])
>>> result[1].shape
torch.Size([1, 2, 2])
Source code in src/log_signatures_pytorch/tensor_ops.py
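Per the Notes above, the depth-2 truncation keeps only a + b and half the bracket of the degree-1 parts. A hypothetical specialization (illustrative only, not the library's source):

```python
import torch

def bch_depth2_sketch(a, b):
    # Hypothetical depth-2 BCH truncation:
    #   c1 = a1 + b1
    #   c2 = a2 + b2 + 1/2 [a1, b1]
    # The bracket of the degree-1 parts lands in degree 2.
    c1 = a[0] + b[0]
    bracket = (torch.einsum("bi,bj->bij", a[0], b[0])
               - torch.einsum("bi,bj->bij", b[0], a[0]))
    c2 = a[1] + b[1] + 0.5 * bracket
    return [c1, c2]

a = [torch.tensor([[1.0, 2.0]]), torch.tensor([[[0.5, 0.3], [0.2, 0.1]]])]
b = [torch.tensor([[3.0, 4.0]]), torch.tensor([[[0.6, 0.4], [0.3, 0.2]]])]
result = bch_depth2_sketch(a, b)
print(result[0].shape, result[1].shape)  # torch.Size([1, 2]) torch.Size([1, 2, 2])
```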
## `log_signatures_pytorch.tensor_ops.batch_lie_brackets(x, y)`

Batched Lie bracket preserving the leading batch axis.

Computes the Lie bracket for batched tensors while preserving the first (batch) dimension.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Tensor shaped `(batch, width)`. | required |
| y | Tensor | Tensor shaped `(batch, width)`. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | Batched Lie bracket [x, y] of shape `(batch, width, width)`, preserving the batch dimension. |
Examples:
>>> import torch
>>> from log_signatures_pytorch.tensor_ops import batch_lie_brackets
>>>
>>> x = torch.tensor([[1.0, 2.0], [3.0, 4.0]]) # (batch=2, width=2)
>>> y = torch.tensor([[5.0, 6.0], [7.0, 8.0]]) # (batch=2, width=2)
>>> result = batch_lie_brackets(x, y)
>>> result.shape
torch.Size([2, 2, 2])
Source code in src/log_signatures_pytorch/tensor_ops.py
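The batched bracket can be sketched as two einsum outer products (a hypothetical re-implementation for illustration):

```python
import torch

def batch_lie_brackets_sketch(x, y):
    # Hypothetical sketch: per-batch commutator x ⊗ y - y ⊗ x.
    xy = torch.einsum("bi,bj->bij", x, y)
    yx = torch.einsum("bi,bj->bij", y, x)
    return xy - yx

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
y = torch.tensor([[5.0, 6.0], [7.0, 8.0]])
result = batch_lie_brackets_sketch(x, y)
print(result.shape)  # torch.Size([2, 2, 2])
```

Antisymmetry holds per batch element: swapping the arguments negates the result.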
## `log_signatures_pytorch.tensor_ops.batch_mult_fused_restricted_exp(z, A)`

Batched fused update of truncated tensor exponentials.

Updates a list of truncated exponential terms by multiplying with a new degree-1 element. This is used in the signature scan to update signatures incrementally as path increments are processed.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| z | Tensor | Tensor of shape `(batch, width)`: the new degree-1 element. | required |
| A | list[Tensor] | List of current exponential terms; element i has shape `(batch, width, ..., width)` with i + 1 trailing width axes. | required |

Returns:

| Type | Description |
|---|---|
| list[Tensor] | Updated list of tensors with the same shapes as `A`. |
Examples:
>>> import torch
>>> from log_signatures_pytorch.tensor_ops import batch_mult_fused_restricted_exp
>>>
>>> z = torch.tensor([[1.0, 2.0]]) # (batch=1, width=2)
>>> A = [
... torch.tensor([[1.0, 2.0]]), # depth 1: (batch=1, width=2)
... torch.tensor([[[0.5, 0.3], [0.2, 0.1]]]), # depth 2: (batch=1, width, width)
... ]
>>> result = batch_mult_fused_restricted_exp(z, A)
>>> len(result)
2
>>> result[0].shape
torch.Size([1, 2])
>>> result[1].shape
torch.Size([1, 2, 2])
Source code in src/log_signatures_pytorch/tensor_ops.py
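Under the assumption that this update follows Chen's identity S_new = S_old ⊗ exp(z), a hypothetical depth-2 specialization can be sketched as follows (the exact update rule in the library may differ):

```python
import torch

def fused_update_depth2_sketch(z, A):
    # Hypothetical Chen-style update, truncated at depth 2:
    #   new_1 = A_1 + z
    #   new_2 = A_2 + A_1 ⊗ z + z ⊗ z / 2
    a1, a2 = A
    new1 = a1 + z
    new2 = (a2
            + torch.einsum("bi,bj->bij", a1, z)
            + 0.5 * torch.einsum("bi,bj->bij", z, z))
    return [new1, new2]

z = torch.tensor([[1.0, 2.0]])
A = [torch.tensor([[1.0, 2.0]]), torch.tensor([[[0.5, 0.3], [0.2, 0.1]]])]
result = fused_update_depth2_sketch(z, A)
print(result[0].shape, result[1].shape)  # torch.Size([1, 2]) torch.Size([1, 2, 2])
```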
## `log_signatures_pytorch.tensor_ops.batch_restricted_exp(input, depth)`

Batched truncated tensor exponential with a shared batch axis.

Computes the truncated tensor exponential exp(input) - 1 for batched inputs, returning the homogeneous component at each depth level. This enables efficient signature scans over a batch of paths.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | Tensor | Tensor of shape `(batch, width)`. | required |
| depth | int | Truncation depth (>= 1). | required |

Returns:

| Type | Description |
|---|---|
| list[Tensor] | List of length `depth`; element i has shape `(batch, width, ..., width)` with i + 1 trailing width axes. |
Examples:
>>> import torch
>>> from log_signatures_pytorch.tensor_ops import batch_restricted_exp
>>>
>>> input_tensor = torch.tensor([[1.0, 2.0], [3.0, 4.0]]) # (batch=2, width=2)
>>> result = batch_restricted_exp(input_tensor, depth=2)
>>> len(result)
2
>>> result[0].shape # depth 1
torch.Size([2, 2])
>>> result[1].shape # depth 2
torch.Size([2, 2, 2])
Source code in src/log_signatures_pytorch/tensor_ops.py
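The degree-k component of exp(z) - 1 is z^{⊗k}/k!, which suggests a simple recurrence: each term is the previous one tensored with z and divided by k. A hypothetical sketch (not the library's source):

```python
import torch

def batch_restricted_exp_sketch(z, depth):
    # Hypothetical sketch: term_1 = z, term_k = term_{k-1} ⊗ z / k,
    # so term_k = z^{⊗k} / k!  (components of exp(z) - 1, truncated).
    terms = [z]
    batch = z.shape[0]
    for k in range(2, depth + 1):
        prev = terms[-1]
        # Broadcast z against all existing width axes of the previous term.
        z_expanded = z.view(batch, *([1] * (prev.dim() - 1)), -1)
        terms.append(prev.unsqueeze(-1) * z_expanded / k)
    return terms

z = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
result = batch_restricted_exp_sketch(z, depth=2)
print(result[0].shape, result[1].shape)  # torch.Size([2, 2]) torch.Size([2, 2, 2])
```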
## `log_signatures_pytorch.tensor_ops.batch_sequence_tensor_product(x, y)`

Tensor product preserving leading (batch, sequence) axes.

Computes the tensor product while preserving the first two dimensions (batch and sequence), allowing per-step tensor products in sequence processing.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Tensor shaped `(batch, sequence, width)`. | required |
| y | Tensor | Tensor shaped `(batch, sequence, width)`. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | Tensor shaped `(batch, sequence, width, width)`. |
Notes:

This is used by the GPU signature scan, where per-step products are formed without collapsing the time dimension.
Examples:
>>> import torch
>>> from log_signatures_pytorch.tensor_ops import batch_sequence_tensor_product
>>>
>>> x = torch.randn(2, 3, 2) # (batch=2, sequence=3, width=2)
>>> y = torch.randn(2, 3, 2) # (batch=2, sequence=3, width=2)
>>> result = batch_sequence_tensor_product(x, y)
>>> result.shape
torch.Size([2, 3, 2, 2])
Source code in src/log_signatures_pytorch/tensor_ops.py
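For degree-1 operands this reduces to a per-step outer product, sketched here with einsum (a hypothetical re-implementation for illustration):

```python
import torch

def batch_sequence_tensor_product_sketch(x, y):
    # Hypothetical sketch: outer product over the trailing width axis,
    # keeping the leading (batch, sequence) axes intact.
    return torch.einsum("bsi,bsj->bsij", x, y)

x = torch.randn(2, 3, 2)  # (batch=2, sequence=3, width=2)
y = torch.randn(2, 3, 2)
result = batch_sequence_tensor_product_sketch(x, y)
print(result.shape)  # torch.Size([2, 3, 2, 2])
```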
## `log_signatures_pytorch.tensor_ops.batch_tensor_product(x, y)`

Tensor product preserving the leading batch axis.

Computes the tensor product while preserving the first (batch) dimension, allowing batched tensor algebra operations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Tensor shaped `(batch, width)`. | required |
| y | Tensor | Tensor shaped `(batch, width)`. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | Tensor shaped `(batch, width, width)`. |
Examples:
>>> import torch
>>> from log_signatures_pytorch.tensor_ops import batch_tensor_product
>>>
>>> x = torch.tensor([[1.0, 2.0], [3.0, 4.0]]) # (batch=2, width=2)
>>> y = torch.tensor([[5.0, 6.0], [7.0, 8.0]]) # (batch=2, width=2)
>>> result = batch_tensor_product(x, y)
>>> result.shape
torch.Size([2, 2, 2])
Source code in src/log_signatures_pytorch/tensor_ops.py
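For degree-1 operands, the batched product can be sketched as a per-batch outer product (hypothetical, for illustration only):

```python
import torch

def batch_tensor_product_sketch(x, y):
    # Hypothetical sketch: per-batch outer product via einsum.
    return torch.einsum("bi,bj->bij", x, y)

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])  # (batch=2, width=2)
y = torch.tensor([[5.0, 6.0], [7.0, 8.0]])
result = batch_tensor_product_sketch(x, y)
print(result.shape)  # torch.Size([2, 2, 2])
```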
## `log_signatures_pytorch.tensor_ops.lie_brackets(x, y)`

Lie bracket [x, y] = x ⊗ y - y ⊗ x for degree-1 tensors.

Computes the Lie bracket (commutator) of two degree-1 tensors, which is the antisymmetric part of their tensor product.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | First tensor operand (degree-1). | required |
| y | Tensor | Second tensor operand (degree-1). | required |

Returns:

| Type | Description |
|---|---|
| Tensor | Lie bracket [x, y] = x ⊗ y - y ⊗ x. |
Examples:
>>> import torch
>>> from log_signatures_pytorch.tensor_ops import lie_brackets
>>>
>>> x = torch.tensor([1.0, 2.0])
>>> y = torch.tensor([3.0, 4.0])
>>> result = lie_brackets(x, y)
>>> result.shape
torch.Size([2, 2])
>>> # Result is antisymmetric: [x, y] = -[y, x]
>>> lie_brackets(y, x) + result
tensor([[0., 0.],
[0., 0.]])
Source code in src/log_signatures_pytorch/tensor_ops.py
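The unbatched bracket amounts to a difference of two outer products; a hypothetical sketch:

```python
import torch

def lie_brackets_sketch(x, y):
    # Hypothetical sketch: commutator of two degree-1 tensors,
    # i.e. the antisymmetric part of the outer product (times 2).
    return torch.outer(x, y) - torch.outer(y, x)

x = torch.tensor([1.0, 2.0])
y = torch.tensor([3.0, 4.0])
result = lie_brackets_sketch(x, y)
print(result)  # antisymmetric: [[0, -2], [2, 0]]
```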
## `log_signatures_pytorch.tensor_ops.tensor_product(x, y)`

Compute the tensor product x ⊗ y with no shared leading axes.

For tensors x in V^{⊗p} and y in V^{⊗q}, returns x ⊗ y in V^{⊗(p+q)} by forming the outer product over their trailing axes. This is the multiplicative structure used throughout the tensor-algebra signature recurrences.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | First tensor operand. | required |
| y | Tensor | Second tensor operand. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | Tensor product x ⊗ y with shape equal to the concatenation of the shapes of x and y. |
Examples:
>>> import torch
>>> from log_signatures_pytorch.tensor_ops import tensor_product
>>>
>>> x = torch.tensor([1.0, 2.0])
>>> y = torch.tensor([3.0, 4.0])
>>> result = tensor_product(x, y)
>>> result.shape
torch.Size([2, 2])
>>> result
tensor([[3., 4.],
[6., 8.]])
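The general outer product over all axes can be sketched with `torch.tensordot` (a hypothetical re-implementation for illustration):

```python
import torch

def tensor_product_sketch(x, y):
    # Hypothetical sketch: for x in V^{⊗p} and y in V^{⊗q},
    # tensordot with dims=0 contracts nothing, i.e. it forms the
    # full outer product living in V^{⊗(p+q)}.
    return torch.tensordot(x, y, dims=0)

x = torch.tensor([1.0, 2.0])
y = torch.tensor([3.0, 4.0])
result = tensor_product_sketch(x, y)
print(result)  # tensor([[3., 4.], [6., 8.]])
```

Because no axes are shared, the result's shape is simply `x.shape + y.shape`, so products of higher-degree tensors accumulate width axes.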