Linear Algebra
A module for linear algebra and tensor operations.
Implementation Notes
Tensor operations are unoptimized. For a general-purpose tensor library, I recommend PyTorch (or LibTorch for C++) instead.
Theoretical Motivation
Given the standard ring $(\mathbb{R}, +, \cdot)$, we can represent all potential operations as a composition of the binary operators $+$ or $\cdot$, where $+, \cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$. Given a vector space $\mathbb{R}^n$, we can represent a vector as $v = v_1 e_1 + \dots + v_n e_n$, where $v$ is some combination of the basis vectors $e_i$ with coefficients $v_i \in \mathbb{R}$. Given a collection of $m$ vectors of size $n$, we define a matrix $M \in \mathbb{R}^{m \times n}$. We describe the structure of a tensor-space of order $k$ on $\mathbb{R}$ as $\mathbb{R}^{n_1 \times n_2 \times \cdots \times n_k}$.
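To make the order/shape correspondence concrete, here is a minimal sketch. It assumes the `pysiclib.linalg.Tensor` constructor listed in the Documentation section below (a flat data buffer plus an explicit shape); the exact runtime behavior of `get_shape` is taken from the stub, not verified here.

```python
from pysiclib.linalg import Tensor

# A vector in R^4: an order-1 tensor with shape [4].
v = Tensor([1.0, 2.0, 3.0, 4.0], [4])

# A matrix in R^{2 x 3}: an order-2 tensor with shape [2, 3],
# i.e. a collection of 2 vectors of size 3 stored in one flat buffer.
m = Tensor([1.0, 2.0, 3.0,
            4.0, 5.0, 6.0], [2, 3])

# An order-3 tensor in R^{2 x 2 x 2}.
t = Tensor([float(x) for x in range(8)], [2, 2, 2])

print(v.get_shape())  # expected: [4]
print(m.get_shape())  # expected: [2, 3]
print(t.get_shape())  # expected: [2, 2, 2]
```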
Given the usual ring $(\mathbb{R}, +, \cdot)$, we can extend many theorems of linear algebra by allowing the following rule.

Given tensors $A \in \mathbb{R}^{n_1 \times \cdots \times n_k}$ and $B \in \mathbb{R}^{m_1 \times \cdots \times m_j}$ and a binary operation $\circ$, a necessary (but not sufficient) condition for the operation $A \circ B$ to be well defined is that, for each dimension index $i$, either $n_i = m_i$, $n_i = 1$, or $m_i = 1$, where, WLOG, $k \ge j$ and the shape of $B$ is padded with dimensions of size $1$ if $j < k$.
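This condition is the familiar broadcasting rule. As an illustration only (this helper is not part of pysiclib, and the left-padding convention is an assumption), the check can be written as:

```python
from typing import List

def broadcast_compatible(shape_a: List[int], shape_b: List[int]) -> bool:
    """Return True if two shapes satisfy the necessary condition above."""
    # Left-pad the shorter shape with 1s so both have the same order,
    # then require each pair of dimensions to be equal or contain a 1.
    order = max(len(shape_a), len(shape_b))
    a = [1] * (order - len(shape_a)) + list(shape_a)
    b = [1] * (order - len(shape_b)) + list(shape_b)
    return all(n == m or n == 1 or m == 1 for n, m in zip(a, b))

# Element-wise operations on these shape pairs can be well defined ...
assert broadcast_compatible([2, 3], [3])          # [2, 3] against [1, 3]
assert broadcast_compatible([4, 1, 5], [4, 7, 5])
# ... but not on this one.
assert not broadcast_compatible([2, 3], [4, 3])
```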
Documentation
```python
# pysiclib.linalg.Tensor
class Tensor:
    @overload
    def __init__(self, numpy_array: numpy.ndarray[numpy.float64]) -> None: ...
    @overload
    def __init__(self, input_data: List[float], input_shape: List[int] = ...,
                 input_stride: List[int] = ..., offset: int = ...) -> None: ...
    @overload
    def __init__(self, other_view: Tensor) -> None: ...

    def binary_element_wise_op(self, arg0: Tensor,
                               arg1: Callable[[float, float], float]) -> Tensor: ...
    def deep_copy(self) -> Tensor: ...
    def fold_op(self, arg0: Callable[[float, float], float], arg1: float,
                arg2: int, arg3: bool) -> Tensor: ...
    def get_buffer(self) -> List[float]: ...
    def get_offset(self) -> int: ...
    def get_shape(self) -> List[int]: ...
    def get_stride(self) -> List[int]: ...
    def matmul(self, arg0: Tensor) -> Tensor: ...
    def slice_view(self, arg0: List[int]) -> Tensor: ...
    def squeeze(self, target_dim: int = ...) -> Tensor: ...
    def to_numpy(self) -> numpy.ndarray[numpy.float64]: ...
    def transpose(self, dim_1: int = ..., dim_2: int = ...) -> Tensor: ...
    def unitary_op(self, arg0: Callable[[float], float]) -> Tensor: ...
    def unsqueeze(self, arg0: int) -> Tensor: ...
```
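A short usage sketch of the interface above. The method names and signatures come from the stub; the numerical semantics (e.g. NumPy-style behavior of `matmul` and `binary_element_wise_op`) are assumptions, not guarantees.

```python
import numpy as np
from pysiclib.linalg import Tensor

# Construct from NumPy arrays (float64, per the stub).
a = Tensor(np.arange(6, dtype=np.float64).reshape(2, 3))
b = Tensor(np.ones((3, 2), dtype=np.float64))

# Matrix product: assumed (2, 3) x (3, 2) -> (2, 2).
c = a.matmul(b)
print(c.get_shape())  # expected: [2, 2]

# Element-wise combination with a user-supplied binary operator.
d = c.binary_element_wise_op(c, lambda x, y: x + y)

# Unary map over every element.
e = d.unitary_op(lambda x: x * 0.5)

# Transpose and round-trip back to NumPy.
print(e.transpose().to_numpy())
```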