Formulas
Supported formulas, architectures, and activations
Trainable Formulas
These formulas compile directly to PyTorch models and can be trained on your CSV data.
Linear Regression
Simple linear model
\(y = mx + b\)
Multiple Linear Regression
Multi-feature linear
\(y = \beta_0 + \beta_1 x_1 + \beta_2 x_2\)
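Both linear formulas correspond to a single linear layer in PyTorch. A minimal sketch of what the multiple-regression formula compiles to (the exact module the compiler emits may differ; this shows the equivalent model):

```python
import torch
import torch.nn as nn

# y = beta_0 + beta_1*x_1 + beta_2*x_2 maps to one linear layer with
# 2 input features and 1 output; the bias term plays the role of beta_0.
model = nn.Linear(in_features=2, out_features=1)

x = torch.randn(8, 2)                # batch of 8 samples, 2 features each
y_hat = model(x)                     # shape (8, 1)
loss = nn.MSELoss()(y_hat, torch.zeros(8, 1))
loss.backward()                      # gradients flow to weight and bias
```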
Logistic Regression
Binary classification
\(y = \sigma(w^T x + b)\)
Polynomial Regression
Curved relationships
\(y = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3\)
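Polynomial regression stays a linear model once the powers of \(x\) are expanded into features. A sketch of that idea (the feature-expansion helper here is illustrative, not necessarily how the compiler does it):

```python
import torch
import torch.nn as nn

def poly_features(x, degree=3):
    # Stack [x, x^2, x^3] so a linear layer can learn beta_1..beta_3;
    # the layer's bias plays the role of beta_0.
    return torch.cat([x ** d for d in range(1, degree + 1)], dim=1)

model = nn.Linear(3, 1)              # one weight per power of x
x = torch.randn(16, 1)
y_hat = model(poly_features(x))      # shape (16, 1)
```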
Single Layer (Sigmoid)
Neural layer with sigmoid
\(y = \sigma(Wx + b)\)
Single Layer (ReLU)
Neural layer with ReLU
\(y = \text{ReLU}(Wx + b)\)
Single Layer (Tanh)
Neural layer with tanh
\(y = \tanh(Wx + b)\)
Softmax Classifier
Multi-class classification
\(y = \text{softmax}(Wx + b)\)
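The softmax classifier is a linear layer followed by a softmax over the class dimension. A minimal sketch (layer sizes are arbitrary examples):

```python
import torch
import torch.nn as nn

# y = softmax(Wx + b): linear layer, then normalize over classes.
layer = nn.Linear(4, 3)              # 4 input features -> 3 classes
x = torch.randn(5, 4)
probs = torch.softmax(layer(x), dim=1)
# Each row of probs is a probability distribution over the 3 classes.
```

Note that for training, `nn.CrossEntropyLoss` applies the softmax internally, so the raw logits `layer(x)` are what you would feed to the loss.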
2-Layer MLP
Two layers (one hidden)
\(y = \sigma(W_2 \cdot \text{ReLU}(W_1 x + b_1) + b_2)\)
3-Layer MLP
Three layers (two hidden)
\(y = \sigma(W_3 \cdot \text{ReLU}(W_2 \cdot \text{ReLU}(W_1 x + b_1) + b_2) + b_3)\)
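The nested MLP formulas read from the inside out, which maps naturally onto a `nn.Sequential`. A sketch of the 2-layer formula (hidden width 8 is an arbitrary example):

```python
import torch
import torch.nn as nn

# y = sigma(W_2 · ReLU(W_1 x + b_1) + b_2), nested inside out:
mlp = nn.Sequential(
    nn.Linear(4, 8),   # W_1 x + b_1
    nn.ReLU(),         # ReLU(...)
    nn.Linear(8, 1),   # W_2 · (...) + b_2
    nn.Sigmoid(),      # sigma(...)
)
y = mlp(torch.randn(10, 4))   # shape (10, 1), values in (0, 1)
```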
Computable Formulas
These are evaluated symbolically via SymPy (no data needed). Supports:
- Arithmetic: \(2^{10} + 3 \times 7\)
- Algebra: solve equations, simplify, expand, factor
- Calculus: derivatives, integrals, limits, series
- Linear algebra: determinants, eigenvalues, matrix operations
- Anything SymPy can handle, with LLM fallback for complex expressions
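A quick sketch of the kinds of expressions SymPy evaluates directly, one per category above:

```python
import sympy as sp

x = sp.Symbol("x")

print(2**10 + 3 * 7)                      # arithmetic: 1045
print(sp.solve(x**2 - 4, x))              # algebra: [-2, 2]
print(sp.diff(sp.sin(x) * x, x))          # calculus: x*cos(x) + sin(x)
print(sp.Matrix([[1, 2], [3, 4]]).det())  # linear algebra: -2
```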
Architecture Support
Advanced architectures are available via the pipeline (whiteboard) mode.
Attention
Supported: \(\text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V\)
Conv1D / Conv2D
Supported: \(\text{Conv}(x)\)
LSTM
Supported: \(\text{LSTM}(x)\)
GRU
Supported: \(\text{GRU}(x)\)
Transformer Encoder
Supported: \(\text{Transformer}(x)\)
Residual Block
Supported: \(y = x + F(x)\)
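The residual formula \(y = x + F(x)\) is the simplest of these to sketch directly; a minimal version (the inner transform F shown here is an arbitrary example, not the pipeline's actual choice):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = x + F(x), where F is a small learned transform."""

    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, x):
        return x + self.f(x)   # skip connection adds the input back

block = ResidualBlock(16)
y = block(torch.randn(4, 16))  # output keeps the input shape: (4, 16)
```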
Supported Activations
sigmoid, relu, tanh, softmax, leaky_relu, elu, gelu, selu, silu
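Each supported name corresponds to a standard PyTorch activation module. One plausible mapping (illustrative only, not necessarily the compiler's internal table):

```python
import torch.nn as nn

# Assumed name -> module correspondence for the supported activations.
ACTIVATIONS = {
    "sigmoid": nn.Sigmoid(),
    "relu": nn.ReLU(),
    "tanh": nn.Tanh(),
    "softmax": nn.Softmax(dim=1),
    "leaky_relu": nn.LeakyReLU(),
    "elu": nn.ELU(),
    "gelu": nn.GELU(),
    "selu": nn.SELU(),
    "silu": nn.SiLU(),
}
```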
Tips for Successful Compilation
- Use standard notation: \(W\) for weights, \(b\) for bias, \(x\) for input
- Subscripts indicate layers: \(W_1, W_2, b_1, b_2\)
- Wrap activations clearly: \(\sigma(...)\), \(\text{ReLU}(...)\)
- For multi-layer networks, nest from inside out
- If a formula doesn't compile, the system falls back to logistic regression; check the warning banner
- Use the whiteboard mode for complex architectures (attention, residual, etc.)