Supported Formulas

What MathExec can compile, compute, and train.

Trainable Formulas

These formulas compile directly to PyTorch models and can be trained on your CSV data.

Linear Regression

Simple linear model

\(y = mx + b\)
Multiple Linear Regression

Multi-feature linear

\(y = \beta_0 + \beta_1 x_1 + \beta_2 x_2\)
Logistic Regression

Binary classification

\(y = \sigma(w^T x + b)\)
Polynomial Regression

Curved relationships

\(y = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3\)
Single Layer (Sigmoid)

Neural layer with sigmoid

\(y = \sigma(Wx + b)\)
Single Layer (ReLU)

Neural layer with ReLU

\(y = \text{ReLU}(Wx + b)\)
Single Layer (Tanh)

Neural layer with tanh

\(y = \tanh(Wx + b)\)
Softmax Classifier

Multi-class classification

\(y = \text{softmax}(Wx + b)\)
2-Layer MLP

Two layers (one hidden)

\(y = \sigma(W_2 \cdot \text{ReLU}(W_1 x + b_1) + b_2)\)
3-Layer MLP

Three layers (two hidden)

\(y = \sigma(W_3 \cdot \text{ReLU}(W_2 \cdot \text{ReLU}(W_1 x + b_1) + b_2) + b_3)\)
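As a rough sketch of what "compiles to a PyTorch model" can mean, here is how the logistic-regression formula \(y = \sigma(w^T x + b)\) maps onto a small PyTorch module. The class and names below are illustrative, not MathExec's actual generated code.

```python
import torch
import torch.nn as nn

class LogisticRegression(nn.Module):
    """Illustrative module for y = sigma(w^T x + b)."""

    def __init__(self, n_features: int):
        super().__init__()
        # nn.Linear holds both the weights w and the bias b
        self.linear = nn.Linear(n_features, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # sigmoid squashes the linear output into (0, 1) for binary classification
        return torch.sigmoid(self.linear(x))

model = LogisticRegression(n_features=3)
y = model(torch.randn(4, 3))  # batch of 4 rows, 3 features each
```

A model like this can then be trained with any of the supported optimizers and loss functions listed under Current Limits.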

Computable Formulas

These formulas are evaluated symbolically via SymPy; no data is needed. Supported operations:

  • Arithmetic: \(2^{10} + 3 \times 7\)
  • Algebra: solve equations, simplify, expand, factor
  • Calculus: derivatives, integrals, limits, series
  • Linear algebra: determinants, eigenvalues, matrix operations
  • Anything SymPy can handle, with LLM fallback for complex expressions
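The categories above can be sketched directly in SymPy; this is a minimal illustration of the kind of symbolic evaluation involved, not MathExec's internal code path.

```python
import sympy as sp

x = sp.symbols("x")

# Arithmetic: 2^10 + 3 * 7
result = sp.Integer(2) ** 10 + 3 * 7

# Calculus: d/dx [x * sin(x)] = sin(x) + x * cos(x)
derivative = sp.diff(x * sp.sin(x), x)

# Calculus: definite integral of 2x on [0, 3]
integral = sp.integrate(2 * x, (x, 0, 3))

# Linear algebra: determinant of a 2x2 matrix
det = sp.Matrix([[1, 2], [3, 4]]).det()
```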

Architecture Support

Advanced architectures are available via the pipeline (whiteboard) mode. All of the following are currently supported:

  • Attention: \(\text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V\)
  • Conv1D / Conv2D: \(\text{Conv}(x)\)
  • LSTM: \(\text{LSTM}(x)\)
  • GRU: \(\text{GRU}(x)\)
  • Transformer Encoder: \(\text{Transformer}(x)\)
  • Residual Block: \(y = x + F(x)\)
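To make the residual pattern \(y = x + F(x)\) concrete, here is one plausible PyTorch rendering, where \(F\) is a small two-layer transform. The structure is illustrative; MathExec's generated architecture may differ.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Illustrative block computing y = x + F(x)."""

    def __init__(self, dim: int):
        super().__init__()
        # F(x): a small two-layer transform that preserves the input dimension,
        # so the skip connection x + F(x) is shape-compatible
        self.f = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.f(x)

block = ResidualBlock(dim=8)
out = block(torch.zeros(2, 8))
```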

Supported Activations

sigmoid, relu, tanh, softmax, leaky_relu, elu, gelu, selu, silu
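For reference, a few of these activations written out in plain Python (the engine presumably dispatches the names to their PyTorch equivalents; the dictionary below is only a sketch of that idea).

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def relu(z: float) -> float:
    return max(0.0, z)

def leaky_relu(z: float, slope: float = 0.01) -> float:
    # passes positives through, scales negatives by a small slope
    return z if z > 0 else slope * z

def elu(z: float, alpha: float = 1.0) -> float:
    # smooth alternative to ReLU with negative saturation at -alpha
    return z if z > 0 else alpha * (math.exp(z) - 1.0)

# Hypothetical name -> function map
ACTIVATIONS = {
    "sigmoid": sigmoid,
    "relu": relu,
    "tanh": math.tanh,
    "leaky_relu": leaky_relu,
    "elu": elu,
}
```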

Current Limits

  • Max 10,000 rows per CSV upload
  • CPU-only training (GPU coming soon)
  • Single output (binary classification or scalar regression)
  • Supported optimizers: Adam, AdamW, SGD, RMSprop
  • Supported loss functions: Cross-Entropy, MSE, MAE, Huber
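The optimizer and loss names above map naturally onto PyTorch classes. The dictionaries below are an assumption about how that lookup might work (in particular, "Cross-Entropy" is taken as binary cross-entropy, since outputs are single-valued), not MathExec's actual configuration code.

```python
import torch
import torch.nn as nn

# Hypothetical name -> class maps for the documented options
OPTIMIZERS = {
    "adam": torch.optim.Adam,
    "adamw": torch.optim.AdamW,
    "sgd": torch.optim.SGD,
    "rmsprop": torch.optim.RMSprop,
}
LOSSES = {
    "cross_entropy": nn.BCELoss,  # binary case, given the single-output limit
    "mse": nn.MSELoss,
    "mae": nn.L1Loss,
    "huber": nn.HuberLoss,
}

model = nn.Linear(2, 1)
opt = OPTIMIZERS["adam"](model.parameters(), lr=1e-3)
loss_fn = LOSSES["mse"]()
```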

Tips for Successful Compilation

  • Use standard notation: W for weights, b for bias, x for input
  • Subscripts indicate layers: W_1, W_2, b_1, b_2
  • Wrap activations clearly: \sigma(...), \text{ReLU}(...)
  • For multi-layer networks, nest from inside out
  • If a formula doesn't compile, the system falls back to logistic regression; check the warning banner for details
  • Use the whiteboard mode for complex architectures (attention, residual, etc.)