# Changelog

All notable changes to torchloop are documented here.

The format follows Keep a Changelog, and versioning follows Semantic Versioning.
## [0.3.0] - 2026-03-31

### Added

- Callback system with an abstract `Callback` base class
- `WandBLogger` callback for Weights & Biases integration
- `MLflowLogger` callback for MLflow experiment tracking
- `callbacks` optional dependency group: `pip install torchloop[logging]`
- Edge deployment utilities (`torchloop.edge`)
- FLOPs and parameter estimation (`torchloop.edge.estimate`)
- MkDocs documentation site
### Changed

- `Trainer` now accepts a `callbacks: list[Callback]` parameter
- `Trainer.fit()` triggers the `on_train_begin`, `on_epoch_end`, and `on_train_end` hooks
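The hook-dispatch pattern behind these entries can be sketched as below. This is a minimal self-contained illustration, not torchloop's actual implementation — the `HistoryCallback` class, the `logs` dict, and the stand-in `Trainer` loop are all assumptions.

```python
class Callback:
    """Base class: subclasses override only the hooks they need."""
    def on_train_begin(self, trainer): ...
    def on_epoch_end(self, trainer, epoch, logs): ...
    def on_train_end(self, trainer): ...

class HistoryCallback(Callback):
    """Illustrative callback that records each hook invocation."""
    def __init__(self):
        self.events = []
    def on_train_begin(self, trainer):
        self.events.append("begin")
    def on_epoch_end(self, trainer, epoch, logs):
        self.events.append(f"epoch {epoch}: loss={logs['loss']:.2f}")
    def on_train_end(self, trainer):
        self.events.append("end")

class Trainer:
    """Stand-in trainer showing where the hooks fire during fit()."""
    def __init__(self, callbacks=None):
        self.callbacks = callbacks or []
    def _dispatch(self, hook, *args):
        # Call the named hook on every registered callback, in order.
        for cb in self.callbacks:
            getattr(cb, hook)(self, *args)
    def fit(self, epochs=2):
        self._dispatch("on_train_begin")
        for epoch in range(epochs):
            logs = {"loss": 1.0 / (epoch + 1)}  # stand-in for a real training step
            self._dispatch("on_epoch_end", epoch, logs)
        self._dispatch("on_train_end")
```

A logger callback like `WandBLogger` or `MLflowLogger` would plug into the same hooks, forwarding `logs` to its backend.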
## [0.2.0] - 2026-03-28

### Added

- LR scheduler support in `Trainer` — works with any `torch.optim.lr_scheduler`; `ReduceLROnPlateau` is handled automatically (it is passed `val_loss`)
- Automatic Mixed Precision (AMP) via the `amp=True` flag — CUDA only
- LR logged per epoch in `history["lr"]`

### Changed

- `Trainer.__init__` now accepts `scheduler` and `amp` parameters
- The training log now includes the current LR per epoch
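The special-casing implied by the `ReduceLROnPlateau` entry can be sketched as follows: plateau-style schedulers expect a metric in `step()`, while other schedulers do not. The classes here are lightweight stand-ins for the real `torch.optim.lr_scheduler` types, and `step_scheduler` is an illustrative helper, not torchloop's actual code.

```python
class StepLR:
    """Stand-in for torch.optim.lr_scheduler.StepLR: step() takes no metric."""
    def __init__(self):
        self.steps = 0
    def step(self):
        self.steps += 1

class ReduceLROnPlateau:
    """Stand-in for the plateau scheduler: step() expects a metric."""
    def __init__(self):
        self.metrics = []
    def step(self, metric):
        self.metrics.append(metric)

def step_scheduler(scheduler, val_loss):
    """Step any scheduler, passing val_loss only to plateau-style ones."""
    if isinstance(scheduler, ReduceLROnPlateau):
        scheduler.step(val_loss)
    else:
        scheduler.step()
```

With a dispatch like this, the trainer can accept any scheduler uniformly and still feed `val_loss` where required.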
## [0.1.0] - 2026-03-27

### Added

- `Trainer` — PyTorch training loop with early stopping and checkpointing
- `Evaluator` — classification report, confusion matrix, per-class F1
- `Exporter` — PyTorch → ONNX → TFLite export pipeline
- CI via GitHub Actions across Python 3.9, 3.10, and 3.11
- PyPI trusted publishing via OIDC
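The early-stopping behavior named in the `Trainer` entry typically reduces to a patience counter over the monitored loss. The class and parameter names below (`EarlyStopping`, `patience`, `min_delta`) are assumptions for illustration, not torchloop's API.

```python
class EarlyStopping:
    """Signal a stop when val_loss fails to improve for `patience` epochs."""
    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def update(self, val_loss):
        """Record one epoch's loss; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            # Improvement: reset the patience counter.
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

Checkpointing usually pairs with the same check: save model weights whenever `val_loss` sets a new best.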