Overview
Training, evaluating, and deploying ML.NET classification models embedded directly in your ASP.NET Core apps.
Why ML.NET?
ML.NET lets you train, evaluate, and deploy machine learning models using C#—no Python required. For .NET shops, this means your ML pipeline lives in the same codebase as your application, sharing types and dependency injection.
We used ML.NET for the ShieldAI fraud detection platform: a binary classification model trained on transaction features that runs as a singleton service within ASP.NET Core, processing 5M transactions per day with sub-10ms inference latency.
- C#-native—no Python interop overhead
- Integrates with DI and ASP.NET Core middleware
- ONNX export for cross-platform deployment
- AutoML for hyperparameter tuning
Building the Training Pipeline
ML.NET uses a pipeline metaphor: load data → transform features → train model → evaluate. Each step is composable and strongly typed.
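The pipeline below reads rows into a `TransactionData` class and the serving section emits a `FraudPrediction`. The article does not show these types, so here is a plausible sketch; the column order, the `Label` mapping, and the `Probability`/`Score` fields are assumptions, not the original definitions:

```csharp
using Microsoft.ML.Data;

public class TransactionData
{
    [LoadColumn(0)] public float Amount;
    [LoadColumn(1)] public float Hour;
    [LoadColumn(2)] public float DayOfWeek;
    [LoadColumn(3)] public string MerchantCategory;
    [LoadColumn(4), ColumnName("Label")] public bool IsFraud;
}

public class FraudPrediction
{
    [ColumnName("PredictedLabel")] public bool IsFraud;
    public float Probability;  // calibrated fraud probability
    public float Score;        // raw model score
}
```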
var mlContext = new MLContext(seed: 42);
var data = mlContext.Data.LoadFromTextFile<TransactionData>("train.csv", separatorChar: ',', hasHeader: true);
var pipeline = mlContext.Transforms.Categorical.OneHotEncoding("MerchantEncoded", "MerchantCategory")
    .Append(mlContext.Transforms.Concatenate("Features", "Amount", "Hour", "DayOfWeek", "MerchantEncoded"))
    .Append(mlContext.BinaryClassification.Trainers.FastTree());
var model = pipeline.Fit(data);
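// Before saving, evaluate on held-out data (a sketch: in a real run, split
// first and train on split.TrainSet rather than on the full dataset).
var split = mlContext.Data.TrainTestSplit(data, testFraction: 0.2);
var metrics = mlContext.BinaryClassification.Evaluate(model.Transform(split.TestSet));
Console.WriteLine($"AUC: {metrics.AreaUnderRocCurve:F3}  F1: {metrics.F1Score:F3}");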
mlContext.Model.Save(model, data.Schema, "fraud_model.zip");
Serving Models in ASP.NET Core
Register the trained model through a PredictionEnginePool in the DI container. A raw PredictionEngine is not thread-safe, so the pool manages a set of engines and hands them out for safe, efficient concurrent inference.
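Consuming the pool is then a single injected dependency. A sketch of a minimal scoring endpoint; the controller name and route are illustrative, and the model name must match the one used at registration:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.ML;

[ApiController]
[Route("api/fraud")]
public class FraudController : ControllerBase
{
    private readonly PredictionEnginePool<TransactionData, FraudPrediction> _pool;

    public FraudController(PredictionEnginePool<TransactionData, FraudPrediction> pool)
        => _pool = pool;

    [HttpPost("score")]
    public ActionResult<FraudPrediction> Score(TransactionData tx)
        // Predict borrows an engine from the pool, scores the input, and returns it.
        => _pool.Predict(modelName: "FraudModel", example: tx);
}
```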
For hot-swap model updates without restarting the service, pass watchForChanges: true when registering the pool; the pool watches the model file and reloads automatically when a new version is dropped in place.
services.AddPredictionEnginePool<TransactionData, FraudPrediction>()
    .FromFile(modelName: "FraudModel", filePath: "fraud_model.zip", watchForChanges: true);
Key Takeaways
- ML.NET eliminates Python interop for .NET ML workloads
- PredictionEnginePool handles thread-safe concurrent inference
- Use watchForChanges for zero-downtime model updates
- Always evaluate on a hold-out test set before deploying
- Export to ONNX for cross-platform/cross-framework compatibility
Saurav Rai
Founder & Lead Architect, Omni Stack
7+ years building enterprise .NET and cloud applications for clients across Australia, USA, and the Middle East. Passionate about clean architecture, developer experience, and shipping fast.