Sep for .NET separated values
Use Sep for high-performance separated-value parsing and writing in .NET, including delimiter inference, explicit parser/writer options, and low-allocation row/column workflows.
Trigger On
- delimited data needs are performance-sensitive and allocation-aware
- project needs explicit control over separator inference, escaping, trimming, and header behavior
- reading/writing large or long-lived file pipelines in ML, ETL, or analytics workloads
- startup/perf tests require AOT/trimming-friendly CSV/TSV processing
Workflow
flowchart LR
A[Input source: file/text/stream] --> B[Sep.Reader or Sep.New(...).Reader]
B --> C[SepReaderOptions]
C --> D[Rows -> Cols -> Span/Parse]
D --> E[Transform and validate]
E --> F[SepWriter via SepWriterOptions]
F --> G[To file/text output]
- Decide schema shape:
  - header present or no header
  - separator known (;, ,, tab, custom) or inferred from first row
  - row/column quoting rules
- Build reader with Sep.Reader(...) and explicit options only where needed:
  - Sep.Reader() for inferred separator from a header-like first row
  - Sep.New(',').Reader(...) for explicit separator mode
  - Sep.Reader(o => o with { HasHeader = false }) if header is absent
- Read rows and map columns as ReadOnlySpan<char> first; convert only when needed.
- For output, use reader.Spec.Writer() when you need the same separator/culture as the input.
- Control writer behavior with Sep.Writer(...) and SepWriterOptions (WriteHeader, Escape, DisableColCountCheck).
- Add async only where it brings value and your runtime is C# 13 / .NET 9+ for await foreach over async reader rows.
- Use ParallelEnumerate for CPU-heavy transformations only after benchmarking a single-threaded baseline.
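The explicit-separator, headerless path above can be sketched as follows (the sample data and variable names are illustrative):

```csharp
using nietras.SeparatedValues;

var data = "1;Widget;9.99\n2;Gadget;4.50\n";

// Explicit ';' separator, no header: columns are addressed by index.
using var reader = Sep.New(';').Reader(o => o with { HasHeader = false }).FromText(data);

var total = 0.0;
foreach (var row in reader)
{
    var id = row[0].Parse<int>();       // first column as int
    var name = row[1].ToString();       // second column as string
    total += row[2].Parse<double>();    // third column as double
    Console.WriteLine($"{id}: {name}");
}
Console.WriteLine($"total = {total}");
```

Parsing defaults to the invariant culture, so "9.99" parses the same regardless of the host locale; prefer Parse<T> over ToString plus a separate parse to stay on the span-based path.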
Install and read patterns
using var reader = Sep.Reader(o => o with
{
HasHeader = true,
Unescape = true,
    Trim = SepTrim.All
}).FromText(data);
foreach (var row in reader)
{
var id = row["Id"].Parse<int>();
var name = row[1].ToString();
// process row
}
Write patterns
using var reader = Sep.Reader().FromFile("input.csv");
using var writer = reader.Spec.Writer().ToFile("output.csv");
foreach (var row in reader)
{
using var writeRow = writer.NewRow(row);
writeRow["Amount"].Format(row["Amount"].Parse<double>() * 1.2);
}
Async reading and writing
var text = "A;B\n1;hello\n";
using var reader = await Sep.Reader().FromTextAsync(text);
await using var writer = reader.Spec.Writer().ToText();
await foreach (var row in reader)
{
await using var writeRow = writer.NewRow(row);
var normalized = row["B"].ToString().ToUpperInvariant();
writeRow["B"].Set(normalized);
}
Common configuration patterns
- Header-driven read
  - default HasHeader = true
  - query by name: row["ColName"]
- Headerless pipelines
  - HasHeader = false
  - use index-based access: row[0], row[1]
- Round-trip output
- start writer with reader.Spec.Writer() to preserve inference and formatting contract
- Speed-first processing
- keep default buffer + culture unless profiling proves a need to tune
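A minimal sketch of the writer-side configuration mentioned above, assuming in-memory text output and illustrative sample data:

```csharp
using nietras.SeparatedValues;

// Write without a header row; Escape = true quotes values containing the separator.
using var writer = Sep.New(',').Writer(o => o with { WriteHeader = false, Escape = true }).ToText();

foreach (var (id, note) in new[] { (1, "plain"), (2, "has, comma") })
{
    using var row = writer.NewRow();
    row[0].Format(id);   // formats the int without intermediate string allocation
    row[1].Set(note);
}

Console.WriteLine(writer.ToString());
```

With WriteHeader = false, columns must be addressed by index; the row is flushed when its using scope ends.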
Deliver
- installation and usage guide that is ready to copy into a .NET repo
- practical reader/writer configuration patterns
- clear notes on defaults, tradeoffs, and constraints
Validate
- dotnet add package Sep installs correctly and the project compiles
- one file-read sample and one file-write sample execute successfully
- header/no-header and explicit-separator cases are covered
- at least one validation sample for quoting/unescaping or async path exists if required by task
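One such quoting/unescaping validation sample could be sketched as below (the input text is illustrative):

```csharp
using nietras.SeparatedValues;

// A quoted field containing the separator itself.
var data = "A;B\n1;\"hello; world\"\n";

// Unescape = true strips the surrounding quotes on access.
using var reader = Sep.Reader(o => o with { Unescape = true }).FromText(data);

string? b = null;
foreach (var row in reader)
{
    b = row["B"].ToString();
}
Console.WriteLine(b);
```

The separator is inferred from the "A;B" header line, so no explicit Sep.New(';') is needed here.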
Load References
- references/overview.md - official links and practical decision notes.
Related skills
Use ManagedCode.MarkItDown when a .NET application needs deterministic document-to-Markdown conversion for ingestion, indexing, summarization, or content-processing workflows.
Use ManagedCode.Storage when a .NET application needs a provider-agnostic storage abstraction with explicit configuration, container selection, upload and download flows, and…