In-Memory Processing/Caching Options for a 1-Billion-Record File?

As the title says, I’m working with an ANSI text file containing 1 billion records. I’m currently loading the file into SQL Server and running a SELECT to transform the data into a different result set, which is then written to an output file.

Has anyone worked with a data set this large and found a way to process it with an in-memory cache instead of SQL Server? I know that loading everything into POCOs or a standard array will hit out-of-memory (OOM) exceptions with a data set this size. I’m wondering what options I have other than the SQL Server import.
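
If the transform can be applied one record at a time, you don’t need to hold anything in memory at all — a single streaming pass over the file keeps memory flat regardless of record count. Here’s a rough C# sketch of that idea; the `TransformLine` helper and the Latin-1 encoding choice are assumptions, since the post doesn’t say what the SELECT actually does or which ANSI code page the file uses:

```csharp
using System;
using System.IO;
using System.Text;

class StreamingTransform
{
    static void Main(string[] args)
    {
        var inputPath = args[0];   // the 1-billion-record ANSI file
        var outputPath = args[1];  // transformed output file

        // File.ReadLines streams lazily: it yields one line at a time and
        // never materializes the whole file in memory.
        // "ANSI" usually means a Windows code page; Latin1 is used here as a
        // stand-in and may need to be swapped for the real code page.
        using var writer = new StreamWriter(outputPath, append: false, Encoding.UTF8, bufferSize: 1 << 20);
        long count = 0;

        foreach (var line in File.ReadLines(inputPath, Encoding.Latin1))
        {
            writer.WriteLine(TransformLine(line));

            if (++count % 10_000_000 == 0)
                Console.WriteLine($"{count:N0} records processed");
        }
    }

    // Hypothetical per-record transform; replace with the real mapping logic.
    static string TransformLine(string line) => line.Trim();
}
```

The catch is that this only works if each output record depends on a single input record. If the SELECT is doing joins, grouping, or deduplication across the whole set, a pure stream won’t be enough and you’d need something like an external sort or a keyed on-disk store — which is essentially what the SQL Server import is already giving you.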

submitted by /u/PM_YOUR_SOURCECODE
