I generate a very large .csv file from a database using the method outlined in
https://stackoverflow.com/a/13456219/141172
It works fine, up to a point. When the exported file is too large, I get an OutOfMemoryException.
If I turn off output buffering by modifying that code like this:
protected override void WriteFile(System.Web.HttpResponseBase response)
{
    response.BufferOutput = false; // <--- Added this
    this.Content(response.OutputStream);
}
the file download completes. However, it is several orders of magnitude slower than with buffering enabled (measured on localhost, for the same file, with buffering on and off).
I understand that unbuffered output will be slower, but why does it slow to a relative crawl? Is there anything I can do to improve processing speed?
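If the cost comes from each small write hitting the HTTP connection directly once BufferOutput is false, one thing I could try would be to do the batching myself. A minimal sketch (not measured; the 64 KB buffer size is an arbitrary choice for illustration) that wraps the response stream in a BufferedStream so the many small per-row writes are coalesced into larger chunks while still streaming to the client:

protected override void WriteFile(System.Web.HttpResponseBase response)
{
    response.BufferOutput = false; // stream to the client instead of buffering the whole file

    // Coalesce the many small per-row writes into 64 KB chunks before
    // they reach the network stream.
    var buffered = new System.IO.BufferedStream(response.OutputStream, 64 * 1024);
    this.Content(buffered);
    buffered.Flush(); // push any bytes still sitting in the buffer
}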
UPDATE
It would also be an option to use File(Stream stream, String contentType), as suggested in the comments. However, I'm not sure how to create the stream: the data is assembled dynamically from a DB query, and a MemoryStream would run out of contiguous physical memory. Suggestions are welcome.
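One way to get such a stream without holding everything in memory might be to spool the query results to a temporary file first and hand File() a FileStream over it. A minimal sketch, assuming a hypothetical WriteCsvRows helper that runs the DB query (FileOptions.DeleteOnClose removes the temp file once MVC disposes the stream):

public ActionResult ExportCsv()
{
    // Spool the rows to disk so memory use stays flat regardless of export size.
    var tempPath = System.IO.Path.GetTempFileName();
    using (var writer = new System.IO.StreamWriter(tempPath))
    {
        WriteCsvRows(writer); // hypothetical: executes the query and writes CSV lines
    }

    var stream = new System.IO.FileStream(
        tempPath,
        System.IO.FileMode.Open,
        System.IO.FileAccess.Read,
        System.IO.FileShare.Read,
        64 * 1024,                              // read buffer size
        System.IO.FileOptions.DeleteOnClose);   // temp file is deleted when the stream is disposed

    return File(stream, "text/csv", "export.csv");
}

The obvious trade-off is that the download cannot start until the temp file has been fully written.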
UPDATE 2
It was suggested in the comments that alternating between reading from the database and writing to the stream is causing the degradation. I modified the code so the stream writing happens on a separate thread (using the producer/consumer pattern). There is no appreciable difference in performance.
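For concreteness, the change had roughly this shape (a simplified sketch, not the actual code; ReadCsvLinesFromDatabase is a placeholder for the query loop):

private void WriteCsv(System.IO.Stream output)
{
    // Bounded queue so the producer cannot run arbitrarily far ahead of the writer.
    var rows = new System.Collections.Concurrent.BlockingCollection<string>(1000);

    // Producer: read from the database and queue finished CSV lines.
    var producer = System.Threading.Tasks.Task.Run(() =>
    {
        try
        {
            foreach (var line in ReadCsvLinesFromDatabase()) // placeholder
                rows.Add(line);
        }
        finally
        {
            rows.CompleteAdding(); // tell the consumer no more rows are coming
        }
    });

    // Consumer: drain the queue onto the response stream.
    using (var writer = new System.IO.StreamWriter(output, System.Text.Encoding.UTF8, 4096, leaveOpen: true))
    {
        foreach (var line in rows.GetConsumingEnumerable())
            writer.WriteLine(line);
    }

    producer.Wait(); // propagate any exception from the database side
}

Since decoupling the DB reads from the stream writes changed nothing, the bottleneck appears to be the unbuffered writes themselves rather than the interleaving.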