I am trying to understand which approach is more efficient at fully utilizing all CPU cores for the following scenario.
Let's say we have to parse a million-line file, and for each line we must look up various tables in a database to check whether we should import that line, and if so insert some data into some tables. Let's say this per-line work is done by an async function `DoWork()`.
I have already parsed the file and put the lines into RabbitMQ.
One approach is to get a line from the queue and do an `await DoWork(lineString)`, one line at a time.
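In code, the first approach would look roughly like this (a minimal sketch; `GetNextLineAsync()` is a hypothetical helper that pops the next line from RabbitMQ and returns `null` when the queue is drained):

```csharp
using System.Threading.Tasks;

// Approach 1: sequential processing — only one line is ever in flight.
while (true)
{
    string? line = await GetNextLineAsync();   // hypothetical RabbitMQ consumer helper
    if (line == null) break;                   // queue drained
    await DoWork(line);                        // wait for this line before fetching the next
}
```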
A second approach is to spawn x Tasks (or threads) that each run `DoWork` in a loop, where x is the number of CPU cores.
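The second approach would look something like this (again a sketch; `GetNextLineAsync()` is a hypothetical helper that pops the next line from RabbitMQ and returns `null` when the queue is empty):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Approach 2: x concurrent worker loops inside one process,
// where x is the number of CPU cores.
int x = Environment.ProcessorCount;

var workers = Enumerable.Range(0, x).Select(_ => Task.Run(async () =>
{
    while (true)
    {
        string? line = await GetNextLineAsync();  // hypothetical RabbitMQ consumer helper
        if (line == null) break;                  // queue drained
        await DoWork(line);                       // up to x lines in flight at once
    }
}));

await Task.WhenAll(workers);                      // wait for all workers to finish
```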
A third approach is to use Docker (the app already runs under Docker) and spawn x instances of the app, each of which grabs lines from RabbitMQ.
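For the third approach I mean something like scaling the same container image, letting RabbitMQ distribute lines across the competing consumers (sketch only; `worker` is a hypothetical service name in a docker-compose file):

```shell
# Approach 3: run 4 identical instances of the app container;
# each instance consumes from the same RabbitMQ queue.
docker compose up --scale worker=4
```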
Which one will be more efficient? Or is there a better approach?