Neither option sounds very good to me.
If user b's list ran faster than user a's, then the two jobs were already running in parallel. I presume they are threaded at the HTTP request level, but one would need a more in-depth understanding of your code to know this.
If you single-thread the process, that means user b cannot do ANYTHING until user a's job is complete, which sounds like it could be a while.
What you need to figure out is why user a's and user b's data are getting commingled. Yes, you have a race condition, but in this case I suspect the culprit is in the tables where you store information about the jobs: the records in those tables do not have the proper identifiers, so each user's job ends up picking up the other's records.
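To illustrate the point about identifiers, here is a minimal sketch (in Python with an in-memory SQLite database; the table and column names are hypothetical, not taken from the original poster's schema). Without a job identifier on each row, one user's query sweeps up the other user's rows; with one, each job sees only its own data.

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Hypothetical holding table: each row is tagged with the job that owns it.
db.execute("""CREATE TABLE email_holding (
                  job_id  TEXT NOT NULL,   -- identifies which user's job owns the row
                  address TEXT NOT NULL)""")

# Two jobs running at the same time insert into the same table.
db.executemany("INSERT INTO email_holding VALUES (?, ?)",
               [("job_a", "a1@example.com"),
                ("job_b", "b1@example.com"),
                ("job_a", "a2@example.com")])

# Without filtering on the identifier, user a's mail-out would also
# pick up user b's row -- the commingling described above:
commingled = db.execute("SELECT address FROM email_holding").fetchall()

# With the identifier, each job sees only its own rows:
job_a_rows = db.execute(
    "SELECT address FROM email_holding WHERE job_id = ?",
    ("job_a",)).fetchall()
```

The fix is data design, not locking: once every row carries its owner's job id, the two requests can run concurrently without stepping on each other.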
Completely agree with ilssac here, in this situation neither is suitable.
You're trying to get around database concurrency with the wrong tools. What you're kinda trying to reinvent is SQL Server's Serializable isolation level, whereby each transaction runs as if it executed in sequence with the others, ordered by their start times. However, that's generally a very bad thing except in extreme cases.
I would look more into adding columns into your email info table to give it the concept of "batches", which is what we did here. One table that stores a batch id, the user who started it, when it was created etc, and when it completed. You then insert the email details into another table, and FK it across.
You can then deal with emails on a batch-by-batch basis, rather than all at once.
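A minimal sketch of that batch design, again in Python with in-memory SQLite and hypothetical table/column names: a header table records the batch id, who started it, and when it was created and completed, while the detail rows FK across to it and are processed one batch at a time.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE email_batch (
    batch_id     INTEGER PRIMARY KEY,
    started_by   TEXT NOT NULL,                 -- user who started the batch
    created_at   TEXT DEFAULT CURRENT_TIMESTAMP,
    completed_at TEXT                           -- NULL until the batch is done
);
CREATE TABLE email_detail (
    detail_id INTEGER PRIMARY KEY,
    batch_id  INTEGER NOT NULL REFERENCES email_batch(batch_id),
    address   TEXT NOT NULL
);
""")

def start_batch(user, addresses):
    """Create a batch header row, then FK the email details across to it."""
    cur = db.execute("INSERT INTO email_batch (started_by) VALUES (?)", (user,))
    batch_id = cur.lastrowid
    db.executemany(
        "INSERT INTO email_detail (batch_id, address) VALUES (?, ?)",
        [(batch_id, a) for a in addresses])
    return batch_id

def send_batch(batch_id):
    """Deal with one batch's emails only, then mark the batch complete."""
    rows = db.execute(
        "SELECT address FROM email_detail WHERE batch_id = ?",
        (batch_id,)).fetchall()
    db.execute(
        "UPDATE email_batch SET completed_at = CURRENT_TIMESTAMP "
        "WHERE batch_id = ?", (batch_id,))
    return [r[0] for r in rows]

batch_a = start_batch("user_a", ["a1@example.com", "a2@example.com"])
batch_b = start_batch("user_b", ["b1@example.com"])
```

Because every detail row belongs to exactly one batch, `send_batch(batch_a)` can never touch user_b's addresses, no matter how the two requests interleave.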
Trying to use a basic holding table which supports multiple users is a dark road, down which only severe arseache lies.
Hope that helps.
As the others have said: this is a data design/integrity problem. CFTHREAD won't help you, and CFLOCK would just be treating the symptom, not the problem.
1) Why aren't steps 1-3 all done on the DB, returning a final recordset to CF for doing the emailing?
2) How is it the two requests can't distinguish between what's their data and what's another request's data?