If there is an error, an email will be sent or the row will be inserted into a bad-record table.

I have two solutions for processing the job.

Timers - Create a timer that executes every 5 minutes and looks for any Excel sheet that is pending execution.

Process - Create a process that is "launched on" the creation of a new Excel sheet record.

(Note: assume we have an entity "Job", and a new record is created as soon as an Excel sheet is uploaded. This table can be referenced by both the Timer and the Process, and there is a status flag on each record.)
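For illustration, the Job entity described in the note might look like the following Python sketch. The entity name "Job" and the status flag come from the question itself; the specific status values and attribute names are assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class JobStatus(Enum):
    # Assumed status values; the question only says "a status flag".
    PENDING = "Pending"        # Excel sheet uploaded, not yet processed
    PROCESSING = "Processing"  # a front-end server has claimed the record
    DONE = "Done"
    FAILED = "Failed"          # error e-mailed or moved to a bad-record table

@dataclass
class Job:
    id: int
    file_name: str
    status: JobStatus = JobStatus.PENDING
    created_at: datetime = field(default_factory=datetime.now)
```

Both the Timer and the Process would read and update the same status flag, which is what makes the two approaches interchangeable from the data model's point of view.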

I am sure both of the above will work, but from a scalability and maintenance perspective I need advice on which one to go for, considering that:

there will be a huge number of parallel uploads

multiple front-end servers are involved.

The question here is: what would be the side effect of the Process (approach 2) if there are, say, 500 uploads happening from different users? Will all 500 get processed in parallel (assuming the front-end server is a quad-core machine)? How does the queuing work if they are not executed in parallel? What would be the impact on the user experience for other users accessing the website? And how do we retry if one of them fails (for reasons other than application logic)?

If we go for the Timer approach with several front-end servers involved, I think parallel processing will happen: the timer will get executed on every front-end server (please correct me if I am wrong). All the front-end servers will access the Job table and find that there are Excel files to be processed. The timers will all execute at the same time, look for jobs whose status is not "processed", find them, and attempt to lock a record before processing the job. Some will fail because another server has already locked that record, and will have to retry with a different one. After a couple of retries, every front-end server will be processing a different Excel file in parallel.
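The lock-and-retry behaviour described above can be sketched as a compare-and-set UPDATE, where the status column itself acts as the lock. Table and column names are assumptions, and in-memory SQLite stands in for the real database:

```python
import sqlite3

def claim_next_job(conn):
    """Atomically claim one pending Job record; return its Id, or None.

    The UPDATE only matches while the row is still 'Pending', so if
    several front-end servers race for the same record, exactly one
    wins; the losers simply retry with the next pending record.
    """
    while True:
        row = conn.execute(
            "SELECT Id FROM Job WHERE Status = 'Pending' "
            "ORDER BY Id LIMIT 1").fetchone()
        if row is None:
            return None                      # nothing left to process
        claimed = conn.execute(
            "UPDATE Job SET Status = 'Processing' "
            "WHERE Id = ? AND Status = 'Pending'", (row[0],)).rowcount
        conn.commit()
        if claimed == 1:
            return row[0]                    # we hold the record now
        # another server claimed it between our SELECT and UPDATE; retry
```

Because the claim is a single conditional UPDATE, the "lock" never needs explicit release on success, and the retry loop matches the behaviour you describe: a server that loses the race just moves on to the next record.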

Going with BPT may be a bit faster than using timers, if your servers have enough capacity. With 500 concurrent uploads, though, they won't all be processed in parallel: the processes are still queued and handled by the scheduler, just as with a timer, except that it reacts faster. If a BPT activity fails for any reason, you can retry it manually from Service Center, although having to do that is a clear sign that you need to fix something in your code!

That being said, I usually go with timers in very high-volume scenarios, because BPT generates too much extra activity in the database (creating and updating metadata for the processes) and produces many "trash" records that need to be cleaned up once a process ends. You can always use a WakeTimer action to start your timer when a file is uploaded, instead of waiting for the next scheduled run.

3. The Web Service first checks whether the timer is already executing on any Front-end Server node. If not, the Is_Running_Since and Is_Running_By attributes of the Cyclic_Job_Shared entity are updated. This locks the timer so it cannot be executed on any other Front-end Server node.

Given that your workload is stored as Jobs in a table, you can have your Timer handle one Job (or batch of Jobs) at a time and monitor how long it has been running, so that it never times out and loses work; this way there is always progress. Doing this typically means checking whether you are approaching the threshold of available execution time and, if so, stopping processing and calling the Wake<Timer> action again so that processing continues as soon as possible.
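That pattern, processing jobs until the time budget is nearly exhausted and then re-waking the timer, might look like this sketch. The 20-minute budget and 2-minute safety margin are assumptions, and `wake_timer` stands in for the platform's Wake<Timer> action:

```python
import time

def run_timer(fetch_pending_jobs, process_job, wake_timer,
              budget_seconds=20 * 60, safety_margin_seconds=2 * 60):
    """Process pending jobs, but stop before the timer would time out.

    fetch_pending_jobs: returns an iterable of jobs still to be done
    process_job:        handles a single job
    wake_timer:         re-triggers this timer so work resumes promptly
    """
    start = time.monotonic()
    for job in fetch_pending_jobs():
        process_job(job)
        # Stop early rather than risk being killed mid-job and losing work.
        if time.monotonic() - start > budget_seconds - safety_margin_seconds:
            wake_timer()
            return "woken"       # remaining jobs run in the next execution
    return "finished"
```

The key design choice is checking the elapsed time after each completed job, never in the middle of one, so a job is either fully processed or untouched when the timer hands off to its next execution.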

With BPT, on the other hand, you can have multiple instances of the same process running in parallel, on all configured front-ends. A new process instance can be started automatically by the creation of a new Job record, and the polling frequency for BPT events is higher than for Timers. As downsides, João's concerns are relevant, along with the hard 5-minute timeout of an automatic BPT activity (which can be avoided by splitting your job-processing logic across more than one activity, depending on your particular case).