I have optimized almost all my test classes but the "Run All Tests" execution time is still not reduced.

I ran the tests in my full copy sandbox and it took 28 minutes. There are about 464 test classes. Some of the test classes have 50 to 60 test methods because the class demands it. If I remove test methods, the test coverage drops; if I write more test methods, deployment takes too long. I am just trying to keep a balance between my coverage percentage and the number of methods.

Currently, deploying to production takes about 1.5 hours. I have optimized my code, and a Run All Tests in my sandbox takes about 28 minutes; but if I run it at some other time, it takes only 16 minutes.

I am not sure how much time it will take when I deploy to production. How do I find and reduce this deployment time?

For example, I have attached the result of one test class here (one of the test classes from Run All Tests).

If you notice the image, there are 19 methods in total; the first highlighted method takes 6 minutes 43 seconds, but the second highlighted method takes only 21 seconds, even though the code for both methods is almost the same.

I am not sure why there is a difference. My guess is that the first method is waiting for some other method in Run All Tests to complete. If that is the case, how can we determine the actual runtime of a test method, and how can we optimize it?

I have also tested each of the test classes separately, and each executes in just a few seconds, even the ones with about 60 test methods; but when I run them all together, sometimes that same class alone takes 17 minutes. Can someone guide me here on what I need to do to reduce the time to deploy to production?

* Edited Part *

I still couldn't figure out how to reduce the time.
FYI, I am using change sets to deploy to production.

Are you running tests in the web browser or in the IDE (eclipse)?
– Marc Feb 12 '13 at 20:45

I have been developing on Salesforce for three years and I do not recall deployments to production taking this long. Granted, it has been a few months since I last deployed code, but 30 to 60 minutes for every single deploy seems excessive. It seemed so unusual that I submitted a ticket to Salesforce support and they have been trying to help. So far the suggestions have not made much difference. I've tried using Deploy rather than Save to Server in the Force.com IDE. I've tried updating the Force.com IDE to the latest version (a suggestion from another posting). I've tried t
– Bryan Feb 20 '13 at 22:49

I have seen it take 45 mins with a Change Set deploy to validate and execute ~1100 unit tests, with many of them including a complex data model set up (many objects, relationships, triggers, etc. all setting up their own data). It is good to have release schedules so you aren't deploying every day (and for other reasons). It has pretty consistently always taken that long. I don't have an issue finding 45 minutes of work to do, so it isn't a big deal. ;-)
– Peter Knolle Feb 21 '13 at 10:18

3 Answers

When you run in the sandbox, are you running the tests in parallel? When you deploy to production the tests are not run in parallel, so it will likely take longer to run all of them. If you are running in parallel in the sandbox, you can disable parallel execution to get a better idea of how long the actual deployment will take.

If you are running in parallel in the sandbox it could be that there is some contention occurring for a shared resource. Also, Apex tests are placed in the Apex Job Queue for execution according to the documentation, so there could be some delay there, although 6 minutes+ seems excessive.

There could be different causes for the fluctuations in overall time between entire test runs. The documentation on Salesforce Asynchronous Processing has more info on how asynchronous processing is handled. It could be that a differing number of other orgs are running at the same time (fewer when it takes less time, more when it takes longer). It could be that some tests depend on a shared resource: during certain test runs they happen to execute at the same time and contend, while in other runs they execute at separate times and don't have an issue.

I would rather have longer deploys and more unit tests that cover as many use cases as possible. I would not recommend that you remove valid tests (e.g., negative tests, edge cases, etc.) just to reduce running time, even when doing so doesn't reduce coverage. The 30 minutes you save on your scheduled deployments will easily be eaten up by time lost to regressions that the removed tests would have caught.

For what it's worth, I've deployed to an org that has roughly 1,000 test cases (methods) and a relatively complex data model, and it takes close to 45 minutes for production deployments.

One approach you can take to save some time is to run a validate-only deploy the night before the deploy (assuming you deploy in the early morning; adjust this example as necessary) and then, on the morning of the deploy, run the actual deploy without a separate validate step. Salesforce will still validate during the deployment, and it will fail if it doesn't validate, so you don't have anything to worry about. If you are currently doing a 1.5 hr validate-only deploy followed by a 1.5 hr actual deploy, this will save you 1.5 hrs on the day of your actual deployment.
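If you are open to scripting the deploy instead of (or alongside) change sets, the Force.com Migration Tool (Ant) makes this validate-only step a one-flag change. A sketch of the relevant `build.xml` targets, assuming a standard `build.properties` holding your credentials (the `sf.username`, `sf.password`, and `sf.serverurl` property names are the conventional ones, not required):

```xml
<project xmlns:sf="antlib:com.salesforce">
    <property file="build.properties"/>

    <!-- Night before: validate only. Runs all tests but changes nothing. -->
    <target name="validate">
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" deployRoot="src"
                   checkOnly="true" runAllTests="true"/>
    </target>

    <!-- Morning of: the real deploy. Salesforce still validates and runs tests. -->
    <target name="deploy">
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" deployRoot="src"
                   runAllTests="true"/>
    </target>
</project>
```

The only difference between the two targets is `checkOnly="true"`, which tells Salesforce to compile and run the tests without committing any metadata.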

Lastly, if you are completely stuck you could start violating some testing best practices:

Unbulkify tests. Always create the minimal number of records for a test.

Remove methods that don't add additional coverage.

Use existing records in the database.

If you do any of these, it would be a good idea to structure the test code so that you can "toggle" that functionality. E.g., have a setting that holds the number of records to create for bulk testing: in preparation, test with the high number, and on the actual deploy set it to 1. Similarly, use a custom setting to conditionally execute tests that don't add coverage and skip them on deploy, etc.
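The record-count toggle above can be sketched like this; `Test_Settings__c` and its `Bulk_Size__c` number field are hypothetical names for a hierarchy custom setting you would create yourself:

```apex
@isTest
private class AccountTriggerTest {

    // Read the bulk size from a (hypothetical) custom setting.
    // With no org default present, fall back to full bulk testing (200).
    static Integer bulkSize() {
        Test_Settings__c ts = Test_Settings__c.getOrgDefaults();
        return (ts != null && ts.Bulk_Size__c != null)
            ? Integer.valueOf(ts.Bulk_Size__c)
            : 200;
    }

    @isTest
    static void testBulkInsert() {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < bulkSize(); i++) {
            accounts.add(new Account(Name = 'Bulk Test ' + i));
        }
        Test.startTest();
        insert accounts;
        Test.stopTest();
        // The assertion scales with the setting, so the test passes
        // whether Bulk_Size__c is 1 (deploy day) or 200 (full testing).
        System.assertEquals(bulkSize(),
            [SELECT COUNT() FROM Account WHERE Name LIKE 'Bulk Test %']);
    }
}
```

Before the deploy you would set `Bulk_Size__c` to 1 in production's org defaults, and leave it at 200 (or unset) in the sandbox where you do your real bulk testing.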

Hi @Peter, thanks for your response. I did a Run All Tests using Apex Test Execution; yes, it's parallel. Any idea why each run takes a different execution time? For example, the 1st time I ran all tests it took 14 mins, the 2nd time 28 mins, the 3rd time 18 mins. I'm not sure why it behaves so erratically. And thanks for your suggestion of accepting a longer deploy time rather than reducing coverage; I wasn't sure of a better approach, which is why I was doing it. How long can we wait at most for a deployment, because sometimes mine runs for 2 hours even while validating? I will disable parallel in the meantime.
– Sathya Feb 12 '13 at 22:46

Yeah, initially I was doing it that way, but even then it validates while deploying, as you say. I am not sure how we can skip validation while deploying (if we have already validated); please let me know if there is a way. I am using change set validation and deployment, by the way, not Eclipse. Also, I recently did a project that involved complete custom coding with Partner portal, sites, etc., so I had to write a lot of test methods (say 400) for each page, which ballooned the deployment time from 10 mins to 1.5 to 2 hrs. So my lead has asked me to optimize the test methods; I'm not sure what to tell them.
– Sathya Feb 13 '13 at 1:39

No, I didn't mean you could skip validation; just that you don't need to do a validate-only deploy followed by the actual deploy.
– Peter Knolle Feb 13 '13 at 1:48


@Sathya - Using SeeAllData=true on all of the tests could increase times, because instead of just processing the data inserted for the test, they'd be processing org data. You can create Custom Settings in tests, and you can access Users even without SeeAllData=true.
– Peter Knolle Feb 14 '13 at 2:58


You can create them pretty much like any other record. I usually do the following: `Your_Settings__c settings = Your_Settings__c.getOrgDefaults(); if (settings.Id == null) { settings.Your_Field__c = 'xyz'; insert settings; }`
– Peter Knolle Feb 14 '13 at 19:44

You can prepare the data using the @testSetup annotation instead of calling createDataAndCookie multiple times.

As the Salesforce documentation explains, @testSetup prepares the data once; each test method's changes are rolled back when that method finishes, and the setup records themselves are rolled back only at the end of the class:

Test setup methods can be time-saving when you need to create reference or prerequisite data for all test methods, or a common set of records that all test methods operate on.
Test setup methods can reduce test execution times especially when you’re working with many records. Test setup methods enable you to create common test data easily and efficiently. By setting up records once for the class, you don’t need to re-create records for each test method. Also, because the rollback of records that are created during test setup happens at the end of the execution of the entire class, the number of records that are rolled back is reduced. As a result, system resources are used more efficiently compared to creating those records and having them rolled back for each test method.
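A minimal sketch of the pattern; the class, object, and counts here are illustrative, not from the question:

```apex
@isTest
private class OpportunityServiceTest {

    // Runs once for the class. The records are visible to every test method
    // and are rolled back only after the entire class finishes executing.
    @testSetup
    static void makeData() {
        Account acc = new Account(Name = 'Shared Test Account');
        insert acc;
        List<Opportunity> opps = new List<Opportunity>();
        for (Integer i = 0; i < 5; i++) {
            opps.add(new Opportunity(
                Name = 'Opp ' + i,
                StageName = 'Prospecting',
                CloseDate = Date.today().addDays(30),
                AccountId = acc.Id));
        }
        insert opps;
    }

    @isTest
    static void eachMethodSeesTheSetupData() {
        // Changes made inside a test method are rolled back before the next
        // method runs, so every method starts from the same five records.
        System.assertEquals(5, [SELECT COUNT() FROM Opportunity]);
    }
}
```

If several of your 50-to-60-method classes currently rebuild the same data in every method, moving that setup into one @testSetup method is usually the single biggest win for class runtime.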