From BBC Basic to Force.com and beyond…

Category Archives: Testing

In this blog I want to highlight the use of Test.createStub with Force DI, to effectively inject mock implementations of dependent classes. We will go through a small example showing a top-level class with two dependencies. In order to test that class in isolation as part of a true unit test, we will mock its dependencies and then inject the mocks. This blog will also show how Force DI adds value by extracting and encapsulating configuration code from the app.

ChatApp Requirements

The greeting message for the app should be determined by an out of office setting combined with a configurable initial message. The developers decide to split the concerns of fulfilling this requirement into two classes. An instance of the Display class will handle the system part of the message and an instance of the Message class will be used to obtain the rest. Object-oriented programming (OOP) principles have been used to create an interface for Display and a base class for Message. The implementations of these are not of huge interest here, so they are not shown.

Life without Dependency Injection

The following first shows how the ChatApp looks without Dependency Injection.
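The original embedded code is not reproduced here; the following is a minimal sketch of the shape being described (the class and method names StandardDisplay, WelcomeMessage, show and saySomething are hypothetical):

public class ChatApp {
    public String greetings() {
        // Dependencies are hard-wired to concrete classes, so those classes
        // must exist (and be fully set up) before ChatApp can be tested
        Display display = new StandardDisplay();
        Message message = new WelcomeMessage();
        return display.show(message.saySomething());
    }
}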

Now also imagine that the Display class has further dependencies and setup requirements; these would add to the test setup code. This approach is also not very supportive of Test-Driven Development, since ChatApp requires its other dependencies to be implemented before the test can run. Technically it resembles an integration test more than a unit test.

By using the Force DI Injector.Org.Bindings.set method and the Test.createStub method (read more here), mock implementations are injected in place of the concrete implementations (which need not even exist at this point). This avoids writing more code than needed in order to get to a point where a test can be written and run. Together, these two technologies bring the developer experience closer to that of Test-Driven Development.
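A minimal sketch of such a test follows. The fluent module binding API shape is assumed (in current Force DI releases the classes carry a di_ prefix, as in di_Injector and di_Module), and since the Apex Stub API stubs classes rather than interfaces, the sketch stubs a hypothetical concrete StandardDisplay and uses it via the Display interface:

@IsTest
private static void whenGreetingsCalledMocksAreUsed() {
    // Create stubs for the two dependencies via the Apex Stub API
    Display mockDisplay = (Display) Test.createStub(StandardDisplay.class, new SimpleStub());
    Message mockMessage = (Message) Test.createStub(Message.class, new SimpleStub());
    // Override the bindings so the Injector hands out the mocks
    Injector.Org.Bindings.set(new Module()
        .bind(Display.class).toObject(mockDisplay)
        .bind(Message.class).toObject(mockMessage));
    // Exercise the code under test and assert against the canned stub response
    System.assertEquals('Mocked!', new ChatApp().greetings());
}

// Minimal StubProvider returning a canned response for every method call
public class SimpleStub implements System.StubProvider {
    public Object handleMethodCall(Object stub, String methodName, Type returnType,
            List<Type> paramTypes, List<String> paramNames, List<Object> args) {
        return 'Mocked!';
    }
}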

What about Integration Tests?

As I mentioned earlier, there was nothing necessarily wrong with the initial test code; it's just that, in terms of unit testing and TDD, its scope was too broad (it tested more than it needed to). So if we wanted to run this original test now that we have implemented DI above, what else is needed?

The question you are likely asking is: how do the real (non-mock) implementations of Display and Message get executed? This is where another feature of Force DI comes in. Force DI Modules can be created dynamically, as per the unit test code above, and/or configured via the Binding custom metadata. Once the Binding custom metadata is configured, the following code runs automatically as part of the Injector.Org initialization. Other code-defined bindings for DI used elsewhere throughout the app would also go here (read more).
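As a sketch of such a module (the implementation class names are hypothetical and the module API shape is assumed, per the earlier note):

public class ChatAppModule extends Module {
    public override void configure() {
        // Bind the real (non-mock) implementations used in production
        bind(Display.class).to(StandardDisplay.class);
        bind(Message.class).to(WelcomeMessage.class);
    }
}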

Below is the original test code we started with, only this time the ChatApp code uses the Injector class to resolve its dependencies. Since in production those bindings are not overridden via the Injector.Org.Bindings.set method, the Injector code uses the custom metadata based binding configuration, which invokes the module logic above.
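The test code embed itself is not reproduced here, but reworked to use DI, the ChatApp class it exercises might look like this sketch (again using the hypothetical method names from earlier):

public class ChatApp {
    public String greetings() {
        // Dependencies are now resolved by the Injector; the bindings come
        // either from a test override or from the Binding custom metadata
        Display display = (Display) Injector.Org.getInstance(Display.class);
        Message message = (Message) Injector.Org.getInstance(Message.class);
        return display.show(message.saySomething());
    }
}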

It’s worth pointing out that, in wanting to include a configuration requirement in the ChatApp use case, the above examples may appear overly complex for basic needs. It can also seem a bit daunting for new developers to use OOP concepts like interfaces and base classes. Well, good news! It turns out you can use this approach with regular standalone classes as well. Take a look at the simpler use case below.

NOTE: As mentioned in the previous blog, Force DI is based roughly on Java's Guice, which also has a great worked example here. The principles explained there are the same as here; however, the Guice examples use annotations to let the injector automatically detect where to do the injection. In Force DI, this has to be expressed directly via Injector.Org.getInstance.

I am proud to announce the publication of the second edition of my book Force.com Enterprise Architecture. In this blog we take a look at what the book covers for new readers and for those who have read the first edition, what’s in store for them in the second!

What is the book about?

“This book will teach you how to architect and support enduring applications for enterprise clients with Salesforce by exploring how to identify architecture needs and design solutions based on industry standard patterns. There are several ways to build solutions on Force.com, and this book will guide you through a logical path and show you the steps and considerations required to build packaged solutions from start to finish. It covers all aspects, from engineering to getting your application into the hands of your customers, and ensuring that they get the best value possible from your Force.com application. You will get acquainted with extending tools such as Lightning App Builder, Process Builder, and Flow with your own application logic. In addition to building your own application API, you will learn the techniques required to leverage the latest Lightning technologies on desktop and mobile platforms.” You can read more from the Amazon page here.

What is new in the second edition?

This second edition is 512 pages, of which over 100 pages are new, including 2 brand new chapters covering Lightning and Testing, plus numerous other updates to reflect the latest that the platform currently has to offer, as well as thoughts on what's upcoming. The following are some highlights:

Patterns, further worked examples of using the Application and Query factories and other features contributed by the community. The sample FormulaForce app is extended to support generic Lightning components that leverage factory principles.

Custom Metadata, now integrated into various chapters throughout the book and the sample application FormulaForce, showcasing various benefits from reducing configuration management overheads through to managing extensibility.

Lightning Architecture and Components, builds from the ground up an appreciation of the Lightning framework fundamentals, including how separation of concerns (SOC) is applied, moving from simple components you can run standalone to more advanced integration approaches with Salesforce's own app containers.

Lightning App Containers, continues to explore Lightning Experience and Salesforce1 mobile features with more complex components.

Advanced Unit Testing, the second of the two brand new chapters introduces a coded use case and contrasts the differences between integration testing and unit testing. Focusing mainly on the latter, the chapter teaches the principles required to implement true unit testing, such as Dependency Injection and Mocking. The latest Apex Stub API and the FinancialForce ApexMocks framework are also covered.

As a self-confessed API junkie, each time the new Salesforce platform release notes land I tend to head straight to anything API related, such as the sections on the REST API, Metadata, Tooling, Streaming, Apex and so on. This time the Spring’17 release seems more packed than ever with API potential for building apps on platform, off platform, and combinations of the two! So I thought I would write a short blog highlighting what I found and my thoughts on the following…

New or updated APIs in Spring’17…

Lightning API (Developer Preview)

External Services (Beta)

Einstein Predictive Vision Service (Selected Customers Pilot)

Apex Stub API (GA)

SObject.getPopulatedFieldsAsMap API (GA)

Reports and Dashboard REST API Enhancements (GA)

Composite Resource and SObject Tree REST APIs (GA)

Enterprise Messaging Platform Java API (GA)

Bulk API v2.0 (Pilot)

Tooling API (GA)

Metadata API (GA)

Lightning API (Developer Preview)

This REST API appears to be a UI helper API that wraps a number of smaller, already existing REST APIs on the platform, providing a one stop shop (a single API call) for reading both record data and related record metadata, such as layout and theme information. In addition to that, it resolves security before returning the response. If you're building your own replacement UI or integrating the platform into a custom UI, this API looks like it could be quite a saving on development costs compared to the many API calls and client logic that would otherwise be required to figure all this out. Reading between the lines, it's likely the byproduct of a previously internal API Salesforce themselves have been using for Salesforce1 Mobile already? But that's just a guess on my behalf! The good news, if so, is that it's likely pretty well battle tested from a stability and use case perspective. The API has its own dedicated Developer Guide if you want to read more.

External Services (Beta)

If there is one major fly in the ointment of the #clicksnotcode story so far, it's been calling APIs. By definition they require a developer to write code to use them, right? Well, not anymore! A new feature delivered via Flow (and likely Process Builder) allows the user to effectively teach Flow about REST APIs via JSON Hyper-Schema (an emerging and very interesting independent specification for describing APIs). Once the user points the new External Services Wizard at an API supporting JSON Hyper-Schema, it uses that information to generate Apex code for an Invocable Method that makes the HTTP callout. Generating Apex code is a relatively new approach by Salesforce to a tricky requirement, bringing more power to non-developers, and one I am also a fan of. It is something they have done before for Transaction Security Policy plugins and of course Force.com Sites. At the time of writing I could not find it in my pre-release org, but I am keen to dig in deeper! Read more here.

SObject.getPopulatedFieldsAsMap API (GA)

So calling this an “API” is a bit of a stretch, I know, since it's basically an existing Apex method on the SObject class. The big news though is that a gap in its behaviour has been filled that makes it more useful. Prior to Spring, this method would not recognise fields set by code after a record (SObject) was queried. Thus if, for example, you were attempting to implement a generic FLS checking solution using the response from this method, you were left feeling a little disappointed. Thankfully the method now returns all populated fields, regardless of whether they were populated via the query or later set by code.
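A minimal sketch of the fixed behaviour (assuming an Opportunity record exists in the org):

Opportunity opp = [select Id, Name from Opportunity limit 1];
opp.Amount = 100; // field set by code after the query
Map<String, Object> populated = opp.getPopulatedFieldsAsMap();
// Prior to Spring'17 only the queried Id and Name fields appeared here;
// the Amount set by code afterwards is now returned too
System.assert(populated.containsKey('Amount'));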

Reports and Dashboard REST API Enhancements (GA)

It's now possible to create and delete reports using the Analytics REST API (there is no mention of the Apex API equivalent, and I suspect this won't be supported). Reports are a great way to provide a means for driving data selection for processes you develop. The Analytics API is available in REST and Apex contexts. As well as driving reports from your code, Report Notifications allow users to schedule reports and have actions performed if certain criteria are met. I recently covered the ability to invoke an Apex class and a Flow in response to a Report Notification in this blog, Supercharging Salesforce Report Subscriptions. In Spring, the Reports REST API can now create notifications.
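As a reminder of the Apex side, here is a minimal sketch of driving a report from code via the Analytics API (the report DeveloperName is hypothetical):

Id reportId = [select Id from Report where DeveloperName = 'My_Opportunity_Report' limit 1].Id;
Reports.ReportResults results = Reports.ReportManager.runReport(reportId, true); // true = include detail rows
System.debug(results.getFactMap());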

Composite Resource and SObject Tree REST APIs (GA)

An often overlooked implication of using multiple REST API calls in response to a user action is that, if those calls update the database, there is no overarching database transaction. If the user closes the page before processing is done, kills the mobile app, or your client code simply crashes, it is possible to leave the records in an invalid state. This is bad for database integrity. Apart from this, making multiple consecutive REST API calls can eat into an org's 24hr rolling quota.

To address these use cases Salesforce have now released the composite and tree APIs in GA form (actually, it turns out this was already GA; how did I miss that?!). The composite resource API allows you to package multiple CRUD REST API calls into one call, and optionally control transaction scope via the AllOrNothing header, allowing the possibility of committing multiple records in one CRUD API request. The tree API allows you to create an account with a related set of contacts (for example) in one transaction-wrapped REST API call. Basically, the REST API is now bulkified! You can read more in the release notes here and in the REST API developer's guide here and here.

Bulk API v2.0 (Pilot)

Salesforce is overhauling their long-standing Bulk REST API. Chances are you have not used it much, as it's mostly geared towards data loading tools and integration frameworks (it's simply invoked by ticking a box in the Salesforce Data Loader). The first phase of the v2.0 changes to this API allows larger CSV files to be uploaded and automatically chunked by the platform, without the developer having to split them. The way limits are imposed is also changing, becoming more record centric. Read more here.

Tooling API (GA)

The Tooling API appears to have taken on new REST API resources that expose more standard aspects of the platform, such as formula functions and operators. For those building alternative UIs over these features it's a welcome alternative to hard coding these lists and having to remember to check and update them each release. Read more here.

Metadata API (GA)

Ironically my favourite API, the Metadata API, has undergone mainly typical changes relating to new features elsewhere in the release, so no new methods or general features. I guess given all the great stuff above, I cannot feel too sad! Especially with the recent announcement from the Apex PM that the native Apex Metadata API is finally under development; of course, safe harbour, and no statement yet on dates… but progress!

The Apex Mocks framework gained a new feature recently, namely Matchers. This new feature means that we can start verifying what records and their field values are being passed to a mocked Unit Of Work more reliably and with a greater level of detail.

On the face of it, it looks like it should correctly verify that an updated Opportunity record, with 10% removed from the Amount, was passed to the Unit Of Work. But this fails with an assert claiming the method was not called. The main reason for this is that the record passed to verify is a new instance, and this is not what the mock recorded. Changing the verify to use the test record instance works, but this only verifies that the test record was passed; the Amount could be anything.

The solution is to use the new Matchers functionality for SObjects. This time we can verify that a record was passed to the registerDirty method, that it was the one we expected by its Id, and critically that the correct Amount was set.
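A sketch of such a verification follows, assuming the Apex Enterprise Patterns Application class from earlier posts and a service under test that applies a 10% discount to an Opportunity with an Amount of 100:

fflib_ApexMocks mocks = new fflib_ApexMocks();
fflib_ISObjectUnitOfWork uowMock =
    (fflib_ISObjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);
Application.UnitOfWork.setMock(uowMock);

Opportunity opp = new Opportunity(
    Id = fflib_IDGenerator.generate(Opportunity.SObjectType), Amount = 100);

// ... invoke the service method under test that applies the discount ...

// Verify registerDirty received a record with the expected Id and Amount
((fflib_ISObjectUnitOfWork) mocks.verify(uowMock, 1)).registerDirty(
    fflib_Match.sObjectWith(new Map<SObjectField, Object> {
        Opportunity.Id => opp.Id,
        Opportunity.Amount => 90 }));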

There are also the methods fflib_Match.sObjectWithName and fflib_Match.sObjectWithId as kinds of shorthand if you just want to check these specific fields. The Matcher framework is hugely powerful, with many more useful matchers, so I encourage you to take a deeper look at David Frudd's excellent blog post here to learn more.

If you want to know more about how Apex Mocks integrates with the Apex Enterprise Patterns as shown in the example above, refer to this two part series here.


Quite often when I answer questions on Salesforce StackExchange they prompt me to consider future blog posts. This question has been sat on my blog list for a while, and I'm finally going to tackle the ‘performing validation in the after phase’ comment in this short blog post, so here goes!

Salesforce offers Apex developers two phases within an Apex Trigger, before and after. Most examples I see tend to perform validation code in the before phase of the trigger; even the Salesforce examples show this. However, there can be implications with this that are not at first obvious. Let's look at an example; first, here is my object…
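The original embedded definitions are not reproduced here. As a sketch, assume a simple custom object, LifeTheUniverseAndEverything__c, with a single numeric Answer__c field, and a validation trigger written in the commonly seen before phase:

trigger LifeTheUniverseAndEverythingTrigger1 on LifeTheUniverseAndEverything__c
        (before insert) {
    // Make sure that if an answer is given it is always 42!
    for(LifeTheUniverseAndEverything__c record : Trigger.new) {
        if(record.Answer__c != null && record.Answer__c != 42) {
            record.Answer__c.addError('Answer is not 42!');
        }
    }
}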

The following test method asserts that a valid value has been written to the database. In reality you would also have tests that assert invalid values are rejected, though the test method below will suffice for this blog.
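A sketch of such a test, built around the object above:

@IsTest
private static void testValidAnswerWritten() {
    // Insert a valid answer and assert it reached the database intact
    LifeTheUniverseAndEverything__c record =
        new LifeTheUniverseAndEverything__c(Answer__c = 42);
    insert record;
    System.assertEquals(42,
        [select Answer__c from LifeTheUniverseAndEverything__c
            where Id = :record.Id].Answer__c);
}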

Once developed, Apex Triggers are either deployed into a Production org or packaged within an AppExchange package, which is then installed. In the latter case such Apex Triggers cannot be changed. The consideration raised here arises if a second Apex Trigger is created on the object. There can be a few reasons for this, especially if the existing Apex Trigger is managed and cannot be modified, or a developer simply chooses to add another Apex Trigger.

So what harm can a second Apex Trigger on the same object really cause? Well, like the first Apex Trigger, it has the ability to change field values as well as validate them. As per the additional considerations at the bottom of the Salesforce trigger invocation documentation, Apex Triggers are not guaranteed to fire in any particular order. So what would happen if we added a second trigger, like the one below, which attempts to modify the answer to an invalid value?
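A sketch of such a trigger (the naming anticipates the later part of this post, where the code in the two triggers gets switched):

trigger LifeTheUniverseAndEverythingTrigger2 on LifeTheUniverseAndEverything__c
        (before insert) {
    // Deliberately (and incorrectly) overwrite any answer given
    for(LifeTheUniverseAndEverything__c record : Trigger.new) {
        record.Answer__c = 43;
    }
}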

At the time of writing, in my org it appears that this new trigger (despite its name) is actually run by the platform before my first validation trigger. Since the validation code gets executed after this new trigger changes the value, we can see the validation is still catching it. So while that's technically not what this test method was testing, it shows for the purposes of this blog that the validation is still working, phew!

Apex Trigger execution order matters…

So while all seems well up to this point, remember that we cannot guarantee that Salesforce will always run our validation trigger last. What if our validation trigger ran first? Since we cannot determine the order of invocation of triggers, what we can do to illustrate the effects of this is simply switch the code in the two example triggers.

Having effectively emulated the platform running the two triggers in a different order (validation trigger first, then the field-modifying trigger second), our test asserts now show that the validation logic in this scenario failed to do its job, and invalid data reached the database. Not good!

So what’s the solution to making my triggers bulletproof?

So what is the solution to avoiding this? Well, it's pretty simple really: move your validation logic into the after phase. Even though the triggers may still fire in different orders, one thing is certain: nothing, and I mean nothing, can change the record values in the after phase of a trigger execution, meaning you can reliably check the field values without fear of them changing later!

trigger LifeTheUniverseAndEverythingTrigger2 on LifeTheUniverseAndEverything__c
        (after insert) {
    // Make sure that if an answer is given it is always 42!
    for(LifeTheUniverseAndEverything__c record : Trigger.new) {
        if(record.Answer__c != null && record.Answer__c != 42) {
            record.Answer__c.addError('Answer is not 42!');
        }
    }
}

Thus with this change in place, even though the second trigger fires afterwards and attempts to change the inserted value to 43, the first trigger's validation still prevents invalid records being inserted to the database. Success!

One downside here is that, for error scenarios, the record passes through potentially many more platform features (workflow and other processes) before your validation eventually stops proceedings and asks the platform to roll everything back. That said, error scenarios are hopefully less frequent use cases once users learn your system.

So if you feel the scenario described above is likely to occur (particularly if you're developing a managed package, where it's likely subscriber developers will add triggers), you should seriously consider leveraging the after phase of triggers for validation. Note this also applies to the update and delete events in triggers.


If you attended my Advanced Apex Enterprise Patterns session at Dreamforce 2014 you'll have heard me highlight the difference between Apex tests written as true unit tests vs those written in a way that more resembles an integration test. As Paul Hardaker (ApexMocks author) once pointed out to me, the reality is that Apex developers often end up writing only integration tests.

Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming, a unit could be an entire module, but it is more commonly an individual function or procedure. In object-oriented programming, a unit is often an entire interface, such as a class, but could be an individual method. Unit tests are short code fragments created by programmers or occasionally by white box testers during the development process

Does this describe an Apex Test you have written recently?

Let’s review what Apex tests typically require us to perform…

Setup of application data for every test method

Execution of more code than we care about testing at the time

Tests that are often not varied enough, as they can take a long time to run!

Does the following Wikipedia snippet on integration testing more accurately describe this?

Integration testing (sometimes called integration and testing, abbreviated I&T) is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before validation testing. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing

The challenge of writing true unit tests in Apex can also leave those wishing to follow practices like TDD struggling, due to the lack of dependency injection and mocking support in the Apex runtime. We start to desire mocking support such as we find, for example, in Java's Mockito (the inspiration behind ApexMocks).

The lines between unit vs integration testing, and which we should use when, can get blurred, since Force.com does need Apex tests to invoke Apex Triggers for coverage (requiring actual test integration with the database), and if you're using Workflows a lot you may want their behaviour reflected in your tests. So one cannot completely move away from writing integration tests, of course. But is there a better way for us to regain some of the benefits other platforms enjoy in this area, for the times we feel it would benefit us?

Problems writing Unit Tests for complex code bases…

The problem is that a true unit test aims to test a small unit of the code, typically a specific method. However, if this method ends up querying the database, we need to have inserted those records prior to calling the method and then assert the records afterwards. If you're familiar with Apex Enterprise Patterns, you'll recognise the following separation of concerns in this diagram, which shows clearly what code might be executed in a controller test, for example.

For complex applications this approach can become quite an overhead per test, before you even get to call your controller method and assert the results! Let's face it, as we have to wait longer and longer for such tests, this inhibits our desire to write further, more complex tests that might more thoroughly exercise the code with different data combinations and use cases.

What if we could emulate the database layer somehow?

Well, those of you familiar with Apex Enterprise Patterns will know it's big on separation of concerns. Aspects such as querying the database and updating it are encapsulated away in so-called Selectors and the Unit Of Work. Just prior to Dreamforce 2014, the patterns introduced the Application class; this provides a single application-wide means to access the Service, Domain, Selector and Unit Of Work implementations, as opposed to directly instantiating them.

If you've been reading my book, you'll know that this also provides access to new Object Oriented Programming possibilities, such as polymorphism between the Service layer and Domain layer, allowing for functional frameworks and greater reuse to be constructed within the code base.

In this two part blog series, we are focusing on the role of the Application class and its setMock methods. These methods, modelled after the platform's Test.setMock method (for mocking HTTP comms), provide a means to mock the core architectural layers of an application based on the Apex Enterprise Patterns. By allowing mocking in these areas, we can write unit tests that focus only on the behaviour of the controller, service or domain class we are testing.

Next is the Service class. Since the service layer remains stateless and global, I prefer to retain the static method style. Since you cannot apply interfaces to static methods, I use the following convention, though I've seen others use inner classes. First create a new class, something like OpportunitiesServiceImpl, copy the implementation of the existing service into it and remove the static modifier from the method signatures before applying the interface. The original service class then becomes a stub for the service entry point.
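A sketch of this convention follows, with method names borrowed from the Apex Enterprise Patterns sample application:

// Interface applied to the new implementation class
public interface IOpportunitiesService {
    void applyDiscounts(Set<Id> opportunityIds, Decimal discountPercentage);
}

// The original service class becomes a thin static stub that delegates
// to whichever implementation the Application factory resolves
public class OpportunitiesService {
    public static void applyDiscounts(Set<Id> opportunityIds, Decimal discountPercentage) {
        service().applyDiscounts(opportunityIds, discountPercentage);
    }
    private static IOpportunitiesService service() {
        return (IOpportunitiesService) Application.Service.newInstance(IOpportunitiesService.class);
    }
}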

Once you have defined and implemented your interfaces, you need to ensure there is a means to switch at runtime between the different implementations of them: the real implementation, and a mock implementation as required within a test context. To do this, a factory pattern is applied for calling logic to obtain the appropriate instance. Define the Application class as follows, using the factory classes provided in the library. Also note that the Unit Of Work is defined here in a single maintainable place.
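A sketch of such an Application class, modelled on the one in the sample application (the specific mappings shown are illustrative):

public class Application {

    // Single maintainable definition of the Unit Of Work (order reflects
    // insert dependency order between the objects)
    public static final fflib_Application.UnitOfWorkFactory UnitOfWork =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType> {
                Account.SObjectType,
                Opportunity.SObjectType,
                OpportunityLineItem.SObjectType });

    // Maps service interfaces to their default implementations
    public static final fflib_Application.ServiceFactory Service =
        new fflib_Application.ServiceFactory(
            new Map<Type, Type> {
                IOpportunitiesService.class => OpportunitiesServiceImpl.class });

    // Maps SObject types to the Selector classes that query them
    public static final fflib_Application.SelectorFactory Selector =
        new fflib_Application.SelectorFactory(
            new Map<SObjectType, Type> {
                Opportunity.SObjectType => OpportunitiesSelector.class });

    // Maps SObject types to their Domain class constructors
    public static final fflib_Application.DomainFactory Domain =
        new fflib_Application.DomainFactory(
            Application.Selector,
            new Map<SObjectType, Type> {
                Opportunity.SObjectType => Opportunities.Constructor.class });
}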

If you're adapting an existing code base, be sure to leverage the Application class factory methods in your application code: seek out code which explicitly instantiates the classes of your Domain, Selector and Unit Of Work usage. Note you don't need to worry about Service class references, since this is now just a stub entry point.

The following code shows how to wrap the Application factory methods with convenience methods that can help avoid repeated casting to the interfaces. It's up to you whether you adopt these or not; the effect is the same regardless, though the modification to the service method shown above is required.
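For example, a Selector class might expose a convenience factory method like this (the field list is illustrative):

public class OpportunitiesSelector extends fflib_SObjectSelector {

    // Convenience method avoids repeated casting at call sites
    public static OpportunitiesSelector newInstance() {
        return (OpportunitiesSelector) Application.Selector.newInstance(Opportunity.SObjectType);
    }

    public List<Schema.SObjectField> getSObjectFieldList() {
        return new List<Schema.SObjectField> { Opportunity.Id, Opportunity.Amount };
    }

    public Schema.SObjectType getSObjectType() {
        return Opportunity.SObjectType;
    }
}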

In this blog we've looked at how to define and apply interfaces between your service, domain, selector and unit of work dependencies. Using a factory pattern, through the indirection of the Application class, we have implemented an injection framework within the definition of these enterprise application separation of concerns.

I've seen dependency injection done via constructor injection; my personal preference is to use the approach shown in this blog. My motivation lies with the fact that these pattern layers are well known throughout the application code base, and the Application class supports other facilities, such as polymorphic instantiation of domain classes and helper methods, as shown above on the Selector factory.

In the second part of this series we will look at how to write true unit tests for your controller, service and domain classes, leveraging the amazing ApexMocks library! If in the meantime you want to get a glimpse of what this might look like, take a wander through the Apex Enterprise Patterns sample application tests here and here.

// Provide a mock instance of a Unit of Work
Application.UnitOfWork.setMock(uowMock);
// Provide a mock instance of a Domain class
Application.Domain.setMock(domainMock);
// Provide a mock instance of a Selector class
Application.Selector.setMock(selectorMock);
// Provide a mock instance of a Service class
Application.Service.setMock(IOpportunitiesService.class, mockService);


If you're a fan of TDD you'll hopefully have been following FinancialForce.com's latest open source contribution to the Salesforce community, known as ApexMocks. It provides a fantastic framework for writing true unit tests in Apex, allowing you to implement mock implementations of classes used by the code you're testing.

The ability to construct data structures returned by mock methods is critical. If it's a method performing a SOQL query, there has been an elusive challenge in the area of queries containing sub-selects. Take a look at the following test, which inserts and then queries records from the database.
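A minimal sketch of such a test:

@IsTest
private static void testQueryWithSubSelect() {
    // Insert a parent and child record, then query them back together
    Account acct = new Account(Name = 'Acme');
    insert acct;
    insert new Contact(LastName = 'Smith', AccountId = acct.Id);
    List<Account> accounts =
        [select Id, Name,
            (select Id, LastName from Contacts)
            from Account where Id = :acct.Id];
    System.assertEquals(1, accounts[0].Contacts.size());
}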

Now you may think you can mock the results of this query by simply constructing the required records in memory, but you'd be wrong! The following code fails to compile with a ‘Field is not writeable: Contacts’ error on line 16.
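Illustratively (the offending assignment is commented out so the rest still compiles):

// In-memory construction of the parent and child records
Account acct = new Account(Name = 'Acme');
List<Contact> kids = new List<Contact> { new Contact(LastName = 'Smith') };
// This is the line that fails to compile:
// acct.Contacts = kids; // Field is not writeable: Contacts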

While Salesforce have gradually opened up write access to previously read-only fields, the most famous of which being Id, they have yet to enable the ability to set the value of a child relationship field. Paul Hardaker contacted me recently to ask if this problem had been resolved, as he had the very need described above. Using his ApexMocks framework he wanted to mock the return value of a Selector class method that makes a SOQL query with a sub-select.

Driven by an early workaround (I believe Chris Peterson found it) to the now historic inability to write to the Id field, I started to think about using the same approach to stitch together parent and child records using the JSON serializer and deserializer. Brace yourself though, because it's not ideal, but it does work! And I've managed to wrap it in a helper method that can easily be adapted or swapped out if a better solution presents itself.

NOTE: Credit should also go to Paul Hardaker for the Mock.Id.generate method implementation.

The Mock class is provided with this blog as a Gist, but I suspect it will find its way into ApexMocks at some point. The secret of this method is that it leverages the fact that we can, in a supported way, expect the platform to deserialize into memory the following JSON representation of the very database query result we want to mock.
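The embedded example is not reproduced here; the shape in question is the standard serialized form of a SOQL query result, where the child relationship appears as a nested query result (the field values and Ids below are illustrative):

[
    {
        "attributes" : { "type" : "Account" },
        "Id" : "001000000000001AAA",
        "Name" : "Acme",
        "Contacts" : {
            "totalSize" : 1,
            "done" : true,
            "records" : [
                {
                    "attributes" : { "type" : "Contact" },
                    "Id" : "003000000000001AAA",
                    "LastName" : "Smith"
                }
            ]
        }
    }
]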

The Mock.makeRelationship method turns the parent and child lists into JSON and goes through some rather funky code I'm quite proud of to splice the two together, before deserializing it back into an SObject list, and voila! It currently only supports a single sub-select, but can easily be extended to support more. Regardless of whether you use ApexMocks or not (though you really should try it), I hope this helps you write a few more unit tests than you've previously been able to.
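A sketch of the helper in use; the signature shown is assumed from the Gist (a similar utility later shipped in ApexMocks as fflib_ApexMocksUtils.makeRelationship):

// Stitch in-memory Contacts into the Accounts' Contacts relationship
List<Account> accounts = new List<Account> { new Account(Name = 'Acme') };
List<Contact> contacts = new List<Contact> { new Contact(LastName = 'Smith') };
List<Account> accountsWithContacts = (List<Account>) Mock.makeRelationship(
    List<Account>.class,                     // concrete list type to deserialize back into
    accounts,                                // parent records
    Contact.AccountId,                       // relationship field on the child
    new List<List<Contact>> { contacts });   // children, one list per parent
System.assertEquals(1, accountsWithContacts[0].Contacts.size());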