
The concept of a "wrapper class" occurs frequently in Force.com development. Object-oriented design encourages the use of classes to encapsulate data and the behaviors that mutate the data or interact with other classes.

The goals of an Apex wrapper class are:

Decouple data (records) from behaviors (methods)

Validate inputs

Minimize side effects

Enable unit testing

Handle exceptions

This article provides an example wrapper class template to address these goals and explores each aspect of the wrapper class individually.

Register for Dreamforce 13 and attend my session on "Leap: An Apex Development Framework" for training on how to use the Leap framework to generate wrapper classes, triggers, and more. Follow @codewithleap on Twitter for updates on Apex design patterns.

All Apex wrapper classes share a few common properties and methods. To avoid repeatedly copy-pasting the same common code into each wrapper class, a base class is created for all wrapper classes to inherit through the "extends" keyword.

Inheritance

As of this writing, Apex does not allow inheriting from the core SObject class (which would be ideal).

Record encapsulation uses the strongly typed Order__c record, rather than the abstractly typed SObject, in order to gain the benefits of strongly typed objects in the Force.com IDE, such as field auto-completion. Moving the record to the base class would require constantly casting the SObject to its strong type.
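A minimal sketch of the resulting shape, assuming illustrative names such as WrapperBase (each top-level class lives in its own file):

```apex
// Illustrative base class inherited by every wrapper class.
public virtual class WrapperBase {
    // Transaction outcome; discussed under Exception Handling.
    public Boolean success = true;
    public List<String> errors = new List<String>();

    public Boolean hasErrors() {
        return !errors.isEmpty();
    }
}

// The strongly typed record stays in the subclass, since Apex
// cannot inherit from SObject and casting everywhere would
// forfeit IDE auto-completion.
public class Order extends WrapperBase {
    public Order__c record;
}
```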

Class Name

For custom objects, it's common to remove the namespace prefix and __c suffix from the wrapper class name. For standard objects, always prefix the class name with "SFDC", or some other naming convention, to avoid conflicts.

Wrapping Standard Objects

Creating wrapper classes with the same name as standard objects, although possible, is discouraged.
Class names supersede standard object names, so if the intent is to create a standard Account object but a class named 'Account' already exists, the code will not compile, because the compiler tries to create an instance of the wrapper class rather than the standard object.

To get around this, use a standard naming convention, such as SFDCAccount, SFDCContact, or SFDCLead, to differentiate the wrapper class names from their respective standard objects.

Construction

Wrapper classes are constructed in two contexts:

withSObject(SObject record): The record has already been retrieved via a SOQL statement and the data just needs to be wrapped in a class container.

withId(ID recordId): The ID of a record is known, but has not yet been retrieved from the database.

The actual class constructor accepts no arguments. The builder pattern is used to construct the class and kick off a fluent chain of subsequent methods in a single line.

Once constructed, SObject fields are accessed directly through the public 'record' property, as in:

new Order().withId(someId).record.Custom_Field__c

This convention is chosen over creating getCustomField() and setCustomField() properties for brevity and to make use of code auto-complete features. However, if mutability of the SObject record or its fields is a concern, the public accessor can be changed to 'private' and corresponding get/set properties added.
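A hedged sketch of the two construction contexts on the Order wrapper (the WrapperBase name and the queried field list are illustrative):

```apex
public class Order extends WrapperBase {
    public Order__c record;

    // The constructor itself takes no arguments.
    public Order() {}

    // Context 1: the record was already retrieved via SOQL.
    public Order withSObject(SObject sobj) {
        this.record = (Order__c) sobj;
        return this;
    }

    // Context 2: only the ID is known; fetch the record now.
    public Order withId(Id recordId) {
        this.record = [SELECT Id, Custom_Field__c
                       FROM Order__c
                       WHERE Id = :recordId];
        return this;
    }
}
```

Both builders return 'this', so construction and field access chain in a single line, as in new Order().withId(someId).record.Custom_Field__c.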

SFIELDS

Each wrapper class exposes a static public string named SFIELDS for use in selecting all fields for a record. This is equivalent to writing "SELECT * FROM TABLE_NAME" in traditional SQL syntax.

The SFIELDS string can be periodically auto-generated from a Leap task to keep wrapper class field definitions in sync with the data model, or manually updated with just a subset of fields to be used by the wrapper class.
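For example (the field list is illustrative and would be regenerated as the data model changes):

```apex
// Illustrative field list kept in sync with the Order__c object.
public static final String SFIELDS = 'Id, Name, Status__c, Custom_Field__c';

// Usage: dynamic SOQL approximating SELECT * FROM Order__c.
List<Order__c> records = Database.query(
    'SELECT ' + Order.SFIELDS + ' FROM Order__c LIMIT 100');
```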

The return type of each builder method must be the wrapper class type itself, and each builder method returns 'this' to allow method chaining. The builder pattern is useful in the early stages of development, when the exact method behaviors and system architecture are not entirely known (see the 40-70 Rule of Technical Architecture), and allows a compositional flow to development, incrementally adding new features without significant refactoring effort.

Child Object Relationships

A wrapper class represents a single instance of a Salesforce record. Depending on how lookup relationships are defined, wrapper classes will usually be either a parent (master) or child (detail) of some other records, which also have wrapper classes defined.

The "fromRecords" utility method is provided to easily construct collections of child objects retrieved from SOQL queries. Collections of child wrapper classes are stored as Maps that support the quick lookup of child wrapper classes by their record ID.
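A possible shape for fromRecords on a hypothetical OrderItem child wrapper (the per-class, generics-free implementation is an assumption, since Apex lacks user-defined generics):

```apex
// Child wrapper with a utility to wrap a SOQL result set
// into a Map keyed by record Id for quick lookup.
public class OrderItem extends WrapperBase {
    public Order_Item__c record;

    public OrderItem withSObject(SObject sobj) {
        this.record = (Order_Item__c) sobj;
        return this;
    }

    public static Map<Id, OrderItem> fromRecords(List<Order_Item__c> records) {
        Map<Id, OrderItem> items = new Map<Id, OrderItem>();
        for (Order_Item__c r : records) {
            items.put(r.Id, new OrderItem().withSObject(r));
        }
        return items;
    }
}
```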

Properties and Side Effects

The #1 cause of software entropy in Apex development is unwanted "side effects": dependencies on class variables that can be modified by other methods.

The wrapper class template encourages lazy initialization of properties to protect access to member variables. Lazy initialization also avoids repeated queries for the same records, a common cause of exceeding governor limits.

Java has not yet evolved to support class properties, but Apex does, and wrapper classes are an opportunity to use them. For the sake of brevity, properties are preferred over methods whenever possible. Microsoft's .NET guidance on choosing between properties and methods is very applicable to Apex.
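For example, a lazily initialized child collection on a hypothetical Account wrapper (names are illustrative; this is the common Apex lazy-load property idiom):

```apex
// Lazily initialized: the SOQL query runs at most once,
// no matter how many times the property is read.
public Map<Id, Contact> contacts {
    get {
        if (contacts == null) {
            contacts = new Map<Id, Contact>(
                [SELECT Id, LastName FROM Contact
                 WHERE AccountId = :record.Id]);
        }
        return contacts;
    }
    private set;
}
```

The private setter prevents other methods from replacing the collection, which limits side effects.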

For Developers doing a lot of client-side JavaScript development in the UI, the use of server-side Apex properties closely approximates the associative-array behavior of JavaScript objects and maintains a consistent coding style across code bases.

Unit Testing

Wrapper classes provide a clean interface for unit testing behaviors on objects. The Winter '14 release requires that unit tests be managed in a separate file from the wrapper class. Common convention is to always create a unit test file for each wrapper class with a suffix of 'Tests' in the class name.

Exception Handling

Without a clear exception handling strategy, it can be confusing for Developers to know how a class handles exceptions. Does it consistently bubble up, or catch all exceptions? There is no equivalent to the Java 'throws' keyword in Apex. To remedy this, the wrapper class template base provides a boolean 'success' flag that can be set by any method at any time.

When setting success=false, the exception handling code should also add an entry to the List<String> errors collection explaining what went wrong in the transaction. It is the responsibility of calling objects/methods to check success or hasErrors() after any transaction.
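A sketch of the pattern in use (the submit() method and its DML are illustrative):

```apex
public Order submit() {
    try {
        update record;  // illustrative DML that can fail
    } catch (DmlException e) {
        success = false;
        errors.add('Order submit failed: ' + e.getMessage());
    }
    return this;
}

// Caller's responsibility after any transaction:
// Order o = new Order().withId(someId).submit();
// if (!o.success || o.hasErrors()) {
//     // surface o.errors to the caller or UI
// }
```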

JSON Serialization

Wrapper classes can be serialized to JSON and returned to requesting clients for use in UI binding. The ToJSON() method is provided in the wrapper class template and can be customized to serialize the class.
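The template's ToJSON() body is not reproduced here; a minimal sketch could simply delegate to the built-in serializer:

```apex
// Minimal sketch: serialize the wrapped record. Customize by
// building a Map<String, Object> containing only the fields
// the requesting client should see.
public String toJSON() {
    return JSON.serialize(this.record);
}
```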

Note: Apex can now be called from workflows as Invocable Methods using the Process Builder.
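For example, a hypothetical invocable method (the OrderProcessor class and Status__c field are assumptions):

```apex
public class OrderProcessor {
    @InvocableMethod(label='Process Orders'
                     description='Callable from Process Builder')
    public static void processOrders(List<Id> orderIds) {
        // Illustrative: mark each order as processed.
        List<Order__c> orders = [SELECT Id, Status__c
                                 FROM Order__c
                                 WHERE Id IN :orderIds];
        for (Order__c o : orders) {
            o.Status__c = 'Processed';
        }
        update orders;
    }
}
```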

Here is a simple hack to call Apex classes from workflow rules.

Problem: Salesforce has a magnificently declarative environment for creating point-and-click applications and workflows, but one area that gets particularly gnarly is executing business rules in response to changes in state.

Given a problem like "When Opportunity stage equals 'Closed-Won', send the order to the back office system for processing", the Business Analyst has a good idea of "when" the business process should be executed. The Developer knows "how" the process should be executed.

The result is often the development of a trigger that includes both the "when" and "how" logic merged into a single class. The trigger ultimately ends up containing code to detect state changes; a task otherwise best left to workflow rule conditions.

Future enhancements to the business rules require the BA to submit a change request to the Developer, impairing the overall agility of the system.


The Solution: Calling Apex From Outbound Messages
Empower the System Administrator/BA to create workflow rules that call Apex classes in response to declarative conditions.

Create a workflow rule with an outbound message action that calls a message handler (hosted on Heroku), that in turn calls back to a Salesforce REST resource.

Components of the Outbound Message:

The endpoint URL is hosted on Heroku. The outbound message handler receives the message and issues a callback to Salesforce using the path provided after the root URL.

Pass the session ID to the endpoint (Note: the 'User to send as' must have permissions to call and invoke the Apex REST web service)

Pass only the Id of object meeting the workflow condition. This gets passed back to the REST service as an "oid" parameter (object id).

1) Create a workflow rule with an outbound message action that calls the Heroku-hosted endpoint.
2) Create a Salesforce REST resource for handling the callback.
3) To see the workflow in action, view the Heroku web service logs while updating records in Salesforce that trigger the workflow rule.

heroku logs --tail
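The callback-handling REST resource might be sketched as follows; the class name, URL mapping, and 'Processed' status field are assumptions, while the 'oid' parameter comes from the outbound message handler described above:

```apex
@RestResource(urlMapping='/workflow/order/*')
global with sharing class OrderWorkflowResource {

    @HttpPost
    global static void handleCallback() {
        // 'oid' is the object Id passed back by the handler.
        Id oid = RestContext.request.params.get('oid');
        Order__c order = [SELECT Id, Status__c
                          FROM Order__c
                          WHERE Id = :oid];

        // Defend against duplicate outbound messages (idempotence).
        if (order.Status__c != 'Processed') {
            // ... submit to the back-office system ...
            order.Status__c = 'Processed';
            update order;
        }
    }
}
```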

Errata:

IWorkflowTask:
In the real world, I'm evolving this design pattern to include an IWorkflowTask interface to clearly distinguish which business objects handle workflow actions. The execute() method takes a WorkflowContext object that includes more details from the outbound message.

Daisy Chaining Workflows:
It's important that workflow tasks record or modify some state after executing a task in order to allow for triggering follow-up workflow actions.
For example, an OrderProcessor workflow task might update an Order__c status field to "Processed". This allows System Administrators to create follow-up workflow rules/actions, such as sending out emails.

Security: Use HTTPS/SSL endpoints to ensure session information is not subject to man-in-the-middle attacks.

Idempotence: Salesforce does not guarantee that each outbound message will be sent only once (although it's mostly consistent with 1:1 messaging). REST resources should be developed to handle the rare instance where a message might be received twice. In the use case above, the code should be designed to defend against submitting the same order twice; possibly by checking a 'Processed' flag on a record before submitting to a back-office system.

Governor Limits:
Workflow tasks are called asynchronously, so there's a decent amount of processing and execution freedom using this pattern.

Technical Architects make many tough decisions on a daily basis, often with incomplete information. Colin Powell's 40-70 rule is helpful when facing such situations.

He says that every time you face a tough decision you should have no less than forty percent and no more than seventy percent of the information you need to make the decision. If you make a decision with less than forty percent of the information you need, then you're shooting from the hip and will make too many mistakes.

The second part of the decision-making rule is what surprises many leaders. They often think they need more than seventy percent of the information before they can make a decision. But in reality, if you get more than seventy percent of the information you need to make the decision, then the opportunity to add value has usually passed, or the competition has beaten you to the punch. And with today's agile development and continuous integration (CI) methodologies, you can afford to iterate on an architecture with incomplete information.

A key element that supports Powell's rule is the notion that intuition is what separates great Technical Architects from average ones. Intuition is what allows us to make tough decisions well, but many of us ignore our gut. We want certainty that we are making the right decision, but that's not possible. People who want certainty in their decisions end up missing opportunities, not leading.

Making decisions with only 40%-70% of the information requires responsibly communicating the technical architecture, plus how changes will be implemented as more information becomes available.

Architecture + Continuous Integration Process = Agility.

Architecture alone is not a sufficient solution and can leave a solution inflexible to change. "Release early and often" is the new mantra in cloud development.

The best way to manage risk as a TA with 40-70% of the information is to constantly ask yourself 2 questions:

1) What is the simplest possible solution to the current problem?
2) How will future changes be implemented?

1) Declarative configuration: First and foremost, it's the obligation of a TA to maximize the point-and-click configuration of any solution. This is done by using as many out-of-box features as possible.

2) Custom settings: When coding is required, externalizing the behaviors and conditional branches to custom settings gives System Admins and Business Analysts the ability to fine-tune a solution as more information becomes available. For example, rather than hardcoding a callout URL in a trigger, move the URL to a custom setting.

3) Hybrid / Web Tabs / Canvas: For ISVs and custom application development, an IFRAME wrapper to an app hosted on Heroku provides the greatest agility to pivot on a solution. Code changes can be pushed several times per day without having to go through the AppExchange package and review process. Matching the look and feel of Salesforce within a hybrid or canvas app can provide the best of both worlds: a native Salesforce business application with code managed on a separate platform.
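The custom-settings point above can be sketched as follows (the Integration_Settings__c hierarchy custom setting and its field are hypothetical names):

```apex
// Read the callout URL from a custom setting instead of
// hardcoding it in the trigger.
Integration_Settings__c settings = Integration_Settings__c.getOrgDefaults();
String calloutUrl = settings.Callout_URL__c;

// System Admins can now fine-tune the URL in Setup
// without a code deployment.
```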

Fortunately, the RESTDoc project has already defined a specification for documenting REST APIs. The intent of RESTDocs is for each endpoint to support the OPTIONS method and return a structured RESTDoc for the particular API endpoint.

Getting back to the Heroku-hosted sample, submitting a curl request to the 'hello' endpooint using the OPTIONS method returns a sample RESTDoc for that endpoint.

Standardizing on RESTDocs opens up a number of interesting possibilities. Since transitioning away from SOAP, the web development community has lost the ability to auto-generate object proxies for calling web services. Webhook and Enterprise Service Bus (ESB) platforms have lost the ability to auto-discover web service endpoints and their supported messages.

Self-describing REST APIs, using RESTDocs and the OPTIONS method, are a compelling solution for enabling the service oriented enterprise and integrating the cloud.

I frequently use the FizzBuzz interview question when interviewing Salesforce developer candidates.

The original FizzBuzz interview question goes something like this:

Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".

The output from the first 15 numbers should look like this:

1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz

It's a great question because the interviewer can evolve the requirements and take the discussion in many different directions.

A good interview is a lot like auditioning a drummer for a rock band. You want to start off with something easy, then "riff" on an idea to get a sense of the candidates listening skills and ability to create variations on a theme (aka refactoring).

Unfortunately, most interviews carry the intimidating premise of pass/fail, so the key to an effective interview is setting up the question so that the candidate understands it is okay to constantly change and revise their answers, and that the interview will evolve around a central concept: FizzBuzz.

The questions below gradually get harder by design, and at some point the candidate may not have an answer. That's okay. As an interviewer, you need to know:

a) How does the candidate respond when asked to do something they don't understand?
b) If we hired this person, what is the correct onboarding and mentoring plan for this candidate to help them be successful?

I'll drop hints during the question setup, using buzzwords like "TDD" (test-driven development), "unit testing", and "object oriented design", hoping the candidate might ask clarifying questions before jumping into code, like "Oh, you want to do TDD. Should I write the unit test first?"

So, on to the code. The fundamental logic for FizzBuzz requires a basic understanding of the modulo operator, which, in all fairness, is not a particularly valuable thing to know on a daily basis, but is often the minimum bar for meeting the "Computer Science or Related 4-Year Degree" requirement in many job descriptions, since it's universally taught in almost all academic curriculums.

After the first round, the basic logic for FizzBuzz should look something like this:
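In Apex, a minimal first-round version might look like this sketch (Apex lacks the % operator, so Math.mod stands in):

```apex
// Basic FizzBuzz: print 1..100 with the Fizz/Buzz substitutions.
for (Integer i = 1; i <= 100; i++) {
    if (Math.mod(i, 15) == 0) {
        System.debug('FizzBuzz');
    } else if (Math.mod(i, 3) == 0) {
        System.debug('Fizz');
    } else if (Math.mod(i, 5) == 0) {
        System.debug('Buzz');
    } else {
        System.debug(String.valueOf(i));
    }
}
```

Common first-round issues to probe include: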

Efficiency: is mod calculated twice for each value to meet the compound "FizzBuzz" requirement?

Using a 0-based loop index and printing numbers 0-99 instead of 1-100

Unclear code blocks or control flow (confusing use of parentheses or indenting)

Even if the candidate misses one of these points, they can usually get over the hurdle quickly with a bit of coaching.

"So, let's evolve this function into an Apex class."

For experienced Salesforce Developers, you can start gauging familiarity with Apex syntax, but be flexible. More experienced Developers/Architects will probably think faster in pseudocode, and Java Developers (if you're gauging potential to become a Force.com Developer) will want to use their own syntax.

The test runner will report 100% unit test coverage by virtue of executing the entire run() method within a testMethod. But is this really aligned with the true spirit and principle of unit testing? Not really.

A more precise follow-up question might be: "How would you Assert the expected output of FizzBuzz?"

In its current state, FizzBuzz is just emitting strings. Does the candidate attempt to parse and make assertions on the string output?

At this point, it's helpful to start thinking in terms of TDD, or Test-Driven Development, and attempt to write a unit test before writing code. One possible solution is the Extract Method refactoring: create methods for isFizz() and isBuzz(), then test to assert those methods are working correctly.
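A hedged sketch of the extraction, with tests kept in a separate class per the Winter '14 convention mentioned earlier (class names are illustrative):

```apex
public class FizzBuzz {
    public static Boolean isFizz(Integer i) {
        return Math.mod(i, 3) == 0;
    }
    public static Boolean isBuzz(Integer i) {
        return Math.mod(i, 5) == 0;
    }
}

// In a separate file:
@isTest
private class FizzBuzzTests {
    static testMethod void testIsFizz() {
        System.assert(FizzBuzz.isFizz(9));
        System.assert(!FizzBuzz.isFizz(10));
    }
    static testMethod void testIsBuzz() {
        System.assert(FizzBuzz.isBuzz(10));
        System.assert(!FizzBuzz.isBuzz(9));
    }
}
```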

This is a considerable improvement, but the test coverage is now only at 40%. The run() method is still leaving some technical debt behind to be refactored.

I may drop the candidate a hint about Model-View-Controller and ask how they might deconstruct this class into its constituent parts.

There are no DML or objects to access, so effectively there is no Model.

But the run() method is currently overloaded with FizzBuzz logic (controller) and printing the output (view). We can further extract the logic into a List of strings to be rendered in any form by the run() method.
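One way to sketch that extraction (method names are assumptions):

```apex
// Controller logic: pure, unit-testable, returns data.
public static List<String> getValues() {
    List<String> values = new List<String>();
    for (Integer i = 1; i <= 100; i++) {
        if (isFizz(i) && isBuzz(i)) {
            values.add('FizzBuzz');
        } else if (isFizz(i)) {
            values.add('Fizz');
        } else if (isBuzz(i)) {
            values.add('Buzz');
        } else {
            values.add(String.valueOf(i));
        }
    }
    return values;
}

// View logic: run() only renders the values.
public static void run() {
    for (String s : getValues()) {
        System.debug(s);
    }
}
```

A unit test can now assert on getValues() directly, e.g. that element 14 equals 'FizzBuzz'.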

Test coverage is now at 90% after extracting the run() print logic into a unit testable method that returns a list. The last 10% can be easily covered by calling run() anywhere inside a testMethod.

If there's time remaining in the interview, a good enhancement is to add dynamic ranges. Instead of printing 1-100, modify the class to support any range of numbers. Basically, this is just testing the candidate's ability to manage class constructor arguments.

I will usually follow-up this question with questions about boundary checking and programmatic validation rules.

"Should FizzBuzz be allowed to accept negative numbers?"

"Should the ceiling value always be greater than the floor?"

If yes to either of these, then how would the candidate implement validation rules and boundary checks? This very quickly gets into writing more methods and more unit tests, but mirrors the reality of day-to-day Force.com development.

Once variables get introduced at class scope, then this is a good opportunity to have discussions about side-effects and immutability.

"What happens 6 months later when another Developer comes along and tries to modify the ceiling or floor variables in new methods?"

"How can you prevent this from happening?"

"What are the benefits of initializing variables only once and declaring them 'final'?"

An experienced Developer will likely have a grasp of functional programming techniques and the long-term benefits of minimizing side-effects and keeping classes immutable.

And finally, these unit tests are all written inline. How would the candidate separate tests from production classes?

The announcement that Facebook intended to acquire Instagram made headlines. While the focus of these headlines primarily centered around the $1B valuation and 50 Million user audience, the true story went largely untold.

Instagram's story marks an inflection point in computing history whereby a start-up embracing cloud infrastructure is no longer the exception, but now considered the rule for building a successful company.

The costs for a v1 product were extraordinarily low and gave them an opportunity to pivot at any point without being grounded by capitalized infrastructure investments (i.e. building their own colo servers and doing their own hosting).

At the time of Instagram's acquisition they had added only 11 more employees, mostly hired in the few months leading up to their acquisition.

Some tips and lessons learned for other start-ups considering embracing the cloud as their development and hosting platform:

Choose a platform and master it: Instagram selected Amazon Web Services, but there are many options available. Don't hedge your bets by designing for platform portability. Dig deep into the capabilities of your chosen platform and exploit them.

Think Ephemeral: Your cloud web servers and storage can disappear at any point. Anticipate and design for this fact. If you have a client server background, then take a step back and grasp the concepts of ephemeral storage and computing before applying old world concepts to the cloud.

Share Knowledge: This is still a new frontier and we're all constantly learning new tips and tricks about how to best utilize cloud computing. Many people at Facebook (including myself) were fans of the Instagram Engineering blog long before the acquisition. Share what you've learned and others will reciprocate.

Build for the long term: The Instagram team did not build a company to be sold. They built a company that could have continued to grow indefinitely, and perhaps even overtaken competing services. There are so many legacy business processes and consumer applications whose growth is restricted by their architecture. Embrace the cloud with the intent to build something enduring and everlasting.

Integrating CRM with ERP/Financial systems can be a challenge. Particularly if the systems are from 2 different vendors, which is often the case when using Salesforce.com CRM.

At Facebook, we've gone through several iterations of integrating Salesforce with Oracle Financials and the team has arrived at a fairly stable and reliable integration process (kudos to Kumar, Suresh, Gopal, Trevor, and Sunil for making this all work).

Here is the basic flow (see diagram below):

1) The point at which Salesforce CRM needs to pass information to Oracle is typically once an Opportunity has been closed/won and an order or contract has been signed.

2) Salesforce is configured to send an outbound message containing the Opportunity ID to an enterprise service bus (ESB) that is configured to listen for specific SOAP messages from Salesforce.

3) The ESB accepts the outbound message (now technically an inbound message on the receiver side) and asserts any needed security policies, such as whitelist trusting the source of the message.

4) This is the interesting part. Because the Salesforce outbound message wizard only allows for the exporting of fields on a single object, the ESB must call back to retrieve additional information about the order; such as the Opportunity line items, Account, and Contacts associated with the Order.

Dreamforce 11 is just around the corner and fellow Facebook Engineer Mike Fullmore and myself have been invited to speak at the following panel:

Enterprise Engineering
Friday, September 2
10:00 a.m. - 11:00 a.m.
Can you really develop at 5x a regular speed when you're at enterprise scale? In this session, a panel of enterprise technical engineers will discuss engineering best practices for the Sales Cloud, Service Cloud, Chatter and Force.com. Topics include security, sandbox, integration, Apex, and release management.

In case you're not able to attend, here are the high level points from our presentation.

Moving Fast on Force.com

Facebook has been using Salesforce for several months to rapidly prototype, build, and deploy a number of line of business applications to meet the needs of a hyper-growth organization. This presentation shares some best practices that have evolved at Facebook to help develop on the Force.com platform.

People

Before sharing details about Facebook's processes, methodologies, and tools; it's important to point out that the people on the enterprise engineering team are what really make things happen.
Each Engineer is able to work autonomously and carry a project through from design to deployment. Great people instinctively take great pride in their work and consistently take the initiative to deliver awesomeness. I would be remiss not to point them out here. All these Engineers operate at MVP levels.

The effort that goes into recruiting a great development team should not be underestimated. Recruiting an awesome team involves several people doing hundreds of phone screens and dozens of interviews.
Facebook is in a unique situation in its history and we don't take it for granted that we have access to unprecedented resources and talent. It's actually very humbling to work with such a stellar team at such a great company.

Business Processes

Projects and applications generally fall into one of 9 major process buckets. Engineers at Facebook seeking to have a high impact will typically either have a breadth or depth of knowledge. Some focus on the long-term intricate details and workflows of a single business process while others are able to move around and generally lead several, concurrent, short-term development efforts in any business area.

Sandbox->Staging->Deploy

Each project has its own development sandbox. Additionally, each Engineer may also have their own personal sandbox. When code is ready to be deployed, it's packaged using the Ant migration tool format and typically tested in 2 sandboxes: 1 daily refreshed staging org to ensure all unit tests will run and there are no metadata conflicts, and a full sandbox deploy to give business managers an opportunity to test using real-world data.

Change sets are rarely used, but may be the best option for first time deployments of large applications that have too many metadata dependencies to reliably be identified by hand.

The online security scanner is used as a resource during deployment to identify any potential security issues. A spreadsheet is used for time-series analysis of scanner results to understand code quality trends.
Once a package has been reviewed, tested, and approved for deployment; a release Engineer deploys the package to production using Ant. This entire process is designed to support daily deployments. There are typically 3-5 incremental change deployments per week.

Obligatory Chaotic Process Diagram

"Agile" and "process" are 2 words that are not very complementary. Agile teams must find an equilibrium of moving fast yet maintaining high-quality code. Facebook trusts every Engineer to make the right decisions when pushing changes. When things go wrong, we conduct a post-mortem or retrospective against an "ideal" process to identify what trade-offs were made, why, and where improvements can be made.

All Engineers go through a 6 week orientation "bootcamp" to understand the various processes.

Typical Scrum Development Process

The development "lingua franca" within Silicon Valley, and for most Salesforce service providers, tends to be Scrum. Consultants and contractors provide statements of work and deliver progress reports around "sprints". Scrum training is readily available by a number of agile shops.

This industry standard has been adopted internally and keeps multiple projects and people in sync. Mike Fullmore developed a Force.com app named "Scrumbook" for cataloguing projects, sprints, and stories.

A basic Force.com project template with key milestones has been created to give Project Managers an idea of when certain activities should take place. Whenever possible we prefer to avoid a "waterfall" or "big bang" mentality; preferring to launch with minimal functionality, validate assumptions with end-users, then build on the app in subsequent sprints.

Manage The Meta(data)

The general line of demarcation within IT at Facebook is:

Admins own the data

Engineers own the metadata

The Salesforce Metadata API is a tremendously powerful resource for scaling an enterprise, yet remaining highly leveraged and lean. We've developed custom metadata tools to help us conduct security audits and compare snapshot changes.

(Credit to our Summer Intern, Austin Wang, who initiated the development of internal tools!)

Change Management

The advantage to using Salesforce is the ability to use declarative configuration and development techniques to produce functional applications, then use more powerful Apex and Visualforce tools to maximize apps around business core competencies. "Clicks over code" is a common mantra in Salesforce shops, and Facebook is no exception.

A change management matrix is a useful tool for determining when "clicks-over-code" is preferred over a more rigorous change management process.