Instanceof for Apex Date and DateTime

Just spent some time chasing a unit test error where the date returned in some code was a day out. The code being tested was:

SObject sob = ...
Object o = sob.get(f);
Date d;
if (o instanceof DateTime) {
    d = ((DateTime) o).date();
} else if (o instanceof Date) {
    d = (Date) o;
}

In the end this unit test demonstrated the cause of the problem:

@IsTest
private class InstanceofTest {
    @IsTest
    static void date() {
        Object o = Date.today();
        System.assertEquals(true, o instanceof Date);
        System.assertEquals(true, o instanceof DateTime);
    }
    @IsTest
    static void dateTime() {
        Object o = DateTime.now();
        System.assertEquals(false, o instanceof Date);
        System.assertEquals(true, o instanceof DateTime);
    }
}

So a Date is a DateTime (as well as a Date) and that was pushing the code through some unwanted timezone offsetting.

A fix is:

SObject sob = ...
Object o = sob.get(f);
Date d;
if (o instanceof Date) {
    d = (Date) o;
} else if (o instanceof DateTime) {
    d = ((DateTime) o).date();
}

and a lesson learned is to be paranoid about the type system in Apex.


Favour the Typesafe Enum pattern in Apex

A frequent need is to have behaviour that varies according to a value such as a picklist String value. The simplest approach is to pass that value around as a String and to compare it against inline String constants or static final Strings declared somewhere. But weaknesses of this approach are:

  • Not typesafe – other values can be accidentally introduced
  • Not self-documenting – using the type String says nothing about the domain/purpose
  • Any related attributes or logic have to be added inline or in a separate class using if/elseif/else chains

(Java had the enum types language feature added in version 5 to address these problems; Apex’s enum language feature is basic in comparison.)
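To illustrate the gap, here is what a native Apex enum offers (StatusEnum is a hypothetical name): just a fixed list of identifiers with name() and ordinal() methods, and no way to attach per-value data or behaviour:

// Native Apex enum: named constants only
public enum StatusEnum { PENDING, OPEN, CLOSED }

// StatusEnum.OPEN.name() and StatusEnum.OPEN.ordinal() are available,
// but there is no way to attach e.g. a 'days' attribute to each value
// the way Java 5+ enums allow.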

A pattern that addresses these problems and was often recommended in Java before enum types were added is the “typesafe enum” pattern. Here is an Apex example:

public class Status {

    private static final Map<String, Status> STATUSES = new Map<String, Status>();
    
    public static final Status PENDING = new Status('Pending', 5);
    public static final Status OPEN = new Status('Open', 30);
    public static final Status CLOSED = new Status('Closed', 60);
    
    public static Status valueOf(String name) {
        return STATUSES.get(name);
    }
    
    public static Status[] values() {
        return STATUSES.values();
    }
    
    public String name {get; private set;}
    public Integer days {get; private set;}
    
    private Status(String name, Integer days) {
        this.name = name;
        this.days = days;
        STATUSES.put(name, this);
    }
    
    public override String toString() {
        return name;
    }
     
    public Boolean equals(Object o) {
        if (o instanceof Status) {
            Status that = ((Status) o);
            return this.name == that.name;
        }
        return false;
    }
    
    public Integer hashCode() {
        return name.hashCode();
    }
}

Code using it looks like this:

    private void method1() {
        ...
        method2('xyz', Status.OPEN);
        ...
    }
    
    private void method2(String v, Status s) {
        ...
        Date d = Date.today().addDays(s.days);
        String message = 'Status is ' + s;
        ...
    }

To convert a picklist value (or other String) to an instance of this object:

Status status = Status.valueOf(picklistValue);

To iterate over all the values:

for (Status status : Status.values()) {
    ...
}

Extra methods can be added and so can extra data values.

Note that the equals/hashCode methods are only needed if references are serialized and deserialized (deserialization creates new instances, breaking reference equality) or if instances are used as Map keys or Set elements. Examples of where serialization can happen are Visualforce’s view state or when Database.Stateful is used in a Batchable.

PS

This approach can be used for run-time loaded values (e.g. using describe calls on picklist fields or loading from JSON) though that of course means not having constants for each value. If that is done I strongly recommend lazy loading to avoid filling the debug log with entries for cases where the full set of values are not needed.
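As a minimal sketch of that lazy loading, assuming a hypothetical Claim__c SObject with a Status__c picklist field:

public class RuntimeStatus {

    private static Map<String, RuntimeStatus> statuses;

    public static RuntimeStatus valueOf(String name) {
        return load().get(name);
    }

    public static RuntimeStatus[] values() {
        return load().values();
    }

    // Lazily populated on first use so code paths that never need
    // these values pay no describe-call or debug log cost
    private static Map<String, RuntimeStatus> load() {
        if (statuses == null) {
            statuses = new Map<String, RuntimeStatus>();
            for (Schema.PicklistEntry pe : Claim__c.Status__c.getDescribe().getPicklistValues()) {
                statuses.put(pe.getValue(), new RuntimeStatus(pe.getValue()));
            }
        }
        return statuses;
    }

    public String name {get; private set;}

    private RuntimeStatus(String name) {
        this.name = name;
    }
}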

Trigger handler induced design damage

There is a great series of discussions including videos on the subject of the benefits and costs of test driven development. Part of that discussion is the notion of Test-induced design damage where pursuing one principle – testability – to the exclusion of other principles results in damage to the design.

An area in Salesforce where I see risk of damage is the pursuit of this:

Another widely-recognized best practice is to make your Triggers logic-less. That means, the role of the Trigger is just to delegate the logic responsibilities to some other handler class.

quoted from Trigger Frameworks and Apex Trigger Best Practices and often paraphrased. (The term “best practice” is sometimes used to give an idea unmerited gravitas.)

There can be benefits in using separate classes, typically where the logic being implemented has enough complexity to require multiple methods and static constants. Or if a choice has been made to use a carefully selected trigger framework (and not an unproven local invention). And a mixed model – where some code is in the trigger and other parts are moved to classes as needed – also works.

But there are costs:

  • A reduction in cohesion: the logic has moved to a class that is separate from the execution context that includes a number of subtleties as documented in Trigger Context Variables.
  • Over time as more logic is added, the chances of queries being repeated in separate handler class methods or separate handler classes goes up.
  • Some naming challenges: what should the handler class, methods and method parameters be called? And the clutter of more top level classes in an org.
  • Should the test relate to the trigger (the specific context the logic has to run in) or the handler class (with no assumed context)?
  • A risk of failure to bulkify. A handler class that has methods that accept single values rather than collections is an obvious red flag. But with the trigger context not so directly in view when writing the handler class and logic running several method calls down, bulkification can be missed.

Code in a trigger is just like any other code and should be written to Do The Simplest Thing That Could Possibly Work. If the problem being solved can be solved cleanly by keeping the code in the trigger then do that. If not, introduce a class to solve the part of the problem that needs a class. Be prepared to refactor when logic is added later.

The Apex Code Best Practices documentation says “Bulkify your Helper Methods” – if you have any. Those helper methods and classes are not assumed.

Don’t damage your code base by introducing a pattern where the benefits are outweighed by the costs.

Three books all Salesforce developers should read

I’ve recently been writing some Salesforce coding guidelines for new hires that say explicitly what is implicit in our code base. A difficulty is deciding where to stop, so in the end I thought the best thing to do was to also include an as short as possible reading list.

Here is what I chose:

  • Clean Code by Robert C. Martin
    How to write better code. A great book about software craftsmanship whatever language and stack you use.
  • Advanced Apex Programming by Dan Appleman
    The unique features of Salesforce and patterns to address the hard parts. With its help you will be able to write Apex code that works all the time not most of the time.
  • Effective JavaScript by David Herman
    JavaScript is becoming increasingly important. This is a book that will give you real insight into how the language works and the typical patterns you need to use the language well. (I’ve read several modern JavaScript books and I found this by far the best.)

What books would you choose?

Referencing one controller extension from another controller extension

A controller extension can add functionality to a standard controller. The controller extension coding pattern means an extension gets a reference to the standard controller and so can use its methods:

public with sharing class Ext1 {
    public Ext1(ApexPages.StandardController sc) {
        ...
    }
}

But what if you have a page with two extensions:

<apex:page standardController="Contact" extensions="Ext1, Ext2">
    ...
</apex:page>

and you want to reference the methods of one extension from the other extension? No platform API is provided for that.

Here is some code that allows such cross-referencing. When the extensions are constructed, they are added (registered) into a singleton (static) instance. But a reference to that singleton is also made a field in each extension ensuring that the extension references are made part of the view state. So the cross-references are preserved across form posts etc.

The extensions look like this:

public with sharing class Ext1 {
    private Registry r;
    public Ext1(ApexPages.StandardController sc) {
        r = Registry.instance();
        r.add(Ext1.class, this);
        ...
    }
    private Ext2 getExt2() {
        return (Ext2) r.get(Ext2.class);
    }
}
public with sharing class Ext2 {
    private Registry r;
    public Ext2(ApexPages.StandardController sc) {
        r = Registry.instance();
        r.add(Ext2.class, this);
        ...
    }
    private Ext1 getExt1() {
        return (Ext1) r.get(Ext1.class);
    }
}

and the singleton is:

public class Registry {
    private static Registry instance;
    private Map<Type, Object> m = new Map<Type, Object>();
    // Set a view state field to this
    public static Registry instance() {
        if (instance == null) instance = new Registry();
        return instance;
    }
    // Singleton
    private Registry() {
    }
    public void add(Type key, Object value) {
        m.put(key, value);
    }
    public Object get(Type key) {
        return m.get(key);
    }
}

Picklist values by record type for AngularJS UI

A convenient Apex API to get the picklist value sub-set per record type has yet to appear. Visualforce does the sub-setting, but that is no help if you are building UI in some other technology such as AngularJS and want the select lists to reflect the sub-setting.

Here is a work-around for that situation, where the aim is to provide JSON to the AngularJS client-side code via a static resource. An API that provides access to both record types and static resources is the Metadata API. Thanks to Andrew Fawcett’s work (see financialforcedev/apex-mdapi) and improvements in the underlying Salesforce API, this API is now quite simple to call from Apex code.

In the code below, the readPicklists method reads the record type information for 5 record types of a custom SObject called cve__BenefitClaimed__c. (This SObject and its record types are in a managed package that has the namespace prefix cve. Note also that it is the developer name of the record type that is used.) The picklist value data is extracted and placed in nested maps where the first key is the SObject type name, the second key the record type (developer) name and the third key the field name with the value a list of valid picklist values. The updateStaticResource method updates a pre-existing static resource (in a managed package that has the namespace prefix cveep) with the JSON string version of the nested maps.

I run this from a Visualforce admin page. If the picklist assignments are changed, the code is manually re-run.

The result is that the AngularJS code can use its $http service to get the static resource. The data is pre-created and so available quickly, and is also already in JSON format so is easy for the AngularJS code to consume.

Here is the code; sorry it’s rather wide…

public PageReference updatePicklists() {
    
    final String[] recordTypeFullNames = new String[] {
            'cve__BenefitClaimed__c.cve__Accident',
            'cve__BenefitClaimed__c.cve__LongTermCare',
            'cve__BenefitClaimed__c.cve__LongTermDisability',
            'cve__BenefitClaimed__c.cve__ShortTermDisability',
            'cve__BenefitClaimed__c.cve__Survivor'
            };
    
    final String staticResourceFullName = 'cveep__RecordTypePicklistValues';
            
    MetadataService.MetadataPort service = new MetadataService.MetadataPort();
    service.SessionHeader = new MetadataService.SessionHeader_element();
    service.SessionHeader.sessionId = UserInfo.getSessionId();
    
    String jsonString = readPicklists(service, recordTypeFullNames);
    updateStaticResource(service, staticResourceFullName, jsonString);
    
    return null;
}

private String readPicklists(MetadataService.MetadataPort service, String[] recordTypeFullNames) {
    
    Map<String, Map<String, Map<String, List<String>>>> sobMap = new Map<String, Map<String, Map<String, List<String>>>>();
    for (MetadataService.RecordType rt : (MetadataService.RecordType[]) service.readMetadata('RecordType', recordTypeFullNames).getRecords()) {
        if (rt.fullName != null && rt.picklistValues != null) {
            String[] parts = rt.fullName.split('\\.');
            String sobjectType = parts[0];
            String recordType = parts[1];
            Map<String, Map<String, List<String>>> rtMap = sobMap.get(sobjectType);
            if (rtMap == null) {
                rtMap = new Map<String, Map<String, List<String>>>();
                sobMap.put(sobjectType, rtMap);
            }
            Map<String, List<String>> fieldMap = rtMap.get(recordType);
            if (fieldMap == null) {
                fieldMap = new Map<String, List<String>>();
                rtMap.put(recordType, fieldMap);
            }
            for (MetadataService.RecordTypePicklistValue picklist : rt.picklistValues) {
                if (picklist.values != null) {
                    List<String> valueList = fieldMap.get(picklist.picklist);
                    if (valueList == null) {
                        valueList = new List<String>();
                        fieldMap.put(picklist.picklist, valueList);
                    }
                    for (MetadataService.PicklistValue value : picklist.values) {
                        valueList.add(value.fullName);
                    }
                }
            }
        }
    }
    
    return JSON.serialize(sobMap);
}

private void updateStaticResource(MetadataService.MetadataPort service, String staticResourceFullName, String jsonString) {
    
    MetadataService.StaticResource sr = new MetadataService.StaticResource();
    sr.fullName = staticResourceFullName;
    sr.contentType = 'text/json';
    sr.cacheControl = 'public';
    sr.content = EncodingUtil.base64Encode(Blob.valueOf(jsonString));
    
    MetadataService.SaveResult[] results = service.updateMetadata(new MetadataService.StaticResource[] {sr});
    for (MetadataService.SaveResult r : results) {
        if (!r.success) {
            String[] errors = new String[] {};
            if (r.errors != null) {
                for (MetadataService.Error e : r.errors) {
                    errors.add('message=' + e.message + ' statusCode=' + e.statusCode + ' fields=' + e.fields);
                }
            }
            throw new EndUserMessageException('Error: ' + String.join(errors, '; '));
        }
    }
}

PS A maximum of 10 record types can be read at once so use multiple calls if you require more than 10.
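A sketch of that batching, splitting the full names into readMetadata calls of at most 10 (readRecordTypes is a hypothetical helper name):

private MetadataService.RecordType[] readRecordTypes(MetadataService.MetadataPort service, String[] fullNames) {
    MetadataService.RecordType[] recordTypes = new MetadataService.RecordType[] {};
    String[] batch = new String[] {};
    for (Integer i = 0; i < fullNames.size(); i++) {
        batch.add(fullNames[i]);
        // Flush every 10 names or at the end of the list
        if (batch.size() == 10 || i == fullNames.size() - 1) {
            recordTypes.addAll((MetadataService.RecordType[])
                    service.readMetadata('RecordType', batch).getRecords());
            batch.clear();
        }
    }
    return recordTypes;
}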

Fixing a common cause of System.LimitException: Apex CPU time limit exceeded

When developing code, automated unit tests and interactive testing naturally tend to use small numbers of objects. As the number of objects increases, the execution time has to increase in proportion – linearly. But it is all too easy to introduce code where the execution time grows as the square of the number of objects or the cube of the number of objects. (See e.g. Time complexity for some background.) For example, even with 100 objects, code that takes 100ms for the linear case, takes 10s for the squared case and 1,000s for the cubed case. So for the squared or cubed cases

System.LimitException: Apex CPU time limit exceeded

exceptions can result with even modest numbers of objects.

Is this governor limit a good thing? On the positive side, it forces a bad algorithm to be replaced by a better one. But on the negative side, your customer is stuck unable to work until you can deliver a fix to them. Some sort of alarm and forgiveness from the platform for a while would be more helpful…

Here is a blatant example of the problem (assuming all the collections include large numbers of objects):

// Linear - OK
for (Parent__c p : parents) {
    // Squared - problem
    for (Child__c c : children) {
        // Cubed - big problem
        for (GrandChild__c gc : grandChildren) {
            // ...
        }
    }
}

So review all nested loops carefully. Sometimes the problem is hidden, with one loop in one method or class and another loop in another method or class, and so harder to find.

Often the purpose of the loops is just to find the object or objects in one collection that match an object in another collection. There are two contexts that require two different approaches (though in more complicated cases the approaches can be combined) to fix the problem, or better, to avoid it in the first place:

  • In e.g. a controller where the query is explicit, a parent and child can be queried together with direct relationship references available using a relationship query.
  • In a trigger, no __r collections are populated, so maps have to be used. A map allows a value to be looked up without the cost of going through every entry in a list.

Here is how to fix the problem for the two contexts in parent-child relationships:

// Explicit SOQL context
for (Parent__c p : [
        select Name, (select Name from Childs__r)
        from Parent__c
        ]) {
    // Loop is over the small number of related Child__c not all of the Child__c
    for (Child__c c : p.Childs__r) {
        // ...
    }
}


// Trigger context
Map<Id, List<Child__c>> children = new Map<Id, List<Child__c>>();
for (Child__c c : [
        select Parent__c, Name
        from Child__c
        where Parent__c in Trigger.newMap.keySet()
        ]) {
    List<Child__c> l = children.get(c.Parent__c);
    if (l == null) {
        l = new List<Child__c>();
        children.put(c.Parent__c, l);
    }
    l.add(c);
}
for (Parent__c p : Trigger.new) {
    // Loop is over the small number of related Child__c not all of the Child__c
    if (children.containsKey(p.Id)) {
        for (Child__c c : children.get(p.Id)) {
            // ...
        }
    }
}

And for the two contexts in child-parent relationships:

// Explicit SOQL context
for (Child__c c : [
        select Name, Parent__r.Name
        from Child__c
        ]) {
    // The one parent
    Parent__c p = c.Parent__r;
    if (p != null) {
        // ...
    }
}


// Trigger context
Set<Id> parentIds = new Set<Id>();
for (Child__c c : Trigger.new) {
    if (c.Parent__c != null) {
        parentIds.add(c.Parent__c);
    }
}
if (parentIds.size() > 0) {
    Map<Id, Parent__c> parents = new Map<Id, Parent__c>([
            select Id, Name
            from Parent__c
            where Id in :parentIds
            ]);
    for (Child__c c : Trigger.new) {
        // The one parent
        if (c.Parent__c != null) {
            Parent__c p = parents.get(c.Parent__c);
            if (p != null) {
                // ...
            }
        }
    }
}

This fixed code will operate in close to linear time.