Referencing one controller extension from another controller extension

A controller extension can add functionality to a standard controller. The controller extension coding pattern means an extension gets a reference to the standard controller and so can use its methods:

public with sharing class Ext1 {
    public Ext1(ApexPages.StandardController sc) {
        ...
    }
}

But what if you have a page with two extensions:

<apex:page standardController="Contact" extensions="Ext1, Ext2">
    ...
</apex:page>

and you want to reference the methods of one extension from the other extension? No platform API is provided for that.

Here is some code that allows such cross-referencing. When the extensions are constructed, they register themselves in a singleton (static) instance. A reference to that singleton is also held in a field in each extension, ensuring that the extension references become part of the view state, so the cross-references are preserved across form posts and other round trips.

The extensions look like this:

public with sharing class Ext1 {
    private Registry r;
    public Ext1(ApexPages.StandardController sc) {
        r = Registry.instance();
        r.add(Ext1.class, this);
        ...
    }
    private Ext2 getExt2() {
        return (Ext2) r.get(Ext2.class);
    }
}
public with sharing class Ext2 {
    private Registry r;
    public Ext2(ApexPages.StandardController sc) {
        r = Registry.instance();
        r.add(Ext2.class, this);
        ...
    }
    private Ext1 getExt1() {
        return (Ext1) r.get(Ext1.class);
    }
}

and the singleton is:

public class Registry {
    private static Registry instance;
    private Map<Type, Object> m = new Map<Type, Object>();
    // Hold the returned instance in a field so it becomes part of the view state
    public static Registry instance() {
        if (instance == null) instance = new Registry();
        return instance;
    }
    // Singleton
    private Registry() {
    }
    public void add(Type key, Object value) {
        m.put(key, value);
    }
    public Object get(Type key) {
        return m.get(key);
    }
}
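
Once both extensions are registered, either one can call methods on the other. For example, an action method in Ext1 might delegate to Ext2; a minimal sketch (refresh is a hypothetical public method on Ext2, shown only to illustrate the wiring):

// In Ext1
public PageReference save() {
    // The cross-reference works across form posts too, because the
    // Registry instance held in the field r is part of the view state
    Ext2 ext2 = getExt2();
    if (ext2 != null) {
        ext2.refresh(); // hypothetical method
    }
    return null;
}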

My week with Heroku Connect

Heroku Connect replicates and synchronises data between a Salesforce org and a Heroku Postgres database. So you can build an app in Heroku using one of seven technologies (including Node.js), hook that app up to the Postgres database, and then your app users will be able to see and modify the same data that the Salesforce users see and modify. Bear in mind that this mechanism allows you to share data: it does not allow you to share business logic.

As this solution is built in/on Heroku, it is incredibly easy to set up and get working; I had the Demo Edition set up and the first couple of SObjects working inside an hour. The mapping UI involves a lot of checkbox clicking, but once done the configuration can be exported and imported. The UI provides overview and drill-down on what is going on. One minor problem I had was quickly and efficiently sorted out by Heroku support. All good.

The clean and simple scenario is where the Postgres data is largely handled as read-only. Then (besides whatever technologies you are writing the Heroku app in) the main thing to get used to is that you are writing SQL not SOQL. Also there are two identifier values: the SFID, which is the 18-character (case-insensitive) Salesforce ID we are used to, and a local Postgres integer ID. Foreign key fields replicated from Salesforce reference the SFID, not the local ID.

Things get more awkward when you want to insert or update the Postgres data. Heroku Connect handles the synchronisation in that direction too, but some aspects of the implementation leak out:

  • The SFID is not available in the transaction where an insert is done (as it is allocated in Salesforce). In my very limited tests, it was available about a second later.
  • Fields such as CreatedDate, and any fields populated by Salesforce logic such as insert or update triggers, remain empty until the next synchronisation; in the basic polling mode that can be up to 10 minutes later. So users who insert data may see blank fields in the UI that only become populated when the UI is refreshed after the next synchronisation.

Inserting objects into Postgres that have parent/child relationships is awkward. For master/detail, the required pattern is documented in Inserting records with master/detail relationships using external IDs. It requires an “External ID” field to be added to the Salesforce objects (so is a little intrusive on that side), and the Heroku code needs to populate that field and (an automatically added) foreign key with a matching unique value. This then allows Salesforce to create the same master/detail relationship using the normal Salesforce identifier. I don’t know whether the Salesforce foreign key is pushed back into Postgres almost immediately, as the SFID is, but I hope so. There is no mention of how to accomplish this for lookup relationships.

I must mention the pleasure and productivity of writing the full application stack in one language, JavaScript: AngularJS for the client and Node.js/Express/pg-promise/Passport for the server. All quick to get running for existing Salesforce data thanks to Heroku and Heroku Connect.

Picklist values by record type for AngularJS UI

A convenient Apex API to get the picklist value sub-set per record type has yet to appear. Visualforce does the sub-setting, but that is no help if you are building UI in some other technology such as AngularJS and want the select lists to reflect the sub-setting.

Here is a work-around for that situation, where the aim is to provide JSON to the AngularJS client-side code via a static resource. An API that provides access to both record types and static resources is the Metadata API. Thanks to Andrew Fawcett’s work (see financialforcedev/apex-mdapi) and improvements in the underlying Salesforce API, this API is now quite simple to call from Apex code.

In the code below, the readPicklists method reads the record type information for 5 record types of a custom SObject called cve__BenefitClaimed__c. (This SObject and its record types are in a managed package that has the namespace prefix cve. Note also that it is the developer name of the record type that is used.) The picklist value data is extracted and placed in nested maps where the first key is the SObject type name, the second key the record type (developer) name and the third key the field name with the value a list of valid picklist values. The updateStaticResource method updates a pre-existing static resource (in a managed package that has the namespace prefix cveep) with the JSON string version of the nested maps.

I run this from a Visualforce admin page. If the picklist assignments are changed, the code is manually re-run.

The result is that the AngularJS code can use its $http service to get the static resource. The data is pre-created and so available quickly, and it is already in JSON format, so it is easy for the AngularJS code to consume.

Here is the code; sorry it’s rather wide…

public PageReference updatePicklists() {
    
    final String[] recordTypeFullNames = new String[] {
            'cve__BenefitClaimed__c.cve__Accident',
            'cve__BenefitClaimed__c.cve__LongTermCare',
            'cve__BenefitClaimed__c.cve__LongTermDisability',
            'cve__BenefitClaimed__c.cve__ShortTermDisability',
            'cve__BenefitClaimed__c.cve__Survivor'
            };
    
    final String staticResourceFullName = 'cveep__RecordTypePicklistValues';
            
    MetadataService.MetadataPort service = new MetadataService.MetadataPort();
    service.SessionHeader = new MetadataService.SessionHeader_element();
    service.SessionHeader.sessionId = UserInfo.getSessionId();
    
    String jsonString = readPicklists(service, recordTypeFullNames);
    updateStaticResource(service, staticResourceFullName, jsonString);
    
    return null;
}

private String readPicklists(MetadataService.MetadataPort service, String[] recordTypeFullNames) {
    
    Map<String, Map<String, Map<String, List<String>>>> sobMap = new Map<String, Map<String, Map<String, List<String>>>>();
    for (MetadataService.RecordType rt : (MetadataService.RecordType[]) service.readMetadata('RecordType', recordTypeFullNames).getRecords()) {
        if (rt.fullName != null && rt.picklistValues != null) {
            String[] parts = rt.fullName.split('\\.');
            String sobjectType = parts[0];
            String recordType = parts[1];
            Map<String, Map<String, List<String>>> rtMap = sobMap.get(sobjectType);
            if (rtMap == null) {
                rtMap = new Map<String, Map<String, List<String>>>();
                sobMap.put(sobjectType, rtMap);
            }
            Map<String, List<String>> fieldMap = rtMap.get(recordType);
            if (fieldMap == null) {
                fieldMap = new Map<String, List<String>>();
                rtMap.put(recordType, fieldMap);
            }
            for (MetadataService.RecordTypePicklistValue picklist : rt.picklistValues) {
                if (picklist.values != null) {
                    List<String> valueList = fieldMap.get(picklist.picklist);
                    if (valueList == null) {
                        valueList = new List<String>();
                        fieldMap.put(picklist.picklist, valueList);
                    }
                    for (MetadataService.PicklistValue value : picklist.values) {
                        valueList.add(value.fullName);
                    }
                }
            }
        }
    }
    
    return JSON.serialize(sobMap);
}

private void updateStaticResource(MetadataService.MetadataPort service, String staticResourceFullName, String jsonString) {
    
    MetadataService.StaticResource sr = new MetadataService.StaticResource();
    sr.fullName = staticResourceFullName;
    sr.contentType = 'text/json';
    sr.cacheControl = 'public';
    sr.content = EncodingUtil.base64Encode(Blob.valueOf(jsonString));
    
    MetadataService.SaveResult[] results = service.updateMetadata(new MetadataService.StaticResource[] {sr});
    for (MetadataService.SaveResult r : results) {
        if (!r.success) {
            String[] errors = new String[] {};
            if (r.errors != null) {
                for (MetadataService.Error e : r.errors) {
                    errors.add('message=' + e.message + ' statusCode=' + e.statusCode + ' fields=' + e.fields);
                }
            }
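            // EndUserMessageException is a custom exception class, assumed to be defined elsewhere in the org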
            throw new EndUserMessageException('Error: ' + String.join(errors, '; '));
        }
    }
}

PS A maximum of 10 record types can be read at once, so use multiple calls if you require more than 10, as sketched below.
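
A minimal sketch of batching the reads, assuming the same MetadataService wrapper as above (readInBatches is a hypothetical helper, not part of apex-mdapi):

private MetadataService.RecordType[] readInBatches(MetadataService.MetadataPort service, String[] fullNames) {
    MetadataService.RecordType[] recordTypes = new MetadataService.RecordType[] {};
    String[] batch = new String[] {};
    for (Integer i = 0; i < fullNames.size(); i++) {
        batch.add(fullNames[i]);
        // Flush every 10 names, and any remainder after the last name
        if (batch.size() == 10 || i == fullNames.size() - 1) {
            for (MetadataService.RecordType rt : (MetadataService.RecordType[]) service.readMetadata('RecordType', batch).getRecords()) {
                recordTypes.add(rt);
            }
            batch.clear();
        }
    }
    return recordTypes;
}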

Adding JavaScript and jQuery to Visualforce pages

Visualforce is pretty tolerant of how custom JavaScript is added, but this post suggests a couple of patterns to use (where possible):

  • External JavaScript file references or local JavaScript is best placed at the end of the page rather than the beginning, which looks a bit strange at first sight. The benefit is that the page content can then be rendered by the browser before the browser becomes blocked loading the external JavaScript file and then executing that file and the local JavaScript. So a potential page load delay is avoided, with the JavaScript work being completed in the user’s “thinking time” as they first see the page content.
  • It is easy to unintentionally add references to JavaScript’s global scope and so potentially interfere with other references in that scope. In JavaScript, scope is delineated by functions (not by blocks; curly brackets have no impact on scope), so var declarations should always be used, and always within a function.

So when not using external libraries, put this immediately-executed (via the trailing ()) anonymous JavaScript function (which creates a new scope) at the end of the page:

<apex:page ...>
    <apex:sectionHeader .../>
    <apex:pageBlock ...>
        ...
    </apex:pageBlock>

<script>
(function() {
    // All the custom JavaScript goes in here
    // Always use var
    var i = ...;
    ...
})();
</script>
</apex:page>

and when using jQuery, put both the include and the immediately-executed anonymous JavaScript function at the end of the page:

<apex:page ...>
    <apex:sectionHeader .../>
    <apex:pageBlock ...>
        ...
    </apex:pageBlock>

<apex:includeScript value="{!URLFOR($Resource.jQueryZip, 'jquery.js')}"/>
<script>
(function($) {
    // All the custom JavaScript goes in here
    // Always use var
    var i = ...;
    // Use $ in here for jQuery
    var j = $('table.notes');
    ...
})(jQuery.noConflict());
</script>
</apex:page>

This allows $ to be used as the reference to jQuery within the function, while ensuring that whatever the symbol $ was set to before jQuery was included is restored via the noConflict call.

Creating a custom global describe API using @RestResource

A colleague is working on a client application that needs to know all the SObject names and all the field names within those SObjects. The Apex describe APIs provide this information, but also a lot of other information that is not required in this case. So it is worth doing work at the server-side to cut the information down to only what the client requires.

(In the org in question, the 300 SObjects produce JSON output of 800 kB, well below the 3 MB governor limit on HTTP responses.)

Salesforce’s @RestResource mechanism makes doing this pretty easy. The code below transfers the required information into instances of simple Apex classes, sorts the data by label first and API name second, and then leaves it up to the platform to serialise those instances as JSON:

@RestResource(urlMapping='/v1/describe')
global with sharing class DescribeRest {

    global class Sob implements Comparable {
        
        public String sobLabel;
        public String sobApi;
        public Field[] sobFields;
        
        Sob(SObjectType t) {
            DescribeSObjectResult r = t.getDescribe();
            sobLabel = r.getLabel();
            sobApi = r.getName();
            sobFields = new Field[] {};
            for (SObjectField f : r.fields.getMap().values()) {
                sobFields.add(new Field(f));
            }
            sobFields.sort();
        }
        
        public Integer compareTo(Object o) {
            Sob that = (Sob) o;
            if (this.sobLabel < that.sobLabel) return -1;
            else if (this.sobLabel > that.sobLabel) return 1;
            else {
                if (this.sobApi < that.sobApi) return -1;
                else if (this.sobApi > that.sobApi) return 1;
                else return 0;
            }
        }
    }
    
    global class Field implements Comparable {
        
        public String label;
        public String api;
        
        Field(SObjectField f) {
            DescribeFieldResult r = f.getDescribe();
            label = r.getLabel();
            api = r.getName();
        }
        
        public Integer compareTo(Object o) {
            Field that = (Field) o;
            if (this.label < that.label) return -1;
            else if (this.label > that.label) return 1;
            else {
                if (this.api < that.api) return -1;
                else if (this.api > that.api) return 1;
                else return 0;
            }
        }
    }
    
    @HttpGet
    global static Sob[] get() {
        
        Sob[] sobs = new Sob[] {};
        for (SObjectType t : Schema.getGlobalDescribe().values()) {
            sobs.add(new Sob(t));
        }
        sobs.sort();
        
        return sobs;
    }
}

When accessed using /services/apexrest/cveep/v1/describe.json, this produces JSON (formatted here to better illustrate the structure) taking about 10ms per object at the server-side:

[
    {
        "sobLabel":"Absence",
        "sobFields":[
            {"label":"Absence","api":"Name"},
            {"label":"Absence Type","api":"Type__c"},
            {"label":"Claim","api":"Claim__c"},
            ...
        ],
        "sobApi":"Absence__c"
    },
    ....
]

Now that the governor limits on describe calls have been removed, the first limit that will be hit is probably the 3 MB response size limit.
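
Since the @HttpGet method is just a global static method, it can also be exercised directly in a unit test. Here is a minimal test sketch (the class name and assertions are hypothetical, not part of the original code):

@IsTest
private class DescribeRestTest {
    @IsTest
    static void resultsAreSortedByLabelThenApiName() {
        DescribeRest.Sob[] sobs = DescribeRest.get();
        System.assert(!sobs.isEmpty(), 'expected at least one SObject');
        // Each adjacent pair should satisfy the Comparable ordering
        for (Integer i = 1; i < sobs.size(); i++) {
            System.assert(sobs[i - 1].compareTo(sobs[i]) <= 0, 'expected label then API name ordering');
        }
    }
}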

Fixing a common cause of System.LimitException: Apex CPU time limit exceeded

When developing code, automated unit tests and interactive testing naturally tend to use small numbers of objects. As the number of objects increases, the execution time should ideally increase only in proportion – linearly. But it is all too easy to introduce code where the execution time grows as the square or even the cube of the number of objects. (See e.g. Time complexity for some background.) For example, with 100 objects, code that takes 100 ms in the linear case takes 10 s in the squared case and 1,000 s in the cubed case. So for the squared or cubed cases

System.LimitException: Apex CPU time limit exceeded

exceptions can result with even modest numbers of objects.

Is this governor limit a good thing? On the positive side, it forces a bad algorithm to be replaced by a better one. But on the negative side, your customer is stuck unable to work until you can deliver a fix to them. Some sort of alarm and forgiveness from the platform for a while would be more helpful…

Here is a blatant example of the problem (assuming all the collections include large numbers of objects):

// Linear - OK
for (Parent__c p : parents) {
    // Squared - problem
    for (Child__c c : children) {
        // Cubed - big problem
        for (GrandChild__c gc : grandChildren) {
            // ...
        }
    }
}

So review all nested loops carefully. Sometimes the problem is hidden, with one loop in one method or class and another loop in another method or class, and so it is harder to find.

Often the purpose of the loops is just to find the object or objects in one collection that match an object in another collection. There are two contexts that require two different approaches (though in more complicated cases the approaches can be combined) to fix the problem, or better, to avoid it in the first place:

  • In e.g. a controller, where the query is explicit, parent and children can be queried together using a relationship query, making direct relationship references available.
  • In a trigger, no __r collections are populated, so maps have to be used. A map allows a value to be looked up without the cost of going through every entry in a list.

Here is how to fix the problem for the two contexts in parent-child relationships:

// Explicit SOQL context
for (Parent__c p : [
        select Name, (select Name from Childs__r)
        from Parent__c
        ]) {
    // Loop is over the small number of related Child__c not all of the Child__c
    for (Child__c c : p.Childs__r) {
        // ...
    }
}


// Trigger context
Map<Id, List<Child__c>> children = new Map<Id, List<Child__c>>();
for (Child__c c : [
        select Parent__c, Name
        from Child__c
        where Parent__c in Trigger.newMap.keySet()
        ]) {
    List<Child__c> l = children.get(c.Parent__c);
    if (l == null) {
        l = new List<Child__c>();
        children.put(c.Parent__c, l);
    }
    l.add(c);
}
for (Parent__c p : Trigger.new) {
    // Loop is over the small number of related Child__c not all of the Child__c
    if (children.containsKey(p.Id)) {
        for (Child__c c : children.get(p.Id)) {
            // ...
        }
    }
}

And for the two contexts in child-parent relationships:

// Explicit SOQL context
for (Child__c c : [
        select Name, Parent__r.Name
        from Child__c
        ]) {
    // The one parent
    Parent__c p = c.Parent__r;
    if (p != null) {
        // ...
    }
}


// Trigger context
Set<Id> parentIds = new Set<Id>();
for (Child__c c : Trigger.new) {
    if (c.Parent__c != null) {
        parentIds.add(c.Parent__c);
    }
}
if (parentIds.size() > 0) {
    Map<Id, Parent__c> parents = new Map<Id, Parent__c>([
            select Id, Name
            from Parent__c
            where Id in :parentIds
            ]);
    for (Child__c c : Trigger.new) {
        // The one parent
        if (c.Parent__c != null) {
            Parent__c p = parents.get(c.Parent__c);
            if (p != null) {
                // ...
            }
        }
    }
}

This fixed code will operate in close to linear time.

In praise of apex:inlineEditSupport

I recently had a requirement where one date value in a calculated table needed to be manually editable. While it would be possible to use an apex:inputField for all the values, apex:inlineEditSupport seemed like a better approach because the need to edit is relatively rare. That keeps the table uncluttered for the common case of no editing, yet still allows the values to be changed.

This is the result:

[Screenshot: inline editing of the date column in the table]

The point of this post is to highlight how easy this is to accomplish and to give a +1 to apex:inlineEditSupport in case anyone has been wary of using it in the several years it has been available…

All that was needed was to change this:

<apex:pageBlockTable value="{!payments}" var="p">
    ...
    <apex:column value="{!p.IssueDate__c}"/>
    ...
</apex:pageBlockTable>

to this:

<apex:pageBlockTable value="{!payments}" var="p">
    ...            
    <apex:column>
        <apex:facet name="header">
            <span class="inlineEditPencil">
                {!$ObjectType.Payment__c.fields.IssueDate__c.label}
            </span>
        </apex:facet>
        <apex:outputField value="{!p.IssueDate__c}">
            <apex:inlineEditSupport event="ondblclick"
                    showOnEdit="save, cancel"
                    hideOnEdit="submit, approve, send"
                    />
        </apex:outputField>
    </apex:column>
    ...
</apex:pageBlockTable>

and to add a couple of new command buttons (“save” and “cancel”) for the inline editing mode, along the lines sketched below.
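
A minimal sketch of those buttons, assuming the standard controller’s save and cancel actions (the ids must match the showOnEdit list above, and the buttons start hidden so that they only appear once editing begins):

<apex:pageBlockButtons>
    ...
    <apex:commandButton id="save" value="Save" action="{!save}" style="display: none;"/>
    <apex:commandButton id="cancel" value="Cancel" action="{!cancel}" immediate="true" style="display: none;"/>
</apex:pageBlockButtons>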

This CSS displays the pencil icon in the heading (to hint that the column is different):

<style type="text/css">
/* Fragile */
.inlineEditPencil {
    padding-right: 16px;
    background: url(/img/func_icons/util/pencil12.gif) no-repeat right 2px;
}
</style>