An @RestResource Apex class that returns multiple JSON formats

The simplest way to write an @RestResource class is to return Apex objects from the @Http methods and leave it up to the platform to serialize these objects as JSON (or XML):

@RestResource(urlMapping='/report/*')
global without sharing class ReportRest {

    public class MyInnerClass {
        public String name;
        public Integer count; // "number" itself is a reserved word in Apex
    }

    @HttpGet  
    global static MyInnerClass get() {
        MyInnerClass instance = new MyInnerClass();
        ...
        return instance;
    }
}

This also allows tests to be written that don’t have to deserialize the response: they can reference the returned class instances directly. But the approach imposes these limitations:

  • The response JSON is fixed and determined by the returned classes and their fields, so responses that vary depending on the URL requested can’t be produced
  • Error conditions typically get handled by adding error fields to the response object rather than by returning a status code other than 200 together with separate error information

Here is an alternate pattern that is a bit more work but in my experience meets the needs of client-side MVC applications (AngularJS in my case) better. The class returns two different JSON formats (depending on the part of the URL after “/report/”):

@RestResource(urlMapping='/report/*')
global without sharing class ReportRest {

    public class Day {
        public Date date;
        public Integer hours;
    }
    
    public class Employee {
        public String name;
        public Day[] approved = new Day[] {};
    }
    
    public class Claim {
        public String employeeName;
        public String claimNumber;
    }
 
    @HttpGet  
    global static void get() {
        RestResponse res = RestContext.response;
        if (res == null) {
            res = new RestResponse();
            RestContext.response = res;
        }
        try {
            res.responseBody = Blob.valueOf(JSON.serialize(doGet(extractReportId())));
            res.statusCode = 200;
        } catch (EndUserMessageException e) {
            res.responseBody = Blob.valueOf(e.getMessage());
            res.statusCode = 400;
        } catch (Exception e) {
            res.responseBody = Blob.valueOf(
                    String.valueOf(e) + '\n\n' + e.getStackTraceString()
                    );
            res.statusCode = 500;
        }
    }
    
    private static Object doGet(String reportId) {
        if (reportId == 'ac') {
            return absenceCalendarReport();
        } else if (reportId == 'al') {
            return absenceListReport();
        } else if (reportId == 'dl') {
            return disabilityListReport();
        } else {
            throw new EndUserMessageException(reportId + ' not implemented');
        }
    }
    
    private static Employee[] absenceCalendarReport() {
        Employee[] employees = new Employee[] {};
        ...
        return employees;
    }
    
    private static Claim[] absenceListReport() {
        Claim[] claims = new Claim[] {};
        ...
        return claims;
    }
    
    private static Claim[] disabilityListReport() {
        Claim[] claims = new Claim[] {};
        ...
        return claims;
    }
    
    private static String extractReportId() {
        String[] parts = RestContext.request.requestURI.split('\\/');
        String lastPart = parts[parts.size() - 1];
        Integer index = lastPart.indexOf('?');
        return index != -1 ? lastPart.substring(0, index) : lastPart;
    }
}

Apex classes are still used to represent the returned data, but they are explicitly serialized using a JSON.serialize call. As the overall response is built explicitly, the status code can be set, allowing the client side to vary its logic depending on that code. In this example, error information is returned as plain text: either a message intended to be shown to an end user (signalled by the EndUserMessageException custom exception) or an unintended error that includes a stack trace for the developer.
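To illustrate the benefit on the client side, here is a minimal sketch (plain JavaScript; the function and property names are invented for illustration) of how a client can branch on the status codes set by the Apex class above:

```javascript
// Interprets a response from the /report/* resource.
// 200 => JSON data (shape varies by report), 400 => plain-text message
// safe to show the user, anything else => developer-oriented detail.
function interpretResponse(status, body) {
    if (status === 200) {
        return { kind: 'data', data: JSON.parse(body) };
    } else if (status === 400) {
        // EndUserMessageException: body is plain text for the end user
        return { kind: 'userError', message: body };
    } else {
        // Unexpected error: body includes a stack trace
        return { kind: 'systemError', detail: body };
    }
}
```

An AngularJS $http error callback, for example, can route `userError` messages straight into the UI while logging `systemError` detail.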

Salesforce Packaged Modules – what do you think?

If you have worked on server-side JavaScript you will be familiar with the NPM (Node Packaged Modules) site. The Node.js community have managed to corral the loose world of JavaScript into some 74,000 packages that have been downloaded many millions of times. When I wanted an HTTP client it took me just a few minutes to find one in NPM that I liked, and it has been working fine ever since.

In comparison, I know of relatively few libraries of Apex code, and one of the oldest, apex-lang, presently shows only 1100 downloads. So I assume that of the (several) billion lines of Apex that have been written, just about none of it is re-used in more than one org. So not much standing on the shoulders of giants going on.

Yes there is the managed package mechanism, but I think of managed packages as containers for substantial functionality with typically no dependency on other 3rd party managed packages. Perhaps we have all been waiting for Salesforce to introduce some lighter-weight library mechanism.

Some cool features of NPM packages (see the package.json example below) are:

  • automatic dependency chains: if you install a package that requires other packages they will also be automatically installed
  • versioning and version dependency
  • repository locations included
  • very easy to use e.g. “npm install protractor” and 30 seconds later everything you need is installed and ready to run

What might this look like for Salesforce and Apex? There would need to be an spm command-line tool that could pull from (Git) repositories and push into Salesforce orgs (or a local file system) – that seems possible. Some of the 40 characters available for names would have to be used to avoid naming collisions, and there would need to be a single name registry. The component naming convention could be something like spm_myns_MyClassName for an Apex class and spm_myns_Package for the package description static resource (that would contain JSON). Somehow, say, 90% test coverage would need to be mandatory. Without the managed package facility of hiding source code, it would inherently be open source (a good thing in my mind). Perhaps code should have no dependency on concrete SObject types? Perhaps only some component types should be allowed?
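To make the naming convention concrete, here is a purely hypothetical sketch of the JSON that an spm_myns_Package description static resource might contain (every field name here is invented for illustration, loosely mirroring package.json):

```json
{
    "name": "myns",
    "description": "Example utility library",
    "version": "0.1.0",
    "repository": {
        "type": "git",
        "url": "git://github.com/example/myns.git"
    },
    "dependencies": {
        "otherns": "~1.2.0"
    }
}
```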

To get this started – apart from the tooling and a site – some truly useful contributions would be needed to demonstrate the value.

In the company where I work we have talked about sharing source code between managed packages but have not done it in any formalised way; I wonder if systems similar to what is described here are already in use in some companies? Or are already open sourced? Comments very welcome.

For reference, here is a slightly cut down NPM package.json example:

{
  "name": "protractor",
  "description": "Webdriver E2E test wrapper for Angular.",
  "homepage": "https://github.com/angular/protractor",
  "keywords": [
    "angular",
    "test",
    "testing",
    "webdriver",
    "webdriverjs",
    "selenium"
  ],
  "author": {
    "name": "Julie Ralph",
    "email": "ju.ralph@gmail.com"
  },
  "dependencies": {
    "selenium-webdriver": "~2.39.0",
    "minijasminenode": ">=0.2.7",
    "saucelabs": "~0.1.0",
    "glob": ">=3.1.14",
    "adm-zip": ">=0.4.2",
    "optimist": "~0.6.0"
  },
  "devDependencies": {
    "expect.js": "~0.2.0",
    "chai": "~1.8.1",
    "chai-as-promised": "~4.1.0",
    "jasmine-reporters": "~0.2.1",
    "mocha": "~1.16.0",
    "express": "~3.3.4",
    "mustache": "~0.7.2"
  },
  "repository": {
    "type": "git",
    "url": "git://github.com/angular/protractor.git"
  },
  "bin": {
    "protractor": "bin/protractor",
    "webdriver-manager": "bin/webdriver-manager"
  },
  "main": "lib/protractor.js",
  "scripts": {
    "test": "node lib/cli.js spec/basicConf.js; ..."
  },
  "license": "MIT",
  "version": "0.16.1",
  "webdriverVersions": {
    "selenium": "2.39.0",
    "chromedriver": "2.8",
    "iedriver": "2.39.0"
  },
  "readme": "...",
  "readmeFilename": "README.md",
  "bugs": {
    "url": "https://github.com/angular/protractor/issues"
  },
  "_id": "protractor@0.16.1",
  "_from": "protractor@0.16.x"
}

Serving AngularJS templates from static resources

An AngularJS app typically starts with an “index” page that loads the required JavaScript/CSS and acts as the container for the client-side processed content. The app operates by rendering various templates in response to user interactions into that container.

That “index” page is a good place to obtain information from the “Visualforce” world that can be passed to the “AngularJS” world, and so is best made a Visualforce page. (See Passing platform configuration to an AngularJS app.)

But what about the templates? Typically there are many of these. Should they also be Visualforce pages? At first sight it seems a reasonable thing to do, as the templates are “partial pages”. And Visualforce pages have fixed URLs, whereas static resources have URLs that include a timestamp, making them harder to reference in JavaScript code such as a route provider. And if you use individual static resources per template (rather than a ZIP static resource containing all the templates) each template has its own timestamp.

But provided a clear separation has been made between server-side processing and client-side processing, no Visualforce capabilities are needed for the templates. And using Visualforce pages adds complexity such as requiring profiles to be updated. So how can the static resource timestamp value be handled if static resources are used instead?

The answer is surprisingly simple: it appears that using the current (JavaScript) timestamp is enough to get the latest version. So a $routeProvider templateUrl for a static resource called “xyz_partial” is simply:

templateUrl: '/resource/' + Date.now() + '/xyz_partial'
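Where many routes are involved, the expression can be wrapped in a tiny helper (the function name is mine):

```javascript
// Builds a static resource URL for a template; using the current
// JavaScript timestamp in place of the real static resource timestamp
// is enough to get the latest version served.
function partialUrl(resourceName) {
    return '/resource/' + Date.now() + '/' + resourceName;
}

// e.g. $routeProvider.when('/xyz', { templateUrl: partialUrl('xyz_partial') });
```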

You can see this pattern applied in this (quite new) Salesforce AngularJS sample application created by Pat Patterson.

PS As David Esposito comments, where there are only a small number of resource references, it is arguably cleaner to not use this timestamp approach.

Passing platform configuration to an AngularJS app

Running a JavaScript client-side MVC app such as an AngularJS app in Salesforce presents the problem of how to obtain configuration information from the platform. Most of the app is best located in a static resource zip file as server-side Visualforce processing isn’t needed. Using relative URLs between the various files in the zip then avoids any dependency on the absolute URL of the zip. (That absolute URL includes a timestamp and also a namespace prefix if a managed package is involved so the fewer references to it the better.)

But there are still a few configuration parameters that are easiest to obtain using Visualforce. The index Visualforce page – the page into which the dynamic content is inserted – is a good single place to obtain that information and make it available to the rest of the app through JavaScript via Angular’s constant mechanism:

<apex:page showHeader="false" sidebar="false"
        standardStylesheets="false" applyHtmlTag="false">
<html lang="en" ng-app="eepApp" ng-controller="AppController">
<head>...</head>
<body>...

<script src="{!URLFor($Resource.appzip, 'lib/angular/angular.min.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/app.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/controllers.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/filters.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/services.js')}"></script>

<script>
(function() {
    var parts = '{! $CurrentPage.Name }'.split('__');
    var namespace = parts.length == 2 ? parts[0] : null;
    var restPrefix =  '{! $Site.CurrentSiteUrl }services/apexrest'
            + (namespace ? '/' + namespace : '');
    var pagePrefix = 'https://{! $Site.Domain }';
    var serverUrls = {
        namespacePrefix: namespace ? namespace + '__' : '',
        configRest: restPrefix + '/eep/config',
        employeesRest: restPrefix + '/eep/employees',
        metaRest: restPrefix + '/eep/meta',
        loginPage: pagePrefix + '{! $Page.Login }',
        logoutPage: pagePrefix + '{! $Page.Logout }'
    }
    console.log('serverUrls=' + JSON.stringify(serverUrls));
    
    // This configures the Angular app (declared in app.js)
    eepApp.constant('ServerUrls', serverUrls);
})();
</script>
  
</body>
</html>
</apex:page>

With this setup, any service or controller that needs to reference one of the configuration values just declares a dependency on the ServerUrls object and references the values from that. The result is a clean separation of concerns.
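The namespace handling in the script above boils down to a small pure function. Here is a sketch (the page name and site URL arguments stand in for the values Visualforce supplies at runtime; only two of the URLs are shown):

```javascript
// Derives the namespace prefix and a sample REST URL from the
// Visualforce page name (e.g. "myns__Index" inside a managed package,
// plain "Index" otherwise) and the current site URL.
function buildServerUrls(pageName, currentSiteUrl) {
    var parts = pageName.split('__');
    var namespace = parts.length == 2 ? parts[0] : null;
    var restPrefix = currentSiteUrl + 'services/apexrest'
            + (namespace ? '/' + namespace : '');
    return {
        namespacePrefix: namespace ? namespace + '__' : '',
        configRest: restPrefix + '/eep/config'
    };
}
```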

Connecting DataTables to JSON generated by Apex

I had a quick go at coming up with an answer to this DataTables sAjaxSource question on Salesforce Stackexchange but didn’t get very far. Using DataTable’s sAjaxSource directly runs you into the problem that the Visualforce page containing the table and the Apex REST API that supplies the JSON use different host names and so the same origin security policy blocks the access. (The answer includes one way to work-around this.)

The approach I tried was to use Apex’s @RemoteAction mechanism, where a JavaScript function is generated in the page. So instead of referencing a URL, the JavaScript function is called and how it connects back to the server is its concern. The trick to getting it to work is to recognise that the JavaScript function that you add (via the fnServerData property) is intended to provide an interception point for the request to the URL you specify (via the sAjaxSource property). So both must be specified, even though in this case the URL is not used.

This is an example of the output (using default styles):

[Screenshot: the DataTable rendered with default styles]

Here is the Visualforce page; most of the JavaScript is just cleaning up the JSON data returned from the controller to suit DataTables:

<apex:page controller="DataTableController">

<link
rel="stylesheet"
type="text/css"
href="https://ajax.aspnetcdn.com/ajax/jquery.dataTables/1.9.4/css/jquery.dataTables.css"
/>

<apex:sectionHeader title="DataTables"/>

<table id="table" cellpadding="0" cellspacing="0" border="0">
    <thead>
        <tr>
            <th>Name</th>
            <th>Birthdate</th>
            <th>Phone</th>
            <th>Email</th>
            <th>Salary</th>
        </tr>
    </thead>
    <tbody>
    </tbody>
</table>

<script
type="text/javascript"
charset="utf8"
src="https://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.8.2.min.js"
>
</script>
<script
type="text/javascript"
charset="utf8"
src="https://ajax.aspnetcdn.com/ajax/jquery.dataTables/1.9.4/jquery.dataTables.min.js"
>
</script>
<script>

var j$ = jQuery.noConflict();

var fields = ['Name', 'Birthdate', 'Phone', 'Email', 'Salary__c'];

var aoColumns = [];
for (var i = 0; i < fields.length; i++) {
    aoColumns.push({'mData': fields[i]});
}

j$(document).ready(function() {
    j$('#table').dataTable({
        'aoColumns': aoColumns,
        'bProcessing': true,
        'bServerSide': true,
        'bFilter': false,
        'sAjaxSource': 'fakeUrl',
        'fnServerData': function(sSource, aoData, fnCallback) {
            console.log(JSON.stringify(aoData));
            // Call the @RemoteAction JavaScript function
            DataTableController.contacts(aoData, function(result, event) {
                if (event.type != 'exception') {
                    console.log(JSON.stringify(result));
                    for (var i = 0; i < result.aaData.length; i++) {
                        var r = result.aaData[i];
                        for (var j = 0; j < fields.length; j++) {
                            var field = fields[j];
                            if (r[field] == undefined) {
                                // DataTables pops a dialog for undefined values
                                r[field] = null;
                            } else if (field == 'Birthdate') {
                                // Dates transmitted as longs
                                var d = new Date(r[field]);
                                r[field] = ''
                                        + (d.getMonth() + 1)
                                        + '/'
                                        + d.getDate()
                                        + '/'
                                        + d.getFullYear()
                                        ;
                            }
                        }
                    }
                    // Call back into the DataTable function
                    fnCallback(result);
                } else {
                    alert(event.message);
                }
            });
        }
    });
});

</script>

</apex:page>

Most of the complexity in the Apex code is in interpreting the request parameters sent by DataTables including things like the multi-column sorting. Note that the conversion from JSON to Apex objects and Apex objects to JSON is left to the platform.

// See https://datatables.net/usage/server-side
global class DataTableController {

    // Defines shape of JSON response
    global class Response {
        public Integer sEcho;
        public Integer iTotalRecords;
        public Integer iTotalDisplayRecords;
        public SObject[] aaData;
        Response(Integer echo, Integer total, SObject[] sobs) {
            this.sEcho = echo;
            this.iTotalRecords = total;
            this.iTotalDisplayRecords = total;
            this.aaData = sobs;
        }
    }

    // DataTable passes JSON definition of what server should do
    private class Params {
    
        Map<String, Object> m = new Map<String, Object>();
        
        Integer echo;
        Integer start;
        Integer length;
        String[] columns;
        Integer[] sortColumns;
        String[] sortDirections;
        
        Params(List<Map<String, Object>> request) {
            for (Map<String, Object> r : request) {
                m.put((String) r.get('name'), r.get('value'));
            }
            echo = integer('sEcho');
            start = integer('iDisplayStart');
            length = integer('iDisplayLength');
            columns = stringArray('mDataProp');
            sortColumns = integerArray('iSortCol');
            sortDirections = stringArray('sSortDir');
        }
        
        String[] stringArray(String prefix) {
            String[] strings = new String[] {};
            for (Object o : array(prefix)) {
            strings.add(o != null ? esc(String.valueOf(o)) : null);
            }
            return strings;
        }
        
        Integer[] integerArray(String prefix) {
            Integer[] integers = new Integer[] {};
            for (Object o : array(prefix)) {
                integers.add(o != null ? Integer.valueOf(o) : null);
            }
            return integers;
        }

        Object[] array(String prefix) {
            Object[] objects = new Object[] {};
            for (Integer i = 0; true; i++) {
                Object o = m.get(prefix + '_' + i);
                if (o != null) {
                    objects.add(o);
                } else {
                    break;
                }
            }
            return objects;
        }
        
        Integer integer(String name) {
           Object o = m.get(name);
           if (o instanceof Decimal) {
               return ((Decimal) o).intValue();
           } else if (o instanceof Integer) {
               return (Integer) o;
           } else {
               return null;
           }
        }
        
        // Guard against SOQL injection
        String esc(String s) {
            return s != null ? String.escapeSingleQuotes(s) : null;
        }
    }
    
    @RemoteAction
    global static Response contacts(List<Map<String, Object>> request) {
    
        Params p = new Params(request);

        String soql = ''
                + ' select ' + String.join(p.columns, ', ')
                + ' from Contact'
                + ' order by ' + String.join(orderBys(p), ', ')
                + ' limit :length'
                + ' offset :start'
                ;
        System.debug('>>> soql=' + soql);

        Integer start = p.start;
        Integer length = p.length;
        return new Response(
                p.echo,
                [select Count() from Contact limit 40000],
                Database.query(soql)
                );
    }
    
    private static String[] orderBys(Params p) {
        Map<String, String> soqlDirections = new Map<String, String>{
                'asc' => 'asc nulls last',
                'desc' => 'desc nulls first'
                };
        String[] orderBys = new String[] {};
        Integer min = Math.min(p.sortColumns.size(), p.sortDirections.size());
        for (Integer i = 0; i < min; i++) {
            orderBys.add(''
                    + p.columns[p.sortColumns[i]]
                    + ' '
                    + soqlDirections.get(p.sortDirections[i])
                    );
        }
        return orderBys;
    }
}
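For reference, DataTables sends aoData as an array of name/value pairs, and the Params inner class above begins by flattening those pairs into a map keyed by name. The same step expressed in JavaScript is simply:

```javascript
// Flattens DataTables' [{name: ..., value: ...}, ...] request format
// into a plain object, mirroring the Params constructor above.
function paramsToMap(aoData) {
    var m = {};
    for (var i = 0; i < aoData.length; i++) {
        m[aoData[i].name] = aoData[i].value;
    }
    return m;
}
```

The indexed parameters such as mDataProp_0, mDataProp_1, … then become properties that can be collected by probing successive suffixes, as the array method does in Apex.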

Apex code formatting

I’ve been spending (wasting?) time on Salesforce Stack Exchange. Often code is posted and nearly always the formatting used gets in the way of understanding what the code is doing. I suggest this can muddle the thinking of the person writing the code, and certainly impacts those who have to pick the code up later.

So before contemplating gnarly logical problems in code, the first job is to remove the formatting noise. Having a Java background, my “go to” reference is Code Conventions for the Java Programming Language. But there are a couple of language mechanisms specific to Apex that are worth particular attention…

The first is the SOQL for loop that you should make your default pattern when querying and iterating over SObjects. (Some reasons why: where there are large numbers of SObjects, only a chunk of SObjects occupy heap space rather than all the SObjects; the scope of the variable is limited to the loop block; clear and elegant syntax.) But there can be a lot going on in the SOQL so give the SOQL the clarity it deserves and keep the line length reasonable by using line breaks:

public class MyClass {
    public void myMethod(Set<Id> accountIds) {
        for (Account a : [
                select Id, Name, BillingStreet, BillingCity, BillingState
                from Account
                where Id in :accountIds
                order by Name
                ]) {
            // Do something with each Account
        }
    }
}

The second is the support for named parameters in SObjects. Instead of creating an object with no values in its fields and then assigning values field by field, create the complete object, dedicating a separate line to each parameter:

@isTest
private class MyClassTest {
    @isTest
    static void myTestName() {
        Account a = new Account(
                Name = 'Acme',
                BillingStreet = '123 The Street',
                BillingCity = 'The City',
                BillingState = 'The State'
                );
        // Do something
    }
}

The above examples also use these formatting ideas:

  • Indent consistently and by 4 spaces (not tabs)
  • Indent continuations (where lines get too long and need to be wrapped) by 8 spaces more than their containing block to distinguish them from the start of a new block
  • Eliminate all blank lines that don’t have a purpose
  • Unfortunately Apex is case insensitive, but that is no reason not to use consistent capitalization to distinguish, for example, between types and variables

Finding Visualforce field ids

I was asked in a comment on Hack to find field ids – allows a default UI “New” page to be pre-populated to post the code, so I have done so below. Remember: this is a hack.

Before using this code, consider instead using the more recent Tooling API – see Andrew Fawcett’s Querying Custom Object and Field IDs via Tooling API.

public class LkidUtil {

    /**
     * This field id is needed to pass parameters to default UI. Unfortunately there is no API to obtain it, hence this hack.
     * Takes about 50-100ms. Results could be cached in a custom setting if that became problematic.
     */
    public static String findFieldId(SObjectType sobType, SObjectField field) {

        return findFieldIds(sobType, new SObjectField[] { field })[0];
    }
    
    public static String[] findFieldIds(SObjectType sobType, SObjectField[] fields) {
        
        // Crazy but necessary: parse default UI HTML for required id
        PageReference p = new PageReference('/' + sobType.getDescribe().getKeyPrefix() + '/e?nooverride=1');
        String html = p.getContent().toString();
        
        List<String> ids = new List<String>();
        for (SObjectField field : fields) {
            DescribeFieldResult f = field.getDescribe();
            String label = f.getLabel();
            ids.add(matchFieldId(html, label));
        }
        return ids;
    }
    
    // Public for testing
    public static String matchFieldId(String html, String label) {

        // Non-greedy ? wasn't sufficient; now limit the matched characters to
        // "word characters", usually [A-Za-z0-9_], which is ok for the field id
        Matcher m = Pattern.compile(
                '<label for="(\\w+)">(<span class="requiredMark">\\*</span>)?' + label + '</label>'
                ).matcher(html);

        // Use first match
        if (m.find()) {
            return m.group(1);
        } else {
            return null;
        }
    }
}

Note calls to this need guarding for tests to work:

String[] fieldIds;
if (!Test.isRunningTest()) {
    // Contains code that causes tests to (silently) abort
    fieldIds = LkidUtil.findFieldIds(Document__c.SObjectType, new SObjectField[] {
            Document__c.Claim__c,
            Document__c.BenefitClaimed__c,
            Document__c.Type__c
            });
} else {
    // Used in test
    fieldIds = new String[] {
            'FakeClaimFieldId',
            'FakeBenefitClaimedFieldId',
            'FakeTypeFieldId'
             };
}

And here is a unit test (limited by the fact that the page HTML cannot be generated in a test):

@isTest
private class LkidUtilTest {

    @isTest
    static void testRegexPatternRequiredField() {

        // Always worked
        doTestRegexPattern('<td class="labelCol requiredInput"><label for="00NK0000000ap5l"><span class="requiredMark">*</span>Type</label></td>');

        // Was broken
        doTestRegexPattern('<label for="Name"><span class="requiredMark">*</span>Document Name</label></td><td class="dataCol col02"><div class="requiredInput"><div class="requiredBlock"></div><input  id="Name" maxlength="80" name="Name" size="20" tabindex="1" type="text" /></div></td><td class="labelCol requiredInput"><label for="00NK0000000ap5l"><span class="requiredMark">*</span>Type</label>');
    }

    @isTest
    static void testRegexPatternNonRequiredField() {

        // Always worked
        doTestRegexPattern('<td class="labelCol"><label for="00NK0000000ap5l">Type</label></td>');

        // Was broken
        doTestRegexPattern('<label for="Name"><span class="requiredMark">*</span>Document Name</label></td><td class="dataCol col02"><div class="requiredInput"><div class="requiredBlock"></div><input  id="Name" maxlength="80" name="Name" size="20" tabindex="1" type="text" /></div></td><td class="labelCol"><label for="00NK0000000ap5l">Type</label>');
    }

    private static void doTestRegexPattern(String html) {
        System.assertEquals('00NK0000000ap5l', LkidUtil.matchFieldId(html, 'Type'));
    }
}
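The matching logic is easy to exercise outside Apex too. Here is a JavaScript sketch of the same idea (capture the word characters of the label’s for attribute, allowing for an optional required-mark span before the label text), run against the HTML fragments used in the unit test above:

```javascript
// Mirrors the intent of LkidUtil.matchFieldId: the field id is the
// value of the label's "for" attribute, restricted to word characters.
function matchFieldId(html, label) {
    var m = html.match(new RegExp(
            '<label for="(\\w+)">(<span class="requiredMark">\\*</span>)?' + label + '</label>'));
    return m ? m[1] : null;
}
```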