Cool data tables using @RestResource, AngularJS and trNgGrid

I have an AngularJS application that shows tables of data using:

  • an Apex class that does dynamic SOQL and populates instances of a simple Apex class that are serialised to the client as JSON via the @RestResource annotation
  • the client side is AngularJS that pretty much just passes the JSON data through to a page template
  • the presentation work is all done by the excellent trNgGrid component and Bootstrap styling

The Apex code is clean and simple:

@RestResource(urlMapping='/report')
global without sharing class ReportRest {
    public class Claim {
        public String employeeName;
        public String department;
        public String reportsTo;
        public String claimNumber;
        public String status;
        public String leaveType;
        public Date startDate;
        public Date endDate;
        Claim(SObject c) {
            SObject e = c.getSObject('Employee__r');
            employeeName = (String) e.get('Name');
            department = (String) e.get('Department');
            SObject r = e.getSObject('ReportsTo');
            reportsTo = r != null ? (String) r.get('Name') : null;     
            claimNumber = (String) c.get('Name');
            status = (String) c.get('Status__c');
            leaveType = (String) c.get('LeaveType__c');
            startDate = (Date) c.get('StartDate__c');
            endDate = (Date) c.get('EndDate__c');
        }
    }
    @HttpGet  
    global static Claim[] get() {
        Claim[] claims = new Claim[] {};
        String soql = ...;
        for (SObject sob : Database.query(soql)) {
            claims.add(new Claim(sob));
        }
        return claims;
    }
}

and the trNgGrid markup is even more impressive:

<table tr-ng-grid="tr-ng-grid" class="table table-condensed" items="items"
        order-by="orderBy" order-by-reverse="orderByReverse">
    <thead>
        <tr>
            <th field-name="employeeName"/>
            <th field-name="department"/>
            <th field-name="reportsTo"/>
            <th field-name="claimNumber"/>
            <th field-name="status"/>
            <th field-name="leaveType"/>
            <th field-name="startDate" display-format="longDate" display-align="right"/>
            <th field-name="endDate" display-format="longDate" display-align="right"/>
        </tr>
    </thead>
</table>

You just define the column headers and trNgGrid generates the rows from the JSON data array (called “items” here). The resulting table has column sorting and searching, and other features can be enabled too. As it is written in AngularJS, it leverages AngularJS features such as filters (“longDate” here) for custom formatting.
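For reference, the “items” array that the grid binds to is just the JSON serialisation of the Claim instances; it looks something like this (the values are illustrative):

```json
[
    {
        "employeeName": "Jane Doe",
        "department": "Finance",
        "reportsTo": "John Smith",
        "claimNumber": "CL-0001",
        "status": "Approved",
        "leaveType": "Annual",
        "startDate": "2014-06-02",
        "endDate": "2014-06-06"
    }
]
```

The `field-name` attributes in the markup match these property names, and the date strings are what Apex `Date` fields serialise to.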

What is great about this arrangement is that there is no tedious coding involved: all the code serves a purpose and the grunt work is handled by the frameworks. It also scores high on “ease of modification”: an extra column only takes a few minutes to add.

(Contrast this with the JavaScript required in e.g. Connecting DataTables to JSON generated by Apex.)

Here is a screen shot from the real application (with different columns):

[Screenshot: trNgGrid table]

Hash code for any Apex type via System.hashCode

If you are writing your own class that you want to use in sets or as a map key, you must add equals and hashCode methods as described in Using Custom Types in Map Keys and Sets. But hashCode is usually implemented by combining the hash codes of the fields of the class and until now not all Apex types exposed a hash code value.

This is now addressed in Summer ’14 by the System.hashCode method. It saves a lot of work for SObjects and provides consistency for primitive types.
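For example, a custom class used as a map key can now delegate to System.hashCode instead of hand-combining primitive hashes. This is a sketch only; the class and its fields are invented:

```apex
public class CompositeKey {
    public Id accountId;
    public Date effectiveDate;

    public CompositeKey(Id accountId, Date effectiveDate) {
        this.accountId = accountId;
        this.effectiveDate = effectiveDate;
    }

    public Boolean equals(Object o) {
        if (o instanceof CompositeKey) {
            CompositeKey k = (CompositeKey) o;
            return accountId == k.accountId && effectiveDate == k.effectiveDate;
        }
        return false;
    }

    public Integer hashCode() {
        // Combine the field hash codes; System.hashCode now works for
        // Date (and the other types) so no hand-rolled conversion is needed
        return 31 * System.hashCode(accountId) + System.hashCode(effectiveDate);
    }
}
```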

This test illustrates the method working:

@isTest
private class HashCodeTest {

    @isTest
    static void string() {
        // Already had a hashCode method
        System.assertEquals('hello world'.hashCode(), System.hashCode('hello world'));
    }
        
    @isTest
    static void decimal() {
        // Wasn't available before
        System.assertEquals(3843600, System.hashCode(123.987));
        System.assertEquals(3843600, System.hashCode(123.987));
        System.assertEquals(0, System.hashCode(0));
        System.assertEquals(1, System.hashCode(1));
        System.assertEquals(2, System.hashCode(2));
        System.assertEquals(1024, System.hashCode(3.3));
        System.assertEquals(-720412809, System.hashCode(4773463427.34354345));
    }
    
    @isTest
    static void sobject() {
        // Wasn't available before
        System.assertEquals(-1394890885, System.hashCode(new Contact(
                LastName = 'Doe')));
        System.assertEquals(-1394890885, System.hashCode(new Contact(
                LastName = 'Doe')));
        System.assertEquals(-1474401918, System.hashCode(new Contact(
                LastName = 'Smith')));
        System.assertEquals(744078320, System.hashCode(new Contact(
                LastName = 'Doe', FirstName = 'Jane')));
        System.assertEquals(744095627, System.hashCode(new Contact(
                LastName = 'Doe', FirstName = 'John')));
    }
    
    @isTest
    static void testNull() {
        try {
            System.hashCode(null);
            System.assert(false, 'Expected NullPointerException');
        } catch (NullPointerException e) {
            // Expected: a null argument has no hash code
        }
    }
}
}

Thanks to Daniel Ballinger for flagging this via a comment on Expose hashCode on all Apex primitives so hashCode viable for custom classes.

How to pass a large number of selections from a standard list view to a Visualforce page

Buttons can be defined and added to an object’s standard list view and these buttons can access the selected objects:

[Screenshot: standard list view with custom button]

If only a few items are selected then the IDs can be passed on to a Visualforce page like this (a “List Button” with “Display Checkboxes” checked that has behavior “Execute JavaScript” and content source “OnClickJavaScript”):

var ids = {!GETRECORDIDS($ObjectType.Contact)};
if (ids.length) {
    if (ids.length <= 100) {
        window.location = '/apex/Target?ids=' + ids.join(',');
    } else {
        alert('Select 100 or less');
    }
} else {
    alert('Select one or more Contacts');
}

The limit of 100 selected items is imposed because around 2k characters is generally considered the longest URL that works safely everywhere, and with a GET each 15-character ID has to be appended to the URL (together with a delimiter) to pass the values.
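As a rough sanity check on that arithmetic, here is a small JavaScript sketch (the base URL is hypothetical):

```javascript
// Rough check of the ~2k URL limit that motivates the 100-ID cap.
function urlLength(baseUrl, idCount) {
    var idChars = idCount * 15;                    // 15 characters per ID
    var delimiterChars = Math.max(idCount - 1, 0); // commas between IDs
    return baseUrl.length + idChars + delimiterChars;
}

var base = 'https://c.na15.visual.force.com/apex/Target?ids=';
console.log(urlLength(base, 100)); // well under 2000
console.log(urlLength(base, 130)); // over 2000
```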

So what if you need to support more than 100 selected objects? (The standard list view UI does allow selections to be made on multiple pages and allows a single page to show up to 200 objects.) Using a POST instead of a GET where the ID values are part of the request body rather than the URL avoids the URL length problem. Here is how to do that:

var ids = {!GETRECORDIDS($ObjectType.Contact)};
if (ids.length) {
    var form = document.createElement("form");
    form.setAttribute("method", "POST");
    form.setAttribute("action", "https://c.na15.visual.force.com/apex/Target");
    var hiddenField = document.createElement("input");
    hiddenField.setAttribute("type", "hidden");
    hiddenField.setAttribute("name", "ids");
    hiddenField.setAttribute("value", ids.join(','));
    form.appendChild(hiddenField);
    document.body.appendChild(form);
    form.submit();
} else {
    alert('Select one or more Contacts');
}

This JavaScript creates a form and posts it to the server. An important part of making this work is that the full target URL needs to be specified, as described in the Get POST data via visualforce page article, otherwise the Apex controller can’t pick up the posted data via ApexPages.currentPage().getParameters(). An obscure trick to say the least.

This page:

<apex:page controller="Target">
    <h1>Target</h1>
    <apex:repeat value="{!ids}" var="id">
        <div>{!id}</div>
    </apex:repeat>
</apex:page>

and controller:

public with sharing class Target {
    public String[] ids {
        get {
            if (ids == null) {
                String s = ApexPages.currentPage().getParameters().get('ids');
                if (s != null) {
                    ids = s.split(',');
                } else {
                    ids = new String[] {};
                }
            }
            return ids;
        }
        private set;
    }
}

can be used to demonstrate that the IDs are passed correctly for both versions of the JavaScript button.

An @RestResource Apex class that returns multiple JSON formats

The simplest way to write an @RestResource class is to return Apex objects from the @Http methods and leave it up to the platform to serialize these objects as JSON (or XML):

@RestResource(urlMapping='/report/*')
global without sharing class ReportRest {

    public class MyInnerClass {
        public String name;
        public Integer count; // 'number' is a reserved word in Apex
    }

    @HttpGet  
    global static MyInnerClass get() {
        MyInnerClass instance = new MyInnerClass();
        ...
        return instance;
    }
}

This also allows tests to be written that don’t have to deserialize as they can just reference the class instances directly. But the approach imposes these limitations:

  • The response JSON is fixed and determined by the returned classes and their fields so responses that vary depending on the URL requested can’t be produced
  • Error conditions typically get handled by adding error fields to the response object rather than by returning a status code other than 200 and separate error information

Here is an alternate pattern that is a bit more work but in my experience meets the needs of client-side MVC applications (AngularJS in my case) better. The class returns two different JSON formats (depending on the part of the URL after “/report/”):

@RestResource(urlMapping='/report/*')
global without sharing class ReportRest {

    public class Day {
        public Date date;
        public Integer hours;
    }
    
    public class Employee {
        public String name;
        public Day[] approved = new Day[] {};
    }
    
    public class Claim {
        public String employeeName;
        public String claimNumber;
    }
 
    @HttpGet  
    global static void get() {
        RestResponse res = RestContext.response;
        if (res == null) {
            res = new RestResponse();
            RestContext.response = res;
        }
        try {
            res.responseBody = Blob.valueOf(JSON.serialize(doGet(extractReportId())));
            res.statusCode = 200;
        } catch (EndUserMessageException e) {
            res.responseBody = Blob.valueOf(e.getMessage());
            res.statusCode = 400;
        } catch (Exception e) {
            res.responseBody = Blob.valueOf(
                    String.valueOf(e) + '\n\n' + e.getStackTraceString()
                    );
            res.statusCode = 500;
        }
    }
    
    private static Object doGet(String reportId) {
        if (reportId == 'ac') {
            return absenceCalendarReport();
        } else if (reportId == 'al') {
            return absenceListReport();
        } else if (reportId == 'dl') {
            return disabilityListReport();
        } else {
            throw new EndUserMessageException(reportId + ' not implemented');
        }
    }
    
    private static Employee[] absenceCalendarReport() {
        Employee[] employees = new Employee[] {};
        ...
        return employees;
    }
    
    private static Claim[] absenceListReport() {
        Claim[] claims = new Claim[] {};
        ...
        return claims;
    }
    
    private static Claim[] disabilityListReport() {
        Claim[] claims = new Claim[] {};
        ...
        return claims;
    }
    
    private static String extractReportId() {
        String[] parts = RestContext.request.requestURI.split('\\/');
        String lastPart = parts[parts.size() - 1];
        Integer index = lastPart.indexOf('?');
        return index != -1 ? lastPart.substring(0, index) : lastPart;
    }
}

Apex classes are still used to represent the returned data but are explicitly serialized using a JSON.serialize call. As the overall response is being explicitly built, the returned status code can be set, allowing the client side to vary its logic depending on that status code. In this example, error information (either intended to be shown to an end user, as signalled by the EndUserMessageException custom exception, or unintended and so including a stack trace) is returned as plain text that can be shown directly to the end user.
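One point worth noting: EndUserMessageException is not a platform class. It is a custom marker exception, declared in the usual Apex way:

```apex
public class EndUserMessageException extends Exception {
}
```

Throwing it signals that the message text is safe and meaningful to show to an end user; any other exception is treated as unexpected and returned with a 500 status.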

Salesforce Packaged Modules – what do you think?

If you have worked on server-side JavaScript you will be familiar with the NPM (Node Packaged Modules) site. The Node.js community have managed to corral the loose world of JavaScript into some 74,000 packages that have been downloaded many millions of times. When I wanted an HTTP client it took me just a few minutes to find one in NPM that I liked, and it has been working fine ever since.

In comparison, I know of relatively few libraries of Apex code, and one of the oldest, apex-lang, presently shows only 1100 downloads. So I assume that of the (several) billion lines of Apex that have been written, just about none is re-used in more than one org. So not much standing on the shoulders of giants going on.

Yes there is the managed package mechanism, but I think of managed packages as containers for substantial functionality with typically no dependency on other 3rd party managed packages. Perhaps we have all been waiting for Salesforce to introduce some lighter-weight library mechanism.

Some cool features of NPM packages (see the package.json example below) are:

  • automatic dependency chains: if you install a package that requires other packages they will also be automatically installed
  • versioning and version dependency
  • repository locations included
  • very easy to use e.g. “npm install protractor” and 30 seconds later everything you need is installed and ready to run

What might this look like for Salesforce and Apex? There would need to be an spm command-line tool that could pull from (Git) repositories and push into Salesforce orgs (or a local file system) – seems possible. Some of the 40 characters available for names would have to be used to avoid naming collisions, and there would need to be a single name registry. The component naming convention could be something like spm_myns_MyClassName for an Apex class and spm_myns_Package for the package description static resource (that would contain JSON). Somehow, say, 90% test coverage would need to be mandatory. Without the managed package facility of source code being hidden, it would inherently be open source (a good thing in my mind). Perhaps code should have no dependency on concrete SObject types? Perhaps only some component types should be allowed?
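To make the idea concrete, a hypothetical spm_myns_Package static resource might contain JSON along these lines (every name and version here is invented):

```json
{
  "name": "spm_myns",
  "description": "Example utility package (hypothetical)",
  "version": "1.2.0",
  "repository": {
    "type": "git",
    "url": "git://github.com/example/spm-myns.git"
  },
  "dependencies": {
    "spm_otherns": "~1.0.0"
  }
}
```

The spm tool would read the dependencies map and fetch those packages from their registered repositories first, NPM-style.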

To get this started – apart from the tooling and a site – some truly useful contributions would be needed to demonstrate the value.

In the company where I work we have talked about sharing source code between managed packages but have not done it in any formalised way; I wonder if systems similar to what is described here are already in use in some companies? Or are already open sourced? Comments very welcome.

For reference, here is a slightly cut down NPM package.json example:

{
  "name": "protractor",
  "description": "Webdriver E2E test wrapper for Angular.",
  "homepage": "https://github.com/angular/protractor",
  "keywords": [
    "angular",
    "test",
    "testing",
    "webdriver",
    "webdriverjs",
    "selenium"
  ],
  "author": {
    "name": "Julie Ralph",
    "email": "ju.ralph@gmail.com"
  },
  "dependencies": {
    "selenium-webdriver": "~2.39.0",
    "minijasminenode": ">=0.2.7",
    "saucelabs": "~0.1.0",
    "glob": ">=3.1.14",
    "adm-zip": ">=0.4.2",
    "optimist": "~0.6.0"
  },
  "devDependencies": {
    "expect.js": "~0.2.0",
    "chai": "~1.8.1",
    "chai-as-promised": "~4.1.0",
    "jasmine-reporters": "~0.2.1",
    "mocha": "~1.16.0",
    "express": "~3.3.4",
    "mustache": "~0.7.2"
  },
  "repository": {
    "type": "git",
    "url": "git://github.com/angular/protractor.git"
  },
  "bin": {
    "protractor": "bin/protractor",
    "webdriver-manager": "bin/webdriver-manager"
  },
  "main": "lib/protractor.js",
  "scripts": {
    "test": "node lib/cli.js spec/basicConf.js; ..."
  },
  "license": "MIT",
  "version": "0.16.1",
  "webdriverVersions": {
    "selenium": "2.39.0",
    "chromedriver": "2.8",
    "iedriver": "2.39.0"
  },
  "readme": "...",
  "readmeFilename": "README.md",
  "bugs": {
    "url": "https://github.com/angular/protractor/issues"
  },
  "_id": "protractor@0.16.1",
  "_from": "protractor@0.16.x"
}

Serving AngularJS templates from static resources

An AngularJS app typically starts with an “index” page that loads the required JavaScript/CSS and acts as the container for the client-side processed content. The app operates by rendering various templates in response to user interactions into that container.

That “index” page is a good place to obtain information from the “Visualforce” world that can be passed to the “AngularJS” world, and so is best made a Visualforce page. (See Passing platform configuration to an AngularJS app.)

But what about the templates? Typically there are many of these. Should they also be Visualforce pages? At first sight it seems a reasonable thing to do, as the templates are “partial pages”. And Visualforce pages have fixed URLs, whereas static resources have URLs that include a timestamp, making them harder to reference in JavaScript code such as a route provider. And if you use individual static resources per template (rather than a ZIP static resource containing all the templates), each template has its own timestamp.

But provided that a clear separation has been made between server-side processing and client-side processing, no Visualforce capabilities are needed for the templates. And using Visualforce pages adds complexity, such as requiring profiles to be updated. So how can the static resource timestamp value be handled if static resources are used instead?

The answer is surprisingly simple: it appears that using the current (JavaScript) timestamp is enough to get the latest version. So a $routeProvider templateUrl for a static resource called “xyz_partial” is simply:

templateUrl: '/resource/' + Date.now() + '/xyz_partial'
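Wrapped in a helper, the same idea can be applied across a route configuration (the partial names here are hypothetical):

```javascript
// Cache-busting URL for a template stored as an individual static
// resource; Date.now() stands in for the real resource timestamp.
function partialUrl(resourceName) {
    return '/resource/' + Date.now() + '/' + resourceName;
}

// Typical use in an AngularJS route configuration:
// $routeProvider.when('/claims', { templateUrl: partialUrl('claims_partial') });
console.log(partialUrl('xyz_partial'));
```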

You can see this pattern applied in this (quite new) Salesforce AngularJS sample application created by Pat Patterson.

PS As David Esposito comments, where there are only a small number of resource references, it is arguably cleaner to not use this timestamp approach.

Passing platform configuration to an AngularJS app

Running a JavaScript client-side MVC app such as an AngularJS app in Salesforce presents the problem of how to obtain configuration information from the platform. Most of the app is best located in a static resource zip file as server-side Visualforce processing isn’t needed. Using relative URLs between the various files in the zip then avoids any dependency on the absolute URL of the zip. (That absolute URL includes a timestamp and also a namespace prefix if a managed package is involved so the fewer references to it the better.)

But there are still a few configuration parameters that are easiest to obtain using Visualforce. The index Visualforce page (the one that the dynamic page content is inserted into) is a good single place to obtain that information and make it available to the rest of the app through JavaScript via Angular’s constant mechanism:

<apex:page showHeader="false" sidebar="false"
        standardStylesheets="false" applyHtmlTag="false">
<html lang="en" ng-app="eepApp" ng-controller="AppController">
<head>...</head>
<body>...

<script src="{!URLFor($Resource.appzip, 'lib/angular/angular.min.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/app.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/controllers.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/filters.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/services.js')}"></script>

<script>
(function() {
    var parts = '{! $CurrentPage.Name }'.split('__');
    var namespace = parts.length == 2 ? parts[0] : null;
    var restPrefix =  '{! $Site.CurrentSiteUrl }services/apexrest'
            + (namespace ? '/' + namespace : '');
    var pagePrefix = 'https://{! $Site.Domain }';
    var serverUrls = {
        namespacePrefix: namespace ? namespace + '__' : '',
        configRest: restPrefix + '/eep/config',
        employeesRest: restPrefix + '/eep/employees',
        metaRest: restPrefix + '/eep/meta',
        loginPage: pagePrefix + '{! $Page.Login }',
        logoutPage: pagePrefix + '{! $Page.Logout }'
    };
    console.log('serverUrls=' + JSON.stringify(serverUrls));
    
    // This configures the Angular app (declared in app.js)
    eepApp.constant('ServerUrls', serverUrls);
})();
</script>
  
</body>
</html>
</apex:page>

With this setup, any service or controller that needs to reference one of the configuration values just declares a dependency on the ServerUrls object and references the values from that. The result is a clean separation of concerns.
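The namespace handling in the inline script is the subtle part; extracted into plain JavaScript it behaves like this (the page and site names are invented):

```javascript
// A managed-package Visualforce page name looks like 'myns__Index';
// an unpackaged page is just 'Index'.
function parseNamespace(pageName) {
    var parts = pageName.split('__');
    return parts.length === 2 ? parts[0] : null;
}

// The Apex REST prefix gains a namespace segment inside a package.
function restPrefix(siteUrl, namespace) {
    return siteUrl + 'services/apexrest' + (namespace ? '/' + namespace : '');
}

console.log(restPrefix('https://example.force.com/', parseNamespace('myns__Index')));
console.log(restPrefix('https://example.force.com/', parseNamespace('Index')));
```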