An Apex implementation of the OAuth 2.0 JWT Bearer Token Flow

This flow allows an access token (AKA a session ID) to be obtained for a user based on a certificate shared by the client and the authorization server. Unlike most other OAuth 2.0 flows, no password is needed: there is no password to prompt for in a browser and no password to store. So the flow works well for server-to-server interactions.

There is a Java sample implementation in the OAuth 2.0 JWT Bearer Token Flow help; here is an Apex implementation of that. Having this in Apex allows, for example, one org to connect to another or a Site to obtain a session ID. This code could also be used to establish a connection from Salesforce to some other platform that supports the flow, though details like the encryption algorithm used might need changing.

public class Jwt {

    public class Configuration {
        public String jwtUsername {get; set;}
        public String jwtConnectedAppConsumerKey {get; set;}
        public String jwtSigningCertificateName {get; set;}
        public String jwtHostname {get; set;}
    }

    private class Header {
        String alg;
        Header(String alg) {
            this.alg = alg;
        }
    }

    private class Body {
        String iss;
        String prn;
        String aud;
        String exp;
        Body(String iss, String prn, String aud, String exp) {
            this.iss = iss;
            this.prn = prn;
            this.aud = aud;
            this.exp = exp;
        }
    }

    private class JwtException extends Exception {
    }

    private Configuration config;

    public Jwt(Configuration config) {
        this.config = config;
    }

    public String requestAccessToken() {
        Map<String, String> m = new Map<String, String>();
        m.put('grant_type', 'urn:ietf:params:oauth:grant-type:jwt-bearer');
        m.put('assertion', createToken());
        HttpRequest req = new HttpRequest();
        req.setMethod('POST');
        req.setEndpoint('https://' + config.jwtHostname + '/services/oauth2/token');
        req.setBody(formEncode(m));
        req.setTimeout(60 * 1000);
        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() >= 200 && res.getStatusCode() < 300) {
            return extractJsonField(res.getBody(), 'access_token');
        } else {
            throw new JwtException(res.getBody());
        }
    }

    private String formEncode(Map<String, String> m) {
        String s = '';
        for (String key : m.keySet()) {
            if (s.length() > 0) {
                s += '&';
            }
            s += key + '=' + EncodingUtil.urlEncode(m.get(key), 'UTF-8');
        }
        return s;
    }

    private String extractJsonField(String body, String field) {
        JSONParser parser = JSON.createParser(body);
        while (parser.nextToken() != null) {
            if (parser.getCurrentToken() == JSONToken.FIELD_NAME
                    && parser.getText() == field) {
                // Advance to the field's value
                parser.nextToken();
                return parser.getText();
            }
        }
        throw new JwtException(field + ' not found in response ' + body);
    }

    private String createToken() {
        String alg = 'RS256';
        String iss = config.jwtConnectedAppConsumerKey;
        String prn = config.jwtUsername;
        String aud = 'https://' + config.jwtHostname;
        String exp = String.valueOf(System.currentTimeMillis() + 60 * 60 * 1000);
        String headerJson = JSON.serialize(new Header(alg));
        String bodyJson = JSON.serialize(new Body(iss, prn, aud, exp));
        String token = base64UrlSafe(Blob.valueOf(headerJson))
                + '.' + base64UrlSafe(Blob.valueOf(bodyJson));
        String signature = base64UrlSafe(Crypto.signWithCertificate(
                'RSA-SHA256', Blob.valueOf(token), config.jwtSigningCertificateName));
        token += '.' + signature;
        return token;
    }

    private String base64UrlSafe(Blob b) {
        return EncodingUtil.base64Encode(b).replace('+', '-').replace('/', '_');
    }
}
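For readers more familiar with Java than Apex, the shape of the token that createToken builds can be sketched like this. The header and claim values below are placeholders, and the RSA signing step is omitted (the Apex code does that with Crypto.signWithCertificate):

```java
import java.util.Base64;

public class JwtShape {

    // Same transformation as the Apex base64UrlSafe method:
    // standard base64, then '+' -> '-' and '/' -> '_'
    static String base64UrlSafe(byte[] b) {
        return Base64.getEncoder().encodeToString(b)
                .replace('+', '-').replace('/', '_');
    }

    public static void main(String[] args) {
        // Placeholder header and claims; real values come from the Configuration
        String headerJson = "{\"alg\":\"RS256\"}";
        String bodyJson = "{\"iss\":\"consumerKey\",\"prn\":\"user@example.com\","
                + "\"aud\":\"https://login.salesforce.com\",\"exp\":\"0\"}";

        // The signing input is base64url(header) + "." + base64url(claims);
        // the flow then appends "." + base64url(signature) to complete the JWT
        String signingInput = base64UrlSafe(headerJson.getBytes())
                + "." + base64UrlSafe(bodyJson.getBytes());
        System.out.println(signingInput.chars().filter(c -> c == '.').count()); // prints 1
    }
}
```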

And a test for the class. Note that the certificate has to be manually created in the org because there is no API that allows the test to automatically create it:

@isTest
private class JwtTest {

    // No API for creating a certificate so one must be manually pre-created
    // using Setup -> Security Controls -> Certificate and Key Management
    private static final String PRE_CREATED_CERTIFICATE_NAME = 'JWT';

    private static final String FAKE_TOKEN = 'fakeToken';

    private class Mock implements HttpCalloutMock {
        public HTTPResponse respond(HTTPRequest req) {
            System.assertEquals('POST', req.getMethod());
            System.assert(req.getBody().contains('grant_type'), req.getBody());
            System.assert(req.getBody().contains('assertion'), req.getBody());
            HTTPResponse res = new HTTPResponse();
            res.setBody('{"scope":"api","access_token":"' + FAKE_TOKEN + '"}');
            return res;
        }
    }

    @isTest
    static void test() {
        Jwt.Configuration config = new Jwt.Configuration();
        config.jwtUsername = '';
        config.jwtSigningCertificateName = PRE_CREATED_CERTIFICATE_NAME;
        config.jwtHostname = '';
        config.jwtConnectedAppConsumerKey = '6MVG9ZsNvTsRRnx.BZjJLCHB.hXYNAVb_oM';
        Test.setMock(HttpCalloutMock.class, new Mock());
        String accessToken = new Jwt(config).requestAccessToken();
        System.assertEquals(FAKE_TOKEN, accessToken);
    }
}

Cool data tables using @RestResource, AngularJS and trNgGrid

I have an AngularJS application that shows tables of data using:

  • an Apex class that does dynamic SOQL and populates instances of a simple Apex class that are serialised to the client as JSON via the @RestResource annotation
  • AngularJS on the client side, which pretty much just passes the JSON data through to a page template
  • the excellent trNgGrid component and Bootstrap styling, which do all the presentation work

The Apex code is clean and simple:

@RestResource(urlMapping='/report/*') // mapping name illustrative
global without sharing class ReportRest {

    global class Claim {
        public String employeeName;
        public String department;
        public String reportsTo;
        public String claimNumber;
        public String status;
        public String leaveType;
        public Date startDate;
        public Date endDate;
        Claim(SObject c) {
            SObject e = c.getSObject('Employee__r');
            employeeName = (String) e.get('Name');
            department = (String) e.get('Department');
            SObject r = e.getSObject('ReportsTo');
            reportsTo = r != null ? (String) r.get('Name') : null;
            claimNumber = (String) c.get('Name');
            status = (String) c.get('Status__c');
            leaveType = (String) c.get('LeaveType__c');
            startDate = (Date) c.get('StartDate__c');
            endDate = (Date) c.get('EndDate__c');
        }
    }

    @HttpGet
    global static Claim[] get() {
        Claim[] claims = new Claim[] {};
        String soql = ...;
        for (SObject sob : Database.query(soql)) {
            claims.add(new Claim(sob));
        }
        return claims;
    }
}

and the trNgGrid markup is even more impressive:

<table tr-ng-grid="tr-ng-grid" class="table table-condensed" items="items"
        order-by="orderBy" order-by-reverse="orderByReverse">
    <thead>
        <tr>
            <th field-name="employeeName"/>
            <th field-name="department"/>
            <th field-name="reportsTo"/>
            <th field-name="claimNumber"/>
            <th field-name="status"/>
            <th field-name="leaveType"/>
            <th field-name="startDate" display-format="longDate" display-align="right"/>
            <th field-name="endDate" display-format="longDate" display-align="right"/>
        </tr>
    </thead>
</table>

You just define the column headers and trNgGrid generates the rows from the JSON data array (called “items” here). The resulting table has column sorting and column searching, and other features can be enabled too. As it is written in AngularJS, it leverages AngularJS features such as filters (“longDate” here) for custom formatting.

What is great about this arrangement is that there is no tedious coding involved: all the code serves a purpose and the grunt work is handled by the frameworks. It also scores high on “ease of modification”: an extra column only takes a few minutes to add.

(Contrast this with the JavaScript required in e.g. Connecting DataTables to JSON generated by Apex.)

Here is a screen shot from the real application (with different columns).


Hash code for any Apex type via System.hashCode

If you are writing your own class that you want to use in sets or as a map key, you must add equals and hashCode methods as described in Using Custom Types in Map Keys and Sets. But hashCode is usually implemented by combining the hash codes of the fields of the class and until now not all Apex types exposed a hash code value.
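The usual pattern is to derive the class’s hash code from its fields’ hash codes. Here is a minimal sketch of that pattern in Java (a hypothetical Point type; an Apex class takes the same shape, with System.hashCode now usable for each field):

```java
// Hypothetical value type whose hashCode combines its fields' hash codes
public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) {
            return false;
        }
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        // Combine the field hash codes using a prime multiplier
        int result = Integer.hashCode(x);
        result = 31 * result + Integer.hashCode(y);
        return result;
    }

    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = new Point(1, 2);
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // prints true
    }
}
```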

This is now addressed in Summer ’14 by the System.hashCode method. It saves a lot of work for SObjects and provides consistency for primitive types.

This test illustrates the method working:

@isTest
private class HashCodeTest {

    @isTest
    static void testString() {
        // Already had a hashCode method
        System.assertEquals('hello world'.hashCode(), System.hashCode('hello world'));
    }

    @isTest
    static void testDecimal() {
        // Wasn't available before
        System.assertEquals(3843600, System.hashCode(123.987));
        System.assertEquals(3843600, System.hashCode(123.987));
        System.assertEquals(0, System.hashCode(0));
        System.assertEquals(1, System.hashCode(1));
        System.assertEquals(2, System.hashCode(2));
        System.assertEquals(1024, System.hashCode(3.3));
        System.assertEquals(-720412809, System.hashCode(4773463427.34354345));
    }

    @isTest
    static void testSobject() {
        // Wasn't available before
        System.assertEquals(-1394890885, System.hashCode(new Contact(
                LastName = 'Doe')));
        System.assertEquals(-1394890885, System.hashCode(new Contact(
                LastName = 'Doe')));
        System.assertEquals(-1474401918, System.hashCode(new Contact(
                LastName = 'Smith')));
        System.assertEquals(744078320, System.hashCode(new Contact(
                LastName = 'Doe', FirstName = 'Jane')));
        System.assertEquals(744095627, System.hashCode(new Contact(
                LastName = 'Doe', FirstName = 'John')));
    }

    @isTest
    static void testNull() {
        try {
            System.assertEquals(0, System.hashCode(null));
        } catch (NullPointerException e) {
            // Null is not supported
        }
    }
}

Thanks to Daniel Ballinger for flagging this via a comment on Expose hashCode on all Apex primitives so hashCode viable for custom classes.

How to pass a large number of selections from a standard list view to a Visualforce page

Buttons can be defined and added to an object’s standard list view, and these buttons can access the selected objects.


If only a few items are selected then the IDs can be passed on to a Visualforce page like this (a “List Button” with “Display Checkboxes” checked that has behavior “Execute JavaScript” and content source “OnClickJavaScript”):

var ids = {!GETRECORDIDS($ObjectType.Contact)};
if (ids.length) {
    if (ids.length <= 100) {
        window.location = '/apex/Target?ids=' + ids.join(',');
    } else {
        alert('Select 100 or less');
    }
} else {
    alert('Select one or more Contacts');
}

The limit of 100 selected items is imposed because 2,000 characters is generally considered the longest URL that works safely everywhere, and with a GET each 15-character ID has to be appended to the URL together with a delimiter (to pass the values).
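That arithmetic can be sanity-checked directly, using the 2,000-character budget and the /apex/Target URL from this article:

```java
public class UrlLimit {
    public static void main(String[] args) {
        int maxUrlLength = 2000;                       // safe-everywhere URL budget
        int perId = 15 + 1;                            // 15-character ID plus a delimiter
        int baseLength = "/apex/Target?ids=".length(); // 17 characters
        int maxIds = (maxUrlLength - baseLength) / perId;
        System.out.println(maxIds); // prints 123, so a limit of 100 leaves headroom
    }
}
```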

So what if you need to support more than 100 selected objects? (The standard list view UI does allow selections to be made on multiple pages and allows a single page to show up to 200 objects.) Using a POST instead of a GET where the ID values are part of the request body rather than the URL avoids the URL length problem. Here is how to do that:

var ids = {!GETRECORDIDS($ObjectType.Contact)};
if (ids.length) {
    var form = document.createElement("form");
    form.setAttribute("method", "POST");
    form.setAttribute("action", "");
    var hiddenField = document.createElement("input");
    hiddenField.setAttribute("type", "hidden");
    hiddenField.setAttribute("name", "ids");
    hiddenField.setAttribute("value", ids.join(','));
    form.appendChild(hiddenField);
    document.body.appendChild(form);
    form.submit();
} else {
    alert('Select one or more Contacts');
}

This JavaScript creates a form and posts it to the server. An important part of making this work is that the full target URL needs to be specified, as described in this Get POST data via visualforce page article; otherwise the Apex controller can’t pick up the posted data via ApexPages.currentPage().getParameters(). An obscure trick to say the least.

This page:

<apex:page controller="Target">
    <apex:repeat value="{!ids}" var="id">
        <p>{!id}</p>
    </apex:repeat>
</apex:page>

and controller:

public with sharing class Target {
    public String[] ids {
        get {
            if (ids == null) {
                String s = ApexPages.currentPage().getParameters().get('ids');
                if (s != null) {
                    ids = s.split(',');
                } else {
                    ids = new String[] {};
                }
            }
            return ids;
        }
        private set;
    }
}

can be used to demonstrate that the IDs are passed correctly for both versions of the JavaScript button.

An @RestResource Apex class that returns multiple JSON formats

The simplest way to write an @RestResource class is to return Apex objects from the @Http methods and leave it up to the platform to serialize these objects as JSON (or XML):

@RestResource(urlMapping='/report/*') // mapping name illustrative
global without sharing class ReportRest {

    global class MyInnerClass {
        public String name;
        public Integer count; // "number" is a reserved word in Apex
    }

    @HttpGet
    global static MyInnerClass get() {
        MyInnerClass instance = new MyInnerClass();
        return instance;
    }
}

This also allows tests to be written that don’t have to deserialize as they can just reference the class instances directly. But the approach imposes these limitations:

  • The response JSON is fixed and determined by the returned classes and their fields so responses that vary depending on the URL requested can’t be produced
  • Error conditions typically get handled by adding error fields to the response object rather than by returning a status code other than 200 and separate error information

Here is an alternate pattern that is a bit more work but in my experience meets the needs of client-side MVC applications (AngularJS in my case) better. The class returns two different JSON formats (depending on the part of the URL after “/report/”):

@RestResource(urlMapping='/report/*')
global without sharing class ReportRest {

    private class EndUserMessageException extends Exception {
    }

    public class Day {
        public Date date;
        public Integer hours;
    }

    public class Employee {
        public String name;
        public Day[] approved = new Day[] {};
    }

    public class Claim {
        public String employeeName;
        public String claimNumber;
    }

    @HttpGet
    global static void get() {
        RestResponse res = RestContext.response;
        if (res == null) {
            res = new RestResponse();
            RestContext.response = res;
        }
        try {
            res.responseBody = Blob.valueOf(JSON.serialize(doGet(extractReportId())));
            res.statusCode = 200;
        } catch (EndUserMessageException e) {
            res.responseBody = Blob.valueOf(e.getMessage());
            res.statusCode = 400;
        } catch (Exception e) {
            res.responseBody = Blob.valueOf(
                    String.valueOf(e) + '\n\n' + e.getStackTraceString());
            res.statusCode = 500;
        }
    }

    private static Object doGet(String reportId) {
        if (reportId == 'ac') {
            return absenceCalendarReport();
        } else if (reportId == 'al') {
            return absenceListReport();
        } else if (reportId == 'dl') {
            return disabilityListReport();
        } else {
            throw new EndUserMessageException(reportId + ' not implemented');
        }
    }

    private static Employee[] absenceCalendarReport() {
        Employee[] employees = new Employee[] {};
        // ... query and populate ...
        return employees;
    }

    private static Claim[] absenceListReport() {
        Claim[] claims = new Claim[] {};
        // ... query and populate ...
        return claims;
    }

    private static Claim[] disabilityListReport() {
        Claim[] claims = new Claim[] {};
        // ... query and populate ...
        return claims;
    }

    private static String extractReportId() {
        String[] parts = RestContext.request.requestURI.split('\\/');
        String lastPart = parts[parts.size() - 1];
        Integer index = lastPart.indexOf('?');
        return index != -1 ? lastPart.substring(0, index) : lastPart;
    }
}

Apex classes are still used to represent the returned data, but they are explicitly serialized using a JSON.serialize call. As the overall response is being explicitly built, the returned status code can be set, allowing the client side to vary its logic depending on that status code. In this example, error information is returned as plain text that can be shown directly to the end user: either a message intended for the end user (signalled by the EndUserMessageException custom exception) or an unintended error, in which case a stack trace is included.
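The extractReportId logic, which takes the last path segment of the request URI and strips any query string, can be sketched outside Apex too. Here is a Java rendering (the example URIs are illustrative):

```java
public class ReportIdDemo {

    // Last path segment of the URI, with any query string stripped off
    static String extractReportId(String requestUri) {
        String[] parts = requestUri.split("/");
        String lastPart = parts[parts.length - 1];
        int index = lastPart.indexOf('?');
        return index != -1 ? lastPart.substring(0, index) : lastPart;
    }

    public static void main(String[] args) {
        System.out.println(extractReportId("/services/apexrest/report/ac?x=1")); // prints ac
        System.out.println(extractReportId("/services/apexrest/report/al"));     // prints al
    }
}
```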

Salesforce Packaged Modules – what do you think?

If you have worked on server-side JavaScript you will be familiar with the NPM (Node Packaged Modules) site. The Node.js community have managed to corral the loose world of JavaScript into some 74,000 packages that have been downloaded many millions of times. When I wanted an HTTP client, it took me just a few minutes to find one in NPM that I liked and it has been working fine ever since.

In comparison, I know of relatively few libraries of Apex code, and one of the oldest, apex-lang, presently shows only 1100 downloads. So I assume that of the (several) billion lines of Apex that have been written, just about none of it is re-used in more than one org. So not much standing on the shoulders of giants going on.

Yes there is the managed package mechanism, but I think of managed packages as containers for substantial functionality with typically no dependency on other 3rd party managed packages. Perhaps we have all been waiting for Salesforce to introduce some lighter-weight library mechanism.

Some cool features of NPM packages (see the package.json example below) are:

  • automatic dependency chains: if you install a package that requires other packages they will also be automatically installed
  • versioning and version dependency
  • repository locations included
  • very easy to use e.g. “npm install protractor” and 30 seconds later everything you need is installed and ready to run

What might this look like for Salesforce and Apex? There would need to be an spm command-line tool that could pull from (Git) repositories and push into Salesforce orgs (or a local file system) – seems possible. Some of the 40 characters available for names would have to be used to avoid naming collisions, and there would need to be a single name registry. The component naming convention could be something like spm_myns_MyClassName for an Apex class and spm_myns_Package for the package description static resource (that would contain JSON). Somehow, say, 90% test coverage would need to be mandatory. Without the managed package facility of source code being hidden it would inherently be open source (a good thing in my mind). Perhaps code should have no dependency on concrete SObject types? Perhaps only some component types should be allowed?

To get this started – apart from the tooling and a site – some truly useful contributions would be needed to demonstrate the value.

In the company where I work we have talked about sharing source code between managed packages but have not done it in any formalised way; I wonder if systems similar to what is described here are already in use in some companies? Or are already open sourced? Comments very welcome.

For reference, here is a slightly cut down NPM package.json example:

{
  "name": "protractor",
  "description": "Webdriver E2E test wrapper for Angular.",
  "homepage": "",
  "keywords": [ ... ],
  "author": {
    "name": "Julie Ralph",
    "email": ""
  },
  "dependencies": {
    "selenium-webdriver": "~2.39.0",
    "minijasminenode": ">=0.2.7",
    "saucelabs": "~0.1.0",
    "glob": ">=3.1.14",
    "adm-zip": ">=0.4.2",
    "optimist": "~0.6.0"
  },
  "devDependencies": {
    "expect.js": "~0.2.0",
    "chai": "~1.8.1",
    "chai-as-promised": "~4.1.0",
    "jasmine-reporters": "~0.2.1",
    "mocha": "~1.16.0",
    "express": "~3.3.4",
    "mustache": "~0.7.2"
  },
  "repository": {
    "type": "git",
    "url": "git://"
  },
  "bin": {
    "protractor": "bin/protractor",
    "webdriver-manager": "bin/webdriver-manager"
  },
  "main": "lib/protractor.js",
  "scripts": {
    "test": "node lib/cli.js spec/basicConf.js; ..."
  },
  "license": "MIT",
  "version": "0.16.1",
  "webdriverVersions": {
    "selenium": "2.39.0",
    "chromedriver": "2.8",
    "iedriver": "2.39.0"
  },
  "readme": "...",
  "readmeFilename": "",
  "bugs": {
    "url": ""
  },
  "_id": "protractor@0.16.1",
  "_from": "protractor@0.16.x"
}

Serving AngularJS templates from static resources

An AngularJS app typically starts with an “index” page that loads the required JavaScript/CSS and acts as the container for the client-side processed content. The app operates by rendering various templates in response to user interactions into that container.

That “index” page is a good place to obtain information from the “Visualforce” world that can be passed to the “AngularJS” world, and so is best made a Visualforce page. (See Passing platform configuration to an AngularJS app.)

But what about the templates? Typically there are many of these. Should they also be Visualforce pages? At first sight it seems a reasonable thing to do, as the templates are “partial pages”. And Visualforce pages have fixed URLs, whereas static resources have URLs that include a timestamp, making them harder to reference in JavaScript code such as a route provider. And if you use individual static resources per template (rather than a ZIP static resource containing all the templates), each template has its own timestamp.

But provided a clear separation has been made between server-side processing and client-side processing, no Visualforce capabilities are needed for the templates. And using Visualforce pages adds complexity, such as requiring profiles to be updated. So how can the static resource timestamp value be handled if static resources are used instead?

The answer is surprisingly simple: it appears that using the current (JavaScript) timestamp is enough to get the latest version. So a $routeProvider templateUrl for a static resource called “xyz_partial” is simply:

templateUrl: '/resource/' + new Date().getTime() + '/xyz_partial'

You can see this pattern applied in this (quite new) Salesforce AngularJS sample application created by Pat Patterson.

PS As David Esposito comments, where there are only a small number of resource references, it is arguably cleaner not to use this timestamp approach.