My week with Heroku Connect

Heroku Connect replicates and synchronises data between a Salesforce org and a Heroku Postgres database. So you can build an app in Heroku using one of seven technologies (including Node.js), hook that app up to the Postgres database, and then your app users will be able to see and modify the same data that the Salesforce users see and modify. Bear in mind that this mechanism allows you to share data: it does not allow you to share business logic.

As this solution is built in/on Heroku, it is incredibly easy to set up and get working; I had the Demo Edition set up and the first couple of SObjects working inside an hour. The mapping UI involves a lot of checkbox clicking, but once done the configuration can be exported and imported. The UI provides an overview and drill-down on what is going on. One minor problem I had was quickly and efficiently sorted out by Heroku support. All good.

The clean and simple scenario is where the Postgres data is largely handled as read-only. Then (besides whatever technologies you are writing the Heroku app in) the main thing to get used to is that you are writing SQL, not SOQL. There are also two identifier values: the SFID, which is the 18-character (case-insensitive) Salesforce ID we are used to, and a local Postgres integer ID. Foreign key fields from Salesforce reference the SFID, not the ID.
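To make this concrete, here is a minimal read sketch using pg-promise. It assumes the default Heroku Connect mapping, where SObjects become lowercased tables such as contact and account in a salesforce schema, and uses the DATABASE_URL config var that Heroku sets for the attached Postgres:

var pgp = require('pg-promise')();
var db = pgp(process.env.DATABASE_URL);

// The join is on the parent's SFID, not its local integer ID: foreign key
// fields replicated from Salesforce (here accountid) hold 18-character SFIDs
function contactsWithAccounts() {
    return db.any(
            'SELECT c.id, c.sfid, c.lastname, a.name AS accountname'
            + ' FROM salesforce.contact c'
            + ' JOIN salesforce.account a ON a.sfid = c.accountid'
            );
}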

Things get more awkward when you want to insert or update the Postgres data. Heroku Connect handles the synchronisation in that direction too, but some aspects of the implementation leak out:

  • The SFID is not available in the transaction where an insert is done (as it is allocated in Salesforce). In my very limited tests, it was available about a second later; see the polling sketch after this list.
  • Fields such as CreatedDate, and any fields populated by Salesforce logic such as insert or update triggers, remain empty until the next synchronisation; in the basic polling mode that can be up to 10 minutes later. So your users may see blank fields when they insert data, and only see those fields populated when the UI is refreshed after the synchronisation has run.
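Here is the polling sketch mentioned above, again using pg-promise; the table name, the one second delay and the retry count are just guesses based on my very limited tests:

// Insert a row, then poll on the local integer ID until the synchronisation
// has filled in the SFID (allocated in Salesforce, so not available at first)
function insertAndAwaitSfid(db, lastName) {
    return db.one(
            'INSERT INTO salesforce.contact (lastname) VALUES ($1) RETURNING id',
            [lastName]
            ).then(function(row) {
        return pollForSfid(db, row.id, 10);
    });
}

function pollForSfid(db, id, retriesLeft) {
    return db.one('SELECT sfid FROM salesforce.contact WHERE id = $1', [id])
            .then(function(row) {
        if (row.sfid) return row.sfid;
        if (retriesLeft === 0) throw new Error('SFID not yet assigned');
        return new Promise(function(resolve) { setTimeout(resolve, 1000); })
                .then(function() { return pollForSfid(db, id, retriesLeft - 1); });
    });
}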

Inserting objects into Postgres that have parent/child relationships is awkward. For master/detail, the required pattern is documented in Inserting records with master/detail relationships using external IDs. It requires an “External ID” field to be added to the Salesforce objects (so it is a little intrusive on that side), and the Heroku code needs to populate that field and (an automatically added) foreign key column with a matching unique value. This allows Salesforce to create the same master/detail relationship using the normal Salesforce identifier. I don’t know whether the Salesforce foreign key is pushed back into Postgres as promptly as the SFID is, but I hope it is. There is no mention of how to accomplish this for lookup relationships.
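As a rough illustration of that documented pattern (the object names, and the name Heroku Connect gives the detail’s foreign key column, are all hypothetical here), the Heroku code generates a unique value and writes it to both sides:

var uuid = require('node-uuid'); // or any other UUID generator

function insertMasterAndDetail(db) {
    var externalId = uuid.v4();
    return db.tx(function(t) {
        return t.batch([
            t.none('INSERT INTO salesforce.master__c (name, external_id__c)'
                    + ' VALUES ($1, $2)', ['Parent', externalId]),
            t.none('INSERT INTO salesforce.detail__c (name, master__r__external_id__c)'
                    + ' VALUES ($1, $2)', ['Child', externalId])
        ]);
    });
}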

I must mention the pleasure and productivity of writing the full application stack in one language, JavaScript: AngularJS for the client and Node.js/Express/pg-promise/Passport for the server. All quick to get running against existing Salesforce data thanks to Heroku and Heroku Connect.


Picklist values by record type for AngularJS UI

A convenient Apex API to get the picklist value sub-set per record type has yet to appear. Visualforce does the sub-setting, but that is no help if you are building UI in some other technology such as AngularJS and want the select lists to reflect the sub-setting.

Here is a work-around for that situation, where the aim is to provide JSON to the AngularJS client-side code via a static resource. An API that provides access to both record types and static resources is the Metadata API. Thanks to Andrew Fawcett’s work (see financialforcedev/apex-mdapi) and improvements in the underlying Salesforce API, this API is now quite simple to call from Apex code.

In the code below, the readPicklists method reads the record type information for five record types of a custom SObject called cve__BenefitClaimed__c. (This SObject and its record types are in a managed package that has the namespace prefix cve. Note also that it is the developer name of the record type that is used.) The picklist value data is extracted and placed in nested maps, where the first key is the SObject type name, the second key is the record type (developer) name, and the third key is the field name, with the value being the list of valid picklist values. The updateStaticResource method updates a pre-existing static resource (in a managed package that has the namespace prefix cveep) with the JSON string version of the nested maps.

I run this from a Visualforce admin page. If the picklist assignments are changed, the code is manually re-run.

The result is that the AngularJS code can use its $http service to get the static resource. The data is pre-created and so available quickly, and is also already in JSON format so is easy for the AngularJS code to consume.
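For illustration, here is a minimal sketch of that client side; the module, service and field names are mine, and the timestamped static resource URL trick is covered in Serving AngularJS templates from static resources below:

angular.module('eepApp').factory('PicklistService', ['$http', function($http) {
    var promise = $http.get('/resource/' + Date.now() + '/cveep__RecordTypePicklistValues')
            .then(function(response) { return response.data; });
    return {
        // e.g. values('cve__BenefitClaimed__c', 'cve__Accident', 'cve__Status__c')
        values: function(sobjectType, recordType, field) {
            return promise.then(function(data) {
                return data[sobjectType][recordType][field];
            });
        }
    };
}]);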

Here is the Apex code; sorry it’s rather wide…

public PageReference updatePicklists() {
    
    final String[] recordTypeFullNames = new String[] {
            'cve__BenefitClaimed__c.cve__Accident',
            'cve__BenefitClaimed__c.cve__LongTermCare',
            'cve__BenefitClaimed__c.cve__LongTermDisability',
            'cve__BenefitClaimed__c.cve__ShortTermDisability',
            'cve__BenefitClaimed__c.cve__Survivor'
            };
    
    final String staticResourceFullName = 'cveep__RecordTypePicklistValues';
            
    MetadataService.MetadataPort service = new MetadataService.MetadataPort();
    service.SessionHeader = new MetadataService.SessionHeader_element();
    service.SessionHeader.sessionId = UserInfo.getSessionId();
    
    String jsonString = readPicklists(service, recordTypeFullNames);
    updateStaticResource(service, staticResourceFullName, jsonString);
    
    return null;
}

private String readPicklists(MetadataService.MetadataPort service, String[] recordTypeFullNames) {
    
    Map<String, Map<String, Map<String, List<String>>>> sobMap = new Map<String, Map<String, Map<String, List<String>>>>();
    for (MetadataService.RecordType rt : (MetadataService.RecordType[]) service.readMetadata('RecordType', recordTypeFullNames).getRecords()) {
        if (rt.fullName != null && rt.picklistValues != null) {
            String[] parts = rt.fullName.split('\\.');
            String sobjectType = parts[0];
            String recordType = parts[1];
            Map<String, Map<String, List<String>>> rtMap = sobMap.get(sobjectType);
            if (rtMap == null) {
                rtMap = new Map<String, Map<String, List<String>>>();
                sobMap.put(sobjectType, rtMap);
            }
            Map<String, List<String>> fieldMap = rtMap.get(recordType);
            if (fieldMap == null) {
                fieldMap = new Map<String, List<String>>();
                rtMap.put(recordType, fieldMap);
            }
            for (MetadataService.RecordTypePicklistValue picklist : rt.picklistValues) {
                if (picklist.values != null) {
                    List<String> valueList = fieldMap.get(picklist.picklist);
                    if (valueList == null) {
                        valueList = new List<String>();
                        fieldMap.put(picklist.picklist, valueList);
                    }
                    for (MetadataService.PicklistValue value : picklist.values) {
                        valueList.add(value.fullName);
                    }
                }
            }
        }
    }
    
    return JSON.serialize(sobMap);
}

private void updateStaticResource(MetadataService.MetadataPort service, String staticResourceFullName, String jsonString) {
    
    MetadataService.StaticResource sr = new MetadataService.StaticResource();
    sr.fullName = staticResourceFullName;
    sr.contentType = 'text/json';
    sr.cacheControl = 'public';
    sr.content = EncodingUtil.base64Encode(Blob.valueOf(jsonString));
    
    MetadataService.SaveResult[] results = service.updateMetadata(new MetadataService.StaticResource[] {sr});
    for (MetadataService.SaveResult r : results) {
        if (!r.success) {
            String[] errors = new String[] {};
            if (r.errors != null) {
                for (MetadataService.Error e : r.errors) {
                    errors.add('message=' + e.message + ' statusCode=' + e.statusCode + ' fields=' + e.fields);
                }
            }
            throw new EndUserMessageException('Error: ' + String.join(errors, '; '));
        }
    }
}

PS A maximum of 10 record types can be read at once, so use multiple calls if you require more than 10.

Cool data tables using @RestResource, AngularJS and trNgGrid

I have an AngularJS application that shows tables of data using:

  • an Apex class that does dynamic SOQL and populates instances of a simple Apex class that are serialised to the client as JSON via the @RestResource annotation
  • AngularJS client-side code that pretty much just passes the JSON data through to a page template
  • the excellent trNgGrid component, together with Bootstrap styling, which does all the presentation work

The Apex code is clean and simple:

@RestResource(urlMapping='/report')
global without sharing class ReportRest {
    global class Claim {
        public String employeeName;
        public String department;
        public String reportsTo;
        public String claimNumber;
        public String status;
        public String leaveType;
        public Date startDate;
        public Date endDate;
        Claim(SObject c) {
            SObject e = c.getSObject('Employee__r');
            employeeName = (String) e.get('Name');
            department = (String) e.get('Department');
            SObject r = e.getSObject('ReportsTo');
            reportsTo = r != null ? (String) r.get('Name') : null;     
            claimNumber = (String) c.get('Name');
            status = (String) c.get('Status__c');
            leaveType = (String) c.get('LeaveType__c');
            startDate = (Date) c.get('StartDate__c');
            endDate = (Date) c.get('EndDate__c');
        }
    }
    @HttpGet  
    global static Claim[] get() {
        Claim[] claims = new Claim[] {};
        String soql = ...;
        for (SObject sob : Database.query(soql)) {
            claims.add(new Claim(sob));
        }
        return claims;
    }
}

and the trNgGrid markup is even more impressive:

<table tr-ng-grid="tr-ng-grid" class="table table-condensed" items="items"
      order-by="orderBy" order-by-reverse="orderByReverse">
  <thead>
    <tr>
      <th field-name="employeeName"/>
      <th field-name="department"/>
      <th field-name="reportsTo"/>
      <th field-name="claimNumber"/>
      <th field-name="status"/>
      <th field-name="leaveType"/>
      <th field-name="startDate" display-format="longDate" display-align="right"/>
      <th field-name="endDate" display-format="longDate" display-align="right"/>
    </tr>
  </thead>
</table>

You just define the column headers and trNgGrid generates the rows from the JSON data array (called “items” here). The resulting table has column sorting and column searching and other features can be enabled too. As it is written in AngularJS, it leverages AngularJS features such as filters (“longDate” here) for custom formatting.
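For completeness, here is a sketch of the controller that populates “items”; the module and controller names are illustrative, and authentication handling is omitted:

angular.module('eepApp').controller('ReportController',
        ['$scope', '$http', function($scope, $http) {
    $scope.items = [];
    // The URL matches the @RestResource urlMapping of the Apex class above
    $http.get('/services/apexrest/report')
            .then(function(response) { $scope.items = response.data; });
}]);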

What is great about this arrangement is that there is no tedious coding involved: all the code serves a purpose and the grunt work is handled by the frameworks. It also scores high on “ease of modification”: an extra column only takes a few minutes to add.

(Contrast this with the JavaScript required in e.g. Connecting DataTables to JSON generated by Apex.)

Here is a screen shot from the real application (with different columns):

[Screenshot: trNgGrid table from the application]

An @RestResource Apex class that returns multiple JSON formats

The simplest way to write an @RestResource class is to return Apex objects from the @Http methods and leave it up to the platform to serialize these objects as JSON (or XML):

@RestResource(urlMapping='/report/*')
global without sharing class ReportRest {

    public class MyInnerClass {
        public String name;
        public Integer count;
    }

    @HttpGet  
    global static MyInnerClass get() {
        MyInnerClass instance = new MyInnerClass();
        ...
        return instance;
    }
}

This also allows tests to be written that don’t have to deserialize as they can just reference the class instances directly. But the approach imposes these limitations:

  • The response JSON is fixed and determined by the returned classes and their fields so responses that vary depending on the URL requested can’t be produced
  • Error conditions typically get handled by adding error fields to the response object rather than by returning a status code other than 200 and separate error information

Here is an alternative pattern that is a bit more work but that, in my experience, better meets the needs of client-side MVC applications (AngularJS in my case). The class returns two different JSON formats, depending on the part of the URL after “/report/”:

@RestResource(urlMapping='/report/*')
global without sharing class ReportRest {

    public class Day {
        public Date date;
        public Integer hours;
    }
    
    public class Employee {
        public String name;
        public Day[] approved = new Day[] {};
    }
    
    public class Claim {
        public String employeeName;
        public String claimNumber;
    }
 
    @HttpGet  
    global static void get() {
        RestResponse res = RestContext.response;
        if (res == null) {
            res = new RestResponse();
            RestContext.response = res;
        }
        try {
            res.responseBody = Blob.valueOf(JSON.serialize(doGet(extractReportId())));
            res.statusCode = 200;
        } catch (EndUserMessageException e) {
            res.responseBody = Blob.valueOf(e.getMessage());
            res.statusCode = 400;
        } catch (Exception e) {
            res.responseBody = Blob.valueOf(
                    String.valueOf(e) + '\n\n' + e.getStackTraceString()
                    );
            res.statusCode = 500;
        }
    }
    
    private static Object doGet(String reportId) {
        if (reportId == 'ac') {
            return absenceCalendarReport();
        } else if (reportId == 'al') {
            return absenceListReport();
        } else if (reportId == 'dl') {
            return disabilityListReport();
        } else {
            throw new EndUserMessageException(reportId + ' not implemented');
        }
    }
    
    private static Employee[] absenceCalendarReport() {
        Employee[] employees = new Employee[] {};
        ...
        return employees;
    }
    
    private static Claim[] absenceListReport() {
        Claim[] claims = new Claim[] {};
        ...
        return claims;
    }
    
    private static Claim[] disabilityListReport() {
        Claim[] claims = new Claim[] {};
        ...
        return claims;
    }
    
    private static String extractReportId() {
        String[] parts = RestContext.request.requestURI.split('\\/');
        String lastPart = parts[parts.size() - 1];
        Integer index = lastPart.indexOf('?');
        return index != -1 ? lastPart.substring(0, index) : lastPart;
    }
}

Apex classes are still used to represent the returned data, but they are explicitly serialised using a JSON.serialize call. As the overall response is being explicitly built, the returned status code can be set, allowing the client side to vary its logic depending on that status code. In this example, error information – whether intended to be shown to an end user (as signalled by the EndUserMessageException custom exception) or unintended (and so including a stack trace) – is returned as plain text that can be directly shown to the end user.
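On the client side, the status code then cleanly separates the success and failure paths; a sketch:

$http.get('/services/apexrest/report/al')
        .then(function(response) {
            $scope.claims = response.data;       // 2xx: the serialised Claim[]
        }, function(response) {
            $scope.errorMessage = response.data; // 4xx/5xx: plain text message
        });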

Serving AngularJS templates from static resources

An AngularJS app typically starts with an “index” page that loads the required JavaScript/CSS and acts as the container for the client-side processed content. The app operates by rendering various templates in response to user interactions into that container.

That “index” page is a good place to obtain information from the “Visualforce” world that can be passed to the “AngularJS” world, and so is best made a Visualforce page. (See Passing platform configuration to an AngularJS app.)

But what about the templates? Typically there are many of these. Should they also be Visualforce pages? At first sight it seems a reasonable thing to do, as the templates are “partial pages”. And Visualforce pages have fixed URLs, whereas static resources have URLs that include a timestamp, making them harder to reference in JavaScript code such as a route provider. And if you use individual static resources per template (rather than a ZIP static resource containing all the templates), each template has its own timestamp.

But provided a clear separation has been made between server-side processing and client-side processing, no Visualforce capabilities are needed for the templates. And using Visualforce pages adds complexity, such as requiring profiles to be updated. So how can the static resource timestamp value be handled if static resources are used instead?

The answer is surprisingly simple: it appears that using the current (JavaScript) timestamp is enough to get the latest version. So a $routeProvider templateUrl for a static resource called “xyz_partial” is simply:

templateUrl: '/resource/' + Date.now() + '/xyz_partial'
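Or, in the context of a route configuration (the route and controller names here are illustrative, and ngRoute is assumed to be loaded):

angular.module('eepApp').config(['$routeProvider', function($routeProvider) {
    $routeProvider.when('/xyz', {
        templateUrl: '/resource/' + Date.now() + '/xyz_partial',
        controller: 'XyzController'
    });
}]);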

You can see this pattern applied in this (quite new) Salesforce AngularJS sample application created by Pat Patterson.

PS As David Esposito comments, where there are only a small number of resource references, it is arguably cleaner to not use this timestamp approach.

Passing platform configuration to an AngularJS app

Running a JavaScript client-side MVC app such as an AngularJS app in Salesforce presents the problem of how to obtain configuration information from the platform. Most of the app is best located in a static resource zip file as server-side Visualforce processing isn’t needed. Using relative URLs between the various files in the zip then avoids any dependency on the absolute URL of the zip. (That absolute URL includes a timestamp and also a namespace prefix if a managed package is involved so the fewer references to it the better.)

But there are still a few configuration parameters that are easiest to obtain using Visualforce. The index Visualforce page – that dynamic page content is inserted into – is a good single place to obtain that information and make it available to the rest of the app through JavaScript via Angular’s constant mechanism:

<apex:page showHeader="false" sidebar="false"
        standardStylesheets="false" applyHtmlTag="false">
<html lang="en" ng-app="eepApp" ng-controller="AppController">
<head>...</head>
<body>...

<script src="{!URLFor($Resource.appzip, 'lib/angular/angular.min.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/app.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/controllers.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/filters.js')}"></script>
<script src="{!URLFor($Resource.appzip, 'js/services.js')}"></script>

<script>
(function() {
    var parts = '{! $CurrentPage.Name }'.split('__');
    var namespace = parts.length == 2 ? parts[0] : null;
    var restPrefix = '{! $Site.CurrentSiteUrl }services/apexrest'
            + (namespace ? '/' + namespace : '');
    var pagePrefix = 'https://{! $Site.Domain }';
    var serverUrls = {
        namespacePrefix: namespace ? namespace + '__' : '',
        configRest: restPrefix + '/eep/config',
        employeesRest: restPrefix + '/eep/employees',
        metaRest: restPrefix + '/eep/meta',
        loginPage: pagePrefix + '{! $Page.Login }',
        logoutPage: pagePrefix + '{! $Page.Logout }'
    };
    console.log('serverUrls=' + JSON.stringify(serverUrls));
    
    // This configures the Angular app (declared in app.js)
    eepApp.constant('ServerUrls', serverUrls);
})();
</script>
  
</body>
</html>
</apex:page>

With this setup, any service or controller that needs to reference one of the configuration values just declares a dependency on the ServerUrls object and references the values from that. The result is a clean separation of concerns.
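For example, a service that needs one of the REST endpoints (the service name and response handling are illustrative) looks like this:

// eepApp is the Angular module declared in app.js
eepApp.factory('ConfigService', ['$http', 'ServerUrls', function($http, ServerUrls) {
    return {
        config: function() {
            return $http.get(ServerUrls.configRest)
                    .then(function(response) { return response.data; });
        }
    };
}]);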

JavaScript testing

Automated tests bring many benefits, but often jQuery JavaScript that is tightly coupled to its associated HTML doesn’t have such tests. As more logic moves into the client, this becomes less acceptable.

AngularJS is designed with testability high on the priority list, and I can confirm that its design and the testing frameworks available make writing JavaScript tests surprisingly easy and often quite enjoyable. I must admit to writing the tests after building the application, and so had to refactor the application to make it more testable. But as with other languages, this refactoring also improved the design of the application code.

I used Jasmine for my tests, both unit tests and end-to-end tests. Here is a unit test example that shows how clean the tests can look:

describe("The UtilService should", function() {

    var service;

    // Load the application's module so that its services can be injected
    beforeEach(module('eepApp'));

    beforeEach(inject(function($injector) {
        service = $injector.get('UtilService');
    }));
    
    it('generate arrays', function() {
        expect(service.range(2, 6, 1)).toEqual([2, 3, 4, 5, 6]);
        expect(service.range(-5, 5, 3)).toEqual([-5, -2, 1, 4]);
    });
    
    it('format dates', function() {
        expect(service.formatDate(new Date(2013, 11, 25))).toEqual('2013-12-25');
        expect(service.formatDate(new Date(2014, 0, 3))).toEqual('2014-01-03');
    });
});

For unit tests, there is the rather amazing Karma test runner. This lets you run all your tests – hundreds if you have them, it is that quick – every time you save a JavaScript file. You can use locally installed browsers or run the tests on multiple browsers simultaneously via Selenium Grid locally or out in the cloud. Companies such as SauceLabs offer a huge number of browser/OS combinations so you can go as far as you need to with browser compatibility testing. And it is easy to hook into your Continuous Integration server such as Jenkins:

[Screenshot: Karma test results in Jenkins]
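As a rough illustration, a minimal karma.conf.js along these lines drives the unit tests; the file paths are illustrative:

module.exports = function(config) {
    config.set({
        frameworks: ['jasmine'],
        files: [
            'lib/angular/angular.min.js',
            'lib/angular/angular-mocks.js',
            'js/*.js',
            'test/unit/*.js'
        ],
        browsers: ['Chrome'],
        autoWatch: true // re-run the tests on every file save
    });
};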

For end-to-end tests there is the Protractor test framework. This also works with Selenium Grid and integrates with Jenkins. This took more work to get going, mainly because I had to add and call a test-only Apex @RestResource class to clear the application data at the start of each test run so the tests could assume no data as a starting point.
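A sketch of how that clearing might be wired in (the endpoint, environment variables and auth handling are all hypothetical):

// In protractor.conf.js: call the test-only @RestResource before the suite runs
var request = require('request');

exports.config = {
    specs: ['test/e2e/*.js'],
    onPrepare: function() {
        return new Promise(function(resolve, reject) {
            request.post(
                    process.env.TEST_BASE_URL + '/services/apexrest/testdata/clear',
                    {auth: {bearer: process.env.TEST_SESSION_ID}},
                    function(err) { if (err) reject(err); else resolve(); }
                    );
        });
    }
};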

Thanks to Node.js packages, these test environments are really easy to set up too.