JavaScript testing

Automated tests bring many benefits, but jQuery-based JavaScript that is tightly coupled to its HTML often has no such tests. As more logic moves into the client, this becomes less and less acceptable.

AngularJS is designed with testability high on the priority list, and I can confirm that its design and the testing frameworks available make writing JavaScript tests surprisingly easy and often quite enjoyable. I must admit to writing the tests after building the application, and so I had to refactor the application to make it more testable. But as with other languages, this refactoring also improved the design of the application code.

I used Jasmine for my tests, both unit tests and end-to-end tests. Here is a unit test example that shows how clean the tests can look:

describe("The UtilService should", function() {

    var service;
    beforeEach(inject(function($injector) {
        service = $injector.get('UtilService');
    }));

    it('generate arrays', function() {
        expect(service.range(2, 6, 1)).toEqual([2, 3, 4, 5, 6]);
        expect(service.range(-5, 5, 3)).toEqual([-5, -2, 1, 4]);
    });

    it('format dates', function() {
        expect(service.formatDate(new Date(2013, 11, 25))).toEqual('2013-12-25');
        expect(service.formatDate(new Date(2014, 0, 3))).toEqual('2014-01-03');
    });
});

For unit tests, there is the rather amazing Karma test runner. It lets you run all your tests – hundreds if you have them, it is that quick – every time you save a JavaScript file. You can use locally installed browsers, or run the tests on multiple browsers simultaneously via Selenium Grid, either locally or out in the cloud. Companies such as Sauce Labs offer a huge number of browser/OS combinations, so you can go as far as you need to with browser compatibility testing. And it is easy to hook into a Continuous Integration server such as Jenkins.


For end-to-end tests there is the Protractor test framework. It also works with Selenium Grid and integrates with Jenkins. Protractor took more work to get going, mainly because I had to add (and call) a test-only Apex @RestResource class to clear the application data at the start of each test run, so that the tests could assume no data as a starting point.

Thanks to Node.js packages, these test environments are really easy to set up too.

Testing a Database.Batchable implementation

I have some processing code that makes use of batch Apex to avoid hitting governor limits. Running a representative test generates this error:

System.UnexpectedException: No more than one executeBatch can be called from within a testmethod. Please make sure the iterable returned from your start method matches the batch size, resulting in one executeBatch invocation.

Testability is an important feature, but it is sadly missing in a few areas of the platform, including this one…

The normal work-around is suggested in the error message. But using that means that the code does not get tested across a batch boundary: such testing is particularly important for Database.Batchable implementations that also implement Database.Stateful to maintain state across batches.

My processing code also makes use of batch chaining, where in the finish method of the Database.Batchable a further instance is sometimes created and executed. Unfortunately (but not surprisingly) that chained execution also generates the above error, and there is no easy work-around.

Below is a small class I created to work around both of these problems. Instead of invoking Database.executeBatch, invoke BatchableExecutor.executeBatch. When called from a test, this method makes the start/execute(s)/finish pattern of calls synchronously itself and so avoids the above errors. As long as the test uses a small batch size (e.g. 3) and makes sure a moderate number of records are returned by the start method (e.g. 10), the Database.Batchable logic can be pretty fully tested without hitting any governor limits.

/**
 * Allows basic testing of a Database.Batchable using more than one batch.
 */
public class BatchableExecutor {

    private static final String KEY_PREFIX = AsyncApexJob.SObjectType.getDescribe().getKeyPrefix();

    public static Id executeBatch(Database.Batchable<SObject> batchable, Integer scopeSize) {
        if (!Test.isRunningTest()) {
            return Database.executeBatch(batchable, scopeSize);
        } else {
            return executeBatchSynchronously(batchable, scopeSize);
        }
    }

    private static Id executeBatchSynchronously(Database.Batchable<SObject> batchable, Integer scopeSize) {
        // A fake implementation of this interface could be added as needed
        Database.BatchableContext bc = null;
        // Invoke start (assumes a QueryLocator is being used)
        Database.QueryLocator start = (Database.QueryLocator) batchable.start(bc);
        Database.QueryLocatorIterator iter = start.iterator();
        List<SObject> sobs = new List<SObject>();
        try {
            // Invoke execute for each batch of scopeSize records
            while (iter.hasNext()) {
                if (sobs.size() == scopeSize) {
                    // These calls could be wrapped in try/catch too for negative tests
                    batchable.execute(bc, sobs);
                    sobs = new List<SObject>();
                }
                sobs.add(iter.next());
            }
            if (sobs.size() > 0) {
                batchable.execute(bc, sobs);
            }
        } finally {
            // Invoke finish
            batchable.finish(bc);
        }
        // Fake id
        return KEY_PREFIX + '000000000000';
    }
}

Bear in mind that this code does an OK job of emulating the happy path only (as demonstrated by the test below). It also obviously cannot emulate the transaction boundaries, asynchronous execution and object lifecycle of the real mechanism.

@isTest
private class BatchableExecutorTest {

    private class Fixture {

        List<Account> accounts = new List<Account>();

        Fixture addAccounts(Integer objectCount) {
            for (Integer i = 0; i < objectCount; i++) {
                accounts.add(new Account(Name = 'target-' + i, Site = null));
            }
            insert accounts;
            return this;
        }

        Fixture execute(Boolean useDatabaseExecuteBatch, Integer batchSize) {
            // Test that both mechanisms produce the same results
            Test.startTest();
            Id jobId = useDatabaseExecuteBatch
                    ? Database.executeBatch(new BatchableExecutorTestBatchable(), batchSize)
                    : BatchableExecutor.executeBatch(new BatchableExecutorTestBatchable(), batchSize);
            Test.stopTest();
            System.assertNotEquals(null, jobId);
            return this;
        }

        Fixture assert(Integer expectedBatches) {
            System.assertEquals(1, [select Count() from Account where Name = 'start']);
            System.assertEquals(expectedBatches, [select Count() from Account where Name = 'execute']);
            System.assertEquals(1, [select Count() from Account where Name = 'finish']);
            System.assertEquals(accounts.size(), [select Count() from Account where Name like 'target-%' and Site = 'executed']);
            return this;
        }
    }

    @isTest
    static void batchableExecutorExecuteBatch() {
        new Fixture().addAccounts(10).execute(false, 3).assert(4);
    }

    @isTest
    static void databaseExecuteBatch() {
        new Fixture().addAccounts(10).execute(true, 10).assert(1);
    }
}

// Has to be a top-level class
public class BatchableExecutorTestBatchable implements Database.Batchable<SObject>, Database.Stateful {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        Database.QueryLocator ql = Database.getQueryLocator([select Name from Account order by Name]);
        insert new Account(Name = 'start');
        return ql;
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        for (SObject sob : scope) {
            Account a = (Account) sob;
            a.Site = 'executed';
        }
        update scope;
        insert new Account(Name = 'execute');
    }

    public void finish(Database.BatchableContext bc) {
        insert new Account(Name = 'finish');
    }
}

“AsyncApexTests Limit exceeded” blocker

I’ve just hit this “AsyncApexTests Limit exceeded” error when I try to run tests in an org. Running the tests in a different way reports this:

To protect all customers from excessive usage and Denial of Service attacks, we limit the number of long-running requests that are processed at the same time by an organization. Your request has been denied because this limit has been exceeded by your organization. Please try your request again later.

Googling reveals various posts on the subject, and the limit is documented in Understanding Execution Governors and Limits.

But having to wait 24 hours before I can continue working? FUBAR.

Cleaner inner class test fixture pattern

I favor this inner class test fixture pattern for Apex unit tests where a set of related objects is required. It leverages Apex’s support for named parameters when concrete SObject types are created, making the code self-describing and quick to modify, including when fields are added. (Patterns like builder typically need extra methods added when new fields are added.) And, rather late in the day, I’ve noticed that the result of an assignment is the value assigned (as in languages like Java).

These two language features together allow this high signal-to-noise ratio fixture style where a single line initializes, assigns and inserts each object:

@isTest
private class OneTwoThreeTest {

    class Fixture {
        One__c one;
        Two__c two;
        Three__c three;
        Fixture() {
            insert one = new One__c(Number__c = 12.34);
            insert two = new Two__c(One__c = one.Id, String__c = 'Hello');
            insert three = new Three__c(Two__c = two.Id, Checkbox__c = true);
        }
    }

    @isTest
    static void test() {
        Fixture f = new Fixture();
        // ...
    }
}

When runAllTests=”false” actually means runAllTests=”true”

After several deployments into test environments, we deployed into a customer’s production environment yesterday. One of the deployment steps after installing some managed packages is to push several profiles into the target org using the Ant deploy task with a package.xml that includes just the profiles. It was an unwelcome surprise that all the unit tests in the production org ran. These are tests written by a third party that could have had dangerous side effects; in previous deployments this had not happened, and the presence of runAllTests=”false” in the Ant script suggested it should not.

The explanation is in the Ant deploy runAllTests documentation:

This parameter is ignored when deploying to a Salesforce production organization. Every unit test in your organization namespace is executed.

Whatever the motivation for this behavior, I suggest that returning an error message (containing this text) when runAllTests=”false” is specified for a production org would be a better approach to handling the situation than just ignoring the attribute and running the tests.

Inner class test fixture pattern

Most tests require one or more related SObjects to be created in a known configuration. There are many ways this can be coded:

  • Inline in the static test method: offers no re-use
  • Created by a static helper method in the test class: not easy for the test method to access more than one of the created objects
  • Created by a separate fixture class: hard to support all the variations needed, and a lack of cohesion

In general I suggest the pattern illustrated below – using an inner test fixture class – is more effective. It addresses the problems listed above and is tolerant of change:

@isTest
private class XyzTest {

    // If variations in the Fixture are needed, an enum is a clean way of identifying them
    private enum TestCase { Case1, Case2 }

    private class Fixture {

        public Parent__c parent;
        public Child__c[] children;

        public Fixture(TestCase tc) {
            parent = new Parent__c(FirstName__c = 'Jane', LastName__c = 'Doe');
            insert parent;
            children = new Child__c[] {
                    new Child__c(Parent__c = parent.Id, Gender__c = 'Female'),
                    new Child__c(Parent__c = parent.Id, Gender__c = 'Male')
            };
            insert children;
        }

        public void assertParent() {
            Parent__c actual = [select ... from Parent__c where Id = :parent.Id];
            // System.asserts go here
        }

        public void assertChildren() {
            Child__c[] actual = [select ... from Child__c where Id in :SobUtil.getIds(children)];
            // System.asserts go here
        }
    }

    @isTest
    static void testCase1() {
        Fixture fixture = new Fixture(TestCase.Case1);
        // Code here can access fields, perform assertions and invoke the fixture's assertions
    }

    @isTest
    static void testCase2() {
        Fixture fixture = new Fixture(TestCase.Case2);
        // Code here can access fields, perform assertions and invoke the fixture's assertions
    }
}

Identifying Apex tests to run using wildcards in sf:deploy

When developing code (including tests) that builds on a managed package, it is typically necessary to exclude the managed package tests from the test runs. For example, the new code might add an additional data constraint that causes a managed package test to fail because its data setup violates the constraint. (There is also the secondary issue that the managed package tests may slow the development cycle if they take many minutes to run.)

The sf:deploy Ant task supports a flag to run all tests or alternatively accepts the names of the test classes to run. If the number of tests involved is small, the latter option works fine. But if you have a large number of tests it starts to become tedious and error prone to maintain the list of tests. This is the same problem that Java projects using the JUnit Ant task face and there the solution is to allow the set of tests that are run to be driven by file name matches in the source tree via the batchtest nested element.

I’ve added this mechanism to the DeployWithXmlReportTask (an extension of sf:deploy) that is available in the force-deploy-with-xml-report-task Google Code project. Below is an example of how to use it in a project where all test classes follow the convention of having their file names end in “Test”.

<path id="ant.additions.classpath">
    <fileset dir="ant"/>
</path>
<target name="deployAndTestAndReport">
    <delete dir="test-report-xml" quiet="true"/>
    <echo message="deploying to ${sf.username}"/>
    <deployWithXmlReportTask username="${sf.username}" password="${sf.password}"
            serverurl="${sf.serverurl}" deployRoot="src" junitreportdir="test-report-xml">
        <!-- Run all tests that match the include file name pattern (so avoiding running managed package tests) -->
        <batchtest>
            <fileset dir="src/classes">
                <include name="*Test.cls"/>
            </fileset>
        </batchtest>
    </deployWithXmlReportTask>
</target>