Embracing Apex parallel testing

We have several code bases, each with 3000+ test methods and 300+ test classes. We build these (create a scratch org, deploy the code and run all the Apex tests) using Jenkins and our Salesforce DX – Jenkins Shared Library.

Some work was done a while ago to use Apex parallel testing in one of the code bases, but the tests there were plagued with:

Could not run tests on class … because: connection was cancelled here

errors when the tests were run in parallel in a scratch org; the errors did not occur in the namespace org. Those scratch org errors still occur.

This experience made us lower the priority of this work on other code bases, but I’ve just been able to put in a couple of days of work on one, and that has reduced the test run time from two hours down to nine minutes without the above errors being hit. The Jenkins build (which checks 6 org configurations in parallel, e.g. one has Platform Encryption turned on) now takes 45 minutes rather than 150 minutes. This Jenkins Build Time Trend chart (it should be all blue, but some unrelated tests are broken at the moment) tells the story: the builds with parallel Apex tests are on the far right, e.g. #467:

Jenkins Build Time Trend chart

The main changes I needed to make were:

  • Make sure each Contact record inserted in the tests has a separate Account, to avoid UNABLE_TO_LOCK_ROW errors on a default Account that our software uses. See the Record Locking Cheat Sheet for a bit more information on that
  • Find a way to avoid UNABLE_TO_LOCK_ROW errors on hierarchical custom settings updated in the tests to check various configurations of the code. A good way to do this would be to design in a mocking mechanism from the start, but given the large number of references to the custom settings, and a desire to change test code only, I went for a bodgy Retry.upsertOnUnableToLockRow method instead. That code is listed immediately below, followed by a sketch of how a test might use it.
/**
 * When tests are run in parallel, UNABLE_TO_LOCK_ROW errors occur where tests update the same custom setting.
 * This class aims to get around that by retrying.
 * Can also be applied to ordinary SObjects.
 */
public class Retry {

    // Typically zero or one retry so this should be plenty unless there is some kind of deadlock
    private static final Integer TRIES = 50;
    private static final Integer SLEEP_MS = 100;

    public class RetryException extends Exception {
    }

    public static void upsertOnUnableToLockRow(SObject sob) {

        upsertOnUnableToLockRow(new SObject[] {sob});
    }

    public static void upsertOnUnableToLockRow(SObject[] sobs) {

        if (sobs.size() == 0) return;

        Long start = System.currentTimeMillis();
        Exception lastException;

        for (Integer i = 0; i < TRIES; i++) {
            try {
                SObject[] inserts = new SObject[] {};
                SObject[] updates = new SObject[] {};
                for (SObject sob : sobs) {
                    if (sob.Id != null) updates.add(sob);
                    else inserts.add(sob);
                }
                insert inserts;
                update updates;
                return;
            } catch (DmlException e) {
                // Immediately throw if an unrelated problem
                if (!e.getMessage().contains('UNABLE_TO_LOCK_ROW')) throw e;
                lastException = e;
                sleep(SLEEP_MS);
            }
        }

        Long finish = System.currentTimeMillis();
        throw new RetryException(''
            + 'Retry.upsertOnUnableToLockRow failed first id='
            + sobs[0].Id
            + ' of '
            + sobs.size()
            + ' records after '
            + TRIES
            + ' tries taking '
            + (finish - start)
            + ' ms with exception '
            + lastException
        );
    }

    private static void sleep(Integer ms) {

        Long start = System.currentTimeMillis();
        while (System.currentTimeMillis() < start + ms) {
            // Throw away CPU cycles
        }
    }
}
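
For context, here is a minimal sketch of how a test method might apply both changes: giving each Contact its own Account, and routing custom setting changes through Retry. The Config__c custom setting and its EnableFeature__c field are made-up names used for illustration only:

/**
 * Illustrative test only: Config__c and EnableFeature__c stand in for the real
 * hierarchical custom setting that the tests update.
 */
@IsTest
private class ParallelSafeExampleTest {

    @IsTest
    static void eachContactGetsItsOwnAccount() {

        // A separate Account per Contact avoids lock contention on a shared default Account
        Account a = new Account(Name = 'Test Account ' + System.currentTimeMillis());
        insert a;
        insert new Contact(LastName = 'Test', AccountId = a.Id);

        // Custom setting changes go through the retrying insert/update
        Config__c config = Config__c.getOrgDefaults();
        config.EnableFeature__c = true;
        Retry.upsertOnUnableToLockRow(config);

        Test.startTest();
        // ... exercise the code under test ...
        Test.stopTest();
    }
}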

Bye-bye lazy loading, hello enum and switch

We have quite a lot of old code of this nature, where a map is lazily loaded to provide a fast way to look up values:

private static final String R_CLAIM = 'claim';
private static final String R_BENEFIT_CLAIMED = 'bc';
...
private static final String R_BENEFICIARY = 'beneficiary';
private static final String R_BROKER = 'broker';
...

private static Map<String, SObjectType> REFERENCE_TO_SOB_TYPE {
    get{
        if (REFERENCE_TO_SOB_TYPE == null) {
            REFERENCE_TO_SOB_TYPE = new Map<String, SObjectType> {
                R_CLAIM => Claim__c.SObjectType,
                R_BENEFIT_CLAIMED => BenefitClaimed__c.SObjectType,
                ...
                R_BENEFICIARY => Contact.SObjectType,
                R_BROKER => Contact.SObjectType,
                ...
            };
        }
        return REFERENCE_TO_SOB_TYPE;
    }
    set;
}

private static SObjectType toSObjectType(String reference) {
    SObjectType sobType = REFERENCE_TO_SOB_TYPE.get(reference);
    if (sobType != null) return sobType;
    else throw new ...
}

But thanks to enum and switch that have been added to Apex in recent years, this can now be coded more cleanly as:

private enum Reference {
    claim,
    bc,
    ...
    beneficiary,
    broker,
    ...
}

private static SObjectType toSObjectType(Reference ref) {
    switch on ref {
        when claim { return Claim__c.SObjectType; }
        when bc { return BenefitClaimed__c.SObjectType; }
        ...
        when beneficiary, broker, ... { return Contact.SObjectType; }
        when else { throw new ... }
    }
}

with these benefits:

  • Reads better and takes fewer lines
  • More type clarity (i.e. eliminating the vagueness of String)
  • No building of a whole map when only one value might be used
  • Leaves the compiler to do the optimization of the lookup
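
For completeness, here is a minimal compilable sketch of the pattern using standard objects; the enum values and exception name are illustrative rather than taken from our code:

// Minimal sketch of the enum + switch lookup; names are illustrative only
public class ReferenceTypes {

    public enum Reference {
        ACCOUNT_HOLDER,
        BENEFICIARY,
        BROKER
    }

    public class UnknownReferenceException extends Exception {
    }

    public static SObjectType toSObjectType(Reference ref) {
        switch on ref {
            when ACCOUNT_HOLDER { return Account.SObjectType; }
            when BENEFICIARY, BROKER { return Contact.SObjectType; }
            when else { throw new UnknownReferenceException('Unexpected reference ' + String.valueOf(ref)); }
        }
    }
}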

Relationship wrapper class generator

For some problems, an in-memory graph of objects is convenient so that business logic can be independent of persistence logic. In other technology stacks, Object-Relational Mapping (ORM) tools help with this problem: business logic can walk around the graph of objects via relationship properties without having to make explicit query calls.

At first sight, the SObject relationship (__r) fields provide part of this capability. But unfortunately, the collections are immutable and the queries that populate them cannot work over multiple levels of parent-child relationships. To work around this specific part of the problem, I wrote a simple Apex code generator that generates a wrapper class per SObject, providing parent and child reference fields discovered via describe calls. It also provides methods to reliably set those relationships up.

The code generator is available as open source in this relationship-model-generator GitHub project.
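
To make the idea concrete, here is a hand-written sketch of the kind of wrapper such a generator might produce for a parent-child pair; it is purely illustrative, and the classes the GitHub project actually generates may differ in naming and detail:

// Illustrative only: the rough shape of generated wrappers for Account (parent)
// and Contact (child), shown as inner classes of one outer class for brevity
public class RelationshipModels {

    public class AccountModel {

        public Account record;
        public List<ContactModel> contacts = new List<ContactModel>();

        public AccountModel(Account record) {
            this.record = record;
        }

        // Sets up both sides of the relationship so business logic can walk
        // the in-memory graph in either direction without further queries
        public void addContact(ContactModel child) {
            contacts.add(child);
            child.parent = this;
        }
    }

    public class ContactModel {

        public Contact record;
        public AccountModel parent;

        public ContactModel(Contact record) {
            this.record = record;
        }
    }
}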

Multi-branch, multi-org configuration builds in 3 lines of Jenkinsfile

jdx

As you develop multiple apps and multiple custom solutions, you can end up with a lot of boilerplate Continuous Integration (CI) code that is painful to maintain.

My colleague, Jeferson Chaves, has addressed this for Jenkins by writing an open-source Jenkins Shared Library that we are now using for all our SFDX/Git builds. See his Streamlining SFDX on Jenkins post for more info and a link to the library.

VSCode encourages bad database design?

I have an SObject that has fields called Criteria1__c, Criteria2__c, Criteria3__c and so on up to 10. (In fact several such numbered fields.) It seemed like a good idea at the time, but a child SObject would be a cleaner approach.

Today I was asked to up the number – instead of 10, make it 20. So I was pleased to discover that in VSCode with an SFDX format project, copying and pasting the 10 existing fields in one go results in the file copies being automatically numbered 11 to 20 exactly as required. Very handy.

(Yes you still have to edit the file content but that is pretty quick.)

From Aura to Lightning Web Components (LWC)

When I started working with Lightning Components a few years ago – the Aura ones – I intended to write a page similar to From Java to Apex, which would have been called “From Angular to Lightning Components”. But that turned out to be a hard thing to do, because working with Angular had been a pleasure, but working with Aura was not. Rather than posting a pretty negative opinion, I decided not to post.

So I was delighted to see the launch of Lightning Web Components (LWC). (I would have been way less delighted if we had gone “all in” on the Aura ones already – not helpful to us that this launch was kept under wraps.) We can largely skip the Aura era, and jump straight to LWC.

To date, I have only looked at some documentation and sample code, and attended one Salesforce Developer Group meeting on the subject. But it looks like LWC has these big advantages over Aura:

  • Leverages recent browser features rather than requiring a lot of slower software layers (that also made JavaScript debugging challenging)
  • Uses JavaScript to hold the data (not the peculiar v. attributes and map get/set style access)
  • Uses ECMAScript 6 (or is it 7?), which is a big step forward in cleaner JavaScript coding
  • Is primarily based on two files only: a JavaScript .js file and a template .html file
  • Leverages standards so is more attractive to developers
  • Has testability considered from the start
  • Leverages design work done in Aura on the base set of lightning: components, so is fairly complete from day one
  • Leverages the Lightning Data Service
  • Benefits from the maturity of SLDS
  • Interoperates with Aura components
  • Has good documentation

There may well be missing features at the moment – e.g. the equivalent of application events – but LWC is way easier to buy into than Aura was. And if I wrote a page called “From Angular to Lightning Web Components”, it would likely express a pretty positive opinion.

Dragula drag and drop in a Lightning Component

Dragula is a great library that makes adding drag and drop easy. It handles creating a copy of the element being dragged that moves with the cursor, and also shows where the element will go in the drop area. It has a simple API and is self-contained. (The Locker Service is happy with version 3.7.2 of Dragula, which is the version I used.)

Here is how I hooked it into a Lightning Component to produce this result:

Drag and drop screen shot

In the component’s controller, this code connects the DOM-element-oriented Dragula up with the component-oriented Lightning framework straight after Dragula is loaded. There is a bit of extra logic to add and remove a placekeeper drop area when there are no items:

({
afterDragulaLoaded: function(component, event, helper) {

    // Components
    var container = component.find('container');
    var from = component.find('from-draggable');
    var to = component.find('to-draggable');

    // Dragula needs the DOM elements
    var drake = dragula([from.getElement(), to.getElement()], {
        direction: 'vertical',
        mirrorContainer: container.getElement()
    });

    // Show/hide the "Drag and Drop Here" item
    // $A.getCallback makes safe to invoke from outside the Lightning Component lifecycle
    drake.on('drop', $A.getCallback(function(el, target, source, sibling) {
        if (source.children.length <= 1) {
            $A.util.removeClass(component.find(helper.placekeeperAuraIdFor(source)), 'slds-hide');
        }
        $A.util.addClass(component.find(helper.placekeeperAuraIdFor(target)), 'slds-hide');
    }));
}
})

This helper is used:

({
placekeeperAuraIdFor: function(element) {
    // Hard to get from DOM back to aura:id so using classes as markers
    if (element.classList.contains('from-draggable')) return 'from-placekeeper';
    else if (element.classList.contains('to-draggable')) return 'to-placekeeper';
    else return null;
}
})

Here is the component markup: it is just hard-coded data and styling to keep this example simple, and it references the Dragula JavaScript and CSS static resources via an ltng:require at the bottom:

<aura:component >
    
<div aura:id="container">
    
    <div class="slds-text-heading--medium">Candidates</div>
    
    <ul aura:id="from-draggable" class="from-draggable">
        <li class="slds-p-around--xx-small">
            <article class="slds-card">
                <div class="slds-card__body">
                    <div class="slds-tile slds-tile--board">
                        <h3 class="slds-truncate" title="Anypoint Connectors">
                            <a href="javascript:void(0);">Anypoint Connectors</a>
                        </h3>
                        <p class="slds-text-heading--medium">$500,000</p>
                    </div>
                </div>
            </article>
        </li>
        <li class="slds-p-around--xx-small">
            <article class="slds-card">
                <div class="slds-card__body">
                    <div class="slds-tile slds-tile--board">
                        <h3 class="slds-truncate" title="Cloudhub">
                            <a href="javascript:void(0);">Cloudhub</a>
                        </h3>
                        <p class="slds-text-heading--medium">$185,000</p>
                    </div>
                </div>
            </article>
        </li>
        
        <li class="slds-p-around--xx-small slds-hide" aura:id="from-placekeeper">
            <div class="slds-file-selector__dropzone" style="height: 50px">
                <span class="slds-file-selector__text">Drag and Drop Here</span>
            </div>
        </li>
    </ul>
    
    <div class="slds-text-heading--medium">Selected</div>
    
    <div class="slds-panel slds-grid slds-grid--vertical slds-nowrap">
        <ul aura:id="to-draggable" class="to-draggable">
            <li class="slds-p-around--xx-small" aura:id="to-placekeeper">
                <div class="slds-file-selector__dropzone" style="height: 50px">
                    <span class="slds-file-selector__text">Drag and Drop Here</span>
                </div>
            </li>
        </ul>
    </div>
</div>

<ltng:require styles="{!$Resource.DragulaCss}"
              scripts="{!$Resource.DragulaJs}"
              afterScriptsLoaded="{!c.afterDragulaLoaded}"
              />
    
</aura:component>

This CSS is also used:

.THIS li {
    list-style-type: none;
}
.THIS article.slds-card:hover {
    border-color: #1589ee ;
}

There were two awkward areas:

  • As far as I know, there is no way to go from a DOM element back to an aura:id so in Dragula’s “drop” callback I resorted to using a marker class instead.
  • The $A.getCallback wrapping function is needed as explained in Modifying Components Outside the Framework Lifecycle.

Adding to the CKEditor context menu

We are using the excellent CKEditor in some of our Visualforce pages now, partly to get around the problem that standard rich text fields error when they are re-rendered. But explicitly adding the editor also makes it cleaner to leverage its many features, and the one described here is the ability to extend the context (right-click) menu. I couldn’t find a simple example, so I hope this post will act as that for others.

The JavaScript code is below and works on this text area:

<apex:inputTextArea value="{!ref.Rtf__c}" styleClass="ckeditor" richText="false"/>

The purpose of the menu is to allow fragments of text to be inserted when selected via the context menu. (The actual JavaScript menu code is much longer and is generated by Apex code using data returned by describe calls; it is also loaded from a static resource so that it can be cached by the browser.)
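
As a rough, hypothetical sketch of that generation approach (the class and method names here are mine, not the actual generator’s), the Apex might look something like this:

// Hypothetical sketch of generating the addCommand calls from describe data;
// the real generator is longer and also emits the nested menu item definitions
public class MergeFieldMenuGenerator {

    public static String generateCommands(String prefix, SObjectType sobType) {
        String js = '';
        for (SObjectField f : sobType.getDescribe().fields.getMap().values()) {
            DescribeFieldResult d = f.getDescribe();
            if (!d.isAccessible()) continue;
            // The backslashes mirror the escaping used in the hand-written JavaScript below
            js += 'e.addCommand("' + prefix + '_' + d.getName() + '", { exec: function(e) { '
                    + 'e.insertText("\\{\\!' + prefix + '.' + d.getName() + '\\}"); } });\n';
        }
        return js;
    }
}

Calling, say, MergeFieldMenuGenerator.generateCommands('claimant', Contact.SObjectType) would then emit addCommand calls of the same shape as the hand-written ones below.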

Here is a screen shot from the code posted here:

Menu (1)

A brief explanation of the code is that the menu items either return other menu items (when they are the root menu or a sub-menu) or reference a command (when they are a leaf action). The menu groups add dividing lines into the menus. The editor is added based on the ckeditor class name.

The code:

<script src="//cdn.ckeditor.com/4.5.9/standard/ckeditor.js"></script>
<script>

// Menu code
function addMergeFieldMenu(event) {
    var e = event.editor;
    
    // Commands
    e.addCommand("cv_claimant_Birthdate", {
        exec: function(e) {
            e.insertText("\{\!claimant.Birthdate\}");
        }
    });
    e.addCommand("cv_claimant_Name", {
        exec: function(e) {
            e.insertText("\{\!claimant.Name\}");
        }
    });
    e.addCommand("cv_claim_Name", {
        exec: function(e) {
            e.insertText("\{\!claim.Name\}");
        }
    });
    e.addCommand("cv_claim_CreatedDate", {
        exec: function(e) {
            e.insertText("\{\!claim.CreatedDate\}");
        }
    });
    
    // Listener
    e.contextMenu.addListener(function(element, selection) {
        return {
            cv: CKEDITOR.TRISTATE_ON
        };
    });
    
    // Menu Groups; different numbers needed for the group separator to show
    e.addMenuGroup("cv", 500);
    e.addMenuGroup("cv_people", 100);
    e.addMenuGroup("cv_claim", 200);
    
    // Menus (nested)
    e.addMenuItems({
        // Top level
        cv: {
            label: "Insert Merge Field",
            group: "cv",
            getItems: function() {
                return {
                    cv_claimant: CKEDITOR.TRISTATE_OFF,
                    cv_claim: CKEDITOR.TRISTATE_OFF,
                };
            }
        },
        // One sub-menu
        cv_claimant: {
            label: "Claimant Person (claimant)",
            group: "cv_people",
            getItems: function() {
                return {
                    cv_claimant_Birthdate: CKEDITOR.TRISTATE_OFF,
                    cv_claimant_Name: CKEDITOR.TRISTATE_OFF,
                };
            }
        },
        // These run commands
        cv_claimant_Birthdate: {
            label: "Birthdate (Birthdate: date)",
            group: "cv_people",
            command: "cv_claimant_Birthdate"
        },
        cv_claimant_Name: {
            label: "Full Name (Name: string)",
            group: "cv_people",
            command: "cv_claimant_Name"
        },
        // Another sub-menu
        cv_claim: {
            label: "Claim (claim)",
            group: "cv_claim",
            getItems: function() {
                return {
                    cv_claim_CreatedDate: CKEDITOR.TRISTATE_OFF,
                    cv_claim_Name: CKEDITOR.TRISTATE_OFF,
                };
            }
        },
        // These run commands
        cv_claim_Name: {
            label: "Claim Number (Name: string)",
            group: "cv_claim",
            command: "cv_claim_Name"
        },
        cv_claim_CreatedDate: {
            label: "Created Date (CreatedDate: datetime)",
            group: "cv_claim",
            command: "cv_claim_CreatedDate"
        },
    });
}

// Add and configure the editor
function addEditor() {
    CKEDITOR.replaceAll('ckeditor');
    for (var name in CKEDITOR.instances) {
        var e = CKEDITOR.instances[name];
        e.config.height = 400;
    }
    CKEDITOR.on('instanceReady', addMergeFieldMenu);
}
addEditor();

</script>

Sharing code and components across managed packages

If you are creating multiple managed packages and want to re-use some code and components in several of them there is no simple solution. (The approach you might take in the Java world of creating a JAR file that contains many related classes that you use in multiple applications is not available.)

You can put the components in a separate managed package, but that is a coarse-grained approach and has its own set of problems. This post outlines how to use svn:externals (yes this is SVN; I’m unsure about Git) to add shared components into the source tree, so the shared components just become part of each managed package. They pick up the namespace as part of the normal packaging process.

So in the version control system you have an extra project that contains the components you want to share:

  • SharedComponents
  • ManagedPackage1
  • ManagedPackage2

“SharedComponents” can only have dependencies on the core platform; it cannot have dependencies on anything in your managed packages.

Then in the managed package projects, add external file definitions to the svn:externals property of the src folder:

classes/SharedSms.cls              https://.../src/classes/SharedSms.cls
classes/SharedSms.cls-meta.xml     https://.../src/classes/SharedSms.cls-meta.xml
classes/SharedSmsTest.cls          https://.../src/classes/SharedSmsTest.cls
classes/SharedSmsTest.cls-meta.xml https://.../src/classes/SharedSmsTest.cls-meta.xml

where https://... represents the SVN path to the “SharedComponents” project. In this example there are just two classes, but each managed package can opt into as few or as many of the components as it needs. The purpose of the “Shared” prefix is to make it clearer in the managed package source where the components come from. (IDEs like Eclipse also decorate the icons of svn:externals files to distinguish them.)

Once the svn:externals definition is in place, an SVN update automatically pulls content from both locations. You need to be using at least version 1.6 of SVN; our Jenkins (Continuous Integration server) was set to use version 1.4 by default so that had to be changed to get the builds to work.

Discipline is needed when modifying components in “SharedComponents” to not break any managed package code that depends on them. Running Continuous Integration builds on all the projects will help give early warning of such problems.

Making one DataTable respond to search/length/page changes in another DataTable

DataTables can be applied to multiple tables in a page, and sometimes the content in one table needs to be driven by the content in another. For example, checkboxes in the first table on a page might act as a filter on later tables in the page. (The setup of such filtering is not covered here; the mechanism that can be used is custom filtering.)

But DataTables also supports a search mechanism that reduces the table rows to ones that match, and a pagination mechanism where the number of rows shown or the page shown can be changed. By default, changes to those values in the first table will not cause the later tables to be re-filtered and re-drawn. So if you want the later tables to correspond only to the checked checkboxes that are visible in the first table, extra code has to be added.

The good news is that DataTables does generate events for the changes and so provides a convenient point to hook in extra code. The only complication is that it appears necessary to defer until after the current event processing has been completed by using setTimeout.

So assuming the tables are distinguished by classes firstMarker and laterMarker, this code will get the later tables re-drawn (and so re-filtered) to be consistent with the first table:

(function($) {
    $(document).ready(function() {

        var firstTable = $('table.firstMarker');
        firstTable.DataTable();

        var laterTables = $('table.laterMarker');
        var laterDataTables = [];
        laterTables.each(function() {
            laterDataTables.push($(this).DataTable());
        });

        var drawLaterDataTables = function() {
            setTimeout(function() {
                $.each(laterDataTables, function(index, value) {
                    value.draw();
                });
            }, 0);
        };
        firstTable.on('search.dt', drawLaterDataTables);
        firstTable.on('length.dt', drawLaterDataTables);
        firstTable.on('page.dt', drawLaterDataTables);

        // Filtering logic not shown here
    });
})(jQuery.noConflict());