Dragula drag and drop in a Lightning Component

Dragula is a great library that makes adding drag and drop easy. It handles creating a copy of the element being dragged that moves with the cursor, and it shows where the element will go in the drop area. It has a simple API and is self-contained. (The Locker Service is happy with Dragula version 3.7.2, the version I used.)

Here is how I hooked it into a Lightning Component to produce this result:

Drag and drop screen shot

In the component’s controller, this code connects the DOM-element-oriented Dragula to the component-oriented Lightning framework as soon as Dragula has loaded. There is a bit of extra logic to show and hide a placekeeper drop area when a list has no items:

({
afterDragulaLoaded: function(component, event, helper) {

    // Components
    var container = component.find('container');
    var from = component.find('from-draggable');
    var to = component.find('to-draggable');

    // Dragula needs the DOM elements
    var drake = dragula([from.getElement(), to.getElement()], {
        direction: 'vertical',
        mirrorContainer: container.getElement()
    });

    // Show/hide the "Drag and Drop Here" item
    // $A.getCallback makes it safe to invoke this from outside the Lightning Component lifecycle
    drake.on('drop', $A.getCallback(function(el, target, source, sibling) {
        if (source.children.length <= 1) {
            $A.util.removeClass(component.find(helper.placekeeperAuraIdFor(source)), 'slds-hide');
        }
        $A.util.addClass(component.find(helper.placekeeperAuraIdFor(target)), 'slds-hide');
    }));
}
})

This helper is used:

({
placekeeperAuraIdFor: function(element) {
    // Hard to get from DOM back to aura:id so using classes as markers
    if (element.classList.contains('from-draggable')) return 'from-placekeeper';
    else if (element.classList.contains('to-draggable')) return 'to-placekeeper';
    else return null;
}
})

Here is the component markup. It uses hard-coded data and styling to keep this example simple, and it references the Dragula JavaScript and CSS static resources via an ltng:require at the bottom:

<aura:component >
    
<div aura:id="container">
    
    <div class="slds-text-heading--medium">Candidates</div>
    
    <ul aura:id="from-draggable" class="from-draggable">
        <li class="slds-p-around--xx-small">
            <article class="slds-card">
                <div class="slds-card__body">
                    <div class="slds-tile slds-tile--board">
                        <h3 class="slds-truncate" title="Anypoint Connectors">
                            <a href="javascript:void(0);">Anypoint Connectors</a>
                        </h3>
                        <p class="slds-text-heading--medium">$500,000</p>
                    </div>
                </div>
            </article>
        </li>
        <li class="slds-p-around--xx-small">
            <article class="slds-card">
                <div class="slds-card__body">
                    <div class="slds-tile slds-tile--board">
                        <h3 class="slds-truncate" title="Cloudhub">
                            <a href="javascript:void(0);">Cloudhub</a>
                        </h3>
                        <p class="slds-text-heading--medium">$185,000</p>
                    </div>
                </div>
            </article>
        </li>
        
        <li class="slds-p-around--xx-small slds-hide" aura:id="from-placekeeper">
            <div class="slds-file-selector__dropzone" style="height: 50px">
                <span class="slds-file-selector__text">Drag and Drop Here</span>
            </div>
        </li>
    </ul>
    
    <div class="slds-text-heading--medium">Selected</div>
    
    <div class="slds-panel slds-grid slds-grid--vertical slds-nowrap">
        <ul aura:id="to-draggable" class="to-draggable">
            <li class="slds-p-around--xx-small" aura:id="to-placekeeper">
                <div class="slds-file-selector__dropzone" style="height: 50px">
                    <span class="slds-file-selector__text">Drag and Drop Here</span>
                </div>
            </li>
        </ul>
    </div>
</div>

<ltng:require styles="{!$Resource.DragulaCss}"
              scripts="{!$Resource.DragulaJs}"
              afterScriptsLoaded="{!c.afterDragulaLoaded}"
              />
    
</aura:component>

This CSS is also used:

.THIS li {
    list-style-type: none;
}
.THIS article.slds-card:hover {
    border-color: #1589ee ;
}

There were two awkward areas:

  • As far as I know, there is no way to go from a DOM element back to an aura:id, so in Dragula’s “drop” callback I resorted to using a marker class instead.
  • The $A.getCallback wrapping function is needed as explained in Modifying Components Outside the Framework Lifecycle.

Adding to the CKEditor context menu

We are using the excellent CKEditor in some of our Visualforce pages now, partly to get around the problem that standard rich text fields error when they are re-rendered. Explicitly adding the editor also makes it cleaner to leverage its many features, and the feature described here is the ability to extend the context (right-click) menu. I couldn’t find a simple example, so I hope this post will act as one for others.

The JavaScript code is below and works on this text area:

<apex:inputTextArea value="{!ref.Rtf__c}" styleClass="ckeditor" richText="false"/>

The purpose of the menu is to allow fragments of text to be inserted when selected via the context menu. (The actual JavaScript menu code is much longer and is generated by Apex code using data returned by describe calls; a sketch of that describe-driven data follows the code below. It is also loaded from a static resource so that it can be cached by the browser.)

Here is a screen shot from the code posted here:

Menu screen shot

Briefly, the menu items either return other menu items (when they are the root menu or a sub-menu) or reference a command (when they are a leaf action). The menu groups add dividing lines within the menus. The editor is attached based on the ckeditor class name.

The code:

<script src="//cdn.ckeditor.com/4.5.9/standard/ckeditor.js"></script>
<script>

// Menu code
function addMergeFieldMenu(event) {
    var e = event.editor;
    
    // Commands
    e.addCommand("cv_claimant_Birthdate", {
        exec: function(e) {
            e.insertText("\{\!claimant.Birthdate\}");
        }
    });
    e.addCommand("cv_claimant_Name", {
        exec: function(e) {
            e.insertText("\{\!claimant.Name\}");
        }
    });
    e.addCommand("cv_claim_Name", {
        exec: function(e) {
            e.insertText("\{\!claim.Name\}");
        }
    });
    e.addCommand("cv_claim_CreatedDate", {
        exec: function(e) {
            e.insertText("\{\!claim.CreatedDate\}");
        }
    });
    
    // Listener
    e.contextMenu.addListener(function(element, selection) {
        return {
            cv: CKEDITOR.TRISTATE_ON
        };
    });
    
    // Menu Groups; different numbers needed for the group separator to show
    e.addMenuGroup("cv", 500);
    e.addMenuGroup("cv_people", 100);
    e.addMenuGroup("cv_claim", 200);
    
    // Menus (nested)
    e.addMenuItems({
        // Top level
        cv: {
            label: "Insert Merge Field",
            group: "cv",
            getItems: function() {
                return {
                    cv_claimant: CKEDITOR.TRISTATE_OFF,
                    cv_claim: CKEDITOR.TRISTATE_OFF,
                };
            }
        },
        // One sub-menu
        cv_claimant: {
            label: "Claimant Person (claimant)",
            group: "cv_people",
            getItems: function() {
                return {
                    cv_claimant_Birthdate: CKEDITOR.TRISTATE_OFF,
                    cv_claimant_Name: CKEDITOR.TRISTATE_OFF,
                };
            }
        },
        // These run commands
        cv_claimant_Birthdate: {
            label: "Birthdate (Birthdate: date)",
            group: "cv_people",
            command: "cv_claimant_Birthdate"
        },
        cv_claimant_Name: {
            label: "Full Name (Name: string)",
            group: "cv_people",
            command: "cv_claimant_Name"
        },
        // Another sub-menu
        cv_claim: {
            label: "Claim (claim)",
            group: "cv_claim",
            getItems: function() {
                return {
                    cv_claim_CreatedDate: CKEDITOR.TRISTATE_OFF,
                    cv_claim_Name: CKEDITOR.TRISTATE_OFF,
                };
            }
        },
        // These run commands
        cv_claim_Name: {
            label: "Claim Number (Name: string)",
            group: "cv_claim",
            command: "cv_claim_Name"
        },
        cv_claim_CreatedDate: {
            label: "Created Date (CreatedDate: datetime)",
            group: "cv_claim",
            command: "cv_claim_CreatedDate"
        },
    });
}

// Add and configure the editor
function addEditor() {
    CKEDITOR.replaceAll('ckeditor');
    for (var name in CKEDITOR.instances) {
        var e = CKEDITOR.instances[name];
        e.config.height = 400;
    }
    CKEDITOR.on('instanceReady', addMergeFieldMenu);
}
addEditor();

</script>
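
As mentioned above, in our real pages this menu definition is generated by Apex from describe data rather than hand-written. Here is a rough sketch of the kind of describe-driven data that could feed such generation; the class and member names are illustrative only, not from the actual implementation:

// Illustrative sketch: collects label/name/type details for an object's fields,
// which server-side code could then use to emit the addCommand/addMenuItems calls
public class MergeFieldDescriber {
    public class FieldInfo {
        public String name;
        public String label;
        public String fieldType;
    }
    public static List<FieldInfo> fieldsFor(Schema.SObjectType sObjType) {
        List<FieldInfo> infos = new List<FieldInfo>();
        for (Schema.SObjectField f : sObjType.getDescribe().fields.getMap().values()) {
            Schema.DescribeFieldResult dfr = f.getDescribe();
            FieldInfo info = new FieldInfo();
            info.name = dfr.getName();
            info.label = dfr.getLabel();
            info.fieldType = String.valueOf(dfr.getType()).toLowerCase();
            infos.add(info);
        }
        return infos;
    }
}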

Sharing code and components across managed packages

If you are creating multiple managed packages and want to re-use some code and components in several of them, there is no simple solution. (The approach you might take in the Java world, creating a JAR file of related classes and using it in multiple applications, is not available.)

You can put the components in a separate managed package, but that is a coarse-grained approach and has its own set of problems. This post outlines how to use svn:externals (yes, this is SVN; I’m unsure about Git) to add shared components into the source tree, so the shared components just become part of each managed package. They pick up the namespace as part of the normal packaging process.

So in the version control system you have an extra project that contains the components you want to share:

  • SharedComponents
  • ManagedPackage1
  • ManagedPackage2

“SharedComponents” can only have dependencies on the core platform; it cannot have dependencies on anything in your managed packages.

Then in the managed package projects, add external file definitions to the svn:externals property of the src folder:

classes/SharedSms.cls              https://.../src/classes/SharedSms.cls
classes/SharedSms.cls-meta.xml     https://.../src/classes/SharedSms.cls-meta.xml
classes/SharedSmsTest.cls          https://.../src/classes/SharedSmsTest.cls
classes/SharedSmsTest.cls-meta.xml https://.../src/classes/SharedSmsTest.cls-meta.xml

where https://... represents the SVN path to the “SharedComponents” project. In this example there are just two classes, but each managed package can opt into as few or as many of the components as it needs. The purpose of the “Shared” prefix is to make it clearer in the managed package source where the components come from. (IDEs like Eclipse also decorate the icons of svn:externals to distinguish them.)

Once the svn:externals definition is in place, an SVN update automatically pulls content from both locations. You need to be using at least version 1.6 of SVN (the first version to support file externals); our Jenkins (Continuous Integration server) was set to use version 1.4 by default, so that had to be changed to get the builds to work.

Discipline is needed when modifying components in “SharedComponents” to not break any managed package code that depends on them. Running Continuous Integration builds on all the projects will help give early warning of such problems.

Making one DataTable respond to search/length/page changes in another DataTable

DataTables can be applied to multiple tables in a page, and sometimes the content in one table needs to be driven by the content in another table. For example, checkboxes in the first table on a page might act as a filter on later tables in the page. (The setup of such filtering is not covered here; the mechanism that can be used is custom filtering.)

But DataTables also supports a search mechanism that reduces the table rows to ones that match, and a pagination mechanism where the number of rows shown or the page shown can be changed. By default, changes to those values in the first table will not cause the later tables to be re-filtered and re-drawn. So if you want the later tables to correspond only to the checked checkboxes that are visible in the first table, extra code has to be added.

The good news is that DataTables does generate events for these changes and so provides a convenient point to hook in extra code. The only complication is that it appears necessary to use setTimeout to defer the re-draw until after the current event processing has completed.

So assuming the tables are distinguished by classes firstMarker and laterMarker, this code will get the later tables re-drawn (and so re-filtered) to be consistent with the first table:

(function($) {
    $(document).ready(function() {

        var firstTable = $('table.firstMarker');
        firstTable.DataTable();

        var laterTables = $('table.laterMarker');
        var laterDataTables = [];
        laterTables.each(function() {
            laterDataTables.push($(this).DataTable());
        });

        var drawLaterDataTables = function() {
            setTimeout(function() {
                $.each(laterDataTables, function(index, value) {
                    value.draw();
                });
            }, 0);
        };
        firstTable.on('search.dt', drawLaterDataTables);
        firstTable.on('length.dt', drawLaterDataTables);
        firstTable.on('page.dt', drawLaterDataTables);

        // Filtering logic not shown here
    });
})(jQuery.noConflict());

Instanceof for Apex Date and DateTime

Just spent some time chasing a unit test error where the date returned in some code was a day out. The code being tested was:

SObject sob = ...
Object o = sob.get(f);
Date d;
if (o instanceof DateTime) {
    d = ((DateTime) o).date();
} else if (o instanceof Date) {
    d = (Date) o;
}

In the end this unit test demonstrated the cause of the problem:

@IsTest
private class InstanceofTest {
    @IsTest
    static void date() {
        Object o = Date.today();
        System.assertEquals(true, o instanceof Date);
        System.assertEquals(true, o instanceof DateTime);
    }
    @IsTest
    static void dateTime() {
        Object o = DateTime.now();
        System.assertEquals(false, o instanceof Date);
        System.assertEquals(true, o instanceof DateTime);
    }
}

So a Date is a DateTime (as well as a Date) and that was pushing the code through some unwanted timezone offsetting.

A fix is:

SObject sob = ...
Object o = sob.get(f);
Date d;
if (o instanceof Date) {
    d = (Date) o;
} else if (o instanceof DateTime) {
    d = ((DateTime) o).date();
}

and a lesson learned is to be paranoid about the type system in Apex.

Favour the Typesafe Enum pattern in Apex

A frequent need is to have behaviour that varies according to a value such as a picklist String value. The simplest approach is to pass that value around as a String and to compare it against inline String constants or static final Strings declared somewhere. But the weaknesses of this approach are:

  • Not typesafe – other values can be accidentally introduced
  • Not self-documenting – using the type String says nothing about the domain/purpose
  • Any related attributes or logic has to be added inline or in a separate class using if/elseif/else chains

(Java had the enum types language feature added in version 5 to address these problems; Apex’s enum language feature is basic in comparison.)
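
For comparison, here is a minimal sketch of what the built-in Apex enum gives you: the values are typesafe, but no per-value data (such as the number of days used below) or behaviour can be attached to them, so that data ends up in a parallel structure. The StatusEnum name is hypothetical:

// Typesafe values only; an Apex enum cannot carry extra data or methods
public enum StatusEnum { PENDING, OPEN, CLOSED }

// Related data has to live elsewhere, e.g. in a parallel Map
Map<StatusEnum, Integer> daysByStatus = new Map<StatusEnum, Integer>{
    StatusEnum.PENDING => 5, StatusEnum.OPEN => 30, StatusEnum.CLOSED => 60
};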

A pattern that addresses these problems and was often recommended in Java before enum types were added is the “typesafe enum” pattern. Here is an Apex example:

public class Status {

    private static final Map<String, Status> STATUSES = new Map<String, Status>();
    
    public static final Status PENDING = new Status('Pending', 5);
    public static final Status OPEN = new Status('Open', 30);
    public static final Status CLOSED = new Status('Closed', 60);
    
    public static Status valueOf(String name) {
        return STATUSES.get(name);
    }
    
    public static Status[] values() {
        return STATUSES.values();
    }
    
    public String name {get; private set;}
    public Integer days {get; private set;}
    
    private Status(String name, Integer days) {
        this.name = name;
        this.days = days;
        STATUSES.put(name, this);
    }
    
    public override String toString() {
        return name;
    }
     
    public Boolean equals(Object o) {
        if (o instanceof Status) {
            Status that = ((Status) o);
            return this.name == that.name;
        }
        return false;
    }
    
    public Integer hashCode() {
        return name.hashCode();
    }
}

Code using it looks like this:

    private void method1() {
        ...
        method2('xyz', Status.OPEN);
        ...
    }
    
    private void method2(String v, Status s) {
        ...
        Date d = Date.today().addDays(s.days);
        String message = 'Status is ' + s;
        ...
    }

To convert a picklist value (or other String) to an instance of this object:

Status status = Status.valueOf(picklistValue);

To iterate over all the values:

for (Status status : Status.values()) {
    ...
}

Extra methods can be added and so can extra data values.

Note that the equals/hashCode methods are only needed if references are serialized and deserialized, because deserialization creates new instances, so the default reference-based equality no longer works. Examples of where that can happen are Visualforce’s view state or when Database.Stateful is used in a Batchable.

PS

This approach can be used for run-time loaded values (e.g. using describe calls on picklist fields or loading from JSON), though that of course means not having constants for each value. If that is done, I strongly recommend lazy loading to avoid filling the debug log with entries in cases where the full set of values is not needed.
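
As a rough sketch of that lazy loading (the Claim__c.Status__c picklist field and the class name are hypothetical), the describe call is only made the first time the values are asked for:

public class ClaimStatus {

    // Lazily populated so the describe call only happens when the values are needed
    private static Map<String, ClaimStatus> statuses = null;

    public static ClaimStatus valueOf(String name) {
        return load().get(name);
    }

    public static ClaimStatus[] values() {
        return load().values();
    }

    private static Map<String, ClaimStatus> load() {
        if (statuses == null) {
            statuses = new Map<String, ClaimStatus>();
            // Hypothetical picklist field Claim__c.Status__c
            for (Schema.PicklistEntry pe : Claim__c.Status__c.getDescribe().getPicklistValues()) {
                statuses.put(pe.getValue(), new ClaimStatus(pe.getValue()));
            }
        }
        return statuses;
    }

    public String name {get; private set;}

    private ClaimStatus(String name) {
        this.name = name;
    }
}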

Breaking managed package dependencies

We have several managed packages, with customers sometimes installing just one of them and other times several of them (depending on the set of features they want). Calls can be needed between the packages: global interfaces and classes defined in one package – let’s call it B – are called from another package – let’s call it A.

But once the direct calls are added and new managed package versions created, A cannot be installed without B first being installed because the platform’s package dependency approach is rigid and enforced at installation time. This dependency (illustrated with UML dependency notation) is not what we want:

Dependencies

So how can managed package A call managed package B without the fixed dependency? The trick is to have no compile-time dependency between A and B, and instead to introduce a third entity C (which could be another managed package or non-namespaced local code) that the calls are made through; in C, depending on both A and B is not a problem:

Broken

Here is an example of the pattern. The API used to illustrate the pattern has a single method to send an SMS (text) message.

The implementation (that we want to use from other packages) is in package B:

global interface Sms {
    global void send(String number, String message);
}

global class Factory {
    global static Sms createSms() {...}
}

In package A the interface is duplicated together with a mechanism to register a type that implements the interface. The package A code references only this interface and class:

global interface Sms {
    global void send(String number, String message);
}

global class Factory {
    global static Sms createSms() {
        // Use name of type from custom setting
        String s = ...;
        Type t = Type.forName(s);
        return (Sms) t.newInstance();
    }
    global static void registerSmsType(Type t) {
        // Store name of type in a custom setting
    }
}

...
    Factory.createSms().send('38383', 'PREZ');
...
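
For completeness, the custom setting plumbing in A’s Factory might look something like this; SmsSetting__c and its Class__c field are hypothetical names for a hierarchy custom setting, not part of the original code:

global class Factory {
    global static Sms createSms() {
        // Read the registered type name from the (hypothetical) custom setting
        SmsSetting__c setting = SmsSetting__c.getOrgDefaults();
        return (Sms) Type.forName(setting.Class__c).newInstance();
    }
    global static void registerSmsType(Type t) {
        // Store the type name in the same custom setting
        SmsSetting__c setting = SmsSetting__c.getOrgDefaults();
        if (setting == null) setting = new SmsSetting__c();
        setting.Class__c = t.getName();
        upsert setting;
    }
}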

Then in C, a class is implemented that has the signature defined in A and delegates to B to do the work. At some point this class name must be registered with A; if C is a managed package that could be done in an InstallHandler (see the sketch below), otherwise it could be a manual configuration step:

global class Sms implements A.Sms {
    global void send(String number, String message) {
        B.Factory.createSms().send(number, message);
    }
}

...
    A.Factory.registerSmsType(Sms.class);
...
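
If C is a managed package, that registration could be done from its post install script. A minimal sketch, with an assumed class name:

global class CPostInstall implements InstallHandler {
    global void onInstall(InstallContext context) {
        // Register C's bridging implementation with package A on install and upgrade
        A.Factory.registerSmsType(Sms.class);
    }
}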

So A and B remain independent, and C is the “glue” that connects them together. A can be installed on its own and so can B. If they are both installed, then adding the C managed package or non-namespaced local code allows the calls between the packages to be made.

PS

Stephen Wilcox’s Apex Calls Between Independent Packages describes the same pattern.