Salesforce Packaged Modules – what do you think?

If you have worked on server-side JavaScript you will be familiar with the NPM (Node Packaged Modules) site. The Node.js community has managed to corral the loose world of JavaScript into some 74,000 packages that have been downloaded many millions of times. When I wanted an HTTP client it took me just a few minutes to find one in NPM that I liked, and it has been working fine ever since.

In comparison, I know of relatively few libraries of Apex code, and one of the oldest, apex-lang, presently shows only 1,100 downloads. So I assume that of the (several) billion lines of Apex that have been written, almost none is re-used in more than one org. Not much standing on the shoulders of giants going on.

Yes, there is the managed package mechanism, but I think of managed packages as containers for substantial functionality, typically with no dependency on other third-party managed packages. Perhaps we have all been waiting for Salesforce to introduce some lighter-weight library mechanism.

Some cool features of NPM packages (see the package.json example below) are:

  • automatic dependency chains: if you install a package that requires other packages they will also be automatically installed
  • versioning and version dependency
  • repository locations included
  • very easy to use e.g. “npm install protractor” and 30 seconds later everything you need is installed and ready to run

What might this look like for Salesforce and Apex? There would need to be an spm command-line tool that could pull from (Git) repositories and push into Salesforce orgs (or a local file system) – that seems possible. Some of the 40 characters available for names would have to be used to avoid naming collisions, and there would need to be a single name registry. The component naming convention could be something like spm_myns_MyClassName for an Apex class and spm_myns_Package for the package description static resource (which would contain JSON). Somehow, say, 90% test coverage would need to be mandatory. Without the managed package facility of hiding source code, it would inherently be open source (a good thing in my mind). Perhaps code should have no dependency on concrete SObject types? Perhaps only some component types should be allowed?
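
To make this a bit more concrete, here is a minimal sketch of how Apex code (or the spm tooling) might read such a package descriptor. Everything here is hypothetical – the spm_myns_Package resource name and the JSON keys are just assumptions following the convention suggested above:

// Hypothetical sketch: read an spm package descriptor held in a static resource.
StaticResource descriptor = [
    SELECT Body
    FROM StaticResource
    WHERE Name = 'spm_myns_Package'
    LIMIT 1
];
// The descriptor would contain JSON much like the package.json below; the keys are assumptions.
Map<String, Object> pkg =
    (Map<String, Object>) JSON.deserializeUntyped(descriptor.Body.toString());
String name = (String) pkg.get('name');
String version = (String) pkg.get('version');
// e.g. {"spm_other_Package": ">=1.2.0"} – dependencies the spm tool would also install
Map<String, Object> dependencies = (Map<String, Object>) pkg.get('dependencies');
System.debug(name + ' ' + version);
System.debug(dependencies);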

To get this started – apart from the tooling and a site – some truly useful contributions would be needed to demonstrate the value.

In the company where I work we have talked about sharing source code between managed packages but have not done it in any formalised way; I wonder if systems similar to what is described here are already in use in some companies? Or are already open sourced? Comments very welcome.

For reference, here is a slightly cut down NPM package.json example:

{
  "name": "protractor",
  "description": "Webdriver E2E test wrapper for Angular.",
  "homepage": "https://github.com/angular/protractor",
  "keywords": [
    "angular",
    "test",
    "testing",
    "webdriver",
    "webdriverjs",
    "selenium"
  ],
  "author": {
    "name": "Julie Ralph",
    "email": "ju.ralph@gmail.com"
  },
  "dependencies": {
    "selenium-webdriver": "~2.39.0",
    "minijasminenode": ">=0.2.7",
    "saucelabs": "~0.1.0",
    "glob": ">=3.1.14",
    "adm-zip": ">=0.4.2",
    "optimist": "~0.6.0"
  },
  "devDependencies": {
    "expect.js": "~0.2.0",
    "chai": "~1.8.1",
    "chai-as-promised": "~4.1.0",
    "jasmine-reporters": "~0.2.1",
    "mocha": "~1.16.0",
    "express": "~3.3.4",
    "mustache": "~0.7.2"
  },
  "repository": {
    "type": "git",
    "url": "git://github.com/angular/protractor.git"
  },
  "bin": {
    "protractor": "bin/protractor",
    "webdriver-manager": "bin/webdriver-manager"
  },
  "main": "lib/protractor.js",
  "scripts": {
    "test": "node lib/cli.js spec/basicConf.js; ..."
  },
  "license": "MIT",
  "version": "0.16.1",
  "webdriverVersions": {
    "selenium": "2.39.0",
    "chromedriver": "2.8",
    "iedriver": "2.39.0"
  },
  "readme": "...",
  "readmeFilename": "README.md",
  "bugs": {
    "url": "https://github.com/angular/protractor/issues"
  },
  "_id": "protractor@0.16.1",
  "_from": "protractor@0.16.x"
}

Winter ’14 – Maximum CPU time woes

From an org that has multiple managed packages installed plus additional source code:

Number of code statements: 148143 out of 200000 ******* CLOSE TO LIMIT
Maximum CPU time: 37266 out of 10000 ******* CLOSE TO LIMIT

Looks like a situation where counting the governor limit across all namespaces rather than per namespace really hurts. See e.g. Script Limits, Begone for the background to this.

Winter ’14 – no more code statement limit

Having bumped up against the 200,000 code statement governor limit in some valid cases, it’s good to see that this safety mechanism is changing (presumably) for the better:

There is no more code statement limit. Although we’ve removed this limit, we haven’t removed the safety mechanism it provided. Instead we’ve limited the CPU time for transactions. The limits are now 10,000ms for synchronous Apex and 60,000ms for asynchronous Apex.

(From Winter ’14 Developer Preview; see the clarifying comments too.)
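
The headroom can also be checked at runtime via the Limits class. A minimal sketch, assuming the Limits.getCpuTime and Limits.getLimitCpuTime methods are available in your org's API version:

// Minimal sketch: compare CPU time consumed so far against the transaction limit.
Integer used = Limits.getCpuTime();         // approximate CPU milliseconds consumed so far
Integer allowed = Limits.getLimitCpuTime(); // 10,000 ms synchronous, 60,000 ms asynchronous
System.debug('CPU time: ' + used + ' of ' + allowed + ' ms');
if (used > allowed * 0.8) {
    // e.g. stop here and defer the remaining work to an asynchronous job
    System.debug(LoggingLevel.WARN, 'Approaching the CPU time limit');
}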

Edit: Also see Script Limits, Begone!

“AsyncApexTests Limit exceeded” blocker

I’ve just hit this “AsyncApexTests Limit exceeded” error when I try to run tests in an org. Running the tests in a different way reports this:

To protect all customers from excessive usage and Denial of Service attacks, we limit the number of long-running requests that are processed at the same time by an organization. Your request has been denied because this limit has been exceeded by your organization. Please try your request again later.

Googling reveals posts like this. And the limit is documented in Understanding Execution Governors and Limits.

But having to wait 24 hours before I can continue working? FUBAR.

Some lessons learned delivering an enterprise application on Force.com

The app these lessons are based on uses dozens of related custom objects, and customers almost always want to add further custom fields and customization logic. The app is also constantly having new features added. These characteristics obviously influence the lessons learned…

  1. Managed released packages work well and so do patch releases. At this point we have created hundreds of versions as we develop incrementally. (Managed beta packages are fairly useless as they don’t support upgrading.) The version compatibility constraints leave junk in the data model and don’t protect against all breaking changes. Customers expect to be able to back out an upgrade, but unfortunately that isn’t supported. There is a major pain point in that the upgrade process does not propagate new versions of things like layouts.
  2. Customer-specific extensions can be delivered either via a further managed released package that builds on the core one, or via the Ant deploy task. Given that the former approach does not propagate many common changes on update, the latter approach is usually the best one to use.
  3. Version management just about works (for small teams). The managed package org is the definitive version: your Git or SVN copy is your best attempt to track it and doesn’t include information such as the exact set of components in the package. (We use the convention that everything in the org should be in the package.)
  4. The Eclipse-based Force.com IDE is poor (it has had only basic maintenance investment over the last few years) but necessary. It allows the code and other artifacts to be seen and searched as a whole and is the best place to see change and version information.
  5. Unit tests are the safety net that lets your managed package change and grow. Use a continuous integration server to make sure they are run frequently (and to make sure what you have in Git or SVN is complete).
  6. Governor limit violations mostly but not always result from a design error such as non-linear algorithms (hitting the code statements limit) or non-bulkified queries (hitting the number of SOQL queries limit). But they do come up in other situations. For example, your managed package controller starts some processing that consumes some of its governor limits directly and in its triggers; a trigger added by your customer also runs and updates some of your objects, consuming from its own set of governor limits; this then causes triggers back in the managed package to consume more of the managed package governor limits, exceeding a limit. Whatever the cause, hitting these limits is a brutal problem: where on other platforms performance might become poor, on Force.com your app is broken and your customer is left unable to work.
  7. There are two eras in how open for extension a Force.com app is. The boundary between these eras is the availability in Summer ’12 of Type.forName and Type.newInstance. These allow extension points to be built into an app and extension code to be added by supplying the names of classes that implement the required interfaces (see the first sketch after this list). JSON strings in static resources or directly in custom settings are also a help. Before this it was very hard to add external code.
  8. Embrace the use of Contact and Account. As well as helping your app share this information with other apps, some API calls require the Id value passed to be a Contact Id.
  9. Dynamic SOQL is inelegant (as you lose compile-time checks) but necessary in some cases. An example is where you want to do a deep clone of part of your object graph: use describe calls to get all the field names for each object so that custom fields that your managed package does not know about – ones added for specific customers – are included (see the second sketch after this list).
  10. While the execution order of before triggers compared to after triggers is defined and important to know and make use of, the execution order of separate triggers for the same event e.g. “before update” on a Contact is not defined. Within one code base discipline and patterns must be applied to define the order. But this problem is hard to address across multiple code bases e.g. your app and extending code triggers.
  11. While it is tempting to write an Apex class that deals with a single object on the grounds that the code will be cleaner and “it’ll never be used in a bulk situation”, sometime in the future it probably will be used in a bulk situation and will result in a governor limit error that stops some important data migration or change from being possible. Always deal in sets of objects.
  12. Client-side JavaScript using e.g. jQuery is a must-have these days. It is easy to introduce into Visualforce pages but has to be hacked into standard layout based UI – see TehNrd’s Show and Hide Buttons on Page Layouts. See Using JavaScript with Force.com for a good overview of other JavaScript techniques.
  13. Sometimes you have the difficult choice between delivering poor usability or instead depending on platform implementation that may change e.g. hack to find field ids.
  14. Profiles are hugely painful to keep updated over time and are rarely tested well enough. Do everything you can to keep the number of them in use small.
  15. On custom settings, and particularly if you are using profile or user-specific hierarchical ones, you will eventually need to use code to keep them updated consistently.
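
To make point 7 concrete, here is a rough sketch of the extension point pattern. The ContactEnricher interface, the Extension_Settings__c custom setting and its Enricher_Class__c field are made-up names for illustration, not from our app:

// Hypothetical extension point: the interface the managed package expects extension code to implement.
// In a real managed package this would need to be global so subscriber code could implement it.
public interface ContactEnricher {
    void enrich(List<Contact> contacts);
}

// Inside the managed package: look up the configured class name and instantiate it dynamically.
public class ExtensionPoints {
    public static ContactEnricher getEnricher() {
        Extension_Settings__c settings = Extension_Settings__c.getOrgDefaults();
        if (settings == null || String.isBlank(settings.Enricher_Class__c)) {
            return null;
        }
        Type t = Type.forName(settings.Enricher_Class__c);
        return t == null ? null : (ContactEnricher) t.newInstance();
    }
}

The customer (or a consultant) writes a class that implements the interface and puts its name in the custom setting; the managed package never needs to know about that class at compile time.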

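And for point 9, a sketch of using describe calls so that dynamic SOQL picks up subscriber-added custom fields too. Invoice_Line__c and its Invoice__c lookup are stand-in names for this example:

public class DeepCloneHelper {
    // Query every field of an object, including custom fields added in the subscriber org
    // that the managed package does not know about at compile time.
    public static List<SObject> queryChildren(Id parentId) {
        Map<String, Schema.SObjectField> fields =
            Schema.getGlobalDescribe().get('Invoice_Line__c').getDescribe().fields.getMap();
        String fieldList = String.join(new List<String>(fields.keySet()), ', ');
        String soql = 'SELECT ' + fieldList
            + ' FROM Invoice_Line__c WHERE Invoice__c = :parentId';
        return Database.query(soql);
    }
}
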
Winter ’12 Developer Console

I didn’t take much notice of the Winter ’12 Force.com Platform Release Developer Console as there was no mention of that important word “breakpoint”. But if you are in debug hell then the features that are provided to allow debugging without having to add System.debug calls – including the ability to capture and view the heap at points marked in your source code – are worth investigating.

This (hour-long) presentation demonstrates how to use the features.

It also suggests that the Eclipse tooling (and 3rd party additions to that tooling) will eventually be able to use the same API. I find the current browser-based console fragile (using Chrome) and a bit awkward to use as it packs a lot of tabs and frames into a small space; hopefully the richer UI facilities in Eclipse will help. But worse, on the sort of pages and code that require debugging, i.e. non-trivial larger ones, the volumes of data that the tooling attempts to move from server to client are so large that the time delays make the tooling barely usable at all.

Note that while this tooling can be used as you develop your managed package, when that managed package is installed in a customer’s org all the code-related debug information is unavailable (by default, though it can be turned back on via a support request), so the ability to debug is also lost.