JavaScript: Squeezing performance out of Dependents

I’ve been spending quite a bit of time working on Sublime Dependents: a free Sublime Text 3 plugin that helps you navigate front-end codebases. The plugin was built for large, production codebases (originally built for use at Bēhance), so performance is a priority. In this post, I’ll talk about some performance optimizations (in the context of the plugin, but applicable to other node-based static-analysis tools), expanding on my original post about the tool.

It’s been a ton of fun to work on this plugin because it’s valuable (using the plugin has become second nature at Bēhance and folks feel lost and pissed when the tool breaks), challenging (the dependency graph is the core data structure), and innovative (loads of npm modules were created for the logic behind the plugin’s features). If you use Sublime Text 3, give the plugin a try and let me know if it breaks.

Performance is key

Because the tool was created for immediate use within a huge codebase (1000+ JS modules at Bēhance), key features like finding the dependents of the current file had to be performant. After talking about using multi-core processing (see the original post) to get the fetching of dependents down to ~2s, our codebase started to bloat with additional features and vendor dependencies of varying sizes. This negatively impacted performance (a natural consequence of a growing codebase): a run of the dependents feature crept back up to 3 to 4 seconds on average.

No one complained (except me). However, it was clear that performance would continue to degrade over time and that parallelizing features wasn’t a magic bullet. After some profiling, I came up with a few performance enhancements.

I should probably talk about the hardware that I’m running before giving millisecond counts: MacBook Air (13-inch, Mid 2011), 1.7 GHz Intel Core i5, 4 GB 1333 MHz DDR3.

Avoid reparsing files

The first speedup was to avoid double-parsing files. Dependents uses a tool that I wrote called precinct to detect the module syntax of the JS file (via mrjoelkemp/module-definition) and then extract its dependencies (via a number of detectives). Both steps require traversing an abstract syntax tree (AST) and since the finding of the module syntax and the extraction of the dependencies are separate algorithmic processes, they’re separate npm modules (dependencies of precinct). As a result, the modules didn’t know about each other and each did a parse of a given file’s content.

For an everyday-sized JS file, the effect of this redundant double parsing is negligible (tens of milliseconds, a hundred max). However, for big vendor files like jQuery, the double parsing hurts: it takes ~750ms to parse jQuery just once using Esprima.

So, I retooled precinct, module-definition, and the numerous detectives to all accept an already parsed AST to avoid reparsing. Sharing the AST and avoiding reparsing saved a few hundred milliseconds across the Bēhance codebase.
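As a sketch of the change (with illustrative names, not precinct’s actual API), each step now accepts either raw source or an already parsed AST, and only parses when handed a string:

```javascript
// Illustrative sketch of the shared-AST pattern; parse() stands in for a
// real parser like Esprima or Acorn and counts how often it gets invoked.
function parse(src) {
  parse.count = (parse.count || 0) + 1;
  return { type: 'Program', body: [], source: src };
}

// Each module parses only if it was handed raw source instead of an AST
function getModuleType(srcOrAst) {
  var ast = typeof srcOrAst === 'string' ? parse(srcOrAst) : srcOrAst;
  // ...walk the ast to detect AMD/CommonJS/ES6 module syntax...
  return 'commonjs';
}

function getDependencies(srcOrAst) {
  var ast = typeof srcOrAst === 'string' ? parse(srcOrAst) : srcOrAst;
  // ...walk the ast collecting require()/define() arguments...
  return [];
}

// Parse once up front and share the AST between both steps
var sharedAst = parse('module.exports = 1;');
getModuleType(sharedAst);
getDependencies(sharedAst);
console.log(parse.count); // 1 parse instead of 2
```

Passing strings to both functions would have cost two parses; sharing the AST costs one.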

Swapping Esprima for Acorn

The Acorn parser touts being more performant than Esprima at producing an AST. None of Sublime Dependents’ features (so far) use Esprima’s token list, so swapping that parser for another was feasible.

All of the detectives and many of my static-analysis centric tools use the same AST traversal library (mrjoelkemp/node-source-walk), so swapping the parser only had to happen there. Additionally, support for sharing an AST was also isolated to that location (yay for simple, reusable modules).

Swapping Esprima for Acorn was almost a drop-in replacement (aside from specifying an ecmaVersion). The performance speedup was also a few hundred ms in total. Parsing jQuery was cut down to around ~550ms with Acorn from the ~750ms with Esprima.
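Because every consumer goes through the shared traversal module, the swap was confined to one spot. A rough sketch of that shape (illustrative names, not node-source-walk’s actual API):

```javascript
// The walker owns the parser, so consumers never touch Esprima or Acorn
// directly; swapping parsers means changing one constructor argument.
// The real swap was roughly:
//   esprima.parse(src)  ->  acorn.parse(src, { ecmaVersion: 6 })
function Walker(parserImpl) {
  this.parser = parserImpl;
}

Walker.prototype.walk = function (src, visitor) {
  var ast = this.parser.parse(src);
  // ...recursively visit each node of ast, calling visitor(node)...
  visitor(ast);
};

// Stand-in for the real parser module
var acornLike = {
  parse: function (src) { return { type: 'Program', parsedBy: 'acorn' }; }
};

var parsedBy;
new Walker(acornLike).walk('var a = 1;', function (node) {
  parsedBy = node.parsedBy;
});
console.log(parsedBy); // 'acorn'
```

Every detective gets the new parser for free, since none of them construct it themselves.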

Stop parsing certain files

When computing which files depend on your modules, it’s never the case that a third-party library imports your application code’s modules. Hence, those libraries will never be dependents of your application code (unless you’ve hacked the library, in which case it’s effectively part of your application code). Those libraries shouldn’t even be considered as candidates, and shouldn’t be parsed at all.

By default (and as a quick hack), node-dependents (one of the node libraries used by Sublime Dependents) skipped files in common vendor directory names (vendor, bower_components, node_modules). Allowing user-defined directories and files to exclude from processing/parsing was a natural and necessary feature for improving performance and adapting to uncommon directory structures.
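A minimal sketch of that directory filter (illustrative; node-dependents’ real implementation may differ):

```javascript
// Skip any file whose path contains one of the excluded directory names
var defaultExclusions = ['vendor', 'bower_components', 'node_modules'];

function shouldProcess(filepath, exclusions) {
  var parts = filepath.split('/');
  return !exclusions.some(function (dir) {
    return parts.indexOf(dir) !== -1;
  });
}

console.log(shouldProcess('app/js/main.js', defaultExclusions));                     // true
console.log(shouldProcess('node_modules/jquery/dist/jquery.js', defaultExclusions)); // false
```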

The main hiccup that prevented me from implementing this earlier was that the Sublime plugin shells out to node-dependents for various features, and I wasn’t sure how to supply an array of items to a tool via its command-line interface (CLI). It turns out that joining the array elements with a delimiter (like a comma) is a simple and effective fix.
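For example (the flag name here is hypothetical), the plugin side joins the exclusions into one argument and the node tool splits them back out:

```javascript
// Plugin side: flatten the list into a single CLI argument
var exclusions = ['vendor', 'bower_components', 'node_modules'];
var flag = '--exclude=' + exclusions.join(',');

// Tool side: recover the list from the argument (e.g. from process.argv)
function parseExcludeFlag(arg) {
  var value = arg.split('=')[1] || '';
  return value ? value.split(',') : [];
}

console.log(parseExcludeFlag(flag)); // [ 'vendor', 'bower_components', 'node_modules' ]
```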

Once this feature landed, as expected, the gains were incredible. A single run of finding the dependents went down to ~2s (coupled with the other performance enhancements I discussed earlier). Again, these numbers are within a large codebase; small codebases were already performant without this feature.

Other ideas

Every time I need to fix the performance of these features, the thought of just using regexes instead of an AST pops into my head. Maybe one day, when I’m fed up, I’ll give it a go, but that would only cut down a few hundred ms at this point. I’d gladly pay a few hundred ms to avoid maintaining tricky regexes. Can I get an amen to that?

One of the underlying libraries, node-dir, is an async tool while most of the other tools that I’ve written for dependents are synchronous. So, I’ve been thinking about retooling node-dir to be purely synchronous, but that wouldn’t net too big of a win (100 to 150ms at best).

I’m also thinking about doing some background parsing in the plugin: every 5 or 10 seconds, running the process of determining the dependents of every non-excluded module and caching the results to disk. I’m positive that IDEs do this type of indexing for performance. Of course, if a user modifies the dependency tree within that 10-second interval, there’s the potential for stale/incorrect results. That possibility of error is the main reason I’m against this type of caching, but the perf benefits are clear.

If you have any other perf ideas, hit me up. Cheers and thanks for reading!