Array handling in JS with functional purity

by Alexis Hope, 29 Jul 2021

A convention has sprung up recently for writing pure functions in JavaScript. There are pros and cons to it when working with aggregate types. Let's understand what's going on so we know when to use it.

The short explanation: primitive types in JavaScript are passed by value, while composite types are passed by reference. To maintain functional purity, the convention is to clone composite inputs at the top of the function body before using them.

function example(input) {
  const pureInput = [...input] // shallow clone, so the caller's array is never mutated
  // ...
}

So JavaScript doesn't give us functional purity, but we mimic it by following a convention. The larger consequence of this is a potentially significant performance impact. Passing composite values by reference is a runtime optimization that saves copying large blocks of memory on every function call. Consider large arrays being passed around the system and duplicated on every call 😬.
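
To get a feel for the cost, here's a rough micro-benchmark sketch of my own (the helper names are mine, and the numbers will vary by engine and machine):

const big = new Array(1e6).fill(0)

const byReference = (arr) => arr.length  // reads the array in place
const byClone = (arr) => [...arr].length // clones the array before reading it

console.time('by reference')
for (let i = 0; i < 100; i++) byReference(big)
console.timeEnd('by reference')

console.time('by clone')
for (let i = 0; i < 100; i++) byClone(big)
console.timeEnd('by clone')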

Detailed Explanation

Primitive types (similar to what C calls scalar types) in JavaScript are things like

  1. boolean
  2. string
  3. number (there is no separate integer or float type)
  4. bigint
  5. symbol
  6. undefined
  7. null
Composite types are built up from primitive types

  1. Object
  2. Array (itself a kind of Object)
  3. Function (also a kind of Object)
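
All the composite types report as objects under typeof, with functions getting their own special result:

typeof {}              // "object"
typeof []              // "object" (use Array.isArray to tell arrays apart)
Array.isArray([])      // true
typeof function () {}  // "function" (functions are objects too)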

When using primitive types this isn't an issue, as function arguments are copied into new variables local to the function scope.

Side note: primitive values are copied, with the practical exception of string: because strings are immutable, engines can pass them by reference internally without any observable difference.
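
We can't observe that implementation detail directly, but we can see the immutability that makes it safe, as in this small sketch:

const s = "abc"
s[0] = "z" // silently ignored (throws a TypeError in strict mode)
s          // "abc" (strings cannot be mutated in place)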

const a = 1

function increment(param) {
  param++ // mutates only the local copy
  return param
}

increment(a) // returns 2
increment(a) // still returns 2
a // initial value is preserved as 1

Where we run into trouble is with composite types. For performance reasons they are not copied into the function scope; instead, a reference to the original value is passed.

const b = []

function push(arr, param) {
  arr.push(param) // mutates the caller's array
  return arr
}

push(b, 1) // [1]
push(b, 1) // [1, 1]
b // [1, 1]

So, with the trend toward more functional programming in JavaScript and keeping functions pure, duplicating the contents of the input is encouraged. This avoids unexpectedly mutating a composite value the caller still holds.

const c = []

function pushPure(arr, param) {
  const clone = [...arr]
  clone.push(param) // mutates only the clone, not the caller's array
  return clone
}

pushPure(c, 2) // [2]
pushPure(c, 2) // [2]
c // []
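
A shorthand I often reach for (equivalent to pushPure above, just folded into one expression; the name pushPureShort is mine) clones and appends in a single spread:

const pushPureShort = (arr, param) => [...arr, param]

pushPureShort(c, 2) // [2]
c // [] (still unchanged)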

Some functions I’ve seen in the wild.

const functionInTheWild = (input) => {
  return [...input].some(i => i != null) // placeholder predicate; the original snippet was truncated here
}

Here we've copied the array just to check its contents, which is a wasted duplication: .some() only reads the array and never mutates it. We need to be conscious of how this works and when to employ it. The benefit of duplicating is a more defensive function that is less likely to have adverse effects, but encouraging unnecessary duplication slows down the application.
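
A non-copying version of the same function (again with a placeholder predicate, and a name of my own) could simply be:

const functionInTheWildFixed = (input) =>
  input.some(i => i != null) // .some() never mutates, so no clone is needed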

One risk when using the spread operator (...) is that it does not deep clone nested objects; it only copies one level. In the next example both arrays are changed:

const budgetData = [
  ["foo"],
  ["bar"],
  ["baz"],
]
const backup = budgetData // note: this is just another reference, not a copy

const clonedData = [...backup] // shallow clone: the inner arrays are still shared

clonedData[0][0] = "abc"

console.log(backup) // [["abc"], ["bar"], ["baz"]] (the shared inner array was mutated)

To be safe we need to do a deep clone, with a utility like cloneDeep from lodash. This is the safest way to clone data, but it comes with an even greater performance cost.

const clonedData = _.cloneDeep(backup)
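
A quick sketch to verify the difference (assuming lodash is installed; the variable names are mine):

const _ = require('lodash')

const original = [["foo"]]
const deepClone = _.cloneDeep(original)

deepClone[0][0] = "abc"
original // [["foo"]] (the nested array is no longer shared)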

Another issue with this cloning convention is that it relies on developers to enforce it, letting some instances slip through or encouraging nitpicking on PRs over the purity of functions.

If you really want purity, these two tools come to mind.

The ESLint FP rule set, which includes a no-mutation rule: https://github.com/jfmengels/eslint-plugin-fp
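
For example, a minimal .eslintrc.js sketch enabling a couple of its rules (rule names taken from the plugin's README):

// .eslintrc.js
module.exports = {
  plugins: ['fp'],
  rules: {
    'fp/no-mutation': 'error',         // bans reassignment and property mutation
    'fp/no-mutating-methods': 'error', // bans push, splice, sort, etc.
  },
}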

Immutable.js, offering non-native immutable data structures: https://immutable-js.com/
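
With Immutable.js the mutation problem disappears, because every "mutating" operation returns a new structure, roughly like this:

const { List } = require('immutable')

const xs = List([1, 2])
const ys = xs.push(3) // returns a new List; xs is untouched

xs.toJS() // [1, 2]
ys.toJS() // [1, 2, 3]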

A small warning, though: both of these tools add a fair amount of overhead to a project if you're not prepared to follow their conventions.

A lot of the time all we're after is semantically pure functions. In JS, without immutable data structures, striving for absolute purity is a slippery slope.