A comparison of different approaches to operating on an array

I've been in the process of moving some of my digital life around, and one thing I've had to do is download all of my photos from Google Photos. Thanks to the way those are organized, I found the need to rearrange them, so I wrote a little Node script to do it. What the script does is not entirely relevant for this post, so I'm not going to go into detail. Here's the bit I want to talk about (edited slightly for clarity):

const {execSync} = require('child_process')
const path = require('path')

const lines = execSync(`find "${searchPath}" -type f`).toString().split('\n')
const commands = lines
  .map(f => f.trim())
  .filter(Boolean)
  .map(file => {
    const destFile = getDestFile(file)
    const destFileDir = path.dirname(destFile)
    return `mkdir -p "${destFileDir}" && mv "${file}" "${destFile}"`
  })
commands.forEach(command => execSync(command))

Basically all this does is use the Linux find command to get a list of files in a directory, split the output of that command into lines, trim them to get rid of whitespace, remove empty lines, map each remaining line to a command that moves that file, and then run those commands.

A couple of people who saw this code suggested I use reduce instead. I'm pretty sure both of them were suggesting it as a performance optimization, because you can reduce (no pun intended) the number of times JavaScript has to loop over the array.

Now, to be clear, there were about 50 thousand items in this array, so that's definitely more than the few dozen you deal with in typical UI development. But I first want to make the point that for one-off scripts you run once and then you're done, performance should basically be the last thing to worry about (unless what you're doing really is super expensive). In my case, it ran plenty fast. The slow part wasn't iterating over the array multiple times; it was running the commands.

A few other people suggested that I use Node APIs or even open-source modules from npm to help run these scripts because it would "probably be faster and work cross-platform." Again, they're probably not wrong, but for one-off scripts that are "fast enough," those things don't matter. This is a classic example of applying irrelevant constraints to a problem and ending up with a more complicated solution.

In any case, I did want to address the idea of using reduce instead of the map, filter, then map chain I have going on there.

With reduce

Here's what that same code would look like if we used reduce:

const commands = lines
  .reduce((accumulator, line) => {
    const file = line.trim()
    if (file) {
      const destFile = getDestFile(file)
      const destFileDir = path.dirname(destFile)
      accumulator.push(`mkdir -p "${destFileDir}" && mv "${file}" "${destFile}"`)
    }
    return accumulator
  }, [])

Now, I'm not one of those people who think that reduce is the spawn of the evil one (check out that thread for interesting examples of reducing), but I do feel like I can recognize when code is simpler or more complex, and I'd say that the reduce example here is definitely more complex than the chaining example.

With loop

Honestly, I've been using array methods so long, I'll need a second to rewrite this as a for loop. So... one sec...

Ok, here you go:

const commands = []
for (let index = 0; index < lines.length; index++) {
  const line = lines[index]
  const file = line.trim()
  if (file) {
    const destFile = getDestFile(file)
    const destFileDir = path.dirname(destFile)
    commands.push(`mkdir -p "${destFileDir}" && mv "${file}" "${destFile}"`)
  }
}

Yeah, that's definitely not any simpler either.

So chaining it is

I'd say most of the time, I'm going to be chaining. If I have a performance concern about iterating over the array multiple times, then I'll measure, compare reduce and loops, and use whichever is faster (I personally don't think either has a leg up on being more understandable).
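
If I ever did need to measure, something like this would settle it for my actual data. The array here is made up to roughly match the size of the real one, and the commands are simplified, so treat any numbers it prints as illustrative, not as results I'm claiming:

```javascript
// Hypothetical micro-benchmark comparing the three approaches
// over an array about the size of the one in the script (~50k lines).
const lines = Array.from({length: 50000}, (_, i) => ` file-${i}.jpg `)

const time = (label, fn) => {
  const start = process.hrtime.bigint()
  const result = fn()
  const ms = Number(process.hrtime.bigint() - start) / 1e6
  console.log(`${label}: ${ms.toFixed(2)}ms (${result.length} items)`)
  return result
}

time('chaining', () =>
  lines.map(f => f.trim()).filter(Boolean).map(file => `mv "${file}"`),
)

time('reduce', () =>
  lines.reduce((accumulator, line) => {
    const file = line.trim()
    if (file) accumulator.push(`mv "${file}"`)
    return accumulator
  }, []),
)

time('for loop', () => {
  const commands = []
  for (let index = 0; index < lines.length; index++) {
    const file = lines[index].trim()
    if (file) commands.push(`mv "${file}"`)
  }
  return commands
})
```

In practice you'd want to run each variant many times and discard warm-up runs before trusting the comparison, but even this rough version would tell you whether the difference matters at all.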