We can all agree that searching for a JavaScript bug fix or answer on Google or Stack Overflow is not fun 🏴‍☠️.
Here are twenty short and powerful JavaScript techniques that will maximize productivity ⚡️ and minimize pain 🩸.
Let's dive into the code 🤘
Unique Array
Filter out duplicate values from an Array.
const arr = ["a", "b", "c", "d", "d", "c", "e"]
const uniqueArray = Array.from(new Set(arr));
console.log(uniqueArray); // ['a', 'b', 'c', 'd', 'e']
Unique Array of Objects
The Set object won't allow you to filter out duplicate objects, since each object is a different reference. JSON.stringify does the trick for us here.
const arr = [{ key: 'value' }, { key2: 'value2' }, { key: 'value' }, { key3: 'value3' }];
const uniqueObjects = Array.from(
  new Set(arr.map(JSON.stringify))
).map(JSON.parse);
console.log(uniqueObjects); // [{ key: 'value' }, { key2: 'value2' }, { key3: 'value3' }]
See a more efficient but slightly longer method in this comment.
Array Iterator Index
With the .map and .forEach JavaScript iteration functions, you can get the index of each item.
const arr = ['a', 'b', 'c'];
const letterPositions = arr.map(
  (char, index) => `${char} is at index ${index}`
);
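The same index argument works with .forEach:
arr.forEach((char, index) => {
  console.log(`${char} is at index ${index}`); // 'a is at index 0', 'b is at index 1', ...
});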
Split string by # of chars
We can use the .match regular expression function to split a string by n characters.
const str = "asdfghjklmnopq";
const splitPairs = str.match(/.{1,2}/g);
console.log(splitPairs); // ['as', 'df', 'gh', 'jk', 'lm', 'no', 'pq']
Explanation: In the regular expression /.{1,2}/g we used, the number 2 stands for how many characters we want to split by. If there is a remainder, this will still work.
Alternatively, if you want to split a string by n characters where n is subject to change, you can do it with new RegExp.
const splitPairsBy = (n) => str.match(new RegExp(`.{1,${n}}`, "g"))
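For example, reusing the str from above:
console.log(splitPairsBy(3)); // ['asd', 'fgh', 'jkl', 'mno', 'pq']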
Split string by different chars
Another regex hack with .match allows you to split a string like "aabbc" into an array ["aa", "bb", "c"].
const str = "abbcccdeefghhiijklll";
const splitChars = str.match(/(.)\1*/g);
console.log(splitChars); // ['a', 'bb', 'ccc', 'd', 'ee', 'f', 'g', 'hh', 'ii', 'j', 'k', 'lll']
Iterate through object
Object.entries allows us to turn an object into an array of key-value pairs, thus enabling us to iterate through it with a loop or an array iterator.
const obj = {
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
};
const iteratedObject = Object.entries(obj)
  .map(([key, value]) => `${key} = ${value}`);
console.log(iteratedObject); // ['key1 = value1', 'key2 = value2', 'key3 = value3']
Explanation: If obj is passed through Object.entries, it will look something like this:
[
  ["key1", "value1"],
  ["key2", "value2"],
  ["key3", "value3"]
]
Using the .map function alongside object destructuring lets us access the key-values.
Key-Value Array to Object
You can convert an "Object.entryified" array of key-values back to an object with Object.fromEntries.
const entryified = [
  ["key1", "value1"],
  ["key2", "value2"],
  ["key3", "value3"]
];
const originalObject = Object.fromEntries(entryified);
console.log(originalObject); // { key1: 'value1', ... }
Occurrence Counting
You might want to count how many times an item appears in an array. We can use the .filter function with an iterator to accomplish this.
const occurrences = ["a", "b", "c", "c", "d", "a", "a", "e", "f", "e", "f", "g", "f", "f", "f"];
// creating a unique array to avoid counting the same char more than once
const unique = Array.from(new Set(occurrences));
const occurrenceCount = Object.fromEntries(
  unique.map(char => {
    const count = occurrences.filter(c => c === char).length;
    return [char, count];
  })
);
console.log(occurrenceCount); // { a: 3, b: 1, c: 2, ... }
Check out a solid one-liner to do this in this comment!
Replacement Callback
The .replace function doesn't limit you to just replacing with a fixed string. You can pass a callback to it and use the matched substring.
const string = "a dog went to dig and dug a doggone large hole";
const replacedString = string.replace(/d.g/g, str => str + "gy")
console.log(replacedString); // a doggy went to diggy and duggy a doggygone large hole
Optional chaining
Many of you are familiar with running into undefined errors in JS; optional chaining can prevent a lot of that from happening.
The optional chaining (?.) operator accesses an object's property or calls a function. If the object accessed or function called using this operator is undefined or null, the expression short-circuits and evaluates to undefined instead of throwing an error.
const obj = {
  "a": "aaaaaaa",
  "b": null
};
console.log(obj.b.d); // throws an error
console.log(obj.b?.d); // returns undefined
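The same operator also guards function calls; a small sketch (the handler names are just illustrative):
const handlers = { onReady: null };
handlers.onReady?.(); // onReady is null, so this evaluates to undefined instead of throwing
handlers.onClick?.(); // the property doesn't exist at all; still undefined, no error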
Constrain a Number
Oftentimes you might need to constrain a number to be within a certain range. Doing it with the ternary operator every time you need it is a pain. A function is so much cleaner.
const constrain = (num, min, max) => {
  if (num < min) return min;
  else if (num > max) return max;
  else return num;
};
constrain(5, 1, 3) // 3
constrain(2, 1, 5) // 2
constrain(0, -100, 100) // 0
A much better way to do it is with Math.min and Math.max, like this:
const constrain = (num, min, max) => Math.min(Math.max(num, min), max)
Thanks @jonrandy 🙏
Indexing front and back of an array
The .at function allows you to index an array from the beginning and the end with positive and negative numbers.
const arr = [1, 2, 3, 4, 5];
arr.at(0) // 1
arr.at(1) // 2
arr.at(-1) // 5
arr.at(-2) // 4
Sort alphabetically
Sort an array of strings alphabetically
const words = ["javascript", "typescript", "python", "ruby", "swift", "go", "clojure"];
const sorted = words.sort((a, b) => a.localeCompare(b));
console.log(sorted); // ['clojure', 'go', 'javascript', 'python', 'ruby', 'swift', 'typescript']
💡 Tip: You can switch the order between ascending and descending by switching a.localeCompare(b) to b.localeCompare(a).
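For example, descending order:
const descending = words.sort((a, b) => b.localeCompare(a));
console.log(descending); // ['typescript', 'swift', 'ruby', 'python', 'javascript', 'go', 'clojure']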
Sort by Truthy/Falsy value
You can sort an array by a truthy/falsy value, placing those with the truthy value first and the falsy values after.
const users = [
  { "name": "john", "subscribed": false },
  { "name": "jane", "subscribed": true },
  { "name": "jean", "subscribed": false },
  { "name": "george", "subscribed": true },
  { "name": "jelly", "subscribed": true },
  { "name": "john", "subscribed": false }
];
const subscribedUsersFirst = users.sort((a, b) => Number(b.subscribed) - Number(a.subscribed))
Number(false) is equal to zero and Number(true) is equal to one. That's how we can pass it through the sort function.
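Since the built-in sort is stable in modern engines, the subscribed users keep their original relative order:
console.log(subscribedUsersFirst.map(user => user.name));
// ['jane', 'george', 'jelly', 'john', 'jean', 'john']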
Round Decimal to n digits
You can round a decimal to n digits with .toFixed. Note that .toFixed turns a number into a string, so we have to re-parse it as a number.
console.log(Math.PI); // 3.141592653589793
console.log(Number(Math.PI.toFixed(2))); // 3.14
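If the number of digits varies, a tiny helper keeps it tidy (the roundTo name is just illustrative):
const roundTo = (num, digits) => Number(num.toFixed(digits));
console.log(roundTo(Math.PI, 4)); // 3.1416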
Thanks for reading ✨!
I'm open to feedback. If you have any thoughts or comments be sure to share them in the comments below.
Top comments (53)
Unique Array of Objects
JSON.stringify will give different strings for { a: 1, b: 2 } and { b: 2, a: 1 } - are you saying these 2 objects are 'different'?
Split string by different chars
Using regex as it is intended is not a 'hack'.
Occurrence Counting
Reducing would seem a much better option here:
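A minimal sketch of a reduce-based counter (not necessarily the commenter's exact code), using the occurrences array from the post:
const occurrenceCount = occurrences.reduce(
  (counts, char) => ((counts[char] = (counts[char] || 0) + 1), counts),
  {} // the initial accumulator, passed as the 2nd argument to reduce
);
console.log(occurrenceCount); // { a: 3, b: 1, c: 2, d: 1, e: 2, f: 5, g: 1 }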
Constrain a number
Can be simplified:
Sort by Truthy/Falsy value
Avoiding Number is considerably better for performance:
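A sketch of that comparator with unary plus in place of Number (reusing the users array from the post):
const subscribedUsersFirst = users.sort((a, b) => +b.subscribed - +a.subscribed);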
I'd argue that the classic approach of occurrence counting is much easier to read than the reduce approach.
With reduce, there are a few bits that are just not easy to understand:
- {}, it's passed as the 2nd reduce argument, but it's just not easy to see it
- (a, b) => (..., a) - it's very hard to understand here that the return value of this function is 'a'
- a -> key and b -> occurrencesCount
Yeah, I was lazy with the argument names. Readability and understandability are purely subjective though
Subjective is a lazy excuse. UX is subjective, but we figure out which UX is better than another by trying to understand what approach works best for most people. Unfortunately, when it comes to readability, it's too often the case that the most senior developer on the team decides what's most readable for him, rejects all other opinions as "subjective", and moves forward in the fashion of being "pragmatic". This approach may be fine for other types of decisions; for example, having more experience tends to help with making architectural decisions. However, when it comes to readability, experience may work the opposite way, resulting in poorer choices. The more time we spend looking at weird code, the more readable it becomes to us. As experienced engineers we should be very aware of our bias towards making decisions that result in very steep learning curves.
Myself, as someone who has spent a lot of painful time reading through code and parsing reduce that I can't immediately understand (despite coding for 15+ years), as someone who has had to ask countless times for better naming in reduce statements (often encountering "sorry, I was lazy" excuses), and as someone who has had to explain countless times to more junior developers how reduce works, I conclude that reduce is a fairly poor practice in the majority of cases.
Reduce could have been a little better if the initial value were the first argument, as opposed to the last. The last value is just hard to notice, and logically makes no sense (it's initial yet comes later).
How can readability and understandability NOT be subjective? They're purely dependent on the reader. That's subjective by definition
Subjective does not mean that there isn't an approach that is overall better for the majority. Too often people reject arguments as "subjective" simply because they don't happen to follow their personal biases and likes/dislikes.
There are a lot of very objective arguments that I gave you for why reduce is bad (in this case). Yet the only thing you can do is reply that "it's subjective".
How can we approach this issue more objectively? If you would attempt to lose your biases, we could do it. Basically, I have stated my problems with 'reduce()' - what makes it bad. Even if you don't think (subjectively) that it's bad for you, the very fact that it's bad for me is already a concern that we should attempt to solve. I have proposed a solution for how we can address the readability issues that I had. The follow-up is for you to express whether you have a readability issue with the approach that I suggested. If you do, we can try to see if I can propose something that works for both of us. But you don't say it. Instead you quit early from engaging in the search for the best approach and religiously stick to what you think is subjective and not worth discussing - remaining stuck with your personal favorite approach even if it's not the best approach for both of us or the whole community.
I'm not rejecting them, merely pointing out that they're purely subjective. You didn't raise a single objective argument - 'not easy to see', 'very hard to understand', and 'hard to understand' couldn't be more subjective.
There are downsides to rigidly promoting readability and 'clean' code above everything else. I brought these up in a previous post:
Preaching 'Clean Code' is Lowering the Quality of Developers
You say it's subjective, but I cannot agree with this. It comes from my personal experience of teaching others and reading code, and I have 15 years of experience in that. But if you are not convinced by my experience, that is fine. It is possible to set up an experiment that could confirm or deny the premise that I make. The experiment could look like this: we take 200 random developers. We split them in half. We present them with a piece of code. We ask them a comprehension question, which we ask them to answer as fast as possible. Alternatively, we can ask them to modify the code to do something slightly different - this would test not only how easy it is to comprehend the code but also how easy it is to change it, which is also important. We compare the results of the 2 groups. And we have an objective answer.
...based on the subjective opinions of 200 developers. The fact remains that readability and understandability are ultimately subjective.
My opinions are also drawn from long experience. I've been programming for 39 years, and doing it professionally for around 27.
The answer will not be subjective. It will be an objective result (the time it takes to get to the correct answer). I'm not asking a "choose your favorite" question. I would be asking "what is the result of the code" and measuring the time it takes to get to the correct answer. Some developers are smarter than others, but on average one group will prevail over the other. Naturally, the code that is easier to understand will produce faster results. An alternative task can be to modify the code to do something else. Again we can measure the time it takes each group to get to the modified code - the group that gets there faster is the objective winner. And an even better approach - one that would truly make the biggest difference between the two approaches - could be to throw in a subtle bug and ask developers to find and fix it. While in the case of comprehension developers may often make a confident guess based on looks, when it comes to subtle bugs you only get to fix them when you have truly parsed and perfectly understood every comma in the code.
Totally correct, the results of your poll would be objective - a mere accounting of the results.
However, none of this alters the fact that readability and understandability in themselves are subjective, which is the point you seem to take issue with.
It's not subjective. It has easily measurable effects. Easier-to-read code is faster to spot bugs in, faster to fix bugs in, and faster to teach to an unfamiliar developer who's just starting their career. It starts from a discussion on what is hard to understand (the very points I mentioned). And if you don't agree with my complaints, then please run a randomized trial and prove to me that my problems are only in my head and that other people don't have the kind of issues that I encounter when reading your poorly written code.
Again, I never disagreed with you. I merely pointed out that the properties of readability and understandability are subjective, which - by definition - they are.
Just gonna go ahead and throw this Jake Archibald Twitter thread into the mix: twitter.com/jaffathecake/status/12...
As a fellow once-frequent, now-seldom user of reduce, I tend to agree with him.
I can agree that reduce does get a little bit hard to read, but one-liners or single array iteration functions are super fun to use.
We often end up spending more time fixing bugs than writing code. How fun is it to fix bugs in code that someone found fun to write?
I find it more interesting, educational, and stimulating to fix bugs that are challenging - rather than mundane stupid mistakes. It does nothing but sharpen your skills.
If you aren't enjoying the work you do and the code you write - why are you a programmer? The thrill of solving puzzles and gaining understanding is what drew me to writing code, and I dread the day that it ceases to be like that.
There's beauty in code, elegance in syntax, and art in converting your thought processes into functioning programs. I wish all developers could experience it like this.
I guess we can all agree that nobody likes to waste time on mundane mistakes. The question is what causes mundane mistakes and how to avoid them. And how can we make sure that mistakes are as easy to spot and fix as possible? The way I see it, some of the ideas that you are trying to promote are in fact often a source of mundane mistakes that are difficult to detect, yet you're keen to brush that off as simply being subjective.
Regarding beauty and subjectivity: yes, the beauty of code is subjective. But it can be quantified. Subjectivity and beauty are merely a shortcut. By practicing one approach or another we train our brains in what is beautiful and what's not, so that we can more quickly judge what's better without getting too deep into the details of why something is better or worse - getting into details every time would waste a lot of our time. Yet going into details and turning the subjective into the objective is very often possible with more effort put into reasoning, argumentation, thought experiments and real experiments.
I can tell you that the code you wrote is ugly and that this or that would be more beautiful. You might agree with that and we'd save a lot of time. On the other hand, if you don't agree, that doesn't necessarily mean there is no objectively better or worse approach here. We can get deeper into the conversation and maybe we'd find quantifiable ways to judge which approach is better.
To me, the approach that is easiest to read and understand and requires the lowest learning curve is the best and the most beautiful - that's the brain shortcut I often apply. And I cannot buy the idea that introducing unnecessary complexity as a tool for gatekeeping less capable individuals is a beneficial practice (I gathered that you had this idea in your take on clean code). While gatekeeping using complexity can indeed work, and I've seen projects getting away with it, I just don't think it's the best we can do.
Seems like you misunderstood my take on clean code too
Wow, thank you for these suggestions! Mind if I link to your comment from the post?
Sure, no problem!
I'm extremely sceptical about this, as any JS engine worth its salt should be able to figure out that +x has identical semantics to Number(x).
JSBench confirms no noticeable difference (Chrome 110 on Windows 11) jsbench.me/c9le9hworr/1
I tried perf.link - it was faster every time (Firefox and Chrome, Linux and Win11) - up to 30% faster in fact.
Even if it's 30% faster, it won't matter in the end.
Why? Because you won't use +x or Number(x) with 1 million operations in a second. You'll use it 99% of the time less than 3 times in a row. 30% is worthless here.
But using Number(x) makes it very easy to understand the intention of the code. Using +x will be missed by a lot of devs and will only cause trouble.
Readability over pretty much anything, except when you absolutely need the performance, which is very rare. And if you disagree with this, then let's just agree to disagree...
Agree with this. Number is much more noticeable at-a-glance in the source code, whereas any minor improvement in perf doesn't make any practical difference and could easily be erased or reversed by future engine optimizations in any case.
Small correction to my original comment, though — it turns out that since the introduction of the bigint data type, +x and Number(x) are no longer semantically identical: Number(1n) gives 1, whereas +1n throws an error, because bigints can only be explicitly converted to numbers, not coerced. IMO, this is another argument in favor of Number(x), as it consistently works for all data types of x, without throwing (unless you do something crazy like x = { valueOf() { throw '' } }).
Nice post! Consider the use of the spread operator in the first two tips instead of Array.from():
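A minimal sketch of what the spread-based versions could look like (reusing the arr variables from the post, not necessarily the commenter's exact code):
const uniqueArray = [...new Set(arr)];
const uniqueObjects = [...new Set(arr.map(JSON.stringify))].map(JSON.parse);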
The spread operator doesn't work with TypeScript and Set, since the type it returns isn't an array, so that's why I did that in this post.
If you're not using TS, the spread operator can absolutely be used.
Thank god I don't use TS.
The spread syntax is for spreading any iterable - which a Set very definitely is. It is useful for way more than just Arrays or Strings - which TS seems to insist it be one of.
High five 🖐🏽. I'm one of the rare ones who avoid TS at all costs. No need to make our lives harder as devs just to get on the bandwagon.
That's not true, here is a running example of just that: typescriptlang.org/play?#code/MYew...
Not a fan of TS, but is this a recent addition? I tried a TS repl (replit.com/languages/typescript) and got this:
Huh, interesting. TIL!
🚩 Red flag! I wouldn't trust the above code for general use.
Your Unique Array of Objects solution uses the incredibly expensive JSON.stringify and JSON.parse to do object comparison. While the code looks neat it is probably the least efficient way to go about it.
A better approach would use _.isEqual for deep comparisons, or a concrete comparison on the significant fields, if not all fields are significant (which they rarely are --- e.g. an id field may imply the state of the other fields).
As others have pointed out, your solution is not even correct.
I found this post via the "Top" posts page, which seems to be because it is getting a lot of engagement. I can't help but wonder whether your bad advice is intentional---to get engagement. You've gotten lots of feedback, but made no adjustments to the post.
The only thing these tips will kill is your app performance.
Terrible, terrible advice on the unique object set with the help of JSON.stringify. Terrible performance. And not guaranteed correct results: an object after serialization and deserialization might not even be the same.
Parsing and stringifying does sort of send a shiver down my back, and I guess that does affect performance quite a bit.
And yes, it is probably true that JSON.stringify can yield incorrect results since non-primitive object key-values are stripped out iirc
I mean, "killer techniques"? Naah, pretty basic stuff - good post for beginners, don't get me wrong. Also there's a better way to get the same result on the Occurrence Counting technique, check this:
You make just one loop to get the occurrence counting. I think it could be optimized more using maps.
I guess that makes sense, some of these are more for beginners.
That does seem like a more efficient way to do it, thanks for the suggestion.
really helpful, thanks!
Wasn't aware of the Object.fromEntries method, that's kind of useful! Certainly more succinct than iterating through a bunch of entries and extracting key values.
a.w.e.s.o.m.e
Great post!
Do you mind if I translate into Japanese, and post it to dev community in Japan?
Sure, as long as you link to the original post.
Thanks! Definitely.
Great post.