DEV Community

Adam Nathaniel Davis


Better TypeScript... With JavaScript

[NOTE: The library that I reference throughout this post - allow - is now available in an NPM package. You can find it here: https://www.npmjs.com/package/@toolz/allow]

In my previous post (https://dev.to/bytebodger/tossing-typescript-1md3) I laid out the reasons why TypeScript is, for me, a big #FAIL. A lot of extra work in return for a false sense of security and few tangible benefits.

I won't rehash those arguments again. You can browse through that article if you're interested. In this article, I'll be outlining my practical-and-tactical solution in a purely-JavaScript environment.

FWIW, I wrote an article somewhat-similar to this one back in March (https://dev.to/bytebodger/javascript-type-checking-without-typescript-21aa). While the basis of my approach hasn't changed radically, the specifics of my implementation are quite different.

All of the code for this article can be referenced in this single file:

https://github.com/bytebodger/spotify/blob/master/src/classes/allow.js

It's part of my Spotify Toolz project, although I'll also be porting it into my type-checking library.



Type-Checking Goals

Without rehashing content from my previous articles, suffice it to say that there are several key factors that I find important in type-checking:

  1. I care almost exclusively about ensuring type safety at runtime. Telling me that your app compiled means almost nothing to me. Your app compiled. I tied my shoes. We didn't drive off a cliff. Do we all get cookies?? If my app compiles, that's no guarantee that it runs. If my app runs, it's guaranteed to compile. So I focus on runtime.

  2. I care almost exclusively about ensuring type safety at the interfaces between apps. Those could be interfaces between my app and some outside data source - e.g., an API. Or it could be the interface between one function and another. It doesn't matter if the exchange reaches outside my app, or whether the exchange is entirely encapsulated by the app. The point is that, if I know I'm getting "clean" inputs, there's a much greater likelihood that any logic I've written inside the app will perform as expected.

  3. Type-checking should be clean. Fast. Efficient. If I have to spend countless hours trying to explain functioning code to a compiler, then that type-checking is more of a hurdle than a feature. This also means that type-checking should be as complete as it needs to be - and no more. In other words, if I'm receiving an object from an API response that contains 100 keys, but I'm only using 3 of those keys, then I shouldn't have to define the other 97.

  4. "Defensive programming" should be kept to a minimum. In my previous post, @somedood made a good point about the headaches of having to use a continual stream of if checks to ensure that we've received proper data. I thoroughly understand this. Any solution that requires constantly writing new if checks is, itself, a non-solution.



The Basic Approach

In my previous article, I outlined one scenario where we could be passing in a number - but would still need to check inside the function to ensure that the argument is, indeed, a number. The scenario looks like this:

const createId = (length = 32) => {
  if (isNaN(length)) length = 32;
  // rest of function...
}

The simple fact is that, as long as we're targeting runtime issues, there really is no way around this. That's why I focus nearly all of my validations on runtime validations. Because I'm not interested in the faux-security that comes with successful compilation.

I want to know, in real-time, whether something will fail at runtime.


So my "answer" to this problem is that, if I can't eliminate the inside-the-function-body validations, I at least want to make them clean, fast, and efficient. With no manual need to craft fancy if conditions.

In the code linked-to above, I have a basic validation class that I've called allow. allow contains a series of methods that check for various data types.

One key difference in my new approach is that each method is chained. This means that I can perform all of my validations with a single line of code. So whether a function has one argument or a dozen, I don't have copious LoC inside the function spent on validating those inputs.

Another difference is that my latest approach doesn't return any validation values. The methods simply throw on error or... nothing happens. Which is exactly what I want to happen.

Of course, the code can be tweaked so that, in production, the "failure" results in some kind of silent error. But the key is that, if a function receives "bad" data, then I want that function to bail out in some way.

So the following examples will all look similar to this:

const myFunction = (someBoolean = false, someString = '') => {
  allow.aBoolean(someBoolean).aString(someString);
  // rest of function...
}
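The chaining works because every validation method either throws or returns the instance itself. A minimal sketch of that mechanism might look like this (my own approximation, not the actual allow.js source; the method names just mirror the ones used below):

```javascript
// Hypothetical sketch of the chaining mechanism - not the actual allow.js
// source. Each method throws on bad input, or returns `this` so that the
// next check can be chained onto it.
class Allow {
  fail(value, expectation) {
    throw new Error(`${JSON.stringify(value)} is not ${expectation}`);
  }

  aBoolean(value) {
    if (typeof value !== 'boolean') this.fail(value, 'a Boolean');
    return this;
  }

  aString(value) {
    if (typeof value !== 'string') this.fail(value, 'a string');
    return this;
  }
}

const allow = new Allow();

const myFunction = (someBoolean = false, someString = '') => {
  allow.aBoolean(someBoolean).aString(someString);
  // rest of function...
};
```

Because every method returns `this`, a signature with a dozen arguments still only costs one line of validation.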

 

The Simplest Validations

I call these "simple" because there's nothing to do but to pass in the value and see if it validates. They look like this:

// booleans
const myFunction = (someBoolean = false) => {
  allow.aBoolean(someBoolean);
  // rest of function...
}

// functions
const myFunction = (someCallback = () => {}) => {
  allow.aFunction(someCallback);
  // rest of function...
}

// React elements
const myFunction = (someElement = <></>) => {
  allow.aReactElement(someElement);
  // rest of function...
}

Nothing too magical about these. aBoolean(), aFunction(), and aReactElement() will all fail if they do not receive their respective data types.


Enums

Enums can be checked against a simple array of acceptable values. Or you can pass in an object, in which case the object's values will be used to gather the acceptable values.

// one of...
const statuses = ['open', 'closed', 'hold'];

const myFunction = (status = '') => {
  allow.oneOf(status, statuses);
  // rest of function...
}

const colors = {
  red: '#ff0000',
  green: '#00ff00',
  blue: '#0000ff',
}
const myFunction = (color = '') => {
  allow.oneOf(color, colors);
  // rest of function...
}
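Under the hood, an enum check only needs to normalize the object case into an array of values before testing membership. A sketch of that logic (my own, shown as a standalone function for brevity):

```javascript
// Sketch of an enum check: accepts an array of allowed values, or an
// object whose values become the allowed list. Throws on a miss,
// otherwise does nothing.
const oneOf = (value, allowedValues) => {
  const values = Array.isArray(allowedValues)
    ? allowedValues
    : Object.values(allowedValues);
  if (!values.includes(value)) {
    throw new Error(`${JSON.stringify(value)} is not an allowed value`);
  }
};

oneOf('open', ['open', 'closed', 'hold']);               // passes silently
oneOf('#ff0000', { red: '#ff0000', green: '#00ff00' });  // passes silently
```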

 

Strings

The simplest way to validate strings is like so:

// string
const myFunction = (someString = '') => {
  allow.aString(someString);
  // rest of function...
}

But often, an empty string is not really a valid string, for the purposes of your function's logic. And there may be other times when you want to indicate a minLength or a maxLength. So you can also use the validation like so:

// strings
const myFunction = (someString = '') => {
  allow.aString(someString, 1);
  // this ensures that someString is NOT empty
  // rest of function...
}

const myFunction = (stateAbbreviation = '') => {
  allow.aString(stateAbbreviation, 2, 2);
  // this ensures that stateAbbreviation is EXACTLY 2-characters in 
  // length
  // rest of function...
}

const myFunction = (description = '') => {
  allow.aString(description, 1, 250);
  // this ensures that description is not empty and is <= 250 
  // characters in length
  // rest of function...
}
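The optional length arguments can be implemented with defaults that make the bounds inert when they're omitted. A sketch (my own approximation, as a standalone function):

```javascript
// Sketch of a string check with optional length bounds. The defaults -
// 0 and Infinity - mean "no bound" unless one is supplied.
const aString = (value, minLength = 0, maxLength = Infinity) => {
  if (typeof value !== 'string')
    throw new Error(`${JSON.stringify(value)} is not a string`);
  if (value.length < minLength || value.length > maxLength)
    throw new Error(
      `"${value}" is not between ${minLength} and ${maxLength} characters`
    );
};

aString('FL', 2, 2);  // exactly two characters - passes
aString('hello', 1);  // non-empty - passes
```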

 

Numbers

Like strings, numbers can be simply validated as being numerical-or-not. Or they can be validated within a range. I also find that I rarely use allow.aNumber() but I frequently use allow.anInteger(). Because, in most cases where I'm expecting numbers, they really should be integers.

// numbers
const myFunction = (balance = 0) => {
  allow.aNumber(balance);
  // can be ANY number, positive or negative, integer or decimal
  // rest of function...
}

const myFunction = (age = 0) => {
  allow.aNumber(age, 0, 125);
  // any number, integer or decimal, >= 0 and <= 125
  // rest of function...
}

const myFunction = (goalDifferential = 0) => {
  allow.anInteger(goalDifferential);
  // any integer, positive or negative
  // rest of function...
}

const myFunction = (id = 0) => {
  allow.anInteger(id, 1);
  // any integer, >= 1
  // rest of function...
}
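The integer check can simply layer on top of the number check. A sketch of both (my own, assuming the range semantics described above; `Number.isInteger()` conveniently rejects NaN, Infinity, and decimals in one call):

```javascript
// Sketch of the numeric checks with optional range bounds.
const aNumber = (value, min = -Infinity, max = Infinity) => {
  if (typeof value !== 'number' || Number.isNaN(value))
    throw new Error(`${JSON.stringify(value)} is not a number`);
  if (value < min || value > max)
    throw new Error(`${value} is outside the range ${min}..${max}`);
};

// An integer is just a number with one extra constraint, so the range
// logic can be delegated to aNumber().
const anInteger = (value, min = -Infinity, max = Infinity) => {
  if (!Number.isInteger(value))
    throw new Error(`${JSON.stringify(value)} is not an integer`);
  aNumber(value, min, max);
};

anInteger(42, 1);      // any integer >= 1 - passes
aNumber(98.6, 0, 125); // decimal within range - passes
```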

 

Objects

This is not for defining specific types of objects. We'll cover that with anInstanceOf. This only checks whether something fits the definition of being a generic "object" and, if you desire, whether the object is of a certain "size".

This also excludes null (which JavaScript classifies as an object) and arrays (which are also, technically, objects). You'll see that there's a whole set of validations specifically for arrays in a minute.

// objects
const myFunction = (user = {}) => {
  allow.anObject(user);
  // can be ANY object - even an empty object
  // rest of function...
}

const myFunction = (user = {}) => {
  allow.anObject(user, 1);
  // this doesn't validate the shape of the user object
  // but it ensures that the object isn't empty
  // rest of function...
}

const myFunction = (user = {}) => {
  allow.anObject(user, 4, 4);
  // again - it doesn't validate the contents of the user object
  // but it ensures that the object has exactly 4 keys
  // rest of function...
}
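The null/array exclusion matters because `typeof` alone is not enough: both `null` and arrays report `"object"`. A sketch of the check (my own approximation), with the key count doing double duty as the object's "size":

```javascript
// Sketch of a generic-object check. null and arrays must be excluded
// explicitly, because typeof reports "object" for both.
const anObject = (value, minKeys = 0, maxKeys = Infinity) => {
  if (typeof value !== 'object' || value === null || Array.isArray(value))
    throw new Error(`${JSON.stringify(value)} is not an object`);
  const keyCount = Object.keys(value).length;
  if (keyCount < minKeys || keyCount > maxKeys)
    throw new Error(
      `object has ${keyCount} keys, expected ${minKeys}..${maxKeys}`
    );
};

anObject({ id: 1 }, 1); // non-empty object - passes
```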

 

Instances

These validate the shape of an object. Please note that they don't validate the data types within that shape. Could it be extended to provide that level of validation? Yes. Do I require that level of validation in my personal programming? No. So right now, it just concentrates on the existence of keys.

It will also validate recursively. So if you have an object, that contains an object, that contains an object, you can still validate it with anInstanceOf().

anInstanceOf() requires an object, and a "model" object against which to check it. Every key in the model is considered to be required. But the supplied object can have additional keys that don't exist in the model object.

// instance of...
const meModel = {
  name: '',
  address: '',
  degrees: [],
  ancestors: {
    mother: '',
    father: '',
  },
}

let me = {
  name: 'adam',
  address: '101 Main',
  degrees: [],
  ancestors: {
    mother: 'mary',
    father: 'joe',
  },
  height: '5 foot',
}

const myFunction = (person = meModel) => {
  allow.anInstanceOf(person, meModel);
  // rest of function...
}
myFunction(me);
// this validates - me has an extra key, but that's ok
// because me contains all of the keys that exist in 
// meModel - also notice that meModel is used as the 
// default value - this provides code-completion clues
// to your IDE

me = {
  name: 'adam',
  degrees: [],
  ancestors: {
    mother: 'mary',
    father: 'joe',
  },
  height: '5 foot',
}
myFunction(me);
// this does NOT validate - me is missing the address
// key that exists in meModel
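The recursion is the only subtle part: whenever a model key holds a nested plain object, the same check is applied one level down. A sketch of that behavior (my own approximation of what's described above, not the actual allow.js source):

```javascript
// Sketch of a recursive shape check. Every key in the model must exist
// on the supplied object; nested plain objects recurse; extra keys on
// the supplied object are ignored.
const anInstanceOf = (suppliedObject, model) => {
  Object.keys(model).forEach(key => {
    if (suppliedObject === null
        || typeof suppliedObject !== 'object'
        || !(key in suppliedObject))
      throw new Error(`missing required key: ${key}`);
    const modelValue = model[key];
    if (modelValue !== null
        && typeof modelValue === 'object'
        && !Array.isArray(modelValue)) {
      anInstanceOf(suppliedObject[key], modelValue);
    }
  });
};

const meModel = {
  name: '',
  ancestors: { mother: '', father: '' },
};

anInstanceOf(
  { name: 'adam', ancestors: { mother: 'mary', father: 'joe' }, height: '5 foot' },
  meModel
); // extra key is fine - all of the model's keys are present
```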

 

Arrays

The simplest validation is just to ensure that a value is an array. Along with that validation, you can also ensure that the array is not empty, or that it is of a specific length.

// arrays
const myFunction = (someArray = []) => {
  allow.anArray(someArray);
  // rest of function...
}

const myFunction = (someArray = []) => {
  allow.anArray(someArray, 1);
  // this ensures that someArray is NOT empty
  // rest of function...
}

const myFunction = (someArray = []) => {
  allow.anArray(someArray, 2, 2);
  // this ensures that someArray contains EXACTLY 2 elements
  // rest of function...
}

const myFunction = (someArray = []) => {
  allow.anArray(someArray, 1, 250);
  // this ensures that someArray is not empty and is <= 250 
  // elements in length
  // rest of function...
}

 

Arrays Of...

It's often insufficient merely to know that something is an array. You may need to ensure that the array contains elements of a particular data type. In other words, you have arrays of integers, or arrays of strings, etc.

All of these come with minLength/maxLength optional arguments, so you can ensure that the arrays are non-empty, or are of a particular size.

// array of arrays
const myFunction = (someArray = [[]]) => {
  allow.anArrayOfArrays(someArray);
  // rest of function...
}

// array of instances
const myFunction = (someArray = [meModel]) => {
  allow.anArrayOfInstances(someArray, meModel);
  // rest of function...
}

// array of integers
const myFunction = (someArray = [0]) => {
  allow.anArrayOfIntegers(someArray);
  // rest of function...
}

// array of numbers
const myFunction = (someArray = [0]) => {
  allow.anArrayOfNumbers(someArray);
  // rest of function...
}

// array of objects
const myFunction = (someArray = [{}]) => {
  allow.anArrayOfObjects(someArray);
  // rest of function...
}

// array of strings
const myFunction = (someArray = ['']) => {
  allow.anArrayOfStrings(someArray);
  // rest of function...
}
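All of the "array of..." checks follow the same pattern: the plain array check (with its optional length bounds), followed by an element-type check over every member. One representative sketch (my own, as a standalone function):

```javascript
// Sketch of one "array of..." check: an array check with optional
// length bounds, then a per-element type check.
const anArrayOfStrings = (value, minLength = 0, maxLength = Infinity) => {
  if (!Array.isArray(value))
    throw new Error(`${JSON.stringify(value)} is not an array`);
  if (value.length < minLength || value.length > maxLength)
    throw new Error(
      `array length ${value.length} is outside ${minLength}..${maxLength}`
    );
  value.forEach(element => {
    if (typeof element !== 'string')
      throw new Error(`${JSON.stringify(element)} is not a string`);
  });
};

anArrayOfStrings(['a', 'b'], 1); // non-empty array of strings - passes
```

Swapping the per-element predicate gives you anArrayOfIntegers(), anArrayOfObjects(), and the rest.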

 

Real-World Examples

In my Spotify Toolz app, I'm currently using this runtime type-checking. You can view that code here:

https://github.com/bytebodger/spotify

But here are some examples of what they look like in my functions:

const getTrackDescription = (track = trackModel, index = -1) => {
  allow.anInstanceOf(track, trackModel).anInteger(index, is.not.negative);
  return (
     <div key={track.id + index}>
        {index + 1}. {track.name} by {getTrackArtistNames(track)}
     </div>
  );
}

const comparePlaylists = (playlist1 = playlistModel, playlist2 = playlistModel) => {
  allow.anInstanceOf(playlist1, playlistModel).anInstanceOf(playlist2, playlistModel);
  if (playlist1.name.toLowerCase() < playlist2.name.toLowerCase())
     return -1;
  else if (playlist1.name.toLowerCase() > playlist2.name.toLowerCase())
     return 1;
  else
     return 0;
};

const addPlaylist = (playlist = playlistModel) => {
  allow.anInstanceOf(playlist, playlistModel);
  local.setItem('playlists', [...playlists, playlist]);
  setPlaylists([...playlists, playlist]);
}

const addTracks = (playlistId = '', uris = ['']) => {
  allow.aString(playlistId, is.not.empty).anArrayOfStrings(uris, is.not.empty);
  return api.call(the.method.post, `https://api.spotify.com/v1/playlists/${playlistId}/tracks`, {uris});
}

Every function signature is given runtime validation with a single line of code. It's obviously more code than using no validations. But it's far simpler than piling TS into the mix.
 

Conclusion

Does this replace TypeScript?? Well... of course not. But this one little library honestly provides far more value, to me, than a vast majority of the TS code that I've had to crank out over the last several months.

I don't find myself "fighting" with the compiler. I don't find myself having to write compiler checks and runtime checks. I just validate my function signatures and then I write my logic, content in the knowledge that, at runtime, the data types will be what I expect them to be.

Perhaps just as important, my IDE "gets" this. For example, when I define an object's model, and then use it as the default value in a function signature, I don't have to tell my IDE that the user object can contain a parents object, which can contain a mother key and a father key.

You may notice that there are practical limits to the type-checking that I'm doing here. For example, I'm validating the shape of objects, but I'm not validating that every key in that object contains a specific type of data. I might add this in the future, but I don't consider it to be any kind of "critical flaw".

You see, if I'm passing around shapes, and I can validate that a given object conforms to the shape that I require, there's often little-to-no worry that the data in those shapes is "correct". Typically, if I've received a "bad" object, it can be detected by the fact that the object doesn't conform with the necessary shape. It's exceedingly rare that an object is of the right shape - but contains unexpected data types.

Top comments (21)

Zane Milakovic

I hear you on typescript. I have a love/hate relationship with it. I will say that Deno has made it a lot more friendly, just because I don’t have to think about setting it up. Does not solve the compiler gripe, or the lack of runtime typechecks.

What is kind of interesting about your approach is that this is very similar to what we might do in a Node server. If you build an API, you have to treat all incoming data as hostile. It could have the wrong information, it could leave out fields, or it could be malicious.

It is both good DX and security practice to do this. The best APIs can tell the user what was wrong. The best security prevents that API from being used as an exploit to pass bad code to some other system.

That being said, I am nervous about passing runtime type checks to the client for code that does not accept outside input. Mostly because it is more JavaScript. Larger bundle. Longer time to parse.

I see this being great in an “App” experience like Figma. But less ideal for marketing pages and experiences that have to be blazing fast on mobile devices with targeted load times.

Adam Nathaniel Davis

this is very similar to what we might do in a Node server. If you build an API, you have to treat all incoming data as hostile.

Agreed. In theory, you must do this for any "outside" information. In practice, I believe it makes much tighter code to do this even for "inside" information. IMHO, this is a lot of what TS is trying to do. It's trying to get you to validate that information - even when it's all "your" information.

That being said, I am nervous about passing runtime type checks to the client for code that does not accept outside input. Mostly because it is more JavaScript. Larger bundle. Longer time to parse.

I hear what you're saying, but I can't claim to agree with the concept. My allow.js file is 6.47 KB in its raw, unminified state. To put this in perspective, my entire bundle size for my Spotify Toolz app, which contains a lot more code than simply allow.js, is 11 KB. Once it's loaded, it doesn't get loaded again every time it's called. It's a one-time "hit" to bundle size - and then the only performance consideration is the overhead needed to run those functions. And those functions are... tiny. It should take a handful of milliseconds to run each check.

Not that I expect many (or even, any) people to follow my example. But, IMHO, the place where this is most useful and meaningful is in frontend applications that rely heavily on outside data. Because, ultimately, you can never fully trust outside data.

That being said, I don't want the cognitive overhead of trying to think of when to use it. I just use it - everywhere.

This also makes my unit testing farrrrrr easier. I don't write 50 different unit tests for each function, trying to account for all the jacked up ways that it could be called.

Thank you so much for the thoughtful feedback!

Zane Milakovic

Sorry, replying again. The cognitive piece I totally get. It's also valuable if a team aligns on this, as it just makes defensive coding a first-class citizen because you operationalized it, instead of it being a testing strategy or a fix when you get a bug.

Adam Nathaniel Davis

Agreed. When I'm working outside my own personal code - on larger teams - I have used these kinds of approaches. But I also try to be judicious about it. Because if this little library isn't used anywhere else in the app, and no one else on the team likes it or wants to use it, and my contribution is not in some kinda standalone/modularized portion of the app, then it can be borderline-irresponsible to just start chucking it into the middle of a much larger preexisting codebase.

Zane Milakovic

Yeah, you are spot on with the bundle size. But this will vary based on use case. The larger the app, the more calls for validation, more setup, and most importantly the JS evaluation time. That is actually my bigger concern.

We had a fairly small bundle, but we found that it was taking almost 2 seconds on budget phones to evaluate.

Sorry I don’t remember the number, this was a year ago. But it actually caused us to replatform our whole app. A sizable amount of our users live with bad devices and poor bandwidth. So for us it made sense.

Adam Nathaniel Davis

Yeah - good points. I don't mean to imply that bundle size or runtime aren't valid concerns. It's just that in the vast majority of "modern" apps, assumed to be running on "modern" devices, the overhead to bundle this one (small) code file, and then to run it on entry into each function, is wafer thin.

But obviously, in some scenarios, with some teams, and on some apps, those are absolutely valid concerns. I just laugh sometimes because I've seen too many cases where a JS dev is fretting over whether he should add another 5 KB to a bundle - on an app, that's running in a heavily-marketed site, in which all the other corporate influences have already chunked many megabytes worth of graphics, trackers, video, iframes, etc.

Basti Ortiz

Kinda late to the party here, but I can see the utility of this approach. Although it is still inherently "defensive programming", at least it does so in an elegantly friendly and readable fashion.

Since this library provides the runtime checks for user-facing code, then I'd say this is a better solution than just assumptions based on type annotations.

However, I may still opt to use TypeScript internally. As I mentioned in my comment from your previous post, TypeScript lives in a "perfect" world. If I can be sure that your library sanitizes all incoming input, then I would at least find solace in the fact that the internal TypeScript application layer indeed operates in that "perfect" world.

Otherwise, if I wouldn't use TypeScript internally, then I'd have to write validators everywhere, which is not exactly... elegant, per se. The inherent verbosity is an immediate deal breaker in internal layers (where I could assume a "perfect" world in order to keep my sanity).

So in summary, I believe your approach is ideal for setting up the "perfect" world. Validators are necessary for front-facing applications such as clients, user interfaces, and APIs.

But once the "perfect" world is set up (by the aforementioned front end), I believe TypeScript is enough for writing secure applications without the runtime overhead that comes with repeatedly and defensively validating every single "interface" throughout the codebase.

TL;DR: Validators are ideal for setting up the "perfect" world. But once everything is set up, TypeScript can finally take over and enforce the contracts between internal interfaces via type annotations.

Adam Nathaniel Davis

Totally agree. Although I don't personally care for TS, I'm not trying to claim that my little validation library is truly a replacement for it. As you point out, I definitely feel there's a "time and place" for TS. I just think that, in many places where people are using it, it's not the best tool for the job.

As for verbosity, that's largely a subjective judgment that everyone makes for themselves. My approach does add one additional LoC to every single function declaration. In my experience, that's still far less than the extra code I end up writing to appease TS. Of course, your mileage may vary.

I appreciate the feedback!

Basti Ortiz

The pleasure is mine. Your recent articles really provoked me to think about the true value of TypeScript in my projects.

Before, I would slap in TypeScript everywhere and call it a day. Now, I am very aware of the fact that TypeScript alone is terribly unsafe—and sometimes foolish—in user-facing environments. And for that, I have much to thank for.

Pacharapol Withayasakpunt • Edited

Now that I've researched it, there seems to be a Babel plugin (tcomb) that can do $Refinement. You can use it side-by-side with Flow, and only use tcomb when you want to emit runtime type checking (e.g. at function entry points).

// @flow
import type { $Refinement } from 'tcomb'

const isInteger = n => n % 1 === 0
type Integer = number & $Refinement<typeof isInteger>;

function foo(n: Integer) {
  return n
}

foo(2)   // flow ok, tcomb ok
foo(2.1) // flow ok, tcomb throws [tcomb] Invalid value 2.1 supplied to n: Integer
foo('a') // flow throws, tcomb throws

In order to enable this feature add the tcomb definition file to the [libs] section of your .flowconfig.

GitHub: gcanti/babel-plugin-tcomb (Babel plugin for static and runtime type checking using Flow and tcomb)

It would probably throw an error if I tried to use tcomb alongside TypeScript.

Functional Javascript • Edited

Nice work Adam.

It's eerily similar to my runtime check library. :)

Well, maybe not so eerie, because these types of checks just make sense for validating input.

What's different is that I take a functional approach; so no classes, no methods, no optional params, no this-keyword, no method chaining.

I also have a separate set of funcs in another module specifically for throwing. My typechecker module itself only contains checker funcs that only return bools.

The chaining is a cool idea, but I couldn't use it with oneliner funcs smoothly.

Here's an example usage using the isNotNil checker func...

/**
@func
is the arg an instance of an obj?
- i.e. a {} type and no other javascript typeof "object" object

@notes
an empty obj is also true

@cons
may not work in IE 11

@param {{}} o obj expected
@return {boolean}
*/
export const isObj = o => isNotNil(o) && o.constructor === Object;

Here's another example usage using a "throwIf" func from my throwIf module...

/**
@func
sleep the amount of milliseconds supplied by the arg

@usages
await sleeper(2000);

@cons
must use await, otherwise it won't sleep

@param {number} ms delay in milliseconds
@return {Promise<void>}
*/
export const sleeper = ms => throwIfNumLessThanZero(ms) || new Promise(resolve => setTimeout(resolve, ms));

P.S.

I'll mention also that taking a functional approach using free funcs—each with their own "exports"—allows one to take advantage of tree-shaking. It's usually not common for a module to use more than one or two of the checker or throwIf funcs.

Adam Nathaniel Davis

What's different is that I take a functional approach;

Very shocking, coming from the guy whose username is Functional Javascript :-)

Seriously, though. I rarely use classes for much of anything anymore. I do find them to be practical and useful when creating little libraries of utility functions - which is what this is. Especially when some of those functions need to call each other. But you could definitely do this without the class.

no optional params

I only just recently added them in my latest iteration. I've been using a previous homegrown library for several years with no params. However, one of the biggest things that I like to check for is making sure that the data types aren't empty. Cuz if you're expecting a string/object/array, it's quite common that an empty string/object/array isn't valid.

To get around this before, I'd have functions like aString() and aPopulatedString(). Adding the optional param was just a way for me to collapse those into a single validation.

no method chaining

That was also something that was only added just recently. I don't think I'd ever written something designed for chaining, but one of my coworkers suggested it because I often have a function with two-or-three arguments. And I want to provide validation on each one. So the chaining is just a way to conveniently and logically collapse them into a single LoC - in the same way that the function signature itself is usually a single LoC.

Here's another example usage using a "throwIf" func from my throwIf module...

It's definitely interesting to see your approach. Great minds, and all that...

I especially like how you've logically concatenated them in front of the eventual function call.

Thanks for the feedback!

ecyrbe

Hi Adam,

I skipped this article of yours last year. I just wanted to point out one library that might also do the job and that I use everywhere. It's called Joi.

Joi is a really powerful validation library that covers everything you listed and a lot more.

Here are some of your article examples converted:

import Joi,{assert} from 'joi';

const statuses = ['open', 'closed', 'hold'];

const myFunction = (status = '') => {
  assert(status, Joi.string().valid(...statuses));
}

const colors = {
  red: '#ff0000',
  green: '#00ff00',
  blue: '#0000ff',
}
const myFunction = (color = '') => {
  assert(color, Joi.string().valid(...Object.values(colors)));
}
import Joi,{assert} from 'joi';

const myFunction = (balance = 0) => {
  assert(balance,Joi.number());
}

const myFunction = (age = 0) => {
  assert(age, Joi.number().min(0).max(125));
}

const myFunction = (id = 0) => {
  assert(id, Joi.number().min(1));
}

It's really powerful; it can check anything with any complex schema. You can check emails, patterns, etc.

Might be worth giving it a shot.

Adam Nathaniel Davis

Also, FWIW, the NPM package for my version of this is now published here:

npmjs.com/package/@toolz/allow

Adam Nathaniel Davis

Very cool - thanks for pointing this out!

Adam Nathaniel Davis

Also, what is your type-checking library if not allow.js?

Yeah, that's right. I used to have a simpler version that you can see here:

github.com/bytebodger/type-checkin...

But I don't really use that anymore. In fact, I'm thinking about making allow an NPM library, if only because I've never actually done that before and it'd be kinda cool to say that I finally created my own NPM package.

I don't think you really mean that literally. Perhaps "Defensive programming code should be kept to a minimum", which is what makes allow.js attractive - the value of runtime checks without a mess of ifs?

Exactly. If you look at my own code, I definitely don't keep defensive programming "to a minimum". In fact, I use it all over the place. But I only do so because the raw LoC are so scant. If it's verbose, and if you have to think too much about it, then defensive programming quickly becomes a burden. And when something's a burden, we jump through all sorts of mental hoops to justify why we shouldn't do it at all.

Thomas Shaddox

What I don't quite understand about this concept is what your code is supposed to do at runtime when it finds a runtime type error. I see that the allow library throws an Error by default and can be configured with any callback. Isn't the only advantage here that you guarantee your program to throw the error at the top of the function, instead of wherever else it would eventually throw an error in the body of the function?

That might make your error monitoring dashboard a little cleaner to look at, but it doesn't seem like it helps the people using the software. I suppose you could also display a slightly better error message, but it's not going to be significantly more useful to any visitor who has no way of resolving the error.

Adam Nathaniel Davis

I dunno. That's a good question that I was already thinking of.

I have the React dependency in there for only one of the checks - allow.aReactElement() - which is important to me because I'm mostly a React developer these days. But yeah... I understand that this is package bloat if you're not specifically working on a React project.

I'd probably do it as two packages. The stripped down one for general JS dev, and the "bulkier" one for React dev. Of course, even React devs could still use the slimmer one if they didn't feel the need to use allow.aReactElement().

John Peters

Yuck...

Adam Nathaniel Davis

Hahaha. OK.