After having established some requirements and some basic utilities, we're ready for the fun part: putting the pieces together. At the end of this post, we will have our working parser.
Writing our pipeline functions
When we use our parsing library, we will want to declare each property of our JSON as either required or optional, and we'll want to write logic for every representable state. We can write functions that allow us to express this declaratively. Let's call these two functions req and opt.
Handling required JSON properties
Making use of the mapTogether function we wrote previously, let's write the main function that drives the calling code through the pipeline, the req function. Calling code will use the req function once for each property we require.
Before we define it, let's break down the parameters. They are:
- t: a Result of the model we're trying to parse the JSON into.
- prop: the string name of the property we are defining as required.
- decode: a function that defines how we go from a dictionary of JSONs to the proper type of our property.
- dict: the dictionary that (hopefully) contains the property we're looking for.
- update: a function that defines how to update our model if the property exists in the dictionary and the property is the type we expect.
let req = (t: Result.t<'t, string>,
           prop: string,
           decode: ((Js.Dict.t<Js.Json.t>, string) => Result.t<'prop, string>),
           dict: Js.Dict.t<Js.Json.t>,
           update: ('t, 'prop) => 't): Result.t<'t, string> =>
  mapTogether(t, decode(dict, prop), update)
Again, the mapTogether function takes two Results and a function to apply to the contents if both of them succeed. Here, it's taking a Result of the old state of the model and a Result of the newly parsed property and updating the model if both of those are successful.
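To see what this looks like from the calling side, here's a minimal sketch. The person type, its default value, and the parsePerson function are hypothetical, and the example leans on the str and number decode functions we'll define later in this post.
// A hypothetical model with two required properties.
type person = {name: string, age: float}

// Start the pipeline with a default model wrapped in Ok, then
// require each property, supplying a decoder and an update function.
let parsePerson = (dict: Js.Dict.t<Js.Json.t>): Result.t<person, string> =>
  Result.Ok({name: "", age: 0.0})
  -> req("name", str, dict, (p, name) => {...p, name: name})
  -> req("age", number, dict, (p, age) => {...p, age: age})
Because each req call both accepts and returns a Result of the model, the calls chain naturally with ->, and the first missing or mistyped property turns the whole pipeline into an Error.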
Handling optional JSON properties
We can adapt the req function for optional properties. The main difference is the last parameter: the update function that determines what to do with our property will have to handle both cases using an option monad.
let opt = (t: Result.t<'t, string>,
           prop: string,
           decode: ((Js.Dict.t<Js.Json.t>, string) => Result.t<'prop, string>),
           dict: Js.Dict.t<Js.Json.t>,
           update: ('t, option<'prop>) => 't): Result.t<'t, string> =>
  switch t {
  | Error(_) => t
  | Ok(obj) => Result.Ok(update(obj, decode(dict, prop)->toOption))
  }
If the Result we started with was already in an error state, we just return that error state. If it succeeds, we attempt to decode the property and run the update function either way, counting on the update function to handle both cases.
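Continuing the hypothetical person example from earlier, an optional property might look like this; the nickname field and the way it's stored are again illustrative assumptions rather than part of the library itself.
// The hypothetical model again, now with an optional property.
type person = {name: string, age: float, nickname: option<string>}

let parsePerson = (dict: Js.Dict.t<Js.Json.t>): Result.t<person, string> =>
  Result.Ok({name: "", age: 0.0, nickname: None})
  -> req("name", str, dict, (p, name) => {...p, name: name})
  -> req("age", number, dict, (p, age) => {...p, age: age})
  // The update function receives an option<string> and decides what to do with it.
  -> opt("nickname", str, dict, (p, nickname) => {...p, nickname: nickname})
Here the update function simply stores the option as-is, but it could just as well pattern match on Some and None to substitute a default.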
Writing a collection of decode functions
The real meat of the parsing library, then, will be writing decode functions. Users of the library will need to pass a function to the decode parameter whenever they call req. Of course, any function with an appropriate type signature will do, so users can write their own, but it'll help to include some common ones. For example, our library needs a function to grab floating point numbers from a dictionary, like so:
let number = (dict: Js.Dict.t<Js.Json.t>, prop: string): Result.t<float, string> =>
  getProp(dict, prop)
  -> Result.map(json => Js.Json.decodeNumber(json))
  -> Result.flatMap(op => toResult(op, typeError("number", prop)))
Our number parser uses the getProp we defined previously, and if that succeeds, gives us either a parsed number or our type error. Let's define some more ways to decode.
let str = (dict: Js.Dict.t<Js.Json.t>, prop: string): Result.t<string, string> =>
  getProp(dict, prop)
  -> Result.map(json => Js.Json.decodeString(json))
  -> Result.flatMap(op => toResult(op, typeError("string", prop)))
let numeric = (dict: Js.Dict.t<Js.Json.t>, prop: string): Result.t<float, string> =>
  str(dict, prop)
  -> Result.flatMap(str =>
       switch Belt.Float.fromString(str) {
       | Some(number) => Result.Ok(number)
       | None => Result.Error("could not parse number from: " ++ str)
       })
It's your library--continue adding more of these functions to taste.
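For instance, a boolean decoder could follow the same shape as number and str above; this one is a sketch rather than part of the original library, though Js.Json.decodeBoolean exists for exactly this purpose.
// A hypothetical decoder for boolean properties, mirroring the number decoder.
let boolean = (dict: Js.Dict.t<Js.Json.t>, prop: string): Result.t<bool, string> =>
  getProp(dict, prop)
  -> Result.map(json => Js.Json.decodeBoolean(json))
  -> Result.flatMap(op => toResult(op, typeError("boolean", prop)))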
In conclusion
So this is our library, or the important parts, anyway. If you're confused or if I've left anything out, feel free to examine my source available on GitLab. That same project uses the library in Parsing.res if you'd like to see an example.