Thomas Hansen

Originally published at aista.com

Dynamically Aggregated Hyperlambda Gateway endpoints

What I’m about to show you in this article has never been done before, by anyone, in any programming language. To explain it in plain terms: I have a database with Hyperlambda snippets. Then I create an HTTP endpoint that reads all records from my database and executes all snippets in parallel, on separate threads, before returning the result of each execution to the caller.

Still, all code that is executed is persisted on the server, so even though it’s “super dynamic” in nature, there is no risk of the GraphQL or Firebase style “Business Logic Injection Attacks” that the BeReal incident illustrates. On a dynamic scale from 1 to 10, this sits somewhere around 100+. It makes all other “dynamic programming languages” seem boring and hopelessly static in comparison. Even super dynamic GraphQL offerings such as Hasura, or PostgREST offerings such as Supabase, end up looking hopelessly “static” in comparison, while the Hyperlambda implementation is a bajillion times more dynamic in nature, yet still 100% safe from “Business Logic Injection Attacks”.

One use case could be to keep reusable integrations in your database and simply populate your HTTP endpoint with content from that database. At that point you can reuse snippets across multiple endpoints, and modifying an endpoint’s implementation becomes a matter of deleting, adding or changing a record. On the reusability scale this again scores somewhere around 100+ compared to everything else out there.

With some 50 snippets in your database, totalling 1,000 lines of code, and some 250 endpoints with a many-to-many relationship towards your snippets, you could probably replace 5 million lines of legacy code without even breaking a sweat!

My implementation in the following YouTube video executes each snippet in parallel, so the execution time will be no more than that of the slowest snippet, regardless of whether you have 5 snippets in your database or a bajillion snippets. It would therefore probably also improve performance by some 2 to 5 orders of magnitude compared to your existing legacy stuff. To speak in code, imagine needing an endpoint that implements Stripe payments. Well ...

execute "(select content from snippets where name = 'stripe')"

... and you’ve created an endpoint that integrates with Stripe payments, while still being able to dynamically apply your endpoint’s input arguments.
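
The line above is pseudo code of course. A minimal Hyperlambda sketch of the same idea could resemble the following. This is not from the article; the [amount] and [currency] arguments and the “stripe” snippet name are purely hypothetical, and it assumes the “snippets” table has “name” and “content” columns, and that snippets report their results through a [.result] node, the same convention the endpoint further down relies upon.

// Sketch only: reads one named snippet, converts it to lambda,
// passes in the endpoint's arguments, and executes it.
.arguments
   amount:long
   currency:string
data.connect:code
   data.read
      table:snippets
      columns
         content
      where
         and
            name.eq:stripe
   hyper2lambda:x:@data.read/*/*/content
   insert-before:x:@hyper2lambda/0
      get-nodes:x:@.arguments
   eval:x:@hyper2lambda
   add:x:+
      get-nodes:x:@hyper2lambda/**/.result/*
   return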

I will continue expanding upon this. By adding a many-to-many relationship table, for instance, it becomes super reusable in addition to being dynamic: you declare which snippets you want to use simply as a URL field in a third database table, and then apply a many-to-many relationship from your URL table to your snippets table. When your endpoint executes, it selects all records from the snippets table that are linked to the URL currently being executed, as the sketch below illustrates. For now though, have fun with what I’m about to show you, and please realise that this has never been shown before in human history. It is super useful, it allows you to write super dynamic code and create super reusable logic, and everything executes in parallel, resulting in 100 times faster software.
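
To make the many-to-many idea a little more concrete, the lookup could resemble the following sketch. This is not the article’s code, and the “endpoints” and “endpoints_snippets” tables, their “id”, “url”, “endpoint_id” and “snippet_id” columns, and the [url] argument are all hypothetical names. You would then wrap each returned “content” value in a [fork] exactly as the endpoint below does.

// Sketch only: selects every snippet linked to the URL currently being executed.
.arguments
   url:string
data.connect:code
   data.select:"select s.content from snippets s inner join endpoints_snippets es on es.snippet_id = s.id inner join endpoints e on e.id = es.endpoint_id where e.url = @url"
      @url:x:@.arguments/*/url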

Below is the code for my HTTP endpoint, 26 lines of code.

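/*
 * Declares and validates the arguments, reads every snippet from the "snippets"
 * table, wraps each snippet in a [fork] beneath [join] so they all execute in
 * parallel, passes the arguments into each fork, and returns whatever the
 * snippets put into their [.result] nodes.
 */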
.arguments
   name:string
   email:string
validators.email:x:@.arguments/*/email
validators.string:x:@.arguments/*/name
   min:5
   max:25
.lambda
   join
data.connect:code
   data.read
      table:snippets
      columns
         content
   for-each:x:@data.read/*/*/content
      add:x:+/*/*
         hyper2lambda:x:@.dp/#
      add:x:@.lambda/*/join
         .
            fork
   insert-before:x:@.lambda/*/join/*/fork/0
      get-nodes:x:@.arguments
eval:x:@.lambda
add:x:+
   get-nodes:x:@.lambda/**/.result/*
return
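
For reference, a snippet stored in the “snippets” table could look something like the following. This is only an illustration, not from the article; it assumes the snippet reads the [.arguments] node the endpoint inserts into its [fork], and that it reports its result through a [.result] node, since that is what the endpoint collects before returning.

// Illustration only: a hypothetical snippet as it might be stored in the database.
.result
   message:Hello from my snippet
   name
set-value:x:@.result/*/name
   get-value:x:@.arguments/*/name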

Notice that if you remove the validators and replace the [.arguments] collection with something resembling the following ...

.arguments:*
.lambda
   join
data.connect:code
   data.read
      table:snippets
      columns
         content
   for-each:x:@data.read/*/*/content
      add:x:+/*/*
         hyper2lambda:x:@.dp/#
      add:x:@.lambda/*/join
         .
            fork
   insert-before:x:@.lambda/*/join/*/fork/0
      get-nodes:x:@.arguments
eval:x:@.lambda
add:x:+
   get-nodes:x:@.lambda/**/.result/*
return

... it’s only 20 lines of code, but it’s 20 lines of code that could, at least in theory, replace every single line of HTTP API code you have ever written in your entire life. And the HTTP API is async to the bone, executing every single snippet in parallel. Not bad for 20 lines of code if you ask me ... ;)

Psst, in case you didn’t get it: read the article once more, watch the video once more, and realise that the above 20 lines of code can, in theory, replace every single line of server-side code ever written throughout human history by a human software developer ...

I can replace everything that all backend software developers have done throughout human history with no more than 20 lines of code - Not bad for a Thursday ... ^_^
