Jordi Cabot

Posted on • Originally published at modeling-languages.com

Transform interface design mockups into ready-to-use UI code with these AI-based tools

No programmer wants to spend hours aligning HTML elements or fighting with complex CSS. This is why any low-code tool will generate the user interface code of your application for you. The problem is that they generate rather basic interfaces, mostly oriented towards typical data-entry forms and grids. Anything more complex than that and you are back to tuning the CSS by hand.

At the opposite end of the spectrum, we have a variety of visual mockup/wireframe tools to quickly build a prototype of your desired graphical interface. But those don't include any kind of code-generation capability. As a result, you enter a vicious cycle where "the designer prepares an app design, hands it off to the developer who manually codes each screen. This is where the back and forth starts (colors, responsive layouts, buttons, animations...). In some cases, this process can take weeks or months" (taken from here).

To fix this situation, a few design-to-code tools have started to appear. For instance, Supernova takes Sketch designs and transforms them into ready-to-use native UI code. Yotako also aims to help you make a smooth transition from design to code.

But, as in many other fields, the arrival of the new AI era has boosted both the number and the functionality of these tools (note that, for the same reason, most are still in an early/exploratory stage). Their promise is that they can take screenshots, and even handmade sketches, as input to the UI code-generation process. Thanks to deep learning techniques that recognize the interface elements in the images, they can grasp the desired UI and mimic that design, either by turning the rough sketch into a "formal" wireframe or by directly generating the code. Let's see some of these tools and approaches.

Viz2<code>

Viz2<code> is still in beta but it looks very promising. You can use a "standard" wireframe design or handmade sketches as input and export them to Sketch files or a ready-to-launch website. In the latter case, they also promise to generate some useful code that helps you integrate that front end with your data and backend.

They use deep neural networks to recognize and characterize components when scanning the design. Then they try to infer the role and expected behavior of each recognized component in order to generate more "semantic" code.
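
To make that "semantic" part concrete, here is a minimal, hypothetical sketch of the last step: given the kind of components a recognizer might return (the types, roles and labels below are invented for the example), emit semantic HTML tags instead of anonymous divs. This is only an illustration of the idea, not Viz2<code>'s actual implementation.

```python
# Hypothetical illustration: turning detected components (with inferred roles)
# into semantic HTML instead of anonymous <div>s. The detection results and
# role names are made up for this example.

# Map an inferred role to a semantic HTML tag.
ROLE_TO_TAG = {
    "navigation": "nav",
    "search-input": 'input type="search"',
    "primary-action": "button",
    "heading": "h1",
}

def to_html(component):
    """Render one detected component as a semantic HTML snippet."""
    tag = ROLE_TO_TAG.get(component["role"], "div")
    label = component.get("label", "")
    if tag.startswith("input"):
        return f'<{tag} placeholder="{label}" />'
    name = tag.split()[0]
    return f"<{tag}>{label}</{name}>"

# Output of a (hypothetical) recognition step: component type, inferred role, text.
detected = [
    {"type": "bar",    "role": "navigation",     "label": "Home | About"},
    {"type": "box",    "role": "search-input",   "label": "Search..."},
    {"type": "button", "role": "primary-action", "label": "Sign up"},
]

print("\n".join(to_html(c) for c in detected))
```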

Viz2<code> is the tool featured in the top image illustrating this post.

AirBnB Sketching Interfaces

This experiment from AirBnB aimed mainly at detecting visual components on rough sketches. Their point was that "if machine learning algorithms can classify a complex set of thousands of handwritten symbols with a high degree of accuracy, then we should be able to classify the 150 components within our system and teach a machine to recognize them".

They trained the system with the set of components they typically use in AirBnB interfaces and managed to get a high degree of recognition.
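
To make that reasoning concrete, such a component classifier could be a small convolutional network with one output class per design-system component. The sketch below is a generic illustration: only the number of classes (150) comes from the post; the architecture, sizes and framework choice are my own assumptions, not AirBnB's actual model.

```python
import torch
import torch.nn as nn

# Generic sketch of a sketch-component classifier: a small CNN that maps a
# grayscale crop of a single hand-drawn component to one of 150 design-system
# classes. Architecture and sizes are illustrative only.
class ComponentClassifier(nn.Module):
    def __init__(self, num_classes: int = 150):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A 64x64 grayscale crop of a hand-drawn component -> logits over 150 classes.
model = ComponentClassifier()
logits = model(torch.randn(1, 1, 64, 64))
predicted_class = logits.argmax(dim=1)
```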

Unfortunately, it looks like this was mainly an exploratory experiment. Even the main person behind it has moved to another company, so it is not clear what the future plans for it are.

Uizard pix2code

Uizard's vision is to improve the designer/developer workflow that too often results in more frustration than innovation. They have recently raised 2.8M in seed funding to quickly turn app sketches into prototypes. Even though there are plenty of tools to specify the mockups of a new product, more often than not, a meeting ends up with a few quick hand-drawn sketches. Their goal is to turn those into prototypes in seconds. The public version of their sketch-to-code technology is pix2code (but they have a better commercial version).
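
To get a feel for the idea behind pix2code, the open-source project combines an image encoder with a recurrent decoder that emits tokens of a small GUI DSL, which are then compiled into the target UI code. The sketch below is a heavily simplified, assumed version of that idea; the layer sizes and vocabulary are illustrative and do not reproduce the actual pix2code architecture.

```python
import torch
import torch.nn as nn

# Much-simplified sketch of a pix2code-style model: a CNN encodes the screenshot,
# an LSTM encodes the DSL tokens generated so far, and the combined features
# predict the next DSL token. Sizes and vocabulary are illustrative only.
class Pix2CodeSketch(nn.Module):
    def __init__(self, vocab_size: int = 20, embed_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, hidden), nn.ReLU(),
        )
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        self.context_lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.next_token = nn.Linear(hidden * 2, vocab_size)

    def forward(self, image, token_ids):
        img_feat = self.image_encoder(image)               # (batch, hidden)
        _, (h, _) = self.context_lstm(self.token_embed(token_ids))
        combined = torch.cat([img_feat, h[-1]], dim=1)     # (batch, 2 * hidden)
        return self.next_token(combined)                   # logits for the next DSL token

# A 256x256 screenshot plus the tokens emitted so far -> next-token logits.
model = Pix2CodeSketch()
logits = model(torch.randn(1, 3, 256, 256), torch.tensor([[1, 4, 7]]))
```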

Screenshot to Code

This open-source project reuses some of the ML work above (especially pix2code) to create an excellent tutorial you can use to learn (and play) with all the ML concepts and models you need to get your hands dirty in this mockup-to-UI field.

Microsoft's Sketch2Code

Sketch2Code is a Microsoft approach that uses AI to transform a handwritten user interface design from a picture into valid HTML markup. Sketch2Code involves three main "comprehension" phases (a naive sketch of the last phase follows the list):

  1. A custom vision model predicts what HTML elements are present in the image and their location.
  2. A handwritten text recognition service reads the text inside the predicted elements.
  3. A layout algorithm uses the spatial information from the bounding boxes of the predicted elements to generate a grid structure that accommodates all of them.
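
A naive version of that third phase could look like the following: group bounding boxes that sit at a similar height into rows and emit one grid row per group. This is an assumed illustration of the general idea, not Microsoft's actual layout algorithm.

```python
# Naive illustration of the layout phase: turn predicted bounding boxes into a
# row-based grid. Boxes are (element, x, y, width, height); the element types
# and coordinates below are invented for the example.
def boxes_to_rows(boxes, row_tolerance=20):
    """Group boxes whose vertical positions are close into the same grid row."""
    rows = []
    for box in sorted(boxes, key=lambda b: b[2]):            # sort by y
        if rows and abs(box[2] - rows[-1][0][2]) <= row_tolerance:
            rows[-1].append(box)                              # same row
        else:
            rows.append([box])                                # start a new row
    return rows

def rows_to_html(rows):
    """Emit one row container per group, elements ordered left to right."""
    html = []
    for row in rows:
        cells = "".join(f"<{el}></{el}>" for el, *_ in sorted(row, key=lambda b: b[1]))
        html.append(f'<div class="row">{cells}</div>')
    return "\n".join(html)

# Hypothetical output of phases 1 and 2: element types with bounding boxes.
predicted = [("h1", 10, 5, 300, 40), ("input", 10, 80, 200, 30), ("button", 230, 82, 80, 30)]
print(rows_to_html(boxes_to_rows(predicted)))
```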

The result, as you can see, is quite impressive.

Sketch2Code by Microsoft

Ink to Code was an earlier and much more experimental attempt.

Generating interface models and not just code

I would like to see some of these tools generate not the actual code but a model of that UI, so that it would be possible to integrate those models with the rest of the software models within a low-code tool. This way, we could easily link and integrate the UI components with the rest of the system's data and functionality and generate a complete implementation in one go.
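
As a purely hypothetical illustration of what such an intermediate UI model could look like (the names and structure below are invented, not any tool's actual format), think of a platform-independent screen description whose fields are bound to entities of the domain model, so that a low-code tool could generate both the UI and the data bindings from it.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration of a UI model that references the domain model
# instead of hard-coding HTML. Names and structure are invented for this example.
@dataclass
class UIField:
    label: str
    binds_to: str          # attribute of a domain entity, e.g. "Customer.name"
    widget: str = "text"   # abstract widget type, resolved per platform later

@dataclass
class Screen:
    name: str
    entity: str                                          # domain entity the screen works on
    fields: List[UIField] = field(default_factory=list)
    actions: List[str] = field(default_factory=list)     # e.g. "save", "delete"

# A screen model a generator could turn into HTML, React or a mobile UI,
# with the data bindings already resolved against the domain model.
signup = Screen(
    name="CustomerSignup",
    entity="Customer",
    fields=[UIField("Name", "Customer.name"), UIField("Email", "Customer.email", "email")],
    actions=["save"],
)
```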

Top comments (1)

Pete Steven

I only knew about Sketch2Code. Didn't know there were so many. Interesting.

There's also Desech Studio, where you import your Figma, Sketch, or Adobe XD file and then integrate it with React, Angular, or Vue. It doesn't have AI, but I think it can help your project a lot, instead of writing the HTML/CSS by hand.