Daniel Wellington is a Swedish fashion brand founded in 2011. Since its inception, it has sold over 11 million watches and established itself as one of the fastest-growing and most beloved brands in the industry.
In 2012 we launched our Instagram account and became pioneers in influencer marketing. Influencers became our primary marketing channel.
To scale with our challenges and support the growing business, we started using Amazon Web Services (AWS) in 2014. Today we are heavily invested in the AWS platform, with a broad range of products such as Amazon Elastic Container Service, AWS Lambda, Amazon DynamoDB, Amazon SageMaker, Amazon Rekognition, and many more.
Our design principle has been serverless first since 2016. If serverless is not available or practical, we use containers, as we consider EC2 legacy. JavaScript and Go are some of our languages of choice.
The mobile app we built with Amazon Rekognition to optimize our return process.
Our focus on growth and perfection also leads to high standards and requirements for the factories producing our products. To make our pre-quality check work as smoothly as possible and to minimize potential human mistakes, we built an internal mobile app to optimize the return process.
After extensive testing, we realized that reusing the same solution for this new challenge would introduce other problems. The simple optics of a mobile camera pointed at a shiny, polished watch wouldn't give us the level of precision we needed, nor would it enable other use cases.
We needed efficiency at scale without making this a burden for our assembly line.
Early tests with a Canon EOS 5D Mark IV and a Raspberry Pi High Quality Camera connected to a Raspberry Pi running a bash script (yes, bash!) wrapping the aws-cli, in the glorious innovation lab at DW.
In the initial tests, we used a DSLR (Canon EOS 5D Mark IV), a Raspberry Pi HQ camera, and a USB camera. We realized that a DSLR would be the better fit thanks to its optics and fine adjustments, with pre-processing done on a Raspberry Pi.
We also searched the market for DSLR cameras running Android to deploy our app on, but they disappeared as fast as they entered the market. This made us realize that we would need to maintain several layers ourselves, such as hardware, OS, and security (patching, certificates, disk encryption). AWS Greengrass would solve parts of the problem, but the ownership was still too much. For us, that meant a step backward, a lot of moving parts, an increase in TCO (total cost of ownership), and less focus on what matters most: the code.
We started to look for a solution that would remove that ownership and, at the same time, speed up transfer times by pre-processing the image at the edge before it securely reached our code.
We quickly identified Axis Communications cameras as fulfilling our pre-processing needs at the edge, and we started working together to integrate them with AWS.
We had learned from our earlier OCR tests that Amazon Rekognition would be the most cost-efficient and powerful service to use, and that it would enable automated quality checks and process optimization at scale.
We do this by capturing only a specific part of the picture (we crop an area), then transforming it to black and white and optimizing it. A button connected to the camera's I/O port triggers the event that sends the image to AWS over secure MQTT.
The benefit of secure MQTT (besides being secured with a client certificate) is that it is built for IoT: low bandwidth, low compute power, and long-lived, reusable sessions. You would be amazed how rarely it is implemented in off-the-shelf IoT products.
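In our setup, the publish happens on the camera itself via the Axis ACAP application, but to illustrate the pattern, here is a minimal sketch using the Node.js aws-iot-device-sdk. The endpoint, topic, client ID, and certificate paths are placeholders, not our production values.

```javascript
// Minimal sketch: publish a pre-processed capture to AWS IoT Core over
// mutually authenticated (client certificate) MQTT. In our setup the Axis
// ACAP application does this on the camera; values below are placeholders.
const fs = require('fs');
const awsIot = require('aws-iot-device-sdk');

const device = awsIot.device({
  host: 'xxxxxxxxxxxxxx-ats.iot.eu-west-1.amazonaws.com', // your IoT Core endpoint
  clientId: 'assembly-camera-01',
  keyPath: 'certs/private.pem.key',
  certPath: 'certs/certificate.pem.crt',
  caPath: 'certs/AmazonRootCA1.pem',
});

device.on('connect', () => {
  // The cropped, black/white image keeps the payload small, well under
  // IoT Core's 128 KB message size limit.
  const image = fs.readFileSync('capture.jpg');
  device.publish(
    'dw/assembly/camera-01/capture',
    JSON.stringify({ capturedAt: Date.now(), image: image.toString('base64') }),
    { qos: 1 },
    () => device.end()
  );
});
```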
After presenting our use case, we quickly got the Axis integration team on board; they used their plugin architecture (ACAP) to build secure MQTT support so we could use AWS IoT Core. Their application is now being released to the public on GitHub, with some beta testing from us.
On the AWS side, as the diagram below shows, we consume the data through secure MQTT, process it with Amazon Rekognition, and store the result and the original picture on S3 for debugging purposes, to be deleted automatically with object expiration.
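To make the flow concrete, here is a minimal sketch of a Node.js Lambda invoked by an IoT topic rule (e.g. something like SELECT * FROM 'dw/assembly/+/capture'): it runs Rekognition's DetectText on the capture and stores both the original image and the OCR result on S3, where a lifecycle rule expires the objects automatically. The event shape and the BUCKET environment variable are assumptions for illustration; the actual handler lives in the repository.

```javascript
// Sketch of the processing Lambda: Rekognition OCR + S3 storage for debugging.
// The event shape (base64 image forwarded by the IoT topic rule) and the
// BUCKET environment variable are assumptions for illustration.
const AWS = require('aws-sdk');
const rekognition = new AWS.Rekognition();
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const image = Buffer.from(event.image, 'base64');

  // OCR the pre-processed capture
  const ocr = await rekognition.detectText({ Image: { Bytes: image } }).promise();
  const lines = (ocr.TextDetections || [])
    .filter((d) => d.Type === 'LINE')
    .map((d) => d.DetectedText);

  // Keep the original picture and the result for debugging; an S3 lifecycle
  // rule (object expiration) deletes these automatically later.
  const key = `captures/${event.capturedAt || Date.now()}`;
  await s3.putObject({ Bucket: process.env.BUCKET, Key: `${key}.jpg`, Body: image }).promise();
  await s3.putObject({
    Bucket: process.env.BUCKET,
    Key: `${key}.json`,
    Body: JSON.stringify({ lines, raw: ocr.TextDetections }),
  }).promise();

  return { lines };
};
```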
A more in-depth look at the different parts can be found in the GitHub repository.
For us, this is a big step towards bringing AWS into our assembly process, enabling other use cases with the same setup while gaining quick wins along the way. To support the community and speed up the transformation at manufacturers, we will also release our POC (proof of concept) code on GitHub, which brings together Rekognition's powerful OCR, results delivered over MQTT, and powerful cameras. With some simple code changes, you could quickly switch to object detection with thousands of supported objects and scenes, or use custom labels to train your own object detection with just a few images.
What about the operational cost? Excluding the camera's electricity, we only have an AWS cost of $0.00106371 USD per capture in the POC, without any code optimization or cost savings plans.
You may ask, why? The right platforms and components are key to enabling incremental innovation. You may think, to do what? Just to get your creativity started: use custom labels to detect tiny defects, or to verify that the watch hands are set to Swedish time when assembled. Yes, we are a Swedish brand, and all our customers across the world start their experience with Swedish daylight-saving time ;-)
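To give an idea of how little would need to change in the POC for such a use case, here is a hedged sketch that swaps the OCR call for Rekognition Custom Labels against a model trained on a few example images. The project version ARN, label names, and confidence threshold are placeholders, not a trained model of ours.

```javascript
// Sketch: swap OCR for a Rekognition Custom Labels model trained on a few
// example images (e.g. "scratch" or "hands-not-at-swedish-time"). The
// project version ARN and threshold below are placeholders.
const AWS = require('aws-sdk');
const rekognition = new AWS.Rekognition();

async function checkCapture(imageBytes) {
  const result = await rekognition.detectCustomLabels({
    ProjectVersionArn:
      'arn:aws:rekognition:eu-west-1:123456789012:project/dw-poc/version/dw-poc.1/1600000000000',
    Image: { Bytes: imageBytes },
    MinConfidence: 80,
  }).promise();

  // Any label returned above the threshold means the capture needs a second look.
  return result.CustomLabels.map((l) => `${l.Name} (${l.Confidence.toFixed(1)}%)`);
}
```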
This is the POC mounted inside a light tent to always get the same reflections and lighting, regardless of whether it is in the lab, in testing, or in the assembly process. A button is also mounted inside the tent to trigger the capture and send the data over MQTT.
The code can be found here, and we would love to see your creative takes on this, even outside the manufacturing area ;-)
Do you want to know more about what it is like to work with technology at Daniel Wellington? Take a few minutes to watch the video, and if you are open to new challenges, check out our open tech positions.