Thomas Wink
I made a Market Research Tool to market my Market Research Tool. Crawl/RAG/LLM

Using my own marketing tool to market my marketing tool. I thought it would be an interesting use case to share. As a non-marketing guy, I ended up with some interesting results and insights.

Don't feel like reading? You can check it out for yourself; it's free:

Here's an example of the PDF output when I did a run on "Sales channels optimisation" for a specific question:

[Image: example PDF report from a run on "Sales channels optimisation"]

Here we clearly get broad answers to a broad question. (Many research questions are more specific.) Points one and two seem pretty generic, but are still useful. Point three, though, is actually new to me. It makes a lot of sense to market a tool like this to e-commerce sites. New insight gained!

And here follows a screenshot of a little chat I had with the generated marketing report. I asked it to create an outreach plan.

[Image: chat with the generated marketing report, asking for an outreach plan]

I get some guidance here. I ran into some "AI laziness" though. It started out by telling me "how" to make an outreach plan. Only after asking the AI to do the work for me did I actually get an outreach plan. Room for improvement? Absolutely. But it did get the job done.

Then I asked for an Instagram post.

[Image: the generated Instagram post]

It's interesting to note the initial style and use of emojis. The cool thing about the assistant is that you can ask it to create a blog post. If it's too formal, you can ask it to make it less so. Or longer, shorter, funnier, etc.

When I started this project, one of the drawbacks of LLMs was the information cut-off date. Back then (just a few months ago!) LLMs couldn't just get the latest information from the internet. Thus retrieval-augmented generation (RAG) was born. Add your own data, give it to the AI, and start asking it questions. Simple, right?
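To make the RAG idea concrete, here is a minimal sketch (not MarqtAI's actual code, and using naive keyword-overlap retrieval where a real system would use embeddings): score each stored chunk against the question, then stuff the best matches into the prompt.

```javascript
// Hypothetical RAG sketch: retrieve relevant chunks, build a prompt.

function tokenize(text) {
  return text.toLowerCase().match(/[a-z0-9]+/g) || [];
}

// Rank chunks by how many question words they share (naive retrieval).
function retrieve(chunks, question, topK = 2) {
  const qTokens = new Set(tokenize(question));
  return chunks
    .map((chunk) => ({
      chunk,
      score: tokenize(chunk).filter((t) => qTokens.has(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((r) => r.chunk);
}

// Stuff the retrieved chunks into the prompt sent to the LLM.
function buildPrompt(chunks, question) {
  const context = retrieve(chunks, question).join("\n---\n");
  return `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`;
}
```

The point is simply that the model answers from *your* data: whatever `retrieve` returns becomes the model's world for that question.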

The first test version of MarqtAI uses just a few parameters from the user before it can start researching on the internet. The user selects a research area, and clicks go.

In this diagram (yes, I made it all by myself) it's plain the process is not that complex:

[Image: diagram of the research process]

But there were definitely technical challenges to overcome. More on that later.

The problem that this approach solves is getting more consistent results because of the added control over the whole process:

  • Keyword generation (for running queries against search engines)
  • Source verification (using reputable sources)
  • Data collection (filtering, content extraction, summarisation)
  • Prompt engineering (as much an art as a science)
  • Context hardening (I just made that up)
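As one concrete example from the list above, source verification could be as simple as an allowlist filter over the URLs that come back from the search queries. The domains and TLDs here are purely illustrative, not MarqtAI's real criteria:

```javascript
// Hypothetical source verification step: keep only reputable hosts.

const REPUTABLE_TLDS = [".gov", ".edu"]; // illustrative
const REPUTABLE_DOMAINS = ["reuters.com", "statista.com"]; // illustrative

function isReputableSource(url) {
  const host = new URL(url).hostname;
  return (
    REPUTABLE_TLDS.some((tld) => host.endsWith(tld)) ||
    REPUTABLE_DOMAINS.some((d) => host === d || host.endsWith("." + d))
  );
}
```

Filtering early like this keeps junk out of the context before any tokens are spent on it.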

This approach also makes it possible to optimise for token usage (dollars...), token limits, and how much data you actually need for an analysis that makes sense. (Given the input data, of course. You know... crap in, crap out.)

My friend, a marketing specialist, was experimenting with this kind of online research using a couple of AI tools, such as LLMs that have access to the internet. He was able to get usable and verifiable results for his clients. But not every time. In the end it turned into a time-consuming effort to verify all the data and the sources. In short: the results were not consistent at all. Not surprising, as every request to an LLM can yield different results. Just consider all the variables at play here. Every request yields different data sources, context broadness, etc.

Currently MarqtAI has the following capabilities:

  • 20 research areas, such as: Branding, Customer experience, Digital marketing, Pricing strategy, Trends, Opportunities & Challenges
  • Each run collects at least 25 resources per question, which are pruned and then analysed
  • An average research area uses over 200 sources, and analyses them in about 30 minutes
  • Search engine queries are generated using the current context
  • Follow-up research questions use data gathered in a previous question
  • After data collection, the analysis is done by asking the research questions and providing the collected data to the LLM
  • The responses are all recorded and presented in PDF form at the end of a run
  • Research assistant: an AI assistant where you can ask anything about the data in the reports, as well as generate content using that data
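The "follow-up questions use previous data" point can be sketched like this (a hypothetical shape, not the actual implementation): earlier findings are folded into the prompt for the next question, so the research builds on itself.

```javascript
// Hypothetical sketch: chain earlier findings into a follow-up prompt.

function buildFollowUpPrompt(previousFindings, question) {
  const context = previousFindings
    .map((finding, i) => `Finding ${i + 1}: ${finding}`)
    .join("\n");
  return `Earlier research found:\n${context}\n\nNew question: ${question}`;
}
```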

The tool was built in Meteor with only a few extra packages: for handling the API requests and, of course, for the API calls to the LLM. I had to use the beta versions of Meteor 3, and now of course the RC version.

Now for the challenges. A key algorithm that I had to implement was "Map Reduce". Due to the token limit on every request to the LLM, a solution was needed to circumvent this problem.

Basically, Map Reduce lets you reduce a big set of data in steps until you reach a certain size. By asking the LLM to extract only the information from the source text that was related to the context (e.g. branding), the reduce algorithm turned out to be pretty efficient. Once all the data was compact enough, an analysis was possible in one last request.
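The loop above can be sketched as follows. This is a minimal, hypothetical version: `summarise` stands in for the context-aware LLM call, and a character count stands in for a real tokenizer.

```javascript
// Split text into fixed-size pieces (a real tokenizer would count tokens).
function chunk(text, maxChars) {
  const chunks = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

// Map-reduce: summarise pieces, reconcatenate, repeat until it fits.
async function mapReduceSummarise(text, context, maxChars, summarise) {
  let current = text;
  while (current.length > maxChars) {
    const pieces = chunk(current, maxChars);
    // Map: ask the LLM to keep only context-relevant content per piece.
    const summaries = await Promise.all(
      pieces.map((piece) => summarise(piece, context))
    );
    // Reduce: join the summaries; loop again if still too large.
    current = summaries.join("\n");
  }
  return current;
}
```

Because each summarisation pass keeps only context-relevant content, the data usually shrinks fast enough that one final request can do the analysis.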

Another big challenge was getting consistent replies from the AI: prompt engineering. Leave any room for ambiguity, and it will get filled. With these large amounts of data it was tedious to troubleshoot, so a new approach to storing the data in "debug mode" had to be devised, including fake API requests to speed up the process.
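The "fake API requests" idea can be sketched as a client that replays recorded responses in debug mode instead of hitting the paid API. All names here are illustrative, not the real MarqtAI code:

```javascript
// Hypothetical LLM client with a debug mode that replays stored replies.

function makeLLMClient({ debug, recorded = {}, realCall }) {
  return {
    async ask(prompt) {
      if (debug) {
        // Replay a recorded response instead of paying for a request.
        return recorded[prompt] ?? "[debug] no recorded response";
      }
      return realCall(prompt); // the real (slow, paid) LLM API call
    },
  };
}
```

Swapping the client in debug runs makes pipeline iterations fast, free, and, just as importantly, deterministic while troubleshooting prompts.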

The user interface design was done by a very talented designer in our team. I basically asked for a style guide with a few interface elements, and most importantly: a logo. Implementing the design with React in Meteor was of course a cinch.

The first test users consisted mostly of expert users, who gave us excellent feedback for improvements. We felt a lot of encouragement to continue with a more widely available public beta version. Some feedback that we got:

The user interface looks clear and self-explanatory. I checked "All reports" and took a closer look at the provided reports. The structure of the reports is clear and the scope of the reports is appealing. The content can certainly be inspiring for product development and market development.

Even though we are getting a lot of good results and insight from this tool, the first version is still pretty basic. If we get enough momentum and users I imagine an unending stream of improvements. To mention just a few:

  • More detailed statistics: access to detailed marketing data that is not publicly available
  • The possibility to research multiple products at the same time
  • Market research split into products or services
  • Automated (but curated by the user) social media posts
  • Suggestions?

In summary, I'm trying to market my marketing tool, using my marketing tool. It has been an interesting journey exploring the limitations and possibilities of generative AI in this applied context. Although it might be scary at times, in the right hands this technology could bring humans enormous benefit. I hope.

If you feel like commenting, I would love to read it. Also, feel free to check out the tool in its early beta state on

More information on the capabilities and contact information:

Finally, I would like to thank all the beta testers and people who engaged us in discussions regarding this topic. Your input has been invaluable.
