How to Automate Tweets for Free with Lyzr Automata

In today's fast-paced world of social media, crafting engaging and viral tweets can be a challenging task. However, with advancements in artificial intelligence and automation, generating compelling content has become more accessible than ever.

Enter Lyzr Automata, a cutting-edge platform that leverages AI to streamline content creation. In this blog post, we'll explore how to automate tweets with Lyzr Automata, Streamlit, and OpenAI, empowering users to effortlessly create captivating tweets on any topic of their choice.

Check out the Lyzr.ai community on Discord for free voice and text chat.

How It Works
Topic Input: Users begin by entering a topic of interest into the Streamlit web application. Whether it's breaking news, trending events, or niche subjects, the Tweet generator agent can tackle it all.
Data Extraction: Behind the scenes, the application calls the Serper.dev search API and uses BeautifulSoup to extract relevant information from the top search results for the user's chosen topic. This grounds the generated tweets in up-to-date sources.
Tweet Generation: Once the data is collected, the Tweet generator agent swings into action. It adopts the persona of a seasoned journalist and tweet influencer, tasked with crafting a viral tweet thread. The agent follows a set of rules, including ensuring the thread is engaging, informative, and addresses the chosen topic effectively.
Output: After processing the input data and adhering to the specified guidelines, the agent produces a series of tweets tailored to the user's topic. These tweets are displayed in the Streamlit app interface for easy access and sharing.
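In code terms, the whole app reduces to one call chain. Here's a minimal sketch using the function names defined in the walkthrough below:

# tweet_generator(topic) internally calls extracteddata(topic),
# which calls search(topic) and then extract_text_from_url(url) for each result.
tweets = tweet_generator("your topic here")   # example topic string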


Here's a walk-through of the code:
Imports and Setup:
pip install streamlit lyzr-automata python-dotenv requests beautifulsoup4 pillow
import streamlit as st
import os
from dotenv import load_dotenv,find_dotenv
from lyzr_automata.ai_models.openai import OpenAIModel
from lyzr_automata import Agent,Task
from lyzr_automata.pipelines.linear_sync_pipeline import LinearSyncPipeline
import json
from PIL import Image
import requests
from bs4 import BeautifulSoup
import re

load_dotenv(find_dotenv())
serpapi = os.getenv("SERPAPI_API_KEY")
api=os.getenv("OPENAI_API_KEY")
open_ai_text_completion_model = OpenAIModel(
    api_key=api,
    parameters={
        "model": "gpt-4-turbo-preview",
        "temperature": 0.2,
        "max_tokens": 1500,
    },
)

streamlit: Used for building the Streamlit web app interface.
os: Used for interacting with the operating system.
dotenv: Used for loading environment variables from a .env file.
lyzr_automata: Used for building and running the Lyzr agent pipeline.
json: Used for working with JSON data.
requests: Used for making HTTP requests.
BeautifulSoup: Used for parsing HTML content.
re: Used for regular expressions.
load_dotenv is called to load environment variables from a .env file. These hold the API keys for the search API and OpenAI (a sample .env is shown below).
An OpenAI model object is created using the loaded API key and specified parameters for text completion.
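For reference, the .env file the app expects looks like this (the variable names come from the os.getenv calls above; the values are placeholders):

SERPAPI_API_KEY=your-serper-api-key
OPENAI_API_KEY=your-openai-api-key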

Data Extraction Functions:

def search(query):
    url = "https://google.serper.dev/search"

    payload = json.dumps({
        "q": query,
        "gl": "in",
    })

    headers = {
        'X-API-KEY': serpapi,
        'Content-Type': 'application/json'
    }

    response = requests.post(url, headers=headers, data=payload)
    res = response.json()

    # Collect the links from the organic search results
    mys = []
    for item in res.get('organic', []):
        mys.append(item.get('link'))
    return mys

def extract_text_from_url(url):
    try:
        # Fetch HTML content from the URL
        response = requests.get(url)
        response.raise_for_status()

        # Parse HTML using BeautifulSoup
        soup = BeautifulSoup(response.content, 'html.parser')

        # Extract the text and collapse runs of 4+ whitespace characters to three spaces
        text_content = re.sub(r'\s{4,}', '   ', soup.get_text())

        return text_content

    except requests.exceptions.RequestException as e:
        print(f"Error fetching content from {url}: {e}")
        return None

def extracteddata(query):
    result = search(query)
    my_data = []
    for i in result:
        get_data = extract_text_from_url(i)  # may be None if a request failed
        my_data.append(get_data)
    return my_data

search takes a query string, POSTs it to the Serper.dev search endpoint, and returns a list of URLs from the organic search results.
extract_text_from_url takes a URL, fetches the HTML content, parses it with BeautifulSoup, and returns the page's text content; request errors are caught and None is returned instead.
extracteddata ties the two together: it calls search to get URLs, runs each through extract_text_from_url, and returns a list of extracted text snippets (a quick way to test this is sketched below).
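As a quick sanity check outside Streamlit, you can call these functions directly (the topic string here is just an example):

# Fetch pages for a sample topic and see how much text was extracted.
# extract_text_from_url returns None for pages that failed to load,
# so filter those out before inspecting.
snippets = extracteddata("electric vehicles")
usable = [s for s in snippets if s]
print(f"Fetched {len(snippets)} pages, {len(usable)} with extractable text")
print(usable[0][:200] if usable else "No text extracted")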

Tweet Generation:

topic = st.text_input("Enter Tweet Topic: ")

def tweet_generator(topic):
    data = extracteddata(topic)

    twitter_agent = Agent(
        role="Tweet Expert",
        prompt_persona=f"""You are a word class journalist and tweet influencer.
                write a viral tweeter thread about {topic} using {data} and following below rules:
                """
    )

    task1 = Task(
        name="Tweet Generator",
        model=open_ai_text_completion_model,
        agent=twitter_agent,
        instructions=f"""write a viral tweeter thread about {topic} using {data} and following below rules:
                1/ The thread is engaging and informative with good data.
                2/ The thread needs to be around than 3-5 tweet.
                3/ The thread need to address {topic} very well.
                4/ The thread needs to be viral and atleast get 1000 likes.
                5/ The thread needs to be written in a way that is easy to read and understand.
                6/ Output is only threads no any other text apart from thread"""
    )
    output = LinearSyncPipeline(
        name="Tweet Pipline",
        completion_message="pipeline completed",
        tasks=[
            task1,
        ],
    ).run()

    return output[0]['task_output']

topic is defined using st.text_input to capture the user's desired tweet topic.
tweet_generator takes the topic and:
Calls extracteddata to gather relevant text snippets for the topic.
Creates a Lyzr Agent named "Tweet Expert" with a prompt persona describing its role.
Defines a Task named "Tweet Generator" that specifies the model (OpenAI), the agent, and the instructions for generating a viral tweet thread (length, engagement, and content guidelines).
Wraps the task in a LinearSyncPipeline named "Tweet Pipeline", runs it, and returns the generated thread from the pipeline output (a minimal way to run this outside Streamlit is shown below).
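If you want to try the pipeline without the Streamlit UI, a minimal script (assuming the same .env variables are set) looks like this:

# Run the same pipeline from the command line, bypassing Streamlit.
# "AI in healthcare" is an arbitrary example topic.
if __name__ == "__main__":
    thread = tweet_generator("AI in healthcare")
    print(thread)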

Output:
If the user clicks the "Get Tweets" button:
tweet_generator is called with the entered topic.
The generated tweet thread is displayed using st.markdown.

if st.button("Get Tweets"):
    tweets = tweet_generator(topic)
    st.markdown(tweets)

Overall, this code demonstrates how to use Lyzr Automata with Streamlit to build a web app that automates tweet generation from a user-provided topic. It uses the Serper.dev search API for data extraction and OpenAI for text completion within the Lyzr agent framework.
Try it now: https://lyzr-tweet-generator.streamlit.app/
For more information, explore the website: Lyzr

