ChatGPT is storming the internet. I cannot scroll through my feed without seeing a mention of ChatGPT. So it would be a sin if I did not write about it. I have been using ChatGPT since the beta version was announced, playing around with it and sharing feedback on its responses from the console. It is fun and helps you learn at the same time. In this blog article, you will learn about ChatGPT and how to use it to solve performance engineers' daily questions.
What is ChatGPT?
First, let us understand the basics of GPT. GPT stands for Generative Pre-trained Transformer. That is a lot to digest. Let us put it simply.
It is a type of artificial intelligence model that is **trained** to **generate** human-like text. The model is trained on a **large dataset** of text, such as books, articles, and websites, and learns the **patterns and structure** of language. Once it is trained, it can be used to generate new text that is similar to the text it was trained on. It can be used for a variety of tasks such as language translation, text summarization, question answering, and text completion.
I have highlighted the critical terms in bold. Basically, you feed a humongous amount of information into an AI model to train it, so that it can generate new information based on what it was trained on.
You may ask, is it like a search engine? The answer is NO. A search engine crawls the internet and lists results based on the query you enter. But GPT generates new text based on the dataset it was trained on. It cannot browse the internet (at this moment).
ChatGPT Signup
Head to https://chat.openai.com/auth/login to sign up for a free account. It is a simple and straightforward process.
Use Cases for Performance Engineers
There are umpteen use cases floating around for ChatGPT, e.g., advertising, writing, screenplays, poems, website status updates, development, testing, and more.
Let us see the critical ChatGPT use cases for performance engineers.
- Ask ChatGPT to explain performance testing to your child, parents, manager, CTO, CEO, Prime Minister and more.
- Ask ChatGPT to write a performance test plan for your application.
- Ask ChatGPT to compare tools' features.
- Ask ChatGPT to analyze the performance testing results.
- Ask ChatGPT to write code for performance testing.
- Ask ChatGPT to list the risks involved in performance testing.
- Ask ChatGPT to build a performance testing tool (insane, right?)
Even Einstein asked questions. Let us ask ChatGPT now.
Explain performance testing to a 6-year-old kid
I asked "explain performance testing to a 6-year-old kid". Below is the ChatGPT's response.
It took a toy as an example and connected it with the performance testing. Not bad, eh?
Performance testing is like testing a toy before you play with it. It helps make sure the toy works well and doesn't break easily. For example, if you were testing a toy car, you would want to see how fast it can go, how well it turns, and how long the battery lasts. Similarly, performance testing is done on computer programs and websites to make sure they work well and don't crash or slow down. It's like a check-up for a computer program to make sure it's healthy and runs smoothly.
Let us shift gears and try a more complex request.
Write a performance test plan for your application
I asked ChatGPT to write a performance test plan for my application. We should not ask it literally as-is; we need to feed it the specifics, as below.
Write a performance test plan for my application for the below specifications:
- Start with the objectives of performance testing
- Banking web application
- Expected number of peak users is 5000 during normal business hours
- Expected response time is between 1-3 seconds, CPU and Memory - up to 60%
- Scenarios to be tested: Creating an account, login, add payee, transfer funds, check balance, applying for a loan, modifying profile, and logout
- Use any Java, Python or Go based open-source or commercial performance testing tool
- Timeline: Requirements gathering and test environment validation - 1 week, Test scripting - 2 weeks, Test execution - 3 weeks, Results analysis and signoff - 1 week
- Types of testing - load, stress, and endurance
- Add a section for defects lifecycle
- Add a section for known issues
- Add a section for test data management
- Add a section for references
Below is ChatGPT's response. As you can see, ChatGPT took the information from the question above and added the appropriate context to it, e.g., it added test data management, defects lifecycle, and more. I agree it is high level; you cannot use this as-is in your project. I suggest you build it up based on your requirements.
Performance Test Plan:
Objectives:
To determine the performance and scalability of the banking web application under different load and stress conditions.
To ensure that the application can handle the expected number of peak users of 5000 during normal business hours
To verify that the application meets the expected response time of between 1–3 seconds and CPU and Memory usage of up to 60%
To identify and resolve any potential performance bottlenecks in the application
Scope:
The banking web application will be tested using load, stress, and endurance testing methods.
The following scenarios will be tested: Creating an account, login, add payee, transfer funds, check balance, applying for a loan, modifying profile, and logout.
The testing will be conducted using any Java, Python or Go-based open-source or commercial performance testing tool.
Timeline:
Requirements gathering and test environment validation: 1 week
Test scripting: 2 weeks
Test execution: 3 weeks
Results analysis and sign off: 1 week
Types of testing:
Load testing: To test the application's performance under normal and peak loads
Stress testing: To test the application's performance under abnormal and extreme loads.
Endurance testing: To test the application's performance over a prolonged period of time
Defects Lifecycle:
All identified defects will be reported and tracked using a defect tracking tool
Defects will be prioritized based on their impact on the application's performance
The development team will be responsible for fixing the defects
The test team will be responsible for verifying the fixes
Known Issues:
Any known issues with the application that may affect its performance will be documented and communicated to the relevant parties
Measures will be taken to mitigate the impact of these known issues on the application's performance
Test Data Management:
Test data will be created and managed in a separate environment to ensure that it does not affect the production data
Test data will be created in a way that represents the actual data and usage patterns of the application
References:
Performance Testing Methodology by XYZ
Performance Testing Best Practices by ABC
Performance Testing Tools Comparison by DEF
Note: The referenced books or articles are hypothetical, you can use any actual references that you have used or planning to use.
Compare tools' features
I often get questions about tools: should I use this one or that one? Well, it depends on your requirements; many factors influence the decision. Let us ask ChatGPT to compare tools' features.
My question:
Compare JMeter, LoadRunner, k6, Locust, NeoLoad features. I do not have a budget to invest in a tool. But my engineers are excellent at writing Java or Go or Python code. Suggest which tool I can select for performance testing. My application is web based. I need to inject load from multiple machines easily.
ChatGPT's response:
It didn't give a solid conclusion on selecting an open-source tool; it is playing it safe. But it laid out the tools' features as shown below.
JMeter, LoadRunner, k6, Locust, and NeoLoad are all popular performance testing tools that can be used for web-based applications. Here is a comparison of their features:
- JMeter: It is an open-source tool that is written in Java and can be used for both load and functional testing. It supports distributed testing and can be run on multiple machines. It has a rich set of built-in samplers and plugins, and allows for easy creation of custom scripts. It also has good reporting capabilities.
- LoadRunner: It is a commercial tool that is widely used for load testing. It supports a wide range of protocols and has a user-friendly interface. It also has advanced features like real-time monitoring and automatic correlation. However, it is quite expensive.
- k6: It is an open-source tool that is written in Go and is focused on load testing. It has a simple script syntax and a built-in performance metrics engine. It supports distributed testing and can be run on multiple machines. It also has a cloud service for running test at scale.
- Locust: It is an open-source tool that is written in Python and is focused on load testing. It uses a simple script syntax, allowing you to easily define user behavior. It supports distributed testing and can be run on multiple machines.
- NeoLoad: It is a commercial tool that is used for load and performance testing. It supports a wide range of protocols and has a user-friendly interface. It also has advanced features like real-time monitoring and automatic correlation. It also has a cloud service for running test at scale.
Since you have a team that is excellent at writing Java or Go or Python code and you do not have a budget to invest in a tool, I would suggest using an open-source tool like JMeter, k6 or Locust. These are all great options and have a wide range of features. They also have active communities and a wealth of resources available online. Among these three, if you have specific requirement for test script creation and handling, you can pick the one that you are comfortable with.
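Since Locust scripts are plain Python, here is a quick taste of the "simple script syntax" ChatGPT mentions. This is my own minimal sketch, not ChatGPT's output; the host and the /balance endpoint are placeholders:

from locust import HttpUser, task, between

class BankingUser(HttpUser):
    # Placeholder base URL for the application under test
    host = "https://bank.example.com"
    # Wait 1-3 seconds between tasks, like a real user pausing
    wait_time = between(1, 3)

    @task
    def check_balance(self):
        # Each simulated user repeatedly hits the (hypothetical) balance page
        self.client.get("/balance")

Save it as locustfile.py and run locust -f locustfile.py to get the web UI, where you set the user count and spawn rate.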
Analyze the performance testing results
Now, let us ask ChatGPT to analyze the raw performance testing results from JMeter.
Caution: DO NOT SEND ANY CONFIDENTIAL INFO IN THE CHAT.
I am using dummy data for testing purposes.
My question: "Analyze the below performance test results." Then I pasted the sample data into the chat.
ChatGPT's response:
The performance test results show a series of HTTP requests sent to the URL http://example.com/ with a mix of 200 and 503 status codes. The elapsed time for each request ranges from 24 to 95 milliseconds. The response message is "OK" for all requests.
The thread name indicates that the requests were sent from two different thread groups, "Thread Group 1-1" and "Thread Group 1-2". The success column indicates that all requests were successful and no failure message was recorded.
The bytes and sent bytes columns show that 1591 bytes were received and 109 bytes were sent for each request. The grpThreads and allThreads columns show that there were 2 threads in each group and 2 threads in total.
The latency, idle time, and connect columns show that the latency for each request was between 0 and 33 milliseconds, there was no idle time, and the connect time was between 0 and 56 milliseconds. Overall, these results indicate that the application performed well, with fast response times and no failures.
However, it's worth noting that there are a lot of 503 status codes which indicates that the server is returning a service unavailable error. It's also worth noting that this data is only a sample and additional testing will be required to have a better understanding of the application performance.
It extracted the correct number of threads, HTTP status codes, bytes, elapsed time, and more. It added its analysis for each parameter and concluded with a generic suggestion.
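If you want to double-check ChatGPT's arithmetic, the same raw JTL file can be summarized with a few lines of pandas. This is my own sketch, assuming JMeter's default CSV column names (elapsed, responseCode) and a results file named results.jtl:

import pandas as pd

# Load the raw JMeter results (a JTL file in CSV format with a header row)
df = pd.read_csv("results.jtl")

# How many of each HTTP status code, e.g. 200s vs. 503s
print(df["responseCode"].value_counts())

# Response-time percentiles; JMeter's "elapsed" column is in milliseconds
print(df["elapsed"].quantile([0.50, 0.90, 0.95, 0.99]))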
Write code for performance testing
Let us ask ChatGPT to write code for performance testing with various tools. My question on LoadRunner:
Write a LoadRunner script to test https://api.example.com for Web HTTP/HTML protocol
Here is an example of a LoadRunner script that can be used to test the https://api.example.com endpoint using the Web HTTP/HTML protocol:
// Declare the web_custom_request function
web_custom_request("api_example",
    // Define the request method and endpoint
    "URL=https://api.example.com",
    "Method=GET",
    // Define any headers that need to be sent with the request
    "Header=Content-Type:application/json",
    "Header=Accept:application/json",
    // Define any body data that needs to be sent with the request
    "Body=",
    // Define the response handling function
    "ResponseFormat=JSON",
    "Snapshot=t1.inf",
    LAST);
This script uses the web_custom_request function to send a GET request to the https://api.example.com endpoint. The request includes a "Content-Type" and "Accept" header set to "application/json" in order to indicate that the request and response should be in JSON format. The ResponseFormat is set to "JSON" to handle the response correctly. The Snapshot=t1.inf is used to save the response data to a file named "t1.inf" in the script's results directory.
This script is a basic example and can be modified according to the test scenario. As an example, you can add a parameterization for different inputs, add correlation and validation rules, and add think time, looping, and conditions to the script. Additionally, to load test this, you will have to add Vusers, Ramp up and Ramp down time, and define the test scenario using the controller.
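One caveat from my LoadRunner experience: as far as I know, web_custom_request does not accept "Header=" or "ResponseFormat=" arguments; request headers are normally set by calling web_add_header() before the request. Treat the generated snippet as a starting point and validate it in VuGen before trusting it.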
My question on k6:
Write a k6 script to test https://api.example.com with sleep of 2 seconds between the requests and upload the results to k6 cloud
ChatGPT's response:
Here is an example of a k6 script that can be used to test the https://api.example.com endpoint with a 2-second sleep between requests and upload the results to the k6 cloud:
import http from "k6/http";
import { sleep } from "k6";
import { Trend } from "k6/metrics";

let myTrend = new Trend("my_trend");

export let options = {
  stages: [
    { duration: "5s", target: 1 },
    { duration: "10s", target: 10 },
    { duration: "5s", target: 0 },
  ],
  thresholds: {
    "http_req_duration": ["p(95)<200"],
  },
  ext: {
    loadimpact: {
      name: "my test",
    },
  },
};

export default function() {
  let response = http.get("https://api.example.com");
  myTrend.add(response.timings.duration);
  sleep(2);
}
This script uses the http.get() method from the k6/http package to send a GET request to the https://api.example.com endpoint. The sleep(2) method is used to wait for 2 seconds between requests. A trend metric "my_trend" is created to track the response duration of the requests.
The stages option is used to define the load test scenario, where it starts with 1 VU for 5 seconds, then increases to 10 VU for 10 seconds and finally decreases to 0 VU for 5 seconds. The thresholds option is used to define the acceptable response time, in this case, it's set to 95 percentile of the response time should be less than 200ms. Finally, the ext option is used to configure the test name and upload the results to the k6 cloud.
You will have to provide your cloud API key in the command line while running the test like this: k6 run -e API_KEY=YOUR_API_KEY script.js
You can also add more options, like adding more requests, passing parameters, adding validation and assertion rules to this script according to your test scenario.
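A caveat here as well: in the k6 versions I have used, you authenticate to the cloud with k6 login cloud --token YOUR_TOKEN (or the K6_CLOUD_TOKEN environment variable) and stream results with k6 run -o cloud script.js; the -e flag only injects environment variables into the script itself. Verify against the k6 docs before relying on ChatGPT's command.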
My question on JMeter:
Write a JMeter script to test https://api.example.com with sleep of 2 seconds between the requests and upload the results to Grafana cloud.
ChatGPT's response:
Here is an example of a JMeter script that can be used to test the https://api.example.com endpoint with a 2-second sleep between requests and upload the results to Grafana cloud:
1. Start by adding the following plugins to your JMeter instance:
   - The Backend Listener for sending metrics to Grafana Cloud.
   - The JMeter Plugins Manager for installing additional plugins.
2. Create a Thread Group and add a HTTP Request sampler:
   - Set the Server Name or IP to https://api.example.com.
   - Set the Method to GET.
3. Add a Constant Timer to the HTTP Request sampler and set the Thread Delay to 2000 milliseconds. This will add a 2-second delay between requests.
4. Add a Backend Listener to the Thread Group:
   - Set the InfluxDB URL to the URL of your Grafana Cloud instance.
   - Set the InfluxDB User and InfluxDB Password to the appropriate credentials for your Grafana Cloud instance.
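Note that the steps stop at the listener configuration. To actually execute the plan and ship the metrics, you would still run JMeter in non-GUI mode, e.g. jmeter -n -t script.jmx -l results.jtl.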
List the risks involved in performance testing
Let us ask ChatGPT to list the risks involved in performance testing.
ChatGPT's response is insane.
Performance testing can have several risks involved, such as:
- Incorrect test design: If the test design is not appropriate or does not accurately reflect the real-world usage of the application, the results of the performance test may be unreliable and may not identify the true performance bottlenecks of the application.
- Insufficient data: If the test data is not representative of the real-world usage of the application, the results of the performance test may not be accurate and may not identify the true performance bottlenecks of the application.
- Inadequate test environment: If the test environment is not representative of the production environment, the results of the performance test may not be accurate and may not identify the true performance bottlenecks of the application.
- Unforeseen system behavior: Performance testing may reveal unexpected system behavior, such as increased resource usage, capacity issues, and other scalability problems that were not anticipated during development.
- Overload and crash: Performance testing can cause unexpected overloads and crashes of the application or the testing infrastructure, resulting in application downtime, data loss, and other negative consequences.
- Risk to data security: Performance testing may expose sensitive data and pose a risk to data security and compliance.
- Lack of test Automation: If you are not automating your performance tests, it will be time-consuming, and manual testing will be prone to errors, and it will be difficult to reproduce the test results.
- Inadequate monitoring and analysis: If the monitoring and analysis of the performance test results is not adequate, the true performance bottlenecks of the application may not be identified, and the test results may be unreliable.
- Cost: Performance testing can be a resource-intensive process that requires significant investment in terms of hardware, software, and personnel.
- Lack of standardization: Without a standardized process and test methodology, performance testing can be prone to errors and inconsistencies, resulting in unreliable test results and an inability to compare results across different tests or test runs.
Build a performance testing tool
Let us build a performance testing tool in Python (slow, isn't it?) using ChatGPT. Here is my request:
Help me to build a performance testing tool in Python using the HTTPX and Pandas libraries. I need the following features:
- ability to test the http requests using multiple threads
- handle the errors
- parameterize the input test data
- analyze the test results
- generate HTML report
ChatGPT's response:
Here is an example of how you can build a performance testing tool in Python using the HTTPX and Pandas libraries:
- Start by installing the necessary libraries:
pip install httpx pandas matplotlib
- Create a function that makes HTTP requests using the httpx library. This function should take the input parameters (URL, method, headers, and body) as arguments and return the response.
import httpx

def make_request(url, method, headers, body):
    try:
        response = httpx.request(method, url, headers=headers, json=body)
        return response
    except Exception as e:
        print("Error: ", e)
- Create a function that reads the test data from a CSV file and parameterizes it. This function should return the test data in the form of a list of dictionaries.
import pandas as pd

def read_test_data(file_path):
    df = pd.read_csv(file_path)
    test_data = df.to_dict(orient='records')
    return test_data
- Create a function that runs the performance test using multiple threads. This function should take the test data and the request function as arguments and return the test results in the form of a list of dictionaries.
import concurrent.futures

def run_performance_test(test_data, request_function):
    results = []
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = [executor.submit(request_function, data['url'], data['method'],
                                   data['headers'], data['body'])
                   for data in test_data]
        for future in concurrent.futures.as_completed(futures):
            response = future.result()
            results.append({'url': response.url,
                            'status_code': response.status_code,
                            'elapsed_time': response.elapsed.total_seconds()})
    return results
- Create a function to analyze the test results using the Pandas library. This function should take the test results as an argument and return the statistics such as mean, median, and standard deviation of the response time.
import pandas as pd

def analyze_results(results):
    df = pd.DataFrame(results)
    statistics = df.describe()
    return statistics
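Notice that the response stops short of the HTML report feature I asked for. If you want to round the tool off yourself, pandas can render the statistics to HTML. Here is my own minimal sketch (the file names are placeholders):

def generate_html_report(statistics, output_path="report.html"):
    # DataFrame.to_html() turns the describe() table into an HTML table
    html = "<html><body><h1>Performance Test Report</h1>"
    html += statistics.to_html()
    html += "</body></html>"
    with open(output_path, "w") as f:
        f.write(html)

# Wiring it all together (assumes test_data.csv has url, method, headers, and body columns)
test_data = read_test_data("test_data.csv")
results = run_performance_test(test_data, make_request)
generate_html_report(analyze_results(results))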
Final Thoughts
Personally, I have not tested the above snippets. But I have been using ChatGPT to write simple Bash, Python, and Groovy snippets. Sometimes ChatGPT throws incorrect responses, and you can easily trick it. We cannot expect a baby to run on the day it is born.
Let us embrace the power of AI and use it to build a brighter, more prosperous future for ourselves and for generations to come. This revolutionary AI chatbot is here to change the world, and we must embrace it with all our strength and determination.