Introduction
A few weeks ago, I chatted with a lady at the bus station. We both work in tech, and as we talked about our shared passion for technology, she asked, "Do you think AI will replace your job?" This question often arises in discussions about AI and ML. In my view, AI and ML won't replace us; they'll boost our abilities, making medical diagnoses more accurate, strengthening security, and improving overall work efficiency.
So, after AWS re:Invent last week, where services like Amazon Q and Amazon CodeCatalyst were introduced, should I be worried about my job as a Developer? Let's explore these services together and find out.
You've probably heard of the SST framework and SvelteKit; nothing groundbreaking there. Although I'm no expert in either, I decided to experiment with these tools, combining SST and Svelte with Amazon Q and Amazon CodeWhisperer via the AWS Toolkit in my IDE.
In this blog post, I'll share my thoughts on this setup and my experience with these services through the AWS Toolkit. You'll also find out whether we should be concerned about AI assistants taking over our jobs as Developers.
An Introduction to Svelte, SST, Amazon Q and Amazon CodeWhisperer
Before diving into writing code and utilising various services from the AWS Toolkit in VSCode, it's worth taking a quick look at what we're about to use, especially if you're not familiar with them.
Svelte
Svelte is a JavaScript framework for building UI components, similar to React and Vue. What sets Svelte apart is that it is a compiler: at build time, it transforms your components into code that works directly with native browser APIs.
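To make that concrete, here's a minimal, hypothetical Svelte component. The compiler turns this declarative source into plain JavaScript that updates the DOM directly, rather than shipping a heavyweight framework runtime to interpret it:

```svelte
<script>
  // Local component state; Svelte compiles reactivity into direct DOM updates
  let count = 0;
</script>

<button on:click={() => count += 1}>
  Clicked {count} {count === 1 ? 'time' : 'times'}
</button>
```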
SST Framework
SST is an open-source framework designed to facilitate the development and deployment of Serverless stacks on AWS. Under the hood, it integrates with the AWS CDK. Its primary benefit, however, is letting us concentrate on defining resources as Infrastructure as Code (IaC) using familiar languages like TypeScript.
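For a flavour of what that looks like, here's a minimal sketch of an SST v2 `sst.config.ts` defining an API route backed by a Lambda function; the app name, region, and handler path are placeholders:

```ts
import { SSTConfig } from "sst";
import { Api, StackContext } from "sst/constructs";

// A stack is just a TypeScript function that declares resources
function MyStack({ stack }: StackContext) {
  // An HTTP API whose route is handled by a Lambda function
  const api = new Api(stack, "api", {
    routes: {
      "GET /story": "packages/functions/src/story.handler",
    },
  });
  stack.addOutputs({ ApiEndpoint: api.url });
}

export default {
  config() {
    return { name: "my-sst-app", region: "us-east-1" };
  },
  stacks(app) {
    app.stack(MyStack);
  },
} satisfies SSTConfig;
```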
Amazon Q
Think of OpenAI's ChatGPT and Microsoft's Copilot; Amazon has now introduced its own AI assistant, Amazon Q.
There are different ways we can use Amazon Q. You can connect it to your business data for a personalised touch, like a chatbot customised to your specific needs.
It's also accessible within the AWS console as an expert that can suggest architectures and solution designs following best practices. Amazon Q also integrates with Amazon QuickSight, assisting with visualisations, data analysis, and answering any data-related questions you may have. Most importantly for this post, as a developer you can leverage Amazon Q within your IDE for code improvements, debugging, and troubleshooting. That's the use case I am going to focus on in this blog.
Amazon CodeWhisperer
CodeWhisperer is a GenAI-powered tool that provides code recommendations based on pseudocode or existing code. It can be configured within your IDE or used from the command line. Extending its capabilities, CodeWhisperer is also compatible with certain AWS services, including Amazon SageMaker Studio and AWS Glue Studio.
You can utilise Amazon CodeWhisperer at no cost: configure it within your IDE, authenticate, and you're ready to start leveraging its features.
Experimenting with Amazon Q and Amazon CodeWhisperer
I previously built a full-stack application using React, Next.js, and AWS Serverless; check out that blog post for a guide on incorporating ML features with Amazon Bedrock into your application. In this part, I'll explain how I used Amazon Q and CodeWhisperer to rebuild the story generator Lambda function in my app. Since I wasn't initially familiar with the SST framework or Svelte, I did some reading and watched tutorials first, so I could properly assess Amazon Q's responses and CodeWhisperer's code recommendations.
Amazon Q For Analysis And Explanation
I began by asking Amazon Q about the SST framework, hoping for a quick and helpful response.
Next, I was keen to know how SST integrates with Svelte.
This convinced me. Next, to find out how to get started with Svelte, I asked Amazon Q for guidance.
I ran the suggested command, and the Svelte app was created successfully. Then, I asked Amazon Q to explain the structure of a Svelte project.
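For reference, a freshly scaffolded SvelteKit project looks roughly like this (exact files vary with the template options you pick):

```
my-svelte-app/
├── src/
│   ├── lib/              # shared components and utilities
│   ├── routes/
│   │   └── +page.svelte  # the root page
│   └── app.html          # HTML shell
├── static/               # assets served as-is
├── svelte.config.js
├── vite.config.ts
└── package.json
```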
Before diving into this experiment, I studied the concept of using SST as a Serverless stack with a UI framework like Svelte. The key idea is a monorepo, the recommended structure that makes all resources accessible to the frontend.
Understanding these fundamental concepts, even with an AI assistant, helps in asking accurate questions and receiving precise answers. This approach prevents blindly accepting the assistant's recommendations.
Next, I'll use Amazon Q to inquire about creating an SST Stack and initialising a Svelte app within an SST app.
I like how Amazon Q gives a source link for each part of the answer and suggests follow-up questions, as you can see in the screenshots. In this case, I found the SST app creation command in the source. After a few minutes, the SST app was successfully created. To follow best practices, I moved the Svelte app into the my-sst-app/packages folder, turning the project into a monorepo. Alternatively, you can create a new Svelte app directly within the SST app.
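The resulting layout looked roughly like this; the directory names are mine, not a prescribed structure:

```
my-sst-app/
├── sst.config.ts        # SST app definition (IaC)
├── package.json         # workspace root
└── packages/
    ├── functions/       # Lambda handlers
    │   └── src/
    └── frontend/        # the Svelte app, moved in from its own folder
```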
Amazon Q & CodeWhisperer For Coding
Now, with our SST and Svelte apps set up, let's try out Amazon CodeWhisperer and request an explanation of the code from Amazon Q. We'll use Amazon CodeWhisperer to write the initial lambda function to generate a story with Amazon Bedrock.
Here's the pseudocode for the Lambda function, step by step (a sketch of how it sits in the handler file follows the list):
When using Amazon CodeWhisperer, wait for suggestions based on your pseudocode, then choose the most relevant one. Alternatively, start typing, and it will provide recommendations. I proceeded line by line to complete the Lambda function and analysed the suggestions once the implementation was finished.
- Bring the Bedrock client into my SST app (I referred to Amazon Q for guidance)
- Extract and validate the topic from the lambda event
- Construct Prompt
- Construct Payload
- Initialise Bedrock Client and invoke the model
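Concretely, I placed the pseudocode in the handler file as comments for CodeWhisperer to expand, one step at a time. A rough sketch of the starting point, with a hypothetical file path:

```ts
// packages/functions/src/story.ts (hypothetical path)
// Each comment below is a CodeWhisperer prompt, expanded line by line:

export const handler = async (event: any) => {
  // Extract and validate the topic from the lambda event
  // Construct Prompt
  // Construct Payload
  // Initialise Bedrock Client and invoke the model
};
```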
Amazon Q For Debugging
I was close to completing the Lambda implementation. After applying the suggested code for model invocation, I had to import InvokeModelCommand from Amazon Bedrock. Let's ask Amazon Q about importing the module.
Following the suggestions led to an import resolution error, so I turned to Amazon Q for debugging.
I've used the Amazon Bedrock SDK before, and I know that `InvokeModelCommand` is in the `client-bedrock-runtime` package. Despite tweaking my question for specificity, the response continued to reference the `client-bedrock` package. Just to be thorough, I asked the same question to ChatGPT and received a similar response.
The answer was a bit misleading. As I followed the AI assistant's suggestions for implementing the Lambda function, more errors occurred. For example, I had to import the Bedrock client from `client-bedrock-runtime`, and the suggested payload was incorrect. The AI also couldn't guide me on how to destructure the generated text response from the text model. After fixing these issues, the Lambda function for generating the story was implemented properly.
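Pulling those fixes together, here's a minimal sketch of the finished handler. I'm assuming the Amazon Titan text model here, so the model ID, payload shape, and response fields are illustrative; other models use different shapes:

```ts
// A minimal sketch of the finished handler, assuming the Amazon Titan
// text model. Model ID, payload shape, and response fields vary by model.
import {
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";

const client = new BedrockRuntimeClient({});

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  // Extract and validate the topic from the lambda event
  const { topic } = JSON.parse(event.body ?? "{}");
  if (!topic) {
    return { statusCode: 400, body: JSON.stringify({ error: "topic is required" }) };
  }

  // Construct Prompt
  const prompt = `Write a short story about ${topic}.`;

  // Construct Payload (Titan-specific shape)
  const payload = {
    inputText: prompt,
    textGenerationConfig: { maxTokenCount: 512, temperature: 0.7 },
  };

  // Initialise Bedrock Client and invoke the model
  const response = await client.send(
    new InvokeModelCommand({
      modelId: "amazon.titan-text-express-v1",
      contentType: "application/json",
      accept: "application/json",
      body: JSON.stringify(payload),
    })
  );

  // Destructure the generated text from the Titan response shape
  const { results } = JSON.parse(new TextDecoder().decode(response.body));
  return {
    statusCode: 200,
    body: JSON.stringify({ story: results?.[0]?.outputText }),
  };
};
```

Note the import comes from `@aws-sdk/client-bedrock-runtime`, the data-plane package; `@aws-sdk/client-bedrock` only exposes control-plane operations such as listing foundation models, which is exactly the confusion the AI assistants kept running into.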
So far, I had used Amazon Q and CodeWhisperer for various tasks: general knowledge questions, coding, and debugging. The experiment continued from there; I explored using Amazon Q to integrate an AppSync API resource into an SST stack, deploy the stack, and create a simple form with a single input field in Svelte. To keep this post concise, I'll conclude my documentation of the experiment here. In the next part, I'll share my thoughts on how it went.
How Did This Experiment Go?
My experiment with Amazon Q and CodeWhisperer wasn't limited to a Lambda function; I also used CodeWhisperer in AWS Glue Studio and found it efficient.
Will I use or recommend Amazon Q? Absolutely. Setting it up in VSCode is convenient, and it's not just for coding; Amazon Q also works magic for data analytics and architecture solutions. Check out the blog post by Wendy, an AWS Data Hero, explaining how Amazon Q integrates with Amazon QuickSight and unleashes its power. The source links it provides are also helpful.
Are the AI answers convincing? I'd say they're quite good. They work well not just for coding and debugging; if you provide an Infrastructure as Code (IaC) template, the AI can analyse it and suggest AWS best practices. When using AI assistants like ChatGPT or Amazon Q, the key is the user's input, the "prompt".
The assistant answers and recommends based on your input. For instance, when the Lambda implementation error occurred, I used Amazon CodeWhisperer to import the Bedrock client package. This time, I modified the pseudocode from `import Bedrock Client` to `import Bedrock runtime client`, and it correctly imported the modules. Pseudocode precision matters, and CodeWhisperer also makes recommendations based on existing code.
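In other words, the wording of the pseudocode comment steered which package CodeWhisperer reached for; roughly:

```ts
// Vague pseudocode — steered CodeWhisperer towards the control-plane package:
// import Bedrock Client
// import { BedrockClient } from "@aws-sdk/client-bedrock";

// Precise pseudocode — yielded the package that actually exports InvokeModelCommand:
// import Bedrock runtime client
import {
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";
```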
Will I use AI assistants for learning and software development?
It depends on whether they're designed for training and learning purposes; otherwise, following their suggestions can be confusing. They're meant to assist, not do the job for you. I tested Amazon Q and Amazon CodeWhisperer for error fixing and code optimisation, and they performed well. While expecting AI bots like ChatGPT, Amazon Q, or Microsoft Copilot to build your whole app may be premature, Amazon Q provides solid recommendations on AWS best practices.
Should we worry about AI taking over our jobs?
I believe that AI/ML is here to assist and improve our lives. AI extends beyond the tech industry, impacting various domains with daily use cases. But will it replace our roles soon? I remain optimistic and say no. We build and train AI solutions, teaching them to save time, cost, and lives. The intention is not to replace roles; practising responsible use of AI can lead to a brighter future.
In closing, I want to share one of my favourite paragraphs from Chip Huyen's book "Designing Machine Learning Systems." All credit to the author.
In early 2020, the Turing Award winner Professor Geoffrey Hinton proposed a heatedly debated question about the importance of interpretability in ML systems. Suppose you have cancer and you have to choose between a black box AI surgeon who cannot explain how it works but has a 90% cure rate and a human surgeon with an 80% cure rate. Do you want the AI surgeon to be illegal? A couple of weeks later, when I asked this question to a group of 30 technology executives at public nontech companies, only half of them would want the highly effective but unable-to-explain AI surgeon to operate on them. The other half wanted the human surgeon. While most of us are comfortable with using a microwave without understanding how it works, many don't feel the same way about AI yet, especially if that AI makes important decisions about their lives.