We all have fears – water, heights, weird smells, you name it. Me? I've got a special fear called UI change phobia. It's like my internet journey hits a speed bump every time software decides to change the UI.
Take Twitter, for example. Imagine the "Tweet" button doing a daily dance, from the right bottom to the top left. Sounds like a game, right? Now, picture this happening not just on social media but also in complicated places like the AWS Dashboard. Not so fun now, is it?
If you are someone who enjoys watching, I am curating short videos for the entire blog. If you don't want to miss out, make sure to subscribe.
Getting back to the blog. For expert UI/UX researchers, tweaking layouts is like playing interior designer for the internet. Changing the layout gives them analytics and, with that, a way to curate the best experience. They ultimately want to make things comfy for us in the long run. But guess who ends up feeling lost in the meantime? Yep, us!
Personally, it's exhausting to keep up with these changes. Imagine having 10 servers to check every day, and each time you press what feels like a gazillion buttons to find the logs. And just when you get there, you realise the product team has shifted the logs option to test their logic map. It's like a bad game of hide-and-seek, and I wouldn't blame anyone for getting a bit frustrated.
And, this is where the magic of Shell Scripting swoops in. Commands barely do the cha-cha, and even if they decide to change, there's a golden rule – it's well-documented, and the logic stays put. Enter the superhero of consistency: your very own Shell Script 🔥
Imagine this: every morning, you wake up, grab your cup of coffee, and you run a simple script. You toss in the resource name, and voilà, like clockwork, the script fetches the logs you desire. It's like having a reliable assistant that doesn't flinch at button shuffles.
I'm a firm believer in DRY (Don't Repeat Yourself), and you should be too! Why press a gazillion buttons when a simple script can do the heavy lifting? So, let's embrace the power of Shell Scripting, where the only dance moves are the ones you decide. After all, in the ever-changing UI landscape, a bit of scripting stability goes a long way.
You are good if you understand the need for Shell Scripting and are motivated enough to learn 🚀✨
Types of Shells 🐚
A shell serves as a crucial interface between the user and the operating system kernel, enabling command-line communication. It acts as an intermediary, interpreting user commands and translating them into instructions that the kernel can execute.
Shells can broadly be classified into 5 main categories, each designed for specific tasks and preferences. We will understand a bit about them and then move on with the commands.
1️⃣ Bourne Shell (sh)
Steve Bourne is credited with developing the inaugural UNIX shell, known as sh, in 1979. It gained popularity for its speed. However, being the first shell, it lacks conveniences like recalling previous commands with the up arrow or handling logical and arithmetic operations.
2️⃣ C Shell (csh)
Developed by Bill Joy at the University of California, the C shell, denoted as csh, stands out for its programming-oriented features. It includes built-in support for arithmetic operations and adopts a syntax reminiscent of the C programming language.
Addressing a gap in earlier Linux shells, such as the Bourne shell, the C shell introduced command history, allowing users to track and reuse previous commands efficiently. It gained popularity for the incorporation of "aliases," providing a flexible and customizable way to define shortcuts for frequently used commands.
3️⃣ Korn Shell (ksh)
Crafted by David Korn as an enhancement to the traditional sh, the Korn shell, denoted as ksh, stands as a superset of sh. It incorporates built-in support for arithmetic operations while introducing interactive features akin to the C shell.

A notable feature of ksh is its versatility: it seamlessly executes scripts written for both sh and csh. Beyond this compatibility, ksh introduces advanced functionalities such as string, array, and function manipulation, drawing parallels to the expressive capabilities of the C programming language.

ksh gained widespread acclaim for its speed, being recognized as the fastest interpreter at the time of its initial release. This combination of compatibility and advanced features contributed to its popularity in the Unix and Linux communities.
4️⃣ GNU Bourne Again Shell (bash)
Widely known as Bash, the GNU Bourne-Again shell was crafted for compatibility with the Bourne shell. It incorporates features from various Linux shells, such as the Korn shell and C shell.
Bash stands out by automating the recall of previous commands and allowing easy editing with arrow keys—an enhancement over the Bourne shell, providing a more user-friendly and efficient command-line experience.
5️⃣ Z Shell (zsh)
Zsh, the default shell on macOS, is a modern shell known for its rich feature set. It offers powerful command completion, letting users complete commands swiftly with the Tab key, and dynamic filename generation based on specified conditions, which makes file management more efficient. Its plugin-friendly architecture lets users customize their shell environment extensively. This adaptability has made Zsh one of the most feature-rich and user-friendly shells available.
Completing the historical journey through the development of shells provides valuable insights into the evolution of these crucial components. For instance, understanding the progression from the initial Bourne shell to the advanced features of shells like Zsh sheds light on the iterative improvements and innovations. Let's quickly check out the Shell and corresponding path.
Shell | Path |
---|---|
sh | /bin/sh |
csh | /bin/csh |
ksh | /bin/ksh |
bash | /bin/bash |
zsh | /bin/zsh |
Having covered the major shells in use, our focus now turns to Zsh and Bash. These two shells take center stage in modern software development and automation lifecycles, making them pivotal tools in contemporary computing. Before we dive in,
Bash and Zsh are Interpreters
In case you are confused between interpreters and compilers: an interpreter reads and executes code line by line, providing immediate feedback on errors. A compiler, on the other hand, translates the entire program before execution, creating a standalone executable; think of it as translating an entire book before reading it. Let's start with learning basic commands.
UNIX Starters 🏁
Even if you don't want to automate tasks or configure cron jobs, these commands will help you work with files and speed up everyday UI tasks. Well, in case you have never seen a terminal,
Currently, I'm on MacOS, utilizing Zsh. Let's dive back into learning commands. Instead of lengthy descriptions, I'll present them in a table for clarity. As we delve deeper, you'll gain a better understanding of each command and its functionalities.
Command | Description | Example |
---|---|---|
ls | Lists files and directories in the current location. | ls or ls -l for detailed listing |
cd | Changes the current directory. | cd Documents |
pwd | Displays the present working directory. | pwd |
cp | Copies files or directories. | cp file.txt destination/ |
mv | Moves or renames files and directories. | mv file.txt newfile.txt |
rm | Removes/deletes files or directories. | rm file.txt |
mkdir | Creates a new directory. | mkdir new_directory |
rmdir | Removes an empty directory. | rmdir empty_directory |
cat | Concatenates and displays the content of files. | cat file.txt |
grep | Searches for a pattern in files. | grep pattern file.txt |
chmod | Changes file permissions. | chmod +x script.sh |
ps | Displays information about active processes. | ps aux |
kill | Sends a signal to terminate a process. | kill -9 process_id |
ssh | Connects to a remote server securely. | ssh username@remote_host |
echo | Displays text or variables. | echo "Hello, World!" |
Congrats on mastering some commands! But hey, the adventure continues. If things are a bit fuzzy, no worries – just give those commands a spin. Now, let's chat about two command champs that tend to stump folks: chmod and grep.
Change Mode 🎨
As a system administrator, one of the most important tasks is access management. Say you create a file expenses.txt; you don't want everyone to change its content, so you give the relevant group read-only access. But how? Enter chmod.

The chmod command is a powerful UNIX tool for changing file permissions. It lets users control access to their files by specifying who can read, write, and execute them. In UNIX, permissions are grouped into three categories: owner (u), group (g), and others (o), and for each category we can apply certain permissions 👇
Permission | Symbol | Explanation |
---|---|---|
Read | r | Allows reading the file. |
Write | w | Permits modifying the file. |
Execute | x | Grants the ability to execute the file. |
Let's write the command for the expenses.txt file above, giving the owner read and write access and the group read access.

chmod u=rw,g=r expenses.txt

OR

chmod u+rw,g+r expenses.txt
After running any of the commands the file access would be modified. To change it further, refer to the table above and run the command with the needed flag and parameters.
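To see the change take effect, here's a small sketch; the o= part, which clears access for others, is an addition of mine so the result is predictable:

```shell
# Create the file, then restrict it: owner read+write, group read, others nothing
touch expenses.txt
chmod u=rw,g=r,o= expenses.txt

# ls -l prints the permission string in the first column
ls -l expenses.txt   # first column reads -rw-r-----
```

The first character of that column is the file type (- for a regular file), followed by three triplets for owner, group, and others.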
Global Regular Expression Print 🗂️
Searching through a UI can be one of the most tedious tasks. The grep command is a powerful text-search utility that allows users to search for specific patterns or expressions within files. It's incredibly handy for sifting through large amounts of text to find relevant information. The basic syntax is:
grep [options] pattern [files...]
- pattern: The text or regular expression we want to search for.
- file: The file or files we want to search within.
Let's use the grep command to find every term that ends with Salary in the expenses.txt file, so that it matches developerSalary, managerSalary, internSalary, and so on.
grep "Salary$" expenses.txt
The regex symbol $ indicates the end of a line, so "Salary$" searches for every line that ends with Salary. You can use a whole array of regex symbols and expressions to make your searches more precise. In case you don't know much about regex and want me to explain, do let me know in the comments.
Variables in Shell 🔡
In UNIX, variables are like containers for information. They make scripts clearer by giving meaningful names to values, allowing data reuse, and helping handle dynamic information, like user input or command results. Variables also play a role in passing information between different parts of a script.
Before we start writing scripts let's get our fundamentals clear with writing variables in the command line itself.
greetings="Hello, World" && echo "$greetings"
Here, we store the string Hello, World in the variable named greetings, and then print it with the echo command. echo $greetings would also work, but the quotation marks prevent possible spaces in the value from wreaking havoc on the data. So, as a good practice, always quote your variable expansions.
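A quick demonstration of why the quotes matter: with a value containing multiple spaces, the unquoted expansion gets re-split into words and the spacing is lost.

```shell
greetings="Hello,   World"   # note the multiple spaces

echo $greetings     # unquoted: word splitting collapses them -> Hello, World
echo "$greetings"   # quoted: the value is preserved exactly  -> Hello,   World
```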
If you have worked with any programming language, I am pretty sure you understand the importance of scope. In simple terms, the scope of a variable defines its effective boundaries. In shell, scope falls into 3 main categories.
1️⃣ Local Scope: Local variables are specific to the current shell session or script. They have a limited scope and are not visible to external commands or scripts. Example to make the meaning more clear,
EDITOR=vim
crontab -e
In that particular shell session, whenever we run crontab -e, it will open the vim editor. When we exit the shell session and start a new one, crontab -e will open the default editor again.
2️⃣ Environment Scope: Environment variables are accessible to any child process spawned from the current shell. They provide a way to share information between the shell and its subprocesses.
export EDITOR=vim
crontab -e
When we prefix the assignment with export, the variable becomes an environment variable: whenever we spin up crontab -e, it opens Vim as the editor, even though crontab runs as a child process. Environment variables are used for values that every child process should see; to make them persist across sessions, you put the export line in your shell's startup file (like ~/.bashrc or ~/.zshrc). For example, whenever we set PATH we use an environment variable.
3️⃣ Command Scope: Available for one instance of command, once execution is done instance doesn't persist.
EDITOR=nano crontab -e
Over here, the EDITOR variable is limited to the command. Even if we run crontab in the current shell session, it would open the default editor.
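The three scopes can be seen side by side with any child process; here bash -c plays the child (crontab just happened to be the example above, and EDITOR_DEMO is a made-up variable name):

```shell
EDITOR_DEMO=vim                       # local: only this shell session sees it
bash -c 'echo "child: $EDITOR_DEMO"'  # child prints an empty value

export EDITOR_DEMO=vim                # environment: children inherit it
bash -c 'echo "child: $EDITOR_DEMO"'  # child prints vim

EDITOR_DEMO=nano bash -c 'echo "child: $EDITOR_DEMO"'  # command scope: nano, for this call only
echo "$EDITOR_DEMO"                   # still vim in the current shell
```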
All three scopes are important and unique in their own way; we use them as needed. But don't you think juggling export versus no export gets pretty confusing? Well, yes, which is why we generally use declare to play around with variables.
Note: All predefined variables are in UPPERCASE
Declare 🤌
To remove the confusion around export versus no export, and to set properties on variables, the declare command is used. It is particularly useful for declaring variables with specific properties. The basic syntax is:
declare [options] variable=value
The options flag lets us set the property of the variable. By default, variables are stored as strings. The key options available are:
Option | Description |
---|---|
-i | Declares the variable as an integer. |
-a | Declares the variable as an array. |
-r | Makes the variable read-only. |
-x | Exports the variable to the environment. |
Let's use the options for further understanding
declare -i n=10
The -i flag declares the variable n as an integer and stores the value 10 in it. To make it read-only as well, just use -ir instead of -i. The declare command is certainly flexible; play around with the options for further clarity.
declare -a arr
arr[0]=10
arr[1]=97
Arrays are among the most used data structures, and you can simply create one with declare. To print any index value, do echo ${arr[1]}. In case you want an associative array, i.e. a key-value based array, use -A instead of -a. Remember, associative arrays need a newer shell (Bash 4 or later), so you may not get support in older versions.
Positional parameters can be referenced as $n

While writing scripts, we read the values passed in with $n, where n is the argument's position. We will get back to this for further clarity.
First Script ✨
Let's write our first shell script.
vim myscript
When we execute the command above, vim opens a new file named myscript for editing. By convention shell scripts carry a .sh extension, but the extension is optional; what matters is the script's content. To start typing in the vim editor, press 'i' to enter insert mode.
#!/bin/bash
firstname="$1"
lastname="$2"
echo First Name is "$firstname" and Surname is "$lastname"
To save this script, we simply get out of the vim editor. In case you have not used vim before, press the escape key and then type :wq. It will save the current state of the script and exit the editor.
The first line, #!/bin/bash, tells the terminal to use bash when executing our script. In case you want to use zsh, simply use #!/bin/zsh instead. This line is called the shebang, and it declares which interpreter runs the script. Let's change the file permission:
chmod +x myscript
As we discussed chmod before, I am confident you know what this means: we are granting ourselves permission to execute the shell script. Now, let's run it:
./myscript
Well, we are done executing our first shell script 🎉 We get the output First Name is <space> and Surname is <space>. But we expected a first name and surname instead of blank spaces, right? So, let's pass our values.
. But, well we expected firstname and surname instead of blank space right? So, let's pass our values.
./myscript Aniket Pal
And now we have our desired output, i.e. First Name is Aniket and Surname is Pal. We used $1 and $2 instead of writing variable names on the command line. We could have used:
./myscript firstname="Aniket" lastname="Pal"
But that creates a problem: remembering the exact variable names we used. Imagine a login script where you passed username and password as parameters; it would be difficult to recall the exact variable names you wrote while creating the script. Thus, in this case, we use $1, $2, and so on.
But there is another problem. Suppose you use a lot of positional parameters and don't remember the exact number you ended at; when you want to read the next one, you have to look it up. So, we use the shift keyword.
#!/bin/bash
firstname="$1"
shift
lastname="$1"
echo First Name is "$firstname" and Surname is "$lastname"
So, even when you have hundreds of arguments, you just use shift and then read $1 again. Isn't this amazing? Well, there is something cooler than this.
Why $1, not $0?
When we pass arguments to a script, they populate the positional variables, and $0 represents the script itself. For example:
./namescript.sh aniket alice bob
Arguments | Variable Number |
---|---|
./namescript.sh | $0 |
aniket | $1 |
alice | $2 |
bob | $3 |
In case you want the total number of arguments, use $#; to get the argument list as a single string, use $*; and to get the arguments as separate words (handy for looping), use "$@". For the above script:
Option | Output |
---|---|
$# | 3 |
$* | "aniket alice bob" |
$@ | ["aniket" "alice" "bob"] |
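To see all of these in one place, here's a tiny sketch script (argdemo.sh is just an example name):

```shell
#!/bin/bash
# argdemo.sh - print the special argument variables
echo "script: $0"
echo "first : $1"
echo "count : $#"
echo "list  : $*"

# "$@" expands to one word per argument, so each name stays intact
for name in "$@"; do
  echo "hello $name"
done
```

Running ./argdemo.sh aniket alice bob prints the script name, aniket as $1, a count of 3, the full list, and then a greeting per name.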
That's all for variables. Now, let's dive into Conditional, yes yes our regular if-else and case statements.
Conditionals 🤔
Just like in any programming language, conditionals are used to make decisions based on conditions. In a shell we have if-else statements and case statements. The basic syntax of an if-else statement is as follows:

if condition; then command-if-true; else command-if-false; fi
Wait, if you find it confusing I know it can be, let's write a simple shell command to create a directory named 'aniket'. If it doesn't exist it will print a custom message.
if mkdir "aniket"; then echo "created aniket"; else echo "come on use another name"; fi
Whenever we complete a statement on one line, we use ;. Run the above command twice: on the first go you will see "created aniket" (check with the ls command that the directory was created). When you run the same command again, it will display:
mkdir: aniket: File exists
come on use another name
So, we understood how to use if-else. If you want an else-if branch, just add elif and continue. But what if we wanted to test a condition expression? Then:
if [ condition ]; then
# Code to execute if condition is true
else
# Code to execute if condition is false
fi
If you look carefully, we write the condition inside square brackets, [...]. In shell scripting, [ (an alias of the test command) is used to evaluate conditional expressions and is a fundamental part of the if-else construct. Note that (...) would mean something different (a subshell), so for tests we stick with [...]. As we proceed you will get further clarity. Let's check with a simple odd/even detector.
if [ $((number % 2)) -eq 0 ];then echo "even"; else echo "odd"; fi
Since our code is growing, it's better to use a script; too many semicolons make one-liners untidy. If we wrote the above example as a script, it would be:
#!/bin/bash
number=7
if [ $((number % 2)) -eq 0 ]; then
echo "Even"
else
echo "Odd"
fi
💡 Just above we learned how to take variables as input. As a learning task, modify the above script to take number as an argument and check that it still works.
The [[ ]] double square brackets are an enhanced version of [ ] and provide additional features and improvements, such as pattern matching and safer handling of unquoted variables. They are often preferred for complex conditions. Let's see an example:
#!/bin/bash
age=25
if [[ $age -ge 18 && $age -le 30 ]]; then
echo "Person is between 18 and 30"
else
echo "Person is not between 18 and 30"
fi
Remember, originally only the test command was present in UNIX; [ was introduced later as a synonym for it, and [[...]] is a bash extension generally used for more complex conditions. For example:

if test "$age" = "10"; then ...

can now be written as:

if [ "$age" = "10" ]; then ...

Note the spaces: [ is a command, so it needs spaces around it and around the = operator.
💡 If you are still here with me, great job. Trust me, I am proud of you. Here's a bonus:

shellcheck scriptName.sh

ShellCheck is a utility that detects bugs and mistakes in shell scripts. It can save you hours of debugging.
Cases 🥺
Case statements are often cleaner and more readable than long chains of elif statements. The basic syntax is:
case $variable in
pattern1)
# Code to execute for pattern1
;;
pattern2)
# Code to execute for pattern2
;;
pattern3)
# Code to execute for pattern3
;;
*)
# Code to execute if none of the patterns match
;;
esac
Start with case and end with esac, and just a reminder: every block ends with ;;. $variable is the variable whose value is being tested, and pattern1, pattern2, pattern3 are the patterns to match against. Let's understand case statements further with a fruits example:
#!/bin/bash
fruit="apple"
case $fruit in
"apple")
echo "It's an apple."
;;
"banana")
echo "It's a banana."
;;
"orange")
echo "It's an orange."
;;
*)
echo "It's something else."
;;
esac
*) is a wildcard pattern that matches anything not matched by previous patterns. In this example, the script checks the value of the fruit variable and executes the corresponding block of code based on the matching pattern. If none of the patterns match, the code under *) is executed. The case statement is a powerful tool for handling multiple conditions in a concise and readable manner in shell scripts.
Looping Through Life 🤩
In shell scripting, for, while, and until loops are used for iterative execution of code. Here's an overview of each along with examples:
1️⃣ for loop
The for loop is used to iterate over a sequence of values, such as numbers or elements in an array.
#!/bin/bash
for ((i=0;i<5;i++))
do
echo $i
done
2️⃣ While Loop
The while loop executes a block of code as long as a specified condition is true
#!/bin/bash
i=10
while [ "$i" -gt 0 ]
do
echo $i
((i--))
done
3️⃣ Until Loop
The until loop is the mirror image of the while loop: it keeps executing the block until the condition becomes true, i.e. it loops while the condition is false.
#!/bin/bash
i=1
until ((i==0))
do
echo $i
((i--))
done
Just for fun, let's use the for loop for something very shell-specific.
for i in {1..5};do touch file$i.cpp; done
This particular command creates files in the current directory named file1.cpp, file2.cpp, ... file5.cpp.
💡 Now, as a task try to remove the files that were created using the for loop. Tip: Replace touch with rm.
These loops provide flexibility for repetitive tasks in shell scripts, and the choice between them depends on the specific requirements of the script. If you noticed, all three loops (for, while, and until) share the keywords do and done: when we start the loop body we write do, and when we are done looping we end with done. I am pretty sure you have used plenty of loops before, so rather than belabor each one, let's dive into functions.
Functions 📚
In shell scripting, functions are used to group a set of commands into a single unit that can be reused and called with a specific name. The basic syntax of the function is as follows,
functionName() {
# Code to be executed
# ...
# Optionally, set an exit status (0-255)
return 0
}
The function definition in shell looks quite similar to JavaScript, right? The () marks the name as a function, {} delimits the body, and return sets the function's exit status (a number from 0 to 255; for other values, echo the result instead). Let's write our first generic function, after which we will look at a more shell-specific one.
#!/bin/bash
addNumbers() {
sum=$(( $1 + $2 ))
echo "Sum: $sum"
}
addNumbers 5 7
Here, we define a function named addNumbers. As we learned, function parameters can be accessed via $1 and $2, so we read the arguments and add them. At the end, we call addNumbers with arguments 5 and 7.
unset -f functionName

deletes a function you have defined in the local scope of your terminal rather than in a script. To export an already-defined function from the terminal so that child shells can use it:

declare -xf functionName

With the -x flag we export it, and with -f we state that the name refers to a function. I told you the power of declare, remember? If not, scroll up a bit and revise.
Final Script 💎
Now, as promised, let's check out a more shell-specific example for further clarity by writing a script that checks the status of multiple servers using functions and parallel execution. Not only will we learn more about functions, we will also revise all the topics we have covered so far.
#!/bin/bash
checkServerStatus() {
local server=$1
local status=$(curl -sI "$server" | head -n 1 | cut -d' ' -f2)
if [ "$status" == "200" ]; then
echo "$server is up."
else
echo "$server is down."
fi
}
servers=("http://twitter.com" "http://google.com" "http://blahblah.in")
checkAllServersInParallel() {
for server in "${servers[@]}"; do
checkServerStatus "$server" &
done
wait
}
checkAllServersInParallel
The above function contains quite a few concepts, right? Such as using arrays, for loop, if-else, and understanding the scope of variables such as local. I will demystify the above script, for more clarity.
The checkServerStatus function takes a server URL as an argument and uses curl to check its HTTP status. An array, servers, contains the URLs of the servers to be checked. The checkAllServersInParallel function iterates through the array, calls checkServerStatus for each server in the background (&), and waits for all background jobs to finish (wait). If you have read a bit about how the web works, I am sure you understand why we used wait here. The script thus checks the status of multiple servers in parallel.
Although the script teaches us and reminds us of almost all the concepts, there is one particular line that might confuse you, right?
local status=$(curl -sI "$server" | head -n 1 | cut -d' ' -f2)
It was meant to confuse you and remind you there is a long way to go. This line chains three commands to extract the HTTP status code of a server's response: curl -sI fetches the headers of the specified server, head -n 1 selects the first line (the one containing the HTTP status), and cut -d' ' -f2 splits that line on spaces and extracts the second field, the status code. The result is stored in a variable named status, which lets the script judge the server's responsiveness.
As stated before, the purpose of this blog is to help beginners get started with shell scripting. Before I end, let me add a few concepts that can be handy in your shelling journey.
Restricted Shell ✋
When aiming for code portability across various Unix-like shells, incorporating POSIX features becomes essential, ensuring compatibility with older shells beyond Bash or Zsh. However, in organizational settings or secure environments, limiting user capabilities is crucial for security. System administrators achieve this through the use of a restricted shell. If you're on your personal computer and wish to verify whether your shell is restricted, you can check by running:
echo $0
This command displays the name of the current shell. If it's a restricted shell, it might be named something like rsh or rbash. Checking for these restrictions helps maintain a secure computing environment where executing arbitrary commands is limited. On bash, you can also query the restriction option directly:
shopt restricted_shell
Where, shopt
implies Shell Options. By default Restriction is off, to turn it on run:
rbash
The restricted_shell option is read-only and cannot be changed with the shopt command. Restriction typically prevents changing directories, accessing particular files, and more. It is generally used in large organizations, and routinely in banks, to let users do only as much as required.
With the help of shopt, we can also configure options for our shell. Say you want to change directory by typing just the directory name instead of cd dirName; simply run:
shopt -s autocd
It's always important to know how to get back, so if you want to unset the particular configuration do:
shopt -u autocd
There is an array of options to try, just type shopt and play around.
Taking Input 🧩
In shell scripting, capturing user input is achieved with the read utility, comparable to std::cin in C++. To take input, optionally with a prompt, read is the tool we are looking for.
read
Without specifying a variable, the input is stored in the default variable $REPLY. For a more customized approach, you can use options with read. For instance:
read var
Stores the value in the variable var. To capture sensitive information, like passwords, without displaying it on screen, use:
read -s var
Additionally, you can limit the input length with flags like -n10, restricting it to 10 characters. There are various options available to tailor the behavior of read based on specific needs. Experimenting with these options can provide you with versatile ways to handle user input in shell scripts.
Finally, when prompting users for specific input, you can enhance the user experience by using the -p flag followed by the prompt text. For example:
read -p "Enter Favorite Dish: " dish
This not only captures user input but also provides a clear prompt, improving the overall interaction in your shell script. Experimenting with these features allows you to create more user-friendly and interactive scripts.
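Since read normally waits on the keyboard, here is a non-interactive sketch that feeds it with a here-string instead (bash/zsh syntax; the dish is made up):

```shell
# <<< supplies the input that would otherwise come from the keyboard
read -p "Enter Favorite Dish: " dish <<< "Biryani"
echo "You picked: $dish"   # You picked: Biryani
```

Avoid echo "Biryani" | read dish for this: the pipe runs read in a subshell, so $dish would be empty afterwards.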
Outro 💚
Congratulations🥂 on embarking on your shell scripting adventure! We've covered a lot in this blog, from the basics of variables to the intricacies of case statements, loops, if-else conditions, and even crafting entire scripts. By now, you've probably realized that the world of shell scripting is as vast as the command line itself, and there's always more to learn.
Consider this blog your trusty launchpad, propelling you into the fascinating realm of automation and efficiency. We've just scratched the surface, and there's a galaxy of possibilities waiting for you to explore. Remember, Rome wasn't built in a day, and neither is a master shell scripter!
In our next installment, we'll dive into the mysterious world of scheduling cron tasks and delve deeper into the art of scripting. Get ready to take your scripting skills to the next level!
And for those thinking, "Hey, I need more of this wit and wisdom in my content," consider hiring me as your content freelancer. Let's script some success together!
If you run an organisation and want me to write or create video tutorials please do connect with me 🤝