Prerequisite
If you're reading this article right now, please consider reading our Part 01: Big-O Notation, Part 02: JavaScript Unique Parts, and Part 03: JavaScript Numbers through Part 18: Advanced Strings first.

Part 01: Big-O Notation. By the end of this chapter, you will understand how to analyze an implementation of an algorithm with respect to both time (execution time) and space (memory consumed).
Part 02: JavaScript Unique Parts. Big-O is important for analyzing and comparing the efficiencies of algorithms. The analysis of Big-O starts by looking at the code and applying the rules; applying the rules is necessary to simplify the Big-O notation, because a linear or quadratic rule alone is not enough.
Part 03: JavaScript Numbers. This part focuses on JavaScript number operations, number representation, Number objects, common number algorithms, and random number generation.
Part 04: JavaScript Strings. This part focuses on strings, the JavaScript String object, and the String object's built-in functions. You will learn how to access, compare, decompose, and search strings for commonly used real-life purposes. In addition, the chapter explores string encoding, decoding, encryption, and decryption.
Part 05: JavaScript Arrays. As a JavaScript developer, you will use the array often; it is the most commonly used data structure. Arrays in JavaScript come with a lot of built-in methods. By the end of this part, you will understand arrays and know how to choose the right method.
Part 06: JavaScript Objects. This part focuses on what JavaScript objects are, how they are declared, and how their properties can be changed. In addition, it covers how JavaScript classes are implemented using prototypal inheritance. This part will be short.
Part 07: JavaScript Memory Management. A variable takes up some memory. In C, the programmer allocates and deallocates memory manually. In contrast, modern JavaScript engines have garbage collectors that delete unused variables. However, there are unexpected pitfalls that developers can fall into. This part shows these pitfalls and presents techniques to help the garbage collector minimize memory problems.
Part 08: Recursion. This part introduces the concept of recursion and recursive algorithms (remember, they are different; we will discuss them later). We also discuss the definition of recursion and the fundamental rules for recursive algorithms.
Part 09: Sets. This part focuses on the concept of sets, from both the mathematical definition and the implementation. Common set operations, as well as their implementations, are covered in great detail.
Part 10: Searching and Sorting. This part focuses on searching and sorting arrays. By the end of it, you will understand how to use sorting and searching algorithms for arrays. This part is a bit complicated for beginners, so rely on the visual aids as much as possible.
Part 11: Hash Tables. A hash table is a fixed-size data structure in which the size is defined at the start. This part explains how hash tables work and the method of generating a unique key. By the end of it, you will understand various hashing techniques and know how to implement a hash table.
Part 12: Stacks and Queues. This part covers stacks and queues (pronounced "kyooz"); both are data structures used in the implementation of complex data structures. You'll learn what stacks and queues are, how they're used, when they're used, and how to implement them. Let's go!
Part 13: Linked Lists. A linked list is a data structure in which each node (or element) points to another node. Unlike arrays, which have a fixed size, a linked list is a dynamic data structure. By the end of this part, you will understand how to implement and work with linked lists. There are two types of linked lists: singly and doubly. Let's examine the singly linked list first. Let's go!
Part 14: Caching. Caching is the process of storing data in temporary memory so that it can be easily retrieved for later use if it is required again. As an example, a database keeps data cached to avoid re-reading the hard drive, and a web browser caches web images to avoid re-downloading them. In this part, two caching techniques are discussed: LFU and LRU caching.
Part 15: Trees. A general tree data structure is composed of nodes with children nodes. The top node is called the root node. This part explores different types of trees, such as binary trees, binary search trees, and self-balancing binary search trees. First, it covers what trees are and how they are structured. Then, it covers methods of traversing the tree data structure in detail. Finally, you will learn about binary search trees and self-balancing binary search trees to understand how to store searchable data.
Part 16: Heaps. A heap is an important data structure that returns the highest or lowest element in O(1) time. This part focuses on explaining how heaps are implemented as well as how to work with them. One example is heap sort, which is a sorting algorithm based on heaps.
Part 17: Graphs. In this part, you will learn graph basics, including fundamental terminology and graph types. It also covers working with these different graph types and methods of representing graphs in data structures. Finally, algorithms for traversing, searching, and sorting graphs are explored to solve problems such as finding the shortest path between two graph nodes.
Part 18: Advanced Strings. This part covers more complex string algorithms than the previous section. Now that you have heard of certain other data structures, they should be easier to comprehend. Specifically, this part focuses on string searching algorithms.
Part 19: Dynamic Programming. Dynamic programming involves breaking down problems into their subproblems. By solving the subproblems and saving those results in memory to access whenever a repeated subproblem needs to be solved, the algorithmic complexity decreases significantly. To explain dynamic programming, let's re-examine the Fibonacci sequence that was discussed in Part 8. Then this part covers the rules of dynamic programming and walks you through some examples to make the concepts more concrete.
Part 20: Bit Manipulation. Bit manipulation is an advanced topic that JavaScript developers typically do not need to know. However, you should learn a bit about bit manipulation if you want to implement high-performance server-side code. Understanding bit manipulation requires some knowledge of digital logic; any introductory course in discrete math or circuits would be helpful to understand these concepts.
Part 19: Dynamic Programming
Dynamic programming involves breaking down problems into their subproblems. By solving the subproblems and saving those results in memory to access whenever a repeated subproblem needs to be solved, the algorithmic complexity decreases significantly. To explain dynamic programming, let's re-examine the Fibonacci sequence that was discussed in Part 8. Then this part will cover the rules of dynamic programming and walk you through some examples to make the concepts more concrete.
Motivations for Dynamic Programming
The code for the Fibonacci sequence:
// Recursive Fibonacci Sequence
function getNthFibo(n) {
    if (n <= 1) {
        return n;
    } else {
        return getNthFibo(n - 1) + getNthFibo(n - 2);
    }
}
getNthFibo(3);
Recall that the recursive implementation of this algorithm is $O\lparen 2^{n}\rparen$. Upon closer examination, you will notice that much of the same computation is repeated:

Figure 19-1. Recursion tree for Fibonacci numbers
As shown in Figure 19-1, when getNthFibo is called for 6, the calculations for 4, 3, 2, and 1 are repeated multiple times. Knowing this, how can you make this algorithm more efficient?
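The repetition can be measured directly. The sketch below (the counter name callCount and the function name are illustrative, not from the text) counts how often the naive recursive function is invoked:

```javascript
// Count how many times the naive recursive function runs.
var callCount = 0;
function getNthFiboCounted(n) {
    callCount++;
    if (n <= 1) {
        return n;
    }
    return getNthFiboCounted(n - 1) + getNthFiboCounted(n - 2);
}
getNthFiboCounted(6); // Returns 8
callCount; // 25 invocations for n = 6, even though there are only 7 distinct subproblems (n = 0 to 6)
```

Those 18 redundant invocations are exactly the overlapping work that memoization removes.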
Using a hash table: once a Fibonacci number has been computed, it can be stored, as in the following implementation:
var cache = {};
function fiboBest(n) {
    if (n <= 1) {
        return n;
    }
    if (cache[n]) {
        return cache[n];
    }
    return (cache[n] = fiboBest(n - 1) + fiboBest(n - 2));
}
fiboBest(10); // Prints "55"
Calculating the Fibonacci sequence for 6 requires calculating the Fibonacci sequence for 4 and 5. Hence, the calculation for 5 overlaps with the calculation for 4. This is known as overlapping subproblems. The code above also relies on the fact that the optimal solution (best solution) to the problem contains optimal solutions to its subproblems.
Rules of Dynamic Programming
Dynamic programming (DP) is the method of storing values that were already calculated and using those values to avoid any recalculations. This method can be applied only to those problems with overlapping subproblems and optimal substructure.
Overlapping Subproblems
DP typically stores subproblem solutions in a hash table, an array, or a matrix; this is referred to as memoization (see Wikipedia). DP is useful for solving problems in which there are many repeated subproblems.
An example of this can be seen with the recursive Fibonacci method. Observe that some numbers, such as 3, are recalculated many times. A hash table can be used to store results to avoid any recalculations. Doing this reduces the time complexity from $O\lparen 2^{n}\rparen$ to just $O\lparen n\rparen$, which is an immense change.
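The same $O\lparen n\rparen$ bound can also be reached bottom-up, filling a table from the smallest subproblem upward instead of recursing. A sketch (the function name fiboBottomUp is illustrative):

```javascript
// Bottom-up (tabulation) Fibonacci: no recursion, no repeated work.
function fiboBottomUp(n) {
    if (n <= 1) {
        return n;
    }
    var table = [0, 1]; // base cases
    for (var i = 2; i <= n; i++) {
        table[i] = table[i - 1] + table[i - 2];
    }
    return table[n];
}
fiboBottomUp(10); // Prints "55"
```

Memoization (top-down) and tabulation (bottom-up) are the two standard faces of DP; both visit each subproblem once.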
Optimal Substructure
A problem has an optimal substructure when the optimal solution of the problem can be found by using the optimal solutions of its subproblems.
Note: For example, shortest-path-finding algorithms have optimal substructures. Consider finding the shortest path for traveling between cities by car. If the shortest route from Los Angeles to Vancouver passes through San Francisco and then Seattle, then the shortest route from San Francisco to Vancouver must pass through Seattle as well.
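This property can be checked concretely on a toy road map. In the sketch below, the graph, city names, and distances are made-up assumptions for illustration; every simple path is enumerated by brute force, and the shortest LA-to-Vancouver route turns out to contain the shortest SF-to-Vancouver route inside it:

```javascript
// Toy check of optimal substructure for shortest paths.
// The road distances here are invented for illustration.
var roads = {
    LA:        { SF: 380, Seattle: 1700 },
    SF:        { Seattle: 810, Vancouver: 1500 },
    Seattle:   { Vancouver: 140 },
    Vancouver: {}
};
// Enumerate every path from src to dst and return the cheapest one.
function shortestPath(src, dst) {
    if (src == dst) {
        return { cost: 0, path: [dst] };
    }
    var best = null;
    for (var next in roads[src]) {
        var sub = shortestPath(next, dst);
        if (sub && (best == null || roads[src][next] + sub.cost < best.cost)) {
            best = { cost: roads[src][next] + sub.cost, path: [src].concat(sub.path) };
        }
    }
    return best;
}
shortestPath("LA", "Vancouver").path; // ["LA", "SF", "Seattle", "Vancouver"]
shortestPath("SF", "Vancouver").path; // ["SF", "Seattle", "Vancouver"], the suffix of the route above
```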
Example: Ways to Cover Steps
Given a distance n, count the total number of ways to cover n (the number of steps, or distance) with steps of one, two, and three. For example, when n = 3, there are four ways, as shown here:
1. 1 Step, 1 Step, 1 Step
2. 1 Step, 2 Steps
3. 2 Steps, 1 Step
4. 3 Steps
Note: Look at the pattern above carefully; the order of the steps matters, so 1 Step, 2 Steps and 2 Steps, 1 Step count as different ways.
Hereโs the function for achieving the count:
function waysToCoverSteps(step) {
    if (step < 0) {
        return 0;
    }
    if (step == 0) {
        return 1;
    }
    return waysToCoverSteps(step - 1) + waysToCoverSteps(step - 2) + waysToCoverSteps(step - 3);
}
waysToCoverSteps(12);
Time Complexity: $O\lparen 3^{n}\rparen$
This recursive method has a large time complexity. To optimize the time complexity, simply cache the result and use it instead of recalculating the values:
// The cache must live outside the function so it persists
// across recursive calls (and it gets its own name to avoid
// colliding with the Fibonacci cache above):
var stepCache = {};
function waysToCoverStepsDP(step) {
    if (step < 0) {
        return 0;
    }
    if (step == 0) {
        return 1;
    }
    // Check if it exists in the cache:
    if (stepCache[step]) {
        return stepCache[step];
    }
    stepCache[step] = waysToCoverStepsDP(step - 1) + waysToCoverStepsDP(step - 2) + waysToCoverStepsDP(step - 3);
    return stepCache[step];
}
waysToCoverStepsDP(12);
Time Complexity: $O\lparen n\rparen$
This shows the power of dynamic programming. It improves time complexity immensely.
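The same count can also be produced bottom-up, which avoids the recursion entirely. A sketch (the function name waysToCoverStepsBottomUp is illustrative):

```javascript
// Bottom-up version: table[i] holds the number of ways to cover distance i.
function waysToCoverStepsBottomUp(step) {
    if (step < 0) {
        return 0;
    }
    var table = [1]; // one way to cover a distance of 0: take no steps
    for (var i = 1; i <= step; i++) {
        table[i] = table[i - 1]
            + (i >= 2 ? table[i - 2] : 0)
            + (i >= 3 ? table[i - 3] : 0);
    }
    return table[step];
}
waysToCoverStepsBottomUp(3); // Prints "4"
```

Each table entry is computed once from at most three earlier entries, so the time complexity is again $O\lparen n\rparen$.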
Classical Dynamic Programming Examples
This section will explore and solve some of the classical dynamic programming problems. The first to be explored is the knapsack problem (see Wikipedia).
The Knapsack Problem
The knapsack problem is as follows: Given n items with weights and values, put these items in a knapsack of a given capacity (w) to obtain the maximum total value in the knapsack.
Optimal Substructure
For every item in the array, one of the following can be observed:
1. The item is included in the optimal subset.
2. The item is not included in the optimal subset.
The maximum value must therefore be one of the following:
1. Including the nth item: the value of the nth item plus the maximum value obtained with n - 1 items and the remaining capacity, w minus the weight of the nth item (this works only if the weight of the nth item is not greater than w).
2. Excluding the nth item: the maximum value obtained with n - 1 items.
Naive Approach
The naive approach implements the described optimal substructure as shown here:
function knapsackNaive(index, weights, values, target) {
    var result = 0;
    if (index <= -1 || target <= 0) {
        result = 0;
    } else if (weights[index] > target) {
        result = knapsackNaive(index - 1, weights, values, target);
    } else {
        // Case 1: exclude the current item
        var current = knapsackNaive(index - 1, weights, values, target);
        // Case 2: include the current item
        var currentPlusOther = values[index] +
            knapsackNaive(index - 1, weights, values, target - weights[index]);
        result = Math.max(current, currentPlusOther);
    }
    return result;
}
var weights = [1, 2, 4, 2, 5],
    values = [5, 3, 5, 3, 2],
    target = 10;
knapsackNaive(4, weights, values, target);
Time Complexity: $O\lparen 2^{n}\rparen$

Figure 19-2. Recursion tree for the knapsack problem
Figure 19-2 shows the recursion tree for a knapsack capacity of 2 units and 3 items of 1-unit weight. As the figure shows, the function computes the same subproblems repeatedly and has an exponential time complexity. To optimize this, you can store the results, referenced via the index and the target weight w.
DP Approach
The DP implementation stores the result of the knapsack, using the array index and the target as a key into a JavaScript object for later retrieval. For recursive calls it uses the stored result, and this reduces the time complexity of the algorithm:
function knapsackDP(index, weights, values, target, matrixDP) {
    var result = 0;
    // DP part: return the stored result if it exists.
    // (The "-" separator keeps keys unambiguous, e.g. it
    // distinguishes index 1 / target 23 from index 12 / target 3.)
    if (matrixDP[index + "-" + target]) {
        return matrixDP[index + "-" + target];
    }
    if (index <= -1 || target <= 0) {
        result = 0;
    } else if (weights[index] > target) {
        result = knapsackDP(index - 1, weights, values, target, matrixDP);
    } else {
        var current = knapsackDP(index - 1, weights, values, target, matrixDP),
            currentPlusOther = values[index] +
                knapsackDP(index - 1, weights, values, target - weights[index], matrixDP);
        result = Math.max(current, currentPlusOther);
    }
    matrixDP[index + "-" + target] = result;
    return result;
}
var weights = [1, 2, 4, 2, 5],
    values = [5, 3, 5, 3, 2],
    target = 10;
knapsackDP(4, weights, values, target, {});
Time Complexity: $O\lparen n \times w\rparen$
Here, n is the number of items, and w is the capacity of the knapsack.
Space Complexity: $O\lparen n \times w\rparen$
The algorithm stores a cached result for each of the n × w index-and-target combinations inside matrixDP.
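For comparison, the same answer can be computed bottom-up with an explicit table instead of recursive memoization. A sketch (the name knapsackBottomUp is illustrative) using the same weights, values, and target as above:

```javascript
// Bottom-up 0/1 knapsack: table[i][c] is the best value achievable
// using the first i items with capacity c.
function knapsackBottomUp(weights, values, capacity) {
    var n = weights.length;
    var table = Array.from({ length: n + 1 }, function () {
        return Array(capacity + 1).fill(0);
    });
    for (var i = 1; i <= n; i++) {
        for (var c = 0; c <= capacity; c++) {
            table[i][c] = table[i - 1][c]; // exclude item i - 1
            if (weights[i - 1] <= c) {     // include it if it fits
                table[i][c] = Math.max(table[i][c],
                    values[i - 1] + table[i - 1][c - weights[i - 1]]);
            }
        }
    }
    return table[n][capacity];
}
knapsackBottomUp([1, 2, 4, 2, 5], [5, 3, 5, 3, 2], 10); // Prints "16"
```

The table makes the $O\lparen n \times w\rparen$ time and space bounds visible: one cell per item-capacity pair.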
Longest Common Subsequence
Given two sequences, find the length of the longest common subsequence, where a subsequence is defined as a sequence that appears in relative order but is not necessarily contiguous. For example, Edi, dis, iso, and so forth are subsequences of Edison. A string has $2^{n}$ possible subsequences, where $n$ is the length of the string.
Note: As a real-world example, consider a generalized computer science problem that appears in many domains, such as bioinformatics (DNA sequencing). This algorithm is also how the diff (see Wikipedia) functionality is implemented in version control and operating systems. Typically, diff is used to show the changes between two versions of the same file.
Naive Approach
Letting str1 be the first string of length m, str2 be the second string of length n, and LCS be the function, the naive approach can first consider the following pseudocode:
// 1. if the last characters of both sequences match (i.e., str1[m-1] == str2[n-1]):
// 2.     result = 1 + LCS(X[0:m-2], Y[0:n-2])
// 3. if the last characters of both sequences DO NOT match (i.e., str1[m-1] != str2[n-1]):
// 4.     result = Math.max(LCS(X[0:m-1], Y[0:n-2]), LCS(X[0:m-2], Y[0:n-1]))
With this structure in mind, the following can be implemented:
function LCSNaive(str1, str2, str1Length, str2Length) {
    if (str1Length == 0 || str2Length == 0) {
        return 0;
    }
    if (str1[str1Length - 1] == str2[str2Length - 1]) {
        return 1 + LCSNaive(str1, str2, str1Length - 1, str2Length - 1);
    } else {
        return Math.max(LCSNaive(str1, str2, str1Length, str2Length - 1),
            LCSNaive(str1, str2, str1Length - 1, str2Length));
    }
}
function LCSNaiveWrapper(str1, str2) {
    return LCSNaive(str1, str2, str1.length, str2.length);
}
LCSNaiveWrapper("AGGTAB", "GXTXAYB"); // Prints "4"

Figure 19-3. Recursion tree for longest common subsequence length
Time Complexity: $O\lparen 2^{n}\rparen$
Figure 19-3 shows the recursion tree for EDI and SON. As you can see, (ED, SON) is repeated.
DP Approach
The recursive structure described can be translated into a table where each row represents a character in str1 and each column represents a character in str2. Each item in the matrix at row i and column j represents LCS(str1[0:i], str2[0:j]):
function longestCommonSequenceLength(str1, str2) {
    // Each row must be a distinct array; passing an array to
    // Array.fill would make every row reference the same object.
    var matrix = Array.from({ length: str1.length + 1 }, function () {
            return Array(str2.length + 1).fill(0);
        }),
        rowLength = str1.length + 1,
        colLength = str2.length + 1;
    for (var row = 1; row < rowLength; row++) {
        for (var col = 1; col < colLength; col++) {
            var str1Char = str1.charAt(row - 1),
                str2Char = str2.charAt(col - 1);
            if (str1Char == str2Char) {
                matrix[row][col] = matrix[row - 1][col - 1] + 1;
            } else {
                // Subsequences need not be contiguous, so carry
                // over the best result found so far:
                matrix[row][col] = Math.max(matrix[row - 1][col], matrix[row][col - 1]);
            }
        }
    }
    return matrix[str1.length][str2.length];
}
longestCommonSequenceLength("abcd", "bc"); // Prints "2"
Time Complexity: $O\lparen m \times n\rparen$
Space Complexity: $O\lparen m \times n\rparen$
Note: Here, m is the length of str1, and n is the length of str2.
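The length alone is often not what you want; the subsequence itself can be recovered by filling the same kind of matrix and then walking back from the bottom-right corner. A sketch (the function name is illustrative, and it builds its own matrix so it stands alone):

```javascript
// Build the full LCS matrix, then backtrack to recover one subsequence.
function longestCommonSubsequence(str1, str2) {
    var m = str1.length, n = str2.length;
    var matrix = Array.from({ length: m + 1 }, function () {
        return Array(n + 1).fill(0);
    });
    for (var i = 1; i <= m; i++) {
        for (var j = 1; j <= n; j++) {
            if (str1.charAt(i - 1) == str2.charAt(j - 1)) {
                matrix[i][j] = matrix[i - 1][j - 1] + 1;
            } else {
                matrix[i][j] = Math.max(matrix[i - 1][j], matrix[i][j - 1]);
            }
        }
    }
    // Walk from the bottom-right corner back toward the top-left,
    // collecting matched characters.
    var result = "";
    i = m;
    j = n;
    while (i > 0 && j > 0) {
        if (str1.charAt(i - 1) == str2.charAt(j - 1)) {
            result = str1.charAt(i - 1) + result;
            i--;
            j--;
        } else if (matrix[i - 1][j] >= matrix[i][j - 1]) {
            i--;
        } else {
            j--;
        }
    }
    return result;
}
longestCommonSubsequence("AGGTAB", "GXTXAYB"); // Prints "GTAB"
```

The backtrack visits at most m + n cells, so recovery adds only $O\lparen m + n\rparen$ time on top of filling the matrix.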
Coin Change
Given a value n and an unlimited supply of each coin of different values, S = {S1, S2, ..., Sm}, of size M, how many ways can the change be made? Given N = 4, M = 3, and S = {1, 2, 3}, the answer is 4:
1. 1, 1, 1, 1
2. 1, 1, 2
3. 2, 2
4. 1, 3
Optimal Substructure
You can observe that every way of making change falls into one of two groups:
1. Solutions that do not use the Mth coin
2. Solutions that use at least one Mth coin
Given that coinChange(S, M, N) is a function to count the number of coin changes, it can be rewritten mathematically using the two observations:
coinChange(S, M, N) = coinChange(S, M - 1, N) + coinChange(S, M, N - Sm)
Naive Approach
The naive approach can implement the described algorithm using recursion, as shown here:
// RETURN THE COUNT OF WAYS we can sum "coinArr", which has...
// ... indices [0, ..., numCoins - 1]
function countCoinWays(coinArr, numCoins, coinValue) {
    if (coinValue == 0) {
        // If the value reached zero, the only solution...
        // ...is to not include any coin
        return 1;
    }
    if (coinValue < 0 || (numCoins <= 0 && coinValue >= 1)) {
        // A value less than 0 means no solution
        // No coins left but a coinValue left also means no solution
        return 0;
    }
    return countCoinWays(coinArr, numCoins - 1, coinValue) +
        countCoinWays(coinArr, numCoins, coinValue - coinArr[numCoins - 1]);
}
function countCoinWaysWrapper(coinArr, coinValue) {
    return countCoinWays(coinArr, coinArr.length, coinValue);
}
countCoinWaysWrapper([1, 2, 3], 4); // Prints "4"
Time Complexity: $O\lparen n^{m}\rparen$
Space Complexity: $O\lparen n\rparen$
Note: Here, m is the number of available types of coins, and n is the desired amount of change.
Overlapping Subproblems
You can see from the recursion tree below that there are lots of overlapping subproblems:

Figure 19-4. Recursion tree for coin change
Note: To solve this, a table (matrix) can be used to store already computed results.
DP Approach
The matrix has coinValue + 1 rows and numCoins columns, and the entry at row i and column j represents the number of ways to make a coinValue of i using the first j + 1 coins:
function countCoinWaysDP(coinArr, numCoins, coinValue) {
    // Create the matrix:
    var dpMatrix = [];
    for (var i = 0; i <= coinValue; i++) {
        dpMatrix[i] = [];
        for (var j = 0; j < numCoins; j++) {
            dpMatrix[i][j] = undefined;
        }
    }
    for (var i = 0; i < numCoins; i++) {
        dpMatrix[0][i] = 1;
    }
    for (var i = 1; i < coinValue + 1; i++) {
        for (var j = 0; j < numCoins; j++) {
            var temp1 = 0,
                temp2 = 0;
            if (i - coinArr[j] >= 0) {
                // Solutions including coinArr[j]
                temp1 = dpMatrix[i - coinArr[j]][j];
            }
            if (j >= 1) {
                // Solutions excluding coinArr[j]
                temp2 = dpMatrix[i][j - 1];
            }
            dpMatrix[i][j] = temp1 + temp2;
        }
    }
    return dpMatrix[coinValue][numCoins - 1];
}
function countCoinWaysDPWrapper(coinArr, coinValue) {
    return countCoinWaysDP(coinArr, coinArr.length, coinValue);
}
countCoinWaysDPWrapper([1, 2, 3], 4); // Prints "4"
Time Complexity: $O\lparen m \times n\rparen$
Space Complexity: $O\lparen m \times n\rparen$
Note: Here, m is the number of available types of coins, and n is the desired amount of change.
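The space can be reduced further: each entry depends only on an entry higher in the same column and one in the column to the left, so a single one-dimensional array of size n + 1 suffices. A sketch (the name countCoinWays1D is illustrative):

```javascript
// Space-optimized coin change: ways[v] = number of ways to make value v.
function countCoinWays1D(coinArr, coinValue) {
    var ways = Array(coinValue + 1).fill(0);
    ways[0] = 1; // one way to make 0: use no coins
    for (var j = 0; j < coinArr.length; j++) {
        // Processing one coin type at a time keeps combinations
        // unordered, so 1+3 and 3+1 are counted once.
        for (var v = coinArr[j]; v <= coinValue; v++) {
            ways[v] += ways[v - coinArr[j]];
        }
    }
    return ways[coinValue];
}
countCoinWays1D([1, 2, 3], 4); // Prints "4"
```

This keeps the $O\lparen m \times n\rparen$ time bound while cutting the space to $O\lparen n\rparen$.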
Edit (Levenshtein) Distance
The edit distance problem considers the following: Given a string (str1) of length m and another string (str2) of length n, what is the minimum number of edits needed to convert str1 into str2?
The valid operations are the following: insert, remove, and replace.
Optimal Substructure
If the characters of str1 and str2 are processed one by one from the end of each string, the following cases are possible:
1. The characters are the same: do nothing.
2. The characters are different: consider the cases recursively, namely insert (for m and n - 1), remove (for m - 1 and n), and replace (for m - 1 and n - 1).
Naive Approach
The naive approach can implement the described substructure recursively, as shown here:
function editDistanceRecursive(str1, str2, length1, length2) {
    // "str1" is empty. The only option is to insert all of "str2"
    if (length1 == 0) {
        return length2;
    }
    // "str2" is empty. The only option is to remove all of "str1"
    if (length2 == 0) {
        return length1;
    }
    // The last characters are the same:
    // "Ignore the last characters and count the remaining"
    if (str1[length1 - 1] == str2[length2 - 1]) {
        return editDistanceRecursive(str1, str2, length1 - 1, length2 - 1);
    }
    // The last characters are not the same:
    // "There are three operations: insert, remove, replace"
    return 1 + Math.min(
        // Insert:
        editDistanceRecursive(str1, str2, length1, length2 - 1),
        // Remove:
        editDistanceRecursive(str1, str2, length1 - 1, length2),
        // Replace:
        editDistanceRecursive(str1, str2, length1 - 1, length2 - 1)
    );
}
function editDistanceRecursiveWrapper(str1, str2) {
    return editDistanceRecursive(str1, str2, str1.length, str2.length);
}
editDistanceRecursiveWrapper("Edison", "Pebojot"); // Prints "6"
Time Complexity: $O\lparen 3^{m}\rparen$
Note: The time complexity of the naive solution is exponential, and the worst case occurs when no characters in the two strings match. This makes sense because each call makes three further calls (insert, remove, replace).

Figure 19-5. Recursion tree for edit distance
Again, you can see that the same problems are solved over and over again. This can be optimized by constructing a matrix that stores the already-computed results of subproblems.
DP Approach
The dynamic programming approach constructs a matrix with dimensions based on the lengths of str1 and str2. The base case is when i or j is equal to 0:
function editDistanceDP(str1, str2, length1, length2) {
    // Creating the matrix:
    var dpMatrix = [];
    for (var i = 0; i < length1 + 1; i++) {
        dpMatrix[i] = [];
        for (var j = 0; j < length2 + 1; j++) {
            dpMatrix[i][j] = undefined;
        }
    }
    for (var i = 0; i < length1 + 1; i++) {
        for (var j = 0; j < length2 + 1; j++) {
            if (i == 0) {
                // str1 is empty:
                // "Have to insert all the chars of str2"
                dpMatrix[i][j] = j;
            } else if (j == 0) {
                // str2 is empty:
                // "Have to remove all the chars of str1"
                dpMatrix[i][j] = i;
            } else if (str1[i - 1] == str2[j - 1]) {
                // If the characters are the same:
                // "No additional cost"
                dpMatrix[i][j] = dpMatrix[i - 1][j - 1];
            } else {
                var insertCost = dpMatrix[i][j - 1],
                    removeCost = dpMatrix[i - 1][j],
                    replaceCost = dpMatrix[i - 1][j - 1];
                dpMatrix[i][j] = 1 + Math.min(insertCost, removeCost, replaceCost);
            }
        }
    }
    return dpMatrix[length1][length2];
}
function editDistanceDPWrapper(str1, str2) {
    return editDistanceDP(str1, str2, str1.length, str2.length);
}
editDistanceDPWrapper("Edison", "Pebojot"); // Prints "6"
Time Complexity: $O\lparen m \times n\rparen$
Space Complexity: $O\lparen m \times n\rparen$
Note: Here, m is the length of str1, and n is the length of str2.
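Since each row of the matrix depends only on the row directly above it, the full matrix is not strictly necessary: two rolling rows reduce the space to $O\lparen n\rparen$ while keeping $O\lparen m \times n\rparen$ time. A sketch (the function name is illustrative):

```javascript
// Edit distance with two rolling rows instead of a full matrix.
function editDistanceTwoRows(str1, str2) {
    var prev = [], curr = [];
    for (var j = 0; j <= str2.length; j++) {
        prev[j] = j; // converting "" to str2[0:j] takes j inserts
    }
    for (var i = 1; i <= str1.length; i++) {
        curr[0] = i; // converting str1[0:i] to "" takes i removals
        for (var j = 1; j <= str2.length; j++) {
            if (str1[i - 1] == str2[j - 1]) {
                curr[j] = prev[j - 1];
            } else {
                curr[j] = 1 + Math.min(curr[j - 1], prev[j], prev[j - 1]);
            }
        }
        // Swap the rows for the next iteration:
        var swap = prev;
        prev = curr;
        curr = swap;
    }
    return prev[str2.length];
}
editDistanceTwoRows("Edison", "Pebojot"); // Prints "6"
```

The trade-off is that the full matrix is gone, so the actual edit operations can no longer be reconstructed by backtracking.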
Summary
Dynamic programming can be utilized to optimize an algorithm if the following conditions are satisfied:
1. Optimal substructure: The optimal solution to the problem contains optimal solutions to its subproblems.
2. Overlapping subproblems: The solutions for subproblems are needed multiple times.
To store the already computed solutions to a subproblem, a matrix or a hash table is typically used; this is because both provide $O\lparen 1\rparen$ lookup time. Doing this, the time complexity can be improved from exponential (e.g., $O\lparen 2^{n}\rparen$ ) to polynomial time (e.g., $O\lparen n^{2}\rparen$ ).
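The memoization pattern used throughout this part can be factored into a small reusable helper. A sketch (memoize is an illustrative name, not a built-in):

```javascript
// Wrap any single-argument function with a result cache.
function memoize(fn) {
    var cache = {};
    return function (n) {
        if (!(n in cache)) {
            cache[n] = fn(n);
        }
        return cache[n];
    };
}
// The recursive calls go through the memoized wrapper, so every
// subproblem is computed at most once.
var fibo = memoize(function (n) {
    return n <= 1 ? n : fibo(n - 1) + fibo(n - 2);
});
fibo(50); // Prints "12586269025"
```

Without the cache, fibo(50) would make on the order of $2^{50}$ recursive calls; with it, there are only 51 distinct subproblems.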