The Simulated Annealing algorithm is commonly used when we’re stuck optimizing with methods that get trapped in local minima or local maxima, such as the Hill-Climbing algorithm. Simulated Annealing gives us a better chance of finding the global maximum or global minimum.

# Why Simulated Annealing?

It’s called Simulated Annealing because it’s modeled after the real physical process of annealing something like a metal. When you heat a particular metal, there’s a lot of energy there, and things can move around quite freely. But over time, as the system cools down, it eventually settles into a final position.

We’re going to simulate that process: start with a high-temperature system in which things can move around quite freely, then decrease that temperature over time until we eventually settle on a final solution.

# How does it work?

In this Python code, we will have an algorithm to find the global minimum, but you can easily modify this to find the global maximum.

First, we have to determine how we will reduce the temperature on each iteration. In this example, we will start with a temperature of 90 degrees, and we will decrease the current temperature by 0.01 on each iteration (a linear schedule) until we reach the final temperature of 0.1 degrees.

```
initial_temp = 90
final_temp = .1
alpha = 0.01
current_temp = initial_temp
```
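With this linear schedule, the loop will run a fixed number of times. As a quick sanity check (not part of the original code), we can compute how many iterations that is:

```python
# The temperature starts at 90 and drops by 0.01 each iteration
# until it falls to 0.1, so the loop runs (90 - 0.1) / 0.01 times.
iterations = round((90 - 0.1) / 0.01)
print(iterations)  # 8990
```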

Then we will set the initial state and set it as the solution. You can set it up as a particular state or generate it randomly.

```
current_state = initial_state
solution = current_state
```

Now, we will repeat this process until the current temperature is less than the final temperature.

```
while current_temp > final_temp:
```

For each iteration, we will get a random neighbor of the current state (a state that we can reach from the current state).

```
neighbor = random.choice(get_neighbors(current_state))
```

Then we calculate the difference in cost between the current state and the neighbor.

```
cost_diff = get_cost(current_state) - get_cost(neighbor)
```

If the new solution is better, we will accept it and move to it.

```
if cost_diff > 0:
    current_state = neighbor
```

If the new solution is not better, we may still accept it, with a probability that is higher when the temperature is high. With this approach, we can accept a worse solution in order to avoid getting stuck in a local minimum, while favoring neighbors that are not that much worse than the current state. Note that `cost_diff` is negative here, so `e^(cost_diff / current_temp)` is less than 1 and shrinks as the temperature drops. We can determine acceptance with the following probability check:

```
if random.uniform(0, 1) < math.exp(cost_diff / current_temp):
    current_state = neighbor
```
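To get a feel for why high temperatures accept worse neighbors more readily, suppose a neighbor is worse by 5, so `cost_diff = -5` (the numbers here are made up purely for illustration):

```python
import math

# Acceptance probability e^(cost_diff / temp) for cost_diff = -5
# at a high, a medium, and a low temperature.
for temp in (80, 10, 1):
    print(temp, math.exp(-5 / temp))  # roughly 0.94, 0.61, 0.007
```

At 80 degrees the worse move is accepted almost every time; at 1 degree it is almost never accepted.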

The next step is to decrement the current temperature according to the `alpha` value.

```
current_temp -= alpha
```

So at the very end, we just return whatever the best solution happens to be.

This is the big picture for the Simulated Annealing algorithm: take the problem and keep generating random neighbors. We’ll always move to a neighbor if it’s better than our current state. But even if the neighbor is worse than our current state, we’ll sometimes move there, depending on the temperature and how much worse it is.

```
import random
import math


def simulated_annealing(initial_state):
    """Performs simulated annealing to find a solution"""
    initial_temp = 90
    final_temp = .1
    alpha = 0.01
    current_temp = initial_temp

    # Start by initializing the current state with the initial state
    current_state = initial_state
    solution = current_state

    while current_temp > final_temp:
        neighbor = random.choice(get_neighbors(current_state))

        # Check if neighbor is best so far
        cost_diff = get_cost(current_state) - get_cost(neighbor)

        # if the new solution is better, accept it
        if cost_diff > 0:
            current_state = neighbor
        # if the new solution is not better, accept it with a
        # probability of e^(cost_diff / current_temp)
        else:
            if random.uniform(0, 1) < math.exp(cost_diff / current_temp):
                current_state = neighbor
        # keep track of the best state seen so far
        if get_cost(current_state) < get_cost(solution):
            solution = current_state
        # decrement the temperature
        current_temp -= alpha

    return solution


def get_cost(state):
    """Calculates cost of the argument state for your solution."""
    raise NotImplementedError


def get_neighbors(state):
    """Returns neighbors of the argument state for your solution."""
    raise NotImplementedError
```
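As a usage sketch, here is the complete algorithm applied to a toy problem: minimizing f(x) = x² over the integers, whose global minimum is at x = 0. The quadratic cost function and the ±1 neighborhood are assumptions chosen purely for illustration:

```python
import random
import math


def get_cost(state):
    # Toy cost function: f(x) = x^2, minimized at x = 0
    return state ** 2


def get_neighbors(state):
    # Neighbors are one integer step to the left or right
    return [state - 1, state + 1]


def simulated_annealing(initial_state):
    """Performs simulated annealing to find a solution"""
    current_temp, final_temp, alpha = 90, .1, 0.01
    current_state = initial_state
    solution = current_state
    while current_temp > final_temp:
        neighbor = random.choice(get_neighbors(current_state))
        cost_diff = get_cost(current_state) - get_cost(neighbor)
        # accept better neighbors always, worse ones with
        # probability e^(cost_diff / current_temp)
        if cost_diff > 0 or random.uniform(0, 1) < math.exp(cost_diff / current_temp):
            current_state = neighbor
        if get_cost(current_state) < get_cost(solution):
            solution = current_state
        current_temp -= alpha
    return solution


random.seed(42)  # fixed seed so the run is reproducible
print(simulated_annealing(40))  # settles at (or very near) the global minimum 0
```

Even though the search starts at x = 40, the early high-temperature phase lets it wander freely, and as the temperature drops it converges toward 0.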

# Conclusion

As a result, while we try to find our way to the global maximum or global minimum, this process lets us dislodge ourselves whenever we get stuck at a local maximum or local minimum, so that we can keep exploring for the best solution. Then, as the temperature decreases, we eventually settle down, without moving around too much from the best thing we’ve found so far.
