It was November 2019, and I was on a plane to another country, where I was about to move. This was something far outside my comfort zone, and I was a bit terrified.
I knew that I would have to deal with a stream of new challenges, and this made me a bit anxious.
I started reading a computer science book I had bought for the trip. Its title, “Algorithms to Live By”, intrigued me, so I dove into the introduction.
Suddenly it struck me: what if computer science was the solution to dealing with unknown practical challenges? What if it could be a medicine for our everyday anxieties?
Now that this idea has matured in my head, it is time to share the answer with you: it can.
Organizing your life, finding a parking spot, landing your ideal job, or meeting your next date may sound like all-consuming tasks. Struggling with some of these may even lead you to a therapist, who will suggest you find balance in your life. That advice helps, but an algorithm can go further and point to the exact numbers behind that balance.
Algorithms are heavily inspired by nature and the human brain, but here we suggest the opposite: let the algorithms inspire our decision frameworks. In this post, I will mostly be referring to examples from the aforementioned book. I will leave my findings for a future post.
Assume you live in a crowded city and are constantly searching for a parking spot in the streets, or looking to buy or rent a house in a tough real estate market. This can be rather stressful.
However, we have the 37% rule (from optimal stopping theory). Decide how much time you are willing to spend, or how many options you are willing to explore. Look at the first 37% of them without committing, just to gather data. Then take the leap on the first option that beats everything you saw during that exploration phase.
A more timely example: investing in cryptocurrencies, and specifically knowing when to sell. Spend 37% of your time gathering data and settling on a minimum price threshold, then sell as soon as the price rises above it.
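The rule is easy to sanity-check with a simulation. Below is a minimal sketch, where the candidate scores and the number of options are made up purely for illustration:

```python
import random

random.seed(42)

def thirty_seven_rule(scores, explore_frac=0.37):
    """Look at the first 37% without committing, then take the first
    option that beats everything seen during exploration."""
    cutoff = int(len(scores) * explore_frac)
    best_seen = max(scores[:cutoff], default=float("-inf"))
    for score in scores[cutoff:]:
        if score > best_seen:
            return score
    return scores[-1]  # ran out of options; forced to take the last one

# How often does the rule land on the single best of 100 options?
trials = 10_000
wins = 0
for _ in range(trials):
    scores = random.sample(range(1000), 100)  # 100 distinct option scores
    if thirty_seven_rule(scores) == max(scores):
        wins += 1
print(wins / trials)  # hovers around 0.37, as the theory predicts
```

That 37% success rate is exactly where the rule gets its name: no other cutoff fraction does better at picking the single best option.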
Have you wondered whether your job is the best you could get? Or your current relationship is the most suitable for you? Sometimes, this question forces us to look for better options. Sometimes, by fear or satisfaction, we choose to stay and support our current decisions.
What do algorithms suggest, though? Surprisingly, the one that points us in a better direction is traditionally used to help gamblers make better decisions. The Upper Confidence Bound algorithm has the following steps:
- Pick the option with the best expected value. In the beginning, you may have to choose intuitively.
- Compare the actual outcome with your expectations, and write these comparisons down over time.
- If the real outcome is consistently lower than you expected, move on to another option: the one with the second-best expected value.
- Repeat the process.
Should you quit your job? Well, if your expectations are consistently crushed, maybe you should.
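The steps above are an informal version of the textbook UCB1 variant, which adds an exploration bonus to each option's observed average. Here is a minimal sketch; the three "satisfaction rates" are made up, and in real life you would not know them in advance:

```python
import math
import random

random.seed(7)

# Hypothetical options (say, three jobs) with unknown true satisfaction rates.
true_rates = [0.3, 0.5, 0.8]

def try_option(arm):
    """One noisy experience: 1 if it went well, 0 otherwise."""
    return 1 if random.random() < true_rates[arm] else 0

counts = [0] * len(true_rates)   # how often we picked each option
totals = [0.0] * len(true_rates) # sum of outcomes per option

# Try each option once, then always pick the one with the highest
# upper confidence bound: observed mean + exploration bonus.
for arm in range(len(true_rates)):
    totals[arm] += try_option(arm)
    counts[arm] += 1

for t in range(len(true_rates), 2000):
    ucb = [totals[a] / counts[a] + math.sqrt(2 * math.log(t + 1) / counts[a])
           for a in range(len(true_rates))]
    arm = ucb.index(max(ucb))
    totals[arm] += try_option(arm)
    counts[arm] += 1

print(counts)  # the best option (index 2) should dominate the counts
```

The bonus term shrinks as an option gets tried more, so the algorithm keeps sampling uncertain options early on but settles on the genuinely best one over time: optimism in the face of uncertainty, made precise.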
Our computer's memory is composed of different layers (usually around six of them). Some are fast but small, others big but slow. The quickest layer is called the cache. Computers use a simple algorithm to decide what stays in the cache when it fills up: Least Recently Used (LRU). Whatever you used most recently stays at the top, and the item that has gone unused the longest is the first to be evicted.
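The policy is small enough to sketch with Python's `OrderedDict`. This is a toy version of the idea, not how hardware caches are actually wired:

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-size cache: on overflow, evict the least recently used item."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # drop the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")      # "a" is now the most recently used
cache.put("c", 3)   # evicts "b", the least recently used
print(list(cache.items))  # ['a', 'c']
```

The entire eviction decision is one line: throw out whatever you touched least recently.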
Our brains work similarly as well: if some information goes unused for a long time, we have a hard time remembering it. The same goes for our stuff. We also tend to lose items that we are not frequently using.
So if you have stuff you are not using, throw it away! Keep your desk stocked with the items you used most recently, or at least place them closest to you. It will still be a mess, but an organized mess, and one that makes finding things faster.
By the same logic, if you're preparing for an exam in the morning, read your notes right before you go to bed. The information will be there when you wake up.
You can forget the rest of your thoughts without regrets.
I am confident you are aware of the hype surrounding AI and Machine Learning. A common use of these algorithms is to make predictions after getting trained using related data.
There is a term, though, that gives machine learning engineers a chill: overfitting. It means that when a model fits its training data too closely, it loses the ability to generalize and make good predictions on cases it hasn't seen before. The same holds for our brains.
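A toy demonstration with made-up data: a "model" that memorizes its training set perfectly versus a deliberately simple one, both scored on fresh data drawn the same way:

```python
import random

random.seed(0)

# Made-up data: y is roughly 2*x plus noise.
train = [(x, 2 * x + random.uniform(-1, 1)) for x in range(100)]
test = [(x, 2 * x + random.uniform(-1, 1)) for x in range(100)]

# Overfit "model": memorize every training point exactly.
lookup = dict(train)
def memorizer(x):
    return lookup[x]

# Simple model: a single slope estimated from the training data.
slope = sum(y for _, y in train) / sum(x for x, _ in train)
def simple(x):
    return slope * x

def mse(model, data):
    """Mean squared error of a model on a dataset."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(mse(memorizer, train), mse(simple, train))  # memorizer is flawless on seen data
print(mse(memorizer, test), mse(simple, test))    # simple model typically wins on fresh data
```

The memorizer scores a perfect zero error on the data it has seen and then stumbles on new data, because it learned the noise along with the signal. The simple model only learned the trend, which is what actually generalizes.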
How to combat overfitting? Penalize complexity. If you can’t explain it simply, you don’t understand it well enough.
When you have high uncertainty and limited data, you should stop thinking early. Allowing more time can create more complexity and be counterproductive.
Or - in simple terms - don’t overthink!
The real world is shockingly complex. We often desire to explain it fully. Adding more parameters to a problem may make it more realistic, but if we add a lot of complexity, the problem may become unsolvable.
The traveling salesman problem is such an example: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?”
If you expand this problem to a whole country or a continent, it gets immensely complex.
In a situation like this, the solution is to relax your constraints. Allow the salesman to visit some cities twice or more, for instance. You'll end up with a good solution in a reasonable time, although not a perfect one.
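To make the trade-off concrete, here is a sketch on a tiny made-up instance, comparing exhaustive search for the perfect tour with a greedy nearest-neighbor shortcut, one common way of relaxing the demand for optimality:

```python
import math
import random
from itertools import permutations

random.seed(1)
N = 8  # small enough that brute force is still feasible
cities = [(random.random(), random.random()) for _ in range(N)]

def tour_length(order):
    """Total length of a round trip visiting cities in the given order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % N]])
               for i in range(N))

# Exact: try every permutation. Factorial time -- hopeless beyond ~15 cities.
best = min(permutations(range(N)), key=tour_length)

# Relaxed: always hop to the nearest unvisited city. Fast but not optimal.
def greedy(start=0):
    unvisited = set(range(N)) - {start}
    order = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: math.dist(cities[order[-1]], cities[c]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

print(tour_length(best), tour_length(greedy()))
```

The greedy tour is never shorter than the exact one, but it is computed in a blink, and in practice it usually lands within a modest factor of optimal. That is the bargain relaxation offers: trade a little quality for a lot of tractability.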
So, in many cases, you must relax your constraints, and not try to solve everything perfectly.
I have to warn you: taking the best actions does not guarantee good results. We can never control the outcome.
However, there is a great quote from the book to ease our frustration:
“To try and fail is at least to learn;
To fail to try is to suffer the inestimable
loss of what might have been.”
I hope this article helped you reconsider your views of computer science, and even provided a few helpful tips.