As part of my code camp experience, we've been introduced to a lot of new terminology and CS concepts. The bulk of them have been primarily practical in nature: how to use Ruby to implement a search, how to connect models in ORMs, what a route actually is and how to configure it, and so on. Some of the more interesting theoretical concepts aren't discussed as routinely, but they're crucial for grasping the purpose behind any particular algorithm. One of those core computing concepts is data structures.
The concept of data structures is easy to grasp once you recognize how you already use them in your daily life. As humans, figuring out how to keep useful information around is a component of culture and of our ability to work together on larger projects.
Human Centered Examples:
Everyday examples of data structures abound; they're, very simply, the ways we've decided make the most sense to store a given type of information so we can recall or present it later. During medieval times, we used songs and poetry to store data: because most people couldn't read, encoding the information of the day in an auditory format made the most sense.
Today, we use dictionaries for words in new languages, and we store location information as GPS coordinates and present it on a map: a 2-D, visual data structure with distance and terrain information encoded in colors and topography, oriented on a grid with cardinal directions. If you wanted to keep track of your bookkeeping information, you might prefer a balance sheet or an Excel table. Programming conventions help us keep track of our code in an understandable and transferable way. Often the purpose of our programs is to track, analyze, and present huge amounts of data that very few humans would be interested in storing inside their heads.
Computer Centered Examples:
When working with particularly large data sets we turn to computers, and we've worked out ways to structure information that are better suited to how computers operate. Arrays, queues, stacks, linked lists, graphs, trees (and their specialized form, heaps), and hash tables are some of the structures we use to efficiently access information when we want it again.
One of the key data storage possibilities is the array. Arrays are the basis for just about every other data structure. An array is a contiguous chunk of memory with a fixed length. It keeps your information in order, and any position can be read instantly, but it's hard to insert or remove anything from the middle. And when you fill your array, adding more means finding a new, larger sequence of memory, copying your original information into it, and leaving extra room for future additions.
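Ruby's built-in Array hides all of this resizing from us, but a toy sketch of the grow-and-copy idea might look something like this (the class and method names here are my own invention, purely for illustration):

```ruby
# A toy fixed-capacity array that doubles its backing store when full.
# Ruby's real Array does something similar internally.
class GrowableArray
  def initialize(capacity = 4)
    @store = Array.new(capacity) # the fixed-length "chunk of memory"
    @size = 0
  end

  attr_reader :size

  def push(value)
    grow if @size == @store.length # array is full: move to a bigger chunk
    @store[@size] = value
    @size += 1
    self
  end

  def [](index)
    raise IndexError, "index #{index} out of bounds" if index >= @size
    @store[index] # ordered, instant access by position
  end

  private

  # Find a new, larger sequence of memory and copy the old contents over.
  def grow
    new_store = Array.new(@store.length * 2)
    @size.times { |i| new_store[i] = @store[i] }
    @store = new_store
  end
end
```

Pushing past the initial capacity triggers the copy, e.g. `a = GrowableArray.new(2)` followed by `6.times { |i| a.push(i * 10) }` grows the backing store twice along the way, and `a[3]` still reads back `30` in order.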
A more complex data structure is the hash table. Hash tables are the basis of databases. Their job is to take a key-value pair, pass the key through a hash function, and use the output to pick a precise spot for the value to live. When we need the information later, we pass the key back through the hash function and it points us to that same spot again.
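Here's a minimal sketch of that idea in Ruby, using an array of "buckets" and Ruby's built-in `Object#hash` as the hash function. The class is hypothetical and handles collisions by simple chaining; real hash tables and databases use far more sophisticated schemes:

```ruby
# A minimal hash table: the hash function maps a key to a bucket index,
# and looking a key up repeats the same computation to find the same spot.
class SimpleHashTable
  def initialize(num_buckets = 8)
    @buckets = Array.new(num_buckets) { [] }
  end

  # The "hash function" step: turn any key into a bucket index.
  def index_for(key)
    key.hash % @buckets.length
  end

  def []=(key, value)
    bucket = @buckets[index_for(key)]
    pair = bucket.find { |k, _| k == key }
    if pair
      pair[1] = value          # key already present: overwrite its value
    else
      bucket << [key, value]   # new key: store the pair in its bucket
    end
  end

  def [](key)
    # Re-hash the key to land on the exact same bucket as before.
    pair = @buckets[index_for(key)].find { |k, _| k == key }
    pair && pair[1]
  end
end
```

Because `index_for` is deterministic, `table["name"] = "Ada"` and a later `table["name"]` both land on the same bucket, which is what makes the lookup fast: no scanning through everything stored.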
So, that's a basic explanation of what data structures are. Understanding the nuances of how a particular structure works actually makes for pretty good reading and food for thought. I hope this makes the concept more approachable and fun to work with!
Leave a comment below on your favorite algo/structure pairs!