Simple structures and Basic Arrays

What is a data structure?

Let's get some ground rules together. This term, data structure, might seem a little abstract, a little non-specific, and usually in a programming course we'd be expected to nail that definition down. Well, what does that mean exactly? But for our purposes, abstract and non-specific is good: it's flexible, it's useful. Because a data structure is exactly what it sounds like. It's just some arrangement of data that we've built. Not one lone integer variable over here, a string over there, and a float down here, but some data we've gathered together into an orderly arrangement.

Why? Because it's useful, because it makes our lives easier to do this. The same way that if you're working on bills and taxes, you're likely to gather your paperwork together into stacks or folders, and then perhaps your folders into a filing cabinet, because it's easier to manage when you keep related information together. The same way that you would always write a recipe with the ingredients and the instructions together, because if that exact same information is split up and disconnected, it's kind of pointless. You'd end up with a single piece of paper that says four ounces of flour.

So we need data structures in our programs because we think this way as human beings. It's easy to forget this sometimes. We start learning about data structures and we're immediately surrounded with terms like associative arrays and binary search trees, and they sound completely detached from anything we'd ever do as a human being. But they're not; we naturally think in data structures. Okay, we don't use that term, but we think in collections of information. A recipe is an actual data structure, as is a shopping list.

A telephone directory, a dictionary, a thesaurus, an encyclopedia, a flight schedule: these are not haphazard. They have a structure, they have a format. So imagine a bank statement. We can hold that single concept in our head and we know it's a collection of information. It's an arrangement of data; it's not random. It's an intentional list of transactions with an order to it, and each transaction itself is an intentional arrangement of data: the date, the amount, who it's from or to.

And these have nothing to do with computers. We didn't need a computer to have a bank statement or a telephone directory or a shopping list; they all existed long before computers were a glint in Charles Babbage's eye. Now, if you're an object-oriented programmer, you may be thinking, well, isn't this what we do with classes and objects? I mean, we define these real-world objects in a program because we think this way, or at least we're supposed to. And yes, absolutely, and we'll get into that later, but objects are a type of data structure, just not the only type.

I purposefully use simple, real-world examples, because this term data structure does not and should not imply a level of complexity. Sure, we can have huge, complex data structures, but we need a lot of small, straightforward ones too. So, our working definition in this course is this: a data structure is just an intentional arrangement of data. That's it. It's a proudly non-specific definition and it covers a whole range of possibilities. There is, however, one line I am going to draw.

Our focus in this course is on data structures created and held in memory, in a running computer program. Yes, at some point you may want to take some or all of that data and persist it: save it out to a file on disk, or send it over the network to cloud storage or to a database. But that is not what we're talking about here. Sure, when you're working with databases, SQL Server, Oracle, MySQL, we have tables, we have rows and columns, and that does meet the definition of an intentional arrangement of data, but I trust you understand where I'm drawing the distinction.

Once that data leaves our program to go to a disk drive or a separate server, that is a different set of issues. So our attention here is on the intentional arrangement of data held in memory, inside a running application. With that simple and non-specific definition in place, let's take a look at some simple and non-specific data structures.

Describing simple data structures

On our first day, our first hour of programming, we'll be creating variables: strings, integers, floats, booleans, whatever you have. Now, you may be in a language where you don't need to give your variables a specific data type, but I assume you're familiar with the concept, and with the generic C-style syntax I'm using here. In this course, we are not concerned with syntax. We're working on ideas and concepts here, so I don't want you worrying about semicolons or case sensitivity, or whether this exact code would compile in Eclipse or Visual Studio or Xcode. That would be missing the point.

I'm just describing the idea of having a few variables, a few pieces of data. Because even in the simplest code, you start to find that several variables naturally belong together. One indication is you start to repeat groups of them. Now, sure, you could just write these nicely in adjacent lines in your code, even commenting these informal groups so they're easy to read. But there's no actual order or structure being imposed on this. Nothing that would stop me from writing three variables in one section, but putting four variables in the next section, or two in the next.
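Here's a minimal sketch of that kind of informal grouping, written as C++ but really just the generic C-style idea; the book variables are invented for illustration:

    #include <string>

    int main() {
        // Book #1 -- an informal group, held together only by a comment and proximity
        std::string title1 = "Moby-Dick";
        double price1 = 9.99;
        int pages1 = 720;

        // Book #2 -- nothing enforces that this group matches the first one
        std::string title2 = "Dracula";
        double price2 = 7.99;   // no pages2 here, and nothing complains
    }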

Nothing where the compiler would enforce particular names or data types. So, the most basic need for a data structure is when we just want to impose, to enforce, some kind of systematic organization on this. To intentionally group several variables together and treat this as one item. Now, a quick terminology sidebar. If I were teaching a generic computer science class, the most nonspecific term for the concept of intentionally grouping some values together would be a record. In computer science, a record is just a value that itself contains other values.

If we have repeating groups of information, multiple transactions for a bank statement, multiple employees for a phone directory, multiple books in a catalog, we can imagine each repeating item as one record. Each record represents a single transaction or a single book, whatever it is you're working with. So we would have multiple records, with each record itself containing multiple fields, a field being the term for an individual piece of information, a single piece of data inside a record. So, each transaction record might have a field for the date, a field for the amount, and a field for the payee.

Each book would have fields for title, price, and so on. Now, sidebar to the sidebar. If you've got a bit of a relational database background, you might be thinking, well, hang on Simon, aren't these things called rows and columns? Well, yes and no. Implemented in a table in a relational database they would be. But we're not in a relational database. We are talking about the generic programming concept. And that's where we have records containing fields. Now typically, a record is defined to have a fixed number of fields, and those fields would always appear in the same order.

Here's the thing. While you'll see the terms record and field used in computer science curricula when talking about this, which is why I wanted to cover them, in a lot of practical, day-to-day, in-the-trenches programming, we don't tend to use the terms record and field that much anymore, because they're a bit too generic. So, I'm not going to be using the terms record and field going forward in this course. And nor, for the few of you interested in such things, am I going to use the equivalent mathematical terminology, which would be the term tuple. Now, you may occasionally run into this term as well when reading about this.

A mathematical tuple is a grouped collection of elements, and that's all we're talking about here: grouping things together, giving them a structure. Think of the term triple for three of something, quadruple for four, quintuple for five, sextuple for six, septuple, octuple. That's where the term tuple comes from: a grouped sequence of elements. But a mathematical focus is not our focus here, and I will say the word tuple only once more in this entire course, in about five seconds. Enough hand-wavy abstract concepts.

Sidebar over. We will need a few of these terminology tangents in this course, but for now, back to specifics. How do we implement this idea of records and fields, or mathematical tuples (told you), in a programming language? How do we describe a grouped collection of related elements? Well, you might immediately be thinking about arrays, or defining classes and creating objects. But for the moment, we don't need any of that, not yet. We're first going to cover a simple data structure that predates object-oriented languages, and is still very useful now.

A C-style struct, up next.

Using C-style structs

Before we get to dealing with repeated groups of data, we just want to group some variables together and give that group a name. Now, the classic old-school way is using a C-style struct. A struct is a very basic data structure we've had in the C programming language for over 40 years, and by extension, it's also in languages like Objective-C; it's in C++; it's in C#. Now, if you haven't encountered structs before, I'm not using this as an informal phrase to mean some kind of data structure.

No, struct is a specific keyword in C; it is part of the language. Instead of creating these loose, informal groups of independent book variables, I could first define a book struct that contains the definition for what each book should consist of. And then I can create variables from that struct definition. In this case, creating a variable called first that isn't of type string or double or bool; its type is the book struct. This will now have its own self-contained set of data, its own nested set of variables, what are called the member variables of that struct.
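The on-screen code isn't captured in this transcript, but a minimal sketch of that book struct might look like this (the member names are my own illustration):

    #include <string>

    // Define the struct: what each book should consist of
    struct book {
        std::string title;
        double price;
        int pages;
    };

    int main() {
        book first;                  // a variable whose type is the book struct
        first.title = "Moby-Dick";   // dot syntax drills down to a member variable
        first.price = 9.99;
        first.pages = 720;
    }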

So I'm using dot syntax here to let me drill down inside the struct and set the individual member variables. Now, this is about as simple a data structure as we get in any real-world programming language. It really is just a way of taking variables and packaging them together. Now, immediately, a very common question for those of you in object-oriented languages, and I'm assuming that's most of you, is: well, this kind of sounds like a class. So, what's the difference between defining a struct and defining a class? Well, structs are far, far simpler.

Unlike classes and objects, C-style structs have no behavior; they have no methods; they are only a container for data. Simply a way to package several variables together, that's it. Also, structs don't need to be explicitly instantiated like objects do. We don't use the word new or alloc; we just make them. For those of you comfortable with the idea of references, or pointers, with items being passed around either by value or by reference: structs are value types, unlike objects. They and their contents are typically allocated directly on the stack, not as a reference to some other area of memory.

And of course, structs are from standard C, which isn't object-oriented. C has no classes, no inheritance, no polymorphism, so structs have none of these things. And because of all this, because of their simplicity, structs are sometimes referred to as plain old data structures, or PODS. But even in object-oriented languages, this idea of structs is still useful now, because there are times when all we want is a simple, bare-bones, plain old data structure. We don't need the extra weight that comes with a class. For example, say you're working with any kind of graphical element and you need a variable that represents a point on the screen.

Well, to hold a point is two numbers, an x and a y position. That is perfect for a struct. It allows us to provide a formalized structure for two values; we'll call that a point type. Then we create different variables of that type. We've got some formality about what data type they are. It lets us pass around that point variable as a single item, without having constructors, and methods, and properties, and getters and setters. Another example: perhaps to store color information, we just need a variable defined so that it will always hold values for red, green, blue, and alpha.

Again, this kind of thing would be perfect for a simple struct definition. Now, in many languages, you already have things like points and colors predefined as data types in your provided frameworks. But even in object-oriented languages, you'll often find they are defined as structs and not as classes. In Objective-C, for example, points are predefined as structs. In the .NET languages, color is a predefined struct type; it's not a class type, it's just a struct. So even in this object-oriented world, these 40-year-old data structures are still in use and still helpful.
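As a sketch, those point and color types might look like this; the exact names and field layouts vary by framework, so these are illustrative only:

    // A point type: just two numbers packaged together, no methods, no ceremony
    struct point {
        int x;
        int y;
    };

    // A color type: always red, green, blue, and alpha
    struct color {
        unsigned char red, green, blue, alpha;
    };

    int main() {
        point origin = { 0, 0 };              // no new, no alloc -- we just make one
        color opaqueRed = { 255, 0, 0, 255 };
    }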

So this takes us to our first language comparison slide. What is the support like for this data structure idea, the idea of structs, across the most common programming languages? Now, a word here. Neither you nor I want this course to be a continuous laundry list of how do we do this in Python, how do we do this in Java, how do we do this in C++, ad nauseam, because most of that wouldn't be relevant to any one person. So when I do a comparison, it'll just be a quick summary of this data structure across a few common general-purpose languages.

And I will leave the nuts and bolts of syntax and implementation details as an exercise for you, dear viewer. So, structs. We've talked about their existence in C. As a few examples: since Objective-C is built directly on top of C, structs exist exactly as in C, and they are used a lot in both the iOS and Cocoa frameworks. In C#, and by extension in other .NET languages, structs also exist. They are simple value types, but in .NET you can add simple methods and properties to a struct, though Microsoft do recommend that if you're going to add any significant functionality, you should probably make it a class, and I quite agree.

But once we get to Java, structs are not directly supported; there isn't a struct keyword. The closest equivalent in Java would probably be a simple, lightweight class with no methods and only public variables, and I'll leave it to you whether that offends your object-oriented sensibilities. Now, likewise, Python doesn't have structs. There are a couple of ways to emulate that behavior, but no direct equivalent. And in Ruby, structs do exist, although they are more of a simplified class rather than a pure data structure. But okay, this will do for our first language comparison.

We're just trying to get a feel for how this idea might exist and change across different languages. Because we are done now with this level of simplistic data structure, let's move on and start talking about collections, about having multiple, repeating items, beginning with the collection we should all be reasonably familiar with: arrays.

Basic arrays

The array is the most fundamental, the most commonly used data structure across all programming languages. Support for simple arrays is generally built directly into the core language itself, and by that, I mean we don't have to link to some external framework or library before we can make an array. Now, as the Python folks will know, Python is a slight exception. Python's built-in basic data collection type is the list rather than the array, and we'll get to talking about that shortly. But for our purposes in this section, it's fine just to assume that a list in Python is equivalent to a basic array in other languages.

I do expect that everybody watching this is familiar with the concept and/or syntax of an array in their language. But because, A, we're going to do quite a bit of comparison between arrays and other data structures, and B, there is some variation in the capabilities of what would be considered a standard array in different languages, it is worth a recap of the fundamentals. You know, there are specialized arrays in most languages, and by specialized I mean arrays that have useful behavior beyond the basics, like sorting and searching, inserting and deleting.

And we'll get to all of those in a second. Let us cover the most rudimentary, elementary, vital ideas of a basic array first. An array is an ordered collection of items: multiple independent values enclosed inside one named container. So the array is given a name; the individual elements inside it are not. Each separate element inside the array has an index, a number. And that index is not random. It is in order, it's in sequence, and this is what I mean by an ordered collection of items.

And this would be considered a one-dimensional or linear array. We can get to any element knowing just one number, the index. Now, here is what is usually, not always, but usually true about basic arrays in programming. Usually, it's a zero-based integer index. The index starts at zero and goes up one at a time. There are a few languages where that's not the case; in Lua, COBOL, and Smalltalk, arrays automatically start at 1. But for clarity, all our discussion in this course will assume a zero-based index.
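A quick sketch of that in C-style syntax:

    int main() {
        int values[5] = { 10, 20, 30, 40, 50 };   // one named container, five elements

        int first = values[0];   // the index starts at zero...
        int last  = values[4];   // ...so the final index is the length minus one
    }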

Next, the simplest arrays are fixed-size, also called immutable, meaning unchangeable, arrays. They can be created at any size: five elements, 500, 10,000. But once the array is created, you can't add or remove elements from it. Now, this is often absolutely fine. If you've got an array holding, say, the number of days across the different calendar months, it makes total sense for that to be a fixed size. But one of the first behaviors added to arrays in any language is the ability to dynamically add or remove elements while the program is running.

Sometimes that behavior is made available in the standard array in a language, and sometimes you have multiple different array types, depending on whether you need a fixed or resizable array. Next, simple arrays are typically restricted to a specific type of data. It's an array of integers, or an array of booleans, or an array of strings. And if you try and put the wrong data type in an array element, you'll get an error. Now, it is true that most object-oriented languages allow you to create arrays of just generic objects, meaning you could put any object in any element, but the more specific, the better.

I point out these restrictions because it would not be unusual to have somebody think, well, why would I ever choose a fixed-size array if my language has resizable arrays? Or, why would I use an array with a fixed data type when I can describe an array that takes any object? You know, just keep my options open. You would be forgiven for thinking these differences exist perhaps because the early programming languages had these kinds of very simplistic, limited arrays, but now we have more advanced languages with more features, and we might as well grab all those possible features wherever they're available, whether we need them or not.

It's a perfectly understandable train of thought, and it's totally wrong. There are excellent reasons why you would restrict yourself to arrays of fixed size and specific data types if you can. The reason is that, and this will prove true not just for arrays but for every other data structure we're covering, the more constraints you can put in place, the faster and smaller your data structure is able to be. So if I can say that I want an array of a fixed size and a fixed data type, that I want an array of, say, a hundred integers, then the compiler or runtime engine can allocate exactly the right amount of memory for that.

One hundred elements, four bytes each: we're done, we've figured that out. That would just be allocated as one contiguous area of memory, and all the elements will be adjacent to each other. Very efficient, very predictable, very fast. But if I'm writing code that effectively says, yeah, I want an array, it might be ten elements, maybe 100, maybe 100,000, and each element might be an integer or it might be a boolean or a string or an image object, then the compiler cannot allocate the ideal amount of space, and it must introduce overhead to support the idea of the array being resizable.
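To sketch that contrast in C++ terms, where std::vector is the usual resizable option:

    #include <vector>

    int fixed[100];            // a hundred ints, four bytes each: one contiguous
                               // block, fully known ahead of time

    std::vector<int> flexible; // resizable: extra bookkeeping, and it may
                               // reallocate and copy its contents as it grows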

Not only that, but you as a programmer will have to introduce logic to check what you're going to get whenever you access an element in that array. So flexibility introduces overhead. Flexibility might be required, sometimes absolutely necessary, but it comes at the expense of memory and performance. Sure, it isn't going to matter if you only ever have a handful of items. But when your arrays start to grow large, and/or they're volatile and change a lot, it can make a really big difference. But we're getting a little bit ahead of ourselves.

We've started with the classic one dimensional array, and that is the most common. But arrays can certainly get a little more complex than that. So let's talk about multi-dimensional arrays and get into the idea of sorting and searching.

Multidimensional arrays

So, the basic array is referred to as a one-dimensional array. Sometimes you'll see arrays represented vertically, sometimes horizontally; it's whatever is useful. But taking it one step further, we can have arrays with two dimensions, sometimes referred to as a matrix or a table, because this is, effectively, rows and columns of information. Here, any single array element is not accessed with just one index, one number, the way it would be in a one-dimensional array; we need two numbers to get to it. The first number identifies which row we're going to, and the second pulls out the specific column.

Having said that, rather than thinking of this as a real table, as rows and columns, understand that in many programming languages what you have, when you have a two dimensional array, is basically an array of arrays. And what I mean by that is, instead of having a standard, one dimensional array, where each element holds a single value, like an integer, now each element in this array itself contains another array. So we use two numbers. The first one is going to get to the right spot in the first dimension, in this case number two.

And the second one is going to get to the right spot in the second dimension, in this case number three. Now, two-dimensional arrays are helpful for many real-world programming problems. We can model real-world situations. For example, imagine you are creating a chess game, where you can have a 2D array to represent the real eight-by-eight chess board. Or, a slightly more data-oriented example: let's say we're writing a weather-tracking application. We could create a two-dimensional array where each row represents a day in the year, and each column represents a temperature reading at every hour of that day.
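A sketch of that weather example (the particular day and hour are arbitrary):

    double temps[365][24];   // one row per day of the year, one column per hourly reading

    int main() {
        double oneReading = temps[42][8];   // day 42 (zero-based), the 8 a.m. reading

        double sum = 0;
        for (int hour = 0; hour < 24; hour++)
            sum += temps[42][hour];         // all the values in one row...
        double dayAverage = sum / 24;       // ...give us that day's average
    }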

Now, this way, we could drill down to any particular reading at any particular hour of any particular day. But we could also do things like get the average temperature for one day by reading all of the values in one row. Or, alternatively, find the average temperature at, say, 8 a.m. by reading all the values in one column. But let's take this one step further, because we can have not just two-dimensional arrays but three-dimensional arrays and even more. So, here's an example. Take this idea of tracking temperature readings.

Currently, we have two dimensions of data. The first dimension is days. And the second dimension is hours. But then, let's decide we want to have this same data but for additional locations. One way we can do this is duplicating this entire two dimensional array. I can write code to create several more independent, two dimensional arrays, and juggle a bunch of those different array variables. Or, instead, I could take this same information and put it inside a larger three dimensional array, just having one array variable.

Now, one way to think of this is that we suddenly have a collection of two-dimensional arrays. So, if a two-dimensional array is an array of arrays, this three-dimensional one is an array of arrays of arrays. So now, in a three-dimensional array, we need three numbers to get to any one single element. The first number tells us which part of that first level we're going to, where in this case each one represents the two-dimensional array of a specific location.
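Continuing that sketch, with locations as the third dimension:

    // 5 locations, each holding its own days-by-hours table of readings
    double temps3d[5][365][24];

    int main() {
        // three numbers for one element: which location, which day, which hour
        double reading = temps3d[2][42][8];
    }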

And inside that, the second number tells us which day we're going to, and inside that, the third number tells us which hour in that day. Sidebar. When we start talking about three-dimensional or multidimensional arrays, it can sound more complicated than it really is. Take care that you don't conflate the idea of a three-dimensional data structure with anything to do with 3D physical space or a 3D game program or anything like that. We're talking about three dimensions of data, and that is much, much simpler than three dimensions of physical space.

Because when we use dimensions for data, it's really about grouping or nesting information. In this example, we simply had multiple locations. Inside each location, we have multiple days. Inside each day, we have multiple hours. That's it: that's three dimensions of data. Or imagine, and you may not have to imagine, a phone bill for a cell phone family plan. Well, that typically breaks down into, say, three or four different phone lines. Each of those contains two or three types of services (data, phone, text), and each of those contains its own set of repeating detail items.

And each of those contains things like the date, the time, the amount. That is multiple dimensions of data, at least three. And then take the idea of this entire phone bill and realize you get one of these packaged up every month; this entire multidimensional structure of data is repeated. That itself is another dimension. And I stress this point because I'll get programmers telling me they're fine with one-dimensional arrays, and maybe they get two, but beyond that they think it gets kind of weird and geeky and esoteric, and maybe you need some kind of special mental clockwork to deal with it all.

And you don't, and you're doing yourself a disservice to think that way. Because if you understand the idea of a phone bill for a cell phone family plan, you already understand the idea of a multidimensional data structure and why we need one. Okay, you might not have the syntax down cold, but syntax can go hang. We can deal with that later. It's the concept that comes first. Okay, these days, would I realistically write a program that would store cellphone bill data in a rudimentary basic array, even a multidimensional one? No.

There are other data structures that would be more useful, more convenient, that would take this idea and add features that let us sort, and search, and insert, and remove elements, and also be able to access any part of that data. Not just with a straight numeric index, 17, 2, 28, but maybe using strings or specific keys that have actual meaning to us within our program, rather than arbitrary numbers. But we've got one more concept to cover before we move on to some of those features.

Because while it would be convenient to take all our data and hope it would naturally fit into an exact grid, a nice tidy matrix of a specific height and width, well, of course, it won't, because life and our programs aren't that neat.

Jagged arrays

The typical two-dimensional array is sometimes referred to as a rectangular array, because it's a grid, a matrix, a table, always with the same number of elements in every row. And many languages have simplified syntax to create a multidimensional array that is a uniform size, for example, five rows with six columns each, allowing you to think about it that way, and using the two numbers just to get to each element in this uniform grid. But this isn't our only option. Imagine that we're writing an application for some local tourist attraction.

So we're going to have a two-dimensional array, an array of arrays, representing the number of tickets sold for every day in every month. The first dimension, 0 through 11, will represent January through December, and inside each of those we will have another array for each day in that month. So we begin with January, which has 31 days: we need 31 elements, so it's 0 through 30. Then February comes along, but it only has 28, or sometimes 29, days. So if we keep the array at a uniform size, we will have three unused elements.

March has 31, April only has 30, et cetera, et cetera. Now, it's certainly possible that you could represent this information with a two-dimensional rectangular array, where every month always has 31 elements, either leaving the irrelevant elements empty, or setting them to 0 or -99 or some other value that you can then respond to in the logic of your program. But that's not always desirable. First, it should offend your programming sensibilities that there are elements that represent impossibilities, like February the 30th and April the 31st.

But it is more than just programmer aesthetics. Because if I'm writing code to figure out average ticket sales for any month, I should be able to just grab a row of data, take all the elements in that row, add them up, and divide by the number of elements, without having to then add logic to figure out how many elements I should be ignoring depending on what index we're at and what year we're in. So for this, and many other situations like this, you might actually want an array of arrays where the internal arrays have different lengths. So this isn't a rectangular array anymore.

This would be referred to as a jagged array, for reasons that should be obvious. We have an array of arrays of different lengths. Whereas many languages do have that simplified syntax to create a rectangular multidimensional array, creating jagged arrays often takes a little more programming work up front to create the structure, because you typically need logic, shown below as a sketch, to construct those internal arrays independently, because they are of different sizes. That's usually worth the work, because then the data structure itself is coherent and has integrity to it.
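The on-screen pseudo-code isn't in the transcript; here's one way that construction might look in C++, using a vector of vectors as the jagged array:

    #include <vector>

    int main() {
        int daysInMonth[12] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };

        // An array of 12 inner arrays, each sized independently
        std::vector<std::vector<int>> ticketsSold(12);
        for (int month = 0; month < 12; month++)
            ticketsSold[month].resize(daysInMonth[month]);

        // Now ticketsSold[1] has exactly 28 elements, ticketsSold[0] has 31, etc.
    }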

And this is kind of a theme we will see repeated with the more complex data structures. But what we're doing is choosing to do a little more work up front to create a data structure with internal integrity, instead of having to add a bunch of logic later on to deal with one that doesn't.

Advanced Array Behavior

Resizable arrays

It's common with many languages that once a basic array is created, it cannot then be resized. You can create it with however many elements you need, and you can change the value of those elements, but you can't add new elements or remove elements from it. Now, there are exceptions. For example, the standard array in JavaScript is already resizable; the standard array in Ruby is resizable. But most languages provide some kind of resizable array behavior when you need it. The terms you might hear are that it's a resizable array, or a dynamic array, or a mutable, meaning changeable, array.

As opposed to an immutable, or unchangeable, one. Now, here's a couple of examples of this. In Java, the standard array is fixed-size and a fixed data type. So it's an array of integers or an array of booleans; Java being an object-oriented language, you can also create arrays of objects, like strings. But for a resizable version, Java offers a couple of classes, the best known being the ArrayList. So instead of creating an array, we create an ArrayList object. There's a different kind of syntax, as ArrayList is not baked into the core language itself.

It requires us to link to an external framework and instantiate this as a regular object. ArrayLists can be created totally empty, or given an initial capacity, even some initial values. But after this ArrayList is created, and I've called this object resizable, we can use methods to add to it, which we simply can't do with the standard array, and we also have methods to remove from it. For the equivalent example in Objective-C: well, Objective-C is built on top of C, so we do have the standard C-style array.

But in day-to-day usage, it's much more common for Objective-C developers to use the class called NSArray for dealing with arrays of objects. Now, if you're new to Objective-C, again, don't worry about the syntax here; that is not the point of this section. I'm simply using the shorthand that creates a fixed-size array here of three strings. Now, most of the common classes and behaviors in Objective-C programming have this NS prefix. It's a throwback to when Objective-C was used to develop the NeXTSTEP operating system that the Macintosh OS is built on.

So this NSArray, like the standard Java array, is also fixed-size. It's an immutable array. You can create it to hold as many objects as you need, but you can't add or remove once it's been created. But just as Java has the array and the ArrayList, in Objective-C we have NSArray and NSMutableArray, which behaves identically to NSArray but adds additional methods. And this is a model you'll see across several programming languages: data structures with what you might think of as slightly different behavior are actually implemented as completely separate classes.

Whenever we're adding new elements to a resizable array, we have a very important decision to make: are we adding this new element at the very end of the existing array, or are we adding it at the beginning or somewhere in the middle? This matters, because typically adding a new element at the end is easier, faster, and requires less work for that array to do. So if we have some kind of method or function that just allows us to add a value, this gets added right at the end, with an index number one higher than the existing final index number. But on the other hand, say we have that existing array and we want to add a new item somewhere else, at a specific position.

Let's say we're going to add the value 999 at position two. It will require things to be shuffled around. We'll have to move some of the elements, re-organize them, and re-number those positions in the array. This is possibly a handful of pieces of data, but it could be hundreds or thousands before we can fit the new one in.
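As a sketch of that difference in C++, where the vector is our resizable array:

    #include <vector>

    int main() {
        std::vector<int> numbers = { 5, 10, 15, 20, 25 };

        numbers.push_back(30);   // append at the end: cheap, nothing moves

        // insert 999 at position two: every element after it must shuffle up one
        numbers.insert(numbers.begin() + 2, 999);
    }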

It's certainly true that the resizable arrays we're talking about, like ArrayList in Java and NSMutableArray in Objective-C, or the built-in ones in Ruby or JavaScript, will automatically take care of this shuffling of items. But just because something is happening in the background doesn't mean you can ignore the fact that it has a performance impact, and the larger the array, the more items will need to be shuffled around and re-indexed. How this re-shuffling is actually implemented under the hood differs from language to language. If the array isn't large, some will attempt to reshuffle items in place, but others will just copy the entire contents of the old array into a brand-new array, re-organizing things as they go. Now, across languages, there are a few different terms for adding items to arrays.

And we often need to include the distinction of where we are putting this new item. So if we are appending an item, meaning we are adding it at the end of an existing array: in Java, the method is just add; in Objective-C, it's addObject. In JavaScript, the behavior is called push, and this is quite a common term for adding an item to the end of a data structure: we push the new item onto the end. In Ruby, we also push items onto the end. We're not really covering it here, but in C++, where the vector is our resizable array, the method is called push_back.

In Python, we're technically working with a data structure called a list, not an array, but the equivalent behavior is still to append to the end of that list. However, if we want to add this new item not right at the end, but at a specific position in the existing array, we're not pushing onto the end anymore, so we may need a different word. In Java, the method is still just add, but we'd also provide an index for what position in the array this new item should be added, so it can re-index the other items. In Objective-C, it's now insertObject:atIndex:, where we also need to provide that index parameter.

In Ruby, well, we're no longer pushing onto the end of the array, so we don't use push; we use insert to insert a new item into a specific position. And likewise, we also use insert in Python. Now, in JavaScript, we switch from push to the term splice. Splice lets us split an array at a specific position, inserting a new element, optionally removing some elements, and rejoining it together. Now, okay, we're in danger of a my-eyes-glaze-over moment here, so understand I'm not trying to have you memorize every keyword for doing this.

It's more to drive home that the same functionality exists in every language you'll encounter; it just may have a different name. When adding new items, it's usually a variant of add, insert, or push. When you're removing an item from an array, it's usually some variant of remove, delete, or pop. Pop is the flip side of push, meaning that we push an element onto the end of an array, and we pop an element off the end of the array. So several languages, JavaScript, Ruby, C++, Python, use pop.

Pop always refers to deleting the last, the final, element off an array. If you want to delete an item that's not the last element, you need some way of passing in the index number. So with Java, it's remove with an index parameter; with Objective-C, it's removeObjectAtIndex, and so on. Now, okay, I get it, folks. This might seem like overkill, and if this information was only relevant for working with arrays, I wouldn't go into it as much. But this is the beginning of understanding that with all the data structures we'll encounter, there's really only five things.

Five fundamental behaviors we might need to figure out, and that's how to access, how to insert, how to delete, how to find, and how to sort. So, how do we access, how do we get to the values in this data structure, whether that's getting to one specific element or wanting to iterate through all of them, to systematically loop through every element? Then there's how do we add a new item to this data structure, often including, as we've seen, how we would add an item at a specific position. The flip side of that: how do we delete or remove one, whether from the end or from a particular position?

Okay. How do we find or search that data structure to get a specific item when we don't know where it is, or even if it exists at all? And finally, can we sort the contents of the data structure, so it's in some kind of meaningful order, whether we want to sort it in place or we want another copy of it that's sorted? And these five things would cover 99% of everything you'll ever need to do with any data structure. But let me tell you, folks, we often won't get all five, and don't expect to. For example, many data structures don't support any kind of searching behavior.

It's just a big collection, a big bucket of stuff. If you need to find something, you just programmatically go through all of it yourself. And many don't provide any kind of sorting behavior. Others are naturally sorted and can keep themselves organized. We're about to talk about the basics of sorting and searching. But as I've gone through here with these resizable arrays, and as will prove to be true with many of the later data structures we've got coming up: to add a new item to a data structure will typically be some variant of the word add, insert, or push, and to pull an item out of a data structure, it will be some variant of the word remove, delete, or pop.

The five requirements of any data structure: access, insert, delete, find, and sort.

Sorting arrays

Arrays are ordered data; they have a natural sequence, which is of course the array index: the zero, one, two, three, and so on. But the contents of the array elements, whether those are integers or strings or anything else, aren't in any meaningful order unless you made them that way. And it's not unusual to want the option to sort the values in an array, whether that's numerically or alphabetically, descending or ascending. And, of course, we'd be reshuffling just the values around; the internal structure of the array and its index will always remain zero, one, two, three, etc.

So, another behavior made available in most languages is the ability to sort the values in a basic array. And you can often write a simple line of code, something like myArray.sort. Okay, it's not always a method on the array itself; sometimes it's a utility method or utility function, where you provide your array as a parameter. But either way, it's usually very simple to write. But this is the kind of code that, whenever you write it, your IDE should sound alarm bells, or flash warning signs at you, or at least play a concerned computer voice asking, Simon, are you sure you know what you're doing? Because this is one innocent-looking code statement that might be doing a trivial amount of work, if we just had six elements here, but it might be doing a colossal amount of work, and we can't tell from looking at the line alone.
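Here, for instance, is what that innocent line looks like in C++, where sort is a utility function and sorts in place:

    #include <algorithm>
    #include <vector>

    int main() {
        std::vector<int> myArray = { 42, 7, 19, 3, 88, 51 };

        // one innocent-looking statement: trivial for six elements,
        // colossal for six million
        std::sort(myArray.begin(), myArray.end());
    }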

We'd need to go deeper. How much and how often? How much data is this innocent-looking sort statement being asked to sort? Are we talking dozens, hundreds, thousands, tens of thousands of pieces of information? And how often are we doing this? Are we calling sort once at the start of the program, or inside a loop being called a hundred times a second? This is not the course where we debate the pros and cons of different sorting algorithms, and certainly not the course where we're going to write one; that's another course entirely. This is a pragmatic course.

If we are going to sort, and we probably will, then we will end up using the sort functionality that's already in our language, as it's typically very well optimized and battle-tested. But we need to understand the impact of doing this. And there are really two points to understand when we're sorting simple, or at least reasonably simple, arrays. Point one is that, internally, the built-in sorting functionality in many languages will look at how big your array is. Doing this in .NET, for example, will automatically switch between different sorting algorithms, like insertion sort, quicksort, and heapsort, based on the size of the array.

This is a good thing. And point two: there are two primary implementation styles when you sort an array. What I mean by that is that most languages will attempt to sort an existing array in place, but there are a few that, when you ask them to sort, will instead create a new, sorted copy of the original array. It is not that one of these is good and the other one is bad, but you certainly need to understand which one your language is doing. And whenever you sort an array, in fact, whenever you sort any data structure, your internal programmer should flinch; you should be wary.

Sorting is always computationally intensive, you might need to do it but we want to minimize it. And keeping conscious about how much data we have and how often we're asking to sort it might lead us into choosing different data structures as we learn more about the others.

Sorting arrays of custom objects

In object-oriented languages, you often have arrays of your own custom objects, not just arrays of simple numeric or even text values. By default, there's usually no problem creating an array of custom objects, but if you then ask that array to sort those objects using the built-in sort functionality in the language, it simply won't know how to do it. You need to provide a little more information before you can write the equivalent of myArray.sort and have that work. Because we might have, for example, an array of employee objects, with each one containing multiple properties, multiple pieces of data.

Like employee ID, first name, last name, hire date, department. But what does it mean to sort employee objects? If we want to sort these, should we just look inside each object and sort them by employee ID? Well, that might work, but if you're using this data structure to build an employee directory, you'd probably prefer to sort by the last name, ascending. And then, within that, you'd want to sort by the first name. And if two people had the same name, you might then want to order by the department or the hire date.

So you want to be able to control how these are sorted, using whatever you decide is meaningful in your definition of employee. Now, that might sound like you need to write your own sorting algorithm, but thankfully that's not the case. What you will need to do is provide a little bit of code, just a little bit of logic. It usually only needs a few lines, and it's typically called a comparator, or compare function, or compare method. And this is not sorting; this is comparing. There is a big difference. Sorting is hard; comparing is easy.

A comparator or compare function just needs to be able to take any two of your custom objects, employee a and employee b, let's say, and have a little bit of logic that says: is a less than b, or greater than b, or equal to b, with whatever that means for us? So this example would first compare the last name property of each object. If they're equal, it will then compare the first name. In my simplified example here, if the first name and the last name are equal, it will just say these objects are equal to each other; for our sorting purposes, it wouldn't matter what order they're in.
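A minimal sketch of that comparator in C++; the employee type and the sample names are invented for illustration:

    #include <algorithm>
    #include <string>
    #include <vector>

    struct employee {
        std::string firstName;
        std::string lastName;
    };

    // The comparator: should a come before b? We decide what that means.
    bool comesBefore(const employee& a, const employee& b) {
        if (a.lastName != b.lastName)
            return a.lastName < b.lastName;   // first compare last names
        return a.firstName < b.firstName;     // then first names; if both match,
                                              // this is false: they sort as equal
    }

    int main() {
        std::vector<employee> directory = {
            { "Grace", "Hopper" }, { "Alan", "Turing" }, { "Ada", "Lovelace" }
        };
        // We wrote the comparing; the built-in sort does the sorting.
        std::sort(directory.begin(), directory.end(), comesBefore);
    }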

But this little bit of logic, expressed in your language, would be enough to let any built-in sorting functionality in your language's arrays or other data structures sort arrays of your objects. Because if it's been told how to compare any two objects, it then knows how to go through and sort a thousand of them. Now, you'll find these all over the place. Comparators aren't just for arrays; they're useful in many other places, and we'll see this same idea used in other data structures.

Searching arrays

So, how do we find a particular item in a basic array? If I have an array of integers, or an array of strings, or an array of anything, and I just want to know if a specific value exists somewhere in this array. Well, in the simplest rudimentary arrays, like C-style arrays, you don't have any search functionality of any kind. You might have 1,000 elements, but there'll be no easy find or indexOf function that we can call just to see if a value exists. Now, we can certainly find out; we just might need to go through all the items ourselves.

So if I wanted to know if the value 99 exists somewhere in a basic array of integers, where any value could be in any position, I might use something like this pseudo-code. I'll set up an initial index value and set it to 0, and then I've got a while loop. I could write this as a for loop or a for-each or a do loop, whatever works. As long as i is less than the length of the array, we'll check the value at the current position and see if it's equal to 99. If it is, we'll return true. If not, we will add 1 to i and move on, and keep going around this loop until we either find that value and return true, or we get to the end of the loop and just return false.

Now, in this example, where I just want a true or false result (does this value exist somewhere in this array?), the best-case scenario we could hope for is that the first, or zeroth, element is the value we're looking for. We could immediately return true and be done; we wouldn't have to look any further. The worst case is that the value 99 does not appear anywhere in the array, but we would still have to check every single element, whether that's 100 elements or a million, to find that out.
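Expressed as real code rather than pseudo-code, in C++ it might look like this:

    #include <vector>

    // Linear search: does the target value exist anywhere in this array?
    bool exists(const std::vector<int>& items, int target) {
        int i = 0;                                 // set up an initial index of 0
        while (i < (int)items.size()) {            // as long as i is within the array
            if (items[i] == target) return true;   // found it: return true
            i = i + 1;                             // otherwise add 1 and keep going
        }
        return false;                              // got to the end: return false
    }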

So, this method is referred to as a linear search or a sequential search. This is an inelegant brute-force method. We just plan to go through everything, one at a time, from the beginning to the end. Linear searches are simple to understand and they are easy to write and they will work. They're just slow. And the more stuff you have, the slower they get. As the number of elements in the array grows, the time to search that array matches that growth exactly. If you're familiar with big-O notation or big-O complexity, perhaps you've watched my code efficiency course.

A linear search would be considered a big-O of n, or order n complexity. This is a linear time algorithm, meaning that this increases in a linear fashion along with the size of the input. What that means is, simply, if we double the items in the array, we double the search time. If we have 10 times the items, we have 10 times the search time. Because again, we have to assume the worst case. The value we're looking for does not exist or might be the very last element, but we still have to check everything to figure that out.

And as we'll see in a moment, this is the kind of searching that you may already be doing when you use built-in search functionality in an array, or indeed in any other data structure. If you have a simple array, whether it's numeric, text, or even your own objects, and the items could be in any order inside that array, then a linear search might be what you have to do. If there is no order, no predictable sequence to the values in the array and where they might be, then there may be no other option than to check all the things.

But if the data is ordered, if it has some kind of predictable sequence where the items are in strict ascending or descending order, there are much better options for searching than a linear search. We'll talk about things like binary searching as we go forward. So if searching for elements in an array, or indeed in any data structure, is something you're going to have to do, if that's important to you, then having some kind of order to those elements becomes more and more significant. Okay, you might think, well, let's just keep all our arrays sorted so they're easy to search.

Well, here's the problem. As we saw in the previous movie, asking a data structure to sort is computationally intensive; it's inherently demanding. And if your data is large or frequently changing, even more so. And you may sometimes settle for a slow search capability, because the only way of fixing that, of keeping everything sorted, will itself add such a performance hit that it's simply not worth it. And this is the inherent challenge of data structures: you cannot have a data structure that is equally good in all situations.

Searching arrays using a while loop: linear (sequential) search, O(n) complexity

The default search if there is no ordering in the data; it slows down as the amount of data grows

Using built-in search behavior

It's true that, as with most of the languages that begin with fixed-size arrays and then offer resizable behavior or add sort functionality, we will often find that functionality is provided that lets us search through an array for a particular value. Now, sometimes you just want to know, does a value exist? Yes or no, true or false. And sometimes you want to know, does that value exist, and if so, where is it in the array? Now, across languages, the most common term for the first piece of functionality, where we want just a yes or no result, is contains.

Being able to write something along the lines of myArray.contains(someValue), and just getting back a Boolean. But it is more typical that we don't just want to know if a value exists in the array; we want to know what position it's at, what index. And the most common term for this method or function is indexOf. Now, once again, I'm writing non-specific code here. This behavior might be part of the array object itself, so you could write something like myArray.indexOf, or in other languages, you may have to use a separate utility function or method that takes two parameters.

One is the array you're searching; the other is the value you're searching for. As you're expecting an integer result from this, the index position of whatever you're looking for, if the value isn't found in the array, what you'll typically get is a result like -1 to indicate that it was not found. But this is a very common piece of functionality. It's actually indexOf in Java; it's indexOfObject in Objective-C; it's indexOf in JavaScript; it's IndexOf in C# and Visual Basic; it's just index in Python; and it's index or find_index in Ruby.
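C++ doesn't name this indexOf, but the equivalent is a short wrapper around the standard find function; a sketch:

    #include <algorithm>
    #include <vector>

    // An indexOf equivalent: the position of value, or -1 when it's not found
    int indexOf(const std::vector<int>& items, int value) {
        auto it = std::find(items.begin(), items.end(), value);
        if (it == items.end()) return -1;         // the conventional "not found"
        return (int)(it - items.begin());
    }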

They do exactly the same thing. Now, as I'm showing here, you won't always find a contains equivalent. There's not always a verb just to get a Boolean true or false for whether an object exists. But of course, what you can do is just call indexOf and ignore what the actual result is, just figuring out whether that object existed somewhere in the array.

Because here's the thing. Whatever the syntax, whatever the language, any time you write a statement with the words indexOf or contains, it should be another one of those things that, like writing a sort statement, gives you pause for thought. It should give you an internal flinch response. Because this might be doing a trivial amount of work, but it might be doing a linear search of an entire million-item array. And just because you didn't have to write that linear search code, and you can write a small, simple statement like indexOf or contains, well, that doesn't mean you're doing anything efficiently. This could be terrible performance. So what's the alternative? Well, once again, there might not be one.

If you genuinely have an array of unstructured data, there may be no other option than a total and complete brute force linear search. If however, you can order the data in your data structure, then searching becomes immensely more efficient. Let's take a look at that.

Common names for built-in search behavior: contains (a yes/no result) and indexOf (the position, or -1 if not found)

Using binary searching

Linear searches are clear, they're understandable, they're easy to write, and they're slow. But if the values in your array are in ascending or descending order, you can do a binary search instead. It's much faster, and here's how it works. Imagine we have an array of integer values. Now, this could be strings, could be anything, as long as the values are ordered. And I want to know if the value 99 appears somewhere in this array. Again, with a linear search, I would begin at the first element and I would check every single value.

But a binary search is different. It begins not by looking at the initial element, but by getting the length of the array; in this case, it's 22 values. It divides the array in two, and it immediately jumps right to that point, right to the middle of the array, and compares the value at that position with the one we're looking for. In our case, what we're looking for is 99, and what we've just found at this midpoint is less than that. And what that means, because these things are in order, is I can now completely discard this element of the array, and all the elements below it.

So in one move, we have just discarded half of the array. We know these values are in sequence, so the value we're after cannot be in the lower half of the array; we don't need to look any further into it. So we then move on. We take the remaining half of the array and we divide it in half again, jumping directly to its midpoint. We check that against the result we're looking for. Well, this time, 108 is higher than 99, so our result is not this one, or anything above it. We can now ignore the upper section. And then we move on, continuing to divide in half every time.

We check the result, we discard the irrelevant half, we move on. So we either get a match, or we realize there is no match to be found. And this is a binary search. Here, the word binary has nothing to do with the 0s and 1s of binary computer code; it's binary simply meaning in two pieces. We are dividing the array into two pieces every time. This is also referred to as a half-interval search, or, more generally, a divide-and-conquer algorithm. And it is an incredibly efficient approach compared to linear searching, particularly as your arrays get larger.

You can work through arrays of hundreds or even thousands of elements with only a handful of checks. Sounds great. So, a common question: how can I do a binary search if my array is not ordered? Another simple answer: you can't. End of story. Binary searches can only be done on ordered data; that is their inherent constraint. Now, you will find binary search functionality already exists in almost every language. For example, Java offers binary search methods in its array utilities. .NET languages have binary search methods on the array classes.

C++ has binary search in the Standard Template Library. Unfortunately, JavaScript doesn't have any kind of built-in binary search abilities. We've got bsearch in Ruby. And while Objective-C doesn't have a method with the name binary search, it does support this. There is a version of the indexOfObject method, again, index of, used to find something, that takes an option telling it it's working with sorted data, a sorted range of data, in which case it will do a binary search and not a linear search.
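As a quick sketch of what using one of those built-in versions looks like, here's Java's Arrays.binarySearch in action, assuming the array has already been sorted:

    import java.util.Arrays;

    public class SearchDemo {
        public static void main(String[] args) {
            int[] values = { 3, 12, 27, 44, 63, 81, 99, 108, 154 }; // already sorted
            int index = Arrays.binarySearch(values, 99); // a binary search, not linear
            if (index >= 0) {
                System.out.println("Found 99 at index " + index);
            } else {
                // a negative result encodes where 99 *would* be inserted
                System.out.println("99 is not in the array");
            }
        }
    }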

Now, as you might imagine, if you feed an array that isn't actually sorted into any of these, expect some kind of error or no result. Now, this is the important thing to take away here. As a pragmatic, practical software developer, you don't want to write binary search algorithms. Okay, they're conceptually simple, but don't underestimate them; the devil is in the details. But seeing as this functionality almost certainly exists in whatever language you have, you should be able to recognize when to use it. You're thinking, hey, I've got some sorted data and I need to search it.

I might have just written a standard indexOf statement, and yes, that will work, but let me look at the documentation and make sure that isn't going to do an inefficient linear search through all the data when I could do a much faster binary search instead. Again, it's a performance saving that is there for the taking. Now, although binary search is great for sorted arrays, we will see it again as we go forward, because it's not just for arrays. And next, we're actually going to move away from arrays, and finally start exploring additional data structures, and see how they solve some of the problems that arrays have already presented us.

Binary search = half-interval search

Divide-and-conquer algorithm

Binary searches only work on ordered data

Working with Lists

Introduction to lists

We've encountered the word list a couple of times in passing. Once when talking about ArrayList objects in Java. And I also mentioned that the basic built-in collection in Python is called a list; it's not called an array. So as you might guess from that, a list is a different kind of data structure from an array. And lists are still a pretty simple data structure to think about, but there are things a list does better than an array and things it does worse. But here's our first problem. In a course like this that touches on several different programming languages and technologies, this term list isn't quite specific enough to be useful.

As we've seen, the implementation of array differs between languages, but with lists it's even more so. If a Python programmer talks about lists, she means something very specific, very exact. It's a fundamental data type in Python, like strings and numbers are. But in, say, Java, List is not a data type. It's not even a class. It's an interface, meaning an abstract definition. What that means is you don't create a list in Java, but you can create objects that support that behavior, like creating an ArrayList or a LinkedList.

And if an Objective-C programmer was talking about lists, they're probably writing this behavior themselves, because it's not provided in the language. There is no built-in NSList the way you have NSArray. And a Ruby programmer is unlikely to talk about using lists at all. So, how on earth do we deal with this situation in any kind of meaningful way? Well, understand there is a general theoretical programming concept of a list, and then there is the specific implementation of a list data structure in several languages, which may have taken this basic list idea and kind of run with it, adding a whole bunch of functionality, so that the lines have become a little bit blurred.

In many languages, lists can behave like arrays if you need them to. But let us begin with the basic programming concept of a list. And I'll make a couple of distinctions so we can tell the difference between the idea of a list and the actual implementation of a list in any particular language.

Understanding basic list implementations

Both arrays and lists are collections, and I'm using the word collection in its most general sense: arrays and lists are both ways to group multiple items together under one name, whether those items are integers, or strings, or custom objects. But the biggest difference between a simple array and a simple list is the idea of direct access versus sequential access. And this is all to do with the way that these data structures are stored in memory. Here's what I mean, if we have a simple array.

It's typically allocated in a contiguous area of memory. All the elements, whatever they are, integers, or doubles, or references to objects. They are next to each other. And because of the predictability of this structure, we can go directly to any specific element, just using the index. And it doesn't matter how big the array is, we can jump directly to position 10,000, just as easily as getting to position 0. Any element can be accessed as easily and quickly as any other element. We don't need to work our way through the entire array, unless we're doing something like a linear search, and this is what I mean by arrays support direct access to the individual elements.

Sidebar! Direct access is also sometimes called random access. I've never been a fan of this term, as random access seems to suggest we have no control over where we're going, that we're just shaking the magic eight ball and getting some random element. That's not what random access means. But when you see this term applied to a data structure, just think direct access. We can get to a specific element directly, without having to work our way forwards or backwards through the entire collection. Sidebar over.

That's arrays. So what about lists? The classic distinction, is that if arrays are about direct access, then lists are about sequential access. That a list is a collection of multiple elements, but its structure is not based on a strict numeric index like an array is. Instead, the list keeps track of a sequence of elements. Might be five elements in your list, might be ten, might be a thousand. But, these elements, whatever they are, could be stored anywhere in memory.

They do not need to be located next to each other the way that an array is typically allocated. Well, that just sounds like a bunch of separate objects, you might think. But wait, we're not done. See, unlike an array, a list doesn't have a numeric index. That's true. Instead, when you access the list, what you get is the first element, sometimes called the first node of the list. And a list node is a very simple wrapper object, often something as basic as a struct, that holds your element, your value, whatever object that is.

And it also contains a link to the next node, the second. In languages like Java and C#, this link would be an object reference. In C or C++, it would be a pointer, but you get some kind of reference to the next node. So when we read the first element, the first node of the list, we use that link to get us to the second node, wherever that's located in memory. We get its value, and the second node also has a link to the third node, and so on and so on, until we get to the final node in the list.

We reach a node that goes no further. Either the link to the next node is null, or it might link to what's called a terminator or sentinel node. And a sentinel node is really just a dummy node, a dummy object, that doesn't contain any element value. It's just there to indicate in no uncertain terms that you have now reached the end of the list. So, this is the idea of a list. More specifically, this is the concept of what's called a linked list. This is still ordered data. There is a specific sequence of nodes, where each node contains some kind of value, but also contains a link to the next node.
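As a rough sketch, a node for a singly linked list might look something like this in Java; the names here are just illustrative:

    // A minimal singly linked list node: a value plus a link to the next node.
    class Node<T> {
        T value;        // the element this node wraps
        Node<T> next;   // reference to the next node, or null at the end of the list

        Node(T value) {
            this.value = value;
        }
    }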

The big difference, number one, the biggest downside of a classic linked list like this, is we have to access the elements sequentially. Because of the way this is built, we cannot jump directly to element number five or ten or 1,000 without beginning at the first node and working our way through, following all the links from one to the next to the next. And this is sequential access. And you might think, well, hang on a second, Simon, I know I can also do sequential access with a regular array; after all, we could do a for loop or a for each and iterate through the entire array, just increasing the index one by one.

So, if arrays give us both direct access and sequential access, why would I ever pick this list thing that limits me to sequential access? Good question, and here is why: the big difference number two, the enormous advantage of a linked list. Because of the way it's built, adding and removing elements is much, much easier than with an array. See, in an array, where the items are taking up contiguous space in memory, as we've already seen, if I want to insert a new item at position 2, it requires all the other items to be shuffled around so it can keep its meaningful order and structure. And this can get very demanding, and it gets worse and worse to reshuffle items.

As your arrays get larger, even adding elements right at the end, although it's easier in an array, can still require reallocation of the entire array in order to always keep a contiguous area of memory. Compare that to a list. To add a new node at the start of this list, I would just create that new node wherever we have space in memory, point the list to it, telling it it's the first node, and point it to the previous first node. Everything is still in sequence, it's done, and no other nodes needed to be changed or shuffled around.

Same with the end. If I want to create a new node at the end, I build that new node, I point it to the terminator, and I point the previous last node to it, and we're done. Even deleting a node, even from the middle of a list, is also simple. If I wanted to remove object3 over here on the right-hand side, well, it just requires that object2, which is currently pointing to object3, changes its link to point to object4. We've then effectively taken a node out of the sequence. We can just go ahead and get rid of object3, and all the other nodes in the list, whether that was five or 5,000 or 500,000, are completely untouched.
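Here's a brief sketch of those two operations in Java, building on the Node sketch above. Notice that in both cases we only rewire a link or two; no existing nodes get shuffled around:

    // Add a new node at the start: no other node is touched.
    static <T> Node<T> addFirst(Node<T> head, T value) {
        Node<T> node = new Node<>(value);
        node.next = head;   // point the new node at the previous first node
        return node;        // the new node is now the head of the list
    }

    // Remove the node *after* prev: just rewire one link.
    static <T> void removeAfter(Node<T> prev) {
        if (prev != null && prev.next != null) {
            prev.next = prev.next.next;   // skip over the removed node
        }
    }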

So if you have very volatile, changeable data, the difference here can be huge because we don't need to rearrange existing elements. Adding elements to a linked list always takes the same amount of time, whether the list is small or the list is large. It is a fixed time process. But adding or removing elements in an array gets more and more inefficient the bigger the array gets. So our central challenge as we understand our data and what we need to do with it, is that arrays are really good at direct indexing and bad at adding and removing elements.

And lists are really good at adding and removing elements, and bad at direct indexing. Neither of these is good if you have to do a full linear search, although arrays are easier to sort and do a binary search on, if that's an option for you. Now, we're going to talk about the support for these ideas in various languages in just a moment, but we're not quite done yet, because we've talked about a linked list, but there's more than one kind of linked list. Let's see that next.

Using singly and doubly linked lists

We went from the generic idea of a list, which just means an ordered sequence of elements, into the slightly more specific idea of a linked list, being the way that it's typically implemented: the sequence is managed by each node having one link to the next node. Now, this is a linked list. But to be yet a little more specific, this is called a singly linked list, where we have a single link forward in each node. But we can also have a doubly linked list.

The difference here would be one more piece of information for each node. Instead of each node having a reference just to the next node, we add one more piece of data so that it also has a reference to the previous node as well. Now, to represent this visually can look a little intimidating, because I'm showing the idea that these objects can be located anywhere that's convenient in memory; they don't have to be next to each other. But each object only has two extra pieces of data: a reference to the next object and a reference to the previous object.

So this is a doubly linked list. It allows us to go forward and backward, to traverse the list in either direction. Now, that might not immediately seem that useful, but it makes several operations a little easier. For example, we talked about the idea of removing items from the middle of the list. With a doubly linked list, I always have a reference to both the previous node and the next node, so it's super simple to make any changes I need to; both removing and inserting are simpler with doubly linked lists. And as we'll see in a moment, when linked lists are implemented in languages, they are typically implemented as doubly linked lists.
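For instance, removing an interior node becomes just two assignments. A minimal sketch, assuming a node type with both next and prev references, and ignoring the first-node and last-node cases:

    // A doubly linked node: one extra reference compared to the singly linked version.
    class DNode<T> {
        T value;
        DNode<T> next;
        DNode<T> prev;
    }

    // Removing 'node' from the middle of the list: just rewire its two neighbors.
    static <T> void remove(DNode<T> node) {
        node.prev.next = node.next;   // the previous node skips forward over 'node'
        node.next.prev = node.prev;   // the next node points back past 'node'
        // assumes an interior node; real code would also handle the ends of the list
    }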

Now, one question might be, what should we do with the first and last nodes of the list? I've already talked about the idea that the final node, in this case object5, might have its next link set to either a null reference, saying we're not going anywhere after this, or it might point to a terminator or sentinel node to explicitly indicate we've hit the end of the list. Now, with a doubly linked list, this is also what you would do for the previous link on the first element. So object1's previous link also points to the sentinel or terminator.

However, there is another option for this. Rather than these starting and finishing nodes pointing to a terminator, instead you point the final node's next reference back to the first node, and the first node's previous reference to the final node. Now, this would be considered a circular linked list or, in this case, a circular doubly linked list. It's not typical, but it can be useful for certain problems. If we get to the end of the list and say next, we just start again at the beginning. So, we have singly linked lists, we have doubly linked lists, we have circular linked lists.

Let's see what kind of support exists for these across common languages.

List support across languages

I introduced this section by saying that the term list had some very different implications when it was used casually, but now we are in a better position to talk about those differences. First up, Java. Now, once again, in Java the term list does exist. It is part of the collections framework in the java.util package. But List is an interface, not a class, meaning it describes behavior that another class can implement. Now, one of these classes we've already seen: the ArrayList. We saw ArrayList used as a resizable array, so there's an interesting name.

Is this an array? Or is it a list? Well, it's kind of a little bit of both in terms of behavior. It is easy to iterate through without using specific indexes and that's a convenient behavior of a list. But really, this is an array. Internally, ArrayList is stored as an array, so it favors direct access and you would not expect great performance for insertions and deletions. No, for our discussions here the most important Java class would be LinkedList. You can create instances of LinkedList.

Now, two things to point out. If I come down and look at some of the basic description, we can see that this is a doubly-linked list implementation. Which is what we just covered. This is a good thing, so that insertions and removals should be as fast as possible. Now, here is another thing however. If I'm looking down a little further, we'll see that we do have the ability to index into this linked list. And what I mean by that, is we have methods like get, that just takes an index integer. So we can use a numeric index, just like with an array, to say we want to get to the fifth, or the 20th, or the 532nd element in this list.

But, hold your horses. Before you start thinking of this linked list as just like an array at the same time, it is really not. If I look a little deeper at some of the description information here, we'll see lines like this: operations that index into the list will traverse the list from the beginning or the end, whichever is closer to the specified index. What that effectively means is you can certainly write a line of code like myLinkedList.get(537).

But what that will do is the equivalent of going to the first entry in the list and doing 537 next operations. So you can use an index with the Java LinkedList, but it's very, very inefficient to do so. But LinkedList is the implementation of the doubly-linked list in Java. In C#, there is a LinkedList class in System.Collections.Generic. It's exactly what you might expect; it again is a doubly-linked list. In Objective-C, however, there is no built-in linked list.

There are some third-party implementations, but there is no direct equivalent in the provided frameworks. Likewise, Ruby does not have a built-in linked list type. It has arrays, and as we'll see, it also has things like hash tables and sets, but there's no LinkedList equivalent. Now, Python. I have mentioned the fundamental collection data type in Python is called a list. Well, you might be wondering, was that a singly-linked list, a doubly-linked list, a circular doubly-linked list? Well, no, it isn't any of them.

Python lists are not linked lists, they are arrays. And just to prove that, I'm jumping over to the Python documentation here, their design and history FAQ. There's an entry here talking about how lists are implemented, and they will even tell you: Python's lists are really variable-length arrays. So they might have the name list, but they're really arrays, which is why I brought them into our array discussion. Python does not have a built-in linked list implementation. Coming back to the language comparison, and just to finish up, C++.

The list container in the Standard Template Library is a doubly-linked list implementation. But here's the thing. If your chosen language does not have a specific linked list implementation, it's not the end of the world. In practical day-to-day programming, you need array structures far more often than the classic linked list data structure. And even in the cases where you do need to do a lot of insertions and deletions, which arrays certainly aren't very good at, we do have some other data structures coming up that might be even better.
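To tie that back to Java, here's a quick, hedged example of java.util.LinkedList being used for what it's good at, adding at the ends, while remembering from earlier that get quietly traverses the list:

    import java.util.LinkedList;

    public class LinkedListDemo {
        public static void main(String[] args) {
            LinkedList<String> list = new LinkedList<>();
            list.add("beta");          // append to the end
            list.addFirst("alpha");    // cheap: just rewires links at the head
            list.addLast("gamma");     // cheap: just rewires links at the tail

            // Legal, but it traverses node by node from the nearest end:
            String second = list.get(1);   // "beta"
            System.out.println(second);
        }
    }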

Using Stacks and Queues

Using stacks for last-in, first-out

In this section, we're going to talk about the data structures called stacks and queues. Just like the arrays and linked lists we've already talked about, stacks and queues are also collections. They are different ways we can hold multiple items. And when we say the words stack and queue in programming, they don't have some weird esoteric meaning. They mean exactly the same as if I used the word stack or queue in a conversation with my grandmother. Think of a stack of paperwork, a stack of books, a stack of dirty plates in a restaurant.

The idea of a stack is that when you add something to the stack, you add it to the top. When another dirty plate is brought into the kitchen, it's put on top of the stack. And when we take something from a stack, we take it from the top. We don't pull a dish from the middle of the stack, we don't pull it off the bottom. We always take it from the top. The last plate added will be the first plate removed. Last in, is first out. And this is exactly the concept of a stack in programming. We have a data structure of multiple elements where the last element added to the stack will be the first element removed.

This is a last in, first out, or LIFO, data structure. Now, unlike an array, we do not concern ourselves with a numeric index for these elements. We don't want to think about a numeric index. We don't care about anything except performing two basic operations: that we can add a new item onto the stack, and we can remove the top item off the stack. That's it. And the terms we use for this in almost every programming language are that we push and we pop. And we've already seen those words used with dynamic arrays.

So to push is to add an item to the end of a data structure, so we push a new element onto the top of the stack. And when we want to take an element from the stack, we pop it. And pop will return the top element. And it's important to keep in mind that when we pop, we remove an item. Popping is not just looking at the end. Pop will return the value of an element and delete that element from the top of the stack. So you might wonder, what about if we want to just read the top element, but not remove it? Well, most of the time when stacks are implemented, that is made possible too, and the term associated with this is usually peek.

That we can take a peek at the top element, and that'll give us the value, but it does not remove the element from the stack. And, these are the three fundamental behaviors of stack data structures. Push, pop, and, optionally peek. But, it's always last in, first out. And, here's the important idea to take from this, that working with a stack should be even simpler than working with arrays or linked lists because there is less you can do with a stack. This is an intentionally limited, an intentionally restricted data structure.

We don't ever insert elements in the middle or the beginning. We don't need to worry about maintaining next and previous links. We do not ever need to reshuffle items in their indexes. We don't care about an index at all. All we do is push and pop and maybe peek. And if you're trying to do anything else with this stack, you're using the wrong data structure. So over to language support, stacks are pretty well supported in just about every language. In Java for example, it's just a regular stack class that you can create.

It has just a few methods. We've got peek, pop, and push. There is a search, and there's a check to see if it's empty as well. Similar kinds of options in .NET: we have a Stack class, which has some of the basic built-in methods, but it also has Peek, Pop, and Push. If we jump over into Python, it doesn't have an explicit stack class, but there is a section even in the regular Python documentation on using lists as stacks. Instead of pushing and popping, we append and we pop. Now, Ruby doesn't have an explicit stack class either, but we could use the array to do this.

Because, as we've seen before, the array does have the push and pop methods, so we can use it for this. And in C++, stack is part of the Standard Template Library, again with just push and pop methods. If a language doesn't have an explicit stack type, like Objective-C, you can still use a dynamic array and just add and remove items from the end. This is why so many languages have those push and pop methods on their dynamic arrays, so you can just treat them as stacks when you need to.
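As a quick sketch using Java's Stack class mentioned above:

    import java.util.Stack;

    public class StackDemo {
        public static void main(String[] args) {
            Stack<String> plates = new Stack<>();
            plates.push("plate 1");   // add to the top
            plates.push("plate 2");
            plates.push("plate 3");

            System.out.println(plates.peek());   // "plate 3" - look, don't remove
            System.out.println(plates.pop());    // "plate 3" - returned AND removed
            System.out.println(plates.pop());    // "plate 2" - last in, first out
        }
    }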

Understanding abstract data types (ADTs)

When talking about stacks and queues, and even linked lists, we're moving into the realm of what are often referred to as abstract data types, ADTs. Now, this can be a bit of a vague, hand-wavy term, particularly because the word abstract has several meanings in programming. So, to understand abstract data types, first think of the idea of just data types: integer, Boolean, float, double. A data type doesn't describe a particular value, but it does describe what that value is allowed to be, what's permitted for it.

That an integer is a whole number with a specific allowable range of, say, negative 2.1 billion to positive 2.1 billion, but a Boolean can be true or false, et cetera, et cetera. And those data types, that formality about what your data can be and do, are part of the language itself. But we can take those very basic primitive constructs of the language and start to build more complex data structures on top of them, like stacks and queues and linked lists.

And realize that these kinds of data structures aren't representing things you would make for a specific application, like you might define an employee, or an airplane, or a product, or a widget. Stacks and queues and linked lists are much more reusable, much more abstract than that. We can refer to them as abstract data types. So a stack is an abstract data type. It has expected, defined behavior. If we have a stack, we have a last in, first out data structure that we can push and pop and peek.

That's what we expect a stack to be able to do. But the actual implementation of that stack is abstracted away. It's not that it's a secret, it's just we shouldn't have to care. Unlike when we talk about basic arrays, where we do need to think about things like contiguous memory, we're not really interested in that when working with a stack. How a stack is actually implemented is not that important. A stack could be, and often is, implemented behind the scenes using a dynamic array.

But it could be implemented with a linked list instead. It really doesn't matter. So I could have a Python programmer talking to a C# programmer, talking to a C++ programmer talking to a Java programmer, and they're all talking about this idea of a stack. And being on the same page about it. Without really caring one jot about how it's actually implemented in any particular language. So, one more distinction here for the object oriented folks among you. Now it might sound very similar, but an abstract data type is not the same thing as an abstract class.

An abstract class has a very specific meaning in object-oriented programming languages: a class you cannot instantiate, that you must inherit from. That is not what the term abstract data type means. For example, Java has a Stack class. It is a regular, concrete class. I can instantiate a new stack object and start pushing and popping into it, but a stack is also an abstract data type, meaning it's an implementation of this idea of having a last in, first out data structure, which we can push and pop and peek into.

So stack is a real concrete class and it's also an abstract data type. Okay, enough about stacks. Let's get into their flip side, the idea of a queue.

Using queues for first-in, first-out

The key difference between a stack and a queue: stacks are last in, first out, LIFO; queues are first in, first out, FIFO. Think of a line at the bank or the supermarket. We can add multiple people to the queue, but when we want to take someone from the queue, they should always be taken from the front of the queue. One gets taken from the front, the others shuffle up to the front of the queue, and the process continues. Now, queues are used a lot in programming. Think of sending jobs to a printer.

As the printer can only deal with one thing at a time, it has a built-in queue. You queue up multiple jobs for printing, and it will take those jobs in the order that they were submitted. First in will be first out. And queues are very commonly used in multithreading and concurrency situations, to keep track of what tasks are waiting to be performed and to make sure we take them in that order. Now, just as with the stack, the idea of a queue as an abstract data type is that we do not care about the actual implementation of this data structure behind the scenes.

Whether this is implemented as a linked list or a dynamic array or anything else, we shouldn't have to worry. All we want to be able to do is add an item to the end of the queue and remove an item from the front of the queue. And as with stacks, we should not even be thinking about numeric indexes. If you're asking the question, but how can I pull an item from the middle of the queue, you should be looking at a different data structure. Now, whereas the terminology for working with stacks has centralized on push, pop, and peek, the terminology for working with queues is a little more scattered.

As with arrays, we'll actually find several different words used for the idea of adding to a queue and removing from a queue. However, the key behavior is absolutely the same: all queues are a first in, first out data structure. As a handful of examples, we have the Queue class in C#. This is one of the simple ones available. The two methods that are most useful are Enqueue, to add an object to the end of the queue, and Dequeue, to remove and return the object at the beginning. Java is a little more involved.

Queue is an interface, just as List was, so we can't create a Queue object. Instead, there are multiple concrete classes that do support queue behavior, including ArrayBlockingQueue and ConcurrentLinkedQueue. Java has several specialized queue classes, and these are mainly for different multi-threading and concurrency situations. For simple queue behavior, the LinkedList class that we've already seen actually supports queue behavior; it can act like a queue. In Java, we use the words add and remove when working with queues.

Over to Python: it does have a Queue class, and that Queue class uses put and get for the idea of adding to and removing from the queue. Having said that, the Python class is kind of targeted towards working with threading, which is very common with queues. You can certainly use it for other needs, but you'll see a lot of terminology talking about thread synchronization. And it's the same with Ruby. Ruby does have a Queue class, but it's really just intended for synchronizing communication across threads. If you want standard queue behavior that's not targeted at multithreading, you can just use the standard Array class and the method push to add to the end of the array, which we've already seen.

But instead of using pop in Ruby, which would remove an item from the end of the array just like a stack, we could use the method shift, and shift would return and remove the first element from the array, effectively from the start of the queue, and it would shift all the others down by one. Now, in Objective-C, there also isn't a built-in queue data structure, though you could certainly use the NSMutableArray for this, just adding only to the end and removing only from the front. And in C++, there is a queue container in the Standard Template Library, where we use the methods push and pop; push adds to the back of the queue, and pop removes from the front.
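As a brief sketch of the Java route just described, using LinkedList through the Queue interface with add and remove:

    import java.util.LinkedList;
    import java.util.Queue;

    public class QueueDemo {
        public static void main(String[] args) {
            Queue<String> jobs = new LinkedList<>();  // LinkedList implements Queue
            jobs.add("print report");     // join the back of the queue
            jobs.add("print invoice");
            jobs.add("print letter");

            System.out.println(jobs.remove());  // "print report" - first in, first out
            System.out.println(jobs.remove());  // "print invoice"
        }
    }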

And this gets us started, but we're not quite done with stacks and queues just yet. We have a couple of specialized options to cover.

Priority queues and deques

Some languages offer a version of a queue called a priority queue. This allows you to insert new elements into the queue that contain a priority, which could be something as simple as a number, so that new items can be added to the queue ahead of others already there. Now, if you add multiple items that have the same priority, they will queue as normal, in a first in, first out order. If something comes along with a higher priority, then it will go ahead of them in the queue, and it will be removed from the queue before them. Anything with a lower priority just queues up as normal behind them.

Now, because it's up to you what priority means, you typically have to implement a comparator or a compare function, as when sorting arrays, so that you provide your own little bit of logic for what should be considered a higher or lower or equal priority object compared to another. Now, priority queues don't have the same kind of language support as regular queues. Java has a PriorityQueue class. C++ has a priority_queue container. Both of these require you to provide your own comparator. A priority queue, being an abstract data type, can be implemented in multiple ways.
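In Java, for instance, that might look like the sketch below. The Job type and its urgency field are made up for illustration; the comparator is where you supply your own definition of priority:

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class PriorityDemo {
        // A made-up element type: lower urgency number = higher priority.
        record Job(String name, int urgency) {}

        public static void main(String[] args) {
            PriorityQueue<Job> queue =
                new PriorityQueue<>(Comparator.comparingInt(Job::urgency));
            queue.add(new Job("routine backup", 5));
            queue.add(new Job("server on fire", 1));
            queue.add(new Job("weekly report", 3));

            // Note: Java's PriorityQueue makes no FIFO guarantee among equal priorities.
            System.out.println(queue.poll().name());  // "server on fire" jumps the line
        }
    }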

In Objective-C, you can get priority queue behavior using a built-in data structure called a binary heap, but that's getting ahead of ourselves; we'll get to heaps a little later on. But as well as priority queues, you may also run into this term: D-E-Q-U-E. It's pronounced deck, and it means double-ended queue. A double-ended queue is basically like having a stack and a queue at the same time. We have a collection of items, and we can add new items to it, but we can choose to add to either end, and we can remove either from the front or from the rear.

Both removing from and adding to either end, behaving either like a stack or like a queue. The restriction is we can't remove from anywhere else in the collection. Now, seeing this term deque can be a little confusing, because you'll also encounter the term dequeue, and these look very similar. Dequeue, D-E-Q-U-E-U-E, is a verb, an operation, a method: we can enqueue and dequeue an element from a queue. In C#, for example, Dequeue is the method to do that. Whereas deque, D-E-Q-U-E, is a noun.

A deque is a specialized kind of queue; it's a double-ended queue. So, language support for this: Java, as well as having a Queue interface, also has a Deque interface, and LinkedList supports that Deque interface. Notice that LinkedList is turning out to be a very flexible class indeed in Java: it can behave like a linked list, it can behave like a queue, and it can behave like a double-ended queue. In C++, there is a deque container type with methods like push_back, push_front, pop_back, and pop_front, so you can choose to add and remove from either end.

Python also has a deque class for this. Even though you could use a Python list, the deque is optimized for this kind of usage. Now, Ruby doesn't have a built-in explicit deque, and neither does .NET, and although you can provide equivalent behavior with a linked list or even a dynamic array, most of them will support the idea of adding specifically as the first or last element and removing from either the front or the end. But just be conscious of how large that gets, as continuously shuffling elements around in the array is not a wonderfully efficient task.
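As a quick sketch of double-ended behavior in Java, using the ArrayDeque class, which implements the Deque interface:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class DequeDemo {
        public static void main(String[] args) {
            Deque<String> deque = new ArrayDeque<>();
            deque.addFirst("b");    // add at the front
            deque.addLast("c");     // add at the rear
            deque.addFirst("a");    // deque is now: a, b, c

            System.out.println(deque.pollFirst());  // "a" - remove from the front
            System.out.println(deque.pollLast());   // "c" - remove from the rear
        }
    }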

Alright. Priority queues and deques are certainly not things you will need on a daily basis, but it's useful to know they're available should you need them, and in many cases they are made available as optimized and efficient data structures already there in the language.

Hash-Based Data Structures

Using associative arrays

In the data structures we've talked about so far, we've used a numeric index, a zero based integer to get directly to an element. Or there's been no index at all. With linked lists, we get the first or the last element and traverse forwards or backwards through the entire structure. And with stacks and queues, we don't use an index, we just ask for whatever's at the top of the stack or the front of the queue. But here's where we widen our options, where we get our ability to use meaningful keys to work with elements in our data structures. Not just some arbitrary number, but a way to add or access each element that itself has meaning.

Using an employee ID to get to an employee object. We're using a word to get to a definition of that word. Here's an example. I want to store the names of all US states. I could certainly use a standard basic array, an array of strings to do this. And with an array, each element as we know has a zero based index. And its reason for existence is simply to give us a way to access each of these elements independently. But there's no meaning to this number. There's no relationship between any index and the value that it's connected to. It's just whatever we ended up with when inserting items in the array.

So, something like this might be useful instead: to use the state abbreviation as the index, as the key, the way to get to each element. Not only do we now have an actual meaningful relationship between these two pieces of data, we might even find both of them useful in our application. Maybe I'd use the abbreviations in a drop-down box or a user interface element of some kind, but use the corresponding full names when generating invoices or reports. And this is an associative array.

It is a collection of pairs of keys and values. With associative arrays, we don't tend to use the word index; we use the word key as the way that we get to any one specific entry. So, it's keys and values grouped into key-value pairs. And these pairs belong together, and must stay together. As we know, if we sort a basic array, the values shuffle around, but the indexes stay in their original location. But if we sort an associative array, say sorting by the abbreviation of the state, not the value of the state.

Well, all those pairs of keys and values still need to stay together; they need to stay paired up. Now, one implication of this is that, unlike a basic array, in an associative array the keys do not need to be in any specific order. Sure, you might find it useful to sort them by key. You might find it useful to sort them by value. You might not need to sort at all. But importantly, any key must be unique in any associative array. The same way that you don't get the same index number appearing twice in a basic array, there can be no duplicate keys in an associative array.

The values, however, can have duplicates if that makes sense. The same way that you could have, say, duplicate values in an array of integers, you could also have duplicate values in an associative array. But the keys must be unique. Though you're not restricted to just string keys and string values. We could have an associative array with the state abbreviation as the key, but as the value, a full custom object containing the state name and capital and population and area, etc. It is common to use a string as a key.

But you can usually use any type of object as either the key or the value. Having said that, values don't have to be complex; we might use a simple associative array to store string keys with integer values, using a piece of text to look up a number, like the amount in stock for various widget products. The term associative array is a generic term. It's an abstract data type. This is an idea. It's a way of discussing the concept of a data structure that's composed of pairs of keys and values.

Across languages, the practical implementations of associative arrays have different names. In Objective-C and Python, they're called dictionaries, and that's a decent name for them. A dictionary is a way to use a single word to look up a larger meaning. It's a key to a value. In Ruby, you'd implement this using a hash, and as we'll see, hash is a very common word to use here. In Java, you can implement an associative array using a hash table or a hash map. In .NET, they're available as both dictionaries and hash tables.

In C++, you can use a map or hash map. I don't talk about JavaScript very often in this course, as I'm sticking more to general-purpose languages, but in JavaScript, objects behave like associative arrays. They are a collection of pairs of keys and values. Now, we haven't yet talked about how these are implemented behind the scenes, how these are stored and created in memory, and the implications of doing things like sorting and searching. And that is intentional. We are still talking about this here as an abstract data type. And we want to be able to talk and think about associative arrays without worrying about how they are implemented.
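To make the earlier state example concrete, here's a brief sketch in Java using HashMap, one of the implementations just mentioned:

    import java.util.HashMap;
    import java.util.Map;

    public class StatesDemo {
        public static void main(String[] args) {
            // Keys are meaningful abbreviations, not arbitrary numeric indexes.
            Map<String, String> states = new HashMap<>();
            states.put("AZ", "Arizona");
            states.put("CO", "Colorado");
            states.put("NY", "New York");

            System.out.println(states.get("CO"));   // "Colorado" - look up by key
            states.put("CO", "Colorado");           // keys are unique: this replaces
        }
    }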

But it is worth going one level deeper, because behind the scenes, most associative arrays, whether they are called dictionaries or maps or hashes, are implemented using something called a hash table. And a hash table itself is a very important and useful data structure. But to understand a hash table, we first need to understand a hash.

Understanding hash functions

Hashing is a valuable concept in programming. It's used not just in data structures, but in security, cryptography, graphics, audio. Think of the term hash as used in cooking: hash browns, corned beef hash. To take some raw ingredients, chop them all up into tiny bits, and mix them all together. Well, that's kind of what we do with hashing in computing. It's a way to take our raw ingredients, for us, data, whether that data is a string like a password or an entire employee object, and run it through a function called a hash function.

Which, and forgive my simplified description, will grind that data all up, and the result that we get out is a small, simplified, purposefully shortened reference generated from that original data. And the result we end up with from putting our data through this hash function will be smaller, usually much, much smaller, than the original data. It might be a handful of letters and numbers. It might just be an integer. A couple of important points. Hashing, when we do this, is not encryption.

Almost without exception, and certainly in all of the examples we're interested in, hash functions are not reversible. They are one-way. Another word for this is that they are not invertible. You lose information when hashing. That's okay, that's intentional. But the thing to understand is you cannot take a hash value result, then turn that around, feed it into another function, and get the original data back. That would be like wanting to take a plate of corned beef hash and turn it back into a cow and a bag of potatoes; it isn't going to happen.

So let me give you an intentionally trivial example of a hash function. Say I have a Person class defined. Very simple: first name, last name, date of birth. And I want a hash function defined on this class. What that should do is return a single, specific integer generated from the data in a Person object. So, I could do something super simple. When I have an object, we'll have Sam Jones here, what I'm going to do is take all the letters from the names, give each one a one-through-26, A-to-Z representation, then add up all those numbers.

Then I'm going to take all the numbers in the date and add them all up, and everything is added together, and we'll return that value. I did say it's trivial. But there's our trivial hash value. Importantly, though, this is repeatable. It is deterministic. What that means is, if I take this exact same object, with this same data, and feed it into the hash function again, I would expect the same hash result. And if I took a duplicate object, an actually different object, but one that contained exactly the same data, the same first name, the same last name, the same date of birth, I would also get the same hash result.
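Here's roughly what that intentionally trivial hash function might look like in Java. The field names and letter-summing scheme just follow the description above; don't expect it to reproduce the exact numbers from my example:

    // An intentionally trivial (and terrible) hash: sum letters A-Z as 1-26,
    // then add the digits of the date of birth. Deterministic, but collision-prone.
    class Person {
        String firstName;
        String lastName;
        String dateOfBirth;   // e.g. "1990-04-12"

        int trivialHash() {
            int sum = 0;
            for (char c : (firstName + lastName).toUpperCase().toCharArray()) {
                if (c >= 'A' && c <= 'Z') {
                    sum += c - 'A' + 1;   // A=1, B=2, ... Z=26
                }
            }
            for (char c : dateOfBirth.toCharArray()) {
                if (Character.isDigit(c)) {
                    sum += c - '0';       // add up the digits of the date
                }
            }
            return sum;   // same data in, same hash out - always
        }
    }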

That's okay because if I have two separate objects that I consider to be equal to each other, they should return the same hash value. Let me say that again. If you have two objects that you consider equal, they should return the same hash value. However, the flip side of this is not true. While two equal objects should produce the same hash value, two equal hash values does not guarantee they came from equal objects. Two different objects might, under some circumstances, deliver the same result out of a hash function.

Here's an example of that. We had the Sam Jones object go through the hash function, and it added all the different numbers together representing the different parts of the name and came up with a hash value of 2075. Now, most other names would produce totally different results, different hash values. But then Fay Adams comes along, and she's got a particular collection of letters and a particular birth date that, unfortunately, adds up to exactly the same hash. And this is what is known as a hash collision: we have different objects with different data, but giving the same hash value result.

Now, here, this is of course due to my way-too-simple hash function, but it is possible, even with more complex hash functions, to get the same hash from different objects. In most cases that's okay; we can live with the occasional hash collision. But it should reinforce the idea that we cannot take a single hash value and then turn that back into the original data. But here's the question: why, why do we do all of this? Well, because being able to take a complex object and boil it down to a single integer representation is very useful in data structures, because we can use that integer, that hash value.

As a way to get to a certain location. So let's see one of these data structures in action.

Hashing is not Encryption

(not invertible)

Hashing rules

Hashing Collision

We have different objects with different data but giving the same hash value result.

Using hash tables

This idea of hashing is fundamental to understanding the data structure called a hash table. A hash table is the typical way to implement an associative array. In your language, these might be called dictionaries, or maps, or hashes, or even just called hash tables. The big benefit of hash tables over arrays and over linked lists is that they're very fast, both for looking up whether an item exists, or finding a specific item in a hash table, and for inserting and deleting items. Let's take a look at why.

When a hash table is created, it's really internally a very undramatic array-based data structure, but it's adding extra functionality to get us past the limitations of an array. Now, hash tables can usually be created with an initial capacity, and if you know how large it's going to be, it is better to do that for memory management. But even if it has to be a flexible number of elements, the idea is the same. That a hash table begins with multiple placeholders, multiple buckets, all just waiting for content.

And buckets, as odd as it might sound, is the term we use for the entries in a hash table. So, we want to add a key-value pair to this hash table. Again, we're always adding in pairs. Now, sometimes this verb might be add, sometimes it's put, sometimes it's insert, but we're adding a key and a value. We pass that information to the hash table. It will take our key, in this case a string, but it could be any object, and it'll run it through the hash function, getting a specific integer out. Now, depending on the hash function, this hash value could be quite a large integer, so it's not going to just use this as an array index.

There will be a little logic, and this is all done within the hash table, all behind the scenes, but it will reduce that number down, relative to the current size of the hash table, so that it can attempt to evenly distribute the elements across however many buckets we have. And reducing that number down could be as simplistic as a modulo operation: we take the hash value, we divide by the number of buckets in the hash table, and we use whatever the remainder is, in this case five, to map to a specific index in the array, a specific bucket, where the value is now stored.
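As a rough sketch of that reduction step in Java, with a made-up bucket count:

    public class BucketDemo {
        public static void main(String[] args) {
            String key = "AZ";                  // whatever key we're adding
            int bucketCount = 16;               // example hash table capacity
            int hash = key.hashCode();          // may be any int, including negative
            int bucket = Math.floorMod(hash, bucketCount);  // remainder maps to 0..15
            System.out.println("Store in bucket " + bucket);
        }
    }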

Then we add another key value pair. The key is put through the hash function. We get a different value and that value is mapped to another bucket. And this continues on and on. But the idea is that when we need to get an object from the hash table, well, we tell the hash table we want the value for a specific key. It will take that key again, run it through the exact same hash function again, and it can then go directly to the specific bucket that contains the value we're looking for. There's no linear search, no binary search, no traversing a list.

We just go straight to the element we need, even in situations where this hash table has thousands, or tens of thousands of values. It is a very efficient way for both looking up entries and for adding them. Now in a perfect world, we would want each bucket in a hash table to contain one entry exactly, but that's not always possible. As we've seen, we can expect that the occasional hash collision will arise, that we will get the same hash value for different objects. And beyond that, because we're typically reducing the hash value down to a smaller index, and we don't have an infinite number of buckets, as we get hash tables with hundreds or thousands of entries, collisions are going to happen.

We will add a new key-value pair, and it's going to attempt to put that value in the same bucket as an existing one. Now, this is not a disaster. It's expected behavior that this happens; we just don't want it to happen all that much. Now, hash table implementations have different ways to automatically deal with this. These range from the reasonably simple, where each bucket in the hash table itself contains a simple collection, like an array or a linked list, which can then point to multiple entries. So, we would still be able to get to any one bucket very fast, but once we get to a bucket with multiple items, the hash table will traverse that internal list to find what it is we're looking for.

This is what is called the separate chaining technique in hash tables. With two or three entries in a bucket, it doesn't have a significant impact on any operation. Now, there are other techniques for managing collisions inside hash tables, including open addressing, and the wonderfully named cuckoo hashing, hopscotch hashing, and Robin Hood hashing. But we are pragmatic and practical programmers, so we will be using whatever's available in our current language. And you will find many languages provide several specialized implementations of hash tables for specific needs, so let's take a look at a few of those.

Supporting hashing

So you may be wondering, before I do any of this, do I have to go off and write lots of complicated hash functions? Thankfully, no. It's probably the case that most of your objects already have perfectly acceptable hash functions for most of the things you want to do. For example, every single class in Java already has a hashCode method. It's part of the base Object class, so any classes you define, and all the objects you create from those classes, already have a default hashCode behavior that returns an integer.

And likewise, all Objective-C objects have the hash method; it's part of the NSObject base class. And all .NET objects have GetHashCode. All Python objects have a hash method, all Ruby objects have a hash method. Now, usually the default hash functionality will return an integer that's calculated from the ID or the underlying memory address of that object, meaning you should get a different hash value for every object you call this on. And if you have two separate references pointing to the same actual object, you should still get the same hash.

Which is what you'd want, because it will be calculated based on that actual object's underlying memory address. So if all you need is a unique hash value for each object, you probably already have that behavior. However, when you're writing your own classes, if you ever redefine what it means for objects of your class to be equal, you should always redefine the hash function too. Let me unpack that statement. And this is just us taking a sidestep away from data structures over into object orientation.

When you write a new class, you'll always get some kind of default equality behavior. So you can ask if two objects of that class are equal to each other. Now, usually this default behavior just looks at the ID or the memory address of the object. Is object A actually the same object as object B? Are they pointing to the same location in memory? If so, they're equal. Otherwise, they're not equal. But in the case where you would rather compare the internal state of two objects so that you might have two separate objects that are really copies of each other with the same important internal data.

And you want those to return true for an equality check, you would typically override your own class's equals method, or isEqual method, or eql? method, whatever that's called in your language. Now, here's where we start to step back into data structures. If you ever override the equality behavior in your class, you must override the hash behavior, because hash codes are so tied to equality. Now, I said earlier, if you have two objects that you consider equal, they should return the same hash value.

Well, let's turn that should into a must: they must return the same hash. So if you change what it means for your objects to be equal, you should also change what it means to hash that object. Now, okay, bear in mind I'm talking about custom classes here. If all you're ever putting in your hash tables or dictionaries are string keys with string values, you wouldn't need to worry about this. String objects already have this behavior defined. So if you have two separate string objects, they will still count as equal and return the same hash if they have the same content, even if they're actually separate string objects allocated in different parts of memory.
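Here's a brief Java sketch of that rule in practice, reusing the Person idea from earlier. Objects.equals and Objects.hash are the standard helpers for exactly this:

    import java.util.Objects;

    class Person {
        String firstName;
        String lastName;

        // Equality now means "same internal data", not "same object in memory"...
        @Override
        public boolean equals(Object other) {
            if (this == other) return true;
            if (!(other instanceof Person)) return false;
            Person p = (Person) other;
            return Objects.equals(firstName, p.firstName)
                && Objects.equals(lastName, p.lastName);
        }

        // ...so the hash MUST be based on that same data: equal objects, equal hashes.
        @Override
        public int hashCode() {
            return Objects.hash(firstName, lastName);
        }
    }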

So they shouldn't provide any issues when you use strings in hash tables.

Hashing in Custom Classes

Sets, Trees, and Graphs

Using sets

In this last chapter, we're going to cover several remaining classic data structures. Be aware that some of these may not be implemented in your language, or at least not as part of the standard library. You might need to link to another framework, usually a third-party library, or perhaps you can write them yourself. But first, a data structure that almost certainly does exist in your language: the idea of a set. Now, sets are a very simple data structure to use, perhaps even the simplest data structure to use. But to understand how they work, it's helpful that we've already gone through things like linked lists and hash tables.

A set is an unordered collection of objects. So what does that mean? Well, just think of a big container you can put a bunch of objects into. That's a set. By unordered, I mean there is no index to this like an array. There is no specific sequence as with a linked list, or a stack, or a queue. There is no key as with a hash table. It's just a bunch of objects in no particular order. You might think this sounds rather limited, and you'd be right. It is intentionally limited. So this begs the question, what is a set good at? Well, here's main feature number one.

Unlike an array, or an associative array, or a linked list, sets do not allow duplicates. You cannot add the same object twice to the same set. Now, the second main feature: sets are designed for very fast lookup, to very quickly be able to see if we already have a particular object contained in a collection. This is often referred to as checking membership of a collection. If we get given an object, whether that's a string or a file name or an employee object, and we want to know if we already have that object in an existing collection.

It's very fast to check if it's in a set, either to find out that it is there, or that it isn't. Now, this is something that other collections, other data structures, are not good at. As we've already seen, if we want to find a specific object in an array, we might have to do a linear search of the entire array. If we want to find a specific object in a linked list, we have to start at the beginning and traverse the list. And even with an associative array, we can only get directly to an object if we know what its separate key is. Now, behind the scenes, sets are actually using the same idea as hash tables most of the time.

But instead of hashing a key object to store a separate value object, when you're using a set, the key object is the value object. We don't have two pieces of information, we're just adding one. So a set works by taking an object, whatever that is, hashing it, and then using the generated index to store the object itself. But it's using the same general process that we saw previously with the idea of full hash tables. And that means to check membership, to find out if we already know about a particular object, we just repeat the process.

Usually that would be something like a contains method that we can call on the set. If we ask whether the set contains object1, it'll run that through the hash function again, get that index, and see if there's something there. So it's a super fast lookup for checking the existence of an object in a collection. But it's easy to get the wrong idea about this. Yes, sets support very fast lookup, but we would never use a set just to index directly to a specific object and retrieve that object, because there's absolutely no point.

Because to go directly to any specific object in a set, you already need to have the object. That's the only way you can get the index for it. So the only reason you would ever do this is to check for membership, to check existence. Do I already have this object? Do I have this product in my list of products? Do I have this person in my list of employees? But it's not to just retrieve a value. However, if you want to get everything out of a set, that's a different matter. You can also iterate through all the elements in a set.

That's perfectly acceptable. You just may not have any guaranteed order to them. So, a set is an unordered collection of objects. It doesn't support duplicates, but it does support very fast lookup to see if a particular object exists; we're checking membership in the collection. And we can get everything out of a set. We just wouldn't want to use one as a way to get to any one specific item. If you need any behavior other than this, it's simply not the best data structure. Now, sets are a data structure with very good language support.

There's more simplicity to using sets than just about any of the others. Ruby has Set. Python supports set and frozenset; the frozen version is even more efficient if your set can be frozen, which, as it sounds, is a way of making the set immutable after it's been created. With a similar idea, in Objective-C, we have NSSet, which is frozen, and NSMutableSet, which can be added to after it's created. In Java, Set is an interface, like List and Queue, and the simplest set class is the HashSet, which, as you might imagine, indicates it's using a hash table to implement a set.

Likewise, C# has HashSet as well. And in C++, there's a set container in the Standard Template Library. Although interestingly, in C++, sets are implemented not with hash tables, but with a data structure called a binary search tree. And that seems like an ideal segue to start talking about tree data structures.

Working with Sets
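To make this concrete, here's a minimal sketch using Python's built-in set type; the collection and names here (employees and so on) are just hypothetical examples.

```python
# Hypothetical example: tracking employee names with a Python set.
employees = set()
employees.add("Alice")
employees.add("Bob")
employees.add("Alice")          # a duplicate: silently ignored

print(len(employees))            # 2 -- no duplicates allowed
print("Bob" in employees)        # True -- fast membership check via hashing
print("Carol" in employees)      # False

# frozenset is the immutable variant mentioned above:
staff = frozenset(employees)
# staff.add("Dave")              # AttributeError -- a frozenset can't change
```

Note that the in operator is doing exactly the hash-and-check lookup described above; there's no linear search happening.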

Introduction to tree data structures

The idea of a tree data structure, in its most generic sense, is that we have a collection of objects. For the moment, let's call them nodes. They have connections, they have links between each other. Okay, we already talked about a data structure made of multiple nodes and links when we talked about linked lists. But in a linked list, we always go from one node to a single specific next node, to the next node, to the next node. Even if we can traverse the list back and forth, this is a fixed sequence.

But when we have a tree, the difference is that each node might link to one, or two, or more nodes. And while we could represent this visually in all sorts of confusing ways, the best visual representation is a tree structure. Now, how these objects are physically allocated in memory is unimportant. This is a logical data structure we're talking about. It's not about whether these are taking up contiguous areas in your RAM; it doesn't matter. This is a way for us to think about things and make sense of complexity. Now, let's get a few pieces of terminology down so we're all on the same page.

As when using a linked list, there is always one specific starting node in a tree data structure. It's referred to as the root node. And yes, if this was a real tree, the root would be in the ground, but we usually draw them in this orientation. Now, this root node can itself contain a value, any object that you find useful. But also, as with linked lists, the root node will contain links to other nodes. In C++ and Objective-C, these links would be pointers; in other languages, they'll be object references.

So these next nodes are referred to as child nodes. The root is the parent of both of these child nodes, and the root itself has no parent. But each of these child nodes can, itself, contain child nodes, and so on and so on. So this idea of parent and child nodes continues down the tree, and a node can be a parent and a child at the same time. This node B, for example, is a child of the root and also a parent of those below it, of its own child nodes. Child nodes with the same parent are called siblings, so the terminology here is not particularly bizarre.

It's fairly understandable. We'll cover a couple more words, including the idea that a node with no children is referred to as a leaf, or leaf node. So, that's enough terminology for the parts of the tree; nothing too strange here. Now, I've drawn a tree structure here with no more than two child nodes for any parent. This is referred to as a binary tree. A binary tree is just a tree structure with a maximum of two child nodes from any one node. Doesn't have to be two; it could be none, it could be one.

But never more than two. If there are any more than two child nodes under any one node, it's not a binary tree anymore. Okay, so why is that important? Well, because binary trees are often used for implementing a wonderfully searchable structure called, not surprisingly, a binary search tree, or BST.
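As a quick sketch of that terminology, here's a minimal binary tree node in Python. The class name and values are just illustrative; in C++ or Objective-C, the child links would be pointers rather than object references.

```python
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None     # left child link (None means no child)
        self.right = None    # right child link

root = TreeNode("A")             # the root node has no parent
root.left = TreeNode("B")        # B and C are siblings, children of the root
root.right = TreeNode("C")
root.left.left = TreeNode("D")   # D has no children, so it's a leaf
```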

Understanding tree data structures

When we talk about the specific kind of binary tree called the binary search tree, there are a few more important terms we need to be comfortable with. Now, with a binary tree, there cannot be any more than two immediate child nodes under one node. A node could certainly have two child nodes, which themselves each have two child nodes, which themselves each have two child nodes, and so on and so on ad infinitum, or at least until you run out of memory. But those are the restrictions. In a binary tree, a node can have zero child nodes, in which case it's a leaf, or one, or two.

Why this limitation? Well, for one thing, it gives us clarity. We know that in a binary tree, any node object can only contain two links, two references to children. It won't contain some arbitrary array of links, maybe one, maybe 14, maybe 574 child nodes we have to iterate over. No, just two. And in a binary search tree node, those two links have specific names: the left child and the right child. It may help to think about it this way: every node will always have two links, the left child and right child links, but those links might be null, meaning there are no children.

In that case, the node is a leaf. Or the left child link might point to another node, or the right child link might point to another node but not the left, or both left and right might point to other nodes. Now, why do I seem to be laboring this point? Well, because in a binary search tree, it matters whether a node is added as the left child or the right child. It's not just, oh, it's the first child node, put it on the left, put the second one on the right. That's not the way it works here, because in a binary search tree, there are reasons why a child node is inserted at either the left or the right position beneath its parent.

You see, a binary search tree is not some random collection of stuff all strung together. It is a data structure that naturally stays sorted, without the immense amount of reshuffling that would be needed in a basic array. In fact, a binary search tree is sometimes called a sorted tree or an ordered tree. And here's the first rule of binary search trees: a left child node must be less than its parent, and a right child node must be more than its parent. That rule follows all the way down.

As we insert new nodes with values, that is the rule that will always be followed to make sure the tree stays sorted. In the example I'm showing, I'm just using an integer as a representation of what's stored in a node. Binary search trees are often used to store key-value pairs in sorted order, like an associative array, so what we'd be representing here is the key to an object, and it's the key that would be stored in order, in sequence, in a binary search tree. So any entry here must be unique. You can't have duplicate keys, just as you don't have duplicate keys in a hash table or even an array.

So, we create a binary search tree by adding one node, one key-value pair. Let's say its key is 50, so it is the root node. Then we want to add a new node; this one has a key of 25. The binary search tree evaluates it and says, well, this is less than the root, so it goes in as the left child. Then we add another node. This one, coincidentally, has a key of 100. It's evaluated; we see that it's more than the root, so it goes in as the right child. Then we add a new node. It's 75. Again, it's more than the root, so we'll go down towards the right and compare it to the next one.

There is something already there in that right position, so we compare again. Well, it's less than this node, the 100, so we'll insert it as the left child of 100. Then we add another node. This is 30. Well, it's less than the root, so we'll go down and compare it to the one on the left. It's more than that node, so we'll put it in as the right child here. Let's add a smaller value. We'll add the number 10. We compare it to the root: it's less. We go down and compare it to the value on the left-hand side: it's less than that as well. We go down again and put it on the left-hand side of the 25 node.

Adding some larger values, like 1,000, well, that's going to be added as the right child of the right child. We add the number 2,000; it goes to the right, all the way down. We add the number 3,000; that's going to be on the right, all the way down again. But of course, data structures aren't just used to store elements; we need to be able to retrieve them, and this is what binary search trees are good at. Say we now want to find out if a binary search tree contains an object with a key of 2,000. We follow the same rules. We compare it to the root node. It's more than that, so we head down to the right.

It's more than that one, so we keep heading down to the right. It's more than that one too, so we keep heading down to the right until we find it, or we figure out it's not there. But each time we head down one path, we're discarding entire sections of the tree, so it's a very quick way to find a specific element. The other benefit is that binary search trees stay sorted: if we retrieve the items in a left-to-right sequence, we will get them all out in order. Now, you've probably noticed that we have more levels of nodes on the right-hand side than on the left.

Now, it's not unusual for this kind of thing to happen. We can't always guarantee that the way data is added will allow us to build a tree with a perfectly symmetrical structure all the way down. In this case, we say that our tree is unbalanced: there are more levels on one side than on the other. The downside is that, on average, we would have to perform more checks to find, insert, or delete any values on the right-hand side than we would on the left. Now, we are talking here about the abstract idea of a binary search tree data structure.

So do understand that there are several implementations of this binary search tree idea that are self-balancing binary search trees. They include algorithms so that as the tree becomes unbalanced, and it will, it gets reorganized to more equally divide the number of levels. Now, this can be anything from something as simple as shifting the position of the root node to reorganizing the entire tree. But the important idea with balancing a binary search tree is not that we always need two child nodes; that part of it doesn't matter.

It's more important that the numbers of levels are roughly equal to each other, that we don't have three levels on the left and 20 on the right. When you start implementing binary search trees, you'll come across common self-balancing binary search tree algorithms with names like Red-Black trees, AVL (Adelson-Velskii and Landis) trees, Splay trees, and Scapegoat trees. Now, in this course, I'm not getting any deeper into these, because they take us away from data structures and into the algorithms that manage data structures. That's quite a different matter and quite a different level of depth.

This is still an area you can go ahead and get a couple of PhDs in. So, what are the main benefits? Well, binary search trees are fast for insertion, fast for deletion, and fast for accessing any element, even at large sizes. You might think, well, hash tables are apparently good at that too, and that is absolutely true. But because binary search trees stay sorted, they let us do something hash tables don't: get all the items out of the tree in order, enumerating everything in sequence.
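Here's a minimal, unbalanced binary search tree sketch in Python, following the insertion rule above. It's an illustration only; a production implementation would add self-balancing (Red-Black, AVL, and so on) on top of this.

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(node, key):
    # Smaller keys go left, larger keys go right; duplicates are ignored.
    if node is None:
        return BSTNode(key)
    if key < node.key:
        node.left = insert(node.left, key)
    elif key > node.key:
        node.right = insert(node.right, key)
    return node

def contains(node, key):
    while node is not None:
        if key == node.key:
            return True
        # Each comparison discards an entire subtree.
        node = node.left if key < node.key else node.right
    return False

def in_order(node):
    # Left subtree, then node, then right subtree: yields keys in sorted order.
    if node is not None:
        yield from in_order(node.left)
        yield node.key
        yield from in_order(node.right)

root = None
for key in (50, 25, 100, 75, 30, 10, 1000, 2000, 3000):
    root = insert(root, key)

print(contains(root, 2000))   # True
print(list(in_order(root)))   # [10, 25, 30, 50, 75, 100, 1000, 2000, 3000]
```

Notice the in-order traversal at the end: that's the "get everything out in sequence" benefit that hash tables can't offer.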

And when we start looking at particular languages, trees aren't an area where you'd expect as much direct language support as with other data structures. You typically don't just instantiate an object of a binary search tree type, because we're getting into an area where binary search trees may well be used, but behind the scenes, to implement some other data structure. For example, the C++ implementation of the set data structure is done with a binary search tree. You don't need to care. You don't need to think about left child nodes and right child nodes and whether this is balanced or not.

You just use the set. Likewise, in C# and .NET, there is no binary search tree class, but if you use a SortedDictionary, a collection of key-value pairs with the keys in order, that's using a binary search tree to implement the idea. And while dictionaries are often implemented with hash tables, if you want to keep your keys in a sorted order, hash tables aren't really good at that, and binary search trees are. Now, Ruby has no built-in tree data structure; there are some third-party implementations. And likewise, Python doesn't have a built-in tree data structure.

With Java, we have classes like TreeMap. This stores a collection of key-value pairs, but if you read the documentation, it will tell you this is a Red-Black tree, a self-balancing binary search tree. And folks, that is all I'm going to say about binary search trees in this course. Next, I'm going to cover the heap data structure, which also uses a binary tree for its implementation. So, onward and upward.

BST / Hash Table Comparison: both offer fast insertion, deletion, and lookup; a binary search tree also stays sorted, while a hash table does not.

Using heap data structures

Depending on your language, you may be familiar with the term heap as meaning an area of memory where you allocate objects, but that's not what we're talking about here. A heap is a specific data structure we can use for our own purposes. Heaps are often used in sorting algorithms; you may even have heard of something called a heap sort. But they're also a way of implementing other abstract data types, like the priority queue we discussed earlier in the course. Heaps are usually implemented as a binary tree. Not a binary search tree, but still a binary tree.

That is, a collection of parent and child nodes with a maximum of two children under any one parent. But there's a different, simple rule that we use, which gives a heap a totally different internal structure from a binary search tree. So, the basics: heaps are a collection of objects. As we add items to a heap, they are always added top to bottom, left to right. We completely fill out any level before moving on to the next, so we don't have to worry about the tree becoming unbalanced the way a binary search tree can.

But it's not just random things added in any random order. There is a rule we apply when we are doing this, and to apply this rule, we have to answer one question: do we want our heap to be a min heap or a max heap? That is, do we want the top of the heap, our root node, to always contain the lowest value in the entire heap? Or do we want the top of the heap, the root node, to always contain the highest value in the entire heap? If we say we want a min heap, the lowest value at the top, the one rule applied is that any child node must be greater than its parent node.

If we say we want a max heap, the highest value at the top, any child node must be less than its parent. And that simple rule will do it. Let's see a min heap in action. So, the root node is created. I'm going to go again with an integer value of 50. We add another node; this will be 100. Remembering that heaps always add top to bottom, left to right, it's ready for something to be added at that left child node position. We check: is this allowed? With a min heap, this child must be more than its parent, and 100 is more than 50, so we're good; we'll just slot it in there.

The next open position will be the right child of the root node. We create a new node; it's 75. We check: is the child node more than the parent? Yes it is, not a problem. And notice we don't care whether this value, 75, is less than or greater than its sibling. That doesn't matter at all, just that it's more than its parent. So we continue top to bottom. It's a binary tree, so we can only have two children, meaning the next available slot is going to be the left child of that 100 node. We need to go one level deeper.

We add this new node, it's 150; it's checked to see if it is greater than its parent node, and there's no problem there. We then add a node with the value 99. Well, the available slot is the right child of that 100 node, so it's dropped in there, and then comes the little check: is 99 more than its parent? No, it isn't, so those two are swapped. Then, before finishing, there's another check: is 99 more than its new parent? Yes it is, so that one is absolutely fine.

And this little swapping of nodes is how a heap keeps itself organized. Next, let's add the number 76. The only available slot now is the left child of 75; again, we're going top to bottom, left to right. So it slides in there and gets compared to its parent. It's more than the parent, not a problem; we're fine. Then we add the number 49. The available position is the right child of 75, so we put it in there and compare it to its parent: is 49 more than 75? No, it isn't, so we need to swap those two.

And before it's done here, because we moved things around, it's also going to compare 49 to its new parent. And no, it isn't more than that either, so we need to swap those two as well; we're actually swapping the root node. This is the way that, with a min heap, we always bubble the lowest value up to the very top of the heap, to the root. A max heap is exactly the opposite: a child must be less than its parent, so we always bubble the highest value up to the top. But it is important to understand that a heap is not a fully sorted data structure.

Unlike a binary search tree, which does stay sorted and where we can easily traverse the tree and get everything out in sequence, that does not work with a heap. Because, if you notice, at any particular level past the root, the values do not have to be in any specific order, just as long as they're all more than their parent. One of the benefits of this is that a heap does not have to do as much reorganization of itself as may be necessary with a binary search tree. The one thing we can be sure about is that the minimum or the maximum value will always be at the top.
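Here's a minimal sketch of that bubbling-up behavior in Python, storing the min heap in a plain list, where the node at index i has its children at indexes 2i+1 and 2i+2, which is exactly the top-to-bottom, left-to-right filling order.

```python
class MinHeap:
    def __init__(self):
        self._items = []

    def push(self, value):
        # Add at the next open position (bottom level, left to right)...
        self._items.append(value)
        # ...then bubble it up while it's less than its parent.
        i = len(self._items) - 1
        while i > 0:
            parent = (i - 1) // 2
            if self._items[i] >= self._items[parent]:
                break  # the heap rule holds: child >= parent
            self._items[i], self._items[parent] = self._items[parent], self._items[i]
            i = parent

    def peek(self):
        return self._items[0]  # the minimum is always at the root

heap = MinHeap()
for value in (50, 100, 75, 150, 99, 76, 49):  # the walkthrough's values
    heap.push(value)
print(heap.peek())  # 49 -- the lowest value bubbled up to the top
```

Pushing the same values as the walkthrough, 99 swaps once with 100, and 49 swaps twice, all the way up to the root.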

It's because of this one thing that heaps are most useful for the idea of a priority queue, which we covered conceptually early on: where we don't care that much about the order of most of the objects, and whatever order we've added them in is fine, unless something should be considered a high priority, in which case we could give it a low value and see it bubble up to the top. So, over to the language support. We have pretty good support for heaps across the various languages. In Python, there's the heap queue, heapq. This is Python implementing a priority queue as a min heap.

If you notice what they call it: binary trees for which every parent node has a value less than or equal to any of its children. That's exactly what we just went through. There's no built-in heap in Ruby and, as far as I'm aware, there isn't one in .NET, either. Over in Objective-C, there is CFBinaryHeap, whose overview mentions that binary heaps can be useful as priority queues; it's one of the things they're really good at. Java also has a PriorityQueue, which is based on a priority heap. It's doing exactly what we've just gone through here.

Other than that, C++ has the priority_queue container, which is also implemented as a heap. And that's heaps. It may not be a wonderfully general-purpose data structure, but for things like priority queues, it's the best possible choice.
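For instance, here's Python's heapq module used as a simple priority queue; the task names and priority numbers are made up for the sketch.

```python
import heapq

# heapq treats a plain list as a min heap: tasks[0] is always the smallest.
tasks = []
heapq.heappush(tasks, (2, "send newsletter"))
heapq.heappush(tasks, (3, "rebuild search index"))
heapq.heappush(tasks, (1, "reset user password"))

# (priority, description) tuples mean the lowest priority number
# bubbles up to the top, just as in the min heap walkthrough above.
while tasks:
    priority, description = heapq.heappop(tasks)
    print(priority, description)
# 1 reset user password
# 2 send newsletter
# 3 rebuild search index
```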

Heap Implementation: a heap is not a fully sorted data structure.

Introduction to graphs

The final data structure I'm going to cover in this course, and quite briefly, is the graph. As you'll see, graphs, like trees and heaps, are often used indirectly to implement a more specific idea. Let me explain. First, to be clear, when we're talking about graphs as a data structure, this is a totally separate idea from a graph as some kind of visual chart or diagram. Now, we've seen linked lists, a collection of nodes with a specific sequence from one node to the next. We've seen trees, a hierarchy of nodes.

Multiple levels of parent and child nodes. But these have constraints: in a tree, each child has only one specific parent, and each parent has specific child nodes. A child does not have multiple parents. There's no direct link between sibling nodes, and certainly no links between different branches of the tree. But with the idea of a graph data structure, those limitations are off the table. A graph is a collection of nodes where any node can link to any other node, and a node can link to multiple other nodes.

Whatever is useful, whatever is meaningful. We could use a graph to model a social network, with each node a person: people linked to a couple of people, who are linked to multiple other people, who link to others, who may very well cycle back to the first person. Now, notice that with this kind of structure, you don't have to have the idea of only one fixed starting node, or one fixed root node. It makes sense that you could begin at any node to find and follow all the connections. Other examples might be for transportation, modeling the distances between cities, or for flight routing.

We go from SBA to LAX or PHX; we can then go from any of those points to multiple other locations. We could model an Ethernet network in an office, or in an entire building, or a city. So, graphs are great anywhere you need to describe a complex system of interconnected points. Okay, some terminology here. Because graphs in computer science are so closely linked to graph theory in mathematics, it is common to hear mathematical terms used. With linked lists and trees, we talked generically about collections of nodes.

Well, in graph theory, we call these nodes vertices. And the connections, or the links between them, are referred to as edges. So, a graph can be described as a set of vertices and a set of edges. We can also say whether those edges should have a direction to them. In some situations, it makes sense that any edge, any connection between two vertices, is only one-way. In other situations, you might want to be able to follow that edge, that link, in either direction. If the edges only go one way, it's called a directed graph.

But if you can follow any edge in either direction, it's an undirected graph. Now furthermore, if it's useful, you can also give every edge a weight. This is a way of associating a number, usually just an integer, with the connection between the two vertices, with each of the edges. Now, you could do this to represent, say, the distances between cities, if you were trying to calculate the shortest route between multiple locations. Or you could use a simpler weight to indicate which edges take priority over other edges, and which edges should be de-prioritized in any logic you have.

And this would be referred to as a weighted graph. Alright, that's some basic terminology, but here's the thing. The focus of this course has been on practical and pragmatic data structures, things we can understand and then go ahead and create using the built-in libraries or frameworks of the common general-purpose languages. But there is no generic graph data structure in Ruby or Python or Java or C# or C++, because the implementation of any graph is always going to be more specific than that. And we have already used graphs in this course.

A linked list is a kind of graph, a tree is a kind of graph, a heap is a kind of graph. They are all graphs with intentional constraints. A singly linked list would be considered a directed graph: we can only go one way between nodes; there's a specific direction. Whereas a doubly linked list is a kind of undirected graph: we can follow the connection between nodes in either direction. And this is what I mean by saying this data structure is used to implement more specific pieces. Sure, you can create your own graphs; you can go to town with this idea.
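As an illustration only, here's one common way to roll your own: a directed, weighted graph as a Python dictionary of dictionaries. The airports and distances here are made up for the sketch.

```python
# Each key is a vertex; each inner dict maps a neighbor to an edge weight.
flights = {
    "SBA": {"LAX": 89, "PHX": 455},
    "LAX": {"SBA": 89, "PHX": 370},
    "PHX": {"LAX": 370},
}

# Enumerate every edge as (origin, destination, weight).
for origin, destinations in flights.items():
    for destination, miles in destinations.items():
        print(f"{origin} -> {destination}: {miles} miles")
```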

But at this point, folks, I'm going to intentionally draw the line. Graphs are an immensely interesting area, but they are situation-specific, and further discussion of graphs would be a course by itself. It probably wouldn't be a short course with the words "foundations of" anything in the title. So next, we're going to finish up this course with a bit of a recap and some suggestions for how to approach choosing the right data structures.

Graphs: Terminology

Linked lists and trees: collections of nodes

Collection of vertices (singular: vertex) = collection of nodes

Connections between vertices: edges

Directed / Undirected Graphs

Weighted Graphs

Graph Implementations

Linked lists, trees, and heaps are all graphs with intentional constraints

Conclusion

Review and conclusion

We've covered a lot of content in the last couple of hours. Some of these data structures may be clearer in your mind, whereas others are perhaps a little vague. A lot of that will have to do with whether you could easily identify a situation where you would use them. And that's not a trivial question, but it's always worth staying conscious of the fact that the first issue is not the data structure; it's the data. How much data do you have? How often does it change? Do you need to sort it? Do you need to search it? However complex you get, these questions are going to drive your decisions.

You get past the way-too-simple question of which data structures are fast, because you have to ask: fast for what? Fast to access a specific element, fast to search, fast to insert or delete, fast to sort? Those are the key questions. Now sure, anywhere you only have trivial amounts of data, it isn't going to matter very much. There's no reason to debate whether you should be using an array or a linked list or a hash table if all you're doing is storing three strings.

Well, unless of course those three strings change 500 times a second, in which case you may want to have that conversation. But let us recap. Arrays: easy to create, easy to use, fantastic for directly indexing into when you know the position you're going to. But poor at searching if you don't know where an element is, as that means a linear search if the array is unsorted. Searching can be quicker if the array is sorted, but sorting itself is inherently demanding. Arrays are also poor at inserting and deleting, particularly at any location other than the very end, where everything after the change has to shift.

And then there's linked lists. Very efficient for adding items and very efficient for deleting them. Not that good at directly accessing a specific element, because that requires a traversal of the list, and for the same reason, poor at searching. They also won't retrieve items in a sorted order unless you inserted them into the list that way. Stacks and queues have very specific purposes: if you aren't willing to embrace the constraint of last in, first out, or first in, first out, you shouldn't be using them.

But a word of caution: it is easy to get caught up in real-world analogies like the stack of plates or the queue at the bank. Those are absolutely worthwhile for grasping the concept, but don't let those examples limit your usage to only modeling real-world situations; that's not all we can do with stacks and queues. For example, one of the best uses for a stack in programming is when parsing code or expressions, where you need to do something like validate that you have the correct number of opening and closing curly braces, square brackets, or parentheses.

You just push those symbols onto a stack whenever they open, and when you get a closing one, if it doesn't match what's at the top of the stack, you immediately know there's been a syntax error. And that's much easier to do with stacks than having to write regular expressions, or starting to manually go through all the characters in the text to parse it yourself.
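Here's a minimal sketch of that idea in Python, using a list as the stack:

```python
PAIRS = {")": "(", "]": "[", "}": "{"}

def balanced(code):
    stack = []
    for ch in code:
        if ch in "([{":
            stack.append(ch)                  # opening symbol: push it
        elif ch in PAIRS:
            if not stack or stack.pop() != PAIRS[ch]:
                return False                  # closing symbol doesn't match
    return not stack                          # anything left open is an error

print(balanced("while (x[i] > 0) { x[i] -= 1; }"))  # True
print(balanced("if (a[0 > 1) {"))                   # False
```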

Over to hash tables. These are very fast for inserting and deleting, and very fast for lookup when you have the key. They do require a little more space in memory than, say, arrays, because there's some overhead, and each retrieval or insertion takes a little processing for the hashing and indexing. But the good thing is that it's the same amount of processing every time, even if the hash table gets very large. Now, if you're using a custom object as a key, you might need to provide your own hash and/or equality functionality, as we talked about. But in a situation where you have a lot of key-value pairs and/or volatile data, these beat arrays hands down. On to sets. As we've seen, sets are fantastic for checking membership.

In situations where you have to collect a lot of objects, and you need to both exclude duplicates and quickly find out if an object is already part of a collection, there's nothing better than a set. And they are terrible at almost everything else. And trees: binary search trees are the best choice when you have large collections of objects that need to stay in order. They offer fast retrieval; they're fast to search, fast to insert and delete. But, like hash tables, they take some overhead to create and manage.

If you don't care about the order of your objects, use something else. And heaps, as we saw, are a type of binary tree that's great for priority queues, among a few other things. Now, a general principle is that immutable or fixed-size data structures tend to be faster and smaller in memory than mutable ones. Something to take from this: you may initially need a mutable, resizable data structure so you can load it up with an unpredictable amount of data, but that data structure won't then change after the initial load.

Well, in that case it might be worth your while to copy the mutable version into an immutable, fixed-size version of that data structure, if all you're doing from then on is just accessing it.
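In Python, for example, that can be as simple as copying into a tuple or a frozenset; this is just a sketch with made-up data.

```python
readings = []                        # mutable while loading
for value in (98.6, 99.1, 97.8):     # an unpredictable amount in real life
    readings.append(value)

frozen = tuple(readings)             # fixed-size, immutable copy for read-only use

tags = {"urgent", "review"}
frozen_tags = frozenset(tags)        # the same idea for a set
```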

Now, if you're looking at existing code and wondering what kind of thing you should look at improving first, the simple answer is whatever holds the most data, or whatever holds the most volatile data. And do some checking: run some tests, prove these improvements to yourself. There is no better way to really get these data structures than to see some of the performance gains you can get just from changing one structure to another. Because in your applications, better performance is there for the taking, so take it. Thanks for joining me, and I'll see you next time.

Arrays

Linked Lists

Stacks and Queues

Hash Tables

Sets

Binary Search Trees

Fixed Structures are Faster / Smaller

Choose a fixed (immutable) version where possible
