My third blog post was about recursion. You can tell I was not prepared to write about recursion at the time since I had to Google it in the first place. Here's the post:
http://sloglogblogslog.blogspot.ca/2015/02/week-4-recursion.html
Now that I have been working with recursion for longer, I can safely say that thinking about it only makes my brain sore, rather than causing the searing pain it used to. So let's go a little more in depth: let's walk through the process of writing a recursive function and explain why it works.
Let's say we want to write a function that takes a nested list of integers (lists within lists within lists, but ultimately containing integers) and sums them all up. The first thing we want to do when writing a recursive function is decide what the function should do if no recursion is required. In this case, a single integer would be the simplest input, and we should simply return the integer. But we should only do this if the input is an integer, so we get:
def recursive(m):
    if type(m) == int:
        return m
Now we need to know what to do when we are given a list. This would be as simple as summing up the values in the list, which we can do with Python's built-in sum function.
def recursive(m):
    if type(m) == int:
        return m
    else:
        return sum(m)
Unfortunately this will only work for integers and flat lists. Let's see what we would write for a list of lists.
def recursive(m):
    if type(m) == int:
        return m
    else:
        return sum([sum(x) for x in m])
Now for a list of list of lists...
def recursive(m):
    if type(m) == int:
        return m
    else:
        return sum([sum([sum(y) for y in x]) for x in m])
You can see that this isn't very good, as the expression will just keep getting longer and longer, and each version will only work for lists nested to exactly the right depth. This is where recursion saves the day.
def recursive(m):
    if type(m) == int:
        return m
    else:
        return sum([recursive(x) for x in m])
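(A small aside on style, not part of the walkthrough: isinstance(m, int) is usually considered the more Pythonic way to write this check than type(m) == int, and it would work just as well here.)
    if isinstance(m, int):    # same base case, just a more idiomatic type check
        return m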
This seems almost too good to be true. How is there so little code? How is this even supposed to do what we expect it to? Well, let's trace an example.
>>> m = [2, 4, [2, 6], [[4]]]
>>> recursive(m)
-> sum([recursive(2), recursive(4), recursive([2, 6]), recursive([[4]])])
We already know that recursive(2) and recursive(4) will return their inputs, so let's keep breaking down what this function is doing.
-> sum([2, 4, sum([recursive(2), recursive(6)]), sum([recursive([4])])])
-> sum([2, 4, 8, 4]) -> 18
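And indeed, running it in the interpreter (just a quick sanity check of my own, using the same list as above plus a couple of extra cases) gives:
>>> recursive([2, 4, [2, 6], [[4]]])
18
>>> recursive(7)
7
>>> recursive([[[1], 2], 3])
6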
As can be seen, recursion is witchcraft and is only practiced by those wicked of heart.
Thursday, April 9, 2015
Impressions of Week 9
The lab this week had to do with binary search trees. In my post "Impressions of Week 7" I summarized what a binary tree was, and mentioned that a binary search tree can be searched more efficiently because not every node has to be visited. To elaborate, say we have a binary search tree whose nodes all contain integers. To the left of each node is a sub-tree (itself a binary search tree) whose nodes all contain data less than that node's, and to the right is a sub-tree (also a binary search tree) whose nodes all contain data greater than that node's. If you are searching for the value 7 in a binary search tree whose root contains the value 15, only one half of the tree needs to be traversed, namely the left side, since the right side only contains values greater than 15. This makes binary search trees much more efficient, as finding a value is faster than searching a flat list.
As an example, here is a visual representation of a binary tree. Let's say we are looking for the value 200. If we used pre-order traversal, we would need to look at 14 nodes before we get to 200 (90, 50, 20, 5, 25, 75, 66, 80, 150, 95, 92, 111, 175, 166, 200). By traversing this as a binary search tree, we would only need to look at 3 nodes before arriving at 200 (90, 150, 175, 200). Looking at the first node, we go to the right since 200 > 90. 200 is also greater than 150, so we go on to 175, where 200 is once again somewhere to the right, and there it is at the end. That's 11 nodes that did not even have to be looked at, which may not seem like much, but if the tree were massive then it could save a lot of time.
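To make that concrete, here is a rough sketch of what such a lookup could look like in Python. The node class and function names are my own made-up ones, not anything from the course code, and the tree I build is just the right-hand path from the example above (90, 150, 175, 200), not the whole picture.
class BSTNode:
    """A node in a binary search tree."""
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def bst_search(node, value):
    """Return True if value is somewhere in the tree rooted at node."""
    if node is None:
        return False                          # ran out of tree: value isn't here
    if value == node.data:
        return True                           # found it
    if value < node.data:
        return bst_search(node.left, value)   # smaller values live on the left
    return bst_search(node.right, value)      # bigger values live on the right

# The right-hand path from the example: 90 -> 150 -> 175 -> 200
root = BSTNode(90, right=BSTNode(150, right=BSTNode(175, right=BSTNode(200))))
print(bst_search(root, 200))   # True, after visiting only 90, 150, 175 and 200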
Another neat thing about binary search trees is that the data they contain does not have to be numbers; they can hold any kind of data, as long as it is comparable. This could make searching much more efficient in a variety of useful programs, especially when the number of data entries is extremely large.
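As a tiny illustration (just me poking at the interpreter, nothing course-specific), Python already knows how to compare strings alphabetically, so the exact same less-than/greater-than logic from the sketch above would work on a tree of words:
>>> "apple" < "banana"
True
>>> "zebra" < "mango"
False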