Tuesday 16 December 2014

Follow up to the Diagonals Problem

In my solution to the diagonals problem, I felt like I wasn't able to adequately explain how I got from the numbers to the final equations. This is because I'm not too sure myself: it was the product of trial and error, luck and a bit of inspiration.

In his post, Tim does a great job of showing the thinking that led him to the same formula I got. He also posted a Python program that takes the dimensions of the grid as input and returns the number of squares the diagonal crosses. The trickiest part of the program was the helper function for determining the greatest common factor of m and n. I hadn't thought that far... I was hoping Python would have a built-in function for that!
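I haven't seen Tim's exact helper, but a greatest common factor function along these lines (Euclid's algorithm) would do the job — the name gcd is just my choice:

```python
def gcd(m, n):
    """Return the greatest common factor of m and n (Euclid's algorithm)."""
    while n != 0:
        m, n = n, m % n   # any common factor of m and n also divides m % n
    return m
```

For example, gcd(8, 6) returns 2, which is the F from my formula in the last post. (It also turns out the standard library does hide one away: math.gcd, in newer versions of Python.)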

This will be my last post for the course. It was interesting, challenging and I feel well prepared for CSC 236 next semester. I'm a bit embarrassed that I forgot how to take derivatives during the final, but I'm going to chalk that up to exam time stress.

Thanks for reading!


Sunday 14 December 2014

Problem Solving: Diagonals


If there is a grid of squares with a certain number of rows (m) and columns (n), is it possible to determine the number of squares a line will go through if drawn from the top left corner to the bottom right? This was a problem given to us in a problem solving class and I would like to discuss how I went about solving it.

The problem
  • Input: two natural numbers, m and n, that describe the dimensions of the grid (m x n).
  • Output: the number of squares that a diagonal crosses from the top left corner to the bottom right corner of an m x n grid (s). 

The Plan: Draw some m x n grids and diagonal lines. Look for patterns and interesting cases. 


5 x 3, s = 7; 5 x 8, s = 12



First observations:
1) The order of m and n doesn't matter. In the picture below, diagonals through 8 x 1 and 1 x 8 sections both cross 8 squares (blue in the picture below).
2) When m = n, you have a perfect square and s = m = n. A diagonal through an 8 x 8 grid crosses 8 squares, because it goes perfectly from the top left to bottom right corner of each square (green in the picture below).

So for ease of computing s, I can arbitrarily assign m as the smaller number, odd number or prime number: whatever makes writing a function easiest.

8 x 8, s = 8


I noticed that when m is divisible by n or vice versa, then s = max(n, m). This still leaves a lot of the combinations to solve, but it led me to notice that if m and n share a factor, the line splits up into smaller sections (e.g. 8 x 6 = 2(4 x 3)). This means that you can multiply the number of squares the line passes through in the smaller section (4 x 3, s = 6) by the number of smaller sections (2) that make up the large section (8 x 6, s = 2 * 6 = 12).


8 x 6 = 2(4 x 3)

Acknowledging when you're stuck: At this point I was a bit stuck. I knew how to determine s for certain specific cases, but there was still the problem of working out s when m and n only shared a common factor of 1. It was getting very complicated looking at grids with lines going all over the place. It was time for a new approach. I turned to just looking at the numbers to see if I could find a pattern. I created a table with values of n and m on each axis and filled in s values.

After staring at the numbers for a while and trying out different ways of getting from m and n to s, I came across the equation s = n + m - 1. This seemed to work for the cases I hadn't solved (i.e. when n and m only have a common factor of 1).

Looking back, I realized that a variation on this equation actually works for all cases, once the largest common factor (F) of m and n is found. Then the equation can be written as:
s = F(n/F + m/F - 1)
where F is the largest common factor of m and n. In the end, it didn't matter if n or m were larger, smaller, even or odd.
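To check the equation, here's a small sketch in Python (the function name is my choice; it leans on the standard library's math.gcd to find F):

```python
from math import gcd

def diagonal_squares(m, n):
    """Return the number of squares an m x n grid's diagonal crosses."""
    f = gcd(m, n)                     # F, the largest common factor of m and n
    return f * (n // f + m // f - 1)  # s = F(n/F + m/F - 1)
```

This gives 7 for the 5 x 3 grid, 8 for the 8 x 8 grid and 12 for the 8 x 6 grid, matching the pictures above. (Multiplying the F through also shows the formula simplifies to s = m + n - F.)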

Sunday 23 November 2014

The Liar's Paradox

My last post on programs halt and bang concluded that bang(bang) halts if, and only if, it doesn't halt, which is clearly a contradiction. This occurs because the self-reference creates a paradox. A classic example of this is the statement:
This sentence is false
Because the sentence references itself, it is both true and false at the same time. This is called the Liar's Paradox, which gets its name from Psalm 116:11, in which David says:
I said in my alarm, "Every man is a liar!"
Again, however you read the statement, you get a contradiction: the sentence is self-referential because the speaker is himself a man.

I thought this was an interesting little complement to our class discussions on proving something uncomputable by trying to compute it and creating a contradiction.

The Halting Problem

Kurt Gödel, who first proved that some truths are unprovable, made logic seem more "illogical" for all subsequent students of mathematics. This concept has important implications for computer science, because there are problems that no algorithm can solve. One of these is Alan Turing's Halting Problem, the first problem proved to be uncomputable. Computer programs either halt (return an output, raise an error, or change a mutable variable) or they never halt (get caught in an infinite loop). It would be very useful for programmers to have a program that says whether any function, given any input, will halt or not halt. Unfortunately, the algorithm for such a program cannot be written.

def halt(f, i):
   '''Return True iff f(i) halts, otherwise return False.'''
   # *some algorithm here*
The proof for this is simple (but still confusing to wrap your head around) and uses a program, which we'll call bang, that halts for some inputs and not for others.

def bang(f):
   '''Return 0 if halt(f, f) is False, else don't halt.'''
   # if halt(f, f) is True, go into an infinite loop.
   while halt(f, f):
      pass
   # if halt(f, f) is False, then stop.
   return 0

So what happens if we run bang(bang)? There are two possibilities:

  1. bang halts, meaning halt(bang, bang) was False. BUT that means halt identified bang as a program that does not halt!
  2. bang does not halt, meaning halt(bang, bang) was True. BUT then halt identified bang as a program that does halt!
Thus, we have a proof by contradiction: bang halts if, and only if, it doesn't halt.

So, if at this point you are completely confused, that's okay! I had to work through the explanation several times after class before it started to click. Then I sat my roommate down and took him through the proof. After explaining it to him several times, I felt like an expert.


Monday 17 November 2014

Problem Solving: The Paper Folding Problem

Weeks ago I introduced the Paper Folding Problem that we spent a class trying to solve. I'm going to walk you through the process that led me to a possible solution.

The Problem: If you were to fold a piece of paper in half (right over left), the crease or vertex will point downwards when the piece of paper is unfolded to its original orientation. Using the same technique to make multiple folds in the piece of paper will create a pattern of up and down creases. If you're given the number of folds, can you produce the pattern of creases?

The Plan: First I started folding a strip of paper and recording the patterns of creases after each fold using arrows to indicate the direction of the crease. (See picture).



I noticed a pattern forming... the middle crease created by the first fold always pointed down (blue arrows in the picture above) and there was a type of symmetry around this central crease. The creases to the left of the central crease were the opposite mirror image (pattern flipped horizontally and vertically) of the pattern of creases on the right. Also, the pattern of creases left of the central crease after n folds was the same as the whole pattern for n-1 folds (purple ovals in the picture above). Therefore, to determine the pattern for n folds, one could simply write the previous pattern (n-1 folds) + downwards arrow + opposing mirror image of the previous pattern (n-1 folds).

Looking back, I wondered if there was another way to build the pattern that might be easier to program. As I folded a new strip of paper, I wrote the number of the fold on the creases formed by that fold. I noticed that the creases from the new fold were on either side of and between each crease from the previous folds (see picture below). Also, these new creases go down, up, down, up....
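This second construction can be sketched directly (the function name is my own): each fold inserts a new crease before, between and after the existing ones, alternating down, up, down, ...

```python
def creases_by_insertion(n):
    """Build the crease pattern for n folds by inserting each fold's new creases."""
    pattern = []
    for fold in range(n):
        new, direction = [], 'down'
        for crease in pattern:
            new.append(direction)    # new crease to the left of an old one
            direction = 'up' if direction == 'down' else 'down'
            new.append(crease)       # keep the old crease in place
        new.append(direction)        # final new crease on the far right
        pattern = new
    return pattern
```

For 2 folds this returns ['down', 'down', 'up'], matching the strip of paper.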

Using this, it wouldn't be too difficult to create a program that takes a number of folds (n) and produces a list of ups and downs that represent the pattern of creases from left to right in a piece of paper folded n times. I would use recursion to produce a list for n folds, from the list for n-1 folds, from the list for n-2 folds, etc., until n = 1 and the list is [down].
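A minimal sketch of that recursive plan (the name creases is my choice):

```python
def creases(n):
    """Return the pattern of creases after n folds, left to right."""
    if n == 1:
        return ['down']
    prev = creases(n - 1)
    # opposing mirror image: reverse the order and flip each crease
    flipped = ['down' if c == 'up' else 'up' for c in reversed(prev)]
    return prev + ['down'] + flipped
```

So creases(3) gives [down, down, up, down, down, up, up], the same pattern as three physical folds, and n folds always produce 2**n - 1 creases.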

I'd be interested to hear if anyone worked out a different solution! Post a link to your solution in the comments.

Sunday 9 November 2014

"Ah ha!" Moments

The second term test happened this week. If you had asked me a week before the test how I thought it would go, I would have told you I was not expecting to do well, because proofs are the bane of my existence. But the day of the midterm I was feeling calm and confident, because in that last week before writing, everything related to proofs started to click. The assignment and tutorials were a major part of this "awakening". As I looked over slides from the lectures, steps that had completely stumped me before were making sense because I understood the reasoning behind the text.

So I was very pleased when the midterm went well! I understood what was being asked in each of the three proofs and I think I managed to prove (or disprove) them in a logical and clear manner. 

Now I'm hoping the same insight will happen with run time analysis! 

Monday 3 November 2014

Big-Oh and Big-Theta

Optimising the efficiency of algorithms is an important component of computer science, because how smoothly programs work and how much computer processing power they need depend on run time. Run-time analysis is a useful way of describing the resource constraints of a particular algorithm. In class, we've been analysing the efficiency of functions on lists by determining their worst-case run time for input of size n.

Big-Oh notation expresses the upper bound of a function's run time, by overestimating the number of steps needed to produce the output. For example, the worst-case scenario when sequentially searching for an item in a list of n items is that the desired item is not in the list, but you only know this after looking at every item in the list. This requires n steps for comparing the input to each item in the list, plus 1 more step for returning the output (False). Therefore, the run time of this search is linear, or O(n).
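A sequential search like this one is what I have in mind (a simple sketch, not code from class):

```python
def sequential_search(lst, item):
    """Return True iff item is in lst, checking one element at a time."""
    for element in lst:    # up to n comparison steps
        if element == item:
            return True
    return False           # one more step when the item is missing
```

In the worst case the loop runs all n times before the final return, giving the n + 1 steps counted above.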

Big-Omega notation expresses the lower bound of a function's run time, by underestimating the number of steps needed to produce the output.

The Big-Oh and Big-Omega bounds for a function usually seem to have the same type of run time (constant, linear, quadratic, etc.), but multiplied by a different constant (c). Because Big-Oh is the asymptotic upper bound, its c is greater than the c for Big-Omega, the asymptotic lower bound. When the upper and lower bounds match like this, the run time can be described with Big-Theta, which combines both into one tight bound.