3×3 Tips – Easy F2L Case


I’ve been doing posts on math and programming lately, so I thought it’d be good to do a post on some cubing. Here’s a neat F2L case that many people have trouble with, but which can in fact be solved with two triggers. Here’s the case (F2L 18 on AlgDb, if anybody’s wondering):


And the corresponding algorithm (Red as Front):

(R’ F R F’) (R U’ R’ U) (R U’ R’)

Basically, what you’re doing is a Sledgehammer, then a fast trigger, then an insert. I find it faster than the original, intuitive version:

y’ (R’ U2 R) (U R’ U’ R)

because of the pesky y’ rotation at the start, and the awkward regrip after the (R’ U2 R).


That’s the tip for today, stay tuned!



Spivak – Rotating Points Problem

So I decided to do a problem from my copy of Michael Spivak’s Calculus; after a bit of looking, I decided on Problem 1 of Appendix 4 – Vectors. It talks about rotating points (really vectors) on a grid, and how to compute the resulting point if you rotate it by some angle θ.

Keep in mind that I’m not using the latest edition; I’m using the 3rd Edition, and the problem can be found on p. 77.


Let’s jump into it!

The definitions:

    The function \(R_\theta(v)\) applies a rotation of angle \(\theta\) to a point (vector) \(v\). Here’s a picture to illustrate just what I meant by that rather confusing statement:
Helpful Picture

We take that purple point \(v\) and rotate it by some angle \(\theta\) to get \(R_\theta(v)\).

The Problem

The statement is rather simple…

Given \(  \theta \) and a point \(v\), can you find a simple form of \(R_\theta(v)\), which does not require a lot of calculations?

I’ll post the solution below…

The Solution

The road map

I’ll explain the ‘road map’ on how to tackle the problem – then we can start tackling it and sorting out the details along the way! Here’s how we’re going to do it:

  1. Solve the ‘trivial’ cases, \( v = (0, 1)\) and \((1, 0)\). They’ll come in later!
  2. Prove that \(R_\theta(u + w) = R_\theta(u) + R_\theta(w)\).
  3. Prove that \(R_\theta(a \cdot u) = a \cdot R_\theta(u)\).
  4. Since any vector \(v\) can be expressed as some combination of \((1, 0)\) and \((0, 1)\) and their multiples, and since we’ll have proved the mechanics along the way, we can put it all together into the closed form of \(R_\theta(v)\).
The details

First of all, let’s treat the easy cases:

\( v = (0, 1)\) and \((1, 0)\)

Here’s the case for \((0, 1)\); the \((1, 0)\) case is just a rotation of it.

Some elementary trigonometry comes into play here; nothing too special.

When you rotate the point \((0, 1)\) by an angle \(\theta\), the result is \((-\sin\theta, \cos\theta)\); when you rotate \((1, 0)\) by \(\theta\), the result is \((\cos\theta, \sin\theta)\).
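If you’d like to sanity-check these two results numerically, here’s a quick Python sketch that uses multiplication by \(e^{i\theta}\) in the complex plane as an independent model of rotation (a standard fact, though not one we prove here; the angle 0.7 is an arbitrary choice of mine):

```python
import cmath
import math

theta = 0.7  # an arbitrary test angle, in radians

def rotate(x, y, theta):
    """Rotate the point (x, y) by theta, via complex multiplication."""
    z = complex(x, y) * cmath.exp(1j * theta)
    return (z.real, z.imag)

# Rotating (1, 0) should give (cos(theta), sin(theta)).
x, y = rotate(1, 0, theta)
assert math.isclose(x, math.cos(theta)) and math.isclose(y, math.sin(theta))

# Rotating (0, 1) should give (-sin(theta), cos(theta)).
x, y = rotate(0, 1, theta)
assert math.isclose(x, -math.sin(theta)) and math.isclose(y, math.cos(theta))
```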

That’s the easy cases down, now we’ll prove that

Lemma 1: $$R_\theta(u + w) = R_\theta(u) + R_\theta(w)$$.


Let \(u, w\) be vectors, and say that the angles they form with the \(x\) axis are \( \theta_1 \) and \( \theta_2 \), respectively. The sum \(u + w\) makes some angle with the \(x\) axis too; call it \(\varphi\) (it lies somewhere between \(\theta_1\) and \(\theta_2\)), as shown:

So, if we rotate \( w + u \) by some angle \( \theta \), the length of \( w + u \) won’t change a bit; it’s only the angle of the vector that changes! The angle goes from \(\varphi\) to \(\varphi + \theta\): the first part is just the angle we already had, and \( \theta \) is the new addition. Therefore, \( R_{\theta}(w + u)\) has the same length as \(w + u\), and an angle of \(\varphi + \theta\). All we need to do now is prove that if you apply \(R_{\theta}\) first to \(w\) and \(u\), and then sum them, the result equals the above, by comparing absolute values and angles: if two vectors have the same angle and the same absolute value, then they have the same polar co-ordinates, and are therefore the same vector. Let’s go.

Now, consider the two vectors \( R_{\theta}(w)\) and \( R_{\theta}(u)\). They have the same absolute values as the original vectors, since rotating a point by an angle does not change its absolute value (think of a point on a circle of fixed radius). In fact, rotating both \(w\) and \(u\) by \(\theta\) rotates the whole parallelogram they span rigidly by \(\theta\), and the sum is exactly the diagonal of that parallelogram. So \( \left| R_{\theta}(w) + R_{\theta}(u) \right| = \left| w + u \right| \), and the angle of \(R_{\theta}(w) + R_{\theta}(u)\) is the diagonal’s old angle plus \(\theta\), which is \( \varphi + \theta \).


Doesn’t that just look familiar? Why, yes! That’s the same angle as the previous vector, \(R_{\theta}(w + u)\)! Therefore, we can confidently say that, since

$$ \left| R_{\theta}(w) + R_{\theta}(u) \right| = \left| w + u \right|  $$


$$ Angle_{R_\theta(u + w)} = Angle_{R_\theta(w) + R_\theta(u)} $$,


$$ R_\theta(u + w) = R_\theta(w) + R_\theta(u) $$.

We’re halfway through the proof! This part was the longest, so congratulations to those who made it through!

The second Lemma:

$$R_{\theta}(a \cdot w) = a \cdot R_{\theta}( w), a \in \mathbb{R}$$


This one isn’t much different from the last proof: show that the absolute values of \( R_{\theta}(a \cdot w) \) and \(a \cdot R_{\theta}( w)\) are the same, and then show that their angles are the same. First, the absolute values. Rotating a point by some angle does not change its length (absolute value), so \( \left|R_{\theta}(a \cdot w)\right| = \left|a \cdot w\right| \), and \( \left| a \cdot R_{\theta}( w) \right| = \left|a \cdot w\right| \). That’s the absolute values settled; what about the angles? Well, stretching a vector by a positive scalar doesn’t affect its angle, so \(Angle_{R_{\theta}(a \cdot w)} = Angle_{R_{\theta}(w)}\) and \( Angle_{a \cdot R_{\theta}(w)} = Angle_{R_{\theta}(w)}\) (for negative \(a\), both sides get flipped by the same half-turn, so the two angles still agree), and that completes the proof of the lemma. Here’s the second Lemma in its full glory:

Lemma 2: $$ R_{\theta}(a \cdot w) = a \cdot R_{\theta}( w), a \in \mathbb{R} $$

The Epic Finale

Ooh! Here’s the fun part! Notice first that any 2-D vector \(w\) can be written as a pair of co-ordinates \( (x, y) \), with \(x, y \in \mathbb{R}\). Specifically, it can be written as \( w = x \cdot \left ( 1, 0 \right ) + y \cdot \left ( 0, 1 \right ) \). Apply the \(R_{\theta}\) function to that, and you get this thing: \(R_{\theta}(w) = R_{\theta}(x \cdot (1, 0) + y \cdot (0, 1))\). By Lemma 1, \(R_{\theta}(x \cdot (1, 0) + y \cdot (0, 1)) = R_{\theta}(x \cdot (1, 0)) + R_{\theta}(y \cdot (0, 1))\). And by Lemma 2, these two terms can be split further, since \(x\) and \(y\) are scalars. By Lemma 2,

\( R_{\theta}(x \cdot (1, 0)) + R_{\theta}(y \cdot (0, 1)) = x \cdot R_\theta((1, 0)) + y \cdot R_\theta((0, 1)) \).

Ooh, what’s this? Our old friends, \( R_\theta((1, 0)) \) and \(R_\theta((0, 1))\)? Directly substituting in \(\begin{cases} R_\theta((0, 1)) = (-\sin\theta, \cos\theta) \\ R_\theta((1, 0)) = (\cos\theta, \sin\theta) \end{cases}\), we obtain,

\(\begin{align*} x \cdot R_\theta((1, 0)) + y \cdot R_\theta((0, 1)) &= x \cdot (\cos\theta, \sin\theta) + y \cdot (-\sin\theta, \cos\theta) \\ &= (x \cos\theta - y \sin\theta, x \sin\theta + y \cos\theta) \end{align*}\)

and that completes the proof. Here’s the full thing:

If a vector \( w \) is given in co-ordinates as \( (x, y) \), then applying a rotation of angle \( \theta \) to it gives the new co-ordinates \( (x \cos\theta - y \sin\theta, x \sin\theta + y \cos\theta) \).
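Here’s the closed form as a small Python sketch (the function name `rotate` and the test vectors are my own choices), checking both lemmas numerically along the way:

```python
import math

def rotate(v, theta):
    """Apply R_theta to the vector v = (x, y), using the closed form."""
    x, y = v
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

theta = 0.3
u, w = (2.0, 1.0), (-1.0, 4.0)

# Lemma 1: R_theta(u + w) = R_theta(u) + R_theta(w)
s = (u[0] + w[0], u[1] + w[1])
lhs = rotate(s, theta)
rhs = tuple(a + b for a, b in zip(rotate(u, theta), rotate(w, theta)))
assert all(math.isclose(a, b) for a, b in zip(lhs, rhs))

# Lemma 2: R_theta(a * u) = a * R_theta(u)
a = 2.5
lhs = rotate((a * u[0], a * u[1]), theta)
rhs = tuple(a * c for c in rotate(u, theta))
assert all(math.isclose(p, q) for p, q in zip(lhs, rhs))
```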


I hear some (a few? A couple? None?) of you say, “What can this even be used for?” Well, one thing I came up with is a proof of the trigonometric identity \( \cos(\theta + \phi) = \cos\theta \cos\phi - \sin\theta \sin\phi \). Given the point \((\cos \theta, \sin \theta)\), we wish to add an angle \( \phi \) to it, and find its new co-ordinates, given by

$$ R_{\phi}(\cos \theta, \sin \theta) = (\cos \theta \cdot \cos \phi - \sin\theta\cdot\sin\phi, \cos\theta\sin\phi + \sin\theta \cdot \cos\phi)$$.

The \(x\) co-ordinate of this new point is \(\cos \theta \cdot \cos \phi - \sin\theta\cdot\sin\phi\). But the new point sits at angle \(\theta + \phi\) on the unit circle (its radius is one), so its \(x\) co-ordinate is also \(\cos(\theta + \phi)\), and we obtain,

$$\cos(\theta + \phi) = \cos \theta \cdot \cos \phi - \sin\theta\cdot\sin\phi$$, and also,

$$\sin(\theta + \phi) = \cos\theta\sin\phi + \sin\theta \cdot \cos\phi$$,

which is correct.
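A quick numerical check of the two identities (a sketch; the test angles are arbitrary choices of mine):

```python
import math

theta, phi = 0.4, 1.1  # arbitrary test angles, in radians

# cos(theta + phi) = cos(theta)cos(phi) - sin(theta)sin(phi)
assert math.isclose(
    math.cos(theta + phi),
    math.cos(theta) * math.cos(phi) - math.sin(theta) * math.sin(phi))

# sin(theta + phi) = cos(theta)sin(phi) + sin(theta)cos(phi)
assert math.isclose(
    math.sin(theta + phi),
    math.cos(theta) * math.sin(phi) + math.sin(theta) * math.cos(phi))
```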

Easy Proof of the Divergence of the Harmonic Series

The Harmonic Series is a really good counterexample to the intuition that “any series whose terms tend to 0 converges”. Let’s see why.

How do you determine whether a series, particularly an infinite one, converges or diverges? An intuitive guess would be something along these lines:

An Infinite Series $$ S = \sum_{n=1}^{\infty}\alpha_n$$ converges if $$\lim_{n \rightarrow \infty} \alpha_n = 0 $$

, an admittedly confusing way of saying that the further you go down the series, the closer its terms get to zero, and that you can get the terms as close to zero as you like by going far enough down the series.

For example:

The series

$$ \alpha_n = (\frac{1}{2})^n$$ converges, and so does

$$ \gamma_n = \frac{1}{n!}$$ (in fact, summed from $$ n = 0$$, it converges to the constant $$ e$$), and so does

$$ \beta_n = (-1)^n\frac{1}{n}$$.
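We can watch these sums settle down numerically (a quick sketch; the term counts are my own choices, the sums start at \(n = 1\) except for \(\frac{1}{n!}\), which starts at \(n = 0\) so that its limit is \(e\), and the alternating series tends to \(-\ln 2\)):

```python
import math

N = 50  # plenty of terms for the two fast-converging series

geometric = sum((1 / 2) ** n for n in range(1, N))          # tends to 1
factorials = sum(1 / math.factorial(n) for n in range(N))   # tends to e
alternating = sum((-1) ** n / n for n in range(1, 10_000))  # tends to -ln 2

assert math.isclose(geometric, 1.0)
assert math.isclose(factorials, math.e)
assert math.isclose(alternating, -math.log(2), rel_tol=1e-3)
```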

But there is one item missing from the list…

Proof of the Divergence of the Harmonic Series

Well, the missing one is the series of reciprocals of the Natural numbers, which is called the Harmonic Series (wiki).

 Spoilers: it DIVERGES!

Here’s an elegant little proof (by contradiction, as usual) that demonstrates this:

Suppose that the sum $$ \sum_{n=1}^{\infty}\frac{1}{n}$$ converges, and converges to $$ S$$. Then, that can be restated:

$$ S = \sum_{n=1}^{\infty}\frac{1}{n} $$.

We’ll take every other fraction and increase its denominator by one, which decreases its value, since

$$\frac{1}{n} > \frac{1}{n+1}$$, for $$ n \in \mathbb{N}$$.

Here are the first few terms of the series, explicitly:

$$ S = \sum_{n=1}^{\infty}\frac{1}{n} = \frac{1}{1} +\frac{1}{2} +\frac{1}{3} +\frac{1}{4} +\frac{1}{5} + \frac{1}{6} +\frac{1}{7} +… $$,

Making the replacements (to $$ \frac{1}{3}, \frac{1}{5}, \frac{1}{7}, \ldots, \frac{1}{2n+1}$$, for $$ n \in \mathbb{N}$$), and noting that replacing terms with smaller ones obviously makes the sum smaller, we can write:

$$ S = \sum_{n=1}^{\infty}\frac{1}{n} > \frac{1}{1} +\frac{1}{2} +(\frac{1}{4} +\frac{1}{4}) +(\frac{1}{6} + \frac{1}{6}) +(\frac{1}{8} + \frac{1}{8}) + …$$

If you add the fractions in each pair together, that comes out to:

$$= \frac{1}{1} +\frac{1}{2} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + …$$,

which we can regroup as

$$ = \frac{1}{2} + (\frac{1}{1}  + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + …) $$

Substituting $$ S$$ for the part in the parentheses, we obtain

$$S = \sum_{n=1}^{\infty}\frac{1}{n} > \frac{1}{2} + (\frac{1}{1}  + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + …) = \frac{1}{2} + S$$,

which, comparing the leftmost and rightmost expressions, gives the absurd inequality:

$$S > S + \frac{1}{2}$$.

This is a contradiction, so our assumption that the series converges must be false.


The result we just proved means one thing:

The Harmonic Series does not converge to a single value.

What does this mean? Well, since the partial sums only ever increase, we can make the sum larger than any number we like, provided we add enough terms! It may be counter-intuitive, but the proof is above. And it leads to some really counter-intuitive results:

Let’s see one from Wikipedia:

If I had an unlimited number of blocks and a table, I could stack the blocks so that the top one sticks out an arbitrary distance past the edge of the table, provided I use enough blocks.

This is pretty easy: stack the top block with half its length sticking out over the one below, the next with a quarter sticking out, the next with a sixth, and so on. The total overhang is then $$ \frac{1}{2}(1 + \frac{1}{2} + \frac{1}{3} + \ldots)$$, half the series above, which still tends to infinity, albeit very, very slowly.
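To get a feel for just how slow that divergence is, here’s a quick sketch (the helper name `terms_to_exceed` is my own) counting how many terms of the harmonic series are needed to pass a given bound:

```python
def terms_to_exceed(bound):
    """Smallest n with 1 + 1/2 + ... + 1/n > bound."""
    total, n = 0.0, 0
    while total <= bound:
        n += 1
        total += 1 / n
    return n

# The partial sums pass any bound eventually, but glacially:
assert terms_to_exceed(2) == 4       # 1 + 1/2 + 1/3 + 1/4 is about 2.083
assert terms_to_exceed(5) == 83
assert terms_to_exceed(10) == 12367
```

(And since the table overhang after stacking \(n\) blocks is half the \(n\)-th partial sum, sticking out ten block-lengths already takes a tower of well over ten thousand blocks.)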



Train Station Math

Hey all!



The new Cambridge North train station

The great YouTuber and mathematician James Grime (aka singingbanana) recently posted a video about a new train station in Cambridge, which has some very interesting patterns on it (above). While not derived from Conway’s Game of Life, the pattern comes from an elementary cellular automaton, dubbed Rule 135 by mathematician Stephen Wolfram. Here’s a link (and the wiki page) if you’re interested. Each of these automata can be described by a non-negative integer less than \(2^8\), or \(256\). Below is an example.

(example is from Rule 30; picture is from the Wolfram MathWorld page.)


So, why the name?

Good question!

Notice that the above number can be written as \(00011110_2\). Convert that to decimal, and you get the number 30. Similarly, if you had Rule 255, that just means you’d have to colour in every grid, regardless of the pattern above it; if you had Rule 0, you wouldn’t have to colour in any grid at all.

So, to answer the question: it’s simply a matter of ‘Rule 30’ sounding better than ‘Rule 00011110 in base 2’.

Now that we’ve got the naming out of the way…

Here’s how it works:

  1. Define a starting position. For example, I could have a row of all blank grids, and one filled in in the middle.
  2. For each grid in the next row, look at the grid directly above it, the grid to the upper-left, and the grid to the upper-right. The rule’s pattern decides whether the grid we’re looking at gets coloured or not. For example: if the grid above was filled in, the upper-left blank, and the upper-right filled in, we’d find the picture corresponding to that neighbourhood (fifth one from the left), and see that the grid ought to be filled in. Do this for every grid in the row, then move on to the next row. Rinse and repeat.

    Here’s a little demo that I hacked up in JSBin with canvas: Demo

You can get some really beautiful pictures out of this: I’ve used the ruleN.randomise() function, and they look great! Block size was set to 50, iterations to 10.
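The two-step procedure above can be sketched in a few lines of Python (a text-mode analogue; this is not the JSBin demo’s code, and the names `step`, `width`, and `row` are my own):

```python
def step(row, rule):
    """Compute the next row of an elementary cellular automaton.

    Each cell looks at (upper-left, above, upper-right); those three
    bits, read as a number from 0 to 7, index into the rule number's
    binary expansion. Cells off the edges are treated as blank (0).
    """
    padded = [0] + row + [0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# Start with a row of blank grids, and one filled in in the middle.
width = 15
row = [0] * width
row[width // 2] = 1

for _ in range(width // 2 + 1):
    print("".join("#" if cell else "." for cell in row))
    row = step(row, 30)  # try 135 for the Cambridge North pattern
```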


Rule 95
Rule 30
Rule 27



I think it’s amazing that such a basic rule (turning grids black or white) can produce such mesmerizing patterns! What’s more, many of them have very interesting properties. Some act like logic gates: Rule 90 computes XOR, and Rule 110 has even been proven Turing-complete! Imagine trying to run a simple ‘Hello World’ on that! My favourite Rule is probably 90, which has the property of generating a Sierpinski Triangle…. Spoilers! I might make a short article explaining the code, but I think the comments are pretty self-explanatory.


Thanks for reading!





Welcome to Cubetopia Ver. 2!

After I accidentally forgot to renew my service, and the Namecheap servers deleted my old website’s records, I was in a kind of limbo, not sure what to do. But now, I’m going to start the blog anew!

The blog will now be more diverse than its predecessor, Cubetopia Ver. 1. It will cover more than the art of cubing; topics such as Computer Science (admittedly just coding simple things) and maths will appear on the blog later.

I hope that my readers are just as excited as I am for the new release!


–Sean, looking for a new start.