Spivak – Rotating Points Problem

So I decided to do a problem from my copy of Michael Spivak’s Calculus; after a bit of looking, I decided on Problem 1 of Appendix 4 – Vectors. It talks about rotating points (really vectors) on a grid, and how to compute the resulting point if you rotate it by some angle θ.

Keep in mind that I’m not using the latest edition; I’m using the 3rd Edition, and the problem can be found on p. 77.

 

Let’s jump into it!

The definitions:

    The function \(R_\theta(v)\) applies a rotation of angle \(\theta\) to a point (vector) \(v\). Here’s a picture to illustrate just what I meant by that rather confusing statement:
Helpful Picture

We take that purple point \(v\) and rotate it by some angle \(\theta\) to get \(R_\theta(v)\).

The Problem

The statement is rather simple…

Given \( \theta \) and a point \(v\), can you find a simple form of \(R_\theta(v)\) that does not require a lot of calculation?

I’ll post the solution below…

The Solution

The road map

I’ll explain the ‘road map’ for tackling the problem first – then we can work through it, sorting out the details along the way! Here’s how we’re going to do it:

  1. Solve the ‘trivial’ cases, \( v = (1, 0)\) and \((0, 1)\). They’ll come in handy later!
  2. Prove that \(R_\theta(u + w) = R_\theta(u) + R_\theta(w)\).
  3. Prove that \(R_\theta(a \cdot u) = a \cdot R_\theta(u)\).
  4. Since any vector \(v\) can be expressed as a combination of multiples of \((1, 0)\) and \((0, 1)\), and since we’ll have proved all the mechanics along the way, we can put it all together into the closed form of \(R_\theta(v)\).
The details

First of all, let’s treat the easy cases:

\( v = (0, 1)\) and \((1, 0)\)

The case for \((0, 1)\); the case for \((1, 0)\) is just a rotation of the same picture.

Some elementary trigonometry comes into play here; nothing too special.

When you rotate the point \((0, 1)\) by any angle \(\theta\), the result is \((-\sin\theta, \cos\theta)\); when you rotate \((1, 0)\) by \(\theta\), the result is \((\cos\theta, \sin\theta)\). (Both are read straight off the unit circle: a point at angle \(\alpha\) from the positive \(x\) axis sits at \((\cos\alpha, \sin\alpha)\), and \((1, 0)\) starts at angle \(0\) while \((0, 1)\) starts at angle \(\frac{\pi}{2}\).)
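As a quick sanity check, take \(\theta = \frac{\pi}{2}\), a quarter turn: rotating \((1, 0)\) should land exactly on \((0, 1)\), and indeed \((\cos\frac{\pi}{2}, \sin\frac{\pi}{2}) = (0, 1)\); likewise, rotating \((0, 1)\) by the same quarter turn gives \((-\sin\frac{\pi}{2}, \cos\frac{\pi}{2}) = (-1, 0)\), pointing along the negative \(x\) axis, just as it should.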

That’s the easy cases down; now we’ll prove

Lemma 1: $$R_\theta(u + w) = R_\theta(u) + R_\theta(w)$$.

Proof:

Let \(u, w\) be vectors, and assume that the angles formed by the vectors with the \(x\) axis are \( \theta_1 \) and \( \theta_2 \), respectively. Their sum \(u + w\) is the diagonal of the parallelogram whose sides are \(u\) and \(w\); call the angle this diagonal makes with the \(x\) axis \(\varphi\). (Its exact value depends on both angles and on both lengths – it equals the average \(\frac{\theta_1 + \theta_2}{2}\) only when \(\left|u\right| = \left|w\right|\) – but we won’t need the exact value, only how it changes under rotation.)

So, if we rotate \( w + u \) by some angle \( \theta \), the length of \( w + u \) won’t change a bit; it’s clearly the angle of the vector that changes! The angle goes from \(\varphi\) to \(\varphi + \theta\): the first part is just the angle we already had, and the newly added \( \theta \) is the new addition here. Therefore, \( R_{\theta}(w + u)\) has length \(\left| w + u \right|\) and angle \(\varphi + \theta\). All we need to do now is prove that if you apply \(R_{\theta}\) first to \(w\) and \(u\), then sum them, the result equals the above, by comparing their absolute values and their angles: if their angles are the same, and so are their absolute values, then they have the same polar co-ordinates, and therefore are the same vector. Let’s go.

Now, consider the two vectors, \( R_{\theta}(w)\) and \( R_{\theta}(u)\). They have the same absolute values as the original vectors, since rotating a point by an angle does not change its absolute value (think of a point on a circle of fixed radius), and the angle between them is the same as the angle between \(w\) and \(u\), since both directions were shifted by the same \(\theta\). In other words, the parallelogram spanned by \( R_{\theta}(w)\) and \( R_{\theta}(u)\) is just the parallelogram spanned by \(w\) and \(u\), rotated rigidly by \(\theta\). Its diagonal is \(R_{\theta}(w) + R_{\theta}(u)\), so \( \left| R_{\theta}(w) + R_{\theta}(u) \right| = \left| w + u \right| \), and its angle is the old diagonal’s angle shifted by \(\theta\), namely \( \varphi + \theta \).

Heeey!

Doesn’t that just look familiar? Why, yes! That’s the same angle as the previous vector, \(R_{\theta}(w + u)\)! Therefore, we can confidently say that, since

$$ \left| R_{\theta}(w) + R_{\theta}(u) \right| = \left| w + u \right|  $$

and

$$ Angle_{R_\theta(u + w)} = Angle_{R_\theta(u) + R_\theta(w)} $$,

then,

$$ R_\theta(u + w) = R_\theta(u) + R_\theta(w) $$.

We’re halfway through the proof! This was the longest part; congratulations to those who made it through!
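(Before moving on: Lemma 1 is easy to sanity-check numerically. Here’s a minimal sketch in Python, modelling points as complex numbers and using the standard fact – independent of our derivation – that multiplying by \(e^{i\theta}\) rotates a point about the origin by \(\theta\); the vectors and the angle are arbitrary picks of mine.)

```python
import cmath

def rotate(v, theta):
    # Multiplying a complex number by e^(i*theta) rotates it about the
    # origin by theta -- a standard fact, independent of our derivation.
    return v * cmath.exp(1j * theta)

u, w, theta = complex(2, 1), complex(-1, 3), 0.7

lhs = rotate(u + w, theta)                  # R_theta(u + w)
rhs = rotate(u, theta) + rotate(w, theta)   # R_theta(u) + R_theta(w)
print(abs(lhs - rhs) < 1e-12)               # True: both sides agree
```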

The second Lemma:

$$R_{\theta}(a \cdot w) = a \cdot R_{\theta}( w), a \in \mathbb{R}$$

Proof:

This one isn’t much different from the last proof: prove that the absolute values of \( R_{\theta}(a \cdot w) \) and \(a \cdot R_{\theta}(w)\) are the same, and then prove that their angles are the same.

First, the absolute values. Rotating a point by some angle does not change its length (absolute value), so \( \left|R_{\theta}(a \cdot w)\right| = \left|a \cdot w\right| \), and \( \left| a \cdot R_{\theta}(w) \right| = \left|a\right| \cdot \left|R_{\theta}(w)\right| = \left|a \cdot w\right| \).

That settles the absolute value; what about the angle? Scaling a vector by a positive scalar doesn’t affect its angle, and therefore \(Angle_{R_{\theta}(a \cdot w)} = Angle_{R_{\theta}(w)}\), and \( Angle_{a \cdot R_{\theta}(w)} = Angle_{R_{\theta}(w)}\). (For a negative \(a\), both angles pick up the same extra half-turn, so the equality still holds; and for \(a = 0\) both sides are the zero vector.) That completes the proof of the lemma. Here’s the second Lemma in its full glory:

Lemma 2: $$ R_{\theta}(a \cdot w) = a \cdot R_{\theta}( w), a \in \mathbb{R} $$
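(The same numeric sanity check works for Lemma 2, using the same complex-number model of rotation as in the Lemma 1 sketch, with an arbitrary scalar of my choosing:)

```python
import cmath

def rotate(v, theta):
    # Same complex-multiplication model of rotation as in the Lemma 1 check.
    return v * cmath.exp(1j * theta)

a, w, theta = 2.5, complex(-1, 3), 0.7
print(abs(rotate(a * w, theta) - a * rotate(w, theta)) < 1e-12)  # True
```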

The Epic Finale

Ooh! Here’s the fun part! Notice first that any 2-D vector \(w\) can be written as a pair of co-ordinates \( (x, y) \), with \(x, y \in \mathbb{R}\). Specifically, it can be written as \( w = x \cdot \left ( 1, 0 \right ) + y \cdot \left ( 0, 1 \right ) \). Apply the \(R_{\theta}\) function to that, and you get this thing: \(R_{\theta}(w) = R_{\theta}(x \cdot (1, 0) + y \cdot (0, 1))\). By Lemma 1, \(R_{\theta}(x \cdot (1, 0) + y \cdot (0, 1)) = R_{\theta}(x \cdot (1, 0)) + R_{\theta}(y \cdot (0, 1))\). And these two terms can be split further, since \(x\) and \(y\) are scalars. By Lemma 2,

\( R_{\theta}(x \cdot (1, 0)) + R_{\theta}(y \cdot (0, 1)) = x \cdot R_\theta((1, 0)) + y \cdot R_\theta((0, 1)) \).

Ooh, what’s this? Our old friends, \( R_\theta((1, 0)) \) and \(R_\theta((0, 1))\)? Directly substituting in \(\begin{cases} R_\theta((0, 1)) = (-\sin\theta, \cos \theta)
\\
R_\theta((1, 0)) = (\cos\theta, \sin \theta) \end{cases}\), we obtain,

\(\begin{align*}R_{\theta}(x \cdot (1, 0)) + R_{\theta}(y \cdot (0, 1)) &= x \cdot (\cos\theta, \sin \theta) + y \cdot (-\sin\theta, \cos \theta) \\ &= (x \cos \theta, x \sin \theta) + (-y \sin \theta, y \cos \theta) \\ &= (x \cos \theta - y \sin \theta, x \sin \theta + y \cos \theta)\end{align*}\)

and that completes the proof. Here’s the full thing:

If a vector \( w \) is given in co-ordinates as \( (x, y) \), then I can apply a rotation of angle \( \theta \) to it and find its new co-ordinates, which are given by the formula \( (x \cos \theta - y \sin \theta, x \sin \theta + y \cos \theta) \).
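To make the formula concrete, here’s a minimal sketch of it in Python (the function name rotate is my own choice):

```python
import math

def rotate(v, theta):
    """Rotate the point v = (x, y) about the origin by theta,
    using the closed form derived above."""
    x, y = v
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# A quarter turn should send (1, 0) to (0, 1):
print(rotate((1, 0), math.pi / 2))  # (6.1e-17, 1.0), i.e. (0, 1) up to rounding
```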

Cool!

I hear some (a few? A couple? None?) of you say, “What can this even be used for?” Well, one thing I came up with is a proof of the trigonometric identity \( \cos(\theta + \phi) = \cos \theta \cos \phi - \sin \theta \sin \phi \). Given the point \((\cos \theta, \sin \theta)\) on the unit circle, we wish to add an angle \( \phi \) to it, and find its co-ordinates, given by

$$ R_{\phi}(\cos \theta, \sin \theta) = (\cos \theta \cdot \cos \phi - \sin\theta\cdot\sin\phi, \cos\theta\sin\phi + \sin\theta \cdot \cos\phi)$$.

The \(x\) co-ordinate of this new point is \(\cos \theta \cdot \cos \phi - \sin\theta\cdot\sin\phi\), and since the point at angle \(\theta + \phi\) on the unit circle has \(x\) co-ordinate \(\cos(\theta + \phi)\) and radius one, we obtain,

$$\cos(\theta + \phi) = \frac{\cos \theta \cdot \cos \phi - \sin\theta\cdot\sin\phi}{1} = \cos \theta \cdot \cos \phi - \sin\theta\cdot\sin\phi$$, and also,

$$\sin(\theta + \phi) = \frac{\cos\theta\sin\phi + \sin\theta \cdot \cos\phi}{1} = \cos\theta\sin\phi + \sin\theta \cdot \cos\phi$$

both of which are correct.
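And the two identities check out numerically; here’s a quick sketch (the angles are arbitrary picks of mine):

```python
import math

theta, phi = 0.4, 1.1  # two arbitrary angles

print(math.isclose(math.cos(theta + phi),
                   math.cos(theta) * math.cos(phi) - math.sin(theta) * math.sin(phi)))  # True
print(math.isclose(math.sin(theta + phi),
                   math.sin(theta) * math.cos(phi) + math.cos(theta) * math.sin(phi)))  # True
```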

Easy Proof of the Divergence of the Harmonic Series

The Harmonic Series is a really good counterexample to the intuition that “any series whose terms tend to 0 converges”. Let’s see why.

How do you determine whether an infinite series converges or diverges? An intuitive guess would be something along the lines of this:

An infinite series $$ S = \sum_{n=1}^{\infty}\alpha_n$$ converges if $$\lim_{n \rightarrow \infty} \alpha_n = 0 $$

This is an admittedly confusing way of saying that the further you go down the series, the closer the terms get to zero – you can get the terms as close to zero as you like by going far enough down the series.

For example, the series with terms

$$ \alpha_n = \left(\frac{1}{2}\right)^n$$ converges, and so does the one with

$$ \gamma_n = \frac{1}{n!}$$ (in fact, $$ \sum_{n=0}^{\infty}\frac{1}{n!}$$ is the constant $$ e$$), and so does the one with

$$ \beta_n = (-1)^n\frac{1}{n}$$ (which sums to $$ -\ln 2$$).
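(A quick numeric sanity check of those three sums; a minimal sketch, where the 100-term cutoff is an arbitrary choice of mine:)

```python
import math

N = 100  # arbitrary cutoff, plenty to see where each sum is heading

geometric   = sum((1 / 2) ** n for n in range(1, N + 1))
exponential = sum(1 / math.factorial(n) for n in range(0, N + 1))
alternating = sum((-1) ** n / n for n in range(1, N + 1))

print(geometric)    # ~1.0
print(exponential)  # ~2.71828...  (e)
print(alternating)  # ~-0.688, slowly approaching -ln 2 = -0.6931...
```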

But there is one item missing from the list…

Proof of the Divergence of the Harmonic Series

Well, that missing item is called the Harmonic Series (wiki): the name given to the series of the reciprocals of the natural numbers, $$ \sum_{n=1}^{\infty}\frac{1}{n}$$.

 Spoilers: it DIVERGES!

Here’s an elegant little proof (by contradiction, as usual) that demonstrates this:

Suppose that the sum $$ \sum_{n=1}^{\infty}\frac{1}{n}$$ converges, and converges to $$ S$$. Then, that can be restated as:

$$ S = \sum_{n=1}^{\infty}\frac{1}{n} $$.

The key fact we’ll use is that increasing a fraction’s denominator decreases its value:

$$\frac{1}{n} > \frac{1}{n+1}$$, for $$ n \in \mathbb{N}$$.

Here are the first few terms of the series, explicitly:

$$ S = \sum_{n=1}^{\infty}\frac{1}{n} = \frac{1}{1} +\frac{1}{2} +\frac{1}{3} +\frac{1}{4} +\frac{1}{5} + \frac{1}{6} +\frac{1}{7} +… $$.

Now make the replacements: decrease each odd-denominator term ($$ \frac{1}{3}, \frac{1}{5}, \frac{1}{7}, … \frac{1}{2n+1}$$, for $$ n \in \mathbb{N}$$) to the next fraction down ($$ \frac{1}{4}, \frac{1}{6}, \frac{1}{8}, …$$). Replacing terms with something strictly smaller makes the sum strictly smaller, thus we can write:

$$ S = \sum_{n=1}^{\infty}\frac{1}{n} > \frac{1}{1} +\frac{1}{2} +(\frac{1}{4} +\frac{1}{4}) +(\frac{1}{6} + \frac{1}{6}) +(\frac{1}{8} + \frac{1}{8}) + …$$

If you add each bracketed pair of fractions together, that comes out to:

$$= \frac{1}{1} +\frac{1}{2} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + …$$,

and pulling one of the halves out to the front,

$$ = \frac{1}{2} + (\frac{1}{1}  + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + …) $$

Substituting $$ S$$ for the sum in the parentheses, we obtain

$$S = \sum_{n=1}^{\infty}\frac{1}{n} > \frac{1}{2} + (\frac{1}{1}  + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + …) = \frac{1}{2} + S$$,

which, comparing the first and last expressions, becomes the absurd inequality:

$$S > S + \frac{1}{2}$$.

This is a contradiction, so our assumption was wrong: the Harmonic Series cannot converge.

Conclusion:

The result we just proved means one thing:

The Harmonic Series does not converge to any finite value.

What does this mean? Since every term is positive, the partial sums only ever increase, and an increasing sequence that doesn’t converge must grow past every bound. In other words, we can make the sum larger than any number, provided we add up enough terms! It may be counter-intuitive, but the proof is above.
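Here’s a minimal sketch of just how slowly that growth happens, by brute-force summation (the targets are arbitrary picks of mine):

```python
def terms_needed(target):
    """Smallest n for which 1 + 1/2 + ... + 1/n exceeds `target`."""
    total, n = 0.0, 0
    while total <= target:
        n += 1
        total += 1.0 / n
    return n

for t in (2, 5, 10):
    print(t, terms_needed(t))  # 2 -> 4, 5 -> 83, 10 -> 12367
```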

And this can lead to some really counter-intuitive results; let’s see one from Wikipedia:

If I had an unlimited supply of blocks and a table, I could stack the blocks so that the top block hangs an arbitrary distance past the edge of the table, provided I use enough blocks.

This follows directly from the divergence: stack the top block with half a block-length sticking out over the one below it, the next with a quarter sticking out, the one after that with a sixth, and so on, the $$ k$$-th block from the top sticking out by $$ \frac{1}{2k}$$. The total overhang with $$ n$$ blocks is then $$ \frac{1}{2}\left(\frac{1}{1} + \frac{1}{2} + \frac{1}{3} + … + \frac{1}{n}\right)$$, half of the harmonic series, which still tends to infinity, albeit very, very slowly.
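Here’s a minimal sketch of the block count this takes, under the stacking just described (the function name is my own):

```python
def blocks_for_overhang(target):
    """Blocks needed so the top block sticks out past the table's edge
    by more than `target` block-lengths: the k-th block from the top
    is shifted by 1/(2k), so n blocks give an overhang of H_n / 2."""
    overhang, n = 0.0, 0
    while overhang <= target:
        n += 1
        overhang += 1.0 / (2 * n)
    return n

print(blocks_for_overhang(1))  # 4 blocks for one block-length of overhang
print(blocks_for_overhang(2))  # 31 blocks for two block-lengths
```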

 

 

Welcome to Cubetopia Ver. 2!

After I accidentally forgot to renew my service and the Namecheap servers deleted my old website records, I was in a kind of limbo, not sure what to do. But now, I’m going to start the blog anew!

The blog will now be more diverse than its predecessor, Cubetopia Ver. 1. It will cover more than the art of cubing; topics such as Computer Science (admittedly just coding simple things) and maths will appear on the blog later.

I hope that my readers are just as excited as I am for the new release!

 

–Sean, looking for a new start.