Monday, March 16, 2015

Distributional Calculus Part 4: Properties of Distributions

So, in the previous post, we found that distributions give an alternative way to characterize functions: instead of mapping points of a set $X$ to values, they map test functions to values. Test functions turn out to be completely central to how operations are performed! In fact, I'll spoil the entire content of this post by saying that any operation on $f$ can be 'moved' over to act on the test functions instead.

But before that, let's list some basic properties, which are more evocative of elementary real analysis than anything else. For a distribution $f$ acting on test functions via $\langle f, \phi\rangle$:
  • Linearity, i.e. $f(a\phi_1+\phi_2)=af(\phi_1)+f(\phi_2)$ for any real constant $a$ and test functions $\phi_1,\ \phi_2$;
  • There exists a sequence of test functions $\{\phi_n\}$ such that $\phi_n \to f$ in the sense of distributions.
All of these properties are necessary, but we'll be making the most use of the second one. Recall the super-useful integral characterization of a distribution, $$T_f(\phi)=\int_{\mathbb{R}}f(x)\phi(x)\,dx.$$ That can only be written down if $f$ is an honest function with no weird generalized properties. Yet now, if we consider $f$ as the limit of a sequence of test functions, each $\phi_n$ is a classically defined function, and it becomes possible to write $$T_f(\phi)=\lim_{n\to\infty}\int_{\mathbb{R}}\phi_n(x)\phi(x)\;dx$$ for any generalized function $f$.* Great! Now we can look at any and all distributions the easy way.

The real magic starts when we attempt to translate the distribution. Recall that any function can be translated $y$ units by taking $f(x-y)$ instead of $f(x)$; the same thing can be done for generalized functions by considering $\lim_{n\to\infty}\langle \phi_n(x-y),\phi(x)\rangle$. (Let's define the translation operator $\tau$ by $\tau_y\phi(x)=\phi(x-y)$.) Using some simple $u$-substitution magic, \begin{align}\langle\tau_yT_f,\phi\rangle &=\lim_{n\to\infty}\langle \phi_n(x-y),\phi(x)\rangle\\&=\lim_{n\to\infty}\int_{\mathbb{R}}\phi_n(x-y)\phi(x)\;dx;\qquad u = x-y\\&=\lim_{n\to\infty}\int_{\mathbb{R}}\phi_n(u)\phi(u+y)\;du\\&=\langle T_f,\tau_{-y}\phi\rangle.\end{align} We have essentially found that any distribution can be translated by applying the opposite translation to every test function in $\mathcal{D}$. To reiterate:$$\langle\tau_yT_f,\phi\rangle =\langle T_f,\tau_{-y}\phi\rangle.$$ Hooray!
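If you want to see the translation identity in action, here's a quick numerical sanity check in Python/NumPy. The particular $f$, $\phi$, grid, and shift are illustrative choices of mine; $\phi$ here is a rapidly decaying stand-in for a true compactly supported test function.

```python
import numpy as np

# Numerical check of <tau_y f, phi> = <f, tau_{-y} phi> for an ordinary f,
# so both sides are plain integrals. f, phi, the grid, and the shift y are
# illustrative choices; phi is a decaying stand-in for a test function.

x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
y = 1.5                                   # translation amount

f = lambda t: np.cos(t) * np.exp(-t**2 / 8)
phi = lambda t: np.exp(-t**2)

lhs = np.sum(f(x - y) * phi(x)) * dx      # <tau_y f, phi>
rhs = np.sum(f(x) * phi(x + y)) * dx      # <f, tau_{-y} phi>
print(lhs, rhs)                           # equal up to quadrature error
```

The same $u$-substitution from the derivation is exactly why the two sums agree: shifting $f$ one way is the same as shifting $\phi$ the other way.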

Differentiating a distribution works in much the same way as translation in that the operation gets pawned off onto the test function but with an extra minus sign. However, it does involve an extra technique: integration by parts. I assume that nobody who is reading this is unfamiliar with the practice, but, for the sake of cute mnemonics, a friend of my fiancé's refers to $$\int u\;dv = uv - \int v \;du$$as "sudv uv svidoo."

Let's take a moment to appreciate how adorable that is.

The actual fancy differentiation trick can be proved in essentially one integration-by-parts step:\begin{align}\left\langle \frac{d}{dx}T_f,\phi\right\rangle &= \lim_{n\to\infty}\int_\mathbb{R}\left(\frac{d}{dx}\phi_n(x)\right)\phi(x)\;dx\\&= \lim_{n\to\infty}-\int_\mathbb{R}\phi_n(x)\left(\frac{d}{dx}\phi(x)\right)\;dx \\ &=\left\langle T_f, -\frac{d}{dx}\phi(x)\right\rangle\end{align}(brownie points if you've already figured out what happened to the $uv$ term). This identity is essential for a crazy number of distributional calculus proofs.
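Here's a quick numerical sketch of that identity for an honest smooth $f$, where both pairings are ordinary integrals. The functions and grid below are illustrative stand-ins, not anything canonical.

```python
import numpy as np

# Check <f', phi> = <f, -phi'> for a smooth, rapidly decaying f,
# where both pairings are ordinary integrals. The functions and grid
# are illustrative stand-ins.

x = np.linspace(-8, 8, 400001)
dx = x[1] - x[0]

f = np.sin(x) * np.exp(-x**2 / 4)
phi = np.exp(-x**2)                # decaying stand-in for a test function

df = np.gradient(f, dx)            # numerical derivatives
dphi = np.gradient(phi, dx)

lhs = np.sum(df * phi) * dx        # <f', phi>
rhs = -np.sum(f * dphi) * dx       # <f, -phi'>
print(lhs, rhs)                    # agree up to discretization error
```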

For example, we can directly use this identity to prove the Dirac delta function is the distributional derivative of the Heaviside function in two seconds. Let $T_H(\phi)= \int_0^\infty \phi(x)\,dx$ represent the Heaviside distribution. Now, from the above identity, we conclude $$\left\langle \frac{d}{dx}T_H,\phi\right\rangle=\left\langle T_H,-\frac{d}{dx}\phi\right\rangle=-\int_0^\infty \phi'(x)\,dx=\phi(0)-\phi(\infty)=\phi(0),$$ where the last step uses that $\phi$ is zero at infinity. Yet $\phi(0)=\langle \delta, \phi\rangle$ by definition! We're done here.
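The Heaviside computation can also be sanity-checked numerically: for a compactly supported smooth $\phi$, the quantity $-\int_0^\infty \phi'(x)\,dx$ should land on $\phi(0)$. I use the standard bump $e^{-1/(1-x^2)}$ as $\phi$; the grid is an illustrative choice.

```python
import numpy as np

# Numerical check that -<H, phi'> = phi(0): for a compactly supported
# smooth phi, -∫_0^∞ phi'(x) dx should equal phi(0). We use the standard
# bump e^{-1/(1-x^2)}; the grid is an illustrative choice.

def phi(t):
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside]**2))
    return out

x = np.linspace(0, 1.5, 300001)          # [0, ∞) truncated past supp(phi)
dx = x[1] - x[0]
dphi = np.gradient(phi(x), dx)

lhs = -np.sum(dphi) * dx                 # -<H, phi'>
phi0 = phi(np.array([0.0]))[0]           # phi(0) = e^{-1}
print(lhs, phi0)                         # both approximately 0.3679
```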

As super awesome as that is, there should be some material on how all this pertains to weak solutions of DEs up on Thursday. Woooo! This is basically my definition of a party!

* The MCT happened here. Shhh.

Wednesday, March 11, 2015

Distributional Calculus Part 3: Distributions

Sorry for the delay, guys! I just started a rather demanding full-time job, so it may be a bit hard to keep up the quality of these posts. Let's hope it gets easier...

Today brings us to the most important definition in distributional calculus: the distributions themselves.

Here's the formal definition using the set of test functions $\mathcal{D}(\mathbb{R})$ we defined earlier:

Any continuous linear functional $T: \mathcal{D}(\mathbb{R}) \to \mathbb{R}$ is called a distribution. In addition, for a locally integrable function $f(x):X\to\mathbb{R}$, a corresponding distribution can be defined by $$T_f(\phi)=\int_{\mathbb{R}}f(x)\phi(x)\;dx.$$We usually write $\langle T, \phi\rangle$ instead of $T(\phi)$ and call the set of all distributions $\mathcal{D}'(\mathbb{R})$.

There are only two things needed to truly understand this definition: how to take the average of a continuous function, and what test functions are. Check out the integrand. Multiplying the target function $f(x)$ by each individual test function $\phi(x)$ has the effect of scaling $f(x)$ at every point---in particular, the integrand zeros out outside the support of $\phi(x)$, while the other points are weighted according to $\phi(x)$. Hence every individual pairing in the definition is a weighted average of $f(x)$ over a compact set. (Strichartz directly compares this to finding the temperature of a room with a thermometer: it won't display the temperature at one point, but rather the average temperature of some portion of the room.) If each of these weighted averages is known for every existing $\phi(x)$, that is what defines the distribution.

Defining distributions in this way lets us account for objects that we think look like functions, but actually aren't. The Dirac delta function is the perfect example---the infinite value at zero ruins everything, so it isn't really a function*. However, the integral of $\delta(x)$ is bounded no matter what test function we weight it by, so the 'average' exists over every possible range, meaning $\delta(x)$ is a distribution. In particular, $$\langle \delta,\phi\rangle = \phi(0).$$
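One way to watch that pairing emerge numerically: pair narrower and narrower unit-mass Gaussians with a test function and see the integrals march toward $\phi(0)$. Gaussians are just one standard approximating family; the particular $\phi$ and grid below are my own illustrative choices.

```python
import numpy as np

# Pair unit-mass Gaussians g_eps with a test function: as eps shrinks,
# <g_eps, phi> converges to phi(0) = <delta, phi>. Gaussians are one
# standard approximating family; phi and the grid are illustrative.

def g(x, eps):
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

phi = lambda x: np.cos(x) * np.exp(-x**2)   # smooth, decaying stand-in

x = np.linspace(-5, 5, 400001)
dx = x[1] - x[0]
for eps in (1.0, 0.1, 0.01):
    pairing = np.sum(g(x, eps) * phi(x)) * dx
    print(eps, pairing)   # marches toward phi(0) = 1
```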

It would be good to go over a couple of useful properties of distributions, starting with the issue of consistency. This was supposed to happen today! Unfortunately, I'm dead tired and need to go lie down forever. Let's leave the important properties for next week.

* The Dirac delta function is to functions what killer whales are to whales... a complete misnomer.

Wednesday, March 4, 2015

Distributional Calculus Part 2: Compact support and test functions

Our goal with this series is to provide a resource for basic distribution theory that includes all of the formal definitions, justifications and theorems with as little hand-waving as possible, while also fully explaining these definitions through appeals to intuition. The following is written assuming an audience who cares or wants to care about mathematical formality but needs some intuitive background in order to learn quickly.

A few minor definitions are needed to understand what distributions represent. We define a set $X$ and function $\phi: X\to\mathbb{R}$ for the rest of this post.

The first two definitions are very simple.

Definition: Suppose $X\subseteq\mathbb{R}^n$ is open and $\phi$ is measurable. We say $\phi$ is locally integrable if, for all compact subsets $A$ of $X$,
$$\int_A |\phi(x)|\;dx< \infty.$$The space of all such functions is called $L_1^{loc}$; requiring $|\phi|^p$ to be integrable over compact sets instead gives $L_p^{loc}$.
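A concrete sketch of the distinction: $|x|^{-1/2}$ blows up at the origin but is still locally integrable, while its integral over all of $\mathbb{R}$ diverges. The quick numerical check below is illustrative (the grids are arbitrary choices of mine).

```python
import numpy as np

# 1/sqrt(|x|) blows up at the origin yet is locally integrable:
# its integral over the compact set [-1, 1] is finite (exactly 4),
# while the integral over all of R diverges like 2*sqrt(N).

f = lambda x: 1.0 / np.sqrt(np.abs(x))

def trap(y, x):
    """Simple trapezoid rule, to avoid version-specific numpy names."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# integrate over [eps, 1], double by symmetry, and push eps toward 0
for eps in (1e-2, 1e-4, 1e-6):
    xs = np.linspace(eps, 1, 200001)
    local = 2 * trap(f(xs), xs)
    print(eps, local)            # creeps up toward 4, but stays finite

# the tail over [1, N] keeps growing: f is NOT integrable on all of R
for N in (1e2, 1e4, 1e6):
    xs = np.linspace(1, N, 200001)
    tail = trap(f(xs), xs)
    print(N, tail)               # roughly 2*sqrt(N) - 2, unbounded
```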

The formal definition of compactness (via open covers) can be found in any topology text. For those who haven't studied real analysis: by the Heine-Borel theorem, a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded.

Definition: The support of $\phi$, written $\operatorname{supp}(\phi)$, is the closure of the set of points in $X$ where $\phi$ is non-zero. That is,
$$\operatorname{supp}(\phi) = \overline{\{x\in X \,|\, \phi(x)\ne 0\}}.$$(Topologists use a slightly different definition.)

From here, a slightly more specific property can be considered:

Definition: A function $\phi$ is said to have compact support if $\operatorname{supp}(\phi)$ is compact.

It's hard to write down a compactly supported function without explicitly specifying that it vanishes outside some bounded set. As a result, most easily representable test functions, even the continuous and infinitely differentiable ones, are defined piecewise. We consider a few examples.

Note that compact support can also be interpreted as the function vanishing outside a compact set. For a continuous function, the set where it is nonzero is open, which is why taking the closure in the definition of support is necessary.

One of the simplest examples of a compactly supported function is $\chi_A(x)$, where $A$ is a compact set and
$$\chi_A(x)=\left\{\begin{array}{ll}1&x\in A\\0& x \notin A.\end{array}\right.$$This equals $1$ on $A$ and zeros out everything else. In fact, the product of $\chi_A(x)$ with any function of $x$ will have compact support as well. Here are a couple examples:

(Test yourself! Is H(x) from the previous post compactly supported? Are B-splines?)

This example leads well into the last definition.

Definition: A function $\phi$ is a test function if  it has compact support and is infinitely differentiable (i.e., in $C^\infty$). We refer to the space of all test functions on a set $X$ as $\mathcal{D}(X)$.

This is a crucial definition! It's weird for a function to have compact support but to also be infinitely differentiable, so let's generate a couple examples. Consider
$$\psi(x)=\left\{\begin{array}{ll}e^{-\frac{1}{1-x^2}}&|x|<1\\0& |x|\geq 1.\end{array}\right.$$This is a lot smoother than the previous function, and looks like a bump:

A slightly more complicated example would be
$$u_A(x)=\int_{\mathbb{R}^n}\chi_A(x-y)u(y)\;dy$$where $u(x)$ is a locally integrable function in $\mathbb{R}^n$ (the technique used to generate this example is called Sobolev's mollification method; to land in $C^\infty$, one convolves with a smooth bump like $\psi$ instead of $\chi_A$). If you're familiar with convolution already, it should not be difficult to prove this function is compactly supported whenever $u$ is. It looks like someone built a sandcastle shaped like a regular $\chi$ function and a wave rolled over it:
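Here's a discrete sketch of that convolution in NumPy, with $A=[-1,1]$ and a $\chi$-shaped sandcastle for $u$; all the particular choices (grid, widths) are illustrative.

```python
import numpy as np

# Discrete sketch of u_A(x) = ∫ χ_A(x - y) u(y) dy with A = [-1, 1]:
# a moving-average 'wave' rolled over u. Grid and u are illustrative.

x = np.linspace(-5, 5, 10001)
dx = x[1] - x[0]

u = np.where(np.abs(x) <= 2, 1.0, 0.0)       # sandcastle: a chi-type function
chi = np.where(np.abs(x) <= 1, 1.0, 0.0)     # chi_A for A = [-1, 1]

u_A = np.convolve(u, chi, mode="same") * dx  # discretized convolution

# supp(u_A) sits inside supp(u) + supp(chi_A) = [-3, 3]
print(u_A[np.abs(x) > 3.01].max())   # 0: compactly supported
print(u_A[np.abs(x) < 0.5].max())    # ~2: the window fully overlaps u
```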

Many operations, such as translation and scaling, preserve infinite differentiability and compact support. Linear combinations of test functions and products of test functions are also test functions themselves.

(Test yourself! Can test functions be analytic?)

So, our point---these definitions are necessary in order to understand what distributions are. We'll go into this in detail next week.

Monday, March 2, 2015

Distributional Calculus Pt. 1: What is it?

In high school, despite being told I was "good at math" for being able to perform simple algebra, I was terrified of calculus. It was a scary word---"calculus"---and I didn't want to be outed as an impostor who wasn't ever good at math at all. That's how I ended up enrolled in the easiest calculus course offered at my high school, a place where most people took AP Calc. That's also how I ended up bored with the slow pace and lack of formality of my first calculus course, and transferred to AP Calc halfway through the year. That's also when I developed the unmitigated desire to become a mathematician; the calculus floodgates had been opened, and the only cure was more calculus. Calculus was followed by real analysis. Real analysis was followed by functional analysis.

Which brings us here... to the ultimate form of calculus. But why? Why does such a thing exist?

The catalyst for developing a more general form of calculus came when some people, such as physicists and engineers, decided it was okay to consider derivatives of non-differentiable functions. We consider the Heaviside step function ($H(x)$) as the quintessential example: this function is constant and hence has a zero derivative everywhere except at the jump discontinuity, where the classical definition of the derivative breaks down. One could reason that, because the derivative at a point is the slope of the tangent line, and the tangent line at the jump is a vertical line with infinite slope, $H'(0)$ is infinity. We therefore understand the derivative of the Heaviside function to be zero everywhere except at the jump, where it's infinite. That's the Dirac delta function ($\delta(x)$)!

Generally---and I apologize for stereotyping here---generally, physicists and engineers are totally okay with this interpretation and accept it as fact, but mathematicians are upset by the hand-waving. It particularly bothered Sergei Sobolev and Laurent Schwartz, whose work led to the first mathematical justification of these ideas. This formalization of the engineers' and physicists' approaches grew to be called distributional calculus.

Distributions (also called generalized functions) define a broad set of function-like objects including, but not limited to, classical functions (hence, generalized functions). Distributional calculus is the study of calculus on this larger class of objects. This certainly allows for a formal reimagining of the Heaviside example given above: the Heaviside function is nondifferentiable at a point, but its distribution is differentiable everywhere! It can also be used to describe "weak" solutions of DEs. So, if you're like me and can't get enough calculus, it's just... more. More calculus.

Distributional calculus is also a great demonstration of the central public-relations conflict of real/functional/complex analysis: it's both the coolest thing anyone has done, ever, but also completely inaccessible to laypeople. In particular, the notation gets very intimidating, very fast. (Converting any idea from functions to distributions requires several million extra symbols.)

Our goal with this series is to provide a resource for basic distribution theory that includes all of the formal definitions, justifications and theorems with as little hand-waving as possible, while also fully explaining these definitions through appeals to intuition. There are already great books that deal with the formal side of distribution theory (Haroske and Triebel, 2008; Friedlander and Joshi, 1998) and great books that eschew formality in order to be accessible to physicists and engineers (Strichartz, 2003). These books are much better than a series of blog posts---that's why the authors of the books get paid. However, we adopt a different approach for our audience: the first set of textbooks caters to analysts, the second to people who don't care for analysis, while we assume the audience cares or wants to care about mathematical formality but needs some intuitive background in order to learn quickly.

Without further exposition, here's the game plan for March:

  • Week 1 & 2: Basic definitions (compact support, test functions, distributions, distributional derivatives, all that good stuff)
  • Week 3: The big examples
  • Week 4: A couple important theorems
  • Week 5 (March 31st): Recent papers/books for suggested further reading

Lastly, especially if you're a non-mathematician who doesn't care about overt formality, I cannot recommend the Strichartz enough. It's hilarious! I definitely got something out of it despite being peeved at the lack of formal analysis.

[1] Haroske, Dorothee, and Hans Triebel. Distributions, Sobolev Spaces, Elliptic Equations. European Mathematical Society, 2008.
[2] Friedlander, Friedrich Gerard, and Mark Suresh Joshi. Introduction to the Theory of Distributions. Cambridge University Press, 1998.
[3] Strichartz, Robert S. A Guide to Distribution Theory and Fourier Transforms. World Scientific, 2003.

Wednesday, February 25, 2015

The 6 Stages of Math Writing

Here's some news! I've decided to devote all of March to the basics of distributional calculus. In undergrad, I had a professor that taught distributional calculus from a purely theoretical standpoint and refused to match this with intuition, so this will be an adventure in explaining math for me as well.

It goes so well with the blog title---we're the Analysisters! Let's throw some analysis at everyone!

In the meantime, sit tight while I pretend to be Seinfeld.

How would you write out the solution to this problem at each stage of university life?

Let $A$, $B$ be $n\times n$ real matrices. If $BA = I$, then $B^kA^k=I$ for all $k \in \mathbb{N}$.


\begin{align}B^kA^k &= B\ldots BBAA\ldots A\\
&= B\ldots BIA \ldots A\\
&= B \ldots BA \ldots A\\
&= \cdots = BA = I\end{align}

Yes, I know math homework is supposed to be written in complete sentences, but, why bother? I'm the chosen one who will be able to understand what this means 12 years later.

I mean, come on. I understand it right now. It's really easy.


$BA = I$. Then $B^{k+1}A^{k+1} = B^kBAA^k=B^kIA^k=B^kA^k=I$.

Oh, you were serious about that sentence thing? And the sentences have to end with periods? Are you sure? Okay.

Hey, are you going to take points off if I don't put it in a sentence? Why are you doing that? I didn't know that was going to happen.


We know that $BA=I$. Suppose $B^kA^k=I$. Therefore,
\begin{align}B^{k+1}A^{k+1} &= B^kBAA^k\\&= B^kIA^k.\end{align}Therefore, using the properties of the identity, $B^{k+1}A^{k+1}=B^kA^k = I$. Therefore, this proves our statement.

They'll never guess my favorite connecting word.


This can be solved using induction. We are given that $BA = I$, providing the base case, so we suppose that $B^kA^k = I$ to show that $B^{k+1}A^{k+1}= I$. We then find that
\begin{align}B^{k+1}A^{k+1}&= B^kBAA^k\\
&= B^kA^k = I,
\end{align}as desired.

Wow, can you believe how I wrote as a frosh? Who even thinks that's okay? I guess that it shows that I know how important math writing is. That.


We proceed inductively with the given base case $BA=I$. Suppose $B^kA^k = I$ towards demonstrating $B^{k+1}A^{k+1}$ to be the identity as well. Using the definition of integer exponents and both given/inductive hypotheses, we conclude
$$B^{k+1}A^{k+1}=B^kBAA^k=B^kA^k=I;$$that is, the conditions of induction are satisfied and the original statement follows. This fact can be used to show equivalence of left and right inverses (i.e., $AB = I$ iff $BA = I$ for square $A,\ B$ of concordant dimensions).

Varying sentence structures, excessively clear logic, weird punctuation marks, parenthetical statements. Look! Revel in my competence! Feel the 2 hours I spent formatting the answer until it was textbook perfect!

Do you want to see my personalized LaTeX class with a multi-page macro set designed specifically for this field? I made it while my friends were at the bar.


Given $BA=I$ and the inductive hypothesis $B^kA^k=I$, observe that
$$B^{k+1}A^{k+1}=B^kBAA^k=B^kA^k=I$$implies the above.

There's no way I'm spending more than 5 minutes on this trivial problem. Why are you even showing it to me? I have several papers to review and two classes to prepare for. This is pointless.

Stay tuned next week, where we do absolutely nothing funny and go down the rabbit hole of formal math! (I need to update my macro set.)

Monday, February 23, 2015

Rolling Shutter + Moving Things = WICKED

There is a point in every blog's life where the audience and niche become set in stone, a point which this blog seems to be quite far from reaching. Do I go through the proof of Hölder's Inequality with informal language and cute pictures? Or, instead, simple mental math tricks that everyone alive should know? A smattering of recent interdisciplinary papers I have opinions on, or stories of working with high school and middle school tutees? Macros in $\LaTeX$? Householder reflectors? That time I found out biologists use "units" to refer to a different quantity for every substance?

So here we fall back on the old "what is Peter up to" shebang, which is never not funny. I feel truly blessed to have a partner who spends hours looking at fluid dynamics in bubble solution and can spell his initials in a 9x9 puzzle cube. The fields he finds interesting (look at all the things prime numbers can do! pretty pictures!) are also more accessible to laypeople than the fields I find interesting (okay, now memorize definitions for 2 years! in two more years you will be able to appreciate distributional calculus!). Maybe that's why there are so few famous analysts.

The biggest fight we ever had was over his finitism. He tried to convince me it was silly to model reality using irrational numbers that can't be described using a finite amount of information; I sat on the bed sobbing because the axiomatic structure he was proposing didn't have a clear measure, and so how do sets get mass, and HOW DOES INTEGRATION WORK IN YOUR CRAZY WORLD? DON'T YOU CARE ABOUT THEORETICAL JUSTIFICATION? HUH?!

Pictures, right? Everyone likes pictures?

Some background: this particular incident occurred when Peter discovered his cellphone camera took pictures by storing data from the top down, so that the photos were separated into horizontal lines actually taken at different times. (Wikipedia assures me this is called rolling shutter.) Usually, this doesn't make a difference---unless one takes pictures of something spinning or vibrating really fast.

So of course that's what he did for a whole week.

Here's what his mom's spinning flamingo looks like in real life:


... but with a rolling shutter, it's a curved monstrosity... 

An ordinary fan looks like it has vertical blades:


Bouncing balls show deformity:

And, for our personal favorite, filming a cello gives a visualization of the old wave equation $u_{tt} = c^2\nabla^2u$:


All things considered, it wasn't a bad way to spend a week.


Wednesday, February 18, 2015

Convexity and You: Unpacking the Definition

Real Analysis is notorious for taking easy-to-understand concepts and repackaging them behind a thick theoretical barrier. Take the epsilon-delta definition of continuity---it's impossible to prove anything with only the information "the function, uh, doesn't have any holes," but it's just as impossible to develop a mental picture given only the theoretical perspective. For this reason, one of the biggest barriers to learning any type of analysis is properly connecting the intuitive idea and the theoretical representation.

We'll focus here on one of the less transparent definitions: convex functions. Convex functions can be understood intuitively as "the area above the function is a shape that doesn't go inwards on itself"... and theoretically as
Given convex set $X$, a function $f:X\to\mathbb{R}$ is convex if for all $x_1,\ x_2\in X$ and $t \in [0,1]$, $f(tx_1+(1-t)x_2)\leq tf(x_1)+(1-t)f(x_2)$.

This is the part where, during an analysis course, you are expected to nod your head at the alphabet vomit (at least this time it's the Roman alphabet, not the Greek, that tossed its cookies). Let's make some sense out of what information is being conveyed.

First of all, to understand the definition of convex functions, you must know what convex sets are. A set is convex if any two points (call them $x_1$ and $x_2$) can be connected by a straight line that is contained in the set. If the set is not convex (i.e. "goes inwards" visually), then there will be at least two points whose connecting line goes outside the set.

Now the domain of $f$ is a convex set $X$, which should explain what the points $x_1$ and $x_2$ are doing in the definition: they correspond to the two arbitrary points that we want to try and connect with a line. This brings us to the purpose of defining $t \in [0,1]$. Consider the function $y(t)=tx_1+(1-t)x_2$. Since $y(0)=x_2$, $y(1)=x_1$, and $y$ is affine in $t$, this function traces the straight line segment starting at $x_2$ and ending at $x_1$. Thus the purpose of $t$ is to create the parametrized line segment joining points $x_1$ and $x_2$.

We are given that $X$ is a convex set, so it is certainly true that the line $tx_1+(1-t)x_2$ is completely contained in $X$, the domain of $f$. This makes it completely legit to consider $f(tx_1+(1-t)x_2)$ as the image of this line. The image of a straight line in the domain won't necessarily be a straight line itself, but will instead be a path along the function starting at $f(x_2)$ and ending at $f(x_1)$. Hence the expression $f(tx_1+(1-t)x_2)$ is asking us to consider the section of $f(x)$ that connects* $f(x_1)$ and $f(x_2)$.

This brings us to the last part of the inequality
$$f(tx_1+(1-t)x_2)\leq tf(x_1)+(1-t)f(x_2).$$
Just as before, the second expression $tf(x_1)+(1-t)f(x_2)$ represents a parametrized line segment, this time joining the points $f(x_2)$ and $f(x_1)$. We are now comparing two paths between $f(x_1)$ and $f(x_2)$: one is a straight line (the chord), and the other a path along the function. The inequality requires the chord to sit on or above the path on $f$ everywhere; whenever that holds, the chord is contained in the area above $f(x)$ (the epigraph of $f$).

That's exactly the definition of a convex set, but applied to the space above $f$... cool.
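For the skeptical, here's a quick numerical spot-check of the inequality for the convex function $f(x)=x^2$ (the sampling ranges are arbitrary illustrative choices):

```python
import numpy as np

# Spot-check f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2) for the convex
# function f(x) = x^2: the chord always sits on or above the graph.

rng = np.random.default_rng(0)
f = lambda x: x**2

for _ in range(1000):
    x1, x2 = rng.uniform(-10, 10, size=2)
    t = rng.uniform()
    chord = t * f(x1) + (1 - t) * f(x2)     # point on the straight line
    graph = f(t * x1 + (1 - t) * x2)        # point on the function's path
    assert graph <= chord + 1e-12
print("convexity inequality holds on 1000 random samples")
```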

Here's a picture for $X = \mathbb{R}$:

That's what the definition is communicating. I hope that was insightful for someone!

*(does not refer to connectedness in the mathematical sense)

Monday, February 16, 2015

Solving the Spider Problem

A few days ago, while searching for tweets containing the word 'math', I came across this problem:

Who wouldn't attempt to solve it after that commentary? Poor spider, though. That must have been tiring.

I'm certain any readers would want to attempt this for themselves as well, so my solution (and the accompanying story!) can be found after the jump break.

Wednesday, February 11, 2015

Mathematical Words in Different Languages (Pt. 2 - Armenian!)

Hello all, and welcome to my inaugural Analysisters post! This will be a short one, just expanding upon Tuesday's Math Words post in the only language I know well enough to write about -- (Eastern) Armenian. Also, apologies for the slight tardiness, as this Analysister resides on the West Coast, and is also allergic to deadlines.

Here is a list of common math words, translated into Armenian, and then transliterated in the way I was taught. Since the Armenian alphabet has 39 letters, there are several common mappings from the Armenian to English alphabets, not even taking into consideration different dialects. Thus, if there are discrepancies, that is probably the reason. So, without further ado...

English Armenian Transliteration
Mathematics Մաթեմատիկա Matematika
Theorem Թեորեմ Teorem
Lemma Լեմմա Lemma
Proposition Դատողություն Dataroghutyun
Definition Սահմանում Sahmanum
Proof Ապացույց Apatsuyts
Open Բաց Bats
Closed Փակ Pak
Algebra Հանրահաշիվ Hanrahashiv
Integral Ինտեգրալ Integral
Differential Դիֆերենցիալ Diferentsial
Geometry Երկրաչափություն Yerkrachaputyun
Function Ֆունկցիա Funktsia
Finite Սահմանափակ Sahmanapak
Infinite Անսահման Ansahman
Countable Հաշվելի Hashveli
Uncountable Անհաշվելի Anhashveli
Physics Ֆիզիկա Fizika

That's all for now. As always, comments are welcome.

Monday, February 9, 2015

Mathematical Terms in Different Languages (Pt. 1)

Readers, all both of you, I apologize. Not a whole lot happened this week in terms of math (aside from Project Euler, which is like the fight club of math-CS in that they share a set of rules. NEVER TALK ABOUT PROJECT EULER.) While my fiancé and I usually find something interesting to talk to each other about once a week, I spent the last week (+ two months) moping about job searching and he spent over 7 hours yesterday doing side quests in FFX. So here's a fun fluff piece.

Common mathematical terms in different languages!

(I apologize in advance for my preference of languages using the Roman alphabet. This is in no way meant to suggest that people speaking the following languages have made more significant contributions to math than people speaking languages that are not included, and is instead a side effect of the compiler's inability to read these alphabets, thus preventing error-checking. I'll be happy to add languages if anyone with better language skills wants to help!)

Corrections by fluent speakers are welcome. Note: when a word has multiple meanings, we are looking to specifically choose the one that relates to the mathematical concept.

English Spanish French German Hungarian
Mathematics Matemáticas Mathématiques Mathematik Matematika
Theorem Teorema Théorème Theorem Tétel
Lemma Lema Lemme Lemma Lemma
Corollary Corolario Corollaire Korollar Következmény
Proposition Proposición Proposition Aussage Állítás
Definition Definición Définition Definition Definíció
Proof Demostración Démonstration Beweis Bizonyítás
Open Abierto Ouvert Offene Nyílt
Closed Cerrado Fermé Abgeschlossene Zárt
Continuous Continuo Continu Stetig Folytonos
Differentiable Derivable Dérivable Differenzierbare Differenciálható
Analytic Analítico Analytique Analytisch *
Integrable Integrable Intégrable Integrierbar Integrálható
Function Función Fonction Funktion Függvény
Set Conjunto Ensemble Menge Halmaz
Space Espacio Espace Raum Tér
Dimension Dimensión Dimension Dimension Dimenzió
Group Grupo Groupe Gruppe Csoport
Finite Finito Fini Endlich Véges
Infinite Infinito Infini Unendlich Végtelen
Countable Numerable Dénombrable Abzählbar Megszámlálható
Uncountable No numerable Non dénombrable  Überabzählbare Megszámlálhatatlan
Polynomial Polinomio Polynôme Polynom Polinom
Calculus Cálculo Calcul Infinitesimalrechnung  Számítás*
Limit Límite Limite Grenzwert Határérték
Series Serie Série Reihe Numerikus sor
Sequence Sucesión Suite Folge Sorozat
Convergent Convergente Convergent Konvergent Konvergens
Divergent Divergente Divergent Divergent Divergens
Derivative Derivada Dérivée Ableitung Derivált
Integral Integral Intégrale Integral Integrál

Stay tuned for Part 2, in which another Analysister helps out with Armenian!

(* We aren't sure/can't find a dedicated word.)

Wednesday, February 4, 2015

Takens' Theorem and Dynamic Correlation

As much as I dislike regurgitating content instead of producing it, this Sugihara et al. paper is possibly the coolest thing I have ever seen, and has been for a couple years now.

Since MathJax (what we are using to format everything in $\LaTeX$) probably doesn't have BibTeX support, I'm going to go ahead and do an academic no-no by just providing the link to the paper, and no citation[1].

The goal of this paper is to introduce a new method, convergent cross mapping (CCM), a test meant to help determine whether one event in a nonlinear system causes another. As the introduction notes, two populations interacting nonlinearly can go through phases where the behavior is similar, they behave oppositely, or there appears to be no relation. This makes applying traditional measures of correlation or causation useless in such situations.

Enter Takens' theorem: a theorem stating (in extreme layman's terms) that it is possible to 'reconstruct' a chaotic attractor using one of its components. WHICH IS SO COOL. Sugihara et al. concluded that if two components were members of the same system, they would not only be able to reconstruct the original system, but one component would predict the behavior of the other. Hence the 'nearest neighbors' of a data point in the first component should be associated timewise with the nearest neighbors of the corresponding data point in the second component, provided the systems are related, and this predictive ability should get better as more data are taken into account. This is the gist of CCM, which is explained more clearly below and in the paper.

What happens when this method is applied to real data on sardine and anchovy populations? Read the paper to find out! If you're a member of the general public: it's less intense and more explanatory than math papers tend to be, and a great read if you're even a tiny bit into population ecology. (Not a whole lot is said on the exact implementation, but the numbers they're getting look like correlation coefficients between a variable and its nearest-neighbor estimate as more data is added. I should try to do this in MATLAB and post code.)
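In that spirit, here's a rough Python sketch of the CCM idea---my own toy reconstruction, NOT the paper's implementation. Two coupled logistic maps with $x$ driving $y$, a delay embedding of $y$, and nearest-neighbor weighted averages used to estimate $x$; every parameter here is an illustrative guess.

```python
import numpy as np

# Toy sketch of the CCM idea -- NOT the paper's implementation.
# x drives y; we delay-embed y and ask whether y's nearest neighbors
# can be used to estimate x. All parameters here are illustrative.

n = 2000
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])               # chaotic logistic map
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])  # y is forced by x

def cross_map_corr(lib_size, E=2):
    """Correlate nearest-neighbor estimates of x (built from y's
    delay embedding) with the true x, using the first lib_size points."""
    idx = np.arange(E - 1, lib_size)
    emb = np.column_stack([y[idx - k] for k in range(E)])
    est = np.empty(len(idx))
    for i in range(len(idx)):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                         # exclude the point itself
        nn = np.argsort(d)[:E + 1]            # E+1 nearest neighbors
        w = np.exp(-d[nn] / max(d[nn].min(), 1e-12))
        est[i] = np.sum(w * x[idx[nn]]) / np.sum(w)
    return np.corrcoef(est, x[idx])[0, 1]

c_small, c_large = cross_map_corr(200), cross_map_corr(1500)
print(c_small, c_large)   # cross-map skill typically grows with library size
```

The convergence of that correlation as the library grows is the paper's signature of causality; this sketch only gestures at it.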

Did I mention the videos? George Sugihara's son made two brilliant videos to illustrate where the idea came from. (I want to be his friend.) Here's one on Takens' theorem:

There's also one demonstrating the manifold reconstruction:

Lastly, a brief description of how CCM works:

Yeah. I'm not kidding about this being the coolest thing ever!

I learned of this paper during a talk George Sugihara gave in 2013. Of course, the videos are very pretty, but the topic also illustrates something bigger: how applied mathematicians can make breakthroughs by studying "useless" theoretical topics. Some of my old professors were fond of claiming all pure math eventually becomes applied math. This is a great recent example of such creativity; who would have expected Takens' theorem to relate to causality in ecosystems?

Theory: it's what separates us from the engineers! Or just another excuse for the applied folks to read analysis textbooks.

[1]... Nope, my heart won't let me do it. Here's the citation:
Detecting Causality in Complex Ecosystems. George Sugihara, Robert May, Hao Ye, Chih-hao Hsieh, Ethan Deyle, Michael Fogarty, and Stephan Munch. Science. 26 October 2012: 338 (6106), 496-500. Published online 20 September 2012 [DOI:10.1126/science.1227079]

Monday, February 2, 2015

How I Learned to Stop Worrying and Love Fminsearch

(I'd intended to write a post on subsets of null sets that are not null sets, but some lovely person has already posted it on Wikipedia!)

My field involves a lot of fitting ODE parameters to experimental data, so, as expected, I have a long and storied relationship with distance minimization algorithms.

Particularly fminsearch, MATLAB's built-in Nelder-Mead simplex direct search function.

If a network executive decided for some reason to make a sitcom based on my life, fminsearch would be the lovable goofball character whose laziness is the basis for many a cheap joke.

"FMINSEARCH!!! Stop watching football and clean up all those Funyun wrappers from off the floor!" I'd scream. To which fminsearch would reply, "I can't see them! They're not contained in my initial simplex!" Oh, fminsearch....

Fminsearch is great for converging exactly to local minima, but it suffers in a couple of ways, the main one being its inability to detect global minima outside its starting range. This happens because the underlying algorithm is local (it operates on a small region of parameter space) and deterministic (it will return the same best fit every time if the options and initial conditions are unchanged). The easiest fix is to pair it with a global, nondeterministic fitting algorithm such as MCMC (Markov chain Monte Carlo) or a genetic algorithm; the resulting hybrid at least has a chance of breaking out of local minimum wells. Since fminsearch remains the better of the two at converging exactly to a local minimum, it's a good idea to run it at the end to polish whatever fit the global method returns.

A similar issue occurs when minimizing over several parameters. Although it is possible to use fminsearch to optimize several parameters at once, my advisors and I have had more luck fitting one parameter at a time iteratively. Beware! The order in which the parameters are fitted has a huge effect on the outcome. Less sensitive parameters may not change much if they are fitted last, and if two parameters are strongly interdependent, it can be difficult to fit them separately. I've had more luck implementing MCMC with Latin Hypercube Sampling (LHS).

Lastly, it can be difficult to find a local minimum in which constraints on parameter size are satisfied (for example, if the algorithm keeps assigning a negative value to a parameter that shouldn't be negative). This is again a situation that should be passed to MCMC, because reducing the average step size in parameter space will cause parameter values to stay closer to the initial conditions. Another 'cheating' fix would be to alter your distance function to output absurdly high numbers when a parameter value enters the no-no range---this is probably the best way to go if you want to stick with fminsearch.
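To illustrate the penalty trick without MATLAB handy, here's a toy Python sketch. Everything in it is my own stand-in: the golden-section search plays the role of fminsearch, and the distance function is a made-up one whose unconstrained minimum sits at a forbidden negative value.

```python
import math

def penalized(distance, lower=0.0, big=1e10):
    """Wrap a distance function so parameters in the no-no range score absurdly high."""
    def f(p):
        return big if p < lower else distance(p)
    return f

def golden_min(f, a, b, tol=1e-6):
    # Minimal 1-D golden-section search, standing in for fminsearch here.
    invphi = (math.sqrt(5) - 1) / 2
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Toy distance function: unconstrained minimum at p = -1, which is forbidden.
def distance(p):
    return (p + 1) ** 2

best_raw = golden_min(distance, -5, 5)            # converges near -1
best_ok = golden_min(penalized(distance), -5, 5)  # stays at the boundary, near 0
```

The penalized version never wanders into negative territory, because any step there immediately looks catastrophically bad to the minimizer, which is exactly the behavior you want from the 'cheating' fix.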

These are, at a broad level, the most important things I've learned in my years of practically dating fminsearch. I'm cataloging them here in case someone looking for guidance can be spared a few fights with my favorite MATLAB function.

If any readers (ha, ha) want me to post some iterative fminsearch or MCMC code, I would be happy to provide a watered-down version!

Wednesday, January 28, 2015

Grading Stories: "Cheese Weight" and Thusforthwith

One thing I love about the internet is being able to share stories and moments from everyday life. Here are a couple about something I'm sure other academics will be able to relate to: grading stories.

Cheese Weight

My alma mater enforced mathematical writing guidelines and the use of $\LaTeX$ very strongly. Yet some people, notably non-majors, chose to ignore those guidelines completely and complain when points were taken off for writing. Some people handed in scratch work done in pen on graph paper in consistently gigantic writing. Some people *coughEigenpetercough* printed out the questions in $\LaTeX$... two problems to one page, in landscape form... then did them out by hand in tiny writing. Some people *coughalsoEigenpetercough* did the homework in $\LaTeX$ but omitted large amounts of information to fit every proof-based problem on one side of one page.

Then there are the people with just plain bad handwriting. While grading a core class with a friend, I encountered a homework that exemplified this; apparently, one of the people in the class was secretly a chicken tied to a Ouija board. Here's how it went down.

Me: Hey, do you have any idea what these two words are?

Friend: ..........

Me: It looks like it says "cheese weight".

Friend: It does, but that doesn't have anything to do with the problem.

Me: Can you tell from context?

Friend: .... no.... (to another person) Hey, do you know what this says?

Someone else: ..... looks like "cheese weight"?

Friend: How about you?

Yet another person: I have no idea.

Me: Well, "cheese weight" it is then.

And that's how someone got their work back with "what's a cheese weight?" written as a comment.

Runner-up for best handwriting-related mishap goes to the person who tried to write "I used Professor X's code," but botched the last two letters in "code" in a way that evoked, erm, Little Professor X.


Thusforthwith

As a fan of both analysis and silly things, I can't help but enjoy when they're combined. This story is about a friend who perfected this combination.

My friend, at the time, was taking the same real analysis course I was grading, so I mentioned to him how funny it was when people used archaic connecting words: "thusly", "wither" and the like. From there we started trying to come up with the most ridiculous word. Thenceforth! Thuswith! Whencehence!

So of course every homework I got from this friend had at least one made-up connecting word (despite being typed up quite nicely). This continued without incident, until one day:

Me: This is hilarious! I'm worried about you slipping up and doing it on the test, though.

Him: Why not?

Me: Well... the professor might notice, and you might get docked some points...

Him: Hmm...

Which obviously culminated in him PUTTING FAKE WORDS ON THE ANALYSIS TEST.

And guess what?




Readers, do you have any grading stories? Let me know if anyone tries to pull off using fake connecting words---not everyone may be as lucky!

Monday, January 26, 2015

Adventures in Linear Algebra with the Prismatoy

As the nature of the first few posts here should somewhat suggest, my fiancé and I spend a whole lot of time talking to each other about math. He needs a nom. Let's call him Eigenpeter.

The latest installment of "Peter finds an interesting idea, spends an hour's worth of whiteboard lecturing on representation theory at his algebra-phobic lover and makes a Mathematica toy in 15 minutes" is brought to you by Prismatoy, a cube that can be collapsed into a parallelepiped:

Basically, we wandered into a puzzle store where he picked one of these up and didn't put it down. (We did pay before leaving!)

I like this because, when restricted to any one of the 6 faces, it gives a visualization of the linear transformation
$$\left[\begin{array}{cc}1 & \cos\theta\\ 0 & \sin\theta\end{array}\right]$$
(up to translations, scaling and unitary operations) with $0<\theta\leq \frac{\tau}{4}$* being the acute angle in the final configuration. You could derive this quickly at home by imagining one face as a unit square, then exploiting some basic trig to find that the transformation maps (0,1) to ($\cos\theta$,$\sin\theta$), (1,1) to ($1+\cos\theta$, $\sin\theta$), and leaves the bottom side of the square unchanged. The above then follows from knowing how the transformation acts on the standard basis vectors for $\mathbb{R}^2$.
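If you want to check the corner-mapping claim numerically, here's a tiny Python sketch (my own throwaway function, not part of any toy software):

```python
import math

def apply_shear(theta, v):
    # The face transformation [[1, cos(theta)], [0, sin(theta)]] applied to v = (x, y).
    x, y = v
    return (x + math.cos(theta) * y, math.sin(theta) * y)

# Corners of the unit square at theta = 60 degrees:
theta = math.pi / 3
top_left = apply_shear(theta, (0, 1))    # maps to (cos(theta), sin(theta))
top_right = apply_shear(theta, (1, 1))   # maps to (1 + cos(theta), sin(theta))
bottom = apply_shear(theta, (1, 0))      # the bottom side is unchanged
```

At $\theta=\frac{\tau}{4}$ the matrix is the identity and the face is the unit square; as $\theta$ shrinks, the face's area scales by the determinant, $\sin\theta$.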

Peter noted that the volume of this structure is given by the area of the base times the height, which, in this case, is the determinant of the linear transformation that takes it from cube form to its parallelepiped shape. To demonstrate this, we name the three vectors along the given three sides $\vec{a}$, $\vec{b}$ and $\vec{c}$:

The area of the base is given by $\lvert\vec{a} \times \vec{b}\rvert$---one can see this because
$$\lvert\vec{a} \times \vec{b}\rvert=\lvert\vec{a}\rvert\,\lvert\vec{b}\rvert\sin\theta,$$
which corresponds exactly to the area in the first picture, except the vectors are no longer of unit length. Now recall that $\vec{a}\times\vec{b}$ is a vector perpendicular to both $\vec{a}$ and $\vec{b}$. Taking the dot product $(\vec{a} \times \vec{b})\cdot \vec{c}$ only takes into account the component of $\vec{c}$ that is parallel to $\vec{a}\times\vec{b}$---in other words, perpendicular to both $\vec{a}$ and $\vec{b}$---in other words, the height of the parallelepiped! Hence taking the magnitude of this quantity gives us base times height, which is volume.

But wait, there's more! The quantity $\left\lvert(\vec{a} \times \vec{b})\cdot \vec{c}\right\rvert$ can be written as
$$(\vec{a} \times \vec{b})\cdot \vec{c}=\sum_{i=1}^3\left(\sum_{j=1}^3\sum_{k=1}^3 \epsilon_{ijk}a_jb_k\right)c_i$$
where (in case the reader hasn't seen it before) the Levi-Civita symbol $\epsilon_{ijk}$ essentially acts as an antisymmetric counterpart to the Kronecker $\delta$, i.e.
$$\epsilon_{ijk}=\left\{\begin{array}{ll}1 & (i,j,k)\ \textrm{an even permutation of}\ (1,2,3)\\ -1 & (i,j,k)\ \textrm{an odd permutation of}\ (1,2,3)\\ 0 & \textrm{otherwise, i.e. any index repeated}\end{array}\right..$$
Now imagine taking the determinant of the matrix
$$\left[\begin{array}{ccc} \lvert & \lvert & \lvert\\ \vec{c} & \vec{a} & \vec{b}\\ \lvert & \lvert & \lvert\end{array}\right].$$
I won't put the algebra all out here, but calculating the determinant according to the definition and rearranging it will give the previous nested sum. This technique can also be used to prove that
$$(\vec{a} \times \vec{b})\cdot \vec{c}=(\vec{b} \times \vec{c})\cdot \vec{a}=(\vec{c} \times \vec{a})\cdot \vec{b}.$$
Yay! It is now evident that
$$V_{ppiped} = \left\lvert(\vec{a} \times \vec{b})\cdot \vec{c}\right\rvert = \textrm{det}[\vec{c}\  \vec{a}\ \vec{b}].$$
More generally, the determinant of a matrix is a factor indicating what change in volume (or area, or the appropriate dimensional quality) it produces.
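The whole chain---cross product, dot product, determinant of $[\vec{c}\ \vec{a}\ \vec{b}]$---is easy to verify numerically. Here's a quick Python sketch with hand-rolled helpers (so nothing is hidden inside a library), using example edge vectors of my own choosing:

```python
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Example edge vectors for a parallelepiped:
a, b, c = [1, 0, 0], [1, 2, 0], [0, 1, 3]
triple = dot(cross(a, b), c)
# matrix with columns c, a, b, as in the post
M = [[c[0], a[0], b[0]],
     [c[1], a[1], b[1]],
     [c[2], a[2], b[2]]]
```

For these vectors, `triple` and `det3(M)` agree (both give the volume, 6), and the cyclic identity $(\vec{a}\times\vec{b})\cdot\vec{c}=(\vec{b}\times\vec{c})\cdot\vec{a}=(\vec{c}\times\vec{a})\cdot\vec{b}$ checks out too.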

EPILOGUE: He spent most of Saturday trying to sketch the manifold of possible shape configurations of this object, then trying to determine whether the set of operations on the toy in SL(3) was a group, then made several demonstrations of these and similar phenomena in Mathematica. However, he did not do the dishes. I cut the lecture for brevity.


Wednesday, January 21, 2015

5 New High-Level Math Jokes

We here at the Analysisters enjoy puns, and are happy to contribute to the already-gigantic list of math jokes every once in a while. Here are some we've come up with over the years!

1. A topologist walks into $\bar A$. It's closed.

2. (3 variants) Q: If chocolate is a Hilbert space and peanut butter is its dual, why can every element in peanut butter be written as an inner product $\langle y,x \rangle$, where $y$, $x$ are chocolates and $x$ is uniquely fixed?
        A: Reese's Representation Theorem.

      Q: Why is peanut butter the adjoint of chocolate? Why is chocolate the adjoint of peanut butter?
      A: Reese's Representation Theorem.
      (Thanks, Paul!)

      Q: What's a mathematician's favorite candy?
      A: Riesz's Pieces.

3. What did the analyst have for dinner?

4. Your mama isn't Lebesgue integrable because she doesn't vanish at infinity!

5. Eight mathematicians walk into a diner. The first one says they're not hungry, and orders nothing. The next three order a beer, a hamburger, and french fries, respectively. The next three order a burger and fries, fries and a beer, and a burger and a beer, respectively. The last one orders a burger, fries, and a beer.

"I'm sorry, I can't fill your order," she says.

"Why is that?"

"This is only a partial order."

BONUS: Not necessarily in joke format, but I refuse to refer to
$$F_n = \frac{1}{\sqrt{5}}(\phi^n - \bar{\phi}^{n})$$
as anything except "that formula that shoots water up your butt".

Monday, January 19, 2015

MIT Mystery Hunt 2015: Let's Hear it for Random Hall!

(Note: Links may temporarily not work without a login. I will update them as the situation changes.)

Readers, I have to admit that my wonderful fiancé has a life-altering addiction.

To puzzles.

He wakes up in the morning and does five crosswords, usually in five minutes each; is never caught without a copy of The Enigma; plays puzzle games when he comes home; and, over dinner, tells me about the latest puzzles he solved. He also participates in a minimum of 10 puzzlehunts a year with his MIT puzzlehunting buddies. I generally stay out of his way for the online ones, but there's no physically avoiding the MIT Mystery Hunt. So off I was dragged.

Let me admit: I was expecting to play a supporting role, making sure my dude gets enough food and sleep. I was not expecting to personally crack several puzzles, including 2 metas. Guess what happened.

As everyone on my team agreed, this year's puzzlehunt was particularly well-constructed! This year's theme was 20,000 Leagues Under the Sea and was presented by One Fish, Two Fish, Random Fish, Blue Fish. The short format of many of the puzzles made it possible for individuals to solve puzzles alone, which is a great confidence boost for newer puzzlers such as myself; and the way in which the puzzles were released (solving puzzles gave more Deep, which revealed puzzles hiding in the ocean) made it possible to work around roadblocks, eliminating the frustration of being stuck on everything. Feeling Bluefin was probably the team favorite, and a lot of people liked Nautilus's Duplicated Quest as well. I could not believe there was a Dresden Codak puzzle* released so early---we love Aaron Diaz here! Mad props!

There are always a couple puzzles that require special knowledge, which is great if you spent the last 5 years listening to showtunes instead of doing puzzles. I definitely enjoyed the auditory (directly or otherwise) puzzles such as the theater one*, Nina and Topsy-Turvy. Someone in Random Hall has pipes! The best 'esoteric knowledge' moment came when one of the puzzles required reading a diving chart*---apparently one of the members of our team was an experienced diver all along.

My fiancé, being a long-time language puzzle master, enjoyed everything from Flat Containers to The Curse of The Atlantean's Tomb, both of which he made me help with. Representative Characters (math!) was also a nice surprise (math!) because it required understanding his field (math!) in order to solve. Although our team concentrated on solving earlier metas rather than the Atlantean puzzles towards the end, he also glanced at Practice in Theory (physics!) and enthused at me about it for an hour.

Other than Feeling Bluefin, the puzzles with the cutest premises were Follow the Bees!, Montages and MIT Mystery Hunt. So adorable.

Our team enjoyed all of the physical puzzles (even the meta!). We had people solving cubes, picking locks, cracking the gelt puzzle so we could eat it; I decoded the knitted square, and one girl with INFINITE PATIENCE sat on the floor for hours putting together the paper jigsaw that had scared off everyone else. Seriously, I was amazed by her persistence.

This was a very different experience from last year's hunt: for me, personally, most of the change in experience quality was due to being on a smaller team. Fewer people means having more opportunity to get an 'aha!' moment and a higher fun-to-automation ratio. The large number of easy puzzles (School of Fish round) also made the hunt continuously accessible to new puzzlers, but I found myself avoiding them in preference of harder puzzles as time went on.

It didn't hurt that everyone on our team was awesome and cracked jokes the entire time! Even my fiancé, who is normally reserved and academic, was dropping the sass left and right. Highlights include giving Ariel a bottle of hair dye when she asked for us to give her a soul (she's a ginger) and the phrase 'Chocolate Rain' being used to describe makin' it rain with gelt. Nevertheless, Puzzfeed has outdone us all.

If you're new to the puzzle scene, get on a smaller team (10-30 people) with some experienced puzzlers and some MIT/Boston residents. You can also look at any of the puzzles listed above for at least an entire year (and Random Fish plans to publish a fancy book of the School of Fish puzzles.) I'm not even going to preface that with 'if you want to'. GET ON A TEAM. IT IS FUN.

Now for some flats before bed...

*(not linked due to spoilers)

Wednesday, January 14, 2015

Why Learning Math is Important (General)

A common question I get from tutees is, why learn math at all? Many others on the internet have provided satisfactory answers to this question, so I'll try my best to come up with a couple new points.

(A side note to those who are highly educated in humanities, social sciences, etc.: I am not claiming that every one of these benefits is unique to mathematical learning. Those disciplines are useful for critical thinking as well!)

Problem-solving practice

If you don't go into a technical career, the odds that you'll have to use algebra or calculus every day are slim. However, no matter what you do in life, you will have to know how to effectively solve problems. It's unfortunately difficult to practice and develop good problem-solving skills on their own---that often comes with experience---but doing relatively simple math problems is a good substitute.

How would that work? When you start a new videogame, the game doesn't immediately drop you into the final boss fight; you start with the tutorial instead. This is what schools are hoping to accomplish by giving you simple problems to solve: they may be in a weird format, and you may not be sure how they connect with day-to-day life, but you're given them because number problems are some of the simplest problems there are to practice on. Furthermore, it's not a bad thing that algebra problems are disconnected from your real life---if they weren't, the punishments for failing to solve them would be much greater. I'm certain you'll agree a broken friendship or broken arm is much worse than losing a couple of test points!

To summarize: think of grade school math problems as the tutorial level to real-life problems, and their disconnect from regular life as a protective safety net.

Increased ability to communicate abstract ideas

Math can be seen as not only a tool, or a scientific discipline, but also a language. Understanding and communicating mathematical ideas requires a set of symbols and vocabulary that people wouldn't learn just by going through life. Ideas related to math do pop up from time to time, and it feels great when you know what the answer is, and how to explain it!

Here's an example: let's say you and some friends are trying to get to a frozen yogurt place a block away. We call your current location A and the location of the frozen yogurt place B. Also, let's call the frozen yogurt place Froyomorphism for the sake of puns. Your friends, whose favorite colors are purple, green and blue, suggest the following three paths on the map:

Now you are asked to choose the best path. Because you are good at judging distances, you know that the purple and green paths are the same length, but the blue path is much longer. Could you communicate this concept to your friends without using math? Without using the word 'sum' or 'length' or evoking a visual proof? Probably, but it would be much harder. The point is: even if you're right, you may end up taking the blue path if no one can explain to Blue why the other two paths are shorter.

If two people have a shared vocabulary that can be used to talk about abstract objects, they can exchange information about what essentially amounts to different lines of thought. This is how people get smarter and better at problem solving.

Protection against being exploited

Most people think they are smart. However, as I'm sure you've figured out by now, not everyone is. Several people/institutions/etc. have realized this and use people's lack of mathematical awareness to make a living. Gambling is a classic example; also see the Monty Hall Problem or Bertrand's Box Paradox for situations where common sense can be deceiving.

However, not only can ill-meaning people use your unwillingness to think about mathematics (and academic prospects in general) to separate you from your money, they can twist information to separate you from your ideals and beliefs as well. Most people like the idea of experiments being able to prove, disprove, support, or refute ideas, but don't want to dig through heavily written academic papers to find the point. This is where exploitative people come in. If they can bank on the audience being too busy or unable to read the source material, they can make their audience believe whatever they want---even if it comes at the expense of the audience! See Flaws and Fallacies in Statistical Thinking for tons of real-world examples; Stephen Campbell explains this better than a blog post ever could.

The only way to protect yourself against this is to be able to read and analyze scientific papers without needing someone to tell you what they mean. In many cases, this requires some knowledge of statistics (math), of experimental versus mathematical-modeling approaches and their implications, or of what conclusions can be drawn from the data presented (logic, which is part of math).

Not looking like a tool on the internet

If you've spent any amount of time on the internet, at all, you may have come across someone who is angry at their opponents for not understanding "logic" and "reason." You may have seen someone make a statement along the lines of "that doesn't make any logical sense" without noting what the error is (or invoking a fallacy incorrectly). You may have seen someone who is incapable of understanding that a smart person may disagree with them, and who concludes that if someone disagrees with them, that person is stupid.

Judging by the relatively low proportion of people with bachelor's degrees in mathematics or philosophy, it stands to reason that very few of these people have had real training in formal logic. Someone with completely illogical arguments would have no way of knowing so (i.e., a special case of the Dunning-Kruger Effect.) On the other hand, someone who has studied higher-level mathematics can recognize what is and is not logically consistent, which affects how they act in everyday life as well. This makes everyday life a lot---a lot!---easier. (I could go on for days about this; but that's a story for another blog post... or twelve.)

Yet embarrassing oneself is often caused by a lack of empathy---what about that? Math has no relation to that, unfortunately. [Sad face.]

Concluding remarks

Oh, and talking about math is awesome, and we have the best jokes.

(Check back in the future for specific examples of how some topics you may have seen are actually used by mathematicians and scientists!)

Monday, January 12, 2015

Vector Calculus... with Poles?!

A couple days ago, my partner and I were about to go to sleep, when I wondered out loud whether Stokes' Theorem and the Divergence Theorem would hold for functions that were analytic except at finitely many poles (it's what engaged couples do in bed!).

Since my fiancé is a physicist, he already knew the answer for the Divergence Theorem, and was happy to clue me in: for a vector field $\vec{f}(r, \theta,\phi)$ in spherical coordinates,
$$\iiint_\Omega \nabla \cdot \vec{f}\; d\Omega=\oint_{\partial \Omega} \vec{f}\cdot\hat{n}\;dA$$
appears to break down when a pole occurs in the interior of $\Omega$. In order to demonstrate this, we consider the field $\vec{f}(r,\theta,\phi)=\frac{1}{r^2}\hat{r}$, which has the classical divergence (in spherical coordinates)
$$\nabla \cdot \frac{\hat{r}}{r^2} = \frac{1}{r^2}\frac{\partial }{\partial r}\left(r^2\cdot\frac{1}{r^2}\right)=0\qquad (r\neq 0),$$
forcing the left-hand side to be zero when $\Omega$ is a sphere about the origin. Yet, because $\frac{1}{r^2}$ is constant on the surface of that sphere, the right-hand side evaluates to
$$\oint_{\partial \Omega}\frac{1}{r^2}\hat{r}\cdot\hat{r}\;dA=\frac{1}{r^2}\oint_{\partial \Omega}dA=\frac{1}{r^2}\cdot 4\pi r^2=4\pi,$$
which is nonzero! Hence the Divergence Theorem does not hold in this case... for classical forms of divergence.
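If you don't trust the hand calculation, the surface integral is easy to check numerically. Here's a quick Python sketch (my own throwaway function) showing that the flux through a sphere comes out to $4\pi$ no matter the radius:

```python
import math

def flux_through_sphere(R=1.0, n=400):
    """Numerically integrate the flux of the field (1/r^2) r-hat through a sphere of radius R."""
    # On the sphere, f dot n-hat = 1/R^2 and dA = R^2 sin(theta) dtheta dphi;
    # the integrand has no phi dependence, so the phi integral just contributes 2*pi.
    dtheta = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta  # midpoint rule in theta
        total += (1.0 / R**2) * (R**2 * math.sin(theta)) * dtheta * (2 * math.pi)
    return total
```

The $\frac{1}{R^2}$ from the field cancels the $R^2$ from the area element exactly, which is why the answer is independent of the radius.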

Knowing my interest in analysis (see blog title, above), my fiancé clarified that when a function has poles, we redefine divergence in the sense of distributions so that the Divergence Theorem does hold. Apparently then
$$\nabla \cdot \frac{\hat{r}}{r^2} = 4\pi\delta^3(\vec{r}),$$
which I plan to check rigorously in the future (no money for books). It hadn't occurred to me yet that distributions could also be used to generalize multivariate forms of the derivative, so this was an interesting way for the conversation to go.

We quickly noted that for Green's Theorem (Stokes' Theorem in 2D), with $f(z)=p(z)+iq(z)$ and $z=x+iy$, writing $\oint_{\partial\Omega} f\,dz=\oint_{\partial\Omega}(p\,dx-q\,dy)+i\oint_{\partial\Omega}(q\,dx+p\,dy)$ gives
$$\int\int_\Omega -\frac{\partial q}{\partial x} - \frac{\partial p}{\partial y} \;d\Omega=\int_{\partial \Omega} p\; dx - q\;dy$$
for the real part and
$$\int\int_\Omega \frac{\partial p}{\partial x} - \frac{\partial q}{\partial y} \;d\Omega=\int_{\partial \Omega} q\; dx + p\;dy$$
for the imaginary part. Assuming that $f(z)$ is analytic makes both integrands vanish by the Cauchy-Riemann equations, which leads to one of the proofs of Cauchy's Integral Theorem; but, as differentiability is a condition for Stokes' Theorem, it is expected to break down under classical conditions for a complex function with finitely many poles. However, given my partner's earlier insight with the Divergence Theorem, I wouldn't be surprised if a distributional equivalent existed for Stokes' Theorem as well.

I'm certain there are a few texts that could clear up exactly how this happens rigorously... sounds like something fun to do in the future!

(Please excuse my slightly incorrect use of notation. Some symbols are not supported in MathJax.)

Wednesday, January 7, 2015

Funny Papers: Overly Expressive Lab Mouse

Scientists are people too, and they have a sense of humor. We don't claim to be a research humor blog (that's what Annals of Improbable Research is for.) However, sometimes we come across something that is so hilarious it has to be shared.

In this case, it's a silly figure in a medical research paper. I doubt the legality of posting an image from someone else's protected academic work, so here's the source (Figure 2) and a short description of what's happening in the figure:

  • The first image in the chain is a newly infected lab mouse with a darling smile on its little face.
  • Endotoxins produce IL-12, TNF and IFN-gamma in the mouse, contributing to shock. The mouse's smile has been inverted into a frown; it is now unhappy at its predicament.
  • The toxic shock leads to weight loss. Here, weight loss is manifested as someone copy-pasting an image of the sad mouse and shrinking it a bit.
  • The poor mouse dies. The authors have helpfully put 'x's in the eyes and put the mouse's little feet up to communicate to the reader that the mouse is, indeed, dead.
A shout-out to C.A. Biron and R.T. Gazzinelli for demonstrating that, while mathematicians have the best jokes, experimental biologists have us completely beat on black humor!

Monday, January 5, 2015

Math Test-taking Strategy

Math testing strategy varies a bit from the strategies that would normally be useful for classes requiring a lot of rote memorization. This is mostly because, in addition to requiring a student to interpret and regurgitate information learned beforehand, math tests also involve a performance element that tests short-term problem-solving ability. You may recognize this as an important skill to have! Before we list some specific ways to prepare, keep in mind that:
  • Mathematical ability can be improved. Some people have a tendency to give up on math if they aren't good at it immediately. However, if you spend more time on math than your classmates---whether that's going to math camp, working with a private tutor, thinking about outside problems, or figuring out how things work---you will get better faster. Nobody comes out of the womb being able to solve every type of math problem. It's like playing a musical instrument or learning to dance: the more practice, the better.
  • Getting an A in most classes requires a level of understanding that is not taught in the class. In American schools, an A grade is meant to signify that a student is going above and beyond what is required, even if it looks like all of the testing material is being taught in the class. There is often far more to the material! For example, a lot of students understand the general material being taught, but get slammed on small mistakes such as minus sign errors. Catching and being aware of these errors is something that the student must develop on their own, and it is hard to explicitly teach. Small things like this are often the difference between an A and a C.
Got that? Good! If you're studying for a test and aren't quite sure what to start with, there are several 'levels' of understanding the material, which, for your purposes, we'll express in three separate categories:
  • Basic understanding. You're at this level if you can read and understand everything in the textbook chapter, and know how to do the problems that aren't word problems.
  • Familiarity. You're at this level if you understand what the answers should 'look' like, why the answers 'look' that way, and how to fix something if you've made a mistake. To get to this level, you have to be observant and look for patterns in the work you're doing. Knowing where mistakes can arise in a certain type of problem is very powerful!
  • Creative application. If basic understanding is like knowing how to get home, and familiarity is knowing how to get home even if you've taken a wrong turn, creative application is like getting home by parkour. You're at this level if you understand the technique so well that even when it's not mentioned, you know when it has to be used. This type of understanding is the most important for word problems.
Let's use (scalar) multiplication as an example. Someone would have basic understanding if they could multiply $45\times25$ on paper (or in their head! Can you?). They're familiar with how multiplication works if they can explain why the answer cannot be 725, or 329670, or 2621. Lastly, they will have mastered multiplication as a concept if they can use it to solve problems such as 'how much do 45 vending-machine gumballs cost?' or use multiplicative identities to prove the exponent rules.
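A few of these familiarity-level checks can even be written down mechanically. Here's a Python sketch (my own illustrative helper, not a standard routine) of sanity checks that reject a wrong product without redoing the multiplication:

```python
def plausible_product(a, b, claimed):
    """Sanity checks that can reject a wrong product of a and b without recomputing it."""
    # last digit: must match the product of the factors' last digits
    if (a % 10) * (b % 10) % 10 != claimed % 10:
        return False
    # casting out nines: digit sums must agree mod 9
    if (a % 9) * (b % 9) % 9 != claimed % 9:
        return False
    # magnitude: an m-digit times an n-digit number has m+n-1 or m+n digits
    m, n, k = len(str(a)), len(str(b)), len(str(claimed))
    return m + n - 1 <= k <= m + n
```

For the example above, 725, 329670 and 2621 all fail at least one check (wrong last digit or wrong digit sum), while the true answer, 1125, passes them all. Passing doesn't prove the answer is right, of course, but failing proves it's wrong.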

So, in a nutshell, if you understand the book and can do all the problems, you're still not guaranteed to do well on tests. Nooo! How can this be?

The primary problem is that very few books go into detail on how things work and how to recognize mistakes. Yet recognizing where mistakes happen and how to fix them is vital---not to mention that knowing how methods work is the only way to understand word problems! Here are a few helpful things you can do to improve your math testing ability:
  • Know exactly where your abilities are for each type of problem. Can you do mental multiplication? Can you do basic algebra problems? Can you prove that $L^2$ is complete? Being honest about where your weaknesses are makes it much easier to conquer them. Needing more practice on a specific type of problem isn't a bad thing, and you'll save time if you focus on only the hardest problems.
  • Develop 'sanity checks'. 'Sanity checks' are what I call quick facts you keep in mind to test whether you've made a mistake in a problem. Using the multiplication example above, a good sanity check would be 'an odd number times an odd number cannot be an even number.' This helps build your familiarity with the material and could save you from losing tons of points.
  • Build the test. Even if you don't know which specific problems will be on the test, you can generally work out how many of each type of problem will be on the test. This will help you direct your attention towards whatever will win you the most points. For example, you might be okay with everything on the test except one very, very hard type of problem: would it be better to focus on figuring out the hard problems, or making sure you don't mess up on the moderate problems? This depends on how many of the hard problems will be on the test.
  • Talk with friends. We don't mean about videogames. There aren't a lot of ways to develop creative thinking for mathematics other than 'think about math a lot and try to come up with and solve math problems outside of school', but this is one of the more fun ones. Your friends may have some insight about math that hasn't occurred to you yet, or you may be able to solve a hard problem by working together. In any case, talking directly to people who know more than you will teach you a lot.
  • Look for patterns. If you don't want to talk to your friends about math, or have some pride about developing things on your own, remember that math is all about finding and exploiting patterns. Once you've found a pattern, try to figure out where it comes from. This line of thinking often leads to developing newer, faster ways of solving problems. See our mental math post for some simple examples of pattern exploitation.
  • Hire a private tutor. We said earlier that talking to people who know more than you will improve your skills very quickly: well, private tutors know a lot of math and they can teach you a lot about math. There's no shame in needing one! If someone is very good at the guitar and wants to become even better, they hire a private guitar teacher. The same is true for math. This is how you should see a private math tutor: someone who can help you improve very, very quickly and understand mathematics far beyond what you are taught in class.
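Sanity checks like the ones above are easy to phrase as quick yes/no questions. The sketch below is our own illustration (the function names are ours, not standard) of three such checks applied to the 45 × 25 example: parity, last digit, and rough size.

```python
# Our own sketch of 'sanity checks' for a proposed answer to a * b.
def parity_ok(a, b, answer):
    # odd x odd is odd; if either factor is even, the product is even
    product_is_odd = (a % 2 == 1) and (b % 2 == 1)
    return (answer % 2 == 1) == product_is_odd

def last_digit_ok(a, b, answer):
    # the last digit of a * b depends only on the last digits of a and b
    return (a % 10) * (b % 10) % 10 == answer % 10

def size_ok(a, b, answer):
    # round each factor down/up to a multiple of 10 for quick bounds
    lo = (a // 10) * 10 * ((b // 10) * 10)
    hi = (a // 10 + 1) * 10 * ((b // 10 + 1) * 10)
    return lo <= answer <= hi

checks = (parity_ok, last_digit_ok, size_ok)
print(all(c(45, 25, 1125) for c in checks))  # True: the real answer survives
for wrong in (725, 329670, 2621):
    print(wrong, [c(45, 25, wrong) for c in checks])  # each fails at least one
```

Notice that no single check catches every wrong answer (725 has the right parity and last digit, but is too small), which is exactly why it pays to collect several.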
These are all things you would do before the test happens (we've left out the obvious "read the textbook and do the problems" advice). While the test is happening, try to:
  • Read the test beforehand and do the easiest problems first.
  • Temporarily improve performance by stretching, chewing gum, drinking caffeine, or listening to energetic music.
Now you should be focused and in the moment!

Lastly: even if progress seems slow sometimes, or you're having trouble catching up, don't feel hopeless. If you spend time thinking about how the methods work, you will improve.

Good luck, and happy testing!


Good day, internet!

We are (currently) two flat broke prospective scientists: one future physicist with no money and one mathematician with no money, who thought it might be a good idea to start a blog in order to spread the love. Here you'll find insight gained from teaching, stories about research, discussion of odd problems, jokes about LaTeX, and possibly an informal paper review once in a while.

We had intended to start a free tutoring video series, but had to quit due to severe technical difficulties (see 'broke,' above). Boo! Perhaps that's what the future holds.