## Congratulations Janet Liou-Mark, recipient of the MAA Metro NY’s Distinguished Teaching Award

City Tech professor Janet Liou-Mark was presented with the Distinguished Teaching Award by the Mathematical Association of America’s New York Section at their annual meeting on May 3rd.   As her colleague, I can confirm that her creativity, positivity, enduring belief in her students, and indomitable energy are truly astonishing – Janet, we salute you!  Congratulations, and well deserved.

Professor Janet Liou-Mark (right) being presented her award by New York Section Chair-Elect Elena Goloubeva (left).

For more details, check out the announcements on the MAA and City Tech sites.  For more on Janet, take a look at this great interview by Mari Watanabe-Rose here on the CUNY Math Blog.

Posted in Uncategorized | 3 Comments

## CUNY Math Conference

Almost 200 participants gathered on May 9, 2014 as CUNY hosted its biennial Math Conference for faculty at the Graduate Center.  With a theme centered on ‘Effective Instructional Strategies’, the day-long conference featured presentations on remedial math education, technology, pedagogy, and communication.  Deborah Hughes Hallett of the University of Arizona delivered the keynote address, “Globalizing Our Classrooms”.  The program, abstracts, and presentation slides are available to peruse online.  Live tweets tagged #cunymath14 made the day extra engaging.

Beyond the presentations, faculty enthusiasm for the day was palpable.  Colleagues dispersed across a large University were able to come together around the shared passion of math instruction.  Special thanks to the planning committee: Warren Gordon (Baruch), John Verzani (CSI), G. Michael Guy (Queensborough), and John Velling (Brooklyn).

Posted in Uncategorized | 1 Comment

## Teaching a large Calculus I class – lessons learned

Last semester I taught a large section of Calculus I. There were 124 students in the class. Teaching a large class is not for everyone, but if you are so inclined, it can be a rewarding experience provided you pay attention to certain details.

Teaching a large class of over a hundred students requires a good deal of management skill. This sort of management isn’t a one-time thing like preparing lecture notes and reusing them. No, this sort of management is an integral component of teaching large classes. There’s management of the students and management of the graduate teaching assistants. If management is not your cup of tea, then it’s best to stick to the usual class maximum of 35 students.

At least one teaching assistant is indispensable, at the very least to help with monitoring exams and collecting and returning homework, quizzes, and exams. Otherwise too much class time will be spent on these sorts of administrative tasks. Liang Zhao served as teaching assistant for my Calculus I class last semester. He did a great job. The students appreciated our seamless efficiency.

There are many ways to assign responsibilities to a teaching assistant. I assigned grading of homework and quizzes (25% of the grade) to Liang. In my classes homework and quizzes are graded generously with relaxed deadlines; there is no reason why everyone cannot get a high score.

I also organized two informal recitation sessions per week. These sessions, scheduled from 8:00 to 9:00 am on Tuesdays and Thursdays, quickly became popular, especially since Liang is a good instructor. They not only helped students finish their homework but also kept them punctual: students got into the habit of arriving well before the 9:00 am class. Punctuality is a big deal in a large class; otherwise students stroll in at all times. It must be enforced through a combination of incentives and consequences, and it is an ongoing management issue.

Together, Liang and I worked hard to make this class a success, doing considerably more than what was expected of us. Our reward was that things went well. It would be very easy to spend a lot of time and still face all sorts of problems, leading to a frustrating experience. Fortunately our strategies were effective.

I graded the exams (2 tests and a final each 25% of the grade). If I didn’t do this, I wouldn’t have a good sense of what the students were learning. I got to know the students and was able to give them one-on-one attention – something that is hard to do in a large class. I am not advocating this particular strategy as it is extremely time-consuming, merely noting that I found it effective at communicating my leadership style.

Anonymity in numbers is an issue in a large class. Some interesting things occur when the students think the professor does not know Jill from Jane or Fred from Frank.

One such thing is changing test answers and asking for a re-grade on the assumption that the professor made a mistake. My tests are not multiple-choice and I give partial credit, so it is possible to overlook a correct step here and there when grading so many exams. My solution was to photocopy the exams before returning them. Yes, all 124 of them. These are the organization and management issues I was talking about; they continue throughout the semester. The plus side is that I now have a wealth of data to analyze at my convenience.

When students sit inches apart during a test, it is hard for them not to accidentally see a neighbor’s paper, and this made everyone uncomfortable. I handled the issue by using two large classrooms for exams, one monitored by me and the other by the TA, so students could sit comfortably at a respectable distance from each other. For the final exam all the students were in the auditorium, which seats 260. I thought this was better than splitting up the class.

Room size can become an issue. I had 124 students in a 130-seat classroom. If I could make one suggestion, it would be to reduce the maximum class size to 110 so as to fit comfortably in the many large classrooms that seat 130 to 160. The seats are crammed so tightly together that it is difficult to get in and out when the room is filled to capacity. I noticed that students preferred sitting on the floor and the steps to sitting so very close to each other.

This is especially important in a math class because a larger auditorium may be problematic. I don’t think there is any effective substitute for writing on the board while explaining math. A technology-enhanced lecture is good too, and I used the computer and projector often. But writing on the board is a basic strategy for teaching math. If the classroom is too large, like the 260-seat auditorium, then students in the back cannot see the board; one can only write so large.

It would be a mistake to teach a large class thinking one can do the same amount of work, or a little more, and get double teaching credits. It is more work than teaching two regular-size classes (a maximum of 70 students between them). For the administration, it is three sections for the price of two. For faculty, it frees up some in-house teaching hours for advising graduate and undergraduate students and mentoring teaching assistants. This is especially valuable for faculty with limited in-house teaching hours, whether the limitation is due to grant commitments, commuting difficulties, or something else. It could be a win-win situation for both faculty and administrators.

In my next post I will talk about the data I collected and the results of my analysis.

## Is RSA Safe?

There has been some talk in the news recently that the security provided by the RSA encryption algorithm isn’t as secure as it used to be.

RSA is an acronym standing for Rivest, Shamir, and Adleman, the individuals who designed the algorithm at MIT in 1977.

An equivalent system was developed by British mathematician Clifford Cocks in 1973, though at the time he was working for a clandestine branch of the government, and his work went unattributed for many years.

Clifford Cocks

The RSA algorithm is one of the cryptographic workhorses of the internet, helping to put the “s” in “https” on the websites we use every day.  Through ingenious means, which we won’t discuss here, it is also used to produce digital signatures, which guarantee that messages originate from their specified sources.

The article linked above, concerning looming insecurities in RSA, discusses a recent talk delivered by Alex Stamos at the annual Black Hat conference in Las Vegas.  From the article it is difficult to fathom how dire the crisis might be.  It’s said that Stamos is disconcerted by recent progress made by French mathematician Antoine Joux on the discrete logarithm problem.  There is no comment from Joux, whose word I would find more definitive.  How certain can we be that the alarm isn’t an illusion?

Those things will no doubt become clearer in the coming weeks.  But what is the discrete logarithm problem, and how does it relate to RSA?

We should start with a review of the ordinary logarithm function, invented by the Scottish wizard Napier in the 17th century. We then give a short description of modular arithmetic, define the discrete logarithm problem, and review RSA.  Finally, we’ll see how all these fit together, and why the ability to compute discrete logarithms quickly challenges current security protocols.

A logarithm is a function which takes two positive real numbers, a base $b$ together with an input $a$ and returns the number $c$ such that $b^c = a$. In notation we express this by $\log_b(a) = c$.

The fact that there is such a number $c$ is interesting in itself.  Its existence can quickly be confirmed by examining a graph of the exponential function.  The following figure is for $b = 2$, though for other values of $b$ the figure is essentially the same.  Note that for any output (on the $y$ axis) there is an input (on the $x$ axis) at which the function attains that output.  This justifies the definition of the logarithm.

The graph of the curve described by $y = 2^x$
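The defining relation $b^{\log_b(a)} = a$ can be checked numerically; a quick Python sketch using only the standard library:

```python
import math

# log_b(a) is the number c with b**c = a; math.log(a, b) computes it.
b, a = 2, 8
c = math.log(a, b)
assert abs(c - 3) < 1e-9         # log_2(8) = 3
assert abs(b**c - a) < 1e-9      # b raised to the log gives back a

# As the graph suggests, every positive output is attained by 2**x.
for target in [0.5, 1.0, 7.0, 100.0]:
    x = math.log2(target)
    assert abs(2**x - target) < 1e-6
```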

To understand the discrete logarithm, it is necessary to understand the discrete context.  Here we are not concerned with real numbers, but rather with integers (whole numbers).  In fact, we are only concerned with the numbers $\{0,1,2,\ldots,n-1\}$ for a fixed integer $n$.  We can do arithmetic in this finite realm provided we are willing to “wrap around” when our sums and products go out of scope.  The exact nature of what happens is discussed in this introductory article from the Khan Academy.  If you would like something more serious, the article by Gauss himself is not difficult, and uses (in fact introduces) all the modern notation.  Amazingly I cannot find an English edition of Disquisitiones Arithmeticae online, and so I refer the reader to the excerpts found in the collection God Created the Integers.

To give a few fast examples, we write
$5^2 \equiv 1 \mod{4}$

to mean that if we were to consider $25 = 5^2$ stones and count them out in groups of 4, in the last pile we would have a single stone.  We call 1 the residue of 25 modulo 4. Note that if we were to take 5 stones and count them out in groups of 4, then in the last pile we would have only 1 stone.  That is, the residue of 5 mod 4 is already 1.  Note also that in this case the product of the residues is the residue of the product.  In other words, using C++ notation,

$5^2 \% 4 == (5\%4)(5\%4)$ because $1 = 1\cdot 1$.
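In general the residue of a product can be computed from the residues of the factors, reducing once more at the end; a quick Python check (the loop bounds are arbitrary):

```python
# The residue of a product equals the residue of the product of residues.
# (The extra "% n" on the right matters once the product of residues
# exceeds n; in the 5*5 mod 4 example both sides already equal 1.)
for a in range(1, 50):
    for b in range(1, 50):
        for n in range(2, 12):
            assert (a * b) % n == ((a % n) * (b % n)) % n

assert 5**2 % 4 == (5 % 4) * (5 % 4) == 1
```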

In fact this is true in general, and explains many of the properties of numbers we learn in grade school.  For instance we learn that a number is divisible by 3 if and only if the sum of its digits is divisible by 3.  I will use modular arithmetic to show why this is true for the representative example of 2349. First expand using the definition of a decimal number.

$2349 = 2\cdot 10^3 + 3\cdot 10^2 + 4 \cdot 10 + 9$

Now note that $10 \equiv 1 \mod{3}$ and so the same is true for any power of 10.  Thus $2349 \equiv 2+3+4+9 \mod{3}$.

Now $2349$ is divisible by three if and only if $2349 \equiv 2+3+4+9 \equiv 0 \mod{3}$.  That is, it is equivalent to the condition that the sum of the digits is also 0 mod 3, or in other words that the sum of the digits is divisible by 3.  This gives something of the flavor of discrete arithmetic.
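The digit-sum argument is easy to automate; a small Python sketch (the helper name `digit_sum` is my own):

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

# Since 10 ≡ 1 (mod 3), a number is congruent to its digit sum mod 3.
assert digit_sum(2349) == 2 + 3 + 4 + 9 == 18
assert 2349 % 3 == digit_sum(2349) % 3
assert 2349 % 3 == 0          # 18 is divisible by 3, hence so is 2349

# The same criterion holds for every number.
for n in range(1, 10000):
    assert (n % 3 == 0) == (digit_sum(n) % 3 == 0)
```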

What should a logarithm be in the discrete context?  We can use the old definition, with an additional twist.  For numbers $b$ and $a$ in $\{0,1,\ldots,n-1\}$ define the discrete logarithm (base $b$ ) of $a$ to be the number $k$ in $\{0,1,\ldots,n-1\}$ such that $b^k \equiv a \mod{n}$.  For instance, because we know that $11^5 \equiv 10 \mod{17}$, it follows that the discrete logarithm of 10 base 11 mod 17 is 5.

Again, we have the question of whether the logarithm is well defined.  Is it the case that for any choice of $a,b$ and $n$, there exists an integer $k$ such that $b^k \equiv a \mod{n}$?  The answer is no; you should find it easy to produce a counterexample.  Also, unlike the continuous case, the discrete exponential function is not one-to-one.  This means that the uniqueness of $k$ is also an issue.
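A brute-force search in Python illustrates both the worked example and the failure of existence (the function name `discrete_log` is my own choosing; real cryptographic moduli make such a search infeasible):

```python
def discrete_log(b, a, n):
    """Smallest k in {0, ..., n-1} with b**k ≡ a (mod n), else None."""
    for k in range(n):
        if pow(b, k, n) == a:
            return k
    return None

# The worked example: 11^5 ≡ 10 (mod 17).
assert pow(11, 5, 17) == 10
assert discrete_log(11, 10, 17) == 5

# A counterexample to existence: no power of 4 is ≡ 2 (mod 5),
# since 4^k just cycles through 1, 4, 1, 4, ...
assert discrete_log(4, 2, 5) is None
```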

In group theoretic terms, the base now has to be a generator of the multiplicative group of integers mod $n$ in order for the definition of logarithm to make sense.  These details don’t matter for a rough discussion of discrete logs as they apply to cryptography.

Questions about efficient means of computing discrete logarithms arise in many cryptographic systems, but we will focus our attention on RSA.  At this point we need some account of what RSA is, for which I have written this annotated example using an IPython notebook.

After following the link and diligently reading, you now know that the crux of the RSA algorithm is the decryption step $M = C^{d} \mod{n}$.   Can we cast this as a question about a logarithm?  Recall from the RSA example that the secret parts of the above equation are $d$ and $M$.  The ciphertext $C$ is public knowledge, as is the modulus $n$.  But anyone is free to encrypt a message using any public key.  This means that we can pick $M$, and so we know that value too.  Thus the real mystery is $d$, the private exponent.

To find $d$, what we need to know is:  To what power must $C$ be raised in order to be congruent to $M$ modulo $n$?

In other words, to crack RSA we want to know the discrete logarithm of $M$ base $C$ modulo $n$.
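On a toy key the attack reduces to a brute-force search for that exponent.  The Python sketch below uses made-up parameters ($p=3$, $q=11$, $e=3$); real keys use moduli hundreds of digits long, which is exactly what makes the search infeasible:

```python
# Toy RSA key (illustration only; real moduli have hundreds of digits).
p, q = 3, 11
n = p * q                  # 33, public
e = 3                      # public exponent; gcd(e, (p-1)*(q-1)) = 1
d = 7                      # private exponent: e*d = 21 ≡ 1 (mod 20)

M = 5                      # a message we chose and encrypted ourselves
C = pow(M, e, n)           # the ciphertext is public: 5^3 mod 33 = 26
assert C == 26
assert pow(C, d, n) == M   # decryption recovers M

# The attack: search for an exponent k with C^k ≡ M (mod n).
k = next(k for k in range(1, n) if pow(C, k, n) == M)
assert k == d   # here the search recovers d itself; in general any
                # such k suffices to decrypt this message
```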

For this reason, if Joux or his colleagues ever do find a fast method for computing discrete logarithms, the current implementations of many common cryptographic systems, including systems for producing digital signatures, will become obsolete.

This is not the only way to break RSA.  In fact, it seems it should be easier to crack RSA for a particular message $M$ than to find $d$ and unravel the whole system.  To see this, imagine that the message $M$ is produced not by us but by someone communicating with the victim.  To recover $M$ we must solve for it in the equation

$C = M^e \mod{n}$,

in which it is the only unknown.  This is not a logarithm problem, but is instead the problem of discrete root extraction.  In fact this problem has its own name — it is called the RSA problem.  Obviously no practical means is yet known for solving this problem either.
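With a toy public key (invented parameters $n = 33$, $e = 3$), discrete root extraction is again just a search, and again only a sketch; brute force is hopeless at real key sizes:

```python
n, e = 33, 3   # toy public key, invented for illustration
C = 26         # an intercepted ciphertext (here, 5^3 mod 33)

# The RSA problem: find M with M^e ≡ C (mod n).
roots = [M for M in range(n) if pow(M, e, n) == C]
assert roots == [5]   # brute force recovers the unique plaintext
```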

RSA could fall because of advances in the science of number factoring.  While this has not yet led to the gelding of RSA as far as anyone is saying, still the speed with which numbers can be factored has improved in dramatic and unexpected ways.

Shortly after RSA was announced, the popular mathematics writer Martin Gardner asked Rivest, Shamir, and Adleman for an encrypted message with which he could tease his readers.  They agreed, and produced an encoded message using a 129-digit public key.  The value of $n$ in the key was: $114381625757888867669235779976146612010218296721242362562561842935706935245733897830597123563958705058989075147599290026879543541$

The prize for producing a solution was \$100.  Rivest calculated, based on mathematical technology existing at the time, that factoring this number would require 40 quadrillion years.  This figure assumed a machine capable of performing 1 billion modular multiplications per second, which seems to have been achieved at the PC level only in 2009.

As this article explains, Rivest’s figures were off by many orders of magnitude, but not because he underestimated the growth in computing power.  Rather, he was overly optimistic about innovations in factoring large numbers, in particular sophisticated variants of the quadratic sieve. This article by Pomerance outlines some of the history.

For those who are curious about the solution to Gardner’s puzzle, it may interest you to know that (as a team of hackers found in 1994)

$114381625757888867669235779976146612010218296721242362562561842935706935245733897830597123563958705058989075147599290026879543541 = 3490529510847650949147849619903898133417764638493387843990820577 \times 32769132993266709549961988190834461413177642967992942539798288533$
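Verifying the factorization is a one-line check with Python’s arbitrary-precision integers (the digits are as printed above; finding them was the hard part):

```python
# The RSA-129 challenge modulus and the factors found in 1994.
n = 114381625757888867669235779976146612010218296721242362562561842935706935245733897830597123563958705058989075147599290026879543541
p = 3490529510847650949147849619903898133417764638493387843990820577
q = 32769132993266709549961988190834461413177642967992942539798288533

assert p * q == n          # multiplying is easy; factoring took 17 years
assert len(str(n)) == 129  # hence the name RSA-129
```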

The message which was encoded read: THE MAGIC WORDS ARE SQUEAMISH OSSIFRAGE

I am indebted to Julian Brown’s book The Quest for the Quantum Computer for this anecdote.  Incidentally Brown’s book is a good starting place for reading about that other perennial threat to our online security: quantum computing.

Posted in Uncategorized | 5 Comments

## Jewish Mathematicians in Germany

A year or so ago I stumbled across Reuben Hersh’s “Under-represented Then Over-represented: A Memoir of Jews in American Mathematics” in the pages of a recent Best Writing on Mathematics volume.

That article describes the arc of Jewish mathematical history in the time of WW II and afterwards, featuring some personal recollections acquired during Hersh’s tenure as a student at the Courant Institute.  This was my first encounter with the story of the effect that the Nazi regime had on mathematical culture in Germany, perhaps best summarized by Hilbert’s famous response to Bernhard Rust’s query about the state of mathematics at Göttingen under fascism:  “There is really none anymore.”

A fascinating prequel to Hersh’s observations and memories can now be found at an exhibit on display at the Center for Jewish History on 16th Street.  Transcending Tradition: Jewish Mathematics in German-Speaking Academic Culture will be available from the time of this writing until January 2014.  Admission is free.

The exhibit tells its story in three epochs, beginning with the years before 1871, following through the days of the Wilhelmine Empire, and focusing at last on the Weimar Republic (1919-1933) and the immediate aftermath.

There are many names mentioned, but to give a sampling from each time period:

pre 1871:  Leopold Kronecker (Berlin), Rudolf Lipschitz (Breslau), Carl Gustav Jacob Jacobi (Königsberg).

1871-1919: Max Noether (Heidelberg), Felix Hausdorff (Greifswald), Hermann Minkowski (Göttingen)

1919-1933:  Richard Courant (Göttingen), Max Dehn (Frankfurt), Gábor Szegő (Königsberg)

Much of the biographical content can of course be read from home on Wikipedia, but certain facts from the display are unlikely to be encountered elsewhere.   The exhibits, with large photographs and reproductions of handwritten correspondence, offer a sense of communion that is difficult to feel over the internet.  I certainly learned some things I didn’t know before.

For instance, even as late as the mid 19th century, baptism was a prerequisite for holding an academic position in Germany.  The eventual admittance of Jews into academic institutions (as students) was as much motivated by questions of social control (e.g., the regulation of Jewish medical practitioners) as by liberal political motives.

At the end of the Weimar Republic there were 94 full professorships in mathematics in the German states, and of these 28 were occupied by Jews or scholars of Jewish descent.  After 1933, 127 mathematicians, including five women, were driven out of Germany, as a result of the Law for the Restoration of the Professional Civil Service, a Nazi ordinance with an obvious subtext.

Much of the exhibit focuses on the German Mathematical Society (DMV).  This institution was formed largely because of the efforts of Jewish mathematician Georg Cantor in Halle, around 1890.  In the Nazi era, under the leadership of Wilhelm Süss (and others), the organization was used as a political tool for the persecution of mathematicians with Jewish associations. There are issues related to the continuity of the DMV during the war which I do not fully understand.  However, the exhibition says that the society was reestablished in the French occupation zone in 1948 by Erich Kamke, who lost his professorship in 1937 because of a Jewish spouse.  Certain scholars, in particular Max Dehn, refused to rejoin.  After 1948 Süss had a change of heart and began to deliberately approach Jewish emigre mathematicians.

It is said that the first individual to appreciate the scale of the mass dismissals of German mathematicians during the Nazi period was Max Pinl, who published his findings in the Jahresbericht der DMV despite considerable opposition during the mid-to-late 1960s.

The exhibition features some interesting personal profiles.  There is a board dedicated to the Jewish graduate students of Hilbert and the oral culture of mathematics they helped to initiate at Göttingen. Orality was a distinguishing feature of the department in the first third of the 20th century.

I had been unaware of the particularly tragic circumstances in which the war placed Hausdorff.  In 1938, aged 74, facing age-related prejudice in addition to religious persecution, Hausdorff was unable to secure a position abroad, despite letters of appeal (several of them displayed) written on his behalf by figures such as Courant, Weyl, and von Neumann. He spent the duration of the war under Nazi rule.

Emmy Noether, who has a board almost to herself, was displaced.  She also had a brother, Fritz (a mathematician as well, at Breslau), who emigrated to the Soviet Union, where he was arrested in 1937 during the Stalinist persecutions and shot in 1941.

There is a storyboard describing the history of Moses Mendelssohn and his descendants. Two of his granddaughters married mathematicians, and the offspring of one of these unions was Kurt Hensel, discoverer of the p-adic numbers and namesake of the Henselian ring.

Hans Hahn, the thesis supervisor of Gödel, describes an abiding interest in philosophy, and says that he was “almost unfaithful to mathematics, so enticed was I by the charms of philosophy.” This is an interesting remark from the advisor of one of the most philosophical of modern mathematicians. Incidentally Gödel was not Jewish, though he did flee the atmosphere of Vienna in 1936 after his friend and colleague Moritz Schlick was shot dead by a pro-Nazi student.

The exhibition, which is traveling around the world (most recently it was in Chicago) is both touching and disturbing.  With free admission in a beautiful neighborhood, a visit makes a profitable use of a summer afternoon.

Posted in Uncategorized | 4 Comments

## What’s the Point of Math?

NPR made me smile again. Steven Strogatz was a guest on the game show “Ask Me Another.” Click on the link and you can listen to the entire show, or just scroll down and go straight to his segment (by the way, the show tells you how many degrees the Cornell professor is separated from Kevin Bacon!).

On the show, Strogatz says there are two types of people when it comes to math: those who say, “I don’t have a math head,” and those who say, “I’m good at math but don’t know why I need to do it.” The response to the latter, says the author of “The Joy of x,” would be this (paraphrased): You watch Michael Jordan play basketball. You listen to music. You don’t need to do those things, but you do because they enrich your life. Math is the same (if your degree or job doesn’t require math, that is). Yes, agreed. But teachers wouldn’t force me to watch Michael Jordan at school…

Math Blog readers, what are your responses to this question: What’s the point of doing math?

Posted in Uncategorized | 1 Comment

## Group theory for liberal arts

I often teach a course with the enigmatic title “Fundamentals of Mathematics I”, intended for liberal arts majors. This is usually the last encounter with math for students in non-scientific disciplines. The syllabus contains a decent amount of optional topics, so it is quite possible to tailor the material to one’s taste and professional interests. As many of us are well aware, a course like this poses its own unique challenges. Unlike calculus, where the topics are fairly standard, a “fundamentals” course must, in my view, depend much more on mathematical ideas and much less on rote computation. Of course, students need to compute, but the result of their computational efforts should be exciting and fun, rather than a more or less meaningless answer that matches the solutions page at the end of the chapter.

These are the topics I usually teach: set theory, logic, group theory, and combinatorial methods. All lend themselves to a great deal of enjoyment, where students are confronted with deep ideas (e.g. what is truth? what is counting? does infinity come in different “sizes”?). Some students feel bewildered when they discover how difficult it can be to “count”.

It is a time-tested favorite of mine to teach them about groups. But unlike a formal course in abstract algebra, I tell them briefly what a group is, how abstract notions can be fun and useful, and after showing them the cyclic groups, both infinite and finite, I proceed fairly quickly to the dihedral groups D3 and D4 (of orders 6 and 8 respectively). I construct them as the groups of rigid symmetries (rotations and reflections) of the triangle and square. D3 is such a revelation since it is the smallest group that happens to be non-commutative; and I usually spend several lectures drawing pictures and discussing composition of symmetries. It is really exciting to reveal to them how “multiplication” is not a universal idea and it can be defined as a non-commutative binary operation. After producing the multiplication tables of both D3 and D4 we set out to explore the orders of individual elements as well as the subgroup structures of each one of these groups. We emphasize the subgroups of rotations and reflections.
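For readers who like to experiment, the composition exercise can be mirrored in code, with each rigid symmetry of the triangle written as a permutation of its vertices (a Python sketch; the encoding is my own choice):

```python
# Symmetries of the triangle as permutations of vertices {0, 1, 2}:
# p[i] is where vertex i is sent.
def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

r = (1, 2, 0)   # rotation by 120 degrees
s = (0, 2, 1)   # reflection fixing vertex 0

# Non-commutativity: rotating then reflecting is not the same
# symmetry as reflecting then rotating.
assert compose(r, s) != compose(s, r)

# Closing {r, s} under composition generates all of D3: 6 symmetries.
group = {(0, 1, 2)}
while True:
    new = {compose(a, b) for a in group | {r, s} for b in group | {r, s}}
    if new <= group:
        break
    group |= new
assert len(group) == 6
```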

One of my favorite results in elementary group theory is Lagrange’s theorem: If G is a group of order n and H is a subgroup of order m, then m divides n. I take advantage of the sheer simplicity of this result and “test” it for D3 and D4. It is not magic, I tell them, it’s a theorem!… I also exploit Lagrange’s result by bringing the (finite) cyclic groups back and sharing the pleasant fact that the converse of Lagrange is true in that context: if Cn is the finite cyclic group of order n and m is any divisor of n, there exists a subgroup (necessarily cyclic) of Cn whose order is m. And yes, we “test” the truth of this and explore the consequences when n is prime.
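Both “tests” can be run mechanically.  The Python sketch below checks Lagrange’s theorem against the element orders of D3 and checks the cyclic-group converse for the cyclic group of order 12 (the choice of 12 is mine):

```python
# Element orders in D3: the identity has order 1, the two nontrivial
# rotations order 3, and the three reflections order 2.
for order in [1, 3, 3, 2, 2, 2]:
    assert 6 % order == 0      # Lagrange: every order divides |D3| = 6

# Converse for cyclic groups: C_n (integers mod n under addition)
# has a subgroup of order m for every divisor m of n.
n = 12
for m in range(1, n + 1):
    if n % m == 0:
        H = {k * (n // m) % n for k in range(m)}   # multiples of n/m
        assert len(H) == m                          # subgroup of order m
        assert all((a + b) % n in H for a in H for b in H)  # closed
```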

Many topics presented in more advanced courses in combinatorics, abstract algebra, logic, etc. can certainly be made accessible to a liberal arts audience. The trick, of course, is to explain the ideas in layman’s terms, progress to some level of formalization, and work out many enlightening examples. Teaching this course has been very satisfying indeed. My hope is and has always been to leave my students with some long-lasting interest in the ideas behind mathematics, as well as a taste of what mathematicians do.

Posted in Uncategorized | 1 Comment

## Cancer Math

In a recent New York Times op-ed piece, Angelina Jolie revealed that she had a double mastectomy to reduce her risk of breast cancer.  She had a family history of cancer and tested positive for flaws in the BRCA 1 gene.  She wrote: “My doctors estimated that I had an 87 percent risk of breast cancer …”  Her disclosure was big news, but according to literature on judgement under uncertainty, the statistics she mentioned are likely to be misinterpreted.

While never explicitly stated, the 87 percent risk is presumably the cumulative lifetime risk.  My assumption is based on a Times article published the following day: “Women who carry BRCA mutations have, on average, about a 65 percent risk of eventually developing breast cancer, as opposed to a risk of about 12 percent for most women…  Ms. Jolie wrote that the estimate for her was 87 percent.”

Twelve percent (for most women) is consistent with the familiar statistic that “one in nine” women would develop breast cancer.  This figure has become a mantra in the popular press and breast cancer screening programs, and has terrified many people.  (See this New York Times Magazine cover article entitled “Our Feel-Good War on Breast Cancer.”)  But what does it mean?  According to an article published in the New England Journal of Medicine, many younger women view this 12-percent statistic as a short-term probability and grossly overestimate the risk of breast cancer in a 10-year period.

Source: K.-A. Phillips, G. Glendon, and J. A. Knight, “Putting the risk of breast cancer in perspective.” New England Journal of Medicine, 340, 141-144 (1999).

Based on the above table, we see that 1+3=4 out of 1,000 women developed breast cancer in their 30s, and 5+8=13 out of 1,000 women during their 40s.  In other words, a woman entering her 30s has a 0.4 percent chance of breast cancer in the next 10 years.  The risk of breast cancer increases with age; a woman entering her 40s has a 1.3 percent chance of the disease in the following decade.  These figures are far smaller than the 12 percent many people were led to believe.  Breast cancer is much more common at older ages, which makes sense.  Note that cancer incidence and cancer mortality are not the same: 1+2+3+3+3+4+5+6+6=33 out of 1,000, or 3.3 percent of women will die of breast cancer by age 85.  But we also see that about six times as many women die of cardiovascular disease.  For a further discussion, Gerd Gigerenzer’s book Calculated Risks is an excellent reference.
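The arithmetic above can be spelled out in a few lines of Python (the per-decade counts are the ones quoted from the table, per 1,000 women):

```python
# Breast cancer cases per 1,000 women, by decade of age (from the table).
in_30s = 1 + 3        # 4 per 1,000
in_40s = 5 + 8        # 13 per 1,000

assert in_30s / 1000 == 0.004   # a 0.4% ten-year risk entering one's 30s
assert in_40s / 1000 == 0.013   # a 1.3% ten-year risk entering one's 40s

# Cumulative breast cancer deaths by age 85, per 1,000 women.
deaths = 1 + 2 + 3 + 3 + 3 + 4 + 5 + 6 + 6
assert deaths == 33             # 3.3%, far below the 12% incidence figure
```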

The “one in nine” statistic is rarely explained properly, and misinterpretation fuels unnecessary levels of fear.  For this reason, the general public needs to approach Ms. Jolie’s 87 percent figure with caution.  Most importantly, her situation is not representative of what a typical 37-year-old woman would face.

Now with an understanding that 87 percent is more likely to be the cumulative lifetime risk of breast cancer for a woman (with BRCA mutations) who lives past the age of 85, the next question is what action someone should take.  There is no easy answer, as it depends on an individual’s personal tolerance for risk and the costs and benefits of mastectomy.  According to a study that followed up 214 women at high risk of breast cancer who had undergone mastectomy at the Mayo Clinic in Minnesota, the incidence of breast cancer was reported to be reduced by 92 percent.  Again, we need to understand how this number was obtained.  By comparing the 214 women with their sisters who had not undergone mastectomy, about 38 were expected to develop breast cancer, but only 3 were actually observed.  In terms of relative risk reduction, $(38-3)/38 \approx 0.92$, a 92 percent reduction, which is very impressive.

However, we can present the same information in a different way.  Mastectomy prevented $38-3=35$ breast cancer cases among 214 women, which also means that $214-35=179$ women had no benefit from mastectomy.  Some medical professionals prefer to express the clinical implications of such findings in terms of the “number needed to treat.”  In this case, the number of patients who would need to be treated to prevent one bad outcome is $214/35 \approx 6$.  It has been shown that results expressed as relative risk reduction and those expressed as number needed to treat have different influences on decisions about treatment.
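Both framings come from the same three numbers, as a short Python sketch makes plain:

```python
treated = 214      # high-risk women who underwent mastectomy
expected = 38      # cases expected (based on their untreated sisters)
observed = 3       # cases actually observed

# Relative risk reduction: (38 - 3) / 38, roughly 92 percent.
rrr = (expected - observed) / expected
assert round(rrr, 2) == 0.92

# Number needed to treat: 214 women per 35 prevented cases, roughly 6.
prevented = expected - observed
assert prevented == 35
nnt = treated / prevented
assert round(nnt) == 6
```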

From the Book of Common Prayer: “we have left undone those things which we ought to have done, and we have done those things which we ought not to have done.”  One is frequently faced with such choices because most actions in our lives do not guarantee absolute certainty.  The Mayo Clinic study demonstrates that among the high-risk group, most women (5 in 6) would not develop breast cancer even if they kept their breasts, and a few (1 in 71) would develop breast cancer even if they had their breasts removed.  For educators, I think it is important to ensure that students appreciate the meaning of probability and are able to describe risk in a variety of ways, so that they are empowered to make informed decisions for themselves.

Posted in Uncategorized | 1 Comment

## Sloppy Math and the Austerity Debate

In 2010, two Harvard economists, Carmen Reinhart and Kenneth Rogoff, circulated a paper demonstrating that GDP growth is negatively correlated with public debt (the debt-to-GDP ratio, to be more precise).  Their paper was highly influential and has been used to support the global austerity agenda; for instance, it was cited in the “Paul Ryan Budget,” p. 80.  The problem is that the paper is rife with basic math errors, as pointed out in a recent paper by Thomas Herndon, Michael Ash and Robert Pollin of the University of Massachusetts, Amherst.  On April 19, Paul Krugman wrote an op-ed piece entitled “The Excel Depression” for the New York Times publicizing an embarrassing Excel error in Reinhart and Rogoff’s spreadsheet.  (You can see the Excel screen capture here; instead of AVERAGE(L30:L49), Reinhart and Rogoff entered AVERAGE(L30:L44), excluding Australia, Austria, Belgium, Canada, and Denmark from their calculation.)  A week later, Reinhart and Rogoff contributed their own op-ed piece; they suggested that their mistake was inconsequential and maintained their claim that “growth is about 1 percentage point lower when debt is 90 percent or more of gross domestic product.”  The chart below, from their online appendix, was offered to make that point.

Source: Reinhart and Rogoff’s New York Times online appendix.

You can find extensive discussions of the Reinhart-Rogoff controversy on the internet.  Here I just want to highlight some of the technical issues, which involve merely elementary arithmetic.  In fact, Thomas Herndon, a PhD candidate, discovered RR’s Excel error through a course assignment in which students were supposed to replicate the findings of a famous paper in order to learn econometric techniques.  As RR’s work essentially involves calculating means and medians, Herndon’s professor almost didn’t let him take on the project.  (See this New York Magazine article and this Colbert Report interview.)
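The mechanics of the spreadsheet error itself are easy to reproduce: averaging L30:L44 instead of L30:L49 silently drops the last five rows.  The growth figures below are hypothetical placeholders, not RR’s actual country data; only the mechanism of the error is illustrated.

```python
# Twenty hypothetical growth figures standing in for cells L30:L49;
# the real values are RR's country averages and are not reproduced here.
growth = [3.1, 2.4, -0.2, 1.8, 2.9, 0.7, 1.5, 2.2, 3.0, 1.1,
          0.4, 2.6, 1.9, 2.8, 0.9,        # ...through row L44
          2.3, 1.7, 3.2, 2.1, 2.7]        # rows L45:L49 -- the dropped cells

full_mean = sum(growth) / len(growth)     # AVERAGE(L30:L49), the intended formula
truncated = sum(growth[:15]) / 15         # AVERAGE(L30:L44), the formula RR typed
print(round(full_mean, 3), round(truncated, 3))   # the last five rows move the mean
```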

A casual reader might get the impression of a downward slide in GDP growth from the chart above, but a careful reader will notice that the chart compares medians and arithmetic averages.  Students who have taken an introductory statistics course know that medians and means (often loosely referred to as averages) can be quite different if the distribution is skewed.  Furthermore, means and medians without an accompanying measure of how spread out the data are (such as the standard deviation or interquartile range) provide incomplete information.
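The point about skewed distributions is easy to demonstrate: a single outlier moves the mean far more than the median.  A minimal sketch with made-up growth figures:

```python
from statistics import mean, median

growth = [1.0, 1.5, 2.0, 2.5, 3.0]    # symmetric sample
print(mean(growth), median(growth))   # 2.0 2.0 -- they agree

skewed = [1.0, 1.5, 2.0, 2.5, 15.0]   # one extreme observation
print(mean(skewed))                   # 4.4 -- pulled up by the outlier
print(median(skewed))                 # 2.0 -- unchanged
```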

I wrote to Professor Ash and he kindly supplied additional information.  He and Professor Pollin published a response to Reinhart and Rogoff in the Times, with a comprehensive technical supplement.  They also made the data public to allow a close examination.  To illustrate the effect of Reinhart and Rogoff’s Excel error, one can make a comparison between the New York Times chart and a corrected one based on the inclusion of Australia, Austria, Belgium, Canada, and Denmark.  This inclusion alone increases median GDP growth by 0.3 percentage points in the 90 percent and above public debt/GDP category.

Median figures for the blue line are from the NY Times; median figures for the red line are from Table 1 of the technical supplement by Ash and Pollin, “RR calculation method but with corrected spreadsheet.” The Excel error alone is responsible for a difference of 1.9-1.6=0.3 percentage points in RR’s above-90% debt/GDP category.

It is unclear why RR excluded available data from earlier years for Australia, Canada and New Zealand.  Including these three countries further increases median GDP growth in RR’s highest debt/GDP category to 2.5 percent, which is only 0.4 percentage points lower than in the next-highest debt/GDP category.

Median figures for the blue line are from the NY Times; median figures for the red line are from Table 1 of the technical supplement by Ash and Pollin, “recalculation with both corrected spreadsheet calculations and inclusion of Australia, Canada and New Zealand early years.” The difference in median GDP growth between the 60-90% category and the above-90% category is only 2.5-2.9=-0.4 percentage points.

While most economists acknowledge some correlation between high debt and low GDP growth, the two charts above illustrate that Reinhart and Rogoff’s claim that “growth is about 1 percentage point lower when debt is 90 percent or more of gross domestic product” is based on sloppy math and is unsubstantiated.

In the Times online appendix, Reinhart and Rogoff stated that they “gave significant weight to the median estimates, precisely because they reduce the problem posed by data outliers.”  Yet two paragraphs later, when reporting the findings of a paper they published in 2012 (joined by Vincent Reinhart), they gave only the means: they compared their own mean over 1800 to 2011, 2.3 percent, to the mean of Herndon et al. over 1945 to 2009, 2.2 percent.  If we are going to cherry-pick numbers, let us compare the tainted result that Reinhart and Rogoff published in the Times with the means from 2000 to 2009 only.  Contrary to Reinhart and Rogoff’s central claim, GDP growth in the over-90-percent public debt/GDP category actually outperformed GDP growth in the 60 to 90 percent category.

In this chart, the median figures for the blue line are again from the NY Times, and the mean figures are from Table 4 of the technical supplement by Ash and Pollin. Data from 2000 to 2009 contradict Reinhart and Rogoff’s claim.

But the above chart is not particularly meaningful, because the standard deviation of each GDP growth figure (as shown in Ash and Pollin’s supplement) is at least 0.3.  A more responsible way to present the means is the following graph, taken from their supplement.

Source: technical supplement by Ash and Pollin, Figure 2.

People who have working experience with data know that a mean or a median conceals a lot of information.  In this case, it is evident that there is a wide range of economic performance outcomes within each category.  Using the file containing data from 1946 to 2009 (the basis of the New York Times line chart above), one can make a scatterplot.  It is apparent that the relationship between public debt and GDP growth varies significantly.

Source: the file RR-processed.dta posted on the U Mass website.  The working spreadsheet was provided by RR; the U Mass group corrected errors and cleaned up the data.
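The within-category spread can be quantified as well as plotted.  Below is a sketch using pandas with made-up rows in the same shape as the corrected country-year data; the real analysis would read the posted Stata file, and every value and column name here is my own invention.

```python
import pandas as pd

# Made-up country-year rows; the real analysis would read the posted file,
# e.g. pd.read_stata("RR-processed.dta"), whose columns should be inspected first.
df = pd.DataFrame({
    "debt_ratio": [25, 45, 70, 95, 110, 35, 80, 130, 55, 92],
    "growth":     [3.2, 2.8, 2.5, -1.0, 4.1, 3.5, 1.9, 2.2, 2.6, 0.5],
})
bins = [0, 30, 60, 90, float("inf")]
labels = ["0-30%", "30-60%", "60-90%", "90%+"]
df["category"] = pd.cut(df["debt_ratio"], bins=bins, labels=labels)

# The spread within each category is what a lone mean or median conceals.
summary = df.groupby("category", observed=True)["growth"].agg(["mean", "median", "std"])
print(summary)
```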

By now you will have realized that the same data can be graphed in different ways, leading to vastly different impressions.  The Reinhart-Rogoff affair offers educators a great opportunity to introduce many other quantitative reasoning topics, such as which measure of central tendency to use, the fact that correlation does not imply causation (does high debt cause low growth, or the other way around?), and so on.  It is quite feasible to ask students to perform their own analyses and make their own inferences, and I strongly encourage you to do so.

## Analysis of a Calculus Test – Part 2

Continuing the analysis (over-analysis, perhaps) of the test, I began to wonder how students performed on the limits portion (36 points) of the 100-point test compared with the formula-driven derivatives portion (50 points). Certainly, the derivative is defined in terms of limits.

I have a particular fondness for limits because the concept is an attempt to understand infinity. I like to teach limits using an informal, visual approach first, followed by the standard techniques for finding limits. I end with the formal (epsilon-delta) definition and an indication of why it is needed. Teaching limits slowly and thoroughly is an opportunity to bring in some history and philosophy through, for example, Zeno’s paradoxes.

It would be good to devote a couple of classes to the epsilon-delta definition and formal proofs, but I’m not sure the broad audience in a standard calculus class is ready for it. It seems that proofs are being reduced or eliminated entirely in all the lower-division math courses, but this is a topic for a future post.

The overall test average was 70%. The average on the limits portion of the test was 69%, whereas the average on the derivatives portion was 76%. The students seem to be performing better on derivatives than on limits. The figure below also shows this.

I like to see the histogram skewed to the left. I don’t see any reason for the histogram to have a normal distribution in a Calculus course (another post).

Now, is this a function of my tests? It is possible that I put harder limit problems than derivative problems on the test. It would be interesting to know whether other calculus instructors get similar results.

Another thing I noticed is how poorly students do on finding the derivative using the definition of the derivative. I put two such problems on the test.

The class average on these was only 52%, despite my best efforts at teaching this particular topic and solving more homework problems than usual on the board. The distribution below shows that a lot of students scored less than 50% on these two problems. In fact, there were a few zeros.

This is not a distribution I like to see. To me it says the students aren’t learning what I would like them to learn.

However, the correlation between the limits portion of the test and the definition-of-the-derivative portion is only 0.41. This low correlation says something useful: students who don’t know how to compute the derivative using the formal definition may be confused by the concept and notation of functions, not necessarily by limits. This could help in teaching this topic in a different manner.
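For readers who want to reproduce this kind of calculation, the Pearson correlation can be computed by hand in a few lines. The scores below are hypothetical, so the result will differ from the 0.41 reported above:

```python
from math import sqrt

# Hypothetical per-student percentage scores on the two portions of the test.
limits_scores = [95, 60, 72, 40, 88, 55, 70, 30, 90, 65]
defn_scores   = [80, 30, 75, 50, 60, 20, 65, 45, 85, 40]

n = len(limits_scores)
mx = sum(limits_scores) / n
my = sum(defn_scores) / n

# Pearson r: covariance divided by the product of the standard deviations.
cov = sum((x - mx) * (y - my) for x, y in zip(limits_scores, defn_scores))
sx = sqrt(sum((x - mx) ** 2 for x in limits_scores))
sy = sqrt(sum((y - my) ** 2 for y in defn_scores))
r = cov / (sx * sy)
print(round(r, 2))   # a moderate positive correlation
```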

I must say this is not entirely unexpected. There are a lot of things we suspect are true based on experience and folklore. It is helpful when the data confirms what we think is true, as it does in this case.

Posted in Uncategorized | Tagged , | 2 Comments