Tuesday, September 13, 2016

Then and Now: Devlin’s Angle Turned Twenty This Year

Devlin’s Angle turned 20 this year. The first post appeared on January 1, 1996, as part of the MAA’s move from print to online. I was the editor of the MAA’s regular print magazine MAA FOCUS at the time, continuing to act in that capacity until December 1997. (See the last edition of MAA FOCUS that I edited here.)

Keith Devlin at a mathematical exposition summit in Oregon in 1997. L to R: Ralph Abraham (Univ of California at Santa Cruz), Devlin, Roger Penrose (Univ of Oxford, UK), and Ivars Peterson (past MAA Director of Publications for Journals and Communications).

One of the innovations I made when I took over as MAA FOCUS editor in September 1991 was the inclusion of an editorial (written by me) in each issue. Though my ten-times-a-year essays were very much my own personal opinion, they were subject to editorial control by the organization's Executive Director, supported by an MAA oversight committee, both of which had approved my suggestion to do this. Over the years, the editorials generated no small amount of controversy, sometimes over the content of a particular editorial, and other times over the more general principle of whether an editor's personal opinion had a proper place in a professional organization's newsletter.

As to the latter issue, I am not sure anyone’s views changed over the years of my editorial reign, but the consensus at MAA Headquarters was that it did result in many more MAA members actually picking up MAA FOCUS when it arrived in the mail and reading it. That was why I was asked to write a regular essay for the new MAA Online. Though blogs and more generally social media were still in the future, the MAA leadership clearly had it right in thinking that an online newsletter was very much an organ in which informed opinion had a place.

And so Devlin’s Angle was born. When I realized recently that the column turned twenty this year — in its early days we thought of it very much as an online “column”, with all that entailed in the world of print journalism — I was curious to remind myself what topic I chose to write about in my very first post.

Back then, I would have needed to explain to my readers that they could click on the highlighted text in that last sentence to bring up that original post. For the World Wide Web was a new resource that people were still discovering, with 1995-96 seeing its growth in academia. Today, of course, I can assume you have already looked at that first post. The words I wrote then (when I might have used the term “penned”, even though I typed them at a computer keyboard) provide an instant snapshot of how the present-day digital world we take for granted looked back then.

A mere twenty years ago.

Monday, August 1, 2016

Mathematics and the End of Days

A scene from Zero Days, a Magnolia Pictures release. Photo courtesy of Magnolia Pictures

The new documentary movie Zero Days, written and directed by Alex Gibney, is arguably the most important movie of the present century. It is also of particular relevance to mathematicians, for its focus is the degree to which mathematics has enabled us to build a world in which a few algorithms could wipe out all human life within a few weeks.

In theory, we have all known this since the mid 1990s. As the film makes clear however, this is no longer just a hypothetical issue. We are there.

Ostensibly, the film is about the creation and distribution of the computer virus Stuxnet, which in 2010 caused a number of centrifuges in Iran’s nuclear program to self-destruct. And indeed, for the first three-quarters of the film, that is the main topic.

Most of what is portrayed will be familiar to anyone who followed that fascinating story as it was revealed by a number of investigative journalists working with commercial cybersecurity organizations. What I found a little odd about the treatment, however, was the degree to which the U.S. government intelligence community appeared to have collaborated with the film-makers, to all intents and purposes confirming on camera that, as was widely suspected at the time but never admitted, Stuxnet was the joint work of the United States and Israel.

The reason for the unexpected degree of openness becomes clear as the final twenty minutes of the movie unfold. Having found themselves facing the very real possibility that small pieces of computer code could constitute a human Doomsday weapon, some of the central players in contemporary cyberwarfare decided it was imperative that there be an international awareness of the situation, hopefully leading to global agreement on how to proceed. As one high ranking contributor notes, awareness that global nuclear warfare would (as a result of the ensuing nuclear winter) likely leave no human survivors, led to the establishment of an uneasy, but stable, equilibrium, which has lasted from the 1950s to the present day. We need to do the same for cyberwarfare, he suggests.

Mathematics has played a major role in warfare for thousands of years, going back at least to around 250 BCE, when Archimedes of Syracuse designed a number of weapons used to fight the Romans.

In the 1940s, the mathematically-driven development of weapons reached a terrifying new level when mathematicians worked with physicists to develop nuclear weapons. For the first time in human history, we had a weapon that could bring an end to all human life.

Now, three-quarters of a century later, computer engineers can use mathematics to build cyberwarfare weapons that have at least the same destructive power for human life.

What makes computer code so very dangerous is the degree to which our lives today are heavily dependent on an infrastructure that is itself built on mathematics. Inside most of the technological systems and devices we use today are thousands of small solid-state computers called Programmable Logic Controllers (PLCs), which make decisions autonomously, based on input from sensors.

What Stuxnet did was embed itself into the PLCs that controlled the Iranian centrifuges and cause them to speed up well beyond their safe range to the point where they simply broke apart, all the while sending messages to the engineers in the control room that the system was operating normally.
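The essential trick described above is the decoupling of a system's actual state from the state its operators are shown. A toy simulation makes this concrete (all names and numbers here are invented for illustration; they are not Stuxnet's actual parameters or behavior):

```python
# Toy model of a compromised controller: it drives the device well past
# its safe operating range while the telemetry it sends to the control
# room continues to report a nominal value.

SAFE_MAX_RPM = 1200   # invented safe operating limit
NOMINAL_RPM = 1000    # what the operators expect to see

def compromised_controller(steps):
    actual = NOMINAL_RPM
    log = []
    for _ in range(steps):
        actual = min(actual * 1.1, 2 * SAFE_MAX_RPM)  # keep speeding up
        reported = NOMINAL_RPM                        # replay a "normal" reading
        log.append((actual, reported))
    return log

log = compromised_controller(10)
assert log[-1][0] > SAFE_MAX_RPM              # device is out of its safe range
assert all(r == NOMINAL_RPM for _, r in log)  # the operators never see it
```

The point of the sketch is only that the two channels, physical state and reported state, are independent pieces of code; once an attacker controls the PLC, nothing forces them to agree.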

Imagine now a collection of similar pieces of code that likewise cause critical systems to fail: electrical grids, traffic lights, water supplies, gas pipeline grids, hospitals, the airline networks, and so on. Even your automobile – and any other engine-driven vehicle – could, in principle, be completely shut off. There are PLCs in all of these devices and networks.

In fact, imagine that the damage could be inflicted in such a catastrophic and interconnected way that it would take weeks to bring the systems back up again. With no electricity, water, transportation, or communications, millions of people would begin to die within days, starting with thousands of airplanes, automobiles, and trains crashing, and soon thereafter doubtless accompanied by major rioting around the world.

To be sure, we are not at that point, and the difficulty a malicious nation would face in bringing down many different systems at once would be considerable – though the degree to which those systems are interdependent erodes that “safety” factor to some extent. Moreover, when autonomous code gets released, it tends to spread in many directions, as every computer user discovers sooner or later. So the perpetrating nation might end up being destroyed as well.

But Stuxnet showed that such a scenario is a realistic, if at present remote, possibility. (Not just Stuxnet, but the Iranian response. See the movie to learn about that.) If you can do it once (twice?), then you can do it. The weapon is, after all, just a mathematical structure; a piece of code. Designing it is a mathematical problem. Unlike a nuclear bomb, the mathematician does not have to hand over her results to a large, well-funded organization to build the weapon. She can create it herself at a keyboard.

That raw power has been the nature of mathematics since our ancestors first began to develop the subject several thousand years ago. Those of us in the mathematics profession have always known that. It seems we have now arrived at a point where that power has reached a new level, certainly no less awesome than nuclear weapons. Making a wider audience more aware of that power is what Gibney’s film is all about. It’s not that we face imminent death by algorithm. Rather that we are now in a different mathematical era.

Friday, July 15, 2016

What Does the UK Brexit Math Tell Us?

The recent (and in many respects ongoing) Brexit vote in the United Kingdom provides a superb example of the poor use of mathematics. Regardless of your views on the desirability or otherwise of the UK remaining a member of the European Union (an issue on which this column is appropriately agnostic), for a democracy to decide a complex issue on the basis of a single number displays a woeful misunderstanding of numbers and mathematics.

Whenever there is an issue that divides a population more or less equally, making a decision based on a popular vote certainly provides an easy decision, but in terms of accurately reflecting “the will of the people”, you might just as well save the effort and money it costs to run a national referendum and decide the issue by tossing a coin—or by means of a penalty shootout if you want to provide an illusion of human causality.

Politicians typically turn to popular votes to avoid the challenge of wrestling with complex issues and having to accept responsibility for the decision they make, or because they believe the vote will turn out in a way that supports a decision they have already made. Yet with a modicum of number sense, and a little more effort, it is possible to take advantage of the power that numbers and mathematics offer, and arrive at a decision that actually can be said to “reflect the will of the people”.

The problem with reducing any vaguely complex situation to a single number is that you end up with some version of what my Stanford colleague Sam Savage has referred to as the Flaw of Averages. At the risk of over-simplifying a complex issue (and in this of all articles I am aware of the irony here), the problem is perhaps best illustrated by the old joke about the statistician whose head is in a hot oven and whose feet are in a bucket of ice who, when asked how she felt, replies, “On average I am fine.”

Savage takes this ludicrous, but in actuality all-too-common, absurdity as the stepping-off point for using the power of modern computers to run large numbers of simulations to better understand a situation and see what the best options may be. (This kind of approach is used by the US Armed Forces, who run computer simulations of conflicts and possible future battles all the time.)
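The Flaw of Averages is easy to reproduce in a few lines of simulation. In this sketch (the scenario and all the numbers are invented for illustration, in the spirit of Savage's examples), a vendor with fixed capacity computes profit at the average demand, then averages profit over the whole demand distribution, and gets two different answers:

```python
import random

random.seed(0)

CAPACITY = 100   # units the vendor can supply
PRICE = 5.0      # profit per unit sold

def profit(demand):
    # Demand beyond capacity earns nothing, so profit is capped.
    return PRICE * min(demand, CAPACITY)

demands = [random.uniform(50, 150) for _ in range(100_000)]
avg_demand = sum(demands) / len(demands)   # roughly 100

profit_at_average = profit(avg_demand)     # "plan on the average": about 500
average_profit = sum(map(profit, demands)) / len(demands)  # the true expectation

# The single-number summary systematically overstates the outcome,
# because the capacity cap bites on the high-demand days but nothing
# compensates on the low-demand days.
assert profit_at_average > average_profit
```

The gap between the two numbers is exactly what a decision based on one average would miss; running the simulation over the whole distribution is what reveals it.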

A simpler way to avoid the Flaw of Averages that is very common in the business world is the well-known SWOT analysis, where instead of relying on a single number, a team faced with making a decision lists issues in four categories: strengths, weaknesses, opportunities, and threats. To make sense of the resulting table, it is not uncommon to assign numbers to each category, which opens the door to the Flaw of Averages again, but with four numbers rather than just one, you get some insight into what the issues are.

Notice I said “insight” there; not “answer”. For insight is what numbers can give you. Applications of mathematics in the natural sciences and engineering can give outsiders a false sense of the power of numbers to decide issues. In science (particularly physics and astronomy) and engineering, (correctly computed) numbers essentially have to be obeyed. But that is almost never the case in the human or social realm.

When it comes to making human decisions, including political decisions, the power of numbers is even less reliable than the expensively computed numbers that go into producing the daily weather forecast. And surely, no politician would regard the weather forecast as being anything more than a guide—information to help make a decision.

One of mathematicians’ favorite examples of how single numbers can mislead is known as Simpson’s Paradox, in which an average can indicate the exact opposite of what the data actually says.

The paradox gets its name from the British statistician and civil servant Edward Simpson, who described it in a technical paper in 1951, though the issue had been observed earlier by the pioneering British statistician Karl Pearson in 1899. (Another irony in this story is that the British actually led the way in understanding how to make good use of statistics, obtaining insights the current UK government seems to have no knowledge of.)

A famous illustration of Simpson’s Paradox arose in the 1970s, when there was an allegation of gender bias in graduate school admissions at the University of California at Berkeley. The fall 1973 figures showed that of the 8,442 men and 4,321 women who applied, 44% of men were admitted but only 35% of women. That difference is certainly too great to be due to chance. But was there gender bias? On the face of it, the answer is a clear “Yes”.

In reality, however, when you drill down just one level into the data, from the school as a whole to the individual departments, you discover that, not only was there no gender bias in favor of men, there was in actuality a statistically significant bias in favor of women. The School was going out of its way to correct for an overall male bias in the student population. Here are the figures.


In Departments A, B, D, and F, a higher proportion of women applicants was admitted, in Department A significantly so.
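The reversal is easy to verify directly. The sketch below uses the six-department figures as commonly quoted from the Bickel, Hammel and O'Connell (1975) study of the Berkeley data (treat the exact counts as approximate; consult the paper for the definitive table):

```python
# (applicants, admitted) for men and for women in the six largest
# departments, per Bickel, Hammel & O'Connell (1975).
departments = {
    "A": ((825, 512), (108, 89)),
    "B": ((560, 353), (25, 17)),
    "C": ((325, 120), (593, 202)),
    "D": ((417, 138), (375, 131)),
    "E": ((191, 53), (393, 94)),
    "F": ((373, 22), (341, 24)),
}

def rate(applied_admitted):
    applied, admitted = applied_admitted
    return admitted / applied

men_total = [sum(x) for x in zip(*(m for m, w in departments.values()))]
women_total = [sum(x) for x in zip(*(w for m, w in departments.values()))]

# Aggregated over all six departments, men appear favored...
assert rate(men_total) > rate(women_total)

# ...yet within departments A, B, D, and F, women were admitted
# at a higher rate than men.
for d in "ABDF":
    men, women = departments[d]
    assert rate(women) > rate(men)
```

This is Simpson's Paradox in miniature: the department-level rates and the pooled rate point in opposite directions, because men and women applied to departments with very different admission rates.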

There was certainly a gender bias at play, but not on the part of University Admissions. Rather, as a result of problems in society as a whole, women tended to apply to very competitive departments with low rates of admission (such as English), whereas men tended to apply to less-competitive departments with high rates of admission (such as Engineering). We see a similar phenomenon in the recent UK Brexit vote, though there the situation is much more complicated. British citizens, politicians, and journalists who say that the recent referendum shows the “will of the people” are, either through numerically informed malice or basic innumeracy, plain wrong. Just as the UC Berkeley figures did not show an admissions bias against women (indeed, there was a bias in favor of women), so too the Brexit referendum does not show a national will for the UK to leave the EU.

Britain leaving the EU may or may not be their best option, but in making that decision the government would do well to drill down at least one level, as did the authorities at UC Berkeley. When you do that, you immediately find yourself with some much more meaningful numbers. Numbers that tell more of the real story. Numbers on which elected representatives of the people can base an informed discussion as how best to proceed—which is, after all, what democracies elect governments to do.

Much of that “one level down” data was collected by the BBC and published on its website. It makes for interesting reading.

For instance, it turned out that among voters aged 18–24, a massive 73% voted to remain in the EU, as did over half of those aged 25–49. (See Table.) So, given that the decision was about the future of the UK, the result seems to provide a strong argument to remain in the EU. Indeed, it is only among voters 65 or older that you see significant numbers (61%) in favor of leaving. (Their voice matters, of course, but few of them will be alive by the time any benefits from an exit may appear.)

Source: http://www.bbc.com/news/uk-politics-36616028

You see a similar Simpson’s Paradox effect when you break up the vote by geographic regions, with London, Scotland, and Northern Ireland strongly in favor of remaining in the EU (Scotland particularly so).

It’s particularly interesting to scroll down through the long chart in the section headed “Full list of every voting area by Leave”, which is ordered by decreasing Leave vote, with the highest Leave vote at the top. I would think that range of numbers is extremely valuable to anyone in government.

There is no doubt that the British people have a complex decision to make, one that will have a major impact on the nation’s future for generations to come. Technically, I am one of the “British people,” but having made the US my home thirty years ago, I long ago lost my UK voting rights. My interest today is primarily that of a mathematician who has made something of a career arguing for improved public understanding of the sensible use of my subject, and railing against the misuse of numbers.

My emotional involvement today is in the upcoming US presidential election, where there is also an enormous amount of misuse of mathematics, and many lost opportunities where the citizenry could take advantage of the power numbers provide in order to make better decisions.

But for what it’s worth, I would urge the citizens of my birth nation to drill down one level into the referendum data. For what you have is a future-textbook example of Simpson’s Paradox (albeit with many more dimensions of variation). To my mathematician’s eye (trained as such in the UK, I might add), the referendum provides very clear numerical information that enables you to form a well-informed, reasoned decision as to how best to proceed.

Deciding between the “will of the older population” and the “will of the younger population” is a political decision. So too is deciding between “the will of London, Scotland, and Northern Ireland” and “the will of the remainder of the UK”. What would be mathematically irresponsible, and to my mind politically and morally irresponsible as well, would be to make a decision based on a single number. Single numbers rarely make decisions for us. Indeed, single numbers are usually as likely to mislead as to help. A range of numbers, in contrast, can provide valuable data that can help us to better understand the complexities of modern life, and make better decisions.

We humans invented numbers and mathematics to understand our world (initially physical and later social), and to improve our lives. But to make good use of that powerful, valuable gift from our forebears, we need to remember that numbers are there to serve us, not the other way round. Numbers are just tools. We are the ones with the capacity to make decisions.

* A version of this blog post was also published on The Huffington Post.

Tuesday, June 7, 2016

Infinity and Intuition

On May 30, Gary Antonick’s always interesting Numberplay section in the New York Times featured a contribution by Berkeley mathematician Ed Frenkel on the difficulties the human mind can encounter when trying to come to grips with infinity. If you have not yet read it, you should.

Infinity offers many results that are at first counter-intuitive. A classic example is Hilbert's Hotel, which has infinitely many rooms, each one labeled by a natural number printed on the door: Room 1, Room 2, Room 3, etc., all the way through the natural numbers. One night, a traveler arrives at the front desk only to be told by the clerk that the hotel is full. "But don't worry, sir," says the clerk, "I just took a mathematics course at my local college, and so I know how to find you a room. Just give me a minute to make some phone calls." And a short while later, the traveler has his room for the night. What the clerk did was ask every guest to move to the room whose number was one greater than that of their current room. Thus, the occupant of Room 1 moved into Room 2, the occupant of Room 2 into Room 3, etc. Everyone moved room, no one was ejected from the hotel, and Room 1 became vacant for the newly arrived guest.
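The clerk's trick is just the shift map n ↦ n+1 applied to room numbers. A finite sketch captures the mechanics (any finite prefix of the hotel behaves the same way; only the infinite hotel has no "last guest" to worry about):

```python
# Model the first N rooms of Hilbert's Hotel; every room starts occupied.
N = 1000
occupants = {room: f"guest {room}" for room in range(1, N + 1)}

# Each guest moves from room n to room n + 1. (In our finite model the
# guest in room N moves "off the edge" into room N + 1; in the real
# hotel there is no edge.)
shifted = {room + 1: guest for room, guest in occupants.items()}
shifted[1] = "new traveler"

assert shifted[1] == "new traveler"    # Room 1 is now free for the traveler
assert shifted[2] == "guest 1"         # the former occupant of Room 1 moved up
# No previous guest was ejected: everyone reappears in some room.
assert set(occupants.values()) <= set(shifted.values())
```

The counter-intuitive part is not the code, of course; it is that for an infinite hotel the shift map is a bijection from the rooms onto the rooms numbered 2 and above, so "full" does not mean "no room".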

This example is well known, and I expect all regular readers of MAA Online will be familiar with it. But I expect many of you will not know what happens when you step up one level of infinity. No sooner have you started to get the hang of the countable infinity (cardinality aleph-0) than you encounter the first uncountable infinity (cardinality aleph-1), and you find there are more surprises in store.

One result that surprised me when I first came across it concerns trees. Not the kind that grow in the forest, but the mathematical kind, although there are obvious similarities, reflected in the terminology mathematicians use when studying mathematical trees.

A tree is a partially ordered set (T,<) such that for every member x of T, the set {y in T : y < x} of elements below x in the tree is well ordered. This means that the tree has a preferred direction of growth (often represented as upwards in diagrams), and branching occurs only in the upward direction. It is generally assumed that a tree has a unique minimum element, called the root. (If you encounter a tree without such a root, you can simply add one, without altering the structure of the remainder of the tree.)

Since each element of a tree lies at the top of a unique well ordered set of predecessors, it has a well defined height in the tree - the ordinal number of the set of predecessors. For each ordinal number k, we can denote by T_k the set of all elements of the tree of height k. T_k is called the k'th level of T. T_0 consists of the root of the tree, T_1 is the set of all immediate successors of the root, etc.

Thus, the lower part of a tree might look something like this:

(It could be different. There is no restriction on how many elements there are on each level, or how many successors each member has.)

A classical result of set theory, sometimes called König's Lemma, says that if T is an infinite tree, and if each level T_n, for n a natural number, is finite, then T has an infinite branch, i.e., an infinite linearly ordered subset.

It's easy to prove this result. You define a branch {x_n : n a natural number} by recursion. To start, you take x_0 to be the root of the tree. Since the tree is infinite, but T_1 is finite, there is at least one member of T_1 that has infinitely many elements above it. Let x_1 be one such element of T_1. Since x_1 has infinitely many elements above it and yet only finitely many successors on T_2, there is at least one successor of x_1 on T_2 that has infinitely many elements above it. Let x_2 be such an element of T_2. Now define x_3 in T_3 analogously so it has infinitely many elements of the tree above it, and so on. This simple process clearly defines an infinite branch {x_n : n a natural number}.
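The recursion in that proof can be sketched in code, given an oracle that tells us which nodes have infinitely many elements above them. (For a genuinely infinite tree such an oracle is not in general computable; the toy tree below is chosen so that the oracle is a one-line test.)

```python
def koenig_branch(root, children, has_infinite_subtree, depth):
    """Follow the proof of König's Lemma: at each level, step to a
    child that still has infinitely many elements above it."""
    branch = [root]
    node = root
    for _ in range(depth):
        node = next(c for c in children(node) if has_infinite_subtree(c))
        branch.append(node)
    return branch

# Toy tree: nodes are bit strings. The all-zeros spine continues forever,
# while any node containing a "1" has no children once it reaches length 5,
# so every such side branch is finite.
def children(s):
    if "1" in s and len(s) >= 5:
        return []
    return [s + "0", s + "1"]

def has_infinite_subtree(s):
    return "1" not in s   # only the zero spine sits below infinitely many nodes

branch = koenig_branch("", children, has_infinite_subtree, 6)
assert branch == ["", "0", "00", "000", "0000", "00000", "000000"]
```

Every level of this tree is finite, the tree is infinite, and the construction duly finds the (here unique) infinite branch, one level at a time, exactly as in the proof.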

Having seen why König's Lemma holds, it's tempting to argue by analogy that if you have an uncountable tree T (i.e., a tree whose cardinality is at least aleph-1) and if every level T_k, for k a countable ordinal, is countable, then T has an uncountable branch, i.e., a linearly ordered subset that meets level T_k for every countable ordinal k.

But it turns out that this cannot be proved. It is possible to construct an uncountable tree, all of whose levels T_k, for k a countable ordinal, are countable, for which there is no uncountable branch. Such trees are called Aronszajn trees, after the Polish mathematician Nachman Aronszajn, who first constructed one.

Here is how to construct an Aronszajn tree. The members of the tree are strictly increasing (finite and countably transfinite), bounded sequences of rational numbers. The tree ordering is sequence extension. It is immediate that such a tree could not have an uncountable branch, since its limit (more precisely, its set-theoretic union) would be an uncountable strictly increasing sequence of rationals, contrary to the fact that the rationals form a countable set.

You build the tree by recursion on the levels. T_0 consists of the empty sequence. After T_k has been constructed, you get T_(k+1) by taking each sequence s in T_k and adding in every possible extension of s to a strictly increasing (k+1)-sequence of rationals. That is, for each s in T_k and for each rational number q greater than the supremum of s, you put into T_(k+1) the result of appending q to s. Being a union of countably many countable sets (one countable set of extensions for each s in T_k), T_(k+1) will itself be countable, as required.

In the case of regular recursion on the natural numbers, that would be all there is to the definition, but with a recursion that goes all the way up through the countable ordinals, you also have to handle limit ordinals - ordinals that are not an immediate successor of any smaller ordinal.

To facilitate the definition of the limit levels of the tree, you construct the tree so as to satisfy the following property, which I'll call the Aronszajn property: for every pair of levels T_k and T_m, where k < m, and for every s in T_k and every rational number q that exceeds the supremum of s, there is a sequence t in T_m which extends s and whose supremum is less than q.
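In symbols (my transcription of the statement above into standard notation, with t ⊇ s meaning that the sequence t extends s), the Aronszajn property reads:

```latex
\forall k < m \;\; \forall s \in T_k \;\; \forall q \in \mathbb{Q} \;
  \Bigl( q > \sup s \;\Longrightarrow\;
    \exists t \in T_m \, \bigl( t \supseteq s \,\wedge\, \sup t < q \bigr) \Bigr)
```

Informally: however high up the tree you want to go, and however tight a rational ceiling q you impose, you can always extend s to that level while staying strictly below q. This headroom is exactly what the limit-level construction below consumes.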

The definition of T_(k+1) from T_k that I just gave clearly preserves this property, since I threw in EVERY possible sequence extension of every member of T_k.

Suppose now that m is a limit ordinal and we have defined T_k for every k < m. Given any member s of some level T_k for k < m, and any rational number q greater than the supremum of s, we define, by integer recursion, a path (s_i : i a natural number) through the portion of the tree already constructed, such that its limit (as a rational sequence) has supremum q.

You first pick some strictly increasing sequence of rationals (q_i : i a natural number) such that q_0 exceeds the supremum of s and whose limit is q.

You also pick some strictly increasing sequence (m_i : i a natural number) of ordinals less than m that has limit m and such that s lies below level m_0 in the tree.

You can then use the Aronszajn property to construct the sequence (s_i : i a natural number) so that s_i is on level m_i and the supremum of s_i is less than q_i.

Construct one such path (s_i : i a natural number) for every such pair s, q, and let T_m consist of the limit (as a sequence of rationals) of every sequence so constructed. Since there are only countably many such pairs s, q, the level T_m so defined is countable.

It is clear that this definition preserves the Aronszajn property, and hence the construction may be continued.

And that's it.

NOTE: The above article first appeared in Devlin’s Angle in January 2006. Seeing Frenkel’s Numberplay article prompted me to revive it and give it another airing.

Wednesday, May 4, 2016

Algebraic Roots – Part 2

What does it mean to “do algebra”? In Part 1, published here last month, I described how algebra (from the Arabic al-Jabr) began in 9th Century Baghdad as a way to approach arithmetical problems in a systematic way that scales. It was a way of thinking, using logical reasoning rather than (strictly speaking, in addition to) arithmetical calculation, and the first textbook on the subject explained how to solve problems that way using ordinary language, not symbolic expressions. Symbolic algebra was introduced later, in 16th Century France.

Just as the formal algorithms of Hindu-Arabic arithmetic make it possible to do arithmetic in a purely procedural, rule-following way (without the need for any thought), so too symbolic algebra made it possible to solve algebraic problems by manipulating symbolic expressions using formal rules, again without the need for any thought.

Over the ensuing centuries, schools focused more and more exclusively on the formal, procedural rules of arithmetic and symbolic algebra, driven in part by the needs of industry and commerce to have large numbers of people who could carry out computations for them, and in part for the convenience of the school system.

Today, however, we have digital devices that carry out arithmetical and algebraic procedural calculations for us, faster and with greater accuracy, shifting society’s needs back to arithmetical and algebraic thinking. This is why you see the frequent use of those terms in educational circles these days, along with number sense. (All three terms are so common that definitions of each are easily found on the Web by searching on the name, as is also the case for the more general term mathematical thinking.)

As more (and hopefully better) technological aids are developed, the nature of the activity involved in solving an arithmetical or algebraic problem changes, both for learning and for application. The fluent and effective use of arithmetical calculators, graphing calculators (such as Desmos), spreadsheets, computer algebra systems (such as Mathematica or Maple), and Wolfram Alpha is now a marketable skill and an important educational goal. Each of these tools, and others, provides a different representation of numbers, numerical problems, and algebraic problems.

One consequence of this shift that seemed to take an entire generation of parents off guard is that mastery of the “traditional algorithms” for solving arithmetic and algebraic problems, which were developed to optimize human computations and at the same time create an audit trail, and which used to be the staple of school mathematics instruction, became a much less important educational goal. Instead, it is evidently far more valuable for today’s students to spend their time working with algorithms optimized to develop good arithmetical and algebraic thinking skills, that will (among other things) support fluent and effective use of the new technologies.

I said “evidently” above, since to those of us in the education business, it was just that: evident. With hindsight, however, it seems clear that in rolling out the Common Core State Standards, those in charge should have put much more effort into providing the important background context that was evident to them but, clearly, not evident to many people not working in mathematics education.

I was not involved in the CCSS initiative, by the way, but I doubt I would have done any better. I still find it hard to wrap my mind round the fact that the “evident” (to me) need to adapt mathematics education to today’s world is actually not at all evident to many of my fellow citizens—even though we all live and work in the same digital world. I guess it is a matter of the educational perspective those of us in the math ed business bring to the issues.

But even those of us in the education business can sometimes overlook just how much, and how fast, things have changed. The most recent example comes from a highly respected learning research center, LearnLab in Pittsburgh (formerly called the Pittsburgh Science of Learning Center), funded by the National Science Foundation.

The tweet shown below caught my eye a few weeks ago.

The tweet got my attention because I am familiar with DragonBox, and include it in the (very small) category of math learning apps I usually recommend. (I also know the creator, and have given occasional voluntary feedback on their development work, but I have no other connection to the company.)

“Ineffective”? “#dragonboxfail”? Those are the words used in the tweet. But neither can possibly be true. DragonBox provides an alternative representation for linear equations in one unknown. Anyone who completes the game (for want of a better term) has demonstrated mastery of algebraic thinking for single variable linear problems. Period. (There is a separate issue of the representation that I will come to later.)

Indeed, since the mechanics in DragonBox are essentially isomorphic to the rules of classical symbolic algebra (as taught in schools for the last four hundred years), completing the game demonstrates mastery of those mechanics too. From a logical perspective then, the tweet made no sense. All very odd for an official tweet from a respected, federally-funded research institute. Suspecting what must be going on, I looked further.

The tweet was in response to a review of DragonBox, published by EdSurge. I recognized the name of the reviewer, Brady Fukumoto, a former game developer I had met a few times. It was a well-analyzed review. Overall, I agreed with everything Brady said. In particular, he spent some time comparing “doing algebra in the DragonBox representation” to “doing algebra using the traditional symbolic equations representation”, pointing out how much richer the latter is—but noting too that the former can result in higher levels of student engagement. Hardly the promotion of a product that LearnLab accused him of. Indeed, Brady correctly summarized, and referenced (with a link), the Carnegie Mellon University study the LearnLab tweet implicitly referred to.

I recommend you read Brady’s review. It gets at many aspects of the “what does it mean to do algebra?” issue. As does playing DragonBox itself, which toward the end gradually replaces its initial “game representation” with the standard symbolic equation representation on a touch screen (a process often referred to as deconcretization).

Unlike the tweet, the CMU paper was careful in stating its conclusion. The authors say, and Brady quotes, that they found DragonBox to be “ineffective in helping students acquire skills in solving algebra equations, *as measured by a typical test of equation solving*.” (The emphasis is mine.)

Now we are at the root of that odd tweet. (One should not make too much of a tweet, of course. Twitter is an instant medium. But, rightly or wrongly, tweets in the name of an organization or a public figure are generally viewed as PR, presenting an authoritative, public stance.) The folks at LearnLab, their knowledge of educational technology notwithstanding, are assuming a perspective in which one particular representation of algebra is privileged; namely, the traditional symbolic one. (Which is the representation they adopt in developing their own algebra instruction app, an Intelligent Tutoring System called Lynnette.) But as I pointed out last month, that representation became the dominant one entirely by virtue of what was at that time the best available distribution technology: the printing press.

With newer technologies, in particular the tablet computer (“printed paper on steroids”), other representations are possible, some better suited to learning, others to applications. To be sure, there are learning benefits to be gained from mastering symbolic algebra, perhaps even from doing so using paper-and-pencil, as Brady points out in his review. But at this stage in the development of representational technology, we should assume all bets are off when it comes to how best to represent algebra in different contexts. I think it highly unlikely that we will ever again view algebra as something you learn or do exclusively by using a pen to pour symbols onto a page.

Indeed, with his background in video game design, Brady ends his review by rating DragonBox according to three metrics:

Fun Factor – A: I collected all 1,366 stars available in DragonBox 1 and 2 and had a great time.

Academic Value – B: I worry that many will underestimate the effort needed to transfer DragonBox skills to practical algebra proficiency.

Educational Value – A+: Anytime a kid leaves a game with thoughts like, “algebra is fun!” or “hey, I’m really good at math!” that is a huge win.

The LearnLab researchers are locked into the second perspective: what he calls Academic Value. (So too is Brady, to some extent, with his use of the phrase “practical algebra proficiency” to mean “symbolic algebra proficiency.”)

Make no mistake about it, transfer from mastery in an interactive engagement on a tablet to paper-and-pencil math is not automatic, as both Brady and the CMU researchers observe. To modify the old horse aphorism, DragonBox takes its players right to the water’s edge and dips their feet in, but still the players have difficulty drinking. (My best guess is that, for most learners it takes a good teacher to facilitate transfer.)

I note in passing that initially I had difficulty playing DragonBox. My problem was that classical symbolic algebra is a second language to me, one I have been fluent in since childhood and use every day, and I found it difficult to master the corresponding actions in DragonBox. Transfer is difficult in both directions.

At the present moment, those of us in education (or learning research) should absolutely not assume any one representation is privileged. Particularly so when it comes to learning. In that respect, Brady is right to note that DragonBox’s success in terms of his third metric (essentially, attitude and engagement) is indeed “a huge win.”

In the world in which our students will live their lives, arithmetic, algebra, and many other parts of mathematics should be learned, and will surely be applied, in multimedia environments. All the evidence available today suggests that mastery of the traditional symbolic representation will be a crucial ingredient in becoming proficient at arithmetic and algebra. But the more effective practitioners are likely to operate with the aid of various technological tools. Indeed, for some future practitioners, mastery of the traditional symbolic representation (which is, remember, just a user interface to a certain kind of thinking) may turn out to be primarily just a key step in the cognitive process of achieving conceptual understanding, not used directly in applications, which may all be by way of mathematical reasoning tools.

Exactly when, in the initial learning process, it is best to introduce the classical symbolic representation is as yet unclear. What the evidence of countless generations of students-turned-parents makes abundantly clear, however, is that teaching only the classical symbolic approach is a miserable failure. That much is affirmed every time a parent posts on social media that they are unable to understand a Common Core math question that requires nothing more than understanding the place-value representation of integers. (Which is true of most of the ones I have seen posted.)

There is some evidence (see for example Jo Boaler’s new book) that a more productive approach is to use learning technologies to develop and sustain student engagement and develop a growth mindset, and provide learning environments for safe, productive failure, with the goal of developing number sense and general forms of creative problem solving (mathematical thinking), bringing in symbolic representations and specific techniques as and when required.

Full declaration: I should note that my own work in this area, some of it through my startup company BrainQuake, adopts this philosophy. The significant learning gains obtained with our first app were in number sense and creative problem solving for a novel, complex performance task. Acquisition of traditional “basic skills” with our app comes about (intentionally, by design) as a valuable by-product. The improvement we see in the basic skills category is much more modest, and may well be better achieved by a tool such as LearnLab’s ITS. In a world where we have multiple representations, it is wise to make effective use of them all, according to context. It is not a case of an interface “fail”; to say that (with or without a hashtag) is to remain locked in past thinking. Easy to do, even for experts. Rather, in an era when algebra is being forced to return to its roots of being a way of thinking to help us solve practical problems, using all available representations in unison can provide us with a major win.

Monday, April 4, 2016

Algebraic roots – Part 1

Fig. 1: A problem from the first ever algebra textbook.
The first ever algebra textbook was written in Baghdad around 830 CE, by the Persian mathematician Muhammad ibn Musa al-Khwarizmi, our modern word “algebra” coming from the Arabic term al-Jabr, a technique for balancing an equation, described in the book. If you were a student – or a teacher – back then, the problem shown above (Figure 1) is the kind of thing you would be faced with in your math class. It is a direct translation from the original Arabic of a problem in al-Khwarizmi’s book.

Most modern readers, on seeing this for the first time and being told it is an algebra problem, are surprised that there are no symbols. Yet it is clearly not an “algebra word problem” in the usual sense. It’s just about numbers. It is, in fact, a quadratic equation problem. Figure 2 below is the same problem as we would present it in an algebra textbook today.

Fig. 2: Al-Khwarizmi's quadratic equation in modern notation.

Symbolic algebra, as we understand it today, was not introduced until the Sixteenth Century, when the French mathematician François Viète took what until then had been a discipline presented in prose, and turned it into the symbolic process we are familiar with today.

This is not to say that mathematicians back in Ninth Century Persia did not use symbolic expressions in their work. They surely did. The issue is how they presented it in textbook form. In the days when books were handwritten and duplicated by hand-copying, the author of a mathematics book was faced with a problem that other writers did not have to worry about: faithful copying. Copying of manuscripts was largely done by monks in monasteries. Though monks were masters of the written word (they did, after all, “live by the Book”), few of them had mastered mathematics, and hence they could not be relied upon to create an accurate copy of anything other than prose. Aware of this issue, authors of mathematics books wrote everything in prose.

With the introduction of the printing press in the Fifteenth Century, however, everything changed. Indeed, one of the first printed books published after Gutenberg printed his famous edition of the Bible was an Italian book on practical arithmetic. True, to handle a symbolic textbook, you have first to master the linguistic rules for reading, writing, and manipulating symbolic expressions, but once you do, algebra becomes a whole lot easier to do, as a line-by-line comparison of Figures 1 and 2 makes abundantly clear. (Actually, it’s lines-by-line!)

Notice, however, that the two presentations of the quadratic problem specify the same problem, and both solutions are, from a logical deduction point of view, the same. To some extent, al-Khwarizmi’s prose version describes what goes on in your head when you solve the problem. At least — and this is where I am going with this — it does if you solve the problem by thinking about it.
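For readers who want to see the parallel explicitly, al-Khwarizmi’s best-known worked example, “a square and ten roots equal thirty-nine dirhems”, reads in modern notation as follows; his prose recipe (halve the number of roots, square the result, add it to 39, take the square root, subtract the half) is precisely what we now call completing the square:

```latex
x^2 + 10x = 39
\;\Longrightarrow\; x^2 + 10x + 25 = 64   % add (10/2)^2 = 25 to both sides
\;\Longrightarrow\; (x+5)^2 = 64
\;\Longrightarrow\; x + 5 = 8             % al-Khwarizmi took the positive root
\;\Longrightarrow\; x = 3
```

The symbolic derivation and the prose recipe are step-for-step the same argument.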

With the symbolic presentation, it is possible to reduce the solution of an algebra problem to the mindless (literally), algorithmically-specified manipulation of symbols. Ever since the invention of the printing press, generations of students quickly discovered that you can pass an algebra test by mastering a collection of symbolic-manipulation rules. No understanding necessary. Moreover, when taught this way, the teacher’s job became immeasurably easier. It is easier to teach rules to be followed than to develop thinking skills, and it is easy to evaluate a student’s work if the goal is simply to check that it accords with the rules and arrives at the correct answer. (Indeed, teachers soon realized that the quickest way to grade a student’s work was to first see if the answer is correct, and only if it is not, look at the symbolic working.)

A student in Ninth Century Baghdad solved (linear and quadratic) equations by performing essentially the same steps as a student would today. But with the problem presented in words, and the solution written out (presumably) in words, the process cannot be carried out in a mindless fashion. The human mind can learn to follow rules for manipulating symbols, without knowing what they mean, but words are so much an integral part of human thinking that we cannot use them without their having meaning (albeit possibly a meaning other than the one intended by the author of an algebra book).

There is, then, a potential loss in taking algebra from a prose presentation to a symbolic one: namely, the student can lose the appreciation that algebra is a powerful way of thinking with countless uses in the everyday world. Instead of algebra being a codification of human logical thinking that emerges from within, it becomes a set of externally imposed, and often arbitrary-seeming rules to be mastered by repetitive practice. The natural, relevant, and empowering becomes the artificial, pointless, and tedious. (Those of us who like symbolic algebra see beyond the rules.)

“When will I ever use algebra?” today’s student justifiably asks. In terms of rule-based, symbol manipulation, the answer is, for most people (not all – and this is educationally significant), “Never.” But in terms of algebra, that codified way of thinking that has evolved and developed considerably since al-Khwarizmi’s day, the answer is, “All the time.” (Whenever you use a spreadsheet, for example.)

In the introduction to his algebra book, al-Khwarizmi declared that he was presenting

“... what is easiest and most useful in arithmetic, such as men constantly require in cases of inheritance, legacies, partition, lawsuits, and trade, and in all their dealings with one another, or where the measuring of lands, the digging of canals, geometrical computations, and other objects of various sorts and kinds are concerned.”

This was cutting edge stuff back then. It doesn’t get much more practical than that!

As al-Khwarizmi explains, he was asked to write his book by the Caliph, who recognized the importance — for trade and engineering in particular (both of which were crucial to the regional society at the time) — of making those new methods of calculation widely available. The Caliph’s reasoning was as sound and significant then as it would be today. When a society reaches a state of development where trade and commercial and financial activity go beyond two people engaging in one-off transactions, it needs a more efficient tool than basic arithmetic. What is required is arithmetic-at-scale. When you boil it down to its essence, that is what algebra is. Al-Khwarizmi’s book codifies and formalizes the numerical reasoning that people use in their daily personal and professional lives in a fashion that enables them to operate at scale.
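As a present-day illustration of arithmetic-at-scale (a hypothetical sketch, with invented figures), the spreadsheet idea amounts to writing one algebraic formula and applying it to a whole column of numbers at once:

```python
# One algebraic formula, p -> p * (1 + rate), applied to a whole
# "column" of loan principals at once: arithmetic-at-scale in miniature.
# (The figures are invented, purely for illustration.)
principals = [1000, 2500, 40000, 125000]
rate = 0.05  # one agreed interest rate for every loan

amounts_owed = [p * (1 + rate) for p in principals]
print(amounts_owed)
```

The algebraic step is the formula itself, which handles every individual case without being rewritten for each one.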

In the years since the printing press made it possible to produce algebra textbooks that used symbolic representations, the focus in the algebra class has gradually shifted from sophisticated reasoning about numbers to an often mindless game of symbol manipulation. For several centuries that could be justified on the grounds that the only effective way for society to handle the arithmetic-at-scale required to advance was to train lots of people to carry out the necessary calculations. And for that, the most efficient way is to use rule-based, symbolic manipulation. The people carrying out those calculations no more had to understand what they were doing than the electronic calculator on your iPhone has to understand what it is doing. All that matters is that it – the human symbolic-algebraist or the calculator app — gets the right answer.

But now that those of us in more advanced societies (and in a great many less advanced societies, for that matter) do have ready access to those powerful calculating devices, devices that in addition to performing numerical calculations can also solve algebraic problems (arithmetic-at-scale, aka the electronic spreadsheet), the once-important societal need for many human symbolic calculators has gone away. What is required today is that people can make effective use of those new tools. That has shifted the emphasis back from symbolic-rule-mastery to the kind of formalized, rigorous thinking about quantitative matters that, thanks to al-Khwarizmi, we call algebra. Only now, we are back to the realm, not of symbol manipulation, but codified, logical, rigorous thinking about issues in our lives and in the world we inhabit.

To be sure, symbolic algebra is not going away. It is way too powerful to ignore. But whereas it used to be possible to provide a rationale for teaching algebra as pure, rule-based symbolic manipulation (albeit a societal rationale that views people as fodder for industry), it makes no sense to teach it that way today.

Which is why the Common Core now directs the focus not on the symbolic rules that dominated math instruction for centuries past, but on sophisticated mathematical thinking skills that develop and require a deeper understanding of numbers. This is why there is now so much talk of “number sense” and why Mary and Johnny are coming home from school with homework questions that their parents often find strange and occasionally incomprehensible.

In other words, algebra has returned to its roots. (Pun intended.)

In Part 2 of this commentary, to be published here next month, I will look at how those same digital technologies that have rendered obsolete much of what used to constitute K-12 algebra education, have provided new ways to teach the subject that are ideally suited to the way we use — and will increasingly use even more — algebra. After all, if the printing press turned algebra from prose to symbolic expressions, what will algebra look like now that the digital computer, and in particular the tablet device, has largely replaced the printing press?

NOTE: I realize that there is little in this month’s post that is new to MAA members. But as I know from emails and comments I receive, Devlin’s Angle posts find their way to a wide variety of readers, occasionally onto the desks of governors, education administrators, and others who play a role in the nation’s education system. With so much media attention currently being given to a mathematics education proposal being made by an individual having little knowledge of mathematics or current mathematics education (see last month’s column), I thought it timely to bring us back to an appreciation of algebra (i.e., algebraic thinking) that was apparent to a Ninth Century Caliph in Baghdad, and which is even more relevant to our lives today than it was back then.

I cannot avoid ending by observing that 2016 will surely go down as the year when the US media devoted more space and time to individuals pontificating on topics they knew almost nothing about than they did to experts, of which the United States has large numbers with global reputations. I think many editors would benefit from a (good) course in algebraic thinking.

Tuesday, March 1, 2016

The Math Myth that permeates “The Math Myth”

March 1 saw the publication of the book The Math Myth: And Other STEM Delusions, by Andrew Hacker. MAA members are likely to recognize the author’s name from an opinion piece he published in the New York Times in 2012, with the arresting headline "Is Algebra Necessary?"

Yes, I thought you’d remember it! It’s almost up there with John Lennon’s murder in terms of knowing where you were at the time you first heard of it. But just to be sure we are all on the same page, let me recap that, in that essay, Hacker, a retired college professor of political science who over the years had taught some non-majors math courses, laid out a case for dropping algebra as a required course in K-12 and college.

Before I dive into Hacker’s new book, you would be advised to refresh your memory of the case he presented in that article, since his book is essentially an extension of what he said then, expanded to cover the entire Common Core State Standards for Mathematics. Prior to writing this review, I wrote an article for the Huffington Post in which I summarized, with my commentary, his 2012 article, together with a recent interview he gave to the Chronicle of Higher Education.

In my article, I noted that Hacker has no idea what algebra really is. His focus is entirely on school algebra as it is very often taught, as a collection of rules for manipulating symbolic expressions. What his argument actually establishes, with sound arguments and good examples, are two conclusions I would agree with:
  1. Algebra as typically taught in the school system is presented as a meaningless game with arbitrary rules that does more harm than good.
  2. There are strong arguments for teaching algebra as it was originally developed and how professional mathematicians today view it.
I’ll leave you to read my HuffPost piece for more of the gory details. For the benefit of lay readers who may come to this site, I should though repeat here the brief summary I gave in that article of the difference between algebra (as mathematicians understand and practice it) and the rule-based-manipulation-of-symbolic-expressions that so often passes for algebra in our schools.

First codified by the Persian mathematician al-Khwarizmi in his book The Compendious Book on Calculation by Completion and Balancing (balancing = al-Jabr), written in Baghdad around 820 CE, algebra is a powerful method for solving numerical problems more efficiently than by arithmetic. It does so by introducing two new ways of handling numerical problems.

First, algebra provides methods for handling entire classes of numbers, rather than specific ones. (That’s where those x’s, y’s, and z’s come in, but that’s just an implementation detail introduced in France several centuries later.)

Second, it provides a way to find numerical answers not by computing, which is often very difficult, but by reasoning logically to home in on the answer, using whatever information is available. Thus, whereas in arithmetic you work forwards, starting with numbers and computing with them to arrive at an answer, in algebra you work backwards, starting by postulating an answer and reasoning logically to figure out what it is. True, this powerful application of human logical reasoning capacity frequently gets boiled down to mastering various symbolic procedures to “Solve for x,” but again that’s just a particular implementation. Numerical forensics would be a sexier, and more descriptive, term for the real thing.
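A toy example of the forwards/backwards contrast: given the problem “double a number and add 6 to get 20,” arithmetic can only check candidate answers one at a time, whereas algebra names the unknown answer and reasons backwards from the constraint:

```latex
2x + 6 = 20 \;\Longrightarrow\; 2x = 14 \;\Longrightarrow\; x = 7
```

The answer is postulated first (as the symbol $x$), then logic pins down what it must be.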

The familiar calculus of symbolic expressions usually taught in schools as “algebra” was a particular implementation of al-Khwarizmi’s ways of thinking, introduced by the French mathematician François Viète in the 16th Century (700 years after algebra first began) to streamline paper-and-pencil problem solving. A more recent implementation of algebra is the computer spreadsheet.

Since his new book follows the same line of attack as his 2012 opinion piece, but with his sights widened from school algebra to the Common Core, instead of crafting another analytic essay, I will do what Hacker himself does, and list a number of examples to make my case. More precisely, I’ll select some of the 20 instances (in a book of just over 200 pages) where I found a claim that is either plain wrong, wildly misleading, or otherwise problematic, and ask where he went wrong. In marking those 20 pages, it’s likely I missed some. There were so many wild and inaccurate claims that I frequently found myself skimming through.

First though, I should repeat what I said in my HuffPost article about his algebra piece. Just as his essay actually amounted to a strong argument in favor of teaching algebra to all students (albeit not the rule-based manipulations of formulas so often presented in place of algebra), so too his book includes a strong argument in favor of Common Core Math. In the same way that Hacker mischaracterized algebra in 2012, so too his portrayal of the CCSSM (Common Core State Standards for Mathematics) is totally at odds with the real thing—though not quite so far off if you turn your attention from the Standards themselves to some implementations of the CC.

One of the book’s flaws is that Mr Hacker seems to get carried away with the flow of his own rhetoric: for the most part his argument consists of erecting a series of straw men which he then, in time-honored tradition, proceeds to attack.

“It’s a waste of time forcing kids to master azimuths and asymptotes,” he cries [not an exact quote] as early as page 2.

I had to look up the word azimuth, since in my entire career as a mathematician and mathematics educator, I had never come across it. According to Wikipedia, azimuth is a “concept used in navigation, astronomy, engineering, mapping, mining and artillery.” I ran a search for the word on the entire, 93-page CCSSM document and, as I expected, it did not turn up. Straw man.

Asymptotes are a different matter, of course, since a general sense of asymptotic behavior of functions is useful in many walks of life. The word is mentioned, but just once, in the CCSSM, in the section on Interpreting Functions (F-IF), where it says:

Graph rational functions, identifying zeros and asymptotes when suitable factorizations are available, and showing end behavior.

That’s it. One mention, buried towards the end of the document, in the section that says the student should:
  • Understand the concept of a function and use function notation
  • Interpret functions that arise in applications in terms of the context
  • Analyze functions using different representations
From the overall thrust of Hacker’s argument, I think it’s clear he believes this kind of knowledge is indeed important for everyone to have. But it’s also clear it is not a central pillar of the CC, to be used on page 2 to set the scene for what his book is about.

Unfortunately, this example is indeed a good characterization of his overall argument: to knock down straw men.

“We’re told that if our nation is to stay competitive, on a given morning all four million of our fifteen-year-olds will be studying azimuths and asymptotes,” he writes. (I am still on page 2, with over 200 more pages to go.) He provides no citation regarding who, exactly, is making this proclamation for the nation’s future. It’s not just disingenuously misleading, it’s about as far from reality as you could imagine, and not because of those azimuths. (More on the real story momentarily.)

He continues, “Then, to graduate from high school, they will face tests on radical notations and elliptical equations.”

To be sure, you will find mention of the word radical in the CCSSM, in the context of “Work with radicals and integer exponents” in the Section on Expressions and Equations (8.EE), which provides the helpful illustration,

“For example, estimate the population of the United States as 3 × 10⁸ and the population of the world as 7 × 10⁹, and determine that the world population is more than 20 times larger.”

Again, this is exactly the kind of thing Hacker says (towards the end of his book) students should be able to do! And it is entirely reasonable that they be asked to demonstrate that ability on a test.
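The estimate in that CCSSM illustration is exactly the kind of one-line number-sense check anyone can carry out:

```python
# The CCSSM illustration's estimate, checked directly.
us_population = 3e8     # about 3 x 10^8
world_population = 7e9  # about 7 x 10^9

ratio = world_population / us_population  # roughly 23.3
print(ratio > 20)  # True: the world population is more than 20 times larger
```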

“Elliptical equations” is another straw man.

The point is, what Hacker keeps attacking are straw men. The CCSS is just what its name implies, a set of standards. It is not a curriculum, nor does it specify anything remotely like a daily, or even weekly timetable. How and when teachers across the land cover the various standards is for them, or perhaps their school district, to decide. As far as the CCSS are concerned, teachers can operate fluidly, depending on how their class progresses. (And no one will even suggest that they mention azimuths, let alone force the class to master them.)

I would hazard a guess that Hacker has never looked at the CCSS document. Nor sat in on many math classes, as I have, and observed what actually goes on in today’s schools.

Caveat: I get to see classes that, for one reason or another, I have been invited to visit. Likely they are some of the best, since their teachers invite me along so their students can talk for a while with someone who has devoted a career to mathematical research. I hear enough stories to be prepared to believe things are often a lot worse. Perhaps even as bad as Hacker says. But his book purports to be about educational policy, not what you can actually find in good or bad classrooms.

Not only does Hacker give no indication he is familiar with the Common Core—the real one, not the azimuth-strewn, straw-man version he creates—he gives every indication he does not understand mathematics as it is practiced today. (He also does not know that pi is irrational, but I’ll come to that later.)

Certainly, the examples he selects to illustrate the irrelevancy (in today’s world) of some of the test problems students are asked to solve simply demonstrate that he lacks the basic, everyday number sense he is arguing for. Let me give just three examples.

On page 48, Hacker presents a question he took from an MCAT paper. It provides some technical data and asks what happens to the ratio of the two inverse-square-law forces (electrical and gravitational) between two charged objects of given masses when the distance between them is halved. The context Hacker provides for this question is that medical professionals need to be able to read and understand the mathematics used in technical papers. His claim is that this requirement does not extend to the physics of electrical and gravitational forces. In that, he is surely correct. But anyone with a grain of number sense will recognize at once that the setting is totally irrelevant. It’s a simple question about what happens to a ratio when the underlying scale is changed. The answer, of course, is nothing happens. It’s a ratio. The changes to the numerator and denominator cancel out. The ratio remains the same.

What this question is asking for is, Do you understand what a ratio is? Surely that is something that any medical professional who will have to read and understand journal articles would need to know. Hacker completely misses this simple observation, and presents the question as an example of baroque mathematical testing run amok.
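The observation can be made precise in two lines. Both forces obey an inverse-square law, so their ratio does not depend on the separation at all:

```latex
F_E = k_e \frac{q_1 q_2}{r^2}, \qquad F_G = G \frac{m_1 m_2}{r^2}
\qquad\Longrightarrow\qquad
\frac{F_E}{F_G} = \frac{k_e\, q_1 q_2}{G\, m_1 m_2}
```

Halving $r$ multiplies numerator and denominator by the same factor of 4, so the ratio is unchanged; no physics knowledge beyond "both are inverse-square" is needed.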

On page 70, he presents a question from an admissions test for selective high schools. A player throws two dice and the same number comes up on both. The question asks the student to choose the probability that the two dice sum to 9 from the list 0, 1/6, 2/9, 1/2, 1/3. Hacker’s problem is that the student is supposed to answer this in 90 seconds. Now, I share Hacker’s disdain for time-limited questions, but in this case the answer can only be 0. It’s not a probability question at all, and no computation is required. It just requires you to recognize that you can never get a sum of 9 when two dice show the same number. As with the MCAT question, the question is simply asking, Do you understand numbers? In this case, do you recognize that the sum of two equal numbers can never be odd?
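The point can even be checked by brute force, though the whole merit of the question is that no computation is needed:

```python
# Enumerate all outcomes where both dice show the same number.
# Every such sum is twice an integer, hence even, so a sum of 9
# (an odd number) is impossible: the probability is 0.
doubles = [(n, n) for n in range(1, 7)]
sums = [a + b for a, b in doubles]
print(sums)       # [2, 4, 6, 8, 10, 12]
print(9 in sums)  # False
```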

Finally, on page 101, Hacker presents a list of mathematics requirements high school students must meet in order to study at Harvard and similar universities. The list includes the names of various kinds of analytic functions. As usual, Hacker seems overwhelmed by the technical terms, or worries that the students will be, but all the list is asking for is that students can read graphs and charts and know what they represent in terms of growth and change. An essential skill, surely, for anyone in today’s information-rich world, not just students at elite universities.

You get the pattern surely? Hacker’s problem is he is unable to see through the surface gloss of a problem and recognize that in many cases it is just asking the student if she or he has a very basic grasp of number, quantity, and relationships. Yet these are precisely the kinds of abilities he argues elsewhere in the book are crucial in today’s world. He is, I suspect, a victim of the very kind of math teaching he rightly decries—one that concentrates on learning rules and mastering formal manipulations, with little attention to understanding.

This, surely, explains why he would write (page 96), “Reasoning mathematically may be a nice skill, but it is not relevant to most of life. We reason about many things: parenting, marriage, careers. Do we learn how to reason about these things by learning algebra?”

If he had asked instead if we learn such reasoning in a typical school algebra class, I would agree with his implied answer of “No.” But algebra arose by codifying the everyday reasoning people carried out—and still carry out today—about the numerical or quantity aspects of any human activity that involves them. (Trade, commerce, and civil engineering were the original applications.)

From that historical perspective, it is absolutely clear that learning algebra can help us master such reasoning. It helps by providing an opportunity to carry out that kind of reasoning free of the complexities a problem generally brings with it when it arises in a real world context.

The tragedy of The Math Myth is that Hacker is actually arguing for exactly the kind of life-relevant mathematics education that I and many of my colleagues have been arguing for all our careers. (Our late colleague Lynn Steen comes to mind.) Unfortunately, and I suspect because Hacker himself did not have the benefit of a good math education, his understanding of mathematics is so far off base, he does not recognize that the examples he holds up as illustrations of bad education only seem so to him, because he misunderstands them.

The real myth in The Math Myth is the portrayal of mathematics that forms the basis of his analysis. It’s the same myth you see propagated in Facebook posts from frustrated parents about Common Core math homework their children bring home from school.

In the interests of their overall cardiovascular health, I have to recommend that math educators do not read The Math Myth. But if you do, perhaps you should start with the final chapter, titled “Numeracy 101.” Here, at least, you will find things you are likely to agree with, as he lays out what he believes would be a good quantitative literacy course for college students.

But even there, where all seems warm and friendly and positive, you will be jolted by Hacker’s fundamental lack of knowledge of mathematics. He writes,

“Along with phenomena like earthquakes and cyclones, nature also has some numbers that control or explain how the world works. One of them is pi, whose 3.14159 goes on indefinitely, at least as far as we know.”

Yes, you read that last part correctly.

“Few people writing today … can make more sense of numbers” proclaims the Wall Street Journal on the cover of Hacker’s book. Well, if that’s the view of the newspaper that purports to have the expertise to cover the nation’s financial markets, it is only a matter of time before we have another financial meltdown.