Friday, November 4, 2016

Mathematical Milk and the U.S. Presidential Election

Keith Devlin mails his completed election ballot. What does math have to say about his act?
With the United States in the final throes of a presidential election, my mind naturally turned to the decidedly tricky matter of election math. Voting provides a great illustration of how mathematics – which rules supreme in the natural sciences and engineering, yielding accurate and reliable answers to precise questions – can lead us astray when we try to apply it to human and social activities.

A classic example is how we count votes in an election, the topic of an earlier Devlin’s Angle post in November 2000. In that essay, I looked at how different ways to tally votes could affect the imminent Bush v. Gore election, at the time blissfully unaware of how chaotic the process of counting votes and declaring a winner would turn out to be on that particular occasion. The message was that, particularly in the kinds of tight races we typically see today, different ways of tallying votes can lead to very different results.

Everything I said back then remains just as valid and pertinent today (mathematics is like that), so this time I’m going to look at another perplexing aspect of election math: why do we make the effort to vote? After all, while elections are sometimes decided by a small number of votes, it is unlikely in the extreme that an election on the scale of a presidential election will hang on the decision of a single voter. Even if it did, that would be well within the range of procedural error, so it makes no difference if any one individual votes or not.

To be sure, if a large number of people decide to opt out, that can affect the outcome. But there is no logical argument that takes you from that observation to it being important for a single individual to vote. This state of affairs is known as the Paradox of Voting, or sometimes Downs Paradox. It is so named after Anthony Downs, a political economist whose 1957 book An Economic Theory of Democracy examined the conditions under which (mathematical) economic theory could be applied to political decision-making.

On the face of it, Downs’ analysis does lead to a paradox. Economic theory tells us that rational beings make decisions based on expected benefit (a notion that can be made numerically precise). That approach works well for analyzing, say, why people buy insurance year after year, even though they may never submit a claim. The theory tells you that the expected benefit is greater than the cost; so it is rational to buy insurance. But when you adopt the same approach to an election, you find that, because the chance of exercising the pivotal vote in an election is minute compared to any realistic estimate of the private individual benefits of the different possible outcomes, the expected benefits of voting are less than the cost. So you should opt out. [The same observation had in fact been made much earlier, in 1793, by Nicolas de Condorcet, but without the theoretical backing that Downs brought to the issue.]
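To see the arithmetic behind that conclusion, here is a back-of-envelope version of the expected-benefit calculation. The particular numbers are invented purely for illustration; only the form of the calculation is the standard one associated with Downs’ analysis.

\[
\text{expected net benefit of voting} \;=\; p\,B \;-\; C,
\]

where \(p\) is the probability that your single vote decides the outcome, \(B\) is the value you attach to your preferred outcome winning, and \(C\) is the cost to you of voting (time, travel, standing in line in the rain). Even with a hugely generous \(B\) of, say, $1,000,000, and a not unreasonable \(p\) of one in ten million, the expected benefit \(pB\) comes to about ten cents, far below any realistic \(C\). The calculation says: stay home.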

Yet, many otherwise sane, rational citizens do not opt out. Indeed, society as a whole tends to look down on those who do not vote, saying they are not "doing their part." (In fact, many countries make participation in a national election obligatory, but that is a separate, albeit related, issue.)

So why do we (or at least many of us) bother to vote? I can make the question even more stark, and personal. Suppose you have intended to "do your part" and vote. You wake up on election morning with a sore throat, and notice that it is raining heavily. Being numerically able (as all Devlin’s Angle readers must be), you say to yourself, "It cannot possibly affect the result if I just stay at home and nurse my throat. I was intending to vote, after all. Changing my mind about voting at the last minute cannot possibly influence anyone else. Especially if I don’t tell anyone." The math and the logic, surely, are rock solid. Yet, professional mathematician as I am, I would struggle out and cast my vote. And I am sure many Devlin’s Angle readers would too – most of them, I would suspect.

So what is going on? We can do the math. We are good logical thinkers. Why don’t we act according to that reasoning? Are we conceding that mathematics actually isn’t that useful? [SPOILER: Math is useful; but only when applied with a specific purpose in mind, and chosen/designed in a way that makes it appropriate for that purpose.]

Which brings me to my main point. To make it, let me switch for a moment from elections to the Golden Ratio. In April 2015, the magazine Fast Company Design published an article titled "The Golden Ratio: Design’s Biggest Myth," in which I was quoted at length. (The author also drew heavily on a Devlin’s Angle post of mine from May 2007.)

With a readership much wider than Devlin’s Angle, over the years the Fast Company Design piece has generated a fair amount of correspondence from people beyond mathematics academia, often designers who have never quite gotten over the Golden Ratio Kool-Aid they drank during their design education. One recent email came not from a designer but from a high school math teacher, who objected to a statement the article quoted me (accurately) as saying: “Strictly speaking, it's impossible for anything in the real-world to fall into the golden ratio, because it’s an irrational number.” The teacher had, it was at once clear to me, drunk not just Golden Ratio Kool-Aid, but Math Kool-Aid as well.

In the interest of full disclosure, let me admit that, in the early part of my career as a mathematics expositor, I was as guilty as anyone of distributing both Golden Ratio Kool-Aid and Math Kool-Aid, to whoever would drink it. But, as a committed scientist, when presented with evidence to the contrary, I re-examined my thinking, admitted I had been wrong, and started to push better, more honest products, which I will call Golden Ratio Milk and Mathematical Milk. I described Golden Ratio Milk in my 2007 MAA post and peddled it more in that Fast Company Design interview. Here I want to talk about Mathematical Milk.

The reason why the Golden Ratio’s irrationality prevents its use in, say, architecture, is that the issue at hand involves measurement. Measurement requires fixing a unit of measure – a scale. It doesn’t matter whether it is meters or feet or whatever, but once you have fixed it, that is what you use. When you measure things, you do so to an agreed degree of accuracy. Perhaps one or two decimal places. Almost never to more than maybe twenty decimal places, and that only in a few instances in subatomic physics. So in terms of actual, physical measurement, or manufacturing, or building, you never encounter objects to which a numerical measurement has more than a few decimal places. You simply do not need a number system that has fractions with denominator much greater than, say, 1,000,000, and generally much less than that.

Even if you go beyond physical measurement, to the theoretical realm where you imagine having an unlimited number of decimal places available (though any particular measurement still terminates after finitely many of them), you will still be in the domain of the rational numbers. Which means the Golden Ratio does not arise. Irrational numbers arise to meet mathematical needs, not the requirements of measurement. They live not in the physical world but in the human imagination. (Hence my Fast Company Design quote.) It is important to keep that distinction clear in our minds.
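The point can be captured in one line of school arithmetic. Any measurement reported to \(n\) decimal places is a rational number, whereas the Golden Ratio is not:

\[
\text{measured value} \;=\; \frac{m}{10^{\,n}} \in \mathbb{Q},
\qquad
\varphi \;=\; \frac{1+\sqrt{5}}{2} \;=\; 1.6180339887\ldots \;\notin\; \mathbb{Q}.
\]

So no physically measured ratio can equal \(\varphi\) exactly; at best it agrees with \(\varphi\) to the precision of the measurement.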

The point is, when we abstract from our experiences of the world around us, to create mathematical models, two important things happen. A huge amount of information is lost; and there is a significant gain in precision. The two are not independent. If we want to increase the precision, we lose more information, which means that our model has less in common with the real world it is intended to represent. Moreover, when we construct a mathematical model, we do so with a particular question, or set of questions in mind.

In astronomy and physics, and related domains such as engineering, all of this turns out to be not too problematic. For example, the simplistic model of the Solar System as a collection of point-masses orbiting around another, much heavier, point-mass, is extremely useful. We can formulate and solve equations in that model, and the answers turn out to be accurate and valuable, at least with respect to the goal questions, which in this case were initially to predict where the planets will be at different times of the year. The model is not very helpful in telling us what the color of each planet’s surface is, or even if it has a surface, both of which are certainly precise, scientific questions.

When we adopt a similar approach to model money supply or other economic phenomena, we can obtain results that are, mathematically, just as precise and accurate, but their connection to the real world is far more tenuous and unreliable – as has been demonstrated several times in recent years, when acting on those mathematical results has led to financial crises, and occasionally disasters.

So what of the paradox of voting? The paradox arises when you start by assuming that people vote to choose, say, a president. Yes, we all say that is what we do. But that’s just because we have drunk Election Kool-Aid. We don’t actually behave in accordance with that statement. If we did, then as rational beings we would indeed stay at home on election day.

Time to throw out the Kool-Aid and buy a gallon jug of far more beneficial Election Milk: (Presidential) elections are about a society choosing a president. Where that purpose impacts the individual voter is not in whom we vote for, but in the social pressure it creates to be an active member of that society.

That this is what is actually going on is illustrated by the fact that U.S. society created, and millions of people wear, "I have voted" badges on election day. The focus, and the personal reward, is not "Who I voted for" but "I participated in the process." [For an interesting perspective on this, see the recent article in the Smithsonian Magazine, "Why Women Bring Their “I Voted” Stickers to Susan B. Anthony’s Grave."]

To be sure, you can develop mathematical models of group activities, like elections, and they will tend to lead to fewer problems (and "paradoxes") than a single-individual model will, but they too will have limitations. All mathematical models do. Mathematics is not reality; it is just a model of reality (or rather, it is a whole, and constantly growing, collection of models).

When we develop and/or apply a mathematical model, we need to be clear what questions it is designed to help us answer. If we try to apply it to a different question, we may get lucky and get something useful, but we may also end up with nonsense, perhaps in the form of a "paradox."

With both measurement and the election, as is so often the case, one benefit we get from trying to apply mathematics to our world and to our lives is that we gain insight into what is really going on.

Attempting to use the real numbers to model the act of measuring physical objects leads us to recognize that measurement is a physical activity, dependent on a chosen unit and a finite degree of precision, which the real numbers themselves do not capture.

Likewise, grappling with Downs Paradox leads us to acknowledge what elections are really about – and to recognize that choosing a leader is a societal activity. In a democracy, who each one of us votes for is inconsequential; that we vote is crucial. That’s why I did not just spend a couple of hours yesterday making my choices and filling in my ballot and leaving it at that. I also went out earlier today – in light rain as it happens (and without a sore throat) – and put my ballot in the mailbox. Yesterday I acted as an individual, motivated by my felt societal obligation to participate in the election process. Today I acted as a member of society.

As a professional set theorist, I am familiar with the relationship between, and the distinction between, a set and its members. When we view a set in terms of its individual members, we say we are treating it extensionally. When we consider a set in terms of its properties as a single entity, we say we are treating it intensionally. In an election, we are acting intensionally (and intentionally) – at the set level, not as an element of a set.

* A shorter version of this article was published simultaneously in The Huffington Post.



Wednesday, October 12, 2016

It was Twenty Years Ago Today

The title of the famous Beatles song does not exactly apply to Devlin’s Angle. The online column (now run on a blog platform, but unlike most blogs, still subject to an editor’s guiding hand) passed the twenty-year mark this year, but it actually launched on January 1, 1996.

In last month’s column, I looked back at the very first post. It was a fascinating exercise to try to put myself back in the mindset of how the world looked back then, which was about the time when the World Wide Web was just starting to find its way onto university campuses, but had not yet penetrated the everyday lives of most of the world’s population.

That period of intense technological and societal change – looking back, it is clear it was just beginning, the first half of the 1990s being more evolutionary than the revolution that was soon to follow – and the strong sensation of change both underway and pending, is reflected in some of the topics I chose to write about each month in that first year. Here is a list of those first twelve posts, with hyperlinks.


Along with essays you might find in a mathematics magazine for students (February, June, July, August, November, December), there are reflections on where mathematics and its role in the world might be heading in the next few years.

January’s post, about the growth of computer viruses in the digital domain, was clearly in that Brave New World vein, as I noted last month, and in February I focused on another aspect of the rapid growth of the digital world, with a look at the ongoing debate about the future of Artificial Intelligence. Though that field has undoubtedly made many advances in the ensuing two decades, the core argument I summarized there seems as valid today as it did then. Digital devices still do not “think” in anything like a human fashion (though these days it can sometimes be harder to tell the difference).

The posts for April, May, and October looked at different aspects of the “Where is mathematics heading?” question. Of course, I was not claiming then, nor am I suggesting now, that the core of pure mathematics is going to change. (Though the growth of Experimental Mathematics in the New Millennium was a new direction, one I addressed in a Devlin’s Angle post in March 2009.) Rather, I was taking a much broader view of mathematics, stepping outside the mathematics department of colleges and universities and looking at the way mathematics is used in the world.

The October post, in particular, turned out to be highly prophetic for my own career. Shortly after the terrorist attack on the World Trade Center on September 11, 2001, I was contacted by a large defense contractor, asking if I would join a large team they were putting together to bid for a Defense Department contract to find ways to improve intelligence analysis. I accepted the offer, and worked on that project for the next several years. (From my perspective, that project and the work that followed did not end uniformly well, as I lamented in an AMS Notices opinion piece in 2014.) When that project ended, I did similar work for a large contractor to the US Navy and another project for the US Army. In all three projects, I was living in the kind of world I portrayed in that October, 1996 column.

In fact, my professional life as a mathematician for the entire life of Devlin’s Angle has been in that world – a way of using mathematics I started to refer to as “mathematical thinking.” In a Devlin’s Angle post in 2012, I tried to articulate what I mean by that term. (The term is used by others, sometimes with different meanings, though I see strong overlaps and general agreements among them all.) That same year, I launched the world’s first mathematical MOOC on the newly established online course platform Coursera, with the title “Introduction to Mathematical Thinking”, and published a book with the same title.

With the world as it is today, in particular the pervasive (though largely hidden) role played by mathematics and mathematical ideas in almost every aspect of our lives, I would hazard a guess that there are far more people using “mathematical thinking” than there are people doing mathematics in the traditional sense.

If so, that would make the professions of mathematician and mathematics educator two of the most secure careers in the world. For there is one thing in particular you need in order to engage in (effective) mathematical thinking about a real world problem: an adequate knowledge of, and conceptual understanding of, mathematics. In fact, that need was always there, but it often tended to be overlooked in the pre-digital eras, when doing mathematics meant engaging in a lot of paper-and-pencil, symbolic computations, which meant that the bulk of mathematics instruction focused on computation, with wide-ranging knowledge and conceptual understanding often getting short shrift.

But those days are gone. Today, we carry around in our pockets devices that give us instant access to pretty well all of the world’s mathematical information and computational procedures we might need to use. (Check out Wolfram Alpha.) But the thinking still has to be done where it always has: in our heads.


Tuesday, September 13, 2016

Then and Now: Devlin’s Angle Turned Twenty This Year

Devlin’s Angle turned 20 this year. The first post appeared on January 1, 1996, as part of the MAA’s move from print to online. I was the editor of the MAA’s regular print magazine MAA FOCUS at the time, continuing to act in that capacity until December 1997. (See the last edition of MAA FOCUS that I edited here.)

Keith Devlin at a mathematical exposition summit in Oregon in 1997. L to R: Ralph Abraham (Univ of California at Santa Cruz), Devlin, Roger Penrose (Univ of Oxford, UK), and Ivars Peterson (past MAA Director of Publications for Journals and Communications).

One of the innovations I made when I took over as MAA FOCUS editor in September 1991 was the inclusion of an editorial (written by me) in each issue. Though my ten-times-a-year essays were very much my own personal opinion, they were subject to editorial control by the organization's Executive Director, supported by an MAA oversight committee, both of which had approved my suggestion to do this. Over the years, the editorials generated no small amount of controversy, sometimes based on the content of a particular editorial, and other times on the more general principle of whether an editor’s personal opinion had a proper place in a professional organization's newsletter.

As to the latter issue, I am not sure anyone’s views changed over the years of my editorial reign, but the consensus at MAA Headquarters was that it did result in many more MAA members actually picking up MAA FOCUS when it arrived in the mail and reading it. That was why I was asked to write a regular essay for the new MAA Online. Though blogs and more generally social media were still in the future, the MAA leadership clearly had it right in thinking that an online newsletter was very much an organ in which informed opinion had a place.

And so Devlin’s Angle was born. When I realized recently that the column turned twenty this year — in its early days we thought of it very much as an online “column”, with all that entailed in the world of print journalism — I was curious to remind myself what topic I chose to write about in my very first post.

Back then, I would have needed to explain to my readers that they could click on the highlighted text in that last sentence to bring up that original post. For the World Wide Web was a new resource that people were still discovering, with 1995-96 seeing its growth in academia. Today, of course, I can assume you have already looked at that first post. The words I wrote then (when I might have used the term “penned”, even though I typed them at a computer keyboard) provide an instant snapshot of how the present-day digital world we take for granted looked back then.

A mere twenty years ago.

Monday, August 1, 2016

Mathematics and the End of Days

A scene from Zero Days, a Magnolia Pictures release. Photo courtesy of Magnolia Pictures

The new documentary movie Zero Days, written and directed by Alex Gibney, is arguably the most important movie of the present century. It is also of particular relevance to mathematicians, for its focus is on the degree to which mathematics has enabled us to build our world into one where a few algorithms could wipe out all human life within a few weeks.

In theory, we have all known this since the mid 1990s. As the film makes clear however, this is no longer just a hypothetical issue. We are there.

Ostensibly, the film is about the creation and distribution of the computer virus Stuxnet, which in 2011 caused a number of centrifuges in Iran’s nuclear program to self-destruct. And indeed, for the first three-quarters of the film, that is the main topic.

Most of what is portrayed will be familiar to anyone who followed that fascinating story as it was revealed by a number of investigative journalists working with commercial cybersecurity organizations. What I found a little odd about the treatment, however, was the degree to which the U.S. government intelligence community appeared to have collaborated with the film-makers, to all intents and purposes confirming on camera that, as was widely suspected at the time but never admitted, Stuxnet was the joint work of the United States and Israel.

The reason for the unexpected degree of openness becomes clear as the final twenty minutes of the movie unfold. Having found themselves facing the very real possibility that small pieces of computer code could constitute a human Doomsday weapon, some of the central players in contemporary cyberwarfare decided it was imperative that there be an international awareness of the situation, hopefully leading to global agreement on how to proceed. As one high ranking contributor notes, awareness that global nuclear warfare would (as a result of the ensuing nuclear winter) likely leave no human survivors, led to the establishment of an uneasy, but stable, equilibrium, which has lasted from the 1950s to the present day. We need to do the same for cyberwarfare, he suggests.

Mathematics has played a major role in warfare for thousands of years, going back at least to around 250 BCE, when Archimedes of Syracuse designed a number of weapons used to fight the Romans.

In the 1940s, the mathematically-driven development of weapons reached a terrifying new level when mathematicians worked with physicists to develop nuclear weapons. For the first time in human history, we had a weapon that could bring an end to all human life.

Now, three-quarters of a century later, computer engineers can use mathematics to build cyberwarfare weapons that have at least the same destructive power for human life.

What makes computer code so very dangerous is the degree to which our lives today are heavily dependent on an infrastructure that is itself built on mathematics. Inside most of the technological systems and devices we use today are thousands of small solid-state computers called Programmable Logic Controllers (PLCs), which make decisions autonomously, based on input from sensors.

What Stuxnet did was embed itself into the PLCs that controlled the Iranian centrifuges and cause them to speed up well beyond their safe range to the point where they simply broke apart, all the while sending messages to the engineers in the control room that the system was operating normally.

Imagine now a collection of similar pieces of code that likewise cause critical systems to fail: electrical grids, traffic lights, water supplies, gas pipeline grids, hospitals, the airline networks, and so on. Even your automobile – and any other engine-driven vehicle – could, in principle, be completely shut off. There are PLCs in all of these devices and networks.

In fact, imagine that the damage could be inflicted in such a catastrophic and interconnected way that it would take weeks to bring the systems back up again. With no electricity, water, transportation, or communications, it would be just a few days before millions of people started to die, beginning with thousands of airplanes, automobiles, and trains crashing, and soon thereafter doubtless accompanied by major rioting around the world.

To be sure, we are not at that point, and the challenge a malicious nation would face in bringing down many different systems at once would be considerable – though the degree to which those systems are interdependent could erode that “safety” factor to some extent. Moreover, when autonomous code gets released, it tends to spread in many directions, as every computer user discovers sooner or later. So the perpetrating nation might end up being destroyed as well.

But Stuxnet showed that such a scenario is a realistic, if at present remote, possibility. (Not just Stuxnet, but the Iranian response. See the movie to learn about that.) If you can do it once (twice?), then you can do it. The weapon is, after all, just a mathematical structure; a piece of code. Designing it is a mathematical problem. Unlike a nuclear bomb, the mathematician does not have to hand over her results to a large, well-funded organization to build the weapon. She can create it herself at a keyboard.

That raw power has been the nature of mathematics since our ancestors first began to develop the subject several thousand years ago. Those of us in the mathematics profession have always known that. It seems we have now arrived at a point where that power has reached a new level, certainly no less awesome than nuclear weapons. Making a wider audience more aware of that power is what Gibney’s film is all about. It’s not that we face imminent death by algorithm. Rather that we are now in a different mathematical era.

Friday, July 15, 2016

What Does the UK Brexit Math Tell Us?

The recent (and in many respects ongoing) Brexit vote in the United Kingdom provides a superb example of the poor use of mathematics. Regardless of your views on the desirability or otherwise of the UK remaining a member of the European Union (an issue on which this column is appropriately agnostic), for a democracy to decide a complex issue on the basis of a single number displays a woeful misunderstanding of numbers and mathematics.

Whenever there is an issue that divides a population more or less equally, making a decision based on a popular vote is certainly easy, but in terms of accurately reflecting “the will of the people”, you might just as well save the effort and money it costs to run a national referendum and decide the issue by tossing a coin—or by means of a penalty shootout if you want to provide an illusion of human causality.

Politicians typically turn to popular votes to avoid the challenge of wrestling with complex issues and having to accept responsibility for the decision they make, or because they believe the vote will turn out in a way that supports a decision they have already made. Unfortunately, with a modicum of number sense, and a little more effort, it’s possible to take advantage of the power that numbers and mathematics offer, and arrive at a decision that actually can be said to “reflect the will of the people”.

The problem with reducing any vaguely complex situation to a single number is that you end up with some version of what my Stanford colleague Sam Savage has referred to as the Flaw of Averages. At the risk of over-simplifying a complex issue (and in this of all articles I am aware of the irony here), the problem is perhaps best illustrated by the old joke about the statistician whose head is in a hot oven and whose feet are in a bucket of ice who, when asked how she felt, replies, “On average I am fine.”

Savage takes this ludicrous, but in actuality all-too-common, absurdity as the stepping-off point for using the power of modern computers to run large numbers of simulations to better understand a situation and see what the best options may be. (This kind of approach is used by the US Armed Forces, who run computer simulations of conflicts and possible future battles all the time.)

A simpler way to avoid the Flaw of Averages that is very common in the business world is the well-known SWOT analysis, where instead of relying on a single number, a team faced with making a decision lists issues in four categories: strengths, weaknesses, opportunities, and threats. To make sense of the resulting table, it is not uncommon to assign numbers to each category, which opens the door to the Flaw of Averages again, but with four numbers rather than just one, you get some insight into what the issues are.

Notice I said “insight” there; not “answer”. For insight is what numbers can give you. Applications of mathematics in the natural sciences and engineering can give outsiders a false sense of the power of numbers to decide issues. In science (particularly physics and astronomy) and engineering, (correctly computed) numbers essentially have to be obeyed. But that is almost never the case in the human or social realm.

When it comes to making human decisions, including political decisions, the power of numbers is even less reliable than the expensively computed numbers that go into producing the daily weather forecast. And surely, no politician would regard the weather forecast as being anything more than a guide—information to help make a decision.

One of mathematicians’ favorite examples of how single numbers can mislead is known as Simpson’s Paradox, in which an average can indicate the exact opposite of what the data actually says.

The paradox gets its name from the British statistician and civil servant Edward Simpson, who described it in a technical paper in 1951, though the issue had been observed earlier by the pioneering British statistician Karl Pearson in 1899. (Another irony in this story is that the British actually led the way in understanding how to make good use of statistics, obtaining insights the current UK government seems to have no knowledge of.)

A famous illustration of Simpson’s Paradox arose in the 1970s, when there was an allegation of gender bias in graduate school admissions at the University of California at Berkeley. The fall 1973 figures showed that of the 8,442 men and 4,321 women who applied, 44% of men were admitted but only 35% of women. That difference is certainly too great to be due to chance. But was there gender bias? On the face of it, the answer is a clear “Yes”.

In reality, however, when you drill down just one level into the data, from the school as a whole to the individual departments, you discover that, not only was there no gender bias in favor of men, there was in actuality a statistically significant bias in favor of women. The School was going out of its way to correct for an overall male bias in the student population. Here are the figures.

[Table: fall 1973 applications and admission rates by department and gender.]

In Departments A, B, D, and F, a higher proportion of women applicants was admitted, in Department A significantly so.
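For readers who want to check the arithmetic of the reversal, here is a short Python sketch. The department-level counts and rates are the ones usually quoted from the Bickel, Hammel, and O’Connell study of the fall 1973 data, reproduced here from that widely circulated table purely for illustration; they cover only the six largest departments, so the aggregate percentages differ slightly from the school-wide 44% and 35%.

# Simpson's Paradox: aggregate admission rates versus department-level rates.
# Figures are the commonly quoted ones for the six largest Berkeley departments,
# fall 1973 (used here for illustration only).

# department: (male applicants, male admit rate, female applicants, female admit rate)
departments = {
    "A": (825, 0.62, 108, 0.82),
    "B": (560, 0.63,  25, 0.68),
    "C": (325, 0.37, 593, 0.34),
    "D": (417, 0.33, 375, 0.35),
    "E": (191, 0.28, 393, 0.24),
    "F": (373, 0.06, 341, 0.07),
}

men_applied    = sum(m for m, _, _, _ in departments.values())
men_admitted   = sum(m * rm for m, rm, _, _ in departments.values())
women_applied  = sum(w for _, _, w, _ in departments.values())
women_admitted = sum(w * rw for _, _, w, rw in departments.values())

print(f"Aggregate: men {men_admitted / men_applied:.0%}, "
      f"women {women_admitted / women_applied:.0%} admitted")

for name, (m, rm, w, rw) in departments.items():
    better = "women" if rw > rm else "men"
    print(f"Dept {name}: men {rm:.0%}, women {rw:.0%} -> higher rate: {better}")

The aggregate figures favor men by a wide margin, even though women have the higher admission rate in four of the six departments. That is the paradox in miniature.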

There was certainly a gender bias at play, but not on the part of University Admissions. Rather, as a result of problems in society as a whole, women tended to apply to very competitive departments with low rates of admission (such as English), whereas men tended to apply to less-competitive departments with high rates of admission (such as Engineering).

We see a similar phenomenon in the recent UK Brexit vote, though there the situation is much more complicated. British citizens, politicians, and journalists who say that the recent referendum shows the “will of the people” are, either through numerically informed malice or basic innumeracy, plain wrong. Just as the UC Berkeley figures did not show an admissions bias against women (indeed, there was a bias in favor of women), so too the Brexit referendum does not show a national will for the UK to leave the EU.

Britain leaving the EU may or may not be its best option, but in making that decision the government would do well to drill down at least one level, as did the authorities at UC Berkeley. When you do that, you immediately find yourself with some much more meaningful numbers. Numbers that tell more of the real story. Numbers on which elected representatives of the people can base an informed discussion as to how best to proceed—which is, after all, what democracies elect governments to do.

Much of that “one level down” data was collected by the BBC and published on its website. It makes for interesting reading.

For instance, it turned out that among 18-24-year-old voters, a massive 73% voted to remain in the EU, as did over half of those aged 25-49. (See Table.) So, given that the decision was about the future of the UK, the result seems to provide a strong argument to remain in the EU. Indeed, it is only among voters 65 or older that you see significant numbers (61%) in favor of leaving. (Their voice matters, of course, but few of them will be alive by the time any benefits from an exit may appear.)

Source: http://www.bbc.com/news/uk-politics-36616028

You see a similar Simpson’s Paradox effect when you break up the vote by geographic regions, with London, Scotland, and Northern Ireland strongly in favor of remaining in the EU (Scotland particularly so).

It’s particularly interesting to scroll down through the long chart in the section headed “Full list of every voting area by Leave”, which is sorted by decreasing Leave vote, with the highest Leave vote at the top. I would think that range of numbers is extremely valuable to anyone in government.

There is no doubt that the British people have a complex decision to make, one that will have a major impact on the nation’s future for generations to come. Technically, I am one of the “British people,” but having made the US my home thirty years ago, I long ago lost my UK voting rights. My interest today is primarily that of a mathematician who has made something of a career arguing for improved public understanding of the sensible use of my subject, and railing against the misuse of numbers.

My emotional involvement today is in the upcoming US presidential election, where there is also an enormous amount of misuse of mathematics, and many lost opportunities where the citizenry could take advantage of the power numbers provide in order to make better decisions.

But for what it’s worth, I would urge the citizens of my birth nation to drill down one level in your referendum data. For what you have is a future-textbook example of Simpson’s Paradox (albeit with many more dimensions of variation). To my mathematician’s eye (trained as such in the UK, I might add), the referendum provides very clear numerical information that enables you to form a well-informed, reasoned decision as to how best to proceed.

Deciding between the “will of the older population” and the “will of the younger population” is a political decision. So too is deciding between “the will of London, Scotland, and Northern Ireland” and “the will of the remainder of the UK”. What would be mathematically irresponsible, and to my mind politically and morally irresponsible as well, would be to make a decision based on a single number. Single numbers rarely make decisions for us. Indeed, single numbers are usually as likely to mislead as to help. A range of numbers, in contrast, can provide valuable data that can help us to better understand the complexities of modern life, and make better decisions.

We humans invented numbers and mathematics to understand our world (initially physical and later social), and to improve our lives. But to make good use of that powerful, valuable gift from our forebears, we need to remember that numbers are there to serve us, not the other way round. Numbers are just tools. We are the ones with the capacity to make decisions.

* A version of this blog post was also published on The Huffington Post.

Tuesday, June 7, 2016

Infinity and Intuition

On May 30, Gary Antonick’s always interesting Numberplay section in the New York Times featured a contribution by Berkeley mathematician Ed Frenkel on the difficulties the human mind can encounter when trying to come to grips with infinity. If you have not yet read it, you should.

Infinity offers many results that are at first counter-intuitive. A classic example is Hilbert's Hotel, which has infinitely many rooms, each one labeled by a natural number printed on the door: Room 1, Room 2, Room 3, etc., all the way through the natural numbers. One night, a traveler arrives at the front desk only to be told by the clerk that the hotel is full. "But don't worry, sir," says the clerk, "I just took a mathematics course at my local college, and so I know how to find you a room. Just give me a minute to make some phone calls." And a short while later, the traveler has his room for the night. What the clerk did was ask every guest to move to the room whose number is one greater than that of their current room. Thus, the occupant of Room 1 moved into Room 2, the occupant of Room 2 into Room 3, etc. Everyone moved room, no one was ejected from the hotel, and Room 1 became vacant for the newly arrived guest.
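In symbols (the name f below is introduced just for this restatement), the clerk's trick is the observation that

\[
f : \{1, 2, 3, \ldots\} \to \{2, 3, 4, \ldots\}, \qquad f(n) = n + 1,
\]

is a one-to-one correspondence between the set of all rooms and the set of rooms from Room 2 onward: every guest in Room \(n\) gets a new room \(f(n)\), no two guests are sent to the same room, and Room 1 is freed for the newcomer.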

This example is well known, and I expect all regular readers of MAA Online will be familiar with it. But I expect many of you will not know what happens when you step up one level of infinity. No sooner have you started to get the hang of the countable infinity (cardinality aleph-0) than you encounter the first uncountable infinity (cardinality aleph-1), and you find there are more surprises in store.

One result that surprised me when I first came across it concerns trees. Not the kind that grow in the forest, but the mathematical kind, although there are obvious similarities, reflected in the terminology mathematicians use when studying mathematical trees.

A tree is a partially ordered set (T,<) such that for every member x of T, the set {y in T : y < x} of elements below x in the tree is well ordered. This means that the tree has a preferred direction of growth (often represented as upwards in diagrams), and branching occurs only in the upward direction. It is generally assumed that a tree has a unique minimum element, called the root. (If you encounter a tree without such a root, you can simply add one, without altering the structure of the remainder of the tree.)

Since each element of a tree lies at the top of a unique well ordered set of predecessors, it has a well defined height in the tree - the ordinal number of the set of predecessors. For each ordinal number k, we can denote by T_k the set of all elements of the tree of height k. T_k is called the k'th level of T. T_0 consists of the root of the tree, T_1 is the set of all immediate successors of the root, etc.
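In symbols (the notation ht(x) is introduced here just to restate the definitions compactly):

\[
\operatorname{ht}(x) \;=\; \text{the order type of } \{\, y \in T : y < x \,\},
\qquad
T_k \;=\; \{\, x \in T : \operatorname{ht}(x) = k \,\}.
\]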

Thus, the lower part of a tree might look something like this:

  o   o   o   o      <- T_2
   \ /     \ /
    o       o        <- T_1
     \     /
      \   /
        o            <- T_0 (the root)

(It could be different. There is no restriction on how many elements there are on each level, or how many successors each member has.)

A classical result of set theory, sometimes called König's Lemma, says that if T is an infinite tree, and if each level T_n, for n a natural number, is finite, then T has an infinite branch, i.e., an infinite linearly ordered subset.

It's easy to prove this result. You define a branch {x_n : n a natural number} by recursion. To start, you take x_0 to be the root of the tree. Since the tree is infinite, but T_1 is finite, there is at least one member of T_1 that has infinitely many elements above it. Let x_1 be one such element of T_1. Since x_1 has infinitely many elements above it and yet only finitely many successors on T_2, there is at least one successor of x_1 on T_2 that has infinitely many elements above it. Let x_2 be such an element of T_2. Now define x_3 in T_3 analogously so it has infinitely many elements of the tree above it, and so on. This simple process clearly defines an infinite branch {x_n : n a natural number}.
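The recursion in that proof can be summarized in one line (the notation below just restates the argument above):

\[
x_0 = \text{the root}, \qquad
x_{n+1} = \text{some } y \in T_{n+1} \text{ with } y > x_n \text{ and } \{\, z \in T : z > y \,\} \text{ infinite}.
\]

Since every element of the tree above \(x_n\) either belongs to \(T_{n+1}\) or lies above one of the finitely many members of \(T_{n+1}\) that sit above \(x_n\), at least one of those members must itself have infinitely many elements above it, so the recursion never gets stuck.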

Having seen why König's Lemma holds, it's tempting to argue by analogy that if you have an uncountable tree T (i.e., a tree whose cardinality is at least aleph-1) and if every level T_k, for k a countable ordinal, is countable, then T has an uncountable branch, i.e., a linearly ordered subset that meets level T_k for every countable ordinal k.

But it turns out that this cannot be proved. It is possible to construct an uncountable tree, all of whose levels T_k, for k a countable ordinal, are countable, for which there is no uncountable branch. Such trees are called Aronszajn trees, after Nachman Aronszajn, the mathematician who first constructed one.

Here is how to construct an Aronszajn tree. The members of the tree are strictly increasing, bounded sequences of rational numbers, of finite or countable transfinite length. The tree ordering is sequence extension. It is immediate that such a tree could not have an uncountable branch, since its limit (more precisely, its set-theoretic union) would be an uncountable strictly increasing sequence of rationals, contrary to the fact that the rationals form a countable set.

You build the tree by recursion on the levels. T_0 consists of the empty sequence. After T_k has been constructed, you get T_(k+1) by taking each sequence s in T_k and adding in every possible extension of s to a strictly increasing (k+1)-sequence of rationals. That is, for each s in T_k and for each rational number q greater than or equal to the supremum of s, you put into T_(k+1) the result of appending q to s. Being the countable union of countably many sets, T_(k+1) will itself be countable, as required.

In the case of regular recursion on the natural numbers, that would be all there is to the definition, but with a recursion that goes all the way up through the countable ordinals, you also have to handle limit ordinals - ordinals that are not an immediate successor of any smaller ordinal.

To facilitate the definition of the limit levels of the tree, you construct the tree so as to satisfy the following property, which I'll call the Aronszajn property: for every pair of levels T_k and T_m, where k < m, and for every s in T_k and every rational number q that exceeds the supremum of s, there is a sequence t in T_m which extends s and whose supremum is less than q.
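In symbols, the Aronszajn property reads (sup s denotes the supremum of the rationals appearing in s, as above):

\[
\forall k < m \;\; \forall s \in T_k \;\; \forall q \in \mathbb{Q}
\Bigl( q > \sup s \;\Longrightarrow\; \exists\, t \in T_m \;\bigl( t \supseteq s \,\wedge\, \sup t < q \bigr) \Bigr),
\]

where \(t \supseteq s\) means that the sequence \(t\) extends the sequence \(s\).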

The definition of T_(k+1) from T_k that I just gave clearly preserves this property, since I threw in EVERY possible sequence extension of every member of T_k.

Suppose now that m is a limit ordinal and we have defined T_k for every k < m. Given any member s of some level T_k for k < m, and any rational number q greater than the supremum of s, we define, by integer recursion, a path (s_i : i a natural number) through the portion of the tree already constructed, such that its limit (as a rational sequence) has supremum q.

You first pick some strictly increasing sequence of rationals (q_i : i a natural number) such that q_0 exceeds the supremum of s and whose limit is q.

You also pick some strictly increasing sequence (m_i : i a natural number) of ordinals less than m that has limit m and such that s lies below level m_0 in the tree.

You can then use the Aronszajn property to construct the sequence (s_i : i a natural number) so that s_i is on level m_i and the supremum of s_i is less than q_i.

Construct one such path (s_i : i a natural number) for every such pair s, q, and let T_m consist of the limit (as a sequence of rationals) of every sequence so constructed. Notice that T_m so defined is countable.
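In summary, if we write b(s, q) for the limit sequence constructed from the pair s, q (a notation introduced just for this summary), the limit level is

\[
T_m \;=\; \Bigl\{\, b(s,q) \;:\; s \in \textstyle\bigcup_{k<m} T_k,\;\; q \in \mathbb{Q},\;\; q > \sup s \,\Bigr\},
\]

and it is countable because there are only countably many such pairs \((s, q)\): the union \(\bigcup_{k<m} T_k\) is a countable union of countable levels, and \(\mathbb{Q}\) is countable.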

It is clear that this definition preserves the Aronszajn property, and hence the construction may be continued.

And that's it.

NOTE: The above article first appeared in Devlin’s Angle in January 2006. Seeing Frenkel’s Numberplay article prompted me to revive it and give it another airing.

Wednesday, May 4, 2016

Algebraic Roots – Part 2

What does it mean to “do algebra”? In Part 1, published here last month, I described how algebra (from the Arabic al-Jabr) began in 9th Century Baghdad as a way to approach arithmetical problems in a systematic way that scales. It was a way of thinking, using logical reasoning rather than (strictly speaking, in addition to) arithmetical calculation, and the first textbook on the subject explained how to solve problems that way using ordinary language, not symbolic expressions. Symbolic algebra was introduced later, in 16th Century France.

Just as the formal algorithms of Hindu-Arabic arithmetic make it possible to do arithmetic in a purely procedural, rule-following way (without the need for any thought), so too symbolic algebra made it possible to solve algebraic problems by manipulating symbolic expressions using formal rules, again without the need for any thought.

Over the ensuing centuries, schools focused more and more exclusively on the formal, procedural rules of arithmetic and symbolic algebra, driven in part by the needs of industry and commerce to have large numbers of people who could carry out computations for them, and in part for the convenience of the school system.

Today, however, we have digital devices that carry out arithmetical and algebraic procedural calculations for us, faster and with greater accuracy, shifting society’s needs back to arithmetical and algebraic thinking. This is why you see the frequent use of those terms in educational circles these days, along with number sense. (All three terms are so common that definitions of each are easily found on the Web by searching on the name, as is also the case for the more general term mathematical thinking.)

As more (and hopefully better) technological aids are developed, the nature of the activity involved in solving an arithmetical or algebraic problem changes, both for learning and for application. The fluent and effective use of arithmetical calculators, graphing calculators (such as Desmos), spreadsheets, computer algebra systems (such as Mathematica or Maple), and Wolfram Alpha is now a marketable skill and an important educational goal. Each of these tools, and others, provides a different representation of numbers, numerical problems, and algebraic problems.

One consequence of this shift that seemed to take an entire generation of parents off guard is that mastery of the “traditional algorithms” for solving arithmetic and algebraic problems, which were developed to optimize human computations and at the same time create an audit trail, and which used to be the staple of school mathematics instruction, became a much less important educational goal. Instead, it is evidently far more valuable for today’s students to spend their time working with algorithms optimized to develop good arithmetical and algebraic thinking skills, that will (among other things) support fluent and effective use of the new technologies.

I said “evidently” above, since to those of us in the education business, it was just that. With hindsight, however, it seems clear that in rolling out the Common Core State Standards, those in charge should have put much more effort into providing that important background context that was evident to them but, clearly, not evident to many people not working in mathematics education.

I was not involved in the CCSS initiative, by the way, but I doubt I would have done any better. I still find it hard to wrap my mind round the fact that the “evident” (to me) need to modify mathematics education to today’s world is actually not at all evident to many of my fellow citizens—even though we all live and work in the same digital world. I guess it is a matter of the educational perspective those of us in the math ed business bring to the issues.

But even those of us in the education business can sometimes overlook just how much, and how fast, things have changed. The most recent example comes from a highly respected learning research center, LearnLab in Pittsburgh (formerly called the Pittsburgh Science of Learning Center), funded by the National Science Foundation.

The tweet shown below caught my eye a few weeks ago.



The tweet got my attention because I am familiar with DragonBox, and include it in the (very small) category of math learning apps I usually recommend. (I also know the creator, and have given occasional voluntary feedback on their development work, but I have no other connection to the company.)

“Ineffective”? “#dragonboxfail”? Those are the words used in the tweet. But neither can possibly be true. DragonBox provides an alternative representation for linear equations in one unknown. Anyone who completes the game (for want of a better term) has demonstrated mastery of algebraic thinking for single variable linear problems. Period. (There is a separate issue of the representation that I will come to later.)

Indeed, since the mechanics in DragonBox are essentially isomorphic to the rules of classical symbolic algebra (as taught in schools for the last four hundred years), completing the game demonstrates mastery of those mechanics too. From a logical perspective then, the tweet made no sense. All very odd for an official tweet from a respected, federally-funded research institute. Suspecting what must be going on, I looked further.

The tweet was in response to a review of DragonBox, published by EdSurge. I recognized the name of the reviewer, Brady Fukumoto, a former game developer I had met a few times. It was a well-analyzed review. Overall, I agreed with everything Brady said. In particular, he spent some time comparing “doing algebra in the DragonBox representation” to “doing algebra using the traditional symbolic equations representation”, pointing out how much richer is the latter—but noting too that the former can result in higher levels of student engagement. Hardly the “promote” of a product that LearnLab accused him of. Indeed, Brady correctly summarized, and referenced (with a link), the Carnegie Mellon University study the LearnLab tweet implicitly referred to.

I recommend you read Brady’s review. It gets at many aspects of the “what does it mean to do algebra?” issue. As does playing DragonBox itself, which toward the end gradually replaces its initial “game representation” with the standard symbolic equation representation on a touch screen (a process often referred to as deconcretization).

Unlike the tweet, the CMU paper was careful in stating its conclusion. The authors say, and Brady quotes, that they found DragonBox to be “ineffective in helping students acquire skills in solving algebra equations, as measured by a typical test of equation solving.” (The emphasis is mine.)

Now we are at the root of that odd tweet. (One should not make too much of a tweet, of course. Twitter is an instant medium. But, rightly or wrongly, tweets in the name of an organization or a public figure are generally viewed as PR, presenting an authoritative, public stance.) The folks at LearnLab, their knowledge of educational technology notwithstanding, are assuming a perspective in which one particular representation of algebra is privileged; namely, the traditional symbolic one. (Which is the representation they adopt in developing their own algebra instruction app, an Intelligent Tutoring System called Lynnette.) But as I pointed out last month, that representation became the dominant one entirely by virtue of what was at that time the best available distribution technology: the printing press.

With newer technologies, in particular the tablet computer (“printed paper on steroids”), other representations are possible, some better suited to learning, others to applications. To be sure, there are learning benefits to be gained from mastering symbolic algebra, perhaps even from doing so using paper-and-pencil, as Brady points out in his review. But at this stage in the representational technology development, we should adopt a perspective of all bets being off when it comes to how to best represent algebra in different contexts. I think it highly unlikely that we will ever again view algebra as something you learn or do exclusively by using a pen to pour symbols onto a page.

Indeed, with his background in video game design, Brady ends his review by rating DragonBox according to three metrics:

Fun Factor – A: I collected all 1,366 stars available in DragonBox 1 and 2 and had a great time.

Academic Value – B: I worry that many will underestimate the effort needed to transfer DragonBox skills to practical algebra proficiency.

Educational Value – A+: Anytime a kid leaves a game with thoughts like, “algebra is fun!” or “hey, I’m really good at math!” that is a huge win.

The LearnLab researchers are locked into the second perspective: what he calls Academic Value. (So too is Brady, to some extent, with his use of the phrase “practical algebra proficiency” to mean “symbolic algebra proficiency.”)

Make no mistake about it, transfer from mastery in an interactive engagement on a tablet to paper-and-pencil math is not automatic, as both Brady and the CMU researchers observe. To modify the old horse aphorism, DragonBox takes its players right to the water’s edge and dips their feet in, but still the players have difficulty drinking. (My best guess is that, for most learners it takes a good teacher to facilitate transfer.)

I note in passing that initially I had difficulty playing DragonBox. My problem was that classical, symbolic algebra is a second language to me, one I have been fluent in since childhood and use every day. I found it difficult mastering the corresponding actions in DragonBox. Transfer is difficult in both directions.

At the present moment in time, those of us in education (or learning research) should absolutely not assume any one representation is privileged. Particularly so when it comes to learning. In that respect, Brady is right to note that DragonBox’s success in terms of his third metric (essentially, attitude and engagement) is indeed “a huge win.”

In the world in which our students will live their lives, arithmetic, algebra, and many other parts of mathematics, should be learned, and will surely be applied, in multimedia environments. All the evidence available today suggests that mastery of the traditional symbolic representation will be a crucial ingredient in becoming proficient at arithmetic and algebra. But the more effective practitioners are likely to operate with the aid of various technological tools. Indeed, for some future practitioners, mastery of the traditional symbolic representation (which is, remember, just a user interface to a certain kind of thinking) may turn out to be primarily just a key step in the cognitive process of achieving conceptual understanding, not used directly in applications, which may all be by way of mathematical reasoning tools.

Exactly when, in the initial learning process, it is best to introduce the classical symbolic representation is as yet unclear. What the evidence of countless generations of students-turned-parents makes abundantly clear, however, is that teaching only the classical symbolic approach is a miserable failure. That much is affirmed every time a parent posts on social media that they are unable to understand a Common Core math question that requires nothing more than understanding the place-value representation of integers. (Which is true of most of the ones I have seen posted.)

There is some evidence (see for example Jo Boaler’s new book) that a more productive approach is to use learning technologies to develop and sustain student engagement and develop a growth mindset, and provide learning environments for safe, productive failure, with the goal of developing number sense and general forms of creative problem solving (mathematical thinking), bringing in symbolic representations and specific techniques as and when required.

**Full declaration: I should note that my own work in this area, some of it through my startup company BrainQuake, adopts this philosophy. The significant learning gains obtained with our first app were in number sense and creative problem solving for a novel, complex performance task. Acquisition of traditional “basic skills” with our app comes about (intentionally, by design) as a valuable by-product. The improvement we see in the basic skills category is much more modest, and may well be better achieved by a tool such as LearnLab’s ITS. In a world where we have multiple representations, it is wise to make effective use of them all, according to context. It is not a case of an interface “fail”; to say that (with or without a hashtag) is to remain locked in past thinking. Easy to do, even for experts. Rather, in an era when algebra is being forced to return to its roots of being a way of thinking to help us solve practical problems, using all available representations in unison can provide us with a major win.