What I would like to discuss here is Gian-Carlo Rota's foreword, which is extremely interesting for many reasons. He talks about the dynamics of progress in mathematics, pointing out the ways in which the discipline moves forward.

The first way, Rota writes, occurs when a long-standing problem is solved; e.g., Bieberbach's conjecture or Fermat's last theorem. These are holy grails of mathematics, problems which puzzle generations of mathematicians and lead to surprising developments in the field, until, one day, someone finally gets credit for the solution. While the person who finally obtains the solution rightly deserves the credit, Rota points out that the "genius" does not belong to one individual alone; it is a collective, cumulative intelligence belonging to generations of hard-working people.

The second way is also very interesting. There are ideas that circulate in mathematics for years and years (collective intuition, one might say): things that people work on and use, but that no one dares to write down rigorously, perhaps for fear of formalizing a triviality, or simply because nobody knows how to formalize the intuition, or because nobody wants to. The second way that mathematics advances is when such a commonplace idea finally finds a proper, solid, rigorous home. Rota tells us that mathematicians are reluctant to publicize this second way in which the field advances. But when it happens, and happens properly, it opens a window into a new way of thinking.

Rota gives a few examples of the second way. The first is the formalization of group theory. The second is category theory. And, of course, Rota points out that the topic of the Combinatorial Species book is precisely an advance of the second kind: someone (André Joyal) found the correct way of associating a combinatorial structure to a generating function, formalizing the notion of combinatorial species. Rota points out that species relate to generating functions in much the same way that random variables relate to distribution functions.
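To make the species-to-generating-function dictionary concrete, here is a minimal sketch (function names are mine, not from the book) that checks one instance of it: substituting the species of nonempty sets into the species of sets gives set partitions, and on the exponential-generating-function side this is the composition exp(e^x − 1), whose coefficients n![x^n] are the Bell numbers.

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation order for the formal power series

def series_exp(b):
    """exp of a formal power series b (with b[0] == 0), as coefficient lists.
    Uses a' = b'·a, i.e. n·a_n = sum_{k=1}^{n} k·b_k·a_{n-k}."""
    a = [Fraction(0)] * len(b)
    a[0] = Fraction(1)
    for n in range(1, len(b)):
        a[n] = sum(Fraction(k) * b[k] * a[n - k] for k in range(1, n + 1)) / n
    return a

# EGF of the species of nonempty sets: e^x - 1, coefficients 0, 1/1!, 1/2!, ...
nonempty_sets = [Fraction(0)] + [Fraction(1, factorial(n)) for n in range(1, N + 1)]

# Species substitution Sets∘NonemptySets (= set partitions) corresponds to
# EGF composition exp(e^x - 1); extracting n![x^n] gives the Bell numbers.
partitions_egf = series_exp(nonempty_sets)
bell_from_species = [int(factorial(n) * partitions_egf[n]) for n in range(N + 1)]

def bell_triangle(m):
    """Independent check: Bell numbers B_0..B_m via the Bell-triangle recurrence."""
    row, bells = [1], [1]
    for _ in range(m):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
        bells.append(row[0])
    return bells

print(bell_from_species)   # [1, 1, 2, 5, 15, 52, 203, 877, 4140]
print(bell_triangle(N))    # same sequence, computed without generating functions
```

The point of the exercise is Rota's dictionary: a structural operation on species (substitution) translates mechanically into an analytic operation on their EGFs (composition).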

My favorite example of the second kind of advance is Stokes' theorem. Stokes' theorem is a generalization of the fundamental theorem of calculus to higher dimensions, in a geometric setting. It states that the integral of a differential form over the boundary of a smooth oriented manifold equals the integral of the derivative of the form over the manifold. The proof of the theorem is a "triviality". It is a triviality that takes lots and lots of pages of setting up the scene properly: multilinear algebra, differential forms, manifolds. Once the scene is established, and once dozens of "trivial" lemmas are proven, Stokes' theorem comes out easily.
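In symbols, for a smooth oriented $n$-manifold $M$ with boundary $\partial M$ and a compactly supported $(n-1)$-form $\omega$ on $M$, the statement is one line:

```latex
\int_{M} d\omega \;=\; \int_{\partial M} \omega
```

All the work lies in giving rigorous meaning to each symbol in that line; the fundamental theorem of calculus is the case $M = [a,b]$, where $\partial M = \{a, b\}$ with opposite orientations.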

When progress of the second kind occurs in mathematics, Rota points out, it is met with distrust until many papers are written, using the theory and showing the old fogies that things can be done in a much nicer way in the newly established setup. After this happens, the old fogies will take notice. (Some will pretend they "knew that all along".)

He makes an analogy:

> At first, the old fogies will pretend the book [Bergeron et al.] does not exist. This pretense will last until sufficiently many younger combinatorialists publish papers in which interesting problems are solved using the theory of species. Eventually, a major problem will be solved in the language of species, and from that time on everyone will have to take notice.

People, including mathematicians, are very protective of their way of doing things. I have met mathematicians who are completely reluctant to accept a new way of seeing things, a different point of view. Once, when I was a fresh MSc student at Berkeley, Eugene Wong told me that there are two ways of thinking: one is geometric, the other is analytical; but the best progress is made by people who can use both. As Rota puts it:

> Those probabilists of the thirties who held on to distributions, while rejecting random variables as "superfluous," were eventually wiped out, and their results are not even acknowledged today.

Now, I dare add something more to Rota's prediction. You see, once the distrust phase is gone and everybody is happily using the "new math", it is the old, pedestrian way of doing things that is forgotten: everybody (even the enemies of the new field) has converted. But there still remain problems which are best attacked by the good old intuitive way, the one used before the formalization occurred. At that point, the ones who will have an advantage are those who can effortlessly combine both points of view.

Here, then, is the full foreword by Rota. It is taken from Bergeron's webpage:

**Foreword**

by

**Gian-Carlo Rota**

Advances in mathematics occur in one of two ways.

The first occurs by the solution of some outstanding problem, such as the Bieberbach conjecture or Fermat’s conjecture. Such solutions are justly acclaimed by the mathematical community. The solution of every famous mathematical problem is the result of joint effort of a great many mathematicians. It always comes as an unexpected application of theories that were previously developed without a specific purpose, theories whose effectiveness was at first thought to be highly questionable.

Mathematicians realized long ago that it is hopeless to get the lay public to understand the miracle of unexpected effectiveness of theory. The public, misled by two hundred years of Romantic fantasies, clamors for some “genius” whose brain power cracks open the secrets of nature. It is therefore a common public relations gimmick to give the entire credit for the solution of famous problems to the one mathematician who is responsible for the last step.

It would probably be counterproductive to let it be known that behind every “genius” there lurks a beehive of research mathematicians who gradually built up to the “final” step in seemingly pointless research papers. And it would be fatal to let it be known that the showcase problems of mathematics are of little or no interest for the progress of mathematics. We all know that they are dead ends, curiosities, good only as confirmation of the effectiveness of theory. What mathematicians privately celebrate when one of their showcase problems is solved is Polya's adage “no problem is ever solved directly.”

There is a second way by which mathematics advances, one that mathematicians are also reluctant to publicize. It happens whenever some commonsense notion that had heretofore been taken for granted is discovered to be wanting, to need clarification or definition. Such foundational advances produce substantial dividends, but not right away. The usual accusation that is leveled against mathematicians who dare propose overhauls of the obvious is that of being "too abstract," as if one piece of mathematics could be "more abstract" than another, except in the eyes of the beholder. (It is time to raise a cry of alarm against the misuse of the word "abstract," which has become as meaningless as the word "Platonism.")

An amusing case history of an advance of the second kind is uniform convergence, which first made headway in the latter quarter of the nineteenth century. The late Herbert Busemann told me that while he was a student, his analysis teachers admitted their inability to visualize uniform convergence, and viewed it as the outermost limit of abstraction. It took a few more generations to get uniform convergence taught in undergraduate classes.

The hostility against groups, when groups were first “abstracted” from the earlier “group of permutations” is another case in point. Hadamard admitted to being unable to visualize groups except as groups of permutations. In the thirties, when groups made their first inroad into physics via quantum mechanics, a staunch sect of reactionary physicists, repeatedly cried “Victory!” after convincing themselves of having finally rid physics of the “Gruppenpest.” Later, they tried to have this episode erased from the history of physics.

In our time, we have witnessed at least two displays of hostility against new mathematical ideas. The first was directed against lattice theory, and its virulence all but succeeded in wiping lattice theory off the mathematical map. The second, still going on, is directed against the theory of categories. Grothendieck did much to show the simplifying power of categories in mathematics. Categories have broadened our view all the way to the solution of the Weil conjectures. Today, after the advent of braided categories and quantum groups, categories are beginning to look downright concrete, and the last remaining anticategorical reactionaries are beginning to look downright pathetic.

There is a common pattern to advances in mathematics of the second kind. They inevitably begin when someone points out that items that were formerly thought to be “the same” are not really “the same,” while the opposition claims that “it does not matter,” or “these are piddling distinctions.” Take the notion of species that is the subject of this book. The distinction between “labeled graphs” and “unlabeled graphs” has long been familiar. Everyone agrees on the definition of an unlabeled graph, but until a while ago the notion of labeled graph was taken as obvious and not in need of clarification. If you objected that a graph whose vertices are labeled by cyclic permutations – nowadays called a “fat graph” – is not the same thing as a graph whose vertices are labeled by integers, you were given a strange look and you would not be invited to the next combinatorics meeting.

The correct definition of a labeled graph turned out to be more sophisticated than the definition of an unlabeled graph. A labeled graph – or any "labeled" combinatorial construct – is a functor from the groupoid of finite sets and bijections to itself. This definition of a labeled object is not "abstract": on the contrary, it expresses in precise terms the commonsense idea of "being able to label the vertices of a graph either by integers or by colors, it does not matter," and it is the only way of making this commonsense idea precise. The notion of groupoid, which is one of the key ideas of contemporary mathematics, makes it possible to withhold the assignment of a specific set of labels to the vertices of a graph without making the graph unlabeled.

Joyal’s definition of “labeled object” as a species discloses a vast horizon of new combinatorial constructions, which cannot be seen if one holds on to the reactionary view that “labeled objects” need no definition. The simplest, and the most remarkable, application of the definition of species is the rigorous combinatorial rendering of functional composition, which was formerly dealt with by handwaving – always a bad sign. But it is just the beginning.

Species are related to generating functions in much the same way as random variables are related to probability distributions. Those probabilists of the thirties who held on to distributions, while rejecting random variables as “superfluous,” were eventually wiped out, and their results are not even acknowledged today.

I dare make a prediction on the future acceptance of this book. At first, the old fogies will pretend the book does not exist. This pretense will last until sufficiently many younger combinatorialists publish papers in which interesting problems are solved using the theory of species. Eventually, a major problem will be solved in the language of species, and from that time on everyone will have to take notice. The rewriting, copying and imitating will start, and mathematicians who capitulate to the new theory will begin to tell us what species really are. Considering the speed at which mathematics progresses in our day, that time is more likely to come sooner than later.

The present book is the first thorough treatment in English of the theory of species. It is lucidly and clearly written, and it should go a long way to making this fundamental chapter of combinatorial mathematics available to the entire spectrum of mathematicians, computer scientists and cultivated scientists generally.
