Thursday, June 24, 2010

1491: America Before Columbus, a review.


            1491: New Revelations of the Americas Before Columbus, by Charles C. Mann.
                      Mr. Mann's thesis is that very little of what we have been told about life in the Americas before Columbus is actually true.  Our textbooks still maintain that as of 1491, the Americas were a nearly untouched wilderness sparsely occupied by tribes of primitive hunter-gatherers.  Wrong on all counts!  North America did not have 3 to 5 million people; it contained 30 to 50 million, making it more populous than Europe.  But the first contact with Europeans brought smallpox, hepatitis, and other diseases to which the Indians had no resistance. Ninety-five percent of them died, and by the time Europeans began any systematic exploration of the interior 150 years later, not only had the Indians vanished, but nearly every trace of their ever having existed had vanished with them, except in Central America, where stone had been used as a building material. Their way of life left little or no trace except the stories still told by their descendants.  And while white America has always rejected oral histories that contradict its long-held beliefs about Indians in pre-Columbian times, archaeology is now beginning to confirm these accounts.
            Mann says a controversy now exists as to the level of population in the Americas before Columbus.  Those who defend the old view that the Americas were very sparsely populated are the “low counters.”  Those who believe that the current archaeological data shows that the population was several times higher are the “high counters.”  Mann devotes a good portion of his book to expounding and defending the “high count.”  I won’t attempt to summarize the evidence which Mann presents to support this position, but he presents a great deal of it.  He approaches the problem from several different disciplines; and he makes, I believe, a pretty convincing case.
            When Columbus first visited the island of Hispaniola, he estimated that the island contained two million people. Within a few decades, scarcely a living soul remained; villages were littered with decaying corpses.  The people had died of smallpox and the other imported diseases.  Some lived long enough to carry the diseases to the mainland, but not all the continent was depopulated instantly.  When De Soto explored the lower Mississippi about 1541, he reported that both sides of the river were heavily settled, and that one city (probably Cahokia, across the river from present-day St. Louis) was larger than Paris.  When Jolliet came down the same river some 130 years later, for hundreds of miles at a stretch he saw no sign that "the hand of man" had ever touched the place.
            Mann says that pre-Columbian Indians were mostly farmers, not hunter-gatherers. But when their numbers plummeted, they abandoned their fields and reverted to hunting.  As the abandoned fields became overgrown with trees, they gradually turned into mature forest, which the early white settlers mistook for virgin forest. The Indians also had a profound effect on the land they occupied, not merely as the inadvertent consequence of living on it, but as the result of deliberately and actively managing every square inch of it.  They used agriculture and fire to change the landscape just as radically as we do with bulldozers.  Even the areas left as forest were intensively managed.  Captain John Smith once bragged that he could ride his horse at full gallop through any woods in Virginia.  A hundred years later this would not have been possible; but in his day, the woods were more of an open savanna, kept so by annual burnings. The Great Plains were managed by fire as well.
            Mann also contends that the massive flocks of passenger pigeons seen in the late 18th century were not part of a normal pattern, nor were the great herds of bison noted by Lewis and Clark.  Rather, these phenomena were the result of a fairly recent environmental catastrophe.
            When Coronado explored southern Kansas in 1541, he made no mention of bison at all.  A single bison would be an imposing sight to a Spaniard who had never seen one, to say nothing of a thundering herd of 10,000.  Apparently, bison did not exist in large numbers in the early 16th century, and Kansas may have contained none at all.  Three hundred years later, the plains contained 70 million.  Many Indian tribes hunted bison, but bison were not an important staple of their diet before Columbus.  The pre-Columbian corn farmers at Cahokia occasionally hunted birds; bird bones are found in their trash heaps, but passenger pigeon bones only rarely.  The explosive growth of these two species began only after the first Europeans visited this continent.
            In any area, there is usually a "keystone species," a species whose importance to the overall ecosystem is such that its abrupt removal or reduction causes chaos.  Humans (the Indians) had been that keystone, and their abrupt depopulation by smallpox meant that the balance maintained by the Indians' use of fire, hunting, and agriculture no longer existed.  The result was the explosive growth of some species and the extinction of others.
            Indian agriculture was also quite sophisticated, in some ways more sophisticated than that of Europe.  The plant-breeding achievement of developing corn from its ancient wild ancestor, teosinte, far exceeds anything Western science has ever achieved in plant genetics.  And there are gardens in Oaxaca that have been tilled for 4,000 years and are still fertile.  They remain so, not through crop rotation, but through polyculture: corn, beans, squash, and chilies are grown in the same field at the same time.   The corn provides a stalk for the beans to climb, the beans fix nitrogen for the corn, and the squash leaves serve as ground cover to suppress weeds.  Every year, these crops, along with human and animal waste, replace everything which is removed from the soil.  And the nutrition derived from this combination of plants provides a nearly perfectly balanced diet.
            In some ways, even the Amazon rain forest may be largely a human artifact.  In any populated part of the Amazon basin, the jungle contains an unnaturally high proportion of trees that provide edible fruit or some other product useful to humans.  It turns out that these species of trees, some 168 of them, are probably not wild trees but the wild descendants of cultivated varieties originally planted there by humans.  In fact, some areas of the Amazon basin which are above flood level (a small percentage of the total land, but still a vast acreage) are covered with "anthropogenic" soil, the famous terra preta.  In these areas, which are still being gardened, you can find soil that contains alternating layers of potsherds, fish bones, leaf mulch, and other things obviously placed there by humans.  This stuff is now so rich a growing medium that it is packaged and sold as potting soil.  A recently excavated burial mound formed from this man-made soil was estimated to contain 40 million potsherds, yet the mound used only a small part of the dirt from the farm field where it was built.  To manage a vast acreage that intensively would take a very large population living there for a very long time.  According to Charles Mann, that is exactly what there was.


            Amazon basin people today practice slash-and-burn (swidden) agriculture, and the prevailing view is that this was always the case. Mann claims that slash-and-burn farming was never practical anywhere before the introduction of the steel axe, because clearing land with stone tools is so inefficient that if you farmed a plot for only ten or twenty years and then abandoned it, the total food calories produced would never repay the calories expended in clearing it.
            Obviously, the severity with which the Indians impacted their environment was not uniform.   Those living too far north for agriculture would have lived as hunters and trappers, much as the Indians in northern Canada do today.  As such, they would have altered the environment only minimally. But by 800 AD, Indians were farming most parts of North America which are farmed today.  And in the Amazon basin, they practiced something akin to agriculture in places where we would not be successful today.
            The political institutions of the Indians were often as sophisticated as their agriculture. The Iroquois Six Nations had an elected parliament which dates back to about 1150 AD and which still functions today.  It is the second-oldest representative democratic body on the planet, second only to Iceland's Althing.   The pre-Columbian Iroquois did not practice slavery, and their women enjoyed a status much closer to gender equality than European women did at the time.  When English and French settlers first encountered the Iroquois, they were dumbfounded by their "outrageous" ideas, such as the belief that all men are by nature free, that no man can be owned by another, and that every man has an equal right to a voice in the governing of his country.
            Europeans saw these "naïve and silly" ideas as proof that the Indians would always be "ungovernable savages."  They also found such ideas so amusing that they were quickly reported back to Europe.  But not all Europeans were amused.  The philosophers of the Enlightenment took these ideas seriously and argued about them for the next hundred years. John Locke was particularly impressed.  He saw the Indians as "man in the natural state," and so assumed that Iroquois concepts of individual freedom and equality must be the natural rights of man.  Jefferson and the other American founders were familiar with the writings of Locke.  They also had direct personal knowledge of Iroquois democracy, and knew that it actually worked.  Democracy is not an invention that can be patented, but if it were, the Iroquois might have the prior claim.  In any case, Western democratic institutions and ideas of freedom and equality probably owe more to the Iroquois than to the Magna Carta.  (Mann doesn't mention it, but I believe I've read somewhere that the phrase "all men are created equal" is translated almost directly from an Iroquois law.)
            Mann contends that the Indians gave us corn, potatoes, and democracy, and also tobacco and syphilis.  We gave them horses, guns, and steel tools, and also alcohol, smallpox, and Christianity.   Most modern Americans are a bit conflicted about having taken the Indians' land. We have no intention of giving it back and couldn't if we wanted to.  But we've never felt it was quite fair to have taken it.  Somehow, believing that America was a vast, nearly empty wilderness, peopled only by a handful of child-like primitives, makes taking this land seem more justifiable.  The main message of 1491 is that however comforting our beliefs may be, they are simply not true.  We'd like to believe that America was an empty wilderness.  But it wasn't empty, and it wasn't really wilderness.  If you haven't yet read 1491, I strongly recommend it.   It's the perfect sequel to Guns, Germs, and Steel, by Jared Diamond.

Monday, June 21, 2010

Benefits of the Bust.

   Anatole Kaletsky, an editor of the Times of London, has a new book, Capitalism 4.0, and the Wall Street Journal printed a brief excerpt on June 19th.  He says that the one benefit of the current economic crisis is that it has clearly shown that modern academic economics is nonsense. He begins with a joke:  An economist, a physicist, and a chemist are trapped on a desert island with a can of food but no can opener.  The physicist wants to focus sunlight to burn a hole in the can, the chemist wants to use saltwater to corrode a hole, and the economist says:  "You're wasting time. Let's just assume a can opener."
            The joke captures how modern economic theory's unjustified and oversimplified assumptions allowed politicians and regulators to create an imaginary world of market-fundamentalist ideology, in which financial stability is automatic, involuntary unemployment is impossible, and omniscient markets will solve all problems if government simply stands aside.  Now that the house of cards built on this nonsense has come crashing down, the elegant theories are in well-deserved disrepute.
             The greatest embarrassment was not that none of the academics foresaw the crash—no economist from Smith to Keynes ever claimed to predict the future—but that when the crisis hit, they had no useful advice for what to do about it.  When the politicians and central bankers asked for guidance they were effectively told:  “You are on your own since the situation you have to deal with is impossible—our theories show it cannot exist.”   The problem is not that the elegant theories are incorrect mathematically, but that they assume conditions that seldom exist in the real world.  I’ve said this for thirty years.  And Galbraith was saying it clear back in the 1960s.
            In one of his many books (I don't recall which one), Galbraith complained that nearly every scholarly economic publication begins with the phrase "assuming normal free market conditions," after which the author goes on to build elaborate theoretical castles in the air, stacking one complex equation on top of another.  But if you complain that the author has failed to establish that such conditions are likely to occur in the real world, economists get defensive and end up simply explaining how great it would be if such conditions did exist.  Galbraith points out that the economics profession has a very precise definition of "normal free market conditions": enough buyers and enough sellers that the removal of the largest seller or the largest buyer would have no measurable effect on supply or price.  So what percentage of your income is spent on such items?  In the 1960s, Galbraith estimated no more than 15 percent.  So why bother constructing equations that have no application to most of the economy?

Saturday, June 12, 2010

New Wind Turbine Design


        The cover article in the April 2010 Popular Science is about how GE’s latest design in wind turbines will attempt to leapfrog the rest of the industry.  (I don’t usually buy Popular Science because their articles usually do not cover a subject in sufficient depth to answer my questions.  But once in a while they have a cover piece that is just so intriguing I can’t resist.)  But before you look at this article, let me explain just what the GE people are probably trying to beat.  Right now, the best overall design is probably the Clipper, which is built in Cedar Rapids, Iowa.  This is not surprising, because this company was formed by a small group of entrepreneurs who each had wind turbine manufacturing experience going back 25 years or more. By pooling their patents and their capital, they hoped to finally build the kind of turbine which they had always wanted to build, but which no one had ever built. The design improvements they wanted were mainly these:
        First, scale it up large enough so that it's up high where the wind is.  This is important because the power you get out of a turbine is proportional to the cube of the wind speed.  That means that if by going higher you can get twice the wind velocity, you get eight times the output.  So they built a machine that has a blade "like a rotating football field" with its axle 350 feet above the ground.
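        If you want to check that arithmetic yourself, the standard wind-power formula is P = 0.5 * rho * A * v^3 * Cp.  Here is a minimal sketch in Python; the rotor diameter and power coefficient are assumed round numbers for illustration, not Clipper's actual figures:

    import math

    rho = 1.225        # air density at sea level, kg/m^3
    diameter = 90.0    # assumed rotor diameter in meters ("a rotating football field")
    cp = 0.40          # assumed power coefficient (the Betz limit caps it near 0.59)
    area = math.pi * (diameter / 2) ** 2

    def power_mw(v):
        """Power captured from wind at speed v (m/s), in megawatts."""
        return 0.5 * rho * area * v ** 3 * cp / 1e6

    print(power_mw(6.0))                   # ~0.34 MW at 6 m/s
    print(power_mw(12.0))                  # ~2.7 MW at 12 m/s
    print(power_mw(12.0) / power_mw(6.0))  # exactly 8.0: double the wind, 2^3 times the power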
         Second, don't generate AC directly off the turbine shaft; doing so leaves the shaft speed phase-locked to the power-line frequency.  Better to generate DC, letting the shaft speed vary with the wind, and then use inverters to convert the DC to whatever AC the power line wants.
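        To see why the DC link decouples the shaft from the grid, here is a small sketch; the pole count and speed range are assumptions for illustration, not Clipper's actual parameters:

    POLES = 8         # assumed pole count of one generator
    GRID_HZ = 60.0    # the inverter synthesizes this fixed frequency regardless of shaft speed

    def generator_hz(shaft_rpm):
        # Raw electrical frequency off the generator: f = poles * rpm / 120
        return POLES * shaft_rpm / 120.0

    for rpm in (300, 600, 900, 1200):
        # The raw AC frequency wanders with the wind, but once rectified to DC,
        # the inverter output stays locked at 60 Hz.
        print(f"shaft {rpm:4d} rpm -> generator {generator_hz(rpm):5.1f} Hz -> grid {GRID_HZ:.0f} Hz")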
         Third, make the turbine maintainable.  Maintaining these machines is highly technical, and it takes a while to train a maintenance technician.  But with 300-foot ladders to climb every day, knees would start to wear out by age thirty-five.  You'd have to stop doing this work almost as soon as you got a good start.  Clipper's answer is to put in an electric man-lift.
        Finally, don't use one large generator; use four small ones.   If you use one big generator and it ever goes bad, you have to bring in a humongous crane to lift it down.
        Clipper solves the problem like this:   Like most big wind turbines, the Clipper has a gearbox to step up the very low shaft speed (15 to 60 rpm) to at least a few hundred rpm.  The Clipper gearbox has one bull gear turning four pinions.  The pinions are arranged in a circle, and each is permanently mounted in the gearbox with its own set of bearings and its own oil seal where the pinion shaft sticks out of the case.  Each pinion shaft is splined and about 2 inches in diameter.  Each generator rotor has a hollow, internally splined shaft which fits over this arbor.  And since the generator fields come from permanent magnets, there are no brushes either: the rotor is simply a collar holding magnets, which fits over the arbor. The wiring is all in the stator, which slides over the rotor and is easily removable.  The stator housing has a flange on one end which bolts onto the gearbox.  It's light enough that the built-in jib crane can handle it, and it can be lowered to the ground with the built-in power winch.
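        Some rough arithmetic shows what that gear train accomplishes; the step-up ratio below is an assumed figure, since Clipper's actual ratio isn't given here:

    RATIO = 20.0     # assumed bull-gear-to-pinion step-up ratio
    N_PINIONS = 4    # one bull gear drives four pinions, each carrying a generator

    for rotor_rpm in (15, 60):
        pinion_rpm = rotor_rpm * RATIO
        print(f"rotor {rotor_rpm:2d} rpm -> each generator shaft {pinion_rpm:.0f} rpm")

    # Splitting the drive four ways also splits the torque, so each pinion,
    # splined shaft, and generator carries only about a quarter of the load.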
        But now GE is taking a different approach:   They eliminate the gearbox entirely by mounting a single rotor wheel directly on the prop shaft, with a diameter so large that the permanent-magnet poles move through the stator windings fast enough even without a step-up in rpm.   The only part which might ever need replacing is the stator winding, and this is made in small removable sections.  It seems to me that both the Clipper design and the new GE design would provide an ultra-low-maintenance machine.   As the wind-turbine market matures and buyers begin to "price in" the advantage of maintainability, both of these designs are positioned to capture an increasing share of the market.
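        As a footnote on the direct-drive arithmetic: the electrical frequency of a generator is f = poles * rpm / 120, so with enough magnet poles around a large enough rim, even 15 rpm yields a usable frequency.  The pole counts below are illustrative assumptions, not GE's design figures:

    def electrical_hz(poles, shaft_rpm):
        # f = poles * rpm / 120
        return poles * shaft_rpm / 120.0

    SHAFT_RPM = 15   # a typical large-rotor speed, with no gearbox step-up
    for poles in (8, 60, 120):
        print(f"{poles:3d} poles at {SHAFT_RPM} rpm -> {electrical_hz(poles, SHAFT_RPM):5.1f} Hz")

    # A conventional 8-pole generator would produce only 1 Hz at 15 rpm,
    # while 120 poles yields 15 Hz, which the converter then turns into 60 Hz AC.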

Friday, June 11, 2010

Charter School Failure


    Back in the administration of George Herbert Walker Bush, there was an educational bureaucrat, an Assistant Secretary of Education, named Diane Ravitch.  While perhaps not the main architect of the push for charter schools, school choice, and massive educational testing, she was one of the first to jump on the charter school bandwagon. 
    And under George W. Bush, when the No Child Left Behind (NCLB) act took effect and thousands of charter schools were opened, she was the strongest advocate.  Accountability and school choice were going to save education.
            But on March 9, 2010, in a guest editorial in the Wall Street Journal, she explained how she had come to realize that it was all nonsense, wishful thinking, and outright fraud--and that the main reason for failed educational outcomes is poverty.  She has now written a similar article, entitled "Why I Changed My Mind," in the June 14 issue of The Nation.
            You may say, "Yes, we've always known that."   But there were many people, including some intelligent enough that they should have known better, who did not know--and still don't. This failed program now drains funding from precisely those school systems which need it most desperately, yet the fight against this malignant fraud is far from over.  Ms. Ravitch writes in The Nation: "I expected that Obama would throw out NCLB and start over.  Instead, his administration has embraced some of its worst features."
            The confession that this program is not working, signed by one of its strongest advocates, is certainly useful--but only if it is circulated widely among those who need to understand it.
              For Free Market Conservatives, charter schools were an attractive fantasy. If successful, they would not only save education, but vindicate any number of other “free market” approaches to public policy. But in the end, the free market initiatives have helped education about as much as free market derivative traders have helped our pension funds.
            No one likes to confront the abject failure of their most treasured fantasies—but sometimes that has to happen for any progress to be made. Please consider emailing this information to anyone who ought to see it. (Just click on the envelope icon.)

Tuesday, June 8, 2010

The Trade Deficit and the Dollar

   In the May/June issue of Dollars & Sense, you can find a magnificent article by Katherine Sciacchitano entitled "W(h)ither the Dollar?"  It concerns the U.S. trade deficit, the global economic crisis, and the Dollar's status as the world's reserve currency.
    Ms. Sciacchitano explains, in eight pages, how the United States' failure to accept a "world currency" at the Bretton Woods conference in 1944 has doomed us to our current status as a debtor nation.  Our insistence on using the Dollar as the world's reserve currency has also caused the de-industrialization of America, the reduction of the American industrial working class to poverty, and the virtual enslavement of most third-world countries to the whims of international capital.
  I won't attempt to summarize this article, since the article itself basically summarizes the entire course of political, economic, and military relationships of the Western World since 1948, both within nations and between nations. What most authors would attempt in some 800 page magnum opus, she neatly does in 8 pages.  But this is absolutely the best explanation you're going to find on this subject.

Saturday, June 5, 2010

Social Security Sellout

    There is an important article by William Greider in the June 7 issue of The Nation.  Greider says that Obama's National Commission on Fiscal Responsibility and Reform is just a vehicle to provide political cover for a plan to gut Social Security in exchange for conservative help in raising taxes, which must be raised to deal with the deficit.
        But the article quotes Paul Volcker, who points out that cutting Social Security benefits would not affect the deficit in the short run, and that there is no funding problem in the long run.  Right now, the Social Security Trust Fund has a massive surplus--$2.5 trillion--expected to grow to $4.3 trillion by 2023.  This will cover all benefits until at least 2040.
    The government does not own this money and never did.  It was collected from generations of workers under the Federal Insurance Contributions Act (FICA) to pay in advance for their own retirement costs.  The fund is held in the form of special-issue Treasury securities, essentially interest-bearing promissory notes signed by the government. When the Social Security Trust starts spending this money, the government will have to make good on its obligations, either by raising taxes or by printing money.  The big tax breaks for wealthy individuals and for corporations were mainly funded by dipping into this fund, so that's whose taxes would logically be raised to put it back.  But since our government lacks the guts to tax powerful interests, it will probably just print money.  Knowing this, our foreign creditors, mainly China and Japan, are becoming nervous, since they too hold trillions in Dollar-denominated Treasury notes.  If the Dollar is radically devalued by overprinting, all of our creditors are owed less in real terms.
   So gutting Social Security would be a way to reassure our foreign creditors without making the rich pay back the money.   And it would also make conservatives happy for another reason:  They've always hated Social Security because it works.  It's the one example of an overtly socialist program that works reliably and enjoys wide public support.  It gives the lie to their idea that government doesn't work.

Thursday, June 3, 2010

The Real Garden of Eden

In 2003, Natalie D. Munro published an article in Mitteilungen der Gesellschaft für Urgeschichte entitled "Small Game, the Younger Dryas, and the Transition to Agriculture in the Southern Levant." I found certain parts of that article fascinating and would like to share them with you. I believe they may relate to the real, prehistoric "Garden of Eden." What follows is my own crude, two-page synopsis of the story which Munro tells us, placed within a cultural and historical context of my own choosing.

About 19,000 years ago, the glacial maximum of the Würm period began to give way to the Bølling-Allerød interstadial, and the climate became much warmer, which eventually brought about an end to big-game hunting in Europe. As mammoths and other large game died out, people turned to small-game hunting and more food gathering. People began to be more settled, and to develop local specializations for hunting and gathering. About this time, dogs were domesticated.

Then, about 14,500 years ago, the Bølling-Allerød warming trend sharply accelerated. The Fertile Crescent region turned much warmer and wetter, and food became radically more abundant. Semi-settled tribes in the southern Levant began living in fully settled villages, even though they did not yet practice agriculture or herding.

They built elaborate round, semi-sunken stone houses with slab-lined floors and built-in hearths. They collected wild grain and stored it in stone granaries, even though they were not yet planting or cultivating it. Food was so abundant that these people could harvest grain, fruit, and meat from a permanent base without having to plant crops or raise herd animals. And since the climate was benign, little or no clothing was needed. This situation lasted for over a thousand years. This culture, the Early Natufian, was the first human society to have lived in comfort and security without struggle, toil, or continual migration. Many Near Eastern or Middle Eastern cultures have some myth, some folk memory, of an Eden, a paradise. If these legends from the mists of our half-remembered past relate to an actual historical event, then this was the event. This was when and where it happened.

About 13,000 years ago, the climate turned sharply cooler and drier, and remained so for 1,500 years, during the Younger Dryas period. Food became sparse, and the villages were abandoned. The life of ease had come to an end. The Early Natufian culture gave way to the Late Natufian, and the migratory hunter-gatherer life was resumed.

Though the Natufians could no longer live in their villages, they still returned there as part of their annual migration, and to re-bury the bones of their dead. Bones of those who died and were buried elsewhere in the migration cycle were later exhumed and re-buried beneath the floors of stone houses which their tribe had once occupied, centuries earlier. I find this detail fairly poignant, as it reveals the disappointment, anguish, and grief the Natufians must have felt when forced to abandon their opulent garden and the life they had known there. Something new had entered human culture—the idea of “home.” Even though they could no longer live in these villages, they wanted to be buried there. After a thousand years of wandering, they still longed to “come home.”

By 11,500 years ago (9,500 BC), the climate became warmer again. In semi-settled tribes of the foothills of western Asia (Turkey, Iraq, and Iran), men began domesticating wild sheep; women continued harvesting the seeds of wild wheat and barley, and invented beer and bread; and the Mother Goddess religion was born. In the southern Levant, the old villages were occupied on a permanent basis again, with a population density that exceeded even the Early Natufian. But there was one striking difference: they were now practicing agriculture, planting and cultivating cereal crops on an intentional, full-time basis. Natufian culture had ended, and with it, the Pleistocene. And with the beginning of the Holocene, the Pre-Pottery Neolithic A (PPNA) had begun.

One should not be too quick to suppose that early agriculture was a form of “progress.” No doubt, these people had known of agriculture and practiced it on a small scale, experimentally, for thousands of years. But to embrace farming, full-scale, was not a choice they would ever have preferred. Farming with a sharp stick and a stone knife is not quite like driving an air-conditioned John Deere combine.

Primitive farming, stooped labor in the hot sun from dawn to dusk, is as depressing and as back-breaking a way to obtain food as one can imagine. No one who had ever successfully practiced hunting and gathering would choose this life except as a last, desperate attempt to stay alive. For the first thousand years after the advent of farming, the average human height decreased by several inches, and life expectancy decreased. Yet, ironically, farming a given area will support many times as many people as hunting. So as humans became more miserable, they also became more numerous. And since hunting requires much more land per person than farming, once a certain population threshold was passed, there could be no turning back to hunting. For better or worse, it was a one-way trip. A point of no return was reached, and from that point on, an ineluctable, and to some extent even predictable, trajectory of human culture had been launched.