Tuesday, July 31, 2007

Are Your Cell Phone and Laptop Bad for Your Health?

By Stan Cox, AlterNet
Posted on July 31, 2007.

In the wee hours of July 14, a 45-year-old Australian named John Patterson climbed into a tank and drove it through the streets of Sydney, knocking down six cell-phone towers and an electrical substation along the way. Patterson, a former telecommunications worker, reportedly had mapped out the locations of the towers, which he claimed were harming his health.
In recent years, protesters in England and Northern Ireland have brought down cell towers by sawing, removing bolts, and pulling them over with tow trucks and ropes. In one such case, locals bought the structure and sold off pieces of it as souvenirs to help fund future protests. In Germany, some churches have tried to fend off objections to towers by disguising them as giant crucifixes.
Opposition to towers usually finds more socially acceptable outlets, and protests are being heard more often than ever in meetings of city councils, planning commissions, and other government bodies. This summer alone, citizen efforts to block cell towers have sprouted in California, New Jersey, Maryland, Illinois, and North Dakota, among a host of other places, and north of the border in Ontario and British Columbia. Transmitters are already banned from the roofs of schools in many districts.
For years, towers have been even less welcome in the United Kingdom, where this summer has seen disputes across the country.
Most opponents cite not only aesthetics but also concerns over potential health effects of electromagnetic (EM) fields generated by the towers. Once ridiculed as crackpots and Luddites, they're starting to get backup from the scientific community.
It's not just cell phones they're worried about. The Tottenham area of London is considering the suspension of all wireless technology in its schools. Last year, Fred Gilbert, a respected scientist and president of Lakehead University in Ontario, banned wireless internet on his campus. And resident groups in San Francisco are currently battling Earthlink and Google over a proposed city-wide Wi-Fi system.
Picking up some interference?
For decades, concerns have been raised about the health effects of "extremely low frequency" fields produced by electrical equipment or power lines. People living close to large power lines or working next to heavy electrical equipment spend a lot of time in electromagnetic fields generated by those sources. Others of us can be exposed briefly to very strong fields each day.
But in the past decade, suspicion has spread to cell phones and other wireless technologies, which operate at frequencies that are millions to tens of millions of times higher but at low power and "pulsed."
Then there's your cell phone, laptop, or other wireless device, which not only receives but also sends pulsed signals at high frequencies. Because it's usually very close to your head (or lap) when in use, the fields experienced by your body are stronger than those from a cell tower down the street.
A growing number of scientists, along with a diverse collection of technology critics, are pointing out that our bodies constantly generate electrical pulses as part of their normal functioning. They maintain that incoming radiation from modern technology may be fouling those signals.
But with hundreds of billions in sales at stake, the communications industry (and more than a few scientists) insist that radio-frequency radiation can't have biological effects unless it's intense enough to heat your flesh or organs, in the way a microwave oven cooks meat.
It's also turning out that when scientific studies are funded by industry, the results are a lot less likely to show that EM fields are a health hazard.
Low frequency, more frequent disease?
Before the digital revolution, a long line of epidemiological studies compared people who were exposed to strong low-frequency fields -- people living in the shadow of power lines, for example, or long-time military radar operators -- to similar but unexposed groups.
One solid outcome of that research was to show that rates of childhood leukemia are associated with low-frequency EM exposure; as a result, the International Agency for Research on Cancer has labeled that type of energy as a possible carcinogen, just as it might label a chemical compound.
Other studies have found increased incidence of amyotrophic lateral sclerosis (commonly called ALS or Lou Gehrig's disease), higher rates of breast cancer among both men and women, and immune-system dysfunction in occupations with high exposure.
Five years ago, the California Public Utilities Commission asked three epidemiologists in the state Department of Health Services to review and evaluate the scientific literature on health effects of low-frequency EM fields.
The epidemiologists, who had expertise in physics, medicine, and genetics, agreed in their report that they were "inclined to believe that EMFs can cause some degree of increased risk of childhood leukemia, adult brain cancer, Lou Gehrig's disease, and miscarriage" and were open to the possibility that they raise the risks of adult leukemia and suicide. They did not see associations with other cancer types, heart disease, or Alzheimer's disease.
Epidemiological and animal studies have not been unanimous in finding negative health effects from low-frequency EM fields, so the electric-utility industry continues to emphasize that no cause-and-effect link has been proven.
High resistance
Now the most intense debate is focused on radio-frequency fields. As soon as cell phones came into common usage, there was widespread concern that holding an electronic device against the side of your head many hours a month for the rest of your life might be harmful, and researchers went to work looking for links to health problems, often zeroing in on the possibility of brain tumors.
Until recently, cell phones had not been widely used over enough years to evaluate effects on cancers that take a long time to develop. A number of researchers failed to find an effect during those years, but now that the phones have been widely available for more than a decade, some studies are relating brain-tumor rates to long-term phone use.
Some lab studies have found short-term harm as well. Treatment with cell-phone frequencies has disrupted thyroid-gland functioning in lab rats, for example. And at Lund University in Sweden, rats were exposed to cell-phone EM fields of varying strengths for two hours; 50 days later, exposed rats showed significant brain damage relative to non-exposed controls.
The authors were blunt in their assessment: "We chose 12-26-week-old rats because they are comparable with human teenagers -- notably frequent users of mobile phones -- with respect to age. The situation of the growing brain might deserve special concern from society because biologic and maturational processes are particularly vulnerable during the growth process."
Even more recently, health concerns have been raised about the antenna masts that serve cell phones and other wireless devices. EM fields at, say, a couple of blocks from a tower are not as strong as those from a wireless device held close to the body; nevertheless, many city-dwellers are now continuously bathed in emissions that will only grow in their coverage and intensity.
Last year, RMIT University in Melbourne, Australia, closed off the top two floors of its 17-story business school for a time because five employees working on its upper floors had been diagnosed with brain tumors in a single month, and seven since 1999. Cell phone towers had been placed on the building's roof a decade earlier and, although there was no proven link between them and the tumors, university officials were taking no chances.
Data on the health effects of cell or Wi-Fi towers are still sparse and inconsistent. Their opponents point to statistically rigorous studies like one in Austria finding that headaches and difficulty with concentration were more common among people exposed to stronger fields from cell towers. All sides seem to agree on the need for more research with solid data and robust statistical design.
San Francisco, one of the world's most technology-happy cities, is home to more than 2400 cell-phone antennas, and many of those transmitters are due to be replaced with more powerful models that can better handle text messaging and photographs, and possibly a new generation of even higher-frequency phones.
Now there's hot-and-heavy debate over plans to add 2200 more towers for a city-wide Earthlink/Google Wi-Fi network. On July 31, the city's Board of Supervisors considered an appeal by the San Francisco Neighborhood Antenna-Free Union (SNAFU) that the network proposal be put through an environmental review -- a step that up to now has not been required for such telecommunications projects.
In support of the appeal, Magda Havas, professor of environmental and resource studies at Trent University in Ontario, submitted an analysis of radio-frequency effects found in more than 50 human, animal, and cellular-level studies published in scientific journals.
Havas has specialized in investigating the effects of both low- and high-frequency EM radiation. She says most of the research in the field is properly done, but that alone won't guarantee that all studies will give similar results. "Natural variability in biological populations is the norm," she said.
And, she says, informative research takes time and focus: "For example, studies that consider all kinds of brain tumors in people who've only used cell phones for, say, five years don't show an association. But those studies that consider only tumors on the same side of the head where the phone is held and include only people who've used a phone for ten years or more give the same answer very consistently: there's an increased risk of tumors." In other research, wireless frequencies have been associated with higher rates of miscarriage, testicular cancer, and low sperm counts.
Direct current from a battery can be used to encourage healing of broken bones. EM fields of various frequencies have also been shown to reduce tissue damage from heart attacks, help heal wounds, reduce pain, improve sleep, and relieve depression and anxiety. If they are biologically active enough to promote health, are they also active enough to degrade it?
At the 2006 meeting of the International Commission for Electromagnetic Safety in Benevento, Italy, 42 scientists from 16 countries signed a resolution arguing for much stricter regulation of EM fields from wireless communication.
Four years earlier, in Freiburg, Germany, a group of physicians had signed a statement also calling for tighter regulation of wireless communication and a prohibition on use of wireless devices by children. In the years since, more than 3000 doctors have signed the so-called "Freiburger Appeal" and documents modeled on it.
But in this country, industry has pushed for and gotten exemption from strict regulation, most notably through the Telecommunications Act of 1996. Libby Kelley, director of the Council on Wireless Technology Impacts in Novato, California, says, "The technology always comes first, the scientific and environmental questions later. EM trails chemicals by about 10 years, but I hope we'll catch up."
Kelley says a major problem is that the Telecommunications Act does not permit state or local governments to block the siting of towers based on health concerns: "We'll go to hearings and try to bring up health issues, and officials will tell us, 'We can't talk about that. We could get sued in federal court!'"
High-voltage influence?
Industry officials are correct when they say the scientific literature contains many studies that did not find power lines or telecommunication devices to have significant health effects. But when, as often happens, a range of studies give some positive and some negative results, industry people usually make statements like, "Technology A has not been proven to cause disease B."
Michael Kundi, professor at the Medical University of Vienna, Austria and an EM researcher, has issued a warning about distortions of the concept of cause-and-effect, particularly when a scientific study concludes that "there is no evidence for a causal relationship" between environmental factors and human health. Noting that science is rarely able to prove that A did or did not "cause" B, he wrote that such statements can be "readily misused by interested parties to claim that exposure is not associated with adverse health effects."
Scientists and groups concerned about current standards for EM fields have criticized the World Health Organization (WHO) and others for downplaying the risks. And some emphasize the risk of financial influence when such intense interest is being shown by huge utilities and a global communications industry that's expected to sell $250 billion worth of wireless handsets per year by 2011 (that's just for the instruments, not counting monthly bills). Microwave News cited Belgian reports in late 2006 that two industry groups -- the GSM Association and Mobile Manufacturers Forum -- accounted for more than 40 percent of the budget for WHO's EM fields project in 2005-06.
When a US National Academy of Sciences committee was formed earlier this year to look into health effects of wireless communication devices, the Center for Science in the Public Interest and Sage Associates wrote a letter to the Academy charging that the appointment of two of the committee's six members was improper under federal conflict-of-interest laws.
One of the committee members, Leeka Kheifets, a professor of epidemiology in UCLA's School of Public Health, has, says the letter, "spent the majority of the past 20 years working in various capacities with the Electric Power Research Institute, the research arm of the electric power industry."
The other, Bernard Veyret, senior scientist at the University of Bordeaux in France, "is on the consulting board of Bouygues Telecom (one of 3 French mobile phone providers), has contracts with Alcatel and other providers, and has received research funding from Electricite de France, the operator of the French electricity grid." The NAS committee will be holding a workshop this month and will issue a report sometime after that.
A paper published in January in the journal Environmental Health Perspectives found that when studies of cell phone use and health problems were funded by industry, they were much less likely to find a statistically significant relationship than were publicly funded studies.
The authors categorized the titles of the papers they surveyed as either negative (as in "Cellular phones have no effect on sleep patterns"), neutral (e.g., "Sleep patterns of adolescents using cellular phones"), or positive (e.g., "Cellular phones disrupt sleep"). Fully 42 percent of the privately funded studies had negative titles and none had positive ones. In public or nonprofit studies, titles were 18 percent negative and 46 percent positive.
Alluding to previous studies in the pharmaceutical and tobacco industries, the authors concluded, "Our findings add to the existing evidence that single-source sponsorship is associated with outcomes that favor the sponsors' products."
By email, I asked Dr. John Moulder, a senior editor of the journal Radiation Research, for his reaction to the study. Moulder, who is Professor and Director of Radiation Biology in the Department of Radiation Oncology at the University of Wisconsin, did not think the analysis was adequate to conclusively demonstrate industry influence and told me that in his capacity as an editor, "I have not noted such an effect, but I have not systematically looked for one either. I am certainly aware that an industry bias exists in other areas of medicine, such as reporting of clinical trials."
Moulder was lead author on a 2005 paper concluding that the scientific literature to that point showed "a lack of convincing evidence for a causal association between cancer and exposure to the RF [radio-frequency] energy used for mobile telecommunications."
The Center for Science in the Public Interest has questioned Moulder's objectivity because he has served as a consultant to electric-power and telecommunications firms and groups. Moulder told me, "I have not done any consulting for the electric power and telecommunications industry in years, and when I was doing consulting for these industries, the journals for which I served as an editor or reviewer were made aware of it."
A year ago, Microwave News also reported that approximately one-half of all studies looking into possible damage to DNA by communication-frequency EM fields found no effect. But three-fourths of those negative studies were industry- or military-funded; indeed, only 3 of 35 industry or military papers found an effect, whereas 32 of 37 publicly funded studies found effects.
Magda Havas sees a shortage of public money in the US for research on EM health effects as one of the chief factors leading to lack of a rigorous public policy, telling me, "Much of the research here ends up being funded directly or indirectly by industry. That affects both the design and the interpretation of studies." As for research done directly by company scientists, "It's the same as in any industry. They can decide what information to make public. They are free to downplay harmful effects and release information that's beneficial to their product."
Meanwhile, at Trent University where Havas works, students using laptops are exposed to radio-frequency levels that exceed international guidelines. Of that, she says, "For people who've been fully informed and decide to take the risk, that's their choice. But what about those who have no choice, who have a cell-phone tower outside their bedroom window?
"It's the equivalent of secondhand smoke. We took a long time to get the political will to establish smoke-free environments, and we now know we should have done it sooner. How long will it take to react to secondhand radiation?"
For more information, visit Environmental Health Perspectives; Microwave News; the National Center for Biotechnology Information.
Stan Cox is a plant breeder and writer in Salina, Kansas. His book Sick Planet: Corporate Food and Medicine will be published by Pluto Press in Spring 2008.

Remembering Bergman

Ingmar Bergman changed the face of filmmaking -- and may have been the 20th century's greatest artist.
By Andrew O'Hehir
Jul. 31, 2007 Sometime in the fall of 1980, I went to see Ingmar Bergman's film "Persona." I can literally say that it changed my life. I had seen other so-called art films, and even other Bergman films, but nothing quite like that ambiguous black-and-white masterpiece from 1966, a critical point of contact between regular narrative filmmaking and the parallel tradition of experimental film.

If you haven't seen the film, it begins as an acutely observed, relatively straightforward story about the tense relationship between two women. One, played by Bergman's former wife Liv Ullmann, is a famous actress who has suddenly fallen mute, apparently in the grip of a psychological or spiritual crisis. The other, played by Bibi Andersson, is the chatty, overly confessional nurse assigned to care for the actress while she heads to the seaside for some rest and relaxation. At a certain point in the story, an act of cruelty ruptures the superficial friendship, and literally seems to destroy the film. The film appears to stick in the projector and burn from the heat of the bulb, and all sorts of fragmentary, unexplained images (many of them snippets of silent movies) erupt onto the screen. "Reality" is eventually restored, but the rest of "Persona" has a troubled, dreamlike quality, as if we're now in a world where old-fashioned narrative clarity is no longer available.
I remember sitting up nearly all night in my dorm room digesting what I had seen, and then going back to see it again the following night. A year or so later, one of my friends who had bought a 16mm projector at a flea market checked out a print of "Persona" from the Baltimore public library. We hung a bedsheet on the wall of his apartment and watched the movie perhaps eight times in two weeks, with various constellations of bored or enthralled or bewildered acquaintances. Wherever those people are today, I know what memories were called up for them by reading of Bergman's death on Monday, at age 89, on Faro, the remote Swedish island where he lived and had set several films.

Those bedsheet screenings exemplified the kind of devotion Ingmar Bergman's movies demanded from their adherents, and against which his detractors rebelled. For better and for worse, Bergman was the high priest of a certain vision of cinema, one that essentially vanished long ago. He made only a handful of films after his official retirement with the Oscar-winning "Fanny and Alexander" in 1983, but his death is still a landmark moment. Bergman was the last survivor among the foursome of legendary directors whose work created and defined the art-film market in the years after World War II, the others being Federico Fellini, Akira Kurosawa and François Truffaut.

It's misleading and overly narrow, however, to suggest that Bergman or the other art-house lions belonged entirely to the tradition of high art. His films encompass the carnival as well as the cathedral; they include comedies, romances and family melodramas as well as fables of the dark night of the soul. Only a few of them are as self-consciously confrontational as "Persona," and in the 1960s and '70s you could certainly find film buffs -- followers of Jean-Luc Godard, for instance -- who found Bergman to be conservative and conventional. (Compared to the work of his Russian disciple Andrei Tarkovsky, most of Bergman's pictures feel like crackerjack entertainment.)

It's nonetheless accurate to say that Bergman understood himself first and foremost as an artist who belonged to a European tradition stretching back to the Middle Ages, which he evoked so memorably in his first big international success, "The Seventh Seal" (1957). Most obviously, his work borrowed from the Scandinavian theatrical tradition of Ibsen and Strindberg, from various northern European strains of painting and sculpture, from Freudian psychology and severe Lutheran theology and the tormented philosophy of Nietzsche and Schopenhauer. On the other hand, Bergman was certainly not immune to popular culture; his sense of craft was shaped by the classic Hollywood films of his youth, especially those of George Cukor, a personal favorite. (One can certainly see, in several early Bergman pictures, the influence of Cukor films like "Dinner at Eight," "The Women" or "The Philadelphia Story.")
In an interview published in 1972, the critic John Simon said to Bergman, "It must be a great responsibility, I was thinking, just to be you; because film is probably the most important art today and I think you're the most important filmmaker in the world. To be the most important man in the most important art is a terrible responsibility." Simon is a contentious and disagreeable fellow, and no doubt the remark struck some people as fatuous even then. But it was not inherently ridiculous to suggest, 35 or 40 years ago, that the director of "Persona," "Smiles of a Summer Night," "The Seventh Seal," "Wild Strawberries," "The Virgin Spring" and "Cries and Whispers" might be the most important artist in the world.
Bergman struggled to combine the various intellectual and psychological currents that shaped him against a particular context, that of the postwar West traumatized by Auschwitz and the Bomb, in which belief in God was fading but, as Bergman would often observe, fear of God was not. For an entire generation of the European and American intelligentsia (which included my parents), Bergman's wrestling matches with existential doubt and religious guilt, with fractured family relationships and what seemed a civilization in disrepair, came to stand in for its own. Max von Sydow's medieval knight playing chess with Death in the plague Europe of "The Seventh Seal" seemed to symbolize mankind on the brink of nuclear annihilation, and the aging professor facing his own death in "Wild Strawberries" (played so marvelously by Victor Sjöström) captured the anxiety of a culture that believed itself crippled by an inability to express or fulfill its emotional needs.
Some of those concerns now seem remote and old-fashioned to us, just as the boundary-smashing impact of the "interrupted" film in "Persona" looks like nothing special to a viewer acclimated to 20 years of music videos and increasingly sophisticated digital editing techniques. The conception that there could be a "most important" artist, or even a most important art, seems alien to the fragmented, niche-marketed, endlessly commodified spirit of the 21st century. Pop culture has become a self-propelling engine that endlessly consumes and recycles its own waste products, increasingly unconscious of anything that predates its own predominance.
Although Bergman remains the subject of sporadic repertory revivals and university film courses, his movies have lost most of their once-mystical aura. After an onslaught of recent DVD releases, most of his important pictures are now readily available (exceptions include "Sawdust and Tinsel," "Dreams" and "The Magician"), but they too are just cultural commodities from the past, and must fend for themselves on the virtual or actual shelves alongside Antonioni and Godard films and "Spartacus" and "Attack of the 50 Foot Woman."
That's probably for the best. By focusing on Bergman as a great artist and deep thinker who grappled with God and existentialism and boiled the soul of the post-Holocaust world in his crucible, critics like Simon have done much to drive audiences away from his work, and have distorted Bergman's own conception of his art. Entirely too much emphasis has been placed on the ideas that allegedly lie behind Bergman's movies; those who haven't seen them are often startled to discover that those ideas are delivered as memorably intimate images and as affecting human stories. Bergman never conceived of his "art" as distinct from cinematic and dramatic craftsmanship, and his very best films, like the battle-of-the-sexes comedy "Smiles of a Summer Night" or the magical family chronicle "Fanny and Alexander," are never reducible to theses or pronouncements.
"I am a man making things for use, and highly esteemed as a professional," Bergman told Simon. "I am proud of my knowing how to make those things." In another interview, with Andrew Sarris, Bergman famously compared himself to the thousands of anonymous stone carvers who worked together to build medieval cathedrals. "Whether I am a believer or an unbeliever, Christian or pagan," he said, "I work with all the world to build a cathedral because I am artist and artisan, and because I have learned to draw faces, limbs, and bodies out of stone."
As every obituary of Bergman will note, he grew up in Uppsala, the ecclesiastical and academic capital of Sweden, as the son of a strict Lutheran preacher and a mother he adored but who sometimes treated him coldly. (This family dynamic is presented vividly in "Fanny and Alexander," which, at least in emotional terms, is highly autobiographical.) In some respects, that's all you need to know about his background; the passionate blend of love, hatred and fear with which Bergman viewed women, God, spirituality and death, and family life in general is all present in his childhood.
He began working in the theater as a teenager, and continued directing plays throughout his life, serving as director in residence at the Royal Dramatic Theatre in Stockholm long after his semi-retirement from filmmaking in 1983. This sometimes leads to the misconception that Bergman saw film essentially as a "larger theater" (to use the phrase of Joseph L. Mankiewicz), when in fact he saw theater and film as drastically different media. What is startling about Bergman's movies (at least after his first few apprentice efforts), and what will ensure their survival, is not their philosophical concerns but their intense attention to cinematic craft.
Bergman's films are economical and intimate, and legendarily focused on the human face. (The split-screen optical merging of Ullmann's and Andersson's faces, at the climactic moment of "Persona," both epitomizes this tendency and simultaneously undermines or renounces it.) Working with cinematographer Sven Nykvist from about 1960 onward, Bergman constructed an expressive visual vocabulary that was both naturalistic and symbolic, in which the human face, considered in loving or excruciating detail, becomes an architectural element, and houses and buildings become characters with moods and temperaments of their own. (Nykvist gets plenty of credit for the "Bergman feeling," and he should. But he did not shoot "Smiles of a Summer Night," "The Seventh Seal" or Bergman's other great 1950s films.)
One thing Bergman brought from the theater was the idea of a revolving repertory company, an idea borrowed or imitated by many subsequent directors, but never to the same effect. Ullmann and von Sydow appeared in nearly a dozen Bergman films each, and Bibi Andersson, Harriet Andersson, Erland Josephson, Gunnar Björnstrand and various other actors kept making return appearances. After a while, seeing another Bergman film felt like a family reunion with people you fundamentally loved and trusted, whatever pain they might inflict on each other and on you. In what turned out to be Bergman's last film, the very fine 2003 "Saraband," Ullmann and Josephson reprised their roles as the warring married couple of "Scenes From a Marriage," made 30 years earlier.
Bergman's movies became the focus of intense intellectual combat: During his period of worldwide fame, he was accused of being a misogynist and a man-hater, of being an apolitical aesthete and a crypto-Marxist nihilist. But the films that occasioned the most controversy in days of yore, and that seem the most implicated in philosophical or psychological heavy lifting -- say, "The Silence" and "Cries and Whispers" and "Shame" and "The Virgin Spring" (and "Persona" too, much as I still love it) -- strike me more as intellectual curiosities today, not necessarily his best work. It's his more "realistic" -- or at least less transparently allegorical -- works about the wounded human quest for love that form the basis of a monumental legacy.
Everywhere I go and as long as I live, I'll carry with me images from Bergman's movies: the beautiful Eva Dahlbeck, weaving her spidery lover's schemes in "Smiles of a Summer Night"; Harriet Andersson and Lars Passgard, as the brother and sister performing a midsummer play for their father in "Through a Glass Darkly"; Bergman and Ullmann's daughter, glimpsed in the audience for his marvelous adaptation of Mozart's "Magic Flute"; Ingrid Bergman, so unforgettable in "Autumn Sonata" (the only time she ever worked with her namesake, to whom she was not related); young Alexander (Bertil Guve) nestled in the lap of his grandmother (the marvelous Gunn Wallgren) in "Fanny and Alexander," a film that captures the joys, terrors and enchantments of childhood better than any I've ever seen.
Bergman's fame may have faded to a ghostly shade of its former self, which he probably didn't mind, and at the moment he's not exactly fashionable outside film-buff circles. ("Saraband" did relatively poor business in its 2005 American release.) His influence is so widespread among younger filmmakers, both in Europe and in American independent cinema, as to be almost invisible. Anyone who makes emotional dramas or what might be called "serious" comedies about parents and children, men and women, is operating on Bergman's turf. Anyone who photographs a door opening in an empty house, a clock ticking on a mantelpiece, or someone reading a letter addressed to somebody else (bad mistake!) is borrowing his vocabulary, consciously or not.
If Ingmar Bergman was the most important man in the most important art form in 1972, his cultural significance on his death in 2007 seems much less clear. No single artist can stand for all the traditions of film (and film itself plays a more limited and ambiguous role in the media economy than it used to), and Bergman was undeniably a middle-class white European from an affluent, highly homogeneous society. Maybe we can agree that Bergman was the greatest of the 20th-century artists who tried to adapt the traditional craftsmanship of European theater to a new cultural form. Maybe we can agree that he believed in art as a redemptive, spiritual, even magical force, and did much to carry that ancient view of art into the movie theater.
Bergman lived a long life full of movies, plays and tumultuous marriages, and by all accounts left it behind with few regrets. He had the life, and the death, we would want for ourselves and those we love. I'm still grieving today because I know that, finally, there will be no more Bergman films. (His recurrent promises to quit making them had become almost comical.) If you've got a bedsheet and a projector, I'm coming over.
-- By Andrew O'Hehir

Sunday, July 29, 2007

Independent Filmmakers, Photographers Protest Proposed NYC Permit Law

"I already have a permit for my camera, its called the First Amendment," Picture New York's Beka Economopoulos told WNBC-TV at a Union Square rally in Manhattan on Friday night (as seen in a YouTube clip of news footage). Her statement was in response to a proposal by the NYC Mayor's Office of Film, Theatre and Broadcasting (MOFTB) to change rules requiring permits to photograph in public places in the City of New York.

The move, which was also explored in today's New York Times, is being criticized as particularly harmful to New York's independent filmmakers. "We've lived for 40 years without these regulations, there's no reason to introduce them now," Economopoulos continued in the TV interview. Among the proposed restrictions: a permit and insurance would be required for anyone filming or photographing with a tripod and a crew of five or more people at one site for 10 or more minutes, or with two people at a single site for more than 30 minutes.

Contacted by indieWIRE, Juliane Cho, Associate Commissioner of NYC MOFTB, referred to information about the change posted on the division's website and noted that one of the proposed rules would give her organization "the ability to provide a waiver of the insurance requirement for those who cannot afford it." Feedback on the proposed rules is being accepted through August 3, 2007. Protest efforts are being led by Picture New York, and more information is available on their website.

Who Really Took Over During That Colonoscopy

The NeverEndingWar slogs along, with more Americans and Iraqi citizens dying, while the Cheney/Bush administration remains unalterably wedded to the very implosion the jihadists seek. One must wonder whether a glorious afterlife through martyrdom is their blinding light of guidance, or whether incompetence is simply the governing principle. The long-standing rhetoric of the neocon and conservative hierarchy has been to shrink government until you can drown it in a bathtub. With that dubious and cynical achievement to their credit (witness government programs slashed and shrunk beyond recognition), they can prove that government indeed doesn't work: FEMA, the EPA, now the Department of Justice, and more. Of course it works for their greater purpose: concentrating power in the corporatocracy and making the rich more powerful, with the collateral damage rationalized as the cost of doing business. There are still 18 months of further crumbling before the next administration is installed. No matter who the generals may be, it will be up to the electorate to compel major changes. The setting for a generational shift, despite the almost uniform lock-step of each party's presidential candidates, could be in the winds, perhaps on the scale of 1968 or even 1932. Sounds like a reach, but in politics anything, anything can happen. It's long overdue. -MS

By FRANK RICH, New York Times
THERE was, of course, gallows humor galore when Dick Cheney briefly grabbed the wheel of our listing ship of state during the presidential colonoscopy last weekend. Enjoy it while it lasts. A once-durable staple of 21st-century American humor is in its last throes. We have a new surrogate president now. Sic transit Cheney. Long live David Petraeus!
It was The Washington Post that first quantified General Petraeus’s remarkable ascension. President Bush, who mentioned his new Iraq commander’s name only six times as the surge rolled out in January, has cited him more than 150 times in public utterances since, including 53 in May alone.
As always with this White House’s propaganda offensives, the message in Mr. Bush’s relentless repetitions never varies. General Petraeus is the “main man.” He is the man who gives “candid advice.” Come September, he will be the man who will give the president and the country their orders about the war.
And so another constitutional principle can be added to the long list of those junked by this administration: the quaint notion that our uniformed officers are supposed to report to civilian leadership. In a de facto military coup, the commander in chief is now reporting to the commander in Iraq. We must “wait to see what David has to say,” Mr. Bush says.
Actually, we don’t have to wait. We already know what David will say. He gave it away to The Times of London last month, when he said that September “is a deadline for a report, not a deadline for a change in policy.” In other words: Damn the report (and that irrelevant Congress that will read it) — full speed ahead. There will be no change in policy. As Michael Gordon reported in The New York Times last week, General Petraeus has collaborated on a classified strategy document that will keep American troops in Iraq well into 2009 as we wait for the miracles that will somehow bring that country security and a functioning government.
Though General Petraeus wrote his 1987 Princeton doctoral dissertation on “The American Military and the Lessons of Vietnam,” he has an unshakable penchant for seeing light at the end of tunnels. It has been three Julys since he posed for the cover of Newsweek under the headline “Can This Man Save Iraq?” The magazine noted that the general’s pacification of Mosul was “a textbook case of doing counterinsurgency the right way.” Four months later, the police chief installed by General Petraeus defected to the insurgents, along with most of the Sunni members of the police force. Mosul, population 1.7 million, is now an insurgent stronghold, according to the Pentagon’s own June report.
By the time reality ambushed his textbook victory, the general had moved on to the mission of making Iraqi troops stand up so American troops could stand down. “Training is on track and increasing in capacity,” he wrote in The Washington Post in late September 2004, during the endgame of the American presidential election. He extolled the increased prowess of the Iraqi fighting forces and the rebuilding of their infrastructure.
The rest is tragic history. Were the Iraqi forces on the trajectory that General Petraeus asserted in his election-year pep talk, no “surge” would have been needed more than two years later. We would not be learning at this late date, as we did only when Gen. Peter Pace was pressed in a Pentagon briefing this month, that the number of Iraqi battalions operating independently is in fact falling — now standing at a mere six, down from 10 in March.
But even more revealing is what was happening at the time that General Petraeus disseminated his sunny 2004 prognosis. The best account is to be found in “The Occupation of Iraq,” the authoritative chronicle by Ali Allawi published this year by Yale University Press. Mr. Allawi is not some anti-American crank. He was the first civilian defense minister of postwar Iraq and has been an adviser to Prime Minister Nuri al-Maliki; his book was praised by none other than the Iraq war cheerleader Fouad Ajami as “magnificent.”
Mr. Allawi writes that the embezzlement of the Iraqi Army’s $1.2 billion arms procurement budget was happening “under the very noses” of the Security Transition Command run by General Petraeus: “The saga of the grand theft of the Ministry of Defense perfectly illustrated the huge gap between the harsh realities on the ground and the Panglossian spin that permeated official pronouncements.” Mr. Allawi contrasts the “lyrical” Petraeus pronouncements in The Post with the harsh realities of the Iraqi forces’ inoperable helicopters, flimsy bulletproof vests and toy helmets. The huge sums that might have helped the Iraqis stand up were instead “handed over to unscrupulous adventurers and former pizza parlor operators.”
Well, anyone can make a mistake. And when General Petraeus cited soccer games as an example of “the astonishing signs of normalcy” in Baghdad last month, he could not have anticipated that car bombs would kill at least 50 Iraqis after the Iraqi team’s poignant victory in the Asian Cup semifinals last week. This general may well be, as many say, the brightest and bravest we have. But that doesn’t account for why he has been invested by the White House and its last-ditch apologists with such singular power over the war.
On “Meet the Press,” Lindsey Graham, one of the Senate’s last gung-ho war defenders in either party, mentioned General Petraeus 10 times in one segment, saying he would “not vote for anything” unless “General Petraeus passes on it.” Desperate hawks on the nation’s op-ed pages not only idolize the commander daily but denounce any critics of his strategy as deserters, defeatists and enemies of the troops.
That’s because the Petraeus phenomenon is not about protecting the troops or American interests but about protecting the president. For all Mr. Bush’s claims of seeking “candid” advice, he wants nothing of the kind. He sent that message before the war, with the shunting aside of Eric Shinseki, the general who dared tell Congress the simple truth that hundreds of thousands of American troops would be needed to secure Iraq. The message was sent again when John Abizaid and George Casey were supplanted after they disagreed with the surge.
Two weeks ago, in his continuing quest for “candid” views, Mr. Bush invited a claque consisting exclusively of conservative pundits to the White House and inadvertently revealed the real motive for the Petraeus surrogate presidency. “The most credible person in the fight at this moment is Gen. David Petraeus,” he said, in National Review’s account.
To be the “most credible” person in this war team means about as much as being the most sober tabloid starlet in the Paris-Lindsay cohort. But never mind. What Mr. Bush meant is that General Petraeus is famous for minding his press coverage, even to the point of congratulating the ABC News anchor Charles Gibson for “kicking some butt” in the Nielsen ratings when Mr. Gibson interviewed him last month. The president, whose 65 percent disapproval rating is now just one point shy of Richard Nixon’s pre-resignation nadir, is counting on General Petraeus to be the un-Shinseki and bestow whatever credibility he has upon White House policies and pronouncements.
He is delivering, heaven knows. Like Mr. Bush, he has taken to comparing the utter stalemate in the Iraqi Parliament to “our own debates at the birth of our nation,” as if the Hamilton-Jefferson disputes were akin to the Shiite-Sunni bloodletting. He is also starting to echo the administration line that Al Qaeda is the principal villain in Iraq, a departure from the more nuanced and realistic picture of the civil-war-torn battlefront he presented to Senate questioners in his confirmation hearings in January.
Mr. Bush has become so reckless in his own denials of reality that he seems to think he can get away with saying anything as long as he has his “main man” to front for him. The president now hammers in the false litany of a “merger” between Osama bin Laden’s Al Qaeda and what he calls “Al Qaeda in Iraq” as if he were following the Madison Avenue script declaring that “Cingular is now the new AT&T.” He doesn’t seem to know that nearly 40 other groups besides Al Qaeda in Mesopotamia have adopted Al Qaeda’s name or pledged allegiance to Osama bin Laden worldwide since 2003, by the count of the former C.I.A. counterterrorism official Michael Scheuer. They may follow us here well before any insurgents in Iraq do.
On Tuesday — a week after the National Intelligence Estimate warned of the resurgence of bin Laden’s Qaeda in Pakistan — Mr. Bush gave a speech in which he continued to claim that “Al Qaeda in Iraq” makes Iraq the central front in the war on terror. He mentioned Al Qaeda 95 times but Pakistan and Pervez Musharraf not once. Two days later, his own top intelligence officials refused to endorse his premise when appearing before Congress. They are all too familiar with the threats that are building to a shrill pitch this summer.
Should those threats become a reality while America continues to be bogged down in Iraq, this much is certain: It will all be the fault of President Petraeus.

Friday, July 27, 2007

Do You Live in One of the World's 15 Greenest Cities?

By Grist Magazine
Posted on July 27, 2007, Printed on July 27, 2007
This article is reprinted by permission from Grist.

These metropolises aren't literally the greenest places on earth -- they're not necessarily dense with foliage, for one, and some still have a long way to go down the path to sustainability. But all of the cities on this list deserve recognition for making impressive strides toward eco-friendliness, helping their many millions of residents live better, greener lives.
1. Reykjavik, Iceland
Remember the grade-school memory device "Greenland is icy and Iceland is green"? It's truer than ever thanks to progress made by Iceland and its capital city in recent years. Reykjavik has been putting hydrogen buses on its streets; like the rest of the country, its heat and electricity come entirely from renewable geothermal and hydropower sources, and it's determined to become fossil-fuel-free by 2050. The mayor has pledged to make Reykjavik the cleanest city in Europe. Take that, Greenland.
2. Portland, Oregon, U.S.
The City of Roses' approach to urban planning and outdoor spaces has often earned it a spot on lists of the greenest places to live. Portland is the first U.S. city to enact a comprehensive plan to reduce CO2 emissions and has aggressively pushed green building initiatives. It also runs a comprehensive system of light rail, buses, and bike lanes to help keep cars off the roads, and it boasts 92,000 acres of green space and more than 74 miles of hiking, running, and biking trails.
3. Curitiba, Brazil
With citizens riding a bus system hailed as one of the world's best and with municipal parks benefiting from the work of a flock of 30 lawn-trimming sheep, this midsized Brazilian city has become a model for other metropolises. About three-quarters of its residents rely on public transport, and the city boasts over 580 square feet of green space per inhabitant. As a result, according to one survey, 99 percent of Curitibans are happy with their hometown.
4. Malmö, Sweden
Known for its extensive parks and green space, Sweden's third-largest city is a model of sustainable urban development. With the goal of making Malmö an "ekostaden" (eco-city), several neighborhoods have already been transformed using innovative design and are planning to become more socially, environmentally, and economically responsive. Two words, Malmö: organic meatballs.
5. Vancouver, Canada
Its dramatic perch between mountains and sea makes Vancouver a natural draw for nature lovers, and its green accomplishments are nothing to scoff at either. Drawing 90 percent of its power from renewable sources, British Columbia's biggest city has been a leader in hydroelectric power and is now charting a course to use wind, solar, wave, and tidal energy to significantly reduce fossil-fuel use. The metro area boasts 200 parks and over 18 miles of waterfront, and has developed a way-forward-thinking 100-year plan for sustainability. Assuming civilization will last another 100 years? Priceless.
6. Copenhagen, Denmark
With a big offshore wind farm just beyond its coastline and more people on bikes than you can shake a stick at, Copenhagen is a green dream. The city christened a new metro system in 2000 to make public transit more efficient. And it recently won the European Environmental Management Award for cleaning up public waterways and implementing holistic long-term environmental planning. Plus, the pastries? Divine.
7. London, England
When Mayor Ken Livingstone unveiled London's Climate Change Action Plan in February, it was just the latest step in his mission to make his city the world's greenest. Under the plan, London will switch 25 percent of its power to locally generated, more-efficient sources, cut CO2 emissions by 60 percent within the next 20 years, and offer incentives to residents who improve the energy efficiency of their homes. The city has also set stiff taxes on personal transportation to limit congestion in the central city, hitting SUVs heavily and letting electric vehicles and hybrids off scot-free.
8. San Francisco, California, U.S.
Nearly half of all 'Friscans take public transit, walk, or bike each day, and over 17 percent of the city is devoted to parks and green space. San Francisco has also been a leader in green building, with more than 70 projects registered under the U.S. Green Building Council's LEED certification system. In 2001, San Francisco voters approved a $100 million bond initiative to finance solar panels, energy efficiency, and wind turbines for public facilities. The city has also banned non-recyclable plastic bags and plastic kids' toys laced with questionable chemicals. Next thing you know, they'll all be wearing flowers in their hair.
9. Bahía de Caráquez, Ecuador
After it suffered severe damage from natural disasters in the late 1990s, the Bahía de Caráquez government and nongovernmental organizations working in the area forged a plan to rebuild the city to be more sustainable. Declared an "Ecological City" in 1999, it has since developed programs to protect biodiversity, revegetate denuded areas, and control erosion. The city, which is marketing itself as a destination for eco-tourists, has also begun composting organic waste from public markets and households and supporting organic agriculture and aquaculture.
10. Sydney, Australia
The Land Down Under was the first country to put the squeeze on inefficient, old-school light bulbs, but Sydney-dwellers took things a step further in March, hosting a city-wide one-hour blackout to raise awareness about global warming. Add to that their quest for carbon neutrality, innovative food-waste disposal program, and new Green Square, and you've got a metropolis well on its way to becoming the Emerald City of the Southern Hemisphere.
11. Barcelona, Spain
Hailed for its pedestrian-friendliness (37 percent of all trips are taken on foot!), promotion of solar energy, and innovative parking strategies, Barcelona is creating a new vision for the future in Europe. City leaders' urban-regeneration plan also includes poverty reduction and investment in neglected areas, demonstrating a holistic view of sustainability.
12. Bogotá, Colombia
In a city known for crime and slums, one mayor led a crusade against cars that has helped to make Bogotá one of the most accessible and sustainable cities in the Western Hemisphere. Enrique Peñalosa, mayor from 1998 to 2001, used his time in office to create a highly efficient bus transit system, reconstruct sidewalks so pedestrians could get around safely, build more than 180 miles of bike trails, and revitalize 1,200 city green spaces. He restricted car use on city streets during rush hour, cutting peak-hour traffic 40 percent, and raised the gas tax. The city also started an annual "car-free day," and aims to eliminate personal car use during rush hour completely by 2015. Unthinkable!
13. Bangkok, Thailand
Once known for smokestacks, smog, and that unshakeable '80s song, Bangkok has big plans for a brighter future. City Governor Apirak Kosayodhin recently announced a five-year green strategy, which includes efforts to recycle citizens' used cooking oil to make biodiesel, reduce global-warming emissions from vehicles, and make city buildings more efficient. Bangkok has also made notable progress in tackling air pollution over the past decade. Though the city's pollution levels are still higher than some of its big-city Asian counterparts, its progress thus far is impressive.
14. Kampala, Uganda
This capital city is overcoming the challenges faced by many urban areas in developing countries. Originally built on seven hills, Kampala takes pride in its lush surroundings, but it is also plagued by big-city ills of poverty and pollution. Faced with the "problem" of residents farming within city limits, the city passed a set of bylaws supporting urban agriculture that revolutionized not only the local food system, but also the national one, inspiring the Ugandan government to adopt an urban-ag policy of its own. With plans to remove commuter taxis from the streets, establish a traffic-congestion fee, and introduce a comprehensive bus service, Kampala is on its way to becoming a cleaner, safer, more sustainable place to live.
15. Austin, Texas
Austin is poised to become the No. 1 solar manufacturing center in the U.S., and its hometown utility, Austin Energy, has given the notion of pulling power from the sun a Texas-sized embrace. The city is on its way to meeting 20 percent of its electricity needs through the use of renewables and efficiency by 2020. Austin also devotes 15 percent of its land to parks and other open spaces, boasts 32 miles of bike trails, and has an ambitious smart-growth initiative, making it a happy green nook in what's widely perceived as a not-so-green state. To put it mildly.
Honorable mentions:
Chicago, IL, U.S.
Mayor Richard M. Daley (D) is striving to make his hometown "the greenest city in America." There's lots of literal greenery: under his leadership, Chicago has planted 500,000 new trees, invested hundreds of millions of dollars in the revitalization of parks and neighborhoods, and added more than 2 million square feet of rooftop gardens, more than all other U.S. cities combined. And there's plenty of metaphorical greening too: the Windy City has built some of the most eco-friendly municipal buildings in the country, been a pioneer in municipal renewable-energy standards, provided incentives for homeowners to be more energy efficient, and helped low-income families get solar power.
Freiburg, Germany
Home to the famously car-free Vauban neighborhood and a number of eco-transit innovations, Freiburg is a tourist destination with a green soul. The city has also long embraced solar power.
Seattle, WA, U.S.
Mayor Greg Nickels (D) has committed his city to meeting the emission-reduction goals of the Kyoto climate treaty, and inspired more than 590 other U.S. mayors to do the same. True to its name, the Emerald City is also planting trees, building green, and benefiting from biodiesel and hybrid buses.
Quebec City, Canada
Dubbed the most sustainable city in Canada by the Corporate Knights Forum, Quebec wins big points for clean water, good waste management, and bike paths aplenty. C'est magnifique!
© 2007 Independent Media Institute. All rights reserved.

Wednesday, July 25, 2007

Tammy Faye, we're still watching

Tammy Faye Bakker Messner was such a genius at come-into-my-living-room TV that she spent even her final moments working the camera.
By Fenton Bailey and Randy Barbato
Jul. 25, 2007
With Tammy Faye it was always about the eyes.

The very first thing Tammy did on the very first day of filming "The Eyes of Tammy Faye," the documentary we made about her, was show us her dead mother's glasses on her coffee table. She liked to keep them around, she said, to remind her how she saw things. And then, with the cameras rolling, she put them on.
In that moment we knew -- as did she -- that this would be the opening of our film. It was such an arresting, almost ghoulish thing to do, to put on your dead mother's glasses. Yet it reminded us that we all have different points of view because we are all looking through different lenses. And no matter how differently we see things, no matter how we may judge people accordingly, it's all temporary anyway.
In the opening of the movie "Crash," there's some mournful voice-over about how our lives are isolated by glass: car windscreens, television screens, computer screens. Rather than seeing this as a prescription for melancholy and loneliness, Tammy saw the screen as an opportunity to make a connection and determined to put herself in front of the eye of the camera.
Amazing really, because Tammy didn't have a lot to work with. She didn't have the genes of stardom. She grew up in Nowheresville, and Hollywood was definitely not calling. She was tiny. OK, so Hollywood could always forgive the vertically challenged as long as they had the eyes. But Tammy hardly had any eyes at all, just two tiny raisins bordered with some stumpy eyelashes.
Almost half a century before club kid and "Freak Show" author James St. James pronounced, "If you've got a hump back throw a little glitter on it, honey," she did just that; with false eyelashes glued on and mascara tattooed on, Tammy made her eyes pop. Years before Andy got around to it, Tammy painted her face like Warhol's Marilyn, and the impact was no less memorable. She gave herself a pair of abstract sunglasses that would make Elton John blush, putting bold quotation marks around the most powerful weapons she had.
It was a look that was perfect for television, an emergent trashy medium that no one really respected back then. "I still am big -- it's the pictures that got small," Gloria Swanson moaned at the end of "Sunset Boulevard." What spelled disaster for the dinosaurs of Hollywood was good news for tiny Tammy, who -- along with her sweetheart husband, Jim Bakker -- hijacked the medium of television in its infancy.
They pioneered the kind of come-into-our-living-room cozy casting that has become the staple of morning TV. And together they spoke the language of television so fluently, so effortlessly and so incessantly that suddenly they had a hugely successful ministry on their hands.
The televangelism thing gets a lot of people worked up: Poor widows sending in money they can't afford to spend in return for ... what? The fact is that television has always been a completely commercial medium, and anyone who thinks there is a safe divide -- or any divide at all -- between commercials and content needs his or her head examined. This actually makes home shopping and televangelism the purest and most honest forms of the medium: They just want your money.
But Jim and Tammy were happy to give something in return. While most televangelists used divisiveness and fear as their pitch (if you don't send money now you'll burn in hell and be overrun by commies and fags), Jim and Tammy made it all seem like one big house party. And it wasn't just a hypocritical construct limited to the television studio. They extended the experience by building around the studio an actual theme park and holiday camp. Instead of burning in the fires of hell, you could take a ride down the water flume. And everyone was welcome -- even the commies and the gays -- to come on down. People flocked in droves. Do not underestimate how revolutionary this "come one, come all" approach was among Christian circles. It was heresy.
But it was such fun, and Jim and Tammy Faye lived the high life. Even if their supposed excesses seem a little paltry compared with those of today's rap stars and hedge fund hogs, the furs and gold-plated taps did not go unnoticed. This was the big '80s -- our first brush with bling, our first contact with cashmere as the fabric of our lives -- so there was a need for expiation, for a scapegoat. Wall Street found its righteous zealot in Rudolph Giuliani, who puffed an insider-trading scandal into an overblown crusade to build his political career. And the Christian community had Jerry Falwell, who cunningly managed to steal Jim and Tammy's ministry right out from under their noses. In the end Jim and Tammy lost everything. Jim went to prison on fraud and conspiracy charges and Tammy went into exile in the desert.
But all that is really just a sideshow when it comes to understanding Tammy Faye's legacy. She loved to touch people and, in the age of mass media, she knew that the best way to do that was through the lens of a camera.
In "The Jim J. and Tammy Faye Show" (a sadly short-lived syndicated show), "The Eyes of Tammy Faye," "The Surreal Life," "Tammy Faye: Death Defying" (a film Tammy asked us to make in 2005 documenting her battle with cancer that aired on We) and "One Punk Under God" (the TV series we produced about Tammy Faye's son, Jay), she continued to speak the language of television with a virtuosity that was quite simply pure genius.
Heroically, she kept on doing it right up until hours before her death. Even with a face ravaged by cancer, she called Larry King and asked him to interview her. She looked dreadful. But she still had the eyes, not because the lashes were super-glued and the mascara tattooed, but because she always knew it was all about the eyes. And she knew -- as we all should know by now -- that the most important eye of all is the eye of the camera lens.
Sundance Channel will air a marathon of "One Punk Under God" on Thursday, July 26.
-- By Fenton Bailey and Randy Barbato

Joseph LeDoux's heavy mental

The neuroscientist explains how music, emotion and memory shape our identities -- and why he has donned a Stratocaster to keep the brain rollin' all night long.
By Jonathan Cott and Karen Rester
Jul. 25, 2007

In May at Madison Square Garden, an unknown, unsigned rock band began to play. It was only its fourth show since forming in the fall of 2006. Granted, its last show had sold out, but that was in the basement of the Cornelia Street Cafe in New York, which holds about 30 people. The Amygdaloids were staring at a crowd of 10,000, a big leap for a band that had yet to release, well, anything. Then something phenomenal happened. In the midst of its signature song, "All in a Nut," an inspired kid in the audience began leaping out of his seat, igniting a wave that went around the entire 200,000-square-foot arena. The band members were stunned; they had never seen anything like it.
All right, the occasion wasn't a concert but a graduation ceremony for 10,000 students in the New York University College of Arts and Science. Still, this was no ordinary club band hired to entertain the students. The Amygdaloids are made up of four scientists from NYU whose chief singer and songwriter is Joseph LeDoux. Earlier in the evening, LeDoux had given the faculty address. But one must ask: What kind of neuroscience professor invokes Tennessee Williams and surrealist filmmaker Luis Buñuel to send a graduating class out into the world, then picks up his white Stratocaster and launches into a rock ballad about the amygdala, that almond-shaped "nut" in the brain that processes primitive emotions like fear, love, hate and anger: "Why do we feel so afraid/ Don't have to look very far/ Don't get stuck in a rut/ Don't have to look very hard/ It's all in a nut, in your brain."
A much-lauded pioneer in his field, the 58-year-old LeDoux, who is the Henry and Lucy Moses Professor of Science at NYU's Center for Neural Science as well as director of the Center for the Neuroscience of Fear and Anxiety, is perhaps used to being greeted by scientists, students and brain buffs alike with, as the New York Times put it, "enthusiasm usually reserved for rock stars."
Back in the '70s, when neuroscientists considered emotion too subjective for serious research, LeDoux made it the focus of his work, tracing the pathway in a rat's brain that leads to the fear response. The implications of this finding launched his career. His two highly praised books, "The Emotional Brain" and "Synaptic Self," look to the amygdala and to the brain's synapses, respectively, to understand how neural processes shape who we are, what we think, feel and remember. More specifically, LeDoux asks how the brain creates and remembers emotion, whether synaptic changes determine mental illness and how traumatic memories can be controlled and even erased.
Which prompts the question: What is LeDoux doing with a Stratocaster, anyway? Salon recently sat down with LeDoux in his NYU office, where he spoke to us as enthusiastically about the Amygdaloids and his love of music as he did about the amygdala itself and the extraordinary ways memory and emotion shape our identities.
What got you into using music to convey your ideas about the brain?
To be perfectly honest, I just love music, and when we wrote our first song, "Mind Body Problem," last November, I thought this could be our genre, especially after Newsday dubbed us "heavy mental." I think using music to teach students about the brain has a lot of potential. But right now we're just having a lot of fun playing.
How did all of you scientists find time to leave your labs and start jamming together?
Tyler Volk and I met because we both wrote science books for lay readers. Over dinner we discovered we both played guitar, so we started jamming together, mostly playing '60s classic rock and rock blues. We'd get together every month or so at one of our places for a couple of hours of guitar and then go to dinner, where the discussion often drifted into fantasies about having a band between discussions of the self and consciousness. When the holidays came around in 2005, we played some of our favorites at my lab party, like "Crossroads," "All Along the Watchtower" and "It's All Over Now, Baby Blue." After the party, Daniela Schiller, a postdoc who works with me, came up and said she plays drums and would love to jam with us. In the summer of 2006, I got an invitation to speak at a science event at a bar in Brooklyn [the Secret Science Club]. They said there would be some entertainment afterward and I volunteered the three of us. At that point we felt we needed a bass player. It turned out that Daniela's research assistant, Nina Curely, had been taking bass lessons, so we invited her to join us. We practiced a few times and on November 1st the Amygdaloids had their first show. We're excited that our first CD will be released in the fall of 2007. [Listen to four songs here.]
What's the earliest rock 'n' roll song that you remember?
In my faculty address for NYU, I said I often think about the past in terms of the songs I was listening to at the time; it's how I categorize my life episodes. The earliest song I remember is "Love Me Tender."
How old were you at the time?
I was probably 7. There was a little diner a block from my parents' butcher store. I was in love with the waitress and she was in love with this tough guy who wore a leather jacket and rode a motorcycle in our town. And she used to sing "Love Me Tender" all the time. I went there every day and ordered a Coke and sort of stared at her.
After college you had a brief stint in a group called Cerebellum and the Medullas. What made you choose such a brainy name?
I liked the name. I remembered it from high school biology.
You weren't studying neuroscience at the time?
No, I was doing marketing. I didn't know anything about the brain. That was just out of the blue.
Daniel Levitin, who runs the Laboratory for Music Perception, Cognition and Expertise at McGill University, and who is the author of the book "This Is Your Brain on Music," has a rock group called the Diminished Faculties, composed entirely of professors and students at McGill. What is it with you neuroscientists and rock 'n' roll?
I don't know; they're coming out of the woodwork. We accidentally received an e-mail from someone in that group. It was after an article about us appeared in the New York Times. The e-mail said something like, "All right, we gotta get going. Look at what these guys are doing! We need to invite them up here and show them who's boss."
Most memories degrade and distort with time; why are music memories so sharply encoded?
I know from my own experience that it's a very powerful way to remember things. I've found that in the short time we've been playing music we can convey the gist of a concept with a three-minute song that we'd need a chapter for in a book and many, many hours of painstaking work to get across. Then people read it and they forget everything. But you can just sing the line, "An emotional brain is a hard thing to tame," which captures the essence of the concept, and people remember it.
It's very hard to erase memories of a piece of music, isn't it, unless a person suffers something like retrograde memory loss. Then I suspect the person wouldn't remember the songs from the period he or she has lost.
It would be interesting to hook that person up to some kind of physiological machine and see if they had autonomic responses to those songs even though they don't consciously remember them. There's a chance that they're in your brain implicitly but you just can't access them.
Cognitive scientist Steven Pinker called music "auditory cheesecake," meaning music is pleasurable but of no evolutionary importance. If music isn't necessary for human survival, why has it appeared in every culture we know about?
There's a lot of important rhythmic activity in nature, whether it's the circadian rhythm of the daily cycle, the seasonal rhythm, the monthly rhythms. Just look at the human body. I mean, so much of what a body does is based on rhythm. Take the heart rhythms, the brain rhythms. Music is one expression of biological rhythm. That's why pleasing music has a kind of symmetry and rhythm to it that discordant music doesn't. I'm not sure he's right about music not having evolutionary importance.
Maybe the secret to music's power -- its ability to trigger memory and emotion in the listener -- is the fact that emotion and memory are what inspire it.
Yes, absolutely. Love songs, hate songs, blues. It's all about those big experiences in life. When words matter, it's because the listener can share either the pain or the joy with the singer. So I think that's true. A lot of what inspires music is emotion and your memory of those emotions and your anticipation about future emotions.
Could there be something like mirror neurons at play, which fire both when a person observes an action and when they perform the action themselves?
That's an interesting idea. I think that's probably true, especially at a concert. You're watching the musician on stage and your brain is locking in with what he or she is doing, and so I think there is probably a lot of mirroring going on.
So if a musician's neural pattern is transferred to the listener's brain, perhaps the reason it's so strongly encoded is that by listening to it repeatedly the pathway becomes stronger and stronger, just as with a memory that you recall repeatedly.
I think it's more than repetition. I can imagine at a concert where you've got the musician up onstage, and there's a lot of intensity and the music is loud and driving and the crowd is swaying and the guy is dancing around onstage. There's a lot of stuff going on. Emotional upheaval like that is very good at storing memories. There's a very famous study from Columbia in the '60s, where they took people and gave them a shot of adrenaline, which revved them up, and then put them into a room of sad people, happy people or neutral people. If you had the injection you came out feeling the mood of the room you were in. Revving you up like that and putting you into a particular context creates emotions that are appropriate to that context. Your memories will automatically be stored more strongly because of the emotional arousal.
Researchers from the Montreal Neurological Institute took PET scans of musicians' brains while they listened to pieces of music that "turned them on." Music activated similar neural systems of reward and emotion as those stimulated by sex, food and addictive drugs. It's amazing that all of these things press the same button -- drugs, sex, music.
But it has to be music that you like; it's a reward. There are certain things in the world that are rewarding, like food and sex, and then there are things we attach to those rewards through experiences that can become rewards in themselves -- what we call conditioned or learned rewards. If you have a positive experience and a song is playing, then that positive experience attaches to that song and the song itself becomes a reward.
You write about how our brain synapses change through experience, what is called synaptic plasticity. And, indeed, research continues to demonstrate how amazingly plastic the brain is. Even as our cognitive abilities like memory degenerate over time, we can strengthen them through brain exercises. What do you envision happening in the future as scientists learn how to more expertly manipulate this plasticity?
We manipulate plasticity all the time. Each time you go to a nice restaurant for dinner to celebrate a special occasion, a birthday or anniversary, you are creating a situation in which the memory of the event will not be ordinary and fleeting. Much of psychotherapy is about plasticity -- getting patients to learn new ways to cope with challenges. But then there's also drugs. Lots of companies are working on memory enhancers, mainly to be used to help people with memory disorders. But drugs already exist that can enhance memory formation, and these are being used to help people with phobias learn fear reduction through exposure to fear-arousing stimuli. There are much-publicized debates about this coming out of the new field of neuroethics, about drugs that enhance learning and memory, since obviously such drugs might also be used to improve "normal" memory.
Speaking of memory, what did you think of "Eternal Sunshine of the Spotless Mind"?
I've said they ripped off our research.
I didn't really mean that in a negative way. We published a study in 2000 on this exact topic, which started this whole memory-erasure field. The film, which came out in 2004, described exactly what we were doing. We would activate a memory and then zap it. We were zapping it with a chemical, a shot of anisomycin; they zapped it with a machine.
Did they ever contact you?
Someone ran into Michel Gondry [the film's director] and asked him whether he was aware of the similarity to our study and he said that it had influenced him. I mean, our work is out there, it's in the world.
You pointed out one way of reconciling science and faith in "Synaptic Self," writing that "a spiritual view of the self doesn't have to be completely incompatible with a biological one," because even the nonmaterial soul depends on brain functions. Can you elaborate on this? What are the implications of this for you personally?
The idea that a spiritual view is not incompatible with a synaptic view of the self was important to me when I wrote "Synaptic Self" because I wanted to reassure anyone who got to that point in the book, who might be having reservations for reasons of faith, that they should read on. That is, I wasn't using brain research to try to dismantle faith. I had something more inclusive in mind. Personally, I'm somewhere between an atheist and an agnostic, so it wasn't about a deep internal struggle. But do I really know this? Probably not. Many of our motivations are unconscious. Given that I was quite religious as a young boy, maybe I do have some deeply internalized struggle going on.
You've said the implicit or unconscious aspects of the self play an essential role in shaping who we are and explaining why we do what we do. You've also said, "An understanding of the mystery of personality crucially depends on figuring out the unconscious functions of the brain." How is personality shaped at the unconscious level? What's something that really surprised you about this process?
I've long believed that much of who we are is due to unconscious processes. This goes back to my Ph.D. thesis work in split-brain patients, done with my advisor, Mike Gazzaniga. In these patients, the two sides of the brain are separated to control epilepsy. From the point of view of each side, behaviors produced by the other side are behaviors that are unconsciously produced. So we did lots of work trying to understand how the left hemisphere, which has language and can be communicated with, dealt with behaviors produced by the right hemisphere. The surprising thing was how seamlessly the left hemisphere adopted these behaviors as its own, as if it had produced them.
This led Gazzaniga and me to propose that much of human behavior is like this -- produced by unconscious systems. Consciousness then makes sense of it all by telling a story. Gazzaniga went on to pursue the nature of the interpreter functions of consciousness, and I turned to trying to understand the unconscious control of emotional behavior. That's how I got interested in the unconscious aspects of mind. If you think about it, the enduring features of mind and behavior that define our personality are not things we consciously control. We are simply that way, and it can be hard to change those things because they are unconsciously controlled.
You've said the big question that brain research should be asking is: What makes us who we are? As biologists turn up evidence that animals can exhibit emotions and patterns of cognition that were once considered to be strictly human, Descartes' dictum, "I think, therefore I am," loses its force. Do you agree? If so, how does this affect your approach to "the big brain question"?
I don't buy this. Sure, animals have emotional behavior and can solve problems using cognitive capacities. However, a key thing about human cognition is the way language shapes it. Language allows us to classify and categorize the world instantaneously on the basis of words. A single word can imply so much -- think of how much information is carried by the contrast between the words America and Islam. And syntax allows us to rapidly conceptualize who is doing what to whom and to convey this to others. Also, the areas of the brain most involved in human cognition are in the prefrontal cortex, which has areas that are more elaborate in humans than other primates and nonexistent in other mammals. I do agree that much of the human brain can be understood in terms of animal brains -- very basic emotions like fear are a good example. But when it comes to higher cognition I believe the human brain stands out.
You and others have indicated that solving the origin of the self is a binding problem. You explain that there are a number of brain systems running in parallel and these systems bind together to give rise to what we perceive as the self, as a "coherent personality." How does this binding happen? And is the self really just an unexpected byproduct of synaptic plasticity? In other words, are the things humans value most in the world -- thought, creativity, beliefs, love, happiness, family -- are these just the happy accidents of an evolving brain?
The fact is that there are many different systems in the brain -- perceptual, emotional, motivational, cognitive and so on. And within each of these broad categories there are lots of divisions. All of these run in parallel. Neuroscience has learned a tremendous amount about how systems and brain areas work. But our self, our personality, is not just the sum total of our brain systems. Our self can be thought of as a particular configuration of functional activity occurring in many systems at once. These configurations are determined by our genetically based wiring and by the experiences we have as we go through life. When it comes to mental life and behavior, nature and nurture are not two different things but two ways of doing the same thing: wiring our synapses.
For example, each emotional experience you have will subtly change the wiring in many systems. If you have lots of fearful, stressful experiences, synapses in these various systems will be wired with a fear bias. Lots of positive experiences will have different biasing effects. These biases may be especially important in early life when the brain's wiring is up for grabs. Once developed, the biases will make one susceptible to certain kinds of thoughts or feelings, or make one seek out certain kinds of experiences. It is well known that people with phobias are especially sensitive to environmental stimuli that are related to their fears, and depressed people seize on negative information.
I don't think that the things we cherish are accidents of an evolving brain, as you put it in the question, but instead are a consequence of the way genes and experiences wire our brains. The good news is that each experience rewires the brain. We have the capacity to change, but the more we change earlier, the easier it may be to shift the bias. The subtitle of "Synaptic Self" is: "How Our Brains Become Who We Are." This was meant as a way of emphasizing the importance of learning, since there has been a big emphasis lately in neuroscience on genes. The world will be a better place if we adhere to the idea that change is possible. I wish "The Sopranos" hadn't ended so fatalistically.
In "Synaptic Self" you've come up with some interesting ways of thinking about consciousness. What do you think are the key brain mechanisms that underlie consciousness? And how close are we to the final answer?
Consciousness is important and immensely interesting. Lots of progress has been made conceptually and empirically. My own view, which is shared by many, is that it has something to do with the unique human capacity for language and also with the newly evolved prefrontal cortex and its capacity for working memory. These ideas are described in my books, so I won't go into detail here. The key thing for me is that even if we solved the problem of consciousness we wouldn't understand how our brains make us who we are. We wouldn't know why people develop mental disorders or start wars. We wouldn't know about the fundamental motives that drive human behavior. Even very complex motives like the desire to succeed or to obtain power are not simple reflections of consciousness. Dick Cheney probably thinks he's a good guy.
Where is your work headed?
One big question is how the brain controls not just emotional reactions but actions. For instance, with the fear response you can't stay frozen in fear. Eventually you have to take action -- fight back, run away. This transition from reaction to action is a key to the brain mechanisms underlying something that therapists have known for a while -- that active coping strategies are more beneficial than passive coping. I'm also very intrigued with the problem of emotional development, with the idea that we build up emotional biases through experience and that early life is especially important. If true, it would mean that we should be teaching kids ways to control stress at an early age, maybe even as part of their educational training. Because of my belief in the importance of early emotions, I've made emotional development the main focus of the new Emotional Brain Institute I've started.
What will the institute be dealing with?
Together with Harold Koplewicz, a professor of child and adolescent psychiatry at NYU, and with the support of NYU's administration, I'm building an institute dedicated to the study of emotions, especially fear and anxiety, in young brains -- in both animals and children. We want to start teaching kids how to regulate their emotions. I also want to make emotion a university-wide integrative topic at NYU that can unify the arts and humanities (literature, history, visual and performing arts) and the applied disciplines (business, law) with the sciences. These are big goals for the Emotional Brain Institute, but I think we can do it.

The Key To Good Health That No One Is Talking About

By Brydie Ragan, YES! Magazine

The public generally believes that poor lifestyle choices, faulty genes and infectious agents are the major factors that give rise to illness. Here's the rest of the story.

I recently saw a billboard for an employment service that said, "If you think cigarette smoking is bad for your health, try a dead-end job." This warning may not just be an advertising quip: public health research now tells us that lower socio-economic status may be more harmful to health than risky personal habits, such as smoking or eating junk food.
In 1967, British epidemiologist Michael Marmot began to study the relationship between poverty and health. He showed that each step up or down the socio-economic ladder correlates with increasing or decreasing health.
Over time, research linking health and wealth became more nuanced. It turns out that "what matters in determining mortality and health in a society is less the overall wealth of that society and more how evenly wealth is distributed. The more equally wealth is distributed, the better the health of that society," according to the editors of the April 20, 1996 issue of the British Medical Journal. In that issue, American epidemiologist George Kaplan and his colleagues showed that the disparity of income in each of the individual U.S. states, rather than the average income per state, predicted the death rate.
"The People's Epidemiologists," an article in the March/April 2006 issue of Harvard Magazine, takes the analysis a step further. Fundamental social forces such as "poverty, discrimination, stressful jobs, marketing-driven global food companies, substandard housing, dangerous neighborhoods and so on" actually cause individuals to become ill, according to the studies cited in the article. Nancy Krieger, the epidemiologist featured in the article, has shown that poverty and other social determinants are as formidable as hostile microbes or personal habits when it comes to making us sick. This may seem obvious, but it is a revolutionary idea: the public generally believes that poor lifestyle choices, faulty genes, infectious agents, and poisons are the major factors that give rise to illness.
Krieger is one of many prominent researchers making connections between health and inequality. Michael Marmot recently explained in his book, The Status Syndrome, that the experience of inequality impacts health, making the perception of our place in the social hierarchy an important factor. According to Harvard's Ichiro Kawachi, the distribution of wealth in the United States has become an "important public health problem." The claims of Kawachi and his colleagues move public health firmly into the political arena, where some people don't think it belongs. But the links between socio-economic status and health are so compelling that public health researchers are beginning to suggest economic and political remedies.
Richard Wilkinson, an epidemiologist at the University of Nottingham, points out that we are not fated to live in stressful dominance hierarchies that make us sick -- we can choose to create more egalitarian societies. In his book, The Impact of Inequality, Wilkinson suggests that employee ownership may provide a path toward greater equality and consequently better health. The University of Washington's Stephen Bezruchka, another leading researcher on status and health, also reminds us that we can choose. He encourages us to participate in our democracy to effect change. In a 2003 lecture he said that "working together and organizing is our hope."
It is always true that we have choices, but some conditions embolden us to create the future while others invite powerlessness. When it comes to health care these days, Americans are reluctant to act because we are full of fear. We are afraid: afraid because we have no health care insurance, afraid of losing our health care insurance if we have it, or afraid that the insurance we have will not cover our health care expenses. But in the shadow of those fears is an even greater fear -- the fear of poverty -- which can either cause or be caused by illness.
In the United States we have all the resources we need to create a new picture: an abundance of talent, ideas, intelligence, and material wealth. We can decide to create a society that not only includes guaranteed health care but also replaces our crushing climate of fear with a creative culture of care. As Wilkinson and Bezruchka suggest, we can choose to work for better health by working for greater equality.