Again, in 3-part harmony–it’s not about “the science”

Let me be straight.  I don’t believe in conspiracy theories.* There’s no Bacon-gate.  No Cowspiracy.  No Salami-mafia out to suppress sandwich meat.  But, as the students in my Introduction to Science, Technology, and Society course will tell you, there are professional interests (only one of which is funding) and careerism.  There is also the human desire to simply not be wrong.  In nutrition, this desire is personal.

(If I were queen of the world, every research article published about nutrition and chronic disease would list, in addition to “author affiliations” and “conflicts of interest,” what each researcher typically eats for breakfast every day.  You’d find out a lot more about “affiliations” and “interests” from that information than from anything else.)

And so there is this:  Meat and fat intake and colorectal cancer risk: A pooled analysis of 14 prospective studies.  It’s an abstract from the Proceedings of the American Association for Cancer Research, from back in 2004.  It found:

Greater intake of either red meat (excluding processed meat) or processed meat was not related to colorectal cancer risk.

Typically, such abstracts are presented at a conference, then later published.  This one never made it to publication.  We don’t know why.

Trevor Butterworth does some speculating about the “whys” here:

When contacted, Smith-Warner said they wanted to add a few more studies before publishing their results next year. But the fact is that their colorectal cancer study had more subjects than many of the other studies published by the Pooling Project – and the four-year delay in publication cannot but raise the question of whether their results just didn’t fit in with the nutritional beliefs of Harvard’s School of Public Health, one of whose senior figures – Dr. Walter Willett – has long recommended limiting red meat and who, coincidentally, is a board member of the World Cancer Research Fund.

It’s not the first time a study that contradicts the status quo in nutrition never made it to publication.  This study also never got past conference proceedings, though there was an article about it in the Harvard Gazette and Walter Willett (who certainly seems to practice what he preaches) has his name on the abstract:

Greene, P., Willett, W., Devecis, J., and Skaf, A. (2003). Pilot 12-Week Feeding Weight-Loss Comparison: Low-Fat vs Low-Carbohydrate (Ketogenic) Diets (abstract presented at The North American Association for the Study of Obesity Annual Meeting 2003), Obesity Research, 11S, 95-OR.

Greene’s study found that a higher calorie low-carb diet resulted in more weight loss than a lower-calorie low-fat diet.  I’m not arguing about what this study might prove about diets in general, so back off, all you folks out there foaming at the mouth to pick it apart.  Truth is, you can’t really critique it, because it never got published.

Another study that almost didn’t make it out of the gate concluded that:

Our findings do not support the hypothesis that a diet consistent with the 2005 DGA benefits long-term weight maintenance in American young adults.

In a nutshell, Daisy Zamora found that black participants with a higher Diet Quality Index (according to the Dietary Guidelines for Americans) gained more weight over time than whites (with either a higher or lower Diet Quality Index).  More surprisingly, these black participants also gained more weight over time than blacks with a lower Diet Quality Index.

Again, I’m not arguing the strengths or shortcomings of this research. The part of the story that matters here is that Zamora worked on this study as part of her PhD research at UNC-Chapel Hill.  She found a tremendous amount of resistance to her findings, to the extent that she was counseled to “redo” her work without examining racial differences.

I’ve been hip-checked into the rails by the politics of nutrition science myself.

I guess that’s why, to some extent, I feel that all of the talk about “good” science vs. “bad” science in nutrition is misplaced.  How do we even know that the part of “the science” we get to see fairly represents the work that has been done when the whole process is so highly politicized and ideological?  How many grad students slogging away in labs or poking away at databases find things that never make it to publication because it would compromise the prevailing paradigm and their advisor’s funding (and don’t have the huevos that Zamora had to get her findings published anyway)? I feel pretty certain this doesn’t just happen in nutrition, but in nutrition it really matters to each of us, every day–and even more so to those who rely on government programs for food.

How did nutrition science become so politicized?  Dietary Guidelines, I’m looking at you.  When policy “chooses” a winner and a loser in a scientific controversy, things change. Science gets done differently. And when policy (dressed up as science) chooses a side in what we should/should not eat in order to prevent ostensibly preventable things like obesity and disease, well, all hell breaks loose. When we act like we “know” what foods cause/prevent disease, good health becomes entirely the responsibility of the individual.  If you get fat or sick–no matter what else is going on in your world or in your body–it’s your own damn fault.

How do we un-politicize nutrition science? This article from Daniel Sarewitz, “Science can’t solve it,” offers some clues.  Although he’s focusing on new biotechnologies that have out-run our ethical frameworks for dealing with them, these remarks could just as well apply to diet-chronic disease science.  He calls for discussions and deliberations that:

… could address questions about what is acceptable and what isn’t, about appropriate governance frameworks for research, and about the relative priority of different lines of study given ongoing and inevitable uncertainties and disagreements about risks and benefits.

If there’s one thing we’ve got in diet-chronic disease science, it is “ongoing and inevitable uncertainties.”  It’s highly unlikely that science is going to solve those uncertainties anytime soon.  As for ethical frameworks, we have never given serious consideration to the ethical implications–not to mention the outright absurdity–of subjecting everyone in our diverse population to a single dietary prescription designed to prevent all of the major chronic diseases (none of which have ever been established as primarily nutritional in nature).

Until we get to these kinds of discussions, the creators of the 2015 Dietary Guidelines ought to listen to what Paul Marantz had to say back in 2010:

 When the evidence is murky, public health officials may best be served by exercising restraint, which is reflected by making no recommendation at all.

And when they don’t (cuz who can resist telling all those stupid Americans how to eat?), at the very least, we’ll all get a little smarter about “the science.”  As @Ted_Underwood put it on Twitter:

A stubborn love of bacon just taught Americans the diff. between p-values & effect size better than 100 stats courses could.

Works for me.
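Underwood’s distinction is worth making concrete. Here’s a quick simulation (a hypothetical sketch, not data from any study mentioned here) of how a huge sample can make a negligible difference “statistically significant”:

```python
import math
import random

# Hypothetical sketch: two groups whose true means differ by a trivial
# 0.5 units (on a scale where the spread is 15 units), but with a huge n.
random.seed(0)
n = 100_000
a = [random.gauss(100.0, 15.0) for _ in range(n)]
b = [random.gauss(100.5, 15.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

diff = mean(b) - mean(a)
pooled_sd = math.sqrt((sd(a) ** 2 + sd(b) ** 2) / 2)

cohens_d = diff / pooled_sd                     # effect size: tiny (~0.03)
t_stat = diff / (pooled_sd * math.sqrt(2 / n))  # t statistic: huge, so p is tiny

print(f"effect size d = {cohens_d:.3f}, t = {t_stat:.1f}")
```

With n this large, the t statistic lands far past any conventional significance threshold even though d hovers around 0.03, an effect nobody would ever notice at the dinner table. A p-value tells you a difference is probably not chance; an effect size tells you whether it’s big enough to matter.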


Many thanks to Dr. Sarah Hallberg, without whom it would have taken me another 5 years to stumble across some of these articles.

*Run one PTA meeting and try to get a half-dozen fairly intelligent, well-educated adults to coordinate plans for a yard sale, and you’ll see what I mean.  We can’t agree on whether used children’s books should be 50 cents or $1–figuring out whether to ruin the health of Americans by buying off the media or silencing the scientists would be beyond any possible reckoning.

Dietary Guidelines for Americans: We don’t need no stinkin’ science

I know, I know. I never post. I never call. I don’t bring you flowers. It’s a wonder we’re still together. I have the usual list of excuses:


But before I disappear off the face of the interwebz once again, I thought I’d share with you a quickie post on the science behind our current Dietary Guidelines. Even as we speak, the USDA and DHHS are busy working on the creation of the new 2015 Dietary Guidelines for Americans, which are shaping up to be the radically conservative documents we count on them to be.

For just this purpose, the USDA has set up a very large and impressive database called the Nutrition Evidence Library (NEL), where it conducts “systematic reviews to inform Federal nutrition policy and programs.” NEL staff collaborate with stakeholders and leading scientists using state-of-the-art methodology to objectively review, evaluate, and synthesize research to answer important diet-related questions in a manner that allows them to reach a conclusion that they’ve previously determined is the one they want.

It’s a handy skill to master. Here’s how it’s done.

The NEL question:

What is the effect of saturated fat intake on increased risk of cardiovascular disease or type 2 diabetes?

In the NEL, they break the evidence up into “cardiovascular” and “diabetes” so I’ll do the same, which means we are really asking: What is the effect of saturated fat (SFA) intake on increased risk of cardiovascular disease?

Spoiler alert–here’s the answer: “Strong evidence” indicates that we should reduce our intake of saturated fat (from whole foods like eggs, meat, whole milk, and butter) in order to reduce risk of heart disease. As Gomer Pyle would say, “SUR-PRIZE, SUR-PRIZE.”

Aaaaaaaand . . . here’s the evidence:

The 8 studies rated “positive quality” are in blue; the 4 “neutral quality” studies are in gray. The NEL ranks the studies as positive and neutral (less than positive?), but treats them all the same in the review. Fine. Whateverz.

According to the exclusion criteria for this question, any study with a dropout rate of more than 20% should be eliminated from the review. These 4 studies have dropout rates of more than 20%. They should have been excluded. They weren’t, so we’ll exclude them now.

Also, according to NEL exclusion criteria for this question, any studies that substituted fat with carbohydrate or protein, instead of comparing types of fat, should be excluded. Furtado et al 2008 does not address the question of varying levels of saturated fat in the diet. In fact, saturated fat levels were held constant–at 6% of calories–for each experimental diet group. So, let’s just exclude this study too.
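Applying exclusion criteria like these is, mechanically, just a filter. Here’s a toy illustration (every study record below is invented; these are not the NEL’s actual data):

```python
# Invented records, for illustration only. Two criteria, echoing the NEL's
# stated rules: drop studies with >20% dropout, and drop studies that swapped
# fat for carbohydrate/protein instead of comparing types of fat.
studies = [
    {"id": "A", "dropout": 0.05, "compares_fat_types": True},
    {"id": "B", "dropout": 0.32, "compares_fat_types": True},   # fails: dropout
    {"id": "C", "dropout": 0.10, "compares_fat_types": False},  # fails: design
    {"id": "D", "dropout": 0.25, "compares_fat_types": False},  # fails: both
    {"id": "E", "dropout": 0.08, "compares_fat_types": True},
]

eligible = [
    s for s in studies
    if s["dropout"] <= 0.20 and s["compares_fat_types"]
]
print([s["id"] for s in eligible])  # → ['A', 'E']
```

The point of making it this explicit: when the criteria are written down in advance, there’s no judgment call left. A study either meets them or it doesn’t.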

One study–Azadbakht et al 2007–was conducted on teenage subjects with hypercholesterolemia, a hereditary condition that affects about 1% of the population. Since the U.S. Dietary Guidelines are not meant to treat medical conditions and are meant for the entire population, this study should not have been included in the analysis. So let’s take care of that for those NEL folks.


In one study–Buonacorso et al 2007–total cholesterol levels did not change when dietary saturated fat was increased: “Plasma TC [total cholesterol] and triacylglycerol levels were NS [not significantly] changed by the diets, by time (basal vs. final test), or period (fasting vs. post-prandial) according to repeated-measures analysis.” This directly contradicts the conclusion of the NEL. Hmmmm. So let’s toss this study and see what’s left.

In these four studies, higher levels of saturated fat in the diet made some heart disease risk factors get worse, but other risk factors got better. So the overall effect on heart disease risk was mixed or neutral. As a result, these studies do not support the NEL conclusion that saturated fat should be reduced in order to reduce risk of heart disease.


That leaves one lone study. A meta-analysis of eleven observational studies. Seeing as the whole point of a meta-analysis is to combine studies with weak effects to see if you end up with a strong one, if saturated fat was really strongly associated with heart disease, we should see that, right? Right. What this meta-analysis found was that among women over 60, there is no association between saturated fat and coronary events or deaths. Among adult men of any age, there is no association between saturated fat and coronary events or deaths. Only in women under the age of 60 is there a small inverse association between risk of coronary events or deaths and the reduction of saturated fat in the diet. That sounds like it might be bad news—at least for women under 60—but this study also found a positive association between monounsaturated fats—you know, the “good fat,” like you would find in olive oil—and risk of heart disease. If you take the results of this study at face value–which I wouldn’t recommend–then olive oil is as bad for you as butter.
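For readers unfamiliar with the mechanics, here is a minimal sketch of fixed-effect (inverse-variance) pooling, the standard way a meta-analysis combines studies. All numbers are invented, not the eleven studies discussed above; the sketch just shows how several imprecise estimates merge into one estimate more precise than any of them:

```python
import math

# Invented (log relative risk, standard error) pairs for four small studies.
studies = [(0.05, 0.20), (-0.02, 0.25), (0.08, 0.30), (0.01, 0.15)]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so the most precise studies count the most.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled log-RR = {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
```

Note that the pooled standard error comes out smaller than any single study’s, which is exactly why a meta-analysis that still finds nothing is telling.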

So there’s your “strong” evidence for the conclusion that saturated fat increases risk of heart disease.


Just recently, Frank Hu of the 2015 Dietary Guidelines Advisory Committee was asked what we should make of the recent media attention to the idea that saturated fat is not bad for you after all (see this video at 1:06:00). Dr. Hu reassured us that, no, saturated fat still kills. He went on to say that the evidence to prove this, provided primarily by a meta-analysis created by USDA staffers (and we all know how science-y they can be), is MUCH stronger than that used by the 2010 Committee.

Well, all I can say is:  it must be.  Because it certainly couldn’t be any weaker.



As the Calories Churn (Episode 3): The Blame Game

In the previous episode of As the Calories Churn, we explored the differences in food supply/consumption between America in 1970 and America in 2010.

We learned that there were some significant changes in those 40 years. We saw dramatic increases in vegetable oils, grain products, and poultry—the things that the 1977 Dietary Goals and the 1980 Dietary Guidelines told us to increase. We saw decreases in red meat, eggs, butter, and full-fat milk—things that our national dietary recommendations told us to decrease. Mysteriously, what didn’t seem to increase much—or at all—were SoFAS (meaning “Solid Fats and Added Sugars”) which, as far as the 2010 Dietary Guidelines for Americans are concerned, are the primary culprits behind our current health crisis. (“Solid Fats” is a linguistic sleight-of-hand that lumps saturated fat from natural animal sources in with processed partially-hydrogenated vegetable oils and margarines that contain transfats; SoFAS takes the trick a step further, by being not only a dreadful acronym in terms of implying that poor health is caused by sitting on our “sofas,” but by creating an umbrella term for foods that have little in common in terms of structure, biological function or nutrition.)

Around the late 70s or early 80s, there were sudden and rapid changes in America’s food supply and food choices and similar sudden and rapid changes in our health. How these two phenomena are related remains a matter of debate. It doesn’t matter if you’re Marion Nestle and you think the problem is calories or if you’re Gary Taubes and you think the problem is carbohydrate—both of those things increased in our food supply. (Whether or not the problem is fat is an open debate; food availability data points to an increase in added fats and oil, the majority of which are, ironically enough, the “healthy” monounsaturated kind; consumption data points to a leveling off of overall fat intake and a decrease in saturated fat—not a discrepancy I can solve here.) What seems to continue to mystify people is why this change occurred so rapidly at this specific point in our food and health history.

Personally responsible or helplessly victimized?

At one time, it was commonly thought that obesity was a matter of personal responsibility and that our collective sense of willpower took a nosedive in the 80s, but nobody could ever explain quite why. (Perhaps a giant funk swept over the nation after The Muppet Show got cancelled, and we all collectively decided to console ourselves with Little Debbie Snack Cakes and Nickelodeon?) But because this approach is essentially industry-friendly (Hey, says Big Food, we just make the stuff!) and because no one has any explanation for why nearly three-quarters of our population decided to become fat lazy gluttons all at once (my Muppet Show theory notwithstanding) or for the increase of obesity among preschool children (clearly not affected by the Muppet Show’s cancellation), public health pundits and media-appointed experts have decided that obesity is no longer a matter of personal responsibility. Instead the problem is our “obesogenic environment,” created by the Big Bad Fast Processed Fatty Salty Sugary Food Industry.

Even though it is usually understood that a balance between supply and demand creates what happens in the marketplace, Michael Pollan has argued that it is the food industry’s creation of cheap, highly-processed, nutritionally-bogus food that has caused the rapid rise in obesity. If you are a fan of Pollanomics, it seems obvious that the food industry—on a whim?—made a bunch of cheap tasty food, laden with fatsugarsalt, hoping that Americans would come along and eat it. And whaddaya know? They did! Sort of like a Field of Dreams only with Taco-flavored Doritos.

As a result, obesity has become a major public health problem.

Just like it was in 1952.

Helen Lee, in her thought-provoking article The Making of the Obesity Epidemic (it is even longer than one of my blog posts, but well worth the time), describes how our obesity problem looked then:

“It is clear that weight control is a major public health problem,” Dr. Lester Breslow, a leading researcher, warned at the annual meeting of the western branch of the American Public Health Association (APHA). At the national meeting of the APHA later that year, experts called obesity “America’s No. 1 health problem.”

The year was 1952. There was exactly one McDonald’s in all of America, an entire six-pack of Coca-Cola contained fewer ounces of soda than a single Super Big Gulp today, and less than 10 percent of the population was obese.

In the three decades that followed, the number of McDonald’s restaurants would rise to nearly 8,000 in 32 countries around the world, sales of soda pop and junk food would explode — and yet, against the fears and predictions of public health experts, obesity in the United States hardly budged. The adult obesity rate was 13.4 percent in 1960. In 1980, it was 15 percent. If fast food was making us fatter, it wasn’t by very much.

Then, somewhat inexplicably, obesity took off.”

It is this “somewhat inexplicably” that has me awake at night gnashing my teeth.

And what is Government going to do about it?

I wonder how “inexplicable” it would be to Ms. Lee had she put these two things together:

(In case certain peoples have trouble with this concept, I’ll type this very slowly and loudly: I’m not implying that the Dietary Guidelines “caused” the rise in obesity; I am merely illustrating a temporal relationship of interest to me, and perhaps to a few billion other folks. I am also not implying that a particular change in diet “caused” the rise in obesity. My focus is on the widespread and encompassing effects that may have resulted from creating one official definition of “healthy food choices to prevent chronic disease” for the entire population.)

Right now we are hearing calls from every corner for the government to create or reform policies that will rein in industry and “slim down the nation.” Because we’d never tried that before, right?

When smoking was seen as a threat to the health of Americans, the government issued a definitive report outlining the science that found a connection between smoking and risk of chronic disease. Although there are still conspiracy theorists that believe that this has all been a Big Plot to foil the poor widdle tobacco companies, in general, the science was fairly straightforward. Cigarette smoking—amount and duration—is relatively easy to measure, and the associations between smoking and both disease and increased mortality were compelling and large enough that it was difficult to attribute them to methodological flaws.

Notice that Americans didn’t wait around for the tobacco industry to get slapped upside the head by the FDA’s David Kessler in the 1990s. Tobacco use plateaued in the 1950s as scientists began to publicize reports linking smoking and cancer. The decline in smoking in America began in earnest with the release of Smoking and Health: Report of the Advisory Committee to the Surgeon General in 1964. A public health campaign followed that shifted social norms away from considering smoking as an acceptable behavior, and smoking saw its biggest declines before litigation and sanctions against Big Tobacco  happened in the 1990s.

Been there, done that, failed miserably.

In a similar fashion, the 1977 Dietary Goals were the culmination of concerns about obesity that had begun decades before, joined by concerns about heart disease voiced by a vocal minority of scientists led by Ancel Keys. Declines in red meat, butter, whole milk and egg consumption had already begun in response to fears about cholesterol and saturated fat that originated with Keys and the American Heart Association—which used fear of fat and the heart attacks they supposedly caused as a fundraising tactic, especially among businessmen and health professionals, whom they portrayed as especially susceptible to this disease of “successful civilization and high living.”  The escalation of these fears—and declines in intake of animal foods portrayed as especially dangerous—picked up momentum when Senator George McGovern and his Select Senate Committee created the 1977 Dietary Goals for Americans. It was thought that, just as we had “tackled” smoking, we could create a document advising Americans on healthy food choices and compliance would follow. But this issue was a lot less straightforward.

To begin with, when smoking was at its peak, only around 40% of the population smoked. On the other hand, we expect that approximately 100% of the population eats.

In addition, the anti-smoking campaigns of the 1960s and 1970s built on a long tradition of public health messages—originating with the Temperance movement—that associated smoking with dirty habits, loose living, and moral decay. It was going to be much harder to fully convince Americans that traditional foods typically associated with robust good health, foods that the US government thought were so nutritionally important that in the recent past they had been “saved” for the troops, were now suspect and to be avoided.

Where the American public had once been told to save “wheat, meat, and fats” for the soldiers, they now had to be convinced to separate the “wheat” from the “meat and fats” and believe that one was okay and the others were not.

To do this, public health leaders and policy makers turned to science, hoping to use it just as it had been used in anti-smoking arguments. Frankly, however, nutrition science just wasn’t up to the task. Linking nutrition to chronic disease was a field of study that would be in its infancy after it grew up a bit; in 1977, it was barely embryonic. There was little definitive data to support the notion that saturated fat from whole animal foods was actually a health risk; even experts who thought that the theory that saturated fat might be linked to heart disease had merit didn’t think there was enough evidence to call for dramatic changes in American’s eating habits.

The scientists who were intent on waving the “fear of fat” flag had to rely on observational studies of populations (considered then and now to be the weakest form of evidence), in order to attempt to prove that heart disease was related to intake of saturated fat (upon closer examination, these studies did not even do that).

Nutrition epidemiology is a soft science, so soft that it is not difficult to shape it into whatever conclusions the Consistent Public Health Message requires. In large-scale observational studies, dietary habits are difficult to measure and the results of Food Frequency Questionnaires are often more a product of wishful thinking than of reality. Furthermore, the size of associations in nutrition epidemiological studies is typically small—an order of magnitude smaller than those found for smoking and risk of chronic disease.

But nutrition epidemiology had proved its utility in convincing the public of the benefits of dietary change in the 70s and since then has become the primary tool—and the biggest funding stream (this is hardly coincidental)—for cementing in place the Consistent Public Health Message to reduce saturated fat and increase grains and cereals.

There is no doubt that the dramatic dietary change that the federal government was recommending was going to require some changes from the food industry, and they appear to have responded to the increased demands for low-fat, whole-grain products with enthusiasm. Public health recommendations and the food fears they engendered are (as my friend James Woodward puts it) “a mechanism for encouraging consumers to make healthy eating decisions, with the ultimate goal of improving health outcomes.” Experts like Kelly Brownell and Marion Nestle decry the tactics used by the food industry of taking food components thought to be “bad” out of products while adding in components thought to be “good,” but it was federal dietary recommendations focusing above all else on avoiding saturated fat, cholesterol, and salt that led the way for such products to be marketed as “healthy” and to become acceptable to a confused, busy, and anxious public. The result was a decrease in demand for red meat, butter, whole milk, and eggs, and an increase in demand for low-saturated fat, low-cholesterol, and “whole” grain products. Minimally-processed animal-based products were replaced by cheaply-made, highly-processed plant-based products, which food manufacturers could market as healthy because, according to our USDA/HHS Dietary Guidelines, they were healthy.

The problem lies in the fact that—although these products contained less of the “unhealthy” stuff Americans were supposed to avoid—they also contained less of our most important nutrients, especially protein and fat-soluble vitamins. We were less likely to feel full and satisfied eating these products, and we were more likely to snack or binge—behaviors that were also fully endorsed by the food industry.

Between food industry marketing and the steady drumbeat of media messages explaining just how deadly red meat and eggs are (courtesy of population studies from Harvard, see above), Americans got the message. About 36% of the population believe that UFOs are real; only 25% believe that there’s no link between saturated fat and heart disease. We are more willing to believe that we’ve been visited by creatures from outer space than we are to believe that foods that humans have been eating ever since they became human have no harmful effects on health. But while industry has certainly taken advantage of our gullibility, they weren’t the ones who started those rumors, and they should not be shouldering all of the blame for the consequences.

Fixing it until it broke

Back in 1977, we were given a cure that didn’t work for diseases that we didn’t have. Then we spent billions in research dollars trying to get the glass slipper to fit the ugly stepsister’s foot. In the meantime, the food industry has done just what we would expect it to do, provide us with the foods that we think we should eat to be healthy and—when we feel deprived (because we are deprived)—with the foods we are hungry for.

We can blame industry, but as long as food manufacturers can take any mixture of vegetable oils and grain/cereals and tweak it with added fiber, vitamins, minerals, a little soy protein or maybe some chicken parts, some artificial sweeteners and salt substitutes, plus whatever other colors/preservatives/stabilizers/flavorizers they can get away with and still be able to get the right profile on the nutrition facts panel (which people do read), consumers–confused, busy, hungry–are going to be duped into believing what they are purchasing is “healthy” because–in fact–the government has deemed it so. And when these consumers are hungry later—which they are very likely to be—and they exercise their rights as consumers rather than their willpower, who should we blame then?

There is no way around it. Our dietary recommendations are at the heart of the problem they were created to try to reverse. Unlike the public health approach to smoking, we “fixed” obesity until it broke for real.

As the Calories Churn (Episode 1): Nooooo, not the carbs!!!

Oh the drama!  Some of the current hyperventilating in the alternative nutrition community–sugar is toxic, insulin is evil, vegetable oils give you cancer, and running will kill you–has, much to my dismay, made the alternative nutrition community sound as shrill and crazed as the mainstream nutrition one.

When you have self-appointed nutrition experts (read: food writers) like Mark Bittman agreeing feverishly with a pediatric endocrinologist with years of clinical experience like Robert Lustig, we’ve crossed over into some weird nutrition Twilight Zone where fact, fantasy, and hype all swirl together in one giant twitter feed of incoherence meant, I think, to send us into a dark corner where we can do nothing but nibble on organic kale, mumble incoherently about inflammation and phytates, and await the zombie apocalypse.

No, carbohydrates are not evil—that’s right, not even sugar. If sugar were rat poison, one trip to the county fair in 4th grade would have killed me with a cotton candy overdose. Neither is insulin, now characterized as the serial killer of hormones (try explaining that to a person with type 1 diabetes).

But that doesn’t mean that 35 years of dietary advice to increase our grain and cereal consumption, while decreasing our fat and saturated fat consumption has been a good idea.

I have gotten rather tired of seeing this graph used as a central rationale for arguing that the changes in total carbohydrate intake over the past 30 years have not contributed to the rising rates of obesity.

The argument takes shape on two fronts:

1) We ate 500 grams of carbohydrate per day in 1909 and 500 grams in 1997 and WE WEREN’T FAT IN 1909!

2) The other part of the argument is that the TYPE of carbohydrate has shifted over time. In 1909, we ate healthy, fiber-filled unrefined and unprocessed types of carbohydrates. Not like now.

Okay, let’s take a closer look at that paper, shall we?  And then let’s look at what really matters:  the context.

The data used to make this graph are not consumption data, but food availability data. This is problematic in that it tells us how much of a nutrient was available in the food supply in any given year, but does not account for food waste, spoilage, and other losses. And in America, we currently waste a lot of food. 

According to the USDA, we currently lose over 1000 calories in our food supply–calories that don’t make it into our mouths.  Did we waste the same percentage of our food supply across the entire century? Truth is, we don’t know and we are not likely to find out—but I seriously doubt it. My mother and both my grandmothers—with memories of war and rationing fresh in their minds—would be no more likely to throw out anything remotely edible than they would be to do the Macarena. My mother has been known to put random bits of leftover food in soups, sloppy joes, and—famously—pancake batter. To this day, should your hand begin to move toward the compost bucket with a tablespoon of mashed potatoes scraped from the plate of a grandchild shedding cold virus like it was last week’s fashion, she will throw herself in front of the bucket and shriek, “NOOOOOO! Don’t throw that OUT! I’ll have that for lunch tomorrow.”
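To see why the waste question matters, here’s a back-of-the-envelope comparison. Every figure below is an illustrative assumption, not USDA series data; the point is only that different waste rates can flip an availability comparison on its head:

```python
# Illustrative only: neither the calorie figures nor the waste rates are
# real data. Availability overstates consumption by whatever gets wasted.
availability_1909 = 3500   # hypothetical calories available per person per day
availability_2010 = 4000
waste_1909 = 0.05          # assume thrifty households wasted very little
waste_2010 = 0.28          # assume ~1,000 of 2010's available calories are lost

consumed_1909 = availability_1909 * (1 - waste_1909)   # ≈ 3325
consumed_2010 = availability_2010 * (1 - waste_2010)   # ≈ 2880
print(consumed_1909, consumed_2010)
```

Under these (made-up) assumptions, the era with less food available ends up eating more of it, which is exactly the possibility that raw availability graphs paper over.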

You know what this means folks: in 1909, we were likely eating MORE carbohydrate than we are today. (Or maybe in 1909, all those steelworkers pulling 12 hour days 7 days a week, just tossed out their sandwich crusts rather than eat them. It could happen.)

BUT–as with butts all over America, including mine, it’s a really Big BUT: How do I explain the fact that Americans were eating GIANT STEAMING HEAPS OF CARBOHYDRATES back in 1909—and yet, and yet—they were NOT FAT!!??!!

Okay. Y’know. I’m up for this one. Not only is it problematic to the point of absurdity to compare food availability data from the early 1900s to our current food system, life in general was a little different back then. At the turn of the century,

  • average life expectancy was around 50
  • the nation had 8,000 cars
  • and about 10 miles of paved roads.

In 1909, neither assembly lines nor the Titanic had happened yet.

The labor force looked a little different too:

[Figure: Labor force, 1900–2000]

Primary occupations made up the largest percentage of male workers (42%)—farmers, fishermen, miners, etc.—what we would now call manual laborers. Another 21% held “blue collar” jobs: craftsmen, machine operators, and laborers whose activities in those early days of the Industrial Revolution, before many things became mechanized, must have required a considerable amount of energy. And not only was the work hard, there was a lot of it. At the turn of the century, the average workweek was 59 hours, or close to six 10-hour days. And it wasn’t just men working. As our country shifted from a rural agrarian economy to a more urban industrialized one, women and children worked both on the farms and in the factories.

This is what is called “context.”

Nutrition epidemiologists have long considered caloric intake to be a surrogate marker for activity level. To quote Walter Willett himself:

“Indeed, in most instances total energy intake can be interpreted as a crude measure of physical activity . . . ” (in: Willett, Walter. Nutritional Epidemiology. Oxford University Press, 1998, p. 276).

It makes perfect sense that Americans would have a lot of carbohydrate and calories in their food supply in 1909. Carbohydrates have been—and still are—a cheap source of energy to fuel the working masses. But it makes little sense to compare the carbohydrate intake of the labor force of 1909 to the labor force of 1997, as in the graph at the beginning of this post (remember the beginning of this post?).

After decades of decline, carbohydrate availability experienced a little upturn from the mid-1960s to the late 1970s, when it began to climb rapidly. But generally speaking, carbohydrate intake was lower during that time than at any point previously.

I’m not crazy about food availability data, but to be consistent with the graph at the top of the page, here it is.

Data based on per capita quantities of food available for consumption, 1909 vs. 1975 (change in parentheses):

  • Total calories: 3500 → 3100 (-400)
  • Carbohydrate calories: 2008 → 1592 (-416)
  • Protein calories: 404 → 372 (-32)
  • Total fat calories: 1098 → 1260 (+162)
  • Saturated fat (grams): 52 → 47 (-5)
  • Mono- and polyunsaturated fat (grams): 540 → 738 (+198)
  • Fiber (grams): 29 → 20 (-9)
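If you’d rather see the arithmetic than squint at the table, here’s a quick sketch, with the table’s calorie figures hard-coded, of what those numbers mean as shares of total available calories. This is my own back-of-the-envelope calculation, not anything from the USDA source:

```python
# Availability figures copied from the table above (calories per capita per day).
data_1909 = {"total": 3500, "carb": 2008, "protein": 404, "fat": 1098}
data_1975 = {"total": 3100, "carb": 1592, "protein": 372, "fat": 1260}

def shares(d):
    """Each macronutrient's share of total available calories, in percent."""
    return {k: round(100 * v / d["total"], 1)
            for k, v in d.items() if k != "total"}

print(shares(data_1909))  # carbohydrate supplies about 57% of calories in 1909
print(shares(data_1975))  # ...and about 51% in 1975, with fat up to about 41%
```

Either way you slice it, carbohydrate supplied both fewer absolute calories and a smaller share of calories in 1975 than in 1909, which is exactly the point of the table.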

To me, it looks pretty much like it should with regard to context.  As our country went from pre- and early industrialized conditions to a fully-industrialized country of suburbs and station wagons, we were less active in 1970 than we were in 1909, so we consumed fewer calories. The calories we gave up were ones from the cheap sources of energy—carbohydrates—that would have been most readily available in the economy of a still-developing nation. Instead, we ate more fat.

We can’t separate out “added fats” from “naturally-present fats” from this data, but if we use saturated fat vs. mono- and polyunsaturated fats as proxies for animal fats vs. vegetable oils (yes, I know that animal fats have lots of mono- and polyunsaturated fats, but alas, such are the limitations of the dataset), then it looks like Americans were making use of the soybean oil that was beginning to be manufactured in abundance during the 1950s and 1960s and was making its way into our food supply.  (During this time, heart disease mortality was decreasing, an effect likely due more to warnings about the hazards of smoking, which began in earnest in 1964, than to dietary changes; although availability of unsaturated fats went up, that of saturated fats did not really go down.)

As for all those “healthy” carbohydrates that we were eating before we started getting fat? Using fiber as a proxy for level of “refinement” (as in the graph at the beginning of this post—remember the beginning of this post?), we seemed to be eating more refined carbohydrates in 1975 than in 1909—and yet the obesity crisis was still just a gleam in Walter Willett’s eye.

While our lives in 1909 differed greatly from our current environment, our lives in the 1970s were not all that much different than they are now. I remember. As much as it pains me to confess this, I was there. I wore bell bottoms. I had a bike with a banana seat (used primarily for trips to the candy store to buy Pixie Straws). I did macramé. My parents had desk jobs, as did most adults I knew. No adult I knew “exercised” until we got new neighbors next door. I remember the first time our new next-door neighbor jogged around the block. My brothers and sister and I plastered our faces to the picture window in the living room to scream with excitement every time she ran by; it was no less bizarre than watching a bear ride a unicycle.

In 1970, more men had white-collar than blue-collar jobs; jobs that primarily consisted of manual labor had reached their nadir. Children were largely excluded from the labor force, and women, like men, had moved from farm and factory jobs to more white (or pink) collar work. The data on this is not great (in the 1970s, we hadn’t gotten that excited about exercise yet) but our best approximation is that about 35% of adults–one of whom was my neighbor–exercised regularly, with “regularly” defined as “20 minutes at least 3 days a week” of moderately intense exercise.  (Compare this definition, a total of 60 minutes a week, to the current recommendation, more than double that amount, of 150 minutes a week.)

Not too long ago, the 2000 Dietary Guidelines Advisory Committee (DGAC) recognized that environmental context—such as the difference between America in 1909 and America in 1970—might lead to or warrant dietary differences:

“There has been a long-standing belief among experts in nutrition that low-fat diets are most conducive to overall health. This belief is based on epidemiological evidence that countries in which very low fat diets are consumed have a relatively low prevalence of coronary heart disease, obesity, and some forms of cancer. For example, low rates of coronary heart disease have been observed in parts of the Far East where intakes of fat traditionally have been very low. However, populations in these countries tend to be rural, consume a limited variety of food, and have a high energy expenditure from manual labor. Therefore, the specific contribution of low-fat diets to low rates of chronic disease remains uncertain. Particularly germane is the question of whether a low-fat diet would benefit the American population, which is largely urban and sedentary and has a wide choice of foods.” [emphasis mine – although whether our population in 2000 was largely “sedentary” is arguable]

The 2000 DGAC goes on to say:

“The metabolic changes that accompany a marked reduction in fat intake could predispose to coronary heart disease and type 2 diabetes mellitus. For example, reducing the percentage of dietary fat to 20 percent of calories can induce a serum lipoprotein pattern called atherogenic dyslipidemia, which is characterized by elevated triglycerides, small-dense LDL, and low high-density lipoproteins (HDL). This lipoprotein pattern apparently predisposes to coronary heart disease. This blood lipid response to a high-carbohydrate diet was observed earlier and has been confirmed repeatedly. Consumption of high-carbohydrate diets also can produce an enhanced post-prandial response in glucose and insulin concentrations. In persons with insulin resistance, this response could predispose to type 2 diabetes mellitus.

The committee further held the concern that the previous priority given to a “low-fat intake” may lead people to believe that, as long as fat intake is low, the diet will be entirely healthful. This belief could engender an overconsumption of total calories in the form of carbohydrate, resulting in the adverse metabolic consequences of high carbohydrate diets. Further, the possibility that overconsumption of carbohydrate may contribute to obesity cannot be ignored. The committee noted reports that an increasing prevalence of obesity in the United States has corresponded roughly with an absolute increase in carbohydrate consumption.” [emphasis mine]

Hmmmm. Okay, folks, that was in 2000—THIRTEEN years ago. If the DGAC was concerned about increases in carbohydrate intake—absolute carbohydrate intake, not just sugars, but sugars and starches—13 years ago, how come nothing has changed in our federal nutrition policy since then?

I’m not going to blame you if your eyes glaze over during this next part, as I get down and geeky on you with some Dietary Guidelines backstory:

As with all versions of the Dietary Guidelines after 1980, the 2000 edition was based on a report submitted by the DGAC which indicated what changes should be made from the previous version of the Guidelines. And, as with all previous versions after 1980, the changes in the 2000 Dietary Guidelines were taken almost word-for-word from the suggestions given by the scientists on the DGAC, with few changes made by USDA or HHS staff. Although HHS and USDA took turns administering the creation of the Guidelines, in 2000, no staff members from either agency were indicated as contributing to the writing of the final Guidelines.

But after those comments in 2000 about carbohydrates, things changed.

Beginning with the 2005 Dietary Guidelines, HHS and USDA staff members are in charge of writing the Guidelines, which are no longer considered to be a scientific document whose audience is the American public, but a policy document whose audience is nutrition educators, health professionals, and policymakers. Why and under whose direction this change took place is unknown.

The Dietary Guidelines process doesn’t have a lot of law holding it up. Most of what happens in regard to the Guidelines is a matter of bureaucracy, decision-making that takes place within USDA and HHS that is not handled by elected representatives but by government employees.

However, there is one mandate of importance: the National Nutrition Monitoring and Related Research Act of 1990, Public Law 445, 101st Cong., 2nd sess. (October 22, 1990), section 301. (P.L. 101-445) requires that “The information and guidelines contained in each report required under paragraph shall be based on the preponderance of the scientific and medical knowledge which is current at the time the report is prepared.”

The 2000 Dietary Guidelines were (at least theoretically) scientifically accurate because scientists were writing them. But beginning in 2005, the Dietary Guidelines document recognizes the contributions of an “Independent Scientific Review Panel who peer reviewed the recommendations of the document to ensure they were based on a preponderance of scientific evidence.” [To read the whole sordid story of the “Independent Scientific Review Panel,” which appears to neither be “independent” nor to “peer-review” the Guidelines, check out Healthy Nation Coalition’s Freedom of Information Act results.]  Long story short:  we don’t know who–if anyone–is making sure the Guidelines are based on a complete and current review of the science.

Did HHS and USDA not like the direction that it looked like the Guidelines were going to take–with all that crazy talk about too many carbohydrates–and therefore make sure the scientists on the DGAC were farther removed from the process of creating them?

Hmmmmm again.

Dr. Janet King, chairwoman of the 2005 DGAC had this to say, after her tenure creating the Guidelines was over: “Evidence has begun to accumulate suggesting that a lower intake of carbohydrate may be better for cardiovascular health.”

Dr. Joanne Slavin, a member of the 2010 DGAC had this to say, after her tenure creating the Guidelines was over: “I believe fat needs to go higher and carbs need to go down,” and “It is overall carbohydrate, not just sugar. Just to take sugar out is not going to have any impact on public health.”

It looks like, at least in 2005 and 2010, some well-respected scientists (respected well enough to make it onto the DGAC) thought that—in the context of our current environment—maybe our continuing advice to Americans to eat more carbohydrate and less fat wasn’t such a good idea.

I think it is at about this point that I begin to hear the wailing and gnashing of teeth of those who don’t think Americans ever followed this advice to begin with, because—goodness knows—if we had, we wouldn’t be so darn FAT!

So did Americans follow the advice handed out in those early dietary recommendations? Or did Solid Fats and Added Sugars (SoFAS—as the USDA/HHS like to call them—as in “get up offa yur SoFAS and work your fatty acids off”) make us the giant tubs of lard that we are, just as the USDA/HHS says they did?

Stay tuned for the next episode of As the Calories Churn, when I attempt to settle those questions once and for all.  And you’ll hear a big yellow blob with stick legs named Timer say, “I hanker for a hunk of–a slab or slice or chunk of–I hanker for a hunk of cheese!”

The NaCl Debacle Part 2: We don’t need no stinkin’ science!

Sodium-Slashing Superheroes Low-Sodium Larry and his bodacious side-kick Linda “The Less Salt the Better” Van Horn team up to protect Americans from the evils lurking in a teaspoon of salt!
(Drawings courtesy of Butcher Billy)

Yesterday, we found our Sodium-Slashing Superheroes Larry and Linda determined to make sure that no American endangered his/her health by ingesting more than ¾ of a teaspoon of salt a day. But recently, an Institute of Medicine report determined that recommendations to reduce sodium intake to such low levels provided no health benefits and could be detrimental to the health of some people. [In case you missed it and your job is really boring, you can read Part 1 of the NaCl Debacle here.]

Our story picks up as the 2010 USDA/HHS Dietary Guidelines Advisory Committee, fearlessly led by Linda and Larry, arrives at the foregone conclusion that most, if not all, US adults would (somehow) benefit from reducing their sodium intake to 1500 mg/day.  The American Heart Association, in a report written by—surprise!—Larry and Linda, goes on to state that “The health benefits [of reducing sodium intake to 1500 mg/day] apply to Americans in all groups, and there is no compelling evidence to exempt special populations from this public health recommendation.”

Does that mean there is “compelling evidence” to include special populations, or for that matter ordinary populations, in this 1500 mg/day recommendation? No, but who cares?

Does that mean there is science to prove that “excess” sodium intake (i.e. more than ¾ of a teaspoon of salt a day) leads to high blood pressure and thus cardiovascular disease, or that salt makes you fat, or that sodium consumption will eventually lead to the zombie apocalypse? No, no, and no—but who cares?
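(For reference, the “¾ of a teaspoon” shorthand is just a unit conversion from the 1500 mg/day sodium recommendation. A minimal sketch of that conversion, assuming roughly 6 grams of table salt per teaspoon—a common approximation; cited values run about 5.7–6 g—and sodium’s mass fraction of NaCl from atomic weights:)

```python
# Rough conversion behind "1500 mg of sodium is about 3/4 teaspoon of salt."
NA, CL = 22.99, 35.45             # atomic weights of sodium and chlorine, g/mol
SODIUM_FRACTION = NA / (NA + CL)  # ~0.393: sodium's share of salt by weight
GRAMS_SALT_PER_TSP = 6.0          # assumption; commonly cited as ~5.7-6 g

def teaspoons_of_salt(sodium_mg):
    """Teaspoons of table salt that deliver the given milligrams of sodium."""
    grams_salt = (sodium_mg / 1000) / SODIUM_FRACTION
    return grams_salt / GRAMS_SALT_PER_TSP

print(round(teaspoons_of_salt(1500), 2))  # ~0.64 tsp, i.e. roughly 3/4 teaspoon
print(round(teaspoons_of_salt(2300), 2))  # ~0.97 tsp, about one full teaspoon
```

The 2300 mg/day upper limit works out to about one teaspoon of salt, which is why the 1500 mg figure keeps getting described as around three-quarters of a teaspoon.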

Larry and Linda KNOW that salt is BAD. Science? They don’t need no stinkin’ science.

Because the one thing everyone seems to be able to agree on is that the science on salt does indeed stink. The IOM report has had to use many of the same methodologically-flawed studies available to the 2010 Dietary Guidelines Advisory Committee, full of the same confounding, measurement error, reverse causation and lame-ass dietary assessment that we know and love about all nutrition epidemiology studies.  But the 2010 Dietary Guidelines Advisory Committee didn’t actually bother to look at these studies.

Why not?  (And let me remind you that the Dietary Guidelines folks usually <heart> methodologically-flawed study designs, full of confounding, measurement error, reverse causation and lame-ass dietary assessment.)

First, a little lesson in how the USDA/HHS folks create dietary guidance meant to improve the health and well-being of the American people:

  1. Take a clinical marker, whose health implications are unclear, but whose levels we can measure cheaply and easily (like blood pressure, cholesterol, weight).
  2. Suggest that this marker—like Carnac the Magnificent—can somehow predict risk of a chronic disease whose origins are multiple and murky (like obesity, heart disease, cancer).
  3. Use this suggestion to establish some arbitrary clinical cut offs for when this marker is “good” and “bad.” (Note to public health advocacy organizations: Be sure to frequently move those goalposts in whichever direction requires more pharmaceuticals to be purchased from the companies that sponsor you.)
  4. Find some dietary factor that can easily and profitably be removed from our food supply, but whose intake is difficult to track (like saturated fat, sodium, calories).
  5. Implicate the chosen food factor in the regulation of the arbitrary marker, the details of which we don’t quite understand. (How? Use observational data—see methodological flaws above—but hunches and wild guesses will also work.)
  6. Create policy that insists that the entire population—including people who, by the way, are not (at least at this point) fat, sick or dead—attempt to prevent this chronic disease by avoiding this particular dietary factor. (Note to public health advocacy organizations: Be sure to offer food manufacturers the opportunity to have the food products from which they have removed the offensive component labeled with a special logo from your organization—for a “small administrative fee,” of course.)
  7. Commence collecting weak, inconclusive, and inconsistent data to prove that yes indeedy this dietary factor we can’t accurately measure does in fact have some relationship to this arbitrary clinical marker, whose regulation and health implications we don’t fully understand.
  8. Finally—here’s the kicker—measure the success of your intervention by whether or not people are willing to eat expensive, tasteless, chemical-filled food devoid of the chosen food factor in order to attempt to regulate the arbitrary clinical marker.
  9. Whatever you do, DO NOT EVER measure the success of your intervention by looking at whether or not attempts to follow your intervention have made people fat, sick, or dead in the process.
  10. Ooops. I think I just described the entire history of nutrition epidemiology of chronic disease.

Blood pressure is easy to measure, but we don’t always know what causes it to go up (or down). There is no real physiological difference between having a blood pressure reading of 120/80, which will get you a diagnosis of “pre-hypertension” and a fistful of prescriptions, and a reading of 119/79, which won’t.  Blood pressure is not considered to be a “distinct underlying cause of death,” which means that, technically, no one ever dies of blood pressure (high or low). We certainly don’t know how to disentangle the effects of lowering dietary sodium on blood pressure from other effects (like weight loss) that may be related to dietary changes that are a part of an attempt to lower sodium (and we have an embarrassingly hard time collecting accurate dietary intake information from Food Fantasy Questionnaires anyway). We also know that individual response to sodium varies widely.

So doesn’t it make perfect sense that the folks at the USDA/HHS should ignore science that investigates the relationship between sodium intake and whether or not a person stayed out of the hospital, had a heart attack, or up and died? Well, it doesn’t to me, but nevertheless the USDA/HHS has remained obsessively fixated on one thing and one thing only: what effect reducing sodium has on blood pressure. They pay not one whit of attention to what effect reducing sodium has on, say, aliveness.

So let’s just get this out there and agree to agree: reducing sodium in most cases will reduce blood pressure.  But then, just to be clear, so will dismemberment, dysentery, and death.  We can’t just assume that lowering sodium will only affect blood pressure or will only positively affect health (I mean, we can’t unless we are Larry or Linda). Recent research, which prompted the IOM review, indicates that reducing sodium will also increase triglyceride levels, insulin resistance, and sympathetic nervous system activity. For the record, clinicians generally don’t consider these to be good things.

This may sound radical but in their review of the evidence, the IOM committee decided to do a few things differently.

First, they gave more weight to studies that determined sodium intake levels through multiple high-quality 24-hour urine collections. Remember, this is Low-Sodium Larry’s favorite way of estimating intake.

Also, they did not approach the data with a predetermined “healthy” range already established in their brains. Because of the extreme variability in intake levels among population groups, they decided to—this is crazy, I know—let the outcomes speak for themselves.

Finally, and most importantly, in the new IOM report, the authors, unlike Larry and Linda, focused on—hold on to your hats, folks!—actual health outcomes, something the Dietary Guidelines Have. Never. Done. Ever.

The IOM committee found, in a nutshell:

“that evidence from studies on direct health outcomes is inconsistent and insufficient to conclude that lowering sodium intakes below 2,300 mg per day either increases or decreases risk of CVD outcomes (including stroke and CVD mortality) or all-cause mortality in the general U.S. population.”

In other words, there is no science to indicate that we all need to be consuming less than ¾ of a teaspoon of salt a day. Furthermore, while there may be some subpopulations that may benefit from sodium reduction, reducing sodium intake to 1500 mg/day may increase risk of adverse health outcomes for people with congestive heart failure, diabetes, chronic kidney disease, or heart disease. (If you’d like to wallow in some of the studies reviewed by the IOM, I’ve provided the Reader’s Digest Condensed Version at the bottom of the page.)

Of course, the American Heart Association, eager to provide the public with the most up-to-date recommendations about heart health as long as they don’t contradict outdated recommendations of which the AHA is fond, responded to the IOM report by saying, “The American Heart Association is not changing its position. The association rejects the Institute of Medicine’s conclusions because the studies on which they were based had methodological flaws.”

Um, hello AHA? Exactly what completely non-existent, massive, highly-controlled and yet highly-generalizable randomized controlled trials about sodium intake and health effects were you planning on using to make your case? I believe it was the AHA that mentioned that “It is well-known, however, that such trials are not feasible because of logistic, financial, and often ethical considerations.” Besides, I don’t know what the AHA is whining about. The quality of the science hardly matters if you are not going to pay any attention to it in the first place.

No, folks that giant smacking sound you hear is not my head on my keyboard. That was the sound of science crashing into a giant wall of Consistent Public Health Message. Apparently, those public health advocates at the AHA seem to think that changing public health messages—even when they are wrong—confuses widdle ol’ Americans. The AHA—and the USDA/HHS team—doesn’t want us to have to worry our pretty little heads about all that crazy scientifical stuff with big scary words and no funny pictures or halftime shows.

Frankly, I appreciate that. I hate to have my pretty little head worried. But there’s one other problem with this particular Consistent Public Health Message. Not only is there no science to back it up; not only is it likely to be downright detrimental to the health of certain groups of people; not only is it likely to introduce an arsenal of synthetic chemical salt-replacements that will be consumed at unprecedented levels without testing for negative interactions or toxicities (remember how well that worked out when we replaced saturated fat with partially-hydrogenated vegetable oils?)—it is, apparently, incompatible with eating food.

Researchers set out to find what would really happen if Americans were muddle-headed and sheep-like enough to actually try to reduce their sodium intake to 1500 mg/day. They discovered that “the 2010 Dietary Guidelines for sodium were incompatible with potassium guidelines and with nutritionally adequate diets, even after reducing the sodium content of all US foods by 10%.”  Way to go, Guidelines.

While these researchers suggested that a feasibility study (this is a scientifical term for “reality check”) should precede the issuing of dietary guidelines to the public, I have a different suggestion.

How about we just stop with the whole 30-year-long dietary experiment to prevent chronic disease by telling Americans what not to eat? I hate to be the one to point this out, but it doesn’t seem to be working out all that well.  It’s hard to keep assuming that the AHA and the USDA/HHS mean well when, if you look at it for what it is, they are willing to continue to jeopardize the health of Americans just so they don’t have to admit that they might have been wrong about a few things.  I suppose if a Consistent Public Health Message means anything, it means never having to say you’re sorry for 30 years’ worth of lousy dietary advice.

Marion Nestle has noted that, up until now, “every single committee that has dealt with this question [of sodium-reduction] says, ‘We really need to lower the sodium in the food supply.’ Now either every single committee that has ever dealt with this issue is delusional, which I find hard to believe—I mean they can’t all be making this up—[or] there must be a clinical or rational basis for the unanimity of these decisions.”

Weeeell, I got some bad news for you, Marion. Believe it. They have been delusional. They are making this up. And no, apparently there is no clinical or rational basis for the unanimity of these decisions.

But, thanks to the IOM report, perhaps we can no longer consider these decisions to be unanimous.

Praise the lard and pass the salt.

Read ’em and weep:  The Reader’s Digest Condensed Version of the science from the IOM report.  Studies marked with an asterisk (*) are studies that were available to the 2010 Dietary Guidelines Advisory Committee.

Studies that looked at Cardiovascular Disease, Stroke, and Mortality

*Cohen et al. (2006)

When intakes of sodium less than 2300 mg per day were compared to intakes greater than 2300 mg per day, the “lower sodium intake was statistically significantly associated with increased risk of all-cause mortality.”

*Cohen et al. (2008)

When a fully-adjusted (for confounders) model was used, “there was a statistically significant higher risk of CVD mortality with the lowest vs. the highest quartile of sodium intake.”

Gardener et al. (2012)

Risk of stroke was positively related to sodium intake when comparing the highest levels of intake to the lowest levels of intake. There was no statistically significant increase in risk for those consuming between 1500 and 4000 mg of sodium per day.

*Larsson et al. (2008)

“The analyses found no significant association between dietary sodium intake and risk of any stroke subtype.”

*Nagata et al. (2004)

“Among men, a 2.3-fold increased risk of stroke mortality was associated with the highest tertile of sodium intake.” That sounds bad, but the average sodium intake in the high-risk group was 6613 mg per day. The lowest risk group had an average intake of 4070 mg per day. “Thus, the average sodium intake in the US would be within the lowest tertile of this study.”

Stolarz-Skrzypek at al. (2011)

“Overall, the authors found that lower sodium intake was associated with higher CVD mortality.”

Takachi et al. (2010)

The authors found “a significant positive association between sodium consumption at the highest compared to the lowest quintile and risk of stroke.” As with the Nagata (2004) study, this sounds bad, but the average sodium intake in the high-risk group was 6844 mg per day. The lowest risk group had an average intake of 3084 mg per day. “Thus, the average sodium intake in the US would be close to the lowest quintile of this study.”

*Umesawa et al. (2008)

“The authors found an association between greater dietary sodium intake and greater mortality from total stroke, ischemic stroke, and total CVD.” However, as with the Nagata and the Takachi studies (above), lower quintiles—in this case, quintiles one and two—would be comparable to average US intake.

Yang et al. (2011)

Higher usual sodium intake was found to be associated with all-cause mortality, but not cardiovascular disease mortality or ischemic heart disease mortality. “However, the finding that correction for regression dilution increased the effect on all-cause mortality, but not on CVD mortality, is inconsistent with the theoretical causal pathway.”  In other words, high sodium intake might be bad for health, but not because it raises blood pressure and leads to heart disease.

Studies in Populations 51 Years of Age or Older

*Geleijnse et al. (2007)

“This study found no significant difference between urinary sodium level and risk of CVD mortality or all-cause mortality.” Relative risk was lowest in the medium intake group, with an average estimated intake of 2,415 mg/day.


“Five of the nine reported studies in the general population listed above also analyzed the data on health outcomes by age and found no interaction (Cohen et al., 2006, 2008; Cook et al., 2007; Gardener et al., 2012; Yang et al., 2011).”

Studies in Populations with Chronic Kidney Disease

Dong et al. (2010)

“The authors found that the lowest sodium intake was associated with increased mortality risk.”

Heerspink et al. (2012)

“Results from this study suggest that ARBs were more effective at decreasing CKD progression and CVD when sodium intake was in the lowest tertile” which had an estimated average sodium intake of about 2783 mg/day.

Studies on Populations with Cardiovascular Disease

Costa et al. (2012)

“Dietary sodium intake was estimated from a 62-item validated FFQ. . . . Significant correlations were found between sodium intake and percentage of fat and calories in daily intake. . . . Overall, for the first 30 days and up to 4 years afterward, total mortality was significantly associated with high sodium intake.”

Kono et al. (2011)

“Cumulative risk analysis found that a salt intake of greater than the median of 4,000 mg of sodium was associated with higher stroke recurrence rate. Univariate analysis of lifestyle management also found that poor lifestyle, defined by both high salt intake and low physical activity, was significantly associated with stroke recurrence.”

O’Donnell et al. (2011)

“For the composite outcome, multivariate analysis found a U-shaped relationship between 24-hour urine sodium and the composite outcome of CVD death, MI, stroke, and hospitalization for CHF.” In other words, both higher (>7,000 mg per day estimated intake) and lower (<2,990 mg per day estimated intake) intakes of sodium were associated with increased risk of heart disease and mortality.

Studies on Populations with Prehypertension

*Cook et al. (2007)

In a randomized trial comparing a low sodium intervention with usual intake, lower sodium intake did not significantly decrease risk of mortality or heart disease events.

*Cook et al. (2009)

No significant increase in risk of adverse cardiovascular outcomes was associated with increased sodium excretion levels.


“Several other studies discussed in this chapter analyzed data on health outcomes by blood pressure and found no statistical interactions (Cohen et al., 2006, 2008; Gardener et al., 2012; O’Donnell et al., 2011; Yang et al., 2011).”

Studies on Populations with Diabetes

Ekinci et al. (2011)

Higher sodium intakes were associated with decreased risk of all-cause mortality and heart disease mortality.

Tikellis et al. (2013)

“Adjusted multivariate regression analysis found urinary sodium excretion was associated with incident CVD, with increased risk at both the highest [> 4,401 mg/day] and lowest [<2,346 mg/day] urine sodium excretion levels. When analyzed as independent outcomes, no significant associations were found between urinary sodium excretion and new CVD or stroke after adjustment for other risk factors.”


“Two other studies discussed in this chapter analyzed the data on health outcomes by diabetes prevalence and found no interaction (Cohen et al., 2006; O’Donnell et al., 2011).”

Studies in Populations with Congestive Heart Failure

Arcand et al. (2011)

High sodium intake levels (≥2,800 mg per day) were significantly associated with acute decompensated heart failure, all-cause hospitalization, and mortality.

Lennie et al. (2011)

“Results for event-free survival at a urinary sodium of ≥3,000 mg per day varied by the severity of patient symptoms.” In people with less severe symptoms, sodium intake greater than 3,000 mg per day was associated with better event-free survival (fewer events) than intake below that level. Conversely, in people with more severe symptoms, sodium intake greater than 3,000 mg per day was associated with worse event-free survival.

Parrinello et al. (2009)

“During the 12 months of follow-up, participants receiving the restricted sodium diet [1840 mg/day] had a greater number of hospital readmissions and higher mortality compared to those on the modestly restricted diet [2760 mg/day].”

*Paterna et al. (2008)

The lower sodium intake group [1840 mg/day] experienced a significantly higher number of hospital readmissions compared to the normal sodium intake group [2760 mg/day].

*Paterna et al. (2009)

A significant association was found between low sodium intake [1,840 mg per day] and hospital readmissions. The normal-sodium-diet group [2,760 mg/day] also had fewer deaths compared to all groups receiving a low-sodium diet combined.


Move over saturated fat and cholesterol. There’s a new kid on the heart disease block: TMAO.

TMAO is not, as I first suspected, a new internet acronym that I was going to have to get my kids to decipher for me, while they snickered under their collective breaths. Rather, TMAO stands for Trimethylamine N-oxide, and it is set to become the reigning king of the “why meat is bad for you” argument. Former contenders, cholesterol and saturated fat, have apparently lost their mojo. After years of dominating the heart disease-diet debate, it turns out they were mere poseurs, only pretending to cause heart disease, the whole time distracting us from the true evils of TMAO.

The news is, the cholesterol and saturated fat in red meat can no longer be held responsible for clogging up your arteries. TMAO, which is produced by gut bacteria that digest the carnitine found in meat, is going to gum them up instead. This may be difficult to believe, especially in light of the fact that, while red meat intake has declined precipitously in the past 40 years, prevalence of heart disease has continued to climb. However, this is easily accounted for by the increase in consumption of Red Bull—which also contains carnitine—even though it is not, as some may suspect, made from real bulls (thank you, BW).

Here to explain once again why we should all be afraid of eating a food our ancestors ignorantly consumed in scandalous quantities (see what happened to them?  they are mostly dead!) is the Medical Media Circus! Ringleader for today is the New York Times’ Gina Kolata, who never met a half-baked nutrition theory she didn’t like (apparently Gary Taubes’ theory regarding carbohydrates was not half-baked enough for her).

Step right up folks and meet TMAO, the star of “a surprising new explanation of why red meat may contribute to heart disease” (because, frankly, the old explanations aren’t looking too good these days).

We know that red meat maybe almost probably for sure contributes to heart disease, because that wild bunch at Harvard just keeps cranking out studies like this one, Eat Red Meat and You Will Die Soon.

This study and others just like it definitely prove that if you are a white, well-educated, middle/upper-middle class health professional born between 1920 and 1946 and you smoke and drink, but you don’t exercise, watch your weight, or take a multivitamin, then eating red meat will maybe almost probably for sure increase your risk of heart disease. With evidence like that, who needs evidence?

Flying like the Wallenda family in the face of decades of concrete and well-proven assumptions that the reason we should avoid red meat is because of its saturated fat and cholesterol content, the daring young scientists who discovered the relationship between TMAO and heart disease “suspected that saturated fat and cholesterol made only a minor contribution to the increased amount of heart disease seen in red-meat eaters” [that is, the red-meat eaters who are white, well-educated, middle/upper-middle class health professionals, who smoke and drink and don’t exercise, watch their weight, or take a multivitamin; emphasis mine].

Perhaps their suspicions were alerted by studies such as this one, that found that, in randomized, controlled trials, with over 65 thousand participants, people who reduced or changed their dietary fat intake didn’t actually live any longer than the people who just kept eating and enjoying the same artery-clogging, saturated fat- and cholesterol-laden foods that they always had. (However, this research was able to determine that a steady diet of broiled chicken breasts does in fact make the years crawl by more slowly.)

You can almost ALWAYS catch something on a fishing expedition.

Our brave scientists knew they couldn’t just throw up their hands and say “Let them eat meat!” That would undermine decades of consistent public health nutrition messaging and those poor stupid Americans might get CONFUSED—and we wouldn’t want that! So, instead the scientists went on a “scientific fishing expedition” (Ms. Kolata’s words, not mine) and hauled in a “little-studied chemical called TMAO that gets into the blood and increases the risk of heart disease.” Luckily, TMAO has something to do with meat. [As Chris Masterjohn points out, it also has something to do with fish, peas, and cauliflower, but–as I’m sure these scientists noticed immediately–those things do not contain meat.] Ta-da! Problemo solved.

Exactly how TMAO increases the risk of heart disease, nobody knows. But, good scientists that they are, the scientists have a theory. (Just to clarify, in some situations the word theory means: a coherent group of tested general propositions, commonly regarded as correct. This is not one of those situations.) The researchers think that TMAO enables cholesterol to “get into” artery walls and prevents the body from excreting “excess” cholesterol. At least that’s how it works in mice. Although mice don’t normally eat red meat, it should be noted that mice are exactly like people except they don’t have Twitter accounts. We know this because earlier mouse studies allowed scientists to prove beyond the shadow of a doubt that dietary cholesterol and saturated fat cause heart disease, er, that mice definitely do not have Twitter accounts.

Look, just because the scientists can’t explain how TMAO does all the bad stuff it does, doesn’t mean it’s not in there doing, you know, bad stuff. Remember, we are talking about molecules that are VERY VERY small and really small things can be hard to find–unless of course you are on a scientific fishing expedition.

What will happen to the American Heart Association’s seal of approval now that saturated fat and cholesterol are no longer to be feared?

Frankly, I’m relieved that we FINALLY know exactly what has been causing all this heart disease. Okay, so it’s not the saturated fat and cholesterol that we’ve been avoiding for 35 years. Heck, everybody makes mistakes. Even though Frank Sacks and Robert Eckel, two scientists from the American Heart Association, told us for decades that eating saturated fat and cholesterol was just greasing the rails on the fast track to death-by-clogged-arteries, they have no reason to doubt this new theory. And even though they apparently had no reason to doubt the now-doubtful old theory, at least not until just now—as a nation, we can rest assured that THIS time, they got it right.

Now that saturated fat and cholesterol are no longer Public Enemies Number One and Two, whole milk, cheese, eggs, and butter—which do not contain red meat—MUST BE OKAY! I guess there’s no more need for the AHA’s dietary limits on saturated fat, or for the USDA Guidelines restrictions on cholesterol intake, or for those new Front of Package labels identifying foods with too much saturated fat. Schools can start serving whole milk again, butter will once again be legal in California, and fat-free cheese can go back to being the substance that mouse pads are made out of. Halla-freaking-looyah! A new day has dawned.

But—amidst the rejoicing–don’t forget: Whether we blame saturated fat or cholesterol or TMAO, meat is exactly as bad for you now as it was 50 years ago.

“Broccoli has more protein than steak”—and other crap

Of all the asinine things that I read about nutrition—and let me tell you, I read a lot of them—this one has got to be the asininniest: Broccoli has more protein than steak.

I’ve seen this idiotic meme repeated many times, but the primary source of this stupid—see also: delusional, ludicrous, and absurd—notion seems to be Dr. Joel Fuhrman. My mom—bless her little osteoporotic soul—keeps his books down at the beach cottage. I don’t think she does it to taunt me, but you never know. I was a bad kid, and payback may be in order. My family has forbidden me to read Dr. Fuhrman’s books, to pick them up, or to even glance at the covers because the resulting full-on nutrition-rant kills everybody’s beach buzz.

However, as of last week, I have officially maxed out my tolerance for just ignoring this nonsense. So, note to my family: Read no further, it will kill your beach buzz.

According to Dr. Fuhrman’s book, Eat to Live, a 100-calorie portion of sirloin steak has 5.4 grams of protein, and a 100-calorie portion of broccoli has 11.2 grams of protein. This is rubbish. According to the USDA’s Agricultural Research Service’s Nutrient Data Laboratory database, 100 calories of broiled beef, top sirloin steak has exactly 11.08 grams of protein and 100 calories of chopped, raw broccoli has exactly 8.29 grams. I’m not sure what universe Dr. Fuhrman lives in, but in my universe, 8.29 is less than 11.08.
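For the record, the per-calorie arithmetic is easy to check yourself. Here’s a quick sketch, using approximate per-100-gram figures of the kind the USDA database reports (the per-100 g numbers below are my assumptions for illustration; the USDA database is the authority, so look them up yourself with that library card):

```python
# Grams of protein per 100 kcal, computed from approximate USDA-style
# per-100 g values (illustrative figures, not official USDA numbers).
per_100g = {
    # name: (kcal per 100 g, g protein per 100 g)
    "top sirloin steak, broiled": (243.0, 26.96),
    "broccoli, raw, chopped": (34.0, 2.82),
}

protein_per_100_kcal = {
    name: protein / kcal * 100 for name, (kcal, protein) in per_100g.items()
}

for name, grams in protein_per_100_kcal.items():
    print(f"{name}: {grams:.2f} g protein per 100 kcal")
```

Run it and the steak comes out around 11 grams of protein per 100 calories and the broccoli around 8.3, in line with the database figures quoted above.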

I can explain the discrepancy in numbers by the simple fact that Dr. Fuhrman and I used different sources for our information. Dr. Fuhrman wrote his book—the one that contains the piece of drivel under consideration—in 2005, but he chose to reference a nutrition book written in 1986 (Adams, C. 1986. Handbook of the Nutritional Value of Foods in Common Units, New York: Dover Publications). Just to put things in perspective, in 1986, the internet and DVDs had not yet been invented, no one knew who Bart Simpson was, and it would be another couple of years before Taylor Swift even drew her first ex-boyfriend-bashing breath.

Here’s what I can’t explain: Why, oh why did he dig up a reference nearly two decades old and not just use the USDA internet database, which is—and has been since the 1990s—available to anyone with a library card and half a brain? While I do not wish to speculate on exactly which of these tools Dr. Fuhrman might be lacking, suffice it to say that it would take less than 10 minutes for any blogger interested in the truth of the matter to find a more recent source of information—assuming of course that bloggers who perpetuate this particular fiction are interested in the truth.

But wait—before you foam at the mouth too much, Adele—8.29 grams of protein is a fair bit of protein. There is only a difference of a couple of grams of protein between broccoli and steak. Yes, I would agree, those numbers are a lot closer than you might expect, and this might actually be nutritionally important, if—Big If—all protein were created equal. Which it isn’t.

While I am a big fan of coming at nutrition from an individualized perspective, and I am aware that nutrition scientists don’t have any monopoly on truth, we have managed to nail down a few essential things that humans must acquire from the food they eat. In terms of essentiality, after calories and fluid comes protein—or more specifically, essential amino acids (there are more essentials, but they are not the topic of this particular rant). Because these amino acid requirements are so important (a particular form of starvation, kwashiorkor, involves not overall calorie deprivation, but protein deficit in the context of adequate or near-adequate calories), the World Health Organization has established specific daily requirements of the essential amino acids that are necessary for health.

Let’s see how similar caloric intakes of steak and broccoli stack up when comparing how these two foods provide for essential amino acid requirements. A 275-calorie portion of steak (4 ounces) has 30.5 grams of protein and comes very close to meeting all the daily essential amino acid requirements for a 70 kg adult. A 277-calorie portion of broccoli is not only way more food—you’ll be chewing for a long time as you try to make it through 9 ¼ cups of broccoli—but it meets exactly NONE of the daily essential amino acid requirements for an adult:

| Essential amino acid | Daily requirement, 70 kg adult (g) | In 275 calories of steak, 4 oz / 113.33 g (g) | In 277 calories of chopped, raw broccoli, 9.25 cups (g) |
|---|---|---|---|
| histidine | 0.70 | 0.975 (+0.275) | 0.48 (-0.22) |
| isoleucine | 1.40 | 1.391 (-0.009) | 0.643 (-0.757) |
| leucine | 2.73 | 2.431 (-0.299) | 1.05 (-1.68) |
| lysine | 2.10 | 2.583 (+0.483) | 1.099 (-1.001) |
| methionine | 0.70 | 0.796 (+0.096) | 0.309 (-0.391) |
| cysteine | 0.28 | 0.394 (+0.114) | 0.228 (-0.052) |
| threonine | 1.05 | 1.221 (+0.171) | 0.716 (-0.334) |
| tryptophan | 0.28 | 0.201 (-0.079) | 0.269 (-0.011) |
| valine | 1.82 | 1.516 (-0.304) | 1.018 (-0.802) |

In reality, it takes twice that much broccoli, or over 18 cups, containing nearly twice as many calories, in order to get anywhere near meeting all essential amino acid requirements.  While I’m willing to concede that individual amino acid requirements may vary considerably, I am not willing to concede that similar caloric amounts of steak and broccoli provide a similar supply of those requirements.  I’m no broccoli basher (it’s sooo yummy baked with cheese & a little bacon on top), but as a protein source, even a lot leaves a lot to be desired.
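If you’d rather let a computer do the table-checking, the comparison above boils down to a few lines. The figures below are simply the numbers from the table (WHO-based daily requirements for a 70 kg adult, and the amino acid content of each portion); the only computed part is the tally of which amino acids fall short:

```python
# Which essential amino acids does each portion fail to cover?
# Requirement and portion figures are taken from the table above.
requirement = {  # g/day, 70 kg adult
    "histidine": 0.70, "isoleucine": 1.40, "leucine": 2.73, "lysine": 2.10,
    "methionine": 0.70, "cysteine": 0.28, "threonine": 1.05,
    "tryptophan": 0.28, "valine": 1.82,
}
steak_275_kcal = {  # 4 oz broiled top sirloin
    "histidine": 0.975, "isoleucine": 1.391, "leucine": 2.431, "lysine": 2.583,
    "methionine": 0.796, "cysteine": 0.394, "threonine": 1.221,
    "tryptophan": 0.201, "valine": 1.516,
}
broccoli_277_kcal = {  # 9.25 cups chopped raw broccoli
    "histidine": 0.48, "isoleucine": 0.643, "leucine": 1.05, "lysine": 1.099,
    "methionine": 0.309, "cysteine": 0.228, "threonine": 0.716,
    "tryptophan": 0.269, "valine": 1.018,
}

def deficits(food):
    """Return the essential amino acids the portion falls short on."""
    return [aa for aa in requirement if food[aa] < requirement[aa]]

print("steak falls short on:", deficits(steak_275_kcal))
print("broccoli falls short on:", deficits(broccoli_277_kcal))
```

The steak portion misses only four of the nine, each by a small margin; the broccoli portion misses all nine.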

Oh yeah? Well then, “how on earth do animals like elephants, gorillas and oxen get so big and strong eating only plants? A diverse plant-based diet can obviously support a big, powerful body.” Sure it can. If you’re an elephant or a gorilla or an ox.

In general, human bodies don’t work very efficiently without a regular dietary supply of all essential amino acids: “It would be difficult to find a protein that did not have at least one residue of each of the common 20 amino acids. Half of these amino acids are essential, and if the diet is lacking or low in even one of these essential amino acids, then protein synthesis is not possible” [Emphasis mine; reference: Campbell & Farrell’s Biochemistry, 6th edition]. Protein synthesis allows us to grow, heal, reproduce, and function in general. One of the specific outcomes of protein deficiency in humans is stunting, i.e. where humans who would otherwise grow bigger, don’t.

Dr. Fuhrman seems to think that those of us who “believe” that food from animals provides a more biologically complete source of protein than food from plants “never thought too much about how a rhinoceros, hippopotamus, gorilla, giraffe, or elephant became so big eating only vegetables.” Hmmm. I have to say, I’m thinking the same thing about Dr. Fuhrman. Maybe he is unaware that humans aren’t really all that much like rhinoceroses, hippos, gorillas, giraffes, or elephants. But then maybe he just hangs out with a different crowd than I do.

Once again, armed with a library card and half a brain, it is not too difficult to figure out—assuming you did think about how those animals got so big eating only plants and didn’t just mindlessly parrot Dr. Fuhrman’s poorly-researched blather—that, as Gomer Pyle would say, surprise! surprise! Humans and other large mammals ARE different.

While non-ruminants (like humans) must get their essential amino acids from their diet, ruminants (like giraffes) “may also acquire substantial amounts of these amino acids through the digestion of microbial protein synthesized in the rumen” (see: Amino Acids in Animal Nutrition, edited by J.P. Felix D’Mello). This may come as a bit of a shock to Dr. Fuhrman and his readership, but humans don’t actually have rumens, and utilizing this particular approach to the acquisition of essential amino acids from plant matter ain’t gonna work for us.

You can get plenty of protein from a plants-only diet by eating like a hippo.

Other non-ruminant grazers—see elephants, rhinos, and hippos—have a different eating strategy. They “eat for volume and low extraction.” In other words, the relatively low availability of protein in the food is overcome by the high volume consumed. In that regard—assuming you aspire to an elephant-like, rhino-like, or hippo-like bod—it may be possible to get sufficient protein from a strictly plant-based diet. If you don’t mind eating all the time. And pooping. Less than half of what is consumed by the high-volume grazers is utilized by the body; the rest—like a handsome stranger—is just passin’ through (see: Nutritional Ecology of the Ruminant, by Peter J. van Soest). If the idea of literally flushing over half of what you eat down the toilet doesn’t bother you, then this strategy actually might work.

ooooh! Can we? Please?

So what about gorillas? This particular primate-to-primate comparison has been tossed all around the internet. Why can’t we just eat plants like gorillas do? Gorillas, although not so good at Jeopardy, are big and strong and they’re vegans, so we should all be vegans too, right? Aside from the fact that we don’t really know exactly what gorillas are eating much of the time, it does seem that they eat a lot of bugs along with their plants. So unless you have a particularly fastidious gorilla, some dietary protein won’t be vegan. Compared to humans, gorillas also have a much larger proportion of the gut devoted to fermentation—again, another source for microbes to contribute to the nutritional completeness of a plants-only diet. And, again, a high volume of food is consumed to compensate for the low nutritional value of it. You won’t have to worry about half your food going down the toilet, though. Those who want to live like gorillas can just eat that poop instead of flushing it. This provides the body with another opportunity to extract nutrition from the substance formerly known as food and may also help explain the willingness of Dr. Fuhrman’s readers to swallow what he’s shoveling.

I have nothing against a plants-only diet—in whatever form it takes—if that’s what a person wants to do and it makes him/her happy. I have no more interest in converting a vegan to omnivory than I do in having a vegan attempt to convert me to swearing off bacon. I am also aware that there is more—much more—to food choices than the nutritional content of the food chosen.

But I’m afraid this is just one of those situations where ideology has been sent to do the work of science. Ideology has its place, and science has its flaws. Truth, facts, and beliefs can be hard to define and harder still to separate. I get all that. But – to quote Neil deGrasse Tyson – “The good thing about science is that it’s true whether or not you believe in it.” Unfortunately, for all those gorilla-wannabees out there, the reverse also applies: Believing in something doesn’t make it true. You can believe all you want that broccoli is a better source of protein than steak, but your ribosomes don’t have access to a keyboard and they might vote differently.

Now, dear readers, if you ever run across some library-card-challenged blogger out there perpetuating Dr. Fuhrman’s little myth, you have a link to help spill some sunshine on the matter.