This Power-Generating Shoe Isn’t Ready for Prime Time Yet, but This Kid’s Project is Still Pretty Cool

This is a video by Angelo Casimiro, a 15-year-old Filipino participating in this year’s Google Science Fair. And he has seriously tweaked his shoes to do something cool: they spark. And I don’t mean spark like those kids’ shoes with stripes that dimly light up as you walk, the ones you really wanted to try but evidently couldn’t wear because none of them could support your ankles (okay, that last part may not have applied to everyone else…). Angelo’s new shoes actually generate a little bit of electricity every time he takes a step. This is incredibly cool.

But just because I can, I’m going to bury the lede for a bit, because I want to contextualize this. Angelo did this as a test to see if it could work AT ALL, and he says he’s nowhere near a final product that you might buy. So before dreams of daily jogging to power your iPhone and laptop dance in your head, we need to look at the electricity we can create and how much we actually use.

Duracell’s basic alkaline non-rechargeable AA battery has a charge of about 2500-3000 milliampere-hours (mAh), which I estimated by multiplying the number of hours it was used by the constant currents applied in the graphs on the first page here. The two basic rechargeable NiMH AA batteries have charges of 1700 and 2450 mAh. The battery in my Android smartphone has a charge of 1750 mAh, based on dividing its energy (6.48 watt-hours) by its operating voltage (3.7 V). Based on Angelo’s best reported current of 11 mA on his Google Science Fair page, it would take 159 hours to fully charge my phone. That’s nearly a week of non-stop running! (Literally: there are only 168 hours in a week, so if you wanted to charge the phone that week, you could spend just 9 hours doing anything besides running, like replacing one of the two AA batteries it takes to power my digital camera.) However, I might be overestimating the time based on his averages. At around the 3:50 mark in the video, an annotation says that Angelo was able to charge a 400 mAh battery after 8 hours of jogging. That would translate to about 33 hours of jogging to charge my cell phone. No one I know would want to do that, but it is significantly less than jogging non-stop for almost 7 days.
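Those numbers are easy to sanity-check. Here’s a quick back-of-envelope script using the figures quoted above (the phone battery’s capacity and Angelo’s reported currents; note the 400 mAh in 8 hours figure works out to roughly 35 hours, the same ballpark as the ~33 quoted):

```python
# Back-of-envelope charge-time estimates using the numbers quoted above.
phone_mah = 6.48 / 3.7 * 1000  # 6.48 Wh at 3.7 V -> ~1750 mAh

# Scenario 1: Angelo's best reported current, 11 mA.
hours_at_best_current = phone_mah / 11  # ~159 hours

# Scenario 2: 400 mAh charged over 8 hours of jogging -> ~50 mA effective.
effective_ma = 400 / 8
hours_at_jogging_rate = phone_mah / effective_ma  # ~35 hours
```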

But as Angelo points out, while you may not be able to power your phone with his shoe, lots of sensors and gadgets that could go into smart clothes could be powered by this. In the video, he says he was able to power an Arduino board. An Arduino is a common mini-CPU board with extras that people often use to make nifty devices, from how Peter Parker locks his room door in The Amazing Spider-Man movie to laser harps you can play by touching beams of light (note that the Arduino isn’t necessarily powering all the other components it is controlling in these cases), so you could potentially control smart clothes that respond to your movement. A study by MIT’s Media Lab also looked at putting piezoelectric material in shoes and found it could power an RFID transmitter, which can be used to broadcast information to other devices. So perhaps your gym shoes could also act as your gym ID. The 400 mAh battery Angelo mentions is pretty close to the charge of batteries in small blood sugar monitors and over double the charge of some smaller hearing aid batteries.
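For a sense of scale, here’s a rough power-budget sketch. The draw figures below are ballpark assumptions on my part, not measurements from Angelo’s project, but they suggest why buffering the energy in a battery (or using a low-power board) matters: a full-size Arduino wants more current than the shoe’s reported peak.

```python
# Rough power budget at the shoe's best reported output (~11 mA).
# Device draws below are ballpark assumptions, not measured values.
available_ma = 11

typical_draw_ma = {
    "Arduino Uno (active)": 45,         # assumed typical active draw
    "bare MCU in low-power mode": 0.5,  # assumed typical
    "small sensor circuit": 1.0,        # assumed typical
}

# Which loads could the shoe run continuously, without a battery buffer?
runnable = {name: draw <= available_ma for name, draw in typical_draw_ma.items()}
```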

But in relation to another recent science fair controversy, let’s put Angelo in context. No, he did not “invent” a new way to “charge your phone with your shoes”. Angelo himself points out that his work is more a proof of concept than anything close to a product, and his numbers show you really won’t want to charge energy-hungry devices with it. And MIT and DARPA, the branch of the US Department of Defense that funds crazy research schemes, have both looked at similar systems. (DARPA has looked at piezo-boots that could help power soldiers’ electronics.) Angelo and DARPA also both realize the limits of this: with our current materials, there’s only so much you can stuff into footwear before you run out of room or make it harder to walk. So instead, people have shifted to different goals for piezoelectricity: instead of having the material move with a single person who has to provide all the energy, we can place it where we know lots of people will walk and split the work. In Europe, high foot-traffic areas have been covered with piezoelectric sidewalks to power lights, and in Japan, commuters walking through turnstiles in Tokyo and Shibuya stations help power ticket readers and the signboards that guide them to their trains.

Two distinct images. The left image shows a turnstile for ticketing. There is a black strip of material running through it. The right image shows a figure with an explanation in Japanese describing the power-generating nature of the strip.

Piezoelectric strip in ticket turnstile in Japanese subway station, from 2008

But none of this means that Angelo hasn’t done good technical work. It’s just that his effort falls more on the engineering side than the science side. Which is perfectly fine, because Google has categories for electronics and inventions, and that other big science fair everyone talks about is technically a science AND engineering fair. Angelo’s shoe modification is posted on Instructables and is something you could do in your home with consumer materials. The MIT Media Lab study, by contrast, worked with custom-made piezoelectrics from colleagues in another lab. So the fact that Angelo could still manage to charge a battery in a reasonable (if you don’t need power right away) amount of time is incredibly impressive. And he also seems quite skilled at designing the circuits he used. As a 15-year-old, he easily seems to know more about the various aspects of his circuit he needs to consider than I did through most of my time in college (granted, you didn’t need to know any particularly complicated circuitry to be a physics major). He’s definitely off to a great start if he wants to study engineering or science in college.

What Happens When You Literally Hang Something Out to Dry?

I got a question today!  A good friend from high school asked:

Hey! So I have a sciencey question for you. But don’t laugh at me! It might seem kinda silly at first, but bear with me. Ok, how does water evaporate without heat? Like a towel is wet, so we put it in the sun to dry (tada heat!) but if its a kitchen or a bathroom towel that doesn’t see any particular increase in temp? How does the towel dry? What happens to the water? Does it evaporate but in a more mild version of the cycle of thinking?
It’s actually a really good question, and the answer depends on some statistical physics and thermodynamics. You know water is turning into water vapor all the time around you, but you can also see that these things clearly aren’t boiling away.

I’ve said before that temperature and heat are kind of weird, even though we talk about them all the time:

It’s not the same thing as energy, but it is related to that.  And in scientific contexts, temperature is not the same as heat.  Heat is defined as the transfer of energy between bodies by some thermal process, like radiation (basically how old light bulbs work), conduction (touching), or convection (heat transfer by a fluid moving, like the way you might see soup churn on a stove).  So as a kind of approximate definition, we can think of temperature as a measure of how much energy something could give away as heat.
The other key point is that temperature is only an average measure of energy, as the molecules are all moving at different speeds (we touched on this at the end of this post on “negative temperature”). This turns out to be crucial, because this helps explain the distinction between boiling and evaporating a liquid. Boiling is when you heat a liquid to its boiling point, at which point it overcomes the attractive forces holding the molecules together in a liquid. In evaporation, it’s only the random molecules that happen to be moving fast enough to overcome those forces that leave.
We can better represent this with a graph showing the probabilities of each molecule having a particular velocity or energy. (Here we’re using the Maxwell-Boltzmann distribution, which is technically meant for ideal gases, but works as a rough approximation for liquids.) That bar on the right marks out an energy of interest, so here we’ll say it’s the energy needed for a molecule to escape the liquid (vaporization energy). At every temperature, there will always be some molecules that happen to have enough energy to leave the liquid. Because the more energetic molecules leave first, this is also why evaporating liquids cool things off.
A graph showing overlapping probability distributions of molecular energy at several temperatures, with a vertical bar marking the escape energy.

Maxwell-Boltzmann distributions of the energy of molecules in a gas at various temperatures.

You might wonder why, if your glass of water or a drenched towel is technically cooling off from evaporation, it will still completely evaporate over time. It’s because the water keeps warming back up to room temperature: molecular collisions keep bringing the remaining molecules back to a similar Boltzmann distribution.
My friend also makes a good observation in comparing putting the towel out in the sun versus hanging it in a bathroom. Infrared light from the sun will heat up the towel compared to one hanging around in your house, and you can see that at the hotter temperatures, more molecules exceed the vaporization energy, so evaporation will be faster. (In cooking, this is also why you raise the heat but don’t need to boil a liquid to make a reduction.)
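As a minimal numerical sketch of that point, here’s the fraction of molecules above a fixed escape energy at two temperatures, using the Maxwell-Boltzmann energy distribution as our rough stand-in for the liquid (arbitrary units, Boltzmann’s constant set to 1; all numbers are illustrative):

```python
import numpy as np

def fraction_above(e_escape, temperature):
    """Fraction of molecules with energy above e_escape, using the
    Maxwell-Boltzmann energy distribution f(E) ~ sqrt(E) * exp(-E/kT).
    Arbitrary units with Boltzmann's constant k = 1."""
    energies = np.linspace(0.0, 60.0 * temperature, 200_000)
    weights = np.sqrt(energies) * np.exp(-energies / temperature)
    weights /= weights.sum()  # normalize the discrete distribution
    return weights[energies >= e_escape].sum()

cool = fraction_above(e_escape=5.0, temperature=1.0)
warm = fraction_above(e_escape=5.0, temperature=1.2)  # 20% warmer
```

In this toy model, a 20% temperature bump more than doubles the escaping fraction, which is the intuition behind the sunny towel drying faster.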

There’s another factor that’s really important in evaporation compared to boiling. You can only have so much water in a region of air before it starts condensing back into a liquid (when you see dew or fog, there’s basically so much water vapor it starts re-accumulating into drops faster than they can evaporate). So if it’s really humid, this process goes slower. This is also why people can get so hot in a sauna. Because the air is almost completely steam, their sweat can’t evaporate to cool them off.

There Are Probably No Nanoparticles in Your Food… At Least, Not Intentionally

Recently, Mother Jones posted an article about “Big Dairy” putting microscopic pieces of metal in food. Their main source is the Project on Emerging Nanotechnologies and its Consumer Products Inventory, a collaboration between the Wilson Center and Virginia Tech. Unfortunately, the Mother Jones piece seems to misunderstand how the CPI is meant to be used. But another problem is that the CPI itself seems poorly designed as a tool for journalists.

So what’s the issue? The Mother Jones piece mainly focuses on the alleged use of nanoparticles of titanium dioxide (TiO2) in certain foods to enhance colors, making whites whiter or brightening other colors. First, the piece makes an error in its description of TiO2 as a “microscopic piece of metal”. Titanium is a metal, but metal oxides are not, unless you consider rust a metal (which would also be wrong). But another issue is “microscopic”. Just because something is microscopic, which generally means smaller than your eye can see, doesn’t mean it’s a nanomaterial. The smallest thing you can see at a normal reading distance is about a tenth of a millimeter, which is 1000 times bigger than the 100 nanometer cut-off we typically use to talk about nanoparticles.

A clear glass dish holds a bright white powder.

Titanium dioxide is a vivid white pigment, even as macroscopic particles.

And that’s what confuses me most here. As you can see above, titanium dioxide is white as a powder, but in that form its particles are several hundred nanometers wide at minimum, if not on the scale of microns (1000 nanometers). In fact, nanoparticles of TiO2 are too small to scatter visible light, so they can’t appear white. A friend reminded me how sunscreens have switched from large TiO2 particles to actual nanoparticles precisely because it helps the sunscreen go on clearer. I’m not naive enough to think food companies wouldn’t try to save a buck while improving and standardizing appearances, but I also don’t think food scientists are dumb enough to pay for a version of a material that can’t fulfill the purpose they’re adding it for. So TiO2 is probably used in some foods, but not at a nanoscale that radically changes its health properties.
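The optics behind this can be sketched with the Rayleigh scattering scaling, where the scattered intensity from particles much smaller than the wavelength grows like the sixth power of the diameter. This is my own rough illustration: it ignores refractive index entirely, and the scaling weakens as particles approach the wavelength (as pigment-grade TiO2 does), but it shows why shrinking the particles kills the white appearance:

```python
# Rayleigh-regime scaling: scattering cross-section ~ d^6 / wavelength^4.
# Rough illustration only: ignores refractive index, and the d^6 law
# breaks down as d approaches the wavelength (as for pigment-grade TiO2).
def relative_scattering(d_nm, wavelength_nm=550):
    return d_nm**6 / wavelength_nm**4

pigment = relative_scattering(250)  # pigment-scale particle, ~250 nm (assumed)
nano = relative_scattering(50)      # sunscreen-style nanoparticle, ~50 nm (assumed)
ratio = pigment / nano              # (250/50)^6 = 15625x more scattering
```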

But I don’t entirely blame Mother Jones. The thing is, the main reason I had a hunch the article seemed wrong is because one of my labmates at UVA has been working with TiO2 nanotubes for the last three years, and I’ve seen his samples. If I didn’t know that, and I just saw PEN include TiO2 on its list of nano additives, I would be inclined to believe it. PEN saw the Mother Jones piece and another similar article and responded by pointing out that the inventory categorized their inclusion of TiO2 in the products as having low confidence it was actually used. But their source is an environmental science paper including actual chemical analyses of food grade TiO2, so why do they give that low confidence? Also, PEN claims the CPI is something the public can use to monitor nanotechnology in products, so maybe they should rethink how confident they are in their analysis if they want to keep selling it that way.

The paper the CPI references for the TiO2 claim is interesting too. That paper actually shows that most of the TiO2 is around 100 nm (figure below). But like I said, that’s kind of pushing the limit on how small the particles can be and still look white. It might be that the authors stumbled across a weird batch, as they note that in liquid products containing TiO2, less than 5% of the TiO2 could pass through filters with pores 450 nanometers wide. Does the current process used to make food grade TiO2 end up making a lot of particles that are actually smaller than needed? Or maybe larger particles are breaking down into the smaller particles that Weir sees while in storage. This probably does need more research to see whether other groups can replicate these results.

A histogram showing the distribution of particle sizes of TiO2. Categories go from 40 nanometers to 220 nanometers in intervals of 10. The greatest number of particles have diameters of 90-100 or 100-110 nanometers.

Distribution of TiO2 particle sizes in food grade TiO2. From Weir et al.

Making Fuel Out of Seawater Is Only One Part of An Energy Solution

So I recently saw this post about a breakthrough the Navy made in producing fuel from seawater make the rounds on Facebook, courtesy of the questionable “alternative news” site Addicting Info, and it kind of set off my BS detector. First, because this story is a few months old; it actually turned out the article was from April, so part of my skepticism was unfounded. But the opening claim that this wasn’t being reported much in mainstream outlets is wrong, as several sites beat them to the punch (even FOX NEWS, which would probably make Addicting Info’s head explode). The other thing that struck me as odd was how the Addicting Info piece seemed to think this technology is practically ready to use right now. That surprised me, because for nearly the last two years, my graduate research at UVA has been focused on developing materials that could help produce fuel from CO2.

This Vice article does a pretty good job of debunking the overzealous claims made by the Addicting Info piece and others like it. As Vice points out, you need electricity to make hydrogen from water. Water is pretty chemically stable in most of our everyday lives. The only way the average person ends up splitting water is if they have metal rusting, which would be a really slow way to generate hydrogen, or by putting a large battery in water for one of those home electrolysis experiments.

The Naval Research Lab seems kind of unique among the groups looking at making fuel from CO2 in that they’re extracting hydrogen and CO2 from water as separate processes from the step where they are combined into hydrocarbons. Most of the other research in this area looks at having metal electrodes help this reaction in water (nearly any metal from the middle of the periodic table can split CO2 with enough of a negative charge). Because of water’s previously mentioned stability, they often add a chemical that can more easily give up hydrogen. A lot of groups use potassium bicarbonate, a close relative of baking soda that has potassium instead of sodium, both to improve the conductivity of the water and because the bicarbonate ion really easily gives up hydrogen. In these set-ups, the goal is for the electricity to help the metal break off an oxygen from a CO2 to make CO, and once you get enough CO, to start adding hydrogen to the molecules and linking them together.

A chemical diagram shows a CO2 molecule losing an oxygen atom on a copper surface to make CO. When another CO is nearby, the two carbon atoms link together.

Carbon atoms are initially removed from CO2 molecules on a copper surface, forming CO. When CO molecules get close to each other, they can bond together. From Gattrell, Gupta, and Co.

But basically, no matter what reaction you do, if you want to make a hydrocarbon from CO2, you need to use electricity, either to isolate hydrogen or cause the CO2 to become chemically active. As the Vice article points out, this is still perfectly useful for the Navy, because ships with nuclear reactors continually generate large amounts of electricity, but fuel for aircraft must be replenished. If you’re on land, unless you’re part of the 30% of the US that gets electricity from renewable sources or nuclear plants, you’re kind of defeating the point. Chemical reactions and industrial processes always waste some energy, so burning a fossil fuel, which emits CO2, to make electricity that would then be used to turn CO2 back into fuel would always end up with you emitting more CO2 than you started with.
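A toy efficiency chain makes the point. The three efficiencies below are illustrative assumptions on my part (real plants and processes vary), but any realistic numbers multiply out well below 1:

```python
# Round trip: burn fossil fuel -> electricity -> hydrogen -> synthetic fuel.
# All three efficiencies are illustrative assumptions, not measured values.
power_plant_eff = 0.40   # fossil plant: chemical energy -> electricity
electrolysis_eff = 0.70  # electricity -> hydrogen
synthesis_eff = 0.60     # hydrogen + CO2 -> hydrocarbon fuel

round_trip = power_plant_eff * electrolysis_eff * synthesis_eff  # ~0.17

# Storing 1 unit of energy as synthetic fuel means burning ~6 units of
# fossil fuel first, emitting far more CO2 than the process captures.
fossil_energy_burned = 1 / round_trip
```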

However, this process (or one like it) could actually be useful in a solar or wind-based electricity grid. Wind and solar power can be sporadic; obviously, any solar grid must somehow deal with the fact that night exists, and both wind and solar power can be interrupted by the weather. (Nuclear power doesn’t have this issue, so this set-up would be irrelevant there.) However, it’s also possible for solar and wind to temporarily generate more electricity than customers are using at the time. The extra electricity can be used to power this CO2-to-fuel reaction, and the fuel can be burned to provide extra power when the solar or wind plants can’t generate enough electricity on their own. This is also where the Vice article misses something important. Jet fuel can’t be methane, but methane is basically the main component of natural gas, which is burned to provide about another 30% of the electricity generated in the US today. And because methane is a small molecule (one carbon atom, four hydrogen atoms), it can be easier to make than the long hydrocarbons needed for jet fuel.

Also, one thing I’m surprised never comes up when talking about this is using it for long-term human space exploration, as a way to maintain a breathable atmosphere for astronauts and to build materials. If you can build up the carbon chains for jet fuel, you could also make the precursors to lots of plastics. The International Space Station is entirely powered by solar panels, and solar panels are typically envisioned as part of space colonies. Generally, electricity generation shouldn’t be a major problem in any of the manned missions we’re looking at for the near future, so this could be a major way to help future astronauts or space colonists generate the raw materials they need and maintain their environment.

If you want to read more about the Naval Research Lab’s processes, here are some of the journal articles they have published lately:

Cosmos Tackled Climate Change in a Wonderfully Satisfying Way

So I worked out while watching this week’s episode of Cosmos. I’m several weeks “behind”, if it’s possible to be behind for a documentary series (though I’ve DVRed them all for future marathon sessions). But this was a wonderful episode to come back to. It was a really good primer on contemporary understanding of climate change, especially addressing rebuttals from skeptics that have become more common over the last 20 years or so. In particular, I liked these points:

  • Tyson pointed out that the greenhouse effect is “beneficial” in that Earth would be like 30 degrees cooler without it. But he also points out how little CO2 it takes to get that much of a shift, and how little CO2 we need to add to make temperature change too much.
  • It’s not that climate is changing that’s bad; it’s that climate change that is too fast can destroy ecosystems. Tyson pointed out that the speed of anthropogenic CO2 emissions doesn’t match anything previously seen in Earth’s history, aside from a past mass extinction believed to have been caused by climate change.
  • We can in fact figure out the difference between CO2 we’re emitting and CO2 from many natural sources, and that evidence points out most of the new CO2 is from our fossil fuels. How? The CO2 from fossil fuels is made up of carbon atoms that are different weights from what we would otherwise expect in the atmosphere.
  • The Sun hasn’t really appreciably changed to cause the temperature increases we see.
  • The ocean is warming up. Actually, I’m not sure the show mentioned the “greenhouse pause” people talk about, but what is important to note is that so-called “stop” in air temperatures doesn’t really tell the whole story. The Spaceship of the Imagination gave us an infrared view of the Earth to look at the planet’s heat emissions. We’re finding out that while the atmosphere may not have heated up over this last decade, the ocean definitely has. I’m also surprised the show didn’t mention the potential danger of ocean acidification.

I also loved the presentation of climate as better understood by large-scale driving forces and not “just” the average of weather. I don’t know any climatologists, but I’m sure such a simplifying definition of their field has always bugged them. One major factor is energy conservation; ever since I took a simple engineering class that stressed how much of understanding technical systems comes down to applying various conservation laws, I’ve tried to emphasize that more. Those space satellites let us measure how much heat Earth radiates away, and given that the sun’s input is relatively constant, if the atmosphere traps more heat in, then we must be heating up. (This is also the source of my very basic understanding of why severe weather gets worse under climate change: we’re trapping in more energy, and storm systems basically get stronger because that energy has to go somewhere.) And I also loved that they tied in how our knowledge of other planets has helped inform our understanding of Earth. (And even gave a shout-out to Carl Sagan’s research!) I do have one major peeve, though: they committed the cardinal sin of data presentation by not including any scale for the color representing temperature increases on their maps.
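That energy-balance argument is the textbook zero-dimensional climate model: absorbed sunlight equals radiated heat. Using standard values for the solar constant and Earth’s albedo, it recovers both the no-greenhouse temperature of about 255 K and the roughly 30-degree greenhouse boost Tyson mentioned:

```python
# Zero-dimensional radiative balance: S * (1 - albedo) / 4 = sigma * T^4
SOLAR_CONSTANT = 1361.0  # W/m^2 at Earth's orbit
ALBEDO = 0.30            # fraction of sunlight reflected back to space
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/m^2/K^4

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4  # averaged over the whole sphere
t_no_greenhouse = (absorbed / SIGMA) ** 0.25  # ~255 K without greenhouse gases

# Observed mean surface temperature is about 288 K; the gap is the
# greenhouse effect.
greenhouse_boost = 288.0 - t_no_greenhouse  # ~33 K
```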

I had never heard of Frank Shuman before and now I want to look him up. It amazes me how similar his observation, that a small region of solar power generators could power civilization, is to Rick Smalley’s idea, minus the nanotubes. If there’s anything I’ve learned the last two years in grad school while doing literature research for my own project, it’s how freakishly non-linear and coincidental energy research can be. To a more philosophically minded friend, I joked that learning about energy technologies has destroyed the last shreds of my old faith in progress, the idea that human history is generally a linear march towards the good. But I’ll be darned and say I’m still an optimist, and I had slight chills hearing JFK’s “We choose to go to the moon” speech over scenes of that (super utopian) city of sustainable energy and green spaces.

Japan Owned Everyone at Coupling Catalysts in the 1970s – Why?

In a slightly distracting science blog crawl, I came across something really interesting. I was looking at the Everyday Scientist’s past Nobel prize predictions, wondering who this Sonogashira was that he predicted to win, and was surprised to see he was excluded from the Nobel. The Everyday Scientist predicted Kenkichi Sonogashira would be included in the 2010 Nobel Prize in Chemistry if it was awarded for the study of coupling reactions, a class of chemical reactions catalyzed by metals that help link together hydrocarbon chains, along with Richard Heck and Akira Suzuki. The prize did end up being awarded for the study of coupling reactions, but it went to Heck, Suzuki, and Ei-ichi Negishi instead of Sonogashira.

Negishi is Japanese, but was born in the 1930s in the Japanese puppet state in China called Manchukuo, got his PhD in America, and spent the rest of his career in America. Suzuki did a post-doc in America, but after that, he did all his work in Japan, mainly at Hokkaido University. Heck is an American. None of them were at the same university, at least from cursory glances at their profiles, so I’m really curious about whether or not they collaborated (obviously, you don’t need to be at the same institution to collaborate in scientific research, but it tends to be really easy if that’s the case).

What’s really interesting is looking at the list of specific coupling reactions that have been researched, and the discoverers of each.

| Reaction | Year | Discoverers (nationality) |
|---|---|---|
| Kumada coupling | 1972 | Makoto Kumada (Japanese), Robert Corriu (French) |
| Heck reaction | 1972 | Richard Heck (American), Tsutomu Mizoroki (Japanese) |
| Sonogashira coupling | 1975 | Kenkichi Sonogashira (Japanese), Yasuo Tohda (Japanese), Nobue Hagihara (Japanese) |
| Negishi coupling | 1977 | Ei-ichi Negishi (Japanese) |
| Stille cross coupling | 1978 | John Stille (American), Toshihiko Migita (Japanese), Masanori Kosugi (Japanese) |
| Suzuki reaction | 1979 | Akira Suzuki (Japanese), Norio Miyaura (Japanese) |
| Hiyama coupling | 1988 | Tamejiro Hiyama (Japanese), Yasuo Hatanaka (Japanese) |
| Buchwald-Hartwig reaction | 1994 | Stephen Buchwald (American), John Hartwig (American) |
| Fukuyama coupling | 1998 | Tohru Fukuyama (Japanese), Hidetoshi Tokuyama (Japanese), Satoshi Yokoshima (Japanese) |
| Liebeskind–Srogl coupling | 2000 | Lanny Liebeskind (American), Jiri Srogl (Czech) |

As you can see, there are a lot of Japanese researchers on this list. Few of them are from the same institutions, according to their Wikipedia profiles or easy Google searches. And the 1970s show a flurry of activity. It’s not weird for an initial discovery to quickly kick off a lot of related research and lead to other discoveries, which seems to be the case here, or for one country to end up with a leading edge in a certain field of research, but the combination of both in such an originally niche field seems fascinating, especially given the small degree of institutional overlap. (Also, it’s interesting that no name appears on the list twice, which you might expect in related discoveries.) I’m not super familiar with organic chemistry, so is having a reaction named after you not as big a deal as I think it is?

Was there something unique about the nature of organic chemistry in Japan at the time that led to such an efficient expansion and application of knowledge? New journals that came out to help spread knowledge in the community? A new push in research focus by funding agencies? Did some conferences or scientific organizations help encourage collaborations on a broader scale?

I’d love to see someone take this on as some sort of case study in the history and/or sociology of science. I feel like there would be something fascinating here. What’s also interesting, as referenced in this New York Times article on the Nobel prize, is that many of these reactions didn’t catch on in industry until the 90s, so applications probably weren’t behind the original breakneck pace.


Going over the Critique of Cosmos part 1 and a brief review of part 2

Like I said before, Hank Campbell had some interesting critiques of the first episode of Cosmos. I thought nearly all of them missed the mark, and to be honest, it seems like he’s being a bit of a science hipster here. I want to go more in depth, and I’ll do that here. Let’s go through his points.

1. Venus was not caused by global warming

Let’s look at what Campbell says: “We have to ask why he thinks Venus is the way it is due to the greenhouse effect — which is another way of saying global warming. Venus is almost 900 degrees Fahrenheit and the clouds are sulfuric acid. Even the most aggressive climate change models and their 20-foot ocean rises don’t predict that for Earth… If this sequel to Cosmos had been made in 1989 the screenwriters of Cosmos would have invoked acid rain on Venus instead of global warming. Regardless, CO2 did not cause the poisonous conditions on Venus; instead, CO2 is an effect of the poisonous conditions on Venus. Invoking the greenhouse effect when talking about Venus is like blaming ocean liners for inventing barnacles.”

Okay, but global warming isn’t the same as the greenhouse effect. If it weren’t for the CO2, SO2, and H2O, Earth’s surface temperature would be significantly lower; that is the definition of the greenhouse effect. More technically, the greenhouse effect is when a gas in an atmosphere absorbs heat radiated from a planet’s surface and redirects some of the escaping heat back towards the surface. This shifts the equilibrium temperature higher than it would be without the greenhouse gas. In an exchange on a follow-up post on his website, Science 2.0, Campbell says the real culprit is hydrogen escape. (Note: I’m the “Matt” participating in the comment section.) “On Venus gravity, hydrogen is already light so a lack of gravity causes the water problem to go nuts. No water, CO2 goes crazy – but CO2 did not cause the atmosphere of Venus, Tyson knows it, anyone who knows high school atmospheric science knows it…”

Campbell probably overestimates hydrogen escape by itself. Hydrogen escapes from Earth too (we’re predicted to lose all our water to hydrogen escape within the next billion years). Also, Venus’ surface gravity is about 90% that of Earth’s, so hydrogen shouldn’t experience a much weaker pull there than it does here. The chain of cause and effect leading to atmospheric changes on Venus doesn’t mean the greenhouse effect doesn’t describe the current temperature, or that it never played a role in the evolution of Venus’ climate. Even planetary scientists describe Venus’ history as the result of a “runaway greenhouse effect”. Once most of Venus’ water was in the atmosphere, it stayed that way: water vapor is also a greenhouse gas, and it raised the temperature enough to prevent condensation into massive oceans that could have held on to the water longer. (It’s easier to strip hydrogen from molecules in the atmosphere.) This is also why I don’t buy the idea that this segment relates at all to climate change debates. CO2 is not the only greenhouse gas (this typically comes up in modern discussions about methane), and Tyson didn’t actually mention the concentration of CO2 in Venus’ atmosphere. Campbell takes his concern about framing too far here.

2. The Multiverse is Not Science

Campbell: “Any time a scientist begins a sentence with “Many of us suspect,” it is codespeak for “we sit around and discuss it at the bar.”

“Why not just let that go as artistic license? When Carl Sagan was filming the original Cosmos program, physicists Alan Guth and Andrei Linde had not even come up with “inflation” for the Big Bang that Tyson mentions casually. Thus, it would not have made it into the original Cosmos as fact. Too much speculation makes the audience wonder if scientists are going to be trusted guides or another version of Dr. Oz and his Miracle Vegetable of the week. Science doesn’t need to toss in speculation to be interesting, because what we know and therefore don’t know is fascinating enough.”

“The multiverse is not science. It is more like an anthropic secular alternative to a divine origin. It’s not science because it can’t be proved or disproved — it’s just postmodernism with some math. And it’s invoked shortly after the introduction where Tyson tells us to test everything.”

I also kind of cringed when Tyson mentioned a multiverse, and it even got a visualization with the spaceship of the imagination. But if you pay attention to the language of the episode, you’ll see that the writers were actually being pretty deliberate. Tyson used “suspect” here, but for the rest of the episode he never describes a scientific theory as anything less than a fact (which is how scientists treat theories). This means Cosmos is not elevating the idea of a multiverse to the level of accepted scientific theory. And it is true that many physicists and cosmologists “suspect” a multiverse. Also, Campbell seems to be thinking only of a string theory multiverse in his critique. Tyson’s description didn’t specify the “kind” of multiverse, but the description and visualization seemed to suggest one resulting from “chaotic inflation”. That kind of multiverse may actually be testable if we see an “imprint”, and the new gravitational wave measurements suggest chaotic inflation is the inflation model that more closely matches our universe.

3. There is No Sound In Space

Like I said before, there actually isn’t a good defense of this. Tyson wouldn’t accept it in any other show, and I’m surprised he let that happen here.

4. Giordano Bruno Was Not More Important To Science Than Kepler And Galileo

Like I said before, “The episode did not claim Bruno was more important than contemporary natural philosophers and empiricists, and it definitely pointed out that he wasn’t a scientist. Bruno’s ideas, though, do fit in well with the idea of understanding our place in the universe, which was the entire point of the first episode, as stated in like the first five minutes.”

5. The Universe Was Also Not Created in One Year

“On January 1st, we had the Big Bang and on December 31st, I am alive, less than a tiny fraction of a millisecond before midnight. That can’t be right — it took me a whole day just to write this article.

“Oh, Cosmos is not being literal? Oddly, a number of religious critics, Tyson included, insist that the Book of Genesis is taken literally by too many people who read the Bible. Unless we accept that figurative comparisons help make large ideas manageable, a year is no more accurate than six days — it is instead a completely arbitrary metric invented to show some context for how things evolved.”

Oh my God, seriously? Hank Campbell is trying so hard not to be in a culture war that he wound up back in it. First, Tyson is not nearly as involved in the science side of the “culture war” as, say, Richard Dawkins. Also, many Americans don’t take the Biblical account of Genesis figuratively: according to a 2012 Gallup poll, 46% of Americans think God created humans instantly in their present form within the last 10,000 years.

All these complaints just seem… odd. Like I said, “science hipster” is the best description I can think of. I was surprised to hear Campbell really liked the second episode, because I actually liked that episode less. The description of the evolution of the eye seemed like a just-so story in some steps, and probably would not win over the creationists who argue that “the eye is too complex to have evolved”. The step showing the evolution of the lens seemed HUGE, and it wasn’t associated with an organism like most of the other steps were. I feel like it would have been more straightforward to show the variety of eyes in the context of the tree of life, but maybe I say this because I’m not as familiar with evolutionary biology.

The visualization of DNA seemed “too busy” at times, and the show kept changing schematic representations without explaining why. I get that DNA doesn’t really look as pretty as it does in my old bio textbooks, but I was unsure of what was being represented at times. The comparison of DNA sequences also seemed a bit odd without a description of what base pairs are.

The Titan bit also seemed odd. I liked the description of Titan, but not the idea of the ship of the imagination visiting a hydrothermal vent (or its analogue) there. To me it seemed like the show was saying the hydrocarbon lakes on Titan are as deep as Earth’s oceans, and unless I’m really behind the times, I don’t think we know their depths that well. We definitely don’t know if there are hydrothermal vents on Titan, and that visualization wasn’t accompanied by Tyson saying we “suspect” or some other phrase that would give it less weight than a theory/fact, the way the multiverse visualization was.