Red Eye Take Warning – Our Strange, Cyclical Awareness of Pee in Pools

The news has been abuzz lately with a terrifying revelation: if you get red eye at the pool, it’s not from the chlorine, it’s from urine. Or, to put it more accurately, from the product of chlorine reacting with a chemical in urine. In the water, chlorine readily reacts with uric acid, a chemical found in urine (and also in sweat), to form chloramines. It’s not surprising that this caught a lot of people’s eyes, especially since those product chemicals are linked to more than just eye irritation. But what’s really weird is what spurred this all on. It wasn’t a new study that finally proved this; it was just the release of the CDC’s annual safe swimming guide and a survey from the National Swimming Pool Foundation. And this isn’t the first year the CDC has mentioned this fact: an infographic from 2014’s Recreational Water Illness and Injury Prevention Week does, and so do two different posters from 2013 (the posters have had some slight tweaks, but the Internet Archive confirms they were there in 2013 and even 2012). On a slightly related note, a poster from 2010 says that urine in the pool uses up the chlorine.

A young smiling boy is at the edge of a swimming pool, with goggles on his forehead.

My neighborhood swim coach probably could have convinced me to wear goggles a lot earlier if she’d told me they would keep pee out of my eyes.

Here’s what I find even stranger. Last year there was a lot of publicity about a study suggesting the products of the chlorine-uric acid reaction might be linked to more severe harm than just red eye. But neither Bletchley, the leader of the study, nor any of the articles about it links those chemicals to red eye at all, or even mentions urine’s role in red eye at the pool. (If you’re curious about the harm but don’t want to read the articles, the conclusion is that the levels don’t even reach the danger limits set for drinking water.) According to The Atlantic, Bletchley is more worried that an event like a swimming competition could deplete the chlorine available for disinfecting a pool in a short amount of time. This seems strange, because it would have been a great time to point out that eye irritation can be a decent personal marker of pool quality, as a way to empower people. If you’re at a pool and your eyes feel like they’re on fire, or you’re hacking a lot without having swallowed water, that’s a good sign to tell the lifeguard they need to add more chlorine, because most of it has probably formed chloramines by then.

Discussion of urine and red eye seems to phase in and out over time, and even the focus on whether it’s sweat or urine does too. In 2013, the same person from the CDC spoke with LiveScience, mentioning that pool smell and red eye are mainly caused by chloramines (and therefore urine and sweat), not chlorine. A piece from 2012 reacting to a radio host goes into detail on chloramines. During the 2012 Olympics, the Huffington Post discussed the irritating effects of chloramines on your body, including red eye, and the depletion of chlorine available for sterilization, after many Olympic swimmers admitted to peeing in the pool. (Other pieces seem to ignore that this reaction happens and assume it’s fine, since urine itself doesn’t have any compounds or microbes that would cause disease.)

In 2009, CNN mentioned that chloramines cause both red eye and some respiratory irritation. The article is from around Memorial Day, suggesting it was just a typical awareness piece. Oh, and it also refers to a 2008 interview with Michael Phelps admitting that Olympians pee in the pool. The CDC also mentioned chloramines that year, as potential asthma triggers in poorly maintained and ventilated pools and as eye irritants, in a web page and a review study. In 2008, the same Purdue group published what seems like the first study to analyze these byproducts, because others had only looked at inorganic molecules. There the health concern is mainly about respiratory problems caused by poor indoor pool maintenance, because these chemicals can start to build up. Nothing about red eye is mentioned there.

In 2006, someone on the Straight Dope discussion boards referred to a recent local news article attributing red eye in the pool to chlorine bonding with pee or sweat, and asked whether that’s true. Someone on the board claimed it’s actually because chlorine in the pool forms a small amount of hydrochloric acid that will always irritate your eyes. A later commenter linked to a piece by the Water Quality and Health Council pinning chloramine as the culprit. An article from the Australian Broadcasting Corporation talks about how nitrogen from urine and sweat is responsible for that “chlorine smell” at pools, but doesn’t mention it causing irritation or using up chlorine that could go toward sterilizing the pool.

Finally, I just decided to look up the earliest mention possible by restricting Google searches to earlier dates. Here is an article from the Chicago Tribune in 1996.

There is no smell when chlorine is added to a clean pool. The smell comes as the chlorine attacks all the waste in the pool. (That garbage is known as “organic load” to pool experts.) So some chlorine is in the water just waiting for dirt to come by. Other chlorine is busy attaching to that dirt, making something called combined chlorine. “It’s the combined chlorine that burns a kid’s eyes and all that fun stuff,” says chemist Dave Kierzkowski of Laporte Water Technology and Biochem, a Milwaukee company that makes pool chemicals.

We’ve known about this for nearly 20 years! We just seem to forget. Often. I realize part of this is the seasonal nature of swimming, and so most news outlets will do a piece on being safe at pools every year. But even then, it seems like every few years people are surprised that it is not chlorine that stings your eyes, but the product of its reaction with waste in the water. I’m curious whether I can find older mentions through LexisNexis or the journal searches I can do at school. (Google results for sites older than 1996 don’t make much sense, because it seems like the crawler is picking up more recent related stories that happen to show up as suggestions on older pages.) I’m also curious about the distinction between Bletchley’s tests and the pool supplies that measure “combined chlorine” and chloramine, which a 2001 article discusses as causing red eye. I imagine his method is more precise, but Bletchley also says people don’t measure it, and I wonder why.


What is a State of Matter?

This Vice article excitedly talking about the discovery of a new state of matter has been making the rounds a lot lately. (Or maybe it’s just that I started following Motherboard on Twitter after a friend pointed the article out.) This led to two good questions: What is a state of matter? And how do we know we’ve found a new one? We’ll consider the second one another time.

In elementary school, we all learned that solid, liquid, and gas are different states of matter (and maybe, if your science class was ahead of the curve, you talked about plasma). Recent scientific research has also focused a lot on two other states of matter: the exotic Bose-Einstein condensate, which is used in many experiments to slow down light, and the quark-gluon plasma, which closely resembles the first few milliseconds of the universe after the Big Bang. What makes all of these different things states? Essentially, a state of matter is the collective behavior of the particles that make up your object of interest, and each state behaves so differently that you can’t apply the description of one to another. A crucial point here is that only collections of particles, and typically very large numbers of them, can form states. If you somehow had only one molecule of water, it wouldn’t really make sense to say whether it is solid, liquid, or gas, because there’s no other water for it to interact with to develop collective properties.

So room-temperature gold and ice are solids because they’re described by regular, repeating crystal lattices. Molten gold and water are no longer solids because they’re no longer described by a regular crystal structure, though they still have relatively strong forces between molecules. The key property of a Bose-Einstein condensate is that most of its constituent particles are in the lowest possible energy state. You can tell when you’ve switched states (a phase transition) because there is a discontinuous change in energy, or in a property related to energy. In everyday life, this shows up as the latent heat of melting and the latent heat of vaporization (or evaporation).

The latent heat of melting is what makes ice so good at keeping drinks cold. It’s not just that ice is colder than the liquid; the discontinuous “jump” of energy required to actually melt 32°F ice into 32°F water also absorbs a lot of heat. You can see this jump in the heating curve below. You also see this when you boil water. Just heating water to 212 degrees Fahrenheit doesn’t cause it all to boil away; your kettle or stove also has to provide enough heat to overcome the latent heat of vaporization. And the heating curve is discontinuous because the added heat doesn’t raise the temperature until the phase transition is complete. You can try this for yourself in the kitchen with a candy thermometer: melting ice will always measure 32°F, even if you threw it in the oven, and boiling water will always measure 212°F.

A graph with the horizontal axis labelled “Q (heat added)” and the vertical axis labelled “temperature (in Celsius)”. It shows three sloped segments, labelled, from the left: ice, heating of water, and heating of water vapor. The sloped lines for “ice” and “heating of water” are connected by a flat line representing the heat used to melt ice to water. The “heating of water” and “heating of water vapor” sloped lines are connected by a flat line labelled “heat used to vaporize water to water vapor”.

The heating curve of water. The horizontal axis represents how much heat has been added to a sample of water. The vertical axis shows the temperature. The flat lines are where heat is going into the latent heat of the phase transition instead of raising the temperature of the sample.
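The latent-heat “jump” is easy to check yourself with standard textbook constants (the 334 J/g and 4.18 J/(g·K) values below are those textbook numbers, not anything from this post):

```python
# Rough sketch: why melting ice soaks up so much heat.
# Constants are standard textbook values (assumptions, not from the post).
L_FUSION = 334.0   # J per gram: latent heat of melting for ice
C_WATER = 4.18     # J per gram per kelvin: specific heat of liquid water

mass = 10.0  # grams of ice in your drink

q_melt = mass * L_FUSION        # heat absorbed just melting 0 °C ice into 0 °C water
q_warm = mass * C_WATER * 20.0  # heat absorbed warming that water from 0 °C to 20 °C

print(q_melt)         # 3340.0 J to melt the ice, with no temperature change at all
print(round(q_warm))  # 836 J to then warm the meltwater by a full 20 °C
```

Melting alone absorbs roughly four times the heat of warming the resulting water by 20 °C, which is the flat segment in the heating curve above doing its job.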

There’s also something neat about another common material related to phase transitions. The transition between the liquid and glass states does not have a latent heat. This is the one thing that makes me really sympathetic to the “glasses are just supercooled liquids” view. Interestingly, this also means there’s really no such thing as a single melting temperature for a given glass, because the heating or cooling rate becomes very important.

But then the latter part of the article confused me, because it points out that “state of matter” seems kind of arbitrary compared to “phase”, which we talk about all the time in materials science (and as you can see, we say both go through “phase” transitions). A phase is a region of material with consistent properties throughout it, and a material with the same composition can be in different phases but still in the same state. For instance, there actually is a phase of ice called ice IX, and the arrangement of the water molecules in it is different from that in conventional ice, but we would definitely still consider both to be solids. Switching between these phases, even though they’re in the same state, also requires some kind of energy change.

Or if you heat a permanent magnet above its critical temperature and cause it to lose its magnetization, that’s a second-order phase transition. That is, while the heat and magnetization may have changed continuously, the ease of magnetizing it (which is a second derivative of the energy with respect to the strength of the magnetic field) has a jump at that point. Your material is still in a solid state and the atoms are still in the same positions, but it changed its magnetic state from permanent to paramagnetic. So part of me wonders whether we can consider that underlying electron behavior to be a definition of a state of matter or of a phase. The article makes it sound like we’re fine saying they’re basically the same thing. This Wikipedia description of “fermionic condensates” as a means to describe superconductivity also supports this idea.

Going by this description then means we’re surrounded by way more states of matter than the usual four we consider. With solids alone, you interact with magnetic metals, conductors (metals or the semiconductors in your electronics), insulating solids, insulating glasses, and magnetic glasses (amorphous metals are used in most of those theft-prevention tags you see) on a regular basis, which all have different electron behaviors. It might seem slightly unsatisfying for something that sounds as fundamental as “states of matter” to end up having so many different categories, but it just reflects an increasing understanding of the natural world.

A Nobel for Nanotubes?

A popular pastime on the science blogosphere is making Nobel predictions: educated guesses on who you think may win a Nobel prize in the various science categories (physics, chemistry, and physiology or medicine). I don’t feel like I know enough to really make detailed predictions, but I did make one. Okay, more of a dream than a prediction. But I feel justified, because Slate also seemed to vouch for it. What was it? I think a Nobel Prize in Physics should be awarded for the discovery and study of carbon nanotubes.

One potential issue with awarding a prize for carbon nanotube work could be priority. Nobel prizes can only be split between three people. While Iijima is generally recognized as the first to discover carbon nanotubes, it seems that they have really been discovered multiple times (in fact, Iijima appears to have imaged a carbon nanotube in his thesis nearly 15 years before what is typically considered his “discovery”). It’s just that Iijima’s announcement happened to come at a time and place where the concept of a nanometer-sized cylinder of carbon atoms could be both well understood and greatly appreciated as a major focus of study. The paper linked to points out that many of the earlier studies that probably found nanotubes were mainly motivated by PREVENTING their growth, because nanotubes were linked to defects and failures in other processes. The committee could limit the field by awarding the prize for the discovery of single-walled nanotubes, which brings the potential awardees down to Iijima and one of his colleagues, plus a competing group at IBM in California. This would also work because a great deal of the hype around carbon nanotubes focuses on single-walled tubes, since they generally have superior properties to their multi-walled siblings and theory focuses on them.

No matter what, I would say Mildred Dresselhaus should be included in any potential nanotube prize because she has been one of the most important contributors to the theoretical understanding of carbon nanotubes since the beginning. She’s also done a lot of theoretical work on graphene, but the prize for graphene was more experimental because while theorists have been describing graphene since at least the 80s (Dresselhaus even has a special section in that same issue), no one had anything pure to work with until Geim and Novoselov started their experiments.

In 1996, another form of carbon was also recognized with the Nobel Prize in Chemistry. Rick Smalley, Robert Curl, and Harold Kroto won the prize for their 1985 discovery of buckminsterfullerene (or “buckyballs”), for further work they did with other fullerenes, and for proving that these molecules did have ball-like structures. So while the prize for graphene recognized unique experimental work that could finally test theory, this prize was for an experimental result no one was expecting. Carbon had long been known to exist as a pure element in two forms, diamond and graphite, and no one expected to find another stable form. Fullerenes opened people’s minds to nanostructures and served as a practical base for the start of much nanotechnology research, which was very much in vogue after Drexler’s discussions in the 80s.

Six diagrams, in two rows of three, show the structures of six phases of carbon. The top left shows atoms arranged in hexagonal sheets layered on top of each other: this is graphite.

Six phases of carbon. Graphite and diamond are the two common phases we encounter in normal conditions.

So why do I think nanotubes should get the prize? One could argue the work just seems transitional between buckyballs and graphene, so a prize would be redundant. But while a lot of work using nano-enhanced materials now focuses on graphene, a great deal of it is based on previous experiments using carbon nanotubes, so the transition was scientifically important. And nanotubes still have some unique properties. The shape of a nanotube immediately brings a lot of interesting applications to mind that wouldn’t come up for flat graphene or spherical buckyballs: nano-wires, nano “test tubes”, nano pipes, nanomotors, nano-scaffolds, and more. (Also, when describing nanotubes, it’s incredibly easy to say they’re like nanometer-sized carbon fiber, but I realize that ease of generating one-sentence descriptions is typically not a criterion for Nobel consideration.) The combination of these factors makes nanotubes incredibly important in the history of nanotechnology and helped it transition into the critical field it is today.

What Happens When You Literally Hang Something Out to Dry?

I got a question today!  A good friend from high school asked:

Hey! So I have a sciencey question for you. But don’t laugh at me! It might seem kinda silly at first, but bear with me. Ok, how does water evaporate without heat? Like a towel is wet, so we put it in the sun to dry (tada heat!) but if its a kitchen or a bathroom towel that doesn’t see any particular increase in temp? How does the towel dry? What happens to the water? Does it evaporate but in a more mild version of the cycle of thinking?
It’s actually a really good question, and the answer depends on some statistical physics and thermodynamics. You know water is turning into water vapor all the time around you, but you can also see that these things clearly aren’t boiling away.

I’ve said before that temperature and heat are kind of weird, even though we talk about them all the time:

It’s not the same thing as energy, but it is related to that.  And in scientific contexts, temperature is not the same as heat.  Heat is defined as the transfer of energy between bodies by some thermal process, like radiation (basically how old light bulbs work), conduction (touching), or convection (heat transfer by a fluid moving, like the way you might see soup churn on a stove).  So as a kind of approximate definition, we can think of temperature as a measure of how much energy something could give away as heat.
The other key point is that temperature is only an average measure of energy, as the molecules are all moving at different speeds (we touched on this at the end of this post on “negative temperature”). This turns out to be crucial, because it helps explain the distinction between boiling and evaporating a liquid. Boiling is when you heat a liquid to its boiling point, at which point it overcomes the attractive forces holding the molecules together in a liquid. In evaporation, only the random molecules that happen to be moving fast enough to overcome those forces leave.
We can better represent this with a graph showing the probability of each molecule having a particular velocity or energy. (Here we’re using the Maxwell-Boltzmann distribution, which is technically meant for ideal gases, but it works as a rough approximation for liquids.) The bar on the right marks out an energy of interest; here we’ll say it’s the energy needed for a molecule to escape the liquid (the vaporization energy). At every temperature, there will always be some molecules that happen to have enough energy to leave the liquid. And because the more energetic molecules leave first, this is also why evaporating liquids cool things off.

Maxwell-Boltzmann distributions of the energy of molecules in a gas at various temperatures. From http://ibchem.com/IB/ibnotes/full/sta_htm/Maxwell_Boltzmann.htm

You might wonder: if your glass of water or a drenched towel is technically cooling off from evaporation, why does it still completely evaporate over time? Because the water keeps warming back up to room temperature, and molecular collisions keep bringing the remaining molecules back to a similar Boltzmann distribution.
My friend also makes a good observation in comparing a towel put out in the sun with one hung in a bathroom. Infrared light from the sun will heat up the towel compared to one hanging around in your house, and you can see that at hotter temperatures, more molecules exceed the vaporization energy, so evaporation will be faster. (In cooking, this is also why you can raise the heat but don’t need to boil a liquid to make a reduction.)
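As a rough numerical sketch of that comparison: using the Maxwell-Boltzmann energy distribution (which, as noted above, is only an approximation for liquids) and taking water’s heat of vaporization of about 40.7 kJ/mol as the escape energy, you can compare the fraction of molecules above that energy for a towel indoors versus one warmed in the sun. The two temperatures below are illustrative guesses:

```python
import math

# Fraction of molecules with energy above E0 in a Maxwell-Boltzmann energy
# distribution has the closed form erfc(sqrt(x)) + 2*sqrt(x/pi)*exp(-x),
# where x = E0 / (k_B * T).
K_B = 1.380649e-23          # J/K, Boltzmann constant
E_VAP = 40.7e3 / 6.022e23   # J per molecule (~water's heat of vaporization)

def fraction_above(temp_kelvin):
    x = E_VAP / (K_B * temp_kelvin)
    return math.erfc(math.sqrt(x)) + 2.0 * math.sqrt(x / math.pi) * math.exp(-x)

cool = fraction_above(295.0)  # towel hanging in a room-temperature bathroom
warm = fraction_above(320.0)  # towel warmed by the sun (illustrative temperature)

print(warm / cool)  # a modest temperature bump multiplies the escaping fraction severalfold
```

The absolute fractions are tiny either way, which is why the towel dries over hours rather than boiling off, but the sun-warmed towel’s fraction is several times larger.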

There’s another factor that’s really important in evaporation compared to boiling. You can only have so much water in a region of air before it starts condensing back into a liquid (when you see dew or fog, there’s basically so much water vapor that it re-accumulates into drops faster than they can evaporate). So if it’s really humid, this process goes slower. This is also why people can get so hot in a sauna: because the air is almost completely steam, their sweat can’t evaporate to cool them off.

There Are Probably No Nanoparticles in Your Food… At Least, Not Intentionally

Recently, Mother Jones posted an article about “Big Dairy” putting microscopic pieces of metal in food. Their main source is the Project on Emerging Nanotechnologies and its Consumer Products Inventory, a collaboration between the Wilson Center and Virginia Tech. Unfortunately, the Mother Jones piece seems to misunderstand how the CPI is meant to be used. But another problem is that the CPI itself seems poorly designed as a tool for journalists.

So what’s the issue? The Mother Jones piece mainly focuses on the alleged use of nanoparticles of titanium dioxide (TiO2) in certain foods to enhance colors, making whites whiter or brightening other colors. First, the piece makes an error in its description of TiO2 as a “microscopic piece of metal”. Titanium is a metal, but metal oxides are not, unless you consider rust a metal (which would also be wrong). But another issue is “microscopic”. Just because something is microscopic, which generally means smaller than your eye can see, doesn’t mean it’s a nanomaterial. The smallest thing you can see at a normal reading distance is about a tenth of a millimeter, which is 1000 times bigger than the 100 nanometer cut-off we typically use to talk about nanoparticles.
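The size gap here is worth making concrete, using the two numbers from the paragraph above (0.1 mm for the smallest visible speck, 100 nm for the conventional nanoparticle cutoff):

```python
# "Microscopic" is not the same as "nano": compare the scales.
visible_limit_mm = 0.1                           # smallest speck visible at reading distance
visible_limit_nm = visible_limit_mm * 1_000_000  # 1 mm = 1,000,000 nm
nano_cutoff_nm = 100                             # conventional upper bound for a nanoparticle

print(round(visible_limit_nm))                   # 100000
print(round(visible_limit_nm / nano_cutoff_nm))  # 1000
```

So the smallest thing you can see is still a thousand times wider than the nanoparticle cutoff: calling something microscopic tells you nothing about whether it is a nanomaterial.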

A clear glass dish holds a bright white powder.

Titanium dioxide is a vivid white pigment, even as macroscopic particles.

And that’s what confuses me most here. As you can see above, titanium dioxide is white as a powder, but in that form its particles are several hundred nanometers wide at minimum, if not on the scale of microns (thousands of nanometers). In fact, nanoparticles of TiO2 are too small to scatter visible light, and so they can’t appear white. A friend reminded me how sunscreens have switched from large TiO2 particles to actual nanoparticles precisely because it helps the sunscreen go on clearer. I’m not naive enough to think food companies wouldn’t try to save a buck to help improve and standardize appearances, but I also don’t think food scientists are dumb enough to pay for a version of a material that can’t fulfill the purpose they’re adding it for. So TiO2 is probably used in some foods, but not at a nanoscale that radically changes its health properties.

But I don’t entirely blame Mother Jones. The main reason I had a hunch the article was wrong is that one of my labmates at UVA has been working with TiO2 nanotubes for the last three years, and I’ve seen his samples. If I didn’t know that, and I just saw PEN include TiO2 on its list of nano additives, I would be inclined to believe it. PEN saw the Mother Jones piece and another similar article and responded by pointing out that the inventory categorized its inclusion of TiO2 in the products with low confidence that it was actually used. But their source is an environmental science paper with actual chemical analyses of food-grade TiO2, so why do they give it low confidence? Also, PEN claims the CPI is something the public can use to monitor nanotechnology in products, so maybe they should rethink how confident they are in their analysis if they want to keep selling it that way.

The paper the CPI references for the TiO2 claim is interesting too. It actually shows that most of the TiO2 particles are around 100 nm (figure below). But like I said, that’s pushing the limit on how small the particles can be and still look white. It might be that the authors stumbled across a weird batch, though they note that in liquid products containing TiO2, less than 5% of the TiO2 could pass through filters with 450-nanometer pores. Does the current process used to make food-grade TiO2 end up making a lot of particles that are smaller than needed? Or maybe larger particles are breaking down into the smaller particles that Weir sees while in storage. This probably needs more research, to see whether other groups can replicate these results.

A histogram showing the distribution of particle sizes of TiO2. Categories go from 40 nanometers to 220 nanometers in intervals of 10. The greatest number of particles have diameters of 90-100 or 100-110 nanometers.

Distribution of TiO2 particle sizes in food grade TiO2. From Weir et al, http://pubs.acs.org/doi/abs/10.1021/es204168d?journalCode=esthag

Making Fuel Out of Seawater Is Only One Part of An Energy Solution

So I recently saw this post about a breakthrough the Navy made in producing fuel from seawater make a small round on Facebook, from the questionable “alternative news” site Addicting Info, and it kind of set off my BS detector. First, because the story is a few months old: it turned out the article was from April, so part of my skepticism was unfounded. But the opening claim that this wasn’t being reported much in mainstream outlets is wrong, as several sites beat them to the punch (even FOX NEWS! Which would probably make Addicting Info’s head explode). The other thing that struck me as odd was how the Addicting Info piece seemed to think this technology is practically ready to use right now. That surprised me, because for nearly the last two years, my graduate research at UVA has focused on developing materials that could help produce fuel from CO2.

This Vice article does a pretty good job of debunking the overzealous claims made by the Addicting Info piece and others like it. As Vice points out, you need electricity to make hydrogen from water. Water is pretty chemically stable in most of our everyday lives. The only ways the average person ends up splitting water are rusting metal, which would be a really slow way to generate hydrogen, or putting a large battery in water for one of those home electrolysis experiments.

The Naval Research Lab seems kind of unique among the groups looking at making fuel from CO2, in that they extract hydrogen and CO2 from water in processes separate from the step where the two are combined into hydrocarbons. Most of the other research in this area looks at having metal electrodes drive this reaction in water (nearly any metal from the middle of the periodic table can split CO2 given enough of a negative charge). Because of water’s previously mentioned stability, researchers often add a chemical that can give up hydrogen more easily. A lot of groups use potassium bicarbonate, a close relative of baking soda with potassium instead of sodium, both because it improves the conductivity of the water and because the bicarbonate ion gives up hydrogen really easily. In these set-ups, the goal is for the electricity to help the metal break an oxygen off a CO2 to make CO, and once enough CO accumulates, to start adding hydrogen to the molecules and linking them together.

A chemical diagram shows a CO2 molecule losing a carbon atom on a copper surface to make CO. When another CO is nearby, the two carbon atoms link together.

Carbon atoms are initially removed from CO2 molecules on a copper surface, forming CO. When CO get close to each other, they can bond together. From Gattrell, Gupta, and Co.

But basically, no matter what reaction you do, if you want to make a hydrocarbon from CO2, you need to use electricity, either to isolate hydrogen or to make the CO2 chemically active. As the Vice article points out, this is still perfectly useful for the Navy, because ships with nuclear reactors continually generate large amounts of electricity, while fuel for aircraft must be replenished. If you’re on land, though, unless you’re part of the 30% of the US that gets electricity from renewable sources or nuclear plants, you’re kind of defeating the point. Chemical reactions and industrial processes always waste some energy, so burning a fossil fuel, which emits CO2, to make electricity that is then used to turn CO2 back into fuel will always end up emitting more CO2 than you started with.
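A back-of-the-envelope sketch shows why the fossil-fueled round trip is a net loss. The efficiency numbers below are illustrative assumptions (roughly typical of thermal power plants and electro-synthesis), not figures from the Navy’s work:

```python
# CO2 round trip: burn fuel -> electricity -> turn CO2 back into fuel.
# Both conversion efficiencies are illustrative guesses, each well below 1.
PLANT_EFF = 0.40  # fraction of the burned fuel's energy that becomes electricity
SYNTH_EFF = 0.60  # fraction of the electrical energy stored in the new fuel

# Energy you must burn (and hence CO2 you must emit) per unit of fuel energy recovered:
fuel_energy_burned = 1.0 / (PLANT_EFF * SYNTH_EFF)

print(round(fuel_energy_burned, 2))  # 4.17: burn ~4x more fuel than you get back
```

Since both efficiencies are less than one, the ratio is always greater than one no matter what values you plug in, which is the whole point: the round trip only makes sense when the electricity is carbon-free (or would otherwise go to waste).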

However, this process (or one like it) could actually be useful in a solar or wind-based electricity grid. Wind and solar power can be sporadic; obviously, any solar grid must somehow deal with the fact that night exists, and both wind and solar power can be interrupted by the weather. (Nuclear power doesn’t have this issue, so this set-up would be irrelevant.) However, it’s also possible for solar and wind to temporarily generate more electricity than customers are using at the time. The extra electricity can be used to power this CO2-to-fuel reaction, and the fuel can be burned to provide extra power when the solar or wind plants can’t generate enough electricity on their own. This is also where the Vice article misses something important. Jet fuel can’t have methane, but methane is basically the main component of natural gas, which is burned to provide about another 30% of electricity generated in the US today. And because methane is a small molecule (one carbon atom, four hydrogen atoms) it can be easier to make than the long hydrocarbons needed for jet fuel.

Also, one thing I’m surprised I never see come up in discussions of this technology is its use in long-term human space exploration, as a way to help maintain a breathable atmosphere for astronauts and to build materials. If you can build up the carbon chains for jet fuel, you could also make the precursors to lots of plastics. The International Space Station is entirely powered by solar panels, and solar panels are typically envisioned as part of space colonies. Generally, electricity generation shouldn’t be a major problem in any of the manned missions we’re looking at for the near future, and this could be a major way to help future astronauts or space colonists generate the raw materials they need and maintain their environment.

If you want to read more about the Naval Research Lab’s processes, here are some of the journal articles they have published lately:

http://pubs.acs.org/doi/abs/10.1021/ie301006y?prevSearch=%255BContrib%253A%2BWillauer%252C%2BH%2BD%255D&searchHistoryKey=
http://pubs.acs.org/doi/abs/10.1021/ie2008136?prevSearch=%255BContrib%253A%2BWillauer%252C%2BH%2BD%255D&searchHistoryKey=
http://pubs.acs.org/doi/abs/10.1021/ef4011115
http://www.nrl.navy.mil/media/news-releases/2014/scale-model-wwii-craft-takes-flight-with-fuel-from-the-sea-concept

Japan Owned Everyone at Coupling Catalysts in the 1970s – Why?

In a slightly distracting science blog crawl, I came across something really interesting. I was looking at the Everyday Scientist’s past Nobel prize predictions, wondering who the Sonogashira was that he predicted would win, and was surprised to see him excluded from the Nobel. The Everyday Scientist predicted Kenkichi Sonogashira would be included in the 2010 Nobel Prize in Chemistry if it was awarded for the study of coupling reactions, a class of chemical reactions catalyzed by metals that help link together hydrocarbon chains, along with Richard Heck and Akira Suzuki. The prize did end up being awarded for the study of coupling reactions, but it went to Heck, Suzuki, and Ei-ichi Negishi instead of Sonogashira.

Negishi is Japanese, but he was born in the 1930s in Manchukuo, the Japanese puppet state in China; he got his PhD in America and spent the rest of his career in America. Suzuki did a post-doc in America, but after that he did all his work in Japan, mainly at Hokkaido University. Heck is American. None of them were at the same university, at least from cursory glances at their profiles, so I’m really curious about whether or not they collaborated (obviously, you don’t need to be at the same institution to collaborate in scientific research, but it tends to be much easier when you are).

What’s really interesting is looking at the list of specific coupling reactions that have been researched, and the discoverers of each.

| Reaction | Year | Discoverer 1 | Nationality | Discoverer 2 | Nationality | Discoverer 3 | Nationality |
|---|---|---|---|---|---|---|---|
| Kumada coupling | 1972 | Makoto Kumada | Japanese | Robert Corriu | French | | |
| Heck reaction | 1972 | Richard Heck | American | Tsutomu Mizoroki | Japanese | | |
| Sonogashira coupling | 1975 | Kenkichi Sonogashira | Japanese | Yasuo Tohda | Japanese | Nobue Hagihara | Japanese |
| Negishi coupling | 1977 | Ei-ichi Negishi | Japanese | | | | |
| Stille cross coupling | 1978 | John Stille | American | Toshihiko Migita | Japanese | Masanori Kosugi | Japanese |
| Suzuki reaction | 1979 | Akira Suzuki | Japanese | Norio Miyaura | Japanese | | |
| Hiyama coupling | 1988 | Tamejiro Hiyama | Japanese | Yasuo Hatanaka | Japanese | | |
| Buchwald-Hartwig reaction | 1994 | Stephen Buchwald | American | John Hartwig | American | | |
| Fukuyama coupling | 1998 | Tohru Fukuyama | Japanese | Hidetoshi Tokuyama | Japanese | Satoshi Yokoshima | Japanese |
| Liebeskind–Srogl coupling | 2000 | Lanny Liebeskind | American | Jiri Srogl | Czech | | |

As you can see, there are a lot of Japanese researchers on this list. Few of them are from the same institutions, according to their Wikipedia profiles or easy Google searches. And the 1970s show a flurry of activity. It’s not weird for an initial discovery to quickly kick off a lot of related research and lead to other discoveries, which seems to be the case here, or for one country to end up having a leading edge in a certain field of research, but the combination of both in such an originally niche field seems fascinating, especially because of the small degree of institutional overlap. (Also, it’s interesting that no name appears on the list twice, which you might expect in a series of related discoveries.) I’m not super familiar with organic chemistry, so is having a reaction named after you not as big a deal as I think it is?
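To put rough numbers on that impression, here’s a quick tally of the table above (the data is just transcribed from the table; a minimal sketch, nothing more):

```python
# Tally discoverer nationalities and discovery decades for the
# coupling reactions listed in the table above.
from collections import Counter

# (year, [nationalities of discoverers]) transcribed from the table
reactions = [
    (1972, ["Japanese", "French"]),                 # Kumada coupling
    (1972, ["American", "Japanese"]),               # Heck reaction
    (1975, ["Japanese"] * 3),                       # Sonogashira coupling
    (1977, ["Japanese"]),                           # Negishi coupling
    (1978, ["American", "Japanese", "Japanese"]),   # Stille cross coupling
    (1979, ["Japanese"] * 2),                       # Suzuki reaction
    (1988, ["Japanese"] * 2),                       # Hiyama coupling
    (1994, ["American"] * 2),                       # Buchwald-Hartwig reaction
    (1998, ["Japanese"] * 3),                       # Fukuyama coupling
    (2000, ["American", "Czech"]),                  # Liebeskind–Srogl coupling
]

nationality_counts = Counter(n for _, people in reactions for n in people)
decade_counts = Counter(year // 10 * 10 for year, _ in reactions)

print(nationality_counts)  # Japanese researchers: 15 of the 22 discoverers
print(decade_counts)       # 6 of the 10 reactions date to the 1970s
```

So by this count, roughly two-thirds of the named discoverers are Japanese, and more than half of the reactions were discovered within a single decade.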

Was there something unique about the nature of organic chemistry in Japan at the time that led to such an efficient expansion and application of knowledge? New journals that came out to help spread knowledge in the community? A new push in research focus by funding agencies? Did some conferences or scientific organizations help encourage collaborations on a broader scale?

I’d love to see someone take this on as some sort of case study in the history and/or sociology of science. I feel like there would be something fascinating here. What’s also interesting, as referenced in this New York Times article on the Nobel prize, is that many of these reactions didn’t catch on in industry until the 90s, so applications probably weren’t what drove the original breakneck pace.