Red Eye Take Warning – Our Strange, Cyclical Awareness of Pee in Pools

The news has been abuzz lately with a terrifying revelation: if you get red eye at the pool, it’s not from the chlorine, it’s from urine. Or, to put it more accurately, from the product of chlorine reacting with a chemical in the urine. In the water, chlorine readily reacts with uric acid, a chemical found in urine and also in sweat, to form chloramines. It’s not surprising that this caught a lot of people’s eyes, especially since those product chemicals are linked to more than just eye irritation. But what’s really weird is what spurred this all on. It’s not a new study that finally proved this; it’s just the release of the CDC’s annual safe swimming guide and a survey from the National Swimming Pool Foundation. And this isn’t even the first year the CDC mentioned this fact: an infographic from 2014’s Recreational Water Illness and Injury Prevention Week does, and so do two different posters from 2013 (the posters have had some slight tweaks, but the Internet Archive confirms the claim was there in 2013 and even 2012). On a slightly related note, a poster from 2010 says that urine in the pool uses up the chlorine.
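For the curious, the underlying chemistry is standard water-treatment fare. Here’s a simplified sketch using ammonia as a stand-in for the nitrogen compounds in urine and sweat (the uric acid pathway the Purdue group studied is messier, yielding products like cyanogen chloride alongside trichloramine):

```latex
% Stepwise chloramine formation from hypochlorous acid (pool chlorine)
% and an ammonia-like nitrogen source -- a simplified sketch.
\begin{align*}
\mathrm{HOCl} + \mathrm{NH_3}   &\rightarrow \mathrm{NH_2Cl} + \mathrm{H_2O} && \text{(monochloramine)} \\
\mathrm{HOCl} + \mathrm{NH_2Cl} &\rightarrow \mathrm{NHCl_2} + \mathrm{H_2O} && \text{(dichloramine)} \\
\mathrm{HOCl} + \mathrm{NHCl_2} &\rightarrow \mathrm{NCl_3} + \mathrm{H_2O} && \text{(trichloramine)}
\end{align*}
```

Each step consumes hypochlorous acid, the active disinfectant, which is also why heavy bather loads can use up a pool’s chlorine.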

A young smiling boy is at the edge of a swimming pool, with goggles on his forehead.

My neighborhood swim coach probably could have convinced me to wear goggles a lot earlier if she had told me they would keep pee out of my eyes.

Here’s what I find even stranger. Last year there was a lot of publicity about a study suggesting the products of the chlorine-uric acid reaction might be linked to more severe harm than just red eye. But neither Blatchley, the study’s leader, nor any of the articles about it links the chemicals to red eye at all, or even mentions urine’s role in red eye in the pool. (If you’re curious about the harm but don’t want to read the articles, the conclusion is that the chemicals don’t even reach the danger limits set for drinking water.) According to The Atlantic, Blatchley is more worried that an event like a swimming competition could deplete the chlorine available for disinfecting a pool in a short amount of time. This seems strange to me, because it would be a great time to bring up that eye irritation can be a decent personal marker of pool quality, as a way to empower people. If you’re at a pool and your eyes feel like they’re on fire, or you’re hacking a lot without having swallowed water, maybe that’s a good sign to tell the lifeguard they need to add more chlorine, because most of it has probably formed chloramines by then.

Discussion of urine and red eye seems to phase in and out over time, and so does the focus on whether it’s sweat or urine. In 2013, the same person from the CDC spoke with LiveScience, mentioning that the pool smell and red eye are mainly caused by chloramines (and therefore urine and sweat), not chlorine. A piece from 2012 reacting to a radio host goes into detail on chloramines. During the 2012 Olympics, Huffington Post discussed the irritating effects of chloramines on your body, including red eye, and the depletion of chlorine for sterilization, after many Olympic swimmers admitted to peeing in the pool. (Other pieces seem to ignore that this reaction happens and assume it’s fine, since urine itself doesn’t have any compounds or microbes that would cause disease.)

In 2009, CNN mentioned that chloramines cause both red eye and some respiratory irritation. The article is from around Memorial Day, suggesting it was just a typical awareness piece. Oh, and it also refers to a 2008 interview with Michael Phelps admitting that Olympians pee in the pool. The CDC also mentioned chloramines that year, as potential asthma triggers in poorly maintained and ventilated pools and as eye irritants, in a web page and a review study. In 2008, the same Purdue group published what seems like the first study to analyze these byproducts; earlier work had only looked at inorganic molecules. There the health concern is mainly about respiratory problems caused by poor indoor pool maintenance, because these chemicals can start to build up. Nothing about red eye is mentioned there.

In 2006, someone on the Straight Dope discussion boards referred to a recent local news article attributing red eye in the pool to chlorine bonding with pee or sweat, and asked whether that’s true. Someone on the board claimed it’s actually because chlorine in the pool forms a small amount of hydrochloric acid that will always irritate your eyes. A later commenter linked to a piece by the Water Quality and Health Council pinning chloramine as the culprit. An article from the Australian Broadcasting Corporation talks about how nitrogen from urine and sweat is responsible for that “chlorine smell” at pools, but doesn’t mention it causing irritation or using up chlorine that could go toward sterilizing the pool.

Finally, I just decided to look up the earliest mention possible by restricting Google searches to earlier dates. Here is an article from the Chicago Tribune in 1996.

There is no smell when chlorine is added to a clean pool. The smell comes as the chlorine attacks all the waste in the pool. (That garbage is known as “organic load” to pool experts.) So some chlorine is in the water just waiting for dirt to come by. Other chlorine is busy attaching to that dirt, making something called combined chlorine. “It’s the combined chlorine that burns a kid’s eyes and all that fun stuff,” says chemist Dave Kierzkowski of Laporte Water Technology and Biochem, a Milwaukee company that makes pool chemicals.

We’ve known about this for nearly 20 years! We just seem to forget. Often. I realize part of this is the seasonal nature of swimming, so most news outlets will do a piece on being safe at pools every year. But even then, it seems like every few years people are surprised that it is not chlorine that stings your eyes, but the product of its reaction with waste in the water. I’m curious whether I can find older mentions through LexisNexis or the journal searches I can do at school. (Google results for sites older than 1996 don’t make much sense, because it seems like the crawler is picking up more recent related stories that happen to show up as suggestions on older pages.) I’m also curious about the distinction between Blatchley’s tests and the pool supplies that measure “combined chlorine” and chloramine, which this 2001 article discusses as a cause of red eye. I imagine his tests are more precise, but Blatchley also says people don’t measure it, and I wonder why.

Let’s Rethink Science Journalism

There’s been a lot of talk about science journalism after the revelation that a heavily publicized study about chocolate helping weight loss was actually a sham. A great deal of this is meta-commentary about whether the whole “sting” was ethical, or whether it even added much to ongoing discussions on science communication. It’s worth pointing out that science journalism in major outlets could be said to have worked for the most part, as they didn’t actually report on the study. The ScienceNews piece points out that a Washington Post reporter did want to write up something on the study and dropped it when he became suspicious. HuffPo would be the obvious exception, in that it evidently had TWO pieces on the study at one point, but its science and health sections have historically been pretty questionable. (The science section has gotten better lately. I don’t know about the health section.)

I’m going to mainly focus on science in general publications, because that’s what most people see. And because science journalism in general publications has a weird organization. The standard treatment seems to be that a science journalist should be able to write on any science topic, regardless of background. That increasingly strikes me as strange. The conceptual difference between, say, astronomy and neuroscience is huge. That’s not to say people can’t be good at covering multiple fields of science. Rachel Feltman at The Washington Post wonderfully covers developments from all over science. But I think we should recognize that this is an incredible talent that not everyone has. (Indeed, going over HuffPo’s recent pieces, it’s notable how many seem to come from actual scientists now compared to what seemed like a never-ending stream of uncredited articles probably coming from anyone with an Internet connection a few years ago.)

A man is shown looking slightly up. Floating above his head are a moon, frog, butterflies, crystals, and some other objects, perhaps representing his thoughts or ideas.

It’s hard to actually have all this in your head.

Pretending that all science writers can cover everything harms science journalism. Where I think this shows up particularly clearly is coverage of work done by children. For instance, consider last year’s story about the 12-year-old who supposedly made a major breakthrough about lionfish. Let’s be clear: Lauren did a lot of research for a 12-year-old and contributed a lot to a science lab, and we should celebrate that. But so many outlets either exaggerated her father’s claims or took his overly hyped claims at face value, seemingly because none of the original reporters had any idea where her project fit in with other research. Similarly, there was the 15-year-old who was said to have “invented a way to charge your phone”, but his project was similar to research that had been done for years (though, again, Angelo did a lot of work for his age and seemed to develop a way to make the approach more effective).

I don’t think there’s a reason why a publication couldn’t cover its entire science section with more specialized journalists who also happen to work outside of science. For example, maybe someone covering the physical sciences could also cover engineering and manufacturing firms for business reporting, and someone else could be on a combined life sciences/health beat. And someone who can specialize and keep up to date on a smaller area can probably toss out names that better reflect the diversity of the research community, instead of just pulling up the same few powerful people who typically get referenced. In fact, probably one of the best trends in science coverage over the last decade has been the proliferation of pieces focusing on the social implications of science, and of pieces that focus on how science is shaped by society. Reporting like that would benefit from more journalists and communicators who cover things both inside and outside of science and can give voice to diverse groups. It would also be great if these pieces actually called on scholars in the sociology, history, and/or philosophy of science and technology to help inform them.

It is an image announcing a panel discussion.

Discussions like this reflect important conversations in society that need to happen in science, too. And they’re at their best when people understand both science and society.

The Coolest Part of that Potentially New State of Matter

So we’ve discussed states of matter. And the reason they’re in the news. But the idea that this is a new state of matter isn’t particularly ground-breaking. If we’re counting electron states alone as new states of matter, then those are practically a dime a dozen. Solid-state physicists spend a lot of time creating materials with weird electron behaviors: under this definition, lots of the newer superconductors are their own states of matter, as are topological insulators.

What is a big deal is the way this material behaves as a superconductor. “Typical” superconductors include basically any metal. When you cool them to a few degrees above absolute zero, they lose all electrical resistance and become superconductive. These are described by BCS theory, a key part of which says that at low temperatures, the few remaining atomic vibrations of a metal will actually cause electrons to pair up and all drop to a low energy. In the 1980s, though, people discovered that some metal oxides could also become superconductive, and they did so at temperatures above 30 K. Some go as high as 130 K, which, while still cold to us (room temperature is about 300 K), is warm enough to use liquid nitrogen instead of incredibly expensive liquid helium for cooling. However, BCS theory doesn’t describe superconductivity in these materials, which also means we don’t really have a guide for developing ones with the properties we want. The dream of a lot of superconductor researchers is that we could one day make a material that is superconducting at room temperature, and use that to make things like power transmission lines that don’t lose any energy.
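To put all those kelvin figures in more familiar units, here’s a quick conversion sketch in Python (the 4 K and 77 K entries are the standard boiling points of liquid helium and liquid nitrogen, benchmark values I’m adding rather than numbers from the paper):

```python
def kelvin_to_celsius(t_k):
    """Convert a temperature from kelvin to degrees Celsius."""
    return t_k - 273.15

def kelvin_to_fahrenheit(t_k):
    """Convert a temperature from kelvin to degrees Fahrenheit."""
    return kelvin_to_celsius(t_k) * 9 / 5 + 32

# Benchmarks from the discussion above.
benchmarks = [
    ("liquid helium (conventional superconductors)", 4),
    ("'high-temperature' superconductor threshold", 30),
    ("liquid nitrogen boiling point", 77),
    ("best cuprate superconductors", 130),
    ("room temperature", 300),
]
for label, t_k in benchmarks:
    print(f"{label}: {t_k} K = {kelvin_to_celsius(t_k):.0f} degC"
          f" = {kelvin_to_fahrenheit(t_k):.0f} degF")
```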

This paper focused on an interesting material: a crystal of buckyballs (molecules of 60 carbon atoms arranged like a soccer ball) modified to have some rubidium and cesium atoms. Depending on the concentration of rubidium versus cesium in the crystal, it can behave like a regular metal or like the new state of matter they call a “Jahn-Teller metal”, because it is conductive but also has a distortion of the soccer-ball shape from something called the Jahn-Teller effect. What’s particularly interesting is that these states also correspond to different superconductive behaviors. At a concentration where the crystal is a regular metal at room temperature, it becomes a typical superconductor at low temperatures. If the crystal is a Jahn-Teller metal, it behaves a lot like a high-temperature superconductor, albeit at low temperatures.

This is the first time scientists have seen a single material behave like both kinds of superconductor, which is exciting because it offers a unique testing ground for figuring out what drives unconventional superconductors. By changing the composition, researchers can tune the behavior of electrons in the material, study that behavior, and see what makes the electrons go through the phase transition to a superconductor.

What is a State of Matter?

This Vice article excitedly talking about the discovery of a new state of matter has been making the rounds a lot lately. (Or maybe it’s because I just started following Motherboard on Twitter after a friend pointed the article out.) That led to two good questions: What is a state of matter? And how do we know we’ve found a new one? We’ll consider the second one another time.

In elementary school, we all learned that solid, liquid, and gas were different states of matter (and maybe, if your science class was ahead of the curve, you talked about plasma). And recent scientific research has focused a lot on two other states of matter: the exotic Bose-Einstein condensate, which is used in many experiments to slow down light, and the quark-gluon plasma, which closely resembles the first few milliseconds of the universe after the Big Bang. What makes all of these different things states? Essentially, a state of matter is the collective behavior of the particles that make up your object of interest, and each state behaves so differently that you can’t apply the description of one to another. A crucial point here is that only collections of particles, and typically large numbers of them, can be considered states. If you somehow had only one molecule of water, it wouldn’t really make sense to say whether it is solid, liquid, or gas, because there’s no other water for it to interact with to develop collective properties.

So room-temperature gold and ice are solids because they’re described by regular, repeating crystal lattices. Molten gold and water are no longer solids because they’re no longer described by a regular crystal structure, but they still have relatively strong forces between molecules. The key property of a Bose-Einstein condensate is that most of its constituent particles are in the lowest possible energy state. You can tell when you’ve switched states (a phase transition) because there’s a discontinuous change in energy or a property related to energy. In everyday life, this shows up as the latent heat of melting and the latent heat of vaporization (or evaporation).

The latent heat of melting is what makes ice so good at keeping drinks cold. It’s not just the fact that ice is colder than the liquid; the discontinuous “jump” of energy required to actually melt 32°F ice into 32°F water also absorbs a lot of heat. You can see this jump in the heating curve below. You also see this when you boil water. Just heating water to 212°F doesn’t cause it all to boil away; your kettle or stove also has to provide enough heat to overcome the heat of vaporization, and that added heat doesn’t raise the temperature until the phase transition is complete. You can try this for yourself in the kitchen with a candy thermometer: melting ice will always measure 32°F, even if you throw it in the oven, and boiling water will always measure 212°F.

A graph with the horizontal axis labelled "Q (heat added)" and the vertical axis labelled "temperature (in Celsius)". It shows three sloped segments, labelled, from the left: ice, heating of water, and heating of water vapor. The sloped lines for "ice" and "heating of water" are connected by a flat line representing heat used to melt ice to water. The "heating of water" and "heating of water vapor" sloped lines are connected by a flat line labelled "heat used to vaporize water to water vapor".

The heating curve of water. The horizontal axis represents how much heat has been added to a sample of water. The vertical axis shows the temperature. The flat lines are where heat is going into the latent heat of the phase transition instead of raising the temperature of the sample.
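To get a feel for how big those flat segments are, here’s a rough version of the arithmetic in Python, using approximate textbook constants for water (my numbers, not anything from this post):

```python
# Energy needed to take 100 g of ice at -10 degC to steam at 100 degC,
# using approximate textbook constants for water.
MASS = 100.0       # grams
C_ICE = 2.1        # J/(g*K), specific heat of ice
C_WATER = 4.18     # J/(g*K), specific heat of liquid water
L_FUSION = 334.0   # J/g, latent heat of melting
L_VAPOR = 2260.0   # J/g, latent heat of vaporization

steps = [
    ("warm ice from -10 degC to 0 degC", MASS * C_ICE * 10),
    ("melt ice at 0 degC (flat segment)", MASS * L_FUSION),
    ("heat water from 0 degC to 100 degC", MASS * C_WATER * 100),
    ("boil water at 100 degC (flat segment)", MASS * L_VAPOR),
]
for label, joules in steps:
    print(f"{label}: {joules / 1000:.1f} kJ")
```

Melting the ice takes far more energy than warming it by ten degrees, and boiling dwarfs everything else, which is why the flat segments of the curve are so long.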

There’s also something neat about phase transitions in another common material: the transition between a liquid and a glass state has no latent heat. This is the one thing that makes me really sympathetic to the “glasses are just supercooled liquids” view. Interestingly, this also means there’s really no such thing as a single melting temperature for a given glass, because the heating/cooling rate becomes very important.

But then the latter part of the article confused me, because it makes “state of matter” seem kind of arbitrary compared to “phase”, which we talk about all the time in materials science (and, as you can see, we say both go through “phase” transitions). A phase is some region of matter with consistent properties throughout, and a material with the same composition can be in different phases while still being in the same state. For instance, there actually is a phase of ice called ice IX, and the arrangement of the water molecules in it is different from that in conventional ice, but we would definitely still consider both to be solids. Switching between these phases, even though they’re in the same state, also requires some kind of energy change.

Or if you heat a permanent magnet above its critical temperature and cause it to lose its magnetization, that’s a second-order phase transition. That is, while the heat and magnetization may have changed continuously, the ease of magnetizing the material (which is related to the second derivative of the energy with respect to the strength of the magnetic field) had a jump at that point. Your material is still in a solid state, and the atoms are still in the same positions, but it changed its magnetic state from permanent to paramagnetic. So part of me wonders whether we can consider that underlying electron behavior to be a definition of a state of matter or a phase. The article makes it sound like we’re fine saying they’re basically the same thing. This Wikipedia description of “fermionic condensates” as a means to describe superconductivity also supports this idea.
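In symbols (standard thermodynamic bookkeeping, in my notation rather than the article’s: F is the free energy, H the applied field, M the magnetization, and χ the susceptibility):

```latex
% Magnetization is a first derivative of the free energy;
% susceptibility (the "ease of magnetizing") is a second derivative.
% A second-order transition keeps M continuous but gives \chi a jump.
M = -\frac{\partial F}{\partial H}, \qquad
\chi = \frac{\partial M}{\partial H} = -\frac{\partial^2 F}{\partial H^2}
```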

Going by this description means we’re surrounded by way more states of matter than the usual four we consider. With solids alone, you regularly interact with magnetic metals, conductors (metals or the semiconductors in your electronics), insulating solids, insulating glasses, and magnetic glasses (amorphous metals are used in most of those theft-prevention tags you see), which all have different electron behaviors. It might seem slightly unsatisfying for something that sounds as fundamental as “states of matter” to end up having so many different categories, but it just reflects an increasing understanding of the natural world.

Carbon is Dead. Long Live Carbon?

Last month, the National Nanotechnology Initiative released a report on the state of commercial development of carbon nanotubes. And that state is mostly negative. (Which pains me, because I still love them.) If you’re not familiar with carbon nanotubes, you might know of their close relative, graphene, which has been in the news much more since the Nobel Prize awarded in 2010 for its discovery. Graphene is essentially a single layer of the carbon atoms found in graphite. A carbon nanotube can be thought of as rolling up a sheet of graphene into a cylinder.

On the left is a hexagonal pattern, representing the arrangement of atoms in graphene. An arrow in the middle of the image points to the rolled-up result on the right.

Visualizing a single-walled (SW) carbon nanotube (CNT) as the result of rolling up a sheet of graphene.

If you want to use carbon nanotubes, there are a lot of properties you need to consider. Nearly 25 years after their discovery, we’re still working on controlling a lot of these properties, which are closely tied to how we make the nanotubes.

Carbon nanotubes have six major characteristics to consider when you want to use them:

  • How many “walls” does a nanotube have? We often talk about the single-walled nanotubes you see in the picture above, because their properties are the most impressive. However, it is much easier to make large quantities of nanotubes with multiple walls than single walls.
  • Size. For nanotubes, several things come into play here.
    • The diameter of the nanotubes is often related to chirality, another important aspect of nanotubes, and can affect both mechanical and electrical properties.
    • The length is also very important, especially if you want to incorporate the nanotubes into other materials or if you want to directly use nanotubes as a structural material themselves. For instance, if you want to add nanotubes to another material to make it more conductive, you want them to be long enough to routinely touch each other and carry charge through the entire material. Or if you want that oft-discussed nanotube space elevator, you need really long nanotubes, because stringing a bunch of short nanotubes together results in a weak material.
    • And the aspect ratio of length to width is important for materials when you use them in structures.
  • Chirality, which can basically be thought of as the curviness of how you roll up the graphene to get a nanotube (see the image below, and the short sketch after it). If you think of rolling up a sheet of paper, you can roll it leaving the ends matched up, or you can roll it at an angle. Chirality is incredibly important in determining the way electricity behaves in nanotubes, and whether a nanotube behaves like a metal or like a semiconductor (like the silicon in your computer chips). It also turns out that the chirality of nanotubes is related to how they grow when you make them.
  • Defects. Any material is always going to have some deviation from an “ideal” structure. In the case of carbon nanotubes, missing or extra carbon atoms can replace a few of the hexagons of the structure with pentagons or heptagons, or impurity atoms like oxygen may end up incorporated into the nanotube. Defects aren’t necessarily bad for all applications. For instance, if you want to stick a nanotube in a plastic, defects can actually help it incorporate better. But electronics typically need nanotubes of the highest purity.

A plane of hexagons is shown in the top left. Overlaid on the plane are arrows representing vectors. On the top right is a nanotube labeled (10, 0) zig-zag. On the bottom left is a larger (10, 10) armchair nanotube. On the bottom right is a larger (10, 7) chiral nanotube.

Some of the different ways a nanotube can be rolled up. The numbers in parentheses are the “chiral vector” of the nanotube and determine its diameter and electronic properties.
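The textbook rule connecting that chiral vector to electronic behavior is simple enough to sketch in a few lines of Python. Treat this as an illustration of the standard formulas, not anything from the report; the only constant I’m assuming is graphene’s lattice spacing of roughly 0.246 nm:

```python
import math

GRAPHENE_LATTICE_CONSTANT = 0.246  # nm, approximate textbook value

def describe_nanotube(n, m):
    """Classify a carbon nanotube from its chiral vector (n, m)."""
    # Shape of the roll-up: armchair, zigzag, or general chiral.
    if n == m:
        shape = "armchair"
    elif m == 0:
        shape = "zigzag"
    else:
        shape = "chiral"
    # Standard rule: metallic when (n - m) is a multiple of 3.
    # (Armchair tubes are truly metallic; other "metallic" tubes
    # actually have a tiny curvature-induced gap.)
    kind = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    # Diameter follows from the length of the chiral vector.
    diameter = (GRAPHENE_LATTICE_CONSTANT / math.pi
                * math.sqrt(n**2 + n * m + m**2))
    return f"({n},{m}) {shape}: {kind}, ~{diameter:.2f} nm in diameter"

# The three tubes from the figure above:
for n, m in [(10, 0), (10, 10), (10, 7)]:
    print(describe_nanotube(n, m))
```

Run on the figure’s examples, this labels (10, 0) as semiconducting and both (10, 10) and (10, 7) as metallic, which shows why a batch of mixed chirality is such a headache for electronics.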

Currently, the methods we have to make large amounts of CNTs result in a mix of ones with different chiralities, if not also different sizes. (We have gotten much better at controlling diameter over the last several years.) For mechanical applications, the chirality mix isn’t much of a problem. But if you have a bunch of CNTs of different conductivities, it’s hard to use them consistently for electronics.

But maybe carbon nanotubes were always doomed once we discovered graphene. Working from the idea of a CNT as a rolled-up graphene sheet, you may realize that there are way more factors that can be varied in a CNT than in a single flat flake of graphene. When working with graphene, there are just three main factors to consider:

  • Number of layers. This is similar to the number of walls of a nanotube. Scientists and engineers are generally most excited about single-layer graphene (which is technically the “true” graphene). The electronic properties change dramatically with the number of layers, and somewhere between 10 and 100 layers, you’re not that different from graphite. Again, the methods that produce the most graphene produce multi-layer graphene. But all the graphene made in a single batch will generally have consistent electronic properties.
  • Size. This is typically just one parameter, since most methods to make graphene result in roughly circular, square, or other equally shaped patches. Also, graphene’s properties are less affected by size than CNTs.
  • Defects. This tends to be pretty similar to what we see in CNTs, though in graphene there’s a major question of whether you can use an oxidized form or need the pure graphene for your application, because many production methods make the former first.

Single-layer graphene also has the added quirk that its electrical properties are greatly affected by whatever lies beneath it. However, that may be less of an issue for commercial applications, since whatever substrate is chosen for a given application will consistently affect all the graphene on it. In a world where we can now make graphene in blenders, or just fire up any carbon source ranging from Girl Scout cookies to dead bugs and let it deposit on a metal surface, it can be hard for nanotubes to sustain their appeal when growing them requires the additional steps of catalyst production and placement.

But perhaps we’re just leaving a devil we know for a more hyped devil we don’t. Near the end of last year, The New Yorker had a great article on the promises we’re making for graphene, the ones we made for nanotubes, and about technical change in general, which points out that we’re still years away from widespread adoption of either material for any purpose. In the meantime, we’re probably going to keep discovering other interesting nanomaterials, and just like people couldn’t believe we got graphene from sticky tape, we’ll probably be surprised by whatever comes next.

Grace Hopper, Pioneer of the Computing Age

A white woman in a naval dress uniform is pictured. Her arms are crossed.

If you’re seeing this on any kind of computing device on International Women’s Day, you should thank Dr. Grace Hopper, rear admiral of the US Navy. Hopper created the first compiler, which allowed computer programming to be done in code that more closely resembles human language, rather than the essentially numerical instructions that work at the level of the hardware.

These “higher-level” languages are what we typically use to create all the various programs and apps we use every day. What have you done today? Word processing? Photo editing? Anything beyond math was considered outside the domain of computers when Hopper started her work.