Listen to The Message Podcast on Your Long Trips This Weekend

If you are one of the 46.9 million Americans travelling more than 50 miles this weekend, I have an entertainment recommendation for you: consider listening to a new (and recently finished) science fiction podcast, The Message. Sorry if you were hoping for the pseudo “Is this actually real, like Serial?” mystery, but I just don’t care about every edgy series trying to make itself seem better by hiding whether or not it is actually fiction. I’d add that marathoning the series is the best way to go. There are only eight episodes, and aside from the last one, they’re all about 13 minutes (with a minute of intro each), so it’s a good way to spend two hours while travelling. I will add that if I had attempted to listen to this one week at a time, I probably would have quickly lost interest, because there just wasn’t enough material in each episode to keep me hooked. But listening to it for two hours straight, it actually felt like a decently paced radio play, and the characters and plot were all compelling enough to make up for some clunky structure. Seriously, I put off stopping for lunch or gas for an hour because I was halfway through and didn’t want to interrupt it.

Also, contrary to the Wired piece’s concern, it didn’t seem like a super transparent plug for GE products. Unless some of the scientists they mentioned were affiliated with GE in some way, and even then, I wouldn’t find that obnoxious. The science didn’t always make sense, but it didn’t descend into technobabble. I was also pleasantly surprised by the diversity of the science team in the story’s universe: there was even a character (Mod, though I’ve also seen the spelling Maud) who went by non-binary pronouns, and the program’s director made it clear that disrespecting them wouldn’t be tolerated. I would love to talk about it more if other people have listened to it.


What is the point of thesis/dissertation committees?

I ask this in all sincerity, because after talking to students in other schools and other fields, I don’t seem any closer to an answer. Maybe it’s just that my department is weird: we don’t assemble dissertation committees until we propose, and we propose fairly late (it’s pretty common for people to propose only a year before they plan on defending).

The closest thing to a consensus answer I can find is that committees exist to make sure advisors aren’t just handing out degrees. But if that is the case, there isn’t really a guarantee that the average committee, which doesn’t do much more than read the proposal and the dissertation, would be effective at that. A group of fewer than half a dozen people who typically have two weeks to read a ~200-page summary of what is usually years of research can’t really independently verify the results that are presented. And if a professor really were intent on just handing out degrees to their lab, they could help make that data look more convincing. (I’m not saying this happens a lot. I don’t know for sure, but I don’t think so. My point is just that it seems easy to work around the supposed purpose of committees.)

I thought the point of a dissertation committee was to be a real committee, which in my mind means that at least part of its power comes from the fact that it is a group. Advisors can be great and all, but sometimes you need other people’s perspectives to plan an experiment or think through an interpretation of results. I thought the committee could help mediate part of the intellectual relationship between the advisor and the student. Say a student wants to redo or alter some experiment but the advisor doesn’t think it is worth the time; the student can try to convince the committee as a group of intellectual peers, and if they agree, they can essentially override the advisor’s wishes on behalf of the student. I think this is key because it can help channel some of the negative feelings in conflicts like this away from the student. (I don’t think the committee should take on issues that rise to the point of breaking up the advising relationship. Though if this works, I also think fewer issues would lead to the breakup of the relationship.) I’m not sure the converse matters as much, since advisors generally have a lot of control over what their students do, but if an advisor felt the student wasn’t doing something well, they could have the committee make that clearer.

So I’ll close with two questions I would love to hear answers to from people in other graduate programs. First, when do you first assemble your committee? Second, what does your committee do?



Thoughts on Basic Science and Innovation

Recently, science writer and House of Lords member Matt Ridley wrote an essay in The Wall Street Journal about the myth of basic science leading to technological development. Many people have criticized it, and it even seems like Ridley has walked back some claims. One engaging take can be found here, which includes a great quote that I think helps summarize a lot of the reaction:

I think one reason I had trouble initially parsing this article was that it’s about two things at once. Beyond the topic of technology driving itself, though, Ridley has some controversial things to say about the sources of funding for technological progress, much of it quoted from Terence Kealey, whose book The Economic Laws of Scientific Research has come up here before.

But it also seems like Ridley has a weird conception of the two major examples he cites as overturning the “myth”: steam engines and the structure of DNA. The issue with steam engines is that we mainly associate them with James Watt, whom you memorialize every time you fret about how many watts all your devices are consuming. Steam engines actually preceded Watt; the reason we associate them with him is that he greatly improved their efficiency through his understanding of latent heat, the energy that goes into changing something from one phase into another. (We sort of discussed this before. The graph below helps summarize.) And Watt understood latent heat because his colleague and friend Joseph Black, a chemist at the University of Glasgow, discovered it.

A graph with X-axis labelled “heat added” and Y-axis labelled “temperature”, showing the heating curve of water.

The latent heat is the heat added to go between phases. In this figure, it is represented by the horizontal segment between the ice and liquid water heating regimes, and by the other horizontal segment between the liquid water and water vapor heating regimes.
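
To put rough numbers on why latent heat matters so much, here is a quick worked comparison using rounded handbook values for water (the figures are just for illustration):

```latex
% Sensible heat: warming 1 g of liquid water from 0 C to 100 C
q_{\mathrm{heat}} = m c \,\Delta T = (1\,\mathrm{g})(4.18\,\mathrm{J\,g^{-1}\,K^{-1}})(100\,\mathrm{K}) \approx 418\,\mathrm{J}

% Latent heat: vaporizing that same gram at 100 C
q_{\mathrm{vap}} = m L_v = (1\,\mathrm{g})(2260\,\mathrm{J\,g^{-1}}) \approx 2260\,\mathrm{J}
```

Turning boiling water into steam takes over five times the energy needed to heat that water all the way from freezing to boiling, and a condensing engine dumps that energy somewhere on every cycle. Appreciating that is what let Watt see where an engine’s fuel was actually going.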

I don’t know whether or not X-ray crystallography was ever used for industrial purposes in the textile industry, but it has been used pretty consistently in academia since its basic principles were discovered a century ago. The 1915 Nobel Prize in Physics was literally for the development of X-ray crystallography. A crystal structure of a biological molecule was determined by X-ray studies at least as early as 1923, if not earlier. The idea that DNA crystallography only took off as a popular technique because of spillover from industry is incredibly inaccurate.

Ridley also seems to have a basic assumption: that government has crowded out the private sector as a source of basic research over the past century. It sounds reasonable, and it seems testable as a hypothesis. As a percentage of GDP (which seems like a semi-reasonable metric for concerns about crowding out), federal spending on research and development has generally been on the decline since the 70s, and is now about a third below its relatively stable level that decade. If private R&D had been crowded out, its competitor dropping by that much seems like a decent opening for some resurgence, especially since one of the most cited examples of private research, Bell Labs, was still going strong all that time. But instead, Bell cut most of its basic research programs just a few years ago.

Federal research spending as a percentage of GDP from the 1976 to 2016 fiscal years. The total shows a slight decrease from 1976 to 1992, a large drop in the 90s, a recovery in 2004, and a drop since 2010.

Federal spending on R&D as a percentage of GDP over time

To be fair, more private philanthropists now seem to be funding academic research. The key word, though, is philanthropic, not commercial, the term Ridley leans on throughout the essay. Also, a significant amount of this new private funding is for prizes, and you can only win a prize after the work has already been done.

There is one major thing to take from Ridley’s essay, though I think most scientists would admit it too: it’s somewhat ridiculous to try to lay out a clear path from a new fundamental result to a practical application, and if you hear a researcher claim to have one, keep your BS filter high. As The New Yorker has discussed, even results that seem obviously practical have a hard time clearing feasibility hurdles. (Maybe it’s just a result of small reference pools, but a lot of the researchers I read also seem concerned that research now requires some clear “mother of technology” justification.) Similarly, practical developments may not always be obvious. Neil deGrasse Tyson once pointed out that if you had asked a physicist in the late 1800s about the best way to quickly heat something, they would probably not have described anything resembling a microwave.

Common timeframe estimates of when research should result in a commercially available product, followed by translations suggesting how unrealistic this is:
“The fourth quarter of next year”: The project will be canceled in six months.
“Five years”: I’ve solved the interesting research problems. The rest is just business, which is easy, right?
“Ten years”: We haven’t finished inventing it yet, but when we do, it’ll be awesome.
“25+ years”: It has not been conclusively proven impossible.
“We’re not really looking at market applications right now”: I like being the only one with a hovercar.

Edit to add: I almost immediately regret using innovation in the title, because I barely address it in the post, and there’s probably a great discussion to have about Ridley’s choice of that word. Apple, almost famously, funds virtually no basic research internally or externally, which I often grumble about. However, I would not hesitate to call Apple an “innovative” company. There are a lot of design choices that can improve products while being pretty divorced from the physical breakthroughs that made them possible. (Though it is worth pointing out that human factors and ergonomics are very active fields of study in our modern, device-filled lives.)

So What is Materials Science (and Engineering)?

So this is my 100th post, and I felt like it should be kind of special, so I want to cover a question I get a lot, and one that’s important to me: what exactly is materials science? My early answer was that “it’s like if physics and chemistry had a really practical baby.” One of my favorite versions is a quote from this article on John Goodenough, one of the key figures in making rechargeable lithium-ion batteries: “In hosting such researchers, Goodenough was part of the peculiar world of materials scientists, who at their best combine the intuition of physics with the meticulousness of chemistry and pragmatism of engineering”. Which is a much more elegant (and somewhat ego-boosting) way of wording my description. In one of my first classes in graduate school, my professor described materials science as “the study of defects and how to control them to obtain desirable properties”.

A more complete definition is some version of the one that shows up in most introductory lessons: materials science studies the relationships among the structure of a material, its properties, its performance, and the way it was processed. This is often represented as the “materials science tetrahedron”, shown below, which turns out to be something people really love to use. (You also sometimes see characterization floating in the middle, because it applies to all of these aspects.)

A tetrahedron with blue points at the vertices. The top is labelled structure; the bottom three are properties, processing, and performance.

The materials science tetrahedron (with characterization floating in the middle).

Those terms may sound meaningless to you, so let’s break them down. In materials science, structure goes beyond chemistry’s sense of the word: it’s not just the makeup of an atom or molecule that affects a material; how the atoms or molecules are arranged together has a huge effect on how it behaves. You’re probably familiar with one common example: carbon and its various allotropes. The hardness of diamond is partially attributed to its special crystal structure, while graphite is soft because its layers slide easily across each other. Another factor is the crystallinity of a material. Not all materials you see are monolithic single pieces; many are made of smaller crystals we call “grains”, and the size and arrangement of these grains can be very important. For instance, the silicon in electronics is grown in such a way as to guarantee it will always be one single crystal, because boundaries between grains would ruin its electronic properties. Turbine blades in jet engines are single crystals, while the steels used in structures are polycrystalline.

On the top is a diamond and a piece of graphite. On the bottom are their crystal structures.

Diamond and its crystal structure are on the left; graphite and its structure are on the right.

Processing is what we do to a material before it ends up being used, and it’s more than just isolating the compounds that make it up. In fact, for some materials, processing actually involves adding impurities. Pure silicon wouldn’t be very effective in computers. Instead, silicon is “doped” with phosphorus or boron atoms, and the different dopings make it possible to build various electronic components on the same piece. Processing can also determine structure: temperature and composition can be manipulated to help control the size of the grains in a material.

A ring is split into 10 different sections. Going counterclockwise from the top, each segment shows smaller crystals.

The same steel, with different size grains.

Properties and performance are closely related, and the distinction can be subtle (honestly, it isn’t something we distinguish that much). One idea is that properties describe the essential behavior of a material, while performance reflects how that translates into its use, or the “properties under constraints”. This splits “materials science and engineering” into materials science focusing on properties and materials engineering focusing on performance. But that distinction gets blurred pretty quickly, especially across subfields. Someone who studies mechanical properties might say that corrosion is a performance issue, since it limits how long a material can be used at its desired strength. Talk to my colleagues next door in the Center for Electrochemical Science and Engineering and almost all of them would consider corrosion a property of materials. Regardless, both of these depend on structure and processing. Blades in jet engines are single crystals because this reduces fatigue over time. Structural steels are polycrystals because this makes them stronger, as the sketch below suggests.
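
To make that grains-to-strength link concrete, the classic empirical rule is the Hall-Petch relation, where yield strength rises as grain size shrinks. A minimal sketch, using illustrative constants in the right ballpark for a mild steel (the exact values vary by alloy, so treat the numbers as assumptions):

```latex
% Hall-Petch relation: yield strength vs. average grain diameter d
\sigma_y = \sigma_0 + k_y \, d^{-1/2}

% With illustrative values \sigma_0 = 70~\mathrm{MPa}, k_y = 0.74~\mathrm{MPa\,m^{1/2}}:
% d = 100~\mu\mathrm{m}: \sigma_y \approx 70 + 0.74\,(10^{-4})^{-1/2} \approx 144~\mathrm{MPa}
% d = 10~\mu\mathrm{m}:  \sigma_y \approx 70 + 0.74\,(10^{-5})^{-1/2} \approx 304~\mathrm{MPa}
```

Refining the grains by a factor of ten roughly doubles the yield strength in this sketch: grain boundaries block dislocation motion, so more boundaries generally mean a stronger metal.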

Now that I’ve thought about it more, I realize the different parts of the tetrahedron explain the different ways we define materials science and engineering. My “materials science as applied physics and chemistry” view reflects the range of scales we talk about, from the atoms that are typically chemistry’s domain, up through the crystal arrangement, to the larger crystal as a whole, where I can talk about the mechanics of atoms and grains. The description of Goodenough separates materials science from physics and chemistry through the performance-driven lens of pragmatism. My professor’s focus on defects comes from the processing part of the tetrahedron.

The tetrahedron also helps define the relationship of materials science and engineering to other fields. First, it helps limit what we call a “material”. Our notions of structure and processing are very different from those of chemical engineers, and work best on solids. It also marks the field’s boundaries. Our structures aren’t primarily governed by quantum effects, and we generally want defects, so we’re not redundant with solid-state physics. And when we talk about mechanics, we care a lot about the microstructure of the material, and rarely venture into the large-scale continuum mechanics of mechanical and civil engineers.

At the same time, the tetrahedron also explains how interdisciplinary materials science is and can be. That makes sense, because the tetrahedron was developed to help unify materials science. A hundred years ago, “materials science” wouldn’t have meant anything to anyone. People studying metallurgy and ceramics were in their own, mostly separate disciplines. The term semiconductor was only coined in a PhD dissertation in 1910, and polymers were still believed to be aggregates of molecules attracted to each other rather than the long chains we know them to be today. The development of crystallography and thermodynamics helped tie all these together by letting us define structures, where they come from, and how we change them. (Polymers are still a bit weird in many materials science departments, but that’s a post for another day.)

Each vertex is also a key branching-off point for working with other disciplines. Our idea of structure isn’t redundant with chemistry and physics; they build off each other. Atomic orbitals help explain why atoms end up in certain crystal structures. Defects end up being important in catalysts. Or we can look at structures that already exist in nature as inspiration for our own designs: one of my professors explained how he once did a project studying turtle shells from the atomic to the macroscopic level, justifying it as a way to design stronger materials. Material properties put us in touch with anyone who wants our materials in their end products, from people designing jet engines to surgeons who want prosthetic implants, and have us talking to physicists and chemists about how different properties emerge from structures.

This is what attracted me to materials science for graduate school. We can frame our thinking on each vertex, but we’re also expected to shift. We can think about structures on a multitude of scales. I joke that being a bad physics major translates into being great at most of the physics I need now. The paradigm helps us approach all materials, not just the ones we personally study, and thinking with different applications in mind forces me to learn new things all the time. (When biomedical engineers try to claim they’re the “first” interdisciplinary field of engineering to come on the scene, I laugh, because they forget materials science has been around for decades. Heck, right now I have 20 articles I want to read about the structure of pearl for my new research project.) It’s an incredibly exciting field to be in.

Red Eye Take Warning – Our Strange, Cyclical Awareness of Pee in Pools

The news has been abuzz lately with a terrifying revelation: if you get red eye at the pool, it’s not from the chlorine, it’s from urine. Or to put it more accurately, from the product of chlorine reacting with a chemical in urine. In the water, chlorine easily reacts with uric acid, a chemical found in urine (and also in sweat), to form chloramines. It’s not surprising that this caught a lot of people’s eyes, especially since those product chemicals are linked to more than just eye irritation. But what’s really weird is what spurred this all on. It’s not a new study that finally proved this; it’s just the release of the CDC’s annual safe swimming guide and a survey from the National Swimming Pool Foundation. And this isn’t even the first year the CDC mentioned this fact: an infographic from 2014’s Recreational Water Illness and Injury Prevention Week says it, two different posters from 2013 do too (the posters have had some slight tweaks, but the Internet Archive confirms they were there in 2013 and even 2012), and on a related note, a poster from 2010 says that urine in the pool uses up the chlorine.
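
For the chemistry-inclined, the textbook chloramine-forming steps in water treatment involve hypochlorous acid (the active form of pool chlorine) successively chlorinating nitrogen compounds. A simplified sketch, with ammonia standing in for the nitrogen that urine and sweat supply (the actual uric acid chemistry is messier):

```latex
\mathrm{HOCl + NH_3 \rightarrow NH_2Cl + H_2O}    % monochloramine
\mathrm{HOCl + NH_2Cl \rightarrow NHCl_2 + H_2O}  % dichloramine
\mathrm{HOCl + NHCl_2 \rightarrow NCl_3 + H_2O}   % trichloramine, the classic eye and airway irritant
```

Note that each step consumes a hypochlorous acid molecule, which is exactly why that 2010 CDC poster could say urine “uses up” the chlorine.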

A young smiling boy is at the edge of a swimming pool, with goggles on his forehead.

My neighborhood swim coach probably could have convinced me to wear goggles a lot earlier if she had told me they would keep pee out of my eyes.

Here’s what I find even stranger. Last year there was a lot of publicity about a study suggesting the products of the chlorine-uric acid reaction might be linked to more severe harm than just red eye. But neither Bletchley, the leader of the study, nor any of the articles about it linked the chemicals to red eye at all, or even mentioned urine’s role in red eye in the pool. (If you’re curious about the harm but don’t want to read the articles, the conclusion is that the levels don’t even reach the danger limits for drinking water.) According to The Atlantic, Bletchley worries more that an event like a swimming competition could quickly deplete the chlorine available for disinfecting a pool. The silence on red eye seems strange because this would have been a great time to point out that eye irritation can be a decent personal marker of pool quality, as a way to empower people. If you’re at a pool and your eyes feel like they’re on fire, or you’re hacking a lot without having swallowed water, maybe that’s a good sign to tell the lifeguard to add more chlorine, because most of it has probably formed chloramines by then.

Discussion of urine and red eye seems to phase in and out over time, and even the focus on whether it’s sweat or urine shifts too. In 2013, the same person from the CDC spoke with LiveScience, mentioning that pool smell and red eye are mainly caused by chloramines (and therefore urine and sweat), not chlorine. A piece from 2012 reacting to a radio host goes into detail on chloramines. During the 2012 Olympics, the Huffington Post discussed the irritating effects of chloramines on your body, including red eye, and the depletion of chlorine for sterilization, after many Olympic swimmers admitted to peeing in the pool. (Other pieces seem to ignore that this reaction happens and assume it’s fine, since urine itself doesn’t have any compounds or microbes that would cause disease.)

In 2009, CNN mentioned that chloramines cause both red eye and some respiratory irritation. The article is from around Memorial Day, suggesting it was just a typical awareness piece. Oh, and it also refers to a 2008 interview with Michael Phelps admitting that Olympians pee in the pool. The CDC also mentioned chloramines that year, as potential asthma triggers in poorly maintained and ventilated pools and as eye irritants, in a web page and a review study. In 2008, the same Purdue group published what seems like the first study to analyze these byproducts, because others had only looked at inorganic molecules. There the health concern is mainly respiratory problems caused by poor indoor pool maintenance, because these chemicals can build up; nothing about red eye is mentioned.

In 2006, someone on the Straight Dope discussion boards referred to a recent local news article attributing red eye in the pool to chlorine bonding with pee or sweat, and asked whether that’s true. Someone on the board claimed it’s actually because chlorine in the pool forms a small amount of hydrochloric acid that will always irritate your eyes; a later commenter linked to a piece by the Water Quality and Health Council pinning chloramine as the culprit. An article from the Australian Broadcasting Corporation talks about how nitrogen from urine and sweat is responsible for that “chlorine smell” at pools, but doesn’t mention it causing irritation or using up chlorine that could go to sterilizing the pool.

Finally, I decided to look for the earliest mention possible by restricting Google searches to earlier dates. Here is an article from the Chicago Tribune in 1996:

There is no smell when chlorine is added to a clean pool. The smell comes as the chlorine attacks all the waste in the pool. (That garbage is known as “organic load” to pool experts.) So some chlorine is in the water just waiting for dirt to come by. Other chlorine is busy attaching to that dirt, making something called combined chlorine. “It’s the combined chlorine that burns a kid’s eyes and all that fun stuff,” says chemist Dave Kierzkowski of Laporte Water Technology and Biochem, a Milwaukee company that makes pool chemicals.

We’ve known about this for nearly 20 years! We just seem to forget. Often. I realize part of this is the seasonal nature of swimming, so most news outlets will do a piece on pool safety every year. But even so, it seems like every few years people are surprised that it is not chlorine that stings your eyes but the product of its reaction with waste in the water. I’m curious whether I can find older mentions through LexisNexis or the journal searches I can do at school. (Google results for sites older than 1996 don’t make much sense; it seems like the crawler picks up more recent related stories that happen to show up as suggestions on older pages.) I’m also curious about the distinction between Bletchley’s tests and the pool supplies that measure “combined chlorine” and chloramine, which a 2001 article discusses as causing red eye. I imagine his tests are more precise, but Bletchley also says people don’t measure combined chlorine, and I wonder why.

Let’s Rethink Science Journalism

There’s been a lot of talk about science journalism after the revelation that a heavily publicized study about chocolate helping weight loss was actually a sham. A great deal of it is meta-commentary about whether the whole “sting” was ethical, or whether it even added much to ongoing discussions on science communication. It’s worth pointing out that science journalism at major outlets could be said to have worked, for the most part, since they didn’t actually report on the study. The ScienceNews piece points out that a Washington Post reporter did want to write something on the study and dropped it when he became suspicious. HuffPo would be the obvious exception, in that they evidently had TWO pieces on the study at one point, but its science and health sections have historically been pretty questionable. (The science section has gotten better lately. I don’t know about the health section.)

I’m going to focus mainly on science in general publications, because that’s what most people see, and because science journalism there is organized in a weird way. The standard treatment seems to be that a science journalist should be able to write on any science topic, regardless of background. That increasingly strikes me as strange. The conceptual difference between, say, astronomy and neuroscience is huge. That’s not to say people can’t be good at covering multiple fields of science. Rachel Feltman at The Washington Post wonderfully covers developments from all over science. But we should recognize that this is an incredible talent that not everyone has. (Indeed, going over HuffPo’s recent pieces, it’s notable how many now come from actual scientists, compared to what seemed like a never-ending stream of uncredited articles a few years ago, probably written by anyone with an Internet connection.)

A man is shown looking slightly up. Floating above his head are a moon, frog, butterflies, crystals, and some other objects, perhaps representing his thoughts or ideas.

It’s hard to actually have all this in your head.

Pretending that all science writers can cover everything harms science journalism. Where this shows up particularly clearly is coverage of work done by children. For instance, consider last year’s story about the 12-year-old who supposedly made a major breakthrough about lionfish. Let’s be clear: Lauren did a lot of research for a 12-year-old and contributed a lot to a science lab, and we should celebrate that. But so many outlets either exaggerated her father’s claims or took his overly hyped claims at face value, because it seems like none of the original reporters had any idea where her project fit in with other research. Similarly, there was the 15-year-old who was said to have “invented a way to charge your phone”, but his project was similar to research that has been done for years (though again, Angelo did a lot of work for his age and seemed to develop a way to make the technique more effective).

I don’t see a reason why a publication couldn’t cover its entire science section with more specialized journalists who also happen to work outside of science. For example, someone covering the physical sciences could also cover engineering and manufacturing firms for business reporting, and someone else could be on a combined life sciences/health beat. And someone who specializes and keeps up to date on a smaller area can probably toss out names that better reflect the diversity of the research community, instead of just pulling up the same few powerful people who typically get referenced. In fact, probably one of the best trends in science coverage over the last decade has been the proliferation of pieces focusing on the social implications of science, and on how science is shaped by society. Reporting like that would benefit from more journalists and communicators who cover things both inside and outside of science and can give voice to diverse groups. It would also be great if these pieces actually called on scholars in the sociology, history, and/or philosophy of science and technology to help inform them.

An image announcing a panel discussion.

Panels like this reflect important discussions in society that need to happen in science, too. And they’re at their best when people can understand both science and society.

The Coolest Part of that Potentially New State of Matter

So we’ve discussed states of matter, and the reason they’re in the news. But the idea that this is a new state of matter isn’t particularly ground-breaking. If we’re counting electron states alone as new states of matter, then those are practically a dime a dozen. Solid-state physicists spend a lot of time creating materials with weird electron behaviors: under this definition, lots of the newer superconductors are their own states of matter, as are topological insulators.

What is a big deal is the way this material behaves as a superconductor. “Typical” superconductors include basically any metal: cool them to a few degrees above absolute zero and they lose all electrical resistance and become superconductive. These are described by BCS theory, a key part of which says that at low temperatures, the few remaining atomic vibrations of a metal will actually cause electrons to pair up and all drop to a low energy. In the 1980s, though, people discovered that some metal oxides could also become superconductive, and they did so at temperatures above 30 K. Some go as high as 130 K, which, while still cold to us (room temperature is about 300 K), is warm enough to allow cooling with liquid nitrogen (which boils at 77 K) instead of incredibly expensive liquid helium. However, BCS theory doesn’t describe superconductivity in these materials, which also means we don’t really have a guide for developing ones with the properties we want. The dream of a lot of superconductor researchers is that we could one day make a material that is superconducting at room temperature, and use it to make things like power transmission lines that don’t lose any energy.

This paper focused on an interesting material: a crystal of buckyballs (molecules of 60 carbon atoms arranged like a soccer ball) modified to include some rubidium and cesium atoms. Depending on the ratio of rubidium to cesium in the crystal, it can behave like a regular metal or like the new state of matter the authors call a “Jahn-Teller metal”, because it is conductive but also has a distortion of the soccer-ball shape from something called the Jahn-Teller effect. What’s particularly interesting is that these states also correspond to different superconductive behaviors. At a concentration where the crystal is a regular metal at room temperature, it becomes a typical superconductor at low temperatures. If the crystal is a Jahn-Teller metal, it behaves a lot like a high-temperature superconductor, albeit at low temperatures.

This is the first time scientists have seen a single material that can behave like both kinds of superconductor. It’s exciting because it offers a unique testing ground for figuring out what drives unconventional superconductors. By changing the composition, researchers change the behavior of electrons in the material, and they can study that behavior to see what makes the electrons go through the phase transition to a superconductor.