Weirdly Specific Questions I Want Answers to in Meta-science, part 1

Using “meta-science” as a somewhat expansive term for history, philosophy, and sociology of science. And using my blog as a place to write about something besides the physical chemistry of carbon nanomaterials in various liquids.

  • To what extent is sloppy/misleading terminology an attempt to cash in on buzzwords? Clearly, we know that motive exists – there aren’t two major papers trying to narrow down precise definitions of graphene-related terms for nothing. But as those papers also suggest, at what point does it become a legitimate debate in the community about setting a definition? “Graphene” described a useful theoretical construct for decades before anyone thought a real sheet of it could be made, so maybe it isn’t unreasonable that people started using it to describe a variety of physical things related to the original idea.
    • This contains a sort of follow-up: what properties do people use in clarifying these definitions, and how much does that vary by background? Personally, I work way closer to the ideal of “graphene” than lots of people using more extensively chemically modified graphene derivatives, and I’m fine using the term for almost anything that’s nearly all sp2 carbon with about 10 layers or fewer. But would a physicist who cares more about the electronic properties, which vary a lot with layer number even in that low-layer limit, consider that maddening?
  • Nanoscience is very interdisciplinary/transdisciplinary, but individual researchers can be quite grounded in just one field. How much work is being done where researchers are missing basic knowledge of another field their work is now straddling?
    • For instance, when reading up on polymer nanocomposites, people with extensive polymer science backgrounds often note that many papers don’t engage with basic aspects of polymer physics. My hunch is that a lot of this comes from the fact that many people in this field started out working on the nanoparticles they want to incorporate into composites and only later moved into the composites themselves. Their backgrounds tend to be in fields like solid-state physics, electrical engineering, or (inorganic/metallic/ceramic) materials science, where they would have been less likely to deal with polymer theory.
    • Similarly, one paper I read noted that a lot of talk about “solutions” of nanoparticles would probably be more precise if it were framed in the terminology of colloids and dispersions.

Oh my gosh, I made fun of the subtitle for like two years, but it’s true

  • Is the ontological status of defects in nanoscience distinct from their treatment in bulk studies of materials? This is a bit related to the first question in that some definitions would preclude the existence of some defects in the referent material/structure.
    • On the other hand, does this stricter treatment make more sense in the few-atom limit of many nanomaterials? Chemists can literally specify the type and location of every atom in the successful products of well-studied cluster reactions, though such clusters are small enough to push even the term “nano”.
    • Is this a reflection of applications of defects at the different scales? (More philosophically worded, are defects treated differently because of their teleological nature?) At the bulk level, we work to engineer the nature of defects to help develop the properties we want. At the nanoscale, some structures can basically be ruined for certain applications by the mislocation of a single atom. Is this also a reflection of the current practical process of needing to scale up the ability to make nanomaterials? E.g. as more realistic approaches to large-scale nanotech fabrication are developed, will the practical treatment of defects in nanomaterials converge to that of how we treat defects in the bulk?

Quick Thoughts on Diversity in Physics

Earlier this month, during oral arguments for Fisher v. University of Texas, Chief Justice John Roberts asked what perspective an African-American student would offer in physics classrooms. The group Equity and Inclusion in Physics and Astronomy has written an open letter about why this line of questioning may miss the point about diversity in the classroom. But it also seems worth pointing out why culture does matter in physics (and science more broadly).

So nature is nature, and people can develop a theoretical understanding of it anywhere, and that understanding should come out similar (I think; this is actually glossing over what I imagine is a deep philosophy of science question). But nature is also incredibly vast. People approach studies of nature in ways that can reflect their culture. Someone may choose to study a phenomenon because it is one they see often in their own lives. Or they may develop an analogy between theory and some aspect of culture that helps them better understand a concept. You can’t wax philosophical about Kekulé thinking of the ouroboros while studying the structure of benzene without admitting that culture has some influence on how people approach science. There are entire books and articles about how Einstein and Poincaré were influenced by the sociotechnical issues of late 19th/early 20th century Europe as they developed the concepts that would lead to Einstein’s theories of relativity. A physics community that is a monoculture misses out on other influences and perspectives. So yes, physics should be diverse, and more importantly, physics should be welcoming to all kinds of people.

It’s also worth pointing out that this becomes immensely important in engineering and technology, where the problems people choose to work on are often deeply influenced by their life experiences. For instance, I have heard people say that India does a great deal of research on speech recognition as a user interface because India still has a large population that cannot read or write, and even those who can may not all use the same language.

The Coolest Part of that Potentially New State of Matter

So we’ve discussed states of matter. And the reason they’re in the news. But the idea that this is a new state of matter isn’t particularly ground-breaking. If we’re counting electron states alone as new states of matter, then those are practically a dime a dozen. Solid-state physicists spend a lot of time creating materials with weird electron behaviors: under this definition, lots of the newer superconductors are their own states of matter, as are topological insulators.

What is a big deal is the way this behaves as a superconductor. “Typical” superconductors include many ordinary metals. When you cool them to a few degrees above absolute zero, they lose all electrical resistance and become superconductive. These are described by BCS theory, a key part of which says that at low temperatures, the few remaining atomic vibrations of a metal will actually cause electrons to pair up and all drop to a low energy state. In the 1980s, though, people discovered that some metal oxides could also become superconductive, and they did so at temperatures above 30 K. Some go as high as 130 K, which, while still cold to us (room temperature is about 300 K), is warm enough to use liquid nitrogen instead of incredibly expensive liquid helium for cooling. However, BCS theory doesn’t describe superconductivity in these materials, which also means we don’t really have a guide to developing ones with the properties we want. The dream of a lot of superconductor researchers is that we could one day make a material that is superconducting at room temperature, and use it to make things like power transmission lines that don’t lose any energy.

This paper focused on an interesting material: a crystal of buckyballs (molecules of 60 carbon atoms arranged like a soccer ball) modified to have some rubidium and cesium atoms. Depending on the concentration of rubidium versus cesium in the crystal, it can behave like a regular metal or the new state of matter they call a “Jahn-Teller metal”, because it is conductive but also has a distortion of the soccer-ball shape from something called the Jahn-Teller effect. What’s particularly interesting is that these compositions also correspond to different superconductive behaviors. At a concentration where the crystal is a regular metal at room temperature, it becomes a typical superconductor at low temperatures. If the crystal is a Jahn-Teller metal, it behaves a lot like a high-temperature superconductor, albeit at low temperatures.

This is the first time scientists have ever seen a single material that can behave like both kinds of superconductor, which is exciting because it offers a unique testing ground for figuring out what drives unconventional superconductors. By changing the composition, researchers change how electrons behave in the material, and they can study that behavior to see what makes the electrons go through the phase transition to a superconductor.

Carbon is Dead. Long Live Carbon?

Last month, the National Nanotechnology Initiative released a report on the state of commercial development of carbon nanotubes. And that state is mostly negative. (Which pains me, because I still love them.) If you’re not familiar with carbon nanotubes, you might know of their close relative, graphene, which has been in the news much more since the Nobel Prize awarded in 2010 for its discovery. Graphene is essentially a single layer of the carbon atoms found in graphite. A carbon nanotube can be thought of as rolling up a sheet of graphene into a cylinder.

Visualizing a single-walled (SW) carbon nanotube (CNT) as the result of rolling up a sheet of graphene.

If you want to use carbon nanotubes, there are a lot of properties you need to consider. Nearly 25 years after their discovery, we’re still working on controlling many of these properties, which are closely tied to how we make the nanotubes.

Carbon nanotubes have six major characteristics to consider when you want to use them:

  • How many “walls” does a nanotube have? We often talk about the single-walled nanotubes you see in the picture above, because their properties are the most impressive. However, it is much easier to make large quantities of nanotubes with multiple walls than single walls.
  • Size. For nanotubes, several things come into play here.
    • The diameter of the nanotubes is often related to chirality, another important aspect of nanotubes, and can affect both mechanical and electrical properties.
    • The length is also very important, especially if you want to incorporate the nanotubes into other materials or use them directly as a structural material. For instance, if you want to add nanotubes to another material to make it more conductive, you want them to be long enough to routinely touch each other and carry charge through the entire material. Or if you want that oft-discussed nanotube space elevator, you need really long nanotubes, because stringing a bunch of short nanotubes together results in a weak material.
    • And the aspect ratio of length to width is important for materials when you use them in structures.
  • Chirality, which can basically be thought of as the curviness of how you roll up the graphene to get a nanotube (see the image below). If you think of rolling up a sheet of paper, you can roll it leaving the ends matched up, or you can roll it at an angle. Chirality is incredibly important in determining the way electricity behaves in nanotubes, and whether a nanotube behaves like a metal or like a semiconductor (like the silicon in your computer chips); the sketch after the figure below makes this concrete. It also turns out that the chirality of nanotubes is related to how they grow when you make them.
  • Defects. Any material is always going to have some deviation from an “ideal” structure. In the case of carbon nanotubes, a tube can be missing carbon atoms or have extra ones, which replaces a few of the hexagons of the structure with pentagons or heptagons. Or impurity atoms like oxygen may end up incorporated into the nanotube. Defects aren’t necessarily bad for all applications. For instance, if you want to stick a nanotube in a plastic, defects can actually help it incorporate better. But electronics typically need nanotubes of the highest purity.

[Figure: a graphene lattice overlaid with chiral vectors, alongside three rolled-up tubes: a (10, 0) zigzag nanotube, a (10, 10) armchair nanotube, and a (10, 7) chiral nanotube.] Some of the different ways a nanotube can be rolled up. The numbers in parentheses are the “chiral vector” of the nanotube and determine its diameter and electronic properties.
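
To make the chiral vector less abstract, here is a minimal Python sketch (my own illustration, not from any paper discussed here) of the two standard textbook rules: the tube diameter comes from the length of the chiral vector, and the simple zone-folding rule says an (n, m) tube is metallic when n − m is a multiple of 3.

```python
import math

# Graphene lattice constant in nanometers (2.46 angstroms).
A_GRAPHENE = 0.246

def nanotube_properties(n, m):
    """Diameter and electronic character of an (n, m) carbon nanotube."""
    # The circumference is the length of the chiral vector C = n*a1 + m*a2,
    # so the diameter is |C| / pi.
    diameter = A_GRAPHENE * math.sqrt(n**2 + n * m + m**2) / math.pi
    # Zone-folding rule: metallic when (n - m) is a multiple of 3, otherwise
    # semiconducting. (Curvature effects complicate this for very thin tubes.)
    character = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    return diameter, character

# The three tubes from the figure above.
for n, m in [(10, 0), (10, 10), (10, 7)]:
    d, kind = nanotube_properties(n, m)
    print(f"({n},{m}): diameter = {d:.2f} nm, {kind}")
```

Running this flags the (10, 10) armchair and (10, 7) tubes as metallic and the (10, 0) zigzag tube as semiconducting, which is a decent picture of why a batch of mixed chiralities is such a headache for electronics.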

Currently, the methods we have to make large amounts of CNTs result in a mix of ones with different chiralities, if not also different sizes. (We have gotten much better at controlling diameter over the last several years.) For mechanical applications, the former isn’t much of a problem. But if you have a bunch of CNTs of different conductivities, it’s hard to use them consistently for electronics.

But maybe carbon nanotubes were always doomed once we discovered graphene. Working from the idea of a CNT as a rolled-up graphene sheet, you may realize this means there are way more factors that can vary in a CNT than in a single flat flake of graphene. When working with graphene, there are just three main factors to consider:

  • Number of layers. This is similar to the number of walls of a nanotube. Scientists and engineers are generally most excited about single-layer graphene (which is technically the “true” graphene). The electronic properties change dramatically with the number of layers, and somewhere between 10 and 100 layers, you’re not that different from graphite. Again, the methods that produce the most graphene produce multi-layer graphene. But all the graphene made in a single batch will generally have consistent electronic properties.
  • Size. This is typically just one parameter, since most methods to make graphene produce roughly circular, square, or similarly regular patches. Also, graphene’s properties are less affected by size than CNTs’ are.
  • Defects. This tends to be pretty similar to what we see in CNTs, though in graphene there’s a major question of whether you can use an oxidized form or need the pure graphene for your application, because many production methods make the former first.

Single-layer graphene also has the added quirk that its electrical properties are greatly affected by whatever lies beneath it. However, that may be less of an issue for commercial applications, since whatever substrate is chosen for a given application will consistently affect all graphene on it. In a world where we can now make graphene in blenders, or just fire up any carbon source ranging from Girl Scout cookies to dead bugs and let it deposit on a metal surface, it can be hard for nanotubes to sustain their appeal when growing them requires the additional steps of catalyst production and placement.

But perhaps we’re just leaving a devil we know for a more hyped devil we don’t. Near the end of last year, The New Yorker had a great article on the promises we’re making for graphene, the ones we made for nanotubes, and about technical change in general, which points out that we’re still years away from widespread adoption of either material for any purpose. In the meantime, we’re probably going to keep discovering other interesting nanomaterials, and just like people couldn’t believe we got graphene from sticky tape, we’ll probably be surprised by whatever comes next.

A Nobel for Nanotubes?

A popular pastime on the science blogosphere is making Nobel predictions: educated guesses about who may win a Nobel prize in the various science categories (physics, chemistry, and physiology or medicine). I don’t feel like I know enough to do detailed predictions, but I did make one. Okay, more of a dream than a prediction. But I feel justified because Slate also seemed to vouch for it. What was it? I think a Nobel Prize in Physics should be awarded for the discovery and study of carbon nanotubes.

One potential issue with awarding a prize for carbon nanotube work could be priority. Nobel prizes can only be split between three people. While Iijima is generally recognized as the first to discover carbon nanotubes, it seems they have really been discovered multiple times (in fact, Iijima appears to have imaged a carbon nanotube in his thesis nearly 15 years before what is typically considered his “discovery”). It’s just that Iijima’s announcement happened at a time and place where the concept of a nanometer-sized cylinder of carbon atoms could be both well understood and greatly appreciated as a major focus of study. The paper linked above points out that many of the earlier studies that probably found nanotubes were mainly motivated by preventing their growth, because they were linked to defects and failures in other processes. The committee could sidestep this by awarding the prize for the discovery of single-walled nanotubes, which narrows the field of potential awardees down to Iijima and one of his colleagues and a competing group at IBM in California. This would also work because a great deal of the hype around carbon nanotubes focuses on single-walled tubes, since they generally have superior properties to their multi-walled siblings and theory focuses on them.

No matter what, I would say Mildred Dresselhaus should be included in any potential nanotube prize because she has been one of the most important contributors to the theoretical understanding of carbon nanotubes since the beginning. She’s also done a lot of theoretical work on graphene, but the prize for graphene was more experimental because while theorists have been describing graphene since at least the 80s (Dresselhaus even has a special section in that same issue), no one had anything pure to work with until Geim and Novoselov started their experiments.

In 1996, another form of carbon was also recognized with the Nobel Prize in Chemistry. Rick Smalley, Robert Curl, and Harold Kroto won the prize for their 1985 discovery of buckminsterfullerene (or “buckyballs”), for further work they did with other fullerenes, and for proving that these molecules did have ball-like structures. So while the prize for graphene recognized unique experimental work that could finally test theory, this prize was for an experimental result no one was expecting. Carbon had long been known to exist as a pure element in two forms, diamond and graphite, and no one expected to find another stable form. Fullerenes opened people’s minds to nanostructures and served as a practical base for the start of much nanotechnology research, which was very much in vogue after Drexler’s discussions in the 80s.

[Figure: six crystal structures of carbon in two rows of three; the top left shows atoms arranged in hexagonal sheets layered on top of each other, which is graphite.] Six phases of carbon. Graphite and diamond are the two common phases we encounter in normal conditions.

So why do I think nanotubes should get the prize? One could argue a nanotube prize just seems transitional between buckyballs and graphene, and so would be redundant. But while a lot of work using nano-enhanced materials does now focus on graphene, a great deal of it is based on previous experiments using carbon nanotubes, so the transition was scientifically important. And nanotubes still have some unique properties. The shape of a nanotube immediately brings a lot of interesting applications to mind that wouldn’t come up for flat graphene or spherical buckyballs: nano-wires, nano “test tubes”, nano pipes, nanomotors, nano-scaffolds, and more. (Also, when describing nanotubes, it’s incredibly easy to say they’re like nanometer-sized carbon fiber, but I realize that ease of generating one-sentence descriptions is typically not a criterion for Nobel consideration.) The combination of these factors makes nanotubes incredibly important in the history of nanotechnology, and they helped it transition into the critical field it is today.

What Happens When You Literally Hang Something Out to Dry?

I got a question today!  A good friend from high school asked:

Hey! So I have a sciencey question for you. But don’t laugh at me! It might seem kinda silly at first, but bear with me. Ok, how does water evaporate without heat? Like a towel is wet, so we put it in the sun to dry (tada heat!) but if its a kitchen or a bathroom towel that doesn’t see any particular increase in temp? How does the towel dry? What happens to the water? Does it evaporate but in a more mild version of the cycle of thinking?
It’s actually a really good question, and the answer depends on some statistical physics and thermodynamics. You know water is turning into water vapor all the time around you, but you can also see that these things clearly aren’t boiling away.

I’ve said before that temperature and heat are kind of weird, even though we talk about them all the time:

It’s not the same thing as energy, but it is related to that.  And in scientific contexts, temperature is not the same as heat.  Heat is defined as the transfer of energy between bodies by some thermal process, like radiation (basically how old light bulbs work), conduction (touching), or convection (heat transfer by a fluid moving, like the way you might see soup churn on a stove).  So as a kind of approximate definition, we can think of temperature as a measure of how much energy something could give away as heat.
The other key point is that temperature is only an average measure of energy; the molecules are all moving at different speeds (we touched on this at the end of this post on “negative temperature”). This turns out to be crucial, because it helps explain the distinction between boiling and evaporating a liquid. Boiling is when you heat a liquid to its boiling point, at which point the molecules overcome the attractive forces holding them together as a liquid. In evaporation, only the random molecules that happen to be moving fast enough to overcome those forces leave.
We can better represent this with a graph showing the probabilities of each molecule having a particular velocity or energy. (Here we’re using the Maxwell-Boltzmann distribution, which is technically meant for ideal gases, but works as a rough approximation for liquids.) The bar on the right marks out an energy of interest, so here we’ll say it’s the energy needed for a molecule to escape the liquid (the vaporization energy). At every temperature, there will always be some molecules that happen to have enough energy to leave the liquid. Because the more energetic molecules leave first, this is also why evaporating liquids cool things off.
Maxwell-Boltzmann distributions of the energy of molecules in a gas at various temperatures. From http://ibchem.com/IB/ibnotes/full/sta_htm/Maxwell_Boltzmann.htm
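
If you want to play with the numbers yourself, here’s a rough Python sketch. It leans on the same approximation as above (treating the molecules of the liquid with the ideal-gas Maxwell-Boltzmann distribution) and, as a further illustrative assumption, uses water’s heat of vaporization per molecule as the escape energy. For that distribution, the fraction of molecules above an energy threshold works out to a regularized upper incomplete gamma function, which scipy provides directly.

```python
from scipy.special import gammaincc

K_B = 1.380649e-23   # Boltzmann constant, J/K
E_ESCAPE = 6.8e-20   # assumed escape energy per molecule, J
                     # (roughly water's heat of vaporization, 40.7 kJ/mol / N_A)

def fraction_above(E0, T):
    """Fraction of molecules with kinetic energy above E0 at temperature T,
    assuming a Maxwell-Boltzmann energy distribution. This integral is the
    regularized upper incomplete gamma function Q(3/2, E0 / kT)."""
    return gammaincc(1.5, E0 / (K_B * T))

for T in (280, 300, 320):  # chilly room, room temperature, towel in the sun
    print(f"T = {T} K: fraction above escape energy ~ {fraction_above(E_ESCAPE, T):.1e}")
```

Even over this modest range of temperatures, the fraction of molecules fast enough to escape changes by roughly an order of magnitude, which is the quantitative version of why a warmer towel dries faster.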

You might wonder why, if your glass of water or a drenched towel is technically cooling off from evaporation, it still completely evaporates over time. It’s because the water keeps warming back up to room temperature, and molecular collisions keep restoring the remaining molecules to a similar Boltzmann distribution.
My friend also makes a good observation in comparing a towel put out in the sun with one hanging in a bathroom. Infrared light from the sun will heat up the towel compared to one hanging around in your house, and you can see from the distributions that at hotter temperatures, more molecules exceed the vaporization energy, so evaporation will be faster. (In cooking, this is also why you raise the heat but don’t need to boil a liquid to make a reduction.)

There’s another factor that’s really important in evaporation compared to boiling. You can only have so much water in a region of air before it starts condensing back into a liquid (when you see dew or fog, there’s basically so much water vapor that it starts re-accumulating into drops faster than it can evaporate). So if it’s really humid, this process goes slower. This is also why people can get so hot in a sauna: because the air is almost completely steam, their sweat can’t evaporate to cool them off.
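
To put rough numbers on the humidity effect, here’s a small Python sketch using the Antoine equation for water’s saturation vapor pressure (standard coefficients, valid from about 1 to 100 °C). The assumption that net evaporation scales with the gap between the vapor pressure at the wet surface and the water vapor already in the air is a simplified mass-transfer picture, not a full model of a towel or a sauna.

```python
def p_sat_water(t_celsius):
    """Saturation vapor pressure of water in mmHg from the Antoine equation
    (coefficients for roughly 1-100 degrees C)."""
    A, B, C = 8.07131, 1730.63, 233.426
    return 10 ** (A - B / (C + t_celsius))

def evaporation_driving_force(t_celsius, relative_humidity):
    """Difference between the vapor pressure at the wet surface and the
    partial pressure of water already in the air; net evaporation is
    roughly proportional to this gap."""
    p_sat = p_sat_water(t_celsius)
    return p_sat - relative_humidity * p_sat

# Same 25 C air, increasingly humid: dry room, steamy bathroom, sauna-like air.
for rh in (0.3, 0.6, 0.95):
    print(f"RH = {rh:.0%}: driving force ~ {evaporation_driving_force(25, rh):.1f} mmHg")
```

At 25 °C the saturation pressure is about 24 mmHg, so going from a dry room to nearly saturated air cuts the evaporation driving force by more than a factor of ten, which is why sweat stops cooling you off in a sauna.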

Wine Crying Shouldn’t Make You Cry

So I am really jealous of the skill mechanical engineering PhD student Dan Quinn shows in making that video. And it’s really informative. (Also, we talked about this in my surface chemistry class a few weeks ago!) Robert Krulwich says we need more people like Quinn, and I totally agree. The Atlantic seems to misunderstand the video in two really weird ways, though. First, the explanation isn’t new: people figured out the cause of tears of wine in the 1850s. Second, this really has nothing to do with the quality of the wine. The way a wine forms tears reflects the alcohol content more than anything else, because the solids in wine don’t affect surface tension nearly as much as the alcohol does. It does tell you something about your wine, just not the thing most wine drinkers focus on.