What is a State of Matter?

This Vice article excitedly announcing the discovery of a new state of matter has been making the rounds a lot lately. (Or maybe it just seems that way because I started following Motherboard on Twitter after a friend pointed the article out.) It led to two good questions: What is a state of matter? And how do we know we’ve found a new one? We’ll consider that second one another time.

In elementary school, we all learned that solid, liquid, and gas were different states of matter (and maybe, if your science class was ahead of the curve, you talked about plasma). And recent scientific research has focused a lot on two other states of matter: the exotic Bose-Einstein condensate, which is used in many experiments to slow down light, and the quark-gluon plasma, which closely resembles the first few milliseconds of the universe after the Big Bang. What makes all of these different things states? Essentially, a state of matter is the collective behavior of the particles that make up your object of interest, and each state behaves so differently that you can’t apply the description of one to another. A crucial point here is that only collections of particles, and typically large numbers of them, can be in a state. If you somehow have only one molecule of water, it doesn’t really work to say whether it is solid, liquid, or gas, because there’s no other water for it to interact with to develop collective properties.

So room-temperature gold and ice are solids because they’re described by regular crystal lattices that repeat. Molten gold and water are no longer solids because they’re no longer described by a regular crystal structure, though they still have relatively strong forces between molecules. The key property of a Bose-Einstein condensate is that most of its constituent particles are in the lowest possible energy state. You can tell when you’ve switched states (a phase transition) because there is a discontinuous change in energy or a property related to energy. In everyday life, this shows up as the latent heat of melting and the latent heat of vaporization (or evaporation).

The latent heat of melting is what makes ice so good at keeping drinks cold. It’s not just the fact that ice is colder than the liquid; the discontinuous “jump” of energy required to actually melt 32°F ice into 32°F water also absorbs a lot of heat. You can see this jump in the heating curve below. You also see this when you boil water. Just heating water to 212°F doesn’t cause it all to boil away; your kettle or stove also has to provide enough heat to overcome the latent heat of vaporization. And that heat input doesn’t raise the temperature until the phase transition is complete. You can try this for yourself in the kitchen with a candy thermometer: melting ice will always measure 32°F, even if you throw it in the oven, and boiling water will always measure 212°F.

A graph with the horizontal axis labelled "Q (heat added)" and the vertical axis labelled "temperature (in Celsius)". It shows three sloped segments, labelled, going from the left: ice, heating of water, and heating of water vapor. The sloped lines for "ice" and "heating of water" are connected by a flat line representing heat used to melt ice to water. The "heating of water" and "heating of water vapor" sloped lines are connected by a flat line labelled "heat used to vaporize water to water vapor".

The heating curve of water. The horizontal axis represents how much heat has been added to a sample of water. The vertical axis shows the temperature. The flat lines are where heat is going into the latent heat of the phase transition instead of raising the temperature of the sample.
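To make those flat segments concrete, here’s a rough numeric sketch of the heating curve using common textbook values for water (the specific numbers are standard approximations in metric units, not figures from this post):

```python
# Heat needed to take 1 g of ice at -20 °C all the way to steam at 120 °C,
# segment by segment, using textbook values (all in joules per gram).

C_ICE = 2.1      # specific heat of ice, J/(g*°C)
C_WATER = 4.18   # specific heat of liquid water, J/(g*°C)
C_STEAM = 2.0    # specific heat of water vapor, J/(g*°C)
L_FUSION = 334   # latent heat of melting, J/g
L_VAPOR = 2260   # latent heat of vaporization, J/g

def heating_curve_segments(grams):
    """Heat absorbed in each segment of the heating curve, in joules."""
    return {
        "warm ice from -20 to 0":     grams * C_ICE * 20,
        "melt ice at 0 (flat)":       grams * L_FUSION,
        "warm water from 0 to 100":   grams * C_WATER * 100,
        "boil water at 100 (flat)":   grams * L_VAPOR,
        "warm steam from 100 to 120": grams * C_STEAM * 20,
    }

segments = heating_curve_segments(1.0)
# Melting 1 g of 0 °C ice absorbs 334 J -- nearly as much as warming the
# resulting liquid all the way from 0 to 100 °C (418 J). That "flat" jump
# is why ice is so effective at cooling a drink, and why boiling (2260 J/g)
# takes so much longer than bringing the kettle up to temperature.
```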

There’s also something neat about another common material related to phase transitions. The transition between a liquid and a glass state does not have a latent heat. This is the one thing that makes me really sympathetic to the “glasses are just supercooled liquids” view. Interestingly, this also means that there’s really no such thing as a single melting temperature for a given glass, because the heating or cooling rate becomes very important.

But then the latter part of the article confused me, because it makes “state of matter” seem kind of arbitrary compared to “phase”, a term we use all the time in materials science (and, as you can see, both states and phases go through “phase” transitions). A phase is a region of material with consistent properties throughout it, and a material with the same composition can be in different phases but still in the same state. For instance, there actually is a phase of ice called ice IX, and the arrangement of the water molecules in it is different from that in conventional ice, but we would definitely still consider both to be solids. Switching between these phases, even though they’re in the same state, also requires some kind of energy change.

Or if you heat a permanent magnet above its critical temperature and cause it to lose its magnetization, that’s a phase transition of the second kind. That is, while the heat and magnetization may have changed continuously, the ease of magnetizing the material (which is related to the second derivative of the energy with respect to the strength of the magnetic field) has a jump at that point. Your material is still in a solid state and the atoms are still in the same positions, but it changed its magnetic state from permanent (ferromagnetic) to paramagnetic. So part of me is wondering whether we can consider that underlying electron behavior to be a definition of a state of matter or a phase. The article makes it sound like we’re fine saying they’re basically the same thing. This Wikipedia description of “fermionic condensates” as a means to describe superconductivity also supports this idea.
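In the standard Ehrenfest classification (my notation here, not the article’s), the two kinds of transition differ in which derivative of the free energy F is discontinuous:

```latex
% First-order transition (melting): a first derivative of F jumps,
% e.g. the entropy, which is what gives the latent heat L = T \Delta S
S = -\left(\frac{\partial F}{\partial T}\right)_{H}

% Second-order transition (demagnetization at the critical temperature):
% the first derivatives stay continuous, but a second derivative jumps,
% e.g. the magnetic susceptibility
\chi = \left(\frac{\partial M}{\partial H}\right)_{T}
     = -\left(\frac{\partial^{2} F}{\partial H^{2}}\right)_{T}
```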

Going by this description means we’re surrounded by way more states of matter than the usual four we consider. With solids alone, you regularly interact with magnetic metals, conductors (metals or the semiconductors in your electronics), insulating solids, insulating glasses, and magnetic glasses (amorphous metals are used in most of those theft-prevention tags you see), which all have different electron behaviors. It might seem slightly unsatisfying for something that sounds as fundamental as “states of matter” to end up having so many different categories, but it just reflects an increasing understanding of the natural world.

Carbon is Dead. Long Live Carbon?

Last month, the National Nanotechnology Initiative released a report on the state of commercial development of carbon nanotubes. And that state is mostly negative. (Which pains me, because I still love them.) If you’re not familiar with carbon nanotubes, you might know of their close relative, graphene, which has been in the news much more since the Nobel Prize awarded in 2010 for its discovery. Graphene is essentially a single layer of the carbon atoms found in graphite. A carbon nanotube can be thought of as rolling up a sheet of graphene into a cylinder.

On the left is a hexagonal pattern, representing the arrangement of atoms in graphene; an arrow in the middle of the image points to a nanotube on the right.

Visualizing a single-walled (SW) carbon nanotube (CNT) as the result of rolling up a sheet of graphene.

If you want to use carbon nanotubes, there are a lot of properties you need to consider. Nearly 25 years after their discovery, we’re still working on controlling a lot of these properties, which are closely tied to how we make the nanotubes.

Carbon nanotubes have six major characteristics to consider when you want to use them:

  • How many “walls” does a nanotube have? We often talk about the single-walled nanotubes you see in the picture above, because their properties are the most impressive. However, it is much easier to make large quantities of nanotubes with multiple walls than single walls.
  • Size. For nanotubes, several things come into play here.
    • The diameter of the nanotubes is often related to chirality, another important aspect of nanotubes, and can affect both mechanical and electrical properties.
    • The length is also very important, especially if you want to incorporate the nanotubes into other materials or if you want to directly use nanotubes as a structural material themselves. For instance, if you want to add nanotubes to another material to make it more conductive, you want them to be long enough to routinely touch each other and carry charge through the entire material. Or if you want that oft-discussed nanotube space elevator, you need really long nanotubes, because stringing a bunch of short nanotubes together results in a weak material.
    • And the aspect ratio of length to width is important for materials when you use them in structures.
  • Chirality, which can basically be thought of as the twist in how you roll up the graphene to get a nanotube (see the image below). If you think of rolling up a sheet of paper, you can roll it leaving the ends matched up, or you can roll it at an angle. Chirality is incredibly important in determining the way electricity behaves in nanotubes, and whether a nanotube behaves like a metal or like a semiconductor (like the silicon in your computer chips). It also turns out that the chirality of nanotubes is related to how they grow when you make them.
  • Defects. Any material is always going to have some deviation from an “ideal” structure. In the case of carbon nanotubes, missing or extra carbon atoms can replace a few of the hexagons of the structure with pentagons or heptagons. Or impurity atoms like oxygen may end up incorporated into the nanotube. Defects aren’t necessarily bad for all applications. For instance, if you want to stick a nanotube in a plastic, defects can actually help it incorporate better. But electronics typically need nanotubes of the highest purity.

A plane of hexagons is shown in the top left. Overlaid on the plane are arrows representing vectors. On the top right is a nanotube labeled (10, 0) zig-zag. On the bottom left is a larger (10, 10) armchair nanotube. On the bottom right is a larger (10, 7) chiral nanotube.

Some of the different ways a nanotube can be rolled up. The numbers in parentheses are the “chiral vector” of the nanotube and determine its diameter and electronic properties.
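The connection between the chiral vector and electronic behavior follows a simple counting rule. Here’s a minimal sketch using the standard (n − m) mod 3 rule and diameter formula from nanotube theory (these are textbook results, not something from the report):

```python
# Classify a single-walled nanotube by its chiral vector (n, m):
# metallic when (n - m) is divisible by 3, semiconducting otherwise.
import math

A_CC = 0.142                  # carbon-carbon bond length, nm
A = A_CC * math.sqrt(3)       # graphene lattice constant, ~0.246 nm

def nanotube_character(n, m):
    """Return (electronic character, estimated diameter in nm)."""
    kind = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    diameter = A * math.sqrt(n**2 + n * m + m**2) / math.pi
    return kind, round(diameter, 2)

# The three tubes in the figure above:
# (10, 0) zig-zag   -> semiconducting
# (10, 10) armchair -> metallic (all armchair tubes are)
# (10, 7) chiral    -> metallic
```

This rule is also why a random batch of chiralities is such a problem for electronics: roughly a third of the possible (n, m) rollings come out metallic and the rest semiconducting.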

Currently, the methods we have to make large amounts of CNTs result in a mix of ones with different chiralities, if not also different sizes. (We have gotten much better at controlling diameter over the last several years.) For mechanical applications, the former isn’t much of a problem. But if you have a bunch of CNTs of different conductivities, it’s hard to use them consistently for electronics.

But maybe carbon nanotubes were always doomed once we discovered graphene. Working from the idea of a CNT as a rolled-up graphene sheet, you may realize that there are way more factors that can vary in a CNT than in a single flat flake of graphene. When working with graphene, there are just three main factors to consider:

  • Number of layers. This is similar to the number of walls of a nanotube. Scientists and engineers are generally most excited about single-layer graphene (which is technically the “true” graphene). The electronic properties change dramatically with the number of layers, and somewhere between 10 and 100 layers, you’re not that different from graphite. Again, the methods that produce the most graphene produce multi-layer graphene. But all the graphene made in a single batch will generally have consistent electronic properties.
  • Size. This is typically just one parameter, since most methods to make graphene result in roughly circular, square, or other equally shaped patches. Also, graphene’s properties are less affected by size than CNTs.
  • Defects. This tends to be pretty similar to what we see in CNTs, though in graphene there’s a major question of whether you can use an oxidized form or need the pure graphene for your application, because many production methods make the former first.

Single-layer graphene also has the added quirk of its electrical properties being greatly affected by whatever lies beneath it. However, that may be less of an issue for commercial applications, since whatever substrate is chosen for a given application will consistently affect all graphene on it. In a world where we can now make graphene in blenders or just fire up any carbon source, ranging from Girl Scout cookies to dead bugs, and let it deposit on a metal surface, it can be hard for nanotubes to sustain their appeal when growing them requires the additional steps of catalyst production and placement.

But perhaps we’re just leaving a devil we know for a more hyped devil we don’t. Near the end of last year, The New Yorker had a great article on the promises we’re making for graphene, the ones we made for nanotubes, and about technical change in general, which points out that we’re still years away from widespread adoption of either material for any purpose. In the meantime, we’re probably going to keep discovering other interesting nanomaterials, and just like people couldn’t believe we got graphene from sticky tape, we’ll probably be surprised by whatever comes next.

Grace Hopper, Pioneer of the Computing Age

A white woman in a naval dress uniform is pictured. Her arms are crossed.

If you’re seeing this on any kind of computing device on International Women’s Day, you should thank Dr. Grace Hopper, rear admiral of the US Navy. Hopper created the first compiler, which allowed for computer programming to be done in code that could more closely resemble human language instead of the essentially numerical instructions that work at the level of the hardware.

These “higher level” languages are what are typically used to create all the various programs and apps we use every day. What have you done today? Word processing? Photo editing? Anything beyond math was considered outside the domain of computers when Hopper started work.

Scientists and “Being Smart”, part 2: “Geekery” and What Happened to Technical Knowledge

For the purposes of this article, I’m treating “nerdiness” and “geekiness” as the same thing. If that bothers you, there are millions of other pages on the Internet that care about this difference. Also, I’m sort of abusing “technical” here, but bear with me.

I loved Chad Orzel’s quote from last time, and I wanted to dissect one part a bit more:

We’re set off even from other highly educated academics — my faculty colleagues in arts, literature, and social science don’t hear that same “You must be really smart” despite the fact that they’ve generally spent at least as much time acquiring academic credentials as I have. The sort of scholarship they do is seen as just an extension of normal activities, whereas science is seen as alien and incomprehensible.

In particular, I wanted to point this out in the context of a sort of backlash against the idea that nerdiness/geekiness should be embraced as some part of science communication. Here’s the thing that bothers me about those pieces: as long as our society views specialized knowledge of STEM as less cultured than equally specialized knowledge in the humanities, it will probably always be seen as intrinsically nerdy just to have studied science and engineering. For argument’s sake, I actually do have something in mind, based on comparing courses in different departments at Rice and UVA. As an example of some basic idea of specialized scientific knowledge, I’m thinking of a typical sophomore modern physics class that includes a mostly algebra-based introduction to relativity or single-variable quantum mechanics. For a roughly equivalent idea of specialized humanities knowledge, courses at a similar level include a first course on metaphysics in philosophy and English courses focused on single authors. Quote Chaucer at a cocktail party? Congrats, you’re culturally literate! Mention that quantum mechanics is needed to describe any modern digital electronic device or that GPS requires relativistic corrections? I hate to disagree with someone doing work as cool as Tricia Berry, but sorry, you will almost certainly be considered a nerd for knowing that.

Should we care about this? Yes. It’s the same impulse that lets Martin Eve write off science and engineering open access advocates as just some corporatist movement, or maybe just useful idiots of some other cultural force, and not some meaningful aspect of how scientists and engineers themselves want to approach the broader culture. And I don’t think this is new. CP Snow wrote about the “two cultures” over 50 years ago, complaining about the increasing division between literary culture and science and technology. I just think that instead of ignoring scientists, which was what worried Snow, we now laud them in a way removed from mainstream culture by putting them in some geek/nerd offshoot. We see this in media about science. Scientists in movies are almost never full people with rich emotional and social lives, because, as this review of the Alan Turing biopic The Imitation Game points out, the convention is nearly always that they are more like machines trying to get along with humans. (I also feel sort of justified in this idea when an English PhD at least partially agreed with me when I argued that Bill Gates or Steve Jobs might count as “Renaissance men” today, but culture seems uncomfortable applying that label to contemporary people whose original background was primarily technical.)

As I was writing this, I realized this may be a broadening trend that separates technical knowledge from its own related fields even outside of science and engineering. Consider the distinction between how people discuss politics and policy. I know they’re not equivalent, but it seems interesting to me that reading some theorist mainly approached in senior-level political science or philosophy makes you cultured, while trying to use anything beyond intro economics to talk about policy implementation seems to be unquestionably “wonky”. And I say that as someone with virtually no econ or policy training. Heck, Ezra Klein practically owns the idea of being a wonk, and he’s not an economist.

Over winter break, I got the chance to see a friend from high school who is currently working towards a master’s in public administration. We’re at similar stages in our graduate programs, and we talked about what we each studied. She has her own deep technical knowledge in her field, but she commented that people often didn’t understand the idea of scientific management as a discipline and didn’t seem to appreciate that someone could actually systematically study team hierarchies and suggest better ways to organize. I think part of that is what I touched on in the first part of my rant and Orzel’s idea that people just seem to think of humanistic studies as “extensions of normal”. But I also think part of that is some cultural lack of interest in, and understanding of, technical knowledge.

I don’t want to fall into some stereotypical scientist trap and write off ideas of fundamental truths or downplay the importance of ethics, culture, and other things generally considered liberal arts or humanistic. I just think that if Snow were writing today, he might say that “intellectual” has become an even narrower category, one that no longer recognizes the idea of doing something with that intellect. And that seems like a real problem.

Public Health Involves Science Communication, Too

This tweet has been making the rounds on social media lately.

I actually think the tweet is funny, but I’m really tired of the way the media seems to treat it as an actual policy critique. Cutting off flights would have seriously hampered the Ebola response. But there is in fact a different policy for travel to and from regions with vaccine-preventable outbreaks: you are often recommended to get the vaccine before travelling there, and if you are from a region with an outbreak, you may be asked to prove you have been immunized. It would be perfectly reasonable for Nigeria and other countries to demand that American travellers prove they are vaccinated against measles as part of obtaining a visa. That policy isn’t possible for diseases that lack both a vaccine and an effective, standard treatment.

And this has become an increasing concern of mine with so much of the coverage of the measles outbreak. There is actually a well-documented literature on effective science communication, but based on news articles, you wouldn’t know it exists. The idea that science communication is only about filling people’s heads with scientific knowledge (the “bucket model”) has been discredited for over 20 years. Treating your audience snarkily, like they know nothing (or really, treating your actual, narrow audience like they’re geniuses and everyone in the outgroup like they’re insane), has never really been shown to be effective in technical matters, despite being half the business model of Mic and Gawker.