Why Can’t You Reach the Speed of Light?

A friend from high school had a good question that I wanted to share:
I have a science question!!! Why can’t we travel the speed of light? We know what it is, and that its constant. We’ve even seen footage of it moving along a path (it was a video clip I saw somewhere) [Edit to add: there are now two different experiments that have done this: one that requires multiple repeats of the light pulse, and a newer technique that can work with just one]. So, what is keeping us from moving at that speed? Is it simply an issue of materials not being able to withstand those speeds, or is it that we can’t even propel ourselves or any object fast enough to reach those speeds? And if its the latter, is it an issue of available space/distance required is unattainable, or is it an issue of the payload needed to propel us is simply too high to calculate/unfeasable (is that even a word?) for the project? Does my question even make sense? I got a strange look when I asked someone else…
This question makes a lot of sense, actually, because when we talk about space travel, people often use light-years to describe the vast distances involved and point out how slow our own methods are in comparison. But it turns out the roadblock is fundamental, not just practical. We can’t reach the speed of light, at least in our current understanding of physics, because relativity says it is impossible.

To put it simply, anything with mass can’t reach the speed of light. This is because E = mc² works in both directions. The equation means that the energy of something is its mass times the speed of light squared. In chemistry (or a more advanced physics class), you may have talked about the mass defect of some radioactive compounds. The mass defect is the difference in mass before and after certain nuclear reactions; that missing mass was actually converted into energy. (This energy is what is exploited in nuclear power and nuclear weapons. Multiplying by the speed of light squared means even a little mass equals a lot of energy. The Little Boy bomb dropped on Hiroshima held 140 pounds of uranium, and no more than two pounds of that are believed to have undergone fission to produce the nearly 16 kiloton blast.)
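If you want to see how dramatic that conversion factor is, here’s a rough back-of-the-envelope sketch in Python. The ~0.1% figure for how much of the fissioned uranium’s mass actually gets converted to energy is my own rough assumption for illustration, not a number from this post:

```python
# Back-of-the-envelope check of E = m c^2 for the Hiroshima example above.
# Assumption (not from the post): roughly 0.1% of the mass of the uranium that
# fissions is converted to energy; the rest remains as fission products.

C = 2.998e8               # speed of light, m/s
KG_PER_LB = 0.4536        # pounds to kilograms
J_PER_KILOTON = 4.184e12  # energy of one kiloton of TNT, in joules

fissioned_mass = 2 * KG_PER_LB          # ~2 lb of uranium underwent fission
converted_mass = fissioned_mass * 1e-3  # assumed ~0.1% mass defect

energy = converted_mass * C**2          # E = m c^2
print(f"Energy released: {energy:.2e} J "
      f"(~{energy / J_PER_KILOTON:.0f} kilotons of TNT)")
# Prints roughly 8e13 J, about 20 kilotons: the same ballpark as the ~16 kt
# yield quoted above, from less than a gram of mass actually converted.
```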

But it also turns out that as something with mass goes faster, its kinetic energy also acts like extra mass. This “relativistic mass” grows rapidly as you approach the speed of light. So the faster something gets, the heavier it becomes and the more energy you need to accelerate it further. It’s worth pointing out that the accelerating object hasn’t actually gained material – if your spaceship was initially, say, 20 moles of unobtanium, it is still 20 moles of material even at 99% of the speed of light. Instead, the increase in “mass” comes from the geometry of spacetime as the object moves through it. In fact, this is why some physicists don’t like using the term “relativistic mass” and would prefer to focus on the relativistic descriptions of energy and momentum. What’s also really interesting is that the math underlying this in special relativity also implies that anything that doesn’t have mass HAS to travel at the speed of light.
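For the formula-inclined, here is the standard special-relativity bookkeeping behind both of those statements (just the textbook relations, nothing specific to this post):

```latex
% Relativistic energy and momentum for a particle of (rest) mass m:
E = \gamma m c^2, \qquad p = \gamma m v, \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
% These combine into the invariant relation
E^2 = (pc)^2 + (mc^2)^2
% As v -> c, gamma -> infinity, so a massive particle would need infinite
% energy to reach the speed of light. Setting m = 0 instead forces E = pc,
% which is only consistent with a particle moving at exactly v = c.
```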

A graph with the X-axis showing speed relative to light and the Y-axis showing energy. A line representing the kinetic energy of the object climbs steeply as it approaches light speed.

The kinetic energy of a 1 kg object at various fractions of the speed of light. For reference, 10^18 J is about a tenth of the United States’ annual electrical energy consumption.

The graph above represents the (relativistically corrected) kinetic energy of a 1 kilogram (2.2 pound) object at different speeds. You can basically think of it as how much energy you need to impart to the object to reach that speed. In the graph, I started at one ten-thousandth the speed of light, which is about twice the speed the New Horizons probe was launched at, and I ended at 99.99% of the speed of light. Going to 99.999% of the speed of light would have pushed the maximum up by roughly another factor of three.
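If you’d like to reproduce the numbers behind the graph, here’s a minimal Python sketch of the same calculation; the handful of speeds are just sample points:

```python
# Relativistic kinetic energy KE = (gamma - 1) m c^2 for a 1 kg object,
# over the same range of speeds shown in the graph above.
import math

C = 2.998e8   # speed of light, m/s
MASS = 1.0    # kg

def kinetic_energy(beta, mass=MASS):
    """Relativistic kinetic energy for speed v = beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass * C**2

for beta in [1e-4, 0.01, 0.1, 0.5, 0.9, 0.99, 0.9999]:
    print(f"v = {beta:>8.4%} of c  ->  KE = {kinetic_energy(beta):.3e} J")
# At 0.01% of c the result matches the Newtonian (1/2) m v^2 almost exactly;
# at 99.99% of c it is around 6e18 J, comparable to the 10^18 J reference above.
```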
Edit to add (9/12/2017): A good video from Fermilab argues against relativistic mass, but concedes it helps introduce relativity to more people.

So What is Materials Science (and Engineering)?

So this is my 100th post, and I felt like it should be kind of special. I want to cover a question I get a lot, and one that’s important to me: what exactly is materials science? My early answer was that “it’s like if physics and chemistry had a really practical baby.” One of my favorite versions is a quote from this article on John Goodenough, one of the key figures in making rechargeable lithium-ion batteries: “In hosting such researchers, Goodenough was part of the peculiar world of materials scientists, who at their best combine the intuition of physics with the meticulousness of chemistry and pragmatism of engineering”. Which is a much more elegant (and somewhat ego-boosting) way of wording my description. In one of my first classes in graduate school, my professor described materials science as “the study of defects and how to control them to obtain desirable properties”.

A more complete definition is some version of the one that shows up in most introductory lessons: materials science studies the relationships between the structure of a material, its properties, its performance, and the way it was processed. This is often represented as the “materials science tetrahedron”, shown below, which turns out to be something people really love to use. (You also sometimes see characterization floating in the middle, because it applies to all of these aspects.)

A tetrahedron with blue points at the vertices. The top is labelled structure; the bottom three are properties, processing, and performance.

The materials science tetrahedron (with characterization floating in the middle).

Those terms may sound meaningless to you, so let’s break them down. In materials science, structure goes beyond that of chemistry: it’s not just the makeup of an atom or molecule that affects a material; how the atoms and molecules are arranged together also has a huge effect on how it behaves. You’re probably familiar with one common example: carbon and its various allotropes. The hardness of diamond is partially attributed to its special crystal structure, while graphite is soft because its layers slide easily across each other. Another factor is the crystallinity of a material. Not all materials you see are monolithic pieces; many are made of smaller crystals we call “grains”, and the size and arrangement of these grains can be very important. For instance, the silicon in electronics is grown in such a way as to guarantee it will always be one single crystal, because boundaries between grains would ruin its electronic properties. Turbine blades in jet engines are single crystals, while the steels used in structures are polycrystalline.

On the top is a diamond and a piece of graphite. On the bottom are their crystal structures.

Diamond and its crystal structure are on the left; graphite and its structure are on the right.

Processing is what we do to a material before it ends up being used. This is more than just isolating the compounds you’ll use to make it. In fact, for some materials, processing actually involves adding impurities. Pure silicon wouldn’t be very effective in computers. Instead, silicon is “doped” with phosphorus or boron atoms and the different doping makes it possible to build various electronic components on the same piece. Processing can also determine the structure – temperature and composition can be manipulated to help control the size of grains in a material.

A ring is split into 10 different sections. Going counterclockwise from the top, each segment shows smaller crystals.

The same steel, with different size grains.

Properties and performance are closely related, and the distinction can be subtle (and honestly, it isn’t something we distinguish that much). One idea is that properties describe the essential behavior of a material, while performance reflects how that translates into its use, or the “properties under constraints”. This splits “materials science and engineering” into materials science, focused on properties, and materials engineering, focused on performance. But that distinction can get blurred pretty quickly, especially if you look at different subfields. Someone who studies mechanical properties might say that corrosion is a performance issue, since it limits how long a material can be used at its desired strength. Talk to my colleagues next door in the Center for Electrochemical Science and Engineering, and almost all of them would consider corrosion to be a property of materials. Regardless, both of these depend on structure and processing. Turbine blades in jet engines are single crystals because this helps them resist creep and fatigue over time. Structural steels are polycrystals because this makes them stronger.

Now that I’ve thought about it more, I realize the different parts of the tetrahedron explain the different ways we define materials science and engineering. My “materials science as applied physics and chemistry” view reflects the range of scales of structure we talk about, from the atoms that are typically chemistry’s domain, to their crystal arrangement, to the larger crystal as a whole, where I can talk about the mechanics of atoms and grains. The Goodenough description separates materials science from physics and chemistry through the performance-driven lens of pragmatism. My professor’s focus on defects comes from the processing part of the tetrahedron.

The tetrahedron also helps define the relationship of materials science and engineering to other fields. First, it helps limit what we call a “material”: our notions of structure and processing are very different from those of chemical engineers, and they work best on solids. It also helps define the limits of the field. Our structures aren’t primarily governed by quantum effects, and we generally want defects, so we’re not redundant with solid-state physics. And when we talk about mechanics, we care a lot about the microstructure of the material and rarely venture into the large-scale continuum treatments of mechanical and civil engineers.

At the same time, the tetrahedron also explains how interdisciplinary materials science is and can be. That makes sense, because the tetrahedron was developed to help unify materials science. A hundred years ago, “materials science” wouldn’t have meant anything to anyone. People studying metallurgy and ceramics were in their own, mostly separate disciplines. The term “semiconductor” was only coined in a PhD dissertation in 1910, and polymers were still believed to be aggregates of molecules attracted to each other rather than the long chains we know them to be today. The development of crystallography and thermodynamics helped tie all these together by letting us define structures, where they come from, and how we change them. (Polymers are still a bit weird in many materials science departments, but that’s a post for another day.)

Each vertex is also a key branching-off point for working with other disciplines. Our idea of structure isn’t redundant with chemistry and physics; they build off each other. Atomic orbitals help explain why atoms end up in certain crystal structures. Defects end up being important in catalysts. Or we can look at structures that already exist in nature as inspiration for our own designs: one of my professors explained how he once did a project studying turtle shells from the atomic to the macroscopic level, justifying it as a way to design stronger materials. Material properties put us in touch with anyone who wants our materials to go into their end products, from people designing jet engines to surgeons who want prosthetic implants, and they have us talking to physicists and chemists to see how different properties emerge from structures.

This is what attracted me to materials science for graduate school. We can frame our thinking around each vertex, but we’re also expected to shift between them. We can think about structures on a multitude of scales. (I joke that being a bad physics major translates into being great at most of the physics I actually need these days.) The paradigm helps us approach all materials, not just the ones we personally study. Thinking with different applications in mind forces me to learn new things all the time. (When biomedical engineers sometimes try to claim they’re the “first” interdisciplinary field of engineering to come on the scene, I laugh, thinking they forget materials science has been around for decades. Heck, now I have 20 articles I want to read about the structure of pearl to help with my new research project.) It’s an incredibly exciting field to be in.

What is a State of Matter?

This Vice article excitedly talking about the discovery of a new state of matter has been making the rounds a lot lately. (Or maybe it just seems that way because I recently started following Motherboard on Twitter after a friend pointed the article out.) That led to two good questions: What is a state of matter? And how do we know we’ve found a new one? We’ll consider the second one another time.

In elementary school, we all learned that solid, liquid, and gas are different states of matter (and maybe, if your science class was ahead of the curve, you talked about plasma). Recent scientific research has also focused a lot on two other states of matter: the exotic Bose-Einstein condensate, which is used in many experiments to slow down light, and the quark-gluon plasma, which closely resembles the universe in the first few milliseconds after the Big Bang. What makes all of these different things states? Essentially, a state of matter is the collective behavior of the particles that make up your object of interest, and each state behaves so differently that you can’t apply the description of one to another. A crucial point here is that only a collection of particles, and typically a very large number of them, can be said to be in a state. If you somehow have only one molecule of water, it doesn’t really work to say whether it is solid, liquid, or gas, because there’s no other water for it to interact with to develop collective properties.

So room-temperature gold and ice are solids because they’re described by regular, repeating crystal lattices. Molten gold and liquid water are no longer solids because they’re no longer described by a regular crystal structure, though they still have relatively strong forces between the atoms or molecules. The key property of a Bose-Einstein condensate is that most of its constituent particles are in the lowest possible energy state. You can tell when you’ve switched states (a phase transition) because there is a discontinuous change in energy, or in a property related to energy. In everyday life, this shows up as the latent heat of melting and the latent heat of vaporization (or evaporation).

The latent heat of melting is what makes ice so good at keeping drinks cold. It’s not just that ice is colder than the liquid; the discontinuous “jump” of energy required to actually melt 32°F ice into 32°F water also absorbs a lot of heat. You can see this jump in the heating curve below. You also see this when you boil water. Just heating water to 212 degrees Fahrenheit doesn’t cause it all to boil away; your kettle or stove also has to provide enough heat to overcome the heat of vaporization, and that heat doesn’t raise the temperature until the phase transition is complete. You can try this for yourself in the kitchen with a candy thermometer: melting ice will always measure 32 F, even if you threw it in the oven, and boiling water will always measure 212 F.

A graph with the horizontal axis labelled "Q (heat added)" and the vertical axis labelled "temperature (in Celsius)". It shows three sloped segments labelled, from the left, "ice", "heating of water", and "heating of water vapor". The sloped lines for "ice" and "heating of water" are connected by a flat line representing heat used to melt ice into water. The "heating of water" and "heating of water vapor" lines are connected by a flat line labelled "heat used to vaporize water to water vapor".

The heating curve of water. The horizontal axis represents how much heat has been added to a sample of water. The vertical axis shows the temperature. The flat lines are where heat is going into the latent heat of the phase transition instead of raising the temperature of the sample.
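To put rough numbers on those flat regions, here’s a small Python sketch using standard handbook values for water; the starting point of -20 °C is just an arbitrary choice for illustration:

```python
# Rough energy budget for taking 1 kg of ice at -20 C all the way to steam
# at 100 C, using standard handbook values for water.
MASS = 1.0          # kg
C_ICE = 2100.0      # J/(kg K), specific heat of ice
C_WATER = 4186.0    # J/(kg K), specific heat of liquid water
L_FUSION = 334e3    # J/kg, latent heat of melting
L_VAPOR = 2260e3    # J/kg, latent heat of vaporization

steps = [
    ("warm ice from -20 C to 0 C",   MASS * C_ICE * 20),
    ("melt ice at 0 C",              MASS * L_FUSION),
    ("warm water from 0 C to 100 C", MASS * C_WATER * 100),
    ("boil water at 100 C",          MASS * L_VAPOR),
]

for label, joules in steps:
    print(f"{label:<32s} {joules / 1e3:8.0f} kJ")
print(f"{'total':<32s} {sum(j for _, j in steps) / 1e3:8.0f} kJ")
# Boiling alone takes more than five times as much energy as heating the liquid
# from 0 C to 100 C: that's the latent heat showing up as the flat regions.
```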

There’s also something neat about another common material related to phase transitions: the transition between a liquid and a glass does not have a latent heat. This is the one thing that makes me really sympathetic to the “glasses are just supercooled liquids” view. Interestingly, this also means there’s really no such thing as a single glass transition temperature for a given glass, because the heating or cooling rate becomes very important.

But then the latter part of the article confused me, because to me it suggests that “state of matter” is kind of arbitrary compared to “phase”, which we talk about all the time in materials science (and, as you can see, we say both go through “phase” transitions). A phase is a region of material with consistent properties throughout, and a material with the same composition can be in different phases while still being in the same state. For instance, there actually is a phase of ice called ice IX, and the arrangement of the water molecules in it is different from that in conventional ice, but we would definitely still consider both to be solids. Switching between these phases, even though they’re in the same state, also requires some kind of energy change.

Or if you heat a permanent magnet above its critical temperature (the Curie temperature) and cause it to lose its magnetization, that’s what’s called a second-order phase transition: while the heat and magnetization change continuously, the ease of magnetizing the material (which is related to the second derivative of the energy with respect to the strength of the magnetic field) has a jump at that point. Your material is still in a solid state and the atoms are still in the same positions, but it has changed its magnetic state from ferromagnetic to paramagnetic. So part of me wonders whether we can consider that underlying electron behavior to be the definition of a state of matter or of a phase. The article makes it sound like we’re fine saying they’re basically the same thing. This Wikipedia description of “fermionic condensates” as a way to describe superconductivity also supports that idea.
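In symbols, this is just the textbook distinction between the first and second derivatives of the free energy (a general sketch, not anything specific to the article):

```latex
% Free energy F(T, H) of the magnet as a function of temperature and field H.
% First derivative: the magnetization, which stays continuous at the Curie point.
M = -\left( \frac{\partial F}{\partial H} \right)_T
% Second derivative: the susceptibility ("ease of magnetizing"), which jumps
% there, making this a second-order (continuous) phase transition.
\chi = \left( \frac{\partial M}{\partial H} \right)_T
     = -\left( \frac{\partial^2 F}{\partial H^2} \right)_T
```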

Going by this description then means we’re surrounded by way more states of matter than the usual four we consider. With solids alone, you interact with magnetic metals, conductors (metals or the semiconductors in your electronics), insulating solids, insulating glasses, and magnetic glasses (amorphous metals are used in most of those theft-prevention tags you see) on a regular basis, which all have different electron behaviors. It might seem slightly unsatisfying for something that sounds as fundamental as “states of matter” to end up having so many different categories, but it just reflects an increasing understanding of the natural world.

What Happens When You Literally Hang Something Out to Dry?

I got a question today!  A good friend from high school asked:

Hey! So I have a sciencey question for you. But don’t laugh at me! It might seem kinda silly at first, but bear with me. Ok, how does water evaporate without heat? Like a towel is wet, so we put it in the sun to dry (tada heat!) but if its a kitchen or a bathroom towel that doesn’t see any particular increase in temp? How does the towel dry? What happens to the water? Does it evaporate but in a more mild version of the cycle of thinking?
It’s actually a really good question, and the answer depends on some statistical physics and thermodynamics. You know water is turning into water vapor all the time around you, but you can also see that these things clearly aren’t boiling away.

I’ve said before that temperature and heat are kind of weird, even though we talk about them all the time:

It’s not the same thing as energy, but it is related to that.  And in scientific contexts, temperature is not the same as heat.  Heat is defined as the transfer of energy between bodies by some thermal process, like radiation (basically how old light bulbs work), conduction (touching), or convection (heat transfer by a fluid moving, like the way you might see soup churn on a stove).  So as a kind of approximate definition, we can think of temperature as a measure of how much energy something could give away as heat.
The other key point is that temperature is only an average measure of energy; the molecules are all moving at different speeds (we touched on this at the end of this post on “negative temperature”). This turns out to be crucial, because it explains the distinction between boiling and evaporating a liquid. Boiling is when you heat a liquid to its boiling point, where the added energy overcomes the attractive forces holding the molecules together in a liquid. In evaporation, only the random molecules that happen to be moving fast enough to overcome those forces leave.
We can better represent this with a graph showing the probability of a molecule having a particular velocity or energy. (Here we’re using the Maxwell-Boltzmann distribution, which is technically meant for ideal gases but works as a rough approximation for liquids.) The bar on the right marks out an energy of interest, so here we’ll say it’s the energy needed for a molecule to escape the liquid (the vaporization energy). At every temperature, there will always be some molecules that happen to have enough energy to leave the liquid. Because the more energetic molecules leave first, this is also why evaporating liquids cool things off.
A graph with the x-axis labelled energy and the y-axis showing the fraction of molecules at that energy, with one curve for each of several temperatures.

Maxwell-Boltzmann distributions of the energy of molecules in a gas at various temperatures. From http://ibchem.com/IB/ibnotes/full/sta_htm/Maxwell_Boltzmann.htm
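To make the “tail of the distribution” idea concrete, here’s a small Python sketch that estimates what fraction of molecules exceed a threshold energy at two temperatures. The threshold and the temperatures are made-up illustrative values; only the Maxwell-Boltzmann form itself is standard:

```python
# Fraction of molecules whose kinetic energy exceeds a threshold E0, using the
# Maxwell-Boltzmann energy distribution f(E) = 2 sqrt(E/pi) (kT)^(-3/2) e^(-E/kT).
import numpy as np

K_B = 1.381e-23  # Boltzmann constant, J/K

def fraction_above(threshold, temperature, n_points=200_000):
    """Numerically estimate the fraction of molecules with energy above `threshold`."""
    kT = K_B * temperature
    energy = np.linspace(threshold, threshold + 60 * kT, n_points)
    pdf = 2.0 * np.sqrt(energy / np.pi) * kT**-1.5 * np.exp(-energy / kT)
    return float(np.sum(pdf) * (energy[1] - energy[0]))  # rectangle-rule integral

E0 = 10 * K_B * 300   # illustrative escape energy: 10 kT at room temperature
for T in (300, 330):  # roughly "hanging in the bathroom" vs "out in the sun"
    print(f"T = {T} K: fraction above threshold = {fraction_above(E0, T):.2e}")
# The warmer case more than doubles the fraction in the high-energy tail,
# which is why the sun-warmed towel dries faster.
```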

You might wonder why, if your glass of water or a drenched towel is technically cooling off from evaporation, it still completely evaporates over time. It’s because the water keeps warming back up to room temperature, and molecular collisions keep redistributing the remaining molecules into a similar Boltzmann distribution.
My friend also makes a good observation in comparing putting the towel out in the sun versus hanging it in a bathroom. Infrared light from the sun will heat up the towel compared to one hanging around in your house, and you can see in the graph that at hotter temperatures, more molecules exceed the vaporization energy, so evaporation will be faster. (In cooking, this is also why you raise the heat, but don’t need to boil a liquid, to make a reduction.)

There’s another factor that’s really important in evaporation compared to boiling. You can only have so much water in a region of air before it starts condensing back into a liquid (when you see dew or fog, there’s so much water vapor that it re-accumulates into drops faster than they can evaporate). So if it’s really humid, this process goes slower. This is also why people can feel so hot in a steam room or sauna: when the air is nearly saturated with water vapor, sweat can’t evaporate to cool them off.

New Page at the Top!

So there’s been something I’ve been wanting to start for a while, but I waited until I learned more about how WordPress works (I am clearly not a web designer yet). You’ll notice a new link at the top titled “Trivial Explanations” (and also a new blog category with that name). In addition to posts where I look at science or its applications in the news, I also want to start giving some explanations of the concepts that are commonly referred to in science and tech journalism without much explanation. For instance, in the post on masers, many of the articles I linked to mentioned that masers could be good amplifiers for cell phones but didn’t explain why, while I only briefly mentioned that the relevant property was stimulated emission (the “SE” in LASER or MASER). I also plan on explaining things that might not be tied to any particular application but just appear in lots of papers anyway, to build up your background for seeing how things might be related (for example, many materials are “doped”, but that is almost never explained in news pieces because it’s such a basic step in the process). If you follow XKCD, think of it as a more practical (or less awesome) version of What If?

With that in mind, I need things to explain! So if you can think of anything you hear about in the news or in your life that you don’t understand, please send me your requests for desired explanations. For now, leave a comment on this post. I’ll try to come up with a better system in the future. (I’m a bit paranoid about linking my email)

The Gift of Knowledge

As a belated Christmas gift to readers of my blog (thank you!), I thought I would help answer what is probably one of the most nerd-sniping comics from XKCD. It turns out the answer is more biology than physics. The girl in the comic is right to question her mother’s answer: violet is a shorter wavelength than blue (if you remember your Roy G. Biv acronym ordering the colors of the rainbow, you’ve also listed the colors in order of decreasing wavelength). It turns out that indigo and violet light are actually scattered quite well in the sky; you just don’t see them because your eyes aren’t ideal detectors of light wavelength. The reason you can observe different colors is that you have different cone cells in your eye, and each kind reacts to a different part of the visible spectrum. We only have three, though, and they don’t respond to all the colors equally.
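As a quick sanity check on the “violet is scattered well too” claim, here’s a tiny sketch. It assumes simple Rayleigh scattering (the usual explanation for sky color, though the post doesn’t name it), and the two wavelengths are representative picks rather than values from the post:

```python
# Rayleigh scattering strength scales roughly as 1 / wavelength^4, so shorter
# wavelengths scatter more strongly. Representative wavelengths, in nanometers:
BLUE_NM = 450.0
VIOLET_NM = 400.0

ratio = (BLUE_NM / VIOLET_NM) ** 4
print(f"Violet is scattered about {ratio:.1f}x as strongly as blue.")
# ~1.6x, so the scattering itself slightly favors violet; the post's point is
# that the eye's cone response is what keeps the sky looking blue to us.
```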

Response of cone cells to light; each curve is a different kind of cone cell.

The blue curve, which represents the cone that picks up short wavelengths, is most sensitive to light at around 450 nm. That wavelength is at the shorter end of what we consider “blue”, which goes up to about 500 nm; the still shorter wavelengths are “violet”. As you can see, the response drops off really quickly on the shorter end. This means that while there might be violet in the sky, you register the blue more strongly in a mix of violet and blue. The Georgia Tech researchers in the MSNBC article actually say you perceive a mix of blue and violet the same as a mix of blue and white. The article the graph comes from also talks more about how the combination of different color sources affects your perception.

How we see color is actually a very complicated subject and isn’t just a straight up application of optics.  Spectroscopy is a branch of physics that studies how different wavelengths of light interact with matter.  Photometry and colorimetry are branches of science that study how people can actually perceive these wavelengths.  This includes factors like the structural aspects of your eye (like the chemical/physical behavior of the cone cells) as well as processes that go on in your brain, as seen in the famous checker shadow illusion.