What is rheology?

Inspired by NaBloPoMo and THE CENTRIFUGE NOT WORKING RIGHT THE FIRST TIME SO I HAVE TO STAY IN LAB FOR THREE MORE HOURS THAN I PLANNED (this was more relevant when I tried writing this a few weeks ago), I’ll be trying to post more often this month. Though heaven knows I’m not even going to pretend I’ll get a post a day when I have a conference (!) to prepare for.

I figure my first post could be a better attempt at describing a major part of my research now – rheology and rheometers. The Greek root “rheo” (somewhat uncommon, unless you’re a doctor or med student who sees it pop up all the time in words like gonorrhea and diarrhea) means “flow”, and so the simplest definition is that rheology is the study of flow. (And I just learned the Greek Titan Rhea’s name may also come from that root, so oh my God, rheology actually does relate to Rhea Perlman.) But what does that really mean? And if you’ve tripped out on fluid mechanics videos or photos before, maybe you’re wondering “what makes rheology different?”


Oh my God, she is relevant to my field of study.

For our purposes, flow can mean any kind of material deformation, and we’re generally working with solids and liquids (or colloid mixtures involving those states, like foams and gels). Or if you want to get really fancy, you can say we’re working with (soft) condensed matter. Why not gas? We’ll get to that later. So what kind of flow behavior is there? There’s viscosity, which is what we commonly consider the “thickness” of a flowing liquid. Viscosity describes how a fluid resists relative motion between its component parts under a shearing force, but it doesn’t try to return the fluid to its original state. You can see this in cases where viscosity dominates over the inertia of something moving in the fluid, such as at 1:00 and 2:15 in this video; the shape of the dye drops is essentially pinned at each point by how much the inner cylinder moves, and you don’t see the fluid move back until the narrator manually reverses the cylinder.

The other part of flow is elasticity. It might sound weird to think of a fluid as elastic. While you really don’t see elasticity in pure fluids (unless maybe the force is ridiculously fast), you do see it a lot in mixtures. Oobleck, the ever-popular mixture of cornstarch and water, becomes elastic as part of its shear-thickening behavior. (Which, it turns out, we still don’t have a great physical understanding of.)
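Since “shear-thickening” just means viscosity that grows with shear rate, a quick sketch using the power-law (Ostwald–de Waele) model shows the idea. The consistency K and index n values here are made-up placeholders for illustration, not measurements of any real fluid:

```python
# Power-law model for shear-dependent viscosity: eta = K * rate**(n - 1).
# n < 1 gives shear-thinning (like paint), n = 1 is Newtonian,
# n > 1 gives shear-thickening (Oobleck-like). K and n are placeholders.

def apparent_viscosity(shear_rate, K, n):
    """Apparent viscosity (Pa*s) of a power-law fluid at a given shear rate (1/s)."""
    return K * shear_rate ** (n - 1)

for rate in (0.1, 1.0, 10.0):
    thickening = apparent_viscosity(rate, K=1.0, n=1.5)  # gets "thicker" when sheared
    thinning = apparent_viscosity(rate, K=1.0, n=0.5)    # gets "thinner" when sheared
    print(f"rate={rate:5.1f} 1/s  thickening={thickening:.3f}  thinning={thinning:.3f}")
```

The same formula covers both behaviors; only the exponent changes, which is part of why rheologists like plotting viscosity against shear rate on log-log axes.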

 

You can think of viscosity as the “liquid-like” part of a substance’s behavior and elasticity as the “solid-like” part. Lots of mixtures (and even some pure substances) show both parts as “viscoelastic” materials. And this helps explain the confusion when you’re younger (or at least younger-me’s questions) of whether things like Jell-O, Oobleck, or raw dough are “really” solid or liquid. The answer is sort of “both”. More specifically, we can look at the “dynamic modulus” G at different rates of applied force. G has two components – G’ is the “storage modulus”, representing the elastic/solid part, and G” is the “loss modulus”, representing the viscous/liquid part.

The dynamic moduli of Silly Putty at different rates of stress.

Whichever modulus is higher is what mostly describes a material. So in the flow curve above, the Silly Putty is more like a liquid at low rates/frequencies of stress (which is why it spreads out when left on its own), but more like a solid at high rates (which is why it bounces if you throw it fast enough). What’s really interesting is that the absolute size of either modulus doesn’t really matter, just which one is higher. So even flimsy shaving cream behaves like a solid at rest (seriously, it can support hair or other light objects without settling) while house paint behaves like a liquid, because even though paint tends to have higher moduli overall, the shaving cream’s storage modulus is still higher than its own loss modulus.
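That shaving-cream-versus-paint comparison boils down to a simple check, sketched here with invented placeholder numbers (not real measurements of either material):

```python
# Toy classifier for viscoelastic behavior: whichever dynamic modulus
# dominates at a given test frequency tells you whether the material
# responds more like a solid (G' > G'') or a liquid (G'' > G').
# All modulus values below are made up for illustration.

def dominant_behavior(g_storage, g_loss):
    """Return 'solid-like' if the storage modulus G' dominates, else 'liquid-like'."""
    return "solid-like" if g_storage > g_loss else "liquid-like"

# Shaving cream at rest: small moduli, but G' > G'' -> holds its shape.
print(dominant_behavior(g_storage=200.0, g_loss=50.0))
# House paint at rest: larger moduli overall, but G'' > G' -> it flows.
print(dominant_behavior(g_storage=500.0, g_loss=2000.0))
```

The comparison is always internal to one material, which is exactly why a “weak” foam can be solid-like while a “thicker” paint is liquid-like.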

I want to publish this eventually, so I’ll get to why we do rheology and what makes it distinct in another post.


Thinking of the Urban as Natural


“Name everything you can think of that is alive.” This was the prompt given to three different groups of children: the Wichi, an indigenous tribe in the Gran Chaco forest, and rural and urban Spanish-speakers in Argentina. It might not surprise you that the indigenous children, who directly interact with wildlife, named the most plants and animals that lived nearby and were native to the region, often with very specific names. The rural children named a mixture of native Argentinian wildlife and animals associated with farming. But the urban children were very different from the others. They named only a few animals found in Argentina. Instead, they named significantly more “exotic” animals from forests and jungles in other countries and continents. This result has been replicated in multiple studies on child development. But we shouldn’t be so hard on the urban children.

This reflects a somewhat uncomfortable truth about how we learn. If you live in a city, you mainly learn about nature indirectly, through pop culture and formal science education. In both contexts, it is much easier to find information about “exotic” animals like lions or tigers instead of most of the organisms that make a home in the city. I think this is a symptom of a deeper cultural notion: that somehow cities are “fake” environments divorced from nature. I will argue that this distinction between the urban and natural is not only wrong, but also harmful to our society.

First, we should consider that this notion only makes sense relatively recently in history. Cities are young in a geological and even anthropological sense, but for as long as we have been building them as a species, they have been shaped by nature. We talk about “cradles of civilization” because they were places where the natural environment was well-suited to supporting early, complex social systems and their infrastructure. To use the literal Ur-example, consider the Fertile Crescent, the region around the Tigris and Euphrates rivers. It provided lush soil at several elevations, which supported the growth of a variety of crops and helped with irrigation. And many modern cities can still be traced back to earlier environmental decisions. I am from Louisville, a city by a stretch of the Ohio River that could not be passed by boat until the building of locks in the 1830s. The city was founded as a natural stopping point for travelers before they continued on to the Mississippi River.

Second, it seems incredibly alienating to argue most of humanity is “unnatural”. Since 2008, the majority of humans have lived in cities. By 2050, 70% of the global population will live in urban areas. We should not discourage the growth of cities or devalue them, when their more efficient use of resources and infrastructure is necessary to keep projected population growth sustainable. The smart development of cities recognizes they can help preserve other environments.

Finally, this urban-natural distinction distorts our understanding of the environmental and ecological processes that affect cities, and even our broader understanding of the environment. A recent study showed that insects help reduce food waste just as much as rodents in New York City – for every memeable “pizza rat” there’s an army of “pizza ants” getting rid of rotting food. Despite their importance, the renowned insect collection at New York’s American Museum of Natural History has almost no species native to the city. And since many city-dwellers, like the Argentinian children, only know about exotic species, this affects animal conservation efforts. Well-known “charismatic” species like pandas or rhinos have support all over the world. Few people are aware of endangered species in urban areas, and sometimes scientists don’t even know. For instance, relating to the above, an estimated 40% of insect species are endangered, but we don’t know if that number is different in cities.

Instead of rejecting the last few thousand years of our society’s development, we should (re)embrace cities as part of the broader natural world. Recognizing that cities can have their own rich ecological and environmental interactions can help us build urban spaces that are better for us humans, other city-dwelling creatures, and the rest of the world.

(Note: This post is based on a speech I gave as part of a contest at UVA, the Moomaw Oratorical Contest. And this year I won!)

I Have a Hard Time Summing Up My Science and Politics Beliefs Into a Slogan

From a half-joking, half-serious post of my own on Facebook:

“SCIENCE IS POLITICAL BECAUSE THERE’S A LOT OF INFLUENCE BY POLITICAL AND POWERFUL CULTURAL INSTITUTIONS, BUT NOT PARTISAN. AND ALSO SCIENTIFIC RESULTS AFFECT MORE OF OUR LIVES. BUT LIKE MAN, WE REALLY SHOULDN’T DO THE WHOLE TECHNOCRACY THING. BUT LIKE EVIDENCE SHOULD MATTER. BUT ALSO VALUES MATTER WHEN EVALUATING STUFF. IT’S COMPLICATED. HAS ANYONE READ LATOUR? OR FEYERABEND? CAN SOMEONE EXPLAIN FEYERABEND TO ME? DOES ANYONE WANT TO GET DRINKS AND TALK AFTER THIS?”

(xkcd: “The End is Not for a While”)

Evidently, I am the alt-text from this comic.

“HERE ARE SOME GOOD ARTICLES ABOUT PHILOSOPHY AND SOCIOLOGY OF SCIENCE” (I didn’t actually give a list, since I knew I would never really be able to put that on a poster, but some suggested readings if you’re interested: the Decolonizing Science Reading List curated by astrophysicist Chanda Prescod-Weinstein, a recent article from The Atlantic about the March for Science, a perspective on Doing Science While Black, the history of genes as an example of the evolution of scientific ideas, honestly there’s a lot here, and this is just stuff I shared on my Facebook page over the last few months.)
“LIKE HOLY SHIT Y’ALL EUGENICS HAPPENED”
“LIKE, MAN, WE STERILIZED A LOT OF PEOPLE. ALSO, EVEN BASIC RESEARCH CAN BE MESSED UP. LIKE TUSKEGEE. OR LITERALLY INJECTING CANCER INTO PEOPLE TO SEE WHAT HAPPENS. OR CRISPR. LIKE, JEEZ, WHAT ARE WE GOING TO DO WITH THAT ONE? SOCIETY HELPS DETERMINE WHAT IS APPROPRIATE.”
“I FEEL LIKE I’M GOING OFF MESSAGE. BUT LIKE WHAT EXACTLY IS THE MESSAGE HERE”
“I DON’T KNOW WHAT THE MESSAGE IS, BUT THESE ARE PROBABLY GOOD TO DO. ESPECIALLY IF THEY INSPIRE CONVERSATIONS LIKE THIS.”
“ALSO, DID YOU KNOW THAT MULTICELLULAR LIFE INDEPENDENTLY EVOLVED AT LEAST 10 TIMES ON EARTH? I’M NOT GOING ANYWHERE WITH THAT, I JUST THINK IT’S NEAT AND WE DON’T TYPICALLY HEAR THAT IN INTRO BIO.”

Reclaiming Science as a Liberal Art

What do you think of when someone talks about the liberal arts? Many of you probably think of subjects like English and literature, history, classics, and philosophy. Those are all a good start for a liberal education, but those are only fields in the humanities. Perhaps you think of the social sciences, to help you understand the institutions and actors in our culture; fields like psychology, sociology, or economics. What about subjects like physics, biology, chemistry, or astronomy? Would you ever think of them as belonging to the liberal arts, or would you cordon them off into the STEM fields? I would argue that excluding the sciences from the liberal arts is both historically wrong and harms society.

First, let’s look at the original conception of the liberal arts. Your study would begin with the trivium, the three subjects of grammar, logic, and rhetoric. The trivium has been described as a progression of study into argument. Grammar is concerned with how things are symbolized. Logic is concerned with how things are understood. Rhetoric is concerned with how things are effectively communicated, because what good is it to understand things if you cannot properly share your understanding with other learned people? With its focus on language, the trivium does fit the common stereotype of the liberal arts as a humanistic writing education.

But it is important to understand that the trivium was considered only the beginning of a liberal arts education. It was followed by the supposedly more “serious” quadrivium of arithmetic, geometry, music, and astronomy. The quadrivium is focused on number and can also be viewed as a progression. Arithmetic teaches you about pure numbers. Geometry looks at number to describe space. Music, as it was taught in the quadrivium, focused on the ratios that produce notes and the description of notes in time. Astronomy comes last, as it builds on this knowledge to understand the mathematical patterns in space and time of bodies in the heavens. Only after completing the quadrivium, when one would have a knowledge of both language and numbers, would a student move on to philosophy or theology, the “queen of the liberal arts”.


The seven liberal arts surrounding philosophy.

Although this progression might seem strange to some, it makes a lot of sense when you consider that science developed out of “natural philosophy”. Understanding what data and observations mean, whether they are from a normal experiment or “big data”, is a philosophical activity. As my professors say, running an experiment without an understanding of what I measured makes me a technician, not a scientist. Or consider alchemists, who included many great experimentalists who developed some important chemical insights, but are typically excluded from our conception of science because they worked with different philosophical assumptions. The findings of modern science also tie into major questions that define philosophy. What does it say about our place in the universe if there are 10 billion planets like Earth in our galaxy, or when we are connected to all other living things on Earth through chemistry and evolution?

We get the term liberal arts from Latin, artes liberales, the arts or skills that are befitting of a free person. The children of the privileged would pursue those fields. This was in contrast to the mechanical arts – fields like clothesmaking, agriculture, architecture, martial arts, trade, cooking, and metalworking. The mechanical arts were a decent way for someone without status to make a living, but still considered servile and unbecoming of a free (read “noble”) person. This distinction breaks down in modern life because we are no longer that elitist in our approach to liberal education. We think everyone should be “free”, not just an established elite.

More importantly, in a liberal democracy, we think everyone should have some say in how they are governed. Many major issues in modern society relate to scientific understanding and knowledge. To talk about vaccines, you need some understanding of the immune system. The discussion over chemicals is very different when you know that we are made up of chemicals. It is hard to understand what is at stake in climate change without knowing how Earth’s various geological and environmental systems work, and it is hard to evaluate solutions if you don’t know where energy comes from. And how can we talk about surveillance without understanding how information is obtained and distributed? The Founding Fathers said they had to study politics and war to win freedom for their new nation. As part of a liberal education, Americans today need to learn science in order to keep theirs.

(Note: This post is based on a speech I gave as part of a contest at UVA. It reflects a view I think is often unconsidered in education discussions, so I wanted to adapt it into a blog post.

As another aside, it’s incredibly interesting that people now tend to unambiguously think of the social sciences as part of the liberal arts while wavering more on the natural sciences, since the idea of a “social” science wasn’t developed until well after the conception of the liberal arts.)

Why Can’t You Reach the Speed of Light?

A friend from high school had a good question that I wanted to share:
I have a science question!!! Why can’t we travel the speed of light? We know what it is, and that it’s constant. We’ve even seen footage of it moving along a path (it was a video clip I saw somewhere) [Edit to add: there are now two different experiments that have done this. One that requires multiple repeats of the light pulse and a newer technique that can work with just one.] So, what is keeping us from moving at that speed? Is it simply an issue of materials not being able to withstand those speeds, or is it that we can’t even propel ourselves or any object fast enough to reach those speeds? And if it’s the latter, is it an issue of the available space/distance required being unattainable, or is it an issue of the payload needed to propel us being simply too high to calculate/unfeasable (is that even a word?) for the project? Does my question even make sense? I got a strange look when I asked someone else…
This question actually makes a lot of sense, because when we talk about space travel, people often use light-years to discuss the vast distances involved and point out how slow our own methods are in comparison. But it turns out the roadblock is fundamental, not just practical. We can’t reach the speed of light, at least in our current understanding of physics, because relativity says it is impossible.

To put it simply, anything with mass can’t reach the speed of light. This is because E = mc² works in both directions. The equation means that the energy of something is its mass times the speed of light squared. In chemistry (or a more advanced physics class), you may have talked about the mass defect of some radioactive compounds. The mass defect is the difference in mass before and after certain nuclear reactions; that missing mass was converted into energy. (This energy is what is exploited in nuclear power and nuclear weapons. Multiplying by the speed of light squared means even a little mass equals a lot of energy. The Little Boy bomb dropped on Hiroshima had 140 pounds of uranium, and no more than two pounds of that are believed to have undergone fission to produce the nearly 16 kiloton blast.)

But it also turns out that as something with mass goes faster, its kinetic energy also turns into extra mass. This “relativistic mass” greatly increases as you approach the speed of light. So the faster something gets, the heavier it becomes and the more energy you need to accelerate it. It’s worth pointing out that the accelerating object hasn’t actually gained material – if your spaceship was initially say 20 moles of unobtanium, it is still 20 moles of material even at 99% the speed of light. Instead, the increase in “mass” is due to the geometry of spacetime as the object moves through it. In fact, this is why some physicists don’t like using the term “relativistic mass” and would prefer to focus on the relativistic descriptions of energy and momentum. What’s also really interesting is that the math underlying this in special relativity also implies that anything that doesn’t have mass HAS to travel at the speed of light.

A graph with X-axis showing speed relative to light and Y-axis showing energy. A line representing the kinetic energy of the object rises steeply, diverging as it approaches light speed.

The kinetic energy of a 1 kg object at various fractions of the speed of light. For reference, 10^18 J is about a tenth of United States’ annual electrical energy consumption.

The graph above represents the (relativistically corrected) kinetic energy of a 1 kilogram (2.2 pound) object at different speeds. You can basically think of it as how much energy you need to impart to the object to reach that speed. In the graph, I started at one ten-thousandth the speed of light, which is about twice the speed at which the New Horizons probe was launched. I ended at 99.99% of the speed of light. Just going to 99.999% of the speed of light would have brought the maximum up another order of magnitude.
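For the curious, the curve in that graph comes from the relativistic kinetic energy formula KE = (γ − 1)mc², where γ = 1/√(1 − v²/c²). A minimal sketch:

```python
# Relativistic kinetic energy KE = (gamma - 1) * m * c**2,
# the quantity plotted above for a 1 kg object.

import math

C = 2.998e8  # speed of light, m/s

def kinetic_energy(mass_kg, beta):
    """Relativistic kinetic energy (J) at speed beta = v/c (0 <= beta < 1)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

# Same range as the graph: from ~2x New Horizons' launch speed up to 0.9999c.
for beta in (1e-4, 0.5, 0.99, 0.9999):
    print(f"v = {beta:g} c -> KE = {kinetic_energy(1.0, beta):.3e} J")
```

Note that gamma, and hence the energy, blows up as beta approaches 1, which is the formula’s way of saying no finite energy gets a massive object to light speed.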
Edit to add (9/12/2017): A good video from Fermilab argues against relativistic mass, but concedes it helps introduce relativity to more people.

Quick Thoughts on Diversity in Physics

Earlier this month, during oral arguments for Fisher v. University of Texas, Chief Justice John Roberts asked what perspective an African-American student would offer in physics classrooms. The group Equity and Inclusion in Physics and Astronomy has written an open letter about why this line of questioning may miss the point about diversity in the classroom. But it also seems worth pointing out why culture does matter in physics (and science more broadly).

So nature is nature, and people can develop a theoretical understanding of it anywhere, and that understanding should come out similar (I think. This is actually glossing over what I imagine is a deep philosophy of science question.) But nature is also incredibly vast. People approach studies of nature in ways that can reflect their culture. Someone may choose to study a phenomenon because it is one they see often in their own lives. Or they may develop an analogy between theory and some aspect of their culture that helps them better understand a concept. You can’t wax philosophical about Kekulé thinking of the ouroboros while studying the structure of benzene without admitting that culture has some influence on how people approach science. There are literally entire books and articles about Einstein and Poincaré being influenced by the sociotechnical issues of late 19th/early 20th century Europe as they developed the concepts that would lead to Einstein’s theories of relativity. A physics community that is a monoculture misses out on other influences and perspectives. So yes, physics should be diverse, and more importantly, physics should be welcoming to all kinds of people.

It’s also worth pointing out this becomes immensely important in engineering and technology, where the problems people choose to study are often immensely influenced by their life experiences. For instance, I have heard people say that India does a great deal of research on speech recognition as a user interface because India still has a large population that cannot read or write, and even then, they may not all use the same language.

Thoughts on Basic Science and Innovation

Recently, science writer and House of Lords member Matt Ridley wrote an essay in The Wall Street Journal about the myth of basic science leading to technological development. Many people have criticized it, and it even seems like Ridley has walked back some claims. One engaging take can be found here, which includes a great quote that I think helps summarize a lot of the reaction:

I think one reason I had trouble initially parsing this article was that it’s about two things at once. Beyond the topic of technology driving itself, though, Ridley has some controversial things to say about the sources of funding for technological progress, much of it quoted from Terence Kealey, whose book The Economic Laws of Scientific Research has come up here before.

But also, it seems like Ridley has a weird conception of the two major examples he cites as overturning the “myth”: steam engines and the structure of DNA. The issue with steam engines is that we mainly associate them with James Watt, whom you memorialize every time you fret about how many watts all your devices are consuming. Steam engines actually preceded Watt, but the reason we associate them with him is that he greatly improved their efficiency thanks to his understanding of latent heat, the energy that goes into changing something from one phase to another. (We sort of discussed this before. The graph below helps summarize.) Watt understood latent heat because his colleague and friend Joseph Black, a chemist at the University of Glasgow, discovered it.


The latent heat is the heat that is added to go between phases. In this figure, it is represented by the horizontal line between ice and heating of water and the other horizontal line between heating of water and heating of water vapor.
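The figure’s heating curve is easy to reproduce numerically: total heat is the sum of sensible-heat segments (mass × specific heat × temperature change) plus the flat latent-heat segments (mass × latent heat). A sketch using standard textbook constants for water:

```python
# Energy to take ice below freezing all the way through boiling, with the
# two latent-heat plateaus from the figure. Constants are standard
# textbook values for water.

SPECIFIC_HEAT_ICE = 2.09e3      # J/(kg*K)
SPECIFIC_HEAT_WATER = 4.18e3    # J/(kg*K)
LATENT_FUSION = 3.34e5          # J/kg, melting plateau
LATENT_VAPORIZATION = 2.26e6    # J/kg, boiling plateau

def energy_to_boil(mass_kg, start_temp_c=-10.0):
    """Total heat (J) to take ice at start_temp_c up through full vaporization."""
    warm_ice = mass_kg * SPECIFIC_HEAT_ICE * (0.0 - start_temp_c)
    melt = mass_kg * LATENT_FUSION
    warm_water = mass_kg * SPECIFIC_HEAT_WATER * 100.0
    vaporize = mass_kg * LATENT_VAPORIZATION
    return warm_ice + melt + warm_water + vaporize

# For 1 kg, the vaporization plateau alone dwarfs all the other segments
# combined, which was exactly Watt's efficiency insight.
print(f"{energy_to_boil(1.0):.3e} J")
```

Run the numbers and the boiling plateau accounts for roughly three-quarters of the total, which is why understanding latent heat mattered so much for steam engine efficiency.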

I don’t know whether or not X-ray crystallography was ever used for industrial purposes in the textiles industry, but it has pretty consistently been used in academia since the basic principles were discovered a century ago. The 1915 Nobel Prize in Physics was literally awarded for the development of X-ray crystallography theory. A crystal structure of a biological molecule was determined by X-ray studies at least as early as 1923. The idea that DNA crystallography only took off as a popular technique because of spillover from industry is incredibly inaccurate.

Ridley also seems to have a basic assumption: that government has crowded out the private sector as a source of basic research over the past century. It sounds reasonable, and it seems testable as a hypothesis. As a percentage of GDP (which seems like a semi-reasonable metric for concerns about crowding out), federal spending on research and development has generally been on the decline since the 70s, and is now about a third less than its relatively stable levels that decade. If private R&D had been crowded out, a competitor dropping by that much seems like a decent opening for some resurgence, especially since one of the most cited examples of private research, Bell Labs, was still going strong all this time. But instead, Bell cut most of its basic research programs just a few years ago.

Federal research spending as a percentage of GDP from the 1976 to 2016 fiscal years. The total shows a slight decrease from 1976 to 1992, a large drop in the 90s, a recovery in 2004, and a drop since 2010.

Federal spending on R&D as a percentage of GDP over time

To be fair, more private philanthropists now seem to be funding academic research. The key word, though, is philanthropist, not commercial, which Ridley refers to a lot throughout the essay. Also, a significant amount of this new private funding is for prizes, but you can only get a prize after you have done work.

There is one major thing to take from Ridley’s essay, though I also think most scientists would admit it too. It’s somewhat ridiculous to try to lay out a clear path from a new fundamental result to a practical application, and if you hear a researcher claim to have one, keep your BS filter high. As The New Yorker has discussed, even results that seem obviously practical have a hard time clearing feasibility hurdles. (Also, maybe it’s just a result of small reference pools, but it seems like a lot of researchers I read are concerned that research now seems to require some clear “mother of technology” justification.) Similarly, practical developments may not always be obvious. Neil deGrasse Tyson once pointed out that if you had asked a physicist in the late 1800s about the best way to quickly heat something, they would probably not have described anything resembling a microwave.

Common timeframe estimates of when research should result in a commercially available product, followed by translations suggesting how unrealistic this is:

“The fourth quarter of next year”: The project will be canceled in six months.
“Five years”: I’ve solved the interesting research problems. The rest is just business, which is easy, right?
“Ten years”: We haven’t finished inventing it yet, but when we do, it’ll be awesome.
“25+ years”: It has not been conclusively proven impossible.
“We’re not really looking at market applications right now”: I like being the only one with a hovercar.

Edit to add: Also, I almost immediately regret using innovation in the title because I barely address it in the post, and there’s probably a great discussion to have about that word choice by Ridley. Apple, almost famously, funds virtually no basic research internally or externally, which I often grumble about. However, I would not hesitate to call Apple an “innovative” company. There are a lot of design choices that improve products while being pretty divorced from the physical breakthroughs that made them unique. (Though it is worth pointing out human factors and ergonomics are very active fields of study in our modern, device-filled lives.)