Materials Advent 2018 Part 17 – Snow

Hexagonal plate with dendritic extensions (precipitating snow).
Snow in an electron microscope.

The US Department of Agriculture has a seriously fascinating page on microscopy of snowflakes, because it helps advance understanding of weather and crystal formation.

Snowflakes (usually) have hexagonal symmetry because it reflects the crystal structure of the water molecules in ice. This old article from Scientific American goes a bit more into the details of how snowflakes form, but it’s also something we still don’t entirely understand.


Materials Advent 2018 Parts 12-15 – Silver and Gold, large and small

Uses of silver in the U.S.
Uses of gold in the U.S.

I’ll slightly tweak my rules again, because you know what silver and gold look like, and I actually think it’s interesting to tell people that we have lots of practical uses for them. Gold and silver are transition metals, that special middle part of the periodic table which represents the addition of a new set of electron orbitals (the d-orbitals). Electrons in the d-orbitals are special because they tend to overlap in energy with those in the s- or p-orbitals, helping increase the number of electrons that are free to move. This is actually why gold and silver are shiny – they have electrons that are easily excited by visible light, which gets reflected back. Gold gets its unique yellow color because its electrons move so fast that they actually need to be described by relativity, which shows that some of their energies decrease (essentially because of their increased relativistic mass at high speed); that shift means gold starts absorbing blue light that silver doesn’t, so the light it reflects looks yellow.

The energy levels of silver hydride (AgH) and gold hydride (AuH) before (n.r.) and after accounting for relativity. From this Reddit conversation on relativity in chemistry.

You’re probably less familiar with the nanoscale forms of gold and silver. Or maybe you are with silver, because we now use silver nanoparticles in a lot of things for their antimicrobial properties. Our lab actually makes a lot of metal nanoparticles, so I can show you a high-resolution image of these particles.

Silver nanoparticles on amorphous carbon. The lines in the particles are actually the organized rows of atoms. From this paper by my research lab.

Gold nanoparticles (and those of many other metals) are neat because they can completely change the color of the solutions they are in, often to red in the case of gold. This is different from the yellow we see for large gold pieces because at very small sizes the electrons in the particles have different energy levels than at the macro scale, and so they absorb different colors of light. Old stained glass actually gets its colors from tiny amounts of metallic nanoparticles that were incorporated into the glass while it was being formed. We’re still not entirely sure how the nanoparticles got into stained glass; theories range from poor cleaning of gold residue from the surfaces the glass was worked on to contamination in the source materials.

The Lycurgus Cup is a particularly special kind of stained glass that changes color depending on how it is lit.

Materials Advent 2018 Parts 9, 10, and 11 – Weird Glasses

A teardrop-shaped piece of glass seems to have an iridescent sheen
A photo of a Prince Rupert’s drop taken with a special polarizer that helps reveal stresses in the material

Following up on yesterday, I thought it would be fun to look at some weirder glasses.

Prince Rupert’s drops are teardrop (or tadpole) shaped pieces of glass made by dropping molten glass into cold water. They are famous for their bizarre strength. You can pound on the head all you want, and it will almost never break, but nick the tail a little bit and it spectacularly explodes.

Poof

This turns out to come from the way the drop forms. The outside of the drop hits the water and cools so fast that it hardens first; when the inside cools and contracts later, it pulls the surface into compression, making the head incredibly strong, while the interior is left under tension. The tail is basically a path straight into that weak, stressed core. The trippy oil-puddle-esque image above is taken with a special kind of set-up that looks at light that ends up being polarized by stresses within the glass. Prince Rupert’s drops turn out to be technologically important, because efforts to understand them since the 1600s have inspired research into ways to make other kinds of glass stronger, leading to Gorilla Glass and the other toughened glasses that now line our smartphones and many other displays.

Metallic glasses are just what they sound like. As I mentioned yesterday, metals are usually crystals, but it turns out we can also try making them into glasses by cooling them so quickly their atoms can’t form an ordered structure. This requires either incredibly fast cooling (on the scale of at least 1000 degrees a second for some compositions) or an interesting workaround using a lot of different metals together: it turns out that mixing a bunch of atoms of different sizes makes it harder for them to pack into a neat pattern.

The mix of atoms in metallic glasses helps them stay disordered.
A microscopy image showing the real atoms in a glassy alloy of unspecified composition. From Physical Metallurgy (Fifth Edition)

You might wonder why we want to make glass out of metals. It turns out to provide a special property – bounciness. And we literally demonstrate that with “atomic trampolines”. It’s really easy to deform a crystalline metal because that orderly crystal structure makes it easy to slide rows of atoms past each other when you hit them hard enough, just like it’s easy to push a row of desks lined up in a classroom. The glass can’t deform – there’s no preferred direction to push the atoms, so instead the energy just goes back to whatever it hits. This has a cost though – if you hit it too hard, just like regular glass, a metallic glass just shatters instead of accepting a dent. There was initially a lot of hope for them as new materials for the shells of devices like smartphones since they don’t transmit that energy to the components inside, but that’s proved harder to make than hoped. However, you can buy a golf club that takes advantage of the bounciness to essentially transmit all the energy from your swing into the ball. (Going farther back, they evidently also form the basis of most of those theft prevention tags that ring alarms.)

Finally, I’m breaking my rule a bit with this last one by not having an image, but did you know that toffee is also a glass? (Sorry, no one has put toffee under a high-resolution microscope or run it under an X-ray source for weird images for me yet.) Or at least good toffee is. That crisp crunch you get from well-made toffee is the crunch of glass shattering. When toffee feels gritty, it is because it has actually started to crystallize and typically has hundreds of little mini-crystals that deform instead of cleanly breaking. This is why some recipes suggest adding corn syrup: the bigger sugar molecules in corn syrup mix with the sucrose from regular table sugar much like the different-sized atoms in the metallic glasses above, making it harder for the sugar to set into a crystal structure. Similarly, an early kind of stunt glass for special effects was literally made by boiling sugar into a clear candy.

When Physics Became King – A Book Review (OK, Book Report), part 1

While picking up some books for my dissertation from the science and engineering library, I stumbled across a history book that sounded interesting: When Physics Became King. I’ve enjoyed it a lot so far, and I hope to remember it, so writing about it seems useful. I also think it brings up some interesting ideas that relate to modern debates, so blogging about the book seems even more useful.

Some recaps and thoughts, roughly in the thematic order the book presents in the first three chapters:

  • It’s worth pointing out how deeply tied to politics natural philosophy/physics was as it developed into a scientific discipline in the 17th-19th centuries. We tend to think of “science policy” and the interplay between science and politics as a 20th century innovation, but the establishment of government-run or government-sponsored scientific societies was a big deal in early modern Europe. During the French Revolution, the Committee of Public Safety suppressed the old Royal Academy, and the later establishment of the Institut National was regarded as an important development for the new republic. Similarly, people’s conception of science was considered intrinsically linked to their political and often metaphysical views. (This always amuses me when people say science communicators like Neil deGrasse Tyson or Bill Nye should shut up, since the idea of science as something that should influence our worldviews is basically as old as modern science.)
  • Similarly, science was considered intrinsically linked to commerce, and the desire was for new devices to better reflect the economy of nature by more efficiently converting energy between various forms. I’m also greatly inspired here by the work of Dr. Chanda Prescod-Weinstein, a theoretical physicist and historian of science and technology. One area that Morus doesn’t really get into is that the major impetus for astronomy during this time was improving celestial navigation, so ships could more efficiently move goods and enslaved persons between Europe and its colonies (Prescod-Weinstein discusses this in her introduction to her Decolonizing Science Reading List, which she perennially updates with new sources and links to other similar projects). This practical use of astronomy is lost on most of us in modern society, and we now focus on spinoff technology when we want to sell space science to the public, but it was very important to establishing astronomy as a science as astrology lost its luster. Dr. Prescod-Weinstein also brings up an interesting theoretical point I hadn’t considered in her evaluation of the climate of cosmology, and even specifically references When Physics Became King. She notes that the driving force behind institutional support of physics was new methods of generating energy, and thus the establishment of energy as a foundational concept in physics (as opposed to Newton’s focus on force) may be influenced by early physics’ interactions with early capitalism.
  • The idea of universities as places where new knowledge is created was basically unheard of until late in the 1800s, and they were very reluctant to teach new ideas. In 1811, it was a group of students (including Charles Babbage and John Herschel) who essentially led Cambridge to move from the Newtonian formulation of calculus to the French analytic formulation (which gives us the dy/dx notation), and this was considered revolutionary in both an intellectual and a political sense. When Carl Gauss provided his thoughts on finding a new professor at the University of Göttingen, he actually suggested that highly regarded researchers and specialists might be inappropriate because he doubted their ability to teach broad audiences.
  • The importance of math in university education is interesting to compare to modern views. It wasn’t really assumed that future imperial administrators would use calculus, but that those who could learn it were probably the most fit to do the other intellectual tasks needed.
  • In the early 19th century, natural philosophy was the lowest regarded discipline in the philosophy faculties in Germany. It was actually Gauss who helped raise the discipline by stimulating research as a part of the role of the professor. The increasing importance of research also led to a disciplinary split between theoretical and experimental physics, and in the German states, being able to hire theoretical physicists at universities became a mark of distinction.
  • Some physicists were allied to Romanticism because the conversion of energy between various mechanical, chemical, thermal, and electrical forms was viewed as showing the unity of nature. Also, empiricism, particularly humans directly observing nature through experiments, was viewed as a means of investigating the mind and broadening experience.
  • The emergence of energy as the foundational concept of physics was controversial. One major complaint was that people have a less intuitive conception of energy than of forces, which feel much more concrete. Others objected that energy isn’t actually a physical property but a useful calculational tool (and the question of what exactly energy is still pops up in modern philosophy of science, especially in how to best explain it). The development of theories of the luminiferous (a)ether is linked a bit to this as an explanation of where electromagnetic energy resides – ether theories suggested the ether stored the energy associated with waves and fields.

What is rheology?

Inspired by NaBloPoMo and THE CENTRIFUGE NOT WORKING RIGHT THE FIRST TIME SO I HAVE TO STAY IN LAB FOR THREE MORE HOURS THAN I PLANNED (this was more relevant when I tried writing this a few weeks ago), I’ll be trying to post more often this month. Though heaven knows I’m not even going to pretend I’ll get a post a day when I have a conference (!) to prepare for.

I figure my first post could be a better attempt at describing a major part of my research now – rheology and rheometers. The Greek root “rheo” (somewhat uncommon, unless you’re a doctor or med student who sees it pop up all the time in words like gonorrhea and diarrhea) means “flow”, so the simplest definition is that rheology is the study of flow. (And I just learned the Greek Titan Rhea’s name may also come from that root, so oh my God, rheology actually does relate to Rhea Perlman.) But what does that really mean? And if you’ve tripped out on fluid mechanics videos or photos before, maybe you’re wondering “what makes rheology different?”


Oh my God, she is relevant to my field of study.

For our purposes, flow can mean any kind of material deformation, and we’re generally working with solids and liquids (or colloidal mixtures involving those states, like foams and gels). Or if you want to get really fancy, you can say we’re working with (soft) condensed matter. Why not gas? We’ll get to that later. So what kinds of flow behavior are there? There’s viscosity, which is what we commonly consider the “thickness” of a flowing liquid. Viscosity is how a fluid resists relative motion between its component parts under some shearing force, but it doesn’t try to return the fluid to its original state. You can see this in cases where viscosity dominates over the inertia of something moving in the fluid, such as at 1:00 and 2:15 in this video; the shape of the dye drops is essentially pinned at each point by how much the inner cylinder moves, and you don’t see the fluid move back until the narrator manually reverses the cylinder.
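If you want the one-equation version (a minimal sketch of only the simplest, “Newtonian” case, written with the standard textbook symbols rather than anything from the video):

$$\tau = \eta \, \dot{\gamma}$$

Here $\tau$ is the shear stress (force per area), $\dot{\gamma}$ is the shear rate (how fast neighboring layers of fluid slide past each other), and $\eta$ is the viscosity. Double the shearing and a Newtonian fluid simply doubles its resistance – but nothing in that relation ever pushes the fluid back toward where it started, which is exactly the “doesn’t try to return” part above.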

The other part of flow is elasticity. It might sound weird to think of a fluid as being elastic. While you really don’t see elasticity in pure fluids (unless maybe the force is applied ridiculously fast), you do see it a lot in mixtures. Oobleck, the ever-popular mixture of cornstarch and water, becomes elastic as part of its shear-thickening behavior (which, it turns out, we still don’t have a great physical understanding of).

 

You can think of viscosity as the “liquid-like” part of a substance’s behavior and elasticity as the “solid-like” part. Lots of mixtures (and even some pure substances) show both parts as “viscoelastic” materials. And this helps explain the confusion when you’re younger (or at least younger-me’s questions) over whether things like Jell-O, Oobleck, or raw dough are “really” solid or liquid. The answer is sort of “both”. More specifically, we can look at the “dynamic modulus” G at different rates of oscillating force. G has two components: G’ is the “storage modulus”, which is the elastic/solid-like part, and G” is the “loss modulus”, which represents the viscous/liquid-like part.

Dynamic modulus of silly putty

The dynamic moduli of Silly Putty at different rates of stress.

Whichever modulus is higher is what mostly describes a material. So in the flow curve above, the Silly Putty is more like a liquid at low rates/frequencies of stress (which is why it spreads out when left on its own), but is more like a solid at high rates (which is why it bounces if you throw it fast enough). What’s really interesting is that the absolute size of either component doesn’t really matter; it’s just whichever one is higher. So even flimsy shaving cream behaves like a solid at rest (seriously, it can support hair or other light objects without settling) while house paint is a liquid, because even though the paint tends to have higher moduli overall, the shaving cream’s storage modulus is higher than its own loss modulus, while the paint’s isn’t.
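To make that crossover idea concrete, here’s a minimal sketch (in Python, using the textbook single-mode Maxwell model rather than any real Silly Putty data – the modulus and relaxation time below are made-up illustrative values) of how G’ and G” trade places as you change the oscillation frequency:

```python
import numpy as np

# Single-mode Maxwell model: a spring (modulus G0) in series with a dashpot
# (viscosity eta), with relaxation time tau = eta / G0.
G0 = 1e5    # plateau modulus in Pa (illustrative, not measured Silly Putty)
tau = 1.0   # relaxation time in s (also illustrative)

omega = np.logspace(-2, 2, 9)   # oscillation frequencies, rad/s
wt = omega * tau

G_storage = G0 * wt**2 / (1 + wt**2)   # G'  : elastic, "solid-like" part
G_loss    = G0 * wt    / (1 + wt**2)   # G'' : viscous, "liquid-like" part

for w, gp, gpp in zip(omega, G_storage, G_loss):
    behavior = "solid-like" if gp > gpp else "liquid-like"
    print(f"omega = {w:7.2f} rad/s   G' = {gp:8.1f} Pa   G'' = {gpp:8.1f} Pa   -> {behavior}")
```

Run it and the loss modulus wins at low frequencies (the material flows, like Silly Putty slumping on a table), while the storage modulus wins above the crossover at omega = 1/tau (it bounces) – exactly the kind of switch the flow curve above shows.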

I want to publish this eventually, so I’ll get to why we do rheology and what makes it distinct in another post.

Weirdly Specific Questions I Want Answers to in Meta-science, part 1

Using “meta-science” as a somewhat expansive term for history, philosophy, and sociology of science. And using my blog as a place to write about something besides the physical chemistry of carbon nanomaterials in various liquids.

  • To what extent is sloppy/misleading terminology an attempt to cash in on buzzwords? Clearly, we know that motive exists – there aren’t two major papers trying to narrow down precise definitions of graphene-related terms for nothing. But as the papers also suggest, at what point is it a legitimate debate in the community about setting a definition? “Graphene” was a term that described a useful theoretical construct for decades before anyone ever thought* someone could make a real sheet of it, so maybe it isn’t unreasonable that people started using it to describe a variety of physical things related to the original idea.
    • This contains a sort of follow-up: what properties do people use in clarifying these definitions, and how much does that vary by background? Personally, my usage is way closer to the ideal of “graphene” than that of lots of people working with more extensively chemically modified graphene derivatives, and I’m fine with using it for almost anything that’s nearly all sp2 carbon with about 10 layers or less. But would a physicist who cares more about the electronic properties, which vary a lot with the number of layers even in that lower limit, consider that maddening?
  • Nanoscience is very interdisciplinary/transdisciplinary, but individual researchers can be quite grounded in just one field. How much work is being done where researchers are missing basic knowledge of another field their work is now straddling?
    • For instance, when reading up on polymer nanocomposites, it seems that lots of people with extensive polymer science backgrounds have noted that many papers don’t refer to basic aspects of polymer physics. My hunch is that a lot of this comes from the fact that many people in this field started out working on the nanoparticles they want to incorporate into the composites and then moved into the composites themselves. They may have backgrounds more in fields like solid-state physics, electrical engineering, or (inorganic/metallic/ceramic) materials science, where they would have been less likely to deal with polymer theory.
    • Similarly, one paper I read noted that a lot of talk about “solutions” of nanoparticles would probably be more precise if the discussion were framed in the terminology of colloids and dispersions.


Oh my gosh, I made fun of the subtitle for like two years, but it’s true

  • Is the ontological status of defects in nanoscience distinct from their treatment in bulk studies of materials? This is a bit related to the first question in that some definitions would preclude the existence of some defects in the referent material/structure.
    • On the other hand, does this stricter treatment make more sense in the few-atom limit of many nanomaterials? Chemists can literally specify the type and location of every atom in the successful products of well-studied cluster reactions, though those are arguably even pushing the term “nano” (in the sense that they may be too small for it).
    • Is this a reflection of applications of defects at the different scales? (More philosophically worded, are defects treated differently because of their teleological nature?) At the bulk level, we work to engineer the nature of defects to help develop the properties we want. At the nanoscale, some structures can basically be ruined for certain applications by the mislocation of a single atom. Is this also a reflection of the current practical process of needing to scale up the ability to make nanomaterials? E.g. as more realistic approaches to large-scale nanotech fabrication are developed, will the practical treatment of defects in nanomaterials converge to that of how we treat defects in the bulk?

*Okay, more like anyone cared a lot about it, since there are papers going back to the 1960s where researchers describe what appear to be atomic monolayers of graphite.

Using Plants to Turn Pollution into Profits

Once again, I may prove why I’m a poor writer by burying a lede. But bear with me here, because this will come full circle and yield some fruit. You probably know that urban farming has become more popular over the last decade or so as local eating became trendy. As city dwellers started their own plots, people realized there might be a unique challenge to urban areas: avoiding lead poisoning. (Although a more recent study evidently suggests you’re getting less lead than people expected.) We used lead in lots of things throughout the 20th century, and it easily accumulated in the soil in areas exposed to high doses of certain sources – so cities and areas by busy highways have lead from old leaded-gasoline emissions, old lots have lead from old paint, and even old lead pipes and batteries can leach lead into soils. Other pollutants can leach into soils in other places: mercury and cadmium can build up in places where significant amounts of coal are burned, and many mining practices can result in a lot of the relevant metal leaking out into the environment.

Traditionally, the way to deal with polluted soil is to literally dig it all up. This has a major drawback, in that completely replacing a soil patch also means you throw out some nice perks of the little micro-ecosystem that had developed, like root systems that help prevent erosion or certain nutrient sources. Recently, a technique called phytoremediation has caught on, and as the NYT article points out, it takes advantage of the fact that some plants are really good at absorbing these metals from the soil. We now know of so-called hyperaccumulators of a lot of different metals and a few other pollutants. These are nice because they concentrate the metals for us into parts of the plants we can easily dispose of, and they can help preserve aspects of the soil we like. (And roots can help prevent erosion of the soil into runoff to boot.) Of course, one drawback here is time: if you’re concerned that a plot with lead might end up leaching it into groundwater, you may not want to wait for a few harvests to go by to get rid of it.

But a second drawback seems like it could present an opportunity. A thing that bugged me when I first heard of hyperaccumulators was that disposing of them still seemed to pose lots of problems. You can burn the plants, but you would need to extract the metals from the fumes, or it just becomes like coal and gas emissions all over again. (Granted, it is a bit easier when you have it concentrated in one place.) Or you can just throw away the plants, but again, you need to make sure you’re doing it in a place that will safely keep the metals as the plants break down. When I got to meet someone last summer who studies how metals accumulate in plants and animals, I asked her if there was a way to do something productive with those plants that now had concentrated valuable metals. Dr. Pickering told me this is called “phytomining”, and that while people had looked into it, economical methods still hadn’t been developed.

That looks like it may have changed last month, when a team from China reported making multiple nanomaterials from two common hyperaccumulators. The team studied Brassica juncea, which turns out to be mustard greens, and Sedum alfredii, a native herb, both of which are known to accumulate copper and zinc. The plants were taken from a copper-zinc mine in Liaoning Province, China. The plants were first dissolved in a mix of nitric and perchloric acid, and then literally just heating the acid residue managed to make carbon nanotubes. Adding some ammonia to the acid residue formed zinc oxide nanoparticles in the Sedum, and zinc oxide with a little bit of copper in the mustard greens. What’s really interesting is that the structure and shape of the nanotubes seemed to correlate with the size of the vascular bundles (a plant equivalent of arteries/veins) in the different plants.


A nanotube grown from the mustard greens. Source.

But as Dr. Pickering said to me, people have been looking into this for a while (indeed, the Chinese team has similar papers on this from 5 years ago). What’s needed for phytomining to take off is for it to be economical. And that’s where the end of the paper comes in. First, the individual materials are valuable. The nanotubes are strong and conductive and could have lots of uses. The zinc oxide particles already have some use in solar cells, and could be used in LEDs or as catalysts to help break down organic pollutants like fertilizers. The authors say they managed to make the nanotubes really cheaply compared to other methods: they claim they could make a kilogram for $120, while bulk prices from commercial suppliers of similar nanotubes are about $600/kg. (And I can’t even find that, because looking at one of my common suppliers, I see multiwalled nanotubes selling on the order of $100 per gram.) What’s really interesting is they claim they can make a composite between the nanotubes and the copper/zinc oxide particles that might be even more effective at breaking down pollutants.
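Just to put those quoted numbers side by side (a quick back-of-the-envelope check in Python – the prices are simply the ones mentioned above, not anything I’ve verified independently):

```python
# Prices as quoted above (claimed/observed, not independently verified)
claimed_cost_per_kg = 120.0     # paper's claimed production cost, $/kg
quoted_bulk_per_kg  = 600.0     # paper's quoted bulk commercial price, $/kg
supplier_per_gram   = 100.0     # rough supplier price I actually see, $/g

supplier_per_kg = supplier_per_gram * 1000  # convert $/g to $/kg

print(f"Claimed production cost: ${claimed_cost_per_kg:>9,.0f} per kg")
print(f"Quoted bulk price:       ${quoted_bulk_per_kg:>9,.0f} per kg "
      f"({quoted_bulk_per_kg / claimed_cost_per_kg:.0f}x the claimed cost)")
print(f"Supplier price I see:    ${supplier_per_kg:>9,.0f} per kg "
      f"({supplier_per_kg / claimed_cost_per_kg:.0f}x the claimed cost)")
```

In other words, that $100-per-gram listing works out to about $100,000 per kilogram, which is why the claimed $120/kg figure sounds almost too good.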

I imagine there will be some unforeseen issue in attempting to scale this up (because it seems like there always is). But this is an incredibly cool result. Common plants can help clean up one kind of pollution and be turned into valuable materials to help clean up a second kind of pollution. That’s a win-win.