I’ll slightly tweak my rules again, because you know what silver and gold look like, and I actually think it’s interesting to tell people that we have lots of practical uses for them. Gold and silver are transition metals, the special middle part of the periodic table that represents the addition of a new set of electron orbitals (the d-orbitals). Electrons in the d-orbitals are special because they tend to overlap in energy with those in the s- or p-orbitals, helping increase the number of electrons that are free to move. This is actually why gold and silver are shiny – they have electrons that are easily excited by visible light and reflect it back. Gold gets its unique yellow color because its electrons move so fast they actually need to be described by relativity, which lowers the energy of some of its orbitals (essentially because of the electrons’ increased relativistic mass at high speed), shrinking an energy gap enough that gold absorbs blue light and reflects the rest back as yellow.
You’re probably less familiar with the nanoscale forms of gold and silver. Or you might be with silver, because we now use the nanoparticles in a lot of things for their antimicrobial properties. Our lab actually makes a lot of metal nanoparticles, and so I can show you a high-resolution image of these particles.
Gold nanoparticles (and many other metals) are neat because they can completely change the color of solutions they are in, often to red in the case of gold. This is different from the yellow we see for large gold pieces because at a very small size, the electrons in the particles have different energy levels than at the macro scale, and so they absorb light of different colors. Old stained glass actually gets its colors from tiny amounts of metallic nanoparticles being incorporated into the glass while it was being formed. We’re still not entirely sure how the nanoparticles got into stained glass. Theories range from poor cleaning of gold residue off the surfaces the glass was worked on to contamination in the source materials.
Prince Rupert’s drops are teardrop (or tadpole) shaped pieces of glass made by dropping molten glass into cold water. They are famous for their bizarre strength. You can pound on the head all you want, and it will almost never break, but nick the tail a little bit and it spectacularly explodes.
This turns out to come from the way the drop forms. The outside of the drop hits the water and cools so fast that it ends up squeezed under compressive stress, making it stronger, but the tail is basically a path to the interior, which is left under tension. The trippy oil-puddle-esque image above is taken with a special kind of set-up that looks at light polarized by stresses within the glass. Prince Rupert’s drops turn out to be technologically important, because efforts to understand them since the 1600s have inspired research into ways to make other kinds of glass stronger, leading to Gorilla Glass and the other toughened glasses that now line our smartphones and many other displays.
Metallic glasses are just what they sound like. Just like how I mentioned yesterday that metals are usually crystals, it turns out we can also try making them into glasses by cooling them so quickly their atoms can’t form an ordered structure. This requires either incredibly fast cooling (on the scale of at least 1000 degrees a second for some compositions) or an interesting workaround: using a lot of different metals together. It turns out that mixing a bunch of atoms of different sizes makes it harder for them to pack into a neat pattern.
You might wonder why we want to make glass out of metals. It turns out to provide a special property – bounciness. And we literally demonstrate that with “atomic trampolines”. It’s really easy to deform a crystalline metal because that orderly crystal structure makes it easy to slide rows of atoms past each other when you hit them hard enough, just like it’s easy to push a row of desks lined up in a classroom. The glass can’t deform that way – there’s no preferred direction to push the atoms, so instead the energy just goes back into whatever hits it. This has a cost though – if you hit it too hard, just like regular glass, a metallic glass shatters instead of accepting a dent. There was initially a lot of hope for them as new materials for the shells of devices like smartphones, since they wouldn’t transmit impact energy to the components inside, but that’s proved harder to manufacture than hoped. However, you can buy a golf club that takes advantage of the bounciness to essentially transmit all the energy from your swing into the ball. (Going farther back, they evidently also form the basis of most of those theft-prevention tags that ring alarms.)
Finally, I’m breaking my rule a bit with this last one by not having an image, but did you know that toffee is also a glass? (Sorry, no one has put toffee under a high-resolution microscope or run it under an X-ray source for weird images for me yet.) Or at least good toffee is. That crisp crunch you get from well-made toffee is the crunch of glass shattering. When toffee feels gritty, it has actually started to crystallize and typically contains hundreds of little mini-crystals that deform instead of cleanly breaking. This is why some recipes suggest adding corn syrup. The bigger sugar molecules in corn syrup mix with the sucrose from regular table sugar in a way that, like the metallic glasses above, makes it harder for the molecules to settle into a crystal structure. Similarly, an early kind of stunt glass for special effects was literally made by boiling sugar into a clear candy.
I’m going to (hopefully) permanently end the popular misconception that glass is a liquid with this post. Glass is not a really slow liquid – it is in fact a solid, based on its flow properties (yay rheology!). But glass is a solid without structure, or in fancy terms, an amorphous solid. Many solids you see are crystalline, not just the pretty stones pop culture tends to reserve the name “crystal” for. A crystalline material is one where the atoms or molecules are arranged in a repeating 3D pattern. The metals in your car, the silicon in your computer, and the calcium phosphate mineral in your bones are all crystals because we can see their atoms follow some crystal structure. While the atoms/molecules can differ, mathematicians have found that there are only 230 distinct ways to make a repeating pattern in 3D with no gaps or overlaps (and only 17 for 2D). This might seem low, but the point is that while tiny details may change, there are only so many ways to combine the symmetries you can find, like reflections or rotations, and still fill up all of space (or your wall).
The image above is a high-resolution transmission electron micrograph literally showing you the atoms in silica (silicon dioxide) – the material that makes up regular glass. On the left, it’s a crystal, and the dots at the bottom show you in red and green where the different atoms are in a hexagonal arrangement. Around 3/4 of the way to the right, you see that the atoms are no longer always in hexagons and the shapes start to change. This side is amorphous.
Liquid crystals are everywhere now (if you’re looking at this on a computer, your display is probably LCD, and a decent chance any TV you looked at lately is an LCD too). They’re another one of those weird in-between states like we see in rheology. Liquid crystals are liquids based on their mechanical properties, but their molecules show some large-scale ordering that resembles solid crystals. This is because the molecules in liquid crystals tend to be relatively large, like a polymer chain, so you make distinct structures by lining them up, but it can also still be hard to pack them closely to make a true solid. The first liquid crystal was actually found studying cholesterol!
Most LCDs are based on liquid crystals in the nematic phase seen above. (Although evidently not LCD TVs.) Because the molecules are asymmetric, they can be oriented by electric fields, since their electrons will be pulled by the electric force. The work in an LCD basically comes down to turning a voltage on and off to twist the crystals, which changes how light passes through. A bunch of nematic liquid crystals lined up next to each other essentially act as a filter called a polarizer and line up the waves of light passing through. At the top and bottom layers of your display are two permanently oriented filters set perpendicular to each other, an arrangement that would not let any light pass through unless the crystals in between line up in a way that helps rotate the light from one filter to the other. (This is also why you can sort of see the pixels of LCDs at off-angles and why the picture can look so off away from the center of LCD TVs – the light is really only lined up for someone looking straight through the display.)
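To get a feel for the crossed-polarizer trick, here’s a toy sketch using Malus’s law (transmitted intensity goes as cos² of the angle between the light’s polarization and the filter). The numbers are idealized – a real twisted-nematic cell guides the polarization gradually rather than acting as a simple rotator – but it shows why the twist matters:

```python
import math

def malus_transmission(i0, theta_deg):
    """Malus's law: intensity passing a polarizer at angle theta to the light's polarization."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

# Crossed polarizers alone (90 degrees apart): essentially no light gets through.
print(malus_transmission(1.0, 90))  # ~0

# With the nematic layer twisting the polarization ~90 degrees in between,
# the light arrives aligned with the second filter and passes.
print(malus_transmission(1.0, 0))   # 1.0
```

Turning the voltage on untwists the crystals, so the polarization arrives crossed with the output filter again and that pixel goes dark.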
Humanity did something really cool in November – as a species, we redefined all our main measuring units to be based on universal physical constants. This represents the confluence of the two favorite subjects of many people – large international bureaucracies and science, more specifically metrology, or the science of measurement, so I’m sure everyone already knows about this. Just kidding. But it actually is interesting. The International Bureau of Weights and Measures (BIPM) had its 26th General Conference and voted to redefine the kilogram, ampere, kelvin, and mole (that famous unit you learned about in chemistry class). The vote was unanimous, which is kind of a big deal when you consider that BIPM members include countries with historically tense relations like Israel and Saudi Arabia or the US with Iran and Venezuela. So what does it mean to change these definitions and why did we do this?
First, it’s worth going over some history. You can sort of view this as the ultimate culmination of the hopes of the original proponents of the metric system. Fitting the spirit of the times, scientists in revolutionary France proposed new units of measurement that could be used to describe all the universe, not just a single kingdom or even just a fiefdom. Their initial system only proposed two of the units we now use: the meter and the gram. The meter was defined as one ten millionth of the distance from the North Pole to the Equator, as measured on the line of longitude going through Paris (because, France), which they estimated with pretty solid accuracy even back then. The gram was defined as the mass of a cubic centimeter of water at freezing point. To point out a recurring problem, the French defined the gram to lay out the system, but decided that making a standard reference of something that small was too hard, and so they commissioned a kilogram standard to be made as the reference instead. Thomas Jefferson heard about the new French units when he was Secretary of State and actually asked for a copy of the kilogram standard, but it never made it to America because the ship (and scientist) carrying it were hijacked by pirates of the Caribbean, so America ended up without a reference kilogram for decades.
(As an aside, temperature wasn’t defined in the revolutionary system at all. The Fahrenheit and Celsius scales were both developed earlier in the 18th century. For some weird reason, Celsius was also originally reversed, so that the numbers actually decreased with rising temperature – 100 degrees C was the freezing point of water and 0 was the boiling point. Also, my hot take is that Fahrenheit actually has a better zero point for reference than Celsius. As you know, Celsius is zero at the freezing/melting point of water, but it’s really easy to overshoot that depending on your cooling/heating rate, as any video of supercooled water can show you. Fahrenheit used a mixture of water, ice, and ammonium chloride (the flavoring in salty licorice) that always stabilizes to a specific temperature.)
In 1875, 17 countries (including the US) signed the Metre Convention, which established the definitions of the kilogram and meter based on new physical artifacts – a 1 kg cylinder of platinum-iridium alloy (the International Prototype Kilogram, affectionately called “le Grand K”) and a platinum-iridium bar with markings spaced 1 meter apart – and established the BIPM and associated organizations. The BIPM made copies of these prototypes for national metrology organizations, the US finally got a kilogram, and a few years later, the US actually adopted the metric system. You might think that’s a joke, but Americans’ continued usage of imperial units is really just a surface-level thing. Since 1893, the US has always defined the pound and yard in relation to their metric counterparts instead of a separate standard. We just can’t convince people to stick with the metric units in everyday life.
The problem with these artifact-based definitions is that they make it hard to pinpoint where uncertainty develops. Every time a physical object is handled, you risk scratching some microscopic sliver off or leaving some tiny amount of residue, changing the mass of the kilogram standard, or perhaps creating some strain that changes the length of the meter standard. For instance, nearly all the national kilogram standards have gained mass relative to le grand K, but people aren’t sure if this is a sign of them gaining mass, le grand K losing mass, or le grand K gaining mass slower than the other standards. Defining units in terms of physical constants offers more stability because we do have good evidence that they are, well, constant. But this only works if you can measure the constant to high precision.
The meter has already been redefined this way a few times. First, in the 1960s, it was defined as a certain multiple of the wavelength of light emitted by a particular atomic transition of krypton. As optics advanced, it was discovered that this emission line had some irregularities that could give different values depending on the experiment, so it was later replaced by a definition relating the meter explicitly to the speed of light. This means the meter is now based on the second, which is very precisely defined by an atomic transition of cesium.
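For a sense of scale, here’s a quick back-of-the-envelope in Python. Both defining constants are exact numbers by fiat, so you can ask how far light travels during one cesium “tick”:

```python
C = 299_792_458        # speed of light in m/s, exact by definition of the meter
CS_HZ = 9_192_631_770  # cesium-133 hyperfine transition frequency in Hz, exact by definition of the second

# Distance light covers during one period of the cesium transition --
# in principle, lengths now trace back to counting light-travel time.
per_tick = C / CS_HZ
print(round(per_tick * 100, 2), "cm")  # 3.26 cm
```

So a meter is roughly thirty cesium ticks’ worth of light travel, all anchored to atomic physics rather than a metal bar in Paris.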
Until recently, measurements of the relevant constants for mass weren’t precise enough to replace le grand K. However, it’s also been awkward because the process of “tracing” the mass from le grand K to smaller masses becomes more imprecise as you go down. The balances only compare masses, so the only way to standardize something smaller than a kilogram is to see if it, along with a bunch of other masses, adds up to a kilogram, and then just keep working down to smaller and smaller values. This quickly accumulates error, so the smallest NIST standard you can obtain is half a milligram. Unfortunately, industrial processes like developing active ingredients in pharmaceuticals or manufacturing nanoscale parts for electronics routinely deal with masses a lot smaller than that.
The new definition defines the kilogram in terms of the Planck constant, the constant that relates the frequency of light to the energy it carries. Or technically, it precisely defines the Planck constant as 6.62607015×10⁻³⁴ joule-seconds (square meter-kilograms per second), and since we already have precise definitions for the second and meter, better measurements of the Planck constant now give the value of the kilogram. This is done with a watt balance, which uses the force of an electric current in a magnetic field to balance out the weight of a mass by varying the current until the forces are equal. A benefit of this definition (and the one for the meter) is that it is easier to define larger or smaller values for standards. Instead of needing to go through the full sequence of steps for scaling down mass standards, once a watt balance has found the current needed for a kilogram, you can easily scale that current down by 10 or 100 to make a new 100 g or 10 g standard.
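Here’s a minimal sketch of the power-balance relation a watt balance exploits: mechanical power m·g·v on one side, electrical power U·I on the other. All the numbers below are made up purely for illustration (a real instrument ties U and I back to the Planck constant through quantum electrical standards):

```python
# Hypothetical illustration of the watt-balance relation m*g*v = U*I.
g = 9.80665      # gravitational acceleration, m/s^2 (standard value)
v = 0.002        # coil velocity during the calibration phase, m/s (made up)
U = 1.0          # voltage induced in the coil at that velocity, V (made up)
I = 0.0196133    # current needed to balance the weight, A (made up)

# Equating electrical and mechanical power gives the mass directly.
m = (U * I) / (g * v)   # kg
print(round(m, 3))      # 1.0
```

This is also where the easy scaling comes from: dividing that balancing current by 10 or 100 balances a 100 g or 10 g mass instead, with no tower of comparison weighings.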
AND Materials Advent 2018 part 6 – Platinum-Iridium Alloys
So why make standard artifacts out of a mixture of platinum and iridium? It turns out this alloy is really durable in a lot of ways. Pure platinum is already prized for how resistant it is to corrosion. Iridium is an even rarer element – so rare that probably its biggest claim to fame is that a worldwide layer of it deep in the soil was one of the earliest pieces of evidence for the asteroid impact we think killed the dinosaurs. Adding a bit of iridium to platinum not only improves the corrosion resistance, but makes the metal stronger and harder, so handling the standards shouldn’t degrade them as much. It’s also a mixture that doesn’t change size much when heated or cooled, helping minimize another potential headache in measuring length.
While picking up some books for my dissertation from the science and engineering library, I stumbled across a history book that sounded interesting: When Physics Became King. I’ve enjoyed it a lot so far, and hope to remember it, so writing about it seems useful. I also think it brings up some interesting ideas to relate to modern debates, so blogging about the book seems even more useful.
Some recaps and thoughts, roughly in the thematic order the book presents in the first three chapters:
It’s worth pointing out how deeply tied to politics natural philosophy/physics was as it developed into a scientific discipline in the 17th–19th centuries. We tend to think of “science policy” and the interplay between science and politics as a 20th-century innovation, but the establishment of government-run or -sponsored scientific societies was a big deal in early modern Europe. During the French Revolution, the Committee of Public Safety suppressed the old Royal Academy, and the later establishment of the Institut National was regarded as an important development for the new republic. Similarly, people’s conception of science was considered intrinsically linked to their political and often metaphysical views. (This always amuses me when people say science communicators like Neil deGrasse Tyson or Bill Nye should shut up, since the idea of science as something that should influence our worldviews is basically as old as modern science.)
Similarly, science was considered intrinsically linked to commerce, and the desire was for new devices to better reflect the economy of nature by more efficiently converting energy between various forms. I also am greatly inspired by the work of Dr. Chanda Prescod-Weinstein, a theoretical physicist and historian of science and technology, on this. One area that Morus doesn’t really get into is that the major impetus for astronomy during this time was improving celestial navigation, so ships could more efficiently move goods and enslaved persons between Europe and its colonies (Prescod-Weinstein discusses this in the introduction to her Decolonizing Science Reading List, which she perennially updates with new sources and links to other similar projects). This practical use of astronomy is lost to most of us in modern society, and we now focus on spinoff technology when we want to sell space science to the public, but it was very important to establishing astronomy as a science as astrology lost its luster. Dr. Prescod-Weinstein also brings up an interesting theoretical point I hadn’t considered in her evaluation of the climate of cosmology, and even specifically references When Physics Became King. She notes that the driving force in institutional support of physics was new methods of generating energy, and thus the establishment of energy as a foundational concept in physics (as opposed to Newton’s focus on force) may have been influenced by early physics’ interactions with early capitalism.
The idea of universities as places where new knowledge is created was basically unheard of until late in the 1800s, and they were very reluctant to teach new ideas. In 1811, it was a group of students (including Charles Babbage and John Herschel) who essentially led Cambridge to move from Newtonian formulations of calculus to the French analytic formulation (which gives us the dy/dx notation), and this was considered revolutionary in both an intellectual and political sense. When Carl Gauss provided his thoughts on finding a new professor at the University of Göttingen, he actually suggested that highly regarded researchers and specialists might be inappropriate because he doubted their ability to teach broad audiences.
The importance of math in university education is interesting to compare to modern views. It wasn’t really assumed that future imperial administrators would use calculus, but that those who could learn it were probably the most fit to do the other intellectual tasks needed.
In the early 19th century, natural philosophy was the lowest regarded discipline in the philosophy faculties in Germany. It was actually Gauss who helped raise the discipline by stimulating research as a part of the role of the professor. The increasing importance of research also led to a disciplinary split between theoretical and experimental physics, and in the German states, being able to hire theoretical physicists at universities became a mark of distinction.
Some physicists were allied to Romanticism because the conversion of energy between various mechanical, chemical, thermal, and electrical forms was viewed as showing the unity of nature. Also, empiricism, particularly humans directly observing nature through experiments, was viewed as a means of investigating the mind and broadening experience.
The emergence of energy as the foundational concept of physics was controversial. One major complaint was that people have a less intuitive conception of energy than of forces, which we experience directly. Others objected that energy isn’t actually a physical property but a useful calculational tool (and the question of what exactly energy is still pops up in modern philosophy of science, especially in how to best explain it). The development of theories of the luminiferous (a)ether is linked a bit to this as an explanation of where electromagnetic energy resides – ether theories suggested the ether stored the energy associated with waves and fields.
Inspired by NaBloPoMo and THE CENTRIFUGE NOT WORKING RIGHT THE FIRST TIME SO I HAVE TO STAY IN LAB FOR THREE MORE HOURS THAN I PLANNED (this was more relevant when I tried writing this a few weeks ago), I’ll be trying to post more often this month. Though heaven knows I’m not even going to pretend I’ll get a post a day when I have a conference (!) to prepare for.
I figure my first post could be a better attempt at describing a major part of my research now – rheology and rheometers. The Greek root “rheo” – somewhat uncommon, unless you’re a doctor or med student who sees it pop up all the time in words like gonorrhea and diarrhea – means “flow”, and so the simplest definition is that rheology is the study of flow. (And I just learned the Greek Titan Rhea’s name may also come from that root, so oh my God, rheology actually does relate to Rhea Perlman.) But what does that really mean? And if you’ve tripped out on fluid mechanics videos or photos before, maybe you’re wondering “what makes rheology different?”
Oh my God, she is relevant to my field of study.
For our purposes, flow can mean any kind of material deformation, and we’re generally working with solids and liquids (or colloid mixtures involving those states, like foams and gels). Or if you want to get really fancy, you can say we’re working with (soft) condensed matter. Why not gas? We’ll get to that later. So what kind of flow behavior is there? There’s viscosity, which is what we commonly consider the “thickness” of a flowing liquid. Viscosity is how a fluid resists relative motion between its component parts under some shearing force, but it doesn’t try to return the fluid back to its original state. You can see this in cases where viscosity dominates over the inertia of something moving in the fluid, such as at 1:00 and 2:15 in this video; the shape of the dye drops is essentially pinned at each point by how much the inner cylinder moves, and you don’t see the fluid move back until the narrator manually reverses the cylinder.
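If you want the one-line math version of viscosity, a Newtonian fluid just relates stress to shear rate by a constant – there’s no spring-back term at all. A tiny sketch (the viscosity values are rough ballpark figures):

```python
def shear_stress(viscosity_pa_s, shear_rate_per_s):
    """Newtonian fluid: stress = viscosity * shear rate. No memory of past shape."""
    return viscosity_pa_s * shear_rate_per_s

# Water (~0.001 Pa*s) vs. honey (~10 Pa*s) sheared at the same rate:
print(shear_stress(0.001, 100))  # ~0.1 Pa
print(shear_stress(10, 100))     # 1000.0 Pa
```

Notice the stress vanishes the instant the shear rate does – which is exactly why the dye in the video stays put instead of springing back.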
The other part of flow is elasticity. It might sound weird to think of a fluid as being elastic. While you really don’t see elasticity in pure fluids (unless maybe the force is ridiculously fast), you do see it a lot in mixtures. Oobleck, the ever-popular mixture of cornstarch and water, becomes elastic as part of its shear-thickening behavior. (Which it turns out we still don’t have a great physical understanding of.)
You can think of viscosity as the “liquid-like” part of a substance’s behavior and elasticity as the “solid-like” part. Lots of mixtures (and even some pure substances) show both parts as “viscoelastic” materials. And this helps explain the confusion when you’re younger (or at least younger-me’s questions) of whether things like Jell-O, Oobleck, or raw dough are “really” solid or liquid. The answer is sort of “both”. More specifically, we can look at the “dynamic modulus” G at different rates of force. G has two components – G’ is the “storage modulus” and that’s the elastic/solid part, and G” is the “loss modulus” representing viscosity.
The dynamic moduli of Silly Putty at different rates of stress.
Whichever modulus is higher is what mostly describes a material. So in the flow curve above, the Silly Putty is more like a liquid at low rates/frequencies of stress (which is why it spreads out when left on its own), but is more like a solid at high rates (which is why it bounces if you throw it fast enough). What’s really interesting is that the absolute size of either component doesn’t really matter, just which one is higher. So even flimsy shaving cream behaves like a solid at rest (seriously, it can support hair or other light objects without settling) while house paint is a liquid, because even though paint tends to have higher moduli overall, the shaving cream still has a higher storage modulus than its own loss modulus.
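The “whichever is higher wins” rule is simple enough to write down directly. A toy sketch with made-up but plausible modulus values (in Pa):

```python
def dominant_behavior(g_storage, g_loss):
    """Compare the storage modulus G' (elastic) to the loss modulus G'' (viscous)."""
    return "solid-like" if g_storage > g_loss else "liquid-like"

# Shaving cream: small moduli, but G' beats its own G'' -> acts solid at rest.
print(dominant_behavior(g_storage=200.0, g_loss=50.0))     # solid-like

# Paint: bigger moduli overall, but G'' dominates -> flows like a liquid.
print(dominant_behavior(g_storage=500.0, g_loss=2000.0))   # liquid-like
```

Rheologists often phrase the same comparison as the loss tangent, tan δ = G''/G', with tan δ < 1 meaning solid-like.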
I want to publish this eventually, so I’ll get to why we do rheology and what makes it distinct in another post.
Using “meta-science” as a somewhat expansive term for history, philosophy, and sociology of science. And using my blog as a place to write about something besides the physical chemistry of carbon nanomaterials in various liquids.
To what extent is sloppy/misleading terminology an attempt to cash in on buzzwords? Clearly, we know that motive exists – there aren’t two major papers trying to narrow down precise definitions of graphene-related terms for nothing. But as the papers also suggest, at what point is it a legitimate debate in the community about setting a definition? “Graphene” was a term that described a useful theoretical construct for decades before anyone ever thought* someone could make a real sheet of it, so maybe it isn’t unreasonable that people started using it to describe a variety of physical things related to the original idea.
This contains a sort of follow-up: what properties do people use in clarifying these definitions, and how much does it vary by background? Personally, I would say I’m way closer to the ideal of “graphene” than lots of people working with more extensively chemically modified graphene derivatives, and am fine with using it for almost anything that’s nearly all sp2 carbon with about 10 layers or less. But would a physicist who cares more about the electronic properties – which vary a lot with the number of layers even in that lower limit – consider that maddening?
Nanoscience is very interdisciplinary/transdisciplinary, but individual researchers can be quite grounded in just one field. How much work is being done where researchers are missing basic knowledge of another field their work is now straddling?
For instance, when reading up on polymer nanocomposites, it seems noted by lots of people with extensive polymer science backgrounds that there are many papers that don’t refer to basic aspects of polymer physics. My hunch is that a lot of this comes from the fact that many people in this field started working on the nanoparticles they want to incorporate into the composites and then moved into the composites. They may have backgrounds more in fields like solid-state physics, electrical engineering, or (inorganic/metallic/ceramic) materials science, where they would have been less likely to deal with polymer theory.
Similarly, it was noted in one paper I read that a lot of talk about solutions of nanoparticles probably would be more precise if the discussion was framed in terminology of colloids and dispersions.
Oh my gosh, I made fun of the subtitle for like two years, but it’s true
Is the ontological status of defects in nanoscience distinct from their treatment in bulk studies of materials? This is a bit related to the first question in that some definitions would preclude the existence of some defects in the referent material/structure.
On the other hand, does this stricter treatment make more sense in the few-atom limit of many nanomaterials? Chemists can literally specify the type and location of every atom in successful products of well-studied cluster reactions, though these are pushing the term “nano” in the other direction (they may be too small).
Earlier this month, during oral arguments for Fisher v. University of Texas, Chief Justice John Roberts asked what perspective an African-American student would offer in physics classrooms. The group Equity and Inclusion in Physics and Astronomy has written an open letter about why this line of questioning may miss the point about diversity in the classroom. But it also seems worth pointing out why culture does matter in physics (and science more broadly).
So nature is nature and people can develop theoretical understanding of it anywhere and it should be similar (I think. This is actually glossing over what I imagine is a deep philosophy of science question.) But nature is also incredibly vast. People approach studies of nature in ways that can reflect their culture. Someone may choose to study a phenomenon because it is one they see often in their lives. Or they may develop an analogy between theory and some aspect of culture that helps them better understand a concept. You can’t wax philosophical about Kekulé thinking of the ouroboros when he was studying the structure of benzene without admitting that culture has some influence on how people approach science. There are literally entire books and articles about Einstein and Poincaré being influenced by sociotechnical issues of late 19th/early 20th century Europe as they developed concepts that would lead to Einstein’s theories of relativity. A physics community that is a monoculture then misses out on other influences and perspectives. So yes, physics should be diverse, and more importantly, physics should be welcoming to all kinds of people.
It’s also worth pointing out this becomes immensely important in engineering and technology, where the problems people choose to study are often immensely influenced by their life experiences. For instance, I have heard people say that India does a great deal of research on speech recognition as a user interface because India still has a large population that cannot read or write, and even then, they may not all use the same language.
So we’ve discussed states of matter. And the reason they’re in the news. But the idea that this is a new state of matter isn’t particularly ground-breaking. If we’re counting electron states alone as new states of matter, then those are practically a dime a dozen. Solid-state physicists spend a lot of time creating materials with weird electron behaviors: under this definition, lots of the newer superconductors are their own states of matter, as are topological insulators.
What is a big deal is the way this behaves as a superconductor. “Typical” superconductors include basically any metal. When you cool them to a few degrees above absolute zero, they lose all electrical resistance and become superconductive. These are described by BCS theory, a key part of which says that at low temperatures, the few remaining atomic vibrations of a metal will actually cause electrons to pair up and all drop to a low energy. In the 1970s, though, people discovered that some metal oxides could also become superconductive, and they did so at temperatures above 30 K. Some go as high as 130 K, which, while still cold to us (room temperature is about 300 K), is warm enough to use liquid nitrogen instead of incredibly expensive liquid helium for cooling. However, BCS theory doesn’t describe superconductivity in these materials, which also means we don’t really have a guide to develop ones with properties we want. The dream of a lot of superconductor researchers is that we could one day make a material that is superconducting at room temperature, and use that to make things like power transmission lines that don’t lose any energy.
This paper focused on an interesting material: a crystal of buckyballs (molecules of 60 carbon atoms arranged like a soccer ball) modified to have some rubidium and cesium atoms. Depending on the concentration of rubidium versus cesium in the crystal, it can behave like a regular metal or the new state of matter they call a “Jahn-Teller metal” because it is conductive but also has a distortion of the soccer ball shape from something called the Jahn-Teller effect. What’s particularly interesting is that these also correspond to different superconductive behaviors. At a concentration where the crystal is a regular metal at room temperature, it becomes a typical superconductor at low temperatures. If the crystal is a Jahn-Teller metal, it behaves a lot like a high-temperature superconductor, albeit at low temperatures.
This is the first time scientists have ever seen a single material that can behave like both kinds of superconductor. This is exciting because it offers a unique testing ground to figure out what drives unconventional superconductors. By changing the composition, researchers change the behavior of electrons in the material and can study what makes them go through the phase transition to a superconductor.