When Physics Became King – A Book Review (OK, Book Report), part 1

While picking up some books for my dissertation from the science and engineering library, I stumbled across a history book that sounded interesting: When Physics Became King. I’m enjoying it a lot so far and hope to remember it, so writing about it seems useful. I also think it raises some interesting ideas that relate to modern debates, so blogging about the book seems even more useful.

Some recaps and thoughts, roughly in the thematic order the book presents them in the first three chapters:

  • It’s worth pointing out how deeply tied to politics natural philosophy/physics was as it developed into a scientific discipline in the 17th-19th centuries. We tend to think of “science policy” and the interplay between science and politics as a 20th century innovation, but the establishment of government-run or government-sponsored scientific societies was a big deal in early modern Europe. During the French Revolution, the Committee of Public Safety suppressed the old Royal Academy, and the later establishment of the Institut National was regarded as an important development for the new republic. Similarly, people’s conception of science was considered intrinsically linked to their political and often metaphysical views. (This always amuses me when people say science communicators like Neil deGrasse Tyson or Bill Nye should shut up, since the idea of science as something that should influence our worldviews is basically as old as modern science.)
  • Similarly, science was considered intrinsically linked to commerce, and the desire was for new devices to better reflect the economy of nature by more efficiently converting energy between various forms. On this point, I’m also greatly inspired by the work of Dr. Chanda Prescod-Weinstein, a theoretical physicist and historian of science and technology. One area Morus doesn’t really get into is that the major impetus for astronomy during this time was improving celestial navigation, so ships could more efficiently move goods and enslaved persons between Europe and its colonies (Prescod-Weinstein discusses this in the introduction to her Decolonizing Science Reading List, which she perennially updates with new sources and links to other similar projects). This practical use of astronomy is lost on most of us in modern society, and we now focus on spinoff technology when we want to sell space science to the public, but it was very important to establishing astronomy as a science as astrology lost its luster. Dr. Prescod-Weinstein also brings up an interesting theoretical point I hadn’t considered in her evaluation of the climate of cosmology, and even specifically references When Physics Became King. She notes that the driving force in institutional support of physics was new methods of generating energy, and thus the establishment of energy as a foundational concept in physics (as opposed to Newton’s focus on force) may have been influenced by early physics’ interactions with early capitalism.
  • The idea of universities as places where new knowledge is created was basically unheard of until late in the 1800s, and they were very reluctant to teach new ideas. In 1811, it was a group of students (including Charles Babbage and John Herschel) who essentially led Cambridge to move from Newtonian formulations of calculus to the French analytic formulation (which gives us the dy/dx notation), and this was considered revolutionary in both an intellectual and political sense. When Carl Friedrich Gauss provided his thoughts on finding a new professor at the University of Göttingen, he actually suggested that highly regarded researchers and specialists might be inappropriate because he doubted their ability to teach broad audiences.
  • The importance of math in university education is interesting to compare to modern views. It wasn’t really assumed that future imperial administrators would use calculus, but that those who could learn it were probably the most fit to do the other intellectual tasks needed.
  • In the early 19th century, natural philosophy was the lowest regarded discipline in the philosophy faculties in Germany. It was actually Gauss who helped raise the discipline by stimulating research as a part of the role of the professor. The increasing importance of research also led to a disciplinary split between theoretical and experimental physics, and in the German states, being able to hire theoretical physicists at universities became a mark of distinction.
  • Some physicists were allied to Romanticism because the conversion of energy between various mechanical, chemical, thermal, and electrical forms was viewed as showing the unity of nature. Also, empiricism, particularly humans directly observing nature through experiments, was viewed as a means of investigating the mind and broadening experience.
  • The emergence of energy as the foundational concept of physics was controversial. One major complaint was that people have a less intuitive conception of energy than of forces, which we reason about all the time. Others objected that energy isn’t actually a physical property but a useful calculational tool (and the question of what exactly energy is still pops up in modern philosophy of science, especially in how best to explain it). The development of theories of the luminiferous (a)ether is linked a bit to this as an explanation of where electromagnetic energy resides – ether theories suggested the ether stored the energy associated with waves and fields.

The Work of Dr. Wesley Harris

Dr. Wesley Harris was one of the first African-American students accepted to UVA and the first African-American member of the Jefferson Literary and Debating Society, a club I joined during my time in grad school. For Black History Month, I wanted to look up his work and share it with other people. I said this would be the engineering equivalent of our law students looking up the case history of our first woman member, Barbara Lynn.

Wesley Harris sitting behind a microphone

Wesley Harris was born in Richmond in 1941, and was interested in flight from a young age. As a child, he would make model airplanes out of balsa wood or plastic, and he even made self-powered ones that used rubber bands. By fourth grade, he wanted to be a test pilot. Harris was one of seven Black students first admitted to the University of Virginia in 1960. The students were only allowed to study within the School of Engineering and Applied Science. A friend who knew the history told me this was because “practical” engineering work was considered the only suitable field of study for these seven students, and they were barred from the “intellectual” College of Arts and Sciences and other units.

Harris invited Martin Luther King Jr. to speak at UVA in 1963, which was considered momentous in the history of UVA and, to some, revitalized King’s momentum in the Birmingham campaign. The administration at the time did not really acknowledge the King visit, and the then-president of the university did not meet with him. While walking with King and the professor hosting him, Harris heard a loud noise and tried to shield King, thinking it was a gunshot; it turned out to be a car engine blowing out. Harris was inducted into Tau Beta Pi, the engineering honor society, and for his senior year (or if you insist on the UVA terminology, “fourth year”) was accepted to live on the Lawn of the Academical Village, the original part of the university grounds designed by Thomas Jefferson. For non-UVA people, living on the Lawn is a BIG deal and basically an honor society of its own. Lots of people believe Harris was the first Black student to live on the Lawn, but Harris will be among the first to say he was actually the second, coming after Leroy Willis. Harris graduated with honors in aeronautical engineering in 1964.

Harris went on to graduate school at Princeton, where he earned a master’s and PhD in aerospace and mechanical sciences. He then went back to UVA as a professor, becoming the first African-American professor in the engineering school and the first African-American faculty member to receive tenure at UVA. He taught at UVA for two years before moving to Southern University, a historically Black university in Louisiana, and then took a visiting professor position at MIT before joining the aeronautics and astronautics department there full-time.

Dr. Harris’ early research focused on fluid dynamics and the acoustic properties of rotor blades (like helicopter blades). He studied objects moving close to or above the speed of sound and the noise they generate. This matters because excessive noise represents lost efficiency in a rotor. Additionally, if high-speed supersonic/hypersonic vehicles are to become a reality, we need to reduce the noise they generate from shock waves (a common complaint about the Concorde when it used to fly). Dr. Harris worked to replace “semi-empirical” (a term in engineering that often means we’ve found a way to mathematically describe something, but don’t understand the principles well) models of shock waves from helicopter rotors with more theoretically grounded analytical models. This work on helicopters and high-speed air flows earned him a spot as a fellow of the American Institute of Aeronautics and Astronautics (AIAA). His work and service have also earned him election to the National Academy of Engineering. For professional researchers, becoming a fellow of a scientific society is basically the grown-up version of being in a high school or college honor society, and election to one of the National Academies is one of the highest honors you can get in STEM fields.

Since around 2000, Dr. Harris has studied the fluid dynamics of sickle cell disease. If you’re aware of sickle cell disease, you may think of the problem as being the defective hemoglobin sickle cells have, which is less efficient at carrying oxygen. However, another problem is that the strange shape of sickle cells can cause them to get stuck and accumulate in smaller blood vessels, causing a “sickle cell crisis”. Dr. Harris and his students have been developing models to better predict what causes the onset of crises, looking at both the movement of the blood cells and the chemical diffusion of oxygen during a crisis (a toy sketch of the diffusion piece follows below).
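To give a flavor of the diffusion half of that modeling problem, here is a minimal sketch – emphatically not Dr. Harris’s actual model – of the kind of transport equation such work builds on: 1D oxygen diffusion along a vessel segment, solved with explicit finite differences. The diffusivity and geometry values are assumptions for illustration only.

```python
import numpy as np

# Toy 1D oxygen diffusion along a small vessel segment: dC/dt = D * d2C/dx2.
# Illustrative only; real sickle cell crisis models couple transport like
# this to blood flow, cell mechanics, and oxygen uptake by hemoglobin.
D = 2e-9               # oxygen diffusivity in plasma, m^2/s (order of magnitude)
L = 50e-6              # length of the vessel segment, m
N = 101                # number of grid points
dx = L / (N - 1)
dt = 0.25 * dx**2 / D  # time step chosen for explicit-scheme stability

C = np.zeros(N)        # oxygen concentration (arbitrary units)
C[0] = 1.0             # oxygenated blood held at one end

for _ in range(20000):
    # second-difference update of the interior points
    C[1:-1] += D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
    C[-1] = C[-2]      # zero-flux boundary at the occluded end

print(np.round(C[::20], 3))  # concentration profile along the segment
```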

Like many other STEM professors who reach a certain age, Dr. Harris has also been very active in understanding and improving engineering education. In 1996, he wrote a paper for AIAA titled “Will the last aeronautical engineer turn out the light?”, looking at the evolution of aeronautical engineering curricula since WWII. In the article, he raises a concern that newer changes are de-emphasizing engineering science fundamentals in favor of the specific technology systems companies want students to know – systems that could quickly become outdated over the course of an engineer’s career.

In 2012, Dr. Harris co-authored “Opportunities and Challenges in Corrosion Education” with the current chair of the materials science department at UVA as part of a program for the National Research Council. The paper argued that corrosion is handled informally in many engineering programs and often not taught at all. At the undergraduate level, most schools without corrosion research faculty do not teach corrosion concepts, and when they do, the topic is rarely covered in required classes and is only given a lecture or two’s worth of time. (UVA gets a positive shout-out here for the MSE department’s class on fuel cells, batteries, and corrosion that links them all through a focus on basic electrochemistry.) At the graduate level, the report expressed concern that research funding for corrosion is declining. I think this paper is particularly important in that it partially predicts the conversation that developed in engineering circles after the Flint water crisis. Although even after reading through dozens of articles and engineering memos by contractors I still don’t entirely understand the technical conversations that went on, I think the report’s fear that corrosion engineering wasn’t taught conceptually came into play here. If you were just going off something like a corrosion chart, you might not understand why highly chlorinated water poses a corrosion risk to otherwise stable metals and alloys.

Finally, Dr. Harris has been incredibly active in serving the broader scientific and engineering communities. He was dean of the School of Engineering at the University of Connecticut from 1985 to 1990 and a member of Princeton’s Board of Trustees from 2001 to 2005. He chaired the National Research Council’s Committee to Assess NASA’s Aeronautic Flight Research Capabilities, which concluded that NASA should enhance its research into Earth-based flight in the 2010s to enable a boom similar to the one NASA’s work in the 90s enabled in modern commercial and military use of drones. In particular, the committee said NASA should test more environmentally friendly aircraft and supersonic passenger aircraft to spur on those fields. At the National Academy of Engineering, Dr. Harris serves on the Committee on Grand Challenges for Engineering, defining important issues for engineering to solve. It seems incredibly appropriate that this trailblazing figure for engineering and society in the 20th century now works to identify the challenges we will face in the 21st. And of course, this is only a summary of what Dr. Harris has done. If you want to see more, you can start at his MIT faculty page.

Galileo Did Do Experiments

After finding an old book of mine, The Ten Most Beautiful Experiments, over winter break, I wanted to follow up on my last post. I’ll say up front that this post is based almost entirely on that book’s chapter on Galileo, but since I don’t see it summarized in many places, I thought it was worth writing up. It is somewhat in vogue to claim that Galileo didn’t actually perform his experiments on falling bodies and that his writings just describe thought experiments. However, this confuses two different experiments attributed to Galileo. Most historians do believe the stories of Galileo dropping weights from the Leaning Tower of Pisa are apocryphal, arising either from people conflating them with a thought experiment that Salviati, one of the fictional conversationalists in Two New Sciences, describes performing there, or from a thinly sourced claim by Galileo’s secretary in a biography written after his death.

However, Salviati also describes an experiment that Galileo is recognized as having done: measuring the descent of balls of different weights down ramps, which follow the same basic equation as bodies in free fall, just modified by the angle of the slope (see the sketch below). I think a few people may doubt Galileo actually completed the ramp experiment, based on criticisms by Alexandre Koyré in the 1950s that Galileo’s methods seemed too vague or imprecise to measure the acceleration. However, many researchers (like the Rice team in an above link) have found it possible to get data close to Galileo’s using the method Salviati describes. Additionally, another historian, Stillman Drake, who had access to more of Galileo’s manuscripts, found what appear to be records of raw experimental data that show reasonable error. Drake also suggests that Galileo may have originally kept time through the use of musical tempo before moving on to the water clock. Wikipedia (I know, but I don’t have much to go on) also suggests Drake does believe in the Leaning Tower of Pisa experiment. While Galileo may not have done it at that tower, evidently his accounts include a description that corresponds to an observed quirk that happens if people try to freely drop objects of different sizes at the same time, which suggests he tried free fall somewhere.
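For reference, here’s the standard intro-mechanics version of that relation (my summary, not Galileo’s notation): a ball moving down a frictionless ramp inclined at angle θ accelerates at a fraction of g set by the slope, so the distance covered still grows with the square of time,

```latex
a = g\sin\theta, \qquad d(t) = \tfrac{1}{2}\,g\sin\theta\,t^{2}.
```

Setting θ = 90° recovers free fall, while a shallow ramp slows everything down enough to time with a water clock – which is exactly why the ramp was such a clever proxy experiment. (Strictly, a rolling ball also spends some energy on rotation, so its acceleration is somewhat less than g sin θ, but the square-of-time scaling is unchanged.)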

What to do if you’re inside a scientific revolution

A LesserWrong user (LesserWrong-er?) has a thought-provoking post on The Copernican Revolution from the Inside, with two questions in mind: (1) if you lived in 17th century Europe, would you have accepted heliocentrism on an epistemic level, and (2) how do you become the kind of person who would say yes to question 1? It’s interesting in the sense that the often-asked question of “What would you be doing during the Civil Rights Movement/Holocaust/Other Time of Great Societal Change?” is, in that most people realize they probably would not have been a great crusader against the norms of another time. But as someone in Charlottesville in the year 2017, asking what you’d be doing in scientific arguments is less terrifyingly relevant than asking people how they’d deal with Nazism, so we’ll just focus on that.

Cover of Kuhn's The Structure of Scientific Revolutions showing a whirlpool behind the title text.
Look, you’re probably in at least one.

For once on the internet, I recommend reading the comments, in that I think they help flesh out the argument a lot more and correct some strawmanning of the heliocentrists by the OP. Interestingly, the OP actually says he thinks

In fact, one my key motivations for writing it — and a point where I strongly disagree with people like Kuhn and Feyerabend — is that I think heliocentrism was more plausible during that time. It’s not that Copernicus, Kepler Descartes and Galileo were lucky enough to be overconfident in the right direction, and really should just have remained undecided. Rather, I think they did something very right (and very Bayesian). And I want to know what that was.

and seems surprised that commenters think he went too pro-geocentrist. I recommend the following if you want the detailed correction, but I’ll also summarize the main points so you don’t have to:

  • Thomas Kehrenberg’s comment, as it corrects factual errors in the OP regarding sunspots and Jupiter’s moons
  • MakerOfErrors for suggesting the methodological point should be that both geo- and heliocentric systems should have been treated with more uncertainty around the time of Galileo until more evidence came in
  • Douglas_Knight for pointing out a factual error regarding Venus and an argument I’m sympathetic to regarding the Coriolis effect but evidently am wrong on, which I’ll get to below. I do think it’s important to acknowledge that Galilean relativity is a thing, though, and that reduces the potential error a lot.
  • Ilverin for sort of continuing MakerOfErrors’ point and suggesting the true rationalist lesson should be looking at how you deal with competing theories that both have high uncertainties

It’s also worth pointing out that even the Tychonic system didn’t resolve Galileo’s argument for heliocentrism based on sunspots. (A modification to Tycho’s system by one of his students that allows for the rotation of the Earth supposedly resolves the sunspot issue, but I haven’t heard many people mention it.)

Also, knowing that we didn’t have a good understanding of the Coriolis effect until, well, Coriolis in the 1800s (though there are some mathematical descriptions from the 1700s), I was curious to what extent people made this objection during the time of Galileo. It turns out Galileo himself predicted it as a consequence of a rotating Earth. Giovanni Riccioli, a Jesuit scientist, seems to have made the most rigorous qualitative argument against heliocentrism on these grounds: cannon fire and falling objects are not notably deflected from straight-line paths. I want to point out that Riccioli does virtually no math in his argument on the Coriolis effect (unless there’s a lot in the original text that I don’t see in the summary of his Almagestum Novum). This isn’t uncommon pre-Newton, and no one would have the exact tools to deal with Coriolis forces for almost 200 years. But one could reasonably make a scaling argument about whether the Coriolis effect matters based only on the length scale you’re measuring and the rotation rate of the Earth (which would literally just be taking the inverse of a day) and see that the heliocentrists aren’t insane. A back-of-the-envelope version follows below.
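Here is a rough sketch of that scaling argument in code – my numbers, not Riccioli’s or Galileo’s, with the muzzle speed and flight time picked as plausible round values for a 17th-century cannon:

```python
import math

# Order-of-magnitude Coriolis deflection for a cannon shot.
omega = 2 * math.pi / 86400  # Earth's rotation rate, rad/s (the "inverse of a day")
v = 300.0                    # assumed muzzle speed, m/s
t = 5.0                      # assumed time of flight, s (roughly a 1.5 km shot)

# Coriolis acceleration is of order 2*omega*v; integrating twice over the
# flight gives a sideways drift of order omega*v*t^2 (dropping latitude
# factors of order one).
drift = omega * v * t**2
print(f"~{drift:.2f} m of drift over a ~{v * t:.0f} m shot")
# ~0.55 m over ~1500 m: a real effect, but far below what anyone could
# reliably measure with 17th-century cannon fire, which is why "no visible
# deflection" wasn't a decisive argument against a rotating Earth.
```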

It’s not a sexy answer to the second question, but I think “patience for new data” goes a long way towards making you the kind of person who can say yes to the first question. You hear the term “Copernican revolution” thrown around like a very specific event, and I think it’s pretty easy to forget the relative timeframes of the major players unless this is your bread and butter. Copernicus’ De revolutionibus came out in 1543. Newton’s Principia came out in 1687; it gave a physical explanation for Kepler’s empirical laws, led to their wider acceptance, and so can be considered a decent (if oversimplified) endpoint for the debate. Galileo began to get vocal about heliocentrism in the early 1610s. The Almagestum Novum came out in 1651. For over a century, people on both sides were gathering and interpreting new data and refining their theories.

I also like this article for a related point, albeit one a bit removed from the author’s thesis. In considering the question of how we should accept new theories, we see the historical development of one theory overtaking another as “scientific consensus”. Earlier this year, rationalist Scott Alexander, in a post on Learning to Love Scientific Consensus, concisely summarized why the typical “consensus is meaningless” trope of just listing times consensus turned out to be wrong isn’t particularly useful in understanding science:

I knew some criticisms of a scientific paradigm. They seemed right. I concluded that scientists weren’t very smart and maybe I was smarter. I should have concluded that some cutting-edge scientists were making good criticisms of an old paradigm. I can still flatter myself by saying that it’s no small achievement to recognize a new paradigm early and bet on the winning horse. But the pattern I was seeing was part of the process of science, not a condemnation of it.

Most people understand this intuitively about past paradigm shifts. When a creationist says that we can’t trust science because it used to believe in phlogiston and now it believes in combustion, we correctly respond that this is exactly why we can trust science. But this lesson doesn’t always generalize when you’re in the middle of a paradigm shift right now and having trouble seeing the other side.

The notion of “trusting” scientific consensus gets, I think, at a larger point. There are way more non-scientists than scientists, so most people aren’t in a place to rigorously evaluate contemporary analogues of the Copernican revolution, and you often have to trust consensus at least a little. Also, scientists aren’t scientists of every field, so even they can’t evaluate all disputes and will rely on the work of their colleagues in other departments. And given how many fields of science there are, there’s probably always at least one scientific revolution going on in your lifetime, if not several. Fortunately they don’t all take 150 years to resolve. (Though major cosmological ones can take a long time when we need new instruments and new data that can take a long time to acquire.)

But if you want to be the kind of person who can evaluate revolutions (or maybe attempts at revolutions), and I hope you are, then here’s a bit more advice for the second question à la Kuhn: try to understand the structure of competing theories. This doesn’t mean a detailed understanding of every equation or concept, but realize that some things are much more important to how a theory functions than others, and some predictions are relatively minor (see point 4 below for an application to something that I think pretty clearly doesn’t constitute a revolution today). To pure geocentrists, the phases of Venus were theory-breaking because geocentrism has no mechanism for a full range of phases for only some planets, and so they had to move to Tycho’s model. To both groups writ large, it didn’t break the theories if orbits weren’t perfectly circular (partly because there wasn’t really a force driving motion in either theory until Kepler, and he wasn’t sure what actually provided it – several scientific revolutions later, it gets hard to evaluate their theories 100% within the language of our current concepts), though people held on because of other attachments. Which leads to a second suggestion: be open-minded about theories and hypotheses, while still critical based on the structure. (And I think it’s pretty reasonable to argue that the Catholic Church was not open-minded in that sense, as De revolutionibus was restricted and Galileo published his later works in Protestant jurisdictions.) In revolutions in progress, being open-minded means allowing for reasonable revision of competing theories (per the structure point) to accommodate new data and, maybe more importantly, generating new predictions from those theories to guide further experiment and observation – to determine what data needs to be gathered to finally declare a winning horse.

***
Stray thoughts

  1. Let me explain how I corrected my view on the Coriolis effect. We mainly think of it as applying to motion parallel to the surface of the Earth, but on further thought, I realized it also applies to vertical motion (something further from the center of the Earth is moving at a faster rotational velocity than something closer, though they have the same angular velocity). Christopher Graney, a physics and astronomy professor at Jefferson Community and Technical College whom I will now probably academically stalk to keep in mind for jobs back home, has a good summary of Riccioli’s arguments from the Almagestum Novum in an article on arXiv, and also what looks like a good book that I’m adding to my history/philosophy of science wishlist on Amazon. The Coriolis effect arguments are Anti-Copernican Arguments III-VI, X-XXII, and XXVII-XXXIII. Riccioli also addresses the sunspots in Pro-Copernican Argument XLIII, though the argument is basically philosophical, about what kind of motion is more sensible. It’s worth pointing out that in the Almagestum, Riccioli collects almost all the arguments used on both sides in the mid-17th century, and he even points out which ones are wrong on both sides. This has led some historians to call it what Galileo’s Dialogue should have been, as Galileo pretty clearly favored heliocentrism in Dialogue while Riccioli remains relatively neutral in Almagestum.
  2. I’m concerned someone might play the annoying pedant by saying a) “But we know the sun isn’t the center of the Universe!” or b) “But relativity says you could think of the Earth as the center of the Universe!”. To a), well yeah, but it’s really hard to get to that point without thinking of us living in a solar system and thinking of other stars as like our sun. To b), look, you can totally shift the frames, but you’re basically changing the game at that point since no frame is special. Also, separate from that, if you’re really cranking out the general relativity equations, I still think you see more space-time deformation from the sun (unless something very weird happens in the non-inertial frame transform) so it still “dominates” the solar system, not the Earth.
  3. For a good example of the “consensus is dumb” listing of past consensuses, look at Michael Crichton’s rant from his 2003 “Aliens Cause Global Warming” Michelin Lecture at Caltech, beginning around “In science consensus is irrelevant. What is relevant is reproducible results.” Crichton gets close to acknowledging that consensus does in fact accommodate evidence in the plate tectonics example, but he writes it off. And to get to Crichton’s motivating point about climate science, it’s not like climate science always assumed man had a significant impact. The evolution of global warming theory goes back to Arrhenius, who hypothesized around 1900 that the release of CO2 from coal burning might have an effect after studying CO2’s infrared spectrum, and it wasn’t until the 60s and 70s that people thought it might outweigh other human contributions (hence the oft-misunderstood “global cooling” stories about reports from the mid-20th century).
  4. Or to sum up something that a certain class of people would love to make a scientific revolution but isn’t, consider anthropogenic climate change. Honestly, specific local temperature predictions being wrong generally isn’t a big deal unless, say, most of them can’t be explained by other co-occurring phenomena (e.g. the oceans seem to have absorbed most of the heat instead of it leading to rising surface temperatures), since the central part of the theory is that emission of CO2 and certain other human-produced gases has a pretty large effect due to radiative forcing, which traps more heat. Show that radiative forcing is wrong or significantly different from the current values, and that’s a really big deal. Or come up with evidence of something that might counter radiative forcing’s effect on temperature at almost the same scale, and while the concern would go away, it wouldn’t actually mean research on greenhouse gases was wrong. I would also argue that you do see open-mindedness in climate science, since people do still pursue the “iris hypothesis” and there are almost always studies on solar variability if you search NASA and NSF grants.

I Have a Hard Time Summing Up My Science and Politics Beliefs Into a Slogan

From a half-joking, half-serious post of my own on Facebook:

“SCIENCE IS POLITICAL BECAUSE THERE’S LOTS OF INFLUENCE BY POLITICAL AND POWERFUL CULTURAL INSTITUTIONS, BUT NOT PARTISAN. AND ALSO SCIENTIFIC RESULTS AFFECT MORE OF OUR LIVES. BUT LIKE MAN, WE REALLY SHOULDN’T DO THE WHOLE TECHNOCRACY THING. BUT LIKE EVIDENCE SHOULD MATTER. BUT ALSO VALUES MATTER WHEN EVALUATING STUFF. IT’S COMPLICATED. HAS ANYONE READ LATOUR? OR FEYERABEND? CAN SOMEONE EXPLAIN FEYERABEND TO ME? DOES ANYONE WANT TO GET DRINKS AND TALK AFTER THIS?”

Comic: “The End Is Not for a While”

Evidently, I am the alt-text from this comic.

“HERE ARE SOME GOOD ARTICLES ABOUT PHILOSOPHY AND SOCIOLOGY OF SCIENCE” (I didn’t actually give a list, since I knew I would never really be able to put that on a poster, but some suggested readings if you’re interested: the Decolonizing Science Reading List curated by astrophysicist Chanda Prescod-Weinstein, a recent article from The Atlantic about the March for Science, a perspective on Doing Science While Black, the history of genes as an example of the evolution of scientific ideas, honestly there’s a lot here, and this is just stuff I shared on my Facebook page over the last few months.)
“LIKE HOLY SHIT Y’ALL EUGENICS HAPPENED”
“LIKE, MAN, WE STERILIZED A LOT OF PEOPLE. ALSO, EVEN BASIC RESEARCH CAN BE MESSED UP. LIKE TUSKEGEE. OR LITERALLY INJECTING CANCER INTO PEOPLE TO SEE WHAT HAPPENS. OR CRISPR. LIKE, JEEZ, WHAT ARE WE GOING TO DO WITH THAT ONE? SOCIETY HELPS DETERMINE WHAT IS APPROPRIATE.”
“I FEEL LIKE I’M GOING OFF MESSAGE. BUT LIKE WHAT EXACTLY IS THE MESSAGE HERE”
“I DON’T KNOW WHAT THE MESSAGE IS, BUT THESE ARE PROBABLY GOOD TO DO. ESPECIALLY IF THEY INSPIRE CONVERSATIONS LIKE THIS.”
“ALSO, DID YOU KNOW THAT MULTICELLULAR LIFE INDEPENDENTLY EVOLVED AT LEAST 10 TIMES ON EARTH? I’M NOT GOING ANYWHERE WITH THAT, I JUST THINK IT’S NEAT AND WE DON’T TYPICALLY HEAR THAT IN INTRO BIO.”

Lynn Conway, Enabler of Microchips

Are you using something with a modern microprocessor on International Women’s Day? (If you’re not, but are somehow able to see this post, talk to a doctor. Or a psychic.) You should thank Dr. Lynn Conway, professor emerita of electrical engineering and computer science at Michigan and member of the National Academy of Engineering, who is responsible for two major innovations that are ubiquitous in modern computing. She is most famous for the Mead-Conway revolution: she developed the “design rules” used in Very-Large-Scale Integration (VLSI) architecture, the scheme that basically underlies all modern computer chips. Conway’s rules standardized chip design, making the process faster, easier, more reliable, and – perhaps most significantly for broader society – easy to scale down, which is why we are now surrounded by computers.
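The core idea of those rules, as I understand it, is scalability: express every geometric constraint in a layout as a multiple of a single length parameter λ, so a whole design can shrink along with the manufacturing process. A minimal sketch – the specific multiples below are illustrative textbook-style values, and real process rules vary:

```python
# Sketch of lambda-based design rules: every width and spacing is a
# multiple of one scalable parameter, lambda. Shrink lambda and the
# entire layout shrinks with it. Rule values here are illustrative.
LAMBDA_UM = 0.5  # lambda in microns for some hypothetical process

rules_um = {
    "min_poly_width":    2 * LAMBDA_UM,
    "min_metal_width":   3 * LAMBDA_UM,
    "min_metal_spacing": 3 * LAMBDA_UM,
    "min_diff_width":    2 * LAMBDA_UM,
}

# Moving to a finer process means changing one number, not redrawing chips:
for name, value in rules_um.items():
    print(f"{name}: {value:.2f} um")
```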


She is less known for her work on dynamic instruction scheduling (DIS). DIS lets a computer execute instructions out of order, so that later parts of code that do not depend on the results of earlier parts can start running instead of the whole program stalling until earlier operations finish (a toy sketch follows below). This lets programs run faster and be more efficient with processor and memory resources. Conway was less known for this work for years because she presented as a man when she began work at IBM. When Conway began her transition in 1968, she was fired because the transition was seen as potentially “disruptive” to the work environment. After leaving IBM and completing her transition, Conway lived in “stealth”, which prevented her from publicly taking credit for her work there until the 2000s, when she decided to reach out to someone studying the company’s work on “superscalar” computers in the 60s.
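Here is a toy illustration of the idea – a sketch of dependency-driven issue in general, not IBM’s or Conway’s actual design. Each instruction records which registers it reads; anything whose inputs are ready can issue, regardless of where it sits in program order:

```python
# Toy model of dynamic (out-of-order) instruction issue.
# Each instruction is (destination, sources); it may issue once all of its
# source registers have been produced, regardless of program order.
program = [
    ("r1", ("r0",)),        # r1 = f(r0)
    ("r2", ("r1",)),        # r2 = g(r1): must wait on r1
    ("r3", ("r0",)),        # r3 = h(r0): independent, can run alongside r1
    ("r4", ("r2", "r3")),   # r4 = k(r2, r3)
]

ready = {"r0"}              # register values available at the start
pending = list(program)
cycle = 0

while pending:
    cycle += 1
    # Issue every pending instruction whose inputs are all ready.
    issued = [(dst, srcs) for dst, srcs in pending if all(s in ready for s in srcs)]
    pending = [ins for ins in pending if ins not in issued]
    ready.update(dst for dst, _ in issued)
    print(f"cycle {cycle}: issued {[dst for dst, _ in issued]}")

# A strictly serial, in-order machine would need 4 cycles here; this
# dependency-driven version finishes in 3 because r3 issues while r2
# is still waiting on r1.
```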

Since coming out, Dr. Conway has been an advocate for trans rights, in science and in society. As a scientist herself, Dr. Conway is very interested in how trans people and the development of gender identity are represented in research. In 2007, she co-authored a paper showing that mental health experts seemed to be dramatically underestimating the number of trans people in the US by relying on studies of transition surgeries alone. In 2013 and 2014, Conway worked to make the IEEE’s Code of Ethics inclusive of gender identity and expression.

A good short biography of Dr. Conway can be found here. Or read her writings on her website.

Weirdly Specific Questions I Want Answers to in Meta-science, part 1

Using “meta-science” as a somewhat expansive term for history, philosophy, and sociology of science. And using my blog as a place to write about something besides the physical chemistry of carbon nanomaterials in various liquids.

  • To what extent is sloppy/misleading terminology an attempt to cash in on buzzwords? Clearly, we know that motive exists – there aren’t two major papers trying to narrow down precise definitions of graphene-related terms for nothing. But as the papers also suggest, at what point is it a legitimate debate in the community about setting a definition? “Graphene” was a term that described a useful theoretical construct for decades before anyone ever thought* someone could make a real sheet of it, so maybe it isn’t unreasonable that people started using it to describe a variety of physical things related to the original idea.
    • This contains a sort of follow-up: what properties do people use in clarifying these definitions, and how much does that vary by background? Personally, I would say I work much closer to the ideal of “graphene” than lots of people working with more extensively chemically modified graphene derivatives, and I’m fine with using the term for almost anything that’s nearly all sp2 carbon with about 10 layers or fewer. But would a physicist who cares more about the electronic properties, which vary a lot with the number of layers even in that lower limit, consider that maddening?
  • Nanoscience is very interdisciplinary/transdisciplinary, but individual researchers can be quite grounded in just one field. How much work is being done where researchers are missing basic knowledge of another field their work is now straddling?
    • For instance, when reading up on polymer nanocomposites, lots of people with extensive polymer science backgrounds have noted that many papers don’t engage with basic aspects of polymer physics. My hunch is that a lot of this comes from the fact that many people in this field started out working on the nanoparticles they want to incorporate into the composites and then moved into the composites. They may have backgrounds more in fields like solid-state physics, electrical engineering, or (inorganic/metallic/ceramic) materials science, where they would have been less likely to deal with polymer theory.
    • Similarly, it was noted in one paper I read that a lot of talk about solutions of nanoparticles would probably be more precise if the discussion were framed in the terminology of colloids and dispersions.

A book cover.

Oh my gosh, I made fun of the subtitle for like two years, but it’s true

  • Is the ontological status of defects in nanoscience distinct from their treatment in bulk studies of materials? This is a bit related to the first question in that some definitions would preclude the existence of some defects in the referent material/structure.
    • On the other hand, does this stricter treatment make more sense in the few-atom limit of many nanomaterials? Chemists can literally specify the type and location of every atom in successful products of well-studied cluster reactions, though these are pushing the term “nano” (in the sense that they may be too small).
    • Is this a reflection of applications of defects at the different scales? (More philosophically worded, are defects treated differently because of their teleological nature?) At the bulk level, we work to engineer the nature of defects to help develop the properties we want. At the nanoscale, some structures can basically be ruined for certain applications by the mislocation of a single atom. Is this also a reflection of the current practical need to scale up our ability to make nanomaterials? E.g. as more realistic approaches to large-scale nanotech fabrication are developed, will the practical treatment of defects in nanomaterials converge to how we treat defects in the bulk?

*Okay, more like anyone cared a lot about it, since there are papers going back to the 1960s where researchers describe what appear to be atomic monolayers of graphite.

Reclaiming Science as a Liberal Art

What do you think of when someone talks about the liberal arts? Many of you probably think of subjects like English and literature, history, classics, and philosophy. Those are all a good start for a liberal education, but those are only fields in the humanities. Perhaps you think of the social sciences, to help you understand the institutions and actors in our culture; fields like psychology, sociology, or economics. What about subjects like physics, biology, chemistry, or astronomy? Would you ever think of them as belonging to the liberal arts, or would you cordon them off into the STEM fields? I would argue that excluding the sciences from the liberal arts is both historically wrong and harms society.

First, let’s look at the original conception of the liberal arts. Your study would begin with the trivium, the three subjects of grammar, logic, and rhetoric. The trivium has been described as a progression of study into argument. Grammar is concerned with how things are symbolized. Logic is concerned with how things are understood. Rhetoric is concerned with how things are effectively communicated, because what good is it to understand things if you cannot properly share your understanding with other learned people? With its focus on language, the trivium does fit the common stereotype of the liberal arts as a humanistic writing education.

But it is important to understand that the trivium was considered only the beginning of a liberal arts education. It was followed by the supposedly more “serious” quadrivium of arithmetic, geometry, music, and astronomy. The quadrivium is focused on number and can also be viewed as a progression. Arithmetic teaches you about pure numbers. Geometry looks at number to describe space. Music, as it was taught in the quadrivium, focused on the ratios that produce notes and the description of notes in time. Astronomy comes last, as it builds on this knowledge to understand the mathematical patterns in space and time of bodies in the heavens. Only after completing the quadrivium, when one would have a knowledge of both language and numbers, would a student move on to philosophy or theology, the “queen of the liberal arts”.

7 Liberal Arts

The seven liberal arts surrounding philosophy.

Although this progression might seem strange to some, it makes a lot of sense when you consider that science developed out of “natural philosophy”. Understanding what data and observations mean, whether they come from an ordinary experiment or from “big data”, is a philosophical activity. As my professors say, running an experiment without an understanding of what I measured makes me a technician, not a scientist. Or consider alchemists, who included many great experimentalists who developed some important chemical insights, but who are typically excluded from our conception of science because they worked with different philosophical assumptions. The findings of modern science also tie into major questions that define philosophy. What does it say about our place in the universe if there are 10 billion planets like Earth in our galaxy, or when we are connected to all other living things on Earth through chemistry and evolution?

We get the term liberal arts from Latin, artes liberales, the arts or skills that are befitting of a free person. The children of the privileged would pursue those fields. This was in contrast to the mechanical arts – fields like clothesmaking, agriculture, architecture, martial arts, trade, cooking, and metalworking. The mechanical arts were a decent way for someone without status to make a living, but still considered servile and unbecoming of a free (read “noble”) person. This distinction breaks down in modern life because we are no longer that elitist in our approach to liberal education. We think everyone should be “free”, not just an established elite.

More importantly, in a liberal democracy, we think everyone should have some say in how they are governed. Many major issues in modern society relate to scientific understanding and knowledge. To talk about vaccines, you need to have some understanding of the immune system. The discussion over chemicals is very different when you know that we are made up of chemicals. It is hard to understand what is at stake in climate change without a knowledge of how Earth’s various geological and environmental systems work, and it is hard to evaluate solutions if you don’t know where energy comes from. And how can we talk about surveillance without understanding how information is obtained and distributed? The Founding Fathers said they had to study politics and war to win freedom for their new nation. As part of a liberal education, Americans today need to learn science in order to keep theirs.

(Note: This post is based on a speech I gave as part of a contest at UVA. It reflects a view I think is often unconsidered in education discussions, so I wanted to adapt it into a blog post.

As another aside, it’s incredibly interesting that people now tend to unambiguously think of the social sciences as part of the liberal arts while wavering more on the natural sciences, since the idea of a “social” science wasn’t really developed until well after the conception of the liberal arts.)

Thoughts on Basic Science and Innovation

Recently, science writer and House of Lords member Matt Ridley wrote an essay in The Wall Street Journal about the myth of basic science leading to technological development. Many people have criticized it, and it even seems like Ridley has walked back some claims. One engaging take can be found here, which includes a great quote that I think helps summarize a lot of the reaction:

I think one reason I had trouble initially parsing this article was that it’s about two things at once. Beyond the topic of technology driving itself, though, Ridley has some controversial things to say about the sources of funding for technological progress, much of it quoted from Terence Kealy, whose book The Economic Laws of Scientific Research has come up here before

But also, it seems like Ridley has a weird conception of the two major examples he cites as overturning the “myth”: steam engines and the structure of DNA. The issue with steam engines is that we mainly associate them with James Watt, whom you memorialize every time you fret about how many watts all your devices are consuming. Steam engines actually preceded Watt; the reason we associate them with him is that he greatly improved their efficiency through his understanding of latent heat, the energy that goes into changing something from one phase to another. (We sort of discussed this before. The graph below helps summarize.) Watt understood latent heat because his colleague and friend Joseph Black, a chemist at the University of Glasgow, discovered it.

A graph of temperature versus heat added for water, showing flat plateaus at the phase changes.

The latent heat is the heat added to go between phases. In this figure, it is represented by the horizontal segment between the ice curve and the heating of liquid water, and by the other horizontal segment between the heating of liquid water and the heating of water vapor.
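To put rough numbers on why latent heat mattered so much to engine efficiency (standard textbook values, my addition): heating a mass m of water by ΔT takes sensible heat, while boiling it off takes latent heat,

```latex
Q_{\text{heat}} = m\,c\,\Delta T, \qquad Q_{\text{boil}} = m\,L.
```

With c ≈ 4.2 kJ/(kg·K) and L ≈ 2260 kJ/kg, taking 1 kg of water from 20 °C to 100 °C costs about 340 kJ, but actually turning it into steam costs about 2260 kJ – nearly seven times more. Every kilogram of steam an engine condenses wastefully throws away that much heat, which is why Watt’s efficiency improvements hinged on understanding latent heat.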

I don’t know whether X-ray crystallography was ever used for industrial purposes in the textile industry, but it has pretty consistently been used in academia since the basic principles were discovered a century ago. The 1915 Nobel Prize in Physics was literally for the development of X-ray crystallography theory. A crystal structure of a biological molecule was determined by X-ray studies at least as early as 1923, if not earlier. The idea that DNA crystallography only took off as a popular technique because of spillover from industry is incredibly inaccurate.

Ridley also seems to make a basic assumption: that government has crowded out the private sector as a source of basic research over the past century. It sounds reasonable, and it seems testable as a hypothesis. As a percentage of GDP (which seems like a semi-reasonable metric for concerns about crowding out), federal spending on research and development has generally been on the decline since the 70s, and is now about a third less than its relatively stable level that decade. If private R&D had been crowded out, a competitor dropping by that much seems like a decent opening for some resurgence, especially since one of the most cited examples of private research, Bell Labs, was still going strong all this time. But instead, Bell cut most of its basic research programs just a few years ago.

Federal research spending as a percentage of GDP from the 1976 to 2016 fiscal years. The total shows a slight decrease from 1976 to 1992, a large drop in the 90s, a recovery in 2004, and a drop since 2010.

Federal spending on R&D as a percentage of GDP over time

To be fair, more private philanthropists now seem to be funding academic research. The key word, though, is philanthropic, not commercial – the funding mode Ridley refers to a lot throughout the essay. Also, a significant amount of this new private funding is for prizes, and you can only get a prize after you have done the work.

There is one major thing to take from Ridley’s essay, though I think most scientists would admit it too. It’s somewhat ridiculous to try to lay out a clear path from a new fundamental result to a practical application, and if you hear a researcher claim to have one, keep your BS filter high. As The New Yorker has discussed, even results that seem obviously practical have a hard time clearing feasibility hurdles. (Also, maybe it’s just a result of small reference pools, but a lot of researchers I read are also concerned that research now seems to require some clear “mother of technology” justification.) Similarly, practical developments may not always be obvious. Neil deGrasse Tyson once pointed out that if you spoke to a physicist in the late 1800s about the best way to quickly heat something, they would probably not describe anything resembling a microwave.

Common timeframe estimates of when research should result in a commercially available product, followed by translations suggesting how unrealistic this is:

  • “The fourth quarter of next year” – The project will be canceled in six months.
  • “Five years” – I’ve solved the interesting research problems. The rest is just business, which is easy, right?
  • “Ten years” – We haven’t finished inventing it yet, but when we do, it’ll be awesome.
  • “25+ years” – It has not been conclusively proven impossible.
  • “We’re not really looking at market applications right now.” – I like being the only one with a hovercar.

Edit to add: Also, I almost immediately regret using innovation in the title because I barely address it in the post, and there’s probably a great discussion to have about that word choice by Ridley. Apple, almost famously, funds virtually no basic research internally or externally, which I often grumble about. However, I would not hesitate to call Apple an “innovative” company. There are a lot of design choices that can improve products that can be pretty divorced from the physical breakthroughs that made them unique. (Though it is worth pointing out human factors and ergonomics are very active fields of study in our modern, device-filled lives.)

Red Eye Take Warning – Our Strange, Cyclical Awareness of Pee in Pools

The news has been abuzz lately with a terrifying revelation: if you get red eye at the pool, it’s not from the chlorine, it’s from urine. Or to put it more accurately, from the product of chlorine reacting with a chemical in the urine. In the water, chlorine easily reacts with uric acid, a chemical found in urine and also in sweat, to form chloramines. It’s not surprising that this caught a lot of people’s eyes, especially since those product chemicals are linked to more than just eye irritation. But what’s really weird is what spurred this all on. It’s not a new study that finally proved this. It’s just the release of the CDC’s annual safe swimming guide and a survey from the National Swimming Pool Foundation. And this isn’t the first year the CDC has mentioned this fact: an infographic from 2014’s Recreational Water Illness and Injury Prevention Week does, and two different posters from 2013 do (the posters have had some slight tweaks, but the Internet Archive confirms they were there in 2013 and even 2012), and on a slightly related note, a poster from 2010 says that urine in the pool uses up the chlorine.
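For the chemically curious, here’s the textbook inorganic version of the chloramine chemistry (my addition – the uric acid pathway the Purdue work focuses on is more complex and yields products like cyanogen chloride and trichloramine): chlorine hydrolyzes in water to hypochlorous acid, which then chlorinates nitrogen compounds stepwise,

```latex
\mathrm{Cl_2 + H_2O \;\rightarrow\; HOCl + HCl}
\mathrm{HOCl + NH_3 \;\rightarrow\; NH_2Cl + H_2O}
\mathrm{NH_2Cl + HOCl \;\rightarrow\; NHCl_2 + H_2O}
\mathrm{NHCl_2 + HOCl \;\rightarrow\; NCl_3 + H_2O}
```

Each step consumes hypochlorous acid that would otherwise be disinfecting the pool, which is the sense in which pee “uses up the chlorine.”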

A young smiling boy is at the edge of a swimming pool, with goggles on his forehead.

My neighborhood swim coach probably could have convinced me to wear goggles a lot earlier if she had told me it would keep pee out of my eyes.

Here’s what I find even stranger. Last year there was a lot of publicity about a study suggesting the products of the chlorine-uric acid reaction might be linked to more severe harm than just red eye. But neither Bletchley, the leader of the study, nor any of the articles about it linked the chemicals to red eye at all, or even mentioned urine’s role in red eye in the pool. Also, if you’re curious about the harm but don’t want to read the articles, the conclusion is that it doesn’t even reach the dangerous limits for drinking water. According to The Atlantic, Bletchley worries more that an event like a swimming competition might deplete the chlorine available for disinfecting a pool in only a short amount of time. This seems strange to me, because it looks like a great opportunity to point out that eye irritation can be a decent personal marker for the quality of the pool, as a way to empower people. If you’re at a pool and your eyes feel like they’re on fire, or you’re hacking a lot without having swallowed water, maybe that’s a good sign to tell the lifeguard they need to add more chlorine, because most of it has probably formed chloramines by then.

Discussion of urine and red eye seems to phase in and out over time, and even the focus on whether it’s sweat or urine does too. In 2013, the same person from the CDC spoke with LiveScience, and they mention that the pool smell and red eye are mainly caused by chloramines (and therefore urine and sweat), not chlorine. A piece from 2012 reacting to a radio host goes into detail on chloramines. During the 2012 Olympics, Huffington Post discussed the irritating effects of chloramines on your body, including red eye, and the depletion of chlorine for sterilization, after many Olympic swimmers admitted to peeing in the pool. (Other pieces seem to ignore that this reaction happens and assume it’s fine since urine itself doesn’t have any compounds or microbes that would cause disease.) In 2009, CNN mentioned that chloramines cause both red eye and some respiratory irritation. The article is from around Memorial Day, suggesting it was just a typical awareness piece. Oh, and it also refers to a 2008 interview with Michael Phelps admitting that Olympians pee in the pool. The CDC also mentioned chloramines as potential asthma triggers in poorly maintained and ventilated pools and as eye irritants in a web page and review study that year. In 2008, the same Purdue group published what seems to be the first study to analyze these organic byproducts, because others had only looked at inorganic molecules. There the health concern is mainly about respiratory problems caused by poor indoor pool maintenance, because these chemicals can build up. Nothing about red eye is mentioned. In 2006, someone on the Straight Dope discussion boards referred to a recent local news article attributing red eye in the pool to chlorine bonding with pee or sweat, and asked whether that’s true. Someone on the board claimed it’s actually because chlorine in the pool forms a small amount of hydrochloric acid that will always irritate your eyes. A later commenter linked to a piece by the Water Quality and Health Council pinning chloramine as the culprit. An article from the Australian Broadcasting Corporation talks about how nitrogen from urine and sweat is responsible for that “chlorine smell” at pools, but doesn’t mention it causing irritation or using up chlorine that could go to sterilizing the pool.

Finally, I just decided to look up the earliest mention possible by restricting Google searches to earlier dates. Here is an article from the Chicago Tribune in 1996.

There is no smell when chlorine is added to a clean pool. The smell comes as the chlorine attacks all the waste in the pool. (That garbage is known as “organic load” to pool experts.) So some chlorine is in the water just waiting for dirt to come by. Other chlorine is busy attaching to that dirt, making something called combined chlorine. “It’s the combined chlorine that burns a kid’s eyes and all that fun stuff,” says chemist Dave Kierzkowski of Laporte Water Technology and Biochem, a Milwaukee company that makes pool chemicals.

We’ve known about this for nearly 20 years! We just seem to forget. Often. I realize part of this is the seasonal nature of swimming, so most news outlets do a piece on pool safety every year. But even so, it seems like every few years people are surprised that it is not chlorine that stings your eyes, but the product of its reaction with waste in the water. I’m curious whether I can find older mentions through LexisNexis or the journal searches I can do at school. (Google results for sites older than 1996 don’t make much sense, because it seems like the crawler is picking up more recent related stories that happen to show up as suggestions on older pages.) Also, I’m curious about the distinction between Bletchley’s tests and pool supplies that measure “combined chlorine” and chloramine, which is discussed in this 2001 article as causing red eye. I imagine his are more precise, but Bletchley also says people don’t measure it, and I wonder why.