Galileo Did Do Experiments

After finding an old book of mine, The Ten Most Beautiful Experiments, over winter break, I wanted to follow up on my last post. This post is based almost entirely on that book’s chapter on Galileo, but since I don’t see the material summarized in many places, I thought it was worth writing up. It is somewhat in vogue to claim that Galileo never actually performed his experiments on falling bodies and that his writings describe only thought experiments. This claim, however, conflates two different experiments attributed to Galileo. Most historians do believe the stories of Galileo dropping weights from the Leaning Tower of Pisa are apocryphal, stemming either from a thought experiment that Salviati, one of the fictional conversationalists in Two New Sciences, describes performing there, or from a largely unsourced claim made by Galileo’s secretary in a biography written after his death.

However, Salviati also describes an experiment that Galileo is recognized as having done: measuring the descent of balls of different weights down ramps. Balls on a ramp follow the same basic equation as bodies in free fall, just modified by the angle of the slope. A few people may doubt Galileo actually completed the ramp experiment, based on criticisms by Alexandre Koyré in the 1950s that Galileo’s methods seemed too vague and imprecise to measure the acceleration. However, many researchers (like the Rice team in an above link) have found it possible to get data close to Galileo’s using the method Salviati describes. Additionally, another historian, Stillman Drake, who had access to more of Galileo’s manuscripts, found what appear to be records of raw experimental data that show reasonable error. Drake also suggests that Galileo may have originally kept time using musical tempo before moving on to the water clock. Wikipedia (I know, but I don’t have much to go on) also suggests Drake does believe in the Leaning Tower of Pisa experiment. While Galileo may not have done it at that tower, his accounts evidently include a description corresponding to an observed quirk that happens when people try to freely drop objects of different sizes at the same time, which suggests he tried free fall somewhere.
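As a physics aside, here’s a minimal sketch (in Python, with a ramp angle I made up for illustration; this is not Galileo’s actual setup) of why the ramp works as slowed-down free fall: a ball rolling down a slope accelerates at a fixed fraction of g set by the angle, so distance still grows as the square of time, which is exactly the pattern Galileo was testing. The 5/7 rolling correction is modern mechanics Galileo didn’t know; it rescales the acceleration but doesn’t change the times-squared law.

```python
import math

g = 9.81                  # m/s^2, gravitational acceleration
angle = math.radians(5)   # a shallow ramp stretches out the motion

# For a solid ball rolling without slipping, a = (5/7) * g * sin(angle);
# the 5/7 comes from the ball's moment of inertia (a modern refinement).
a = (5 / 7) * g * math.sin(angle)

# Distances in successive equal time intervals follow the odd-number rule
# 1 : 3 : 5 : 7, the signature of constant acceleration Galileo reported.
previous = 0.0
for t in range(1, 5):
    d = 0.5 * a * t**2
    print(f"t = {t} s: total {d:.3f} m, this interval {d - previous:.3f} m")
    previous = d
```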


What to do if you’re inside a scientific revolution

A LesserWrong user (LesserWrong-er?) has a thought-provoking post on The Copernican Revolution from the Inside, with two questions in mind: (1) if you lived in 17th-century Europe, would you have accepted heliocentrism on an epistemic level, and (2) how do you become the kind of person who would say yes to question 1? It’s interesting in the same sense as the often-asked question “What would you be doing during the Civil Rights Movement/Holocaust/Other Time of Great Societal Change”: most people realize they probably would not have been a great crusader against the norms of another time. But as someone in Charlottesville in the year 2017, asking what you’d be doing in scientific arguments is less terrifyingly relevant than asking people how they’d deal with Nazism, so we’ll just focus on that.

[Image: cover of Kuhn’s The Structure of Scientific Revolutions, showing a whirlpool behind the title text.]

Look, you’re probably in at least one.

For once on the internet, I recommend reading the comments: I think they flesh out the argument a lot more and correct some strawmanning of the heliocentrists by the OP. Interestingly, the OP actually says he thinks

In fact, one my key motivations for writing it — and a point where I strongly disagree with people like Kuhn and Feyerabend — is that I think heliocentrism was more plausible during that time. It’s not that Copernicus, Kepler Descartes and Galileo were lucky enough to be overconfident in the right direction, and really should just have remained undecided. Rather, I think they did something very right (and very Bayesian). And I want to know what that was.

and seems surprised that commenters think he went too pro-geocentrist. I recommend the following if you want the detailed correction, but I’ll also summarize the main points so you don’t have to:

  • Thomas Kehrenberg’s comment, as it corrects factual errors in the OP regarding sunspots and Jupiter’s moons
  • MakerOfErrors for suggesting the real methodological point: both the geocentric and heliocentric systems should have been treated with more uncertainty around the time of Galileo, until more evidence came in
  • Douglas_Knight for pointing out a factual error regarding Venus, and for an argument regarding the Coriolis effect that I’m sympathetic to but evidently wrong on, which I’ll get to below. I do think it’s important to acknowledge that Galilean relativity is a thing, though, and that reduces the potential error a lot.
  • Ilverin for sort of continuing MakerOfErrors’s point and suggesting the true rationalist lesson is how you deal with competing theories that both carry high uncertainty

It’s also worth pointing out that even the Tychonic system didn’t resolve Galileo’s argument for heliocentrism based on sunspots. (A modification to Tycho’s system by one of his students that allows for the rotation of the Earth supposedly resolves the sunspot issue, but I haven’t heard many people mention it yet.)

Also, knowing that we didn’t have a good understanding of the Coriolis effect until, well, Coriolis in the 1800s (though there are some mathematical descriptions in the 1700s), I was curious to what extent people made this objection in the time of Galileo. It turns out Galileo himself predicted the effect as a consequence of a rotating Earth. Giovanni Riccioli, a Jesuit scientist, seems to have made the most rigorous qualitative argument against heliocentrism from it: cannon fire and falling objects are not notably deflected from straight-line paths. I want to point out that Riccioli does virtually no math in his argument on the Coriolis effect (unless there’s a lot in the original text that I don’t see in the summary of his Almagestum Novum). This isn’t uncommon pre-Newton, and no one would have the exact tools to deal with Coriolis forces for almost 200 years. But one could reasonably make a scaling argument about whether the Coriolis effect matters based only on the length scale you’re measuring and the rotation speed of the Earth (which would literally just be taking the inverse of a day) and see that the heliocentrists aren’t insane.
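To make that concrete, here’s a minimal back-of-the-envelope sketch of the scaling argument in Python. The muzzle speed, range, and latitude are my own illustrative assumptions, not numbers from Riccioli or Galileo:

```python
import math

OMEGA = 2 * math.pi / 86400   # Earth's rotation rate: one turn per day (rad/s)

speed = 450.0        # m/s, assumed muzzle speed of a period cannon
distance = 1500.0    # m, assumed range of the shot
latitude = math.radians(45)

flight_time = distance / speed
# Horizontal Coriolis acceleration ~ 2 * OMEGA * v * sin(latitude);
# integrating twice over the flight gives the sideways drift.
drift = OMEGA * speed * math.sin(latitude) * flight_time**2

print(f"flight time ~ {flight_time:.1f} s, sideways drift ~ {drift:.2f} m")
```

Roughly a quarter of a meter of drift over a kilometer and a half: real, but far too small to notice in cannon fire without instruments no one had, so the absence of visible deflection couldn’t actually rule out a rotating Earth.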

It’s not a sexy answer to the second question, but I think “patience for new data” goes a long way towards making you the kind of person who can say yes to the first question. You hear the term “Copernican revolution” thrown around like a very specific event, and I think it’s pretty easy to forget the relative timeframes of the major players unless this is your bread and butter. Copernicus’ De revolutionibus came out in 1543. Newton’s Principia came out in 1687; it gave a physical explanation for Kepler’s empirical laws, led to their wider acceptance, and so can be considered a decent (if oversimplified) endpoint for the debate. Galileo began to get vocal about heliocentrism in the early 1610s. The Almagestum Novum came out in 1651. For over a century, people on both sides were gathering and interpreting new data and refining their theories.

I also like this article for a related point, albeit one a bit removed from the author’s thesis. In considering the question of how we should accept new theories, we can watch the historical process of one theory overtaking another to become “scientific consensus”. Earlier this year, rationalist Scott Alexander, in a post on Learning to Love Scientific Consensus, concisely summarized why the typical “consensus is meaningless” trope of just listing times consensus has turned out to be wrong isn’t particularly useful in understanding science:

I knew some criticisms of a scientific paradigm. They seemed right. I concluded that scientists weren’t very smart and maybe I was smarter. I should have concluded that some cutting-edge scientists were making good criticisms of an old paradigm. I can still flatter myself by saying that it’s no small achievement to recognize a new paradigm early and bet on the winning horse. But the pattern I was seeing was part of the process of science, not a condemnation of it.

Most people understand this intuitively about past paradigm shifts. When a creationist says that we can’t trust science because it used to believe in phlogiston and now it believes in combustion, we correctly respond that this is exactly why we can trust science. But this lesson doesn’t always generalize when you’re in the middle of a paradigm shift right now and having trouble seeing the other side.

The notion of “trusting” scientific consensus, I think, gets at a larger point. There are way more non-scientists than scientists, so most people aren’t in a position to rigorously evaluate contemporary analogues of the Copernican revolution, and you often have to trust consensus at least a little. Scientists also aren’t experts in every field, so even they can’t evaluate all disputes and must rely on the work of their colleagues in other departments. And given how many fields of science there are, there’s probably always at least one scientific revolution going on in your lifetime, if not several. Fortunately they don’t all take 150 years to resolve. (Though major cosmological ones can take a long time when they hinge on new instruments and data that take a long time to acquire.)

But if you want to be the kind of person who can evaluate revolutions (or maybe attempts at revolutions), and I hope you are, then here’s a bit more advice for the second question à la Kuhn: try to understand the structure of competing theories. This doesn’t mean a detailed understanding of every equation or concept, but realize that some things are much more important to how a theory functions than others, and some predictions are relatively minor (see point 4 below for an application to something that I think pretty clearly doesn’t amount to a revolution today). To pure geocentrists, the phases of Venus were theory-breaking, because geocentrism allows no mechanism for a full range of phases for only some planets, and so they had to move to Tycho’s model. To both groups writ large, it didn’t break the theories if orbits weren’t perfectly circular, partly because there wasn’t really a force driving motion in either theory until Kepler, and he wasn’t sure what actually provided it. (Several scientific revolutions later, it gets hard to evaluate their theories entirely within the language of our current concepts.) People held on to circular orbits because of other attachments.

Which leads to a second suggestion: be open-minded about theories and hypotheses, while still critical based on the structure. (And I think it’s pretty reasonable to argue that the Catholic Church was not open-minded in that sense, as De revolutionibus was restricted and Galileo published his later works in Protestant jurisdictions.) In revolutions in progress, being open-minded means allowing for reasonable revision of competing theories (per the structure point) to accommodate new data, and, maybe more importantly, generating new predictions from those theories to guide further experiment and observation, so we can determine what data needs to be gathered to finally declare a winning horse.

***
Stray thoughts

  1. Let me explain how I corrected my view on the Coriolis effect. We mainly think of it as applying to motion parallel to the surface of the Earth, but on further thought, I realized it also applies to vertical motion: something farther from the center of the Earth moves at a faster tangential velocity than something closer, even though they have the same angular velocity (a minimal numeric sketch of the resulting drift follows this list). Christopher Graney, a physics and astronomy professor at Jefferson Community and Technical College whom I will now probably academically stalk to keep in mind for jobs back home, has a good summary of Riccioli’s arguments from the Almagestum Novum in an article on arXiv, and also what looks like a good book that I’m adding to my history/philosophy of science wishlist on Amazon. The Coriolis effect arguments are Anti-Copernican Arguments III-VI, X-XXII, and XXVII-XXXIII. Riccioli also addresses the sunspots in Pro-Copernican Argument XLIII, though the argument is basically philosophical, concerning what kind of motion is more sensible. It’s worth pointing out that in the Almagestum, Riccioli collects almost all arguments used on both sides in the mid-17th century, and he even points out which ones are wrong on each side. This has led some historians to call it what Galileo’s Dialogue should have been: Galileo pretty clearly favored heliocentrism in Dialogue, but Riccioli remains relatively neutral in Almagestum.
  2. I’m concerned someone might play the annoying pedant by saying a) “But we know the sun isn’t the center of the Universe!” or b) “But relativity says you could think of the Earth as the center of the Universe!”. To a), well yeah, but it’s really hard to get to that point without thinking of us living in a solar system and thinking of other stars as like our sun. To b), look, you can totally shift the frames, but you’re basically changing the game at that point since no frame is special. Also, separate from that, if you’re really cranking out the general relativity equations, I still think you see more space-time deformation from the sun (unless something very weird happens in the non-inertial frame transform) so it still “dominates” the solar system, not the Earth.
  3. For a good example of the “consensus is dumb” listing of consensuses past, look at Michael Crichton’s rant from his 2003 Michelin Lecture at Caltech, “Aliens Cause Global Warming,” beginning around “In science consensus is irrelevant. What is relevant is reproducible results.” Crichton gets close to acknowledging that consensus does in fact accommodate evidence in the plate tectonics example, but he writes it off. And to get to Crichton’s motivating point about climate science, it’s not like climate science always assumed man had a significant impact. Global warming theory goes back to Arrhenius, who hypothesized around 1900, after studying CO2’s infrared spectrum, that the release of CO2 from coal burning might have an effect, and it wasn’t until the 60s and 70s that people thought it might outweigh other human contributions (hence the oft-misunderstood “global cooling” stories about reports from the mid-20th century).
  4. Or to sum up something that a certain class of people would love to make a scientific revolution but isn’t, consider anthropogenic climate change. Honestly, specific local temperature predictions being wrong generally isn’t a big deal, unless, say, most of them can’t be explained by other co-occurring phenomena (e.g. the oceans seem to have absorbed most of the heat instead of it leading to rising surface temperatures), since the central part of the theory is that emission of CO2 and certain other human-produced gases has a pretty significant effect due to radiative forcing, which traps more heat in the atmosphere. Show that radiative forcing is wrong or significantly different from the current values, and that’s a really big deal. Or come up with evidence of something that might counter radiative forcing’s effect on temperature at almost the same scale, and while the concern would go away, it’s worth pointing out this wouldn’t actually mean research on greenhouse gases was wrong. I would also argue that you do see open-mindedness in climate science, since people still pursue the “iris hypothesis,” and there are almost always studies on solar variability if you search NASA and NSF grants.
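Here’s the numeric sketch promised in point 1: the classic eastward drift of a dropped object, d = (1/3) Ω g t³ cos(latitude). The tower height is a hypothetical round number, not any particular historical tower:

```python
import math

OMEGA = 2 * math.pi / 86400   # Earth's rotation rate (rad/s)
g = 9.81                      # m/s^2

height = 100.0                # m, assumed tower height for illustration
latitude = math.radians(45)

fall_time = math.sqrt(2 * height / g)
# Eastward drift of a dropped object: d = (1/3) * OMEGA * g * t^3 * cos(lat)
drift = OMEGA * g * fall_time**3 * math.cos(latitude) / 3

print(f"fall time ~ {fall_time:.1f} s, eastward drift ~ {100 * drift:.1f} cm")
```

About a centimeter and a half after a 100 m drop, which is why “falling bodies aren’t visibly deflected” was a much weaker argument than it looked at the time.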

I Have a Hard Time Summing Up My Science and Politics Beliefs Into a Slogan

From a half-joking, half-serious post of my own on Facebook:

“SCIENCE IS POLITICAL BECAUSE THERE’S LOTS OF INFLUENCE BY POLITICAL AND POWERFUL CULTURAL INSTITUTIONS, BUT NOT PARTISAN. AND ALSO SCIENTIFIC RESULTS AFFECT MORE OF OUR LIVES. BUT LIKE MAN, WE REALLY SHOULDN’T DO THE WHOLE TECHNOCRACY THING. BUT LIKE EVIDENCE SHOULD MATTER. BUT ALSO VALUES MATTER WHEN EVALUATING STUFF. IT’S COMPLICATED. HAS ANYONE READ LATOUR? OR FEYERABEND? CAN SOMEONE EXPLAIN FEYERABEND TO ME? DOES ANYONE WANT TO GET DRINKS AND TALK AFTER THIS?”

[Image: comic “The End Is Not for a While”.]

Evidently, I am the alt-text from this comic.

“HERE ARE SOME GOOD ARTICLES ABOUT PHILOSOPHY AND SOCIOLOGY OF SCIENCE” (I didn’t actually give a list, since I knew I would never really be able to put that on a poster, but some suggested readings if you’re interested: the Decolonizing Science Reading List curated by astrophysicist Chanda Prescod-Weinstein, a recent article from The Atlantic about the March for Science, a perspective on Doing Science While Black, the history of genes as an example of the evolution of scientific ideas, honestly there’s a lot here, and this is just stuff I shared on my Facebook page over the last few months.)
“LIKE HOLY SHIT Y’ALL EUGENICS HAPPENED”
“LIKE, MAN, WE STERILIZED A LOT OF PEOPLE. ALSO, EVEN BASIC RESEARCH CAN BE MESSED UP. LIKE TUSKEGEE. OR LITERALLY INJECTING CANCER INTO PEOPLE TO SEE WHAT HAPPENS. OR CRISPR. LIKE, JEEZ, WHAT ARE WE GOING TO DO WITH THAT ONE? SOCIETY HELPS DETERMINE WHAT IS APPROPRIATE.”
“I FEEL LIKE I’M GOING OFF MESSAGE. BUT LIKE WHAT EXACTLY IS THE MESSAGE HERE”
“I DON’T KNOW WHAT THE MESSAGE IS, BUT THESE ARE PROBABLY GOOD TO DO. ESPECIALLY IF THEY INSPIRE CONVERSATIONS LIKE THIS.”
“ALSO, DID YOU KNOW THAT MULTICELLULAR LIFE INDEPENDENTLY EVOLVED AT LEAST 10 TIMES ON EARTH? I’M NOT GOING ANYWHERE WITH THAT, I JUST THINK IT’S NEAT AND WE DON’T TYPICALLY HEAR THAT IN INTRO BIO.”

Lynn Conway, Enabler of Microchips

Are you using something with a modern microprocessor on International Women’s Day? (If you’re not, but are somehow able to see this post, talk to a doctor. Or a psychic.) You should thank Dr. Lynn Conway, professor emerita of electrical engineering and computer science at Michigan and member of the National Academy of Engineering, who is responsible for two major innovations that are ubiquitous in modern computing. She is most famous for the Mead-Conway revolution: she developed the “design rules” used in Very-Large-Scale Integration (VLSI) architecture, the scheme that underlies basically all modern computer chips. Conway’s rules standardized chip design, making the process faster, easier, more reliable, and, perhaps most significantly for broader society, easy to scale down, which is why we are now surrounded by computers.
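To illustrate the idea, here is a toy sketch of the lambda-rule concept, with made-up rule values rather than the actual published Mead-Conway rule set: geometry and minimum-size rules are both expressed in a scalable unit, lambda, so shrinking the process means shrinking lambda while the layout itself is untouched.

```python
# Toy lambda-rule check: both the rules and the layout are in lambda units,
# so the same design remains legal as fabrication processes shrink.
rules = {"min_width": 2, "min_spacing": 3}    # required minimums, in lambda
layout = {"min_width": 2, "min_spacing": 4}   # as drawn, in lambda

for lam_um in (3.0, 1.5, 0.8):  # successively finer process generations
    print(f"lambda = {lam_um} um:")
    for rule, required_lam in rules.items():
        drawn_um = layout[rule] * lam_um
        required_um = required_lam * lam_um
        status = "ok" if drawn_um >= required_um else "VIOLATION"
        print(f"  {rule}: drawn {drawn_um:.1f} um >= {required_um:.1f} um? {status}")
```

The same layout passes at every node, which is the point: designs became portable across shrinking fabrication processes instead of being redrawn for each one.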


She is less known for her work on dynamic instruction scheduling (DIS). DIS lets a computer program operate out of order, so that later parts of code that do not depend on results of earlier parts can start running instead of letting the whole program stall until certain operations finish. This lets programs run faster and also be more efficient with processor and memory resources. Conway was less known for this work for years because she presented as a man when she began work at IBM. When Conway began her public transition to a woman in 1968, she was fired because the transition was seen as potentially “disruptive” to the work environment. After leaving IBM and completing her transition, Conway lived in “stealth”, which prevented her from publicly taking credit for her work there until the 2000s, when she decided to reach out to someone studying the company’s work on “superscalar” computers in the 60s.
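As a rough illustration of the concept (a toy data-flow model written for this post, not IBM’s or Conway’s actual design): an instruction can issue as soon as the values it reads are available, rather than waiting for everything written earlier in the program.

```python
# Each instruction: (destination register, source registers, latency in cycles).
program = [
    ("r1", [],     3),  # slow load into r1
    ("r2", ["r1"], 1),  # depends on r1, so it must wait
    ("r3", [],     1),  # independent: can start immediately
    ("r4", ["r3"], 1),  # depends only on r3
]

ready_at = {}  # cycle at which each register's value becomes available

for dest, sources, latency in program:
    start = max((ready_at[s] for s in sources), default=0)
    ready_at[dest] = start + latency
    print(f"{dest}: issues at cycle {start}, ready at cycle {ready_at[dest]}")
```

Here r3 and r4 finish at cycles 1 and 2, long before r2 finishes at cycle 4, even though they come later in program order: the independent work doesn’t stall behind the slow load.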

Since coming out, Dr. Conway has been an advocate for trans rights, in science and in society. As a scientist herself, Dr. Conway is very interested in how trans people and the development of gender identity are represented in research. In 2007, she co-authored a paper showing that mental health experts seemed to be dramatically underestimating the number of trans people in the US by relying on studies of transition surgeries alone. In 2013 and 2014, Conway worked to make the IEEE’s Code of Ethics inclusive of gender identity and expression.

A good short biography of Dr. Conway can be found here. Or read her writings on her website.

Weirdly Specific Questions I Want Answers to in Meta-science, part 1

Using “meta-science” as a somewhat expansive term for history, philosophy, and sociology of science. And using my blog as a place to write about something besides the physical chemistry of carbon nanomaterials in various liquids.

  • To what extent is sloppy/misleading terminology an attempt to cash in on buzzwords? Clearly, we know that motive exists – there aren’t two major papers trying to narrow down precise definitions of graphene-related terms for nothing. But as the papers also suggest, at what point is it a legitimate debate in the community about setting a definition? “Graphene” was a term that described a useful theoretical construct for decades before anyone ever thought* someone could make a real sheet of it, so maybe it isn’t unreasonable that people started using it to describe a variety of physical things related to the original idea.
    • This contains a sort of follow-up: what properties do people use in clarifying these definitions, and how much does that vary by background? Personally, the material I work with is way closer to the ideal of “graphene” than what lots of people working with more extensively chemically modified graphene derivatives use, and I’m fine applying the term to almost anything that’s nearly all sp2 carbon with about 10 layers or fewer. But would a physicist who cares more about the electronic properties, which vary a lot with the number of layers even in that lower limit, consider that maddening?
  • Nanoscience is very interdisciplinary/transdisciplinary, but individual researchers can be quite grounded in just one field. How much work is being done where researchers are missing basic knowledge of another field their work is now straddling?
    • For instance, when reading up on polymer nanocomposites, I’ve seen lots of people with extensive polymer science backgrounds note that many papers don’t refer to basic aspects of polymer physics. My hunch is that a lot of this comes from the fact that many people in this field started out working on the nanoparticles they want to incorporate into the composites and then moved into the composites themselves. They may have backgrounds in fields like solid-state physics, electrical engineering, or (inorganic/metallic/ceramic) materials science, where they would have been less likely to deal with polymer theory.
    • Similarly, one paper I read noted that a lot of talk about solutions of nanoparticles would probably be more precise if the discussion were framed in the terminology of colloids and dispersions.
[Image: book cover.]

Oh my gosh, I made fun of the subtitle for like two years, but it’s true

  • Is the ontological status of defects in nanoscience distinct from their treatment in bulk studies of materials? This is a bit related to the first question in that some definitions would preclude the existence of some defects in the referent material/structure.
    • On the other hand, does this stricter treatment make more sense in the few-atom limit of many nanomaterials? Chemists can literally specify the type and location of every atom in successful products of well-studied cluster reactions, though these are pushing the term “nano” from the other side (they may be too small).
    • Is this a reflection of applications of defects at the different scales? (More philosophically worded: are defects treated differently because of their teleological nature?) At the bulk level, we engineer the nature of defects to help develop the properties we want. At the nanoscale, some structures can basically be ruined for certain applications by the mislocation of a single atom. Is this also a reflection of the current practical need to scale up our ability to make nanomaterials? E.g. as more realistic approaches to large-scale nanotech fabrication are developed, will the practical treatment of defects in nanomaterials converge to how we treat defects in the bulk?

*Okay, more like anyone cared a lot about it, since there are papers going back to the 1960s where researchers describe what appear to be atomic monolayers of graphite.

Reclaiming Science as a Liberal Art

What do you think of when someone talks about the liberal arts? Many of you probably think of subjects like English and literature, history, classics, and philosophy. Those are all a good start for a liberal education, but they are only the fields in the humanities. Perhaps you also think of the social sciences, which help you understand the institutions and actors in our culture: fields like psychology, sociology, or economics. But what about subjects like physics, biology, chemistry, or astronomy? Would you ever think of them as belonging to the liberal arts, or would you cordon them off into the STEM fields? I would argue that excluding the sciences from the liberal arts is both historically wrong and harmful to society.

First, let’s look at the original conception of the liberal arts. Your study would begin with the trivium, the three subjects of grammar, logic, and rhetoric. The trivium has been described as a progression of study into argument. Grammar is concerned with how things are symbolized. Logic is concerned with how things are understood. Rhetoric is concerned with how things are effectively communicated, because what good is it to understand things if you cannot properly share your understanding with other learned people? With its focus on language, the trivium does fit the common stereotype of the liberal arts as a humanistic writing education.

But it is important to understand that the trivium was considered only the beginning of a liberal arts education. It was followed by the supposedly more “serious” quadrivium of arithmetic, geometry, music, and astronomy. The quadrivium is focused on number and can also be viewed as a progression. Arithmetic teaches you about pure numbers. Geometry looks at number to describe space. Music, as it was taught in the quadrivium, focused on the ratios that produce notes and the description of notes in time. Astronomy comes last, as it builds on this knowledge to understand the mathematical patterns in space and time of bodies in the heavens. Only after completing the quadrivium, when one would have a knowledge of both language and numbers, would a student move on to philosophy or theology, the “queen of the liberal arts”.

[Image: illustration of the seven liberal arts.]

The seven liberal arts surrounding philosophy.

Although this progression might seem strange to some, it makes a lot of sense when you consider that science developed out of “natural philosophy”. Understanding what data and observations mean, whether they come from a normal experiment or from “big data”, is a philosophical activity. As my professors say, running an experiment without an understanding of what I measured makes me a technician, not a scientist. Or consider alchemists, who included many great experimentalists who developed some important chemical insights, but who are typically excluded from our conception of science because they worked with different philosophical assumptions. The findings of modern science also tie into major questions that define philosophy. What does it say about our place in the universe if there are 10 billion planets like Earth in our galaxy, or when we are connected to all other living things on Earth through chemistry and evolution?

We get the term liberal arts from Latin, artes liberales, the arts or skills that are befitting of a free person. The children of the privileged would pursue those fields. This was in contrast to the mechanical arts – fields like clothesmaking, agriculture, architecture, martial arts, trade, cooking, and metalworking. The mechanical arts were a decent way for someone without status to make a living, but still considered servile and unbecoming of a free (read “noble”) person. This distinction breaks down in modern life because we are no longer that elitist in our approach to liberal education. We think everyone should be “free”, not just an established elite.

More importantly, in a liberal democracy, we think everyone should have some say in how they are governed. Many major issues in modern society turn on scientific understanding and knowledge. To talk about vaccines, you need some understanding of the immune system. The discussion over chemicals is very different when you know that we are made up of chemicals. It is hard to understand what is at stake in climate change without a knowledge of how Earth’s various geological and environmental systems work, and it is hard to evaluate solutions if you don’t know where energy comes from. Or how can we talk about surveillance without understanding how information is obtained and distributed? The Founding Fathers said they had to study politics and war to win freedom for their new nation. As part of a liberal education, Americans today need to learn science in order to keep theirs.

(Note: This post is based on a speech I gave as part of a contest at UVA. It reflects a view I think is often overlooked in education discussions, so I wanted to adapt it into a blog post.

As another aside, it’s incredibly interesting that people now tend to unambiguously think of the social sciences as part of the liberal arts while wavering more on the natural sciences, since the idea of a “social” science wasn’t really developed until well after the conception of the liberal arts.)

Thoughts on Basic Science and Innovation

Recently, science writer and House of Lords member Matt Ridley wrote an essay in The Wall Street Journal about the myth of basic science leading to technological development. Many people have criticized it, and it even seems like Ridley has walked back some claims. One engaging take can be found here, which includes a great quote that I think helps summarize a lot of the reaction:

I think one reason I had trouble initially parsing this article was that it’s about two things at once. Beyond the topic of technology driving itself, though, Ridley has some controversial things to say about the sources of funding for technological progress, much of it quoted from Terence Kealy, whose book The Economic Laws of Scientific Research has come up here before

But it also seems like Ridley has a weird conception of the two major examples he cites as overturning the “myth”: steam engines and the structure of DNA. The issue with steam engines is that we mainly associate them with James Watt, whom you memorialize every time you fret about how many watts all your devices are consuming. Steam engines actually preceded Watt; the reason we associate them with him is that he greatly improved their efficiency thanks to his understanding of latent heat, the energy that goes into changing a substance from one phase to another. (We sort of discussed this before. The graph below helps summarize.) Watt understood latent heat because his colleague and friend Joseph Black, a chemist at the University of Glasgow, discovered it.

[Figure: heating curve of water, temperature versus heat added.]

The latent heat is the heat added to go between phases. In the figure, it is represented by the horizontal plateau between the ice segment and the heating of liquid water, and by the other plateau between the heating of liquid water and the heating of water vapor.
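A quick sketch of the arithmetic (using standard textbook values for water) shows why latent heat mattered so much to Watt: vaporizing water costs several times more energy than heating it to boiling, so an engine that condensed steam in the working cylinder threw away an enormous amount of heat every cycle.

```python
c_water = 4186      # J/(kg*K), specific heat of liquid water
L_vapor = 2.26e6    # J/kg, latent heat of vaporization of water

# Energy for 1 kg of water: warm it from 20 C to 100 C, then boil it off.
heat_to_boiling = c_water * (100 - 20)
heat_to_vaporize = L_vapor

print(f"heating 20->100 C: {heat_to_boiling / 1e3:.0f} kJ")
print(f"boiling at 100 C:  {heat_to_vaporize / 1e3:.0f} kJ")
print(f"ratio: {heat_to_vaporize / heat_to_boiling:.1f}x")
```

Boiling takes nearly seven times the energy of the heating itself, which is why Watt’s separate condenser, keeping the cylinder hot while the steam condensed elsewhere, improved efficiency so dramatically.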

I don’t know whether X-ray crystallography was ever used for industrial purposes in the textile industry, but it has been used pretty consistently in academia since the basic principles were discovered a century ago. The 1915 Nobel Prize in Physics was literally for the development of X-ray crystallography theory. A crystal structure of a biological molecule was determined by X-ray studies at least as early as 1923, if not earlier. The idea that DNA crystallography only took off as a popular technique because of spillover from industry is incredibly inaccurate.

Ridley also seems to rest on a basic assumption: that government has crowded out the private sector as a source of basic research over the past century. It sounds reasonable, and it seems testable as a hypothesis. As a percentage of GDP (which seems like a semi-reasonable metric for concerns about crowding out), federal spending on research and development has generally been on the decline since the 70s, and is now about a third less than its relatively stable levels that decade. If private R&D had been crowded out, a competitor dropping by that much seems like a decent opening for some resurgence, especially since one of the most cited examples of private research, Bell Labs, was still going strong all this time. But instead, Bell cut most of its basic research programs just a few years ago.

[Chart: federal research spending as a percentage of GDP, fiscal years 1976 to 2016, showing a slight decrease from 1976 to 1992, a large drop in the 90s, a recovery in 2004, and a drop since 2010.]

Federal spending on R&D as a percentage of GDP over time

To be fair, more private philanthropists now seem to be funding academic research. The key word, though, is philanthropic, not commercial, the kind of funding Ridley refers to throughout the essay. Also, a significant amount of this new private funding comes as prizes, and you can only get a prize after you have already done the work.

There is one major thing to take from Ridley’s essay, though I think most scientists would admit it too: it’s somewhat ridiculous to try to lay out a clear path from a new fundamental result to a practical application, and if you hear a researcher claim to have one, keep your BS filter high. As The New Yorker has discussed, even results that seem obviously practical have a hard time clearing feasibility hurdles. (Also, maybe it’s just a result of small reference pools, but a lot of researchers I read seem concerned that research now requires some clear “mother of technology” justification.) Similarly, practical developments may not always be obvious. Neil deGrasse Tyson once pointed out that if you spoke to a physicist in the late 1800s about the best way to quickly heat something, they would probably not describe anything resembling a microwave.

[Comic: “researcher translations” of common timeframe estimates for when research should result in a commercially available product. “The fourth quarter of next year”: the project will be canceled in six months. “Five years”: I’ve solved the interesting research problems; the rest is just business, which is easy, right? “Ten years”: we haven’t finished inventing it yet, but when we do, it’ll be awesome. “25+ years”: it has not been conclusively proven impossible. “We’re not really looking at market applications right now”: I like being the only one with a hovercar.]

Edit to add: Also, I almost immediately regret using “innovation” in the title, because I barely address it in the post, and there’s probably a great discussion to be had about that word choice by Ridley. Apple, almost famously, funds virtually no basic research internally or externally, which I often grumble about. However, I would not hesitate to call Apple an “innovative” company. There are a lot of design choices that can improve products while being pretty divorced from the physical breakthroughs that made them possible. (Though it is worth pointing out that human factors and ergonomics are very active fields of study in our modern, device-filled lives.)