What to do if you’re inside a scientific revolution

A LesserWrong user (LesserWrong-er?) has a thought-provoking post on The Copernican Revolution from the Inside, with two questions in mind: (1) if you lived in 17th century Europe, would you have accepted heliocentrism on an epistemic level, and (2) how do you become the kind of person who would say yes to question 1? It’s interesting in the way the often-asked question of “What would you be doing during the Civil Rights Movement/Holocaust/Other Time of Great Societal Change?” is, in that most people realize they probably would not have been a great crusader against the norms of another time. But as someone in Charlottesville in the year 2017, asking what you’d be doing in scientific arguments feels less terrifyingly relevant than asking people how they’d deal with Nazism, so we’ll just focus on that.

Cover of Kuhn's The Structure of Scientific Revolutions showing a whirlpool behind the title text.

Look, you’re probably in at least one.

For once on the internet, I recommend reading the comments: I think they help flesh out the argument a lot more and correct some strawmanning of the heliocentrists by the OP. Interestingly, the OP actually writes:

In fact, one of my key motivations for writing it — and a point where I strongly disagree with people like Kuhn and Feyerabend — is that I think heliocentrism was more plausible during that time. It’s not that Copernicus, Kepler, Descartes and Galileo were lucky enough to be overconfident in the right direction, and really should just have remained undecided. Rather, I think they did something very right (and very Bayesian). And I want to know what that was.

and seems surprised that commenters think he leaned too pro-geocentrist. I recommend the following comments if you want the detailed corrections, but I’ll also summarize the main points so you don’t have to read them all:

  • Thomas Kehrenberg’s comment, as it corrects factual errors in the OP regarding sunspots and Jupiter’s moons
  • MakerOfErrors, for suggesting that the real methodological point is that both the geo- and heliocentric systems should have been treated with more uncertainty around the time of Galileo, until more evidence came in
  • Douglas_Knight, for pointing out a factual error regarding Venus and for making an argument about the Coriolis effect that I’m sympathetic to but evidently wrong on, which I’ll get to below. I do think it’s important to acknowledge that Galilean relativity is a thing, though, and that reduces the potential error a lot.
  • Ilverin, for sort of continuing MakerOfErrors’ point and suggesting that the true rationalist lesson is in how you deal with competing theories that both carry high uncertainty

It’s also worth pointing out that even the Tychonic system didn’t resolve Galileo’s argument for heliocentrism based on sunspots. (A modification to Tycho’s system by one of his students that allows for the rotation of the Earth supposedly resolves the sunspot issue, but I haven’t heard many people mention it yet.)

Also, knowing that we didn’t have a good understanding of the Coriolis effect until, well, Coriolis in the 1800s (though there are some mathematical descriptions from the 1700s), I was curious to what extent people made this objection during the time of Galileo. It turns out Galileo himself predicted it as a consequence of a rotating Earth. Giovanni Riccioli, a Jesuit scientist, seems to have made the most rigorous qualitative argument against heliocentrism: cannon fire and falling objects are not noticeably deflected from straight-line paths. I want to point out that Riccioli does virtually no math in his argument on the Coriolis effect (unless there’s a lot in the original text that I don’t see in the summary of his Almagestum Novum). This isn’t uncommon pre-Newton, and no one would have the exact tools to deal with Coriolis forces for almost 200 years. But one could reasonably make a scaling argument about whether or not the Coriolis effect matters based only on the length scale you’re measuring and the rotation speed of the Earth (which would literally just be taking the inverse of a day) and see that the heliocentrists aren’t insane.
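Here’s a rough version of that scaling argument in Python — my own back-of-the-envelope sketch rather than anything Galileo or Riccioli wrote down. The cannonball speed, flight time, and drop height are made-up but plausible numbers, and I’m ignoring latitude factors entirely.

```python
import math

# Order-of-magnitude Coriolis deflections available to a 17th-century observer who
# only knows the length of a day and rough projectile numbers (illustrative values).
OMEGA = 2 * math.pi / 86400   # Earth's rotation rate in rad/s: one turn per day
g = 9.81                      # gravitational acceleration, m/s^2

# Cannon shot: sideways drift ~ (1/2) * (2 * OMEGA * v) * t^2, dropping latitude factors.
v, t_flight = 450.0, 4.0      # muzzle speed (m/s) and flight time (s), assumed
drift = OMEGA * v * t_flight**2
print(f"cannon shot of ~{v * t_flight / 1000:.1f} km drifts by ~{drift:.2f} m")

# Dropped object (the tower-style test): eastward deflection ~ (1/3) * g * OMEGA * t^3.
h = 100.0                     # drop height in m, assumed
t_drop = math.sqrt(2 * h / g)
east = (1 / 3) * g * OMEGA * t_drop**3
print(f"drop from {h:.0f} m deflects east by ~{east * 100:.1f} cm")
```

Both numbers come out tiny compared to the aiming and measurement errors of the time, which is roughly the point: the absence of an obvious deflection wasn’t strong evidence against a rotating Earth.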

It’s not a sexy answer to the second question, but I think “patience for new data” goes a long way towards making you the kind of person who can say yes to the first question. You hear the term “Copernican revolution” thrown around like a very specific event, and I think it’s pretty easy to forget the relative timeframes of the major players unless this is your bread and butter. Copernicus’ De revolutionibus came out in 1543. Newton’s Principia came out in 1687; it gave a physical explanation for Kepler’s empirical laws, which led to their wider acceptance, and so it can be considered a decent (if oversimplified) endpoint for the debate. Galileo began to get vocal about heliocentrism in the early 1610s. The Almagestum Novum came out in 1651. For over a century, people on both sides were gathering and interpreting new data and refining their theories.

I also like this article for a related point, albeit one a bit removed from the author’s thesis. In considering the question of how we should accept new theories, we see the historical development of one theory overtaking another to become “scientific consensus”. Earlier this year, rationalist Scott Alexander, in a post on Learning to Love Scientific Consensus, concisely summarized why the typical “consensus is meaningless” trope of just listing times consensus has turned out to be wrong isn’t particularly useful in understanding science:

I knew some criticisms of a scientific paradigm. They seemed right. I concluded that scientists weren’t very smart and maybe I was smarter. I should have concluded that some cutting-edge scientists were making good criticisms of an old paradigm. I can still flatter myself by saying that it’s no small achievement to recognize a new paradigm early and bet on the winning horse. But the pattern I was seeing was part of the process of science, not a condemnation of it.

Most people understand this intuitively about past paradigm shifts. When a creationist says that we can’t trust science because it used to believe in phlogiston and now it believes in combustion, we correctly respond that this is exactly why we can trust science. But this lesson doesn’t always generalize when you’re in the middle of a paradigm shift right now and having trouble seeing the other side.

The notion of “trusting” scientific consensus, I think, gets at a larger point. There are way more non-scientists than scientists, and most people aren’t in a position to rigorously evaluate contemporary analogues of the Copernican revolution, so you often have to trust consensus at least a little. Scientists also aren’t experts in every field, so even they can’t evaluate every dispute and will rely on the work of their colleagues in other departments. And given how many fields of science there are, there’s probably always at least one scientific revolution going on in your lifetime, if not several. Fortunately they don’t all take 150 years to resolve. (Though major cosmological ones can take a long time when we need new instruments and new data that can take a long time to acquire.)

But if you want to be the kind of person who can evaluate revolutions (or maybe attempts at revolutions), and I hope you are, then here’s a bit more advice for the second question à la Kuhn: try to understand the structure of competing theories. This doesn’t mean a detailed understanding of every equation or concept, but realize that some things are much more important to how a theory functions than others, and some predictions are relatively minor (see point 4 below for an application to something that I think pretty clearly doesn’t amount to a revolution today). To pure geocentrists, the phases of Venus were theory-breaking, because geocentrism has no mechanism for a full range of phases for only some planets, and so they had to move to Tycho’s model. To both groups writ large, it didn’t break the theories if orbits weren’t perfectly circular (partly because there wasn’t really a force driving the motion in either theory until Kepler, and he wasn’t sure what actually provided it, so we see how, several scientific revolutions later, it gets hard to evaluate their theories entirely within the language of our current concepts), though people held on to circles because of other attachments.

Which leads to a second suggestion: be open-minded about theories and hypotheses, while still critical based on their structure. (And I think it’s pretty reasonable to argue that the Catholic Church was not open-minded in that sense, as De revolutionibus was restricted and Galileo published his later works in Protestant jurisdictions.) In revolutions in progress, being open-minded means allowing for reasonable revision of competing theories (per the structure point) to accommodate new data and, maybe more importantly, generating new predictions from these theories to guide further experiment and observation, so we can determine what data needs to be gathered to finally declare a winning horse.

***
Stray thoughts

  1. Let me explain how I corrected my view on the Coriolis effect. We mainly think of it as applying to motion parallel to the surface of the Earth, but on further thought, I realized it also applies to vertical motion (something farther from the center of the Earth is moving at a faster rotational velocity than something closer, even though they have the same angular velocity). Christopher Graney, a physics and astronomy professor at Jefferson Community and Technical College who I will now probably academically stalk to keep in mind for jobs back home, has a good summary of Riccioli’s arguments from the Almagestum Novum in an article on arXiv and also what looks like a good book that I’m adding to my history/philosophy of science wishlist on Amazon. The Coriolis effect arguments are Anti-Copernican Arguments III-VI, X-XXII, and XXVII-XXXIII. Riccioli also addresses the sunspots in Pro-Copernican Argument XLIII, though the argument is basically philosophical, concerning what kind of motion is more sensible. It’s worth pointing out that in the Almagestum, Riccioli collects almost all the arguments used on both sides in the mid-17th century, and he even points out which ones are wrong on each side. This has led some historians to call it what Galileo’s Dialogue should have been, as Galileo pretty clearly favored heliocentrism in the Dialogue while Riccioli remains relatively neutral in the Almagestum.
  2. I’m concerned someone might play the annoying pedant by saying a) “But we know the sun isn’t the center of the Universe!” or b) “But relativity says you could think of the Earth as the center of the Universe!”. To a), well yeah, but it’s really hard to get to that point without thinking of us living in a solar system and thinking of other stars as like our sun. To b), look, you can totally shift the frames, but you’re basically changing the game at that point since no frame is special. Also, separate from that, if you’re really cranking out the general relativity equations, I still think you see more space-time deformation from the sun (unless something very weird happens in the non-inertial frame transform) so it still “dominates” the solar system, not the Earth.
  3. For a good example of the “consensus is dumb” listing of consensuses of the past, look at Michael Crichton’s rant from “Aliens Cause Global Warming”, his 2003 Michelin Lecture at Caltech, beginning around “In science consensus is irrelevant. What is relevant is reproducible results.” Crichton gets close to acknowledging that consensus does in fact seem to accommodate evidence in the plate tectonics example, but he writes it off. And to get to Crichton’s motivating point about climate science, it’s not like climate science always assumed man had a significant impact. The evolution of global warming theory goes back to Arrhenius, who, after studying CO2’s infrared spectrum, hypothesized around 1900 that the release of CO2 from coal burning might have a warming effect, and it wasn’t until the 60s and 70s that people thought it might outweigh other human contributions (hence the oft-misunderstood “global cooling” stories about reports from the mid-20th century).
  4. Or to sum up something that a certain class of people would love to turn into a scientific revolution but isn’t, consider anthropogenic climate change. Honestly, specific local temperature predictions being wrong generally isn’t a big deal unless, say, most of them can’t be explained by other co-occurring phenomena (e.g. the oceans seem to have absorbed most of the heat instead of it leading to rising surface temperatures), since the central part of the theory is that emission of CO2 and certain other human-produced gases has a pretty big effect through radiative forcing, which traps more heat in. Show that radiative forcing is wrong or significantly different from the current values, and that’s a really big deal. Or come up with evidence of something that might counter radiative forcing’s effect on temperature at almost the same scale; while the concern would go away, I think it’s worth pointing out it wouldn’t actually mean research on greenhouse gases was wrong. I would also argue that you do see open-mindedness in climate science, since people still pursue the “iris hypothesis” and there are almost always studies on solar variability if you search NASA and NSF grants.
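To put a rough number on that “central part”, here’s a minimal sketch using the standard simplified expression for CO2 forcing, ΔF ≈ 5.35 ln(C/C0) W/m², paired with an assumed mid-range sensitivity parameter just to convert forcing into a temperature scale; this is my own illustration, not output from any climate model.

```python
import math

def co2_forcing(c_ppm, c_ref_ppm=280.0):
    """Approximate radiative forcing from CO2 relative to a preindustrial baseline (W/m^2)."""
    return 5.35 * math.log(c_ppm / c_ref_ppm)

SENSITIVITY = 0.8   # K per (W/m^2); an assumed illustrative value, not a measured constant

for ppm in (280, 400, 560):
    f = co2_forcing(ppm)
    print(f"{ppm} ppm CO2: forcing ~{f:.1f} W/m^2, rough equilibrium warming ~{f * SENSITIVITY:.1f} K")
```

The point of the sketch is just that the forcing term is where the theory lives; quibbles about individual regional forecasts don’t touch it.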

Is Blogging Dead? There may be hope.

Having only just discovered that ScienceBlogs closed at the end of October, seeing this in my WordPress reader struck a chord. Although at the same time, one of my friends at UVA just started a blog this spring and got over 10,000 views by September, so clearly there’s still a place for blogs. I do wonder to what extent the fact that most people have to manage multiple social media accounts in addition to their blogs makes it hard to form dedicated networks.

Sacred Twilight


Last night I had a long rant about blogging being dead, inasmuch as Google says so, and it does seem much harder now to get readers and followers for a new/small blog than it was several years ago. Based, however, on the kind and supportive comments of another blogger, Aarron Mondelo (check out his beautiful poems at My Worlds in Worlds), I have decided not to throw in the towel just yet.

This morning I updated my About Page to explain my communal and supportive approach to blogging for new/small blogs, but I wanted to include it in a post too. So here’s a copy paste:

I had a blog several years ago when it was much easier to make blogging-friends (even if your blogs had nothing in common!). A small handful of troll-bullies had not taken over the Internet; and small/new bloggers would re-blog each other’s posts…


What is rheology?

Inspired by NaBloPoMo and THE CENTRIFUGE NOT WORKING RIGHT THE FIRST TIME SO I HAVE TO STAY IN LAB FOR THREE MORE HOURS THAN I PLANNED (this was more relevant when I tried writing this a few weeks ago), I’ll be trying to post more often this month. Though heaven knows I’m not even going to pretend I’ll get a post a day when I have a conference (!) to prepare for.

I figure my first post could be a better attempt at describing a major part of my research now – rheology and rheometers. The Greek root “rheo” means “flow” (it’s somewhat uncommon, unless you’re a doctor or med student who sees it pop up all the time in words like gonorrhea and diarrhea), so the simplest definition is that rheology is the study of flow. (And I just learned the Greek Titan Rhea’s name may also come from that root, so oh my God, rheology actually does relate to Rhea Perlman.) But what does that really mean? And if you’ve tripped out on fluid mechanics videos or photos before, maybe you’re wondering “what makes rheology different?”

Photo of Rhea Perlman.

Oh my God, she is relevant to my field of study.

For our purposes, flow can mean any kind of material deformation, and we’re generally working with solids and liquids (or colloidal mixtures involving those states, like foams and gels). Or if you want to get really fancy, you can say we’re working with (soft) condensed matter. Why not gas? We’ll get to that later. So what kinds of flow behavior are there? There’s viscosity, which is what we commonly consider the “thickness” of a flowing liquid. Viscosity describes how a fluid resists relative motion between its component parts under some shearing force, but it doesn’t act to return the fluid to its original state. You can see this in cases where viscosity dominates over the inertia of something moving in the fluid, such as at 1:00 and 2:15 in this video; the shape of the dye drops is essentially pinned at each point by how much the inner cylinder moves, and you don’t see the fluid move back until the narrator manually reverses the cylinder.

The other part of flow is elasticity. It might sound weird to think of a fluid as being elastic. While you really don’t see elasticity in pure fluids (unless maybe the forcing is ridiculously fast), you do see it a lot in mixtures. Oobleck, the ever-popular mixture of cornstarch and water, becomes elastic as part of its shear-thickening behavior. (Which, it turns out, we still don’t have a great physical understanding of.)

 

You can think of viscosity as the “liquid-like” part of a substance’s behavior and elasticity as the “solid-like” part. Lots of mixtures (and even some pure substances) show both parts as “viscoelastic” materials. And this helps explain the confusion when you’re younger (or at least younger-me’s questions) over whether things like Jell-O, Oobleck, or raw dough are “really” solid or liquid. The answer is sort of “both”. More specifically, we can look at the “dynamic modulus” G at different rates (frequencies) of applied stress. G has two components – G’ is the “storage modulus”, which is the elastic/solid part, and G” is the “loss modulus”, representing the viscous part.


The dynamic moduli of Silly Putty at different rates of stress.

Whichever modulus is higher is what mostly describes a material. So in the flow curve above, the Silly Putty is more like a liquid at low rates/frequencies of stress (which is why it spreads out when left on its own), but more like a solid at high rates (which is why it bounces if you throw it fast enough). What’s really interesting is that the absolute size of either component doesn’t really matter; it’s just whichever one is higher. So even flimsy shaving cream behaves like a solid at rest (seriously, it can support hair or other light objects without settling) while house paint is a liquid, because even though the paint tends to have higher moduli overall, the shaving cream still has a higher storage modulus than its own loss modulus.
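If you want a minimal mathematical picture of that crossover, the single-mode Maxwell model (the simplest viscoelastic model) reproduces it qualitatively. This is just a sketch with made-up numbers, not a fit to real Silly Putty data: G” wins at low frequencies (liquid-like) and G’ wins at high frequencies (solid-like), with the crossover at ωτ = 1.

```python
# Single-mode Maxwell model: G'(w)  = G0*(w*tau)^2 / (1 + (w*tau)^2)
#                            G''(w) = G0*(w*tau)   / (1 + (w*tau)^2)
G0 = 1e5    # plateau modulus in Pa (assumed, just order-of-magnitude for a stiff putty)
tau = 1.0   # relaxation time in s (assumed)

def maxwell_moduli(omega):
    x = omega * tau
    g_storage = G0 * x**2 / (1 + x**2)   # elastic, "solid-like" part
    g_loss = G0 * x / (1 + x**2)         # viscous, "liquid-like" part
    return g_storage, g_loss

for omega in (0.01, 0.1, 1.0, 10.0, 100.0):   # oscillation frequency in rad/s
    gp, gpp = maxwell_moduli(omega)
    behaves = "solid-like" if gp > gpp else "liquid-like"
    print(f"omega = {omega:6.2f} rad/s: G' = {gp:8.1f} Pa, G'' = {gpp:8.1f} Pa  ({behaves})")
```

Real materials like Silly Putty have a whole spectrum of relaxation times instead of just one, but the single-mode version is enough to show why “solid or liquid?” depends on how fast you poke it.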

I want to publish this eventually, so I’ll get to why we do rheology and what makes it distinct in another post.

Thinking of the Urban as Natural


“Name everything you can think of that is alive.” This was the prompt given to three different groups of children: the Wichi, an indigenous tribe in the Gran Chaco forest, and rural and urban Spanish-speakers in Argentina. It might not surprise you that the indigenous children, who directly interact with wildlife, named the most plants and animals that lived nearby and were native to the region, and they often gave very specific names. The rural children named a mixture of native Argentinian wildlife and animals associated with farming. But the urban children were very different from the others. They named only a few animals found in Argentina. Instead, they named significantly more “exotic” animals from forests and jungles in other countries and continents. This result has been replicated in multiple studies on child development. But we shouldn’t be so hard on the urban children.

This reflects a somewhat uncomfortable truth about how we learn. If you live in a city, you mainly learn about nature indirectly, through pop culture and formal science education. In both contexts, it is much easier to find information about “exotic” animals like lions or tigers instead of most of the organisms that make a home in the city. I think this is a symptom of a deeper cultural notion: that somehow cities are “fake” environments divorced from nature. I will argue that this distinction between the urban and natural is not only wrong, but also harmful to our society.

First, we should consider that this notion only makes sense in a relatively recent slice of history. Cities are young in a geological and even anthropological sense, but for as long as we have been making them as a species, they have been shaped by nature. We talk about “cradles of civilization” because they were places where the natural environment was well-suited to supporting early, complex social systems and their infrastructure. To use the literal Ur-example, consider the Fertile Crescent, the region around the Tigris and Euphrates rivers. It provided lush soil at several elevations, which supported the growth of a variety of crops and helped with irrigation. And many modern cities can still be traced back to earlier environmental decisions. I am from Louisville, a city by a stretch of the Ohio River that boats could not pass until the building of locks in the 1830s. The city was founded as a natural stopping point for people before they continued on to the Mississippi River.

Second, it seems incredibly alienating to argue that most of humanity is “unnatural”. Since 2008, the majority of humans have lived in cities. By 2050, around 70% of the global population is projected to live in urban areas. We should not discourage the growth of cities or devalue them, when their more efficient use of resources and infrastructure is necessary to keep projected population growth sustainable. The smart development of cities recognizes that they can help preserve other environments.

Finally, this urban-natural distinction distorts our understanding of the environmental and ecological processes that affect cities, and even our broader understanding of the environment. A recent study showed that insects help reduce food waste just as much as rodents in New York City – for every memeable “pizza rat” there’s an army of “pizza ants” getting rid of rotting food. Despite their importance, the American Museum of Natural History’s renowned insect collection in New York has almost no species native to the city. And since many city-dwellers, like the Argentinian children, only know about exotic species, conservation efforts suffer. Well-known “charismatic” species like pandas or rhinos have support all over the world, but few people are aware of endangered species in urban areas. And sometimes scientists don’t even know. For instance, relating to the above, an estimated 40% of insect species are threatened, but we don’t know whether that number is different in cities.

Instead of rejecting the last few thousand years of our society’s development, we should (re)embrace cities as part of the broader natural world. Recognizing that cities can have their own rich ecological and environmental interactions can help us build urban spaces that are better for us humans, other city-dwelling creatures, and the rest of the world.

(Note: This post is based on a speech I gave as part of a contest at UVA, the Moomaw Oratorical Contest. And this year I won!)

I Have a Hard Time Summing Up My Science and Politics Beliefs Into a Slogan

From a half-joking, half-serious post of my own on Facebook:

“SCIENCE IS POLITICAL BECAUSE THERE’S A LOT OF INFLUENCE BY POLITICAL AND POWERFUL CULTURAL INSTITUTIONS, BUT NOT PARTISAN. AND ALSO BECAUSE SCIENTIFIC RESULTS AFFECT MORE OF OUR LIVES. BUT LIKE MAN, WE REALLY SHOULDN’T DO THE WHOLE TECHNOCRACY THING. BUT LIKE EVIDENCE SHOULD MATTER. BUT ALSO VALUES MATTER WHEN EVALUATING STUFF. IT’S COMPLICATED. HAS ANYONE READ LATOUR? OR FEYERABEND? CAN SOMEONE EXPLAIN FEYERABEND TO ME? DOES ANYONE WANT TO GET DRINKS AND TALK AFTER THIS?”

Comic: “The End Is Not for a While”.

Evidently, I am the alt-text from this comic.

“HERE ARE SOME GOOD ARTICLES ABOUT PHILOSOPHY AND SOCIOLOGY OF SCIENCE” (I didn’t actually give a list, since I knew I would never really be able to put that on a poster, but some suggested readings if you’re interested: the Decolonizing Science Reading List curated by astrophysicist Chanda Prescod-Weinstein, a recent article from The Atlantic about the March for Science, a perspective on Doing Science While Black, the history of genes as an example of the evolution of scientific ideas, honestly there’s a lot here, and this is just stuff I shared on my Facebook page over the last few months.)
“LIKE HOLY SHIT Y’ALL EUGENICS HAPPENED”
“LIKE, MAN, WE STERILIZED A LOT OF PEOPLE. ALSO, EVEN BASIC RESEARCH CAN BE MESSED UP. LIKE TUSKEGEE. OR LITERALLY INJECTING CANCER INTO PEOPLE TO SEE WHAT HAPPENS. OR CRISPR. LIKE, JEEZ, WHAT ARE WE GOING TO DO WITH THAT ONE? SOCIETY HELPS DETERMINE WHAT IS APPROPRIATE.”
“I FEEL LIKE I’M GOING OFF MESSAGE. BUT LIKE WHAT EXACTLY IS THE MESSAGE HERE”
“I DON’T KNOW WHAT THE MESSAGE IS, BUT THESE ARE PROBABLY GOOD TO DO. ESPECIALLY IF THEY INSPIRE CONVERSATIONS LIKE THIS.”
“ALSO, DID YOU KNOW THAT MULTICELLULAR LIFE INDEPENDENTLY EVOLVED AT LEAST 10 TIMES ON EARTH? I’M NOT GOING ANYWHERE WITH THAT, I JUST THINK IT’S NEAT AND WE DON’T TYPICALLY HEAR THAT IN INTRO BIO.”

Quantum Waves are Still Physical, Regardless of Your Thoughts

Adam Frank, founder of NPR’s science and culture blog 13.7, recently published an essay on Aeon about materialism. It’s a bit hard to pin down what he’s trying to say because of the different focuses of its two titles, as well as his own arguments. First, the titles. The title I saw first, which is what is displayed when shared on Facebook, is “Materialism alone cannot explain the riddle of consciousness”. But on Aeon, the title is “Minding matter”, with the sub-title or blurb “The closer you look, the more the materialist position in physics appears to rest on shaky metaphysical ground.” The question of theories of mind is very different from the question of philosophical interpretations of quantum mechanics.

This shows up in the article itself, which I found confusing because Frank ties together several different arguments and conflates various ideas of “realism” and “materialism”. First, his conception of theories of mind is confusing. I’d say the average modern neuroscientist or other scholar of cognition is a materialist, but I’d be hesitant to say the average one is a reductionist who thinks thought depends directly on the particular atoms in your brain. Computational theories of mind tend to be some of the most popular ones, and it’s hard to consider those reductionist. I would concede there may be too much of an experimental focus on reductionism (and that’s what has diffused into pop culture), but the debate over how to move from those experimental techniques to theoretical understanding is happening: see the recent attempt at using neuroscience statistical techniques to understand Donkey Kong.

I also think he’s making a bit of an odd claim on reductionism in the other sciences in this passage:

A century of agnosticism about the true nature of matter hasn’t found its way deeply enough into other fields, where materialism still appears to be the most sensible way of dealing with the world and, most of all, with the mind. Some neuroscientists think that they’re being precise and grounded by holding tightly to materialist credentials. Molecular biologists, geneticists, and many other types of researchers – as well as the nonscientist public – have been similarly drawn to materialism’s seeming finality.

Yes, he technically calls it materialism, but he seems to basically equate it with reductionism by assuming the other sciences are fine with being reducible to physics. But, first, Frank should know better from his own colleagues. The solid-state folks in his department work a lot with “emergentism” and point out that the supposedly more reductionist particle people now borrow concepts from them. And he should definitely know from his collaborators at 13.7 that the concept of reducibility is controversial across the sciences. Heck, even physical chemists take issue with being reducible to physics and will point out that QM models can’t fully reproduce aspects of the periodic table. Per the above, it’s worth pointing out that Jerry Fodor, a philosopher of mind and cognitive scientist who does believe in a computational theory of mind, disputes the idea of reductionism.

xkcd’s “Purity” comic, which ranks the fields by purity.

This is funny because this tends to be controversial, not because it’s widely accepted.

Frank’s view on the nature of matter is also confusing. Here he seems to be suggesting “materialism” can really only refer to particulate theories of matter, e.g. something an instrument could definitely touch (in theory). But modern fundamental physics does accept fields and waves as real entities. “Shut up and calculate” isn’t useful for ontology or epistemology, but his professor’s pithy response actually isn’t that. Quantum field theories would agree that “an electron is that to which we attribute the properties of the electron”, since electrons (and any particles) can actually take on a range of values of mass, charge, spin, etc. as virtual particles (which actually do exist, but only temporarily). The conventional values are what one gets through the process of renormalization in the theory. (I might be misstating that here, since I never actually got to doing QFT myself.) I would say this doesn’t mean electrons aren’t “real” or understood, but it would suggest that quantum fields are ontologically more fundamental than the particles are. If it makes more physical sense for an electron to be a probability wave, that’s bully for probability waves, not a lack of understanding. (Also, aside from experiments showing wave-particle duality, we’re now learning that even biochemistry is dependent on the wave nature of matter.)

I’m also not sure the discussion of wave function collapse does much work here. I don’t get why it would inherently undermine materialism, unless a consciousness interpretation were to win out, and as Frank admits, there’s still not much to support one interpretation over the other. (And even then, again, this could still be solved by a materialist view of consciousness.) He’s also ignoring the development of theories of quantum decoherence to explain wavefunction collapse as quantum systems interact with classical environments, and to my understanding, those are relatively agnostic to interpretation. (Although I think there’s an issue with timescales in quantitative descriptions.)

From there, Frank says we should be open to things beyond “materialism” in describing mind. But like my complaint with the title differences, those arguments don’t really follow from the bulk of the article, which focuses on philosophical issues in quantum mechanics. Also, he seems open to emergentism in the second-to-last paragraph. Actually, here I think Frank missed out on a great discussion. I think there are some great philosophy of science questions to be had at the level of QFT, especially with regards to epistemology, and especially directed to popular audiences. Even as a physics major, my main understanding is that specific aspects of the framework like renormalization are accepted because “the math works”, which is different from how we treat other observables we measure. For instance, the anomalous magnetic moment is a very high precision test of quantum electrodynamics, the quantum field theory of electromagnetism, and our calculation of it relies on renormalization. But the “unreasonable effectiveness of mathematics” can sometimes be wrong, and we might just be lucky in converging to something close. (Though at this point I might be pulling dangerously close to the Duhem-Quine thesis without knowing much of the technical details.) Instead, we got a mediocre crossover between the question of consciousness and interpretations of quantum mechanics, even though Frank tried hard to avoid turning it into “woo”.
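As a small numerical aside on that precision test (my own illustration, not anything from Frank’s essay), even the leading one-loop term in the QED series, Schwinger’s α/(2π), already lands within a fraction of a percent of the measured electron anomaly; the full renormalized, multi-loop calculation agrees to many more digits.

```python
import math

alpha = 1 / 137.035999                # fine-structure constant (approximate)
a_one_loop = alpha / (2 * math.pi)    # Schwinger's leading-order prediction
a_measured = 0.00115965218            # measured electron anomalous moment (approximate)

print(f"alpha/(2*pi)        = {a_one_loop:.8f}")
print(f"measured anomaly    = {a_measured:.8f}")
print(f"relative difference ~ {abs(a_one_loop - a_measured) / a_measured:.2%}")
```

Whether that kind of agreement means the renormalization machinery is “true” or just unreasonably effective is exactly the sort of question I wish the essay had spent more time on.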

Lynn Conway, Enabler of Microchips

Are you using something with a modern microprocessor on International Women’s Day? (If you’re not, but somehow able to see this post, talk to a doctor. Or a psychic.) You should thank Dr. Lynn Conway, professor emerita of electrical engineering and computer science at Michigan and member of the National Academy of Engineering, who is responsible for two major innovations that are ubiquitous in modern computing. She is most famous for the Mead-Conway revolution: she developed the “design rules” used in Very-Large-Scale Integration (VLSI) architecture, the scheme that basically underlies all modern computer chips. Conway’s rules standardized chip design, making the process faster, easier, and more reliable, and, perhaps most significantly for broader society, easy to scale down, which is why we are now surrounded by computers.
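To give a flavor of why that standardization scales so well, here’s a toy sketch of the lambda-rule idea. The specific numbers are made up for illustration and are not Mead and Conway’s actual rule set; the point is just that every geometric constraint is expressed as a multiple of a single process parameter λ, so the same design shrinks automatically when a fab offers a smaller λ.

```python
# Illustrative lambda-based design rules: feature -> size in units of lambda (assumed values)
ILLUSTRATIVE_RULES = {
    "poly_width": 2,
    "metal_width": 3,
    "metal_spacing": 3,
    "contact_size": 2,
}

def realize(rules, lambda_um):
    """Convert lambda-relative rules into physical dimensions for a given process."""
    return {name: mult * lambda_um for name, mult in rules.items()}

for lam in (3.0, 1.5):   # two hypothetical process generations, lambda in microns
    dims = realize(ILLUSTRATIVE_RULES, lam)
    print(f"lambda = {lam} um -> " + ", ".join(f"{k} = {v:.1f} um" for k, v in dims.items()))
```

The same layout, with everything specified in λ, works for both “processes”; that portability is a big part of why the approach spread so quickly.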


She is less known for her work on dynamic instruction scheduling (DIS). DIS lets a computer program execute out of order, so that later parts of code that do not depend on results of earlier parts can start running instead of letting the whole program stall until certain operations finish. This lets programs run faster and also be more efficient with processor and memory resources. Conway went unrecognized for this work for years because she presented as a man when she began work at IBM. When Conway began her public transition to a woman in 1968, she was fired because the transition was seen as potentially “disruptive” to the work environment. After leaving IBM and completing her transition, Conway lived in “stealth”, which prevented her from publicly taking credit for her work there until the 2000s, when she decided to reach out to someone studying the company’s work on “superscalar” computers in the 60s.
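For a feel of what dynamic scheduling buys you, here’s a toy sketch of the core idea (my own illustration, not Conway’s actual IBM design): each cycle, issue any instruction whose inputs are already available, instead of stalling everything behind the first slow instruction.

```python
# Each instruction: (name, destination register, source registers, latency in cycles).
program = [
    ("LOAD", "r1", [],           4),   # slow memory load
    ("ADD",  "r2", ["r1"],       1),   # depends on the load, so it has to wait
    ("MUL",  "r3", ["r4", "r5"], 1),   # independent, so it can start immediately
    ("SUB",  "r6", ["r3"],       1),   # depends only on MUL
]

ready = {"r4", "r5"}      # register values available at the start
in_flight = {}            # name -> (destination, cycles remaining)
pending = list(program)
cycle = 0

while pending or in_flight:
    # Retire anything finishing this cycle, making its result available to others.
    for name in list(in_flight):
        dest, remaining = in_flight[name]
        if remaining <= 1:
            ready.add(dest)
            print(f"cycle {cycle}: {name} completes ({dest} ready)")
            del in_flight[name]
        else:
            in_flight[name] = (dest, remaining - 1)
    # Issue every pending instruction whose sources are ready, regardless of program order.
    for instr in list(pending):
        name, dest, srcs, latency = instr
        if all(s in ready for s in srcs):
            print(f"cycle {cycle}: issue {name}")
            in_flight[name] = (dest, latency)
            pending.remove(instr)
    cycle += 1
```

In this toy trace, MUL and SUB finish while the LOAD is still outstanding; an in-order machine would have sat idle behind the LOAD and its dependent ADD.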

Since coming out, Dr. Conway has been an advocate for trans rights, in science and in society. As a scientist herself, Dr. Conway is very interested in how trans people and the development of gender identity are represented in research. In 2007, she co-authored a paper showing that mental health experts seemed to be dramatically underestimating the number of trans people in the US by relying on studies of transition surgeries alone. In 2013 and 2014, Conway worked to make the IEEE’s Code of Ethics inclusive of gender identity and expression.

A good short biography of Dr. Conway can be found here. Or read her writings on her website.