While picking up some books for my dissertation from the science and engineering library, I stumbled across a history book that sounded interesting: When Physics Became King. I'm enjoying it a lot so far and hope to remember it, so writing about it seems useful. I also think it brings up some interesting ideas to relate to modern debates, so blogging about the book seems even more useful.
Some recaps and thoughts, roughly in the thematic order of the book's first three chapters:
- It's worth pointing out how deeply tied to politics natural philosophy/physics was as it developed into a scientific discipline in the 17th-19th centuries. We tend to think of "science policy" and the interplay between science and politics as a 20th century innovation, but the establishment of government-run or -sponsored scientific societies was a big deal in early modern Europe. During the French Revolution, the Committee of Public Safety suppressed the old Royal Academy, and the later establishment of the Institut National was regarded as an important development for the new republic. Similarly, people's conception of science was considered intrinsically linked to their political and often metaphysical views. (This always amuses me when people insist that science communicators like Neil deGrasse Tyson or Bill Nye should shut up, since the idea of science as something that should influence our worldviews is basically as old as modern science.)
- Similarly, science was considered intrinsically linked to commerce, and the desire was for new devices to better reflect the economy of nature by more efficiently converting energy between its various forms. On this point, I'm also greatly inspired by the work of Dr. Chanda Prescod-Weinstein, a theoretical physicist and historian of science and technology. One area Morus doesn't really get into is that the major impetus for astronomy during this time was improving celestial navigation, so that ships could more efficiently move goods and enslaved persons between Europe and its colonies (Prescod-Weinstein discusses this in the introduction to her Decolonizing Science Reading List, which she perennially updates with new sources and links to other similar projects). This practical use of astronomy is lost on most of us in modern society, and we now focus on spinoff technology when we want to sell space science to the public, but it was very important to establishing astronomy as a science as astrology lost its luster. Dr. Prescod-Weinstein also brings up an interesting theoretical point I hadn't considered in her evaluation of the climate of cosmology, and she even specifically references When Physics Became King. She notes that the driving force in institutional support of physics was new methods of generating energy, and thus the establishment of energy as a foundational concept in physics (as opposed to Newton's focus on force) may have been influenced by early physics' interactions with early capitalism.
- The idea of universities as places where new knowledge is created was basically unheard of until late in the 1800s, and they were very reluctant to teach new ideas. In 1811, it was a group of students (including Charles Babbage and John Herschel) who essentially led Cambridge to move from Newtonian formulations of calculus to the French analytical formulation (which gives us the Leibniz-style dy/dx notation), and this was considered revolutionary in both an intellectual and political sense. When Carl Friedrich Gauss provided his thoughts on finding a new professor at the University of Göttingen, he actually suggested that highly regarded researchers and specialists might be inappropriate because he doubted their ability to teach broad audiences.
- The importance of math in university education is interesting to compare to modern views. It wasn't really assumed that future imperial administrators would use calculus; rather, those who could learn it were presumed the most fit for the other intellectual tasks the empire needed.
- In the early 19th century, natural philosophy was the lowest-regarded discipline in the philosophy faculties in Germany. It was actually Gauss who helped raise its standing by making research a part of the professor's role. The increasing importance of research also led to a disciplinary split between theoretical and experimental physics, and in the German states, being able to hire theoretical physicists at universities became a mark of distinction.
- Some physicists were allied to Romanticism because the conversion of energy between various mechanical, chemical, thermal, and electrical forms was viewed as showing the unity of nature. Also, empiricism, particularly humans directly observing nature through experiments, was viewed as a means of investigating the mind and broadening experience.
- The emergence of energy as the foundational concept of physics was controversial. One major complaint was that people have a less intuitive conception of energy than of forces, which we seem to experience directly. Others objected that energy isn't actually a physical property but a useful calculational tool (and the question of what exactly energy is still pops up in modern philosophy of science, especially in discussions of how best to explain it). The development of theories of the luminiferous (a)ether is linked a bit to this as an explanation of where electromagnetic energy resides – ether theories suggested the ether stored the energy associated with waves and fields. (A small worked example of the force-versus-energy equivalence follows this list.)
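To make the force-versus-energy debate concrete, here is a standard textbook derivation (my own illustration, not from Morus's book) for a particle of mass $m$ in a potential $V(x)$, showing the two formulations are mathematically interchangeable:

$$
m\ddot{x} = F = -\frac{dV}{dx}
\quad\Longrightarrow\quad
\frac{d}{dt}\!\left[\frac{1}{2}m\dot{x}^{2} + V(x)\right]
= \dot{x}\left(m\ddot{x} + \frac{dV}{dx}\right) = 0,
$$

so the total energy $E = \tfrac{1}{2}m\dot{x}^{2} + V(x)$ is conserved whenever Newton's force law holds. The math doesn't pick a winner; the 19th-century dispute was over which quantity is physically fundamental.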
Are you using something with a modern microprocessor on International Women's Day? (If you're not, but are somehow able to see this post, talk to a doctor. Or a psychic.) You should thank Dr. Lynn Conway, professor emerita of electrical engineering and computer science at Michigan and member of the National Academy of Engineering, who is responsible for two major innovations that are ubiquitous in modern computing. She is most famous for the Mead-Conway revolution, as she developed the "design rules" used in Very-Large-Scale Integration (VLSI) architecture, the scheme that basically underlies all modern computer chips. Conway's rules standardized chip design, making the process faster, easier, and more reliable, and, perhaps most significantly for broader society, easy to scale down, which is why we are now surrounded by computers.
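To give a flavor of what "design rules" means here, below is a minimal sketch (my illustration; the rule values are simplified examples in the spirit of the Mead-Conway lambda rules, not the actual published set). The key idea is that every geometric constraint is a multiple of a single scale parameter λ, so an entire design stays legal when the process shrinks:

```python
# Toy illustration of lambda-based design rules in the spirit of the
# Mead-Conway approach (simplified example values, not the published set).
# Every rule is a multiple of one scale parameter, lambda, so a layout
# stays legal when the whole process is shrunk by shrinking lambda.

LAMBDA_RULES = {
    "min_poly_width": 2,   # minimum polysilicon wire width, in lambda
    "min_metal_width": 3,  # minimum metal wire width, in lambda
    "min_poly_space": 2,   # minimum spacing between poly wires, in lambda
}

def satisfies(rule: str, size_um: float, lambda_um: float) -> bool:
    """Check a physical dimension (microns) against a lambda-scaled rule."""
    return size_um >= LAMBDA_RULES[rule] * lambda_um

# The same drawn geometry (a metal wire 3 lambda wide) passes at any scale:
for lambda_um in (1.0, 0.5, 0.1):   # three hypothetical process generations
    wire_um = 3 * lambda_um         # width drawn as 3 lambda
    print(lambda_um, satisfies("min_metal_width", wire_um, lambda_um))
```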
She is less known for her work on dynamic instruction scheduling (DIS). DIS lets a processor execute a program's instructions out of order, so that later parts of code that don't depend on the results of earlier parts can start running instead of the whole program stalling until certain operations finish. This lets programs run faster and also be more efficient with processor and memory resources. Conway was less known for this work for years because she presented as a man when she began working at IBM. When Conway began her transition in 1968, she was fired because the transition was seen as potentially "disruptive" to the work environment. After leaving IBM and completing her transition, Conway lived in "stealth", which prevented her from publicly taking credit for her work there until the 2000s, when she decided to reach out to someone studying the company's work on "superscalar" computers in the 1960s.
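Here is a toy sketch of the idea (my illustration, not Conway's actual ACS design): each instruction issues as soon as the registers it reads are ready, rather than strictly in program order, so independent work proceeds while a slow operation is still in flight:

```python
from dataclasses import dataclass

@dataclass
class Instr:
    name: str      # mnemonic, for printing
    reads: set     # registers whose values this instruction needs
    writes: str    # register it produces
    latency: int   # cycles until its result is ready

# A hypothetical four-instruction program:
program = [
    Instr("LOAD r1", set(),  "r1", 4),  # slow memory load
    Instr("ADD  r2", {"r1"}, "r2", 1),  # depends on the load, must wait
    Instr("MUL  r3", set(),  "r3", 1),  # independent, can start immediately
    Instr("SUB  r4", {"r3"}, "r4", 1),  # only needs r3
]

ready_at = {}  # register -> cycle when its value becomes available
for instr in program:
    start = max((ready_at[r] for r in instr.reads), default=0)
    ready_at[instr.writes] = start + instr.latency
    print(f"{instr.name}: issues at cycle {start}, "
          f"result at cycle {start + instr.latency}")

# MUL r3 and SUB r4 finish (cycles 1 and 2) before ADD r2 even starts
# (cycle 4): the processor didn't stall everything behind the slow load.
```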
Since coming out, Dr. Conway has been an advocate for trans rights, in science and in society. As a scientist herself, Dr. Conway is very interested in how trans people and the development of gender identity are represented in research. In 2007, she co-authored a paper showing that mental health experts seemed to be dramatically underestimating the number of trans people in the US by relying on studies of transition surgeries alone. In 2013 and 2014, Conway worked to make the IEEE's Code of Ethics inclusive of gender identity and expression.
A good short biography of Dr. Conway can be found here. Or read her writings on her website.
I'm using "meta-science" as a somewhat expansive term for the history, philosophy, and sociology of science. And I'm using my blog as a place to write about something besides the physical chemistry of carbon nanomaterials in various liquids.
- To what extent is sloppy/misleading terminology an attempt to cash in on buzzwords? Clearly that motive exists – there aren't two major papers trying to narrow down precise definitions of graphene-related terms for nothing. But as those papers also suggest, at what point is it a legitimate debate in the community about setting a definition? "Graphene" described a useful theoretical construct for decades before anyone ever thought* someone could make a real sheet of it, so maybe it isn't unreasonable that people started using it to describe a variety of physical things related to the original idea.
- This suggests a follow-up: what properties do people use in drawing these definitions, and how much does that vary by background? Personally, my usage stays way closer to the ideal of "graphene" than that of lots of people working with more extensively chemically modified graphene derivatives, and I'm fine applying the term to almost anything that's nearly all sp2 carbon with about 10 layers or fewer. But would a physicist who cares more about the electronic properties, which vary a lot with layer number even at that few-layer limit, find that maddening? (A toy version of such a definition appears at the end of this post.)
- Nanoscience is very interdisciplinary/transdisciplinary, but individual researchers can be quite grounded in just one field. How much work is being done where researchers are missing basic knowledge of another field their work is now straddling?
- For instance, when reading up on polymer nanocomposites, I've seen lots of people with extensive polymer science backgrounds note that many papers don't engage with basic aspects of polymer physics. My hunch is that a lot of this comes from the fact that many people in this field started out working on the nanoparticles they want to incorporate into composites and only then moved into the composites themselves. They may have backgrounds in fields like solid-state physics, electrical engineering, or (inorganic/metallic/ceramic) materials science, where they would have been less likely to deal with polymer theory.
- Similarly, one paper I read noted that a lot of talk about "solutions" of nanoparticles would probably be more precise if framed in the terminology of colloids and dispersions, since the particles are suspended rather than dissolved.
[Image caption: Oh my gosh, I made fun of the subtitle for like two years, but it's true.]
- Is the ontological status of defects in nanoscience distinct from their treatment in bulk studies of materials? This is a bit related to the first question in that some definitions would preclude the existence of some defects in the referent material/structure.
- On the other hand, does this stricter treatment make more sense in the few-atom limit of many nanomaterials? Chemists can literally specify the type and location of every atom in successful products of well-studied cluster reactions, though these are pushing the term "nano" in the other direction (they may be too small).
- Is this a reflection of how defects are applied at the different scales? (More philosophically worded: are defects treated differently because of their teleological nature?) At the bulk level, we engineer the nature of defects to develop the properties we want. At the nanoscale, some structures can basically be ruined for certain applications by the mislocation of a single atom. Is this also a reflection of the current practical need to scale up our ability to make nanomaterials? E.g., as more realistic approaches to large-scale nanotech fabrication are developed, will the practical treatment of defects in nanomaterials converge to how we treat defects in the bulk?
*Okay, more like before anyone cared a lot about it, since there are papers going back to the 1960s where researchers describe what appear to be atomic monolayers of graphite.
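As promised above, here is a toy version of the definitional question (my illustration, not any published standard), encoding the properties I care about and showing how another community could tighten the thresholds:

```python
def is_graphene(n_layers: int, sp2_fraction: float,
                max_layers: int = 10, min_sp2: float = 0.95) -> bool:
    """My loose, chemistry-flavored working definition: nearly all sp2
    carbon, roughly 10 layers or fewer. The defaults are judgment calls."""
    return n_layers <= max_layers and sp2_fraction >= min_sp2

# Under my loose definition, few-layer material counts...
print(is_graphene(5, 0.98))                # True
# ...but a band-structure-minded physicist might demand a true monolayer:
print(is_graphene(5, 0.98, max_layers=1))  # False
```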
An article about the "most ridiculous startup ideas that became successful" has been making the rounds on social media. It amused me, mainly because the "ridiculous" ideas used to summarize each company are more like strained ex post facto descriptions of what the companies currently do, not their starting business models.
- Facebook was not meant to be another Myspace. It started as a way for college students to communicate with each other (after a very brief life as a "hot or not" thing for Harvard dorms). If you're a Millennial, ask your parents if they ever looked at Classmates.com. Odds are that they have. Myspace was a public site where 13-year-olds made 90s-esque web profiles that were open to everyone, including 40-year-old men pretending to be 13-year-olds. That Facebook did not require this degree of openness has been part of its success.
- Dropbox seemed like the first major file transfer service I heard of aside from Google Docs. As this XKCD shows, we're still desperately working on file transfer, so almost any idea could have a shot. (My current solution is Google Drive.)
- Amazon took off after eBay drew people online. It also started around the time of the dot-com bubble, so it's not like investors needed much rationalization before investing in websites. And if you think about it, the basic starting idea kind of makes sense: Amazon could get virtually any book for a customer without wasting money on inventory costs. Also, Amazon hasn't turned a profit in years because it keeps trying to expand, so maybe we should be wary of calling it a success for now.
- Virgin was founded not long after the airline industry was deregulated, so the timing isn’t crazy.
- I know virtually nothing about Mint or Palantir, but the idea of a company being really dependent on defense contracts is actually not uncommon.
- Craigslist is a classifieds website in a time when newspaper classifieds are slowly dying. Investing in it seems really reasonable. And actually, it doesn't seem to be pulling in a lot of venture capital money; the one major outside investor is eBay.
- iOS isn’t even a company or a standalone product. Why is it on this list?
- The whole point of Google was that its indexing algorithm was almost completely different from other search engines' at the time. Does the author not remember how bad search results were in the 90s? Also, Google grew out of Larry Page's dissertation, so it's not like pitching was done before the algorithm existed. (A toy sketch of the idea appears after this list.)
- Part of PayPal's appeal is that it's more secure to give your financial information to just one website and use that for purchases than to hand your credit card information to a new party every time you buy something online.
- LinkedIn totally confused me in college, but now I appreciate separating my professional and social networking activities. And evidently lots of companies do use LinkedIn for recruiting, since they can target appropriate people somewhat better than random Internet ads.
- Tesla actually does work with other car companies on some models and does have a goal of providing electric car equipment to other manufacturers to help mainstream electric cars. And considering that it was founded in 2003, its existence predates the cleantech “backlash”.
- 2/3 of SpaceX is owned by Elon Musk. And SpaceX doesn’t just want to be a commercial NASA (and even if the author finds this really weird, I would invite him to read almost any science fiction talking about space colonization). It plans to do commercial satellite launches as well, which are big business now.
- Firefox is the work of a free-software group, which is mostly funded by a non-profit (and a company that makes money but reinvests nearly all of it into the non-profit).
- Honestly, the only crazy ideas here seem to be Instagram and Twitter. And people still seem unsure of how those are supposed to make money, so maybe we'll find that their current structures are ridiculous.
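As mentioned in the Google item above, here is a toy power-iteration PageRank (a sketch of the idea behind Google's original algorithm, not their implementation; the three-page "web" is hypothetical). A page's importance is the probability that a random surfer lands on it, so links from important pages count for more, unlike 90s engines that mostly matched keywords on the page itself:

```python
links = {            # hypothetical tiny web: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

d = 0.85                                    # standard damping factor
rank = {p: 1 / len(links) for p in links}   # start with uniform ranks
for _ in range(50):                         # power iteration to convergence
    new = {p: (1 - d) / len(links) for p in links}
    for page, outlinks in links.items():
        for target in outlinks:             # spread rank along outgoing links
            new[target] += d * rank[page] / len(outlinks)
    rank = new

print(rank)  # C ranks highest: it collects links from both A and B
```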
Of course, the reason this list is so popular is that people seem to love counterintuitive ideas proving experts or conventional wisdom wrong. It's like Malcolm Gladwell applied to entrepreneurship. And just as wrong.