Weirdly Specific Questions I Want Answers to in Meta-science, part 1

Using “meta-science” as a somewhat expansive term for history, philosophy, and sociology of science. And using my blog as a place to write about something besides the physical chemistry of carbon nanomaterials in various liquids.

  • To what extent is sloppy/misleading terminology an attempt to cash in on buzzwords? Clearly, we know that motive exists – there aren’t two major papers trying to narrow down precise definitions of graphene-related terms for nothing. But as the papers also suggest, at what point is it a legitimate debate in the community about setting a definition? “Graphene” was a term that described a useful theoretical construct for decades before anyone ever thought* someone could make a real sheet of it, so maybe it isn’t unreasonable that people started using it to describe a variety of physical things related to the original idea.
    • This contains a sort of follow-up: What properties do people use in clarifying these definitions, and how much does it vary by background? Personally, I would say I'm way closer to the ideal of "graphene" than lots of people working with more extensively chemically modified graphene derivatives, and I'm fine with using it for almost anything that's nearly all sp2 carbon with about 10 layers or fewer. But would a physicist who cares more about the electronic properties, which vary a lot with the number of layers even in that lower limit, consider that maddening?
  • Nanoscience is very interdisciplinary/transdisciplinary, but individual researchers can be quite grounded in just one field. How much work is being done where researchers are missing basic knowledge of another field their work is now straddling?
    • For instance, when reading up on polymer nanocomposites, lots of people with extensive polymer science backgrounds have noted that many papers don't engage with basic aspects of polymer physics. My hunch is that a lot of this comes from the fact that many people in this field started out working on the nanoparticles they want to incorporate into the composites and then moved into the composites. They may have backgrounds more in fields like solid-state physics, electrical engineering, or (inorganic/metallic/ceramic) materials science, where they would have been less likely to deal with polymer theory.
    • Similarly, it was noted in one paper I read that a lot of talk about solutions of nanoparticles would probably be more precise if the discussion were framed in the terminology of colloids and dispersions.

Oh my gosh, I made fun of the subtitle for like two years, but it’s true

  • Is the ontological status of defects in nanoscience distinct from their treatment in bulk studies of materials? This is a bit related to the first question in that some definitions would preclude the existence of some defects in the referent material/structure.
    • On the other hand, does this stricter treatment make more sense in the few-atom limit of many nanomaterials? Chemists can literally specify the type and location of every atom in successful products of well-studied cluster reactions, though these push the term "nano" in the other direction, in the sense that they may be too small.
    • Is this a reflection of applications of defects at the different scales? (More philosophically worded, are defects treated differently because of their teleological nature?) At the bulk level, we work to engineer the nature of defects to help develop the properties we want. At the nanoscale, some structures can basically be ruined for certain applications by the mislocation of a single atom. Is this also a reflection of the current practical process of needing to scale up the ability to make nanomaterials? E.g. as more realistic approaches to large-scale nanotech fabrication are developed, will the practical treatment of defects in nanomaterials converge to that of how we treat defects in the bulk?

*Okay, more like anyone cared a lot about it, since there are papers going back to the 1960s where researchers describe what appear to be atomic monolayers of graphite.


Where are all the engineering blogs?

I was browsing through Dynamic Ecology recently on my reader to catch up on end-of-2015 posts and was intrigued by one of the author’s comments on why there isn’t really an ecology blogosphere. And though I’ve pondered it before, this makes me wonder where the engineering blogosphere is. I don’t have much evidence to back up the loading of that question, but I’ve been in grad school for engineering for 3.5 years now, and it’s worth noting that I still haven’t heard of any major engineering blogs people follow. And the sheer randomness of Blogmetric’s ranking of engineering blogs seems to corroborate this: only the top 2 of the ranked engineering blogs are tracked to have over 100 visitors a month. A Github list of engineering blogs (which is currently the first result for Googling “engineering blogs”) seems incredibly focused on tech company blogs and IT/programming/development.

Engineering.com's blog seems to have ended without even a goodbye at the end of 2014. Engineer Blogs has been radio silent since September of 2012. And the American Chemical Society's magazine, Chemical & Engineering News, closed up nearly all of its blogs in mid-2014, with an explanation implying this was because they were viewed as a drain on resources that could be more productively used for other tasks. Chemical Engineering World (which, as far as I can tell, is a personal blog and not affiliated with the Indian publication of the same name) seems to have just come back after a hiatus.

The Dynamic Ecology post's second point, that ecology isn't very news-driven, sounds compelling to me as a reason that could easily cross-apply to engineering, especially if you're trying to move away from just tech company gossip. Having something well-known to react to can make it easier to post content that'll actually engage readers, because they start searching for it. The Neuroecology post's first point, that neuroscience lacks a blogosphere because neuroscience bloggers focus more on outreach to general audiences than on technical exchanges with each other, also seems valid. What's interesting is the comparison I'm making. As Jeremy from Dynamic Ecology points out, the general science blogosphere is pretty vibrant. He and the Neuroecologist are focusing more on the lack of interacting blogging communities in specific disciplines. Engineering seems to lack this at both levels. I also wonder about some specific issues in engineering that can contribute to this.

  • Is engineering too broad to have a meaningful blogosphere? I see two distinct forces here.
    • First, there's the breadth of engineering disciplines. I could see it being hard for there to be a lot of substantive discussion between, say, a chemical engineer and a computer scientist on a broad range of topics.
    • Second, there’s the huge influence of a lot of engineering actually being done in industry. I’m not going to say academic and corporate engineers don’t talk to each other, but it would also be dumb to pretend they have the same interests in how they approach outreach.
  • Is engineering too tied into the science blogosphere? (I wondered a similar thing last time I posted about engineers and outreach.) Interested scientists and science writers can (and do) do a good job of explaining concepts and results from related engineering fields. For instance, Dot Physics is written by a physicist who routinely covers topics related to technology and engineering. On the opposite end, I clearly try to cover science topics that I think I can explain, even if I'm not an expert in them. Randall Munroe straddles the border a lot in What If? and Thing Explainer.
  • You might think I'm treading around one obvious potential component of an engineering blogosphere, and that's tech blogs. But engineering isn't just "tech companies", which in modern parlance seems to really just mean computer and Internet companies. (I've somewhat ranted about this before in the last two paragraphs of this post.) A lot of stuff also goes on in physical infrastructure that engineers could talk about. And in an era where the Internet seems increasingly interested in discussing how systems shape our lives, it seems like we're missing out if the people who help shape physical systems don't share their voices.

Edit to add: I also realize I didn’t include any discussion about Twitter here, mainly because I’m still a novice there. But I still haven’t seen very long discussions on specific engineering issues on Twitter, though I assume tech is the exception again.

So What is Materials Science (and Engineering)?

So this is my 100th post, and I felt like it should be kind of special. So I want to cover a question I get a lot, and one that's important to me: what exactly is materials science? My early answer was that "it's like if physics and chemistry had a really practical baby." One of my favorite versions is a quote from this article on John Goodenough, one of the key figures in making rechargeable lithium-ion batteries: "In hosting such researchers, Goodenough was part of the peculiar world of materials scientists, who at their best combine the intuition of physics with the meticulousness of chemistry and pragmatism of engineering". Which is a much more elegant (and somewhat ego-boosting) way of wording my description. In one of my first classes in graduate school, my professor described materials science as "the study of defects and how to control them to obtain desirable properties".

A more complete definition is some version of the one that shows up in most introductory lessons: materials science studies the relationships between the structure of a material, its properties, its performance, and the way it was processed. This is often represented as the "materials science tetrahedron" shown below, which turns out to be something people really love to use. (You also sometimes see characterization floating in the middle, because it applies to all of these aspects.)

A tetrahedron with blue points at the vertices. The top is labelled structure; the bottom three are properties, processing, and performance.

The materials science tetrahedron (with characterization floating in the middle).

Those terms may sound meaningless to you, so let's break them down. In materials science, structure goes beyond that of chemistry: it's not just the makeup of the atoms or molecules in a material that matters; how those atoms/molecules are arranged together has a huge effect on how the material behaves. You're probably familiar with one common example: carbon and its various allotropes. The hardness of diamond is partially attributed to its special crystal structure. Graphite is soft because its layers easily slide across each other. Another factor is the crystallinity of a material. Not all materials you see are monolithic pieces. Many are made of smaller crystals we call "grains". The size and arrangement of these grains can be very important. For instance, the silicon in electronics is made in such a way as to guarantee it will always be one single crystal, because boundaries between grains would ruin its electronic properties. Turbine blades in jet engines are single crystals, while steels used in structures are polycrystalline.

On the top is a diamond and a piece of graphite. On the bottom are their crystal structures.

Diamond and its crystal structure are on the left; graphite and its structure are on the right.

Processing is what we do to a material before it ends up being used. This is more than just isolating the compounds you’ll use to make it. In fact, for some materials, processing actually involves adding impurities. Pure silicon wouldn’t be very effective in computers. Instead, silicon is “doped” with phosphorus or boron atoms and the different doping makes it possible to build various electronic components on the same piece. Processing can also determine the structure – temperature and composition can be manipulated to help control the size of grains in a material.

A ring is split into 10 different sections. Going counterclockwise from the top, each segment shows smaller crystals.

The same steel, with different size grains.

Properties and performance are closely related, and the distinction can be subtle (and honestly, it isn't something we distinguish that much). One idea is that properties describe the essential behavior of a material, while performance reflects how that translates into its use, or the "properties under constraints". This splits "materials science and engineering" into materials science, focusing on properties, and materials engineering, focusing on performance. But that distinction can get blurred pretty quickly, especially if you look at different subfields. Someone who studies mechanical properties might say that corrosion is a performance issue, since it limits how long a material can be used at its desired strength. Talk to my colleagues next door in the Center for Electrochemical Science and Engineering, and almost all of them would certainly consider corrosion to be a property of materials. Regardless, both of these depend on structure and processing. Turbine blades in jet engines are single crystals because this reduces creep and fatigue over time. Structural steels are polycrystals because this makes them stronger.

Now that I’ve thought about it more, I realize the different parts of the tetrahedron explain the different ways we define materials science and engineering. My “materials science as applied physics and chemistry” view reflects the scale of structures we talk about, from atoms that are typically chemistry’s domain to the crystal arrangement to the larger crystal as a whole, where I can talk about mechanics of atoms and grains. The description of Goodenough separates materials science from physics and chemistry through the performance-driven lens of pragmatism. My professor’s focus on defects comes from the processing part of the tetrahedron.

The tetrahedron also helps define the relationship of materials science and engineering to other fields. First, it helps limit what we call a "material". Our notions of structure and processing are very different from those of chemical engineers, and they work best on solids. It also helps define limits to the field. Our structures aren't primarily governed by quantum effects, and we generally want defects, so we're not redundant to solid-state physics. And when we talk about mechanics, we care a lot about the microstructure of the material, and rarely venture into the large-scale continuum mechanics of mechanical and civil engineers.

At the same time, the tetrahedron also explains how interdisciplinary materials science is and can be. That makes sense, because the tetrahedron developed to help unify materials science. A hundred years ago, "materials science" wouldn't have meant anything to anyone. People studying metallurgy and ceramics were in their own mostly separate disciplines. The term "semiconductor" was only coined in a PhD dissertation in 1910, and polymers were still believed to be aggregates of molecules attracted to each other instead of the long chains we know them to be today. The development of crystallography and thermodynamics helped us tie all these together by helping us define structures, where they come from, and how we change them. (Polymers are still a bit weird in many materials science departments, but that's a post for another day.)

Each vertex is also a key branching-off point to work with other disciplines. Our idea of structure isn't redundant to chemistry and physics; they build off each other. Atomic orbitals help explain why atoms end up in certain crystal structures. Defects end up being important in catalysts. Or we can look at structures that already exist in nature as inspiration for our own designs. One of my professors explained how he once did a project studying turtle shells from the atomic to the macroscopic level, justifying it as a way to design stronger materials. Material properties put us in touch with anyone who wants to use our materials in their end products, from people designing jet engines to surgeons who want prosthetic implants, and have us talk to physicists and chemists to see how different properties emerge from structures.

This is what attracted me to materials science for graduate school. We can frame our thinking around each vertex, but it's also expected that we shift between them. We can think about structures on a multitude of scales. Now I joke that being a bad physics major translates into being great at most of the physics I need to use now. The paradigm helps us approach all materials, not just the ones we personally study. Thinking with different applications in mind forces me to learn new things all the time. (When biomedical engineers sometimes try to claim they're the "first" interdisciplinary field of engineering to come on the scene, I laugh, thinking that they forget materials science has been around for decades. Heck, now I have 20 articles I want to read about the structure of pearl to help with my new research project.) It's an incredibly exciting field to be in.

The Coolest Part of that Potentially New State of Matter

So we've discussed states of matter. And the reason they're in the news. But the idea that this is a new state of matter isn't particularly ground-breaking. If we're counting electron states alone as new states of matter, then those are practically a dime a dozen. Solid-state physicists spend a lot of time creating materials with weird electron behaviors: under this definition, lots of the newer superconductors are their own states of matter, as are topological insulators.

What is a big deal is the way this behaves as a superconductor. "Typical" superconductors include many ordinary metals. When you cool them to a few degrees above absolute zero, they lose all electrical resistance and become superconductive. These are described by BCS theory, a key part of which says that at low temperatures, the few remaining atomic vibrations of a metal will actually cause electrons to pair up and all drop to a low energy. In the 1980s, though, people discovered that some metal oxides could also become superconductive, and they did so at temperatures above 30 K. Some go as high as 130 K, which, while still cold to us (room temperature is about 300 K), is warm enough to use liquid nitrogen instead of incredibly expensive liquid helium for cooling. However, BCS theory doesn't describe superconductivity in these materials, which also means we don't really have a guide to develop ones with the properties we want. The dream of a lot of superconductor researchers is that we could one day make a material that is superconducting at room temperature, and use that to make things like power transmission lines that don't lose any energy.

This paper focused on an interesting material: a crystal of buckyballs (molecules of 60 carbon atoms arranged like a soccer ball) modified to have some rubidium and cesium atoms. Depending on the concentration of rubidium versus cesium in the crystal, it can behave like a regular metal or the new state of matter they call a "Jahn-Teller metal", because it is conductive but also has a distortion of the soccer ball shape from something called the Jahn-Teller effect. What's particularly interesting is that these also correspond to different superconductive behaviors. At a concentration where the crystal is a regular metal at room temperature, it becomes a typical superconductor at low temperatures. If the crystal is a Jahn-Teller metal, it behaves a lot like a high-temperature superconductor, albeit at low temperatures.

This is the first time scientists have ever seen a single material that can behave like both kinds of superconductor. This is exciting because it offers a unique testing ground to figure out what drives unconventional superconductors. By changing the composition, researchers can tune how the electrons in the material behave and watch what makes them go through the phase transition to a superconductor.

Carbon is Dead. Long Live Carbon?

Last month, the National Nanotechnology Initiative released a report on the state of commercial development of carbon nanotubes. And that state is mostly negative. (Which pains me, because I still love them.) If you’re not familiar with carbon nanotubes, you might know of their close relative, graphene, which has been in the news much more since the Nobel Prize awarded in 2010 for its discovery. Graphene is essentially a single layer of the carbon atoms found in graphite. A carbon nanotube can be thought of as rolling up a sheet of graphene into a cylinder.


Visualizing a single-walled (SW) carbon nanotube (CNT) as the result of rolling up a sheet of graphene.

If you want to use carbon nanotubes, there are a lot of properties you need to consider. Nearly 25 years after their discovery, we're still working on controlling a lot of these properties, which are closely tied to how we make the nanotubes.

Carbon nanotubes have six major characteristics to consider when you want to use them:

  • How many “walls” does a nanotube have? We often talk about the single-walled nanotubes you see in the picture above, because their properties are the most impressive. However, it is much easier to make large quantities of nanotubes with multiple walls than single walls.
  • Size. For nanotubes, several things come into play here.
    • The diameter of the nanotubes is often related to chirality, another important aspect of nanotubes, and can affect both mechanical and electrical properties.
    • The length is also very important, especially if you want to incorporate the nanotubes into other materials or if you want to directly use nanotubes as structural materials themselves. For instance, if you want to add nanotubes to another material to make it more conductive, you want them to be long enough to routinely touch each other and carry charge through the entire material (there's a rough sketch of this just after the list). Or if you want that oft-discussed nanotube space elevator, you need really long nanotubes, because stringing a bunch of short nanotubes together results in a weak material.
    • And the aspect ratio of length to width is important for materials when you use them in structures.
  • Chirality, which can basically be thought of as the curviness of how you roll up the graphene to get a nanotube (see the image below). If you think of rolling up a sheet of paper, you can roll it leaving the ends matched up, or you can roll it at an angle. Chirality is incredibly important in determining the way electricity behaves in nanotubes, and whether a nanotube behaves like a metal or like a semiconductor (like the silicon in your computer chips); there's a small worked example of this bookkeeping after the figure below. It also turns out that the chirality of nanotubes is related to how they grow when you make them.
  • Defects. Any material is always going to have some deviation from an "ideal" structure. In the case of carbon nanotubes, a tube can be missing carbon atoms or have extra ones, which replaces a few of the hexagons in the structure with pentagons or heptagons. Or impurity atoms like oxygen may end up incorporated into the nanotube. Defects aren't necessarily bad for all applications. For instance, if you want to stick a nanotube in a plastic, defects can actually help it incorporate better. But electronics typically need nanotubes of the highest purity.
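As a rough illustration of the length point above (a back-of-the-envelope sketch of my own, not something from the NNI report): a common excluded-volume estimate says randomly oriented conductive rods start forming a connected network at a volume fraction of roughly 1/(2 × aspect ratio), so longer tubes form conductive paths at much lower loadings.

```python
# Rough percolation estimate for conductive rod-like fillers: for randomly
# oriented rods with aspect ratio L/D >> 1, excluded-volume arguments give a
# critical volume fraction of about 1 / (2 * L/D). Numbers are hypothetical.

def critical_volume_fraction(length_um: float, diameter_nm: float) -> float:
    aspect_ratio = (length_um * 1000) / diameter_nm  # convert length to nm
    return 1 / (2 * aspect_ratio)

# A 1 nm-wide nanotube that's 5 um long vs. one that's only 100 nm long:
print(f"{critical_volume_fraction(5.0, 1.0):.4%}")  # ~0.0100% by volume
print(f"{critical_volume_fraction(0.1, 1.0):.4%}")  # ~0.5000% by volume
```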

A plane of hexagons is shown in the top left. Overlaid on the plane are arrows representing vectors. On the top right is a nanotube labeled (10, 0) zig-zag. On the bottom left is a larger (10, 10) armchair nanotube. On the bottom right is a larger (10, 7) chiral nanotube.

Some of the different ways a nanotube can be rolled up. The numbers in parentheses are the "chiral vector" of the nanotube and determine its diameter and electronic properties.
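Since the chiral vector does so much work here, here's a minimal sketch of the standard bookkeeping (the function names are my own; the diameter formula and the "divisible by 3" rule are the usual textbook results, which ignore small curvature corrections):

```python
import math

A = 0.246  # graphene lattice constant in nm

def diameter_nm(n: int, m: int) -> float:
    """Diameter of an (n, m) nanotube: d = a * sqrt(n^2 + n*m + m^2) / pi."""
    return A * math.sqrt(n**2 + n * m + m**2) / math.pi

def electronic_type(n: int, m: int) -> str:
    """To first order, a tube is metallic when (n - m) is divisible by 3."""
    return "metallic" if (n - m) % 3 == 0 else "semiconducting"

# The three tubes in the figure above:
for n, m in [(10, 0), (10, 10), (10, 7)]:
    print(f"({n},{m}): {diameter_nm(n, m):.2f} nm, {electronic_type(n, m)}")
```

So the (10, 10) armchair tube comes out metallic, the (10, 0) zig-zag tube semiconducting, and the (10, 7) tube quasi-metallic, which is exactly the kind of mixed bag you get when you can't control chirality during growth.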

Currently, the methods we have to make large amounts of CNTs result in a mix of ones with different chiralities, if not also different sizes. (We have gotten much better at controlling diameter over the last several years.) For mechanical applications, the former isn’t much of a problem. But if you have a bunch of CNTs of different conductivities, it’s hard to use them consistently for electronics.

But maybe carbon nanotubes were always doomed once we discovered graphene. Working from the idea of a CNT as a rolled-up graphene sheet, you may realize this means there are way more factors that can be varied in a CNT than in a single flat flake of graphene. When working with graphene, there are just three main factors to consider:

  • Number of layers. This is similar to the number of walls of a nanotube. Scientists and engineers are generally most excited about single-layer graphene (which is technically the “true” graphene). The electronic properties change dramatically with the number of layers, and somewhere between 10 and 100 layers, you’re not that different from graphite. Again, the methods that produce the most graphene produce multi-layer graphene. But all the graphene made in a single batch will generally have consistent electronic properties.
  • Size. This is typically just one parameter, since most methods to make graphene result in roughly circular, square, or other equally shaped patches. Also, graphene’s properties are less affected by size than CNTs.
  • Defects. This tends to be pretty similar to what we see in CNTs, though in graphene there’s a major question of whether you can use an oxidized form or need the pure graphene for your application, because many production methods make the former first.

Single-layer graphene also has the added quirk that its electrical properties are greatly affected by whatever lies beneath it. However, that may be less of an issue for commercial applications, since whatever substrate is chosen for a given application will consistently affect all graphene on it. In a world where we can now make graphene in blenders or just fire up any carbon source ranging from Girl Scout cookies to dead bugs and let it deposit on a metal surface, it can be hard for nanotubes to sustain their appeal when growing them requires additional steps of catalyst production and placement.

But perhaps we’re just leaving a devil we know for a more hyped devil we don’t. Near the end of last year, The New Yorker had a great article on the promises we’re making for graphene, the ones we made for nanotubes, and about technical change in general, which points out that we’re still years away from widespread adoption of either material for any purpose. In the meantime, we’re probably going to keep discovering other interesting nanomaterials, and just like people couldn’t believe we got graphene from sticky tape, we’ll probably be surprised by whatever comes next.

This Power-Generating Shoe Isn’t Ready for Prime Time Yet, but This Kid’s Project is Still Pretty Cool

This is a video by Angelo Casimiro, a 15-year-old Filipino student participating in this year's Google Science Fair. And he has seriously tweaked his shoes to do something cool: they spark. And I don't mean spark like those kids' shoes with stripes that light up dimly as you walk, the ones you really wanted to try but evidently couldn't wear because none of them could support your ankles (okay, that last part may not have applied to everyone else…). Angelo's new shoes actually generate a little bit of electricity each time he takes a step. This is incredibly cool.

But just because I can, I’m going to bury the lede for a bit, because I want to contextualize this. Angelo did this as a test to see if it could work AT ALL, and he says he’s nowhere near a final product that you might buy. So before dreams of daily jogging to power your iPhone and laptop dance in your head, we need to look at the electricity we can create and how much we actually use.

Duracell's basic alkaline non-rechargeable AA battery has a charge of about 2500-3000 milliampere-hours (mAh), which I estimated by multiplying the number of hours it was used by the constant currents applied in the graphs on the first page here. The two basic rechargeable NiMH AA batteries have charges of 1700 and 2450 mAh. The battery in my Android smartphone has a charge of 1750 mAh, based on dividing the energy (6.48 watt-hours) by its operating voltage (3.7 V). Based on Angelo's best reported current of 11 mA on his Google Science Fair page, it would take 159 hours to fully charge my phone. That's nearly a week of non-stop running! (Literally! There are only 168 hours in a week. You could only spend 9 hours doing anything besides running that week if you wanted to charge the phone, or replace one of the two AA batteries it takes to power my digital camera.) However, I might be overestimating based on his averages. At around the 3:50 mark in the video, an annotation says that Angelo was able to charge a 400 mAh battery after 8 hours of jogging. That would translate to about 35 hours of jogging to charge my cell phone. No one I know would want to do that, but it is significantly less than jogging non-stop for almost 7 days.
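If you want to check my arithmetic, it's all just charge divided by current; here's the whole calculation in a few lines (using only the figures quoted above):

```python
# Charging-time estimate: time (hours) = capacity (mAh) / current (mA)
phone_mah = 6.48 / 3.7 * 1000  # my phone: 6.48 Wh at 3.7 V -> ~1751 mAh

best_current_ma = 11                  # Angelo's best reported current
print(phone_mah / best_current_ma)    # ~159 hours of non-stop running

# Charging a 400 mAh battery over 8 hours of jogging implies an average of:
avg_current_ma = 400 / 8              # 50 mA
print(phone_mah / avg_current_ma)     # ~35 hours of jogging
```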

But as Angelo points out, while you may not be able to power your phone with his shoe, lots of sensors and gadgets that could go into smart clothes could be powered by this. In the video, he says he was able to power an Arduino board. An Arduino is a common mini-CPU board with extras that people often use to make nifty devices, from how Peter Parker locks his room door in The Amazing Spider-Man movie to laser harps you can play by touching beams of light (note that the Arduino isn't necessarily powering all the other components it is controlling in these cases), so you could potentially control smart clothes that respond to your movement. A study by MIT's Media Lab also looked at putting piezoelectric material in shoes and found it could power an RFID transmitter, which can be used to broadcast information to other devices. So perhaps your gym shoes could also act as your gym ID. The 400 mAh battery Angelo mentions is pretty close to the charge of the batteries in small blood sugar monitors and over double the charge of some smaller hearing aid batteries.

But in relation to another recent science fair controversy, let's put Angelo in context. No, he did not "invent" a new way to "charge your phone with your shoes". Angelo himself points out that his work is more a proof of concept than anything close to a product, and his numbers show you really won't want to charge energy-hungry devices with it. And both MIT and DARPA, the branch of the US Department of Defense that funds crazy research schemes, have looked at similar systems. (DARPA has looked at piezo-boots that could help power soldiers' electronics.) Angelo and DARPA both realize the limits of this: with our current materials, there's only so much you can stuff into footwear before you run out of room or make it harder to walk. So instead, people have shifted to different goals for piezoelectricity: instead of having the material move with a single person who has to provide all the energy, we can place it where we know lots of people will walk and split the work. In Europe, high-foot-traffic areas have been covered with piezoelectric sidewalks to power lights, and in Japan, commuters walking through turnstiles in Tokyo and Shibuya stations help power the ticket readers and the signboards that guide them to their trains.

Two distinct images. The left image shows a turnstile for ticketing. There is a black strip of material running through it. The right image shows a figure with an explanation in Japanese describing the power-generating nature of the strip.

Piezoelectric strip in ticket turnstile in Japanese subway station, from 2008

But none of this means that Angelo hasn't done good technical work. It's just that his effort falls more on the engineering side than the science side. Which is perfectly fine, because Google has categories for electronics and inventions, and that other big science fair everyone talks about is technically a science AND engineering fair. Angelo's shoe modification is posted on Instructables and is something you could do in your home with consumer materials. The MIT Media Lab study, by contrast, worked with custom-made piezoelectrics from colleagues in another lab. So the fact that Angelo could still manage to charge a battery in a reasonable (if you don't need power right away) amount of time is incredibly impressive. And he also seems quite skilled at designing the circuits he used. As a 15-year-old, he easily seems to know more about the various aspects of his circuit he needs to consider than I did through most of my time in college (granted, you didn't need to know any particularly complicated circuitry to be a physics major). He's definitely off to a great start if he wants to study engineering or science in college.

A Neat Bit of UVA Engineering History

Consider this an incredibly belated addendum to my musing on what’s in a name in the professions of science and engineering. I pointed out that Purdue calls the academic unit devoted to studying materials a college of Materials Engineering, and doesn’t have science in the name. I learned from the professor I TA under this semester that my department is the reason UVA has a School of Engineering and Applied Sciences. The Department of Materials Science and Engineering at Virginia was founded in 1962. Evidently the school was only called the School of Engineering before that. When the MSE department was established, they added “Applied Sciences” (or maybe just the singular, I’m a bit unsure) to reflect the nature of research in the new department. Pretty cool.