I've come to believe that the vast majority of scientific or expert knowledge is obtained via two major epistemic operations: purification and pontification.
Let's say that knowledge, scientific or not, can be described as an inventory of context-prediction pairs: given a state of the world, action or event "A", I predict that I will next observe state of the world "B".
It seems that almost all our basic inventory of predictions is acquired at a young age, from physical to biological to social phenomena.
By 11, we understand that the volume of orange juice in a glass will remain the same if we transfer it to a different-looking glass. We may have seen or heard of enough people dying that we suspect that humans are mortal. We have a reasonable theory of mind and can assign internal states and motives to other entities, and we know that we can get them to do what we want in exchange for money.
I will argue that most mental operations employed by experts, from jurists to theoretical physicists, are nothing but a repackaging of these basic inferences.
In day-to-day situations, most of the predictions we come up with are immediate variations on "exemplars", particular predictions we have successfully made in a sufficiently similar context. What works for a glass of juice will work for a vase full of water. Extensions of our knowledge proceed mainly by continuity: we change only a few details at a time, moving from previously-tested examples to new predictive contexts.
We may not always correctly deduce which traits can vary, and which cannot, for the inference to still hold. For instance, many contemporary philosophers remain confused about whether having internal states requires having a brain-like information-processing apparatus or being a human-shaped pile of meat.
Intriguingly, the main failure mode of neophytes doesn't seem to be "being convinced of a wrong answer", but rather "finding all imaginable answers equally likely". When in doubt, our predictive brain goes for something like uniform random sampling.
For instance, imagine being on your first day as a forensic scientist on a crime scene. You may very well come up with all sorts of scenarios -- a sniper on a distant roof, a bullet ricocheting off a nearby lamp post, a point-blank shot from a wronged lover -- not knowing which ones are physically incompatible with the evidence. Or, more plausibly, imagine yourself learning economics from your favorite TV pundit; you can just as easily be convinced that lowering taxes will foster or stifle economic growth, and make you happy or homeless.
Thus, the most common operation by far when acquiring expert knowledge is "purification": learning what *cannot* happen. Concentrating the probability mass, if you want to be pedantic. What one is taught in first year undergrad in physics is mainly what, among our imaginary scenarios, is in fact impossible, rather than whole new scenarios we couldn't have imagined in the first place.
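If you'll forgive a programmer's doodle in the margin: the picture above -- a neophyte spreading probability uniformly over every imaginable scenario, and training supplying restrictive clues that rule scenarios out -- can be sketched in a few lines. Everything here (the scenario names, the clues) is invented for the illustration, not drawn from any real forensic procedure.

```python
# Toy sketch of "purification": expert training as the progressive
# elimination of scenarios, concentrating probability mass on what remains.
# Scenario names and clues are invented for the example.

scenarios = {
    "sniper on a distant roof",
    "ricochet off a lamp post",
    "point-blank shot from a wronged lover",
    "trebuchet",
}

def uniform(options):
    """The neophyte's prior: equal likelihood on everything imaginable."""
    p = 1 / len(options)
    return {s: p for s in options}

def purify(options, clue):
    """Keep only the scenarios compatible with a restrictive clue."""
    return {s for s in options if clue(s)}

# Training mostly teaches what *cannot* happen.
no_siege_engines = lambda s: "trebuchet" not in s
powder_burns_on_skin = lambda s: "point-blank" in s

remaining = purify(scenarios, no_siege_engines)
remaining = purify(remaining, powder_burns_on_skin)

print(uniform(scenarios))   # mass spread thinly over four scenarios
print(uniform(remaining))   # all mass on the single surviving scenario
```

Note that `purify` never adds a scenario: in this picture, expertise shrinks the hypothesis space rather than enlarging it, which is exactly the point of the paragraph above.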
Very few new basic intuitions are added to our repertoire, and when they are, they tend to be exemplars learned by rote and drilled into us until they stick forever and shape all our future endeavors -- from the symmetry-fanatic physicist to the Prisoner's dilemma-obsessed evolutionary biologist.
Purification is the basis for rigorous knowledge: drastically raising our standards for what qualifies as a plausible extension of known scenarios. "He was killed with a gun, or maybe a trebuchet" becomes "He was killed with one of three models of guns manufactured in Basel, Switzerland before 1957, when the factory was closed and remodelled into a chocolate museum", all other possibilities being now excluded by a precise inventory of restrictive clues.
The problem with purification is that, while it can guide us toward correct predictions, it also leads us to miss huge domains of valid application that happen to lie very far from the original examples we learned these intuitions from.
The informational structure of the universe appears to be high-dimensional and nonconvex. The domain of validity of an intuition is rarely a small neighborhood around some known examples; it is more like a hilly landscape with multiple summits separated by deep trenches.
This is where I bring in the second and deeper operation of expert knowledge, "pontification": creating a bridge between seemingly unrelated phenomena, by realizing that an intuition on object A can be translated into valid predictions on object B which has, a priori, no obvious relationship to A.
For instance, I am quite convinced that all our deep and abstract predictions about conservation laws (invariants and energy in physics, the behavior of money in an economic system) are simply repurposing our 11-year-old's intuitions about the volume of orange juice moving between glasses. Thermodynamics is living in a world where you have to pay for everything.
The abstractness of scientific knowledge is not a property of our concepts and intuitions themselves, but of the distance between the main exemplar A from which we learned an intuition and its eventual application context B -- a bit like driving a car is a somewhat abstracted application of our motor reflexes (with various mechanical transformations and transductions happening to repackage our sense of self and motion into a very different type of locomotion), and piloting a plane in a videogame even more so. Mathematical models feel very abstract because they employ our intuitions on language, like the way that syllables and words and symbols can be combined and moved around, to make predictions on objects that are utterly non-linguistic, such as waves and planets and chemical reactions.
Of course, most attempts at such translations will fail. Science is a lot of trying in every possible direction, and the art of the modeler is the purified understanding of when long-distance transfers of intuitions are likely to work.
A side-effect of this basic model of learning -- that we first purify until we get precise and reliable predictions in a restricted domain, then search widely for opportunities to pontify (translate these intuitions into equally precise but more "abstract" predictions in faraway domains of knowledge) and only very rarely acquire new basic intuitions -- is that it fits my anecdotal experience on cultural consumption.
If you'll allow a bland over-generalization: kids and neophytes appear to love repetition and purity; adults, or people further along the learning curve, are attracted to syncretism. I remember being very upset, as a child, when one of my birthday gifts turned out to be a set of Legos from a new series on time travellers. They had a pirate galleon equipped with lasers and flying through space-time, encountering knights and dragons and robots and whatnot. Maybe other kids would have been wowed by this debauchery of Rule of Cool, but I was keen on learning, and such confusion just wouldn't do. Knights had to be knights. Pirates had to be pirates. A fantasy novel had to have a young hero and a magic sword and an evil empire.
Now that I have a precisely-adjusted mental model of fantasy novels, and can predict every one of their beats with Swiss accuracy, I cannot bear rereading one, but absolutely relish incongruous transpositions, such as a pastiche of an 1880s-style socialist pamphlet set in a fantasy world. All the while, I love my detective novels "done right", and remain reluctant to delve into any genre that would require learning a whole new set of reference points.