I want to tell a story about analytic philosophy over the last ten years. It is not a clean story, and it does not end with a moral, but it does have a recognizable arc. If you had asked me in, say, 2005 what analytic philosophy was about, I probably would have said something like “language, logic, and modality.” If you asked me the same question in 2025, I would say “structure.”
By structure, I mean something like this. Instead of asking what words mean, or what is possible in other worlds, philosophers have become increasingly obsessed with the architecture of reality itself. What is fundamental. What depends on what. Whether the world has sharp edges or blurry ones. How social facts latch onto physical facts. How norms attach to nature. What kinds of explanations bottom out and which ones keep going.
This is not a return to old-fashioned metaphysics exactly. It is more like a retooling. The new metaphysician is not cataloging entities so much as mapping dependency relations. The new epistemologist is not just asking whether a belief counts as knowledge but how inquiry should proceed under uncertainty and social constraint. The new ethicist is not just asking what is right but how to act when unsure what right even means.
I am going to call this cluster of moves the structural turn. I am not claiming everyone signed a manifesto. I am claiming that if you skim the major journals from 2015 to 2025, this is the shape that emerges whether anyone intended it or not.
1. Metaphysics, or: What Depends on What and Why That Might Not Be Well Defined
1.1 Grounding and the Dream of a Single “In Virtue Of”
If you want one concept that captures the metaphysical mood of the decade, it is grounding. Grounding is supposed to be the non-causal relation that explains why some facts obtain in virtue of others. The statue exists in virtue of the clay. The mental exists in virtue of the physical. The moral exists in virtue of the natural. And so on.
At first glance, this looks like a godsend. We get to talk about metaphysical explanation without pretending it is causation. We get hierarchy without time. We get dependence without dynamism.
The big question is whether grounding is one thing.
Jonathan Schaffer and Kit Fine think it is. On their view, grounding is a primitive relation that structures reality into levels. Fundamental facts ground derivative facts. The job of metaphysics is to identify the base and describe how everything else flows from it.
Schaffer even gives us formal tools for this. He adapts structural equation modeling from causal inference and uses it to model metaphysical dependence. The idea is roughly this: just as causal equations tell us how changing one variable would change another over time, grounding equations tell us how changing the grounds would change the grounded. This lets us make sense of metaphysical counterfactuals like “if the physical facts had been different, the mental facts would have been different.”
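The borrowed machinery is easy to see in miniature. Here is a toy structural-equations model of grounding in Python; the two-level physical/mental/social setup and all variable names are my own illustration, not a reconstruction of any particular paper.

```python
# A toy structural-equations model of metaphysical dependence, loosely in the
# spirit of adapting causal modeling to grounding. Everything here is an
# invented illustration.

# Exogenous ("fundamental") variables: the physical base.
physical = {"P1": True, "P2": False}

# Structural equations: each derivative variable is a function of its grounds.
equations = {
    "M": lambda v: v["P1"] and not v["P2"],  # a mental fact grounded in P1, P2
    "S": lambda v: v["M"],                   # a social fact grounded in M
}

def evaluate(base):
    """Solve the model bottom-up: derivative facts flow from the base."""
    v = dict(base)
    for var, eq in equations.items():  # insertion order respects the hierarchy
        v[var] = eq(v)
    return v

actual = evaluate(physical)

# A metaphysical counterfactual: intervene on the base and recompute.
counterfactual = evaluate({"P1": False, "P2": False})

print(actual["M"], counterfactual["M"])  # True False
```

The counterfactual reading falls out for free: changing the grounds and re-solving the equations tells you how the grounded facts would have been different.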
This picture is elegant. It also makes metaphysics feel like a serious theoretical science.
Jessica Wilson thinks this picture is misleading. On her view, capital-G Grounding is not doing real work. What does the work are many different small-g relations: realization, constitution, the determinable-determinate relation, type identity, and so on. When we say “the mental is grounded in the physical,” what we really mean is something more specific like “the mental is realized by the physical.” Lumping all of these together under one label hides the machinery.
This is sometimes called the coarseness problem. Saying “A grounds B” often tells you less than saying exactly how A relates to B. Unitary grounding theorists reply that this is fine. “Cause” is also coarse, and we still think it is a real relation with many species. Kicking and pushing are different, but they are both causal. Likewise, realization and constitution might be species of grounding.
Things get even messier when we look at the logic of grounding. If grounding is one relation, it should probably be irreflexive, asymmetric, and transitive. Nothing grounds itself. If A grounds B, B does not ground A. If A grounds B and B grounds C, then A grounds C.
Each of these principles has been challenged. There are plausible cases where transitivity seems to fail, especially once we allow contrastive explanation. There are proposals involving metaphysical loops, where entities mutually ground one another. At that point, the nice hierarchical picture starts to wobble.
Why does any of this matter? Because grounding is doing enormous work across philosophy. Physicalism is often stated as a grounding thesis. So is moral naturalism. If grounding turns out to be disunified or incoherent, then “naturalism” might mean very different things in different domains, and metaphysics loses its unifying backbone.
1.2 Vagueness in the World Itself
For a long time, analytic philosophers were pretty confident that vagueness lived in language, not in reality. Mountains do not have vague boundaries. Words do. The world itself is perfectly precise. We just talk about it sloppily.
This view was supported by a famous argument from Gareth Evans against vague identity. Suppose it is vague whether a is identical to b. Then b has the property of being only vaguely identical to a, a property a lacks, since a is determinately identical to itself. By Leibniz’s Law, a and b are not identical after all. So vague identity is impossible.
Over the last decade, this consensus has cracked.
Elizabeth Barnes and J.R.G. Williams argue that Evans’ argument assumes what is at issue: at most it rules out vague identity between objects, not worldly indeterminacy as such. They propose instead that the world itself might be unsettled. On their view, reality corresponds not to one perfectly precise world but to a range of precise “ersatz” worlds. A proposition is metaphysically indeterminate if it is true in some of these worlds and false in others, and reality has not settled which one is actual.
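The precisificational picture can be modeled in a few lines. A sketch, with invented worlds and propositions:

```python
# A toy version of the ersatz-worlds picture: reality corresponds to a set of
# fully precise worlds, and a proposition is metaphysically indeterminate when
# those worlds disagree. The worlds and propositions are invented.

worlds = [
    {"mountain_includes_boulder": True,  "sea_battle_tomorrow": True},
    {"mountain_includes_boulder": False, "sea_battle_tomorrow": True},
]

def status(prop):
    values = {w[prop] for w in worlds}
    if values == {True}:
        return "determinately true"
    if values == {False}:
        return "determinately false"
    return "indeterminate"  # true on some precise worlds, false on others

print(status("mountain_includes_boulder"))  # indeterminate
print(status("sea_battle_tomorrow"))        # determinately true
```

The structural point is that indeterminacy lives in the disagreement among precise worlds, not in any single world being fuzzy.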
Jessica Wilson proposes a different model. She suggests that indeterminacy can arise when an object instantiates a determinable property without instantiating any single determinate. The particle has a position determinable but no precise position determinate. This is not semantic vagueness and not mere ignorance. It is a specific way of being.
Quantum mechanics looms large here. Superposition looks a lot like metaphysical indeterminacy. Treating the wavefunction as representing genuine indeterminacy may be more parsimonious than adding hidden variables or branching worlds. Some have even suggested that Everettian many worlds is just metaphysical indeterminacy in disguise.
If the world is vague, what logic applies to it? Does classical logic still hold, or do we need truth-value gaps or gluts? What about the open future? Is “there will be a sea battle tomorrow” already true or false? Or is reality genuinely unsettled?
What once looked like a technical issue about borderline cases has turned into a fundamental question about the shape of reality.
1.3 Social Reality and the Trouble with Construction
Social ontology has become unavoidable. Race, gender, money, institutions, and social roles all have real causal power. At the same time, they are clearly not fundamental in the way quarks are.
Sally Haslanger’s work reframed the debate by introducing ameliorative analysis. Instead of asking what our concept of race is, or what race really is, she asks what the concept should be for legitimate purposes. Her answer defines race and gender in terms of social hierarchy and oppression.
This creates a dilemma. If race is socially constructed and biologically unreal, should we eliminate it? But if we eliminate it, how do we track injustice? Haslanger resolves this by being a realist about social kinds. Race is real as a social structure, even if it is not a biological one.
Critics have raised worries about inclusion, especially regarding trans identities. If womanhood is defined by subordination on the basis of observed sex, what about trans women who have not been subordinated in that way? Or trans men who have?
This has pushed some philosophers toward conferralism. On this view, social properties are conferred in context by social recognition. You are a woman in a context if you are treated as one by those with standing. This shifts focus from static structure to dynamic interaction.
Race debates have similarly splintered. Some defend social realism. Others argue for a revised biological realism based on population genetics. Others are eliminativists. Still others treat race as primarily political and historical.
A lurking question is whether there can be a unified theory of social kinds at all. Money, gender, and race might all be constructed, but they might be constructed in importantly different ways. Pluralism may be unavoidable.
2. Philosophy of Mind, or: Explaining Why Consciousness Feels Like a Problem
2.1 The Meta-Problem of Consciousness
The hard problem of consciousness asks how physical processes give rise to subjective experience. The meta-problem asks why we think there is a hard problem.
David Chalmers frames the meta-problem as explaining our problem intuitions. Why do we say things like “there is something it is like” or “no physical explanation seems sufficient”? This is an “easy” problem in the technical sense. It concerns behavior and judgments.
Illusionists like Keith Frankish and Daniel Dennett argue that solving the meta-problem is enough. If we can explain why brains generate reports about ineffable qualia, there is nothing left to explain. Consciousness, as ordinarily conceived, is a user illusion.
Realists reply that this misses the residue. Even if we fully explain why a system claims to have experiences, the question remains whether it actually has them. If introspection reveals the essence of experience, physicalism is in trouble. If introspection is systematically misleading, illusionists owe us a story about how a physical process produces the appearance of non-physical properties.
This debate increasingly intersects with epistemology. If our beliefs about consciousness are shaped by evolutionary and cognitive biases, should we trust them? This mirrors evolutionary debunking arguments in ethics.
2.2 Predictive Processing and Representation Anxiety
Predictive processing has become the dominant framework in cognitive science. The brain is modeled as a prediction error minimizer. It maintains a generative model of the world and updates it to reduce surprise.
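The core loop is simple enough to write down. A minimal sketch of prediction-error minimization, in the style of a delta-rule update; the signal, initial guess, and learning rate are arbitrary illustrative choices, not parameters from any model in the literature.

```python
# A minimal sketch of prediction-error minimization: an agent tracks a hidden
# quantity by repeatedly nudging its estimate toward incoming signals.

signal = [10.0, 10.2, 9.8, 10.1, 10.0]  # noisy observations of a stable world
estimate = 0.0                          # the agent's initial generative guess
learning_rate = 0.5

for s in signal:
    error = s - estimate               # prediction error: world minus model
    estimate += learning_rate * error  # update the model to reduce surprise

print(round(estimate, 2))  # converging toward the true value near 10
```

Real predictive-processing models are hierarchical and precision-weighted, but the basic currency is the same: the gap between what the model expects and what arrives.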
The philosophical fight is over representation. Andy Clark argues that predictive processing vindicates representationalism. The generative models function as maps. The brain builds representations because they are useful.
Others argue that this is a category mistake. The models are mathematical descriptions of system dynamics, not internal symbols with semantic content. The brain is regulating itself, not representing the world.
The dark room problem sharpens the issue. If the brain wants to minimize prediction error, why not sit in a dark, silent room? The standard reply appeals to higher-level expectations about action. We predict that we will explore. But this collapses desire into prediction. Wanting becomes expecting.
Is that right? Does it capture the phenomenology of desire? Or are we losing something important?
Some propose structural representation as a compromise. A state counts as a representation if it stands in a structural homomorphism with its target and is used for control. The open question is whether predictive processing models meet this criterion or whether they are just control loops all the way down.
3. Epistemology, or: How to Reason While Being Human
3.1 Inquiry Versus Belief
Traditional epistemology focuses on belief states. Zetetic epistemology focuses on inquiry.
Jane Friedman points out a tension. Evidentialism says you should believe what your evidence supports. Zetetic norms say you should take the means necessary to answer your question.
Suppose your evidence supports a belief to a high degree, but you could easily gather more evidence. Should you believe now or suspend judgment and keep investigating? Being a good believer and being a good inquirer can pull apart.
Some respond by making epistemic norms instrumental. Beliefs are tools for inquiry. Others argue that belief is just a stopping point of inquiry, and inquiry norms are primary. Others try to dissolve the tension by separating norms governing states from norms governing actions.
The deeper question is whether rationality is static or dynamic. Is it about matching mind to world at a time, or about improving that match over time?
3.2 Higher-Order Evidence and Rational Akrasia
Higher-order evidence is evidence about your own reliability.
The hypoxia case is standard. A pilot calculates that she has enough fuel. Then she learns she is cognitively impaired. Should she trust the calculation?
Conciliationists say higher-order evidence defeats first-order justification. Your confidence in the belief should match your confidence in your reliability.
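One toy way to render the conciliationist slogan numerically. The mixing rule and the 0.5 “no idea” fallback below are my own illustrative assumptions, not a formula from the literature.

```python
# A toy conciliationist discount: blend the calculated answer with an
# uninformative fallback, weighted by how likely it is that the calculation
# can be trusted. Entirely an illustrative model.

def conciliate(first_order_credence, reliability):
    return reliability * first_order_credence + (1 - reliability) * 0.5

# The pilot's fuel calculation says 0.95; then she learns she may be hypoxic.
print(conciliate(0.95, 0.9))  # mild doubt:    0.9 * 0.95 + 0.1 * 0.5 = 0.905
print(conciliate(0.95, 0.3))  # serious doubt: 0.3 * 0.95 + 0.7 * 0.5 = 0.635
```

The level splitter's complaint, in these terms, is that no such blending is mandatory: the first-order credence and the reliability estimate can rationally sit side by side unmerged.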
Level splitters disagree. Maria Lasonen-Aarnio argues that you can be justified in believing P and justified in believing that your belief in P is unreliable. Rationality is about responding to reasons, not about internal coherence. The result is epistemic akrasia: believing P while believing you should not believe P.
This is uncomfortable, but maybe reality is uncomfortable. The debate pits internalist coherence against externalist truth tracking.
3.3 Epistemic Injustice Grows Teeth
Fricker’s original framework focused on credibility deficits and hermeneutical gaps. More recent work emphasizes agency and power.
Contributory injustice occurs when marginalized groups have the concepts to understand their experience, but dominant groups refuse to take them up. This is not absence. It is active blockage.
Epistemic exploitation highlights the wrong of demanding that marginalized people do the labor of educating others about oppression. A request for evidence can be virtuous inquiry or vicious exploitation depending on context.
This forces epistemology to confront its own norms. Asking for reasons is not always innocent.
4. Language and Logic, or: Fixing Our Tools While Standing on Them
4.1 Conceptual Engineering and the Implementation Problem
Conceptual engineering aims to improve our concepts, not just analyze them.
The implementation problem asks how this is supposed to work. If meanings are fixed by social usage, how can philosophers change them from the armchair?
If redefining “woman” does not change public meaning, engineering is toothless. If it does change meaning, we need a story about metasemantic control.
Some appeal to speaker meaning and gradual uptake. Others argue that we cannot control meaning directly and must engage in conceptual activism, hoping the metasemantics follow.
There is also the problem of topic continuity. If we change the meaning of “freedom” to make it compatible with determinism, are we still talking about freedom, or have we changed the subject? Solving a problem by changing topics is not obviously progress.
4.2 Logical Pluralism and the Fear of Collapse
Logical pluralists argue that there is more than one correct logic. Validity is truth preservation across cases, and different notions of case yield different logics.
Williamson argues for monism and anti-exceptionalism. Logic is a scientific theory of the most general structure of reality, and we should choose the logic best supported by abductive comparison.
Pluralism faces the collapse problem. If one logic says you must believe P and another says you may refrain, normativity seems to side with the stronger logic. Pluralism collapses into monism in practice.
Some propose domain-specific logics. Quantum theory might need one logic, database theory another. There may be no single global logic.
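The pluralist's definition of validity can be made concrete. A sketch in which classical cases and strong Kleene (K3) cases disagree about the law of excluded middle; the encoding is my own.

```python
# Validity as truth preservation across "cases": classical cases assign only
# T and F, while strong Kleene (K3) cases also allow a gap N.

def neg(v):
    return {"T": "F", "F": "T", "N": "N"}[v]

def disj(a, b):
    # K3 disjunction: take the "truer" of the two values.
    order = {"F": 0, "N": 1, "T": 2}
    return max(a, b, key=lambda x: order[x])

def lem_valid(values):
    """Is P-or-not-P true in every case built from the given truth values?"""
    return all(disj(p, neg(p)) == "T" for p in values)

print(lem_valid(["T", "F"]))       # classical cases: excluded middle holds
print(lem_valid(["T", "F", "N"]))  # K3 cases: fails, since N-or-N is N
```

Same definition of validity, different stock of cases, different verdicts: that is the pluralist's core claim in eighteen lines.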
4.3 Slurs and the Semantics of Offense
Slurs test the boundary between semantics and pragmatics.
Pure expressivism says slurs just express contempt. Hybrid views add descriptive content plus derogatory force.
Appropriation creates trouble. When members of a target group reclaim a slur, the valence flips.
Prohibitionist views explain this by taboo violation. Echoic accounts explain it by ironic quotation. A remaining puzzle is subject-dependent semantics: how a word’s meaning changes based on who says it.
5. Ethics, or: What to Do When Everything Is Uncertain
5.1 Moral Uncertainty
Normative uncertainty asks how to act when unsure which moral theory is correct.
Maximizing expected choiceworthiness (MEC) treats moral theories like hypotheses and weights them by credence. The problem is intertheoretic comparison. How do you compare utility to duty?
Variance normalization tries to fix this by equalizing influence. Critics say this is arbitrary. Others give up and act on their favorite theory.
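Variance normalization is easiest to see with numbers. A sketch with invented theories, credences, and scores:

```python
# Expected choiceworthiness with variance normalization. The theories,
# options, and scores below are toy numbers for illustration.

import statistics

# Each theory scores the available options on its own arbitrary scale.
theories = {
    "totalism":   {"credence": 0.6, "scores": {"A": 100.0, "B": 101.0}},
    "deontology": {"credence": 0.4, "scores": {"A": 1.0,   "B": -1.0}},
}

def normalize(scores):
    """Rescale to mean 0, standard deviation 1, so no theory dominates
    merely by using bigger numbers."""
    mu = statistics.mean(scores.values())
    sd = statistics.pstdev(scores.values())
    return {o: (s - mu) / sd for o, s in scores.items()}

def expected_choiceworthiness(option):
    return sum(
        t["credence"] * normalize(t["scores"])[option]
        for t in theories.values()
    )

for option in ("A", "B"):
    print(option, round(expected_choiceworthiness(option), 2))
```

With these toy numbers the verdict flips: on the raw scales option A wins, because totalism's hundreds swamp deontology's single digits, while after normalization option B wins. The critics' charge of arbitrariness is a charge against the choice of rescaling, since other normalizations give other verdicts.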
The deeper issue is whether moral theories share a common scale at all.
5.2 Population Ethics and Impossibility
The repugnant conclusion says that a huge population with barely good lives can be better than a smaller population with excellent lives.
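The totalist arithmetic behind the conclusion is brutally simple. A sketch with arbitrary welfare numbers:

```python
# Total-welfare arithmetic behind the repugnant conclusion: enough barely-good
# lives outweigh any fixed number of excellent ones. The numbers are
# arbitrary illustrations.

excellent = 10_000_000 * 90.0        # ten million lives at very high welfare
barely_good = 10_000_000_000 * 0.1   # ten billion lives barely worth living

print(excellent, barely_good, barely_good > excellent)
```

And since the large population can always be made larger, no level of excellence in the small population escapes the comparison.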
Arrhenius shows that you cannot avoid this while satisfying a few plausible axioms. Something has to give.
Responses include abandoning transitivity, embracing skepticism about large numbers, or biting the bullet and accepting the conclusion.
No option is comfortable.
5.3 Evolutionary Debunking
If evolution shaped our moral beliefs for fitness, not truth, why trust them?
Realists appeal to third factors. Critics say this begs the question. Others argue debunking spreads too far. Math and logic were also selected for usefulness.
Vavova argues that debunking only works if we assume massive error. We can correct for bias using our existing beliefs.
5.4 Transformative Experience
Some choices change who you are. You cannot know their value in advance because you cannot know what it is like, and because the evaluator changes.
Decision theory breaks. Testimony does not solve the problem. Some suggest we choose for discovery, not utility.
This threatens the rational agent model itself.
6. AI Alignment, or: When Philosophy Stops Being Optional
AI alignment forces all of these issues into practice.
If intelligence and goals are orthogonal, value loading is hard. Inverse reinforcement learning tries to infer values from behavior. But behavior is messy.
Should AI follow revealed preferences or idealized ones? What counts as idealization? This is moral philosophy in code.
Constitutional AI makes the problem explicit. We are writing a constitution for a non human agent. Every unresolved normative question matters.
If AI systems appear conscious, the other minds problem becomes urgent. If consciousness is an illusion, what does that imply for artificial agents?
Summary Table
| Domain | Core question | Technical pivot | Key debate |
|---|---|---|---|
| Metaphysics | Grounding unity | Structural equation modeling | Unitary realism vs pluralism |
| Metaphysics | Worldly indeterminacy | Precisificational models | Semantic vs metaphysical vagueness |
| Mind | Meta problem | Problem intuitions | Realism vs illusionism |
| Mind | Predictive processing | Free energy principle | Representation vs enactivism |
| Epistemology | Inquiry norms | Instrumental principles | Evidentialism vs zeteticism |
| Epistemology | Higher order evidence | Level splitting | Conciliation vs right reasons |
| Logic | Logical choice | Anti-exceptionalism | Monism vs pluralism |
| Ethics | Moral uncertainty | Variance normalization | MEC vs incomparability |
| Ethics | Population ethics | Impossibility theorems | Totalism vs impossibility acceptance |
If there is a unifying theme here, it is this. Analytic philosophy spent decades refining its tools. Now it is asking whether the tools themselves are adequate to the shape of reality, agency, and value. The answers are not in. But the questions are sharper than they have been in a long time.
And yes, some of this sounds bonkers. But it is the good kind of bonkers.