Throughout history, income and wealth inequalities have been at the center of philosophical, economic, and political debates. In the twentieth century, John Rawls ([1971] 1999) offered one of the most influential philosophical defenses of egalitarian justice. In the twenty-first, the economist Thomas Piketty ([2013] 2014) has become a leading critic of persistent inequality.
Two of the twentieth century’s most prominent opponents of such views were Friedrich August von Hayek and Robert Nozick. Both insist that inequalities of outcome—that is, differences in the distribution of income or wealth—are not unjust ipso facto. This convergence is puzzling. Hayek, writing in the Scottish Enlightenment tradition, treats social order as an evolutionary product of rules that no mind designed (Boettke 1990, 63). Nozick, working in the Lockean natural-rights tradition, grounds justice in self-ownership and historical entitlement (Mack 2024). How can thinkers with such divergent normative starting points reach similar conclusions about distributive justice?
This article offers an epistemological explanation for this convergence. Both Hayek and Nozick are epistemological nonjustificationists: they reject the traditional view that a belief qualifies as knowledge only when the believer has internally available justification (cf. Nozick 1981, 172–76; Scheall 2018, 2). Hayek treats ordinary beliefs as fallible conjectures that can become true in dynamic social processes (Scheall 2018, 6). Nozick develops an externalist truth-tracking account: a belief counts as knowledge when it reliably covaries with a fact across nearby possible worlds, even if the subject lacks reflective access to that reliability (Kvanvig 2004, 205).
This article will show that a combined Hayek-Nozick theory of knowledge has implications for distributive egalitarianism. Because the information that could justify any patterned distribution emerges only ex post, no planner can certify a preferred pattern ex ante without disabling the very discovery mechanism on which societal learning depends. Engineering preconceived distributions, therefore, comes with epistemic costs.
The remainder of the article is organized as follows. The first section reconstructs Hayek’s and Nozick’s principal arguments against patterned equality and then contrasts their divergent philosophical foundations. The second section shows how a shared nonjustificationist epistemology shapes their conclusions. The third section considers implications for distributive justice.
Arguments against Distributive Egalitarianism
This section first discusses Friedrich Hayek’s and Robert Nozick’s arguments against a notion of distributive justice that seeks to create certain patterned outcomes, such as an equal distribution of wealth or income. This notion of distributive justice shall be called distributive egalitarianism hereafter. As will be seen, Hayek’s argument is based on the ideas of rule of law and cultural evolution, whereas Nozick’s argument is grounded in the idea of natural rights. In a second step, the section analyzes the differences between their respective defenses of distributive inegalitarianism.
Friedrich Hayek’s Thought on Equality
Hayek approaches the subject of equality through the lens of liberty, spontaneous order, and cultural evolution. In a free society, institutions like markets, common law, and language are products of cultural evolution: “a product of cumulative growth without ever having been designed by any one mind” (Hayek [1960] 2011, 78). These institutions make the coordination of divergent plans of large numbers of individuals possible. Social order arises through such evolved institutions because the institutions enable people to make effective use of knowledge that is dispersed among them. The consequence of this fragmented nature of knowledge is that successful coordination through central command is essentially impossible (229–31).
For Hayek, the correct conception of equality within a free society is equality before the law (i.e., general, abstract rules that apply equally and impersonally to all individuals). Such rules supply a predictable framework within which individuals can pursue their personal objectives and thereby make the best use of dispersed knowledge (223–25). In this context, Hayek treats the distributions of income and wealth as by-products of the institutional conditions of a free civilization. Because people differ in talents, information, and luck, general rules inevitably generate unequal results. But unequal results provide important signals to society; they show where resources are most valued and thus guide further adaptation (149–51). Any egalitarian planner who fixes final distributions must substitute conjecture for this feedback mechanism and thus risks suppressing the very information on which prosperity depends. Furthermore, attempts to impose material equality conflict with equality before the law; tailoring rules to persons and allowing discretionary reallocations violate the generality of the law (150).[1]
Hayek’s theory of equality and distributive justice is thus procedural, not material. General, abstract rules channel dispersed knowledge into socially useful patterns. Material inequalities that emerge are evidence that the discovery process is working, not grounds for government intervention that aims at patterned redistribution. Coercive schemes that have the goal of creating an egalitarian distribution of income or wealth suppress vital knowledge signals, erode the rule of law, and ultimately jeopardize the freedom on which cultural and economic advancement rest.[2]
Robert Nozick’s Thought on Equality
Nozick’s critique of distributive-egalitarian theories begins with the claim that “individuals have rights, and there are things no person or group may do to them” (Nozick 1974, ix). These rights are prepolitical and negative (10–11). They mark out kinds of treatment, like killing, theft, and enslavement, that are morally forbidden even when such treatment would maximize social welfare. Because the rights precede any contract or institution, they function in Nozick’s state of nature as side constraints on every agent’s action (28–31). Their foundation lies in the “fact of our separate existences” (33). Each person has her own goals. To force her to serve another’s ends is to treat her as a means rather than as an end in herself.
From these side constraints, Nozick deduces a historical entitlement account of justice in holdings. A distribution is just if and only if each particular holding has arisen through (1) just initial acquisition, (2) voluntary transfer, or (3) rectification of past injustice (150–53). Crucially, justice is backward-looking. It asks whether the process that led to a holding was permissible, not whether the resulting pattern satisfies some ideal of equality or utility.
With his Wilt Chamberlain argument, Nozick offers a direct critique of the focus on end states and patterned outcomes by distributive egalitarians. Suppose an initially egalitarian distribution D1 is judged perfectly just by Rawls’s ([1971] 1999, 72) difference principle or any other patterned rule. A million basketball fans then choose to pay Wilt Chamberlain twenty-five cents each to watch him play, producing a new distribution D2 in which Chamberlain is vastly richer. Because the transfers were voluntary and noncoercive, every transaction was, by hypothesis, just. Yet D2 now deviates from the favored pattern. To restore the pattern, the state must either forbid such exchanges in advance or seize part of Chamberlain’s earnings after the fact. Such measures, however, constantly interfere with people’s lives and violate their rights. Since liberty and voluntary exchange will constantly upset the initially imposed pattern, continuous respect for voluntary choice is incompatible with the perpetual maintenance of any patterned distribution (Nozick 1974, 160–64).[3] The result is that the only stable criterion is the historical one discussed above (i.e., whether holdings were acquired without violating side constraints). The historical view does incorporate a Lockean proviso, meaning that initial appropriations are legitimate only if they do not leave others worse off than they would have been had the resource remained unowned (Nozick 1974, 178).[4]
In sum, Nozick’s theory of justice combines strong moral side constraints with a process-based entitlement criterion. Because each person is a distinct center of agency, no pattern of holdings may be enforced at the cost of violating anyone’s liberty or property.
Divergent Foundations
Now that both arguments have been described, their different foundations can be analyzed. The parallel defenses of market-generated inequality offered by Hayek and Nozick grow out of divergent intellectual lineages. Understanding those divergent starting points clears the way for the next section’s analysis of the epistemological bridge that aligns both thinkers.
Hayek situates his liberalism in the Scottish Enlightenment tradition of David Hume, Adam Smith, and Adam Ferguson (Boettke 1990, 63). From Hume he takes the idea that rules of justice are conventions that emerge by gradual evolution and secure mutually beneficial cooperation without being anyone’s deliberate project (Hayek [1960] 2011, 124). From Smith he adopts the invisible hand insight that a market order can be the unintended consequence of individuals’ pursuit of their personal aims (Boettke 1990, 64). Echoing Ferguson’s (1767, 187) dictum that institutions are “the result of human action but not of human design,” he applies this thought to language, law, and morals, all of which owe their complex efficiency to countless small improvements spread by social imitation and selection (Hayek 1948, 6). Peter Boettke (1990, 61–64) recasts these themes as a single research program: explaining social coordination without invoking a top-down designer. For this, Hayek extends the Scottish Enlightenment theory of spontaneous order into a model of cultural evolution in which habits, rules, and institutions follow a Darwinian contingent mechanism of selection (Marciano 2009, 57–60).[5]
Nozick begins from a Lockean natural-rights ontology (Scanlon 1976, 4). At the outset, he argues that individuals’ rights constrain permissible actions by both persons and institutions (Nozick 1974, ix). These rights are prepolitical side constraints that limit the state to protecting life, liberty, and property. Another key influence is Immanuel Kant, from whom Nozick takes the central principle that “individuals are ends and not merely means” (Harris 1979, 180).[6] Historically, Nozick’s libertarian turn was catalyzed by conversations with Murray Rothbard, whose rights-based anarchism he sought to answer while preserving a minimal state (Mack 2024; Nozick 1974, xv).
Substantive Differences
From the above descriptions, it becomes apparent that the differences between the two approaches are not only genealogical but also substantive. The following are three substantive differences:
- Hayek and Nozick differ in their ontologies of social order. Hayek treats social order as an emergent process driven by evolved rules (Boettke 1990, 64), whereas Nozick (1974, ix) sees it as a moral constraint dictated by inviolable rights.
- Although both authors offer critiques of distributive egalitarianism, their core objects of critique differ. For Hayek ([1960] 2011, 426–27), the epistemic pretensions of constructivist rationalism are central. He argues that aiming at the creation of specific patterns of distribution undercuts the process of cultural evolution and is incompatible with the rule of law. Nozick (1974, 163) emphasizes that the ambition to impose a certain patterned distribution of holdings will lead to continuous interference with people’s lives and thereby to the violation of their rights.
- They furthermore differ on the role history plays in their theories. For Hayek, history is a contingent process supplying the cultural selection that vindicates general rules.[7] For Nozick, history plays the role of validating present holdings or providing grounds for rectification where holdings were acquired through the violation of rights. His entitlement theory is historical because distributive justice “depends upon how [the distribution] came about” (Nozick 1974, 153).
Further differences could be identified, but they are not the focus of this article. These substantive differences, together with the divergent genealogical foundations, show that the Hayekian and Nozickian critiques of distributive egalitarianism do not converge at the level of social or political philosophy. Hence, their common denominator must lie at a different level. The following section will argue that the commonality can be found at the epistemological level: the two thinkers share a nonjustificationist epistemology that underwrites their convergence on the legitimacy of unequal distributions of income or wealth.
A Shared Nonjustificationist Epistemology
This section discusses Hayekian and Nozickian understandings of knowledge with respect to traditional epistemology. First, the classical justified true belief (JTB) account of knowledge and the famous Gettier cases are presented. The section then lays out the perspectives of Nozick and Hayek and finally attempts to bridge them.
The JTB Backdrop and Gettier’s Challenge
Classical epistemology from Plato on usually equates knowledge with JTB (Gettier 1963, 121), in which S knows that p if and only if
1. p is true;
2. S believes p; and
3. S is justified in believing p.
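Stated compactly (in a symbolization that is mine rather than Gettier’s), the analysis reads, writing $B_S(p)$ for “S believes that p” and $J_S(p)$ for “S is justified in believing that p”:

$$S \text{ knows that } p \;\iff\; p \,\wedge\, B_S(p) \,\wedge\, J_S(p).$$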
Edmund Gettier’s 1963 paper famously shows that this definition is extensionally inadequate because a person can have a JTB that nevertheless falls short of knowledge. Gettier provides two examples of this:[8]
- The coin case: “Smith is told and has every reason to think that Jones will get the job, and he sees Jones count ten coins. So Smith concludes, ‘Whoever gets the job will have ten coins.’ Surprise: the firm hires Smith instead. By sheer luck Smith himself happens to have ten coins in his pocket, so his statement is true—but only because of a coincidence, not because his evidence really connected to the truth” (Gettier 1963, 122).
- The Ford case: “Smith often rides in Jones’s Ford, so he confidently says, ‘Jones owns a Ford.’ Wanting variety, he spins that into ‘Jones owns a Ford or Brown is in Barcelona.’ If at least one part is true, the whole sentence is true. Unknown to Smith, Jones has sold the car, but by coincidence Brown is indeed vacationing in Barcelona. Again, Smith’s sentence comes out true, yet only by accident” (Gettier 1963, 122–23).
In both cases, the result is a belief that is both true and justified yet acquired accidentally—and therefore not genuine knowledge. With this problem for the JTB account becoming apparent, the search for a fourth, anti-luck condition or for a wholly new analysis was on.
Nozickian Truth Tracking and the Rejection of Justification
Nozick (1981) takes a route to answering Gettier that avoids simply adding a fourth condition. His solution is to drop justification altogether and replace it with two counterfactual “tracking” conditions. On his view, S knows that p if and only if
1. p is true (Nozick 1981, 172);
2. S believes that p (172);
3. if p were not true, S would not believe that p (172); and
4. if p were true in nearby worlds, S would believe that p (176).
Conditions 3 (sensitivity) and 4 (adherence) are counterfactual reliability conditions. Together they ensure that the belief covaries with the fact across counterfactual spaces. Knowledge is no longer a matter of internal warrant (as in classical epistemology) but a matter of an external relation between a knower’s belief-forming method and the truth of the proposition.
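In the subjunctive-conditional notation standard in the possible-worlds literature (the symbolization is mine, not Nozick’s own; he also relativizes both conditions to the method by which the belief is formed), the two tracking conditions can be written as

$$\text{(sensitivity)}\quad \neg p \;\Box\!\!\rightarrow\; \neg B_S(p), \qquad\qquad \text{(adherence)}\quad p \;\Box\!\!\rightarrow\; B_S(p),$$

where $\Box\!\!\rightarrow$ is read as “if the antecedent held in the nearest possible worlds, the consequent would hold there as well.”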
Nozick recasts knowledge as a belief that tracks truth across nearby possible worlds. The belief would change if the fact changed and would persist if the fact persisted. By substituting counterfactual reliability for classical justification, he dissolves the Gettier threat without adding an ad hoc fourth condition and explains why perceptual error and lucky coincidence fail the knowledge test.
If we look at the two Gettier cases above, this becomes apparent. In the coin case, consider the nearest worlds in which the proposition is false: Smith still gets the job but does not happen to have ten coins in his pocket. In those worlds Smith would still believe that whoever gets the job has ten coins, because his evidence concerned Jones. So condition 3 (sensitivity) is not met, and Smith lacks knowledge even though his belief is true. The same holds in the Ford case: were Brown not in Barcelona, the disjunction would be false (Jones, after all, has sold his Ford), yet Smith would still believe it, since his confidence rested entirely on the false claim that Jones owns a Ford. Again, the case fails sensitivity, and Nozick’s test denies that Smith has knowledge.
In sum, Nozick replaces internal justification with an external, modal tie between belief and fact: a genuine knower’s belief would vary with the truth-value of p across nearby possible worlds. The Gettier threat dissolves because the accidental alignments that power his examples break due to the sensitivity requirement.[9]
Hayekian Knowledge Generation
For Hayek, knowledge consists of “fragments . . . existing in different minds” (Scheall 2018, 2) that institutions like markets must coordinate. He is not invoking the classical notion of JTB; if every fragment were already true and well justified, there would be no contradictory plans for prices to reconcile and hence no economic problem at all (Scheall 2018, 1–2). Hayek therefore dispenses with the truth condition (condition 1) at the individual level: an agent’s “knowledge” may be incomplete, even false, yet it still guides her actions. Indeed, he defines the data that enter each plan simply as “the things as they are known (or believed by) [the person] to exist” (Hayek [1937] 2014, 60). Those subjective data are “frequently contradictory” (Hayek [1945] 2014, 93). Thus, a Hayekian theory of knowledge must begin with fallible belief, not truth.
Nor does Hayek retain the justification clause (condition 3) as epistemologists usually understand it. All beliefs, he argues, arise either from the individual’s own encounters with the world or from adaptive neural linkages inherited from ancestors (Hayek 1952, 167–68). Because every belief has one of these origins, every belief is “justified” in that causal sense. But this finding does not amount to much. Hayek’s conception of knowledge includes information of which the agents holding it are themselves not aware (Hayek 1952, 19). Thus, requiring an agent to be able to state her justification would exclude the vast domain of tacit skills and dispositions she “merely manifests . . . in the discriminations which [she] performs” (19). Consequently, Scott Scheall (2018, 4–5) finds that “for Hayek ‘knowledge’ is nothing more than a synonym for belief.”
At first glance, this approach seems to collapse all knowledge into mere belief. However, Hayek does not finish here. He is not indifferent to truth. The central task of social science in a Hayekian framework is to explain “how individual beliefs . . . might come to be true (and, thus, how plans based on these beliefs come to be mutually consistent)” (Scheall 2018, 6). Signals like prices and institutions like competition operate as error-correction devices: they confront each agent’s expectations with the unexpected results of others’ plans, prompting revisions that—over time and across agents—select for truer beliefs. Scheall (2018, 6) formulates this project as “describ[ing] the environmental, psychological, and social processes whereby mere beliefs become true beliefs.”
Truth, then, is dynamic and systemic. At any moment what counts as knowledge inside a single mind may be mistaken. But through iterative feedback the sum of beliefs across minds can converge toward accuracy. That evolutionary picture explains how coordination and learning continually sift error from genuine knowledge.
Bridging the Gap
This subsection will show that Hayek’s truth discovery outlook can be aligned with Nozick’s tracking account because the two thinkers share a nonjustificationist epistemology. In other words, their theories of knowledge drop the notion of internal justification that is central in classical epistemology. Instead, knowledge depends on external factors. In such an epistemological framework, social rules must be judged by their ability to let beliefs correct themselves over time based on external signals.
Although Hayek’s evolutionary picture and Nozick’s counterfactual analysis are developed in different idioms, they converge on an externalist understanding of how error is corrected and truth secured. For Nozick, the reliability of a belief hinges on modal facts outside the believer’s ken. In the nearest worlds where the proposition is false, she would not believe it, and in those where it is true, she would (Nozick 1981, 172–76). For Hayek, the warrant for a belief likewise turns on forces beyond the individual mind. Institutions like competition or common law norms expose misalignments between expectations and reality and thereby select for truer beliefs over time (Hayek [1937] 2014, 63; Scheall 2018, 6). For both thinkers, truth is vindicated retrospectively by an external process rather than certified prospectively by internal reasons. This process is truth tracking in Nozick’s case and truth selection and discovery in the context of cultural evolution in Hayek’s.
To complete the bridge between Nozickian truth tracking and Hayekian truth discovery, the next step is to show that Nozick’s possible-worlds tracking conditions have a functional analogue in Hayek’s institutional discovery procedure. Two connections bring the two approaches together.
The first connection is counterfactual orientation. Nozick’s sensitivity clause (condition 3) holds that S knows that p only if, in the nearest possible worlds where p is false, S would not believe that p (Nozick 1981, 172). Prices in a competitive market supply agents with exactly this kind of counterfactual information. Each new bid or offer embodies an alternative, not-yet-actual allocation of resources (Hayek [1945] 2014, 93). When a trader discovers that her trade cannot be executed at the price she expected, she confronts a practical representation of the world in which her background belief is false. Hayekian discovery procedures thus materialize the neighboring worlds that Nozick treats abstractly: disequilibrium prices, shifts in relative profitability, and the entry of rival firms give agents continuous evidence of how things would stand if the propositions guiding their current plans were mistaken. Market learning performs, in real time, the counterfactual test that Nozick builds into the definition of knowledge.
The second connection is retention of reliability. Nozick’s adherence condition, condition 4, requires that, in nearby worlds where p remains true, S would still believe that p (Nozick 1981, 176). A Hayekian framework locates the social analogue of this forward-looking reliability in the durability of institutions that have survived past error correction. General rules of conduct that have survived over time (e.g., in a common law system) conserve the information revealed by earlier tests, ensuring that beliefs and practices which have proved successful under one set of circumstances remain available under modest variations of those circumstances (Hamowy 2003, 243). Because the rule-of-law framework protects past discoveries from arbitrary interference, the beliefs embedded in such norms continue to “track” the relevant facts as those facts persist across time and across adjacent states of the world. In short, Hayek’s constitutional safeguards give social learning the same modal stability that Nozick demands of individual knowers.
Taken together, these parallels show that the epistemic virtues Nozick formalizes at the level of single beliefs are instantiated, in Hayek’s view, by the very market and legal processes that liberal institutions nurture. The price system realizes the counterfactual scenarios presupposed by sensitivity, while the impersonal rule of law secures the cross-scenario reliability captured by adherence. Thus, the evolutionary selection of beliefs in a Hayekian order and the truth tracking of beliefs in Nozick’s analysis are not competing stories but complementary descriptions of the same truth-finding mechanism operating at different scales.
Implications for Distributive Justice
The common epistemic stance developed so far does not refute distributive egalitarianism. It does, however, pose a specific difficulty for any egalitarian theory that identifies justice with the continuous attainment of a patterned outcome—income equality, maximin, weighted utility, or the like. The theory of knowledge proposed above implies that such patterns presuppose a body of information that is never available to any person, agency, committee, or planning bureau. Hayek ([1945] 2014, 95) stresses that the knowledge of the “particular circumstances of time and place” on which productive plans depend is dispersed among many people and constantly changing. The Nozickian sensitivity and adherence tests imply the same point in possible-worlds terms: a central designer cannot know in advance whether her preferred pattern would still track individual entitlements in the nearest worlds that differ only slightly from the actual one.
An Epistemic Wilt Chamberlain Argument
A useful illustration is to recast Nozick’s Wilt Chamberlain story in epistemic rather than moral terms. Imagine that a planner succeeds, at t1, in engineering an egalitarian distribution D1. Now suppose a series of voluntary exchanges—basketball tickets, tip-driven gigs, viral crowdfunding—produces, by t2, a new distribution D2. Because each transaction reveals information that could not have been foreseen at t1 (who values an evening’s entertainment, which performer suddenly walks onto the cultural stage, what unexpected surge of generosity occurs), D2 reflects a body of dispersed knowledge that the planner necessarily lacked since it was generated between t1 and t2 and was thus unavailable ex ante. To maintain or restore D1 (or any other predetermined pattern), the planner must prohibit the information-revealing exchanges in advance or undo their results after the fact. In both cases, the corrective action interrupts the very process by which the new knowledge entered the system. The epistemic Wilt Chamberlain argument therefore does not claim that redistribution violates rights; it claims that a planner cannot know enough, quickly enough, to recalibrate holdings without simultaneously suppressing the discovery of further, unforeseen information. Thus, engineering any patterned distribution in a society has epistemic costs.
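The temporal structure of the argument can be put schematically (the notation is not Nozick’s or Hayek’s; it is simply a way of making the asymmetry explicit). Let $I(t)$ stand for the dispersed information that voluntary exchanges have revealed up to time $t$, and let the planner’s redistributive rule $R$ be a function of the information available when it is adopted:

$$I(t_1) \subsetneq I(t_2), \qquad R = f\big(I(t_1)\big), \qquad \text{whereas the pattern-conformity of } D_2 \text{ depends on } I(t_2) \setminus I(t_1).$$

Any intervention that restores $D_1$ suppresses some of the exchanges that would have generated $I(t_2) \setminus I(t_1)$, so the planner cannot close the informational gap without destroying part of the evidence needed to close it.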
This diagnosis does not imply a complete rejection of egalitarian concerns. It suggests, more modestly, that distributive principles framed as target patterns face a dilemma. Either they tolerate the informational flux generated by voluntary exchange—thereby allowing their favored pattern to be continuously upset—or they suppress that flux and, with it, the feedback that would reveal whether the imposed allocation actually achieves its goal.
To elaborate, consider Rawls’s difference principle, which requires that “social and economic inequalities are to be arranged so that they are . . . to the greatest expected benefit of the least advantaged” (Rawls [1971] 1999, 72). Imagine that a planning bureau imposes at time t1 a distribution Dopt that satisfies this principle. Between t1 and t2 several transactions take place, so that at t2 the new distribution Dnew diverges from Dopt. It is now unclear whether the remaining inequalities still maximize the expected benefit of the least advantaged, and the planner has no way of knowing whether the voluntary transactions did so. She confronts a knowledge problem. To decide whether Dnew still satisfies the difference principle, she must predict how the present pattern of incentives will shape the future productive prospects and bargaining positions of the least advantaged (Rawls [1971] 1999, 70–77). Those prospects in turn depend on decentralized judgments about skills, risk, demand, and innovation—the very “particular circumstances of time and place” that come to light only through market interaction (Hayek [1945] 2014, 95). Since that information was produced by the exchanges that displaced Dopt, bringing the system back to Dopt would require offsetting those exchanges, and such an intervention would extinguish the informational gains the exchanges generated. The planner therefore faces the same dilemma as in the epistemic Wilt Chamberlain case: either intervene continuously, recreating the knowledge deficit at each step, or tolerate departures from the target pattern.
A Rawlsian Response
In Political Liberalism, Rawls (1993, 283) argues that the Nozickian understanding of the difference principle, sketched above, is incorrect.[10] The difference principle is not meant to police the results of every discrete exchange, he writes. Instead, it guides the architecture of a publicly announced and predictable basic structure—tax schedules, transfer rules, and property institutions—within which individuals can form stable expectations (Rawls 1993, 283–84). Once that structure is in place, citizens earn and hold entitlements on the known condition that certain transfers will occur. Because the tax is itself part of the publicly known structure, no one’s legitimate expectations are upset by the tax that finances redistribution.
Let us suppose that Rawls is correct and interpret the difference principle as guiding public policy architecture. The knowledge problem still applies for this more structural version of distributive egalitarianism. Even a fixed set of rules cannot lock expectations in place, because the very exchanges that occur under those rules generate new information—about tastes, technologies, and opportunities—that no planner can anticipate at t1. A basic structure calibrated to maximize the life prospects of the least advantaged today may, after a wave of innovation or a demographic shift, fall short of that same criterion tomorrow. Restoring compliance would require new legislation, new forecasts, and thus fresh leaps beyond what the actors already know. In effect, the designer must still choose between two imperfect options: freeze the basic structure and watch it drift away from the intended pattern, or continually retune it on the basis of data that emerge only after private agents have already rearranged their plans. Either strategy recreates the informational deficit.
Rawls’s appeal to a stable framework softens the practical intrusiveness of redistribution, yet the knowledge problem persists. The facts that determine whether the difference principle is met evolve endogenously. Therefore, not even a “once-and-for-all” egalitarian design can avoid the epistemic dilemma that voluntary exchange perpetually creates for patterned understandings of distributive justice.
General Rules for a Dynamic World
Hayek’s ([1960] 2011, 300–304) concessions to a tax-financed safety net are conditioned on the aid being delivered by general rules that minimize price distortions.[11] Nozick’s (1974, 178) proviso requires compensation when original appropriations would otherwise leave others worse off than they would have been had the resource remained unowned. Both concessions acknowledge distributive worries while insisting that rectification must operate through procedures like general rules or side constraint–respecting transfers that leave the discovery mechanism largely intact.
Consequently, the shared Hayek-Nozick epistemology does not deny the concerns of distributive egalitarianism. It invites its proponents to recast their aims in procedural and dynamic terms. An egalitarian policy compatible with this framework would focus on open access to markets, transparency of rules, and the removal of entry barriers that impede the flow of new information, rather than on the repeated imposition of a favored end state. Whether such a procedural egalitarianism could satisfy the deeper moral aspirations of pattern-oriented theories remains an open question, but the epistemic challenge identifies the price of pursuing those aspirations through static redistributive design.
Conclusion
Friedrich Hayek and Robert Nozick confront distributive egalitarianism from opposite ends of the liberal canon, yet their convergence is neither accidental nor merely ideological. This article reconstructed their parallel defenses of market-generated inequality and traced that parallel to a shared epistemic premise. Both thinkers deny that knowledge requires internally accessible justification. Hayek replaces justification with a process of social learning powered by dispersed information; Nozick replaces it with counterfactual reliability. Hayek’s evolutionary account shows how prices and general legal rules institutionalize the formal Nozickian account of knowledge for society at large.
Their combined epistemology has implications for distributive egalitarianism. Since the information that could justify a specific distribution emerges only ex post, no planner can implement a preferred pattern ex ante without disabling societal learning and thereby incurring epistemic costs. The result is a presumption in favor of institutions that leave tracking and discovery intact. However, this epistemology does not foreclose egalitarian goals; it invites their reformulation in procedural and dynamic terms. An egalitarian policy compatible with this epistemology would aim at general rules that do not impede the flow of new information.
The goal of this article has been to evaluate neither those egalitarian moral aims nor the positions of Hayek and Nozick. Its contributions are solely to the field of epistemology. First, it identifies an epistemological convergence in the critiques of distributive egalitarianism by Hayek and Nozick. Second, it shows that the pursuit of static redistributive designs carries a knowledge cost that must be acknowledged and, if possible, justified.
Notes
1. Hayek concedes a narrow welfare role for government. Universal rules may finance basic relief for people incapable of self-support, and minimal healthcare, education, and insurance, provided these schemes are administered impersonally and do not aim at levelling incomes (Hayek [1960] 2011, 405). The goal is to prevent destitution without corrupting the spontaneous processes that generate knowledge and growth.
2. Hayek supports proportional taxation or modest progressive taxation to offset regressive indirect taxes but rejects steep progressive rates whose purpose is to achieve a more just distribution of income (Hayek [1960] 2011, 437). Progressive scales let a majority impose burdens it does not share, violating the very principle of general law (441).
3. Patterned theorists may reply—as Rawls (1993, 283) does—that the difference principle applies to a predictable and announced system of public law (for instance, through a publicly known tax-and-transfer scheme), not to daily snapshots of holdings as Nozick (1974, 160–64) argues in the Wilt Chamberlain example. This objection will be discussed further in the third section.
4. Nozick (1974, 182) believes that the proviso will typically be satisfied under a market system and the mechanism of competition. When it is not satisfied, compensation—rather than expropriation—should be used to remedy the harm (180–81).
5. Hayek believed that the selection mechanism in his theory of cultural evolution differs significantly from the Darwinian one in biological evolution. However, this belief was based on a misunderstanding. For a detailed discussion of this, see Marciano (2009).
6. Harris (1979) argues that Kant’s own understanding of this principle does not “justify the prohibition of all benevolent and paternalistic activities by the state” and that Nozick’s position rather “rest[s] on a kind of ‘common-sense’ interpretation” of the principle. This in itself, however, does not refute Nozick’s position. For the primary source of Nozick’s understanding of the Kantian principle, see Nozick (1974, 30–33).
7. Hayek’s ([1960] 2011, 297–99) discussion of the development of the rule of law in The Constitution of Liberty is a good elaboration of this perspective.
8. The examples have been slightly modified from those in Gettier’s original article. However, their essence remains intact.
9. Massimo Dell’Utri (2005) has questioned whether Nozick can coherently combine a robust correspondence view of truth with the thoroughgoing fallibilism suggested by his rejection of internally accessible justification. On Dell’Utri’s reading, Nozick’s appeal to facts as truth-makers tends to reintroduce an infallibilist standpoint that sits uneasily with the idea that all our beliefs remain revisable, making the combination of “truth” and “fallibilism” in Nozick’s framework a “dubious combination.” I cannot do justice to that debate here. In the present context, Nozick’s tracking account is used in a more modest way, namely, as a schematic model of externalist reliability that helps highlight the structural affinities between his epistemology and Hayek’s evolutionary picture of truth discovery, rather than as a fully satisfactory theory of truth.
10. Whether Nozick’s understanding of the difference principle is indeed incorrect cannot be evaluated here. This would be a subject for another article.
11. Jeremy Shearmur (1996, 77, 106, 116–17, 143) argues that Hayek’s scattered remarks about social insurance and welfare provision do not generate a clear limiting principle on state activity and may be compatible with a fairly expansive welfare state. From that perspective, Hayek’s appeal to general rules risks underdetermining the size and scope of government. The present article largely brackets that intra-Hayekian debate. This article’s conditional claim is that, wherever Hayek does allow for a safety net, he treats it as constrained by general rules that preserve price signals and the decentralized discovery process. The epistemic argument developed here turns on that structural feature.