On Object-Oriented Programming and Clean Code

The word "Clean Code" is not subjecting some code to be clean rather a cult of developing and designing software, that claims produce clean software, I will argue that it does not. The motive for this reference, long, document is that often times when I'm talking with friends or coworker and I joke about the Clean Code mindset, some people are surprised that it's even possible to argue against it, for many it has been the Bible of software design. Same thing happens when I talk about OOP, and in both cases I'm in the "oh, how shall I start?" state So I will try to make this document here a reference, for my very honest thoughts about Clean Code and Object-Oriented Programming (OOP henceforwards).

Clean Code is a symptom of OOP's deeper failures. For two decades, developers have followed a set of principles that promise clarity, maintainability, and professional craftsmanship. Functions should be small, two to four lines. Every class does one thing. Polymorphism should replace conditionals and keep abstracting away the details. All of this has produced software that is harder to understand than the “unclean” version it replaced, architectures so layered that finding business logic requires archaeological expeditions, and systems that crawl on hardware a thousand times faster than what ran snappier software in 1995.

Clean Code

I claim that OOP's fundamental design philosophy is wrong. The abstractions OOP encourages, encapsulation that fragments related logic, polymorphism that hides control flow, inheritance hierarchies that couple things that should be separate, and all the whatnot, are what create the pressure that Clean Code “solves” so badly. We'll examine OOP's failures later.

Decomposition

“The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that.”

The recommended function length in Clean Code is two to four lines. This advice is not only wrong; it produces code that is actively harder to understand. The blind pursuit of small functions confuses a proxy metric (line count) with the actual goal (comprehensibility), and that confusion leads to codebases fragmented beyond recognition.

To understand this, we need the distinction between deep modules and shallow modules. Good software design creates deep modules: units that provide significant functionality through simple interfaces. The Unix file API (open, read, write, close, seek) is deep. Five functions with simple signatures hide an immense amount of complexity: file systems, caching, permissions, concurrency, multiple storage backends, and so on. The interface hides the complexity; that's the value of abstraction. You can use these functions without understanding their implementations. The abstraction pays for itself many times over.

Clean Code produces shallow modules: functions that do almost nothing, with interfaces nearly as complex as their implementations. When a function has four lines and no branches, there's nothing to abstract. The “interface” is the implementation with a name attached. Consider the arithmetic: abstraction value equals implementation complexity minus interface complexity. If both are tiny, the value is zero or negative; you've added indirection without reducing cognitive load. Each shallow function becomes another node in an entangled web that requires you to chase definitions rather than read code linearly.
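To make that arithmetic concrete, here's a small TypeScript sketch of my own (the function names are hypothetical, not from the book). The first function is deep: one simple signature hiding retries, backoff, and error handling. The second is shallow: its signature carries exactly as much information as its body.

// Deep: a simple interface over genuinely complex behavior.
const fetchJsonWithRetry = async (url: string, attempts = 3): Promise<unknown> => {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url)
      if (res.ok) return await res.json()
    } catch {
      // network error: fall through and retry
    }
    await new Promise(resolve => setTimeout(resolve, 2 ** i * 100)) // exponential backoff
  }
  throw new Error(`failed after ${attempts} attempts: ${url}`)
}

// Shallow: the "abstraction" restates its implementation.
const isGreaterThanZero = (n: number): boolean => n > 0

Calling isGreaterThanZero saves the reader nothing over reading n > 0; it only adds a definition to chase.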

Okay, but what does that actually mean? Let's have a look at some code.

Here is example code from the Clean Code book, presented as the correct way to write software; it's a prime number generator:

import java.util.ArrayList;

public class PrimeGenerator {
    private static int[] primes;
    private static ArrayList<Integer> multiplesOfPrimeFactors;

    protected static int[] generate(int n) {
        primes = new int[n];
        multiplesOfPrimeFactors = new ArrayList<Integer>();
        set2AsFirstPrime();
        checkOddNumbersForSubsequentPrimes();
        return primes;
    }

    private static void set2AsFirstPrime() {
        primes[0] = 2;
        multiplesOfPrimeFactors.add(2);
    }

    private static void checkOddNumbersForSubsequentPrimes() {
        int primeIndex = 1;
        for (int candidate = 3; primeIndex < primes.length; candidate += 2) {
            if (isPrime(candidate))
                primes[primeIndex++] = candidate;
        }
    }

    private static boolean isPrime(int candidate) {
        if (isLeastRelevantMultipleOfLargerPrimeFactor(candidate)) {
            multiplesOfPrimeFactors.add(candidate);
            return false;
        }
        return isNotMultipleOfAnyPreviousPrimeFactor(candidate);
    }

    private static boolean isLeastRelevantMultipleOfLargerPrimeFactor(int candidate) {
        int nextLargerPrimeFactor = primes[multiplesOfPrimeFactors.size()];
        int leastRelevantMultiple = nextLargerPrimeFactor * nextLargerPrimeFactor;
        return candidate == leastRelevantMultiple;
    }

    private static boolean isNotMultipleOfAnyPreviousPrimeFactor(int candidate) {
        for (int n = 1; n < multiplesOfPrimeFactors.size(); n++) {
            if (isMultipleOfNthPrimeFactor(candidate, n))
                return false;
        }
        return true;
    }

    private static boolean isMultipleOfNthPrimeFactor(int candidate, int n) {
        return candidate == smallestOddNthMultipleNotLessThanCandidate(candidate, n);
    }

    private static int smallestOddNthMultipleNotLessThanCandidate(int candidate, int n) {
        int multiple = multiplesOfPrimeFactors.get(n);
        while (multiple < candidate)
            multiple += 2 * primes[n];
        multiplesOfPrimeFactors.set(n, multiple);
        return multiple;
    }
}

Do you understand it? I don't think you do; I can't either, and neither could the Clean Code author when he revisited it 18 years later. He actually says something very interesting about it:

“I could see all the moving parts, but I could not figure out why those moving parts generated a list of prime numbers.”

The decomposition obscured rather than revealed.

When you decompose aggressively, you don't get independent modules. You get entangled modules: pieces that cannot be understood in isolation. isNotMultipleOfAnyPreviousPrimeFactor calls isMultipleOfNthPrimeFactor, which calls smallestOddNthMultipleNotLessThanCandidate. To understand any one, you must understand all three. They share implicit state (multiplesOfPrimeFactors). They have implicit ordering constraints (the candidate parameter must be non-decreasing across calls). None of this is documented in the interfaces. This is worse than a single longer function. With one function, you read it once, top to bottom. With entangled functions, you're jumping between definitions, holding multiple frames in your head, trying to reconstruct the data flow that the decomposition deliberately obscured.
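For contrast, here is roughly the same job written as a single function, a sketch in TypeScript rather than the book's Java. The shared state, the ordering constraint, and the termination condition are all visible on one screen:

// Generate the first n primes by trial division against the primes found so far.
const generatePrimes = (n: number): number[] => {
  if (n <= 0) return []
  const primes = [2]
  for (let candidate = 3; primes.length < n; candidate += 2) {
    let isPrime = true
    for (const p of primes) {
      if (p * p > candidate) break // no divisor up to sqrt(candidate): it's prime
      if (candidate % p === 0) {
        isPrime = false
        break
      }
    }
    if (isPrime) primes.push(candidate)
  }
  return primes
}

generatePrimes(6) // [2, 3, 5, 7, 11, 13]

This is not the book's square optimization verbatim (checking p * p > candidate achieves the same effect), but you can verify its correctness without a bike ride.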

The Clean Code book rails against side effects:

“Side effects are lies. Your function promises to do one thing, but it also does other hidden things.”

Yet smallestOddNthMultipleNotLessThanCandidate, which sounds like a pure computation, mutates multiplesOfPrimeFactors. The decomposition hid the side effect behind an interface that appears pure. The code is more dishonest, not less. The author admits he had to go on an hour-long bike ride to understand why the algorithm's square optimization was correct; a comment could have saved him from that, but Clean Code argues that writing comments is an “acknowledgment of failure of writing expressive code.” I could say a lot about how problematic that position is, but here I'm interested in software design more than coding style.

Abstraction

Clean Code treats abstraction as an unqualified good. The practice goes as follows: create abstractions, hide implementations, and depend on interfaces, not concretions. But abstraction has costs. Every abstraction is a bet that the interface captures what matters and hides what doesn't. When that bet is wrong, when the hidden details do matter (which, believe me, happens much more often than you think, much much more), abstraction becomes an obstacle. And the bet is wrong more often than abstraction enthusiasts admit.

Let's look at the code again. The PrimeGenerator's methods have interfaces (their signatures) and implementations (their bodies). But the interfaces don't capture crucial constraints: smallestOddNthMultipleNotLessThanCandidate requires that candidate never decrease, and isMultipleOfNthPrimeFactor depends on prior calls having updated multiplesOfPrimeFactors. The order of calls to different methods matters. These are implementation details that leak through to callers, and you cannot use these functions correctly without understanding their implementations. The abstraction promises an independence it cannot deliver.

This is endemic to OOP. Because objects encapsulate state, related logic is scattered across method boundaries. Each method sees its own slice. The interactions between slices (the actual system behavior) are implicit in the call patterns; to understand the system, you must mentally simulate the calls. The “abstraction” doesn't help here. Instead it fragments the picture you need to see whole. What appears modular is actually tangled; the tangling is just hidden behind method boundaries rather than visible in a single function.

The Clean Code philosophy descends from Parnas's information hiding: modules should hide design decisions likely to change. This is sound in principle. In practice, however, OOP developers hide everything, whether it's likely to change or not. The result is code where finding anything requires traversing abstraction layers. You want to know what happens when a user clicks a button? Follow the event handler to the controller to the service to the repository to the database adapter. Each layer hides the next, and each layer has its own interface that doesn't quite say what happens behind it. Good design makes common cases obvious and hides irrelevant details. OOP-style abstraction hides everything equally, making common cases as hard to trace as edge cases.

Clean Code emphasizes depending on abstractions, not implementations. But an interface without a contract is a fiction. A Java interface tells you the method names and types; it does not tell you what a method actually does, what preconditions must hold, what postconditions are guaranteed, what side effects occur, or which exceptions might be thrown and when (more on that under exception handling). Without this information, you cannot use the interface correctly. You must read the implementation, defeating the purpose of the interface. OOP languages provide no mechanism for specifying contracts, and Clean Code provides no guidance for documenting them informally.
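Here's a small TypeScript sketch of the problem (the KVStore interface and both factories are hypothetical). Both implementations satisfy the same signature, yet they honor incompatible contracts, and nothing in the type tells the caller which one they got:

interface KVStore {
  get(key: string): string
}

const makeStrictStore = (data: Map<string, string>): KVStore => ({
  // Contract A: the key must exist; a missing key is a programming error.
  get(key) {
    const value = data.get(key)
    if (value === undefined) throw new Error(`missing key: ${key}`)
    return value
  },
})

const makeLenientStore = (data: Map<string, string>): KVStore => ({
  // Contract B: a missing key silently yields the empty string.
  get(key) {
    return data.get(key) ?? ""
  },
})

Code written against the strict contract breaks when handed the lenient store, and vice versa; the compiler accepts either.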

Polymorphism

“Replace conditional with polymorphism.”

This is a core Clean Code practice: instead of if-else or switch statements that handle different cases, create a class hierarchy where each case is a subclass. I claim that this advice is architecturally dangerous.

A switch statement puts related logic in one place. All the cases are visible together: you can see what happens in each case, and you can add a new case by adding one block. Polymorphism, by contrast, scatters the cases across multiple files; each subclass handles one case. To understand all the cases, you must find and read all the subclasses. To add a new case, you must create a new file, ensure it's registered wherever subclasses are discovered, and hope you haven't missed any abstract methods. This is presented as a benefit: “The Open/Closed Principle! You can add new cases without modifying existing code!” But you do need to understand existing code to add a new case: the base class contract, the existing subclasses' conventions, the registration mechanism. And when requirements change in a way that affects all cases simultaneously, which happens constantly, you must modify every subclass anyway.

Polymorphism also has a runtime cost. Virtual method dispatch means the CPU cannot see through the abstraction to predict which code will run. Branch predictors fail more often. Instruction caches hold code for all subclasses, not just the path being executed. Inlining is impossible because the target is unknown at compile time. Data layout is fragmented because each subclass has its own fields. For any single polymorphic call, this cost is small. But Clean Code applies polymorphism everywhere, and when every design decision adds indirection, the costs compound. Modern software is often 10-100x slower than equivalent software from decades ago, running on hardware 1000x faster. Where did the performance go? Virtual calls, cache misses, and memory allocated for objects that exist only to enable polymorphism account for a good share of it.

There's a fundamental asymmetry in program structure (often called the expression problem): you can organize code by operations (what you do to data) or by data types (what the data is). OOP organizes by data types; each class handles a type. Adding a new type is easy: add a new class. But adding a new operation is hard: you must modify every class. Functional programming organizes by operations; each function handles an operation. Adding a new operation is easy: add a new function. Adding a new type is hard: you must modify every function. Clean Code assumes you will add types, not operations, which is often wrong. Many systems have stable types and evolving operations. For these systems, OOP's organization is exactly backwards, and polymorphism makes the common change, adding a new behavior, unnecessarily painful.
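Here's the asymmetry in miniature, a TypeScript sketch of my own (a fuller shape example appears in the polymorphism section later):

type Shape =
  | { kind: 'circle'; r: number }
  | { kind: 'square'; side: number }

// Organized by operation: adding a new operation is one new function...
const area = (s: Shape): number =>
  s.kind === 'circle' ? Math.PI * s.r ** 2 : s.side ** 2

const perimeter = (s: Shape): number =>
  s.kind === 'circle' ? 2 * Math.PI * s.r : 4 * s.side

// ...but adding a new variant ('triangle') means editing every function above.
// A class hierarchy inverts the trade: a new type is one new class,
// while a new operation touches every class.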

Performance

“Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs.”

Clean Code interprets this Knuth quote as a license to ignore performance entirely: abstract first, optimize never (or “later,” which means never). I claim that this interpretation is both wrong and dangerous.

Each Clean Code practice has a performance cost. Small methods mean more call overhead and less inlining. Polymorphism means virtual dispatch and poor branch prediction. Encapsulation means accessor methods instead of direct field access. Dependency injection means runtime lookup and extra allocations. Many small objects mean cache misses and allocation pressure.

Individually, each cost is negligible; collectively, they are catastrophic. A simple CRUD operation might traverse a stack like this:

HTTP handler → router → controller → service → repository → ORM → connection pool → database driver.

Each layer allocates objects, makes virtual calls, checks preconditions, catches and rethrows exceptions. The actual database query is a fraction of the total work. This is why “enterprise” software requires an enterprise's worth of hardware.

“Optimize later” assumes you can optimize later, that performance problems will be localized, a hot function you can tune without restructuring. Often, performance problems are architectural: they stem from too many layers of indirection, data layouts that cause cache misses, allocation patterns that pressure the garbage collector, and polymorphism that prevents inlining of hot paths. You cannot fix architectural performance problems without rewriting the architecture.

And rewrites don't happen. The business has deadlines, the original developers left, and whatnot; the test suite assumes the current structure, and a slow system does not really "hurt" anyone enough to force the issue.

Proposal

Clean Code fails because it optimizes for the wrong things: function length, abstraction count, test-first ritual. These are proxies for good design, pursued to extremes where they become anti-patterns. What should we optimize for instead?

If you read my past post about Names (Programmers and software developers lost the plot on naming their tools), you can tell that I love optimizing for cognitive load. The goal of software design is reducing the cognitive load required to understand and modify code. Everything else is instrumental. For example, small functions reduce cognitive load when they create meaningful abstractions; they increase cognitive load when they fragment coherent logic. Polymorphism reduces cognitive load when the cases are truly independent, but increases it when you need to understand the interaction between cases.

Abstraction reduces cognitive load when it hides irrelevant details; it increases cognitive load when it hides details you need to see. There is no universal rule (“functions should be small,” “always use polymorphism”) that reliably reduces cognitive load. You must evaluate each design decision in context.

Locality. Related code should be together. This is the opposite of Clean Code's scattering approach. If understanding A requires understanding B, put A and B together. If modifying A usually requires modifying B, put A and B together. If A and B share implicit constraints, put A and B together so the constraints are visible. This is why a 50-line function is often better than five 10-line functions.

Finally, data-oriented design. OOP organizes code around objects: bundles of data with methods attached. This fragments data and scatters operations. Data-oriented design organizes code around data flows: what data exists, how it transforms, where it goes. It keeps related data together (which improves cache behavior) and related operations together (which, again, improves comprehensibility). Many performance improvements come from thinking about data layout: arrays instead of linked structures, structs-of-arrays instead of arrays-of-structs, batched operations instead of per-object calls.
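A sketch of the structs-of-arrays idea in TypeScript (the particle example is hypothetical). The batched update walks contiguous memory instead of hopping between scattered heap objects:

const N = 10_000

// Array-of-structs: each particle is its own heap object.
type Particle = { x: number; y: number; vx: number; vy: number }
const particles: Particle[] = Array.from({ length: N }, () => ({ x: 0, y: 0, vx: 1, vy: 1 }))

// Struct-of-arrays: each field is one contiguous buffer.
const soa = {
  x: new Float64Array(N),
  y: new Float64Array(N),
  vx: new Float64Array(N).fill(1),
  vy: new Float64Array(N).fill(1),
}

// One batched transformation over contiguous data.
const step = (dt: number): void => {
  for (let i = 0; i < N; i++) {
    soa.x[i] += soa.vx[i] * dt
    soa.y[i] += soa.vy[i] * dt
  }
}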

Object-Oriented Programming

Clean Code has the following symptoms: aggressive decomposition that obscures rather than reveals, abstractions that leak, polymorphism that scatters logic, and a systematic blindness to performance. But these are symptoms, not causes. I claim that the cause is OOP itself, a paradigm that has dominated industry practice for three decades while consistently failing to deliver on its promises, and that created many of the problems that Clean Code, Design Patterns, SOLID, and the rest try to solve, all of them badly.

This is not a fringe position (which is what most people assume when I talk about OOP, as if it were my own weird conspiracy theory). Edsger Dijkstra called object-oriented programs “alternatives to correct ones.” Alan Kay, who coined the term “object-oriented,” is a vocal critic of what OOP has become, saying “I invented the term, and I can tell you I did not have C++ in mind” and calling Java “the most distressing thing to happen to computing since MS-DOS.” Joe Armstrong, creator of Erlang, stated flatly that “objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds.” These are not marginal figures; they are architects of systems that actually work at scale.

OOP originated from a simple need: greater customization for data types. In procedural programming, you had records: flat collections of fields. Classes extended records by attaching methods, functions that operate on the record's data. The idea was actually elegant: bundle data with the operations that manipulate it, and encapsulate the internals so they can change without affecting callers.

This seemed like good engineering practice, and in the narrow case of protecting access to mutable state, it was defensible. But OOP didn't stop there. It inflated this modest abstraction mechanism into an entire programming philosophy, insisting that everything should be modeled as objects, that all code should be organized around class hierarchies, and that all behavior should be attached to data. This was a category error: the techniques that work for encapsulating mutable state do not generalize to organizing entire programs.

Methods

Instance methods are what distinguish OOP from procedural programming. A method is a function attached to a class, with an implicit this or self parameter referring to the object instance. It seems like a minor syntactic convenience, but it introduces fundamental problems with reusability and coupling.

Consider a simple example [1]. Here's a method that formats a user's display name:

class User {
  id: string // needed by hasFriend below
  firstName: string
  lastName?: string
  middleName?: string
  address: string
  friends: User[]
  
  constructor(firstName: string, lastName?: string, middleName?: string) {
    this.id = crypto.randomUUID()
    this.firstName = firstName
    this.lastName = lastName
    this.middleName = middleName
    this.address = ""
    this.friends = []
  }

  getDisplayName(): string {
    return [this.firstName, this.middleName, this.lastName]
      .filter(Boolean)
      .join(" ")
  }
  
  hasFriend(id: string): boolean {
    return this.friends.some(f => f.id === id)
  }
}

And here's an equivalent function:

const getDisplayName = (user: {
  firstName: string
  lastName?: string
  middleName?: string
} | null | undefined): string | undefined => {
  if (!user) return undefined
  return [user.firstName, user.middleName, user.lastName]
    .filter(Boolean)
    .join(" ")
}

How do these differ?

  1. The method is tightly coupled to a specific class. It depends not on an interface but on the concrete User class: it drags in not only the data it actually needs (firstName, lastName, middleName) but also every other field and method that exists on User, including address, friends, hasFriend, and anything added later. Anyone wanting to reuse getDisplayName must either inherit from User (bringing in all the baggage) or use delegation (adding boilerplate).
  2. The method cannot be used without an instance. You cannot call getDisplayName on a plain object literal with the same fields; you must first construct a User:

    // OOP: Requires class instantiation
    const name = new User("Alexander", "Danilov").getDisplayName()
    
    // Cannot do this:
    ({firstName: "Alexander"}).getDisplayName() // Error: object has no such method
    
    // FP: Works with any object matching the shape
    getDisplayName({firstName: "Alexander"}) // "Alexander"
    
  3. The method conflates interface and implementation. Its signature says it takes no arguments, but it actually depends on three fields, and this dependency is invisible at the call site. The function's signature, by contrast, explicitly declares what data it needs.

The coupling problem becomes severe when you need to share logic across different types. Suppose you have an Npc class that also has firstName, lastName, and middleName. How do you reuse getDisplayName?

class Npc {
  firstName: string
  lastName?: string
  color: string
  
  // How to reuse getDisplayName from User class?
}

In OOP, your options are ugly:

You can make Npc extend User. But this forces Npc to include address, friends, hasFriend, and everything else User has. You drag along fields and methods you don't need:

class Npc extends User {
  constructor(name: string, surname: string) {
    super(name, surname) // Npc now carries address, friends, and hasFriend for no reason
  }
}

This violates the Liskov Substitution Principle (an Npc is not a User), creates meaningless fields, and couples unrelated concepts.

You can also modify the existing code to create a Nameable class:

class Nameable {
  name: string
  surname: string
  getDisplayName() { ... }
}

class Friendable {
  friends: User[]
  hasFriend(id: string) { ... }
}

// But User needs both! No multiple inheritance.
// Which to inherit, which to embed?
class User {
  nameable: Nameable
  friendable: Friendable
  // Now getDisplayName is accessed as user.nameable.getDisplayName()
  // The interface has been broken.
}

This breaks existing code, forces awkward delegation, and doesn't even work if you need both behaviors (most languages lack multiple inheritance for good reason).

Finally, you can simply write the same method in both classes, which violates DRY.

Compare with the functional approach:

type BaseUser = {
  id: string
  name: string
  surname: string
}

type User = BaseUser & {
  address: string
  friendIds: string[]
}

type Npc = BaseUser

// Function specifies only what it needs
const getDisplayName = (entity: {
  name: string
  surname?: string
}) => {
  return [entity.name, entity.surname]
    .filter(Boolean)
    .join(" ")
}

const hasFriend = (entity: { friendIds: string[] }, friendId: string) => {
  return entity.friendIds.includes(friendId)
}

// Works with anything that has the right shape
getDisplayName(user)  // OK
getDisplayName(npc)   // OK
getDisplayName({name: "Charlie", surname: "Brown"}) // OK

hasFriend(user, "123") // OK
hasFriend(npc, "123")  // Compile error: Npc has no friendIds

Types compose freely through intersection (&) and union (|). Functions declare precisely what they need through their parameter types. There's no hierarchy to manage, no base classes to extract, no forced coupling between unrelated concepts.

OOP languages provide multiple mechanisms for method overriding, each with subtle differences that create opportunities for bugs:

public void GetDisplayName() // Cannot be overridden in subclasses

public virtual void GetDisplayName() // Can be overridden

public override void GetDisplayName() // Overrides parent method

public sealed override void GetDisplayName() // Overrides but prevents further override

public new void GetDisplayName() // "Hides" the parent method - behavior depends on 
                                 // reference type at call site, not runtime type

The new keyword is particularly dangerous. The method called depends on the compile-time type of the reference, not the runtime type of the object:

class Parent {
  public void Print() => Console.WriteLine("Parent");
}

class Child : Parent {
  public new void Print() => Console.WriteLine("Child");
}

Child c = new Child();
c.Print();           // "Child"

Parent p = c;        // Same object, different reference type
p.Print();           // "Parent" - Surprise!

This is a source of subtle bugs that functional programming simply doesn't have. In FP, there's nothing to override:

const getParentDisplayName = (entity: Parent) => { ... }

const getChildDisplayName = (entity: Child) => {
  // Can reuse parent logic if needed
  if (someCondition) return getParentDisplayName(entity)
  // Or do something different
  return ...
}

Everything is explicit. There are no surprises about which code runs.

Inheritance

Even among OOP developers, inheritance has become an anti-pattern. The standard advice is now “prefer composition over inheritance.” This is an extraordinary admission: the mechanism that was supposed to be OOP's killer feature for code reuse is now officially discouraged.

Joe Armstrong articulated the fundamental problem: “You wanted a banana, but what you got was a gorilla holding the banana and the entire jungle.”

You cannot inherit specific fields or methods; you inherit the entire class. Want to reuse one method? Take everything:

class User {
  id: string
  name: string
  surname: string
  address: string
  friends: User[]
  paymentMethods: PaymentMethod[]
  subscriptionTier: string
  loginHistory: Date[]
  preferences: Preferences
  
  getDisplayName() { ... }
  hasFriend(id: string) { ... }
  addPaymentMethod(pm: PaymentMethod) { ... }
  upgradeTier(tier: string) { ... }
  recordLogin() { ... }
  updatePreferences(prefs: Partial<Preferences>) { ... }
}

// I just want getDisplayName!
class Npc extends User {
  constructor(name: string, surname: string) {
    // Forced to initialize everything
    super(
      generateId(),
      name,
      surname,
      "",           // No address needed
      [],           // No friends
      [],           // No payment methods
      "none",       // No subscription
      [],           // No login history
      defaultPrefs  // No preferences
    )
  }
  
  // Npc now has all these methods that make no sense:
  // addPaymentMethod? upgradeTier? recordLogin?
}

Inheritance creates tight coupling between parent and child classes. Seemingly safe modifications to a parent class can break child classes in subtle ways:

// Version 1
class HashSet {
    private int addCount = 0;
    
    public void add(Object o) {
        addCount++;
        // add to set...
    }
    
    public void addAll(Collection c) {
        for (Object o : c) {
            add(o);
        }
    }
    
    public int getAddCount() { return addCount; }
}

class InstrumentedSet extends HashSet {
    private int addCount = 0;
    
    @Override
    public void add(Object o) {
        addCount++;
        super.add(o);
    }
    
    @Override
    public void addAll(Collection c) {
        addCount += c.size();
        super.addAll(c);
    }
}

Have you found the bug? addAll in the child adds c.size() to addCount, then calls super.addAll, which calls add for each element, which increments addCount again. You've double-counted.

You might fix this by not overriding addAll. But then:

// Version 2: Parent implementation changes
class HashSet {
    public void addAll(Collection c) {
        // Optimization: bulk insert without calling add()
        internalBulkAdd(c);
    }
}

Now InstrumentedSet.addAll doesn't count at all, because add() is never called. The parent's internal implementation detail has broken the child. You cannot examine a base class in isolation and know whether changes are safe; you must understand all descendants.
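The standard fix for this fragility is to forward instead of inherit. A minimal sketch in TypeScript (the class name is mine): because the wrapper composes a plain Set rather than extending it, the inner implementation's call patterns can change freely without breaking the count.

class CountingSet<T> {
  private addCount = 0
  constructor(private readonly inner: Set<T> = new Set()) {}

  add(value: T): void {
    this.addCount++ // counting happens here, and only here
    this.inner.add(value)
  }

  addAll(values: Iterable<T>): void {
    for (const v of values) this.add(v) // always routes through our own add()
  }

  getAddCount(): number {
    return this.addCount
  }
}

Note the cost, though: every exposed method is hand-written delegation, which is exactly the boilerplate problem discussed below.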

Multiple inheritance creates the “diamond problem”: if a class inherits from two classes that share a common ancestor, which version of the ancestor's methods should it use?

      Drawable
       /    \
    Shape  Colorable
       \    /
    ColoredShape

If Shape and Colorable both override Drawable.render(), what does ColoredShape.render() do? Different languages handle this differently (C++ has virtual inheritance, Python has MRO, Java forbids it), but all solutions are complex and error-prone.

Most OOP languages “solved” this by prohibiting multiple inheritance of implementation. But this cripples inheritance as a reuse mechanism, you can only inherit from one class, better make it count!

The standard advice is “use composition instead of inheritance.” But composition in OOP requires explicit delegation, i.e. boilerplate code for every method you want to expose:

class Nameable {
  name: string
  surname: string
  getDisplayName() { return `${this.name} ${this.surname}` }
}

class User {
  private nameable: Nameable
  private friendable: Friendable
  
  // Delegation boilerplate for every method
  getDisplayName() { return this.nameable.getDisplayName() }
  hasFriend(id: string) { return this.friendable.hasFriend(id) }
  addFriend(friend: User) { return this.friendable.addFriend(friend) }
  removeFriend(id: string) { return this.friendable.removeFriend(id) }
  // ... and so on for every method
}

This is an OnceAndOnlyOnce violation at scale. Every method of every component must be delegated; if a component adds a method, every class that composes it must add the delegation.

Compare with FP type composition:

type Nameable = { name: string; surname: string }
type Friendable = { friendIds: string[] }
type User = Nameable & Friendable & { address: string }

// Functions work on any type with the right fields
const getDisplayName = (n: Nameable) => `${n.name} ${n.surname}`
const hasFriend = (f: Friendable, id: string) => f.friendIds.includes(id)

const user: User = { name: "Alex", surname: "Smith", friendIds: ["1"], address: "..." }
getDisplayName(user)    // Works - User is Nameable
hasFriend(user, "1")    // Works - User is Friendable

Encapsulation

Encapsulation is sold as OOP's safety mechanism: protect internal state from outside access. Objects hide their data; you interact only through their public interface.

I claim that encapsulation is a Trojan horse. It sells shared mutable state by making it appear safe. It doesn't eliminate mutable state; it hides it. And hidden mutable state is more dangerous than visible mutable state, because you cannot see what's happening.

Rich Hickey, creator of Clojure, articulated the core issue:

“I think that large object-oriented programs struggle with increasing complexity as you build this large object graph of mutable objects. You know, trying to understand and keep in your mind what will happen when you call a method and what will the side effects be.”

The human brain can hold approximately five items in working memory. Programming with mutable state consumes those slots. When reasoning about OOP code, you must track:

  • Which objects exist
  • What state each object is in
  • What references each object holds to other objects
  • What any given method call might mutate
  • Whether other parts of the code hold references to the same objects

This is cognitive overhead that has nothing to do with the problem you're solving. It's pure tax imposed by the paradigm.

Consider a concrete example:

class ShoppingCart {
    private List<Item> items;
    private DiscountStrategy discount;
    private TaxCalculator tax;
    
    public void addItem(Item item) {
        items.add(item);
        // Does this affect the discount? The tax? Other carts?
    }
    
    public double getTotal() {
        double subtotal = items.stream().mapToDouble(Item::getPrice).sum();
        double discounted = discount.apply(subtotal, items);
        return tax.calculate(discounted);
        // What state is discount in? Tax? Has someone else modified items?
    }
}

To understand getTotal(), you need to know:

  1. What items are in the list (they could have changed since the cart was created)
  2. What state the discount strategy is in (it could be stateful)
  3. What state the tax calculator is in (it could depend on previous calculations)
  4. Whether anyone else holds a reference to items and might be modifying it concurrently

None of this is visible from the code. You must trace through the entire codebase to find where these objects were created, who holds references to them, and what mutations might occur.

The promise of encapsulation is that each object's state is isolated. The reality is that objects hold references to other objects, and through those references, the entire object graph becomes interconnected mutable state.

Consider dependency injection, the “best practice” for managing object creation:

class OrderProcessor {
    private final InventoryService inventory;
    private final PaymentGateway payment;
    private final NotificationService notifications;
    private final AuditLog auditLog;
    
    public OrderProcessor(
        InventoryService inventory,
        PaymentGateway payment,
        NotificationService notifications,
        AuditLog auditLog
    ) {
        this.inventory = inventory;
        this.payment = payment;
        this.notifications = notifications;
        this.auditLog = auditLog;
    }
    
    public void processOrder(Order order) {
        inventory.reserve(order.getItems());
        payment.charge(order.getTotal());
        notifications.sendConfirmation(order);
        auditLog.record(order);
    }
}

This looks clean. Each dependency is injected, so they can be mocked for testing. But what's really happening?

Each injected dependency is a reference to a shared, mutable object. The InventoryService is used by other processors. The PaymentGateway has internal state (connection pools, retry counters, rate limiters). The NotificationService might batch messages. The AuditLog writes to shared storage.

When you call processOrder, you're not operating in isolation. You're reaching into a web of shared mutable state, making changes that affect objects used by other parts of the program. Encapsulation promised isolation; dependency injection delivers shared global state with extra steps.

The ritual of encapsulation in OOP produces getters and setters, methods that provide access to fields:

class Person {
    private String name;
    private int age;
    
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}

How is this different from public fields? It's not. The data is still externally accessible and mutable. The “encapsulation” is theatrical; it adds boilerplate without adding protection.

Some argue that getters and setters allow you to add validation or logging later. But this assumes a change pattern that rarely occurs. In practice:

  1. If you need validation, you probably know it upfront
  2. Adding validation to setters later breaks existing code that relied on the lack of validation
  3. The setter pattern encourages mutation, which is the real problem

Compare with immutable data:

type Person = { readonly name: string; readonly age: number }

const withUpdatedAge = (person: Person, age: number): Person => ({
  ...person,
  age: age >= 0 ? age : person.age  // Validation at creation
})

Mutable state shared across objects creates concurrency nightmares. When multiple threads access the same object graph:

class BankAccount {
    private double balance;
    
    public synchronized void deposit(double amount) {
        balance += amount;
    }
    
    public synchronized void withdraw(double amount) {
        if (balance >= amount) {
            balance -= amount;
        }
    }
    
    public synchronized double getBalance() {
        return balance;
    }
}
// Thread safety violation despite synchronized methods
if (account.getBalance() >= 100) {
    account.withdraw(100);  // Balance might have changed!
}

Between getBalance() and withdraw(), another thread might have modified the balance. The synchronization on individual methods doesn't help because the operation spans multiple calls.

To fix this, you need transaction-level locking, which brings deadlock risks when multiple objects are involved. The complexity compounds, and encapsulation provides zero help; in fact, it makes reasoning about the problem harder, because the state is hidden.

Functional programming with immutable data sidesteps this entirely. Immutable data can be freely shared across threads without synchronization. There are no race conditions when nothing can change.
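A sketch of the same account modeled as immutable data (TypeScript, my own example). The check and the update happen in one pure step, so there is no window for another thread to slip between them:

type Account = { readonly balance: number }

// Returns a new account state; if funds are insufficient,
// returns the old state unchanged. Check-and-act is a single pure step.
const withdraw = (account: Account, amount: number): Account =>
  account.balance >= amount
    ? { balance: account.balance - amount }
    : account

const a0: Account = { balance: 150 }
const a1 = withdraw(a0, 100) // { balance: 50 }
const a2 = withdraw(a1, 100) // { balance: 50 }: unchanged, insufficient funds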

Polymorphism

Polymorphism is type-based dispatch. It fragments codebases, hides control flow, and prevents optimization.

With polymorphism, each subclass handles one case. The logic for all cases is scattered across multiple files:

shapes/
  Shape.java       // abstract base
  Circle.java      // implements draw() for circles
  Rectangle.java   // implements draw() for rectangles
  Triangle.java    // implements draw() for triangles
  Ellipse.java     // implements draw() for ellipses
  Pentagon.java    // implements draw() for pentagons
  ...

To understand what “draw” means in your system, you must find and read every subclass. To understand the full behavior, you must hold all implementations in your head simultaneously.

Compare with a single function:

type Shape = 
  | { type: 'circle'; radius: number }
  | { type: 'rectangle'; width: number; height: number }
  | { type: 'triangle'; a: number; b: number; c: number }

const draw = (shape: Shape): void => {
  switch (shape.type) {
    case 'circle':
      drawCircle(shape.radius)
      break
    case 'rectangle':
      drawRectangle(shape.width, shape.height)
      break
    case 'triangle':
      drawTriangle(shape.a, shape.b, shape.c)
      break
  }
}

All cases are visible in one place. You can see the complete behavior by reading one function. Adding a case requires adding one block, not creating a new file.

OOP's subtype polymorphism is uncontrolled. A virtual method can do anything. There's no constraint on what subclasses implement beyond matching the signature:

interface Drawable {
    void draw();
}

// Subclass can do literally anything
class MaliciousDrawable implements Drawable {
    public void draw() {
        new java.io.File("/important/data").delete();  // "Draw" to the filesystem
    }
}

The type signature tells you nothing about what the method does.

Functional programming offers parametric polymorphism, where the type signature constrains the implementation:

-- This type signature tells you exactly what the function CAN do
map :: (a -> b) -> [a] -> [b]

A function with this type can only produce its b values by applying the given function to elements of the list. It cannot inspect the elements or fabricate new ones, because a and b are completely abstract (and, being pure, it cannot touch global state or files either). The type is the documentation.

This is called parametricity, and it's one of the most powerful tools for reasoning about code. OOP lacks it entirely.
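You can get a flavor of the same constraint from TypeScript generics, with the caveat that TypeScript's escape hatches (any, casts) make the guarantee much weaker than Haskell's. A sketch:

// Inside this function, A and B are opaque: the only way to obtain
// a B is to call f on an A taken from the input list.
const map = <A, B>(f: (a: A) => B, xs: readonly A[]): B[] => {
  const out: B[] = []
  for (const x of xs) out.push(f(x))
  return out
}

map((n: number) => n.toString(), [1, 2, 3]) // ['1', '2', '3']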

Nouns

Steve Yegge's famous essay “Execution in the Kingdom of Nouns” diagnosed a fundamental problem: OOP forces everything to be nouns.

Actions are natural in human thought and in code. “Open the file.” “Calculate the tax.” “Send the email.” But in OOP, you cannot have standalone actions. You must have:

  • FileOpener.open()
  • TaxCalculator.calculate()
  • EmailSender.send()

Or worse, following enterprise patterns:

  • FileOpeningStrategy.execute()
  • TaxCalculationService.performCalculation()
  • EmailSendingFacade.dispatchMessage()

The nouns are meaningless wrappers. The TaxCalculator class exists only to hold the calculate method. The class adds nothing; it's ceremony required by the paradigm.

This reaches its absurd conclusion in enterprise Java:

  • SimpleBeanFactoryAwareAspectInstanceFactory
  • AbstractInterceptorDrivenBeanDefinitionDecorator
  • TransactionAwarePersistenceManagerFactoryProxy
  • RequestProcessorFactoryFactory

These are real class names from production frameworks. Each represents a sincere attempt to express something in OOP's noun-centric vocabulary.

A factory is already a noun. But what if you need to create factories? FactoryFactory. What if you need to configure how FactoryFactory creates factories? FactoryFactoryConfigurationProvider.

In functional languages, you just have functions:

const openFile = (path: string) => ...
const calculateTax = (income: number, brackets: TaxBracket[]) => ...
const sendEmail = (to: string, subject: string, body: string) => ...

No wrapper classes. No nouns for nouns' sake. The code expresses actions directly.

Compare implementations of the same operation:

// OOP: Need a class to hold the comparison logic
public class StringLengthComparator implements Comparator<String> {
    public int compare(String a, String b) {
        return a.length() - b.length();
    }
}

// Used as:
Collections.sort(list, new StringLengthComparator());

// FP: Just express the comparison
list.sort((a, b) => a.length - b.length)

What replaces OOP?

A lot of programmers I've known, including myself, learnt programming by learning OOP. They cannot think in another approach, and to them a critique of OOP is a critique of programming itself. Here are some proven-to-be-better approaches to programming:

Functional Programming

Functions are simpler than methods. Pure functions are simpler than stateful ones. Composition is simpler than inheritance. Higher-order functions are much simpler than design patterns.

Languages like Clojure and Elixir make these approaches natural. Languages like TypeScript, Kotlin, and Swift allow them with discipline. Even in Java and C#, you can write procedurally, using classes as namespaces for static methods.

Data-Oriented Design

Data and behavior stay separate. Data lives in contiguous memory, laid out for cache efficiency. Behavior operates on batches, not individual objects. There are no inheritance hierarchies, no tangled object graphs, just data and transformations.

Less

Often the alternative to OOP isn't another paradigm; it's less structure. A 200-line script beats a 2000-line “properly architected” application. A single file beats five files with interfaces and factories. Procedural code with clear control flow beats dispersed methods with hidden state.

The best code is code that doesn't exist, the second best is code that's obviously correct.

Footnotes

[1] Copied from this article.