
Framework Design: Programming Paradigms and Idioms

The discussion “React in concurrent mode: 2000 state-connected comps re-rendered at 60FPS” (YCombinator) is illuminating, mostly in that it shows how ego can impede productivity.

Ego-driven design

Ego can also be that thing that drives a talented programmer to create something of use to the rest of us, but that’s honestly a very rare case. More often than not, the best case is that a developer improves their skills—and perhaps learns to be more humble instead of shooting off their mouth about how “easy” it is to create a “good” product. Such claims are nearly always made without defining what they mean by “good”.

Some comments are from programmers more interested in a pissing contest of who can write performant code on their own. Their implementation often focuses laser-like on a specific use case not often found in nature without tackling the tough question of how to design a more generalized solution that incorporates and balances more than just the one aspect of the system that they think they’re good at (e.g. performance).

That is, they tend to carefully define the application domain based on what they’re already good at. This is not how product development works. Many of the commentators get distracted by the overreaching claims of the reposter (“faster than any other WebGL rendering”, which is patently not true) instead of reading the much more reasonable claims of Dan Abramov, who is the original poster.

Product-driven design

Thankfully, there are others who seem to understand that giving up a logical, declarative paradigm in exchange for raw performance is not an acceptable tradeoff in almost any project. What are some facets other than performance that contribute to a good solution?

  • Maintainability
  • Extendability
  • Readability: low syntax noise
  • Testability: composition-based
  • Discoverability: consistent API
  • Flexibility: applicability to different problem domains
  • Expressiveness: concise and precise
  • Correctness: error-free code is strongly encouraged or enforced
  • Completeness: degree to which the definition captures all facets of the problem domain
  • Learnability: low number of concepts cover application needs
  • Scalability: layered API allows more control where needed
  • Efficiency: low ratio of effort to product
  • High signal-to-noise ratio
  • Optimizability/Performance

Products that try to optimize all facets generally either never see the light of day or serve only as the base material from which more viable projects are born.

A higher level of abstraction is a good thing. It allows mediocre programmers (and be happy if you have even mediocre programmers) to write programs that aren’t a nightmare to maintain or refactor. It allows good developers to very quickly write maintainable programs. If the underlying framework has a declarative and easily understood paradigm that has only a handful of orthogonal concepts and it offers great performance by default, that’s a win. There are few projects that need spectacular performance as their main feature.

Focus on your application domain

I would argue that most web programming is about making line-of-business apps and pages where look and feel matters, but not so much that it’s worth investing 50% more budget to get near-perfect and smooth updates. If it janks, it janks. There is no time or budget (or, sometimes, programming skill) to “fix” it. And, if “fixing” it means abandoning the high-level declarative programming model that makes working with React so efficient, maintainable and productive, then that’s even more implicit cost bound up in it.

As the commentator Onion2k put it:

“This is a demo of good performance using a web framework on top of a WebGL framework. It’s showing that a future version of React will make building a solid 60fps web app UI […] within the reach of most web developers. Sure, you can hand-roll code to get that performance today if you know how, but this is about putting that performance in the hands of developers who can’t (or, more often, aren’t given the resources to). To argue that is unnecessary or actually bad is ridiculous. Libraries that make it easier to build better apps are universally good things.” (Emphasis added.)

Defining an application model

To use React, you have to make concessions to the reactive model in your application definition. But that’s the way programming works. Instead of writing “a person must have a company, while the company has a possibly empty list of people”, we write (example from Quino),

Elements.Module.Company
  .AddOneToManyRelation(Elements.Module.Person)

Programming is all about explaining what an application does. The programming language and framework and runtime balance all of the factors listed above to be able to transform the formulation most accessible to a product owner (“I want a CRM”) through a business analyst (“It has a list of companies, each of which has a list of people”) to a programmer (formulation above).

The formulation above is still quite high-level, but satisfactory for 99% of cases. For the remaining 1%, the API has to provide some way of digging into the underpinnings of the implementation without dropping the developer off of a cliff. Quino does this reasonably well, as does React. The focus here is on realizing that a framework’s ability to accommodate that 1% of use cases smoothly is only one aspect of its effectiveness. Given that it doesn’t come up very much, it makes no sense to focus too much effort on optimizing that path, no matter how much more interesting it would be to the developers to do so.

Concurrent mode in React

This is one of those silly blogs-posted-as-tweets, but the points in Is Concurrent Mode just a workaround for “virtual DOM diffing” overhead? […] by Dan Abramov (Twitter) are good.

The point is that Concurrent Mode is not only a speed improvement for React itself. It also improves how your app’s updates are rendered and scheduled, without you having to change your code (much, or at all). The linked article explains how this sea change in rendering components forms the basis of many other performance improvements that apply to existing applications without modification.
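
For illustration, here is a minimal sketch (TypeScript/JSX) of opting in to this scheduling, using startTransition, one of the APIs that the Concurrent Mode work eventually shipped as in React 18; the search function is a hypothetical example.

import { useState, startTransition } from 'react';

function SearchBox({ search }: { search: (query: string) => void }) {
  const [query, setQuery] = useState('');

  function onChange(value: string) {
    setQuery(value); // urgent: keep the input responsive
    startTransition(() => {
      search(value); // interruptible: may be discarded if it becomes stale
    });
  }

  return <input value={query} onChange={(e) => onChange(e.target.value)} />;
}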

It’s exciting that a near-future version of React will make animations and updates even smoother than they are now, especially considering that they are already more than good enough for most apps without tweaking.

React’s idioms

React is not a game-programming framework. It makes no sense to claim that React apps will blow away apps written in Unity. We make line-of-business apps with it. React already allows apps to have much better update characteristics with almost no code other than a few functional declarations to define rendering and components and the state that they rely on.

The model is unimpeachable in that it accurately reflects the application model without adding any ceremony.

You make some concessions in order to define your declarations about your program’s logic and states so that the framework can optimize as much as it can, but no more. With hooks, you can declare simple, mutable state or one-time, partially mutable state (memos and callbacks), listeners for lifecycle events (effects) and so on.
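
A minimal sketch of these hook idioms (all standard React APIs; the component and its data are invented for illustration):

import { useState, useMemo, useCallback, useEffect } from 'react';

function PersonList({ people }: { people: string[] }) {
  // Simple, mutable state.
  const [filter, setFilter] = useState('');

  // One-time, partially mutable state: recomputed only when its inputs change.
  const visible = useMemo(
    () => people.filter((person) => person.includes(filter)),
    [people, filter]
  );

  // A callback with a stable identity, so children don't re-render needlessly.
  const clear = useCallback(() => setFilter(''), []);

  // A listener for lifecycle events: runs after render whenever `filter` changes.
  useEffect(() => {
    document.title = `Filtering by "${filter}"`;
  }, [filter]);

  return (
    <div>
      <input value={filter} onChange={(e) => setFilter(e.target.value)} />
      <button onClick={clear}>Clear</button>
      <ul>{visible.map((person) => <li key={person}>{person}</li>)}</ul>
    </div>
  );
}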

On the one hand, you’re forced to define your logic using React’s idioms but, on the other, they still make sense in that they make your assumptions about your app’s logic explicit rather than implicit. Once you’ve done this, the framework knows more about what it can optimize away and what it can’t. And you haven’t wasted time because you’re technically describing salient properties of your application domain.

Declaring Behavior

That’s the idea behind the <Suspense/> component: the app can declaratively determine how it would like components to be updated in different asynchronous situations involving multiple asynchronous tasks. Concurrent Mode allows the framework to work before that update is technically complete because it allows any work to be interrupted—and discarded, if it is no longer relevant.
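
A minimal sketch of such a declaration, using React’s real <Suspense/> API; the profile components are hypothetical stand-ins for components that suspend while their data loads.

import { Suspense } from 'react';

// Hypothetical children; in a real app, these would suspend while loading.
declare function ProfileDetails(): JSX.Element;
declare function ProfileTimeline(): JSX.Element;

function Spinner() {
  return <div>Loading…</div>;
}

function ProfilePage() {
  // The app declares *what* to show while asynchronous work is pending;
  // the framework decides *when*, and can interrupt or discard stale work.
  return (
    <Suspense fallback={<Spinner />}>
      <ProfileDetails />
      <Suspense fallback={<Spinner />}>
        <ProfileTimeline />
      </Suspense>
    </Suspense>
  );
}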

This allows the reconciliation to benefit a bit from something like the branch predictor in a CPU, where speculative branches are executed in parallel and occasionally discarded. JavaScript imposes a cooperative rather than parallel model, but low-level support for interruptibility (especially when automatically applied) is worlds better than nothing.

Working within a paradigm

Any language—and the combination of the underlying programming language and the framework API is the language a programmer actually uses—must have a shape, a paradigm that it enforces. Naturally, a programmer can use a different paradigm than the recommended one. But a good framework finds the balance between a paradigm that is comfortable for a large part of its audience and one that is abstract enough to leave a lot of leeway for mapping to the next layers down (until it gets to machine code).

A good framework offers an out-of-the-box experience with a clearer programming idiom and better performance than most programmers could achieve on their own.

In the thread above, Abramov in no way claims that it’s not possible to create a faster application for thousands of components, just that the new renderer is much, much faster than the old one without changing the programming idiom at all. The programming idiom in React is very good, if not great. This is really good news.

Instead, you could say that Abramov’s claim is that anyone who claims to have made a faster renderer is making tradeoffs in other areas (e.g. from the list above). Most likely, the resulting balance is not as good as the clear, declarative syntax of React or it doesn’t cover nearly as many use cases.

Comparing Idioms (React vs. Svelte)

Is React’s syntax the best it can be? Maybe not yet. For example, a component declares mutable, internal state with the useState() hook, which returns a state variable and a “setter” function to change that state. Svelte improves on this by letting the app simply declare the state variable; the compiler notices when that state is updated and generates the state-update code during transpilation. This is an improvement that allows an app developer to work even closer to “normal” code than before.
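
For comparison, React’s side of that tradeoff looks like this (real useState API); Svelte’s equivalent is sketched in the comment.

import { useState } from 'react';

function Counter() {
  // React: an explicit state variable plus a setter.
  const [count, setCount] = useState(0);

  // Svelte instead compiles a plain `let count = 0;` and `count += 1;`
  // into equivalent state-update code at transpilation time.
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}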

If Svelte can provide this clearly more readable feature without introducing problems in other facets (e.g. learnability, performance, completeness), then it’s a clear win.

Case study: async/await

A similar kind of improvement is async/await. This feature didn’t actually change how asynchronous code works. Instead, it allowed a programmer to write asynchronous code in a synchronous style, with the compiler generating the asynchronous plumbing automatically.
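
A small TypeScript sketch of the difference; fetchUser and fetchOrders are hypothetical.

declare function fetchUser(id: string): Promise<{ name: string }>;
declare function fetchOrders(name: string): Promise<string[]>;

// Before: the asynchronous plumbing is spelled out by hand.
function loadOrdersWithPromises(id: string): Promise<string[]> {
  return fetchUser(id)
    .then((user) => fetchOrders(user.name))
    .catch((error) => {
      console.error(error);
      return [];
    });
}

// After: the same logic in a synchronous style; the compiler generates
// the state machine that the version above writes out manually.
async function loadOrders(id: string): Promise<string[]> {
  try {
    const user = await fetchUser(id);
    return await fetchOrders(user.name);
  } catch (error) {
    console.error(error);
    return [];
  }
}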

This is a sea change for most developers—even those clever and experienced enough to have written that level of asynchronous code themselves. The point is that the developer is no longer wasting time writing what amounts to boilerplate code that is very error-prone and difficult to thoroughly test (which means that it’s often not thoroughly tested).

The idiom of async/await imposes minimal “noise” (none, actually) and has a tremendous upside. The code doesn’t necessarily get faster, but it could be made faster without changing it.

Inherent Limitations of Languages/Runtimes

The comment on Fiber Principles: Contributing To Fiber by sebmarkbage (React/Github) is another well-written contribution to this discussion that shows that there are a lot of clever people working on React that are aware of the fine balance between the requirements involved in writing a strong framework.

The user responds to accusations that much of this work would not be necessary if JavaScript had proper threading. The author argues that globally mutable prototypes are an intrinsic concept used in many, many JavaScript use cases. However, they also effectively preclude ever bringing threads to JavaScript. The language is limited from the get-go.

That doesn’t mean we should all stop using JavaScript. It just means that this is something that goes in the cons list and must be weighed against all of the pros. Anything that is in the cons list must be compensated with effort. JavaScript has many pros going for it: for example, it’s won the client-side programming-language war.

Perhaps WebAssembly will replace it as a runtime, but only time will tell. By then, it won’t matter, because we’ll be using languages like Elm or TypeScript to write our code. Even this doesn’t matter, though, because these languages must also transpile to the underlying paradigm defined by an engine that must run JavaScript.

That goes—for now—for WebAssembly targets as well. And threading is out for any of this stuff. Until something in this situation changes and we can target a threaded execution engine on the client side, we should be happy that there are very clever people making cooperative multi-tasking transparent and easy to program for the rest of us.

Those of us who worked on Apple OSs before OS X or Windows before 95 know what it’s like to have to deal with cooperative multi-tasking in our own code. I welcome the declarative paradigm that allows excellent performance for a wide range of use cases without making me write and maintain a whole bunch of code that has nothing to do with my application domain.

Avoiding shared, mutable state

There’s a reason why everyone with sense is talking about this concept. Using shared, mutable state makes it very easy to write the happy path of a single use case, but it makes it very hard to reason about other use cases and branches. It doesn’t scale, extend, test or maintain well. If these requirements don’t apply to your application—e.g. a script or one-off throwaway prototype—then you might be fine.
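
A small TypeScript sketch of the contrast; the cart is an invented example.

// Shared, mutable state: every caller sees (and races on) the same array,
// so reasoning about any one use case requires knowing about all the others.
const cart = { items: [] as string[] };

function addItem(item: string) {
  cart.items.push(item);
}

// The immutable alternative: each update produces a new value, so a changed
// reference is the only signal needed to detect that anything changed.
type Cart = { readonly items: readonly string[] };

function withItem(current: Cart, item: string): Cart {
  return { items: [...current.items, item] };
}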

I would personally advise against practicing or becoming accustomed to techniques that apply to one use case but that are dangerous in all other situations. You’ll generally end up using the technique to which you’ve become accustomed. While training yourself to build high-quality solutions carries the risk of over-engineering solutions to problems that could have been solved more simply, it’s easier to “downscale” your coding style than to “upscale” it.

With enough practice and the right techniques, you can write quality code just as quickly from the start as crappy code. I would also say to beware of the seductiveness of bad programming models that promise initial speed in development that quickly drops off once it’s too late to change.

Prototypes happen to be built into the language in JavaScript’s case, but shared mutable data is the great stumbling block of concurrent programming. Applications that batch work into parallelizable chunks can be optimized to run more quickly by a clever runtime.

It is much simpler to reason about an application without shared mutable data. There are fewer cases and branches. Otherwise, an application must use locks (or fences or some other synchronization concept). The point is that efficient synchronization is not easy, and many lay implementations tend toward speed rather than robustness and are buggy as a result.

Though it’s possible to hand-code faster concurrency than standard frameworks, most people can’t do it. And, given time, framework implementations get really, really good at optimizing nearly all cases. C# and .NET, for example, have a tremendously clever runtime underlying async/await now that can hardly be beaten for throughput, scheduling, etc. Successive versions have built on new language concepts introduced precisely to allow an application—where needed—to be more declarative in ways that allow even more optimization (e.g. record references, etc.).

It’s nice to see that Concurrent React—much like async/await in JS—provides a simple idiom for moving that effort out of the hands of most developers.

Reinventing the Wheel

Naturally, a developer is free to do that work on their own—and many commentators in the original thread at the start of this article seem to enjoy writing code that has nothing to do with their actual app just to show that they can. But with enhancements like async/await or Concurrent React, they don’t have to in order to enjoy performance benefits. That’s a win-win—a free lunch.

The point made above by Onion2k is very salient: very often “developers [aren’t] given the resources to” make the kind of optimizations that React will provide for free. Could a given rockstar developer write something even faster for exactly their application domain? Probably. Are they going to be given the time and budget to do so? Almost certainly not. It’s far better to have a good default that is smooth as silk and more than adequate to the task for almost all conceivable applications.

No-one’s paying you to reinvent the wheel. That’s almost certainly not your job. If you’d like it to be your job, then maybe you should work on a project where you’re inventing the wheel directly (i.e. a framework project). Then, you can build on that experience and your framework to turn around tightly written, maintainable and performant applications for your paying customers.

Inventing the Wheel

It’s important to be pragmatic and remember when you’re working on framework code and when you’re working on code that benefits from framework code without reinventing it. Otherwise, you’ve got a terrible situation: you invest in framework/infrastructure on every single project because you never reap the benefits of having written a framework. In the case of frameworks that are completely external to your application, like React (or Quino), you never even had to invest in writing the framework at all.

If you write a framework only for expert developers, there will be no adoption, and you won’t help a large part of the community write better apps. But what do we mean by better?

Continuing with React as an example, the abstract requirements at the start of this article roughly map to:

  • Maintainable (minimum of idioms)
  • Robust (not buggy)
  • Scaling performance
  • Consistent UX for
    • Error conditions
    • Networking latency (progressive handling, from avoiding needless jank to notifying the user that something is happening when it takes longer)
    • Small/large data/updates
  • Tooling support for introspection, code-completion, debugging, discovery, etc.
  • Support for common tasks
    • State management, from function-local to multi-function/context
    • Describing state (mutable, non-mutable)

Minimizing Impedance

An application should only have to declare things about itself that are relevant to its own domain, but that also help the framework render it better. Again, these idioms should scale: where components have no foreseeable performance issues (most of them, in most applications), an application should be able to write them with more approachable code.

Individual “islands” of code can provide additional information to optimize hotspots (like memoization, immutability hints, etc.). It’s important to note that these concepts are not introduced by the framework—they are intrinsic to the application’s domain model, but usually kept implicit.
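
A minimal sketch of such an island, using React’s real React.memo and useMemo APIs (the components and data are invented):

import React, { useMemo } from 'react';

type RowProps = { label: string; value: number };

// Declare that this row only needs to re-render when its props change.
const Row = React.memo(function Row({ label, value }: RowProps) {
  return (
    <tr>
      <td>{label}</td>
      <td>{value}</td>
    </tr>
  );
});

function Totals({ values }: { values: number[] }) {
  // Memoize an expensive derivation; recompute only when `values` changes.
  const sum = useMemo(() => values.reduce((a, b) => a + b, 0), [values]);

  return (
    <table>
      <tbody>
        <Row label="Total" value={sum} />
      </tbody>
    </table>
  );
}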

If the application does not describe these aspects of itself, then the framework must make more pessimistic assumptions. Often this doesn’t matter. Where it does matter, the application should be able to use compatible and familiar idioms to improve the granularity of its description about itself. This, in turn, lets the framework use a faster approach where it now knows that it won’t violate the application’s definition.

The simplest of these is to tell React which parts of the state are mutable and which are immutable. When determining what has changed in an application state, a framework can simply compare the reference to the root node of an immutable object graph to the previous root-node reference to determine if that part of the graph has changed. If the object graph does not declare itself as immutable, then the framework must be pessimistic and compare the entire subgraph to determine if it has changed.
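
A sketch of the two change-detection strategies, with an invented node type:

type TreeNode = { value: number; children: TreeNode[] };

// Immutable graph: a changed reference is the only way it can have changed,
// so a single comparison suffices.
function changedImmutable(previous: TreeNode, next: TreeNode): boolean {
  return previous !== next;
}

// Possibly mutated graph: the framework must be pessimistic and walk
// the entire subgraph.
function changedMutable(previous: TreeNode, next: TreeNode): boolean {
  if (previous.value !== next.value) return true;
  if (previous.children.length !== next.children.length) return true;
  return previous.children.some(
    (child, i) => changedMutable(child, next.children[i])
  );
}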

This is a concept that is intrinsic to programming. It is hard to conceive of it ever not being relevant. Naturally, if there is more than enough processing power available or the graph is small enough, it won’t matter, but it’s still axiomatically more work to compare potentially mutable graphs than immutable graphs. If an application fails to express immutability where it could have, that small missing bit of information reduces flexibility in choosing an algorithm.

This is not a new thing: most functional languages have immutability baked in as the default. Even C has the notion of const and volatile to give hints to the compiler about how it can deal with that data. Naturally, higher-level languages try to abstract away these concepts, but whatever defaults they choose constrain all the layers below.

Other Intrinsic Concepts

On this subject, another unavoidable concept is nullability: is a reference assigned or not? Most new languages (and newer versions of languages, like C#) are switching from the age-old—and convenient-for-the-compiler—default of nullable references. Again, reference assignment is a core concept that is unavoidable when thinking about code with pointers.
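
A minimal sketch in TypeScript (assuming strictNullChecks) of what expressing nullability buys:

type Company = { name: string };

function describe(company: Company): string {
  return company.name; // safe: the type guarantees the reference is assigned
}

function describeMaybe(company: Company | null): string {
  // The compiler forces exactly the pessimistic check that an unexpressed
  // "may be null" would otherwise require everywhere.
  return company === null ? '(none)' : company.name;
}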

Another concept that limits choosing a more performant transformation during compilation is failing to express function purity. Does a function cause a side-effect? A compiler can optimize a function known to be pure in ways that it cannot with impure functions.
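
A small sketch of the distinction; the functions are invented examples.

// Pure: the result depends only on the arguments, with no side effects.
// An optimizer may cache, reorder, or eliminate calls to it.
function area(width: number, height: number): number {
  return width * height;
}

const auditLog: string[] = [];

// Impure: it mutates shared state, so repeated calls cannot be collapsed
// into a constant without changing the program's behavior.
function areaLogged(width: number, height: number): number {
  auditLog.push(`area(${width}, ${height})`);
  return width * height;
}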

All of these features are a balance between programmer convenience, onboarding of new developers, and allowing programmers to focus on application logic rather than making concessions to the language and framework. As discussed above, though, there are concepts intrinsic to programming that have ostensibly nothing to do with application logic, but that an application declares (if not explicitly, then implicitly).

Taking the example from above, if an application declares that a person is in a company, but fails to mention that a person must be in a company, then the underlying software (framework and compiler) must be more pessimistic about that relationship than is strictly necessary.

A good framework encourages software to be precise about its own model by allowing the application to declare the salient parts of that model in a minimal set of declarative idioms.