

Programming-language Features: How much is too much?

Published by marco on

Updated by marco on

The article Dark Path by Robert C. Martin was an interesting analysis of a recent “stricter” trend in programming languages, as evidenced by Swift and Kotlin. I think TypeScript is also taking some steps along this path, as is Rust, which I have read a lot about but haven’t had much occasion to use.

Correct vs. Expressive

The point Martin makes is that all of these languages seem to be heedlessly improving correctness at the possible cost of expressiveness and maintainability. That is, as types are inferred from implementation, it can become more difficult to pinpoint where the intent of the programmer and the understanding of the compiler parted ways. As well, with increasing strictness—e.g. non-null references, reference-ownership, explicit exceptions, explicit overrides—there comes increasing overhead in maintaining code.

Not only that, but developers must know their types—and hence their design—up front, which restricts the evolving design practiced in the very successful TDD approach and seems to head back to the stone age of waterfall design. As well, that level of strictness convinces developers—who are similarly encouraged by the language designers—that once their code compiles, it runs as expected.

But then they think they don’t need to test, even though the compiler really has no idea whether the code does what it should do. All it can guarantee is that no exception goes unhandled or explicitly ignored (e.g. in Kotlin or Swift), that there are no data races (Rust), or that there are no null references except where explicitly allowed (Swift, Kotlin, TypeScript).

These compiler-enforced language features are very useful, but are in the same class as the spell-checker in your text editor. Having no red, wavy lines in your document is no guarantee that the document makes any sense whatsoever.

So these are interesting and useful features. They can lead to increased safety, but they won’t make your program do what it’s supposed to do. At best, they help you avoid writing behavior that you most definitely don’t want.
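To put it concretely: the following TypeScript sketch (a contrived example of my own) compiles cleanly under the strictest settings and is still wrong.

// Type-checks perfectly, yet computes the wrong result: the compiler
// verifies shapes, not behavior.
function applyDiscount(price: number, discount: number): number {
  return price + discount; // bug: should be price - discount
}

No type system catches the flipped operator; only a test of the behavior does.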

These features are nice to have, but they are not worth having at any price.

Round Two: Defaming Types

It was an interesting article that I more or less agreed with. The follow-up article Types and Tests by Robert C. Martin (Clean Coder Blog) followed close on its heels because Martin apparently wanted to respond to feedback he’d received on the first article. I thought he went a bit far in the second article. For example, he emphasized that,

“No, types are not tests. Type systems are not tests. Type checking is not testing. Here’s why.”

That’s absolutely true, but types are still related to testing. Types help me specify my interface more precisely and I can trust the compiler to enforce them. That’s a lot of tests I don’t have to write.

Otherwise, for every API I write, I’d have to write tests to prove that only the supported types can be passed in—and I’d also have to specify how my API behaves when a value with an incorrect type is passed in. Do I fail silently? How do I let the caller know what to expect? This seems not only sloppy but time-consuming. It sounds like busy work, having to think about this kind of stuff for every API.

Precise Requirements

Martin continues,

“[…] the way f is called has nothing to do with the required behavior of the system. Rather it is a test of an arbitrary constraint imposed by the programmer. A constraint that was likely over[-]specified from the point of view of the system requirements. (Emphasis added.)”

The first sentence is a useful observation. The second is hyperbole. Indicating int rather than object for a parameter called limit hardly seems like an over-specification. In fact, it seems like exactly what I want.

If the requirement says shall allow a user to enter a value for limit… rather than shall allow a user to enter a positive number for limit…, then I would argue that 99% of the time it’s the requirement that isn’t precise enough. I would not assume that the requirements engineer knew just what she was doing when she left the door open for a limit given as a string.

Without types, our requirements would also become bloated with over-definitions like:

  • The system shall allow the user to enter a value that defines the upper limit of records to return.
  • The system shall throw an exception of type ArgumentOutOfRangeException for values that are less than zero or greater than 1000.
  • The system shall marshal the value to a numeric representation where necessary.
  • The system shall throw an exception of type ClassCastException if the given value cannot be marshaled to a numeric value.

For this specification, a developer could write:

public void SetLimit(object limit)
{
  int limitAsNumber;
  if (!Int32.TryParse(limit?.ToString(), out limitAsNumber))
  {
    throw new ClassCastException("…");
  }

  if (limitAsNumber < 0 || limitAsNumber > 1000)
  {
    throw new ArgumentOutOfRangeException("limit");
  }

  _limit = limitAsNumber;
}

The developer could also write:

public void SetLimit(UInt32 limit)
{
  if (limit > 1000)
  {
    throw new ArgumentOutOfRangeException("limit");
  }

  _limit = limit;
}

That’s actually what we want the developer to write, no? If you choose JavaScript to implement this requirement, then you would need to over-specify because you need to decide how to handle values with unsupported types. If the requirements engineer is allowed to assume that the implementing language has a minimal type system, then the requirements are also easier to write, as shown below.

  • The system shall allow the user to enter a positive integer that defines the upper limit of records to return.
  • The system shall throw an exception of type ArgumentOutOfRangeException for values that are greater than 1000.

Expressive Types are a Good Thing

Assuming a minimal type system in the target language saves time and effort. The requirements engineer can specify more concisely and the software engineer wastes less time writing boilerplate that has nothing to do with application behavior.

Martin finished up with this sentiment,

“So, no, type systems do not decrease the testing load. Not even the tiniest bit. But they can prevent some errors that unit tests might not see. (e.g. Double vs. Int) (Emphasis added.)”

As you can imagine, I strongly disagree with the “[n]ot even the tiniest bit” part, based on my arguments above. If you use JavaScript, then you have to test all valid input and verify its behavior. In JavaScript, literally any data is valid input and it’s up to your method to declare it invalid.

Only tests can provide any protection against your method being called at runtime with invalid data. You have to write a test to verify that your method throws an error when passed a double rather than an int. Most people will not write these kinds of tests, which I suspect is why Martin says there’s no change in testing load.
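To make that concrete, here is a sketch in TypeScript, with names of my own invention (setLimit is not from Martin’s article), of the tests that a typed signature would make unnecessary:

import { strict as assert } from "assert";

// Hypothetical untyped setter: with no type on 'limit', any value is
// a legal argument, so rejecting bad input becomes tested behavior.
function setLimit(limit: any): void {
  if (typeof limit !== "number" || !Number.isInteger(limit)) {
    throw new TypeError("limit must be an integer");
  }
  if (limit < 0 || limit > 1000) {
    throw new RangeError("limit must be between 0 and 1000");
  }
}

// Tests that a typed signature would render unnecessary.
assert.throws(() => setLimit("100"), TypeError); // string, not number
assert.throws(() => setLimit(3.5), TypeError);   // double, not int
assert.doesNotThrow(() => setLimit(100));

With setLimit(limit: number), the first two calls wouldn’t even compile, so neither test needs to exist.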

Strict Languages for Bad Programmers?

I agree that the pendulum in Swift has swung too far in a restrictive direction. The language does feel pretty overloaded. I also agree that the behavior of the system itself needs to be tested and that types don’t help you there.

Martin again,

“On the other hand, internal self-consistency does not mean the program exhibits the correct behavior. Behavior and self-consistency are orthogonal concepts. Well behaved programs can be, and have been, written in languages with high ambiguity and low internal consistency. Badly behaved programs have been written in languages that are deeply self-consistent and tolerate few ambiguities. (Emphasis added.)”

Agreed.

I think, though, that Martin might be forgetting about all of the people writing software who aren’t the kind of people who can write a well-behaved program in a wildly inconsistent language. I, for example, am so awesome[1] that I wrote my entire web-site software in PHP—one of the worst languages in the world for internal self-consistency—and it’s been running my site for going on 18 years. Programming skill and iron discipline fill the gap left by language consistency.

But for bad programmers? They write utter garbage in PHP. Maybe it’s not a bad idea to create languages that channel poorly disciplined programmers into better practices. I take the point from the previous article (Dark Path) that bad programmers will simply work their way around the rigor, where possible. They will mark every class as open in Swift instead of thinking about their architecture.

For those of us with discipline, the language will put up roadblocks that force us to write more code rather than less.

Unfixable Errors?

As a counterexample, there is Rust, which enforces reference ownership in a way that guarantees that safe concurrent code is free of data races. This is a good thing. It probably gets in your way when you’re trying to write other types of programs, but it’s overall a good thing.

I haven’t had any personal experience with it, but I’ve heard that it’s sometimes difficult to figure out why a given program won’t compile. I would hope that these situations become fewer with experience, but would also be cautious because I remember programming in C++ with templates and know how much time can be lost when you don’t know how to fix your program based on an error message.

Non-nullable types are not a step too far

I, for one, like that my compiler tells me when I have potential null-reference exceptions. I use attributes in C# to tell me exactly that and I use R# to find all places in my code where I have potential violations. Those are more tests that I don’t have to write, if the compiler can “prove” that this code is never called with a null reference.[2] It lets me write a more concise implementation and spares me a lot of scaffolding.
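TypeScript’s strictNullChecks option expresses the same idea; here is a minimal sketch (the greet function is just an illustration):

// With "strictNullChecks" enabled, the compiler rejects dereferencing
// a possibly-null value until the code path rules null out.
function greet(name: string | null): string {
  // return name.toUpperCase(); // error: 'name' is possibly 'null'
  if (name === null) {
    return "Hello, stranger";
  }
  return "Hello, " + name.toUpperCase(); // ok: narrowed to 'string'
}

Every caller that might pass null gets flagged at compile time, which is exactly the class of test I no longer have to write.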

What about const/non-mutable types?

Many years ago, I had the same experience with const in C++ as Martin discusses. After some time working with const, I started making everything I possibly could const in order to eliminate a whole class of mutation errors in my code. That did have consequences, at the time. Changing one thing could—as Martin describes for his hypothetical language TDP—lead to knock-on changes throughout the code base.

Generics can have this effect, as well, with changes leaking into all of the places they’re used. I wrote a blog series on having pulled back from generics in a few central places in Quino.
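A rough TypeScript sketch of that leakage (Repository is a hypothetical name): adding a type parameter to a central abstraction forces a change in every signature that mentions it.

// Before the change, every consumer could write 'repo: Repository'.
// After adding a type parameter, each of them must name T as well.
interface Repository<T> {
  find(id: string): T | undefined;
}

// This signature, and every one like it across the code base,
// had to grow a <T> of its own.
function process<T>(repo: Repository<T>, id: string): T | undefined {
  return repo.find(id);
}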

Exceptions in method signatures

I often felt the way that Martin does about Java’s throws declaration. I imagine that I’ll start to feel the same about Swift’s, as well. I read once about a nice typing system in Midori, the managed version of Windows created by Joe Duffy and team at Microsoft Research, that I felt I would like to try (no pun intended).

Toward more expressive types

Martin says that he uses both dynamically and statically typed languages. He acknowledges that certain extensions to the type system can be useful, but maintains that some languages have gone too far.

I, too, think some innovations can be very helpful. I like immutables (types, declarations, whatever) because they let me reason better about my code. They let me eliminate unwanted code paths with the compiler rather than having to write more rote tests that I think even Martin will agree have nothing to do with the original specification or the behavior of my application.

If I can mark something as readonly because I don’t expect it to ever need to be changed, that’s a little note I’ve left for future programmers that, should they want to modify that value, they will have to make sure to reason differently about the implementation. The value was never intended to be rewritten and there are no tests for that behavior. It’s a nice way of reducing the scope of the implementation.

It simultaneously restricts that scope, but that’s a good thing. A program can, very quickly, do a lot of things that it should not do. I don’t want to write tests for all of this stuff. I have neither the inclination nor the time—nor the budget—to write tests for things that I could instead eliminate entirely from the realm of possibility with a powerful type system.
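A minimal TypeScript sketch of the kind of note I mean (the Pager class is hypothetical):

// 'readonly' records the intent: this value is set once and never
// reassigned. The compiler enforces it, so no test has to.
class Pager {
  private readonly pageSize: number;

  constructor(pageSize: number) {
    this.pageSize = pageSize; // assignment allowed only during construction
  }

  offsetFor(page: number): number {
    // this.pageSize = 50; // error: cannot assign to read-only property
    return page * this.pageSize;
  }
}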

Agreeing with Martin

I read up on Kotlin and saw a seminar on it last year. I, too, noticed that there seems to be an “everything but the kitchen sink” feel to it. It’s the same feeling I get when I look at Scala’s type system, though that one is less about restriction than about letting you do everything in 3 different ways.

I’ve been reading through the Swift language guide and I’m getting the same feeling. It doesn’t help that Swift has its own name and keyword for nearly every commonly known programming concept. You can write self. before a member, but the guide prefers leaving it off, which takes some getting used to. finally? Nope. Use defer instead.

An example from TypeScript

To be honest, I’m also a bit dizzy at how quickly the TypeScript type system has gotten more and more complex. TypeScript 2.1: keyof and Lookup Types by Marius Schulz includes details on even more typing specifications that let you infer types from dynamic objects with flow-control analysis.

I think this is quite an interesting approach, akin to that of more functional languages, like ML and F#, where return types and even parameter types are inferred. Swift has also gone a long way in this direction. Interfaces are replaced with non-inheritable types that describe the shape of data.

Types can even be inferred by which fields you access within conditionals so that a single variable has a different inferred type depending on which path through the code it takes. It’s all very exciting, but I wonder how much can be used correctly—especially by the aforementioned crappy programmers.
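Here is a small sketch of that kind of flow-based narrowing (the shapes are hypothetical):

interface Circle { kind: "circle"; radius: number; }
interface Square { kind: "square"; side: number; }

// The same variable has a different inferred type on each path,
// based purely on which field the conditional inspects.
function area(shape: Circle | Square): number {
  if (shape.kind === "circle") {
    return Math.PI * shape.radius * shape.radius; // 'shape' is Circle here
  }
  return shape.side * shape.side; // ...and Square here
}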

For example, this is the definition for the Object.entries() method from JavaScript.

interface ObjectConstructor {
    // …
    entries<T extends { [key: string]: any }, K extends keyof T>(o: T): [keyof T, T[K]][];
    // …
}

Inferred method return-types

After having used languages that have explicit return types for methods, I’m still a bit at sea when I read TypeScript code without them. I find myself hovering over identifiers to see which type was inferred for them by the real-time compilation.

I agree that the code is cleaner, but maybe something’s gone missing. It’s harder to tell what the hell I’m supposed to pass in as a parameter or what the hell I get back from a function when the type can be a union of 3 or 4 other vaguely and sometimes ad-hoc–defined types.

For example, a lot of code just constantly redefines the hash-table interface rather than just defining a type for it … so the caller isn’t restricted to implementing a specific interface. This is nice for library code, I guess, but it makes it harder to reason about the code because you don’t have good names for types. This is an interesting enough experience for seasoned programmers; I can’t even imagine how average or bad programmers deal with it.
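For instance, compare an inline hash-table type with a named alias; both accept exactly the same callers, but only one gives the reader a name to reason with (StringMap is my own name, not a standard one):

// Inline structural type: the reader re-parses the shape at every site.
function countKeysInline(map: { [key: string]: number }): number {
  return Object.keys(map).length;
}

// Named alias: the same structural type, now with a reusable name.
type StringMap<T> = { [key: string]: T };

function countKeys(map: StringMap<number>): number {
  return Object.keys(map).length;
}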

I see where Martin is coming from: he’s afraid of BDUF (Big Design Up Front), something he’s been fighting for years by arguing that you can design as you go if you’ll just test your code as you write it. If you see that a parameter has to be an IHashMap, that’s easier to understand than { [key: string]: any } or { [key: string]: T } where T is a completely different type. There are advantages and disadvantages.

“Every step down that path increases the difficulty of using and maintaining the language. Every step down that path forces users of the language to get their type models “right” up front; because changing them later is too expensive. Every step down that path forces us back into the regime of Big Design Up Front.”

I agree with the sentiment, but I don’t know if we’re there yet. Martin argues that there is a balance and maybe I need more experience with the languages he’s horrified about. He does write:

“I think Java and C# have done a reasonable job at hovering near the balance point. (If you ignore the horrible syntax for generics, and the ridiculous proscription against multiple inheritance.)”

…which I agree with wholeheartedly. I have learned to live without multiple inheritance, but I regularly railed against its absence for decades. I have given up because the world has moved on. I would love to see proper contravariance and covariant return types and anchored types, but I’ve kind of given up on seeing that kind of stuff in a mainstream language, as well. Instead, I’ve drifted more toward immutable, stateless, functional style—even in C#. I’m ogling F#. I’m working with Swift now and will do much more of that this year.


[1] Did you expect me to soften that brag down here, in this footnote?
[2] I also use R#’s solution-wide analysis.