Programming-language Features: How much is too much?
The article <a href="http://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath.html" author="Robert C. Martin">The Dark Path</a> was an interesting analysis of a recent "stricter" trend in programming languages, as evidenced by Swift and Kotlin. I think TypeScript is also taking some steps along this path, as is Rust, which I have read a lot about but haven't had much occasion to use.
<h>Correct vs. Expressive</h>
The point Martin makes is that all of these languages seem to be heedlessly improving correctness at the possible cost of expressiveness and maintainability. That is, as types are inferred from implementation, it can become more difficult to pinpoint where the intent of the programmer and the understanding of the compiler parted ways. As well, with increasing strictness---e.g. non-null references, reference-ownership, explicit exceptions, explicit overrides---there comes increasing overhead in maintaining code.
Not only that, but developers must know their types---and hence their design---up front, which restricts evolving the design as practiced in the very successful TDD approach and seems to be headed back to the stone age of waterfall design. As well, that level of strictness convinces developers---who are encouraged in this belief by the language designers---that once their code compiles, it runs as expected.
But then they think they don't need to test, whereas the compiler really has no idea whether the code does what it should do. All it can guarantee is that no exception went unhandled or explicitly ignored (Kotlin, Swift), that there are no data races (Rust), or that there are no null references other than explicitly declared ones (Swift, Kotlin, TypeScript).
These compiler-enforced language features are very useful, but are in the same class as the spell-checker in your text editor. Having no red, wavy lines in your document is no guarantee that the document makes any sense whatsoever.
So these are interesting and useful features. They can lead to increased safety. But they won't make your program do what it's supposed to do. At best, they help you avoid writing behavior that you most definitely don't want.
These features are nice to have, but they are not worth having at any price.
<h>Round Two: Defaming Types</h>
It was an interesting article that I more-or-less agreed with. The follow-up article <a href="http://blog.cleancoder.com/uncle-bob/2017/01/13/TypesAndTests.html" author="Robert C. Martin" source="Clean Coder Blog">Types and Tests</a> followed close on its heels because Martin apparently wanted to respond to feedback he'd received on the first article. I thought he went a bit far in the second article. For example, he emphasized that,
<bq>No, types are not tests. Type systems are not tests. Type checking is not testing. Here’s why.</bq>
That's absolutely true, but types are still related to testing. Types help me specify my interface more precisely and I can trust the compiler to enforce them. That's a lot of tests I don't have to write.
Otherwise, for every API I write, I'd have to write tests to <i>prove</i> that only the supported types can be passed in---and I'd also have to specify how my API behaves when a value with an incorrect type is passed in. Do I fail silently? How do I let the caller know what to expect? This seems not only sloppy but time-consuming. It sounds like busy work, having to think about this kind of stuff for every API.
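To make that concrete, here's a minimal sketch in TypeScript (the function and parameter names are hypothetical). With a typed signature, the "wrong type" calls can't even be written, so there's nothing left to test on that front.
<code>
// The compiler enforces the parameter contract at every call site.
function setLimit(limit: number): void {
  // ... apply the limit ...
}

setLimit(100);      // OK
// setLimit("100"); // compile error: string is not assignable to number
// setLimit(null);  // compile error under strictNullChecks
</code>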
<h>Precise Requirements</h>
Martin continues,
<bq>[...] the way <c>f</c> is called has nothing to do with the required behavior of the system. Rather it is a test of an arbitrary constraint imposed by the programmer. <b>A constraint that was likely over[-]specified from the point of view of the system requirements.</b> (Emphasis added.)</bq>
The first sentence is a useful observation. The second is hyperbole. Indicating <c>int</c> rather than <c>object</c> for a parameter called <c>limit</c> hardly seems like an over-specification. In fact, it seems like exactly what I want.
If the requirement says <i>shall allow a user to enter a <b>value</b> for limit...</i> rather than <i>shall allow a user to enter a <b>positive number</b> for limit...</i>, then I would argue that 99% of the time it's the requirement that isn't precise enough. I would not assume that the requirements engineer knew just what she was doing when she left the door open for a limit given as a <c>string</c>.
Without types, our requirements would also become bloated with over-definitions like:
<ul>
The system shall allow the user to enter a value that defines the upper limit of records to return.
The system shall throw an exception of type <c>ArgumentOutOfRangeException</c> for values that are less than zero or greater than 1000.
The system shall marshal the value to a numeric representation where necessary.
The system shall throw an exception of type <c>InvalidCastException</c> if the given value cannot be marshaled to a numeric value.
</ul>
For this specification, a developer could write:
<code>
public void SetLimit(object limit)
{
    int limitAsNumber;
    // The value arrives as an object and must be converted by hand.
    if (!Int32.TryParse(limit?.ToString(), out limitAsNumber))
    {
        throw new InvalidCastException("...");
    }
    if (limitAsNumber < 0 || limitAsNumber > 1000)
    {
        throw new ArgumentOutOfRangeException("limit");
    }
    _limit = limitAsNumber;
}
</code>
The developer could also write:
<code>
public void SetLimit(UInt32 limit)
{
    // The type system already rules out negative and non-numeric values.
    if (limit > 1000)
    {
        throw new ArgumentOutOfRangeException("limit");
    }
    _limit = limit;
}
</code>
That's actually what we <i>want</i> the developer to write, no? If you choose JavaScript to implement this requirement, then you need to over-specify, because you have to decide how to handle values with unsupported types. If the requirements engineer is allowed to assume that the implementing language has at least a minimal static type system, then the requirements are also easier to write, as shown below.
<ul>
The system shall allow the user to enter a <hl>positive integer</hl> that defines the upper limit of records to return.
The system shall throw an exception of type <c>ArgumentOutOfRangeException</c> for values that are less than zero or greater than 1000.
<del>The system shall marshal the value to a numeric representation where necessary.</del>
<del>The system shall throw an exception of type <c>InvalidCastException</c> if the given value cannot be marshaled to a numeric value.</del>
</ul>
<h>Expressive Types are a Good Thing</h>
Assuming a minimal static type system in the target language <i>saves time and effort</i>. The requirements engineer can specify more concisely and the software engineer wastes less time writing boilerplate that has nothing to do with application behavior.
Martin finished up with this sentiment,
<bq>So, no, type systems do not decrease the testing load. <b>Not even the tiniest bit.</b> But they can prevent some errors that unit tests might not see. (e.g. Double vs. Int) (Emphasis added.)</bq>
As you can imagine, I strongly disagree with the <iq>[n]ot even the tiniest bit</iq> part, based on my arguments above. If you use JavaScript, then you have to test all valid input and verify its behavior. In JavaScript, literally any data is valid input and it's up to your method to declare it invalid.
Only tests can provide any protection against your method being called at runtime with invalid data. You have to write a test to verify that your method throws an error when passed a <c>double</c> rather than an <c>int</c>. Most people will not write these kinds of tests, which I suspect is why Martin says there's no change in testing load.
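Here's a rough sketch of the kind of test I mean (the names are made up). Without static types, both the runtime guard and the test that proves the guard works fall to the developer:
<code>
import * as assert from "assert";

// Without static types, the guard must be written by hand...
function setLimit(limit: any): void {
  if (typeof limit !== "number" || !Number.isInteger(limit)) {
    throw new TypeError("limit must be an integer");
  }
  // ... apply the limit ...
}

// ...and then proven by hand, for every API that expects a number.
assert.throws(() => setLimit("100"), TypeError);
assert.throws(() => setLimit(1.5), TypeError);
assert.doesNotThrow(() => setLimit(100));
</code>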
<h>Strict Languages for Bad Programmers?</h>
I agree that the pendulum in Swift has swung too far in a restrictive direction. The language does feel pretty overloaded. I also agree that the behavior of the system itself needs to be tested and that types don't help you there.
Martin again,
<bq>On the other hand, internal self-consistency does not mean the program exhibits the correct behavior. <b>Behavior and self-consistency are orthogonal concepts.</b> Well behaved programs can be, and have been, written in languages with high ambiguity and low internal consistency. Badly behaved programs have been written in languages that are deeply self-consistent and tolerate few ambiguities. (Emphasis added.)</bq>
Agreed.
I think, though, that Martin might be forgetting about all of the people writing software who aren't the kind of people who can write a well-behaved program in a wildly inconsistent language. I, for example, am so awesome<fn> that I wrote my entire web-site software in PHP---one of the worst languages in the world for internal self-consistency---and it's been running my site for going on 18 years. Programming skill and iron discipline fill the gap left by the language's lack of consistency.
But for bad programmers? They write utter garbage in PHP. Maybe it's not a bad idea to create languages that channel poorly disciplined programmers into better practices. I take the point from the previous article (<a href="http://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath.html">The Dark Path</a>) that bad programmers will simply work their way around the rigor, where possible. They will mark every class as <c>open</c> in Swift instead of thinking about their architecture.
For those of us with discipline, the language will put up roadblocks that force us to write more code rather than less.
<h>Unfixable Errors?</h>
As a counterexample, there is Rust, which enforces reference-ownership in a way that guarantees that concurrent code is free of data races (though not of deadlocks). This is a good thing. It probably gets in your way when you're trying to write other types of programs, but it's overall a good thing.
I haven't had any personal experience with it, but I've heard that it's sometimes difficult to figure out <i>why</i> a given program won't compile. I would hope that these situations become fewer with experience, but would also be cautious because I remember programming in C++ with templates and know how much time can be lost when you don't know how to fix your program based on an error message.
<h>Non-nullable types are not a step too far</h>
I, for one, <i>like</i> that my compiler tells me when I have potential null-reference exceptions. I use attributes in C# to tell me exactly that and I use R# to find all places in my code where I have potential violations. Those are more tests that I don't have to write, if the compiler can "prove" that this code is never called with a <c>null</c> reference.<fn> It lets me write a more concise implementation and spares me a lot of scaffolding.
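TypeScript builds the same guarantee into the language with <c>strictNullChecks</c>; a minimal sketch of how the compiler eliminates that class of test:
<code>
// With strictNullChecks, a nullable value must be handled before use.
function describe(name: string | null): string {
  if (name === null) {
    return "anonymous";      // the compiler forces this branch to exist
  }
  return name.toUpperCase(); // name has been narrowed to string here
}
</code>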
<h>What about const/non-mutable types?</h>
Many years ago, I had the same experience with <c>const</c> in C++ as Martin discusses. After some time working with <c>const</c>, I started making everything I possibly could <c>const</c> in order to eliminate a whole class of mutation errors in my code. That did have consequences, at the time. Changing one thing could---as Martin describes for his hypothetical language TDP---lead to knock-on changes throughout the code base.
Generics can have this effect, as well, with changes leaking into all of the places they're used. I wrote a <a href="{app}view_article.php?id=3166">blog series</a> on having pulled back from generics in a few central places in Quino.
<h>Exceptions in method signatures</h>
I often felt the way that Martin does about Java's <c>throws</c> declaration. I imagine that I'll start to feel the same about Swift's, as well. I read once about a <a href="http://joeduffyblog.com/2016/02/07/the-error-model/">nice typing system in Midori</a>, the managed version of Windows created by Joe Duffy and team at Microsoft Research, that I felt I would like to try (no pun intended).
<h>Toward more expressive types</h>
Martin says that he uses both dynamically and statically typed languages. He acknowledges that certain extensions to the type system can be useful, just that some languages have gone too far.
I, too, think some innovations can be very helpful. I <i>like</i> immutables (types, declarations, whatever) because they let me reason better about my code. They let me eliminate unwanted code paths with the compiler rather than having to write more rote tests that I think even Martin will agree have nothing to do with the original specification or the behavior of my application.
If I can mark something as readonly because I don't expect it to ever need to be changed, that's a little note I've left for future programmers that, should they want to modify that value, they will have to make sure to reason differently about the implementation. The value was never intended to be rewritten and there are no tests for that behavior. It's a nice way of reducing the scope of the implementation.
It simultaneously restricts that scope, but that's a good thing. A program can, very quickly, do a lot of things that it should not do. I don't want to write tests for all of this stuff. I have neither the inclination nor the time---nor the budget---to write tests for things that I could instead eliminate entirely from the realm of possibility with a powerful type system.
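In TypeScript, for example, that note to future programmers costs one keyword (a sketch; the type is made up):
<code>
interface Settings {
  readonly limit: number; // never intended to be rewritten; no tests exist for that
}

const settings: Settings = { limit: 1000 };
// settings.limit = 2000; // compile error: cannot assign to 'limit' because it is read-only
</code>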
<h>Agreeing with Martin</h>
I read up on Kotlin and saw a seminar on it last year. I, too, noticed that there seems to be an "everything but the kitchen sink" feel to it. It's the same feeling I get when I look at Scala's type system, though that one is less about restriction than about letting you do everything in 3 different ways.
I've been reading through the Swift language guide and I'm getting the same feeling. It doesn't help that they have their own name and keyword for nearly every commonly known programming concept. You can use <c>self.</c> but the guide prefers just <c>.</c>, which takes some getting used to. <c>finally</c>? Nope. Use <c>defer</c> instead.
<h>An example from TypeScript</h>
To be honest, I'm also a bit dizzy at how quickly the TypeScript type system has gotten more and more complex. <a href="https://blog.mariusschulz.com/2017/01/06/typescript-2-1-keyof-and-lookup-types" author="Marius Schulz">TypeScript 2.1: keyof and Lookup Types</a> includes details on even more typing specifications that let you infer types from dynamic objects with flow-control analysis.
I think this is quite an interesting approach, akin to more functional languages, like ML and F#, where return types are inferred and even parameter types are inferred. Swift has also gone a long way in this direction. Interfaces are replaced with non-inheritable types that describe the shape of data.
Types can even be inferred by which fields you access within conditionals so that a single variable has a different inferred type depending on which path through the code it takes. It's all very exciting, but I wonder how much can be used correctly---especially by the aforementioned crappy programmers.
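A sketch of that conditional narrowing, with made-up types (the discriminant check narrows the variable along each path):
<code>
type Shape =
  { kind: "circle"; radius: number } |
  { kind: "square"; side: number };

function area(shape: Shape): number {
  if (shape.kind === "circle") {
    return Math.PI * shape.radius ** 2; // narrowed to the circle variant
  }
  return shape.side ** 2;               // narrowed to the square variant
}
</code>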
For another example, this is the TypeScript definition for JavaScript's <c>Object.entries()</c> method.
<code>
interface ObjectConstructor {
    // ...
    entries<T, K extends keyof T>(o: T): [keyof T, T[K]][];
    // ...
}
</code>
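Along the lines of the examples in the linked article, a property accessor can constrain its key parameter to the keys of the object and look up its return type per call:
<code>
// K must be a key of T; the return type T[K] is looked up from T.
function prop<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

const person = { name: "Ada", age: 36 };
const personName = prop(person, "name"); // inferred as string
const personAge = prop(person, "age");   // inferred as number
// prop(person, "email");                // compile error: not a key of person
</code>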
<h>Inferred method return-types</h>
After having used languages that have explicit return types for methods, I'm still a bit at sea when I read TypeScript code without them. I find myself hovering over identifiers to see which type the real-time compilation has inferred for them.
I agree that the code is cleaner, but maybe something's gone missing. It's harder to tell what the hell I'm supposed to pass in as a parameter or what the hell I get back from a function when the type can be a union of 3 or 4 other vaguely and sometimes ad-hoc-defined types.
For example, a lot of code constantly redefines the hash-table interface rather than just defining a type for it ... so the caller isn't restricted to implementing a specific interface. This is nice for library code, I guess, but it makes it harder to reason about the code because you don't have good names for types. This is an interesting enough experience for seasoned programmers; I can't even imagine how average or bad programmers deal with it.
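For example (with hypothetical names), compare redefining the shape inline at every signature with naming it once:
<code>
// The same shape, redefined inline at each call site...
function merge(defaults: { [key: string]: string }, overrides: { [key: string]: string }) {
  return { ...defaults, ...overrides };
}

// ...versus a named alias that gives readers a handle on the concept.
type StringMap = { [key: string]: string };

function mergeNamed(defaults: StringMap, overrides: StringMap): StringMap {
  return { ...defaults, ...overrides };
}
</code>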
I see where Martin is coming from, that he's afraid of <abbr title="Big Design Up Front">BDUF</abbr>, something he's been fighting for years by arguing that you can design as you go if you'll just <b>test</b> your code as you write it. If you see that a parameter has to be an <c>IHashMap</c>, that's easier to understand than <c>{ [key: string]: any }</c> or <c>{ [key: string]: T }</c>, where <c>T</c> is a completely different type. There are advantages and disadvantages.
<bq>Every step down that path increases the difficulty of using and maintaining the language. Every step down that path forces users of the language to get their type models “right” up front; because changing them later is too expensive. Every step down that path forces us back into the regime of Big Design Up Front.</bq>
I agree with the sentiment, but I don't know if we're there yet. Martin argues that there is a balance and maybe I need more experience with the languages he's horrified about. He does write:
<bq>I think Java and C# have done a reasonable job at hovering near the balance point. (If you ignore the horrible syntax for generics, and the ridiculous proscription against multiple inheritance.)</bq>
...which I agree with wholeheartedly. I have learned to live without multiple inheritance, but I regularly railed against its absence for decades. I have given up because the world has moved on. I would love to see proper contravariance and covariant return types and anchored types, but I've kind of given up on seeing that kind of stuff in a mainstream language, as well. Instead, I've drifted more toward immutable, stateless, functional style---even in C#. I'm ogling F#. I'm working with Swift now and will do much more of that this year.
<hr>
<ft>Did you expect me to soften that brag down here, in this footnote?</ft>
<ft>I also use R#'s solution-wide analysis.</ft>