Java Community Antipatterns, an Ongoing Series

Published at 08:35 on 12 August 2023

So I’m starting to play with the Go programming language again, mainly because in many ways it’s the anti-Java (it is not even a fully object-oriented language, by design). It was designed by Rob Pike, Ken Thompson, and Robert Griesemer; Pike and Thompson are part of the old Bell Labs software culture that is generally skeptical of object-oriented programming, for many of the same reasons that I have come to be skeptical of it. One of those reasons is that excessive reliance on the object paradigm tends to breed unnecessary complexity.

Generally, when I have a question, I can find an answer in the documentation fairly easily. This morning, I had a build system question I could not easily find an answer to. So I decided to figure it out by looking at what the developers of a well-known, open-source software project in Go had done. I chose Helm.

It only took a few minutes of examining the source code to figure it all out. With Java, I would have pissed away half a day, easy. Instead of a Makefile, there would have been pom.xml (Maven) or build.gradle (Gradle). Both are incredibly complicated compared to Make, and both inevitably involve the use of multiple plugins that are also incredibly complicated. I would have been combing through documentation and scratching my head for hours.

Instead, boom! Answer obtained, in the space of a few minutes. The way it should be.

And Helm is not a small or simple project. In other words, despite its simplicity, Go seems to be every bit as powerful and useful a language as Java. More powerful and more useful, in fact, since it is easier to use, and one can spend time focusing on designing software instead of battling the (unnecessary) complexity of the overall programming environment.

But wait, there’s more! Just like the gratuitous complexity of the standard Java class library has proven contagious, the clean simplicity of the base Go programming environment seems equally contagious. I decided to satisfy a bit of intellectual curiosity about how Helm did something. This is something that frequently takes me hours with a Java project. But not here! Within a minute, I found the relevant bit of source code and my question was answered.

Which, again, is the way it should be.

On (Not) Being a Java Careerist

Published at 07:40 on 29 July 2023

Not to slam Java careerists: they are, as a rule, very smart and talented. One just has to be, in order to deal with all the gratuitous complexity bred by the traditions of that programming community.

But here’s the thing: I don’t want to devote basically all of my mental effort to doing that. I don’t want to lose my botanical knowledge, or my wide-ranging general scientific knowledge. And I would have to in order to succeed in the Java world. The mental load is just so extreme.

Even if I wanted to, I am not sure I could. I crave knowledge in a diversity of subjects. My mind would rebel, strongly, against being forced to hyperspecialize.

In a sense, this means I’m “lazy” in that I “don’t want to work very hard” at software development. But I don’t see that as necessarily a bad thing. Why should I work harder than necessary? If there is an easier way to do a good job at something, why not choose the easier way?

Is it really intelligent behavior to continue doing something in a difficult way when you are aware that an easier way exists?

All this was, in fact, something I wondered a bit about going into this job. And I decided then that if it turned out to be the case, I wouldn’t succeed at the job, wouldn’t want the job, and would end up departing from it. And so here I am.

Why I Hate Java: An Example

Published at 20:46 on 28 July 2023

Building on this entry, let us relate a little story that transpired in the past week.

About a week ago, I make a stupid error and introduce a bug into the code. Shouldn’t be a big problem; one good thing about where I work is that there is a very extensive battery of tests for things.

But this is the Java universe we are talking about. Simplicity is not appreciated as a virtue. Both the test and build frameworks are ginormous and hypercomplex. Somehow (I still do not know why), some feature got triggered that caused the test(s) that would have detected my bug to fail to run.

Because the Java universe does not appreciate simplicity, that code base itself is ginormous and hypercomplex. If the code were not written by members of such a dysfunctional programming culture, it would have been broken up into smaller, more manageable pieces that communicated with each other somehow. The test log for the subset of the code I was working with would then have been short enough that I would probably have noticed something missing. Instead, the missing tests were buried in a little over 80,000 lines of test and build output. Can you read an 80,000-line log file without falling asleep first? I sure can’t. So I didn’t even try. Naturally, the missed tests go unnoticed.

The check-ins get rejected for other reasons, so I get to work on addressing them. Meanwhile, the whatever-it-was that caused the critical tests to get suppressed ceases to do so. So my first attempt to test the recode fails for this out-of-the-blue, off-the-wall reason. I look at my recent changes and see nothing that could cause this issue to manifest.

Not much can be done but to attempt to instrument the daylights out of the code with debug log statements and try to figure out what the heck is going on with the data as it gets operated on.

My first attempt to do so fails because the company’s network infrastructure suffers a hiccup and causes my build to fail. The company’s infrastructure is super-complex, poorly-documented, and unreliable. (Everyone else just basically accepts it because everyone else is a Java programmer and thus used to unnecessary complexity and the resulting unreliability.)

In my second attempt, I discover that for some reason the test framework suppresses all log messages. So I recode to use writes to standard output, figuring (correctly) that it won’t “intelligently” suppress those. The instrumenting turns out to be insufficient, so I add more.

Each of these iterations takes way, way longer than it should, because the code is big and bloated and complex, and so takes 30–45 minutes to build. If everything were factored into smaller units, I doubt builds would take longer than 5 minutes (if that). So each iteration takes roughly 6 to 10 times longer than it should.

Finally, after at least 4 hours of effort, I locate the bug.

And this is why I hate Java. Not because of the core language itself (dated, but still not bad, considering it was designed in the ’90s) or because of its runtime (still one of the best virtual machines out there), but because of the traditions of the community that uses it. A minor bug that would easily have been resolved in half an hour instead almost makes it into production and takes half a day to resolve.

And this happens everywhere, all the time. Everything is more difficult, more tedious, and more error prone than it should be, with a lot more busy work than there should be.

Those dysfunctional traditions are such an irritant that I have developed my own special term for them: Java community antipatterns, or JCA’s for short.

I have recently learned that I am on my way out where I work, mainly because I can’t cope with the JCA’s as well as the Java careerists. And frankly, I can’t wait till I move on. I’m already looking for another position, and it will be as far from the enterprise Java world as it can be.

Java Annoyances

Published at 07:22 on 29 May 2023

When Java first came out in the 1990’s, I gave it a try, then turned away from it. My reason was not the core language itself but its standard library, which impressed me as something of a poorly-organized and overcomplex mess.

Decades later, with some professional coding experience in that language ecosystem under my belt, that is still basically my takeaway conclusion. The worst that can be said about the core language is that it’s a bit dated (understandable, as the design is now decades old). But the overall pattern of the standard class library being awkward has extended to the language ecosystem as a whole.

Just about every third-party library for Java tends to be a special combination of big, awkward, and, given that size and ponderousness, surprisingly feature-deficient. Take the Jackson JSON library, for instance. Its current release totals just shy of 800 classes (yes, 800, I am not making this up). Yet when I tried to do something as simple and basic as generate pretty-printed output (nicely indented and formatted, with all keys in JSON objects sorted alphabetically), I couldn’t do it out of the box. (There is an ORDER_MAP_ENTRIES_BY_KEYS option, but it fails to act as advertised in all — in fact, in most — cases.) I had to write helper methods to get my output formatted as desired.

And this was after blowing most of a day poring over documentation and trying experiment after experiment attempting to get my output correct. The configuration settings in Jackson are split up amongst at least three classes, and of course the documentation for one configuration class does not mention the others. It is left as an exercise for the programmer to discover the others.

Contrast with Python, which has a simple JSON serializer and deserializer built in to the language’s core library. (Jackson is a third-party library because in Java you must use a third-party library if you want to read or write JSON; the standard Java library lacks JSON support. This, despite the standard Java library being much larger, in terms of number of classes, than Python’s library.) And there is no hunting through the documentation in Python: right there in the documentation for the json module (one module, one HTML page of documentation to read, that’s it), the indent and sort_keys options to json.dump are described. And the options work as advertised in all cases! What takes over a day to code in Java can be accomplished in under a minute in Python.
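To make the contrast concrete, here is the whole job in Python. This is a minimal sketch, but there is nothing hypothetical in it; indent and sort_keys are both documented options of the standard json module:

import json

data = {"zebra": [3, 2, 1], "apple": {"y": 2, "x": 1}}

# Pretty-printed, nicely indented, with all object keys (nested ones
# included) sorted alphabetically:
print(json.dumps(data, indent=4, sort_keys=True))

That is the entire program.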

Yes, Jackson can do deserialization into objects, with schema checking, and the built-in Python library cannot. That’s nice, dear. The basic functionality of being able to generate pretty-printed output out of the box seems to be missing. It’s like driving a luxury car with heated seats and a fancy entertainment system but no factory headlights or taillights, so you must add those if you want to drive it after dark.

And I run into this sort of thing over and over and over again. In the Java world, I am always encountering this or that use of some giant, cumbersome, poorly-documented third-party package that compels me to waste multiple hours understanding it. Or, in most cases, just partially understanding it and still making a huge number of educated guesses about it. And because those packages also tend to be surprisingly limited in functionality, one either has to pull in more huge, cumbersome, weak libraries to make up the deficiency, or add more lines and complexity to the code base.

It all ends up sending the cognitive complexity of understanding what a Java program does into another whole universe of mental difficulty.

It’s a real shame, because as I said the core Java language really isn’t too bad at all. And the core Java runtime environment is, by any objective measure, great: garbage-collected, platform-independent, with full support for preemptive multi-threading, and with a portable graphical user interface that (with a little programmer effort) manages to replicate the native look and feel on all three of Windows, Macintosh, and Linux.

But oh, those library antipatterns. They do so much to take away from the overall experience.

And We’re Back

Published at 17:06 on 22 May 2023

Ubuntu Linux’s package manager badly botched a routine upgrade and hosed my database. Thankfully, I take routine backups; it was just a matter of time until I could perform the necessary restore.

Unix, the Alarm System Call, and Power Management

Published at 20:37 on 25 April 2023

And by “Unix” I include Unix-like operating systems like Linux and MacOS. In fact, my experience is limited to Linux and MacOS in this regard, but I would be surprised if the various BSD and System V Unix systems out there with automatic power management differ much.

I have a simple alarm clock/reminder script I wrote in Python. The heart of it was the following logic:

import time

def sleep_until(then):
    # Sleep until the epoch timestamp 'then'. Returns True if there
    # was any sleeping to do, False if 'then' is already past.
    delta = then - time.time()
    if delta > 0.0:
        time.sleep(delta)
        return True
    return False

Now, the time.sleep call in Python is implemented as a call to sleep in the C standard library, which in turn has traditionally been implemented via the alarm system call. All of these accept an offset in seconds: for the sleep calls, the amount of time to sleep; for alarm, the amount of time before an alarm signal is delivered to the process.

The logic above is simplicity itself, yet from time to time my reminders would come in late! Eventually, I linked it to those times when the system suspended itself due to a lack of activity: my alerts were late by an amount that corresponded to the time the system spent suspended. Apparently, when Unix and Unix-like systems suspend themselves, time as specified to alarm ceases to pass; that system call only counts seconds that transpire while the system is awake.

The cure is to break up the sleeping into chunks, and to repeatedly check the system clock:

import time

# Never sleep longer than this in a single call, so that a
# suspend/resume can delay an alert by at most this many seconds.
MAX_SLEEP = 60.0

def sleep_until(then):
    delta = then - time.time()
    if delta <= 0.0:
        return False
    while delta > 0.0:
        time.sleep(min(MAX_SLEEP, delta))
        # Recheck the real clock; it may have jumped past 'then'
        # while the system was suspended.
        delta = then - time.time()
    return True

At least, this seems to work: I implemented the change yesterday, and alerts that spanned times when my computer was asleep got raised at the correct time. It’s a little ugly to replace a single blocking call with busy-waiting like this, but although the above logic technically busy-waits, it still spends most of its time blocked.
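For context, here is roughly how the script uses this helper. It is a sketch only: the remind function and the example message are made up for illustration (only sleep_until above is the real code):

import time

def remind(hhmm, message):
    # Turn today's "HH:MM" into an epoch timestamp, then wait for it.
    hour, minute = map(int, hhmm.split(":"))
    now = time.localtime()
    target = time.mktime((now.tm_year, now.tm_mon, now.tm_mday,
                          hour, minute, 0, -1, -1, -1))
    if sleep_until(target):
        print(message)

remind("13:05", "Meeting in ten minutes")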

Note that this seems to affect other programs as well. In fact, one of my motives for writing this script was the frequent failure of the Gnome clock app to issue alarms at the proper time.

Note also that this assumes the computer will be in an awake state at the time the alert is scheduled. If the computer goes to sleep and stays asleep, it will issue no alerts during the time it is asleep. Remedying that state of affairs requires privileged system calls that one must be careful about making. I decided that the safety of having a nonprivileged program was worth the slight chance of a missed alert; in my case, the problem almost always happens as a result of the system suspending itself over lunch break, with the alert time falling while I am at my desk in the afternoon.

Where the Rust Language Makes Sense

Published at 19:48 on 19 April 2023

Per this, I think Rust makes the most sense for things you would have otherwise written in C or C++. It is a more modern, relatively low-level language than either of those two (and is much cleaner than C++, which was an attempt to bolt all sorts of extra features onto C, and which suffered from having to remain very nearly a superset of that earlier language).

If you were not going to write it in C/C++, in other words if computing resource limitations are not a constraining factor, then writing it in Rust just doesn’t make sense. Use some other programming language with automatic garbage collection, so you don’t have to worry so much about memory management.

Which means that, for anything other than embedded systems, it is generally stupid to use Rust from the ground up. Use a higher-level language like Python. If the higher-level language proves too slow or too memory-inefficient, do some profiling, find the weak links in the chain, and rewrite those in Rust instead of rewriting them in C/C++. There are already libraries out there to facilitate doing exactly that.
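As a sketch of what that mixed-language approach looks like from the Python side, here is roughly how one might call a hot spot that has been rewritten in Rust and compiled as a C-compatible shared library, using Python’s standard ctypes module. The library name (libhotspot.so) and the dot function are hypothetical, invented for illustration:

import ctypes

# Hypothetical Rust cdylib exporting:
#   #[no_mangle] pub extern "C" fn dot(a: *const f64, b: *const f64, n: usize) -> f64
lib = ctypes.CDLL("./libhotspot.so")  # assumed name, for illustration
lib.dot.argtypes = [ctypes.POINTER(ctypes.c_double),
                    ctypes.POINTER(ctypes.c_double),
                    ctypes.c_size_t]
lib.dot.restype = ctypes.c_double

def dot(xs, ys):
    # Marshal two Python sequences into C double arrays and call into Rust.
    arr_type = ctypes.c_double * len(xs)
    return lib.dot(arr_type(*xs), arr_type(*ys), len(xs))

Libraries such as PyO3 automate most of this marshalling; the point is that the rest of the program never needs to know the hot spot is no longer Python.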

And that is why I can’t feel much love for Rust: because right now I am not running up against any resource constraints that make Python, Java, or Kotlin impractical.

Not Feeling the Love for Rust

Published at 00:56 on 18 March 2023

The Rust programming language has been hyped as the “most loved” one for several years now. I personally can’t feel the love.

Sure, Rust is fast. So what? Python, Java, and Kotlin are almost always fast enough for me.

Rust’s memory management is nowhere near as convenient as the memory management in a garbage-collected language (like Python, Java, or Kotlin). There are all sorts of confusing rules to remember; it took me three readings of those rules to finally get it. By contrast, with garbage collection, I don’t have to worry about stack versus heap, borrowing, boxing, and all that crap. I just pass objects around as easily as Rust passes primitive types, and everything just works. The computer handles all the details behind the scenes, leaving me free to concentrate on other things.

Yes, yes: efficiency. Again: so what? Garbage-collected languages are fast enough for me. As Knuth once said, “Premature optimization is the root of all evil.”

Suppose for some reason I bump into a need to go faster than Python or the JVM allow. Then what? Still not sold on Rust. I’ve run into this sort of thing before: the problem was caused by one tiny bit of code, and rewriting that little bit of code in C (which both the JVM and Python can easily call) fixed the performance issue handily. Yes, C isn’t as nice as Rust when it comes to memory management, but it was only one little bit of code, and unlike with Rust (for which such support is still in its early stages), it is super-easy to call C from within Java or Python.

And then we get to graphics. Some of my recent coding has involved making tools to modify graphics files. If one is modifying graphics files, a graphical user interface is almost always essential. Python supports Qt, and Java has Swing. Both are excellent cross-platform GUI libraries that, with care, can achieve results that look almost as good as a native application. Rust basically has only GTK, which is a poor second-class citizen for anything other than Linux.

Maybe if I was doing lots of embedded systems on resource-constrained platforms Rust would be more appealing to me. But I’m not, so it’s hard for me to feel the love.

The iCloud Disk Is a Racket

Published at 23:10 on 16 March 2023

A racket is best described, I believe, as something that is not what it seems to the majority of the people. Only a small “inside” group knows what it is about. It is conducted for the benefit of the very few, at the expense of the very many.

— Smedley Butler

The iCloud disk (officially, iCloud Drive) is Apple’s cloud storage service. Apple gives away a basic amount of storage to all its users, and charges for extra.

The rub is that just about every Apple program is configured to put just about everything it saves on iCloud by default. Even very big, bloated things. Especially very big, bloated things. It is possible to turn this off, but it is not easy or obvious, and, as I said, just about every Apple program is configured to use iCloud heavily, so you must fight with app after app to stop it from dumping megabytes of crap onto iCloud.

The biggest offender is iOS (the iPhone and iPad operating system itself), which is of course configured to back up everything to iCloud by default. How to turn this off (and how to back up an iPhone to a local disk) is described here. Note that you should definitely plan to back up your device to a local disk regularly if you turn iCloud backups off.

The natural consequence is that iCloud fills up quickly. At that point, every Apple device you own will breathlessly and ominously announce that your iCloud storage is about full and recommend purchasing additional storage. Actually, the warnings come well before that point, at around the 80% mark. Since iCloud comes with 5 gigabytes for free, that amounts to getting warned about full storage when you in fact still have a gigabyte of free space left.

It doesn’t recommend you investigate why iCloud is filling up, of course. That might result in the user not agreeing to spend money in perpetuity on iCloud. There are ways to investigate usage, but they are not obvious.

I did the work and it was astounding how much crap various Apple programs had stuck there. Most people won’t do that. They will just cough up the dough every month to make their devices shut up.

Agile: A Crap Process for Making Crap Software

Published at 17:36 on 15 December 2022

Take a look here. The very first principle listed is:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Its position at the top of the list drops a serious hint that this principle is the most important one of all. Any lingering doubt is cleared up by the phrase “highest priority.”

This is, quite frankly, crap. It is predicated on the premise that what customers want most of all is continuous updates and change.

Continuous updates and change are among the chief reasons why my industry’s products suck so much.

Gone are the days of physical tools like hammers, drills, and saws, or even more complex ones like bicycles and motor vehicles: physical devices in a physical world whose limitations bind form to function. Instead, we have user interfaces driven more by fashion trends amongst the UI design crowd than by anything else.

Learn to ride a bicycle and you have learned to ride a bicycle for life. Learn to use a mechanical typewriter and you have learned how to type for life. Learn how to use the most recent version of Microsoft Word and what you just learned will be obsolete within a few years.

Lack of stability, plus a near-absolute decoupling of form from function, are two of the very worst things about software.

And the agile manifesto actively mandates one of these two evils. As its highest priority.

It’s not all evil, of course. There is this one in there:

Simplicity — the art of maximizing the amount of work not done — is essential.

Well, gold star. “A” for effort. You got one dead right.

The trouble is, this is the tenth of twelve principles. It appears way down on the list, below a first principle that is unambiguously proclaimed the highest priority.

A first principle based upon a monstrously wrong premise.

And this is the most popular software development methodology within my field today.

No wonder most software tends to be crap.