Re-Discovering the Advantages of .NET

Published at 09:06 on 28 January 2026

Fifteen or so years ago, as an exercise in curiosity, prompted by how often I saw the technology mentioned in job listings, I decided to check out Microsoft’s .NET framework. I was expecting to come away feeling smug about how much better competing technologies more popular in the Linux world were.

Surprise No. 1: I didn’t have to just read about it. .NET is built on an open standard, the Common Language Infrastructure (CLI), whose centrepiece is the Common Language Runtime (CLR), and there was what turned out to be a very nice open source implementation of it called Mono. Which I proceeded to install on my Mac and play with.

Surprise No. 2: It (both the C# programming language and the .NET framework) was well designed! This one floored me, given how sucky I generally find things that Microsoft has been heavily involved in. C#’s designers obviously learned from Java’s mistakes, particularly when it came to designing a standard library. And, frankly, they had to do a good job. Unlike with its operating systems and desktop environments, long-established market leaders that could get away with coasting on their momentum, here Microsoft was the challenger: Java was the clear market leader in virtual machines that ran byte-compiled code. If Microsoft didn’t do a good job, people would just stick with Java, which runs just fine on Windows.

I ended up writing a bunch of command-line utilities in C# and a web site using ASP.NET. It even led to a job where knowing both .NET and Linux servers was the special sauce that got me hired.

But that job didn’t last forever, and there was still a lot of anti-Microsoft tradition that caused most of the open source world to dismiss .NET and Mono out of hand. I could tell I was probably not going to luck out like that again, so I shelved .NET in favour of technologies more common in the open source universe.

Fast forward 15 years and Microsoft has now open-sourced .NET and merged its codebase with that of Mono, meaning the two formerly separate projects are now effectively one.

I have been struggling in the past few days with how to integrate authentication into a web app I am writing. Rolling your own is generally frowned upon (it’s surprisingly complicated; you have to deal with sign-ups, account deletions, forgotten-password resets, perhaps two-factor authentication, and so on). But the off-the-shelf solutions available for Python or Node.js just plain suck.

Mainly, they don’t have the flexibility I need. You see, I need access to the actual password used to log in, because I am using it to derive an encryption (and decryption) key used to protect sensitive per-user data in my database. One of my web app’s selling points will be that even I won’t be able to know your secret data. Most authentication services and libraries simply don’t support this: you never see the user’s password, because you don’t prompt for it yourself.
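
The scheme is straightforward to sketch. Below is a minimal, hypothetical illustration (the function name and parameters are mine, not the app’s actual code) of stretching a login password into a per-user encryption key with PBKDF2 from Python’s standard library; a real deployment would pair this with an authenticated cipher and careful key handling.

```python
import hashlib
import os

def derive_user_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # Stretch the login password into a 32-byte encryption key.
    # The salt is stored with the user record; the derived key lives
    # only in memory for the session, so the server operator never
    # holds what is needed to decrypt the user's data at rest.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

# At sign-up: generate and persist a per-user random salt.
salt = os.urandom(16)

# At login: re-derive the same key from the password the user just typed.
key = derive_user_key("correct horse battery staple", salt)
assert len(key) == 32
```

The key requirement this imposes is exactly the one described above: the application, not some third-party service, must see the plaintext password at login time, because the key is never stored anywhere.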

So I check out what sort of authentication systems the .NET world has to offer, and immediately find one that doesn’t suck: one of its key design principles is in fact to let their clients do the prompting for authentication credentials, because, guess what? They just might want access to them, themselves. Cluefulness, what a concept.

Then I find out that I don’t need that product at all, because ASP.NET comes with a surprisingly capable identity management system built in. Which, while it doesn’t let you do your own prompting for credentials by default, does offer it as an option.

Database access is better, too. Most open source object-relational mappers (ORM’s) are flat-out terrible. They force you to code all sorts of repetitive boilerplate to mirror what’s already in your database schema*. Instead of simple, logical, expressive SQL, you have to use awkward and clunky chains of method invocations. It’s bad enough that I’ve written my own ORM for Python. It wasn’t that hard, and it’s a whole lot nicer to use.

* How utterly asinine this is becomes clear when one realizes that one of the key characteristics of a relational database is the ability to use queries to programmatically deduce the schema of an existing database. Most ORM’s are, in other words, forcing the programmer to do manually what they could do automatically themselves.
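
The point is easy to demonstrate: the database will happily describe its own schema if asked. A small sketch using Python’s bundled sqlite3 module (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id       INTEGER PRIMARY KEY,
        email    TEXT NOT NULL,
        created  TIMESTAMP
    )
""")

# The database already knows its own schema; just ask it.
# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
columns = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(users)")}
print(columns)  # {'id': 'INTEGER', 'email': 'TEXT', 'created': 'TIMESTAMP'}
```

An ORM could run exactly this sort of query at startup (or at code-generation time) and spare the programmer from restating every column by hand.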

Well, the two most popular ORM’s in the .NET world, Dapper and Entity Framework, are both best of breed. They don’t suck. Entity Framework, paired with C#’s LINQ, even gives you query expressions as first-class language constructs.

Then we have file-based routing, where you create a new file and get a new route automatically, something that Apache did 30 years ago (and still does today) but many modern open-source frameworks (particularly in the Python universe) still can’t do. Another win.
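
The idea behind file-based routing is simple enough to sketch in a few lines: walk a directory of pages and derive a URL from each file’s relative path. This is a toy illustration of the concept, not any particular framework’s algorithm:

```python
from pathlib import Path
import tempfile

def discover_routes(pages_dir: Path) -> dict[str, Path]:
    # Map URL paths to page files: pages/about.html -> /about,
    # pages/index.html -> /, pages/docs/setup.html -> /docs/setup.
    routes = {}
    for page in pages_dir.rglob("*.html"):
        rel = page.relative_to(pages_dir).with_suffix("")
        url = "/" + "/".join(rel.parts)
        if url.endswith("/index"):
            # index files stand for their containing directory
            url = url[: -len("index")].rstrip("/") or "/"
        routes[url] = page
    return routes

# Demo with a throwaway directory tree.
with tempfile.TemporaryDirectory() as tmp:
    pages = Path(tmp)
    (pages / "docs").mkdir()
    for name in ("index.html", "about.html", "docs/setup.html"):
        (pages / name).write_text("<h1>hi</h1>")
    routes = discover_routes(pages)
    print(sorted(routes))  # ['/', '/about', '/docs/setup']
```

Drop a new file in the directory and the next scan picks it up as a route, with no registration code, which is all Apache’s document-root model ever did.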

Documentation is another big win. .NET has some of the best in the business. Nearly everything is covered by both tutorials and comprehensive API documentation, the latter of which is liberally supplied with examples. It’s not just documentation, either; there is all sorts of help for the programmer in the form of what the .NET world calls “scaffolding,” in which example code can be created for you on request. It’s almost always easier to do something by modifying existing code that comes close to what you want, rather than to start from a completely blank slate.

It’s just generally a better developer experience all around. Normally, you pay for convenience like this, typically in the form of poorer performance. Not this time: ASP.NET sits at the very top of web framework performance benchmarks.

It’s not all roses. .NET is arguably overengineered (just look at function parameters: you have normal parameters, out parameters, named arguments, ref parameters, and readonly ref parameters). And there are at least four different ways to template and generate web pages in ASP.NET.

But while the overengineering is tiring at times, there’s still nothing as bad as the hideous shambolic mess that is the Javascript module and import system. And, arguably, it does make for a lot of choices, choices that I will be taking advantage of to develop exactly the sort of web application that I want.

Evaluating Node.js as a Client-Side Technology

Published at 14:49 on 24 January 2026

Executive Summary

  1. I can see it perhaps making sense in some corporate situations,
  2. I am not personally in such a situation, therefore, it does not make much sense for me, and
  3. The Javascript module system is (or should I say systems are) absolute garbage.

How I Got Here

Prompted by data like this, I decided Node.js and frameworks based upon it were worth a closer look, since I am starting a web-based software project for a nonprofit I am associated with.

First, I want to choose something that someone else can easily step in and maintain, and that means choosing something in common use. Second, at least some of what is in common use is likely so for a reason.

A word or two is first necessary on that chart. It’s what pops up highest when I ask Duck Duck Go about the most popular web frameworks. I did once find its source: a poll taken at a major conference of web developers in 2024. So it is at least based on real data, which is more than one can say about most lists of top web frameworks. But Node.js is not a web framework; it is an implementation of a programming language, Javascript. Why it appears anyway becomes clear if you scroll down the right sidebar and look at the question asked, which was about “web frameworks and web technologies [emphasis added],” and not strictly frameworks.

But I digress. There are, however, frameworks based on that language, and they come up repeatedly (and are the top rankers) in that chart.

I’ve dabbled in client-side Javascript over the years, simply because one cannot avoid doing so. It is just not possible to do all that needs to be done purely in declarative HTML. But the key term here is “dabbled.” I have deliberately avoided the sort of Javascript overuse that harms the user experience for so many on the Web, but it is still relatively easy to run into situations for which the easiest and simplest solution is a little bit of client-side scripting.

So I install the latest version of Node.js, and because I have personal experience with how Javascript’s lack of static typing makes it easy to write and difficult to find coding errors, I install Typescript as well.

Module and Import Hell

Right off the bat, I run into obstacles of the sort I seldom do with a new programming language. Typescript blows up, big time, spewing error after error for a small snippet of code that has absolutely nothing wrong with it so far as I can see. After some time, and some sleuthing on the Web, I pinpoint the cause as having my import resolution options incorrectly configured.

Which right off the bat shows that said subsystem is hot garbage in the Javascript world. Just for openers, there are at least three kinds of modules (CommonJS, AMD, and ES modules, for starters). Almost all programming languages get by fine with just one, but not Javascript. Modules and imports are super-simple in most programming languages. There should be no need to configure things. Imports should just work out of the box, like they have for the vast majority of the world’s programming languages for at least the past fifty years or so.

If you have to write a lengthy web page to explain how your programming language’s imports and modules work, long enough to require a special summary section, and that summary itself takes up three screenfuls, you have a problem.

I finally figured it out and got my test script to compile, but I basically blew a day doing so, and it’s still largely a crap shoot for me how to properly import a given module, because the rules are so complex and I have yet to fully internalize them.
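
For what it’s worth, much of the confusion concentrates in a handful of tsconfig.json compiler options. A configuration along these lines (a plausible starting point, not necessarily what any given project needs) tells Typescript to follow modern Node.js resolution rules:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "esModuleInterop": true,
    "strict": true
  }
}
```

That a newcomer must know these options even exist, before a trivial import will compile, rather proves the point.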

To reiterate: this aspect of Javascript is hot garbage. No other honest assessment is possible. Javascript does modules and imports worse than any other programming language I know of.

Random “WTF Javascript” Stuff

Javascript has more than its share of strange quirks. Most of these are related to it being a weakly-typed language. Probably the worst thing is that Javascript has two sets of equality and inequality comparison operators, which differ in the details of the type coercion they apply to their operands (most languages get along just fine with one). More than one web site is dedicated to pointing this all out.

While this does make for a quirky and sometimes annoying experience, it doesn’t rise to the level of malicious complexity, bordering on total unpredictability, that the module and import system does. Moreover, a lot of it simply comes with the territory when one has a dynamically-typed language. For most of the “WTF Javascript” examples I have seen, if I feed the analogous expression to Python (which is famed for being a “clean” and logical language), I get a similar result. So I could have coped with this if it was all there was.
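
To make that concrete, here are a few expressions of my own choosing (not drawn from any particular “WTF Javascript” site) run through Python, where some of the quirks have direct analogues and one notably does not:

```python
# Dynamic typing brings coercion quirks everywhere, not just Javascript.
# bool is a subclass of int in Python, so booleans behave like numbers:
assert 0 == False
assert 1 == True
assert True + True == 2

# Floating-point surprises are universal (IEEE 754, same in Javascript):
assert 0.1 + 0.2 != 0.3

# Where Python differs: it refuses cross-type coercion outright, so the
# analogue of Javascript's  '1' == 1  (which is true there) is simply False:
assert ("1" == 1) is False
```
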

As such, I almost didn’t include this aspect of the programming language in this article. But I figured I’d mention it here because if I didn’t, people would keep commenting and telling me about it.

Generating HTML with JSX

Why did I persevere? In part because of JSX (and its Typescript analogue TSX). It’s really quite the clever innovation for generating HTML. Not quite so clever as it first appears, but it’s still pretty nice. It’s definitely the easiest and most logical (and one of the most powerful) ways of letting the user generate custom markup tags that I have run across.

Eventually, however, the conclusion was inescapable: there is just too much mismatch between Javascript and what I want to do for a 100% Javascript framework to make much sense for me.

Event-Based Programming

Javascript gives the programmer a single thread and an event-based programming model with a dispatch loop. It’s evidently very efficient; benchmarks give Node.js a decided edge over alternatives such as Python.

I think, however, that in most cases this is a “so what?” moment. Benchmarks give C and C++ a decided performance edge over languages that do automatic memory management. Yet most programs are no longer written in C/C++, because the mental overhead of dealing with manual memory management is a drag on programmer productivity, and the overhead of finding and dealing with memory mismanagement bugs is an even bigger drag yet.

My contention is that forcing the programmer to manually manage threading and context switching is approximately as much a drag on programmer productivity as forcing him or her to manually manage memory allocation. And just as the latter has the automated solution of garbage collection to relieve the programmer of cognitive load, the former has the solution of preemptive multithreading.
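
As a sketch of what that relief looks like in practice (using Python’s standard library; the URLs and the fake fetch function are invented for illustration), ordinary blocking code plus a thread pool gets you the concurrency without any callbacks, awaits, or explicit yields:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(url: str) -> str:
    # Stand-in for a blocking network call. No awaits, no callbacks,
    # no event loop: the OS scheduler handles the context switching.
    time.sleep(0.1)  # simulate I/O latency
    return f"response from {url}"

urls = [f"https://example.com/page/{i}" for i in range(5)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fetch, urls))
elapsed = time.monotonic() - start

# The five 0.1-second "requests" overlap instead of serializing to 0.5 s,
# yet fetch() reads as ordinary straight-line blocking code.
assert elapsed < 0.45
assert results == [f"response from {u}" for u in urls]
```
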

Yes, it’s not as fast. Again, so what? Processors are faster than they have ever been. Raw speed is not nearly so important as it once was.

And how much faster is it? The figure thrown around on the Net is that Javascript comes out 40–70% faster than Python for back-end web software. Frankly, that is a very modest improvement, considering that Python is one of the slowest languages out there. What happens when you compare Node.js platforms to those coded in compiled languages? Let’s just say it starts looking a lot less impressive for Node.js.

It also must be mentioned that while Python’s ability to multithread has historically sucked, it is about to get a whole lot better.

Culture, or Tradition Being Mightier than Innovation

One of the most overlooked aspects of any framework is the culture and traditions that have evolved around it.

Java really isn’t that bad a language, when you consider its core features. Sure, it seems dated now, but that design represented state-of-the-art consensus in the 1990s. That Java is still around after all those years shows its design got at least some things right. The problem with Java is all the dysfunctional culture and traditions that are present in that language’s community.

Well, part of the traditions of Javascript is where it started: in the browser, as a relatively simple scripting language that was thrown together in a hurry. That use case grew, eventually growing to encompass server-side code, and the language acquired more features as a result. There was never a great deal of careful planning in the process; features were just added in a mostly ad hoc manner, as needed. Sometimes this resulted in features that didn’t grow well (this is the source of much of the module and imports mess).

Because of this history, there is this hidden implicit assumption in the Javascript world that web pages will have a lot of Javascript in them. Javascript frameworks, even server-side ones, tend to make this the path of least resistance for doing anything. There is, to put it mildly, insufficient appreciation of the pitfalls of doing this in the Javascript community, both client-side and server-side.

Javascript frameworks make it easy to write web pages with lots of Javascript in them, pages that perform poorly on mobile devices and when networks are slow. They make it difficult to write anything else.

The Real Argument for Javascript

It’s that there is currently, and has historically been, no alternative for client-side scripting. Web Assembly is in the process of ending that, but it has not yet done so. Plus sheer momentum means that Javascript on the client side won’t be going away any time soon.

So there are a lot of workplaces that have teams coding in Javascript for the front end. Probably too many of them, given the current overuse of client-side scripting, but I digress. Javascript is already in use, and wouldn’t it be nice to be able to move people between the front-end and back-end teams as demand for labour requires? Or maybe get rid of the front-end and back-end distinction and just hire for full-stack positions? If all the code was in the same programming language, that would be a whole lot easier.

And, fortunately enough, it just so happens that there are server-side implementations of Javascript, and while they can’t match the performance of compiled languages, they actually perform very respectably for an interpreter. Major win!

And that explains the current popularity of server-side Javascript.

But That Is Not My Situation

I am developing a service that I want to work as well as possible on mobile devices with subpar Internet connections. That means, wherever possible, using plain HTML, eschewing the Ajax pattern (which I consider mostly an antipattern). I don’t need deep Javascript knowledge to do that.

I am not trying to bully people into installing a custom app (which then proceeds to make money for me by spying on the user) by having my web site be slow and unusable on mobile devices, because I am not a slimy capitalist. I don’t personally have the time to write both an app and a website, so just writing a mobile-friendly website serves my best interest as much as it does the user’s. Win/win!

I already know Python well, and Python is squarely within a virtuous cycle of programmers with healthy design patterns attracting other programmers with similarly healthy patterns. The Python standard library is more comprehensive than the one for Node.js. The overall quality of open-source, third-party Python libraries in my experience exceeds that in the Node.js world. Python’s modules and import logic are sane, not a shambolic mess. Python allows event-based programming but doesn’t force it on me.

While Python isn’t the No. 1 back-end web language, it is No. 2 or 3, and over all, for all uses (not just the web), ranks No. 1. As such, there are plenty of Python web frameworks out there. While they are not as performant as Node.js frameworks, they aren’t that far behind, and should do a fair bit of catching up soon.

And as far as recoding for better performance goes, I have written web sites using C# and ASP.NET (currently sitting at the very top of the benchmarks, and no, you don’t need Windows to run it), and I found it an easier technology to adapt to than Javascript (just for openers, there is but a single module and import system, and it behaves predictably). Even old-school JavaServer Pages, when built with a simple and predictable build tool like Ant, was less painful. Why should I choose an option that has more pain and less performance?

Node.js just doesn’t seem to make much sense for me.

The Problems with React

Published at 10:30 on 15 January 2026

If you look at most lists of the most popular web frameworks, React ranks at the top. (Node.js is not a framework; it is an implementation of the JavaScript programming language. It turns out that the question which prompted that survey response was about “web frameworks and web technologies,” not strictly frameworks, which explains some of the answers listed.)

React’s popularity explains why so many web sites suck, particularly when used on smartphones and in other situations with slow network connections. The entire premise of React seems to be that everyone has a speedy connection to the Internet.

React is sold as a framework that does server-side rendering. This is highly misleading. Yes, it does do some rendering on the server, but then it repeats the entire exercise on the client. This is part of how its “hydration” process works: the client builds a second, entire document object model called the virtual DOM (VDOM), then reconciles the differences between the VDOM and the real DOM. It does server-side rendering merely to help conceal what a slow, booger-eating fat pig it is under the hood.

And it can download a lot of code to do this. I created, as a test, a simple “hello, world” page using React and Next.js. When I say “hello, world” I mean it literally; that two word sentence was its entire content. It was about half a megabyte in size. I am not making this up: half a megabyte. For two words.

Now, Next.js has something of a reputation for page bloat, but still. It is also sold as doing server-side rendering, which misled me into thinking maybe it could deliver reasonably-sized content.

The root of the problem is a React design goal, that of so-called isomorphic code, wherein the exact same code runs on both client and server. This inevitably leads to an unnecessarily excessive amount of JavaScript being sent to the client. Forcing the programmer to be aware of the distinction between client and server is actually a good thing, as it leads to more performant code.

There are far better ways to use JavaScript. I just wrote a small web application using Express JS and the rendering engine from an open-source static site generator (because the latter allowed me to use JSX).

It is not a minimalist “hello, world” page; it has interactive content. It does hydration, too, by hand. Since I hydrate the page by hand, I have complete control over how much client-side rendering I do, and how much of a JavaScript payload burden the page has.

Total size of all assets sent to the browser: under 10 kbytes. Yes, it’s that simple. No, it is not isomorphic. So what? Again, the entire size of the page, both procedural JavaScript and declarative HTML and CSS, is under 10 k. Hardly a great burden of additional complexity. So much of the bloat you see on modern web sites is the result of pure laziness and dubious design decisions. It doesn’t have to be this way.

React, I think, is rotten down to its very roots. It was developed by Meta, the Facebook people. They don’t care how much their site sucks on mobile devices. In fact, they actively want it to suck, because they would prefer people install their app, which spies on its users. React is a framework that reflects the moral bankruptcy of the organization that sponsored its creation.

Re-Evaluating Kotlin

Published at 15:40 on 15 December 2025

Back in 2019, I wrote:

The pity is that once one does things other than Android software development in Kotlin, the rough edges in its ecosystem quickly become all too apparent. Just out of curiosity I’ve been playing with the Ktor server-side framework. The documentation ranges from flat-out obsolete (and thus incorrect) to simply nonexistent. The result is that even simple things take hours of tedious experimentation to determine how to do.

I’m hoping that Android development goes better, but unless those rough edges get smoothed out, and soon, Kotlin may well end up being stereotyped as an Android-only thing.

Only last month, I wrote:

The big problem with Java is not the language, it is the culture around the language. I have a friend with a number of pet sayings, one of which is “tradition is mightier than innovation,” and that certainly applies in spades to Java. There’s just so much bad tradition enshrined as respected convention in the Java world. I call the result Java Community Antipatterns or JCA’s for short.

      ⋮

If it weren’t for the JCA’s, Kotlin would be a near-ideal programming language. If it weren’t for the heat and dryness, Phoenix would have a near-ideal climate. (And aside from that, Mrs. Lincoln, how did you enjoy the play?)

But, I ended up concluding:

The Java community’s faults may be legion, but Java set out to be a “write once, run anywhere” language, and its virtual machine has to this date succeeded at that goal better than any other such environment of which I am aware.

So I’m back to coding in Kotlin, and once again Kotlin’s nemesis becomes clear. It is its proximity to the Java universe, and by implication the universe of JCA’s.

The Java virtual machine’s advantages make the concept of a language like Kotlin tempting. It is why JetBrains decided to create Kotlin. It is why I have been coding in Kotlin.

Unfortunately, realizing that concept is a very tall order. Quite simply, it is not exactly easy to take a complicated, badly-designed, antiquated ecosystem and attempt to layer a more rational, more modern, more well-designed one on top of it. This problem becomes all the more acute if said ecosystem is associated with a culture that has enshrined harmful antipatterns as part of its respected traditions. The Kotlin development team is obviously trying as hard as they can, but it doesn’t matter: their effort still falls short.

I am picking up a project I put aside about five months ago, due to a need then to focus on other more pressing goals. Then, I had foundered for some time on serialization. Kotlin has what is in theory a great serialization subsystem. Of course, given the Java world it was layered on, it took no small effort to implement it. And the latest version of Kotlin at the time hadn’t quite got its implementation right; I was getting bitten by those bugs, which were causing my code to throw exceptions and die. JetBrains was aware of those issues, and planning to fix them in the future, but that didn’t help me in the here and now. Eventually, after blowing several days on the issue, I found the magic combination of an older Kotlin compiler and serialization library that did not make these bugs manifest.

That resolved my issue, but enough time had transpired that when I recently resumed my efforts, one of the things at the top of my agenda was to upgrade to the current versions of things and see if JetBrains had fixed the bugs. I did, and they had. So far, so good.

The problem is, the build system (based on Gradle which, coming from the Java world, is the standard shambolic mess you find there) has now for some reason started producing a corrupt Jar file. The jar command-line utility can list and process the Jar file. It can locate the class I am trying to invoke. Yet when I attempt to invoke it, the JVM claims it cannot find the class.

So now I have to troubleshoot that build system and figure out why it is (once again) failing me. And this is the sort of shit that keeps coming up in the Kotlin world, simply due to its proximity to the Java one.

Where this goes, I am not sure. Maybe I will find a resolution or a workaround, like I did the last time the build system started spitting out Jar files that were corrupted in a slightly different way. Maybe not. Maybe I will just give up on Kotlin, despite its advantages, because the disadvantages simply outweigh them.

Cutover Complete

Published at 13:17 on 19 November 2025

If you can read this, it means the site cutover is (hopefully) complete. This site is now being hosted by a server in Canada, in a server farm being run by a non-US (French) entity.

The DNS name service (for you non-geeks, that is what makes the blackcap.name part of this site’s address work) has yet to be cut over, but that is because there is about a year left on my existing contract, and I am being cheap. I already have an existing business relationship with a Canadian domain registrar, and if things start going south rapidly it won’t be terribly hard to cut that over.

Given the current state of affairs, I think it best to remove as many dependencies on US-based organizations as possible. This is not exactly a prominent blog, but still, better safe than sorry. With authoritarian bastards, you never know.

Portable GUI Framework Principles

Published at 08:25 on 17 November 2025

There are two major schools of thought when it comes to portable graphical user interface frameworks:

  1. To create applications that harmonize as well as possible with the rest of the platform an application happens to be running on.
  2. To create applications that appear as alike as possible, no matter what platform an application happens to be running on.

It is my contention that, for desktop applications, the first principle is the correct one, and the second principle is a harmful design anti-pattern.

The reason is quite simple: most users use one sort of platform, and that is it. You have Windows users, Mac users, and Linux users, and those users tend to stick with their desktop platform of choice. So they don’t care how a given application looks on some other platform. All they care about is how well it harmonizes with the rest of the platform they do use. This makes the basic rules of the game the same no matter what application they happen to be using. I don’t care how special you think your own application is, so far as the average user is concerned, it’s just another tool in their toolbox.

And it is the users, not the developers, who are the truly important ones here. The users are the ones for whom the application was written, after all. They vastly outnumber the developers.

Yes, some users switch between platforms of different types. That is their choice. As part of their choice, they have accepted the natural consequence of having to deal with multiple pattern languages. There is no way to be such a user and to not have this consequence imposed on one. As such, an application whose look and feel varies from platform to platform does not impose any significant new onerous cost on such users.

I am speaking here about desktop applications. For smartphone applications, the matter is quite different. This is because, despite all the hullabaloo about which is best, the Android look and feel really does not differ that much from the iPhone look and feel. It is one of the things that really struck me when I moved from Android to iPhone. Yes, there were a few places where I got confused and had to get used to doing things “the iPhone way,” but surprisingly few. For the most part, I was able to just pick up my new iPhone and start using it.

Since the pattern languages of the two smartphone platforms are so similar, it is sensible to have a goal of a smartphone app appearing as alike as possible on both platforms. It makes the jobs of your documentation writers and user-support people easier, and it does so at approximately zero cost to any end user.

It is even less appropriate to have a goal of making a smartphone app and a desktop application appear as identical as possible than it is to have such a goal for two different desktop platforms. This is because the two basic types of platform are so radically different. A smartphone is profoundly resource-deprived compared to a desktop system. The desktop can be an appropriate place to host large, complex applications (assuming large, complex things need to be done). A smartphone is never an appropriate place for such an application.

What this all means is that frameworks like Avalonia which prioritize Principle No. 2 above should be seen as primarily smartphone frameworks that also happen to support the desktop… badly.

Alternatives to the JVM for Portable Desktop Applications

Published at 19:17 on 16 November 2025

I have been using the JVM (Java Virtual Machine) to host desktop applications I develop. Originally I wrote the code in Java, but in recent years have switched to Kotlin, because it is a more modern language with a more concise and expressive syntax and a more sensibly-designed standard library.

The big problem with Java is not the language, it is the culture around the language. I have a friend with a number of pet sayings, one of which is “tradition is mightier than innovation,” and that certainly applies in spades to Java. There’s just so much bad tradition enshrined as respected convention in the Java world. I call the result Java Community Antipatterns or JCA’s for short.

Kotlin is better in the JCA department, but due to its intellectual proximity to the Java world, some of the Java brain rot has inevitably bled over, so Kotlin still has its issues. To pick just two examples:

  • Its Ktor Client HTTP request library is a lot more complex than it ought to be (way more complex than the Requests library that is common in the Python world or the standard System.Net.Http package of the .NET world). Despite the complexity, some of its features, such as bearer token management, still manage to fall short of what is commonly needed.
  • Its lightweight multithreading is likewise overcomplicated (there are both Job and Deferred objects, very similar yet subtly different, where .NET gets along just fine with a single Task object).

On top of that, for a lot of things, there simply isn’t a Kotlin library. You end up calling a Java library. That is easy enough to do, because Kotlin runs on the JVM and was designed to interoperate with Java code, but the Java library is inevitably a lot clunkier and harder to use than it ought to be, due to those JCAs.

If it weren’t for the JCAs, Kotlin would be a near-ideal programming language. If it weren’t for the heat and dryness, Phoenix would have a near-ideal climate. (And aside from that, Mrs. Lincoln, how did you enjoy the play?)

So I decided to kick the tires on potential alternatives yet again. Always a good idea, because the state of the art is always in flux in the computing world. And the answer I got was: despite the flaws of the JVM, it is still hard to beat.

Mostly, it boils down to three things the Java world got right:

  • To prioritize platform agnosticism.
  • To include graphical user interface (GUI) capability in the core framework.
  • To prioritize, or at least facilitate, making the graphical elements in Java GUI programs harmonize with the overall pattern language of the platform the application happens to be executing on.

The first two encapsulate the “write once, run anywhere” philosophy that has been one of Java’s key design principles basically since Day One. Other virtual machines still just don’t do as good a job of actualizing this principle.

This most often manifests when graphical desktop applications are involved. Pretty much any virtual machine out there will do a great job of portably running command-line utilities or daemons that run as detached jobs.

Python, for example, has a great cross-platform GUI library called PyQt. Alas, Python doesn’t ship with it; one must add it on. And, like many Python libraries, it isn’t written purely in Python. In fact, it’s mostly written in C++, a language that compiles down to machine code, not portable byte code. This makes it a lot harder to distribute a run-anywhere application, particularly on platforms like the Mac, which is unusually programmer-hostile in this regard.

Microsoft .NET has a very nice virtual machine, with a standard library that, unlike Java’s, is for the most part well-designed and easy to use. But it was written by Microsoft, whose corporate interests as the creator of Windows run counter to the ideal of platform agnosticism. .NET code can run (and long has been able to run) on Macs and Linux boxes… as long as you stick to command-line or daemon programs. Out-of-the-box desktop support is no longer strictly limited to Windows (Macs are now supported), but Linux is left out in the cold even as of this late date.

There are third-party frameworks like Avalonia that claim to address this deficiency, but by not being present out-of-the-box, they raise the same gotchas that PyQt does in Python. Plus most of them fail badly when it comes to harmonizing well with the overall pattern language of a platform.

What it all boils down to is that I could shift to some alternate platform, and this would make my life easier in some respects, but only at the cost of making it significantly harder in others, or inescapably compromising the quality of my applications. It is far from clear to me that there would be any overall net benefit. In fact, I rather suspect the opposite would be the case. I guess that’s good news in a sense, as it means I probably haven’t been wasting my time using a suboptimal platform.

The Java community’s faults may be legion, but Java set out to be a “write once, run anywhere” language, and its virtual machine has to this date succeeded at that goal better than any other such environment of which I am aware.

Newspapers and Magazines Are Not Timelines

Published at 07:45 on 2 November 2025

Time to clarify what I recently published here.

Per my recent definition, newspapers and magazines might appear to be timelines, but they are not. This is because all articles in a publication have a single source: the individual (or, more typically, firm) producing the publication. Everything goes through the same editorial team before it gets in. The information has been curated by humans.

The exception would be a publication with extremely lax editorial standards (or none whatsoever), which simply publishes everything (or nearly everything) submitted to it. Those would be timelines.

This also explains why the posts of an individual social media account are not timelines, even though virtually all social media users repost content from others. Those reposts were still done by a human. The information has still been curated.

Thesis: Timelines Are Evil

Published at 07:41 on 31 October 2025

Before continuing, it is necessary to define what I mean by timeline in this article.

timeline, n. An online list of one-to-many communications from mixed sources.

So, Facebook’s infamous algorithmic timeline qualifies as a timeline, but so do its “feeds” of friends and groups. The chronological timelines of Bluesky and Mastodon are also timelines, and therefore also evil. An email account that is on one or more mailing lists is also a timeline, but an email account that is not subscribed to lists is not a timeline. If you log onto Facebook, the list of your friends is not a timeline, because that is a list of Facebook accounts, not communications from those accounts. If you click on a friend and view their posts, that is also not a timeline, because the contents come from a single source, not mixed sources. And so on.

Timelines are evil because of the time burden they impose. Computer technology makes it so easy to send information that any timeline containing many senders inevitably becomes very busy.

Some very timeline-like things existed before the dawn of the Internet. Junk mail and junk phone calls turned physical mailboxes and telephones into such things. This is why so many people rightly found them objectionable.

Algorithmic timelines are more evil than strict chronological ones, because of the opaque nature of the criteria for ordering and selecting timeline contents, but even strict chronological timelines are evil.

The only thing that can make a timeline non-evil is sparse traffic, but due to information being so cheap and easy to send this can never reliably be the case. Evil is the natural state of most timelines, and even normally non-evil timelines will at times assume this state.

Timelines are the chief thing responsible for making people spend so much time online and disconnected from the real world that exists outside of cyberspace. Create a timeline for someone, and the fear of missing out on something important that might be buried in it leads them to spend unhealthy amounts of time online.

As such, timelines are probably responsible (or at least partly responsible) for much of the recent trend of politics and society getting worse, which is driven by organic and real-world interactions being replaced by time spent in cyberspace, ordered by opaque criteria, all the while being monitored and exploited by capitalists and politicians.

At least this is my current operating theory. I arrived at it as a result of struggling over why I spent so much time in front of computer screens, to the detriment of achieving other goals in my life. As such, I am now in the process of experimentally de-timelining my life.

A Little Cheesy, but Not Terrible

Published at 07:46 on 21 September 2025

That’s my executive summary of Apple’s new Liquid Glass theme in macOS 26 and iOS 26.

Sure, it was needless make-work for Apple’s design department. Sure, Apple would probably have been better off not spending all that effort. However, I haven’t run across anything bad overall. I was ready to follow this suggested list of settings tweaks, but quickly realized the default settings were just fine.

The minor changes in the system itself are nothing like the major, revision-to-revision changes in Apple’s mail client. Those were genuinely annoying, and it was hard to revert them all. There was a persistent trend of using more and more screen real estate to display less and less useful information about each message. At least this is how things were five or six years ago; I have been using Mozilla Thunderbird ever since then.

It is Thunderbird’s lack of frequent, gratuitous design changes that makes for a generally better overall user experience. It may look a little dated, but you can see a lot more useful information about the messages in your inbox at a glance. The worst thing about Thunderbird is the message-composition editor, which has always been a little janky when one tries to do anything more than the most basic of HTML formatting, but putting up with that has been a price worth paying for some useful UI stability and density of information presentation.