Or, why I like Rust
There’s a lot of talk these days about how important concurrency is. The reasons are obvious: in a world of computing at scale, single processors don’t scale. Many people decry the death of Moore’s law, and increasingly they seem to be right; transistors can only get so small, after all. As a result, the ever-increasing demand for compute performance will have to be met by concurrency, sidestepping the problem of Moore’s law entirely. All of this is sound reasoning.
Most older languages were designed in a world where Moore’s law was still in full force, and where compute performance came from single processors, not farms of them. Concurrency and networked scale existed, but were largely restricted to scientific computing and a few special edge cases, and the technologies that came out of them were not generally useful. Those days were great. That world doesn’t exist anymore, however, and the lesson being drawn from the current flock of “giants” is the importance of scale. The future is now, and it looks like networks.
As a result, these older languages did not serve the concurrency story very well. Though concurrency primitives exist in those languages, they usually require careful thought and planning to use effectively. Data races abound, and memory corruption is easy. In general, it’s not the most inviting environment. In reaction to this, certain modern languages tout “concurrency support” as a major feature. The goal: to be rid of the pitfalls of traditional concurrency, and instead present a sane, easy-to-use interface that fits neatly in with all the other traditional paradigms of programming everybody is used to. Truly a noble cause.
The problem lies here: concurrency is an anti-pattern to the traditional programming paradigms. Computing takes a certain level of discipline, especially with non-managed languages. Concurrency makes everything much harder, requiring either a much higher degree of discipline or a much higher degree of management. The latter group includes languages like Haskell, which are generally much more declarative. What you lose in control over your code, however, you gain in safety: suddenly, all the traditional problems with concurrency go away, and you’re left with something concurrent and safe. This is awesome, except that you lose many of the traditional paradigms most programmers are used to. It’s great for the wide variety of problems that benefit greatly from concurrency, but it’s hard to get used to, and not great for the equally large number of cases in more traditional programming where a modest amount of concurrency would still benefit the application. Not only that, but learning this style requires a lot of domain-specific work and knowledge. This doesn’t sound like the best solution.
What certain modern languages offer does sound like a great solution: sane concurrency without any of the micro-management baggage. It’s a great promise, but one that’s ultimately contradictory. These modern languages do provide great front-ends to the concurrency primitives everybody loves to hate. If your greatest worry is about correctly creating or scheduling threads, or scaling to tens of thousands of threads, or about correctly cleaning up a thread’s resources when it finishes, then these new languages offer exactly what you’re after.
That’s not really the greatest concern with threads, though, or at least not mine. Obviously, resource management and scale are issues, but they are issues whenever I’m doing any programming. They’re amplified by threads, but they aren’t fundamentally new worries. The new worries I do have relate to data races. Threading means giving up a huge amount of control over how your code executes, which means any interactions between different parts of your code are now much less predictable. The easy way out of this trap is to prevent all interactions, but that precludes any real benefit that concurrency provides: if all concurrent tasks are completely independent, then they cannot service the same problem.
Using concurrency to work on a single problem requires sharing certain resources, even if those resources are simply messages. That shared state means data races are possible. Data races are an entirely different class of worries from what’s possible in traditional, single-threaded programming. These concerns are serviced by the micro-management model, but not by the sexy-interface model employed by certain modern languages. As a result, these new languages do not really solve any underlying problems, but instead lather on a fresh layer of paint. On a small scale, everything feels awesome. On a larger scale, all the same problems come back.
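To make the shared-state point concrete, here is a minimal Rust sketch (the function name is mine, not from the post): several threads increment one counter, and the compiler only accepts the program once the sharing is made explicit with `Arc` and the access is synchronized with `Mutex`.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Sum `per_thread` increments from each of `threads` workers into one shared
// counter. The Arc gives shared ownership across threads, and the Mutex gives
// exclusive access to the value; handing the same mutable value to multiple
// threads without these wrappers is rejected at compile time, which is
// exactly the data race discussed above.
fn parallel_count(threads: u32, per_thread: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // Every increment is accounted for: no lost updates, enforced by the types.
    println!("{}", parallel_count(4, 1000)); // prints 4000
}
```

The point is not that locking is novel, but that the unsynchronized version is unrepresentable rather than merely discouraged.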
This is why concurrency doesn’t matter. It doesn’t matter because the real problems concurrency causes aren’t addressed by modern languages. Certain other, more declarative languages address them to a certain degree, but are “weird” enough to not be generally useful. Or, if they are, they haven’t seen very much adoption.
What about the resource management and scaling category of problems, however? Although they are fairly general, they are definitely amplified by concurrency. The traditional answer to resource management has been garbage collection, which is a good solution, but doesn’t always scale very well. Not only this, but resource management incurs a performance penalty, caused by the garbage collection itself and by the extra work required when acquiring or releasing resources. Much like the micro-management model for concurrency, this doesn’t seem like the best solution.
This is where Rust comes in. Rust’s major innovation is the borrow checker, an additional bit of information the compiler tracks which enables it to determine when data comes into and falls out of scope. This means the compiler can identify, deterministically, when to free resources, and so it can insert the cleanup code directly into the binary it produces. There is no need for separate garbage collection; instead, resources are freed at the ends of certain lexical scopes. This can obviously hurt performance (especially when code is written such that large amounts of data are copied and freed), but that is always the case when the language does not micro-manage your code.
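A minimal sketch of the deterministic freeing described above (`Resource` and `drop_order` are illustrative names, not anything from the post): the `Drop` implementation stands in for releasing a real resource, and records the order in which the compiler-inserted frees actually run.

```rust
use std::cell::RefCell;

// A tiny resource whose release is observable: dropping it records its name.
struct Resource<'a> {
    name: &'static str,
    log: &'a RefCell<Vec<&'static str>>,
}

impl<'a> Drop for Resource<'a> {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

// Returns the order in which resources were freed. The compiler inserts the
// free at the end of each lexical scope; no collector runs behind the scenes.
fn drop_order() -> Vec<&'static str> {
    let log = RefCell::new(Vec::new());
    {
        let _outer = Resource { name: "outer", log: &log };
        {
            let _inner = Resource { name: "inner", log: &log };
        } // `_inner` is freed here, at the end of its scope
    } // `_outer` is freed here
    log.into_inner()
}

fn main() {
    println!("{:?}", drop_order()); // prints ["inner", "outer"]
}
```

The release points are readable straight off the braces, which is the predictability being contrasted with a garbage collector.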
What isn’t always the case is the level of control that a borrow checker gives you. Even if the worst case isn’t very different from a garbage collector, the average case is much better. All the resources your program uses are right there in the code, instead of being hidden away behind a garbage collector. Plus, the behavior is defined by the lexical scope of your code, not by the black-box mechanics of a collector running behind the scenes.
All of this makes for a mechanism that I think will outlive Rust as a language. The borrow checker is extremely useful in general programming, not only because it manages resources with no overhead, but because it also prevents many classes of memory corruption issues and data races often present in other languages. Much like type checking before it, I think borrow checking will become part of future languages, and we’ll all wonder how we ever dealt with programming without it.
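As a small illustration of the memory-corruption point, here is a hedged sketch (the function name is mine): the borrow rules force reads through a shared borrow to finish before a mutation can happen, which rules out a classic use-after-free at compile time.

```rust
// Takes ownership of `data`, reads through a shared borrow, then mutates.
// Interleaving the two -- holding `first` across the `push` -- would be a
// compile error, because `push` may reallocate the vector's storage and
// leave `first` dangling: the use-after-free class of bug, caught before
// the program ever runs.
fn read_then_grow(mut data: Vec<i32>) -> (i32, usize) {
    let first = &data[0]; // shared (read-only) borrow
    let head = *first;    // last use of the borrow
    data.push(99);        // mutation is allowed: the borrow is no longer live
    (head, data.len())
}

fn main() {
    let (head, len) = read_then_grow(vec![1, 2, 3]);
    println!("{} {}", head, len); // prints 1 4
}
```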
The new MacBook is pretty cool. This morning, when I was watching the liveblog, the announcement that it was fanless made me immediately suspect it would be ARM-based. Instead, I’m impressed by Intel, and their ability to bring the TDP down enough to not require a fan.
I’m similarly excited about USB Type-C. It’s basically the god-cable we’ve all always wanted: one that handles power delivery as well as bidirectional, high-bandwidth data delivery. It fills all the connectivity needs of the laptop, except for analog output.
That being said, it is priced rather high. I know the overall quality of the components is to blame, but it would be nice for it either to be more on par with the performance of the Pro, or closer to the price level of the Air.
As an aside, I’m somewhat fearful of the trend in computing toward more “secured” devices and other technologies that end up making it harder to run the OS of my choosing on my own computers. I fear that one day I may be heavily restricted on most devices.
I could completely deal with OS X or Windows; both are very good operating systems. The problem is their software packaging: since both are somewhat stable, and applications are more self-contained, everything just ships as giant, statically linked binaries. Plus, neither operating system has anything nearing APT, making software management much harder than it has to be. Yes, OS X has the App Store, but it is missing most of the applications I use on a regular basis.
On top of this, desktop Linux is going to be very exciting in the next few years. KDBus is going to land soon, as well as Wayland/Mir/others. Valve is getting serious with SteamOS, and btrfs/systemd/containers are promising some very interesting things in the system management and software packaging areas. It’s just the right time to be using Linux, and having that taken away from me would make me sad.
Having by now done a fair bit of work building a terminal application on Linux, I’d like to mention how much the interface matches what I’d like to call “the text box of doom.” There are a few major issues I have with it, namely:
No way to read out the contents of the screen
I understand the difficulty of doing so, but with no way of getting the contents of the current screen, there is no real way to know what is on it. Even if you clear everything and redraw, quirks in the terminal emulator might cause it to not actually contain what you think it contains.
No strict separation of control and text sequences
Again, the limitations of the serial connection mean escape characters are really the only way to go when designing control sequences, but it would be nice not to mix them in with the text. I understand the simplicity of letting applications just print out content and read back lines, but given the existence of termios this shouldn’t need to be the case. Instead, there should be a “high” mode where text itself is just another control sequence, keeping the two streams distinct.
Output at the end of the line counts as a newline
Ideally, lines should be output as just that: lines. At the very least, there should be a way to do so, much as there is a line-based input method on the other end. Applications shouldn’t have to worry about exactly how long a line is; the terminal should treat each line as a line.
The result of all this is a lot of work and redundancy on the part of the application to maintain a correct representation of the state of the display, something which a good API should specifically avoid.
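That bookkeeping might look something like the following minimal Rust sketch (all names are hypothetical): because the terminal can’t be queried, the application keeps its own shadow copy of every cell and diffs against it to decide what needs redrawing.

```rust
// A shadow copy of what the application *believes* is on screen. This is the
// redundant state the text box of doom forces every application to maintain.
#[derive(Clone)]
struct Cell {
    ch: char,
}

struct Screen {
    width: usize,
    height: usize,
    cells: Vec<Cell>,
}

impl Screen {
    fn new(width: usize, height: usize) -> Screen {
        Screen {
            width,
            height,
            cells: vec![Cell { ch: ' ' }; width * height],
        }
    }

    // Record a cell update; returns true if the cell changed, i.e. if the
    // application must emit the corresponding escape sequences to redraw it.
    fn set(&mut self, x: usize, y: usize, ch: char) -> bool {
        let cell = &mut self.cells[y * self.width + x];
        if cell.ch == ch {
            return false; // already believed to be on screen: skip the redraw
        }
        cell.ch = ch;
        true
    }
}

fn main() {
    let mut screen = Screen::new(80, 24);
    println!("{}", screen.set(0, 0, 'a')); // prints true  (cell changed)
    println!("{}", screen.set(0, 0, 'a')); // prints false (no redraw needed)
}
```

A readable-screen API would make this entire structure unnecessary, which is the complaint above in a nutshell.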