Nov 4, 2008, Wayland, WTF?

I read on Slashdot about this new X server for Linux that attempts to clean up historical cruft and make things smaller (sounds like what X.org was trying to do over XFree86). One thing I noticed is that I can't actually find this Wayland thing to view any of the source, let alone try it out.

It seems like another one of those things that a bunch of geeks go, "oh, a shiny new idea!" (even if it isn't really a new idea) and get all excited about it. I'd be interested in checking it out if I could, but I can't, so no checking out.

This Wayland thing got me thinking, and now I'm wondering: what is the usefulness of this network transparency thing for the majority of what we do with a computer? I'm sure there are a few cases where it would be handy, but not that many. I have to say there have been maybe two occasions where I've had to use it: once when I wanted to run a program from home while I was at work - which failed horribly, because any modern Linux app is apparently so resource-heavy that running it over the Internet is ridiculously slow, and it was easier to just install a VM - and once to read User Friendly, because for some reason my box couldn't access it but our dev server could. Basically, the only feasible way to use this whole network transparency thing is over a LAN, because over the Internet it's just too damn slow.
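For reference, the way you'd normally set this up is SSH's X11 forwarding (assuming the remote sshd has it enabled; the hostname here is a placeholder):

```shell
# -X enables X11 forwarding; -C adds compression, which helps a bit on
# slow links but does nothing about round-trip latency.
ssh -X -C user@work.example.com

# On the remote shell, DISPLAY is set automatically (e.g. localhost:10.0),
# so GUI programs just pop up on the local screen:
xterm &
```

It works, but every widget redraw makes round trips over the wire, which is exactly why it crawls over the Internet.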

In a world of mainframes and slow-ass terminals, I can see how network transparency could be useful. Have the big box in the middle handle all the processing, and just send the display info to the terminals. This is much more feasible when your terminal is really slow, because the network slowdown is not so much the bottleneck as the CPU of the terminal.

Those days have long passed with the advent of the personal computer (hell, I don't even remember the days of mainframes and terminals; we had a desktop computer for as long as I can remember). Our computers nowadays can handle our processing needs (my computer here is probably more powerful than most mainframes were when X was created), and the network delay is now the main slowdown for this kind of computing.

I haven't yet addressed the issue of distributed computing. There are plenty of reasons why we'd want a centralized system for our computing. Things like data storage, a central location for our apps so we don't need to install them on our system, etc. These are all very useful tools, and the whole personal computer thing doesn't handle it very well.
However, neither does X's model of computing. There are many reasons, and here are a few that I can think of:
  • Ease of use - X isn't exactly the most user friendly of environments. Believe it or not, there are non-geeks out there that may need to use a distributed computing environment. Say, the marketing team for your company, or your customers. Making them have to use X would probably make them less productive, or drive them away.
  • Programmer productivity - A lot of the time you have to reinvent the wheel when doing things like handling the window being resized, and you have to deal with memory management (even with Java or Ruby - Java is a huge memory whore). These things slow a developer down. It may not seem so apparent at this point in the essay, but I'll get to the alternative soon.
  • Scalability - Can X scale to say, thousands of users per day? Millions? If I want to market my app over the Internet, this might be a requirement.
Many of these problems have been solved by a different distributed computing technology: the Web. It's nice. It is easy to use - in the sense that accessing a web application is easy, no having to do X11 forwarding or all that crap.
Applications are simpler. You design your application with a quick execute-and-dump idea, as opposed to a more state-based system. Web technologies seem to acknowledge that web apps aren't usually CPU bound, and so can cut all sorts of corners and provide you with a nicer coding experience - look at Rails, or even PHP.
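To illustrate, here's a toy sketch of that execute-and-dump model in Python (hypothetical handler, not any real framework's API): each request arrives, the app renders a full page from scratch, returns it, and forgets everything. All state lives in the URL, the form data, or a database - there's no persistent display connection to manage.

```python
# A minimal stateless "execute-and-dump" handler: every request is
# self-contained, so the server keeps no per-user widget state the way
# an X client must. (Hypothetical example, not a real framework's API.)

def handle_request(path, params):
    """Render a complete page from the request alone and return it."""
    if path == "/greet":
        name = params.get("name", "world")
        return f"<html><body>Hello, {name}!</body></html>"
    return "<html><body>404</body></html>"

# Each call is independent -- no session, no resize events, no redraws:
print(handle_request("/greet", {"name": "Rob"}))
print(handle_request("/greet", {}))
```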
It is scalable. While scaling is never easy, it is definitely easier to make a web app serve thousands of concurrent users than a regular desktop app.

Basically, my point is that the need for this network transparency thing is not as high as the Linux geeks seem to think. Most of it is covered either by traditional desktop computing or by "the cloud" of web apps out there that do pretty much the same thing more easily. Maybe when redesigning something like X, they should focus on splitting the network forwarding from the display system - instead of one program that does everything, have several programs that each do one thing very well. Seems kinda UNIX-like to me.


Michael Mol said...

I use the network transparency all the time to have apps running on my desktop/server machine display on my laptop. Things like running an image browser without mounting a network share, or burning a CD/DVD in absentia.

It's quite useful once you get into the mindset where you think to use it.

Rob Britton said...

Yeah, that's true. I guess since I only really have one computer in the house I never really think to use X forwarding for anything. I can see, though, why it might be useful if I were to, say, set up a media box in my living room.

Ethan Anderson said...

I'm typing this text into a web page with text entry fields and gtk radio and check buttons...

I say we just make all the standard gui widgets available via the web, like some kind of wx-html thing.. then dump X11 for http. With a strict version of webkit, online apps could be as responsive as desktop ones.

Manabu said...

The source code of Wayland:

There is also a Google group for discussion:

Jan said...

I suggest you look up "Sun Ray" on Sun's website before spouting nonsense about X forwarding not being used... ( )

Anonymous said...

Another useful but long-forgotten feature once made "easy" by X11 being a network protocol was the possibility of proxying your apps: open them against a "proxy" server, then kill and restart your display server (or move from server to server, on different machines) to your heart's content -- they'll redraw when you start it up again and reattach. Like `screen` for GUIs.

The problem is that this already breaks [performance-wise] with modern clients and libs [freetype etc.], which want to use a lot of new acceleration extensions that no proxies were ever updated to support [or can't support, if those extensions make an end-run around the network abstraction and don't force the client to keep enough state to redraw itself on a new display that's starting from scratch].
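For the curious, this "screen for GUIs" idea lives on in tools like xpra. A rough sketch (hostnames are placeholders, and the exact flags vary by version -- check the man page):

```shell
# On the machine that runs the apps: start a persistent virtual
# display :100 and launch a client into it.
xpra start :100 --start-child=xterm

# From whichever machine you're sitting at: attach over SSH...
xpra attach ssh:user@apphost:100

# ...and detach whenever you like (Ctrl-C, or from another shell):
xpra detach :100
# The xterm keeps running on :100; reattach later from anywhere.
```

Because the apps render to xpra's virtual display rather than your real X server, they survive the display server dying -- which is exactly the property plain X11 proxying used to give you.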

Rob Britton said...

@Manabu: Nice, thanks a lot!

@Jan: Never heard of Sun Ray, so I wouldn't have been able to look it up ;)
Technically it is not X forwarding, as they use their own protocol and software. But I get your point.
I wonder what performance would be like in an office full of these.

@Anonymous: That's a good point, I can think of a number of times when a proxy like that would have been handy.

wladimir said...

I never really use the X network transparency. When I want to remote-control a computer I use VNC, rdesktop or similar protocols.
X network transparency has always had a huge problem: if you lose your network connection, your applications are dead.

Anonymous said...

Wayland is not taking network transparency away from you. You will still be able to do network transparency with Wayland using VNC, RDP, SPICE, or whatever networking protocol you prefer. And yes, you will be able to redirect a single window or the full desktop to another machine.

What Wayland is doing is simply separating graphics from networking into different layers, which is not only better but also cleaner.

We will have a more modern graphics stack and we will still be able to do networking. It's a serious win in my opinion.