Mar 31, 2008

Flaming Thunder

I'm checking out this new language called "Flaming Thunder", a mathematical programming language with a cool name.

First impression: it is trying to do things that have been tried before and have failed, and it is comparing itself to languages of a different class (Java, C++).

Good aspects of it:
- It is easily readable. What language does this remind me of right away? COBOL. And where is COBOL now? Useful if you want to maintain software written thirty years ago; other than that, useless. Easy readability is good, but it shouldn't take away from the ease of coding. Suppose I want to output the number in variable x and then move to a new line:
Write x, NewLine.
The lines end in a period, like English. While I agree that this is better than the semicolons used in C++ or Java, Ruby and Python get by with no terminating character at all. The period also makes certain operations awkward:
Write 50!.
which looks odd. Why not make the period optional? Other languages manage to figure out where a statement ends without a terminator. Second, what other languages represent with the nice, short notation "\n" is written as NewLine in Flaming Thunder. While this may be clearer, it takes all of five seconds to explain to an engineer that "\n" means a new line, and they've given up a nice shorthand for the sake of unneeded clarity.
- Arbitrary precision arithmetic. Always handy, although Ruby and Python do this too. They criticize C++ and Java for not having it, but if you're using C++ or Java for large calculations then you're either ignorant or idiotic. These languages are designed for speed, not for calculating large numbers. Don't compare apples with oranges.
- Interval arithmetic. Great for experimental science, when you have uncertainties in your measurements.
- Fast garbage collection. It uses stack-based allocation and reference counting to handle memory management. This is very good, and the usual peril of reference counting doesn't apply here, since the language doesn't let you create cyclical references (a quick sketch of the cycle problem follows below).
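For the curious, here's a tiny sketch of the cycle problem that pure reference counting can't handle. It's written in Ruby purely for illustration (Ruby's own collector is mark-and-sweep, not reference counting):
class Node
  attr_accessor :other
end

a = Node.new
b = Node.new
a.other = b   # a holds a reference to b
b.other = a   # b holds a reference back to a

a = nil
b = nil
# Both objects are now unreachable, but each still holds a reference to the
# other, so a pure reference-counting collector would never reclaim them.
# A language that forbids cycles, like Flaming Thunder, sidesteps this entirely.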

Bad aspects:
- Verbose. As a programmer, I believe that less is more. If I can express something in fewer characters, that's better. For example, setting a value:
Set x to 5.
While very clear, I'd rather use
x = 5
Or even better, use := for assignment (although this is longer, there are big problems with using = for assignment). For strings, I'd rather use strlen() or myString.length() instead of GetStringLength().
- Lacking modern programming structure. There are basic loops and if statements; however, the examples make heavy use of the GO TO construct. Last I checked, this was dumped in the 70's as very bad programming practice. Oh, and no structures, arrays or classes. This is not necessarily a bad thing - Lisp only has lists and does very well - but when they keep comparing Flaming Thunder to C++, they should be able to back up their language when people point out things C++ can do that Flaming Thunder can't.
- Lack of shorthands. In Flaming Thunder, --x means "the negative of a negative", as a mathematician who has never programmed before would expect. A programmer with experience in other languages would see this as "decrement the value of x and return the value of the variable after". In order to write this in Flaming Thunder, you would have to put
Set x to x - 1.
- On non-Windows platforms, you need to compile. This is a huge productivity loss. For Windows they provide a text editor that runs the programs, but it lacks several handy editing features like Ctrl+A to select all, indentation detection, code folding, syntax highlighting, etc. Use Ruby with Kate and you're better off.

The language looks pretty cool, although I can't really see why I'd use it over Ruby or Python. My biggest problem with it is that it is a domain-specific language (simple mathematical calculations), yet they argue that you should use it instead of C++ or Java. I argue this too: using C++ or Java for scientific research is not a great idea! These languages were not designed with scientific research in mind. Use the right tool for the job. Arguing that Flaming Thunder is better at scientific research than C++ or Java is like arguing that a hammer is better than a screwdriver for driving nails.

Mar 28, 2008

Ubuntu and the GeForce 8800

UPDATE: Karmic and Lucid work fine too. They don't use the nvidia-glx-new package; Karmic uses the nvidia-kernel-common and nvidia-glx-185 packages, and Lucid uses nvidia-current. They're installed through System->Administration->Hardware Drivers, and they seem to work just fine.
UPDATE: Intrepid has zero problems with this video card, at least from my experience. The nvidia-glx-new package works just fine.

Here's a little how-to guide for those of you in my situation (also a how-to guide for me, since whenever Ubuntu does kernel upgrades it wipes out my drivers and my memory is usually pretty foggy). I have an Nvidia Geforce 8800 GT and here is my guide to getting it working under Ubuntu. Unfortunately, as of the time of this writing, the card does not work out of the box with Ubuntu - surprising for Nvidia cards, I know. The default nv driver "works" in that you can see things, but running at 640x480 on a 20" widescreen is not my definition of "working".

EDIT: This doesn't work with Hardy yet (as of May 12, 2008). You can install the nvidia-glx-new package which has the drivers, but there is still a glitch with the 8000 series where your title bars won't show up (they'll be there, but are invisible) if you have Compiz Fusion on. I'll put more information about this as soon as I find a fix for it.
EDIT2: One of my updates fixed the title bars thing. Things work perfectly now. Firefox still crashes.
EDIT3: Forgot, you need a few packages before you get going. Install build-essential and your kernel header packages, or the Nvidia installer will not be able to compile a kernel interface for the driver. To do this, type:
sudo apt-get install build-essential linux-headers-`uname -r`
and hit Enter. Remember that those are backquotes, which are in the top-left of your keyboard under the Esc key.

Before you start, you should be reading this on a separate computer, or memorizing it. You'll need to shut down X for this - don't worry, I'll hold your hand. Since you'll be shutting down the X server, anything running in X will be shut down too. So save anything important now.

First, you need to get the Nvidia drivers. If you hate proprietary things, then go buy an older video card or write your own drivers, because the open-source drivers don't work. I tried. You can get the drivers on Nvidia's website, just pick the model you have and the OS (I'm assuming Linux, so choose either Linux 32-bit or 64-bit depending on what you have). Save this file to your desktop (or wherever you want, if you know what you're doing).

Now you need to kill X. To do this, press Ctrl+Alt+F1 to switch to the console. Type your username and password there to log in. Now, type:
sudo /etc/init.d/gdm stop
and press enter. You'll be asked for your password again, so type it in. This will shut down the X server. Hopefully you took my advice and saved anything you wanted to keep.

Now you're in the console with X shut down. Congrats, you've risen to a higher level of geek. Now you need to install the driver. Type:
cd Desktop
and press enter. This puts you in your Desktop folder, where the Nvidia driver was saved - if you didn't save it to your desktop, I'm assuming you know how to use cd so go to whatever folder you saved the file to. Type
ls
and press enter to see the files there. You should see one that starts with NVIDIA-Linux somewhere, followed by your architecture (x86 for 32-bit, x86_64 for 64-bit) and the version number of the driver. If you have lots of things on your desktop, you can use Shift+PageUp and Shift+PageDown to scroll up and down through the listing.

Now to install the driver:
sudo sh NVIDIA
and press Tab. This should automatically fill in the filename, if it doesn't then just type out the whole filename that you saw after you used ls. Now press enter.

This will start up the NVIDIA installer, and if everything went well, it should start going. You have to use the arrow keys and enter to do everything, but trust me, it's not that hard. Usually you just need to do "Next" or "Accept" or "Yes" all the time and it should all work - unless you've customized your xorg.conf file, but I'm assuming that if you can do that you're probably not needing my help anymore.

After the installer has done all its magic, you should be ready to start up again. Type:
sudo shutdown -r now
and press enter to restart your computer. When it boots up again, you should have a fancy new graphics card installed and will be able to set your screen resolution, play cool games (some DO exist for Linux, I assure you) and all that jazz. Enjoy!

PS: I'm hoping Nvidia makes it easier for Linux users to install drivers (ie. not having to use the command-line) but for now, you'll have to do it this way.

Mar 27, 2008

On Languages and Freedom

Just as freedom in society helps foster creativity, freedom in programming languages allows programmers to come up with interesting solutions to different problems. By restricting what programmers can do, or by telling them the way they should be doing things, you are effectively restricting their freedom in coming up with interesting solutions. As a language designer, you should not assume that you are smarter than the programmers that will be using your language (although in the majority of cases this is true).

Just as the presence of people who need leading in a society creates a demand for the reduction of freedom in that society, the presence of programmers who don't know what they're doing creates a demand for languages that will bring the hammer down and tell them what to do. Many programmers need to be saved from themselves: given the proper tools (operator overloading, macros, open classes), they will surely wreak havoc. Giving them a large choice of tools they do not understand will only confuse them, and when it is time to choose which tool to use, they may hesitate and choose the wrong one. Languages like Java (and possibly Python, although I haven't worked with Python enough to make this claim with confidence) decide that there is only one way to go about a problem, relieving programmers of having to decide how to do things - even if that one way isn't always optimal. This is why I'm in support of these types of languages: they are excellent for business. They reduce the risk of programmers screwing things up and turn a potential Chernobyl into a burning house.

For good programmers, or when risk is acceptable, these languages do not serve any purpose except the satisfaction of the programmers who like them. Instead they have a negative effect, restricting the programmer from experimenting with different ideas. Experimentation is the key to the evolution of science, and although it may sometimes have disastrous effects, it may also lead to the discovery of new ideas (like how we can tackle this damn concurrency problem we're coming up against). Things like macros allow programmers to tweak the language to make it better - how long has foreach been in the Boost libraries when it should really be part of the C++ language itself? Just as extensions for Firefox give the Firefox developers hints about what should be in the browser (see Jeff Atwood's article for more), macros and things like them show language developers what people are actually using and what should be built into the language.

Many people argue against adding features, saying that it over-complicates a language. Simpler is better, they say. While they are correct on some issues, they are wrong on others. If you need to use all of a language's features just to program in it, then yes, that's a problem. But if you only need 20% of the features to do what you need to do, then what's wrong with having the other 80%? I would argue that it is better to have a language with a lot of features that let you do things in different ways, as long as not all of those features are necessary to complete a given task. Joel Spolsky makes the argument that for Microsoft Office, 80% of the people only use 20% of the features, but each person uses a different 20%. So Office supports a shit-ton of features that most people never use. Why don't programming languages do the same? Well, there are a few that do: C++ comes to mind. I can imagine there are several obscure languages with even larger feature sets, although I can't think of any at the moment.

One thing I love about programming languages is learning them, or discovering new features of them. When I first started programming in Ruby, I would program away and then randomly stumble across something I'd never known about before (ie. the ||= or &&= operators, or that you can put if/unless AFTER the statement you want to be conditional). I love discovering new features, and I can imagine that many other programmers enjoy this too. I can then use this newfound knowledge to change the way I code, or use whatever features I think would result in a more optimal solution to a problem.
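For anyone who hasn't run into them, here's a quick sketch of the kind of small Ruby shorthands I mean (nothing exotic, just easy to miss):
name = nil
name ||= "anonymous"     # assign only if name is nil or false => "anonymous"

count = 1
count &&= count + 1      # reassign only if count is already truthy => 2

puts "still counting" if count < 10    # the condition can come after the statement
puts "giving up" unless count < 10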

Pruning features is a good way to keep crazy people from doing crazy things, but it is also a great way to limit what programmers can do. Try writing a complex number or vector class in Java: the formula z^2 + c (from a Mandelbrot set project I did once upon a time) becomes z.times(z).plus(c). Ugly! Pruning features from a language is like turning English into Newspeak. Could you imagine a writer writing anything without synonyms or antonyms? It would make reading books and blogs a lot more boring.
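To make that comparison concrete, here's a toy sketch in Ruby (which, like C++, allows operator overloading). The class name and values are made up for illustration - Ruby already ships a real Complex class:
class Complex2
  attr_reader :re, :im
  def initialize(re, im)
    @re, @im = re, im
  end
  def *(o)
    Complex2.new(re * o.re - im * o.im, re * o.im + im * o.re)
  end
  def +(o)
    Complex2.new(re + o.re, im + o.im)
  end
end

z = Complex2.new(0.0, 0.0)
c = Complex2.new(-0.5, 0.5)
z = z * z + c   # reads like the math, versus z = z.times(z).plus(c) in Java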

There are many additions to languages that save programmers from themselves without restricting freedom. Things like the interface keyword add restrictions, but they help programmers convey meaning to other programmers. Things like typedefs and enums are very useful for conveying meaning. It's like writing comments, without actually writing comments.
I would really like to talk about garbage collection, which is an excellent improvement to languages. Although it does incur some overhead, it really gets rid of the pain-in-the-ass bugs caused by dangling pointers or memory leaks. This is not restricting freedom; it is propping you up so that you don't have to worry about the nuances of the machine. I'm of the opinion that just as modern compilers are generally able to generate faster assembly code than assembly programmers, garbage collectors may eventually outpace programmers doing manual memory management, since they may get better at optimizing memory for cache, reducing the risk of fragmentation, and so on. However, I believe there will always be a need for manual memory management in systems with limited processing power, such as embedded systems.

I believe that freedom fosters creativity, and it is this creativity that will allow us to solve our current problems, or come up with better solutions to older problems. Why limit ourselves?

Mar 25, 2008

On Capitalism and Freedom

No, George W. Bush has not hijacked my blog; this is about a book I recently finished, recommended to me (rather indirectly) by a Harvard economics professor. It's called "Capitalism and Freedom", by Milton Friedman - a rather influential economist who unfortunately passed away not too long ago. I would have liked to have met him.

It is an excellent book, and it really makes you think about the way things are run in the Western world (while most of the examples are American, they usually apply to other Western nations as well, like Canada, where I live). It flies in the face of the "modern" welfare state that has been around since the Great Depression, arguing that most of the problems blamed on the free market (the Great Depression itself, for instance) are actually the product of government meddling, which usually makes things worse than they otherwise would have been.

After reading this, I wonder: where are the classical liberals in modern politics? It seems like the name "liberal" is now synonymous with "socialist", and "free market" is now associated with "conservative". However, if you look at the American political parties, we have the Democrats (the "left wing" party), who don't seem to understand the concept behind an economy, and on the other hand the Republicans (the "right wing" party), who can't seem to shut up about Jesus or gay marriage. There is no liberalism here. Same goes for Canada, although there is considerably less bible-thumping from our Conservative party, which probably puts them the closest to classical liberalism of the parties here. I would guess the lack of classical liberalism in modern politics is due to the fact that classical liberalism demands limited government intervention - quite against what any politician would like. Not to mention that the general public would rather have the government take care of them than take care of themselves.

There are a few points that I disagree with. I find Friedman falls into the standard trap that economists generally fall into: optimistic assumptions of altruism and rationality. He seems to think that people will not exploit the system and that they will act rationally with perfect information. Under that scenario, I fully agree that a perfectly free market will yield the best outcome for mankind. Unfortunately, not everybody has perfect information, and those who have more of it will exploit the asymmetry, leading to all sorts of nasty things like monopolies and price discrimination.

Some education in economics is required, and some patience as it is fairly dry at points, but I would recommend this one to anybody with an interest in economics or politics.

Mar 24, 2008

That was fun, but what now?

After seeing things like
(1..100).inject(&:+)
(link)
or
$<.each do |line|
$> << line unless line.strip.split(",")[2].gsub('"','').strip.empty?
end
(link)
I'm seeing why Ruby really has that appeal to hackers. Just like in C where you can write
while (*a++ = *b++);
to copy a string, Ruby allows you to whip out some really good one-liners. Even better than C, though, Ruby gives you a whole whack of ways to modify how the language works and ignore some of the ugly things of the machine below. You don't have to worry about memory management, or pointers, or working within a static typing system. You can inspect classes easily with reflection, or patch them up with open classes. There is so much to boast about.
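As a tiny sketch of what I mean by open classes - re-opening a built-in class and bolting a method onto it (the method here is made up, purely for illustration):
class String
  def shout
    upcase + "!"
  end
end

puts "ruby lets you do this".shout   # => RUBY LETS YOU DO THIS!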

Then why do I find myself going back to C++? Ruby makes my life a lot easier with a lot of things, like strings and arrays and hashes and dynamic typing and blah blah blah. In C++ I have to worry about deleting, *gasp* declaring my variables and I can't use functional style approaches (although this is changing, thanks to Boost). Yet when I start up a new random project, I am reluctant to use Ruby for it.

Originally I thought that it might have to do with C++ being my first language. Then again, it wasn't: I was using QBasic before I'd ever seen a line of C. Then I used C for about a year before I did anything with C++. So this can't be the reason.

One reason would have to be that C++ is more easily portable than Ruby. Before the Ruby enthusiasts have me burned at the stake for such a comment, let me explain. Anything you write in Ruby has to ship with an interpreter. This is easy for programmers, or Mac users, but for my target audience (my friends) this is not really an option. Windows doesn't come with Ruby, and telling them to install the Ruby interpreter is a pain in the ass. But for simplicity's sake, let's say that they do that. Now what? I need a windowing library. So now they have to install a windowing library. This would probably be GTK, which means I'd probably stick Glade in there too, adding two more things they need to install. Easy if you're running Ubuntu, not so easy otherwise. With C++ I can just pack a .dll in with the installer and ship it off. There are probably easy ways to deploy Ruby apps, but just the installing of the Ruby interpreter is enough to put me off.

Another reason: static typing. While I appreciate dynamic typing for things like web pages and small apps, the structure of static typing is much more reassuring than what I'd have with dynamic typing. I don't want to have to write tests to do things that the compiler should be doing (not that I enjoy writing tests in general).

Wait, a third reason: I like being close to the metal. Although I'm fully aware that the language isn't usually the speed bottleneck, I like writing fast code and unnecessary slow-downs tend to irk me. Even little things:
x = [1, 2, 3, 4] - [2]
When I think about what's required under the hood for this, I can't help but shudder.
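Here's a rough, naive sketch of the kind of work hiding behind that one innocent line (MRI's real implementation may well be smarter, e.g. hashing the right-hand array, but the point stands - it's a lot more than a subtraction):
def array_difference(a, b)
  result = []                 # a fresh array allocation
  a.each do |element|
    result << element unless b.include?(element)   # a scan of b for every element of a
  end
  result
end

array_difference([1, 2, 3, 4], [2])   # => [1, 3, 4]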

Many of these things are handled by Java, with the added benefit of garbage collection. But when you have to write
try {
    BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
    String line = stdin.readLine();
} catch (Exception e) {
    System.out.println("Something is very wrong.");
}
just to receive input from the keyboard, I decide that I have better things to do than write out class names that average 10 characters apiece.

Mar 20, 2008

The Difference between Programming and Software Development

In short, programming is fun. Software development is not.

There is essentially no difference between the two, except that what geeks and normal people call programming, business people call software development (I think the size of the words they use shows that they are trying to reflect their big...ego). In fact, I would have to say that the term "development" doesn't quite reflect what actually goes on. Development implies some sort of growth, as in "the economy of South Korea has developed quite quickly over the last few decades" or "I have developed a tumour in my ear". Software doesn't develop, it is designed and programmed. While the design may change, I would never call it growth - unless you're talking about the Windows underlying system, which from my perspective has just been growing (like a tumour) since the dark days of the 1980's.

Programming, sometimes referred to by geeks as coding, is fun - provided you're a geek. You program random things for the hell of it, not caring when or if it will be finished, or what people will think of it. You do it because you want to. I find myself a lot of the time re-inventing other people's work, just to see if I can also do that. Maybe other programmers do the same, or they program to do cool stuff or to try something new. Whatever their reason, they do it because they want to.

For many of us, we realize that we like programming and think, "Hey, maybe I could do this for a career." And why not? We like it and it pays well; what's not to like? Unfortunately, the fun stuff is usually not what makes the money - what makes the money is making massive changes to software, planning the software, debugging it for hours, and so on. You design it not to your specifications, but to what the users/bosses want. It is not coding for you, it's coding for them (see Jeff Atwood's UsWare vs. ThemWare). This reduces the fun factor of programming by a lot.

So what do we do? All things considered, we don't have it so bad. A lot of jobs don't really take all the fun out of programming. We get paid very well for people fresh out of university, and we can pretty much just sit on our butts all day.

Personally, I see software development as a great way to butcher a long-time hobby of mine. I no longer want to program when I go home, I just want to sit around and watch movies or play games, or blog about how programming in the real world sucks. Why not just get a job in another field and do programming in your spare time?

Mar 17, 2008

The Productivity of the Office

I would say the vast majority of programming jobs are 40-hour/week jobs, 9-5 (or a similar time-frame). You go to an office and program for 8 hours a day with some breaks thrown in there somewhere - sometimes. Unless you work in the gaming industry, in which case you're probably working 10-12 hours a day for 6-7 days a week.

I fail to see how this is productive, for the following reasons:
  • From my experience, most good programmers program at night. You might not get started until 2 or 3 in the afternoon, but then you get going and can keep going until well after 9pm.

  • Programmer productivity is not constant over time. A lot of the code I write before 11am (when my coffee kicks in) is worse than anything I write in the afternoon: I find little mistakes, or wonder why the heck I did certain things. Even worse, I completely forget how it works unless I'm actually looking at the code. It's like when you're writing papers for that English class they made you take, and 2 months later you can't remember anything you wrote about.

  • Transportation = no fun. It's an hour (at least) commute to work each day, and another hour to get home. Most of the stuff I do at work I can easily do at home, so I really see no reason for all this travelling.

The problem with working from home or having an unorthodox schedule is that it does not fit into the normal business world. Most people work well between 9-5. That's when the managers are in. They want to keep an eye on you while you are working.

You've also got the problem of software licenses. Many of the tools we have to use at work (even though there are free ones available that do the same thing) are proprietary software and you can't install them at home. The bosses want everyone to be working in the same environment - it makes it easier to manage - and so using the software of your choice is not an option. Neither is taking home the software that they shelled out their "hard-earned" cash for.

I did contract work for a while. Although it doesn't pay as well and isn't a nice steady income, it is a much nicer work environment. I could sleep in each day, the commute was walking from my bedroom to my computer room (with maybe a side trip to the kitchen and/or the bathroom). My only problem with this environment is that there is no line between work and home unless you impose it on yourself. The greatest thing about working in an office is that as soon as you leave the office, you can completely forget about the work you do until the next morning - unfortunately in my situation I tend to forget a lot of the work that I did the previous evening until midday after the fog in my head has cleared a little.

Mar 14, 2008

Maybe Rails isn't so slow

After seeing sites like Twitter and Penny Arcade that run on Rails, maybe it isn't so slow after all. I know about the basic things like caching and FastCGI to speed things up, but even after those I found that my sites still went a bit slow - I may attribute it to the fact that I'm still running on a dev server, but that's beside the point.

However in the long run, it is not the language that you write the software in that will slow it down, it is the design of your site. I've seen sites written in Java/JSP - supposedly super fast - that run slow as shit. There are other sites like Facebook or Youtube that run on PHP or Python, and are super fast. It doesn't take a Ph.D. in Computer Science to know that Java is faster than PHP or Python, and so obviously the latter sites are doing something else right that the first site(s) are doing wrong.

From my experience, the speed bottlenecks tend to come more from the database. Making too many database queries (especially writes) will really slow down your site. The problem with Rails here is that it is so easy to make lots of database calls. On top of that, the Rails community (as far as I've seen) tends to see SQL as a filthy beast that only hackers and PHP developers like to touch. This means that you will be seeing things like User.find(:all, ...) everywhere in your code, which then walk the nice little belongs_to associations, firing off more queries to do what could have been accomplished by a single join (I could write a whole blog entry on how annoying the built-in :joins parameter is in Rails, but we won't go there).

So, Rails developers, use SQL for complicated things, including queries across belongs_to relationships. It was designed for this purpose, by very smart people. On top of that, the database's query optimizer is probably better at its job than you are. If you need a complex query over sophisticated relationships, you should either a) rework your DB structure into something simpler, or b) use SQL for what it is good at. SQL is considered dirty because it is verbose for simple things - UPDATE users SET all_my_stupid_fields WHERE stupid_criteria - so that is where you use the Rails built-in stuff.
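Here's the kind of thing I mean, sketched with a hypothetical User has_many :orders / Order belongs_to :user setup (Rails 2-era syntax, made-up models):
# The easy Rails way: one query for the users, then one more query per user (N+1).
users = User.find(:all)
users.each { |user| puts user.orders.count }

# Letting the database do the work instead: a single query with a join.
counts = User.find_by_sql(<<-SQL)
  SELECT users.id, COUNT(orders.id) AS order_count
  FROM users
  LEFT JOIN orders ON orders.user_id = users.id
  GROUP BY users.id
SQL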

Next, memcached! Oh my god. I cannot express how awesome this software is. I've found a nice tutorial on how to use it with Rails elegantly. It is basically a hashtable in memory that you can use to store things. Like ActiveRecord objects. That means that if you want to get the user with ID 342, you can get it from RAM instead of the database. This is a lot faster than accessing the hard drive, and since it's a hashtable and not a B-tree or whatever you're using in the DB, it is O(1) to access what you want.
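Here's a minimal sketch of the idea using the memcache-client gem (the API is from memory, so treat the details as approximate, and the User model is again hypothetical):
require 'memcache'

CACHE = MemCache.new('localhost:11211')

def cached_user(id)
  key = "user:#{id}"
  user = CACHE.get(key)
  if user.nil?
    user = User.find(id)        # hit the database only on a cache miss
    CACHE.set(key, user, 600)   # keep it in memcached for 10 minutes
  end
  user
end

cached_user(342)   # the second call for user 342 comes straight from RAM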

EDIT: It's been pointed out here (thanks Guillaume) that hashtables are not always faster than B-trees. I agree with many of the points made, and from what I gather the efficiency of hashtables is highly implementation dependent. I don't claim to know how memcached implements their hashtables, but considering the app is open-source I'd assume they have (or soon will have) a good implementation. This article also notes that for a small n, B-trees perform better than hashtables. This is true, but if a site is at the point where it needs to use memcached, the n is not going to be very small. Finally I will point out that sometimes you will not be able to use only an indexed field in your query, so your time complexity for DB lookup increases to O(n).

Given all these little speed up things, mixed with a good server farm and smart developers who don't abuse Rails' nice features, you could have a super-high performance site with Rails up in no time. Now if only I could convince the people at work to switch to it...

Mar 12, 2008

So many configuration files

Given the highly customizable nature of Linux, configuration files are probably the most efficient method of telling a program what you want. Given a man page with well-written documentation and some comments in the config file, editing the configuration file is fairly easy.

Unfortunately, the average human being is afraid of the keyboard, or doesn't speak geek-ese, which is what most configuration files are written in. While it is easy for us to understand the syntax of a configuration file, most people would probably have a bit more trouble.

The thing that bothers me is that Ubuntu claims to be "Linux for human beings", yet I still find myself having to edit configuration files, either because there is no graphical version (as with GRUB) or because the graphical version is too limited or complicated (as with xorg.conf). Configuration files are not human-friendly. Make a typo and your system could be screwed. Or, if your memory is as bad as mine, you put the wrong partition number for your Windows partition in your GRUB config file after a kernel update wipes out your customized one. These are things that a simple graphical application could handle. GParted automatically detects partition types; why can't there be a nice little boot settings manager installed by default?

I believe that configuration files are also something that deters the semi-technical users from Ubuntu. These are people who understand computers, learn the Linux system and try to fix things for themselves. Unfortunately, they don't really know what they're doing and don't really want to spend a lot of time to get things to work. They think, "editing a config file, that's not so bad", mess it up, and then have a whole lot more problems than they had originally. They then go "Windows works, Linux doesn't" and switch back to Windows with a now soiled reputation of Linux in their mind for the next ten years. They then go and tell all their semi-technical friends about their experience, and further soil Linux's reputation - given the usability of the average Linux distribution, this reputation is not entirely undeserved.

Ubuntu should definitely iron out these usability issues, and quickly. Since Ubuntu is hitting the mainstream, with Dell and Asus selling Ubuntu PCs, it would not be good for it to get a reputation as unusable in the mass market.

Mar 7, 2008

The Value of a Computer Science Degree

After graduating and working for a while, I began to wonder what the value of my computer science degree was. I don't really use much of what I learned, in fact the majority of things that I use at work are things that I learned on my own time. There is the rare occurrence when I have to reason about the efficiency of a piece of code, but a lot of that is common sense and the rest you can ignore because computers are fast enough to handle it anyway.

I had my first experience judging new-graduate resumes the other day and began to wonder: what is the value of the degree? The majority of the students, I found, didn't really do anything on their own time; they only had experience with things from random classes they took.

So I'm thinking, maybe we should overhaul the computer science programs at universities. There is a lot of fluff in there about lambda calculus, flip-flops and NP-completeness that we don't really need; scrap all that. How often are students going to be designing parsers? Get rid of that. We should start giving courses on professional software development, with chapters on things like version control and how to use professional IDEs - you know, the useful stuff.

Algorithms courses are good, but they go into too much detail. We don't really need to know about characteristic equations or the exact theta-complexity of an algorithm, because if it runs slow, we can just buy a faster computer. This is a free market, after all. Just teach the kids how to call a quicksort() function and be done with it.

We should give big projects at the beginning of the course, and then change the requirements at random points in the semester. This will teach the students how to adapt to changes. It will show them what the work world is really like. Don't tell them exactly how to do everything, just tell them what you want and they have to do it.

With all these courses stripped out, we can now make room for the real stuff. There should be advanced Java courses teaching about J2EE, JSP and Struts, things like that. Also another one that teaches advanced C# programming with .NET. The C++ classes should teach useful things like how to make a window. I have never seen anything on resource scripts while in university, this should be remedied.

The only room for Linux in the degree is to teach students how to manage a server. Teach them things like configuring LDAP and Apache, security, and the like. Programming under Linux is inefficient and is not portable to Windows, which is the primary platform that software should be developed for.

After this reform, the universities will be producing students who have hands-on experience with real-world software. This will increase the efficiency of the software market tremendously, as we will no longer have to worry about training new employees. It will also benefit the students, as they will have the skills to tackle today's changing world and can compete in the highly competitive job market.

It is important to note that the majority of this blog entry is in a sarcastic tone. Please do not take it seriously.

Mar 4, 2008

Firefox Decay

It seems as though Firefox has been slowly decaying over time. I wrote a post on what I loved and hated about Firefox, but it seems as though it is changing - rapidly.

What are the problems? It crashes. Often. Sometimes without warning, the window will just disappear, as though it were never there. Usually it takes me a few seconds to realize what has happened. Other times, it will suddenly stop responding (usually while using Gmail) and I have to force close it. This bug appears much more often on Windows than on Ubuntu, but it is still a problem. The other main problem I've seen is that pages are rendered differently on different platforms. Under Windows, sites will look different than under Ubuntu or Mac (I don't know if they are the same under Ubuntu vs. Mac; I've never had the opportunity to compare side-by-side). Although they are different operating systems, it is the same browser. Get some consistency. Fortunately, the combined Mac and Linux market share using Firefox is under 5%, so we really don't have to care that much. As an Ubuntu user though, I would prefer to see websites appear nicely.

I would have to say that the only reason I'm still using Firefox is the number of extensions. I don't know how I'd do web development without Firebug, at least not nearly as efficiently. There aren't really any other plugins that I would miss that much. In other words, Firebug is the chain attaching me to the lead weight that is Firefox.

What are my alternatives? The first one to come to mind in many people's heads is Internet Explorer. Ha! While it is possible to get IE running in Linux, it is slow and ugly - it runs through Wine, which has never been known for elegance.
Opera is another choice; however, it doesn't work on my 64-bit Linux box, and since it's not open-source, I can't compile it myself. Other than that, I have found it to be fairly usable, with a few things missing: a status bar at the bottom? Is it enable-able? And give me an option so that my tabs don't automatically get selected when I open them. Often when I'm doing my daily blog reading (using Opera under Windows so that I don't have to worry about crashes) I like to open a tab in the background for when I'm done reading the current blog.
The other one I've tried is Epiphany. It is only for GNOME (I remember not thinking much about Konqueror), so it is not available for other platforms - EDIT: apparently it is available for Mac as well. So far, my experience has been good. Although I don't have Firebug, the browser is fast, reliable, and has all the basic features that I use. It uses the Gecko engine, so pages look the same as in Firefox. It even has its own extensions, however the library for these is by far smaller than Firefox's.

So it seems that with browsers, it is the same as with most other software: there is no silver bullet. I just have to stop complaining and suck it up when there are problems with something. I'll probably keep complaining though.

Mar 3, 2008

Love/Hate: Distribution Upgrades

With the scheduled release of Ubuntu 8.04 "Hardy Heron" next month, I find myself wondering if I actually want to go through the process of upgrading. Here are my experiences so far with Ubuntu's distribution upgrades:

Edgy (6.10) to Feisty (7.04): This was wonderful. Bug fixes everywhere, everything looked a lot better and everything was overall easier to use. With this one I did a complete re-install of Ubuntu, mainly because I was new to Ubuntu and had messed up the Edgy install pretty bad.

Feisty (7.04) to Gutsy (7.10): Not so good. The interface changed a fair bit, so some of the simple things I had done before were now in different locations, and until I found those new locations I was stuck doing things the *shudder* command-line way. Of the new things added in Gutsy, I find that I don't really use many of them:
  • Fast-user switching? Nope, I'm the only person who uses my computer. When I visited my parents, I found that whenever people used fast-user switching it was annoying, because their processes would slow the whole computer down -- especially when they left BitTorrent running. Or their MSN would be on, giving me annoying MSN noises. Fortunately they have this thing that lets you force-log them out.

  • Compiz Fusion? Oh I love it, but I keep it off now since it tends to slow things down. I don't really like waiting the half second for the fancy animation to go through when I'm switching through menus or all that. So effectively this addition was not really anything important, just penis envy of Windows Aero.

  • Tracker? I keep it off. Never use it, never need to use it. I keep my files fairly organized anyway, and I know where I put things 99.9% of the time.


The install of Gutsy was also painful: I spent two days (!!!) downloading through Synaptic only to have it crap out halfway through updating all the packages. This meant I then had to download the ISO and install it myself, because the Feisty install was now some twisted hybrid of Feisty and Gutsy.

My biggest beef with Gutsy is that I am forced to use it, because the Feisty repositories don't seem to be updated with all the newer software. My girlfriend is still on Feisty (because Gutsy doesn't support her hardware - weird, much?) and can't get a lot of the software updates that I have, since she still uses the Feisty repositories. I suppose it's probably possible to change the sources for aptitude, but I tend to avoid editing anything in the /etc folder for fear of destroying my computer.