May 30, 2008

Adventures in Scala: Part II

My first impressions of Scala: It's different. I have a feeling a lot of Java programmers will hate this language. And it is not a newbie language; I would recommend knowing at least one functional language before tackling this, preferably one with static typing, like Haskell or ML.

This is my first experience with a type-inferred imperative language. In fact, I can't really think of another one, since the only languages I can think of that use type inference are functional languages.

The documentation and community around it are still fairly small, as is to be expected. The tutorials on their site seem to be oriented toward language features and the differences from Java. This is nice and all, but are there differences in the libraries, for example I/O?

So in trying to learn Scala, my first project was something I did in university in a Java class: build an interpreter for TML (tiny machine language). TML is a simple RISC-style assembly language. There are a few registers (I put 8) and some memory (I put 1024 bytes). You have basic commands like load, add, jmp, etc. and labels to make jumping easier. And there are comments.

The little things I found are these:
  • The difference between var and val. Both are used to declare a variable, but from what I've seen in my short escapade, variables declared with val are immutable (think const).

  • Array syntax: You use parentheses instead of square brackets to do array access. Odd, but not hard to adapt to.

  • Generics: This was a bit more interesting. The square brackets are used to specify the type instead of angle brackets:
    var list = new LinkedList[String]()
    As with the array syntax, this isn't so bad, you just need to adapt to it. However there are some issues with the type inferencing in situations like this:
    var list = new LinkedList()
In Java, this would default to a linked list that holds values of type Object. In Scala, it means the list holds values of type Unit. The Unit type is equivalent to void in C/C++/Java, so this list can only hold the one possible value of type Unit: (). Pretty useless! So you have to specify which type of objects you want in the list, even if that type is Object. A little annoying.

  • Keeps Java standard libraries: So I still have to go
    var input = new BufferedReader(new InputStreamReader(System.in))
    when I want to read input from the console. At least I don't have to put try..catch around it. So it's slightly less annoying than Java. Note that I did try to look around a bit to see if Scala made this easier, but I couldn't really find anything. Might just be that I'm lacking Google skills.

  • Complicated: It has a lot of Java's features, plus a ton more. Singleton classes, case classes, functional-type things, etc. Not sure if this is a good thing or a bad thing, but we'll see.
So far, the language is a huge improvement on Java, but still too Java-like for me to really love it. I'll keep trying to work with Scala, but I find myself preferring C++ when I need a language with structure.

May 29, 2008

Reduce

It seems like there are a lot of programmers out there who don't know or understand what reduce (in Ruby, inject; in Scheme, fold) is. In a computer world where parallelism is becoming increasingly important, I feel that this is a basic concept that should be understood.

Reduce is a higher order function for converting a collection of data into a single value.
It takes 3 parameters: a function f, a list L, and an initial value s. f takes two parameters and returns one.
Reduce passes s and the first element of L to f, then takes f's return value and the second element of L and passes those to f, and so on. When it hits the end of the list, the result of reduce is the last value returned by f. In pseudocode:
function reduce(f, L, s)
    if L is empty, return s
    set v to s
    for each element i in L
        set v to f(v, i)
    return v

or in functional-ish style:

reduce(f, L, s) = if L == []
                  then s
                  else reduce(f, tail L, f(s, head L))
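Both versions can be sketched as runnable Ruby (Ruby's built-in inject/reduce does the same job; this is just to show the mechanics):

```ruby
# A minimal reduce, matching the pseudocode above.
def reduce(f, list, s)
  return s if list.empty?
  v = s
  list.each { |i| v = f.call(v, i) }
  v
end

add = ->(a, b) { a + b }
reduce(add, [1, 2, 3, 4], 0)  # => 10
```

The same call in idiomatic Ruby would be `[1, 2, 3, 4].inject(0) { |a, b| a + b }`.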
So what's the big deal with this? I can do this type of thing just fine with a for/foreach loop and a variable. On a single processor or thread, reduce is probably slower. And for certain functions, reduce doesn't work very well either. Sounds pretty useless, right?

Let's look at this mathematically for a second. If f is associative, then the following is true:
f(a, f(b, c)) = f(f(a, b), c)
Examples of functions that are associative are: addition, max/min functions, merge sort, etc. On top of this, if f is commutative ( f(a, b) = f(b, a) ), then we don't even need to keep track of the order.
Assuming that f is associative, and s is the identity value of f (for all possible values a that can be passed to f, f(a, s) = f(s, a) = a), then the following holds:
reduce(f, L, s) = f(reduce(f, L1, s), reduce(f, L2, s))
where L1 and L2 are the two halves of L (if f is commutative, it doesn't matter the order in which L1 and L2 come). The great thing about this is that this perfectly scales on two processors. One processor does the first reduce, the other processor does the second reduce. Voila, we have just halved the time it takes for us to process this list!
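Here's that split-in-half property checked in Ruby, using addition as the associative f with identity 0 (a toy check, not a real parallel implementation; the point is that each half could be handed to its own processor):

```ruby
# reduce(f, L, s) == f(reduce(f, L1, s), reduce(f, L2, s)) for associative f
add = ->(a, b) { a + b }
l  = [3, 1, 4, 1, 5, 9, 2, 6]
l1 = l[0, l.size / 2]     # first half:  [3, 1, 4, 1]
l2 = l[l.size / 2..-1]    # second half: [5, 9, 2, 6]

whole  = l.inject(0, &add)
halves = add.call(l1.inject(0, &add), l2.inject(0, &add))
whole == halves  # => true (both equal 31)
```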

We can take this further. Suppose we have n machines and a big dataset. We can split our dataset into n equal parts and tell each machine to reduce a part, and then one machine puts all the reduce results into a list and reduces that list. The time it took to process our dataset has been divided by n!

We can take this even further. Suppose we now have a dataset bigger than Sidney Crosby's ego (if that's possible). Designate one machine as a master (make sure it is a super machine that is reliable). This machine then takes a piece of our dataset, splits it up into chunks and assigns it to a bunch (or maybe several thousand) of slave machines. Those slave machines then reduce each little piece that they were given, as before. However this time, the master occasionally polls each machine to see if that machine has finished. If it has, then it grabs a new piece of the data set and assigns it to that machine. However if the machine doesn't respond, the master can assume that machine has failed and give the piece that the slave was working on to a different slave. So now we don't have to worry about our slaves failing.

So what has reduce given us? A massively scalable, failure-tolerant processing system. Who uses things like this? Google immediately comes to mind (their algorithm is called MapReduce, after all).

Reduce isn't the only function that scales like this; it just seems to be the one that is most difficult to understand. Two others that come to mind are map and filter. Map takes a list and a function and applies the function to each element, returning the list of modified elements. Filter takes a boolean function and a list and returns all the elements of the list for which the function is true.
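In Ruby (where filter is called select), all three look like this:

```ruby
nums = [1, 2, 3, 4, 5]
squares = nums.map    { |x| x * x }        # => [1, 4, 9, 16, 25]
odds    = nums.select { |x| x.odd? }       # => [1, 3, 5]
total   = nums.inject(0) { |s, x| s + x }  # => 15
```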

These types of concepts applied are great ways to make easily scalable code and are things that programmers should be familiar with.

May 28, 2008

The Income Effect

A little observation about how money isn't really everything.

First, a little lesson in economics. Almost everyone has heard of supply and demand. The quantity supplied of a commodity is essentially the amount of it that is made available for consumption by the producers of that commodity. Both empirical and mathematical (I'd show the math but lots of programmers are allergic to calculus) evidence show that for most commodities, there is a positive correlation between the price of the commodity and the amount that the producers will provide. In English, this means that when there is an increase in the price, the amount of the commodity that the producers will produce increases, all else equal.

This also relates to the labour market. In the labour market however, the positions of producer and consumer are reversed. The producers of labour are the general public, and the consumers of labour are the firms that employ them.
The labour supply comes from the total amount of some unit (ie. man-hours) of labour that is available in a given labour market (say, web programmers). Like the supply in other markets, the amount of labour supplied is affected by the price (the wage rate).

The thing that is interesting about labour supply, though, is that after a certain point the correlation between wage and labour supplied goes away, then reverses. When the wage rate is low, an increase in it will mean there is more labour supplied - people will work longer hours, more people will want to work there, etc. However, after a certain level of pay, what is called the "income effect" kicks in and the trend reverses: the wage increases and the amount of labour supplied actually decreases. This is primarily due to the fact that for most people, work sucks. Suppose you have a monthly budget of $4000 including rent, food, RRSPs, etc. Suppose you're making $5000/month and working 40 hours per week. You don't really need any more money. But you get a raise anyway. This means that in order to make that same $5000/month, you don't need to work 40 hours a week anymore. You can drop down to 35 or 30 hours a week and spend more time doing other stuff like going to the beach or playing with your kids, etc. Or getting Ubuntu to work (yeah, right).
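The arithmetic behind that example, sketched in Ruby (the 15% raise is my own made-up number):

```ruby
# Same $5000/month, higher wage, fewer hours.
hours_per_month = 40 * 4                       # 160 hours at 40 hours/week
old_wage = 5000.0 / hours_per_month            # => 31.25 dollars/hour
new_wage = old_wage * 1.15                     # after a hypothetical 15% raise
new_hours = (5000.0 / new_wage / 4).round(1)   # => 34.8 hours/week for the same pay
```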

It seems like our society is telling us that higher pay is always better. It's like getting a high score in a game: the higher the number, the better. But why? What are you going to do with all that money? Why not, instead of searching for higher pay, search for the same pay but fewer hours per week, or even lower pay with a more enjoyable job?
Ultimately, it is your own call. These are just ideas to throw out there.

EDIT: This is a gross oversimplification of the income effect. I do not factor in things like how people spend according to their income, varying utility, sticky wages, market cycles, or various other economic topics that have an effect on labour supply. I'm making an observation of how I feel about higher pay at the moment ;). It's good, but there are also other options worth exploring.

May 27, 2008

Internet Ads: Good or Evil?

The instant response to this of most geeks is that ads are inherently evil. They represent large corporations who generally have different aims than us geeks (especially the open-source supporters).
As a liberal I have another aversion to advertisements, in that they are a way of manipulating one's individual choice (a sacred thing indeed!). There is a vast amount of research on the psychological effects of advertising that supports this.
Finally, ads are annoying. They take up space on the page, they distract you from the important stuff, and they make the pages longer to load. Lovely Firefox extensions like Ad-Block Plus block the ads for you, which is nice, but has other implications (I'll say more later).

Many of the studies done on advertising are about television and magazine ads. Internet ads are somewhat different. There are many kinds of internet ads; some of them are bearable, some of them are not. Banner ads are bearable, unless there are more of them than there is content on the site. The really annoying ads are popups, or those ones that won't let you see the content for 15 seconds or so (at least most of them have a "skip" link).
These ads don't have as much of a psychological effect on people, because they don't usually have video. Usually they are just images, and small ones at that (compare this to magazines which have big whole-page ads).

Let us now look at the good aspects of Internet advertising. Following the Google business model, one can let users use a website for free, but charge companies for ad-space on those websites. This is great for the user, since they don't have to pay for things. It's why Google is able to give away things like Gmail and Google Apps for free.

The implication of this is that you will make revenue so long as you have traffic on your website. This means that you don't really have to do anything, except keep the traffic coming. Sometimes this may take a lot of work, other times not. If it doesn't, it is a great business model in that it allows you either to not have to hire many people to work under you, or to have enough time for some good R&D.

On a side note, I saw one site (link not available because I forget what it was called) that completely blocked Firefox because of ad blocking. This is dumb, because an ever-increasing percentage of Internet traffic is now using Firefox. People aren't going to switch to IE to view your site (either they can't, like Mac or Ubuntu users, or they don't want to, like most people who have switched to Firefox); they just won't view your site, period. Plus, the majority of Firefox users I've seen don't have Ad-Block Plus installed, and the ones who do probably wouldn't click on the ads anyway. Programs like Ad-Block Plus don't threaten the ad-based business model, since most people don't care.

May 23, 2008

Love/Hate: Ruby

I have a feeling I'm about to start another flame war (or at least I would if people actually read this blog).

I'm tired of Rails. For many boring things like writing simple SQL statements, Rails is great. But when you want to do something that Rails wasn't originally designed for (or was designed to prevent), it is a big pain in the ass. And let's face it: the guys who made Rails are pretty smart (albeit some of them, from what I've seen, are getting a little big-headed), but they are still only human and cannot see the future. All computer technologies (especially where the Internet is concerned) go obsolete, and I think Rails is a bit too bulky to handle the huge Web 2.0 sites. Structure is both a blessing and a curse: it is great for making a good maintainable application, but when the problem calls for expanding outside of the framework, it is a lot more difficult. Rails is too complex for the average programmer to tweak should the need arise, so it is usually easier to just dump it if it is being a burden. A simpler framework with a smaller footprint and smaller codebase would be much more desirable for a large site, so that if something needs to be changed, it can be changed easily and the framework is not holding you back.

Many of us know that there is much to Ruby besides Rails (at least we tell ourselves this). So let me now focus on the good and bad aspects of Ruby itself.

The Good:
You can do a lot in less code. It's wunderbar. I love using regexes and their handy little variables: $1, $2, etc. They make the little scripts for manipulating files and text much easier. From what I'm told, Perl is even better for this. However, I'm also told that maintaining Perl code is like stabbing yourself in the eye. So I stick with Ruby.
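For example, here's the kind of throwaway script I mean, picking apart a made-up log line (the line format here is invented for illustration):

```ruby
# After a =~ match, $1, $2, ... hold the capture groups.
line = "2008-05-23 ERROR disk full"
if line =~ /^(\S+) (\w+) (.*)$/
  date, level, message = $1, $2, $3  # => "2008-05-23", "ERROR", "disk full"
end
```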
It's easy to get extensions. Just go "gem install myobscurelibrary" and you can usually get it.

The Bad:
Open classes. Shamefully, I have used them because they provide a very convenient way to patch a class that didn't quite have the functionality I wanted. However it makes maintenance a bitch. If you're reading code that somebody else made, you would expect that the method calls you know would do what they're supposed to do. But no, some dumbass could make String.split email their grandmother, and you would have no clue unless you happened to stumble across their code that did it. Instead, you'd be sitting there scratching your head wondering why the code you just wrote is failing horribly.
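A contrived sketch of the hazard (nobody's grandmother gets emailed here, but the idea is the same):

```ruby
# Somebody's "helpful" patch, lurking in another file you've never read:
class String
  def split(*args)
    ["gotcha"]  # silently replaces the real behavior, program-wide
  end
end

"a,b,c".split(",")  # => ["gotcha"], not ["a", "b", "c"]
```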
Slow/resource-intensive. I wrote a tiny little GUI app in Ruby that just displayed a window with some controls. Very basic; I've done this in C++ and it runs way faster. With the Ruby one, there is a noticeable lag.
I used RubyScript2Exe to "compile" the Ruby program into a single executable, which came out at 15MB. Tiny by today's standards, but for an itty-bitty little GUI app that didn't do anything, that's pretty insane. I wonder what the size will be once I tack a few more gems in there.

These are the main good/bad points that come to mind immediately. Ultimately I have decided that the best use for Ruby is for little scripts. Either maintenance scripts, or scripts for simple websites. Dynamic typing completely ruins Ruby for a large application, and the fact that it is interpreted rules it out for anything resource-hungry.
So my consensus is if it can be small and slow, use Ruby. Otherwise go for a different language.

May 22, 2008

Things I miss in PHP

PHP has to be one of the most insulted languages of today. It's right on up there with Java (Visual Basic seems to be a language of yesterday, so it's not included here).

I must say, it's not so bad. From a management perspective, PHP is great. Hosting is plentiful and cheap; PHP programmers are easy to find.
From a server admin perspective, PHP is also great. It's easy to install, and the base install comes with pretty much everything the programmers will need (and then some). Compare this to Ruby, which has a mess of packages (called "gems") that need to be installed.
When PHP came out, it was arguably one of the best suited languages for what it does: making web pages dynamic. You can just embed the little things into the HTML! Wonderful! With Perl you had to put the stuff in cgi-bin and have it execute and output code, but PHP just fits right into the HTML that you already had. It integrates seamlessly with GET/POST requests and session/cookie variables. Database access is easy too, just go mysql_connect and you're set.

I feel, though, that it is starting to show its age. Ruby has some nice stuff now (like Rails) that makes it able to compete at PHP's level. Embedded Ruby means you can embed Ruby code in HTML files just like PHP. The MySQL module for Ruby means that it's just as easy to interact with the database as it is in basic PHP. I'm still not sure how you'd access the GET/POST/session/cookie stuff without Rails; I'll look into that one more.

What do I miss from other languages when using PHP? One of the biggest has to be function closures and anonymous functions. In Javascript or Ruby I can pass functions to functions. This can be done in PHP, but you have to declare the function with a name and then pass that name as a string (this brings headaches because you have to go function_exists() and pray to god that somewhere, somehow, nobody named a function the same as what you want). I can't just pass the function itself. On top of that, functions can only see their local variables (unless you declare a variable as global). They can't access variables of an enclosing function. This may not sound like much, but there are many times when I feel an array_map or array_reduce would provide a much more elegant solution, but I can't properly define a function for doing it.
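For contrast, here's the kind of thing I mean in Ruby (the cart and discount are made up for illustration): the block passed to inject is anonymous, and it can see the local variables around it.

```ruby
# A hypothetical cart of line items.
Item = Struct.new(:price, :quantity)
cart = [Item.new(10, 2), Item.new(5, 1)]

discount = 5  # a local variable in the enclosing scope...
# ...which the anonymous block closes over, something a named PHP function can't do
total = cart.inject(0) { |sum, i| sum + i.price * i.quantity } - discount
# total => 20
```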

Nit-pickiness: Some nicer syntax would be good. Having to go $this->something whenever I want to access a member variable is a bit annoying (had a few bugs caused by this). Having to go array(...) to create an array makes some code harder to read, especially with nested arrays.
I wish the language was a bit more object oriented, so you could go ->method for basic types like arrays and strings instead of calling a function. Using array_merge(array_slice()... is somewhat difficult to follow sometimes.
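The kind of chaining I'm wishing for, shown in Ruby, where basic types are objects with methods:

```ruby
# Each call returns an object you can keep calling methods on.
titled = "hello world".split(" ").map(&:capitalize).join(" ")  # => "Hello World"
```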

All-in-all, PHP is not so bad. When you apply good software design techniques, there is no spaghetti code and the application is easy to understand. I've seen Java programs with hundreds of classes that feel more like spaghetti code than good PHP code (ok, this method called this other method from a different class, let's go figure out what that method does, oh, that method calls 3 other methods of other classes like AbstractWhatTheFuckFactory, now I have to go find those classes and figure out what they do, but it turns out this other class was just an interface and now I need to figure out which class it actually was in order to figure out what's going on, etc...). I still get the feeling that PHP is trying to be more and more like Java at a time when the community seems to be moving away from the Java way of doing things.

May 20, 2008

Adventures in Scala: Part I

I am attempting to learn at least one new language a year and one of them this year is Scala. It is a multi-paradigm language that pretty much just takes Java and modernizes it. By that I mean it adds functional-style ways of doing things, like lambda statements, type inferencing, and gives it a much prettier syntax.

This article is about how to install Scala on Ubuntu. First off, you need Java installed. Scala programs compile to Java bytecode, which means you need the Java virtual machine in order to run your Scala programs. My advice is to install the sun-java6-jdk package. There are open-source Java implementations, but they suck. The Sun one is much better (the fact that it actually works is my main reason for using it).

Now the next step you might think is to then install the scala package from the Ubuntu repositories. Unfortunately as of today, that version is 2.3, and the most up-to-date one from the Scala pages is 2.7 (UPDATE Dec. 20/09: The version in the repo is 2.7.5 now, which is still not the latest version). You're much better off just doing this:
wget http://www.scala-lang.org/downloads/distrib/files/scala-2.7.7.final.tgz
tar -zxvf scala-2.7.7.final.tgz
sudo mv scala-2.7.7.final /usr/share/scala
sudo ln -s /usr/share/scala/bin/scala /usr/bin/scala
sudo ln -s /usr/share/scala/bin/scalac /usr/bin/scalac
Now you have a working Scala implementation! Make sure to check the most up-to-date version on their website. I'll try and keep this up-to-date, but I'm only human and only check the Scala website every now and then.

To uninstall:
sudo rm -rf /usr/share/scala /usr/bin/scala /usr/bin/scalac

May 16, 2008

Downloading Music: Don't feel bad

With all the "controversy" over the last decade about music piracy, not much has resulted. It is still dead simple for anybody with an Internet connection to download music. And it probably will be for a long time.

The question is, do we really care? I mean us, the consumers. Let's see what happens when I download a CD instead of buying it. I have an extra $15+ to spend on more important things like food, or rent. What about the receiving end? HMV/A&B Sound/Whoever doesn't have my $15+. Oh well. The recording company doesn't have the $15+ minus markup. Big deal. They make enough money from suing people. Finally, the artist doesn't have the $15+ minus retailer markup minus recording company markup. Ok, I'll go see the artist in concert (unless it's Metallica, whose music I love but they're douchebags so I refuse to support them). Works for me, and we skip the middlemen.

There is always the question of ethics. Is it right to download a song without permission? Why wouldn't it be? You're not stealing it. Stealing it would be going into someone's house and swiping their CD. Stealing means that you now have it and the original owner does not. Then there is copyright infringement, which is when you take something and sell it as your own. So if you take a song and then distribute it saying it is yours, that is copyright infringement.

Unfortunately the RIAA and others have twisted the meaning of stealing to their own ends. When you download a song, you are making a copy. When somebody downloads a song from you, they are making a copy. It is as though everyone is making photocopies of your photocopy. Last I checked, this was not illegal. When I was in university, there were photocopiers in the library filled with copyrighted works. Some people I knew would even go so far as photocopying an entire textbook. Nobody is cracking down on this, so why should they crack down on the music copying? It's essentially the same thing, it's just a lot easier to copy files than it is to copy physical sheets.

So don't feel bad for downloading music. If you want to support artists, go see them in concert. Or go to a bar where they have a live band and pay the cover. Heck, mail the band a cheque if you feel the need. Don't support these money-grubbing middlemen who profit off of other people's work.

May 15, 2008

Don't Insult a Programmer's Language

It is common knowledge that languages all have their strong points and their weak points. C is super fast but needs a good knowledge of the machine, and lacks object-oriented constructs and strong typing that would make programs much easier to write. Java is portable and keeps new programmers from making too many mistakes, but it is a bitch to program in (at least compared to more modern languages like Scala). Et cetera.

There are some programmers who label themselves as an "X Programmer". For example, someone who loves C++ will call themselves a C++ programmer. These people are not just users of the language. I use PHP a lot more than anything else, but I would never call myself a PHP programmer. For the people who love their language, it is more than a language to them. It is a defining feature of themselves; an extension of their soul.

When you mention something about a language, whether in a neutral or negative tone, they fight to the death about why it is not really a bad thing, or why the language designers did it that way. Most of the time, I can see why a language designer did something. C and C++ do many weird things for the sake of speed. Maybe not so great on the programmer, but it is fast. Ruby does things to make your life easier, regardless of the efficiency cost. Many of these language features can be put into a good or bad light just by changing what the goals of the language are.
Consequently, programmers will put their language into a good light and other languages into a bad light.

Note that I'm ignoring Blub programmers. When we start talking about those kinds of programmers, it is merely ignorance that holds them to their language and not any merit of the language itself.

May 14, 2008

User Files vs. System Files

A while back I wrote a post about why Ubuntu doesn't have viruses. I've thought about this some more: there is a vast amount of money invested in Linux-based systems on the Internet, and a virus that broke them would cause havoc. So I'm thinking there is lots of incentive to hack Linux; they just can't do it.

This led me to another thought though, back to Ubuntu on the desktop. Suppose a user does get a virus that deletes files. Well, the system is safe since the virus doesn't have root access, but does that matter? The system is easy to format and reinstall; it takes a half hour. However, the files in the user's home folder (which are accessible to the virus) are much more valuable to the user: all the documents, photos, etc.

Linux geeks argue too much that "oh, if a virus gets on your system it can only affect the files that it has access to." Well that's still a problem! Those files are the ones that are irreplaceable, not the files outside of /home. A simple script going
rm -rf ~/*
would be enough to really fuck up someone's system.

Lots of people are computer illiterate (anybody who has worked in tech support knows this). It doesn't matter what OS they're using: if it gives them the ability to run programs and the ability to modify their files, then there is the possibility of files getting wiped out. Unfortunately, to get around this security risk we would have to give up a lot of convenience. How annoying would it be to have to type in a password every time you wanted to modify your personal files?

So there is no way to be perfectly safe, unless you snip that network cable (wireless users are screwed - just kidding). Don't use Linux assuming that you are invincible.

May 12, 2008

Don't Lie To Newbies

Ubuntu "just works." That is Canonical's slogan for why Ubuntu is better than other distributions.

I tell you now, it's all lies. Usually when a new version of Ubuntu comes out I have to wrestle with it for a week. I still don't have my printer working yet, and the new Nvidia drivers have a glitch that makes things appear a little weird with Compiz Fusion on. Then on my other computer, the wireless still only works once in a while.

Not everything works right away. Sometimes it will, sometimes it won't. Perhaps they're being selective about this, installing it on hardware they know will work out of the box, and then saying "oh look, it works out of the box!" Sounds like a Microsoft salesman to me.

Even if everything does work right away, what do you have? Ubuntu at first glance is not very impressive. The grey blocks with the orange and brown everywhere really looks like garbage. From what I've seen, most people change the theme if they can. So if most people are changing the theme, shouldn't this give you a hint that your theme sucks? At least put some eye candy on the default theme, like Mac and Windows do. They seem to understand that outside of geek-land, aesthetics play a huge part in people's judgement of quality.

First impressions are everything. Give Intrepid (the next version of Ubuntu) an overhaul before too many first impressions are made.

May 11, 2008

Law of Averages

After reading a blog entry from Raganwald's delicious feed called Perfecting OO's Small Classes and Short Methods, I have some words to say. The author lists a number of exercises to force programmers out of a procedural groove and into a more object-oriented one.

This makes me wonder, is an object-oriented groove really a good one? In Java, you tend to write way more code than you really have to. Example:
//input a string from the keyboard
String input;
try {
    BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
    input = in.readLine();
} catch (IOException e) {
}
Compare to Ruby:
input = gets
or even C++:
char input[1024];
cin.getline(input, 1024);
Something about the enforced "object-oriented" approach seems like more work than a less object-oriented fashion. Don't get me wrong, using object-oriented techniques really saves you effort in many situations, but the returns diminish. If you try to program everything in an object-oriented style, you'll end up doing more work than you have to.

Let's move over to functional programming. This seems to be the latest craze among geeks nowadays, which means in 10 years it'll probably be big business. Best for young programmers like myself to get into it now.
It has some great advantages. For example, using map or reduce (in Ruby, inject) you can write some really good one liners:
#sum up the prices of our order
sum = shopping_cart.inject(0) { |total, item|
  total + (item.price * item.quantity)
}
compare it to PHP:
$sum = 0;
foreach ($shopping_cart as $item) {
    $sum += $item->price * $item->quantity;
}
Not a lot longer, but it shows a very different style of programming. For a trivial example there isn't a huge difference, but for more complex things the functional approach can be a lot nicer. Plus it scales really well, which probably explains why functional programming is getting big nowadays, since scaling to multi-processing is another big thing.

Functional programming also has its disadvantages. Look back at the Ruby vs. PHP example. If you had never heard of inject/reduce before, would you have any clue what it did? Now compare it to this:

(define (roots a b c)
  (if (< (* b b) (* 4 a c))
      #f
      (list (/ (+ (- b)
                  (sqrt (- (* b b) (* 4 a c))))
               (* 2 a))
            (/ (- (- b)
                  (sqrt (- (* b b) (* 4 a c))))
               (* 2 a)))))
Do you have any clue what this is? To the educated (or intelligent) eye, you can probably figure out that this is the quadratic formula, written in Scheme. While Scheme has a beauty in its simplicity, it is rather difficult to read.
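For comparison, here's my rough translation of the same function into Ruby (returning nil where the Scheme version returns #f):

```ruby
# Real roots of ax^2 + bx + c, or nil if the discriminant is negative.
def roots(a, b, c)
  disc = b * b - 4 * a * c
  return nil if disc < 0
  [(-b + Math.sqrt(disc)) / (2 * a),
   (-b - Math.sqrt(disc)) / (2 * a)]
end

roots(1, -3, 2)  # => [2.0, 1.0]
```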

Note that other functional languages like ML or Haskell are a lot easier to read than Scheme, but they have problems too. Attempt to learn them and you will probably see. Personally I don't see the appeal of Haskell with its "pure" functional approach. I'd say mutable values are a great way to get things done; you don't have to spend time thinking, "OK, how will I do this without changing any values, while still keeping the function tail recursive so that I don't run out of memory?"
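A quick sketch of that trade-off in Ruby (which, like most imperative languages, doesn't guarantee tail-call optimization): the pure recursive version has to worry about stack depth, while the mutable one just loops.

```ruby
# "Pure" style: no mutation, but every element adds a stack frame, so a
# large enough list can raise SystemStackError in Ruby.
def sum_pure(list)
  return 0 if list.empty?
  list.first + sum_pure(list.drop(1))
end

# Mutable style: one accumulator, constant stack depth, any input size.
def sum_mutable(list)
  total = 0
  list.each { |x| total += x }
  total
end

sum_pure((1..100).to_a)           # => 5050
sum_mutable((1..1_000_000).to_a)  # => 500000500000
```

In a language with guaranteed tail calls you'd rewrite sum_pure with an accumulator argument to get the same constant stack depth, which is exactly the extra thinking I'm complaining about.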

What I'm thinking is take the good parts of everything. There are many good things about object-oriented programming (ever write a GUI app?), there are also many good things about other paradigms. Knowing the different paradigms will make you a better programmer, but using them alone will probably make you worse. It's good to see that the designers behind languages like Scala seem to have understood this. Now if I could just get Scala to work...

May 9, 2008

Ubuntu for Web Development

I was reading a little article on 10 signs you are ruining your career as a web developer, which talks about several things you're probably doing wrong as a web programmer. I think a lot of them seem somewhat biased, like 3, 4, 5, 7 and 9. You don't have to be an active member of the community to be a good web programmer. Nor do you have to spend every evening studying.

What I found to be the most meaningful of these was number 10: "WAMP is still the development platform for your LAMP app." For most web developers, this is probably the case, at least for LAMP developers (I'm still not certain as to why anybody would want Windows as a server, other than the obvious steak and strippers reasons). In fact, I'll bet many of them don't even have the AMP part, they just do their development in some PHP (or whatever language you may be using) editor and upload it to some server with the stuff on it.

There are a couple problems with this approach. I did a professional site using Ruby on Rails a while back on my Ubuntu box, and I have to say it is a lot easier to debug things when you have a web server and everything running on your local machine. You can have a debugger/profiler running, you can read log files much more easily, and you don't need this thing called a "development server" since the development server is the computer you are working on.
One problem I found crops up from time to time in Windows development: capitalization. When working on a team, everybody codes differently, and when naming files, these differences show up. One person might name their file "myfile.php" whereas someone else might put "myFile.php". Not really a problem, until you start throwing things in SVN. The SVN server is probably running Linux, so these two files are different files. The SVN client you're using on Windows (like TortoiseSVN), however, will probably see them as the same. So it tries to do merges on two files that it thinks are the same while the server says they are different, which creates complications. It's a big mess that would never have happened if you were working in Ubuntu in the first place.

What tools are available for development in Ubuntu? Quanta is one I like; it's for KDE, but it's easy to install the KDE libraries in Ubuntu. There's also Eclipse, which a lot of people like, but it's a beast. Plus it's a Java IDE, with other functionality tacked on through extensions; I want something a little more devoted. There are a lot fewer IDEs for Ubuntu than for Windows, though I suspect there are more free IDEs for Ubuntu than for Windows ;).

UPDATE: It's been a few years since I wrote this article, I now develop almost exclusively in Vim. I use IDEs for fancy refactoring or bells and whistles like that when necessary, but not for writing any code.

I really should start looking for a new SVN client. I mean, the svn command line isn't so bad, but something that automatically detects when I've added or deleted files is always nice.

For FTP there's gFTP or the ftp command line stuff, although Quanta has built-in FTP support, which means once it's set up you don't have to navigate directories. You just press the shortcut (Ctrl+Shift+U for me, habit from my Dreamweaver days) and it uploads to the proper directory.

The major deciding factor in my mind is the image manipulation programs. If you want something really powerful, you're probably not going to find it on Ubuntu. Let's face it, the GIMP sucks. It just doesn't compare to Photoshop. I find it much easier to use Photoshop than GIMP (not that I really know how to use Photoshop for anything). So if you need to do a lot of fancy image manipulation, then Ubuntu might not be the choice for you unless you want to figure out how to use the GIMP.
I don't do much complex image manipulation, and in this regard Ubuntu turns out to be better for me. I want a basic image editor that supports transparency and nice cropping; there's occasionally some other stuff I want, but that covers what I need 90% of the time. Paint just doesn't cut it here, and using Photoshop for this is like using a shotgun to kill a mosquito. This is where KolourPaint comes in. It has transparency (you can select it at the bottom as if it were a colour) and resizing/scaling. It even produces some really nice file sizes, and the smooth scaling option is easy to use.

So here is my conclusion: the web designer, who mainly does the design of the site (images, HTML, CSS), is probably better off on Windows or Mac. But for us web programmers, who work with the server-side stuff a lot more, we're better off on Ubuntu or some other Linux distro. Why would you program on a WAMP stack (or even worse, a WIMP stack: Windows, IIS, MySQL (or MS SQL), PHP/Perl/Python) when you can program on a LAMP stack and speak the same language as the server that will be running your software?

UPDATE: I've posted a follow-up for this article here.

May 4, 2008

Installing Hardy

Here we go with Ubuntu 8.04: "Hardy Heron".

Startup is better. It says "Try Ubuntu without affecting your computer." That is good for the human aspect. They have separated the installer from the liveCD stuff, which is a good idea I think.

The installer is still dead simple. Say which language you speak (this is a hard one). If you live in the US or English Canada, you can just click next for everything, except where you pick your time zone. Unfortunately, a lot of people can't read maps, so this might be the hardest step. My answer? Fuck 'em. If you can't find your city on a world map, that's pretty sad. Especially since when you mouse over the city dot, it says the city's name.

One thing that struck me is that the installer set the resolution to 1680x1050. This was amazing. For Gutsy, the highest resolution I could get before installing the drivers from Nvidia was 800x600. I didn't have to do anything for this. Great job!

The installer is a lot simpler now that it doesn't completely load up GNOME. It gives you the standard dialogs: where are you located, what do you want your username/password to be, partition your drives (if you don't want to dual-boot and just want to format everything, click Next) and then a little progress bar tells you what it is doing. Once that is done, it shows a little thing that says "You must now restart your computer" and voila! It is done.

Now we go into the system itself. Compiz Fusion is not on, which is slightly annoying. This means that I have to figure out how to get it to work again. The completely idiotic keyring thing (whose idea was that anyway?) is still here, so I have to reset my wireless stuff. Then I found out that they have this Unlock feature on the network settings dialog. Not too bad, but after a few times of looking at the dialog and clicking a few things with no result, it gets annoying. Unfortunately it's difficult to tell whether the dialog is locked or thinking. I automatically assume that if things aren't responding it is thinking, so I just sit there and stare at it until I notice the "Unlock" button. Very annoying. They probably (or at least I hope) had a good reason to do this, but it'd be nice if there were some indication that the dialog is locked, like highlighting the Unlock button when you try to click things.

Other than that, my first impression has been pretty good. I still think they should update the default GNOME theme so that it doesn't look like you're using Windows 95, but this is merely aesthetics.

May 2, 2008

Building an Ubuntu PC

Currently, this can be a bit difficult. Installing Ubuntu is not hard, provided your hardware works. A lot of hardware does work, some of it works better than others, and some of it only works with a decent amount of hacking and hair-tearing. This article will try to help you get through it with a minimum of fuss.

Note that by the time you read this article, things might have changed. Right now Ubuntu 8.04 has just come out, although I'm giving it a week or two so that downloading isn't super slow, and for any immediate bugs to be fixed.

Here's a list of the parts, with some recommendations:

  • Processor: Go for dual-core. With triple-cores and quad-cores coming out like crazy now, dual-cores are dirt cheap. Quad-cores aren't twice as fast as dual-cores unless you're doing something that actually multi-threads well like video encoding or serving web pages. For normal web browsing, listening to music, etc. a dual-core is all you'll need. The second core means your system won't die if one program steals all your resources, but adding more cores doesn't really help all that much.
    Cache is something good to have. More cache means more speed.
    Manufacturer doesn't really matter. I prefer AMD because they generally use less power, they're cheaper, and supporting AMD means that Intel won't get a Microsoft-esque choke-hold on the CPU market.

  • RAM: Whatever you want. RAM doesn't care what OS is running behind it. More RAM means things will probably go a bit faster, but it doesn't scale that much after a certain point.

  • Video Card: Anything Nvidia. They have much better Linux support than their competitor, ATI. The newer cards will probably not work as well as the older ones, but usually with a little bit of time you can have them running nicely. I wrote an article on how to get things working with a GeForce 8800 GT; I doubt the process is much different for any other Nvidia card.
    Don't go with ATI. I tried with an ATI card a couple years ago (Radeon X800) and it was a nightmare.

  • Hard Drive: This is an interesting one. Because on Linux you generally split up the partitions more than you would under Windows, you have some more freedom in choice here. The coolest option would be to get a SSD (solid-state drive) for your root partition ( / ) and a regular SATA2 drive for your /home folder. SSDs are still pretty expensive (they're essentially big USB sticks), but they're super fast and don't consume a lot of power. Maybe pick up an 8-16 GB SSD and a SATA2 drive. If you don't want to splurge on a SSD, just get a SATA2 drive (or two).
    Brand doesn't really matter too much here, go with whoever you want.

  • Case: Get a cool case. Preferably with cool things like blue lights.

  • Sound/LAN: Usually your motherboard will have this built in. I usually get an nForce board because of Nvidia's Linux support (you'd think Nvidia is paying me to say all this stuff), but I'd think other companies would have decent Linux support too. If you want something spectacular you can go for a special sound card, but usually the onboard ones nowadays are of good quality.

  • Wireless: This is the tricky part. It's harder to find a wireless card that works with Linux. Sometimes different models of the same card will have different chipsets, which means they use different drivers. Your best bet is to just look at a bunch of cards and Google for "D-Link DWL-510 linux" or whatever the card's name is. Most of the time you should have an answer for whether it works well or not. Many of them need ndiswrapper, a wrapper for the Windows version of the drivers. It's not too hard to use, although native support is always preferable. EDIT: I use a D-Link DWL-G520. The versions of Ubuntu that I've used on it (Feisty through Hardy) all automatically detect it without any problems. Absolutely no mess whatsoever.

  • Peripherals: Any normal keyboard/mouse should do it. If your keyboard/mouse has extra buttons like volume, play, or whatever, it's pretty easy to set them up in Ubuntu. Just go to System->Preferences->Keyboard Shortcuts and you'll get a dialog where you can set the keys. Just click the Shortcut column for the one you want to set and press the key.
    Speakers are speakers. If the sound card works, the speakers usually should too.
    Monitor: Get an LCD screen. CRT screens are so five years ago. Get one between 19 and 25 inches; anything beyond that is a bit ridiculous. Jeff Atwood posted about monitor productivity; check it out if you want.
    Webcams are a bit touchy. For me, my webcam works but the microphone on it does not. I tackled this a while back unsuccessfully; after I install Hardy I will try again and post an article about it if I succeed. For now, just get the two separately (microphones are cheap anyway).
These are some guidelines that should get you a pretty good system. Remember that if you have any odd stuff in your system that I haven't listed here, it might be a bit tricky to install, or it might be a breeze. Remember that Google is your friend.