So Rails 2.2 has come out recently, and it's touting some nice features, including internationalization (i18n) and thread safety. The thread safety aspect isn't so great for C Ruby, but it's excellent for JRuby, which uses a different threading system.
There is one problem though, and that's for anybody (like us) who is using ruby-gettext, which is an i18n gem for Ruby and Rails for doing translations. The problem is that gettext depends on plenty of the little things in Rails that were not thread-safe, and are consequently no longer there. If you even attempt to load the Rails environment with the gettext gem included (by doing something like script/server, or a rake task), it will blow up in your face.
The solution right now is not that cool. You've got a few options:
1) Revert to 2.1.2. Kinda sucks, since we're using JRuby for our app and I was looking forward to the fancy schmancy new stuff.
2) Fix the gettext issues yourself. Really sucks. While like a good FOSS boy I should be all over this one, time is my most limited resource and well, it has to go to things that either keep me fed or keep me sane.
3) Wait for the people behind ruby-gettext to fix it. Also sucks, because we have no idea when that will happen.
4) Rip out all the gettext stuff and replace it with Rails' new i18n stuff. Probably the best solution, however this takes more time than 1) and so is not on the immediate horizon.
So if you're using gettext, remember not to update to Rails 2.2+ without first ditching gettext. Or if you were thinking of using gettext, I'd recommend saying "screw that" and going with Rails built-in stuff.
UPDATE (Jan. 11/09): We have gone through and removed the gettext stuff from our app. Fortunately we weren't dependent on any gettext-related tools, so this was not a difficult task: just replace all the _ calls with t() calls (can be done easily with Netbeans using refactoring tools), plus some configuration details. Was pretty simple fortunately.
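For anyone facing the same migration, the shape of the change is going from msgid-style lookups to key-based lookups. Here's a toy sketch of the difference (this uses a hypothetical lookup table, not the actual Rails i18n API):

```ruby
# gettext style: the English string itself is the translation key.
#   _("Hello, world")
# Rails i18n style: an abstract key is looked up in a locale table.
#   t(:hello_world)

# Toy stand-in for a locale file (hypothetical data, not the real API):
TRANSLATIONS = { :hello_world => "Bonjour, le monde" }

def t(key)
  TRANSLATIONS.fetch(key) { key.to_s }
end

puts t(:hello_world) # outputs Bonjour, le monde
```

The point is that the refactoring is mostly mechanical: every `_("...")` call becomes a `t(:key)` call, plus a locale file entry per key.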
Nov 27, 2008
Nov 26, 2008
Kingdom of ... Lennoxville?
I was talking to a friend from university yesterday and we started playing Kingdom of Loathing again. For those who haven't played it, it's a browser-based role-playing game where you control a character who goes around fighting monsters and doing quests (that is kinda implied by the term "role-playing game"). The difference is that it's quite satirical: it makes a lot of jokes based on puns, plays on words, pop culture, and things like that.
We were thinking that it'd be really fun to have something like that, except based on our experience at Bishop's, or on college life in general. It'd be based in the little town called Lennoxville which is where Bishop's is, and so the places you would explore would likely be places around there. However the game itself and most of the adventures would be more oriented towards college life in general.
I'm wondering if anybody would be interested in this idea, maybe not necessarily for coding (if you want to do that, it'd be cool too, but it's a fair bit of work and so I don't expect anything) but for ideas of things that would be stereotypically college, or ideas on the mechanics of the game. Or even just if you'd be interested in playing.
Nov 24, 2008
FOSS and the Software Industry
I read an article on Slashdot about how open source is slowly eroding a lot of the commercial applications out there. This isn't really anything new, it's been happening for years. However it seems like the quality of open-source is continually getting better and it's just a matter of time before it becomes "good enough" for other people to want to use it over the proprietary equivalents. Even the Economist thinks that open-source is going to be bigger - not that we really needed them to tell us it was, we knew it already.
I'm not trying to say open-source will eventually wipe out the market for proprietary software. Here's what I think will happen. Anything that's fun or interesting to code will eventually be taken over by open-source. These are things that programmers such as myself would not mind doing in our free time as one of those things, you know... what are they called? Oh right, a hobby. Like how some people spend tons of time building model trains or burning wood, we sit at our computers and churn out code that does cool stuff - well some people do, I tend to do more experimentation with random things and blog about them.
So what does that mean for professional software developers? Probably that demand for us will shrink. People won't need us to develop their software, since there are open-source versions that are as good as whatever we'll put out. People will still need custom-made software, or they won't want to comply with the GPL, or will want to make games (note to self: write about how the open-source development model doesn't really fit gaming), or whatever. There will still be jobs for us.
What I'm worried about is that most of the interesting jobs will be gone. We'll all be stuck either working for slave shops, fancy new web 2.x startup ideas, or content-management systems. Or things like that.
So I'll stop and make a confession here. I'm going to school. Not in anything really related to computers. I've been doing classes on and off since I graduated, but I'm starting to look into it more seriously. My rationale is this: I enjoy programming, but there is such a thing as too much of something you enjoy, and it makes you not enjoy it any more. Therefore, I want to work in something else I enjoy, and then come home and mess around with software. Much more fun that way.
Nov 22, 2008
In The Office vs. Home
A few months ago I quit my job at the office to work for a startup, which has involved me working from home most of the time. I've worked at home before, and have sorta flip-flopped on how I feel about it. Here's my current analysis:
Pros:
- Flexibility - If I have to go to some place that is only open on weekdays between 9 and 4, I don't have to rush during my lunch break. If I want to stay up late and sleep in, I don't have to wait for the weekend. I can take classes if I want.
The flexibility of working at home is the main reason why I like it.
- Freedom to Choose - I don't have to adhere to particular software just because everybody else is using it. I can choose to work under Ubuntu with Vim instead of Windows and some IDE.
- No commute - Well, there is a bit of a commute. I have to walk all the way from my bedroom to my computer room in the morning. Sometimes I even have to stop by the bathroom and the kitchen on the way, which means going really far out of my way since the computer room is next to the bedroom. Damn.
This one is really awesome in the winter here in Montreal. No more having to jump from a metre-high snowbank into the bus, only to spend an hour and a half on a normally 15-minute bus route. No more having to walk down a frozen sidewalk because it is faster to walk from the metro than take the bus.
- Choose a time frame - One interesting thing about coding is that your good coding periods do not really occur at the standard working hours. Sometimes they might, other times maybe not. I don't really have a time when I'm most productive; it changes from day to day. Sometimes I can churn out some good stuff at midnight, other times at 11am. It depends on the day, and the office doesn't accommodate that.
- Claim things on taxes - Since I work at home, I can claim things like Internet and hydro on my taxes. Awesome, since in Quebec you get gouged when the tax man comes around.
Cons:
- Takes self-discipline - There's a Wii with Rock Band in the other room. I have Starcraft installed. I have beer here. The temptation to do any of these is quite high when you're working at home. It takes some good self-discipline to not do these while you're working.
- No boundary between work and play - When working in an office, you have a clear distinction between what is work and what is not - the location. When you're not at the office, you don't have to give a shit about work and can sit back and relax. For me, I always have this feeling that I haven't got enough work done, so even if I have done 40 hours for the week I feel like I can put more in. Doesn't give me much time to relax.
Nov 19, 2008
Analyzing Blogs
This is completely unrelated to anything I write about. I found this random site that analyzes a blog and tells you about the thought processes of the author.
Here's what I got:
ESTP - The Doers
The active and play-ful type. They are especially attuned to people and things around them and often full of energy, talking, joking and engaging in physical out-door activities.
The Doers are happiest with action-filled work which craves their full attention and focus. They might be very impulsive and more keen on starting something new than following it through. They might have a problem with sitting still or remaining inactive for any period of time.
I'm really not sure what to think about this. The second paragraph is pretty fitting, but the first? Hmm...
It is beta after all...
Nov 15, 2008
Using Reduce as a Decorator
A while back I wrote a post about reduce, which is a technique used in functional languages to get a single value from a list.
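For example, the classic cases look like this in Ruby, where reduce goes by inject:

```ruby
# Collapse a list into a single value: a sum and a max.
sum = [1, 2, 3, 4].inject(0) { |total, x| total + x }
max = [3, 7, 2].inject { |a, b| a > b ? a : b }
puts sum # outputs 10
puts max # outputs 7
```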
While one main use of this is for things like summation or finding a max/min of a list, it can be used in other ways too. One neat thing that I discovered you can do is apply the decorator pattern with reduce (this example is in Ruby which calls it inject):
decorators = [Mocha, Whip, TonsOfSugar]
new_coffee = decorators.inject(old_coffee) { |coffee, dec| dec.new coffee }

I thought this was pretty cool, and realized that it can be used in a more general case. If you have a set of transformation functions in an array, you can apply them all using just this one line:

result = transforms.inject(original) { |o, t| t.call o }

One example I found is if you have a string, and a set of replacements to apply to it (could be stored as a hash), you can do it like this:

replacements = {"a" => 1, "b" => 2, "c" => 3}
original = "abcdefg"
result = replacements.inject(original) { |string, repl| string.gsub(repl[0].to_s, repl[1].to_s) }
puts result # outputs 123defg

This is probably old news for FP folks, but to us young'ns it's an interesting discovery!
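To make the transforms version concrete, here's a small sketch with hand-rolled lambdas (the transform names are just made up for illustration):

```ruby
upcase     = lambda { |s| s.upcase }
exclaim    = lambda { |s| s + "!" }
transforms = [upcase, exclaim]

# Each transform is applied in turn to the running result.
result = transforms.inject("hello") { |o, t| t.call o }
puts result # outputs HELLO!
```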
Nov 13, 2008
Rock Band Drums in Linux, Part II
I went back to my Rock Band drums code and decided to actually whip something up out of it. It's a tiny little app that uses some OpenGL to render the drum pads, and they bounce and things like that when you hit them. The kick pedal makes everything shake and gives a flash. It's pretty cool.
The link to the code is here: http://code.google.com/p/opendrums/
You can check out the code if you like, I released it under an MIT license so you can really do whatever you want with it.
There are a couple of issues still. The main one is that the audio latency can be annoying, sometimes getting to be half a second off. Since I don't know much about programming audio, I can't fix it much yet, but I'm doing my research and will try to get this resolved as fast as I can.
I tweaked the old code because there was a bug where if you hit two pads at once, only one of them would play. This is no longer the case; you can hit as many pads as you like and they will all play.
The code is done with SDL and OpenGL, so it should work just fine on Windows or Mac too, you just need to compile it because I haven't made a compiled version for either of those platforms.
UPDATE (01/20/09): It seems there has been some trouble getting opendrums to compile. On Ubuntu you need the following packages:
- build-essential - In order to compile C++
- libsdl1.2-dev - For SDL
- libsdl-mixer1.2-dev - For SDL_Mixer
- libgl1-mesa-dev - For OpenGL
Nov 12, 2008
Montreal on Rails
I'm doing a presentation at the upcoming Montreal on Rails gathering, on Nov. 18. My presentation is a simple introduction to jQuery and jRails, and why you might want to use them instead of Prototype and Scriptaculous. I haven't thought of a cool name for it yet, stay tuned for that.
If you're in the Montreal area and like/use/are-interested-in-using Rails, you should come check this out as there are plenty of interesting talks and cool people to meet.
Ruby Makes You Lazy
I was messing around the other night with some code, working on a simple artificial life simulator to play Conway's Game of Life. I figured I'd take the opportunity to learn how to use RubySDL, which provides Ruby bindings for the SDL media library. It took about 45 minutes to an hour to get everything rolling, and before long I was seeing the shapes run around everywhere on my screen.
It was pretty nifty, although after I upped the size of the world to 200x200, Ruby could no longer process everything quickly enough to maintain the 1 FPS framerate I was working with - which I found pretty funny really, because there isn't that much processing going on. But whatever.
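To give a sense of the work involved: each generation visits every cell and its eight neighbours, so a 200x200 world means 40,000 cells times 8 checks per frame. The update step looks something like this (a simplified sketch of the rules, not my actual code):

```ruby
# One generation of Conway's Game of Life on a square grid of 0s and 1s.
def step(grid)
  size = grid.length
  (0...size).map do |y|
    (0...size).map do |x|
      # Count live neighbours, staying inside the grid bounds.
      n = 0
      (-1..1).each do |dy|
        (-1..1).each do |dx|
          next if dy == 0 && dx == 0
          ny, nx = y + dy, x + dx
          n += 1 if ny.between?(0, size - 1) && nx.between?(0, size - 1) && grid[ny][nx] == 1
        end
      end
      # Live cells survive with 2 or 3 neighbours; dead cells are born with exactly 3.
      if grid[y][x] == 1
        (n == 2 || n == 3) ? 1 : 0
      else
        n == 3 ? 1 : 0
      end
    end
  end
end

blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
p step(blinker) # => [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```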
I decided to port the thing over to C because C is fast, and when it comes to SDL most of the calls are the same. It was now though that I really realized how much Ruby makes you lazy. Maybe not lazy, but it spoils you. For example, in Ruby you can do this:
pieces = File.read("my_file").split("\n").map { |l| l.split(//).map(&:to_i) }

What this does (for those who don't know Ruby) is take the file, split it into an array of its lines, and convert each line into an array containing the integer version of each character in the line. It's magical! Try doing this stuff in C:

FILE * f;
char c;
int pieces[SIZE][SIZE];
int i = 0;
f = fopen("my_file", "r");
while (!feof(f)){
    c = fgetc(f);
    if (c == '\n' || c == EOF) continue;
    pieces[i / SIZE][i % SIZE] = (int)(c - '0');
    i++;
}
fclose(f);

It's so much longer in C! Why do I have to write so much?
The answer is because it is faster. Like ridiculously faster. I put up the screen size to 800x800 (didn't make it higher because the window was only 800x800, but I could probably go higher) and it still maintained the same framerate.
This whole experience taught me something. While it is really nice to work with Ruby all the time (or languages like Ruby, say Python, Perl or dare I say it, PHP), these languages spoil you in ways that in time, you forget. However these nice things do come at a price, and when we need to do some work that is CPU-time-bound, I'm afraid that we may no longer have the skills to speed things up. It is important that when we're working with "productivity" languages like Ruby, we should still work with faster languages to keep us sharp.
Nov 10, 2008
Glassfish with RMagick
Deploying a JRuby on Rails app with Glassfish is a relatively simple process, and in my opinion much easier than with Mongrel. Takes about 15 minutes.
Unless of course, you have a fancier Rails app which has dependencies. Then, sometimes Warbler doesn't always link the proper files, and you end up with a WAR file that can't properly connect to your gems. The one I had a big problem with was RMagick. See, this one is already an issue for JRuby, since RMagick is written in C and JRuby runs on the JVM, so you can't load it. I have said before though how to get around that with a little gem called rmagick4j, which ports the functionality of RMagick (supposedly there are a lot of things not built yet, but I haven't had any problems).
UPDATE: You can fix this by using the Warbler config:

cd /path/to/Rails/app
jruby -S warble config

In the generated file config/warble.rb, find a line that goes:

# config.gems += ["activerecord-jdbcmysql-adapter" ...

Uncomment it, and add "rmagick4j" at the end of the array. This will fix the rmagick4j issue. However, feel free to keep reading, as I learned a bit of stuff from all that and you might too.

REST OF POST:

Unfortunately, the gem is not recognized from Glassfish. There is an alternative to RMagick called ImageVoodoo which you can use if you like; however, I need to use RMagick because I use other gems that depend on it (like gruff).

Here's how we fix it. First, you need to freeze the rmagick4j gem. There's a nice little tool called Gems on Rails, which freezes your gems in your Rails project folder. Install and use it like this:

jgem install gemsonrails
cd /path/to/Rails/app
gemsonrails
jruby -S rake gems:freeze GEM=rmagick4j

Now you're almost set. There's a problem with Rails: it seems to think that since we have a gem called rmagick4j in our app, there should be an rmagick4j.rb file in the gem's folder. That's a reasonable assumption, since it is the convention, but rmagick4j has to break the convention in order to stay compatible with the original RMagick. So you have to do a tiny hack to get things working.

Create a file called rmagick4j.rb with this in it:

require File.dirname(__FILE__) + "/RMagick"

Put it in the vendor/gems/rmagick4j-0.3.6/lib folder (replace 0.3.6 with the version of rmagick4j that you have). Now you can use Warbler to package up your app, and it will work fine.
Nov 9, 2008
CUSEC 2009
I just bought my ticket to CUSEC 2009, which is a conference for Canadian university students in software engineering. While I'm not a student in software engineering, anybody is welcome, and the presentations are usually really good. I've gone for the last two years and they were awesome, and I recommend that if you are anywhere near Montreal that you attend too. Bring warm clothes though, Montreal is cold in January. And a toque too.
They've got some pretty big speakers picked out so far, such as Richard Stallman (founder of GNU; if you're reading an Ubuntu blog and haven't heard of this guy, get out from under your rock) and Dan Ingalls, one of the chief people who built Smalltalk back in the day.
So if you're a developer in Montreal (or somewhere close to Montreal, like Quebec, Toronto, New York, Boston) you should come check it out, it's a blast and you'll likely learn a lot. If you're a student at a Canadian university (or an American one, you're welcome too) see if your school has a head delegate and if not, maybe become one and head on out for good times.
Plus, the parties are wicked.
Nov 8, 2008
Welcome to Intrepid
This would not be an Ubuntu blog if I didn't make a post about the most recent release, Intrepid Ibex. It was released not too long ago, but I usually give these things a few days before I test it out.
So upgrading was a little tricky, but not really. Hardy, being a LTS release, by default only upgrades if the new release is also a LTS release. Intrepid is not LTS, so you don't get any options to automatically update, or even update using apt-get dist-upgrade. Fortunately it is easily fixed by going to System->Administration->Software Sources, choose the Updates tab, and go to "Show new distribution releases" and set it to "Normal releases". After that, you should be able to auto-update. I was scared for a sec because I thought I would have to download the ISO and install from that. However, then I remembered that I've had to do that for every other version of Ubuntu that I've used, so it wouldn't be any different.
Another good thing to do is switch your download location to one of the mirrors, that way you aren't hitting the regular Ubuntu servers which are completely overloaded. You can do this in the same Software Sources dialog from the Ubuntu Software tab. Just go to "Download from", click "Other" and then you can pick a server near you. Makes it much smoother, and for me, much faster since USherbrooke (the mirror I'm using) has a fat pipe.
This was the first install that actually went smoothly. The upgrader didn't crap out and corrupt my install like it did with Gutsy and Hardy, it actually did all its stuff properly and left me with a usable system. There are a few differences with menus and what-not, the fonts are slightly changed and the arrows for menus are freakin' huge.
I did notice the tabbed browsing for Nautilus, it will probably take a little while to get used to but it will probably be handy. However if you're using the console a lot, you probably won't notice. Also I'm sure the fact that it doesn't really use xorg.conf anymore will be huge when my video card dies. Whether it is huge in a good or a bad way is yet to be seen.
The only real problem I'm having now is that I get a bunch of errors during boot, which apparently mean nothing because I'm not having any hardware problems. The Ubuntu splash screen is gone, but I'm sure I can find a way to fix it up.
All in all this release is pretty good, although I'm thinking they put more focus on the server edition. I'm hoping that this is not the start of a trend; I would like to see more focus put on the desktop edition, as that is where Linux needs the most work.
Nov 6, 2008
JRuby and OpenOffice
UPDATE: This code works for OpenOffice 2.4, but has not been tested with OpenOffice 3+ or with LibreOffice. Your mileage may vary.
I wrote a post not too long ago about how OpenOffice doesn't have much support for basic statistics, and that I would like to create something to fix that.
The OpenOffice API is written largely for Java and, to a lesser extent so far, C++. However, they do try to make it work for most languages, including C#/Mono and Python.
I thought to myself, "JRuby can use Java classes, so can JRuby use the OpenOffice API?" And the answer is, "Yes, it can!"
So I'll tell you how to do it. It's not really that hard, although you'll have to deal with the Java way of doing things a fair bit, so you'll end up writing more code than you would with a native Ruby API. Also, there are a few cases where you have to convert from Ruby basic types to Java basic types, which can catch you if you're not careful.
The first thing you need to do is install the OpenOffice SDK. You can get it here, or if you're on Ubuntu you can install it like this:
Once you have that, you're pretty much ready to go! You'll need to check for a few things first, to ensure that the proper JARs are there. Go to your OpenOffice install directory (if you installed using apt-get, this is /usr/lib/openoffice) and check under program/classes for juh.jar and unoil.jar. These are necessary to get the correct classes for this sample.
So let's get to the coding! We want to make something useful, so we'll create something that I complained about in the other post: a random number generator. It'll be a little script that reads in 3 arguments from the command-line: number of values to generate, and the lower and upper bound. We'll ignore some error checking for simplicity, you can feel free to add it in if you like.
First thing we need to do is load in all the Java stuff:
I also create a little helper function there to save you some typing. Basically this queryInterface function is used to get an object of a specific type (klass) based on an interface that you pass (obj). It's not really necessary with Ruby, but since we're working with an API that is based around static languages it is something we need to do for now.
Let's set some variables based on the command line. Let's assume only 3 variables were passed, and that they were all integers:
After that we create a component loader (read: Component Factory) and use that to create us a component for Calc. Note that I use this:
Assuming everything is working, let's start entering our data. We want to create a new sheet:
Let's add our random numbers:
Finally, if we want to automatically switch to the new sheet:
If you have any questions, let me know! I'm no expert by far, but I've been doing a bit of digging and this is what I've got so far.
I wrote a post not too long ago about how OpenOffice doesn't have much support for basic statistics, and that I would like to create something to fix that.
The OpenOffice API is written largely for Java and, seemingly to a lesser extent so far, C++. However, they do try to make it work for most languages, including C#/Mono and Python.
I thought to myself, "JRuby can use Java classes, so can JRuby use the OpenOffice API?" And the answer is, "Yes, it can!"
So I'll tell you how to do it. It's not really that hard, although you'll have to deal with the Java way of doing things a fair bit, so you'll end up writing a fair bit more code than you would if it were a Ruby API. Also, there are a few cases where you have to convert from Ruby basic types to Java basic types, which can catch you if you're not careful.
The first thing you need to do is install the OpenOffice SDK. You can get it here, or if you're on Ubuntu you can install it like this:
sudo apt-get install openoffice.org-dev openoffice.org-dev-doc

The doc package is optional, however I find that documentation is a handy thing to have if you need it.
Once you have that, you're pretty much ready to go! You'll need to check for a few things first, to ensure that the proper JARs are there. Go to your OpenOffice install directory (if you installed using apt-get, this is /usr/lib/openoffice) and check under program/classes for juh.jar and unoil.jar. These are necessary to get the correct classes for this sample.
So let's get to the coding! We want to make something useful, so we'll create something that I complained about in the other post: a random number generator. It'll be a little script that reads in 3 arguments from the command-line: number of values to generate, and the lower and upper bound. We'll ignore some error checking for simplicity, you can feel free to add it in if you like.
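If you do want that error checking, here's a minimal sketch of what it could look like in plain Ruby (the `parse_args` helper is my own invention, not part of the script below; the argument order - count, lower bound, upper bound - matches the example invocation):

```ruby
# Hypothetical helper: validate the three command-line arguments
# (count, lower bound, upper bound) before the script uses them.
def parse_args(argv)
  raise ArgumentError, "expected 3 arguments: N A B" unless argv.length == 3
  # Integer() raises on junk like "abc", unlike to_i which silently returns 0
  n, a, b = argv.map { |s| Integer(s) }
  raise ArgumentError, "N must be positive" unless n > 0
  raise ArgumentError, "lower bound must not exceed upper bound" unless a <= b
  [n, a, b]
end
```

The choice of Integer() over to_i is the important bit: to_i would happily turn a typo into 0 and you'd only notice when your spreadsheet came out full of zeroes.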
First thing we need to do is load in all the Java stuff:
require "java"
require "juh.jar"
require "unoil.jar"

include_class "com.sun.star.uno.UnoRuntime"
include_class "com.sun.star.comp.helper.Bootstrap"
include_class "com.sun.star.beans.PropertyValue"
include_class "com.sun.star.sheet.XSpreadsheetDocument"
include_class "com.sun.star.sheet.XSpreadsheet"
include_class "com.sun.star.sheet.XSpreadsheetView"
include_class "com.sun.star.table.XCell"
include_class "com.sun.star.frame.XModel"
include_class "com.sun.star.frame.XComponentLoader"

# get an object implementing the interface klass from the UNO object obj
def queryUno(klass, obj)
  UnoRuntime.queryInterface(klass.java_class, obj)
end

What this does is fairly obvious: it loads in the JAR files, and the classes that we will need for this example. Fortunately, JRuby only needs to load the classes that we use directly. Classes that are used indirectly don't need to be included this way - otherwise we'd have to add another 5-10 include_class lines.
I also create a little helper function there to save you some typing. Basically, UnoRuntime.queryInterface is used to get an object implementing a specific interface (klass) from the UNO object you pass (obj). It's not really necessary with Ruby, but since we're working with an API that is built around static languages, it's something we need to do for now.
Let's set some variables based on the command line. Let's assume only 3 variables were passed, and that they were all integers:
N, A, B = ARGV.map(&:to_i)

Next thing we need to do is actually connect to OpenOffice. We do this by creating a remote context, which connects to OpenOffice itself, and a service manager, which gives us access to various components of OpenOffice. We want to access the "com.sun.star.frame.Desktop" service, which handles the documents that are loaded. After that, we want to load up Calc so that we can start mucking with things.
# bootstrap the environment and load the Desktop service
remoteContext = Bootstrap.bootstrap
desktop = remoteContext.getServiceManager.createInstanceWithContext("com.sun.star.frame.Desktop", remoteContext)

# get us something to load the Calc component
componentLoader = queryUno(XComponentLoader, desktop)

# load the Calc component
calcComponent = componentLoader.loadComponentFromURL("private:factory/scalc", "_blank", 0, [].to_java(PropertyValue))
calcDocument = queryUno(XSpreadsheetDocument, calcComponent)

Wow, all that code. As we can see, we're making heavy use of the Factory pattern. We connect to OpenOffice using the Bootstrap class, which spits out a remote context object that we can use to access the various services of OpenOffice. We then create a desktop object, which lets us access the Desktop service - which we need in order to edit documents.
After that we create a component loader (read: Component Factory) and use that to create us a component for Calc. Note that I use this:
[].to_java(PropertyValue)

Since the loadComponentFromURL method is expecting a Java array, we need to pass it a Java array. That little snippet is the equivalent of this in Java:
new PropertyValue[0]

Finally, we create an object for our Calc document. We can now start editing things! But first, let's see if this actually works for you. Save the file as calc.rb, and run this line to execute it (replace OOHOME with your OpenOffice install directory, /usr/lib/openoffice on Ubuntu):
jruby -IOOHOME/program/classes calc.rb 10 1 10

If you run this, what you should get is a blank OpenOffice Calc window opening up.
Assuming everything is working, let's start entering our data. We want to create a new sheet:
sheets = calcDocument.getSheets
sheets.insertNewByName("Random Numbers", 0)
sheet = queryUno(XSpreadsheet, sheets.getByName("Random Numbers"))

Fairly straightforward: we get the sheets of the document, insert a new one called "Random Numbers" at the beginning, and get the object for it.
Let's add our random numbers:
N.times do |i|
  cell = sheet.getCellByPosition(0, i)
  cell.setValue( rand(B - A + 1) + A )
end

If we run this code, we now have a new sheet in our open Calc window called "Random Numbers", and the first column has 10 random numbers in it. Cool! Now we can do whatever we want with those numbers.
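The rand(B - A + 1) + A formula itself is plain Ruby, so you can sanity-check that it really covers the inclusive range from A to B without opening Calc at all. A standalone sketch (lower and upper stand in for the script's A and B, using the example invocation's bounds of 1 and 10):

```ruby
# The bounds from the example invocation (calc.rb 10 1 10).
lower = 1
upper = 10

# Draw a generous number of samples with the same formula the script uses:
# rand(n) returns 0..(n-1), so adding lower shifts it to lower..upper.
samples = Array.new(1_000) { rand(upper - lower + 1) + lower }

# With this many draws, the min and max will almost certainly hit the bounds.
puts samples.minmax.inspect
```

The off-by-one is the usual trap here: rand(B - A) would never produce B, which is why the formula adds 1 inside the call.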
Finally, if we want to automatically switch to the new sheet:
model = queryUno(XModel, calcComponent)
controller = model.getCurrentController
view = queryUno(XSpreadsheetView, controller)
view.setActiveSheet(sheet)

OpenOffice uses an MVC structure for its information. We need to get access to the view, which is done through the controller, which is accessed through the model. Then we tell the view to set our sheet as the active one.
If you have any questions, let me know! I'm no expert by far, but I've been doing a bit of digging and this is what I've got so far.
Nov 4, 2008
X.org, Wayland, WTF?
I read on Slashdot about this new X server for Linux that attempts to clean up historical cruft and make things smaller (sounds like what X.org was trying to do over XFree86). One thing I noticed is that I can't actually find this Wayland thing to view any of the source, let alone try it out.
It seems like another one of those things that a bunch of geeks go, "oh, a shiny new idea!" (even if it isn't really a new idea) and get all excited about it. I'd be interested in checking it out if I could, but I can't, so no checking out.
This Wayland thing got me thinking, and now I'm wondering about something. What is the usefulness of this network transparency thing for the majority of what we do with a computer? I'm sure there are a few cases where it would be handy, but not that many. I can think of maybe two occasions where I've had to use it: once when I wanted to run a program from home while I was at work - which failed horribly, because it seems any modern Linux app is so resource-heavy that running it over the Internet is ridiculously slow; easier just to install a VM - and once to read User Friendly, because for some reason my box couldn't access it but our dev server could. Basically the only feasible way to do this whole network transparency thing is over a LAN, because over the Internet it's just too damn slow.
In a world of mainframes and slow-ass terminals, I can see how network transparency could be useful. Have the big box in the middle handle all the processing, and just send the display info to the terminals. This is much more feasible when your terminal is really slow, because the network slowdown is not so much the bottleneck as the CPU of the terminal.
Those days have long passed with the advent of personal computers (hell, I don't even remember the days of mainframes and terminals; we had a desktop computer for as long as I can remember). Our computers nowadays can handle our processing needs (my computer here is probably more powerful than most mainframes were when X was created), and the network delay is really the main slowdown for this kind of computing.
I haven't yet addressed the issue of distributed computing. There are plenty of reasons why we'd want a centralized system for our computing. Things like data storage, a central location for our apps so we don't need to install them on our system, etc. These are all very useful tools, and the whole personal computer thing doesn't handle it very well.
However, neither does X's model of computing. There are many reasons, and here are a few that I can think of:
- Ease of use - X isn't exactly the most user friendly of environments. Believe it or not, there are non-geeks out there that may need to use a distributed computing environment. Say, the marketing team for your company, or your customers. Making them have to use X would probably make them less productive, or drive them away.
- Programmer productivity - A lot of the time you have to re-invent the wheel when you're doing things like working with the window being resized, etc. Or you have to deal with memory management (even with Java or Ruby - Java is a huge memory whore). These things slow down the pace of a developer. It may not seem quite so apparent at this point in the essay, but I'll get to the alternative soon.
- Scalability - Can X scale to say, thousands of users per day? Millions? If I want to market my app over the Internet, this might be a requirement.
The web model, on the other hand, handles these cases much better:
- Applications are simpler. You design your application around a quick execute-and-dump idea, as opposed to a more state-based system. Web technologies seem to acknowledge that web apps aren't usually CPU-bound, and so can cut all sorts of corners and provide you with a nicer coding experience - look at Rails, or even PHP.
- It is scalable. While scaling is not easy, it is definitely easier to make a web app scale to thousands of concurrent users than a regular desktop app.
Basically, my point is that the need for this network transparency thing is not as high as the Linux geeks seem to think. Most of it is covered either by traditional desktop computing, or by "the cloud" of web apps out there that do pretty much the same thing more easily. Maybe when redesigning something like X, they should focus on splitting the network forwarding from the display system itself - instead of one program that does everything, have several programs that each do one thing very well. Seems kinda UNIX-like to me.