
Joe@Nitobi

Archive for the 'Linux' Category

DogOnRails, only a smaller piece of a bigger picture.

January 9th, 2008

I've noticed that I keep referring to DogOnRails in my Ruby examples (because I don't like exposing our clients' code to the outside world if I can avoid it), so I should explain what it is beyond being the abstract example I keep using here!

What is Dog On Rails?

DogOnRails is a WifiDog Captive Portal authentication server. Unlike the stock WifiDog server, it is written in Ruby on Rails (hence the name). It started out as a small hobby project: I wanted something to run against a Linksys WRT54G at my apartment and have it redirect to my Dreamhost account. Then some other people saw the project on Google Code and started helping out. Then I got involved in the Meraki FreeTheNet stuff, which gave me a new platform to put the WifiDog client on, and that was the perfect opportunity to demystify the whole process.

So, why is it cool?

The original concept of WifiDog is what drew me to it. The project was designed so that captive portals, which at the time were only being used by WISPs, could display both user-centric and location-centric information, so that users would learn more about the area around them. I started working on DogOnRails because I can add features faster in Rails than I could in PHP, and because I dislike the Smarty templating engine, which made adding larger features a major hassle.

There is also the fact that the project seemed to cater more and more to the WISP community rather than its original purpose, so the server no longer interested me.

The thing I like about it is that it lets anyone who can hack Ruby on Rails add features to something that would normally require writing C or shell script. By moving the authentication off the device, you can do much more with the process than simple authentication: you can show the user what you want them to see before they are on their way. And they're using YOUR bandwidth anyway. The analogy is showing a visitor around your house before they sit down and watch television. It's not always necessary, but it's a good courtesy for those who haven't been to your house before.
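To make that concrete, here's a minimal sketch of the kind of controller a WifiDog gateway talks to. The controller and model names here are hypothetical, but the plain-text "Pong" and "Auth: 1" replies are what the gateway protocol expects, as far as I recall:

class WifidogController < ApplicationController
  # The gateway heartbeats this action; a plain "Pong" tells it
  # the auth server is alive.
  def ping
    render :text => 'Pong'
  end

  # The gateway asks whether a client's token is still valid.
  # "Auth: 1" lets the client through; "Auth: 0" cuts them off.
  def auth
    token = Token.find_by_value(params[:token]) # hypothetical model
    render :text => token ? 'Auth: 1' : 'Auth: 0'
  end
end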

Oh yeah, did I mention that it was voted the Best of HackDay 1? I think that bars it from competing in HackDay 2, but I have a whole new killer app for that. :P

So, what features did you add during HackDay?

I added the following features:

  • User-Agent detection for mobile devices
  • GoogleMaps Functionality
  • GoogleEarth Functionality

The User-Agent detection was much easier than I thought. I borrowed Alexei's iPhone to test the final design, and in the end I was able to get the User-Agent detection to pick the right view easily. The code looks like this:


user_agent = request.user_agent.downcase
mobile = false

# Flag the request as mobile if the User-Agent mentions a known device
['iphone', 'ipod'].each do |b|
  mobile = true if user_agent.include?(b)
end

The rest is pretty self-explanatory at this point! After that was added, it was just a matter of writing some iPhone-specific CSS. I used Facebook as an example of how to do this, and after cursing WebKit/Safari's existence, I managed to get something that didn't require a LOT of resizing to work.
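For context, here's a rough sketch of how that flag might drive the view from a controller; the filter and layout names are mine, not necessarily what DogOnRails does:

class PortalController < ApplicationController
  before_filter :detect_mobile

  def index
    # Serve the iPhone-friendly layout when a mobile UA was detected
    render :layout => (@mobile ? 'mobile' : 'application')
  end

  private

  def detect_mobile
    user_agent = request.user_agent.to_s.downcase
    @mobile = ['iphone', 'ipod'].any? { |b| user_agent.include?(b) }
  end
end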

Of course, the following features were added post-hackday:

  • ROBIN/Open-Mesh Update Receiving - (can't update settings yet, but can get the Mesh status)
  • Improved GoogleMaps Functionality - looks more like the Meraki Map
  • Per Node Auditing
  • Graphs using gruff
  • MAC Address Blocking
  • Facebook Functionality

That's right, DogOnRails is now a Facebook application as well. The idea is to encourage people to grow Wifi networks like they would grow their own garden: make it so there's a certain level of pride in having the nodes up and working. It's in very, very early alpha stages, but it exists.

It needs a lot more polish, but it's going to be used by the FreeTheNet group in the coming month, and it will be the replacement for the Meraki Dashboard that we have been looking at. There will probably be more changes to it, and to other projects like it. But when I refer to DogOnRails, I'm referring to a real app, and not some abstract thing like in a textbook.

And I will keep using it as an example of what to do and what NOT to do for many posts to come.

Creating a Production Ruby on Rails Setup: nginx

October 14th, 2007

I was talking with someone about a new Rails project, and the question came up of which server would be the best one to work on. Of course, me being a person who loves apt-get and hates RPM, I said one word.

Debian

But then I had to point him somewhere to make things easier. The problem is that deploying a Rails app is rarely easy, especially one that is actually meant to not die. Not only that, but I don't always agree with all the tutorials. So I'm going to write a series of blog posts about how I do it. I'd recommend reading this post when it comes to installing Ruby, Rails and gems on Debian/Ubuntu, but forget the Capistrano/Apache stuff that it talks about. I recommend Vlad and nginx for setting up and deploying Rails apps. We're going to talk about nginx in this post.

So, the first thing to do when setting up a production server is to choose the webserver. Conventional wisdom says that when you're running Linux, you will most likely use Apache. Conventional wisdom is very wrong here: Apache is a giant 800lb gorilla of a webserver with more features than you'll ever possibly need. It's great if you want to load things like mod_php or mod_python (which you would do with Django, but that's the topic for another post), but it's a poor fit for Ruby, since we're going to be forwarding everything to the mongrels anyway.

So, what do we use? We're going to use the big red webserver from Russia: nginx. nginx is a nice HTTP/reverse proxy server with small, human-readable config files. The first thing we're going to do is install it on Debian. sudo to root and run:


apt-get install nginx

See, isn't apt-get the coolest thing ever? Beats the crap out of yum! Anyway, that just installed nginx, so in /etc/nginx you're now going to delete the stock nginx config file and create a new one. The first thing you do is specify the user; it's best to create a dedicated user for this, such as www-data.


user www-data;
worker_processes 1;
error_log /var/log/nginx/error.log debug;

Note that we also set the log file. Now we have to set some basic settings, such as the mime-type includes, the connections we will accept, and gzipping your data. Simple, commonsense stuff. This begins the http configuration block:


http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    tcp_nodelay on;
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 8k;
    gzip_types text/plain;

OK, so far so good. Now let's specify the mongrel backends. Depending on your app, you may want more or fewer mongrels to balance the load. I'd ideally say at least two per processor, though sometimes you may want to run fewer for some weird reason. Here's what I have set up for a dual-processor machine.


upstream mongrel {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

We're going to show how to set this up on the mongrel side in a later post (there's a rough sketch near the end of this one). That's what we have so far. Now, we have to specify the server.


server {
    listen 80;
    server_name www.dogsridingrails.com;
    root /var/www/dogonrails/current/public;
    index index.html index.htm;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (-f $request_filename/index.html) {
            rewrite (.*) $1/index.html break;
        }
        if (-f $request_filename.html) {
            rewrite (.*) $1.html break;
        }
        if (!-f $request_filename) {
            proxy_pass http://mongrel;
            break;
        }
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

Not much to see here. We're using nginx as a proxy in front of the mongrel servers. We point the root at public, just like in any Rails application going to production, and we specify what to do for each filename request: if the request maps to an existing index.html or a cached .html page, nginx serves the file directly; otherwise we pass it all to mongrel. Then we use a closing brace to finish the http block.


}

Now, that was MUCH simpler than the beasts of Apache configs that you'd have to wade through to do the same thing. It's worth noting that nginx is a lightweight proxying server and is actually designed for this job, whereas Apache is more general-purpose and is built to load web apps as shared libraries, which is always much faster than proxying to something like mongrel, but that isn't an option for Rails anyway.
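Speaking of the mongrels: the proper walkthrough is coming in a later post, but as a rough sketch, assuming the mongrel_cluster gem (and my hypothetical deploy path), the four backends in the upstream block could be configured and started like this:

mongrel_rails cluster::configure -e production -p 3000 -N 4 \
  -c /var/www/dogonrails/current -a 127.0.0.1
mongrel_rails cluster::start

That should write a config/mongrel_cluster.yml and spin up mongrels on ports 3000 through 3003, matching the upstream block above.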

I'm not saying that nginx is the right tool for every job; in fact, I would think seriously about using Apache for a Python/Django project, but that's the topic of another post entirely. Stay tuned for my next post about Vlad the Deployer!

S3

October 5th, 2007

In the life of a web application, there comes a point where that shared hosting account just isn't good enough (and you find out because your provider kicks you off), or your server just isn't able to pull the queries from the database fast enough. Then one day, you finally get the filesystem error EMLINK, which you have a VERY hard time googling.

The cause is simple: you've created the maximum number of subdirectories you can have in a single directory (on ext3, for instance, that's around 32,000). This is surprisingly not a commonly reported issue with file_column, acts_as_attachment or attachment_fu, although I'm shocked as to why not. So, what do you do when you're facing scalability issues and your image-handling plugin is broken?

THROW IT ALL AWAY!

That's what I had to do. Recently we worked on a site, and because it was getting hammered, we decided to put the images on S3. Then we found the ultimate weakness of S3: it can't easily handle batch processing. We used the AWS::S3 library for most of the movement of the files, but we found that if we made a mistake, it would cost us hours to recover the files.

Eventually, the day was saved by jetS3t and its Cockpit GUI. Using jetS3t, we were finally able to work through all the S3 issues. (Actually, Dave saved the day at the end; my computer kept running out of memory.) We managed to get S3 support in, and all we had to do was sacrifice file_column and replace it with this:


# assumes the aws-s3, uuidtools and RMagick gems are loaded
def user_image=( blob )
  # establish S3 connection
  AWS::S3::Base.establish_connection!(
    :access_key_id     => AWS_ACCESS_KEY_ID,
    :secret_access_key => AWS_SECRET_ACCESS_KEY
  )
  datestamp  = Time.now.strftime('%d%m%Y')
  identifier = UUID.random_create.to_s
  object_path = 'images/' + datestamp + '/' + identifier + '/'
  object_key  = object_path + blob.original_filename
  self.image = blob.original_filename
  self.image_dir = 'http://s3.amazonaws.com/bucket/images/' + datestamp + '/' + identifier + '/'
  image_data = blob.read

  # Send the file to S3
  AWS::S3::S3Object.store(object_key, image_data, 'bucket', :access => :public_read)

  # Resize to a thumbnail here
  img = Magick::Image.from_blob(image_data).first
  thumbnail = img.resize_to_fit!(96, 96)

  # Set the thumbnail directory path
  thumb_key = object_path + 'thumb/' + self.image

  AWS::S3::S3Object.store(thumb_key, thumbnail.to_blob, 'bucket', :access => :public_read)
end

If you have to do S3, I would highly recommend using a long, structured key so that you can sort your results by it later. The biggest gotcha I found when adding S3 integration to my Rails app was including AWS::S3: if you include and require it in the wrong place, it will break your routing, which can cause hours of headaches, especially if you are busy with something else. In the end, we learned that S3 is a misnomer. For a large number of files, it's far from simple.
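As an illustration of why the key structure pays off, here's a small sketch using the aws-s3 gem's bucket listing (the bucket name is a placeholder, and the :prefix option is what I remember the library supporting):

require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => AWS_ACCESS_KEY_ID,
  :secret_access_key => AWS_SECRET_ACCESS_KEY
)

# With keys like images/<datestamp>/<uuid>/<filename>, one prefix query
# pulls back everything uploaded on a given day.
AWS::S3::Bucket.objects('bucket', :prefix => 'images/09012008/').each do |object|
  puts object.key
end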

How the mesh works

September 18th, 2007

Many people have picked up on the Meraki Mesh idea, but they seem confused as to what a mesh network actually is. Here is the Wikipedia definition of a mesh network:


Mesh networking is a way to route data, voice and instructions between nodes. It allows for continuous connections and reconfiguration around broken or blocked paths by “hopping” from node to node until the destination is reached. A mesh network whose nodes are all connected to each other is a fully connected network.

The truth is that this mesh network is not really a pure mesh but a mobile ad-hoc network. It has the properties of a mesh, but it's mobile! For those who are interested, here's how this works, via Wikipedia:

ExOR Wireless Network Protocol

This should explain the basics, which is what Wikipedia is really good for. The routing protocol being used is SrcRR, which is what was used in Roofnet. It's an open-source protocol and can be used on anything that has an Atheros radio.

The nice thing about the Meraki hardware is that it's accessible to people who want a finished product. It is possible to mesh with the Linksys WRT54G using Optimized Link State Routing, but then there's the problem of forcing the radio into ad-hoc mode, because Broadcom has a more closed design than Atheros. It also could have been done with the Netgear WGT634U or the new Linksys WRT150N, but those devices are twice as expensive.

Also, I find myself warming a bit to the Dashboard, though it doesn't allow people to configure custom splash pages per node, which would be nice functionality, nor does it allow for much customization of the splash page at all. I think Meraki will add this down the road, since a lot of people seem to be asking for it. Also, it's an interesting experience working with a group of people who all have admin on the nodes.

We’re definitely learning as we go along, and that’s what makes this interesting.

Pics of the Meraki Gear

September 11th, 2007

I managed to get my girlfriend's digital camera working, and now I finally have some photos up on my Flickr account (many years after I registered it).

Check them out here

Mesh Wireless goes to the Mainstream, (maybe)

September 10th, 2007

I have a hobby of hacking the firmware on the Linksys WRT54G. I originally started doing it because I wanted to learn how embedded Linux worked, and I thought it was cool that the router could run Linux. That's how I got introduced to the community wireless movement.

Basically, the problem with DIY community wireless hacking is that you have to take off-the-shelf routers, flash them (voiding your warranty), and then hope you get something working. Then you can write applications for them, like WifiDog or various mesh networking applications such as Optimized Link State Routing. This was great, but it ran into two big problems:

  1. It's hard to convince someone to run a hacked Linksys router in their home, because it looks sketchy.
  2. You're at the whim of the manufacturer, who may not like that you can extend their hardware, may change the hardware randomly, or may End-of-Life (EOL) it because they can make something cheaper.

In fact, the original Linksys WRT54G was switched to VxWorks after version 4.0 because it took less processing power and memory to do what they wanted than their prior cookie-cutter design needed. Netgear discontinued the WGT634U by similar logic. The reason I mention the WGT634U is that it's what MIT Roofnet originally used to build the prototypes for what is now the Meraki Mesh.

After BarCamp and talking to Boris at Bryght, I decided to buy some Meraki hardware. I was expecting unmarked boxes and large devices, but I was very surprised to find a branded box, like what you would find in FutureShop, and a very small device. Not only that, but it is extremely user-friendly. I was also impressed with the range: I put one in the window of my apartment, and it seems to reach about 100m (roughly 300 ft). This matters, since the way mesh works is that you put a bunch of mesh nodes out into the world and they route between each other to the nearest gateway node, meaning the node with the least latency.

When I compare the Meraki out-of-the-box solution to the alternative, Freifunk's OLSR, there's really no comparison in ease of use. I think Meraki is a very interesting project and worth testing out. The main advantage for us in testing out mesh is obvious: we can run a test bed for Ajax components on mobile devices right outside our window. With the release of the iPhone and the iPod touch (more importantly the iPod touch, since we're in Canada), content that is dynamic and takes advantage of both geography and the various user agents is critical to providing a user experience like nothing else.

With more and more mobile devices equipped with Wifi for mass adoption, it just makes sense to at least play with the stuff. I’ll have pictures up here soon of us playing with the hardware!

Rails on the Weekend? Why?????

July 4th, 2007

Well, this weekend I had to write some Rails stuff. Once I finished the work stuff I had to do on Saturday morning (while, I'm sure, some kids watched cartoons), I decided to revisit something that I used to contribute to.

The result of a couple hours of hacking is this:

http://code.google.com/p/dogonrails/

That's right: DogOnRails! I used to advocate running WifiDog hotspots back in the day. The problem I had with WifiDog is that it was hard to run unless you had a dedicated server running Postgres; there was no way to run it on shared hosting for just your one hotspot. So, I decided to hack this thing together.

It’s pretty much just written for my old WRT54G running OpenWRT, but I might do something interesting with it. It’s under GPLv2, so check it out!

What’s Joe doing now (Answer: Ruby on Rails Dev on his Ubuntu Box)??

June 21st, 2007

As you may have noticed, Ryan’s here and he’s answering the bulk of the support questions. This is because I’m doing more special client project work these days instead of replying to posts. I still read the support box, and I still try to answer as many as I can, but I’m definitely answering far less than I used to.

Meanwhile, I'm a relatively new convert to Ruby on Rails. Not only that, but I now get to work on my Ubuntu systems, so I get to blog about Ubuntu and Rails! Yay!

Doing Ruby on Rails dev on Ubuntu seems far easier in many respects than doing it on Windows (I haven't played around with TextMate on the Mac because I don't have one), and many of the tools are familiar; they just have some gotchas. Since I haven't blogged in a while, I'll blog about this before scrum starts:

The main IDE that I use for everything is Eclipse, with Subclipse, RadRails and Aptana installed on it. The thing is that on Feisty Fawn, RadRails won't work out of the box. What you need to do in this case is install the Sun Java JVM and set /etc/eclipse/java_home to point to /usr/lib/jvm/java-1.5.0-sun; java-gcj just doesn't cut it for RadRails.
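Here's roughly what that looks like on Feisty; the package name is from memory, and I believe /etc/eclipse/java_home is just a list of JVM paths with the most preferred first, so double-check before trusting this:

# Install the Sun JVM (lives in the multiverse repository)
sudo apt-get install sun-java5-jdk
# Put the Sun JVM at the top of Eclipse's JVM search list
sudo sed -i '1i /usr/lib/jvm/java-1.5.0-sun' /etc/eclipse/java_home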

Once that's done, you should be ready to rock with Eclipse. However, there are a couple more gotchas. Make sure you install ALL your Ruby and Rails packages for Ubuntu, otherwise you'll get some weird behaviour. Googling a bit will help, but you should at least have irb, libopenssl-ruby, libreadline-ruby, libredcloth, librmagick-ruby, libruby, rake, rdoc, and of course ruby itself installed (see the one-liner below). I use the gem version of Rails, because I want the latest and greatest Rails, and I build and install RubyGems from source.
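For the lazy, the package list boils down to one apt-get line (names exactly as above; they may vary slightly between Ubuntu releases):

sudo apt-get install ruby irb rdoc rake libruby libopenssl-ruby \
  libreadline-ruby libredcloth librmagick-ruby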

All in all, things should be good for getting rails going. I’ll have more Ubuntu, Ruby and Rails gotchas in here soon.

PS: If you have problems installing CUI under Linux, please let me know what distro you are running, and I’ll try to help you out and get you hooked up!


