There is a huge Internet outage going on right now in Asia, the Middle East, and Africa. It’s amazing to think how this could affect business, in particular the financial sector and large companies that rely on the Internet to communicate. Supply chains would be crippled, communication would be limited, and of course (maybe most shockingly) access to YouTube clips would be cut to zero.
I’m curious how exactly an undersea cable gets ‘cut’... earthquake? Sea monster? Underwater Al Qaeda? Food for thought.
Posted in business | No Comments »
I was doing some research on compression techniques, to figure out what to recommend to customers, and found this interesting resource:
CompressorRater
http://compressorrater.thruhere.net/
It runs your JavaScript resource through a variety of different compressors, including the YUI Compressor, JSMin, ShrinkSafe, and of course gZIP, and shows a chart with the results. Here is a sample chart for the compression of our new (unreleased) RobotReplay script:
(Note to dave, dre: this includes non-production debug code)
Posted in robotreplay, web development | 3 Comments »
Update 2: I posted another fix to the file I posted yesterday to correct an issue with text objects inside iFrames.
Update: I posted a fix to the file I posted yesterday to correct an issue with weird characters appearing at the end of textareas in IE.
Wow. I was so surprised today to find out how hard it is to reliably get the current text selection and caret position through JavaScript in different browsers. Ok, Firefox is easy. Internet Explorer is profoundly hard and weird. I looked at a lot of different methods including (but not limited to):
.. and of course the official docs, which suck:
None of these methods worked for me, for various reasons. The biggest issues are the differences in technique between IE6 and IE7, and the differences in how TEXTAREAs and INPUT fields behave.
I have tested this script and it appears to be working on:
- input text fields
- textareas
on..
- IE6
- IE7
- Firefox 2
- Safari 3 (PC)
It probably works on Opera too.
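To give a flavor of the kind of technique involved, here’s a minimal sketch: use selectionStart where the browser supports it, and fall back to IE’s TextRange API otherwise. This is a simplification, not the full script linked below; among other things, IE counts the \r\n line endings in textareas as two characters, which a real script has to correct for.

```javascript
// Minimal sketch: get the caret position in a text INPUT or TEXTAREA.
function getCaretPosition(el) {
  // Firefox, Safari 3, and Opera expose selectionStart directly.
  if (typeof el.selectionStart == 'number') {
    return el.selectionStart;
  }
  // IE6/7: build a TextRange spanning from the start of the field to
  // the start of the current selection, and measure its length.
  if (document.selection) {
    el.focus();
    var sel = document.selection.createRange();
    var range = el.createTextRange(); // works on INPUTs and TEXTAREAs
    range.setEndPoint('EndToStart', sel);
    // Caveat: in TEXTAREAs, IE counts each \r\n as two characters.
    return range.text.length;
  }
  return 0;
}
```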
For my full script go here:
getcaretselection3.js
Posted in firefox, ie6, ie7, opera, safari | 9 Comments »
In a fairly surprising move, Microsoft appears to have released a whackload of .NET source for their foundation classes, presentation framework stuff, and of course ASP.NET. The purpose seems to be to assist developers with debugging, so they can step into the framework’s .NET code to see what’s happening in their programs.
Specifically, they have released the code for:
- .NET Base Class Libraries (including System, System.CodeDom, System.Collections, System.ComponentModel, System.Diagnostics, System.Drawing, System.Globalization, System.IO, System.Net, System.Reflection, System.Runtime, System.Security, System.Text, System.Threading, etc).
- ASP.NET (System.Web, System.Web.Extensions)
- Windows Forms (System.Windows.Forms)
- Windows Presentation Foundation (System.Windows)
- ADO.NET and XML (System.Data and System.Xml)
I think this is a smart move and will really help developers with debugging.
Posted in .net, microsoft, web development | No Comments »
I’m doing some casual research right now on website usability metrics for RobotReplay. I stumbled onto an old but still interesting blog post about the ‘Three-Click Rule’. Don’t worry, I hadn’t heard of it either. Essentially, it’s a rule of thumb that says every piece of content on your site should take no more than three clicks to reach.
So what does this have to do with pogosticking? Pogosticking is the act of jumping up and down through the hierarchy of a web site, repeatedly hitting the back button to move to the next item in a list. The general consensus is that this is a behavior you want to avoid in your users. The two concepts are related, but are they both valid?
Turns out, probably not. The UIE blog people did some research showing that the number of clicks was not related to goal success.
The next thing they looked at was user satisfaction. What they wanted to know was: did more clicking result in consistently lower satisfaction? Turns out, not really.
This flies in the face of conventional wisdom (at least my limited wisdom, anyway). Turns out dissatisfied users ran the gamut in terms of the number of clicks they made. Satisfaction seems to be intrinsically linked to other factors.
However... what they did find was that the specific behavior of pogosticking could reliably predict low goal completion. In other words: clicking OK, pogosticking bad.
I wonder if anyone has done research on pogosticking within a particular page (i.e. scrolling up and down, mousing all over the place, etc.).
Posted in User Interface, analytics, web development | 1 Comment »
I recently decided I wanted to be able to play music from my MP3 library in my living room. Simple, right? Well, not if you are a technophile and gadgetophile like myself.
I ended up with an AppleTV (which I still haven’t unboxed) and a small media server for my music. My first problem was that my music library was totally disorganized, with improperly tagged music, duplicates, and other junk riddled throughout. I took it upon myself to fix this. Here’s what my research turned up.
Original Library size: 93GB
First step: Import it into iTunes and let iTunes organize your library. This will convert all your silly WMAs and whatnot into M4As and get rid of the corrupted files and other junk that somehow crept into those directories over time. It will also filter out files with wacky filenames (which probably aren’t valuable music anyway). This got rid of about 10GB of junk.
Next: Get yourself a free copy of Picard by MusicBrainz. This will do 3 things for you:
- Picard will scan your library and identify all of the known music files based on Amazon.com data (and their own database, as I understand it). Tell it to rewrite your ID3 tags and blow away the old ones; it’s quite accurate. Then tell it to rename all the files and organize them into a new folder (not your iTunes folder). It will normalize all the artist names (so “feist” and “_feist” and “feiST” all become “Feist”, etc.) and fix the track names and so on.
- This will leave about 30% of your music still unidentified and untagged. Run the ‘Scan’ feature of Picard on these remaining titles. This takes a digital ‘fingerprint’ (called a PUID) of each MP3 and tries to identify it against the online database. My experience was that this is VERY accurate... really cool. Budget a full day and night for this process if you have a lot of music; it’s totally automated, so you don’t need to be in front of your machine.
- This will leave you with about 15% of your music unidentified. You can then run the ‘Cluster’ feature of Picard on these titles. Picard will try to group your MP3s into artist and album groups based on their tags and titles. I only kept a small number of these, because I don’t really want music that isn’t properly tagged. I mostly kept the obscure ones that were tagged but didn’t show up on Amazon.com.
The remainder of the music I then deleted. Zap!
Now you will want to get yourself a tool to help remove duplicate music (of which I had a LOT, evidently from Limewire downloads and whatnot). I forked out $20 for a program called Abee MP3 Duplicate Finder. This is overall a good program but VERY buggy (or maybe it was just buggy in Vista). Eventually I got Abee to identify tracks that were similar based on title, tags, and song length (really clever). It then recommends which ones to delete and which to keep based on bitrate and song length (also clever). I looked at other programs for this part of the process but liked Abee the best... again, beware. Some of the bugs I encountered were:
- The second time I ran Abee, it double-counted all my music. Had I gone ahead with the delete, it would have erased ALL my music. If this happens, uninstall Abee completely, delete its folder from Program Files, reboot, and reinstall.
- Some songs it couldn’t delete for some reason. Just press OK when this happens... I don’t know why it does this.
- Occasionally it threw more serious errors. This only seemed to happen when I was running other programs at the same time (like Firefox or Explorer), so don’t do that. If it happens, uninstall, delete, reboot, and reinstall as before.
Final Step: Clear your iTunes library and re-import all your music.
Once that was all done, I had successfully trimmed my library down to one copy of each track, properly named, labelled, and tagged. I let iTunes run all night and it adjusted all the volume levels and downloaded album art.
The final, de-crufted library size: 43GB
Anyway, that was my experience. Thought some of you might at least find it interesting. Would be interested to hear what other people have done.
Posted in media, resources | No Comments »
So I was looking into in-browser compression for my RobotReplay work..
This is actually a really hard thing to Google, because of the confusion with gZipping JavaScript resources for smaller file sizes on the web. That particular problem is easy to solve and commonplace. What’s really unique and interesting to me are the applications of gZipping content in JavaScript for offline storage or delayed transmission, or for compressing Ajax requests over an uncompressed connection. It could also be a little useful for hiding your data from prying eyes.
Why we might want to perform gZip compression in a JavaScript program (a sketch of the first case follows the list):
- Reducing the data footprint before storing data in offline storage (IE UserData, Flash SharedObject storage, sessionStorage, globalStorage, Google Gears, etc.), since all of these limit how much you can store.
- Reducing bandwidth requirements for transmitting large amounts of data via an Ajax or cross-domain XHR request.
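Here’s a rough sketch of that first use case. Note the assumptions: lz77Compress() is a placeholder for a JavaScript compressor like the proof of concept mentioned below; it is not a built-in or a real library function.

```javascript
// Sketch of the offline-storage use case. lz77Compress() is a
// placeholder for a JavaScript compressor, not a real library call.
function saveCompressed(key, text) {
  var packed = lz77Compress(text);
  if (window.globalStorage) {
    // Firefox 2: globalStorage, keyed by hostname.
    globalStorage[location.hostname][key] = packed;
  } else if (window.sessionStorage) {
    // WHATWG sessionStorage, where available.
    sessionStorage[key] = packed;
  } else if (document.body.addBehavior) {
    // IE: the userData behavior (simplified; real code would attach
    // the behavior to a dedicated element rather than document.body).
    document.body.addBehavior('#default#userData');
    document.body.setAttribute(key, packed);
    document.body.save('compressedStore');
  }
}
```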
I can’t think of any others right now, but I did find this working proof of concept of LZ77 (gZip) compression in JavaScript. There are some ‘catches’, however.
It’s only really useful if you are compressing large-ish amounts of data (10K+); below that, the benefits of compression don’t outweigh the costs, which are: a larger footprint for your JavaScript program, the inherent hassle of dealing with compressed data, and performance.
Also, this proof of concept really illustrates how SLOW JavaScript is in general. Even compressing small amounts of data can take several seconds, so I went looking for another (better) way to compress the data.
Then I had an idea... what about using Flash to do the same thing, with ExternalInterface to marshal data between the JavaScript program and the Flash movie? It was worth an experiment.
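For the curious, the JavaScript side of such a bridge looks roughly like this. This is a sketch under assumptions, not the actual demo’s API: ‘compressor’ stands in for the embedded movie’s id/name, and compress() for whatever callback the SWF registers via ExternalInterface.addCallback.

```javascript
// Sketch of the JS side of the ExternalInterface bridge. The names
// 'compressor' and compress() are placeholders, not the demo's API;
// the SWF is assumed to have registered the callback with
// ExternalInterface.addCallback('compress', ...).
function getFlashMovie(name) {
  // IE exposes embedded movies on window; other browsers on document.
  return window[name] || document[name];
}

function compressViaFlash(text) {
  // ExternalInterface marshals the string into the Flash VM, runs the
  // compressor there, and marshals the result back out.
  return getFlashMovie('compressor').compress(text);
}
```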
So here is a demo showing compression of text data using the same algorithm in JavaScript and in AS2 (Flash 9) via ExternalInterface.
Note: in the chart below, smaller is better.
Again the demo is here: http://blogs.nitobi.com/alexei/demos/compression/index.htm
Download the source here: http://blogs.nitobi.com/alexei/demos/compression/nitobi_js_compression.zip
Posted in flash, flex, resources, robotreplay | 9 Comments »
I stumbled onto a really interesting proof of concept today for doing Huffman-style tree compression in Ruby. (The other popular symmetrical, i.e. lossless, compression schemes are LZ77, the basis of gZIP, and of course RLE, run-length encoding, as used in GIF and PCX.) Check out Building Huffman Compression in Ruby.
This may be somewhat academic, though, since Ruby typically ships with Zlib bindings compiled into its standard library, giving developers high-speed, robust gZip (LZ77) compression out of the box.
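As a quick aside, to make the ‘symmetrical’ idea concrete, here’s a toy run-length encoder/decoder (in JavaScript rather than Ruby, to match the other snippets on this blog). Decoding an encoded string gives back the original exactly, which is all ‘lossless’ means.

```javascript
// Toy run-length encoding: '<count><char>', so "aaab" becomes "3a1b".
// Toy only: assumes the input contains no digits and no newlines.
function rleEncode(s) {
  var out = '';
  for (var i = 0; i < s.length; ) {
    var j = i;
    while (j < s.length && s.charAt(j) == s.charAt(i)) j++;
    out += (j - i) + s.charAt(i); // run length, then the character
    i = j;
  }
  return out;
}

function rleDecode(s) {
  var out = '', m, re = /(\d+)(.)/g;
  while ((m = re.exec(s))) {
    for (var k = 0; k < parseInt(m[1], 10); k++) out += m[2];
  }
  return out;
}

// rleDecode(rleEncode('aaab')) returns 'aaab' -- nothing is lost.
```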
I’m going to do another post tomorrow on the theme of compression.
Posted in resources, rubyonrails, web development | 2 Comments »
Scott Sehlhorst, as always, has some interesting insights into the software development process. He argues this week that developers sometimes use Agile to hide from, or absolve themselves of, responsibility, when the opposite should be true: Agile actually increases accountability by preventing a ‘throw it over the wall to QA’ culture and by promoting developer ‘ownership’ of features and quality.
Read the full post here.
I’ll admit that I’m not an Agile expert and I don’t understand a lot of it yet, but on a recent project I saw exactly this responsibility-dodging behavior on an Agile team, and I think a culture of developer supremacy over project coordinators prevented anyone from calling them out. The scrum model is a short, rapid-fire way of tracking team progress, but the flip side is that you get the perception of transparency while only ever seeing a surface-level view of what each developer is actually doing. When things were not going right on the project, developers were able to cut features and push timelines, unfairly shifting the burden onto the project coordinators, who then had to deal with the client.

The coordinators’ failure in this case was that they didn’t notice (or didn’t seem to mind) that the developers had the same goals day after day and weren’t making progress. What’s funny is that these people would rephrase their goals each day but say exactly the same thing. That’s a fault of the coordinator, not the model. But even if they had noticed, what could they really have done about it? In a room full of people, who is going to step forward and say, ‘Hey! You’re full of crap!’?
Posted in agile, business, culture, politics | 2 Comments »