Erik's blog

Code, notes, recipes, general musings

Archive for the ‘tutorial’ Category

cross-domain ajax with easyXDM


While hacking around with easyXDM recently, I learned a few things worth noting and sharing. I wanted to replace something like a jQuery ajax call, e.g.,
$.ajax({"url":"http://localhost/resource.json", "success":function(data){...}})
with a cross-domain equivalent, but it wasn’t immediately obvious where or how easyXDM would fit in. It was all in the documentation (see the code sample in the /cors/ interface section of the shipped readme file), but not phrased the way I expected. Here are the steps I went through to get it working:

  1. Upload the src/cors/index.html easyXDM support file to the domain I wanted to make available to cross-domain requests. For example, I wanted localhost to be the provider of data, so I made the file available at http://localhost/easyXDM/src/cors/index.html.
  2. Edit src/cors/index.html to set useAccessControl to false, e.g., var useAccessControl = false;. The code comments state that this stops the code from using response headers to determine access control. Setting this to false seems like a bad idea, but it’s what I had to do to get it working. I definitely want to learn more about how to establish access control safely.
  3. Edit src/cors/index.html to pull easyXDM.debug.js and json2.js from the provider’s domain.
  4. Wherever I wanted to make an ajax call, I needed to include easyXDM.debug.js and json2.js, and then drop in this code:
  var rpc = new easyXDM.Rpc({
      remote: "http://localhost/easyXDM/src/cors/index.html"
  }, {
      remote: {
          request: {}
      }
  });

  rpc.request({
      url: "http://localhost/resource.json",
      method: "GET"
  }, function(response){
      // the raw response body is in response.data
      var data = JSON.parse(response.data);
  });
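Regarding step 2: rather than leaving useAccessControl off, a provider page could vet the calling origin itself. This is only an illustrative sketch of the idea; the function and allowlist below are mine, not part of easyXDM:

```javascript
// Illustrative only: an exact-match origin allowlist check a provider
// page could run before honoring a cross-domain request. The allowlist
// contents are an assumption for the example, not part of easyXDM.
function isAllowedOrigin(origin, allowlist) {
  // exact-match check; a real check should also consider scheme and port
  return allowlist.indexOf(origin) !== -1;
}

var allowlist = ["http://localhost", "http://example.com"];
console.log(isAllowedOrigin("http://localhost", allowlist));    // true
console.log(isAllowedOrigin("http://evil.example", allowlist)); // false
```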

Here are some resources I found helpful:

To conclude, if you’d like to learn more about honey badgers, and you don’t mind profanity, this is worth watching:


Written by Erik

March 7, 2011 at 1:52 am

Posted in tutorial


Running YETI tests automatically with Watchr

with 2 comments

YETI (YUI Easy Testing Interface) provides an easy, automated way to run YUI 3 tests. Watchr provides an easy way to run arbitrary Ruby based on file system events. Putting the two together, we get an easy way to run YETI when a YUI 3 script is saved.


  • Mac 10.6.4.  What follows may work elsewhere, but I haven’t tried it yet

Set up:

  1. Install Watchr. Please refer to the readme file in Watchr’s github repository for installation instructions. I wrote a post the other day about getting started with Watchr on Mac 10.6.4.
  2. Install Node.js.  YETI requires Node.js.  Please refer to the Node.js documentation for downloading and building Node
  3. Install npm.  YETI can be installed easily with npm.  Please refer to the readme file in npm’s github repository for installation instructions.  The Joyent blog also has an informative post on Installing Node and npm.
  4. Install YETI: npm install yeti
  5. Create the following directories: test and lib.  These directory names are completely arbitrary, but they match the watchr script introduced below.  If you want to use different names, please update the watchr script accordingly
  6. Create a file called autotest.watchr and put the following ruby into it (a minimal version consistent with the behavior described under “Going forward” below):

     watch( 'test/test_.*\.html' ) {|md| system("yeti #{md[0]}") }
     watch( 'lib/(.*)\.js' )       {|md| system("yeti test/test_#{md[1]}.html") }
  7. Create a file called test_example.html in the test directory and put the following html in it:
  8. Create one last file called example.js in the lib directory and put the following javascript in it:

You should now have a file structure like this:

    – autotest.watchr
    – lib/
      – example.js
    – test/
      – test_example.html

Let’s run this rig:

  1. In your terminal, launch Watchr: watchr autotest.watchr
  2. Edit /lib/example.js so Y.example is no longer set to "foo", e.g., Y.example = "bar";
  3. Save /lib/example.js and view your terminal.  You should see YETI’s output of the failing test results
    Screen shot of YUI test failure
  4. Edit /lib/example.js resetting Y.example to "foo", save, and note YETI’s output of the successful test results
    Screen shot of YETI output showing YUI tests passing
  5. Kill watchr (when you’re ready): Ctrl+C

Going forward:

Using the autotest.watchr script above, any file named /test/test_{lib name}.html will be run when /lib/{lib name}.js is edited.  The test file will also be run when it is edited.  If you add a new lib but do not define a corresponding test file, watchr will fail silently; likewise if you add a test file but don’t put YUI tests in it.  In short, add libs and YUI tests in pairs, and you’re all good.
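The pairing convention can be sketched as a small function (the name testFileFor is mine, for illustration):

```javascript
// Maps a lib path to the YETI test page the watchr script would run,
// following the /lib/{lib name}.js -> /test/test_{lib name}.html convention.
function testFileFor(libPath) {
  var match = libPath.match(/^lib\/(.*)\.js$/);
  return match ? "test/test_" + match[1] + ".html" : null;
}

console.log(testFileFor("lib/example.js"));  // test/test_example.html
console.log(testFileFor("lib/example.css")); // null (only .js files pair with tests)
```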

In closing, here’s one of my favorite songs from Drive Like Jehu:

Written by Erik

December 13, 2010 at 12:49 am

Posted in tutorial


Getting started with Watchr (and trying again to install Node.js on Mac 10.6.4)

with one comment

I recently started exploring testing options for Node.js. Yesterday, I wrote about my experiences with nodeunit. Today, I found Christian Johansen’s blog post Unit testing node.js apps. (Thanks for the write-up, Christian!) Although I was looking for unit testing options, what really got me excited was his mention of Watchr.

Watchr provides a way to run tests automatically in response to file system events, e.g., when a file is saved, much like Autotest. I had fallen in love with Autotest’s functionality after learning about it in Michael Hartl’s nice Ruby on Rails tutorial. According to Watchr’s docs, Autotest leaves something to be desired, but in any case I very much would like my tests to run without my having to think about it.

Git-ting (ha!) Watchr was easy enough, but to run Node tests on my Mac (an idea I’m hung up on for some reason) I need Node, and to date I haven’t been able to build Node on my Mac (10.6.4) successfully. So that was my challenge. After searching here and there, I found an archived thread from the Node mailing list that seemed promising. It mentions that MacPorts can break if you upgrade to Snow Leopard without upgrading MacPorts, which I had done, and that this can prevent Node from compiling. After clicking through to the MacPorts migration docs and following the steps outlined there, I was able to build Node like this:

  1. I had tried and failed to build Node multiple times, so I blew away the build directory: rm -rf build
  2. ./configure
  3. Clean things up to be thorough: make clean
  4. make
  5. Run tests just in case: make test
  6. sudo make install

Ok, on to the testing. Here’s my folder structure:

    – autotest.watchr
    – lib/
      – example.js
    – test/
      – test_example.js

My autotest.watchr file is a blend of the one on Christian’s blog, and Watchr’s tests.watchr prepackaged script. It contains

watch( 'test/test_.*\.js' )  {|md| system("node #{md[0]}") }
watch( 'lib/(.*)\.js' )      {|md| system("node test/test_#{md[1]}.js") }

# --------------------------------------------------
# Signal Handling
# --------------------------------------------------
# Ctrl-\
Signal.trap('QUIT') do
  puts " --- Running all tests ---\n\n"
  Dir['test/test_*.js'].each { |file| system("node #{file}") }
end

# Ctrl-C
Signal.trap('INT') { abort("\n") }

example.js contains exports.foo = 'bar';

test_example.js contains

var assert = require("assert");
var example = require('../lib/example');

assert.strictEqual(example.foo, 'bar', 'var foo should be "bar"');

I fire up watchr like this: watchr autotest.watchr

Watchr then captures the terminal until I enter Ctrl+C. Saving either example.js or test_example.js causes test_example.js to run. At this point the tests are crude, so my output is nothing if the test passes, or an assertion error, e.g., AssertionError: var foo should be "bar", if the test fails.

I think this is a good start. Time to listen to some Bonobo and call it a day.

Written by Erik

December 9, 2010 at 1:17 am

Installing Nginx on Ubuntu 10.04

with 2 comments

I want to install Nginx on an Ubuntu 10.04 64-bit server. Luckily, someone named Sam Kleinman put together a great tutorial (“Host Websites with nginx on Ubuntu 10.04 LTS (Lucid)“) over on Linode’s site.  Following his instructions, I was able to install Nginx without a hitch.  That was easy.  Thanks, Sam!

To keep things light, I like to wrap up tech posts w/ non-tech content.  Here’s a video of one of my favorite artists, El Mac, painting a mural freehand(!) with another artist, Kofie.

Written by Erik

November 21, 2010 at 12:32 am

Posted in tutorial


generating webdav propfind xml from yql

with one comment

E4X support makes YQL a great XML-generation engine. Here’s some code to create the response xml for a WebDAV PROPFIND request for a directory called webdav containing an empty file called foo.txt.

Note: to initially get a handle on what XML WebDAV outputs, I turned on WebDAV support in apache and made a curl request to it like this:
curl -X PROPFIND --header "Depth:1" {user}:{pass}@{your ip address}/webdav/

You can run the code below in the YQL console.

<?xml version="1.0" encoding="UTF-8"?>
<table xmlns="http://query.yahooapis.com/v1/schema/table.xsd">
    <meta>
        <author>Erik Eldridge</author>
    </meta>
    <bindings>
        <select produces="XML">
            <inputs>
                <key id="method" type="xs:string" paramType="variable"/>
                <key id="path" type="xs:string" paramType="variable"/>
            </inputs>
            <execute><![CDATA[
                response.object = function () {
                    var xml = <D:multistatus xmlns:D="DAV:">
                        <D:response xmlns:lp1="DAV:">
                            <D:href>/webdav/</D:href>
                            <D:propstat><D:prop><lp1:getlastmodified>Sat, 02 Jan 2010 19:43:01 GMT</lp1:getlastmodified></D:prop><D:status>HTTP/1.1 200 OK</D:status></D:propstat>
                        </D:response>
                        <D:response xmlns:lp1="DAV:">
                            <D:href>/webdav/foo.txt</D:href>
                            <D:propstat><D:prop><lp1:getlastmodified>Sat, 02 Jan 2010 19:43:01 GMT</lp1:getlastmodified></D:prop><D:status>HTTP/1.1 200 OK</D:status></D:propstat>
                        </D:response>
                    </D:multistatus>;
                    return xml;
                }();
            ]]></execute>
        </select>
    </bindings>
</table>
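E4X has since been removed from JavaScript engines, so outside the YQL console the same multistatus document has to be assembled another way, e.g., with plain strings. A sketch (the helper name is mine; the hrefs mirror the /webdav/ example above):

```javascript
// Builds a minimal WebDAV PROPFIND multistatus response for a set of
// resources, mirroring the YQL/E4X example but with plain strings.
function buildMultistatus(hrefs, lastModified) {
  var responses = hrefs.map(function (href) {
    return (
      '  <D:response xmlns:lp1="DAV:">\n' +
      '    <D:href>' + href + '</D:href>\n' +
      '    <D:propstat>\n' +
      '      <D:prop><lp1:getlastmodified>' + lastModified + '</lp1:getlastmodified></D:prop>\n' +
      '      <D:status>HTTP/1.1 200 OK</D:status>\n' +
      '    </D:propstat>\n' +
      '  </D:response>\n'
    );
  }).join('');
  return '<D:multistatus xmlns:D="DAV:">\n' + responses + '</D:multistatus>';
}

var xml = buildMultistatus(['/webdav/', '/webdav/foo.txt'],
                           'Sat, 02 Jan 2010 19:43:01 GMT');
console.log(xml);
```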

Written by Erik

February 8, 2010 at 11:07 pm

Posted in tutorial


Building and running the Mochiweb dev server




  • Mac OS X 10.5


  1. follow Bob Ippolito’s tutorial from his blog
  2. if your server returns
           exception exit: {noproc,{gen_server,call,  

    when you try to run http:request("")., start inets with inets:start(). and then run the http request call, as described on Pragmatic Dave’s blog.

Written by Erik

January 27, 2010 at 10:23 pm

Posted in tutorial


standard stack v1: git

with one comment


we’ll use git to facilitate pushing code to the vm.  because there’s a cardinal rule about not serving files from a repo, we’ll need to create a git host and use a git hook to update the web root when code is pushed to the repo.  i’m using the terms hub and prime introduced by Joe Maller in his post A web-focused Git workflow.

i don’t have a cool picture of the concept, like Maller did, but here’s one of a cute red panda (credit: tambako) to set the mood before we get started:

ok, here we go:


  • prime is the copy of the repo accessible by the web server
  • hub is the bare source of truth repository
  • project refers to the prime/hub pair
  • vm is the vmware vm running centos
  • laptop is the development computer you ultimately want to push files from


  • mac os x 10.5.8
  • vmware 2.0.5
  • centos 5.4
  • git


  1. set up
    1. on the vm, install git as root:
      yum install git
    2. on the vm, create a user to handle git-related activity:
      useradd git
    3. on the vm, get its inet ip address using ifconfig:
    4. on the vm, copy your rsa public key (you’ll be pushing git updates over ssh) from your laptop into the git user’s .ssh/authorized_keys file on the vm
    5. on the vm, make sure the correct permissions are set on the authorized_keys file and .ssh dir:
      chmod 700 /home/git/.ssh; chmod 644 /home/git/.ssh/authorized_keys
    6. on your laptop, run a sanity check by logging into the vm via public key. note: if you’re using an alternate ssh port and/or different pub key file name, define these in your laptop‘s .ssh/config file:
      ssh git@{ip address}
    7. on the vm, in /var/www/, as root, create a directory that git can push content to (note: if the dir isn’t owned by git or isn’t world-writable, git throws an “error: cannot open .git/FETCH_HEAD: Permission denied” error):
      mkdir /var/www/git/; chown git:git /var/www/git/
    8. on the vm, cd into the /var/www/git/ directory and su to the git user:
      cd /var/www/git/; su git
  2. create a new project
    1. on the vm, create a new directory {proj name} for the prime repo and cd into it:
      mkdir proj; cd proj
    2. on the vm, initialize a git repo:
      git init
    3. on the vm, create and add a file so we can clone prime later (git disallows cloning an empty repo):
      touch readme;
      git add readme;
      git commit -m ‘initial commit’

      Note: if you haven’t already told git who you are, run:
      git config user.name "{your name}"
      git config user.email "{your email}"
    4. on the vm, define a remote repository for the soon-to-be-created hub:
      git remote add origin /home/git/proj
    5. on the vm, cd into git user’s home directory:
      cd ~
    6. on the vm, create the hub repo by cloning the newly created repo using the --bare flag (that’s a double ‘-’ before bare):
      git clone --bare /var/www/git/proj
    7. on the vm, create a post-update hook in the hub repo to update the web directory when an update is pushed.  open /home/git/proj/hooks/post-update, make it executable (chmod +x post-update), and add the following:
              #!/bin/sh
              # jump into the prime repo's web dir
              cd /var/www/git/proj
              # w/o this, git throws "fatal: Not a git repository: '.'" error
              # ref:
              unset GIT_DIR
              # pull in the updates
              git pull origin master
  3. start working
    1. on the laptop, open a terminal on whatever machine you’re going to develop on and clone the new hub repo:
      git clone git@{ip address}:proj
    2. on the laptop, edit the readme file in the repo, commit and push the change, and observe in the output the results of the hook-initiated pull
    3. on the laptop, view http://{ip address}/readme to confirm the new code is displaying


Written by Erik

January 2, 2010 at 11:46 pm

Posted in tutorial
