Lately, I've had the opportunity to be exposed to some web technologies with which I had little previous experience. While I've spent quite a bit of time in the world of client-side programming in the browser using tools like WebGL, Backbone.js, etc., I'd never had occasion to try any of the new breed of DOM binding libraries.
Thankfully, one of the new tools I've had a chance to develop with has been Knockout.js. This post is about a hello world exercise that I worked on to get some basic knowledge of the framework.
Motivation
I wanted to make a simple, roguelike game map that represented the UI using DOM elements, while keeping the actual state backed in a View Model. This seemed like the perfect nail in search of a Knockout-based hammer.
Aside from Knockout, I used Bootstrap to style the markup. The usual coterie of utility libraries was employed (jQuery, Underscore, etc.). And since no self-respecting roguelike relies on the mouse, I ended up using Mousetrap for keyboard event hooks. Also mumble mumble require.js blah blah AMD.
For the purposes of this demo, my approach was to deliver a JSON payload as part of the initial document. This is accessed from my application setup code, which parses the data into the View Model, sets up keyboard events, and calls ko.applyBindings.
The Demo Itself
The goal is to create a simple, "roguelike" demo that consists of the venerable @ symbol within a grid-like map that makes up its little world. Real games have all kinds of things: the passage of time, hunger, combat, magic, monsters, death, etc., ad infinitum. Ours is concerned with just presenting a basic 6x6 grid of tiles that are passable (or not). The player's location is marked by the @'s position on the map. You can move the player about with the arrow keys on your keyboard. I could probably do something for mobile, but I haven't gotten around to it.
<div id="map_container" class="container" data-bind="foreach: map">
  <div class="row" data-bind="foreach: Tiles">
    <div class="col-lg-2" data-bind="css: tileBackground">
      <!-- ko if: playerIsHere --><h1>@</h1><!-- /ko -->
      <!-- ko if: unoccupied --><h1> </h1><!-- /ko -->
      <!-- ko if: !Passable --><h1>W</h1><!-- /ko -->
    </div>
  </div>
</div>
Here we see the basic outline of the Knockout mapping used. At the top level is a div element with a data-bind attribute, which uses the foreach binding to iterate over a map property provided by the View Model. Within that, another div element, styled as a Bootstrap row, will in turn iterate over the individual tiles within its row via its own foreach binding.
The contents of the cell itself are simple. It's a Bootstrap column, 2 units wide (6x6 map, remember), whose presentation reflects the current state of the Tile. On the column div itself is a css data-binding, which applies a class based on the value of the tileBackground observable on the View Model. A series of ko if comment bindings determines the content of the cell, based on a number of observables within the View Model.
This is a snippet of a single row of the TileMap value, which is injected into the initial script block of our demo's HTML markup. In a real application, this would be driven by logic on the server, but it's stubbed out in this demo as a static value and attached to the window global.
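A minimal sketch of such a row (the property names match the bindings above; the layout values are illustrative):
window.TileMap = [
    { Tiles: [ { Passable: true }, { Passable: true }, { Passable: false },
               { Passable: true }, { Passable: true }, { Passable: true } ] },
    // ... five more rows like the above, for the 6x6 grid
];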
The TileMap is just an array of objects, each with a single Tiles property that is itself an array of JavaScript objects. This represents the layout of the map and is a small concession to the presentation, making it easier to shoehorn into the Knockout data-binding scheme. Most likely, an actual application would represent the map as a 2D array, a flat array with multiplication/bit-shifts for y-axis access, or even a tree structure for more sophisticated schemes.
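For comparison, the flat-array approach mentioned above boils down to computed indexing (a hypothetical helper, not used in the demo):
var MAP_WIDTH = 6;
function tileAt(tiles, x, y) {
    // row-major layout: y selects the row, x the column within it
    return tiles[y * MAP_WIDTH + x];
}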
We establish a MapViewModel that will contain our data. Additionally, we have set up a ko.subscribable to act as an event sink for input-driven changes to the @ position on the map.
The MapViewModel takes, as its map property, the TileMap covered earlier in this post. It then iterates over the entire contents of that object, bolting on several Knockout ko.computed observable functions that reflect each tile's state and update dynamically with changes to the View Model. You will recognize these observables as the ones referenced in the HTML markup shown previously. All of the individual tiles, by virtue of function environment capture, have access to their x and y coordinates (tx and ty, respectively) to use in their logic. The observables are as follows, with a condensed sketch of the wiring after the list:
playerPos – an observable representing the player's global position. Changes to this will update the playerIsHere computed observable
tileBackground – sets a Bootstrap class to change the tile element's color based on whether it's a wall or a passable space
playerIsHere – indicates whether this is the space on the map where the player is located. Note that the tile itself doesn't carry information on the player's presence or lack thereof. Instead, the tile listens for changes to the player's position via the positionSubscriber subscribable. Every time the subscription event fires, the playerPos observable is updated, leading to playerIsHere being recalculated
unoccupied – naturally, the inverse of playerIsHere
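Here's the condensed sketch promised above. It assumes the window.TileMap payload and the utility libraries mentioned earlier; the Bootstrap class names are stand-ins:
var positionSubscriber = new ko.subscribable();

function MapViewModel(tileMap) {
    this.map = tileMap;
    _.each(tileMap, function (row, ty) {
        _.each(row.Tiles, function (tile, tx) {
            // each tile tracks the player's position via the shared subscribable
            tile.playerPos = ko.observable({ x: 0, y: 0 });
            positionSubscriber.subscribe(function (newPos) {
                tile.playerPos(newPos);
            });
            tile.tileBackground = ko.computed(function () {
                return tile.Passable ? 'bg-success' : 'bg-danger';
            });
            tile.playerIsHere = ko.computed(function () {
                var pos = tile.playerPos();
                return pos.x === tx && pos.y === ty;
            });
            tile.unoccupied = ko.computed(function () {
                return tile.Passable && !tile.playerIsHere();
            });
        });
    });
}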
This code block wires up the input handling and binds the View Model to the DOM. It's pretty straightforward. The moveInDir function performs validation to make sure the player isn't trying to move into a wall (and, if they are, prevents that from actually happening). If the player's desired destination, as per the input coords (the four callers of moveInDir each pass in a different object with the target x and y offsets for the move), is available to be moved to, we call positionSubscriber.notifySubscribers with the new player position. This triggers the subscriptions in the previous snippet to update the View Model, in turn triggering changes in the DOM representation.
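A minimal sketch of that wiring, building on the View Model above (the current-position tracking and bounds check are my own assumptions; Mousetrap.bind is the real API):
var viewModel = new MapViewModel(window.TileMap);
var current = { x: 0, y: 0 };

function moveInDir(offset) {
    var dest = { x: current.x + offset.x, y: current.y + offset.y };
    var row = viewModel.map[dest.y];
    var tile = row && row.Tiles[dest.x];
    if (tile && tile.Passable) { // refuse to step into walls or off the map
        current = dest;
        positionSubscriber.notifySubscribers(current);
    }
}

Mousetrap.bind('up',    function () { moveInDir({ x:  0, y: -1 }); });
Mousetrap.bind('down',  function () { moveInDir({ x:  0, y:  1 }); });
Mousetrap.bind('left',  function () { moveInDir({ x: -1, y:  0 }); });
Mousetrap.bind('right', function () { moveInDir({ x:  1, y:  0 }); });

ko.applyBindings(viewModel);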
Improvements
There are a number of things that could be improved upon in this demo.
General cleanup/consistency, of course
Some duplication in the observable logic (playerIsHere vs. unoccupied; we probably shouldn't duplicate the logic in these)
This post is a brief, hand-wavey outline of my efforts to integrate and abuse OpenWrap for the purposes of deployment automation in my shop. I began working on this post before starting the actual process of integrating OpenWrap and have updated it as I've progressed. The goal is that, when complete, it will provide an introductory outline of what a newcomer to OpenWrap (as I considered myself prior to this post) will want to know/learn in order to competently grapple with the topics contained herein.
I will begin by discussing some assumptions about the environment for this exercise (as they pertain to my particular situation), then talk about where my shop was before integrating with OpenWrap and what our current deployment process consists of. Next, I will outline where I want to be once fully integrated with OpenWrap. Finally (the important stuff), I will provide a purposefully vague play-by-play of the conversion process, starting from a "vanilla", non-OpenWrap'd ASP.NET MVC 2.0 webapp. (Necessarily, each shop's needs and constraints will differ, so excessive precision in this blog post would only serve, ultimately, to mislead you, the gentle reader.)
It’s key to point out that, as far as I can surmise, using OpenWrap for:
Deployment automation
Packaging ASP.NET MVC projects
.. is a massive abuse of its intended purpose. I make no warranties as to the fitness of the process outlined herein and fully expect to inspire the Wrath of Seb as a result (have you ever seen how he browbeats the NuGet people?).
Some Assumptions
This post assumes that you're working in an MS.NET environment deployed to some variant (most likely Server) of the Windows OS ecosystem. We're still on Server 2003 (with a migration to 2008R2 in the next 12 months). It's not at all clear to me what the Mono/Linux story is for OpenWrap, so I make no warranty that this process can be adapted for such an environment. My shop is still .NET 2.0/3.5SP1-based, so that is the point from which I'm starting in this post (.NET 4.0 is also on the 12-month plan, heh). Also, my actual production servers live on a DMZ that is not domain-joined, and copious amounts of red tape must be waded through in order to open ports back to back-office machines in non-DMZ'd parts of our corporate intranet.
Our Pre-OpenWrap Deployment Process
Our current setup, roughly, looks like this:
Devs work and check in to our Mercurial repo, as needed
For every check-in, a Jenkins build runs. The complete webapp project directory (bins, src, views, etc.), post-build, is published as an artifact
Once we're ready to deploy to prod, we have a special, manually triggered Jenkins build that takes the artifacts of the most recent successful build, does some config replacement for the prod environment (it's configured for dev/test by default), and packages the result into a zip. The resulting zip file can be decompressed at the root of the drive on our application servers (so it unpacks into Inetpub/wwwroot by default). This zip file is published as the artifact of the build
Whoever is doing the deployment (we'll call them "Ops") will go to the Jenkins page and download the zip artifact from the last step to their local machine
Ops will then log into the application servers via RDP and copy the zip file onto the machine via the "Local Resources" exposure of their local hard drive
We've found that step 4 is the killer, as it takes ~15 minutes of manual time to copy the resource across the intranet. As noted above in the "Some Assumptions" section, the app servers are on a DMZ, all ports back into the intranet are closed by default (including network file shares), and the servers are not domain-joined. My suspicion is that this method of copying large files (the prod deployment zip is ~35MB, currently) across the Local Resources pipe (or, perhaps more aptly, straw) is sub-optimal. I'd like to test how long it takes when done via HTTP or other methods. I get the feeling that SMB file shares will be a non-starter here, as they appear to require opening four or more ports, not to mention the whole Active Directory component that isn't present on servers in the DMZ. I'd love to check out rsync, but I get the impression that its support is kind of flaky on Windows. So I'm interested in seeing what an HTTP transfer of an equivalent size, from the intranet, will look like time-wise.
But another component of what's so horrible about Step 4 is not just the time, but the "disconnect" that it creates in what is otherwise a mostly fluid process. That is 15 minutes that an Ops person has to sit there and twiddle their thumbs waiting for the process to complete. It's easy to get distracted and not check back until well after the copy has completed. I would like to get to an end-to-end automated process so that, regardless of how long it takes, we can click a button and know that the deployment will finish, eventually, without dependence upon human intervention. An email notification of completion would be nice as well, but not necessary.
Where we want to be
Still have the Jenkins build(s) as detailed above
Chances are the manually triggered "prod config/zip" Jenkins build (detailed above in step 3) will be modified to also publish the resultant artifact to an intranet-based OpenWrap repository server. OpenWrap supports network fileshare-based repositories out of the box, but I think this will be a non-starter for my shop given the DMZ constraints outlined above, so I also want a web-server-based repository. The solution is quite simple (basically, we do both) and is detailed below (duh)
Open IP/ports from my production servers to the internal OpenWrap repository
From here, two options:
Set up a scheduled task on the production servers that periodically checks our internal repo for a new package and updates accordingly. A polling-based approach.
Add support in the app itself to allow it to spin off a new process to check for updates via o.exe and update itself, as needed. We already have a place in our app where this could live.
Finally, some kind of completion notification once the new site wrap has been installed.
Really, the goal is to drastically reduce the amount of human intervention needed to deploy successfully, down to a couple of mouse clicks. Reducing deployment time would be nice, but is not essential (especially if we discover that we are severely hamstrung by a throughput bottleneck across the intranet between the DMZ and the back-office machines where the OpenWrap repository will live).
Step 1: Make The Web Application A Wrap
So, first things first: we have to set up an OpenWrap "skeleton" package into which we'll place the source of the application that we're trying to make into a deployable artifact (in OpenWrap parlance, this is called a "wrap", although we're misappropriating the framework a bit in order to have it do our evil bidding). In all places below, I use projectname as a stand-in for whatever you want to name your wrap, so please substitute appropriately.
I created a new "wraps" directory, inside of which I intend to place any packages I create. I then initialized my new project in there.
mkdir wraps
o init-wrap projectname
Since my environment is Windows Server 2003-based (and I've had problems dealing with the symbolic links/junctions that OpenWrap uses by default), I make sure to disable symbolic links, as detailed here. Open up the projectname.wrapdesc in your newly created package directory and add:
use-symlinks: false
to the config.
After this, go ahead and copy in (or create) the project that you want to encapsulate in the wrap, under the src/ directory of your package skeleton. After that, enter:
o init-wrap -all
from the CLI to have OpenWrap hook into your project file (it does this by changing the default compiler target in your .csproj). All of the intricacies of this process are beyond the scope of this blog post; a good place to start, if you want to learn more, is here.
After this, you should now have your project set up as a wrap (in the default configuration). You should be able to do:
o build-wrap -quiet
from the CLI and get a .wrap file as output. This is actually just a zip, which you can rename and decompress to inspect the contents, in order to get some understanding of how it packages your build artifacts by default.
For me, though, this wasn’t the end of the story…
Step 1.5: Shoehorning an ASP.NET MVC Webapp Structure Into OpenWrap
By default, it appears that OpenWrap is set up to work with projects that aren't particularly finicky about where the bins and resources are placed (or, perhaps more appropriately, are structured to lump everything together in a single build directory); Class Library projects and CLI apps come to mind here. Sadly, it doesn't work out of the box (for me, anyway) for ASP.NET apps. Of course, I'm not saying that OpenWrap is fundamentally incapable of dealing with these projects, just that it doesn't seem to package them correctly out of the box (whether the remedy is education on how to properly configure OpenWrap or a feature implementation is beyond me at this time).
My solution, in this case, is to just hand-roll my own .wrap that abides by the constraints required for a running ASP.NET MVC site (all binaries in /bin under the root; Views, Content, and Scripts exposed; a Default.aspx and Global.asax in the root; etc.). For now, this is easier than bruising my brain trying to figure out how to make OpenWrap do it via configuration.
I'm not sure how much of the issue is solvable by configuration in the .csproj and how much is a current limitation of OpenWrap, but suffice it to say that this is something I wanted to talk about so that it can either 1) be brought up as a possible feature/fix and use case for OpenWrap or 2) be solved by way of education. I'm also painfully aware that I'm taking OpenWrap way outside of its intended comfort zone in this blog post.
Obviously, this is a big your-mileage-may-vary kind of thing, but to give you an idea of what it took to package an ASP.NET MVC app nicely into a wrap:
# this was derived from:
# http://snippets.dzone.com/posts/show/6409
def get_newest_wrap(dir)
  # newest (by mtime) file in dir whose name ends in "wrap"
  # (assumes dir is the current working directory, as the callers below ensure)
  files = Dir.entries(dir).select { |file| File.file?(file) && file =~ /wrap$/ }
  files.sort_by { |file| File.mtime(file) }.last
end
def get_version_from(path)
  # strip a trailing ".*" wildcard from the last line of the version file
  File.readlines(path).last.sub(/\.\*/, '')
end
@siteDir = 'bin-net35'
@toolsDir = '' # where I have some bin tools I use during builds
@targetProjectName = 'My.Asp.Net.Project'
desc "hand-roll a wrap for an ASP.NET MVC site"
task :drybuildwrap do
  mkpath 'working'
  version = get_version_from ".\\version"
  ts = Time.now.to_i.to_s # being lazy and using a unix timestamp
                          # as the build number. it works, i guess.
  pkg = "myaspnetproject"
  wrapFile = "#{pkg}-#{version}.#{ts}.wrap"
  Dir.chdir 'working'
  mkpath @siteDir # 'bin-net35' -- the folder OpenWrap expects for .NET 3.5 binaries
  # files needed in the root of the app, besides bin
  rootFiles = ['Default.aspx', 'Global.asax', 'robots.txt', 'Web.config', 'Content', 'Scripts', 'Views']
  rootFiles.each do |f|
    cp_r "..\\src\\#{@targetProjectName}\\#{f}", @siteDir
  end
  # /bin
  cp_r "..\\src\\#{@targetProjectName}\\bin", @siteDir
  # needed OpenWrap files
  cp_r "..\\version", "."
  cp_r "..\\#{pkg}.wrapdesc", "."
  # tools dir -- some stuff needed to bootstrap the self-updating
  # functionality, covered in Step 4.5 below
  mkdir "tools"
  cp_r "..\\..\\..\\Tools\\7z\\7z.dll", "tools"
  cp_r "..\\..\\..\\Tools\\7z\\7z.exe", "tools"
  cp_r "..\\rakefile.rb", "tools"
  # Finally, let's create our wrap
  sh "..\\..\\..\\Tools\\7z\\7z.exe a -tzip -r ..\\#{wrapFile} *"
  Dir.chdir '..'
  rmtree 'working'
end
Pretty wild, I know.
Step 2: Setup An OpenWrap Repository To Publish To
This was a real head-scratcher for me, at first, so I asked about it on the OpenWrap mailing list. I was looking for a full HTTP implementation of an OpenWrap server and, seeing only a stub on github, I went to harass Seb for the full source. The thread is here. His solution was pretty elegant. From the thread (where he’s speaking about what he does for the main OpenWraps repo):
As for wraps.openwrap.org, I use something much simpler. I use a file share repository to write packages from teamcity, and on the read side i simply share the same folder as an http folder, that works out of the box as is, so if you’re just looking at exposing the read side, you can do that now already.
So, it’s pretty simple. With this in mind, I set up two servers:
file://buildserver/openwraps/myaspnetsite — A fileshare-based repository that the build server can publish to after a successful build
http://buildserver — A read-only repository exposed via IIS 6 directory-listing magic (and do remember to add .wraplist and .wrap to your MIME types if you're using IIS 6, as I am :/ )
You can find some documentation on setting up OpenWrap repositories here. The actual, esoteric details of your hosting needs are something that I can’t give specific guidance on.
For me, the above setup gives the flexibility I need for my deployment environment (where I can open a single port on a machine in my back-office intranet to the DMZ servers), while allowing me to easily publish packages in the back-office environment (where our build machine lives and the rules are somewhat more lax).
Step 3: Integrate OpenWrap Publishing Into Build Process
This one I will leave as an exercise for the reader. Personally, I’m using Jenkins (née Hudson) for our builds (with most of the action in rake scripts). I’ve written some tasks to hand-roll the .wrap (as detailed in Step 1.5 above) and publish that to my fileshare repository.
Bottom line: you need to start producing and publishing wraps as part of your regular build process. Depending on how pervasive your deployment automation will be (just for prod? why not your continuously updated UAT/staging server, too?), you could continuously push to one repository, while only "selectively" pushing to another, "production-only" repository when you're ready to deploy via some manually triggered automation. Which approach is appropriate to your situation will depend, largely, on Step 5 below.
Step 4: Setup On The Application Servers
After doing the necessary legwork on my DMZ servers (like opening the port to my read-only HTTP server in the back office and setting up the OpenWrap shell), I have a few things to do.
First, I need to make OpenWrap aware of the repository from which it can fetch the wrap it needs. This was as simple as:
o add-remote myaspnetsite http://buildserver
As outlined in Step 2 above, this corresponds to the read-only server that I exposed for my DMZ servers to pull from.
Next, I initialize a stub project that I'm going to use to "host" the application that I want to deploy via OpenWrap. Having found the place where I want it to live (like D:\Inetpub\wwwroot or what-have-you), I did:
o init-wrap myaspnetsite_prod
OpenWrap will give me some static about setting up the project structure. I can now go in there and set things up.
cd myaspnetsite_prod
As above in Step 1, my particular situation warrants that I use use-symlinks: false in my myaspnetsite_prod.wrapdesc file. With that out of the way, I can do:
o add-wrap myaspnetsite -Anchored
And, if everything's wired up properly, OpenWrap will download and unpack the package into the myaspnetsite_prod/wraps/myaspnetsite directory. If anything from the previous steps is messed up (like your publishing or the hosting for the HTTP repository), you'll have problems here.
The -Anchored flag tells OpenWrap that the dependency you're adding should be unpacked in a "visible" location (outside of the _cache dir). Normally this is so that you can check that dependency in, but it serves us here by exposing it in a fixed location (independent of the version). More info on package anchoring is available here. From here, you should be able to point IIS at the myaspnetsite_prod/wraps/myaspnetsite folder and it will work (provided your .csproj has its references, etc., working right and nothing squirrely happens with how you're reworking the wrap; this is some brittleness that I've brought on myself here, and I hope that in the course of discussing this process a more elegant solution can be sussed out).
Step 4.5: A Brief Digression On The Topic Of Updating ASP.NET Sites
The whole point of this exercise is to get to a place where, once we have the OpenWrap infrastructure in place, updating to newer versions will be a snap. This is complicated by a major, blocking issue: OpenWrap won't "re-anchor" updated packages if any of the files in the old package directory are locked. This is because OpenWrap updates anchored packages by removing the old directory and replacing it with a new one (which is pretty sensible in most cases). This isn't necessarily what we want in ASP.NET land, though, where we can just unzip a new install over the old one and, provided there isn't any weirdness with renamed assemblies, etc., ASP.NET will pick up the file changes on the next request and recycle the app automagically. And, while we can unzip over a running application without complaint from IIS, we cannot nuke the directory of a running site without, at the least, taking down the web site (which may cause false alarms with your application monitoring infrastructure).
I get the impression that there are a lot of people who are used to stopping the web site in order to do updates. I've never been one of those people (unless there are file locking issues that prevent a clean unzip, in which case you have no choice but to stop the site). With this in mind, I prefer a deployment process that involves unpacking a zip and letting it overwrite an existing, live app. If this is a grossly irresponsible perspective to take, I'd like to hear about it (I mean, as much as anyone likes to hear "you're an idiot!").
Anyways, back to the issue at hand: How to work around OpenWrap’s design limitation in terms of updating a “live” site? More rake scripting, of course!
For me, the strategy is as follows:
I decided to modify the rake task outlined above in Step 1.5 to include an additional rakefile (containing the snippet below) and a CLI unzipping utility (7zip, if you must know) in the deployable artifacts for my wrap.
The rakefile that is now embedded in the .wrap contains logic to:
Find out the version # of the "newest" locally held wrap for the package that we want to continuously update
Check the remote repository and, using o list-wrap and some output parsing magic, find out what its newest version of our target wrap is
If the remote's newest is not equal to ours (it's unlikely that we'll slide backwards in revisions, and it may even be desirable at times if we do, like in a rollback scenario), then:
Run o update-wrap, which will, of course, fail to anchor our package (but still downloads the .wrap for us)
Having downloaded a new version, we find the filename of the newest locally held version of our package (which should be the new one)
We copy out our CLI unzipping util and then use it to unzip the new package (you have to copy it out; otherwise it'll explode when trying to overwrite itself)
I ended up with a script something like:
# This is pretty lame and depends on the impl of o.exe's CLI output.. have to
# keep an eye peeled for changes, here
def newest_version_of_package_on_remote(pkg, repo)
  output = `o list-wrap -Query #{pkg} -Remote #{repo}`
  line = output.split("\n").last
  line = line.sub(/^.*available: /, '').sub(/\)$/, '')
  line.split(", ").last
end

task :hotupdate do # this is the task we'll run from a scheduled task on the app server
  pkg = "myaspnetproject"
  repo = "myaspnetprojectrepo"
  Dir.chdir "..\\.." # we're executing from somerepo\wraps\myaspnetproject\tools ..
                     # need to move up to the somerepo\wraps\ dir
  localWrap = get_newest_wrap(Dir.pwd).sub(/#{pkg}-/, '').sub(/\.wrap/, '')
  newestWrap = newest_version_of_package_on_remote pkg, repo
  if localWrap == newestWrap
    puts "at latest"
  else
    puts "updating..."
    sh 'o update-wrap'
    wrapFile = get_newest_wrap(Dir.pwd)
    Dir.chdir pkg
    toolsDir = "toolsTmp"
    mkdir toolsDir
    cp_r "tools\\7z.exe", toolsDir # have to copy out 7zip from the tools dir, so it doesn't
    cp_r "tools\\7z.dll", toolsDir # barf when trying to overwrite itself
    sh "#{toolsDir}\\7z.exe x -y -tzip ..\\#{wrapFile}"
    rmtree toolsDir
  end
end
Step 5: Flip The Switch
If you made it here successfully, that means you've worked out the details of how to facilitate OpenWrap's use as a deployment framework for your project. Now you'll have to figure out how to put it on auto-pilot. The key is to get to the point where you won't have to log into your application servers at all to do a deployment.
My personal preference is to set up a scheduled task on the deployment server that polls for updates on a regular, frequent basis. This way, my deployment choke point is the act of pushing packages to the repository that is regularly polled (something I can automate pretty easily as part of my existing build infrastructure). You can easily invert this approach by automating the pushing of packages to your repo and making the update poll the manually triggered step (via added functionality in your app, or some existing tool/server that kicks off processes/scripts on your production server).
I use the :hotupdate rake task outlined above, executed every five minutes, to check for new versions of our application and handle the downloading and unzipping on an as-needed basis.
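On Server 2003, wiring that up can be as simple as a schtasks one-liner (the task name and paths here are illustrative; adjust them to wherever your tools directory landed, and add credentials as your environment requires):
schtasks /create /tn "MyAspNetSiteHotUpdate" /sc minute /mo 5 ^
    /tr "cmd /c cd /d D:\Inetpub\wwwroot\myaspnetsite_prod\wraps\myaspnetproject\tools && rake hotupdate"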
Conclusion
This post is a skeletal depiction of a process that I've worked through in order to use OpenWrap for automating my deployment process (which is currently only automated up through the point of package generation; actual deployment on the application server is manual and a hassle). I kept things vague on purpose, as I feel an overly detailed post would be less valuable (due to the tendency to get bogged down in semantics, and the fact that every environment is different).
There are a lot of details to work out, but I think the steps outlined above serve as a good place to start for getting things up and running. There’s a pretty good chance that I’m doing something Really Wrong, so I’m interested in having a discussion about what could be done to improve this process.
With the caveats aside, there are two key points in the OpenWrap workflow where I pretty much opt out in favor of doing things a bit differently, to suit my own laziness and preferences:
When building a new wrap, I roll one by hand (Step 1.5) using a rake script, so that I don’t have to bother with wrapping my brain around the configuration (see what I did there?).
When trying to update the wrap (Step 4.5), I go through the motions (in order to have OpenWrap download the newest .wrap for me), but I anticipate the re-anchoring of the new version failing (because IIS has locked the directory) and go ahead and do a manual upgrade-in-place (by unzipping the .wrap over the existing anchor, which, unlike a delete-and-replace, will work most of the time). This may or may not be a bad thing, and it exposes you to certain Troubles as your application evolves naturally over time (assembly conflicts due to renames, to name just one of many, not to mention the possibility of .dll or other files not being updated due to IIS' rather eccentric and unpredictable file-locking practices). This whole thing stems from the desire to avoid taking down the site/IIS when I can help it.
I would love to have a discussion around these issues and see if there’s a possibility to evolve the OpenWrap framework to handle these scenarios/approaches more gracefully, if they’re deemed safe enough, valuable, etc.
I hope this post has been valuable for you in your quest to be a Lazier Developer, because that's really what automation is about, isn't it?
This post is a continuation from Part 1, which introduced the ideas of machine-javascript, namely the script loader and a simple example, showing how a HelloWorldController was instantiated and attached to the page. What was not explored, however, was what the HelloWorldController actually looks like and how it works. That is the topic of this post.
Let’s Dive Right In!
Here is the content of HelloWorldController.js, loaded into our page in the example shown in Part 1 of the series:
include('jquery.js');
include('machine-controller.js');
include(function() {
var global = this;
var viewLeft = '<div><a id="clicker" href="#">Hello world!</a> I have been clicked ';
var viewRight= ' times.</div>';
global.HelloWorldController = function() {
this.init(); // kicks off Machine.Controller's internal setup
// .. always call this first.
this.clickCount = 0;
this.setView(viewLeft + this.clickCount + viewRight);
this.addAction('click', '#clicker', this.onClickerDivClick);
};
global.HelloWorldController.prototype = new Machine.Controller();
var hw = global.HelloWorldController.prototype;
hw.onClickerDivClick = function(e) {
// 'this' is always bound to the controller object
this.clickCount += 1;
this.setView(viewLeft + this.clickCount + viewRight);
this.render(); // refresh the domRoot property
};
});
Okay! A lot to take in there. Let’s start with the include()s at the top of the file.
As noted in Part 1, calls to include() are never nested within the scope of a single file, but always occur serially. The first two calls load jQuery and machine-controller.js, upon which this file is dependent. If HelloWorldController were dependent upon a "ViewRenderer" (discussed further down), it would also be included at this point. Finally, the actual body of the script is contained within an include(). This is a convention that must be adhered to in order to leverage the utility of machine-includer.js. Strictly speaking, machine-controller.js takes no dependency upon machine-includer.js, and you could easily use it with another script loader or no loader at all, but for the purposes of this example we are going to use it.
var global = this;
var viewLeft = '<div><a id="clicker" href="#">Hello world!</a> I have been clicked ';
var viewRight= ' times.</div>';
At this point, we’re just setting up some variables that will be used in the course of defining HelloWorldController. I, by habit, typically assign the top-level this in a given script to global. The next two variables, viewLeft and viewRight are two chunks of text that will make up the “view” rendered by HelloWorldController (that is, the markup that it attaches to the DOM). This approach is not-at-all optimal for general use, but is merely meant to demonstrate the basic functionality of setView(), which will be covered in-depth below.
This chunk of JavaScript is the declaration of HelloWorldController's constructor/initializer and is where the majority of the configuration for a Machine.Controller-derived object happens. There's a lot of important stuff here, so let's go line by line.
this.init(); // kicks off Machine.Controller's internal setup
// .. always call this first.
The init() function in a Machine.Controller-derived object is where internal initialization is housed. It should be the first thing you call in any controller you define. Obviously, overloading/replacing this would be a Bad Thing unless you really know what you’re doing. Thankfully, at least, much like Controller development on the server, you’re not terribly likely to have deeply nested inheritance hierarchies of Controller classes. But if you do need to do so, you can always take measures to Make It Work.
this.clickCount = 0;
this.clickCount is a stateful counter that we're going to use to track clicks within the part of the DOM that this controller "owns". This is an example of how client-side controllers are stateful (one way in which they differ from server-side controllers in many instances).
Another thing to note: it can generally be assumed that any reference to this in the top-level scope of a function attached to the prototype of a Machine.Controller-derived object will reference the controller itself. That is typical behavior but, interestingly, it also applies to functions bound to events using the addAction() function, shown below. Normally, at least with jQuery event binding, this is bound to some kind of context information for the event in question. Of course, callbacks passed into jQuery functions like $.each() and $.get() will still have their this variable re-bound, as expected.
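A contrived illustration of that last point (a hypothetical function, not part of the demo):
hw.countSomeClicks = function () {
    var self = this;              // 'this' is the controller here
    $.each([1, 2, 3], function (i, n) {
        // jQuery re-binds 'this' inside $.each, so we captured 'self' above
        self.clickCount += n;
    });
};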
Moving along, the next line in the constructor is:
this.setView(viewLeft + this.clickCount + viewRight);
This is the setView() function. It is used to tell a controller what mechanism it will use to get the markup that will represent its presence in the DOM. There are two valid signatures for this function:
The first signature takes a single string and is the form used in HelloWorldController. It treats the passed-in string as static markup text, which is immediately converted to a DOM fragment and attached to the page's DOM when needed. This is the simplest possible use case for setView() and isn't often practical, but it can be useful.
The second signature takes two strings: the first is an argument to pass in to a ViewRenderer; the second is the "key" for that ViewRenderer. In this case, the "key" refers to a string that the renderer uses to globally identify itself when it is registered with Machine.Controller's mechanism for tracking ViewRenderers. The details of how this works won't be covered in this post; just know that it is there.
If you’d like to see an example of this use of setView() and ViewRenderers right now, then look at the /example/example.html file in the machine-javascript github repo and check out the ‘ViewRenderers and Views’ example.
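Side by side, the two call shapes look like this (the template argument and renderer key below are made up for illustration):
// signature 1: a static markup string, re-evaluated on each render()
this.setView('<div>Hello!</div>');
// signature 2: an argument for a ViewRenderer, plus the key the renderer was registered under
this.setView('helloTemplate', 'ejsViewRenderer');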
Back to the larger significance of setView(): after calling it, any call to the controller's render() function will cause whatever "instructions" were passed in to setView() to be re-evaluated and the results placed in the domRoot property of the object. You can also arbitrarily call setView() at your pleasure to change the "rendering strategy" for a given controller (but you will, of course, want to call render() after that so the changes are reflected in domRoot).
The last line in our controller's initializer function is:
this.addAction('click', '#clicker', this.onClickerDivClick);
This is the previously mentioned addAction() function. It is a wrapper around jQuery’s event binding mechanism that provides a few advantages:
Events bound in this fashion don’t need to be manually rebound when the controller’s DOM changes (this uses a combination of jQuery Live Events and manual re-binding on DOM change).
As mentioned above, callbacks passed in to addAction() keep their this property bound to the controller object, instead of the context for the event that called it.
It provides a straightforward interface to pool your event bindings in a single location.
The syntax is straightforward: the first argument is the name of the event to listen for, using the jQuery convention for event names (onClick becomes click, onBlur becomes blur, etc.). The second argument is the CSS selector for the element(s) you want a callback bound to. And the third argument is the callback function itself. For the sake of keeping things clean, I specify my event handlers on the prototype of the controller itself and pass those in; there's nothing that says you can't specify the function inline, if you so desire. Do note, though, that this will be bound to the enclosing controller regardless.
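For instance, the inline form would look like this (equivalent in spirit to the demo's prototype-based handler):
this.addAction('click', '#clicker', function (e) {
    this.clickCount += 1; // 'this' is still the enclosing controller, even inline
});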
One last interesting (and important) detail: event callbacks bound using addAction() will only be triggered when the event occurs on elements in the subset of the DOM owned by the controller.
So if, for example, you have several controllers in a given page that each expose elements with the foo class, a call to addAction('click', 'a.foo', this.someCallback); will only trigger this.someCallback for clicks on the links in the view generated by that particular controller. Nifty, eh?
Looking at the next chunk of JavaScript in our HelloWorldController example, we have:
global.HelloWorldController.prototype = new Machine.Controller();
var hw = global.HelloWorldController.prototype;
Here we see that Machine.Controller uses the usual prototypal inheritance found in JavaScript. It was originally based on John Resig's "Simple JavaScript Inheritance" model but was subsequently converted to use the more common prototype model. After that, we assign HelloWorldController's prototype to a local variable called hw, merely to save keystrokes. It isn't a big time-saver for a simple controller like this one, but if your controller has more functions attached to its prototype, this sort of thing becomes more valuable.
It's also a useful spot to declare the "public" functions and properties for a controller, while keeping any functions not attached to the prototype as "private", if that's your thing.
And finally, we have:
hw.onClickerDivClick = function(e) {
// 'this' is always bound to the controller object
this.clickCount += 1;
this.setView(viewLeft + this.clickCount + viewRight);
this.render(); // refresh the domRoot property
};
This is the callback that was passed in to addAction() in our constructor function. It is a typical event-handler callback, with the event information as the sole argument to the function. This may or may not be useful to you, and you can omit it if you want. As indicated in the comment, this is bound to the enclosing controller, which means we have access to any stateful information contained therein (clickCount, in this case).
The callback increments the clickCount property by one and then calls setView() with the same viewLeft and viewRight variables used in the constructor, but with the newly incremented clickCount. If we were using a more sophisticated rendering scheme, we would merely modify the controller's model property and let that "trickle down" to the ViewRenderer. In that case, the call to setView() would go away, and the subsequent call to render() would be all that's needed to update the DOM. But, since we're using the simplest possible approach to specifying a view in this example, we have to update the view's markup ourselves via setView().
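Under that more sophisticated scheme, the handler might shrink to something like this (a hedged sketch; the model property is described above, but the exact ViewRenderer wiring is Part 3 material):
hw.onClickerDivClick = function (e) {
    this.model.clickCount += 1;
    this.render(); // the registered ViewRenderer regenerates the markup from the model
};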
Conclusion
In this post, we covered:
– The contents of the HelloWorldController.js file that was first mentioned in Part 1 of this series. A typical Machine.Controller-derived object contains:
– A constructor function, with the first statement in it being a call to init(). This does the internal setup for Machine.Controller and should always be called first. Also, don't replace it in the prototype unless you know what you're doing.
– In addition to any per-controller setup, a constructor will usually contain:
– A call to setView() to designate the view rendering scheme for the controller. You can either pass in static text (which you need to update yourself via calls to setView()) or designate a pre-registered ViewRenderer to handle the details for you. The details of how ViewRenderers work are for another post.
– One or more calls to addAction() to bind event callbacks to elements in the subset of the DOM owned by the controller.
– Each call to addAction() is going to need a corresponding callback. this in the callback will correspond to the enclosing controller, giving you access to state information.
– Call render() when you want to update the markup in the domRoot.
Stay tuned for Part 3, where I'll go over one of the most useful architectural features of machine-controller.js: ViewRenderers. Until then, take it easy!
This is the first in a series of [hopefully many] blog posts on the topic of machine.javascript, an open source framework for helping developers create testable, deployable and well-factored UIs on the client in JavaScript. It covers some of my exposure to tools in this space and an overview of what machine.javascript consists of and how to use it. This post is tightly coupled to Part 2 in the series and each should not be read without considering the context of the other.
My experience with JavaScript MVC
As I've gotten involved, in my professional and personal capacities, with rich, client-based UIs in the browser, one of the most indispensable patterns I've had at my disposal would have to be "MVC". I put MVC in scare quotes because it is, in my opinion, one of the most overloaded terms in software development today. The most basic, workable definition of MVC in the context of this post is that it is about application development patterns that deliberately segregate control and rendering logic, optimized for the particular platform (in this case, JavaScript in the browser).
With this in mind, the Eleutian team (notably Daniel Tabuenca) produced a kick-ass library that, I always felt, really attacked the problem of building complex, client-side applications in JavaScript in an effective, sensible fashion. That could have to do with the fact that this framework was the first I ever used in this particular problem space, but I've reviewed and experimented with several other frameworks since and never really liked any of them. Some reasons for this:
Many of them (JavaScript MVC in particular) are very heavy, requiring things like having Rhino installed to do command-line scaffolding (wtf?!)
Others are tightly coupled in the manner in which view/template rendering is handled. Modern JavaScript development in the browser is interesting because there are several different ways to "get there" when it comes to programmatically displaying content. Options include, but aren't limited to: direct DOM manipulation via jQuery/other frameworks ($('#container').append('');), JavaScript templating (EJS, Pure, etc.), and/or tools/plugins that take structured data and directly output markup (jQuery.dataTables, jqplot, etc.). Eleutian's framework was originally tightly coupled to EJS for template rendering, but this has since changed (as I will explain shortly).
Some frameworks get the abstraction wrong, in my opinion. The scope of what I, the developer, want when doing client-side work is pretty narrow: aggregate events around some subset of the DOM and have a uniform way of manipulating/regenerating the markup in that subset in response to said events. The premise of “smart models” in an isolated browser environment (wiring up AJAX calls to the server) is way too much abstraction for me and, in the laissez-faire world of JavaScript, strikes me as very open to “entering a world of pain” scenarios.
Since my time with the Eleutian team, one of the things I've missed the most is the MVC framework I used when I worked there. And while I can by no means take credit for creating or expanding machine.javascript, I can definitely say that I was able to harass Aaron Jensen and Dan into releasing the libraries in question as OSS (although they claim they were going to do it anyway). Hence the github repo and this blog post.
What is machine-javascript?
At the heart of it, machine-javascript consists of two components, each an individual JavaScript file:
machine-includer.js — Provides a "script loader" that works in a "nested" fashion (as opposed to "flat" includers like LABjs). It allows you to specify your dependencies in a manner similar to how you would in server-side code and then load them all (in a cascading fashion) during the page load. Like other script loaders, it is often not practical for performance-critical/production use, but it is great for dev/debug work and provides a critical advantage: the nested, per-file include() statements provide hints from which a recursive script parser can build a list of dependencies, which you could use to create a single, bundled script for your markup. The include() function that it creates can handle both external script loads (when passed a string) and functions passed in for evaluation. Either way, everything is evaluated in order of specification, based on the dependency tree created by looking at how invocations of include() are specified in files, and the same external script is only loaded once.
machine-controller.js — Contains a prototype for the Machine.Controller object, which can be inherited by your "controllers" and provides a straightforward framework for specifying events and rendering logic for the piece of the DOM that the controller is associated with.
Leveraging these two components, I will demonstrate how to create a simple controller with machine-javascript.
A Simple Example
Let’s consider the simplest possible hello world case, utilizing machine-javascript.
Within the <head> section, we have a single script include. All we are going to load in this page in an explicit, static fashion is the machine-includer.js script.
Somewhere down in the <body>, we have a chunk of markup that looks something like:
<script type="text/javascript">
// this is the initial setup for machine.includer
Machine.Includer.configure({scriptLocations : {'.*':''}});
// include our external dependencies
include('jquery.js');
include('HelloWorldController.js');
// and kick it all off once the page loads
include(function() {
$(function() {
var hw = new HelloWorldController();
hw.render();
$('#container').append(hw.domRoot);
});
});
include.load();
</script>
<div id="container"></div>
So this is interesting: we do some initial configuration for machine-includer.js, then load the external scripts that the page is dependent upon, and then instantiate an object (defined in an external dependency) and attach one of its properties to the DOM via jQuery's append() function.
Let's take each significant chunk on its own…
// this is the initial setup for machine.includer
Machine.Includer.configure({scriptLocations : {'.*':''}});
This is the only initialization call needed to set up machine-includer.js; after this, you can safely call include(). The hash that is passed into configure() has one notable parameter: scriptLocations. It is a series of pairs that says to the includer, "Hey, if you encounter the regex pattern in the left component in an include URL, please prepend it with the contents of the right component." This means that, if you had some hinting from the server based on environment (dev, production, etc.), you could configure machine-includer.js to mangle the script loads done via include() so they actually call out to an external CDN or a local folder, depending on the runtime environment.
For example, consider if your static content was delivered by a CDN like Cloudfront in your production setting, but was served from a local Scripts directory in the same web server when run in the dev/test environments.
In an ASP.NET MVC WebForms template, with a strongly-typed view that has an Environment property hooked up to some mythical enum or the like (this applies just as easily to other server frameworks), the server-side template might look like:
// this is the initial setup for machine.includer
<%
var prefix = Model.Env == Env.Production ? 'http://s3.yourdomain.com/path/'
: '/Scripts/';
%>
Machine.Includer.configure({scriptLocations : {'^static.*': '<%= prefix %>'}});
In this way, you can drive how your scripts are loaded at runtime with a simple, per-page check to the ambient environment.
// include our external dependencies
include('jquery.js');
include('HelloWorldController.js');
In this chunk of code, we are declaring our dependencies for this page, utilizing calls to the include() function with strings for the paths to the files in question. As with