To reproduce what this will look like in your app or website, it can be helpful to artificially slow down your connection to Typekit. One way to do this is with ipfw (no guarantee these exact commands will work on other Unix variants).
First get the IP address of use.typekit.com by pinging it (for me it is currently 126.96.36.199). Then:
sudo ipfw add pipe 1 ip from 126.96.36.199 to any
sudo ipfw pipe 1 config bw 80kbit/s plr 0.05 delay 50ms
Play with the values: 80 to change the bandwidth, 0.05 to change the packet loss ratio (you can also remove this entirely), and 50 to change the latency.
When you’re done (note that this will flush all existing ipfw rules):
sudo ipfw flush
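For convenience, the commands above can be wrapped in a small helper. This is a sketch, not a tested tool: ipfw only exists on older macOS and FreeBSD, the IP below is a placeholder for whatever ping returned, and with RUN=echo the commands are printed rather than executed.

```shell
# Sketch: wrap the ipfw throttle/flush commands from above.
# RUN=echo (the default here) prints the commands for inspection;
# set RUN=sudo to actually apply them.
RUN=${RUN:-echo}
IP=${IP:-198.51.100.1}  # placeholder; substitute the IP you got from ping

throttle() {
  $RUN ipfw add pipe 1 ip from "$IP" to any
  $RUN ipfw pipe 1 config bw 80kbit/s plr 0.05 delay 50ms
}

unthrottle() {
  # careful: this flushes ALL ipfw rules, not just pipe 1
  $RUN ipfw flush
}
```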
# size commit date
439323 d4d09e047d50388180a1e317efc61af5d8961275 20130201
439323 fd30e151e35efba1bda65488e621c7338895542e 20130130
439241 6ce650d7e97add955b7cd07150732890c0edaf49 20130129
439241 3c1d2aec69f874926965843800163be71ec5f376 20130128
If the name of the file stays the same, it turns out this is pretty simple. The following git command will show the size of the file for the commit in question:
git ls-tree -r -l <COMMIT> <PATH>
So we can do something like
git ls-tree -r -l HEAD~$COUNTER compiledjs.min.js
in a bash script, incrementing $COUNTER as much as we want and grabbing the file size with some ugly use of tr and cut, e.g.:
git ls-tree -r -l HEAD~39 compiledjs.min.js | tr -s ' ' | tr '\t' ' ' | cut -d ' ' -f 4
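Putting the loop together: the following sketch builds a throwaway repo with three commits (so it's runnable as-is) and then walks back through history collecting the size at each commit. It swaps the tr/cut pipeline for awk, which handles ls-tree's mix of tabs and spaces in one step; in real use you'd drop the setup block and run the loop in your own repo.

```shell
set -e

# --- setup: a scratch repo with three commits, purely for demonstration ---
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3; do
  head -c $((i * 100)) /dev/zero > compiledjs.min.js  # 100, 200, 300 bytes
  git add compiledjs.min.js
  git commit -q -m "rev $i"
done

# --- walk history: HEAD~0 is the newest commit ---
sizes=""
COUNTER=0
while [ "$COUNTER" -lt 3 ]; do
  # column 4 of `git ls-tree -r -l` output is the blob size
  size=$(git ls-tree -r -l "HEAD~$COUNTER" compiledjs.min.js | awk '{print $4}')
  date=$(git show -s --format=%cd --date=short "HEAD~$COUNTER")
  echo "$size $date"
  sizes="$sizes$size "
  COUNTER=$((COUNTER + 1))
done
```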
But if the name of the file changes across commits, as it will if you are tagging it with a date or SHA1 for cache-busting, this approach won’t work. The approach I came up with, which is hacky, involves creating and deleting temporary branches based on HEAD~1, HEAD~2, etc., and getting the requisite date, file size, and commit info by pattern-matching on the name of the file in question.
Shell script to accomplish this, along with some basic gnuplot commands to plot the output, here: https://gist.github.com/4700556
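For what it's worth, if all you need is the size, a simpler variant than temporary branches may suffice: `git ls-tree -r -l` accepts any commit directly, so you can list the whole tree at each ancestor and pattern-match the filename. A sketch (again self-contained, with a scratch repo whose file is renamed on every commit; this is not the gist's script):

```shell
set -e

# scratch repo in which the file's name changes every commit (cache-busting)
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3; do
  rm -f compiledjs.*.min.js                           # drop the old name
  head -c $((i * 100)) /dev/zero > "compiledjs.$i.min.js"
  git add -A                                          # stages the deletion too
  git commit -q -m "rev $i"
done

# list the whole tree at each ancestor and pattern-match the name
sizes=""
for c in 0 1 2; do
  size=$(git ls-tree -r -l "HEAD~$c" | grep 'compiledjs\..*\.min\.js' | awk '{print $4}')
  sizes="$sizes$size "
done
echo "$sizes"
```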
Closure Compiler externs for underscore 1.4.4 are now available.
Underscore 1.4.4 adds two new functions, _.findWhere and _.partial.
Also pushed some fixes to 1.4.3 externs:
- typo in `properties` param for _.where
- a bunch of methods should be able to operate on Arguments objects as well as Arrays. Previously, only Arrays were noted as valid params by the externs.
QAing js popup windows in your web app is a pain in the ass. It’s easy to miss windows in obscure corners of your UI, and it can be hard to recreate the circumstances under which they’re shown and the various states that determine how and when they appear. Automated js tests with a library like Qunit or Jasmine, while great, don’t really help when you need to make sure things look a certain way.
Fortunately, if you make a couple assumptions that I think hold true for a majority of web apps, we can simplify this testing process substantially.
Just pushed fixes and updates to my Closure Compiler externs for underscore, and created a repo for Qunit externs (which is probably not of any use to anyone, but is used for testing the accuracy/completeness of the underscore externs).
- Externs for latest underscore (1.4.3).
- Some fixes to 1.3.3 externs, indicating that more methods may return wrapped objects (this is still incomplete).
- Tests for externs file completeness/accuracy by running underscore’s own qunit tests through closure compiler. The output (using advanced compilation) is nasty and contains a lot of errors, but most of them are irrelevant. Some do point to legitimate issues, mostly related to uncommon ways of passing arguments to various methods.
- Externs are still missing for `throws`, callbacks, and configuration; otherwise mostly complete.
After running into performance issues with a few Backbone apps, I spent some time digging into Google Chrome’s timeline panel, specifically the memory view, which shows allocated memory and breaks down the number of DOM nodes held in memory.
Some tests/demos, along with provisional findings, are provided here: chrome timeline exploration. The official documentation for Chrome’s dev tools is getting better, but could still use improvement. Hopefully this goes some way toward explaining what’s going on, what the different numbers mean, and what sort of behaviour to expect in common scenarios.
As I’m nowhere near an expert on Chrome’s internals or on memory profiling in general, any suggestions or corrections are more than welcome (pull requests or whatever).
There’s a variety of mildly annoying things about V3 of Google’s Contacts API: no control over which fields are returned, basically non-existent sorting ability, and the use of OAuth bearer tokens. They also seem to be confused about their supposed mitigation of the confused-deputy problem: validation does not actually appear to be required, nor does it in any way prevent an attacker from stealing the token and doing whatever they like with it.
One of the most annoying “features” is the undocumented rate limit on requests for a user’s contact images, each of which must be authenticated. The limit appears to be around 10 requests per second.
I wrote a lightweight jQuery plugin that will help you avoid the 503 errors that Google’s API returns when you begin exceeding this limit.
jQuery batched image loader
The github page has basic implementation details.
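The plugin itself is jQuery/JavaScript, but the batching idea is independent of the language. A shell sketch of the same shape, where the hypothetical `fetch_image` stands in for an authenticated request to the API:

```shell
# Hypothetical batcher: issue at most BATCH requests, then pause,
# so as to stay under a roughly 10-requests-per-second limit.
BATCH=10
PAUSE=1
fetch_image() { echo "fetching $1"; }  # placeholder for a real authenticated fetch

n=0
for id in $(seq 1 25); do  # 25 stand-in contact photo identifiers
  fetch_image "photo-$id"
  n=$((n + 1))
  # after every full batch, wait before starting the next one
  if [ $((n % BATCH)) -eq 0 ]; then sleep "$PAUSE"; fi
done
```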
If you’ve spent much time with Chrome’s debugger, you’ve probably encountered a few annoying scenarios in which the locked-up view of the web page you’re working on prevents you from inspecting elements or scrolling.
An easy workaround for the scrolling lock-up is to just jump to the console, and: